diff --git "a/results_retrieval/emb_potion_r32M/retrieval_pagechunker_marker.json" "b/results_retrieval/emb_potion_r32M/retrieval_pagechunker_marker.json" deleted file mode 100644--- "a/results_retrieval/emb_potion_r32M/retrieval_pagechunker_marker.json" +++ /dev/null @@ -1,22210 +0,0 @@ -[ - { - "top_k": 10, - "mrr": 0.4367367724867725, - "recall": 0.71, - "count_empty_strings": 149 - }, - [ - { - "references": { - "source_file": "uksi_20200438_en.pdf", - "query": "What does \"new account\" mean according to the international tax compliance from 2020 ?", - "target_page": 2, - "target_passage": "“new account” means a financial account maintained by a reporting financial institution opened on or after 13th May 2020", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "At December 31, 2004, we had $93 million of deferred tax assets and $1.9 billion of deferred tax liabilities. Except for certain New Jersey state net operating losses and certain other New Jersey state deferred tax assets, we believe that it is more likely than not that our deferred tax assets are fully realizable because of the future reversal of existing taxable temporary differences and future projected taxable income. The valuation allowance at December 31, 2004 related to the New Jersey deferred tax assets was $6 million.\n\nOur income tax returns are subject to examination by the Internal Revenue Service (\"IRS\") and other tax authorities. While positions taken in tax returns are sometimes subject to uncertainty in the tax laws, we do not take such positions unless we have \"substantial authority\" to do so under the Internal Revenue Code and applicable regulations. We may take positions on our tax returns based on substantial authority that are not ultimately accepted by the IRS.\n\nWe assess such potential unfavorable outcomes based on the criteria of Statement of Financial Accounting Standards No. 5, \"Accounting for Contingencies\" (\"SFAS 5\"). 
We establish a tax reserve if an unfavorable outcome is probable and the amount of the unfavorable outcome can be reasonably estimated. We assess the potential outcomes of tax uncertainties on a quarterly basis. In determining whether the probable criterion of SFAS 5 is met, we presume that the taxing authority will focus on the exposure and we assess the probable outcome of a particular issue based upon the relevant legal and technical merits. We also apply our judgment regarding the potential actions by the tax authorities and resolution through the settlement process.\n\nWe maintain required tax reserves until such time as the underlying issue is resolved. When actual results differ from reserve estimates, we adjust the income tax provision and our tax reserves in the period resolved. For tax years that are examined by taxing authorities, we adjust tax reserves in the year the tax examinations are settled. For tax years that are not examined by taxing authorities, we adjust tax reserves in the year that the statute of limitations expires. Our estimate of the\n\npotential outcome for any uncertain tax issue is highly judgmental, and we believe we have adequately provided for any reasonable and foreseeable outcomes related to uncertain tax matters.\n\nIn December 2002, we settled the IRS audit of the Company's 1995 and 1996 tax returns, which did not result in a material impact on our results of operations or financial position. During 2003, we filed amended returns for tax years subsequent to 1996 to reflect the impact of the IRS audits of the 1993 through 1996 tax years on those subsequent years. In the fourth quarter of 2003, the statutes of limitations expired for the 1997 through 1999 tax years, resulting in a reduction of our tax reserves of $13 million and a corresponding reduction in our provision for income taxes. 
In the third quarter of 2004, the statute of limitations expired for our 2000 tax return, resulting in a reduction of our tax reserves of $6 million and a corresponding reduction in our provision for income taxes. Subsequent to December 31, 2004, we received notice that the IRS will audit our 2001 and 2002 tax returns, and the tax returns for years after 2002 are subject to possible future examination.\n\nWe classify reserves for tax uncertainties within \"other accrued liabilities\" in the accompanying consolidated balance sheets, separate from any related income tax payable or deferred income taxes. Reserve amounts may relate to the deductibility of an item, as well as potential interest associated with those items.\n\nA portion of our tax reserves was assumed in the Mirage Acquisition. The IRS audit of the tax returns of Mirage through the merger date was settled in August 2003, resulting in a payment to the IRS of $45 million, including interest. These matters had been previously reserved for, so the settlement had no impact on our income tax provision or our results of operations. 
Any future adjustments to the acquired Mirage tax reserves will be recorded as an adjustment to goodwill.", - "page_start": 44, - "page_end": 44, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "- (a) \"new account\" means a financial account maintained by a reporting financial institution(**a**) opened on or after 13th May 2020;\n- (b) \"pre-existing account\" means—\n- (i) a financial account maintained by a reporting financial institution as of 12th May 2020, or\n- (ii) a financial account within Section VIII(C)(9)(b) of Annex 1 of the DAC(**b**), but in the application of that provision the references to \"subparagraph C(9)(a)\" are to be read as references to paragraph (i) of this sub-paragraph.\n\t- (4) The accounts are—\n\t\t- (a) non-registered pension arrangements where the annual contributions are limited to £50,000 and funds contributed cannot be accessed before the age of 55 except in circumstances of serious ill health;\n\t\t- (b) Premium Bonds issued by the UK National Savings and Investments;\n\t\t- (c) Fixed Interest Savings Certificates issued by the UK National Savings and Investments; and\n\t\t- (d) Index Linked Savings Certificates issued by the UK National Savings and Investments.\".\n\n(5) In Schedule 2, omit paragraphs 2, 6, 8 and 9.\n\n#### **Transitional provision**\n\n**3.**—(1) For the purposes of the International Tax Compliance Regulations 2015, in relation to an account that by virtue of regulation 2(5) ceases to be an excluded account, the calendar year 2020 is treated as beginning on 13th May 2020 and ending on 31st December 2020.\n\n(2) Where in consequence of paragraph (1) it is necessary to apportion an amount for the calendar year 2020 to the period ending immediately before 13th May 2020 and the period beginning with that date, it is to be apportioned—\n\n- (a) on a time basis according to the respective length of the periods, or\n- (b) if that method would produce a result that is unjust or unreasonable, on a just and 
reasonable basis.\n\n*David Rutley Maggie Throup* 20th April 2020 Two of the Lords Commissioners of Her Majesty's Treasury\n\n### **EXPLANATORY NOTE**\n\n*(This note is not part of the Regulations)* \n\nThe Regulations amend the International Tax Compliance Regulations 2015 (\"the principal Regulations\") which give effect to agreements and arrangements reached between the United Kingdom and other jurisdictions to improve international tax compliance.\n\nRegulation 2(2) extends the application of the principal Regulations to arrangements entered into by the United Kingdom for the exchange of financial account information with other jurisdictions up to 19th April 2020, the date before the Regulations are made.\n\nRegulation 2(5) omits various accounts from the category of excluded accounts. Regulation 2(4)(b) amends the definitions of \"new account\" and \"pre-existing account\" in relation to those\n\n(<b>a) \"Financial account\" and \"reporting financial institution\" are defined in the table in regulation 24(2) of the principal Regulations.\n\n(<b>b) \"The DAC\" is defined in regulation 1(3)(a) of the principal Regulations.", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20200438_en.pdf" - }, - { - "text": "## **2020 No. 
438**\n\n## **TAXES**\n\n# The International Tax Compliance (Amendment) Regulations 2020\n\n| Laid before the House of Commons | | | | 21st April 2020 |\n| --- | --- | --- | --- | --- |\n| Made - Coming into force | - | - - | - - | 20th April 2020 13th May 2020 |\n\nThe Treasury make these Regulations in exercise of the powers conferred by section 222 of the Finance Act 2013(**a**):\n\n#### **Citation and commencement**\n\n**1.** These Regulations may be cited as the International Tax Compliance (Amendment) Regulations 2020 and come into force on 13th May 2020.\n\n#### **Amendments to the International Tax Compliance Regulations 2015**\n\n**2.**—(1) The International Tax Compliance Regulations 2015(**b**) are amended as follows.\n\n(2) In regulation 1(3)(b)(i), for \"16th May 2019\" substitute \"19th April 2020\"(**c**).\n\n- (3) In regulation 3(4A)(a), at the beginning insert \"subject to regulation 24(3)\".\n- (4) In regulation 24—\n\n- (a) in the table in paragraph (2), in the column headed \"the CRS\"—\n\t- (i) at the beginning of the entry for \"new account\" insert \"subject to paragraph (3)\", and\n\t- (ii) at the beginning of the entry for \"pre-existing account\" insert \"subject to regulation 3(4A)(a) and paragraph (3)\", and\n- (b) after paragraph (2) insert—\n\t- \"(3) In respect of the accounts listed in paragraph (4)—\n\n(<b>a) 2013 c. 29; section 222 was amended by section 50 of the Finance (No. 2) Act 2015 (c. 33) but the amendments are not relevant to these Regulations.\n\n(<b>b) S.I. 2015/878 (referred to in these footnotes as \"the principal Regulations\"); relevant amending instruments are S.I. 
2017/598, 2018/490 and 2019/881.\n\n(<b>c) In accordance with the common reporting standard for automatic exchange of financial account information developed by the Organisation for Economic Co-operation and Development and adopted by the United Kingdom, the United Kingdom exchanges information received from financial institutions under the principal Regulations with a territory which is a \"Reportable Jurisdiction\" under the CRS and with which the United Kingdom has entered into international exchange arrangements for that year. Reportable Jurisdictions are identified in a published list available at https://www.gov.uk/hmrcinternal-manuals/international-exchange-of-information/ieim402340. A hard copy of this list is available for inspection at the offices of HMRC at 10 South Colonnade, 9th Floor, Canary Wharf, London E14 4PU.", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200438_en.pdf" - }, - { - "text": "accounts so that these terms are defined by reference to the date that those accounts ceased to be excluded accounts. Regulation 2(3) and (4)(a) make consequential amendments.\n\nRegulation 3 makes a transitional provision for the calendar year 2020 in relation to accounts which were previously excluded accounts.\n\nA Tax Information and Impact Note covering the International Tax Compliance Regulations 2015 was published on 18th March 2015 and is available on the HMRC website at https://www.gov.uk/government/publications/tax-administration-regulations-to-implement-theuks-automatic-exchange-of-information-agreements. 
It remains an accurate summary of the impacts that apply to this instrument.\n\n© Crown copyright 2020\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200438_en.pdf" - }, - { - "text": "#### Table of Contents\n\n#### Energy Generation and Storage Segment\n\n#### Energy Generation and Storage Sales\n\nWe record as deferred revenue any non-refundable amounts that are collected from customers related to prepayments, which is recognized as revenue ratably over the respective customer contract term. As of September 30, 2024 and December 31, 2023, deferred revenue related to such customer payments amounted to $1.73 billion and $1.60 billion, respectively, mainly due to contractual payment terms. Revenue recognized from the deferred revenue balances as of December 31, 2023 and 2022 was $1.09 billion and $511 million for the nine months ended September 30, 2024 and 2023, respectively. As of September 30, 2024, total transaction price allocated to performance obligations that were unsatisfied or partially unsatisfied for contracts with an original expected length of more than one year was $6.61 billion. Of this amount, we expect to recognize $4.23 billion in the next 12 months and the rest over the remaining performance obligation period.\n\nWe have financing receivables on our consolidated balance sheets related to loans we provide for financing our energy products. As of September 30, 2024 and December 31, 2023, we had current net financing receivables of $32 million and $31 million, respectively, in Accounts receivable, net, and $641 million and $578 million, respectively, in Other non-current assets for the long-term portion.\n\n#### Income Taxes\n\nWe are subject to income taxes in the U.S. and in many foreign jurisdictions. 
Significant judgment is required in determining our provision for income taxes, our deferred tax assets and liabilities and any valuation allowance recorded against our net deferred tax assets that are not more likely than not to be realized. We monitor the realizability of our deferred tax assets taking into account all relevant factors at each reporting period. In completing our assessment of realizability of our deferred tax assets, we consider our history of income (loss) measured at pre-tax income (loss) adjusted for permanent book-tax differences on a jurisdictional basis, volatility in actual earnings, excess tax benefits related to stock-based compensation in recent prior years and impacts of the timing of reversal of existing temporary differences. We also rely on our assessment of the Company's projected future results of business operations, including uncertainty in future operating results relative to historical results, volatility in the market price of our common stock and its performance over time, variable macroeconomic conditions impacting our ability to forecast future taxable income, and changes in business that may affect the existence and magnitude of future taxable income. Our valuation allowance assessment is based on our best estimate of future results considering all available information. Three Months Ended September 30, Nine Months Ended September 30, 2024 2023 2024 2023 Net income attributable to common stockholders $ 2,167 $ 1,853 $ 4,774 $ 7,069 Less: Buy-outs of noncontrolling interest — 2 (42) (3)\n\nOur provision for or benefit from income taxes for interim periods is determined using an estimate of our annual effective tax rate, adjusted for discrete items, if any, that are taken into account in the relevant period. 
Each quarter, we update our estimate of the annual effective tax rate, and if our estimated tax rate changes, we make a cumulative adjustment.\n\n#### Net Income per Share of Common Stock Attributable to Common Stockholders\n\nThe following table presents the reconciliation of net income attributable to common stockholders to net income used in computing basic and diluted net income per share of common stock (in millions):\n\n| Company's projected future results of business operations, including uncertainty in future operating results relative to historical | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| results, volatility in the market price of our common stock and its performance over time, variable macroeconomic conditions | | | | | | | |\n| impacting our ability to forecast future taxable income, and changes in business that may affect the existence and magnitude of | | | | | | | |\n| future taxable income. Our valuation allowance assessment is based on our best estimate of future results considering all | | | | | | | |\n| available information. | | | | | | | |\n| Our provision for or benefit from income taxes for interim periods is determined using an estimate of our annual | | | | | | | |\n| effective tax rate, adjusted for discrete items, if any, that are taken into account in the relevant period. Each quarter, we update | | | | | | | |\n| our estimate of the annual effective tax rate, and if our estimated tax rate changes, we make a cumulative adjustment. 
| | | | | | | |\n| Net Income per Share of Common Stock Attributable to Common Stockholders | | | | | | | |\n| The following table presents the reconciliation of net income attributable to common stockholders to net income used in | | | | | | | |\n| computing basic and diluted net income per share of common stock (in millions): | | | | | | | |\n| 2024 2023 2024 2023 | | | | | | | |\n| Net income attributable to common stockholders $ $ $ | $ | 2,167 | | 1,853 | 4,774 | 7,069 | |\n| Less: Buy-outs of noncontrolling interest 2 (42) | | — | | | | | (3) |\n| Net income used in computing basic and diluted net | | | | | | | |\n| $ $ income per share of common stock | $ | 2,167 | $ | 1,851 | 4,816 | | 7,072 |\n| 12 | | | | | | | |", - "page_start": 15, - "page_end": 15, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## 6. Income tax continued\n\n| | | 2013 | 2012 |\n| --- | --- | --- | --- |\n| | | $'000 | $'000 |\n| c) | Tax recognised in other comprehensive income | | |\n| | Available-for-sale investment revaluation reserve | (39) | (300) |\n| | Foreign exchange losses recognised directly in foreign currency translation reserves | 566 | 103 |\n| | Total tax recognised in other comprehensive income | 527 | (197) |\n\n#### d) Deferred tax liabilities offset\n\nDeferred tax liabilities amounting to $853,000 (2012: $774,000) have been offset against deferred tax asset.\n\n| e) | Unrecognised deferred tax assets | | |\n| --- | --- | --- | --- |\n| | Tax losses – Australian entities | 211,548 | 5,627 |\n| | Tax losses – other entities | 9,237 | 2,185 |\n| | Temporary difference | 130,113 | – |\n| | Subtotal | 350,898 | 7,812 |\n| | Unrecognised deferred tax assets | 104,345 | 2,344 |\n\n#### f) Tax consolidation group\n\nKingsgate Consolidated Limited and its whollyowned Australian subsidiary have implemented the tax consolidation legislation as of 1 July 2003. 
The accounting policy in relation to this legislation is set out in Note 2d.\n\nOn adoption of the tax consolidation legislation, the entities in the tax-consolidation group entered into a tax sharing agreement which, in the opinion of the Directors, limits the joint and several liabilities of the wholly-owned entities in\n\nthe case of default by the head entity, Kingsgate Consolidated Limited.\n\nThe entities have also entered into a tax funding agreement under which the wholly-owned entities fully compensate Kingsgate for any current tax payable assumed and are compensated for any current tax receivable and deferred assets relating to the unused tax losses or unused tax credits that are transferred to Kingsgate under the tax legislation. The funding\n\namounts are determined by reference to the amounts recognised in the wholly-owned entities' financial statements.\n\nThe amount receivable / payable under the tax funding agreement are due upon receipt of the funding advice from the head entity, which is issued as soon as practicable after the end of each financial year. 
The head entity may also require payment of interim funding amounts to assist with its obligations to pay tax instalments.\n\n| | | Assets | | Liabilities | | Net | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| g) | Recognised deferred tax assets | 2013 | 2012 | 2013 | 2012 | 2013 | 2012 |\n| and liabilities | | $'000 | $'000 | $'000 | $'000 | $'000 | $'000 |\n| Deferred tax assets / liabilities: | | | | | | | |\n| Derivatives | | 384 | 808 | – | – | 384 | 808 |\n| Employee benefits | | 1,789 | 1,571 | – | – | 1,789 | 1,571 |\n| Provision for restoration and rehabilitation | | 5,167 | 3,390 | – | – | 5,167 | 3,390 |\n| Provision for obsolescence | | 309 | 278 | – | – | 309 | 278 |\n| Unrealised exchange (gains) / losses | | 1,265 | 2,990 | (2,020) | (200) | (755) | 2,790 |\n| Other items | | 1,147 | 1,096 | (467) | – | 680 | 1,096 |\n| Tax losses | | – | 36,334 | – | – | – | 36,334 |\n| Available-for-sale financial assets | | 334 | 78 | – | (39) | 334 | 39 |\n| Mine properties and exploration | | 3,706 | – | (11,447) | (65,205) | (7,741) | (65,205) |\n| Total deferred tax assets / (liabilities) | | 14,101 | 46,545 | (13,934) | (65,444) | 167 | (18,899) |\n| Set off tax | | (3,706) | (36,334) | 3,706 | 36,334 | – | – |\n| Net deferred tax assets (liabilities) | | 10,395 | 10,211 | (10,228) | (29,110) | 167 | (18,899) |", - "page_start": 83, - "page_end": 83, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "Dollar and share amounts in millions except per share, per option and per unit amounts\n\nAs of January 31, 2015, our state and foreign net operating loss carryforwards for income tax purposes were approximately $3 and $11, respectively. As of February 1, 2014, our federal, state and foreign net operating loss carryforwards for income tax purposes were approximately $4, $24 and $0, respectively. The state net operating loss carryforwards are subject to certain statutory limitations of the Internal Revenue Code and applicable state law. 
If not utilized, a portion of our state and foreign net operating loss carryforwards will begin to expire in 2031 and 2033, respectively.\n\nA reconciliation of the beginning and ending amount of unrecognized tax benefits is as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Unrecognized tax benefit at beginning of year | $14 | $15 | $21 |\n| Gross increase to tax positions in prior periods | 9 | 3 | 1 |\n| Gross decrease to tax positions in prior periods | (2) | (1) | (7) |\n| Gross increase to tax positions in current period | 2 | 1 | 1 |\n| Lapses in statute | (3) | — | — |\n| Settlements | (5) | (4) | (1) |\n| Unrecognized tax benefit at end of year | $15 | $14 | $15 |\n\nAt the end of 2014, 2013 and 2012, $13, $7 and $7 of the ending gross unrecognized tax benefit related to items which, if recognized, would affect the effective tax rate.\n\nOur income tax expense included a decrease to expense of $1 in both 2014 and 2012, and an increase to expense of $1 in 2013, for taxrelated interest and penalties. At the end of 2014, 2013 and 2012, our liability for interest and penalties was $2, $7 and $7.\n\nWe file income tax returns in the U.S. and a limited number of foreign jurisdictions. With few exceptions, we are no longer subject to federal, state and local, or non-U.S. income tax examinations for years before 2010. Unrecognized tax benefits related to federal, state and local tax positions may decrease by $4 by January 30, 2016, due to the completion of examinations and the expiration of various statutes of limitations.\n\n## **NOTE 15: EARNINGS PER SHARE**\n\nEarnings per basic share is computed using the weighted-average number of common shares outstanding during the year. Earnings per diluted share uses the weighted-average number of common shares outstanding during the year plus dilutive common stock equivalents, primarily stock options. 
Dilutive common stock reflects the issuance of stock for all outstanding options that could be exercised, and would also reduce the amount of earnings that each share is entitled to. Anti-dilutive shares (including stock options and other shares) are excluded from the calculation of diluted shares and earnings per diluted share because their impact could increase earnings per diluted share.\n\nThe computation of earnings per share is as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Net earnings | $720 | $734 | $735 |\n| Basic shares | 190.0 | 194.5 | 203.0 |\n| Dilutive effect of stock options and other | 3.6 | 3.2 | 3.7 |\n| Diluted shares | 193.6 | 197.7 | 206.7 |\n| Earnings per basic share | $3.79 | $3.77 | $3.62 |\n| Earnings per diluted share | $3.72 | $3.71 | $3.56 |\n| Anti-dilutive stock options and other | 2.1 | 4.1 | 4.2 |", - "page_start": 72, - "page_end": 72, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "# **NOTE 1 - STATEMENT OF SIGNIFICANT ACCOUNTING POLICIES continued**\n\nDeferred tax assets and liabilities are ascertained based on temporary differences arising between the tax bases of assets and liabilities and their carrying amounts in the financial statements. Deferred tax assets also result where amounts have been fully expensed but future tax deductions are available. No deferred income tax will be recognised from the initial recognition of an asset or liability, excluding a business combination, where there is no effect on accounting or taxable profit or loss.\n\nDeferred tax assets and liabilities are calculated at the tax rates that are expected to apply to the period when the asset recognised or the liability is settled, based on tax rates enacted or substantively enacted at the reporting date. 
Their measurement also reflects the manner in which management expects to recover or settle the carrying amount of the related asset or liability.\n\nDeferred tax assets relating to temporary differences and unused tax losses are recognised only to the extent that it is probable that future taxable profit will be available against which the benefits of the deferred tax asset can be utilized. Where temporary differences exist in relation to investments in subsidiaries, branches, associates, and joint ventures, deferred tax assets and liabilities are not recognised where the timing of the reversal of the temporary difference can be controlled and it is not probable that the reversal will occur in the foreseeable future.\n\nCurrent tax assets and liabilities are offset where a legally enforceable right of set-off exists and it is intended that net settlement or simultaneous realisation and settlement of the respective asset and liability will occur. Deferred tax assets and liabilities are offset where a legally enforceable right of set-off exists, the deferred tax assets and liabilities relate to income taxes levied by the same taxation authority on either the same taxable entity or different taxable entities where it is intended that net settlement or simultaneous realisation and settlement of the respective asset and liability will occur in future periods in which significant amounts of deferred tax assets or liabilities are expected to be recovered or settled.\n\n# *Tax Consolidation*\n\nSundance Energy Australia Limited and its wholly-owned Australian controlled entities have agreed to implement the income tax consolidation regime, with Sundance Energy Australia Limited being the head company of the newly consolidated group. Under this regime the group entities will be taxed as a single taxpayer. 
Whilst this choice is yet to be communicated to the Australian Taxation Office, it is intended to be communicated prior to lodgement of the 31 December 2014 income tax return and will be effective from 1 January 2014. Sundance Energy Australia Limited and its wholly-owned Australian controlled entities intend to enter into a Tax Sharing Agreement and Tax Funding Agreement in due course.\n\nThe head entity of the income tax consolidated group and the controlled entities in the tax consolidated group account for their own current and deferred tax amounts. These tax amounts are measured as if each entity in the tax consolidated group continues to be a standalone taxpayer in its own right.\n\nIn addition to its own current and deferred tax amounts, Sundance Energy Australia Limited, as head company, also recognises the current tax liabilities (or assets) and the deferred tax assets arising from unused tax losses and unused tax credits assumed from controlled entities in the tax consolidated group.", - "page_start": 61, - "page_end": 61, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "(3) In regulation 4ZA—\n\n- (a) in the heading, for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\";\n- (b) in paragraph (1)(a), for \"regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\")\" substitute \"regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 (\"the International Travel and Operator Liability Regulations\")\";\n- (c) in paragraph (1)(c), for \"paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\";\n- (d) in paragraph (3), for \"paragraph 7(1)(f) of 
Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\".\n\n**2.**—(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020(**a**) are amended as follows.\n\n(2) In regulation 2D(1)(c), for \"regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n(3) In regulation 6(1)—\n\n- (a) in the definitions of \"designated place\", \"isolation requirements\" and \"self-isolating worker\", for \"regulation 4\" substitute \"regulation 9\";\n- (b) in the definition of \"International Travel Regulations\", for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n# SCHEDULE 16 Regulation 26(3)\n\n### Transitional provision\n\n**1.** Passenger information provided before 4.00 a.m. on 17th May 2021 by a person pursuant to regulation 3 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\") in advance of arrival in England is treated as passenger information provided for the purposes of these Regulations where the person arrives in England on or after that date.\n\n**2.** Confirmation given by the Foreign, Commonwealth and Development Office that a person is not required to comply with regulation 3B of the 2020 Regulations is treated as confirmation that the person is not required to comply with regulation 6 of these Regulations where the person arrives in England on or after 4.00 a.m. 
on 17th May 2021.\n\n**3.** A designation by the Secretary of State of a person as an authorised person under regulation 5(7) of the 2020 Regulations has effect as a designation of that person as an authorised person under of regulation 11(11)(c) of these Regulations.\n\n**4.** Regulation 5A of the 2020 Regulations continues to have effect in relation to a constable who exercises the powers in that regulation in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021.\n\n(<b>a) S.I. 2020/1045. Regulation 2D was inserted by S.I. 2021/364. There are other amendments but none is relevant.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "# **NOTE 7 – INCOME TAX EXPENSE continued**\n\n| | | | 2014 | 2013 |\n| --- | --- | --- | --- | --- |\n| | Year ended 31 December | | US$'000 | US$'000 |\n| c) | | Unused tax losses and temporary differences for which | | |\n| | | no deferred tax asset has been recognised at 30% | 2,685 | 170 |\n| d) | | Deferred tax charged directly to equity: | | |\n| | - | Equity raising costs | 1,147 | 665 |\n| | - | Currency translation adjustment | (268) | - |\n\n- 1) The Oklahoma US state tax jurisdiction computes income taxes on a direct accounting basis. A significant portion of the 2014 impairment related to this jurisdiction resulting in a deferred tax benefit of $3,044 creating deferred tax assets, of which $2,064 were unrecognized.\n- 2) The change in apportioned state tax rates in US controlled entities is a result of the Company disposing of its property in Colorado (income tax rate of 4.63%) (2013: North Dakota with income tax rate of 4.53%) through a tax deferred sale and reinvesting the property in Texas (margin tax rate of 1%). 
As the Texas margin tax computation is similar in nature to an income tax computation, it is treated as an income tax for financial reporting purposes.\n- 3) This income tax benefit results from the election to consolidate certain Australian subsidiaries for income tax purposes effective 1 January 2014, making previously unrecognized deferred tax assets of one of these Australian subsidiaries available for utilization against future income of the consolidated Australian entities. These deferred tax assets were previously unrecognized due to the lack of evidence of future taxable income for these Australian subsidiaries on a stand-alone basis.\n\n# **NOTE 8 – KEY MANAGEMENT PERSONNEL COMPENSATION**\n\n- **a) Names and positions held of Consolidated Group key management personnel in office at any time during the financial period are:**\n\n| Mr M Hannell | Chairman Non-executive |\n| --- | --- |\n| Mr E McCrady | Managing Director and Chief Executive Officer |\n| Mr D Hannes | Director – Non-executive |\n| Mr N Martin | Director – Non-executive |\n| Mr W Holcombe Director – Non-executive | |\n| Ms C Anderson | Chief Financial Officer |\n| Ms G Ford | Vice President of Exploration and Development |\n\nBased on her increased responsibilities due to the Company's growth, Ms. Ford was deemed to be a KMP during the 2014 fiscal year. Prior to that time, Ms. 
Ford was not considered to be KMP\n\nOther than Directors and Officers of the Company listed above, there are no additional key management personnel.", - "page_start": 78, - "page_end": 78, - "source_file": "ASX_SEA_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20200438_en.pdf", - "query": "Under which conditions can the funds of a non-registered pension arrengements be obtained before the age of 55 ?", - "target_page": 2, - "target_passage": "non-registered pension arrangements where the annual contributions are limited to £50,000 and funds contributed cannot be accessed before the age of 55 except in circumstances of serious ill health", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### NOTE 22: PENSIONS\n\nWe have contributory and non-contributory defined benefit pension plans that are made available to most of our employees. The plans provide pensions based on years of service, years of contributions and earnings. We do not provide any non-pension post-retirement benefits. We also provide unfunded supplemental pension benefits to certain executives.\n\nThe assets of the defined benefit pension plans are held in segregated accounts isolated from our assets. We administer the defined benefit pension plans pursuant to applicable regulations, the Statement of Investment Policies and Procedures and to the mandate of the Pension Committee of the Board of Directors. 
The Pension Committee of the Board of Directors oversees our administration of the defined benefits pension plans, which includes the following principal areas:\n\n- overseeing the funding, administration, communication and investment management of the plans\n- selecting and monitoring the performance of all third parties performing duties in respect of the plans, including audit, actuarial and investment management services\n- proposing, considering and approving amendments to the defined benefit pension plans\n- proposing, considering and approving amendments of the Statement of Investment Policies and Procedures\n- reviewing management and actuarial reports prepared in respect of the administration of the defined benefit pension plans\n- reviewing and approving the audited financial statements of the defined benefit pension plan funds.\n\nThe assets of the defined benefit pension plans are invested and managed following all applicable regulations and the Statement of Investment Policies and Procedures, and reflect the characteristics and asset mix of each defined benefit pension plan. Investment and market return risk is managed by:\n\n- contracting professional investment managers to execute the investment strategy following the Statement of Investment Policies and Procedures and regulatory requirements\n- specifying the kinds of investments that can be held in the plans and monitoring compliance\n- using asset allocation and diversification strategies, and\n- purchasing annuities from time to time.\n\nThe funded pension plans are registered with the Office of the Superintendent of Financial Institutions and are subject to the Federal Pension Benefits Standards Act. The plans are also registered with the Canada Revenue Agency and are subject to the Canada Income Tax Act. 
The benefits provided under the plans and the contributions to the plans are funded and administered in accordance with all applicable legislation and regulations.\n\nSignificant estimates are involved in determining pension related balances. Actuarial estimates are based on projections of employees' compensation levels at the time of retirement. Maximum retirement benefits are primarily based on career average earnings, subject to certain adjustments. The most recent actuarial valuations were completed as at January 1, 2013.\n\nThe table below sets out the estimated present value of accrued plan benefits and the estimated market value of the net assets available to provide these benefits for our funded plans at December 31, 2013 and 2012.\n\n| | 2013 | | | 2012 |\n| --- | --- | --- | --- | --- |\n| Plan assets, at fair value | | $ 1,037 | $ | 833 |\n| Accrued benefit obligations | | 1,209 | | 1,167 |\n| Deficiency of plan assets over accrued benefit obligations | | (172) | | (334) |\n| Effect of asset ceiling limit | | (9) | | – |\n| Net deferred pension liability | $ | (181) | $ | (334) |\n| Consists of: | | | | |\n| Deferred pension asset | $ | 8 | $ | 9 |\n| Deferred pension liability | | (189) | | (343) |\n| Net deferred pension liability | $ | (181) | $ | (334) |\n\nThe table below shows our pension fund assets for the years ended 2013 and 2012.\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| Plan assets, January 1 | $ 833 | $ 684 |\n| Interest income | 40 | 40 |\n| Remeasurements, return on plan assets recognized in other | | |\n| comprehensive income and equity | 65 | 37 |\n| Contributions by employees | 26 | 22 |\n| Contributions by employer | 101 | 85 |\n| Benefits paid | (26) | (33) |\n| Administrative expenses paid from plan assets | (2) | (2) |\n| Plan assets, December 31 | $ 1,037 | $ 833 |\n\nThe table below shows the accrued benefit obligations arising from funded obligations for the years ended December 31, 2013 and 2012.\n\n| | 2013 | 2012 |\n| --- | 
--- | --- |\n| Accrued benefit obligations, January 1 | $ 1,167 | $ 817 |\n| Service cost | 71 | 46 |\n| Interest cost | 52 | 45 |\n| Benefits paid | (26) | (33) |\n| Contributions by employees | 26 | 23 |\n| Remeasurements, recognized in other comprehensive | | |\n| income and equity | (81) | 269 |\n| Accrued benefit obligations, December 31 | $ 1,209 | $ 1,167 |\n\nThe table below shows the effect of the asset ceiling for the years ended December 31, 2013 and 2012.\n\n| | 2013 | | 2012 | |\n| --- | --- | --- | --- | --- |\n| Asset ceiling, January 1 | $ | – | $ | – |\n| Interest income | | – | | – |\n| Remeasurements, change in asset ceiling (excluding interest | | | | |\n| income) recognized in comprehensive income and equity | (9) | | – | |\n| Effect of changes in foreign exchange rates | | – | – | |\n| Asset ceiling, December 31 | $ (9) | | $ – | |\n\nPlan assets are comprised mainly of pooled funds that invest in common stocks and bonds that are traded in an active market. The table below shows the fair value of the total pension plan assets by major category for the years ended December 31, 2013 and 2012.\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| Equity securities | $ 631 | $ 480 |\n| Debt securities | 403 | 348 |\n| Other – cash | 3 | 5 |\n| Total fair value of plan assets | $ 1,037 | $ 833 |", - "page_start": 121, - "page_end": 121, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "- 113. Tenure of office of Director of Public Prosecutions\n- 114. Tenure of office of Auditor-General\n- 115. Pensions laws and protection of pensions rights\n- 116. Power of Commissions in relation to pensions, etc.\n\n#### CHAPTER VIII\n\n#### Finance\n\n- 117. Consolidated Fund\n- 118. Withdrawals from Consolidated Fund or other public funds\n- 119. Authorization of expenditure\n- 120. Authorization of expenditure in advance of appropriation\n- 121. Contingencies Fund\n- 122. Remuneration of certain officers\n- 123. Public debt\n- 124. 
Auditor-General\n\n### CHAPTER IX\n\n#### Miscellaneous\n\n- 125. Resignations\n- 126. Reappointments and concurrent appointments\n- 127. Interpretation\n\nFirst Schedule - Election of Specially Elected Members of the National Assembly\n\nSecond Schedule - Division of Districts into regions for the purpose of selecting Members of Ntlo ya Dikgosi\n\n> L.N. 83, 1966, Act 30, 1969, Act 43, 1969, Act 25, 1970, Act 28, 1972, Act 24, 1973, Act 28, 1978, S.I. 25, 1980, Act 32, 1982, Act 1, 1983, Act 22, 1987, S.I. 37, 1991, Act 27, 1992, S.I. 51, 1993, S.I. 119, 1993, Act 16, 1997, Act 18, 1997, Act 1, 1999, Act 2, 2002, Act 12, 2002, Act 9, 2005, S.I. 91, 2006.\n\n[Date of Commencement: 30th September, 1966]\n\n### **CHAPTER I**", - "page_start": 3, - "page_end": 3, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "investigated then-\n\n- (a) the Assembly shall, by resolution, appoint a tribunal which shall consist of a Chairman and not less than two other members, who hold or have held high judicial office;\n- (b) the tribunal shall enquire into the matter and report on the facts thereof to the Assembly;\n- (c) the Assembly shall consider the report of the tribunal at the first convenient sitting of the Assembly after it is received and may, upon such consideration, by resolution, remove the Auditor-General from office.\n\n(4) If the question of removing a person holding the office of Auditor-General from office has been referred to a tribunal under this section, the National Assembly may, by resolution, suspend that person from performing the functions of his or her office, and any such suspension may at any time be revoked by the Assembly by resolution and shall in any case cease to have effect if, upon consideration of the report of the tribunal in accordance with the provisions of this section, the Assembly does not remove the Auditor-General from office.\n\n# **115. 
Pensions laws and protection of pensions rights**\n\n(1) The law to be applied with respect to any pensions benefits that were granted to any person before the coming into operation of this Constitution shall be the law that was in force at the date on which those benefits were granted or any law in force at a later date that is not less favourable to that person.\n\n(2) The law to be applied with respect to any pensions benefits (not being benefits to which subsection (1) of this section applies) shall-\n\n- (a) in so far as those benefits are wholly in respect of a period of service as a public officer that commenced before the date on which this Constitution comes into operation, be the law that was in force immediately before that date; and\n- (b) in so far as those benefits are wholly or partly in respect of a period of service as a public officer that commenced after the date on which this Constitution comes into operation, be the law in force on the date on which that period of service commenced,\n\nor any law in force at a later date that is not less favourable to that person.\n\n(3) Where a person is entitled to exercise an option as to which of two or more laws shall apply in his or her case, the law for which he or she opts shall, for the purposes of this section, be deemed to be more favourable to him or her than the other law or laws.\n\n(4) All pensions benefits shall (except to the extent to which under any law providing for the funding of pensions benefits they are a charge on a fund established by that law and have been duly paid out of that fund to the person or authority to whom payment is due) be a charge on the Consolidated Fund.\n\n(5) In this section \"pensions benefits\" means any pensions, compensation, gratuities or other like allowances for persons in respect of their service as public officers or as members of the armed forces or for the widows, children, dependants or personal representatives of such persons in respect of such 
service.\n\n(6) References in this section to the law with respect to pensions benefits include (without prejudice to their generality) references to the law regulating the circumstances in which such benefits may be granted or in which the grant of such benefits may be refused, the law regulating the circumstances in which any such benefits that have been granted may be withheld, reduced in amount or suspended and the law regulating the amount of any such benefits.\n\n(7) In this section references to service as a public officer include references to service as a public officer of the former Protectorate of Bechuanaland.", - "page_start": 49, - "page_end": 49, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "## **116. Power of Commissions in relation to pensions, etc.**\n\n- (1) Where under any law any person or authority has a discretion-\n- (a) to decide whether or not any pensions benefits shall be granted; or\n- (b) to withhold, reduce in amount or suspend any such benefits that have been granted,\n\nthose benefits shall be granted and may not be withheld, reduced in amount or suspended unless the appropriate Commission concurs in the refusal to grant the benefits or, as the case may be, in the decision to withhold them, reduce them in amount or suspend them.\n\n(2) Where the amount of any pensions benefits that may be granted to any person is not fixed by law, the amount of the benefits to be granted to him or her shall be the greatest amount for which he or she is eligible unless the appropriate Commission concurs in his or her being granted benefits of a smaller amount.\n\n(3) The appropriate Commission shall not concur under subsection (1) or subsection (2) of this section in action taken on the ground that any person who holds or has held the office of a judge of the Court of Appeal or of the High Court or the Auditor- General or Director of Prosecutions has been guilty of misbehaviour unless he or she has been removed from office by 
reason of such misbehaviour.\n\n(4) In this section \"the appropriate Commission\" means-\n\n- (a) in the case of benefits for which any person may be eligible in respect of the service in the public service of a person who, immediately before he or she ceased to be a public officer, was subject to the disciplinary control of the Judicial Service Commission or that have been granted in respect of such service, the Judicial Service Commission;\n- (b) in any other case, the Public Service Commission.\n\n(5) In this section \"pensions benefits\" means any pensions, compensation, gratuities or other like allowances for persons in respect of their service as public officers (including service as public officers of the former Protectorate of Bechuanaland) or for the widows, children, dependants or personal representatives of such persons in respect of such service.\n\n## **CHAPTER VIII Finance (ss 117-124)**\n\n# **117. Consolidated Fund**\n\nAll revenues or other moneys raised or received for the purposes of the Government of Botswana (not being revenues or other moneys that are payable by or under any law into some other fund established for a specific purpose or that may by or under any law be retained by the department of Government that received them for the purposes of defraying the expenses of that department) shall be paid into and form one Consolidated Fund.\n\n# **118. 
Withdrawals from Consolidated Fund or other public funds**\n\n(1) No moneys shall be withdrawn from the Consolidated Fund except-\n\n- (a) to meet expenditure that is charged upon the Fund by this Constitution or by any Act of Parliament;\n- (b) where the issue of those moneys has been authorized by an Appropriation Act, by a supplementary estimate approved by resolution of the National Assembly or by a law enacted in pursuance of section 120 of this Constitution.\n\n(2) No moneys shall be withdrawn from any public fund of Botswana other than the Consolidated Fund unless the issue of those moneys has been authorized by or under a law.\n\n(3) No moneys shall be withdrawn from the Consolidated Fund except in the manner prescribed by Parliament.", - "page_start": 50, - "page_end": 50, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "which the Group becomes a party to the contractual provisions of the instrument.\n\nThe Group derecognises a financial asset when the contractual rights to the cash flows from the asset expire, or it transfers the rights to receive the contractual cash flows on the financial asset in a transaction in which substantially all the risks and rewards of ownership of the financial assets are transferred.\n\nFinancial assets and liabilities are offset and the net amount presented in the statement of financial position when, and only when, the Group has a legal right to offset the amounts and intends either to settle on a net basis or to realise the asset and settle the liability simultaneously.\n\n#### (i) Financial assets at fair value through profit or loss\n\nFinancial assets at fair value through profit or loss are financial assets held for trading if acquired principally for the purpose of selling in the short term. Derivatives are also categorised as held for trading unless they are designated as hedges.\n\nAttributable transaction costs are recognised in the profit or loss when incurred. 
Assets in this category are classified as current assets if they are expected to be settled within 12 months, otherwise they are classified as non-current.\n\n#### (ii) Loans and receivables\n\nLoans and receivables are non-derivative financial assets with fixed or determinable payments that are not quoted in an active market. They are included in current assets, except for those with maturities greater than 12 months after the reporting date which are classified as noncurrent assets.\n\nLoans and receivables are measured at amortised cost using the effective interest method, less any impairment losses.\n\n#### (iii) Available-for-sale financial assets\n\nAvailable-for-sale financial assets, comprising principally marketable equity securities, are non-derivative financial assets that are either designated in this category or not classified in any of the other categories. They are included in non-current assets unless management intends to dispose of the investment within 12 months of the reporting date. Investments are designated as available-for-sale if they do not have fixed maturities and fixed or determinable payments and management intends to hold them for the medium to long term.\n\nSubsequent to initial recognition, available-forsale financial assets are measured at fair value and changes therein, other than impairment\n\nlosses, are recognised as a separate component of equity net of attributable tax. When an asset is derecognised the cumulative gain or loss in equity is transferred to the statement of comprehensive income.\n\n#### Impairment\n\nThe Group assesses at each reporting date whether there is objective evidence that a financial asset or group of financial assets is impaired. In the case of equity securities classified as available-for-sale, a significant or prolonged decline in the fair value of a security below its cost is considered as an indicator that the securities are impaired. 
If any such evidence exists for available-for-sale financial assets, the cumulative loss measured as the difference between the acquisition cost and the current fair value, less any impairment loss on that financial asset previously recognised in profit or loss, is removed from equity and recognised in the statement of comprehensive income. Impairment losses recognised in the profit or loss on equity instruments classified as available-for-sale are not reversed through the statement of comprehensive income.\n\nIf there is evidence of impairment for any of the Group's financial assets carried at amortised cost, the loss is measured as the difference between the asset's carrying amount and the present value of estimated future cash flows, excluding future credit losses that have not been incurred. The cash flows are discounted at the financial asset's original effective interest rate. The loss is recognised in the statement of comprehensive income.\n\n#### k. Derivative financial instruments\n\nDerivative financial instruments are used by the Group to protect against the Group's Australian dollar gold price risk exposures. The Group does not apply hedge accounting and accordingly all fair value movements on derivative financial instruments are recognised in the profit or loss.\n\nDerivative financial instruments are stated at fair value on the date a derivative contract is entered into and are subsequently remeasured to their fair value at each reporting date. The resulting gain or loss is recognised in the statement of comprehensive income immediately.\n\n#### l. Property, plant and equipment\n\nProperty, plant and equipment are stated at historical cost less depreciation. 
Historical cost includes expenditure that is directly attributable to the acquisition of the items.\n\nSubsequent costs are included in the asset's carrying amount or recognised as a separate asset, as appropriate, only when it is probable that future economic benefits associated with the item will flow to the Group and the cost of the item can be measured reliably. The carrying amount of any component accounted for as a separate asset is derecognised when replaced. All other repairs and maintenance are charged to the statement of comprehensive income during the reporting period in which they are incurred.\n\n#### Depreciation\n\nDepreciation and amortisation of mine buildings, plant, machinery and equipment is provided over the assessed life of the relevant mine or asset, whichever is the shorter.\n\nDepreciation and amortisation is determined on a units-of-production basis over the estimated recoverable reserves from the related area. In some circumstances, where conversion of resources into reserves is expected, some elements of resources may be included. For mine plant, machinery and equipment, which have an expected economic life shorter than the life of the mine, a straight line basis is adopted.\n\nThe expected useful lives are as follows:\n\n- 〉 mine buildings the shorter of applicable mine life and 25 years;\n- 〉 plant, machinery and equipment the shorter of applicable mine life and 3–15 years depending on the nature of the asset.\n\nThe estimated recoverable reserves and life of each mine and the remaining useful life of each class of asset are reassessed at least annually. 
Where there is a change in the reserves during the period, depreciation and amortisation rates are adjusted prospectively from the beginning of the reporting period.\n\nMajor spares purchased specifically for a particular plant are capitalised and depreciated on the same basis as the plant to which they relate.\n\n#### Impairment\n\nAn asset's carrying amount is written down immediately to its recoverable amount if the asset's carrying amount is greater than its estimated recoverable amount (Note 2f).\n\n#### Derecognition\n\nAn item of property, plant and equipment is derecognised upon disposal or when no future economic benefits are expected to arise from the continued use of the asset.\n\nAny gain or loss arising on derecognition of the asset (calculated as the difference between the net disposal proceeds and the carrying amount of the item) is included in the profit or loss in the period the item is derecognised.", - "page_start": 72, - "page_end": 72, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "#### **(p) Receivables**\n\nTrade receivables and other receivables are recorded at amounts due less any provision for doubtful debts.\n\nBills of exchange are recorded at amortised cost, with revenue recognised on an effective yield basis.\n\n#### **(q) Recoverable Amount of Non-Current Assets**\n\nNon-current assets are written down to recoverable amount where the carrying value of any non-current asset exceeds recoverable amount. 
In determining the recoverable amount of noncurrent assets, the expected net cash flows have not been discounted to their discount value.\n\n#### **(r) Revenue Recognition**\n\n#### Sale of Goods and Disposal of Assets\n\nRevenue from the sale of goods and disposal of other assets is recognised when the economic entity has passed control of the goods or other assets to the buyer.\n\n#### Rendering of Services\n\nRevenue from a contract to provide services is recognised by reference to the stage of completion of the contract.\n\n#### Contribution of Assets\n\nRevenue arising from the contribution of assets is recognised when the economic entity gains control of the contribution or the right to receive the contribution.\n\n#### Liabilities Forgiven\n\nThe gross amount of a liability forgiven by a credit provider is recognised as revenue.", - "page_start": 44, - "page_end": 44, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "- (d) if he or she is elected as Speaker;\n- (e) if he or she is removed from office by a resolution of the Assembly supported by the votes of not less than two-thirds of all the Members of the Assembly; or\n- (f) when the Assembly first sits after any dissolution of Parliament.\n\n# **61. Qualifications for election to National Assembly**\n\nSubject to the provisions of section 62 of this Constitution, a person shall be qualified to be elected as a Member of the National Assembly if, and shall not be qualified to be so elected unless-\n\n- (a) he or she is a citizen of Botswana;\n- (b) he or she has attained the age of 18 years;\n- (c) he or she is qualified for registration as a voter for the purposes of the election of the Elected Members of the National Assembly and is so registered; and\n- (d) he or she is able to speak, and, unless incapacitated by blindness or other physical cause, to read English well enough to take an active part in the proceedings of the Assembly.\n\n# **62. 
Disqualifications for membership of National Assembly**\n\n(1) No person shall be qualified to be elected as a Member of the National Assembly who-\n\n- (a) is, by virtue of his or her own act, under any acknowledgement of allegiance, obedience or adherence to a foreign power or state;\n- (b) has been declared insolvent or adjudged or otherwise declared bankrupt under any law for the time being in force in Botswana and has not been discharged, or has made a composition with his or her creditors and has not paid his or her debts in full;\n- (c) is certified to be insane or otherwise adjudged or declared to be of unsound mind under any law for the time being in force in Botswana;\n- (d) is a Member of the Ntlo ya Dikgosi;\n- (e) subject to such exceptions as may be prescribed by Parliament, holds any public office, or is acting in any public office by virtue of a contract of service expressed to continue for a period exceeding six months;\n- (f) is under sentence of death imposed on him or her by a court in any part of the Commonwealth, or is under a sentence of imprisonment (by whatever name called) exceeding six months imposed on him or her by such a court or substituted by competent authority for some other sentence imposed on him or her by such a court;\n- (g) holds, or is acting in, any office the functions of which involve any responsibility for, or in connection with, the conduct of any elections to the Assembly or the compilation or revision of any electoral register for the purposes of such elections.\n\n(2) Parliament may provide that a person shall not be qualified for election to the National Assembly for such period (not exceeding five years) as may be prescribed if he or she is convicted of any such offence connected with elections to the Assembly as may be prescribed.\n\n(3) For the purposes of this section two or more terms of imprisonment that are required to be served consecutively shall be regarded as a single term of imprisonment for the aggregate 
period of those terms, and no account shall be taken of a sentence of imprisonment imposed as an alternative to or in default of the payment of a fine.\n\n# **63. Constituencies**\n\nBotswana shall be divided into as many constituencies as there are Elected Members of the National Assembly and each of those constituencies shall return one Member to the National Assembly.", - "page_start": 27, - "page_end": 27, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "## **122. Remuneration of certain officers**\n\n(1) There shall be paid to the holders of the offices to which this section applies such salaries and such allowances as may be prescribed by Parliament.\n\n(2) The salaries and any allowances payable to the holders of the offices to which this section applies shall be a charge on the Consolidated Fund.\n\n(3) The salary payable to the holder of any office to which this section applies and his or her terms of office, other than allowances, shall not be altered to his or her disadvantage after his or her appointment.\n\n(4) Where a person's salary or terms of office depend upon his or her option, the salary or terms for which he or she opts shall, for the purposes of subsection (3) of this section, be deemed to be more advantageous to him or her than any others for which he or she might have opted.\n\n(5) This section applies to the offices of judge of the Court of Appeal, judge of the High Court, member of the Public Service Commission, member of the Judicial Service Commission, member of the Delimitation Commission, Auditor-General, Director of Public Prosecutions and Attorney-General.\n\n## **123. 
Public debt**\n\n(1) There shall be charged on the Consolidated Fund all debt charges for which Botswana is liable.\n\n(2) For the purposes of this section debt charges include interest, sinking fund charges, the repayment or amortization of debt, and all expenditure in connection with the raising of loans on the security of the revenues or the Consolidated Fund of the former Protectorate of Bechuanaland or Botswana, and the service and redemption of debt thereby created.\n\n### **124. Auditor-General**\n\n(1) There shall be an Auditor-General, whose office shall be a public office.\n\n(2) The public accounts of Botswana and of all officers, courts and authorities of the Government of Botswana shall be audited and reported on by the Auditor-General and for that purpose the Auditor-General or any person authorized by him or her in that behalf shall have access to all books, records, reports and other documents relating to those accounts:\n\nProvided that, if it is so provided by Parliament in the case of any body corporate directly established by law, the accounts of that body corporate shall be audited and reported on by such person as may be specified by or under that law.\n\n(3) The Auditor-General shall submit his or her reports to the Minister responsible for finance, who shall cause them to be laid before the National Assembly.\n\n(4) The Auditor-General shall perform such other duties and exercise such other powers in relation to the accounts of the Government or the accounts of other public authorities or other bodies as may be prescribed by or under any Act of Parliament.\n\n(5) In the exercise of his or her functions the Auditor-General shall not be subject to the direction or control of any other person or authority.\n\n# **CHAPTER IX Miscellaneous (ss 125-127)**\n\n## **125. 
Resignations**\n\n(1) Any person who is appointed or elected to any office established by this Constitution may resign from that office by writing under his or her hand addressed to the person or authority by whom he or she was appointed or elected:\n\nProvided that in the case of a person who holds office as President his or her resignation from that office shall be addressed to the Chief Justice, in the case of a person who holds office as Speaker or Deputy Speaker of the National Assembly his or her resignation from that office shall be addressed to the Assembly, in the case of an", - "page_start": 52, - "page_end": 52, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "## **ANNUAL REPORT ON FORM 10-K**\n\n## **HORMEL FOODS CORPORATION**\n\n**OCTOBER 25, 2003**\n\n## **FORM 10-K**\n\n**ANNUAL REPORT PURSUANT TO SECTION 13 OR 15 (d) OF THE SECURITIES EXCHANGE ACT OF 1934**\n\n## **HORMEL FOODS CORPORATION**\n\n(Exact name of registrant as specified in its charter)\n\n**DELAWARE 41-0319970**\n\n(State or other jurisdiction of incorporation or organization)\n\n(I.R.S. Employer Identification No.)\n\n**1 HORMEL PLACE AUSTIN, MINNESOTA 55912-3680** (Address of principal executive offices) (Zip Code)\n\nRegistrant's telephone number, including area code **(507) 437-5611**\n\nSecurities registered pursuant to Section 12 (b) of the Act:\n\n**COMMON STOCK, PAR VALUE $.0586 PER SHARE**\n\nTitle of Each Class\n\n**NEW YORK STOCK EXCHANGE** Name of Each Exchange On Which Registered\n\nIndicate by check mark whether the registrant (1) has filed all reports required to be filed by Section 13 or 15(d) of the Securities Exchange Act of 1934 during the preceding 12 months, and (2) has been subject to such filing requirements for the past 90 days. 
Yes ☑ No ☐\n\nSecurities registered pursuant to Section 12 (g) of the Act:\n\nIndicate by check mark if disclosure of delinquent filers pursuant to Item 405 of Regulation S-K is not contained herein, and will not be contained, to the best of registrant's knowledge in definitive proxy or information statements incorporated by reference in Part III of this Form 10-K or any amendments to this Form 10-K. ☐\n\nIndicate by check mark whether the registrant is an accelerated filer (as defined in Rule 12b-2 of the Act). Yes ☑ No ☐\n\nThe aggregate market value of the voting stock held by non-affiliates of the registrant as of April 26, 2003 (the last business day of the registrant's most recently completed second fiscal quarter), was $1,592,020,962 based on the closing price of $21.74 per share on that date.\n\nAs of December 1, 2003, the number of shares outstanding of each of the Corporation's classes of common stock was as follows:\n\nCommon Stock, $.0586 Par Value—138,672,803 shares\n\nCommon Stock Non-Voting, $.01 Par Value—0 shares\n\n## **DOCUMENTS INCORPORATED BY REFERENCE**\n\nPortions of the Annual Stockholders' Report for the year ended October 25, 2003, are incorporated by reference into Part I and Part II Items 5-8, and included as exhibit 13.1 filed herewith.\n\n**HORMEL FOODS CORPORATION**\n\n**TABLE OF CONTENTS**", "page_start": 1, "page_end": 1, "source_file": "NYSE_HRL_2004.pdf" }, { "text": "she has attained the age of 70 years or such other age as may be prescribed for the purposes of section 101 of this Constitution;\n\n- (ii) a person appointed under this subsection, who is not a judge of the Court of Appeal, may, notwithstanding the assumption or resumption of the functions of the office of President of the Court of Appeal by the holder of that office, continue to act as a judge of the Court of Appeal for so long thereafter and to such extent as may be necessary to enable him or her to deliver judgment or to do any other thing in relation 
to proceedings that were commenced before him or her previously thereto.\n(6) If the office of a Justice of Appeal is vacant or if any Justice of Appeal is appointed to act as Chief Justice or President of the Court of Appeal or is for any reason unable to perform the functions of his or her office, the President, acting in accordance with the advice of the Judicial Service Commission, may appoint a person qualified for appointment as a Justice of Appeal to act as a Justice of Appeal:\n\nProvided that a person may be so appointed notwithstanding that he or she has attained the age of 70 years or such other age as may be prescribed for the purposes of section 101 of this Constitution.\n\n(7) Any person appointed under subsection (6) of this section to act as a Justice of Appeal, shall subject to the provisions of section 101(4) and (5) of this Constitution, continue to act for the period of his or her appointment or, if no such period is specified, until his or her appointment is revoked by the President, acting in accordance with the advice of the Judicial Service Commission:\n\nProvided that the President, acting in accordance with the advice of the Judicial Service Commission, may permit a person whose appointment to act as a Justice of Appeal has expired or been revoked to continue to act as such a judge for such period as may be necessary to enable him or her to deliver judgment or to do any other thing in relation to proceedings that were commenced before him or her previously thereto.\n\n# **101. 
Tenure of office of judges of Court of Appeal**\n\n(1) Subject to the provisions of this section, a person holding the office of a judge of the Court of Appeal shall vacate that office on attaining the age of 70 years or such other age as may be prescribed by Parliament:\n\nProvided that-\n\n- (i) the President, acting in accordance with the advice of the Judicial Service Commission, may permit a judge who has attained that age to continue in office for such period as may be necessary to enable him or her to deliver judgment or to do any other thing in relation to proceedings that were commenced before him or her before he or she attained that age;\n- (ii) a person may be appointed as President of the Court of Appeal or as a Justice of Appeal for a fixed period of three years notwithstanding that he or she has attained the age referred to in this subsection or that he or she will before the expiry of his or her appointment have attained that age; and\n- (iii) the appointment as President of the Court of Appeal or as Justice of Appeal serving for a fixed period under paragraph (ii) above shall not affect the date at which he or she is due to retire.\n\n(2) A judge of the Court of Appeal may be removed from office only for inability to perform the functions of his or her office (whether arising from infirmity of body or mind or from any other cause) or for misbehaviour, and shall not be so removed except in accordance with the provisions of this section.\n\n(3) If the President considers that the question of removing a judge of the Court of Appeal under this section ought to be investigated then-\n\n- (a) he or she shall appoint a tribunal which shall consist of a Chairman and not less", - "page_start": 43, - "page_end": 43, - "source_file": "Botswana-constitution.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2538.pdf", - "query": "What metrics are good indicators of the coverage of gas molecules on carbon nanotubes ?", - "target_page": 1, - 
"target_passage": "the bind- ing energy and scattering resistance of the molecules", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## arXiv:1001.2538v1 [cond-mat.mes-hall] 14 Jan 2010\n\n## Computational Design of Chemical Nanosensors: Metal Doped Carbon Nanotubes\n\nJ. M. Garc´ıa-Lastra1,2 , ∗ D. J. Mowbray1,2, K. S. Thygesen2 , A. Rubio1,3, and K. W. Jacobsen2\n\n*1Nano-Bio Spectroscopy group and ETSF Scientific Development Centre,*\n\n*Centro de F´ısica de Materiales CSIC-UPV/EHU- MPC and DIPC, Av. Tolosa 72, E-20018 San Sebastian, Spain ´*\n\n*2Center for Atomic-scale Materials Design, Department of Physics,*\n\n*Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark 3Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin, Germany*\n\nWe use computational screening to systematically investigate the use of transition metal doped carbon nanotubes for chemical gas sensing. For a set of relevant target molecules (CO, NH3, H2S) and the main components of air (N2, O2, H2O), we calculate the binding energy and change in conductance upon adsorption on a metal atom occupying a vacancy of a (6,6) carbon nanotube. Based on these descriptors, we identify the most promising dopant candidates for detection of a given target molecule. From the fractional coverage of the metal sites in thermal equilibrium with air, we estimate the change in the nanotube resistance per doping site as a function of the target molecule concentration assuming charge transport in the diffusive regime. Our analysis points to Ni-doped nanotubes as candidates for CO sensors working under typical atmospheric conditions.\n\nPACS numbers: 73.63.–b, 68.43.–h, 73.50.Lw\n\nThe ability to detect small concentrations of specific chemical species is fundamental for a variety of industrial and scientific processes as well as for medical applications and environmental monitoring [1]. 
In general, nanostructured materials should be well suited for sensor applications because of their large surface to volume ratio which makes them sensitive to molecular adsorption. Specifically, carbon nanotubes (CNT) [2] have been shown to work remarkably well as detectors of small gas molecules. This has been demonstrated both for individual CNTs [3–8] as well as for CNT networks [9, 10].\n\nPristine CNTs are known to be chemically inert – a property closely related to their high stability. As a consequence, only radicals bind strong enough to the CNT to notably affect its electrical properties [2, 5, 11–13]. To make CNTs attractive for sensor applications thus requires some kind of functionalization, e.g. through doping or decoration of the CNT sidewall [13–21]. Ideally, this type of functionalization could be used to control not only the reactivity of the CNT but also the selectivity towards specific chemical species.\n\nIn this work we consider the possibility of using CNTs doped by 3d transition metal atoms for chemical gas sensing. We use computational screening to systematically identify the most promising dopant candidates for detection of three different target molecules (CO, NH3, H2S) under typical atmospheric conditions. The screening procedure is based on the calculation of two microscopic descriptors: the binding energy and scattering resistance of the molecules when adsorbed on a doped CNT. These two quantities give a good indication of the gas coverage and impact on the resistance. For the most promising candidates we then employ a simple thermodynamic model of the CNT sensor. In this model, the binding energies are used to obtain the fractional coverage of the metallic sites as a function of the target molecule concentration under ambient conditions. 
Under the assumption of transport in the diffusive rather than localization regime, the change in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.\n\nWe find that oxidation of the active metal site passivates the sensor in the case of doping by Ti, V, Cr, and Mn under standard conditions (room temperature and 1 bar of pressure). Among the remaining metals, we identify Ni as the most promising candidate for CO detection. For this system the change in resistance per active site is generally significant (>1 Ω) for small changes in CO concentration in the relevant range of around 0.1–10 ppm. Our approach is quite general and is directly applicable to other nanostructures than CNTs, other functionalizations than metal doping, and other backgrounds than atmospheric air.\n\nAll total energy calculations and structure optimizations have been performed with the real-space density functional theory (DFT) code GPAW [22], which is based on the projector augmented wave method. We use a grid spacing of 0.2 Å for representing the density and wave functions and the PBE exchange correlation functional [23]. Transport calculations for the optimized structures have been performed using the nonequilibrium Green's function method [24] with an electronic Hamiltonian obtained from the SIESTA code [25] in a double zeta polarized (DZP) basis set. Spin polarization has been taken into account in all calculations.\n\nMetallic doping of a (6,6) CNT has been modeled in a supercell containing six repeated minimal unit cells along the CNT axis (dimensions: 15 Å × 15 Å × 14.622 Å). For this size of supercell a Γ-point sampling of the Brillouin zone was found to be sufficient. 
The formation energy for creating a vacancy (VC) occupied by a transition metal atom (M) was calculated using the relation\n\nEform[M@VC] = E[M@VC] + nE[C] − E[M@NT] (1)\n\nwhere E[M@VC] is the total energy of a transition metal atom occupying a vacancy in the nanotube, n is the number of carbon atoms removed to form the vacancy, E[C] is the energy per carbon atom in a pristine nanotube, and E[M@NT]\n\nDpto. Física de Materiales, Universidad del País Vasco,", "page_start": 0, "page_end": 0, "source_file": "1001.2538.pdf" }, { "text": "all N impurities. At this point it suffices to see that the conservative estimates obtained from Eq. (7) predict measurable signals in response to small changes in concentration of the target molecules.\n\nTo our knowledge, controlled doping of CNTs with transition metal atoms has so far not been achieved. It has, however, been found that metal atoms incorporated into the CNT lattice during catalytic growth are afterwards very difficult to remove [30]. Furthermore, it has been shown that CNT vacancies, which are needed for the metallic doping, may be formed in a controlled way by irradiation by Ar ions [31]. This suggests that metallic doping of CNTs should be possible.\n\nIn summary, we have presented a general model of nanostructured chemical sensors which takes the adsorption energies of the relevant chemical species and their individual scattering resistances as the only input. On the basis of this model we have performed a computational screening of transition metal doped CNTs, and found that Ni-doped CNTs are promising candidates for detecting CO in a background of air. 
The model may be applied straightforwardly to other nanostructures than CNTs, other functionalizations than metal doping and other gas compositions than air.\n\nThe authors acknowledge financial support from Spanish MEC (FIS2007-65702-C02-01), \"Grupos Consolidados UPV/EHU del Gobierno Vasco\" (IT-319-07), e-I3 ETSF project (Contract Number 211956), \"Red Española de Supercomputación\", NABIIT and the Danish Center for Scientific Computing. The Center for Atomic-scale Materials Design (CAMD) is sponsored by the Lundbeck Foundation. JMG-L acknowledges funding from Spanish MICINN through Juan de la Cierva and José Castillejo programs.\n\n∗ Electronic address: juanmaria.garcia@ehu.es\n\n- [1] *Gas Sensing Materials, MRS Bull.*, vol. 24 (1999).\n- [2] J.-C. Charlier, X. Blase, and S. Roche, \"Electronic and transport properties of nanotubes\", Rev. Mod. Phys. 79(2), 677 (May 2007), doi:10.1103/RevModPhys.79.677.\n- [3] J. Kong, N. R. Franklin, C. Zhou, M. G. Chapline, S. Peng, K. Cho, and H. Dai, \"Nanotube molecular wires as chemical sensors\", Science 287(5453), 622 (Jan. 2000), doi:10.1126/science.287.5453.622.\n- [4] P. G. Collins, K. Bradley, M. Ishigami, and A. Zettl, \"Extreme oxygen sensitivity of electronic properties of carbon nanotubes\", Science 287(5459), 1801 (Mar. 2000), doi:10.1126/science.287.5459.1801.\n- [5] C. Hierold, *Carbon Nanotube Devices: Properties, Modeling, Integration and Applications* (Wiley-VCH, Weinheim, 2008).\n- [6] F. Villalpando-Páez, A. H. Romero, E. Muñoz-Sandoval, L. M. Martínez, H. Terrones, and M. Terrones, \"Fabrication of vapor and gas sensors using films of aligned CNx nanotubes\", Chem. Phys. Lett. 386(1-3), 137 (Mar. 2004), doi:10.1016/j.cplett.2004.01.052.\n- [7] A. R. Rocha, M. Rossi, A. Fazzio, and A. J. R. da Silva, \"Designing real nanotube-based gas sensors\", Phys. Rev. Lett. 100(17), 176803 (May 2008), doi:10.1103/PhysRevLett.100.176803.\n- [8] S. Brahim, S. Colbern, R. Gump, and L. 
Grigorian, \"Tailoring gas sensing properties of carbon nanotubes\", J. Appl. Phys. 104(2), 024502 (Jul. 2008), doi:10.1063/1.2956395.\n- [9] C. Morgan, Z. Alemipour, and M. Baxendale, \"Variable range hopping in oxygen-exposed single-wall carbon nanotube networks\", Phys. Stat. Solidi A 205(6), 1394 (May 2008), doi:10.1002/pssa.200778113.\n- [10] D. J. Mowbray, C. Morgan, and K. S. Thygesen, \"Influence of O2 and N2 on the conductivity of carbon nanotube networks\", Phys. Rev. B 79(19), 195431 (May 2009), doi:10.1103/PhysRevB.79.195431.\n- [11] L. Valentini, F. Mercuri, I. Armentano, C. Cantalini, S. Picozzi, L. Lozzi, S. Santucci, A. Sgamellotti, and J. M. Kenny, \"Role of defects on the gas sensing properties of carbon nanotubes thin films: experiment and theory\", Chem. Phys. Lett. 387(4-6), 356 (Apr. 2004), doi:10.1016/j.cplett.2004.02.038.\n- [12] Z. Zanolli and J.-C. Charlier, \"Defective carbon nanotubes for single-molecule sensing\", Phys. Rev. B 80(15), 155447 (Oct. 2009), doi:10.1103/PhysRevB.80.155447.\n- [13] J. M. Garc´ıa-Lastra, K. S. Thygesen, M. Strange, and Angel Rubio, \"Conductance of sidewall-functionalized ´ carbon nanotubes: Universal dependence on adsorption sites\", Phys. Rev. Lett. 101(23), 236806 (Dec. 2008), doi:10.1103/PhysRevLett.101.236806.\n- [14] S. B. Fagan, R. Mota, A. J. R. da Silva, and A. Fazzio, \"*Ab initio* study of an iron atom interacting with single-wall carbon nanotubes\", Phys. Rev. B 67(20), 205414 (May 2003), doi:10.1103/PhysRevB.67.205414.\n- [15] Y. Yagi, T. M. Briere, M. H. F. Sluiter, V. Kumar, A. A. Farajian, and Y. Kawazoe, \"Stable geometries and magnetic properties of single-walled carbon nanotubes doped with 3d transition metals: A first-principles study\", Phys. Rev. B 69(7), 075414 (Feb 2004), doi:10.1103/PhysRevB.69.075414.\n- [16] S. H. Yang, W. H. Shin, J. W. Lee, S. Y. Kim, S. I. Woo, and J. K. Kang, \"Interaction of a transition metal atom with intrinsic defects in single-walled carbon nanotubes\", J. 
Phys. Chem. B 110(28), 13941 (Jun. 2006), doi:10.1021/jp061895q.\n- [17] K. T. Chan, J. B. Neaton, and M. L. Cohen, \"First-principles study of metal adatom adsorption on graphene\", Phys. Rev. B 77, 235430 (Jun. 2008), doi:10.1103/PhysRevB.77.235430.\n- [18] C. S. Yeung, L. V. Liu, and Y. A. Wang, \"Adsorption of small gas molecules onto Pt-doped single-walled carbon nanotubes\", J. Phys. Chem. C 112(19), 7401 (Apr. 2008), doi:10.1021/jp0753981.\n- [19] T. Vo, Y.-D. Wu, R. Car, and M. Robert, \"Structures, interactions, and ferromagnetism of Fe-carbon nanotube systems\", J. Phys. Chem. C 112(22), 400 (May 2008), doi:10.1021/jp0761968.\n- [20] J. A. Fürst, M. Brandbyge, A.-P. Jauho, and K. Stokbro, \"*Ab initio* study of spin-dependent transport in carbon nanotubes with iron and vanadium adatoms\", Phys. Rev. B 78(19), 195405 (Nov. 2008), doi:10.1103/PhysRevB.78.195405.\n- [21] A. V. Krasheninnikov, P. O. Lehtinen, A. S. Foster, P. Pyykkö, and R. M. Nieminen, \"Embedding transition-metal atoms in graphene: Structure, bonding, and magnetism\", Phys. Rev. Lett. 102(12), 126807 (Mar. 2009), doi:10.1103/PhysRevLett.102.126807.\n- [22] J. J. Mortensen, L. B. Hansen, and K. W. Jacobsen, \"Real-space grid implementation of the projector augmented wave method\", Phys. Rev. B 71(3), 035109 (Jan. 2005), doi:10.1103/PhysRevB.71.035109.\n- [23] J. P. Perdew, K. Burke, and M. Ernzerhof, \"Generalized gradient approximation made simple\", Phys. Rev. Lett. 77(18), 3865 (Oct. 1996), doi:10.1103/PhysRevLett.77.3865.", "page_start": 3, "page_end": 3, "source_file": "1001.2538.pdf" }, { "text": "### **Growing Demand for U.S. Natural Gas Will Drive Improved Prices in the Years Ahead**\n\nSeveral factors are emerging in the U.S. 
that will drive increased demand for natural gas, which in turn could improve out year natural gas prices:\n\n### **Growing momentum for CNG passenger and LNG long-haul truck vehicles**\n\nEnormous cost savings are available to consumers and businesses that chose to use natural gas as an alternative transportation fuel ($1.39 per gallon for CNG in Oklahoma, for example, compared to $3.75–$4.00 per gallon for gasoline and diesel).\n\n### **Growing industrial demand**\n\nWith recent low prices for domestic natural gas, U.S. industries that utilize natural gas as a feedstock in their manufacturing processes have a significant cost advantage compared with international peers whose feedstock is indexed either to oil or global natural gas prices.\n\n### **Continuing and accelerating shift from coal to natural gas for U.S. electrical power generation**\n\nTo clean our environment, dozens of aging coal-powered electricity plants will be retired in the next decade and replaced with the cleaner alternative of natural gas. A combination of shifting power sources and higher utilization within existing gas-fired power plants will likely increase natural gas demand by 10–15 bcf per day over the next decade.\n\n### **Conversion of U.S. LNG import facilities to LNG export facilities**\n\nWith increasing demand for natural gas around the world and the abundance of U.S. natural gas reserves, producers will be able to tap into higher-margin markets in Europe, South America and Asia once export capabilities are available potentially beginning in 2015.\n\n### **Construction of U.S. gas-to-liquids (GTL) plants**\n\nConverting natural gas to a room temperature liquid would allow U.S. natural gas producers to sell products based on world oil prices instead of domestic natural gas prices. Technological advancements continue to gain traction and may make GTL a realistic possibility by 2016.\n\n### **U.S. 
natural gas producers are rapidly moving to a more liquids-rich production base**\n\nDue to the premium margins realized in the U.S. when producing liquids as compared to natural gas, there is a meaningful shift of producers targeting liquids-rich drilling prospects. This shift will ultimately help bring U.S. natural gas markets back into balance by reducing the rigs and capital available for natural gas drilling.\n\n$2.25 billion in cash and drilling carries for its 25% stake in the Barnett, and we are extremely proud to have Total as one of our premier joint venture partners.\n\nHaynesville Shale — The Haynesville Shale in Northwest Louisiana and East Texas is the shale play of which we are most proud (to date) because it was discovered by Chesapeake's own geoscientists and engineers. We conducted our geoscientific investigation of the Haynesville in 2005–06 and tested our theories through drilling in 2007. In 2008 we formed an innovative joint venture agreement with our well-respected industry partner, Houston-based Plains Exploration & Production Company, to which we sold 20% of our Haynesville (and Bossier) assets for approximately $3.2 billion in cash and drilling carries.\n\nThe Haynesville Shale is now the nation's largest producing natural gas shale play, having just recently passed the Barnett Shale in production (in last year's letter, I incorrectly estimated it would take until 2014 for the Haynesville to reach this achievement, a testament to the play's enormous productive potential). Ultimate recoveries from the Haynesville could exceed 250 tcfe, likely making it one of the five largest natural gas fields in the world. Today, we are producing from more than 260 net wells in the Haynesville on our 530,000 net leasehold acres, are currently drilling with 35 rigs and estimate we could drill up to 6,300 additional net wells in the years ahead. 
Our gross operated production in the Haynesville recently set a record of\n\nnearly 1.6 bcfe per day.\n\nBossier Shale — This shale overlies about one-third of our Haynesville acreage and is the first of our two \"sleeper\" natural gas shale plays. The reason is that in Louisiana, leases often restrict the lessee (i.e., the producer) to only holding future drilling rights down through the deepest formation drilled. Because the Bossier lies above the Haynesville,\n\n*One producing and one on the way: the Texas Panhandle Granite Wash offers high volumes of natural gas accompanied by highly valued liquids production as well.*", "page_start": 9, "page_end": 9, "source_file": "NYSE_CHK_2010.pdf" }, { "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where \"tcot\", short for \"Top Conservatives on Twitter\", was the node ranked highest, and \"p2\", short for \"Progressives 2.0\", is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. Action toward the global climate issue was the third-largest cluster (16%), including both domestic efforts, such as \"us\", \"trump\", \"climatechangeisreal\", \"climateaction\", and \"epa\", and two international items, like \"china\" and \"india\". The fourth cluster (in blue) referred to emissions, including hashtags like \"co2\", \"green\", and \"carbon\". The smallest cluster (8%) was composed of \"snow\", \"winter\", \"heatwave\", and \"summer\", referring to the temperature abnormalities on the earth.\n\n#### *4.3. Temporal Analysis of the Associations in the Two Discourses*\n\nThe online presentations of the climate change and global warming discourses are dynamic. As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change discourse. 
By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. We found \"pollution\" and \"earth\" were unique to the keyword list of the global warming discourse, and \"economy\", \"water\", \"china\", \"coal\", \"solar\", \"sustainability\", and \"food\" only occurred on the critical list for the climate change discourse.\n\n**Table 2.** Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n| --- | --- | --- |\n| #climatechange | china, solar, water, food, economy, coal, sustainability | co2, news, carbon, green, climate, |\n| #globalwarming | pollution, earth | us, energy, science, environment |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. Vector graphics with the label of nodes are provided in the Supplementary Materials. Four themes were identified in each discourse according to the nodes' associations. To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.\n\nFigure 3 depicts the associations of hashtags in the climate change discourse for each year from 2009 to 2018. The scientific hashtags cluster (in green) was the most important theme in the climate change discourse, especially more recently. 
However, some scientific hashtags, such as \"ghg\" (greenhouse gas), \"co2\", and \"forests\", were not identified in the scientific cluster but in the global actions cluster (in yellow) because these hashtags were frequently used in the global action context and identified with a closer semantic association to global action by Gephi. In addition to these hashtags, the global action cluster included a series of international activities, such as \"ipcc\" (Intergovernmental Panel on Climate Change), \"unfccc\" (United Nations Framework Convention on Climate Change), and \"cop\" (Conferences of the Parties) for almost every year. The blue cluster includes to political hashtags, such as \"uniteblue\", \"sgp\", \"p2\", and \"tcot\". In 2017 and 2018, the associations with political hashtags disappeared among the top 50 hashtags. The small red cluster had a mixed theme, combining \"technology\", \"innovation\", \"education\", \"africa\", \"healthcare\", and \"politics\". The centrality sum of the nodes in the red cluster remained rather low throughout the 10-year period but obviously increased in the last two years of the period according to Figure 5a.\n\nFigure 4 describes the evolution of concepts' associations in the global warming discourse during the 10 years. The red cluster included concepts such as \"2012\", \"hot\", \"summer\", \"elnino\", and \"snow\", describing the weather abnormalities related to global warming. A notable finding is that before 2012, global warming's association with temperature abnormalities and extreme weather was not salient,", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed10.pdf" - }, - { - "text": "FIG. 3: Fractional coverage Θ in thermal equilibrium of Ni in a (a) monovacancy, (b) divacancy I, (c) divacancy II and (d) change in resistance ∆R per dopant site as a function of CO concentration in a background of air at room temperature and 1 bar of pressure. The reference concentration of CO is taken to be C0 =0.1 ppm. 
Note the change from linear to log scale on the y-axis at ∆R =10 Ω.\n\nFor a given background composition we may thus estimate the fractional coverages for each available adsorbate for a given type of doping. As an example, Fig. 3(a)-(c) shows the fractional coverage of a Ni atom occupying a monovacancy, divacancy I, and divacancy II, versus CO concentration in a background of air at room temperature and 1 bar of pressure. Due to the relatively small binding energy of N2 and H2O as compared to O2 and CO, all Ni sites will be either empty or occupied by O2 or CO. In particular, Ni in a monovacancy (top panel of Fig. 3) will be completely oxidized for all relevant CO concentrations. For the Ni occupied divacancy II structures we find the coverage of CO changes significantly around toxic concentrations (∼10 ppm).\n\nTo estimate the effect of adsorbates on the electrical conductance of doped CNTs, we first consider the change in conductance when a single molecule is adsorbed on a metal site of an otherwise pristine CNT. In Fig. 2(b) we show the calculated change in conductance relative to the metal site with no adsorbate. In contrast to the binding energies, there are no clear trends in the conductances. The sensitivity of the conductance is perhaps most clearly demonstrated by the absence of correlation between different types of vacancies, i.e. between the three panels in Fig. 2(b). Close to the Fermi level, the conductance of a perfect armchair CNT equals 2G0. The presence of the metal dopant leads to several dips in the transmission function known as Fano antiresonances [20]. The position and shape of these dips depend on the d-levels of the transition metal atom, the character of its bonding to the CNT, and is further affected by the presence of the adsorbate molecule. The coupling of all these factors is very complex and makes it difficult to estimate or rationalize the value of the conductance. 
For the spin polarized cases, we use the spin-averaged conductances, i.e. G = (G↑ + G↓)/2.\n\nNext, we estimate the resistance of a CNT containing several impurities (a specific metal dopant with different molecular adsorbates). Under the assumption that the electron phase-coherence length, lφ, is smaller than the average distance between the dopants, d, we may neglect quantum interference and obtain the total resistance by adding the scattering resistances due to each impurity separately. The scattering resistance due to a single impurity is given by\n\n$R_{s}(X)=1/G(X)-1/(2G_{0})$, (6)\n\nwhere G(X) is the Landauer conductance of the pristine CNT with a single metal dopant occupied by molecule X and 1/(2G0) is the contact resistance of a (6,6) CNT.\n\nWe may now obtain the total resistance per dopant site relative to the reference background signal as a function of the target molecule concentration\n\n∆R/N ≈ Σ_X Rs(X)(Θ[X, C] − Θ[X, C0]), (7)\n\nwhere N is the number of dopants, Θ[X, C] is the fractional coverage of species X at concentration C of the target and C0 is the reference concentration. Notice that the contact resistance drops out as we evaluate a change in resistance.\n\nIn Fig. 3(d) we show the change in resistance calculated from Eq. (7) as a function of CO concentration for Ni occupying the three types of vacancies. The background reference concentration of CO is taken to be C0 = 0.1 ppm. For the monovacancy there is very little change in resistivity. This is because most active sites are blocked by O2 at relevant CO concentrations, as shown in the upper panel of Fig. 3. For Ni in the divacancies there is, however, a change in resistance on the order of 1Ω per site. For concentrations above ∼1 ppm, the CO coverage of Ni in the divacancy II increases dramatically and this leads to a significant increase in resistance.\n\nWe now return to the discussion of the validity of Eq. (7). 
As mentioned, the series coupling of individual scatterers should be valid when lφ < d. However, even for lφ > d and assuming that the Anderson localization length, lloc, in the system exceeds lφ, Eq. (7) remains valid if one replaces the actual resistance R by the sample averaged resistance ⟨R⟩ [29]. At room temperature under ambient conditions, interactions with external degrees of freedom such as internal CNT phonons and vibrational modes of the adsorbed molecules would rapidly randomize the phase of the electrons. Therefore Eq. (7) should certainly be valid in the limit of low doping concentrations. On the other hand, the total number of dopants, N, should be large enough for the statistical treatment of the coverage to hold. Finally, we stress that Eq. (7) represents a conservative estimate of the change in resistance. In fact, in the regime where lφ > lloc, i.e. in the Anderson localization regime, the resistance would be highly sensitive to changes in the fractional coverage of active sites. 
Calculation of the actual resistance of the CNT in this regime would, however, involve a full transport calculation in the presence of", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2538.pdf" - }, - { - "text": "# **Glossary of terms and abbreviations**\n\nAD – Activity Data AWMS – Animal Waste Management System BOD – Biochemical Oxygen Demand C – Carbon C2F6 – Hexafluoroethane CF4 – Tetrafluoromethane CH4 – Methane CO – Carbon Monoxide CO2 – Carbon dioxide COD – Chemical Oxygen Demand dm – dry matter Gg – Gigagram ha – hectare HFC – Hydrofluorocarbon hl – hectolitre k – kilo kg – kilogram kha – kilo hectare kt – kilotonne LTO – Landing/Take Off LUCF – Land-Use Change and Forestry LULUCF – Land Use, Land-Use Change and Forestry m3 – cubic meter MCF – Methane Correction Factor Mg – Megagram Mha – Megahectare MSW – Municipal Solid Waste N – Nitrogen N2O – Nitrous Oxide NFP – National Focal Point NH3 – Ammonia NMVOC – Non-Methane Volatile Organic Compound NOX – Nitrogen Dioxide PFC – Perfluorocarbon RA - Reference Approach SE – Sectoral Expert SF6 – Sulphur Hexafluoride SO2 – Sulphur Dioxide SWDS – Solid Waste Disposal Site t – tonne Tg – Teragram TJ – Terajoules XML – Extensible Markup Language year t – inventory year", - "page_start": 43, - "page_end": 43, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# Managing Options\n\n# UNLOCKING THE VALUE OF STRATEGIC ASSETS\n\n**'Our objective is to derive value from undeveloped assets which have been outside of Santos' base business.'**\n\n**BRUCE WOOD** Vice President Strategic Projects Santos' Strategic Projects team focuses on assets that have proven difficult to commercialise or that need to be considered in a regional context rather than on an individual basis.\n\nThe other key activity for this team has been to lead Santos' continuous improvement focus.\n\n#### **UNITED STATES GAS**\n\nThe US gas business was a major focus in 2004 for a number of 
reasons, not the least of which are the higher gas prices in the US compared with the domestic Australian market, and the ability to rapidly commercialise new discoveries.\n\nAn ongoing development and delineation program was carried out during the year, yielding better than planned production. The exploration initiative also continued to seek higher risk but more material prospects, aimed at enhancing the move into the shallow water area of the Gulf of Mexico. Exploration results in this area during 2005 will shape Santos' future strategy in the US.\n\n#### **TIGHT GAS**\n\nHydrocarbons contained in traps with poor permeability are known as 'tight gas'. Large tight gas resources are known to exist in the Cooper Basin. Under current circumstances, this gas cannot be economically developed but, with the combination of improved production techniques and better commercial terms, could prove attractive.\n\nSantos assessed the resources and potential technologies that could be applied to unlock these resources during 2004 and is now working up a range of possible evaluation projects to be undertaken in 2005.\n\n#### **NORTHERN AUSTRALIA GAS**\n\nSantos has a significant existing gas resource base and some promising exploration acreage in the waters offshore Darwin, where it intends to drill a gas exploration well later this year.\n\nThe Company currently operates the Mereenie gas field in the Amadeus Basin in central Australia, which supplies gas to Darwin. Santos' first offshore gas production in northern Australia begins in 2006, sending Bayu-Undan gas to Darwin for conversion to LNG. Santos plans to build upon its growing position in the region to target further development which could ensure long-term gas supplies for the current market, or an expanded Northern Territory domestic market, or for export.\n\n#### **PAPUA NEW GUINEA GAS**\n\nSantos is in active discussions with the PNG Gas Project participants to potentially re-enter the PNG Gas Project. 
Santos has a significant interest in a large part of the liquids-rich Hides gas field which is integral to the development of the Project.\n\n## **2004 CONTINGENT RESOURCES** (TOTAL 1,443 mmboe)\n\n- Northern Australia 709 mmboe\n- Western Australia 71 mmboe\n- Central Australia 240 mmboe\n- Southern Australia 32 mmboe\n- Papua New Guinea 391 mmboe", - "page_start": 23, - "page_end": 23, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "FIG. 1: Structural schematics and formation energy for a 3d transition metal occupied monovacancy (black), divacancy I (gray), or divacancy II (white) in a (6,6) carbon nanotube. Formation energies of the empty vacancies are indicated by dashed lines.\n\nis the total energy of the pristine nanotube with a physisorbed transition metal atom. We have considered the monovacancy and two divacancies shown in Fig. 1. The energy required to form an empty vacancy is obtained from\n\n$$E_{\\rm form}[{\\rm VC}]=E[{\\rm VC}]+nE[{\\rm C}]-E[{\\rm NT}],\\tag{2}$$\n\nwhere E[VC] is the total energy of the nanotube with a vacancy of n atoms.\n\nThe calculated formation energies for the 3d transition metals are shown in Fig. 1. From the horizontal lines we see that both divacancies are more stable than the monovacancy. This may be attributed to the presence of a two-fold coordinated C atom in the monovacancy, while all C atoms remain three-fold coordinated in the divacancies. When a transition metal atom occupies a vacancy, the strongest bonding to the C atoms is through its d orbitals [26]. For this reason, Cu and Zn, which both have filled d-bands, are rather unstable in the CNT. For the remaining metals, adsorption in the monovacancies leads to quite stable structures. This is because the three-fold coordination of the C atoms and the CNT's hexagonal structure are recovered when the metal atom is inserted. 
On the other hand, metal adsorption in divacancies is slightly less stable because of the resulting pentagon defects, see upper panel in Fig. 1. A similar behaviour has been reported by Krasheninnikov *et al.* for transition metal atoms in graphene [21].\n\nThe adsorption energies for N2, O2, H2O, CO, NH3, and H2S on the metallic site of the doped (6,6) CNTs are shown in Fig. 2(a). The adsorption energy of a molecule X is defined by\n\n$$E_{\\rm ads}[X@{\\rm M}@{\\rm VC}]=E[X@{\\rm M}@{\\rm VC}]-E[X]-E[{\\rm M}@{\\rm VC}],\\tag{3}$$\n\nFIG. 2: Calculated (a) adsorption energy Eads in eV and (b) change in conductance ∆G in units of G0 = 2e2/h for N2, O2, H2O, CO, NH3, and H2S on 3d transition metals occupying a monovacancy (top), divacancy I (middle), and divacancy II (bottom) in a (6,6) carbon nanotube.\n\nwhere E[X@M@VC] is the total energy of molecule X on a transition metal atom occupying a vacancy, and E[X] is the gas phase energy of the molecule.\n\nFrom the adsorption energies plotted in Fig. 2(a), we see that the earlier transition metals tend to bind the adsorbates stronger than the late transition metals. The latest metals in the series (Cu and Zn) bind adsorbates rather weakly in the divacancy structures. We also note that O2 binds significantly stronger than any of the three target molecules on Ti, V, Cr, and Mn (except for Cr in divacancy I where H2S is found to dissociate). Active sites containing these metals are therefore expected to be completely passivated if oxygen is present in the background. Further, we find H2O is rather weakly bound to most of the active sites. 
This ensures that these types of sensors are robust against changes in humidity.\n\nIn thermodynamic equilibrium [27], the coverage of the active sites follows from\n\n$$\\Theta[X]=\\frac{K[X]C[X]}{1+\\sum_{Y}K[Y]C[Y]},\\tag{4}$$\n\nwhere K = k+/k− is the ratio of forward and backward rate constants for the adsorption reaction,\n\n$$K[X]=\\exp\\left[-\\frac{E_{\\rm ads}[X]+TS[X]}{k_{B}T}\\right].\\tag{5}$$\n\nIn these expressions C[X] is the concentration of species X, S[X] is its gas phase entropy and T is the temperature. Experimental values for the gas phase entropies have been taken from Ref. [28].", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2538.pdf" - }, - { - "text": "issues and re-constructing them differently. By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as \"earth\" and \"pollution\", whereas \"climate change\" was more associated to specific issues like \"solar\", \"coal\", \"china\", and \"food\".\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. 
This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, \"snow\", \"summer\", \"winter\", or \"heatwave\" in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' differences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n#### 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. 
The second largest cluster of global warming was politics-based, where hashtag \"tcot\", favored by right-leaning users and \"p2\", favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n#### 5.1.3. Discourse Structure\n\nIn the discourse surrounding #climatechange, \"environment\", \"energy\", and \"global action\" represented the themes of the three largest clusters in the network. However, three popularly recurring hashtags, \"#environment\", \"#energy\", and \"#climateaction\", did not belong to any of the three clusters above, but formed another small tight cluster together, sitting in the most central part of the semantic network, as shown in Figure 2b. As each of the three hashtags can almost represent one sub-theme of the climate change topic and these three hashtags were tightly bundled might indicate an attempt by #climatechange users to address all three communities together [91], consolidating climate change as a topic rather than a loosely organized topic. 
Previous communication studies also confirmed hashtags' function of serving as a hybrid forum [68], where heterogeneous individuals coordinate to solve", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "# DEAR FELLOW SHAREHOLDERS »\n\n2010 was a very important year of transition and achievement for Chesapeake, a year in which we initiated three very important strategic shifts: from asset gathering to asset harvesting, from focusing exclusively on natural gas to a balanced focus on natural gas and liquids and from having a leveraged balance sheet to one worthy of an investment grade rating.\n\n*Home to three distinct forms of hydrocarbons: dry natural gas, natural gas liquids and oil, the Eagle Ford Shale in South Texas epitomizes Chesapeake's shift to a balanced focus on natural gas and liquids.*\n\n2010 also marked a truly transformative year for our industry. We and a handful of our peers enhanced our capabilities to find and produce significant new resources of oil and natural gas liquids (collectively, \"liquids\") in unconventional formations. Chesapeake and these other companies combined creativity, innovation and technology to reinvent the way that our industry explores for and produces natural gas and liquids.\n\nFurthermore, 2010 was the year when global energy companies more fully recognized the importance of these developments and the tremendous opportunities that have emerged in the U.S. Through a wide variety of transactions, including several led by Chesapeake, the global energy industry made it clear that the assets owned by Chesapeake and some of its peers are the most attractive in the world. This realization has already increased the value of highquality unconventional assets in the U.S. and, in time, should lead to higher\n\nstock prices for the leading U.S. onshore E&P companies, especially Chesapeake. 
Simply put, the global energy industry is beating a path to our door, and we are welcoming it with open arms.\n\nBefore we move ahead, I want to emphasize that even though 2010 was a year of transition and achievement, our stock price was essentially unchanged. Nevertheless, it was still a very strong year for the company operationally and financially. Here are the year's highlights for your review:\n\n- >> Average daily natural gas and oil production increased 14% from 2.5 billion cubic feet of natural gas equivalent (bcfe) in 2009 to 2.8 bcfe in 2010;\n- >> Proved natural gas and oil reserves increased 20% in 2010, from 14.3 trillion cubic feet of natural gas equivalent (tcfe) to 17.1 tcfe;\n- >> Reserve replacement for 2010 reached 375% at a drilling, completion and net acquisition cost of only $0.76 per thousand cubic feet of natural gas equivalent (mcfe)(1);\n- >> Realized hedging gains were $2.1 billion;\n- >> Revenues increased 22% to $9.4 billion;\n- >> Adjusted ebitda(2) increased 15% to $5.1 billion;\n- >> Operating cash flow(2) increased 5% to $4.5 billion; and\n- >> Adjusted earnings per fully diluted share(2) increased 16% to $2.95.", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_CHK_2010.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2648.pdf", - "query": "What is the source of inaccuracy of the MSA3 model at high ionic concentrations ?", - "target_page": 3, - "target_passage": "At high concentration (about 1 mol l−1), the MSA3 overestimates the free energy", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "FIG. 5: (Color online) RDF obtained from MC simulations (diamond), BIMSA3 (solid line), and MSA-fit (dot dashed) at two concentrations.\n\nThe RDF obtained within BIMSA3 are compared with the MC and MSA-fit results in Fig. 5. 
Our BIMSA3 model accounts for the strong molecular peak of the CIP and provides the correct distances of minimal approach; whereas the naive MSA-fit procedure ignores the former and gives poor estimates for the latter. At larger separations, the BIMSA3 results do not reproduce the oscillations observed in the MC simulations, but the corresponding energy oscillations in the effective potentials are less than kBT . In addition, the perturbation term of the BIMSA3 appears to be negligible compared to the reference term for concentrations less than 1 mol l−1 . The perturbation can then be omitted to obtain a fully analytical theory, determined by the hard sphere diameters and the pair fraction given by LPT; with the free energy and the RDF given in terms of the BIMSA and MSA solutions, as described above. While the procedure we have followed uses two different approximations for the reference and perturbation terms (MSA vs BIMSA), these are known to be accurate for the systems under consideration and do not appear to be inconsistent with each other.\n\nTo conclude, we have combined MD simulations with LPT to construct simple models of electrolyte solutions which account for the molecular nature of the solvent. The final result is fully analytical and it yields the thermodynamic and structural properties of the solution, in agreement with the original molecular description. The methodology can in principle be adapted to any molecular description of the system (MD simulations involving interaction potentials accounting for polarization effects or Car-Parrinello MD simulations for example) as long as the ion-ion RDF are known. It can also be generalized to study interfaces. The method appears to be a promising approach toward the description of the specific effects of ions, especially for complex systems whose modeling requires an analytic solution.\n\nThe authors are particularly grateful to Werner Kunz for fruitful discussions.\n\n- [1] W. G. McMillan and J. E. 
Mayer, J. Chem. Phys. 13, 276 (1945).\n- [2] J. M. G. Barthel, H. Krienke, and W. Kunz, Physical Chemistry of Electrolyte Solutions (Springer, 1998).\n- [3] L. Blum, in Theoretical Chemistry: Advances and Perspectives, edited by H. Eyring and D. Henderson (Academic Press, 1980), vol. 5, pp. 1–66.\n- [4] L. Blum and O. Bernard, J. Stat. Phys. 79, 569 (1995).\n- [5] J.-F. Dufrêche et al., J. Phys. Chem. B 109, 9873 (2005).\n- [6] P. Jungwirth and D. J. Tobias, Chem. Rev. 106, 1259 (2006).\n- [7] W. Kunz, P. Lo Nostro, and B. W. Ninham, Curr. Opin. Colloid Interface Sci. 9, 1 (2004).\n- [8] B. Hess, C. Holm, and N. van der Vegt, Phys. Rev. Lett. 96, 147801 (2006).\n- [9] I. Kalcher and J. Dzubiella, J. Chem. Phys. 130, 134507 (2009).\n- [10] S. Gavryushov and P. Linse, J. Phys. Chem. B 110, 10878 (2006).\n- [11] A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52, 3730 (1995).\n- [12] D. Horinek and R. R. Netz, Phys. Rev. Lett. 99, 226104 (2007).\n- [13] M. Lund, P. Jungwirth, and C. E. Woodward, Phys. Rev. Lett. 100, 258105 (2008).\n- [14] S. Van Damme et al., J. Phys. Chem. B 113, 3105 (2009).\n- [15] J.-P. Hansen and I. R. McDonald, Theory of Simple Liquids (Academic Press, 1986).\n- [16] J. C. Rasaiah and R. M. Lynden-Bell, Philos. Trans. R. Soc. London, Ser. A 359, 1545 (2001).\n- [17] A. P. Lyubartsev and S. Marcelja, Phys. Rev. E 65, 041202 (2002).\n- [18] V. M. M. Lobo, Electrolyte Solutions, Data on Thermodynamic and Transport Properties, vol. I-II (Coimbra Editora, Lisbon, Portugal, 1984).\n- [19] G. Ciccotti, P. Turq, and F. Lantelme, Chem. Phys. 88, 333 (1984).\n- [20] J.-F. Dufrêche, T. O. White, and J.-P. Hansen, Mol. Phys. 
101, 1741 (2003).\n- [21] The average contact distance between a symmetric dumbbell and an infinite plane at β = 0.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2648.pdf" - }, - { - "text": "## Models of electrolyte solutions from molecular descriptions: The example of NaCl solutions\n\nJohn Jairo Molina1,2,3 , ∗ Jean-François Dufrêche1,2,3 , † Mathieu Salanne1,2 , Olivier Bernard1,2 , Marie Jardat1,2 , and Pierre Turq1,2\n\n1 UPMC-Université Paris 06, UMR 7195, PECSA, F-75005 Paris, France\n\nUMR 5257 CEA–CNRS–Université Montpellier 2, Site de Marcoule,\n\nBâtiment 426, BP 17171, 30207 Bagnols-sur-Cèze Cedex, France\n\nWe present a method to derive implicit solvent models of electrolyte solutions from all-atom descriptions; providing analytical expressions of the thermodynamic and structural properties of the ions consistent with the underlying explicit solvent representation. Effective potentials between ions in solution are calculated to perform perturbation theory calculations, in order to derive the best possible description in terms of charged hard spheres. Applying this method to NaCl solutions yields excellent agreement with the all-atom model, provided ion association is taken into account.\n\nSince the pioneering works of Debye, Hückel, and Onsager, electrolyte solutions have been commonly described by continuous solvent models, for which the McMillan-Mayer theory [1] provides a rigorous statistical-mechanical foundation. Within that level of description, simple phenomenological models such as the primitive model (PM), for which the ions are assimilated to charged hard spheres [2], can lead to explicit formulas for the thermodynamic and structural properties (e.g., with the help of the mean spherical approximation (MSA) [3] or the binding MSA (BIMSA) [4]). These models are the most practical to use [5], since they allow for a direct link between the experimental measurements and the microscopic parameters of the system. 
Nevertheless, they ignore the molecular structure of the solvent. Consequently, they cannot properly account for the complex specific effects of the ions, which appear in numerous biological, chemical, and physical interfacial phenomena [6, 7], without further developments.\n\nAn alternative procedure consists in carrying out molecular simulations, where both the solvent and solute are treated explicitly. After a rigorous averaging over the solvent configurations, a coarse-grained description of the ions, which still includes the effect of the solvent structure, can be obtained [8–11]. However, this set of methods is purely numeric; they do not provide any analytical expression for thermodynamic quantities. They are therefore restricted to simple geometries [12, 13] (bulk solutions or planar interfaces). The description of complex systems, such as porous or electrochemical materials, is still based on continuous solvent models [14].\n\nIn this letter we present a method aimed at bridging the gap between analytical and numerical approaches. It is based on the application of liquid perturbation theory (LPT) [15] to effective ion-ion potentials extracted from molecular dynamics (MD) results. Different approximations of the PM are employed for the case of NaCl electrolyte solutions: a two component model (MSA2), that only takes free ions into account, and two different three component models (MSA3 and BIMSA3), which include a third species (the contact ion pair). As we proceed to show, LPT allows us to select the best simple model which accurately accounts for the thermodynamics and the physical-chemistry of the system.\n\nThe first stage consists in calculating the McMillan-Mayer effective ion-ion interaction potentials V eff ij (r), by inverting the radial distribution functions (RDF) gij (r) obtained by MD. The simulations were carried out on a box of 2000 water molecules and 48 NaCl pairs using the same interaction potentials as in reference [16]. 
This setup corresponds to a concentration of 0.64 mol l−1. NPT ensemble sampling at standard pressure and temperature was enforced, with a time step of 1 fs and a pressure bath coupling constant of 1 ps. An equilibration run of 0.25 ns was followed by a production run of 0.6 ns for five different initial configurations. The averages of the resulting RDF were then used for the potential inversion via the HNC closure [15]. These effective potentials are assumed to be concentration independent and will be used for simulations at all concentrations.\n\nSubtracting the long-range Coulombic potential V LR ij (r) (which depends on the dielectric constant of the solvent) from V eff ij (r), we obtain the short-range contribution V SR ij (r) to the effective potentials. These are given in Fig. 1 (species 1 and 2 refer to Na+ and Cl− free ions, respectively). All the short-range potentials exhibit oscillations corresponding to the solvent layering between the ions, but this effect is particularly important for the cation-anion interaction: a considerable potential barrier (≳ 2kBT) separates the first two attractive wells. To serve as a reference, Monte Carlo (MC) simulations were performed with these effective potentials; a comparison between MD and MC RDF is also provided in Fig. 1. 
The excellent agreement between both sets of RDF validates the HNC inversion procedure [17], and allows us to com-\n\n2 CNRS, UMR 7195, PECSA, F-75005 Paris, France 3\n\nInstitut de Chimie S´eparative de Marcoule (ICSM),\n\nElectronic address: john.molina@etu.upmc.fr\n\nElectronic address: jean-francois.dufreche@upmc.fr", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2648.pdf" - }, - { - "text": "**4**\n\nRather than using the original CMIP5 ensemble as in previous studies, the aim is to allow for an improved representation of atmospheric and land surface processes including extremes by using higher spatial resolution [11].\n\nHadGEM3 (Hadley Centre Global Environment Model version 3) is a configuration of the UK Met Office Unified Model (MetUM) which has been developed for use for both climate research and weather prediction applications. It is the result of converging the development of the Met Office's weather and climate global atmospheric model components so that, where possible, atmospheric processes are modelled or parametrized seamlessly across spatial resolutions and timescales.\n\nThe high-resolution simulations were performed using the HadGEM3A Global Atmosphere (GA) 3.0 model [12–14] at a resolution of N216 (0.556° of latitude by 0.833° of longitude with gridboxes of approx. 60 km length in mid-latitudes). This is the atmospheric component of the HadGEM3-GC2 coupled climate model [15,16], which is part of the HadGEM3 family of climate models [12]. This represents the third generation of HadGEM configurations, leading on from the HadGEM2 family of climate model configurations [13] which was used for CMIP5. Key improvements over the previous model, HadGEM2, include increased vertical levels in the atmosphere (85 compared to 38) and substantial changes to the model dynamics (ENDGame) [17]. This version of the HadGEM3 model lies in the transition from CMIP5 to CMIP6 versions. 
The Met Office is currently operationally running the coupled HadGEM3-GC2 model at N216 resolution for seasonal and decadal forecasting and clear benefits are emerging from this use at higher resolution [18,19].\n\nWe ran the model using only its atmosphere and land components, with time-varying seasurface temperatures (SSTs) and sea-ice concentrations (SICs) prescribed as input quantities. This approach was taken for two reasons: (i) to provide a rapid first analysis of the implications of the higher resolution for projections of climate extremes and impacts—an atmosphereonly simulation requires considerably less computing time than a coupled ocean–atmosphere general circulation model (GCM); (ii) to allow us to explore, to some degree, uncertainties in regional climate changes by using SSTs and SICs from different climate models. To explore these uncertainties in the regional impacts of climate change, we carried out six HadGEM3 atmospheric simulations driven by time-varying SSTs and SICs from a subset of projections from the CMIP5 with the RCP8.5 scenario. The assumption here is that SSTs and SICs provide a substantial influence on regional patterns of climate change over land, so using a range of SST and SIC patterns in a single atmosphere model goes some way towards representing the range of regional climate changes that would arise in a set of different coupled ocean–atmosphere GCMs. This approach will not capture the full range of uncertainty affecting regional climate changes over land, because it still relies on one atmosphere model and one land surface scheme, so responses to radiative forcing that depend mainly on atmospheric process or land-atmosphere interactions will still be constrained by the behaviour of that single model. 
Nevertheless, we consider that our experimental design avoids the reliance on one single realization of climate and hence allows some of the uncertainties in regional climate-change impacts to be illustrated and explored.\n\nThe SSTs and SICs were taken from a subset of the CMIP5 transient projections performed with the RCP8.5 scenario from 1979 to 2100—the CMIP5 members were selected as representative of a range of outcomes for future climate change, including high and low climate sensitivity, different biases in baseline precipitation climatology, and different global patterns of precipitation change. Specific levels of global warming such as 1.5°C or 2°C were defined on the basis of the global mean temperature in the original CMIP5 projections. The time of reaching a specific level of global warming, therefore, varied between ensemble members. The CMIP5 SSTs were not bias-corrected, which means that the results here may be sensitive to systematic errors arising from biases in the present-day SST patterns.\n\nAtmospheric greenhouse gas concentrations were prescribed from the standard RCP8.5 concentration scenario. Aerosol concentrations were calculated within the model, with aerosol emissions prescribed again from the standard RCP8.5 scenario. This means that the greenhouse gas and aerosol concentrations, and hence radiative forcing, were the same in all ensemble", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed11.pdf" - }, - { - "text": "FIG. 11: The evolution of the optical integral in the NS (top) and the SCS (bottom) in the original MFLI model. Parameters are the same as above. Note that only ∼ 75− 80% of the spectral weight is recovered up to 1eV .\n\nFIG. 12: Evolution of the difference of the optical integrals in the SCS and the NS with the upper cut-off ωc. Parameters are the same as before. Observe that the optical sum in the SCS is larger than in the NS and that ∆W has not yet reached ∆WK up to the bandwidth. 
The dashed line is the FGT result.\n\nThis clearly affects nk because it is expressed via the full Green's function and competes with the conventional effect of the gap opening. The distribution function from this model, which we show in Fig. 2b, brings this point out by showing that in a MFLI model, at ǫ < 0, nk in a superconductor is larger than nk in the normal state, in clear difference with the BCSI case.\n\nWe analyzed the original MFLI model for various parameters and found that the behavior presented in Fig. 12, where ∆W(ωc) > 0 for all frequencies, is typical but\n\nFIG. 13: Behavior of WK with Γ for the original MFLI model at very small α = 0.05. We set ω1 = ∆ = 32 meV. Observe the inconsistency with WK in the BCSI model in Fig. 4.\n\nFIG. 14: The special case of α = 1.5, Γ = 5 meV, other parameters the same as in Fig. 10. These parameters are chosen to illustrate that two sign changes (indicated by arrows in the figure) are also possible within the original MFLI model.\n\nnot a generic one. There exists a range of parameters α and Γ where ∆WK is still positive, but ∆W(ωc) changes the sign twice and is negative at intermediate frequencies. We show an example of such behavior in Fig. 14. Still, for most of the parameters, the behavior of ∆W(ωc) is the same as in Fig. 12.\n\nOn closer inspection, we found a problem with the original MFLI model. We recall that in this model the self-energy in the SCS state was obtained by just cutting the NS self-energy at ω1 (see Eq. (18)). We argue that this phenomenological formalism is not fully consistent, at least for small α. Indeed, for α = 0, the MFLI model reduces to the BCSI model, for which the behavior of the self-energy is given by Eq. (12). This self-energy evolves with ω and Σ′′ has a square-root singularity at ω = ∆ + ωo (with ωo = 0). Meanwhile Σ′′ in the original MFLI model in Eq. 
(18) simply jumps to zero at ω = ω1 = ∆, and this happens for all values of α including α = 0 where the MFLI and BCSI model should merge. This inconsistency is reflected in Fig 13, where we plot the near-BCS limit of MFLI model by taking a very small α = 0.05. We see that the optical integral WK in the SCS still remains larger than in the NS over a wide range of Γ, in clear difference with the exactly known behavior in the BCSI", - "page_start": 8, - "page_end": 8, - "source_file": "1001.0764.pdf" - }, - { - "text": "FIG. 15: Top – σ(ω) in the NS and the SCS in the 'corrected' MFLI model with the feedback from SC on the quasiparticle damping: iΓ term transforms into √ Γ −ω2+∆2 . In the SCS σ now begins at Ω = 2∆. The parameters are same as in Fig. 10. Bottom – the behavior of Kubo sum with Γ. Observe that W(ωc) in the NS is larger than in the SCS.\n\nFIG. 16: Evolution of the difference of the optical integrals between the SCS and the NS with the upper cut-off ωc for the \"corrected\" MFLI model. Now ∆W(ωc) is negative above some frequency. Parameters are same as in the Fig 15.\n\nmodel, where WK is larger in the NS for all Γ (see Fig. 4). In other words, the original MFLI model does not have the BCSI theory as its limiting case.\n\nWe modified the MFLI model is a minimal way by changing the damping term in a SCS to √ Γ −ω2+∆2 to be consistent with BCSI model. We still use Eq. (18) for the MFL term simply because this term was introduced in the NS on phenomenological grounds and there is no way to guess how it gets modified in the SCS state without first deriving the normal state self-energy microscopically (this is what we will do in the next section). The results of the calculations for the modified MFLI model are presented in Figs. 15 and 16. We clearly see that the behavior is now different and ∆WK < 0 for all Γ. This is the same behavior as we previously found in BCSI and EB models. 
So we argue that the 'unconventional' behavior exhibited by the original MFLI model is most likely the manifestation of a particular modeling inconsistency. Still, Ref. 30 made a valid point that the fact that quasiparticles behave more close to free fermions in a SCS than in a NS, and this effect tends to reverse the signs of ∆WK and of the kinetic energy 43. It just happens that in a modified MFLI model the optical integral is still larger in the NS.\n\n#### D. The collective boson model\n\nWe now turn to a more microscopic model- the CB model. The model describes fermions interacting by exchanging soft, overdamped collective bosons in a particular, near-critical, spin or charge channel31,44,45. This interaction is responsible for the normal state self-energy and also gives rise to a superconductivity. A peculiar feature of the CB model is that the propagator of a collective boson changes below Tc because this boson is not an independent degree of freedom (as in EB model) but is made out of low-energy fermions which are affected by superconductivity32 .\n\nThe most relevant point for our discussion is that this model contains the physics which we identified above as a source of a potential sign change of ∆WK. Namely, at strong coupling the fermionic self-energy in the NS is large because there exists strong scattering between low-energy fermions mediated by low-energy collective bosons. In the SCS, the density of low-energy fermions drops and a continuum collective excitations becomes gaped. Both effects reduce fermionic damping and lead to the increase of WK in a SCS. If this increase exceeds a conventional loss of WK due to a gap opening, the total ∆WK may become positive.\n\nThe CB model has been applied numerous times to the cuprates, most often under the assumption that nearcritical collective excitations are spin fluctuations with momenta near Q = (π, π). This version of a CB boson is commonly known as a spin-fermion model. 
This model yields dx2−y 2 superconductivity and explains in a quantitative way a number of measured electronic features of the cuprates, in particular the near-absence of the quasiparticle peak in the NS of optimally doped and underdoped cuprates39 and the peak-dip-hump structure in the ARPES profile in the SCS31,32,46,47. In our analysis we assume that a CB is a spin fluctuation.\n\nThe results for the conductivity within a spin-fermion model depend in quantitative (but not qualitative) way on the assumption for the momentum dispersion of a collective boson. This momentum dependence comes from", - "page_start": 9, - "page_end": 9, - "source_file": "1001.0764.pdf" - }, - { - "text": "FIG. 1: Effective McMillan-Mayer short-range pair potentials extracted from explicit solvent simulations using the HNC closure. (a) Cation anion, (b) cation cation, (c) anion anion, (d) cation anion RDF obtained from explicit solvent MD and implicit solvent MC simulations.\n\npute all ion thermodynamic properties through implicit solvent MC simulations.\n\nThe second stage of our coarse-graining procedure consists in applying LPT, in order to deduce the best analytical model of electrolyte solutions which reproduces this molecular description. The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the difference between them treated as a perturbation in the reference potential. Assuming pairwise additive potentials, Vij = V (0) ij + ∆Vij , a first-order truncated expression for the free energy density of the system βfv is obtained,\n\n$$\\beta f_{v}\\lesssim\\beta f_{v}^{(0)}+\\frac{1}{2}\\beta\\sum_{i,j}\\rho_{i}\\rho_{j}\\int\\mathrm{d}\\mathbf{r}\\,g_{i j}^{(0)}(r)\\Delta V_{i j}(r)\\qquad(1)$$\n\nwhich depends only on the free-energy density f (0) v and RDF g (0) of the reference fluid, with β = (kBT ) −1 and ρi the concentration of species i. 
The Gibbs-Bogoliubov inequality [15] ensures that the right-hand side of Eq. (1) is actually a strict upper bound. Once a reference system has been chosen, the expression on the right-hand side of Eq. (1) must be minimized with respect to the parameters defining the reference. This procedure yields the best first-order approximation to the free energy of the system under consideration.\n\nFor a system of charged particles in solution, the natural reference is the PM, defined in terms of the charge and diameter (σi) of each species. In this case, the perturbing potentials are just the short-range effective potentials computed above (∆Vij = V SR ij ). We use the MSA [3] solution to the PM, since it provides analytical expressions for both the free energy and the RDF. The perturbation term is evaluated using an exponential approximation to the RDF obtained within the MSA, g(r) = exp [gMSA(r) − 1], which removes any unphysical negative regions and improves the comparison with HNC calculations.\n\nFIG. 2: (Color online) (a) Osmotic coefficient Φ in the McMillan-Mayer frame of reference. (diamond) MC simulations, (dot dashed) MSA2, (dot) Debye H¨uckel Limiting law (DHLL), (cross) experiments (Ref. [18] with the McMillan-Mayer to Lewis Randall conversion). (b) Minimization diameters. (dot dashed) MSA2 and (diamond) MSA-fit.\n\nWe first used LPT for a two-component system (Na+ and Cl− free ions) within the MSA (model MSA2), for concentrations ranging from 0.1 to 2.0 mol l−1 . The minimization leads to almost constant diameters on the whole range of concentration: σ1 = 3.67 ˚A and σ2 = 4.78 ˚A. As shown in Fig. 2, these parameters yield osmotic coefficients close to MC calculations only at very low concentration, i.e., c ≤ 0.1 mol l−1 (experimental values are given for indicative purposes only, since a perfect model will exactly match the MC results). For molar solutions, the LPT results differ considerably from MC calculations. 
This discrepancy can easily be understood by comparing the diameters found within the MSA2 calculation with the effective potentials given in Fig. 1. The anion/cation contact distance obtained within the MSA2 calculation is 4.2 ˚A, which is in the region of the second minimum of the effective potential and corresponds to the situation where there is a single layer of water molecules between the ions. The first minimum of the potential, which corresponds to the contact ion pair (CIP) is thus completely ignored by the MSA2 calculation. If the MSA diameters are directly fitted to reproduce the MC osmotic pressure, much smaller values are obtained. These MSA-fit hydrated diameters, which are compared to the MSA2 diameters in the bottom part of Fig. 2, are averages of the CIP and the solvent-separated ion pair.\n\nTo overcome this difficulty, we have explicitly introduced the CIP in our model (species 3). Straightforward calculations, based on a characteristic-function formalism, allow us to define an equivalent model in which the free ions and the CIP are explicitly taken into account [19, 20]. We apply this formalism by defining a pair as an anion and a cation at a distance less than 4 ˚A, which corresponds to the position of the effective potential maximum. The interaction between free, like charges in this new system remains unchanged, and the cation-anion interactions are easily approximated by ex-", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2648.pdf" - }, - { - "text": "ing the temporal dynamics of belief changes in experimental participants. Dynamic belief trajectories can then be related to other (for example, physiological) measures, as is usual in model-based neuroscience [65]. This method can also, in principle, be used for fitting models to other types of experimentally observable systems, like animals, organoids [66], and simulated or emergent systems [67]. 
The package can also be used for agent-based modelling in general, for repeating earlier analyses with sampling based model-fitting and for comparing POMDP-based AIF models directly to other types of models.\n\nSince they implement full approximate Bayesian inferences, AIF models are computationally more demanding than many approaches traditionally used in cognitive and agent-based modelling, in particular when the dimensionality of the generative model is large. This means that models with highly multidimensional or complex behaviour and large numbers of agents can be computationally infeasible to implement, especially given the additional computational demands introduced by fitting these models to empirical data. Avenues for addressing this implicit scaling problem were proposed in the context of machine learning applications [68,69], and with the use of simplifying assumptions—the use of which are ubiquitous in computational modelling—AIF has been used to model multi-agent phenomena, such as opinion dynamics [15,70], coordinated foraging [71] and fish school movements [12]. It remains to be explored how AIF models can be applied to highly complex natural phenomena, such as a concrete election, which underscores the need for efficient but flexible and accessible software tools in the field.\n\nThere are many ways in which ActiveInference can be improved. It would be useful to extend the set of dynamic belief states to include prediction errors since they are often used for model-based neuroscience. This would entail departing from discrete state-space (i.e., POMDP) models to consider continuous state-space models apt for Bayesian filtering or predictive coding (see below). An alternative would be to generate prediction errors from belief updating under discrete models, where prediction errors can be read as the (KL) divergence between posterior and prior beliefs (i.e., complexity or information gain). 
A simple interface could be added for creating custom parametrisations of the requisite parameters that could be parametrised with Boltzmann or Gibbs distributions, as opposed to Dirichlet distributions. Parameter learning could be extended to all generative model parameters, as well as in parametrised forms (e.g., so that the Boltzmann parameter or temperature of the parameters that are learned); similarly for the precision over expected free energies *γ*. Preference priors should also be implementable for environmental states, in addition to observations, and **A** can be made action dependent.\n\nA library of pre-made canonical POMDP models could be created so that users can easily implement them directly. Alternatives to the fixed-point iteration method for updating posteriors over environmental states could be included, like the marginal message passing algorithm. There are various ways in which the package can be made more computationally efficient, and it could be compared with other software implementations. There are plenty of utility and plotting functions that could be added to the package to make it easier to use and to facilitate integration with the model-fitting packages it relies on; for example, to allow for combining the models with linear regressions to compare parameters values of different populations in a single model. More complex types of POMDP models can also be added, like hierarchical and temporally deep POMDPs. Model structure learning could be considered, where different model structures are compared and chosen between by evaluating their free energies. Sophisticated inference, where predictions are also made about changes in one's own beliefs—depending on expected action-dependent observations in the future—could also be implemented [58]. 
Finally, the package could be extended to other types of generative models than POMDPs, including other universal models, like generalised filtering [17] and Hierarchical Gaussian Filter models [41], as well as custom", - "page_start": 28, - "page_end": 28, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "Figure 1. A schematic illustration of a hierarchical active inference model. This model links (exteroceptive, interoceptive, and proprioceptive) sensations at lower levels with multimodal models of hidden bodily states, such as fatigue and hunger, at intermediate levels, and fnally with temporally extended, integrative models of the embodied self at the higher hierarchical level. In this schematic, following predictive coding (Rao and Ballard 1999, Friston 2005), black and red circles represent neural units that encode predictions and prediction errors, respectively. The levels are reciprocally connected, so predictions are propagated from the top-down (black edges) and prediction errors from the bottom-up (red edges). Finally, the pink triangles indicate a mechanism of precision gating (or gain control) of prediction error units, which determines their relative infuence on units encoding predictions. At a neurobiological level, prediction and prediction error units could be mapped to deep and superfcial pyramidal cells in cortical hierarchies, whereas expected precision could be linked to neuromodulatory input. The elements of the generative model shown do not need to map one-to-one to specifc brain areas or networks but are plausibly distributed across many of them. However, as a frst approximation, the lower and intermediate layers of the generative model could be linked to brain networks that process unimodal information (e.g. sensory cortices for exteroceptive information) and multimodal association areas, respectively. 
The highest level of the generative model could be linked to brain networks that process information about the self, such as the insular cortex, the anterior cingulate cortex, and the medial prefrontal cortex. See Parr et al. (2022) for details about hierarchical generative models supporting adaptive regulation and allostasis and Barrett and Simmons (2015) for their putative neuronal underpinnings. See online article for colored version of this fgure.\n\nare reciprocally linked through top-down connections that convey predictions (black edges) and bottom-up connections that convey prediction errors (red edges), within and across levels. This predictive coding architecture permits inferring (in the Bayesian sense) the most likely causes of sensations, across multiple modalities and multiple hierarchical levels, by minimizing prediction errors at all levels. The rationale is that predictions at all levels are continuously adjusted (and synaptic weights adjusted at a slower time scale) until they match with incoming multimodal stimuli suffciently well, and, consequently, the prediction errors across all levels are minimized. This process entails that even if a predictive coding agent starts with an incorrect prediction (e.g. about what object it is looking at) the prediction errors that measure a discrepancy between the predicted sensations and the actual sensations can help revise the initial predictions. See Parr et al. (2022) for a more detailed explanation of how to interpret these schematics.\n\nAnother critical aspect of Fig. 1 is that it illustrates two pathways in which prediction errors at the proprioceptive and interoceptive levels are used to steer physical actions (refex arcs) and autonomic actions (autonomic refexes). 
Endowing predictive coding with these refexes—hence realizing an \"active inference\" architecture—permits minimizing prediction errors by changing the state of the world (by physically acting) or the internal milieu (by engaging in autonomic actions) rather than only by changing predictions, as described later.\n\nEquipped with a generative model like the one shown in Fig. 1, an active inference agent can continuously infer (and act upon) the state of the world and of the body, including the internal milieu, at multiple time scales. Of particular interest, here are multimodal inferences that unite exteroceptive and interoceptive sources of evidence. One example of this is the perception of faces expressing emotions. Two studies reported that", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed1.pdf" - }, - { - "text": "**Figure 5. A** learning for the actual reward condition (reward condition left). The agent correctly learned the probability of receiving rewards in the rewarding arm. It did not learn the probabilities of the non-rewarding arm since it did not explore that option. The color grading signifies the likelihood of an observation being generated by a specific state. The more saturated the color, the higher the likelihood.\n\n## *4.3. Fitting the Model to the Data*\n\nSimulations are useful for a variety of purposes, like exploring the consequences of different priors and parameters and establishing the face validity of hypothetical mechanisms underlying behavioural phenomena. However, we often want to use models to make inferences about specific observed phenomena, like the differences in behaviour between various populations, as in computational psychiatry [14]. One standard method here is model fitting, where we estimate the parameter values (e.g., prior beliefs) of an AIF model that are the most likely given some observed behaviour of a participant. This is often performed with approximate Bayesian methods. 
In the cognitive and behavioural sciences, the predominant method is Markov Chain Monte Carlo (MCMC) methods [34], which are slower but in the limit can estimate parameter posteriors without making assumptions about their functional form. An alternative, which is more often used in other fields and also available in ActiveInference is variational methods, which are faster but require making assumptions about the functional form of the posterior. In general, MCMC methods are favourable when making parameter inferences (i.e., comparing parameters of the same model fitted to different data, like two groups of subjects). When performing a Bayesian model comparison (i.e., comparing different models fitted to the same data), the different approaches rely on different approximations of the model evidence, with the variational", - "page_start": 23, - "page_end": 23, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "# **ANNEX III**\n\n- Model for specific contracts\n- Model for order forms", - "page_start": 41, - "page_end": 41, - "source_file": "EN-Draft FWC for services 0142.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210582_en.pdf", - "query": "In the health regulation regarding coronavirus, what is considered a \"device\" ?", - "target_page": 3, - "target_passage": "means an in vitro diagnostic medical device within the meaning given in regulation 2(1) of the Medical Devices Regulations 2002", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "(2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.\".\n\n## **Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015**\n\n**18.** The Special Educational Needs and Disability (Detained Persons) Regulations 2015(**a**) are amended as 
follows.\n\n**19.** In regulation 2(1) (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n**20.** After regulation 2 (interpretation) insert—\n\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 15(1) and (4) (needs assessments which are not completed);\n- (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n- (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n- (d) regulation 19 (requirement to consider mediation);\n- (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n- (f) regulation 21 (mediation);\n- (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n- (h) regulation 27(3) (steps to be taken by a home authority);\n- (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n- (j) regulation 30(3) and (6) (unopposed appeals).\".\n\n**21.** In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert—\n\n> \"(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**22.** In 
regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\", or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n(<b>a) S.I. 2015/62.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "**18.** Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n#### **EXPLANATORY NOTE**\n\n#### *(This note is not part of the Regulations)*\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the International Travel Regulations\"), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. 
An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\n \n\n© Crown copyright 2021\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "(3) In regulation 4ZA—\n\n- (a) in the heading, for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\";\n- (b) in paragraph (1)(a), for \"regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\")\" substitute \"regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 (\"the International Travel and Operator Liability Regulations\")\";\n- (c) in paragraph (1)(c), for \"paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\";\n- (d) in paragraph (3), for \"paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\".\n\n**2.**—(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020(**a**) are amended as follows.\n\n(2) In regulation 2D(1)(c), for \"regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n(3) In regulation 6(1)—\n\n- 
(a) in the definitions of \"designated place\", \"isolation requirements\" and \"self-isolating worker\", for \"regulation 4\" substitute \"regulation 9\";\n- (b) in the definition of \"International Travel Regulations\", for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n# SCHEDULE 16 Regulation 26(3)\n\n### Transitional provision\n\n**1.** Passenger information provided before 4.00 a.m. on 17th May 2021 by a person pursuant to regulation 3 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\") in advance of arrival in England is treated as passenger information provided for the purposes of these Regulations where the person arrives in England on or after that date.\n\n**2.** Confirmation given by the Foreign, Commonwealth and Development Office that a person is not required to comply with regulation 3B of the 2020 Regulations is treated as confirmation that the person is not required to comply with regulation 6 of these Regulations where the person arrives in England on or after 4.00 a.m. on 17th May 2021.\n\n**3.** A designation by the Secretary of State of a person as an authorised person under regulation 5(7) of the 2020 Regulations has effect as a designation of that person as an authorised person under of regulation 11(11)(c) of these Regulations.\n\n**4.** Regulation 5A of the 2020 Regulations continues to have effect in relation to a constable who exercises the powers in that regulation in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021.\n\n(<b>a) S.I. 2020/1045. Regulation 2D was inserted by S.I. 2021/364. 
There are other amendments but none is relevant.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "**23.** In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**24.** In regulation 10(4) (decision not to secure an EHC plan)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\"; or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n**25.** In regulation 13(3) (timescales for EHC plans), for \"(c)\" substitute \"(d)\".\n\n**26.** In regulation 29 (compliance with the orders of the First-tier Tribunal)—\n\n- (a) after paragraph (6) insert—\n\"(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.\".\n\n- (b) in paragraph (7)(c) after \"10(4)(a)\" insert \"or (d)\".\n**27.** In regulation 30(7)(c) (unopposed appeals), after \"10(4)(a)\" insert \"or (d)\".\n\n## **Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017**\n\n**28.** The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017(**a**) are amended as follows.\n\n**29.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n- **30.** After regulation 2 (interpretation) insert—\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a 
requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 6(3) and (6) (responding to health care recommendations); and\n- (b) regulation 7(1) and (4) (responding to social care recommendations).\".\n\n*Vicky Ford* Parliamentary Under Secretary of State 28th April 2020 Department for Education\n\n#### (**a**) S.I. 2017/1306.", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (e) where P is required to obtain a testing package or undertake a test under regulation 6 or Schedule 8—\n\t- (i) information generated where P books, or attempts to book, a testing package for the purposes of regulation 6,\n\t- (ii) a copy of any notice given to P which contains information about the requirement to book a testing package or to undertake a test,\n\t- (iii) information A obtained under paragraph 10(3) or (4) of Schedule 8,\n\t- (iv) the results of a test undertaken by P in accordance with Schedule 8 (whether or not that test was provided as part of a testing package),\n\t- (v) information obtained by A in the course of providing a test that falls within paragraph (iv) and is undertaken, or in the course of arranging for such a test to be undertaken, by P (including confirmation that the test was undertaken, details of when and where it was undertaken, any reasons for a test not be being undertaken and the details of any replacement test to be undertaken);\n- (f) information provided to an immigration officer pursuant to regulations 3(7), 4(4) or 6(11);\n- (g) where a sample taken in respect of a day 2 test under regulation 6 has been sequenced, the sorted BAM file relating to that sample 
containing all reads aligning to the SARS-CoV-2 reference genome with unaligned and human reads removed;\n- (h) information provided by, or on behalf of, A by way of explanation for failing to comply with regulation 3, 4 or 6, or paragraph 3 of Schedule 8; or\n- (i) information about any steps taken in relation to A, including details of any fixed penalty notice issued under these Regulations.\n- (3) A may only use relevant information where it is necessary—\n\t- (a) for the purpose of carrying out a function under these Regulations;\n\t- (b) for the purpose of—\n\t\t- (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n\t\t- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n\t\t- (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or\n\t- (c) for a purpose connected with, or otherwise incidental to, a purpose described in subparagraph (a) or (b).\n\n(4) Subject to paragraph (7), A may only disclose relevant information to another person (the \"recipient\") where it is necessary for the recipient to have the information —\n\n- (a) for the purpose of carrying out a function of the recipient under—\n\t- (i) these Regulations, or\n\t- (ii) an enactment which, in Scotland, Wales or Northern Ireland, has the effect of requiring the isolation or quarantine of persons who have been outside the common travel area, for any of the purposes described in sub-paragraph (b);\n- (b) for the purpose of—\n\t- (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n\t- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n\t- (iii) giving effect to any international agreement or arrangement relating to the spread of 
infection or contamination with coronavirus or coronavirus disease; or", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "and the Channel Islands. The British Overseas Territories are not in the common travel area. Public health requirements may vary depending upon in which nation of the UK you are staying.\n\nEngland: https://www.gov.uk/uk-border-control\n\nNorthern Ireland: https://www.nidirect.gov.uk/articles/coronavirus-covid-19-international-traveladvice\n\nScotland: https://www.gov.scot/publications/coronavirus-covid-19-international-travel-quarantine/\n\nWales: https://gov.wales/arriving-wales-overseas\n\nFailure to comply with these measures is a criminal offence and you could be fined. There are a limited set of exemptions from these measures. Check the list of exemptions carefully. You may be fined if you fraudulently claim an exemption.\n\n# PART 2\n\n#### **Onboard announcement**\n\nThe following is a public health message on behalf of the UK's public health agencies.\n\nIf you have been in or transited through an amber or red country within the previous 10 days you must quarantine for the first 10 days after you arrive. This is to protect yourself and others.\n\nThe symptoms of coronavirus are a new continuous cough, a high temperature or a loss of, or change in, normal sense of taste or smell. 
If you experience any of these symptoms, however mild, you are advised to make yourself known to the crew.\n\nSimple measures you can take to help protect yourself and family are:\n\nwash your hands\n\navoid touching your face with your hands\n\ncatch coughs and sneezes in a tissue and dispose of it immediately.\n\n### PART 3\n\n### Relevant websites\n\n**1.** The following are \"the relevant websites\" for the purposes of regulation 14—\n\nhttps://www.gov.uk/government/publications/coronavirus-covid-19-travellers-exempt-from-ukborder-rules/coronavirus-covid-19-travellers-exempt-from-uk-border-rules\n\nhttps://www.gov.uk/guidance/booking-and-staying-in-a-quarantine-hotel-when-you-arrive-inengland\n\nhttps://www.gov.uk/guidance/coronavirus-covid-19-testing-for-people-travelling-to-england\n\nhttp://www.gov.uk/travel-quarantine-and-testing\n\nhttps://www.gov.uk/guidance/red-amber-and-green-list-rules-for-entering-england", - "page_start": 82, - "page_end": 82, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## **2020 No. 
471**\n\n## **EDUCATION, ENGLAND**\n\n# The Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020\n\n| Made - - | - | - 28th April 2020 |\n| --- | --- | --- |\n| Laid before Parliament | | 30th April 2020 |\n| Coming into force | - | - 1st May 2020 |\n\nThe Secretary of State makes the following Regulations in exercise of the powers conferred by sections 30(8), 31(4), 36(11), 37(4), 44(7)(b) and (c), 47, 49(3), 51(4), 56(1), 71(11), 73(4), 74(3) and 135(2) and (3) of the Children and Families Act 2014(**a**) and sections 29(3) and 569(4) of the Education Act 1996(**b**).\n\n## **Citation and commencement**\n\n**1.** These Regulations may be cited as the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 and come into force on 1st May 2020.\n\n## **Review and expiry**\n\n**2.**—(1) The Secretary of State must review the effectiveness of these Regulations during the period for which they have effect.\n\n(2) These Regulations cease to have effect on 25th September 2020.\n\n## **Amendment of the Special Educational Needs and Disability Regulations 2014**\n\n**3.** The Special Educational Needs and Disability Regulations 2014(**c**) are amended as follows.\n\n**4.** In regulation 2(1) (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n**5.** After regulation 2 (interpretation) insert—\n\n## \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of\n\n(<b>a) 2014 c.6. Section 30(8) was amended by Schedule 2, Part 1, paragraph 4 to the Children and Social Work Act 2017 (c.16).\n\n(<b>b) 1996 c.56. Section 29(3) was amended by Schedule 30, paragraph 67 and Schedule 31 to the School Standards and Framework Act 1998 (c.31) and S.I. 
2010/1158 and section 569(4) was amended by section 8(1) and (5) of the Education (Wales) Measure 2009.\n\n(<b>c) S.I. 2014/1530, relevant amending instruments are S.I. 2014/2096, S.I. 2015/359 and S.I. 2017/1306.", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "The Secretary of State makes the following Regulations in exercise of the powers conferred by sections 45B, 45F(2) and 45P(2) of the Public Health (Control of Disease) Act 1984(**a**).\n\n## PART 1\n\n### Introductory\n\n#### **Citation, commencement, extent and application**\n\n**1.**—(1) These Regulations may be cited as the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021.\n\n(2) These Regulations come into force at 4.00 a.m. on 17th May 2021.\n\n(3) These Regulations extend to England and Wales and apply in relation to England only.\n\n#### **Interpretation and introduction of Schedules 1 to 4**\n\n**2.**—(1) In these Regulations—\n\n\"category 1 arrival\" means person who has arrived in England from a category 1 country or territory, and has not been in a category 2 country or territory or a category 3 country or territory in the period beginning with the 10th day before the date of their arrival in England;\n\n\"category 1 country or territory\" means a country or territory, or part of a country or territory, specified in Schedule 1(**b**);\n\n\"category 2 country or territory\" means a country or territory or part of a country or territory specified in Schedule 2(**c**);\n\n\"category 3 country or territory\" means a country or territory or part of a country or territory specified in Schedule 3(**d**);\n\n\"child\" means a person under the age of 18;\n\n\"the common travel area\" has the meaning given in section 1(3) of the Immigration Act 1971(**e**);\n\n\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2);\n\n\"coronavirus disease\" means COVID-19 (the official 
designation of the disease which can be caused by coronavirus);\n\n\"designated port\" means a port designated for the purposes of Schedule 11;\n\n\"device\" means an in vitro diagnostic medical device within the meaning given in regulation 2(1) of the Medical Devices Regulations 2002(**f**);\n\n\"disability\" has the meaning given in the Equality Act 2010(**g**) (see section 6 of, and Schedule 1 to, that Act);\n\n\"immigration officer\" means a person appointed by the Secretary of State as an immigration officer under paragraph 1 of Schedule 2 to the Immigration Act 1971(**h**);\n\n\"managed self-isolation package\" has the meaning given in paragraph 8 of Schedule 11;\n\n\"operator\" except in regulation 18, means an operator of a relevant service;\n\n(**b**) Category 1 countries and territories are referred to colloquially and in guidance as \"Green List\" countries and territories.\n\n(**c**) Category 2 countries and territories are referred to colloquially and in guidance as \"Amber List\" countries and territories.\n\n(**f**) S.I. 2002/618.\n\n(<b>a) 1984 c. 22. Part 2A was inserted by section 129 of the Health and Social Care Act 2008 (c. 14).\n\n(<b>d) Category 3 countries and territories are referred to colloquially and in guidance as \"Red List\" countries and territories. (**e**) 1971 c. 77; section 1(3) provides that the United Kingdom, the Channel Islands, the Isle of Man and the Republic of Ireland are collectively referred to in that Act as \"the common travel area\".\n\n(<b>g) 2010 c. 15.\n\n(<b>h) Paragraph 1 was amended by paragraph 3 of Schedule 3 to the Health Protection Agency Act 2004 (c. 17), and by S.I. 
1993/1813.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 15(2) (transfer of EHC plans) (in relation to the second reference to 15 working days), (4), (5), (7) (in relation to the second reference to 15 working days) and (8);\n- (b) regulation 16(2) and (3) (change of responsible commissioning body);\n- (c) regulation 20(9) and (10) (review where the child or young person attends a school or other institution);\n- (d) regulation 21(7), (8) and (9) (review of EHC plan where the child or young person does not attend a school or other institution);\n- (e) regulation 25(1) (notification of decision whether it is necessary to re-assess educational, health care and social care provision);\n- (f) regulation 27(4) (amending or replacing an EHC plan following a re-assessment);\n- (g) regulation 33 (requirement to consider mediation);\n- (h) regulation 34(1) and (2) (where a parent or young person does not wish to or fails to pursue mediation);\n- (i) regulation 35(2), (3) and (4) (mediation health care issues);\n- (j) regulation 36(2) (mediation no health care issues);\n- (k) regulation 39(1) and (3) (mediation certificate under section 55(5));\n- (l) regulation 42(3) and (4) (steps to be taken by a local authority);\n- (m) regulation 44(2)(d), (e), (f) and (h) (compliance with the orders of the First-tier Tribunal);\n- (n) regulation 45(4), (5) and (6A) (unopposed appeals);\n- (o) regulation 47 (disclosure of EHC plans in relation to higher education); and\n- (p) regulation 
56(3) (publication of comments on the local offer).\".\n\n**6.** In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert—\n\n> \"(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**7.** In regulation 5(4) (decision whether or not to conduct an EHC needs assessment)—\n\n- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\t- \"; or\n\t- (e) of a reason relating to the incidence or transmission of coronavirus\".\n- **8.** In regulation 8(2) (duty to co-operate in EHC needs assessments)—\n\t- (a) at the end of sub-paragraph (b) omit \"or\"; and\n\t- (b) at the end of sub-paragraph (c) insert—\n\n\"; or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n**9.** In regulation 10(4) (decision not to secure an EHC plan)—", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**10.** In regulation 13(3) (timescales for EHC plans), for \"(d)\" substitute \"(e)\".\n\n**11.** After regulation 18 (circumstances in which a local authority must review an EHC plan) insert—\n\n## \"**Circumstances in which it is not necessary to review an EHC plan**\n\n**18A.**—(1) It is not necessary for a local authority to review an EHC plan in accordance with section 44(1) of the Act if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\n\n(2) Where paragraph (1) applies, a local authority must instead conduct such reviews as soon as reasonably practicable.\".\n\n**12.** In regulation 22 (amending an EHC plan 
following a review), after paragraph (5) insert—\n\n\"(6) The local authority need not comply with the time limit referred to in paragraphs (3) and (4) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**13.** In regulation 27(3) (amending or replacing an EHC plan following a re-assessment)—\n\n- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**14.** In regulation 45 (unopposed appeals), after paragraph (7) insert—\n\n\"(8) The local authority need not comply with the time limits specified in paragraph (3A) if it is impractical to do so because the circumstances referred to in regulation 10(4)(e) apply.\".\n\n### **Amendment of the Special Educational Needs (Personal Budgets) Regulations 2014**\n\n**15.** The Special Educational Needs (Personal Budgets) Regulations 2014(**a**) are amended as follows.\n\n**16.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2);\n\n**17.** After regulation 2 (interpretation) insert—\n\n\".\n\n#### \"**Relaxation of time period due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, the requirement for the local authority to review the making and use of direct payments within the first three months of them being made in regulation 11(2)(a) (monitoring and review of direct payments) is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(<b>a) S.I. 
2014/1652, to which there are amendments not relevant to these Regulations.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200471_en.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210582_en.pdf", - "query": "Regarding the regulation of Enforcement of requirement to self-isolate concerning travel and coronavirus, who are considered an \"authorised persons\" ?", - "target_page": 19, - "target_passage": "For the purposes of this regulation, “authorised person” means— (a) a constable; (b) for the purposes of paragraphs (2) and (3) only, an immigration officer; or (c) a person designated by the Secretary of State for the purposes of this regulation.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "(2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.\".\n\n## **Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015**\n\n**18.** The Special Educational Needs and Disability (Detained Persons) Regulations 2015(**a**) are amended as follows.\n\n**19.** In regulation 2(1) (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n**20.** After regulation 2 (interpretation) insert—\n\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a 
reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 15(1) and (4) (needs assessments which are not completed);\n- (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n- (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n- (d) regulation 19 (requirement to consider mediation);\n- (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n- (f) regulation 21 (mediation);\n- (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n- (h) regulation 27(3) (steps to be taken by a home authority);\n- (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n- (j) regulation 30(3) and (6) (unopposed appeals).\".\n\n**21.** In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert—\n\n> \"(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**22.** In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\", or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n(<b>a) S.I. 
2015/62.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "**18.** Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n#### **EXPLANATORY NOTE**\n\n#### *(This note is not part of the Regulations)*\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the International Travel Regulations\"), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. 
An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\n \n\n© Crown copyright 2021\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "(3) In regulation 4ZA—\n\n- (a) in the heading, for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\";\n- (b) in paragraph (1)(a), for \"regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\")\" substitute \"regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 (\"the International Travel and Operator Liability Regulations\")\";\n- (c) in paragraph (1)(c), for \"paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\";\n- (d) in paragraph (3), for \"paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\".\n\n**2.**—(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020(**a**) are amended as follows.\n\n(2) In regulation 2D(1)(c), for \"regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n(3) In regulation 6(1)—\n\n- 
(a) in the definitions of \"designated place\", \"isolation requirements\" and \"self-isolating worker\", for \"regulation 4\" substitute \"regulation 9\";\n- (b) in the definition of \"International Travel Regulations\", for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n# SCHEDULE 16 Regulation 26(3)\n\n### Transitional provision\n\n**1.** Passenger information provided before 4.00 a.m. on 17th May 2021 by a person pursuant to regulation 3 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\") in advance of arrival in England is treated as passenger information provided for the purposes of these Regulations where the person arrives in England on or after that date.\n\n**2.** Confirmation given by the Foreign, Commonwealth and Development Office that a person is not required to comply with regulation 3B of the 2020 Regulations is treated as confirmation that the person is not required to comply with regulation 6 of these Regulations where the person arrives in England on or after 4.00 a.m. on 17th May 2021.\n\n**3.** A designation by the Secretary of State of a person as an authorised person under regulation 5(7) of the 2020 Regulations has effect as a designation of that person as an authorised person under of regulation 11(11)(c) of these Regulations.\n\n**4.** Regulation 5A of the 2020 Regulations continues to have effect in relation to a constable who exercises the powers in that regulation in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021.\n\n(<b>a) S.I. 2020/1045. Regulation 2D was inserted by S.I. 2021/364. There are other amendments but none is relevant.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "and the Channel Islands. 
The British Overseas Territories are not in the common travel area. Public health requirements may vary depending upon in which nation of the UK you are staying.\n\nEngland: https://www.gov.uk/uk-border-control\n\nNorthern Ireland: https://www.nidirect.gov.uk/articles/coronavirus-covid-19-international-traveladvice\n\nScotland: https://www.gov.scot/publications/coronavirus-covid-19-international-travel-quarantine/\n\nWales: https://gov.wales/arriving-wales-overseas\n\nFailure to comply with these measures is a criminal offence and you could be fined. There are a limited set of exemptions from these measures. Check the list of exemptions carefully. You may be fined if you fraudulently claim an exemption.\n\n# PART 2\n\n#### **Onboard announcement**\n\nThe following is a public health message on behalf of the UK's public health agencies.\n\nIf you have been in or transited through an amber or red country within the previous 10 days you must quarantine for the first 10 days after you arrive. This is to protect yourself and others.\n\nThe symptoms of coronavirus are a new continuous cough, a high temperature or a loss of, or change in, normal sense of taste or smell. 
If you experience any of these symptoms, however mild, you are advised to make yourself known to the crew.\n\nSimple measures you can take to help protect yourself and family are:\n\nwash your hands\n\navoid touching your face with your hands\n\ncatch coughs and sneezes in a tissue and dispose of it immediately.\n\n### PART 3\n\n### Relevant websites\n\n**1.** The following are \"the relevant websites\" for the purposes of regulation 14—\n\nhttps://www.gov.uk/government/publications/coronavirus-covid-19-travellers-exempt-from-ukborder-rules/coronavirus-covid-19-travellers-exempt-from-uk-border-rules\n\nhttps://www.gov.uk/guidance/booking-and-staying-in-a-quarantine-hotel-when-you-arrive-inengland\n\nhttps://www.gov.uk/guidance/coronavirus-covid-19-testing-for-people-travelling-to-england\n\nhttp://www.gov.uk/travel-quarantine-and-testing\n\nhttps://www.gov.uk/guidance/red-amber-and-green-list-rules-for-entering-england", - "page_start": 82, - "page_end": 82, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "### **Form B: positive test result**\n\nYour coronavirus test result is positive. You had the virus when the test was done.\n\nIf you have not had symptoms of coronavirus, you must self-isolate for 10 days from the day after your test date. If you have symptoms of coronavirus, you must self-isolate for 10 days from the day your symptoms started, if earlier than when you took your test.\n\nPeople you live with or are travelling with should also self-isolate for 10 days from the day after you took the test.\n\nYou may be contacted for contact tracing and to check that you, and those who you live or are travelling with, are self-isolating.\n\nYou must not travel, including to leave the UK, during self-isolation.\n\nContact 111 if you need medical help. In an emergency dial 999.\n\n#### **Form C: unclear test result**\n\nYour coronavirus test result is unclear. 
It is not possible to say if you had the virus when the test was done.\n\nYou must, by law, continue self-isolating for the remainder of your self-isolation period as an international arrival travelling to the UK from an amber-list country, territory or region. You may be contacted to check that you are self-isolating.\n\nIf you want to shorten your self-isolation period you will need to take another test for international arrivals from amber list countries, territories or regions. For more information, go to https://www.gov.uk/guidance/coronavirus-covid-19-test-to-release-for-international-travel.\n\n(4) The test provider must, on request, provide a constable or any other person employed in or for the purposes of any police force, with—\n\n- (a) P's passport number, or travel document reference number (as appropriate);\n- (b) P's test result;\n- (c) the date on which P undertook the test;\n- (d) the date on which the test result was notified or made available to P or X in accordance with sub-paragraphs (2) and (3).\n\n(5) Where—\n\n- (a) regulation 4 or 4A of the Health Protection (Notification) Regulations 2010(**a**) applies in relation to the test provider; or\n- (b) if the test provider arranges with another person (\"X\") for X to carry out any element of the single end-to-end testing service on their behalf, either of those regulations applies to X in the carrying out of that element,\n\n(<b>a) S.I. 2010/659; regulation 4 was amended by S.I. 2013/235, 2020/1175, 2020/764, 2021/150 and regulation 4A was inserted by S.I. 2020/1175.", - "page_start": 72, - "page_end": 72, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "you have been traced as a contact of someone who tested positive\n\nFor advice on when you might need to self-isolate and what to do, go to www.nhs.uk/conditions/coronavirus-covid-19 and read 'Self-isolation and treating symptoms'.\n\n#### **Form B: positive test result**\n\nYour coronavirus test result is positive. 
You had the virus when the test was done.\n\nEven if you have not had symptoms of coronavirus, you must self-isolate for 10 days from the day after your test date. Your test sample may be genome sequenced to check whether you have a virus variant of concern or variant under investigation.\n\nPeople you live with or have travelled with should also self-isolate for 10 days from the day after you took a test.\n\nIf you received a positive test result for the test taken you do not need to take any further tests. People you are travelling with must still take a day 8 test if they have travelled from an amber list country.\n\nYou may be contacted for contact tracing and to check that you, and those who you live or are travelling with, are self-isolating.\n\nYou must not travel, including to leave the UK, during self-isolation.\n\nContact 111 if you need medical help. In an emergency dial 999.\n\n#### **Form C: unclear test result**\n\nYour coronavirus test result is unclear. It is not possible to say if you had the virus when the test was done.\n\nYou must take another test or self-isolate for 10 days from the day after your test date.\n\nYou may be contacted to check that you are self-isolating.\n\n- (4) Where—\n\t- (a) regulation 4 or 4A of the Health Protection (Notification) Regulations 2010 applies in relation to the test provider; or\n\t- (b) if the test provider arranges with another person (\"X\") for X to carry out any element of the single end-to-end testing service on their behalf, either of those regulations applies to X in the carrying out of that element,\n\nthe regulation applies as if it required the information described in sub-paragraph (5) to be included in the notification to Public Health England.\n\n(5) The information mentioned in sub-paragraph (4) is—\n\n- (a) the date on which P last departed from or transited through a category 2 country or territory;\n- (b) P's coach number, flight number or vessel name (as appropriate);\n- (c) the country or 
territory P was travelling from when P arrived in England, and any country or territory they transited through as part of that journey;\n- (d) the date on which P undertook the appropriate test;\n- (e) whether the test is—\n\t- (i) a day 2 test for a category 1 arrival,\n\t- (ii) a day 2 test for a person who is not a category 1 arrival, or\n\t- (iii) a day 8 test.", - "page_start": 65, - "page_end": 65, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (e) where P is required to obtain a testing package or undertake a test under regulation 6 or Schedule 8—\n\t- (i) information generated where P books, or attempts to book, a testing package for the purposes of regulation 6,\n\t- (ii) a copy of any notice given to P which contains information about the requirement to book a testing package or to undertake a test,\n\t- (iii) information A obtained under paragraph 10(3) or (4) of Schedule 8,\n\t- (iv) the results of a test undertaken by P in accordance with Schedule 8 (whether or not that test was provided as part of a testing package),\n\t- (v) information obtained by A in the course of providing a test that falls within paragraph (iv) and is undertaken, or in the course of arranging for such a test to be undertaken, by P (including confirmation that the test was undertaken, details of when and where it was undertaken, any reasons for a test not be being undertaken and the details of any replacement test to be undertaken);\n- (f) information provided to an immigration officer pursuant to regulations 3(7), 4(4) or 6(11);\n- (g) where a sample taken in respect of a day 2 test under regulation 6 has been sequenced, the sorted BAM file relating to that sample containing all reads aligning to the SARS-CoV-2 reference genome with unaligned and human reads removed;\n- (h) information provided by, or on behalf of, A by way of explanation for failing to comply with regulation 3, 4 or 6, or paragraph 3 of Schedule 8; or\n- (i) information about any steps taken in 
relation to A, including details of any fixed penalty notice issued under these Regulations.\n- (3) A may only use relevant information where it is necessary—\n\t- (a) for the purpose of carrying out a function under these Regulations;\n\t- (b) for the purpose of—\n\t\t- (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n\t\t- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n\t\t- (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or\n\t- (c) for a purpose connected with, or otherwise incidental to, a purpose described in subparagraph (a) or (b).\n\n(4) Subject to paragraph (7), A may only disclose relevant information to another person (the \"recipient\") where it is necessary for the recipient to have the information —\n\n- (a) for the purpose of carrying out a function of the recipient under—\n\t- (i) these Regulations, or\n\t- (ii) an enactment which, in Scotland, Wales or Northern Ireland, has the effect of requiring the isolation or quarantine of persons who have been outside the common travel area, for any of the purposes described in sub-paragraph (b);\n- (b) for the purpose of—\n\t- (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n\t- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n\t- (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "**23.** In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) 
insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**24.** In regulation 10(4) (decision not to secure an EHC plan)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\"; or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n**25.** In regulation 13(3) (timescales for EHC plans), for \"(c)\" substitute \"(d)\".\n\n**26.** In regulation 29 (compliance with the orders of the First-tier Tribunal)—\n\n- (a) after paragraph (6) insert—\n\"(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.\".\n\n- (b) in paragraph (7)(c) after \"10(4)(a)\" insert \"or (d)\".\n**27.** In regulation 30(7)(c) (unopposed appeals), after \"10(4)(a)\" insert \"or (d)\".\n\n## **Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017**\n\n**28.** The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017(**a**) are amended as follows.\n\n**29.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n- **30.** After regulation 2 (interpretation) insert—\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission 
of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 6(3) and (6) (responding to health care recommendations); and\n- (b) regulation 7(1) and (4) (responding to social care recommendations).\".\n\n*Vicky Ford* Parliamentary Under Secretary of State 28th April 2020 Department for Education\n\n#### (**a**) S.I. 2017/1306.", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "(4) In this regulation—\n\n\"authorised person\" means—\n\n- (a) a constable,\n- (b) the Civil Aviation Authority,\n- (c) the Secretary of State, or\n- (d) a person authorised by the Civil Aviation Authority or the Secretary of State under the Air Navigation Order 2016(**a**);\n\n\"operator\" has the meaning given in article 4 of the Air Navigation Order 2016;\n\n\"pilot in command\" and \"private aircraft\" have the meanings given in the Air Navigation Order 2016 (see Schedule 1 to that Order);\n\n\"relevant transport service\", in relation to an operator, means a transport service provided by or on behalf of that operator;\n\n\"transport service\" means—\n\n- (a) a relevant service,\n- (b) a shuttle service,\n- (c) a service (other than a relevant service) which—\n\t- (i) is carrying passengers travelling to England from outside the common travel area (whether for payment or valuable consideration or otherwise), and\n\t- (ii) is provided by means of an aircraft (other than a private aircraft), or\n- (d) a flight which—\n\t- (i) is carrying passengers travelling to England from outside the common travel area (whether for payment or valuable consideration or otherwise), and\n\t- (ii) is provided by means of a private aircraft.\n\n# PART 5\n\n### Offences, proceedings and information\n\n#### **Offences and penalties**\n\n**19.**—(1) A person (\"P\") commits an offence where—\n\n- (a) without reasonable excuse P contravenes a requirement in regulation 3 (requirement to provide 
information);\n- (b) without reasonable excuse P contravenes a requirement in regulation 4 (requirement to possess notification of negative test result);\n- (c) without reasonable excuse P contravenes a requirement in regulation 6 (requirement to book and undertake tests);\n- (d) without reasonable excuse P contravenes a requirement in regulation 7 (requirement to undertake workforce tests);\n- (e) without reasonable excuse P contravenes a requirement in regulation 8 (requirement for offshore installation workers to take tests);\n- (f) P contravenes a requirement in regulation 9 (requirement to self-isolate);\n- (g) without reasonable excuse P contravenes a requirement in or imposed under regulation 11 (enforcement of requirement to self-isolate) apart from paragraph (2) of that regulation;\n\n(<b>a) S.I. 2016/765.", - "page_start": 22, - "page_end": 22, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "The Secretary of State makes the following Regulations in exercise of the powers conferred by sections 45B, 45F(2) and 45P(2) of the Public Health (Control of Disease) Act 1984(**a**).\n\n## PART 1\n\n### Introductory\n\n#### **Citation, commencement, extent and application**\n\n**1.**—(1) These Regulations may be cited as the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021.\n\n(2) These Regulations come into force at 4.00 a.m. 
on 17th May 2021.\n\n(3) These Regulations extend to England and Wales and apply in relation to England only.\n\n#### **Interpretation and introduction of Schedules 1 to 4**\n\n**2.**—(1) In these Regulations—\n\n\"category 1 arrival\" means person who has arrived in England from a category 1 country or territory, and has not been in a category 2 country or territory or a category 3 country or territory in the period beginning with the 10th day before the date of their arrival in England;\n\n\"category 1 country or territory\" means a country or territory, or part of a country or territory, specified in Schedule 1(**b**);\n\n\"category 2 country or territory\" means a country or territory or part of a country or territory specified in Schedule 2(**c**);\n\n\"category 3 country or territory\" means a country or territory or part of a country or territory specified in Schedule 3(**d**);\n\n\"child\" means a person under the age of 18;\n\n\"the common travel area\" has the meaning given in section 1(3) of the Immigration Act 1971(**e**);\n\n\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2);\n\n\"coronavirus disease\" means COVID-19 (the official designation of the disease which can be caused by coronavirus);\n\n\"designated port\" means a port designated for the purposes of Schedule 11;\n\n\"device\" means an in vitro diagnostic medical device within the meaning given in regulation 2(1) of the Medical Devices Regulations 2002(**f**);\n\n\"disability\" has the meaning given in the Equality Act 2010(**g**) (see section 6 of, and Schedule 1 to, that Act);\n\n\"immigration officer\" means a person appointed by the Secretary of State as an immigration officer under paragraph 1 of Schedule 2 to the Immigration Act 1971(**h**);\n\n\"managed self-isolation package\" has the meaning given in paragraph 8 of Schedule 11;\n\n\"operator\" except in regulation 18, means an operator of a relevant service;\n\n(**b**) Category 1 countries and 
territories are referred to colloquially and in guidance as \"Green List\" countries and territories.\n\n(**c**) Category 2 countries and territories are referred to colloquially and in guidance as \"Amber List\" countries and territories.\n\n(**f**) S.I. 2002/618.\n\n(<b>a) 1984 c. 22. Part 2A was inserted by section 129 of the Health and Social Care Act 2008 (c. 14).\n\n(<b>d) Category 3 countries and territories are referred to colloquially and in guidance as \"Red List\" countries and territories. (**e**) 1971 c. 77; section 1(3) provides that the United Kingdom, the Channel Islands, the Isle of Man and the Republic of Ireland are collectively referred to in that Act as \"the common travel area\".\n\n(<b>g) 2010 c. 15.\n\n(<b>h) Paragraph 1 was amended by paragraph 3 of Schedule 3 to the Health Protection Agency Act 2004 (c. 17), and by S.I. 1993/1813.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210582_en.pdf", - "query": "What is the expiracy date of the regulation regarding travel during the coronavirus pandemic made in 2021 ?", - "target_page": 31, - "target_passage": "These Regulations expire at the end of 16th May 2022.", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "**18.** Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n#### **EXPLANATORY NOTE**\n\n#### *(This note is not part of the Regulations)*\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the International Travel Regulations\"), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) 
(England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\n \n\n© Crown copyright 2021\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "(3) In regulation 4ZA—\n\n- (a) in the heading, for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\";\n- (b) in paragraph (1)(a), for \"regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\")\" substitute \"regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 (\"the International Travel and Operator Liability Regulations\")\";\n- (c) in paragraph (1)(c), for \"paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\";\n- (d) in paragraph (3), for \"paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"paragraph 7(1)(g) of Schedule 11 to the 
International Travel and Operator Liability Regulations\".\n\n**2.**—(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020(**a**) are amended as follows.\n\n(2) In regulation 2D(1)(c), for \"regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n(3) In regulation 6(1)—\n\n- (a) in the definitions of \"designated place\", \"isolation requirements\" and \"self-isolating worker\", for \"regulation 4\" substitute \"regulation 9\";\n- (b) in the definition of \"International Travel Regulations\", for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n# SCHEDULE 16 Regulation 26(3)\n\n### Transitional provision\n\n**1.** Passenger information provided before 4.00 a.m. on 17th May 2021 by a person pursuant to regulation 3 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\") in advance of arrival in England is treated as passenger information provided for the purposes of these Regulations where the person arrives in England on or after that date.\n\n**2.** Confirmation given by the Foreign, Commonwealth and Development Office that a person is not required to comply with regulation 3B of the 2020 Regulations is treated as confirmation that the person is not required to comply with regulation 6 of these Regulations where the person arrives in England on or after 4.00 a.m. 
on 17th May 2021.\n\n**3.** A designation by the Secretary of State of a person as an authorised person under regulation 5(7) of the 2020 Regulations has effect as a designation of that person as an authorised person under of regulation 11(11)(c) of these Regulations.\n\n**4.** Regulation 5A of the 2020 Regulations continues to have effect in relation to a constable who exercises the powers in that regulation in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021.\n\n(<b>a) S.I. 2020/1045. Regulation 2D was inserted by S.I. 2021/364. There are other amendments but none is relevant.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "(2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.\".\n\n## **Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015**\n\n**18.** The Special Educational Needs and Disability (Detained Persons) Regulations 2015(**a**) are amended as follows.\n\n**19.** In regulation 2(1) (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n**20.** After regulation 2 (interpretation) insert—\n\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) 
The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 15(1) and (4) (needs assessments which are not completed);\n- (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n- (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n- (d) regulation 19 (requirement to consider mediation);\n- (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n- (f) regulation 21 (mediation);\n- (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n- (h) regulation 27(3) (steps to be taken by a home authority);\n- (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n- (j) regulation 30(3) and (6) (unopposed appeals).\".\n\n**21.** In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert—\n\n> \"(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**22.** In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\", or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n(<b>a) S.I. 2015/62.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "and the Channel Islands. The British Overseas Territories are not in the common travel area. 
Public health requirements may vary depending upon in which nation of the UK you are staying.\n\nEngland: https://www.gov.uk/uk-border-control\n\nNorthern Ireland: https://www.nidirect.gov.uk/articles/coronavirus-covid-19-international-traveladvice\n\nScotland: https://www.gov.scot/publications/coronavirus-covid-19-international-travel-quarantine/\n\nWales: https://gov.wales/arriving-wales-overseas\n\nFailure to comply with these measures is a criminal offence and you could be fined. There are a limited set of exemptions from these measures. Check the list of exemptions carefully. You may be fined if you fraudulently claim an exemption.\n\n# PART 2\n\n#### **Onboard announcement**\n\nThe following is a public health message on behalf of the UK's public health agencies.\n\nIf you have been in or transited through an amber or red country within the previous 10 days you must quarantine for the first 10 days after you arrive. This is to protect yourself and others.\n\nThe symptoms of coronavirus are a new continuous cough, a high temperature or a loss of, or change in, normal sense of taste or smell. 
If you experience any of these symptoms, however mild, you are advised to make yourself known to the crew.\n\nSimple measures you can take to help protect yourself and family are:\n\nwash your hands\n\navoid touching your face with your hands\n\ncatch coughs and sneezes in a tissue and dispose of it immediately.\n\n### PART 3\n\n### Relevant websites\n\n**1.** The following are \"the relevant websites\" for the purposes of regulation 14—\n\nhttps://www.gov.uk/government/publications/coronavirus-covid-19-travellers-exempt-from-ukborder-rules/coronavirus-covid-19-travellers-exempt-from-uk-border-rules\n\nhttps://www.gov.uk/guidance/booking-and-staying-in-a-quarantine-hotel-when-you-arrive-inengland\n\nhttps://www.gov.uk/guidance/coronavirus-covid-19-testing-for-people-travelling-to-england\n\nhttp://www.gov.uk/travel-quarantine-and-testing\n\nhttps://www.gov.uk/guidance/red-amber-and-green-list-rules-for-entering-england", - "page_start": 82, - "page_end": 82, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "# PART 6\n\n### Final provisions\n\n### **Review of need for requirements**\n\n**24.** The Secretary of State must review the need for the requirements imposed by these Regulations by 14th June 2021 and at least once every 28 days thereafter.\n\n#### **Expiry of Regulations**\n\n**25.** These Regulations expire at the end of 16th May 2022.\n\n### **Revocations, transitional provision consequential amendments and savings**\n\n**26.**—(1) The following Regulations are revoked—\n\n- (a) the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020(**a**);\n- (b) the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the International Travel Regulations\")(**b**); and\n- (c) the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021(**c**).\n\n(2) Schedule 15 makes consequential amendments to other 
instruments specified in that Schedule.\n\n(3) Schedule 16 makes transitional provisions.\n\n(4) Nothing in these Regulations applies in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021 (and accordingly, the regulations mentioned in paragraph (1) continue to have effect in relation to such a person).\n\nSigned by authority of the Secretary of State\n\n*Robert Courts* Parliamentary Under Secretary of State At 10.32 a.m. on 14th May 2021 Department for Transport\n\n(**a**) S.I. 2020/567.\n\n(<b>b) S.I. 2020/568.\n\n(<b>c) S.I. 2021/38.", - "page_start": 30, - "page_end": 30, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## **2020 No. 471**\n\n## **EDUCATION, ENGLAND**\n\n# The Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020\n\n| Made - - | - | - 28th April 2020 |\n| --- | --- | --- |\n| Laid before Parliament | | 30th April 2020 |\n| Coming into force | - | - 1st May 2020 |\n\nThe Secretary of State makes the following Regulations in exercise of the powers conferred by sections 30(8), 31(4), 36(11), 37(4), 44(7)(b) and (c), 47, 49(3), 51(4), 56(1), 71(11), 73(4), 74(3) and 135(2) and (3) of the Children and Families Act 2014(**a**) and sections 29(3) and 569(4) of the Education Act 1996(**b**).\n\n## **Citation and commencement**\n\n**1.** These Regulations may be cited as the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 and come into force on 1st May 2020.\n\n## **Review and expiry**\n\n**2.**—(1) The Secretary of State must review the effectiveness of these Regulations during the period for which they have effect.\n\n(2) These Regulations cease to have effect on 25th September 2020.\n\n## **Amendment of the Special Educational Needs and Disability Regulations 2014**\n\n**3.** The Special Educational Needs and Disability Regulations 2014(**c**) are amended as follows.\n\n**4.** In regulation 2(1) (interpretation), at the appropriate place 
insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n**5.** After regulation 2 (interpretation) insert—\n\n## \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of\n\n(<b>a) 2014 c.6. Section 30(8) was amended by Schedule 2, Part 1, paragraph 4 to the Children and Social Work Act 2017 (c.16).\n\n(<b>b) 1996 c.56. Section 29(3) was amended by Schedule 30, paragraph 67 and Schedule 31 to the School Standards and Framework Act 1998 (c.31) and S.I. 2010/1158 and section 569(4) was amended by section 8(1) and (5) of the Education (Wales) Measure 2009.\n\n(<b>c) S.I. 2014/1530, relevant amending instruments are S.I. 2014/2096, S.I. 2015/359 and S.I. 2017/1306.", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (e) where P is required to obtain a testing package or undertake a test under regulation 6 or Schedule 8—\n\t- (i) information generated where P books, or attempts to book, a testing package for the purposes of regulation 6,\n\t- (ii) a copy of any notice given to P which contains information about the requirement to book a testing package or to undertake a test,\n\t- (iii) information A obtained under paragraph 10(3) or (4) of Schedule 8,\n\t- (iv) the results of a test undertaken by P in accordance with Schedule 8 (whether or not that test was provided as part of a testing package),\n\t- (v) information obtained by A in the course of providing a test that falls within paragraph (iv) and is undertaken, or in the course of arranging for such a test to be undertaken, by P (including confirmation that the test was undertaken, details of when and where it was undertaken, any reasons for a test not be being undertaken and the details of any replacement test to be undertaken);\n- (f) 
information provided to an immigration officer pursuant to regulations 3(7), 4(4) or 6(11);\n- (g) where a sample taken in respect of a day 2 test under regulation 6 has been sequenced, the sorted BAM file relating to that sample containing all reads aligning to the SARS-CoV-2 reference genome with unaligned and human reads removed;\n- (h) information provided by, or on behalf of, A by way of explanation for failing to comply with regulation 3, 4 or 6, or paragraph 3 of Schedule 8; or\n- (i) information about any steps taken in relation to A, including details of any fixed penalty notice issued under these Regulations.\n- (3) A may only use relevant information where it is necessary—\n\t- (a) for the purpose of carrying out a function under these Regulations;\n\t- (b) for the purpose of—\n\t\t- (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n\t\t- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n\t\t- (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or\n\t- (c) for a purpose connected with, or otherwise incidental to, a purpose described in subparagraph (a) or (b).\n\n(4) Subject to paragraph (7), A may only disclose relevant information to another person (the \"recipient\") where it is necessary for the recipient to have the information —\n\n- (a) for the purpose of carrying out a function of the recipient under—\n\t- (i) these Regulations, or\n\t- (ii) an enactment which, in Scotland, Wales or Northern Ireland, has the effect of requiring the isolation or quarantine of persons who have been outside the common travel area, for any of the purposes described in sub-paragraph (b);\n- (b) for the purpose of—\n\t- (i) preventing danger to public health as a result of the spread of infection or contamination with 
coronavirus or coronavirus disease,\n\t- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n\t- (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "**23.** In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**24.** In regulation 10(4) (decision not to secure an EHC plan)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\"; or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n**25.** In regulation 13(3) (timescales for EHC plans), for \"(c)\" substitute \"(d)\".\n\n**26.** In regulation 29 (compliance with the orders of the First-tier Tribunal)—\n\n- (a) after paragraph (6) insert—\n\"(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.\".\n\n- (b) in paragraph (7)(c) after \"10(4)(a)\" insert \"or (d)\".\n**27.** In regulation 30(7)(c) (unopposed appeals), after \"10(4)(a)\" insert \"or (d)\".\n\n## **Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017**\n\n**28.** The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017(**a**) are amended as follows.\n\n**29.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n- **30.** After regulation 2 (interpretation) insert—\n#### \"**Relaxation of time periods 
due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 6(3) and (6) (responding to health care recommendations); and\n- (b) regulation 7(1) and (4) (responding to social care recommendations).\".\n\n*Vicky Ford* Parliamentary Under Secretary of State 28th April 2020 Department for Education\n\n#### (**a**) S.I. 2017/1306.", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "The Secretary of State makes the following Regulations in exercise of the powers conferred by sections 45B, 45F(2) and 45P(2) of the Public Health (Control of Disease) Act 1984(**a**).\n\n## PART 1\n\n### Introductory\n\n#### **Citation, commencement, extent and application**\n\n**1.**—(1) These Regulations may be cited as the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021.\n\n(2) These Regulations come into force at 4.00 a.m. 
on 17th May 2021.\n\n(3) These Regulations extend to England and Wales and apply in relation to England only.\n\n#### **Interpretation and introduction of Schedules 1 to 4**\n\n**2.**—(1) In these Regulations—\n\n\"category 1 arrival\" means person who has arrived in England from a category 1 country or territory, and has not been in a category 2 country or territory or a category 3 country or territory in the period beginning with the 10th day before the date of their arrival in England;\n\n\"category 1 country or territory\" means a country or territory, or part of a country or territory, specified in Schedule 1(**b**);\n\n\"category 2 country or territory\" means a country or territory or part of a country or territory specified in Schedule 2(**c**);\n\n\"category 3 country or territory\" means a country or territory or part of a country or territory specified in Schedule 3(**d**);\n\n\"child\" means a person under the age of 18;\n\n\"the common travel area\" has the meaning given in section 1(3) of the Immigration Act 1971(**e**);\n\n\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2);\n\n\"coronavirus disease\" means COVID-19 (the official designation of the disease which can be caused by coronavirus);\n\n\"designated port\" means a port designated for the purposes of Schedule 11;\n\n\"device\" means an in vitro diagnostic medical device within the meaning given in regulation 2(1) of the Medical Devices Regulations 2002(**f**);\n\n\"disability\" has the meaning given in the Equality Act 2010(**g**) (see section 6 of, and Schedule 1 to, that Act);\n\n\"immigration officer\" means a person appointed by the Secretary of State as an immigration officer under paragraph 1 of Schedule 2 to the Immigration Act 1971(**h**);\n\n\"managed self-isolation package\" has the meaning given in paragraph 8 of Schedule 11;\n\n\"operator\" except in regulation 18, means an operator of a relevant service;\n\n(**b**) Category 1 countries and 
territories are referred to colloquially and in guidance as \"Green List\" countries and territories.\n\n(**c**) Category 2 countries and territories are referred to colloquially and in guidance as \"Amber List\" countries and territories.\n\n(**f**) S.I. 2002/618.\n\n(<b>a) 1984 c. 22. Part 2A was inserted by section 129 of the Health and Social Care Act 2008 (c. 14).\n\n(<b>d) Category 3 countries and territories are referred to colloquially and in guidance as \"Red List\" countries and territories. (**e**) 1971 c. 77; section 1(3) provides that the United Kingdom, the Channel Islands, the Isle of Man and the Republic of Ireland are collectively referred to in that Act as \"the common travel area\".\n\n(<b>g) 2010 c. 15.\n\n(<b>h) Paragraph 1 was amended by paragraph 3 of Schedule 3 to the Health Protection Agency Act 2004 (c. 17), and by S.I. 1993/1813.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "### **Form B: positive test result**\n\nYour coronavirus test result is positive. You had the virus when the test was done.\n\nIf you have not had symptoms of coronavirus, you must self-isolate for 10 days from the day after your test date. If you have symptoms of coronavirus, you must self-isolate for 10 days from the day your symptoms started, if earlier than when you took your test.\n\nPeople you live with or are travelling with should also self-isolate for 10 days from the day after you took the test.\n\nYou may be contacted for contact tracing and to check that you, and those who you live or are travelling with, are self-isolating.\n\nYou must not travel, including to leave the UK, during self-isolation.\n\nContact 111 if you need medical help. In an emergency dial 999.\n\n#### **Form C: unclear test result**\n\nYour coronavirus test result is unclear. 
It is not possible to say if you had the virus when the test was done.\n\nYou must, by law, continue self-isolating for the remainder of your self-isolation period as an international arrival travelling to the UK from an amber-list country, territory or region. You may be contacted to check that you are self-isolating.\n\nIf you want to shorten your self-isolation period you will need to take another test for international arrivals from amber list countries, territories or regions. For more information, go to https://www.gov.uk/guidance/coronavirus-covid-19-test-to-release-for-international-travel.\n\n(4) The test provider must, on request, provide a constable or any other person employed in or for the purposes of any police force, with—\n\n- (a) P's passport number, or travel document reference number (as appropriate);\n- (b) P's test result;\n- (c) the date on which P undertook the test;\n- (d) the date on which the test result was notified or made available to P or X in accordance with sub-paragraphs (2) and (3).\n\n(5) Where—\n\n- (a) regulation 4 or 4A of the Health Protection (Notification) Regulations 2010(**a**) applies in relation to the test provider; or\n- (b) if the test provider arranges with another person (\"X\") for X to carry out any element of the single end-to-end testing service on their behalf, either of those regulations applies to X in the carrying out of that element,\n\n(<b>a) S.I. 2010/659; regulation 4 was amended by S.I. 2013/235, 2020/1175, 2020/764, 2021/150 and regulation 4A was inserted by S.I. 
2020/1175.", - "page_start": 72, - "page_end": 72, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia2.pdf", - "query": "Who first suggested the notions of \"hard\" and \"easy\" problems regarding consciousness ?", - "target_page": 1, - "target_passage": "The terms \"hard problem\" and \"easy problems\" were coined by the philosopher David Chalmers", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "The philosophers Glenn Carruthers and Elizabeth Schier said in 2012 that the main arguments for the existence of a hard problem—philosophical zombies, Mary's room, and Nagel's bats—are only persuasive if one already assumes that \"consciousness must be independent of the structure and function of mental states, i.e. that there is a hard problem.\" Hence, the arguments beg the question. The authors suggest that \"instead of letting our conclusions on the thought experiments guide our theories of consciousness, we should let our theories of consciousness guide our conclusions from the thought experiments.\"[64]\n\nThe philosopher Massimo Pigliucci argued in 2013 that the hard problem is misguided, resulting from a \"category mistake\".[17] He said: \"Of course an explanation isn't the same as an experience, but that's because the two are completely independent categories, like colors and triangles. 
It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you.\"[17]\n\nIn 2017, the philosopher Marco Stango, in a paper on John Dewey's approach to the problem of consciousness (which preceded Chalmers' formulation of the hard problem by over half a century), noted that Dewey's approach would see the hard problem as the consequence of an unjustified assumption that feelings and functional behaviors are not the same physical process: \"For the Deweyan philosopher, the 'hard problem' of consciousness is a 'conceptual fact' only in the sense that it is a *philosophical mistake*: the mistake of failing to see that the physical can be had as an episode of immediate sentiency.\"[65]\n\nThe philosopher Thomas Metzinger likens the hard problem of consciousness to vitalism, a formerly widespread view in biology which was not so much solved as abandoned.[66] Brian Jonathan Garrett has also argued that the hard problem suffers from flaws analogous to those of vitalism.[67]\n\nThe philosopher Peter Hacker argues that the hard problem is misguided in that it asks how consciousness can emerge from matter, whereas in fact sentience emerges from the evolution of living organisms.[68] He states: \"The hard problem isn't a hard problem at all. The really hard problems are the problems the scientists are dealing with. [...] The philosophical problem, like all philosophical problems, is a confusion in the conceptual scheme.\"[68] Hacker's critique extends beyond Chalmers and the hard problem, being directed against contemporary philosophy of mind and neuroscience more broadly. 
Along with the neuroscientist Max Bennett, he has argued that most of contemporary neuroscience remains implicitly dualistic in its conceptualizations and is predicated on the *mereological fallacy* of ascribing psychological concepts to the brain that can properly be ascribed only to the person as a whole.[69] Hacker further states that \"consciousness studies\", as it exists today, is \"literally a total waste of time\" and that \"the conception of consciousness which they have is incoherent\".[68]\n\n#### **Eliminative materialism / Illusionism**\n\nEliminative materialism or eliminativism is the view that many or all of the mental states used in folk psychology (i.e., common-sense ways of discussing the mind) do not, upon scientific examination, correspond to real brain mechanisms.[59] According the 2020 PhilPapers survey, 4.51% of philosophers surveyed subscribe to eliminativism.[25]\n\nWhile Patricia Churchland and Paul Churchland have famously applied eliminative materialism to propositional attitudes, philosophers including Daniel Dennett, Georges Rey, and Keith Frankish have applied it to qualia or phenomenal consciousness (i.e., conscious experience).[59] On their view, it is mistaken not only to believe there is a hard problem of consciousness, but to believe phenomenal consciousness exists at all.[19][61]", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia2.pdf" - }, - { - "text": "# **Hard problem of consciousness**\n\nIn the philosophy of mind, the **hard problem of consciousness** is to explain why and how humans and other organisms have qualia, phenomenal consciousness, or subjective experience. 
[1][2] It is contrasted with the \"easy problems\" of explaining why and how physical systems give a (healthy) human being the ability to discriminate, to integrate information, and to perform behavioral functions such as watching, listening, speaking (including generating an utterance that appears to refer to personal behaviour or belief), and so forth.[1] The easy problems are amenable to functional explanation—that is, explanations that are mechanistic or behavioral—since each physical system can be explained (at least in principle) purely by reference to the \"structure and dynamics\" that underpin the phenomenon.[1][3]\n\nProponents of the hard problem argue that it is categorically different from the easy problems since no mechanistic or behavioral explanation could explain the character of an experience, not even in principle. Even after all the relevant functional facts are explicated, they argue, there will still remain a further question: \"why is the performance of these functions accompanied by experience?\"[1] To bolster their case, proponents of the hard problem frequently turn to various philosophical thought experiments, involving philosophical zombies (which, they claim, are conceivable) or inverted qualia, or the claimed ineffability of colour experiences, or the claimed unknowability of foreign states of consciousness, such as the experience of being a bat.\n\nThe terms \"hard problem\" and \"easy problems\" were coined by the philosopher David Chalmers in a 1994 talk given at The Science of Consciousness conference held in Tucson, Arizona.[4] The following year, the main talking points of Chalmers' talk were published in *The Journal of Consciousness Studies*. 
[1] The publication gained significant attention from consciousness researchers and became the subject of a special volume of the journal,[5][6] which was later published into a book.[7] In 1996, Chalmers published *The Conscious Mind*, a book-length treatment of the hard problem, in which he elaborated on his core arguments and responded to counterarguments. His use of the word *easy* is \"tongue-in-cheek\".[8] As the\n\nChalmers on stage for an Alan Turing Year event at De La Salle University, Manila, 27 March 2012\n\ncognitive psychologist Steven Pinker puts it, they are about as easy as going to Mars or curing cancer. \"That is, scientists more or less know what to look for, and with enough brainpower and funding, they would probably crack it in this century.\"[9]\n\nThe existence of the hard problem is disputed. It has been accepted by some philosophers of mind such as Joseph Levine, [10] Colin McGinn, [11] and Ned Block[12] and cognitive neuroscientists such as Francisco Varela, [13] Giulio Tononi, [14][15] and Christof Koch. [14][15] On the other hand, its existence is denied by other philosophers of mind, such as Daniel Dennett, [16] Massimo Pigliucci, [17] Thomas Metzinger, Patricia Churchland, [18] and Keith Frankish, [19] and by cognitive neuroscientists such as Stanislas Dehaene, [20] Bernard Baars, [21] Anil Seth, [22] and Antonio Damasio. [23] Clinical neurologist and skeptic", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia2.pdf" - }, - { - "text": "patterns. A clock, a hurricane, and the easy problems, are all the sum of their parts (as are most things).[27]\n\nThe easy problems relevant to consciousness concern mechanistic analysis of the neural processes that accompany behaviour. Examples of these include how sensory systems work, how sensory data is processed in the brain, how that data influences behaviour or verbal reports, the neural basis of thought and emotion, and so on. 
They are problems that can be analyzed through \"structures and functions\".[27]\n\n#### **Hard problem**\n\nThe hard problem, in contrast, is the problem of *why* and *how* those processes are accompanied by experience.[1] It may further include the question of why these processes are accompanied by this or that particular experience, rather than some other kind of experience. In other words, the hard problem is the problem of explaining why certain mechanisms are accompanied by conscious experience.[27] For example, why should neural processing in the brain lead to the felt sensations of, say, feelings of hunger? And why should those neural firings lead to feelings of hunger rather than some other feeling (such as, for example, feelings of thirst)?\n\nChalmers argues that it is conceivable that the relevant behaviours associated with hunger, or any other feeling, could occur even in the absence of that feeling. This suggests that experience is irreducible to physical systems such as the brain. This is the topic of the next section.\n\n#### **How the easy and hard problems are related**\n\nChalmers believes that the hard problem is irreducible to the easy problems: solving the easy problems will not lead to a solution to the hard problems. This is because the easy problems pertain to the causal structure of the world while the hard problem pertains to consciousness, and facts about consciousness include facts that go beyond mere causal or structural description.[32]\n\nFor example, suppose someone were to stub their foot and yelp. In this scenario, the easy problems are mechanistic explanations that involve the activity of the nervous system and brain and its relation to the environment (such as the propagation of nerve signals from the toe to the brain, the processing of that information and how it leads to yelping, and so on). 
The hard problem is the question of why these mechanisms are accompanied by *the feeling of pain*, or why these feelings of pain feel the particular way that they do. Chalmers argues that facts about the neural mechanisms of pain, and pain behaviours, do not lead to facts about conscious experience. Facts about conscious experience are, instead, further facts, not derivable from facts about the brain.[27][32]\n\nAn explanation for all of the relevant physical facts about neural processing would leave unexplained facts about what it is like to feel pain. This is in part because functions and physical structures of any sort could conceivably exist in the absence of experience. Alternatively, they could exist alongside a different set of experiences. For example, it is logically possible for a perfect replica of Chalmers to have no experience at all, or for it to have a different set of experiences (such as an inverted visible spectrum, so that the blue-yellow red-green axes of its visual field are flipped).[32]\n\nThe same cannot be said about clocks, hurricanes, or other physical things. In those cases, a structural or functional description is a complete description. A perfect replica of a clock is a clock, a perfect replica of a hurricane is a hurricane, and so on. The difference is that physical things are nothing more than their", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia2.pdf" - }, - { - "text": "from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit \"audience\").[140] The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene. 
[141]\n\nIn his original paper outlining the hard problem of consciousness, Chalmers discussed GWT as a theory that only targets one of the \"easy problems\" of consciousness.[1] In particular, he said GWT provided a promising account of how information in the brain could become globally accessible, but argued that \"now the question arises in a different form: why should global accessibility give rise to conscious experience? As always, this bridging question is unanswered.\"[1] J. W. Dalton similarly criticized GWT on the grounds that it provides, at best, an account of the cognitive *function* of consciousness, and fails to explain its experiential aspect.[142] By contrast, A. C. Elitzur argued: \"While [GWT] does not address the 'hard problem', namely, the very nature of consciousness, it constrains any theory that attempts to do so and provides important insights into the relation between consciousness and cognition.\"[143]\n\nFor his part, Baars writes (along with two colleagues) that there is no hard problem of explaining qualia over and above the problem of explaining causal functions, because qualia are entailed by neural activity and themselves causal.[21] Dehaene, in his 2014 book *Consciousness and the Brain*, rejected the concept of qualia and argued that Chalmers' \"easy problems\" of consciousness are actually the hard problems.[20] He further stated that the \"hard problem\" is based only upon ill-defined intuitions that are continually shifting as understanding evolves:[20]\n\n> Once our intuitions are educated by cognitive neuroscience and computer simulations, Chalmers' hard problem will evaporate. The hypothetical concept of qualia, pure mental experience, detached from any information-processing role, will be viewed as a peculiar idea of the prescientific era, much like vitalism... 
[Just as science dispatched vitalism] the science of consciousness will keep eating away at the hard problem of consciousness until it vanishes.\n\n# **Meta-problem**\n\nIn 2018, Chalmers highlighted what he calls the \"**meta-problem of consciousness**\", another problem related to the hard problem of consciousness:[76]\n\n> The meta-problem of consciousness is (to a first approximation) the problem of explaining why we think that there is a [hard] problem of consciousness.\n\nIn his \"second approximation\", he says it is the problem of explaining the behavior of \"phenomenal reports\", and the behavior of expressing a belief that there is a hard problem of consciousness.[76]\n\nExplaining its significance, he says:[76]\n\nAlthough the meta-problem is strictly speaking an easy problem, it is deeply connected to the hard problem. We can reasonably hope that a solution to the meta-problem will shed significant light on the hard problem. A particularly strong line holds that a solution to the", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Steven Novella has dismissed it as \"the hard non-problem\".[24] According to a 2020 PhilPapers survey, a majority (62.42%) of the philosophers surveyed said they believed that the hard problem is a genuine problem, while 29.72% said that it does not exist.[25]\n\nThere are a number of other potential philosophical problems that are related to the Hard Problem. 
Ned Block believes that there exists a \"Harder Problem of Consciousness\", due to the possibility of different physical and functional neurological systems potentially having phenomenal overlap.[12] Another potential philosophical problem which is closely related to Benj Hellie's vertiginous question, dubbed \"The Even Harder Problem of Consciousness\", refers to why a given individual has their own particular personal identity, as opposed to existing as someone else.[26]\n\n# **Overview**\n\nCognitive scientist David Chalmers first formulated the hard problem in his paper \"Facing up to the problem of consciousness\" (1995)[1] and expanded upon it in *The Conscious Mind* (1996). His works provoked comment. Some, such as philosopher David Lewis and Steven Pinker, have praised Chalmers for his argumentative rigour and \"impeccable clarity\".[27] Pinker later said, in 2018, \"In the end I still think that the hard problem is a meaningful conceptual problem, but agree with Dennett that it is not a meaningful scientific problem. No one will ever get a grant to study whether you are a zombie or whether the same Captain Kirk walks on the deck of the Enterprise and the surface of Zakdorn. And I agree with several other philosophers that it may be futile to hope for a solution at all, precisely because it is a conceptual problem, or, more accurately, a problem with our concepts.\"[28] Daniel Dennett and Patricia Churchland, among others, believe that the hard problem is best seen as a collection of easy problems that will be solved through further analysis of the brain and behaviour. [29][30]\n\nConsciousness is an ambiguous term. It can be used to mean self consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel's definition of consciousness: \"*the feeling of what it is like to be something.\"* Consciousness, in this sense, is synonymous with *experience.*[31][27]\n\n### **Chalmers' formulation**\n\n. . 
.even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: *Why is the performance of these functions accompanied by experience?*\n\n—David Chalmers, Facing up to the problem of consciousness\n\nThe problems of consciousness, Chalmers argues, are of two kinds: the *easy problems* and the *hard problem*.\n\n#### **Easy problems**\n\nThe easy problems are amenable to reductive inquiry. They are a logical consequence of lower-level facts about the world, similar to how a clock's ability to tell time is a logical consequence of its clockwork and structure, or a hurricane being a logical consequence of the structures and functions of certain weather", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Today there is a strong tendency to simply *equate* consciousness with the qualia. Yet there is clearly something not quite right about this. The \"itchiness of itches\" and the \"hurtfulness of pain\" are qualities we are conscious *of*. So philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely *consciousness* of contents, the very givenness of whatever is subjectively given. And therefore the problem of consciousness does not pertain so much to some alleged \"mysterious, nonpublic objects\", i.e. 
objects that seem to be only \"visible\" to the respective subject, but rather to the nature of \"seeing\" itself (and in today's philosophy of mind astonishingly little is said about the latter).[129]\n\n# **Relationship to scientific frameworks**\n\nMost neuroscientists and cognitive scientists believe that Chalmers' alleged \"hard problem\" will be solved, or be shown to not be a real problem, in the course of the solution of the so-called \"easy problems\", although a significant minority disagrees.[9][130]\n\n#### **Neural correlates of consciousness**\n\nSince 1990, researchers including the molecular biologist Francis Crick and the neuroscientist Christof Koch have made significant progress toward identifying which neurobiological events occur concurrently to the experience of subjective consciousness.[131] These postulated events are referred to as *neural correlates of consciousness* or NCCs. However, this research arguably addresses the question of *which* neurobiological mechanisms are linked to consciousness but not the question of *why* they should give rise to consciousness at all, the latter being the hard problem of consciousness as Chalmers formulated it. In \"On the Search for the Neural Correlate of Consciousness\", Chalmers said he is confident that, granting the principle that something such as what he terms \"global availability\" can be used as an indicator of consciousness, the neural correlates will be discovered \"in a century or two\".[132] Nevertheless, he stated regarding their relationship to the hard problem of consciousness:\n\n> One can always ask why these processes of availability should give rise to consciousness in the first place. As yet we cannot explain why they do so, and it may well be that full details about the processes of availability will still fail to answer this question. 
Certainly, nothing in the standard methodology I have outlined answers the question; that methodology assumes a relation between availability and consciousness, and therefore does nothing to explain it. [...] So the hard problem remains. But who knows: Somewhere along the line we may be led to the relevant insights that show why the link is there, and the hard problem may then be solved.[132]\n\nThe neuroscientist and Nobel laureate Eric Kandel wrote that locating the NCCs would not solve the hard problem, but rather one of the so-called easy problems to which the hard problem is contrasted.[133] Kandel went on to note Crick and Koch's suggestion that once the binding problem—understanding what accounts for the unity of experience—is solved, it will be possible to solve the hard problem empirically. [133] However, neuroscientist Anil Seth argued that emphasis on the so-called hard problem is a distraction from what he calls the \"real problem\": understanding the neurobiology underlying", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia2.pdf" - }, - { - "text": "- Weisberg, Josh. \"The hard problem of consciousness\" (http://www.iep.utm.edu/hard-con). 
*Internet Encyclopedia of Philosophy*.\nRetrieved from \"https://en.wikipedia.org/w/index.php?title=Hard_problem_of_consciousness&oldid=1261818884\"", - "page_start": 27, - "page_end": 27, - "source_file": "wikipedia2.pdf" - }, - { - "text": "consciousness, namely the neural correlates of various conscious processes.[22] This more modest goal is the focus of most scientists working on consciousness.[133] Psychologist Susan Blackmore believes, by contrast, that the search for the neural correlates of consciousness is futile and itself predicated on an erroneous belief in the hard problem of consciousness.[134]\n\n### **Computational cognition**\n\nA functionalist view in cognitive science holds that the mind is an information processing system, and that cognition and consciousness together are a form of computation. Cognition, distinct from consciousness, is explained by neural computation in the computational theory of cognition. The computational theory of mind asserts that not only cognition, but also phenomenal consciousness or qualia, are computational. 
While the computation system is realized by neurons rather than electronics, in theory it would be possible for artificial intelligence to be conscious.\n\n### **Integrated information theory**\n\nIntegrated information theory (IIT), developed by the neuroscientist and psychiatrist Giulio Tononi in 2004 and more recently also advocated by Koch, is one of the most discussed models of consciousness in neuroscience and elsewhere.[135][136] The theory proposes an identity between consciousness and integrated information, with the latter item (denoted as Φ) defined mathematically and thus in principle measurable.[136][137] The hard problem of consciousness, write Tononi and Koch, may indeed be intractable when working from matter to consciousness.[15] However, because IIT inverts this relationship and works from phenomenological axioms to matter, they say it could be able to solve the hard problem.[15] In this vein, proponents have said the theory goes beyond identifying human neural correlates and can be extrapolated to all physical systems. Tononi wrote (along with two colleagues):\n\n> While identifying the \"neural correlates of consciousness\" is undoubtedly important, it is hard to see how it could ever lead to a satisfactory explanation of what consciousness is and how it comes about. 
As will be illustrated below, IIT offers a way to analyze systems of mechanisms to determine if they are properly structured to give rise to consciousness, how much of it, and of which kind.[138]\n\nAs part of a broader critique of IIT, Michael Cerullo suggested that the theory's proposed explanation is in fact for what he dubs (following Scott Aaronson) the \"Pretty Hard Problem\" of methodically inferring which physical systems are conscious—but would not solve Chalmers' hard problem.[136] \"Even if IIT is correct,\" he argues, \"it does not explain why integrated information generates (or is) consciousness.\"[136] Chalmers agrees that IIT, if correct, would solve the \"Pretty Hard Problem\" rather than the hard problem.[139]\n\n### **Global workspace theory**\n\nGlobal workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988.[140] Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage.[140] This theater integrates inputs", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia2.pdf" - }, - { - "text": "This stance has recently taken on the name of *illusionism*: the view that phenomenal consciousness is an illusion. The term was popularized by the philosopher Keith Frankish. [60] Frankish argues that \"illusionism\" is preferable to \"eliminativism\" for labelling the view that phenomenal consciousness is an illusion. More substantively, Frankish argues that illusionism about phenomenal consciousness is preferable to realism about phenomenal consciousness. He states: \"Theories of consciousness typically address the hard problem. They accept that phenomenal consciousness is real and aim to explain how it comes to exist. 
There is, however, another approach, which holds that phenomenal consciousness is an illusion and aims to explain why it seems to exist.\"[19] Frankish concludes that illusionism \"replaces the hard problem with the illusion problem—the problem of explaining how the illusion of phenomenality arises and why it is so powerful.\"[19]\n\nThe philosopher Daniel Dennett is another prominent figure associated with illusionism. After Frankish published a paper in the Journal of Consciousness Studies titled *Illusionism as a Theory of Consciousness,*[60] Dennett responded with his own paper with the spin-off title *Illusionism as the Obvious Default Theory of Consciousness.*[61] Dennett has been arguing for the illusory status of consciousness since early on in his career. For example, in 1979 he published a paper titled *On the Absence of Phenomenology* (where he argues for the nonexistence of phenomenal consciousness).[70] Similar ideas have been explicated in his 1991 book Consciousness Explained. [71] Dennett argues that the so-called \"hard problem\" will be solved in the process of solving what Chalmers terms the \"easy problems\".[16] He compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things.[72] To show how people might be commonly fooled into overstating the accuracy of their introspective abilities, he describes a phenomenon called change blindness, a visual process that involves failure to detect scenery changes in a series of alternating images.[73] He accordingly argues that consciousness need not be what it seems to be based on introspection. 
To address the question of the hard problem, or how and why physical processes give rise to experience, Dennett states that the phenomenon of having experience is nothing more than the performance of functions or the production of behavior, which can also be referred to as the easy problems of consciousness.[16] Thus, Dennett argues that the hard problem of experience is included among—not separate from—the easy problems, and therefore they can only be explained together as a cohesive unit.[72]\n\nEliminativists differ on the role they believe intuitive judgement plays in creating the apparent reality of consciousness. The philosopher Jacy Reese Anthis is of the position that this issue is born of an overreliance on intuition, calling philosophical discussions on the topic of consciousness a form of \"intuition jousting\". [74] But when the issue is tackled with \"formal argumentation\" and \"precise semantics\" then the hard problem will dissolve.[74] The philosopher Elizabeth Irvine, in contrast, can be read as having the opposite view, since she argues that phenomenal properties (that is, properties of consciousness) do not exist in our common-sense view of the world. She states that \"the hard problem of consciousness may not be a genuine problem for non-philosophers (despite its overwhelming obviousness to philosophers).\"[75]\n\nA complete illusionist theory of consciousness must include the description of a mechanism by which the illusion of subjective experience is had and reported by people. Various philosophers and scientists have proposed possible theories.[76] For example, in his book *Consciousness and the Social Brain* neuroscientist Michael Graziano advocates what he calls attention schema theory, in which our perception", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia2.pdf" - }, - { - "text": "physical constituents. 
For example, water is nothing more than H2O molecules, and understanding everything about H2O molecules is to understand everything there is to know about water. But consciousness is not like this. Knowing everything there is to know about the brain, or any physical system, is not to know everything there is to know about consciousness. Consciousness, then, must not be purely physical.[27]\n\n#### **Implications for physicalism**\n\nChalmers's idea contradicts physicalism, sometimes labelled materialism. This is the view that everything that exists is a physical or material thing, so everything can be reduced to microphysical things. For example, the rings of Saturn are a physical thing because they are nothing more than a complex arrangement of a large\n\nnumber of subatomic particles interacting in a certain way. According to physicalism, everything, including consciousness, can be explained by appeal to its microphysical constituents. Chalmers's *hard problem* presents a counterexample to this view and to other phenomena like swarms of birds, since it suggests that consciousness, like swarms of birds, cannot be reductively explained by appealing to their physical constituents. Thus, if the hard problem is a real problem then physicalism must be false, and if physicalism is true then the hard problem must not be a real problem.\n\nThe hard problem is often illustrated by appealing to the logical possibility of inverted visible spectra. If there is no logical contradiction in supposing that one's colour vision could be inverted, it follows that mechanistic explanations of visual processing do not determine facts about what it is like to see colours.\n\nA swarm of birds showing high order structure emerging from simpler physical constituents\n\nThough Chalmers rejects physicalism, he is still a naturalist. [27]\n\n#### **Historical precedents**\n\nThe hard problem of consciousness has scholarly antecedents considerably earlier than Chalmers. 
Chalmers himself notes that \"a number of thinkers in the recent and distant past\" have \"recognised the particular difficulties of explaining consciousness.\"[33] He states that all his original 1996 paper contributed to the discussion was \"a catchy name, a minor reformulation of philosophically familiar points\".[33]\n\nAmong others, thinkers who have made arguments similar to Chalmers' formulation of the hard problem include Isaac Newton, [34] John Locke, [35] Gottfried Wilhelm Leibniz, [36][34] John Stuart Mill, [37] and Thomas Henry Huxley. [38][34] Likewise, Asian philosophers like Dharmakirti and Guifeng Zongmi discussed the problem of how consciousness arises from unconscious matter. [34][39][40][41]\n\n#### **Related concepts**\n\n#### **The mind–body problem**", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia2.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia2.pdf", - "query": "What is David Chalmer's definition of \"consciousness\" ?", - "target_page": 2, - "target_passage": "Chalmers uses Thomas Nagel's definition of consciousness: \"the feeling of what it is like to be something.\"", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "Today there is a strong tendency to simply *equate* consciousness with the qualia. Yet there is clearly something not quite right about this. The \"itchiness of itches\" and the \"hurtfulness of pain\" are qualities we are conscious *of*. So philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely *consciousness* of contents, the very givenness of whatever is subjectively given. And therefore the problem of consciousness does not pertain so much to some alleged \"mysterious, nonpublic objects\", i.e. 
objects that seem to be only \"visible\" to the respective subject, but rather to the nature of \"seeing\" itself (and in today's philosophy of mind astonishingly little is said about the latter).[129]\n\n# **Relationship to scientific frameworks**\n\nMost neuroscientists and cognitive scientists believe that Chalmers' alleged \"hard problem\" will be solved, or be shown to not be a real problem, in the course of the solution of the so-called \"easy problems\", although a significant minority disagrees.[9][130]\n\n#### **Neural correlates of consciousness**\n\nSince 1990, researchers including the molecular biologist Francis Crick and the neuroscientist Christof Koch have made significant progress toward identifying which neurobiological events occur concurrently to the experience of subjective consciousness.[131] These postulated events are referred to as *neural correlates of consciousness* or NCCs. However, this research arguably addresses the question of *which* neurobiological mechanisms are linked to consciousness but not the question of *why* they should give rise to consciousness at all, the latter being the hard problem of consciousness as Chalmers formulated it. In \"On the Search for the Neural Correlate of Consciousness\", Chalmers said he is confident that, granting the principle that something such as what he terms \"global availability\" can be used as an indicator of consciousness, the neural correlates will be discovered \"in a century or two\".[132] Nevertheless, he stated regarding their relationship to the hard problem of consciousness:\n\n> One can always ask why these processes of availability should give rise to consciousness in the first place. As yet we cannot explain why they do so, and it may well be that full details about the processes of availability will still fail to answer this question. 
Certainly, nothing in the standard methodology I have outlined answers the question; that methodology assumes a relation between availability and consciousness, and therefore does nothing to explain it. [...] So the hard problem remains. But who knows: Somewhere along the line we may be led to the relevant insights that show why the link is there, and the hard problem may then be solved.[132]\n\nThe neuroscientist and Nobel laureate Eric Kandel wrote that locating the NCCs would not solve the hard problem, but rather one of the so-called easy problems to which the hard problem is contrasted.[133] Kandel went on to note Crick and Koch's suggestion that once the binding problem—understanding what accounts for the unity of experience—is solved, it will be possible to solve the hard problem empirically. [133] However, neuroscientist Anil Seth argued that emphasis on the so-called hard problem is a distraction from what he calls the \"real problem\": understanding the neurobiology underlying", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia2.pdf" - }, - { - "text": "philosophers \"were to use panhuman concepts expressed in crosstranslatable words\" (such as *know*, *think*, or *feel*) then the hard problem would dissolve.[96] David Chalmers has responded to these criticisms by saying that he will not \"apologize for using technical terms in an academic article . . . they play a key role in efficient communication in every discipline, including Wierzbicka's\".[89]\n\n## **Type-C Materialism**\n\nType-C materialists acknowledge a distinction between knowledge and experience[98] without asserting a more complete explanation for the experiential phenomenon. 
One taking this view would admit that there is an explanatory gap for which no answer to date may be satisfactory, but trust that inevitably the gap will be closed.[52] This is described by analogy to progression in other areas of science, such as massenergy equivalence which would have been unfathomable in ancient times,[52] abiogenesis which was once considered paradoxical from an evolutionary framework,[99][98] or a suspected future theory of everything combining relativity and quantum mechanics. Similarly, type-C materialism posits that the problem of consciousness is a consequence of our ignorance[71][100] but just as resolvable as any other question in neuroscience.\n\nBecause the explanatory question of consciousness is evaded, type-C materialism does not presuppose[101] the descriptive question, for instance that there is any self-consciousness, wakefulness, or even sentience[102] in a rock. Principally, the basis for the argument arises from the apparently high correlation of consciousness with living brain tissue,[103] thereby rejecting panpsychism[101] without explicitly formulating physical causation. More specifically this position denies the existence of philosophical zombies[64] for which there is an absence of data and no proposed method of testing.[104][105] Whether via the inconceivability or actual nonexistence of zombies, a contradiction is exposed nullifying the premise of the consciousness problem's \"hardness\".\n\nType-C materialism is compatible with several cases and could collapse into one of these other metaphysical views[52] depending on scientific discovery and its interpretation. With evidence of emergence, it resolves to strong reductionism under type A. 
With a different, possibly cultural paradigm for understanding consciousness, it resolves to type-B materialism.[32] If consciousness is explained by the quantum mind, then it resolves to property dualism under type D.[106] With characterization of intrinsic properties in physics extending beyond structure and dynamics, it could resolve to type-F monism.[52]\n\n# **Type-D Dualism**\n\nDualism views consciousness as either a non-physical substance separate from the brain or a non-physical property of the physical brain.[107] Dualism is the view that the mind is irreducible to the physical body. [107] There are multiple dualist accounts of the causal relationship between the mental and the physical, of which interactionism and epiphenomenalism are the most common today. Interactionism posits that the mental and physical causally impact one another, and is associated with the thought of René Descartes (1596–1650).[52] Epiphenomenalism holds the mental is causally dependent on the physical, but does not in turn causally impact it.[52]\n\nIn contemporary philosophy, interactionism has been defended by philosophers including Martine Nida-Rümelin, [108] while epiphenomenalism has been defended by philosophers including Frank Jackson[109][110] (although Jackson later changed his stance to physicalism).[111] Chalmers has also", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia2.pdf" - }, - { - "text": "This stance has recently taken on the name of *illusionism*: the view that phenomenal consciousness is an illusion. The term was popularized by the philosopher Keith Frankish. [60] Frankish argues that \"illusionism\" is preferable to \"eliminativism\" for labelling the view that phenomenal consciousness is an illusion. More substantively, Frankish argues that illusionism about phenomenal consciousness is preferable to realism about phenomenal consciousness. He states: \"Theories of consciousness typically address the hard problem. 
They accept that phenomenal consciousness is real and aim to explain how it comes to exist. There is, however, another approach, which holds that phenomenal consciousness is an illusion and aims to explain why it seems to exist.\"[19] Frankish concludes that illusionism \"replaces the hard problem with the illusion problem—the problem of explaining how the illusion of phenomenality arises and why it is so powerful.\"[19]\n\nThe philosopher Daniel Dennett is another prominent figure associated with illusionism. After Frankish published a paper in the Journal of Consciousness Studies titled *Illusionism as a Theory of Consciousness,*[60] Dennett responded with his own paper with the spin-off title *Illusionism as the Obvious Default Theory of Consciousness.*[61] Dennett has been arguing for the illusory status of consciousness since early on in his career. For example, in 1979 he published a paper titled *On the Absence of Phenomenology* (where he argues for the nonexistence of phenomenal consciousness).[70] Similar ideas have been explicated in his 1991 book Consciousness Explained. [71] Dennett argues that the so-called \"hard problem\" will be solved in the process of solving what Chalmers terms the \"easy problems\".[16] He compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things.[72] To show how people might be commonly fooled into overstating the accuracy of their introspective abilities, he describes a phenomenon called change blindness, a visual process that involves failure to detect scenery changes in a series of alternating images.[73] He accordingly argues that consciousness need not be what it seems to be based on introspection. 
To address the question of the hard problem, or how and why physical processes give rise to experience, Dennett states that the phenomenon of having experience is nothing more than the performance of functions or the production of behavior, which can also be referred to as the easy problems of consciousness.[16] Thus, Dennett argues that the hard problem of experience is included among—not separate from—the easy problems, and therefore they can only be explained together as a cohesive unit.[72]\n\nEliminativists differ on the role they believe intuitive judgement plays in creating the apparent reality of consciousness. The philosopher Jacy Reese Anthis is of the position that this issue is born of an overreliance on intuition, calling philosophical discussions on the topic of consciousness a form of \"intuition jousting\". [74] But when the issue is tackled with \"formal argumentation\" and \"precise semantics\" then the hard problem will dissolve.[74] The philosopher Elizabeth Irvine, in contrast, can be read as having the opposite view, since she argues that phenomenal properties (that is, properties of consciousness) do not exist in our common-sense view of the world. She states that \"the hard problem of consciousness may not be a genuine problem for non-philosophers (despite its overwhelming obviousness to philosophers).\"[75]\n\nA complete illusionist theory of consciousness must include the description of a mechanism by which the illusion of subjective experience is had and reported by people. Various philosophers and scientists have proposed possible theories.[76] For example, in his book *Consciousness and the Social Brain* neuroscientist Michael Graziano advocates what he calls attention schema theory, in which our perception", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia2.pdf" - }, - { - "text": "- 1. Chalmers, David (1995). \"Facing up to the problem of consciousness\" (http://consc.net/pape rs/facing.pdf) (PDF). 
*Journal of Consciousness Studies*. **2** (3): 200–219.\n- 2. Harnad, Stevan (1995). \"Why and how we are not zombies\" (http://cogprints.org/1601/6/har nad95.zombies.html). *Journal of Consciousness Studies*. **1**: 164–167. See also Harnad, Stevan (April 2000). \"How/why the mind–body problem is hard\" (http://cogprints.org/1617/1/ harnad00.mind.humphrey.html). *Journal of Consciousness Studies*. **7** (4): 54–61.\n- 3. See Cooney's foreword to the reprint of Chalmers' paper: Brian Cooney, ed. (1999). \"Chapter 27: Facing up to the problem of consciousness\". *The place of mind*. Cengage Learning. pp. 382 *ff*. ISBN 978-0534528256.\n- 4. Problem of Consciousness (Tuscan 1994) (https://www.youtube.com/watch?v=_lWp-6hH_6 g%7CHard)\n- 5. JCS vol. 4, pp. 3-46, 1997\n- 6. Chalmers, David (1997). \"Moving forward on the problem of consciousness\". *Journal of Consciousness Studies*. **4** (1): 3–46.\n- 7. Shear, Jonathan (1997). *Explaining Consciousness: The Hard Problem*. MIT Press. ISBN 978-0262692212.\n- 8. \"Episode 83, The David Chalmers Interview (Part I Consciousness)\" (https://thepanpsycas t.com/panpsycast2/episode83-1). *The Panpsycast Philosophy Podcast*. 19 July 2020. Retrieved 2020-09-05.\n- 9. Pinker, Steven (29 January 2007). \"The Brain: The Mystery of Consciousness\" (http://conten t.time.com/time/magazine/article/0,9171,1580394-1,00.html). *Time*. Retrieved 19 December 2018.\n- 10. Levine, Joseph (2009-01-15). \"The Explanatory Gap\" (https://www.oxfordhandbooks.com/vi ew/10.1093/oxfordhb/9780199262618.001.0001/oxfordhb-9780199262618-e-17). *The Oxford Handbook of Philosophy of Mind*: 281–291. doi:10.1093/oxfordhb/9780199262618.003.0017 (https://doi.org/10.1093%2Foxfordhb%2F9 780199262618.003.0017). ISBN 978-0199262618.\n- 11. McGinn, Colin (20 February 2012). \"All machine and no ghost?\" (http://www.newstatesman. com/ideas/2012/02/consciousness-mind-brain). *New Statesman*. Retrieved 27 March 2012.\n- 12. Block, Ned (2002). 
\"The Harder Problem of Consciousness\" (https://philpapers.org/rec/BLO THP). *The Journal of Philosophy*. **99** (8): 391–425. doi:10.2307/3655621 (https://doi.org/10. 2307%2F3655621). JSTOR 3655621 (https://www.jstor.org/stable/3655621). S2CID 111383062 (https://api.semanticscholar.org/CorpusID:111383062).\n- 13. Varela, F.J. (1 April 1996). \"Neurophenomenology: a methodological remedy for the hard problem\" (https://www.ingentaconnect.com/content/imp/jcs/1996/00000003/00000004/718). *Journal of Consciousness Studies*. **3** (4): 330–349.\n- 14. Tononi, Giulio; Boly, Melanie; Massimini, Marcello; Koch, Christof (July 2016). \"Integrated information theory: from consciousness to its physical substrate\". *Nature Reviews Neuroscience*. **17** (7): 450–461. doi:10.1038/nrn.2016.44 (https://doi.org/10.1038%2Fnrn.20 16.44). PMID 27225071 (https://pubmed.ncbi.nlm.nih.gov/27225071). S2CID 21347087 (htt ps://api.semanticscholar.org/CorpusID:21347087).\n- 15. Tononi, Giulio; Koch, Christof (March 2015). \"Consciousness: here, there and everywhere?\" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4387509). *Philosophical Transactions of the Royal Society B: Biological Sciences*. **370** (1668): 20140167. doi:10.1098/rstb.2014.0167 (ht tps://doi.org/10.1098%2Frstb.2014.0167). PMC 4387509 (https://www.ncbi.nlm.nih.gov/pmc/ articles/PMC4387509). PMID 25823865 (https://pubmed.ncbi.nlm.nih.gov/25823865).\n- 16. Dennett, Daniel C. (2013). \"The tuned deck\" (https://books.google.com/books?id=sicVcPjfPx UC&pg=RA3-PA59). *Intuition pumps and other tools for thinking*. W. W. Norton & Company. pp. 310 *ff*. ISBN 978-0393240689. and also \"Commentary on Chalmers\": Dennett, Daniel C. (1996). \"Facing backwards on the problem of consciousness\" (http://ase.tufts.edu/cogstud/d ennett/papers/chalmers.htm). *Journal of Consciousness Studies*. 
**3** (1): 4–6.", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia2.pdf" - }, - { - "text": "consciousness, namely the neural correlates of various conscious processes.[22] This more modest goal is the focus of most scientists working on consciousness.[133] Psychologist Susan Blackmore believes, by contrast, that the search for the neural correlates of consciousness is futile and itself predicated on an erroneous belief in the hard problem of consciousness.[134]\n\n### **Computational cognition**\n\nA functionalist view in cognitive science holds that the mind is an information processing system, and that cognition and consciousness together are a form of computation. Cognition, distinct from consciousness, is explained by neural computation in the computational theory of cognition. The computational theory of mind asserts that not only cognition, but also phenomenal consciousness or qualia, are computational. While the computation system is realized by neurons rather than electronics, in theory it would be possible for artificial intelligence to be conscious.\n\n### **Integrated information theory**\n\nIntegrated information theory (IIT), developed by the neuroscientist and psychiatrist Giulio Tononi in 2004 and more recently also advocated by Koch, is one of the most discussed models of consciousness in neuroscience and elsewhere.[135][136] The theory proposes an identity between consciousness and integrated information, with the latter item (denoted as Φ) defined mathematically and thus in principle measurable.[136][137] The hard problem of consciousness, write Tononi and Koch, may indeed be intractable when working from matter to consciousness.[15] However, because IIT inverts this relationship and works from phenomenological axioms to matter, they say it could be able to solve the hard problem.[15] In this vein, proponents have said the theory goes beyond identifying human neural correlates and can be extrapolated to all physical systems. 
Tononi wrote (along with two colleagues):\n\n> While identifying the \"neural correlates of consciousness\" is undoubtedly important, it is hard to see how it could ever lead to a satisfactory explanation of what consciousness is and how it comes about. As will be illustrated below, IIT offers a way to analyze systems of mechanisms to determine if they are properly structured to give rise to consciousness, how much of it, and of which kind.[138]\n\nAs part of a broader critique of IIT, Michael Cerullo suggested that the theory's proposed explanation is in fact for what he dubs (following Scott Aaronson) the \"Pretty Hard Problem\" of methodically inferring which physical systems are conscious—but would not solve Chalmers' hard problem.[136] \"Even if IIT is correct,\" he argues, \"it does not explain why integrated information generates (or is) consciousness.\"[136] Chalmers agrees that IIT, if correct, would solve the \"Pretty Hard Problem\" rather than the hard problem.[139]\n\n### **Global workspace theory**\n\nGlobal workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988.[140] Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage.[140] This theater integrates inputs", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia2.pdf" - }, - { - "text": "from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit \"audience\").[140] The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene. 
[141]\n\nIn his original paper outlining the hard problem of consciousness, Chalmers discussed GWT as a theory that only targets one of the \"easy problems\" of consciousness.[1] In particular, he said GWT provided a promising account of how information in the brain could become globally accessible, but argued that \"now the question arises in a different form: why should global accessibility give rise to conscious experience? As always, this bridging question is unanswered.\"[1] J. W. Dalton similarly criticized GWT on the grounds that it provides, at best, an account of the cognitive *function* of consciousness, and fails to explain its experiential aspect.[142] By contrast, A. C. Elitzur argued: \"While [GWT] does not address the 'hard problem', namely, the very nature of consciousness, it constrains any theory that attempts to do so and provides important insights into the relation between consciousness and cognition.\"[143]\n\nFor his part, Baars writes (along with two colleagues) that there is no hard problem of explaining qualia over and above the problem of explaining causal functions, because qualia are entailed by neural activity and themselves causal.[21] Dehaene, in his 2014 book *Consciousness and the Brain*, rejected the concept of qualia and argued that Chalmers' \"easy problems\" of consciousness are actually the hard problems.[20] He further stated that the \"hard problem\" is based only upon ill-defined intuitions that are continually shifting as understanding evolves:[20]\n\n> Once our intuitions are educated by cognitive neuroscience and computer simulations, Chalmers' hard problem will evaporate. The hypothetical concept of qualia, pure mental experience, detached from any information-processing role, will be viewed as a peculiar idea of the prescientific era, much like vitalism... 
[Just as science dispatched vitalism] the science of consciousness will keep eating away at the hard problem of consciousness until it vanishes.\n\n# **Meta-problem**\n\nIn 2018, Chalmers highlighted what he calls the \"**meta-problem of consciousness**\", another problem related to the hard problem of consciousness:[76]\n\n> The meta-problem of consciousness is (to a first approximation) the problem of explaining why we think that there is a [hard] problem of consciousness.\n\nIn his \"second approximation\", he says it is the problem of explaining the behavior of \"phenomenal reports\", and the behavior of expressing a belief that there is a hard problem of consciousness.[76]\n\nExplaining its significance, he says:[76]\n\nAlthough the meta-problem is strictly speaking an easy problem, it is deeply connected to the hard problem. We can reasonably hope that a solution to the meta-problem will shed significant light on the hard problem. A particularly strong line holds that a solution to the", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia2.pdf" - }, - { - "text": "defended versions of both positions as plausible.[52] Traditional dualists such as Descartes believed the mental and the physical to be two separate substances, or fundamental types of entities (hence \"substance dualism\"); some more recent dualists, however, accept only one substance, the physical, but state it has both mental and physical properties (hence \"property dualism\").[107]\n\n### **Type-E Dualism**\n\n### **Type-F Monism**\n\nMeanwhile, panpsychism and neutral monism, broadly speaking, view consciousness as intrinsic to matter. 
[52] In its most basic form, panpsychism holds that all physical entities have minds (though its proponents take more qualified positions),[112] while neutral monism, in at least some variations, holds that entities are composed of a substance with mental and physical aspects—and is thus sometimes described as a type of panpsychism.[113]\n\nForms of panpsychism and neutral monism were defended in the early twentieth century by the psychologist William James, [114][115][note 2] the philosopher Alfred North Whitehead, [115] the physicist Arthur Eddington, [116][117] and the philosopher Bertrand Russell, [112][113] and interest in these views has been revived in recent decades by philosophers including Thomas Nagel, [115] Galen Strawson, [115][118] Philip Goff, [115] and David Chalmers.[112] Chalmers describes his overall view as \"naturalistic dualism\",[1] but he says panpsychism is in a sense a form of physicalism,[52] as does Strawson.[118] Proponents of panpsychism argue it solves the hard problem of consciousness parsimoniously by making consciousness a fundamental feature of reality. [43][119]\n\n#### **Idealism and cosmopsychism**\n\nA traditional solution to the hard problem is idealism, according to which consciousness is fundamental and not simply an emergent property of matter. It is claimed that this avoids the hard problem entirely. [120] Objective idealism and cosmopsychism consider mind or consciousness to be the fundamental substance of the universe. Proponents claim that this approach is immune to both the hard problem of consciousness and the combination problem that affects panpsychism.[121][122][123]\n\nFrom an idealist perspective, matter is a representation or image of mental processes. 
Supporters suggest that this avoids the problems associated with the materialist view of mind as an emergent property of a physical brain.[124] Critics argue that this then leads to a decombination problem: how is it possible to split a single, universal conscious experience into multiple, distinct conscious experiences? In response, Bernardo Kastrup claims that nature hints at a mechanism for this in the condition dissociative identity disorder (previously known as Multiple Personality Disorder).[125] Kastrup proposes dissociation as an example from nature showing that multiple minds with their own individual subjective experience could develop within a single universal mind.\n\nCognitive psychologist Donald D. Hoffman uses a mathematical model based around conscious agents, within a fundamentally conscious universe, to support conscious realism as a description of nature—one that falls within the objective idealism approaches to the hard problem: \"The objective world, i.e., the world whose existence does not depend on the perceptions of a particular conscious agent, consists entirely of conscious agents.\"[126]", - "page_start": 12, - "page_end": 12, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Steven Novella has dismissed it as \"the hard non-problem\".[24] According to a 2020 PhilPapers survey, a majority (62.42%) of the philosophers surveyed said they believed that the hard problem is a genuine problem, while 29.72% said that it does not exist.[25]\n\nThere are a number of other potential philosophical problems that are related to the Hard Problem. 
Ned Block believes that there exists a \"Harder Problem of Consciousness\", due to the possibility of different physical and functional neurological systems potentially having phenomenal overlap.[12] Another potential philosophical problem which is closely related to Benj Hellie's vertiginous question, dubbed \"The Even Harder Problem of Consciousness\", refers to why a given individual has their own particular personal identity, as opposed to existing as someone else.[26]\n\n# **Overview**\n\nCognitive scientist David Chalmers first formulated the hard problem in his paper \"Facing up to the problem of consciousness\" (1995)[1] and expanded upon it in *The Conscious Mind* (1996). His works provoked comment. Some, such as philosopher David Lewis and Steven Pinker, have praised Chalmers for his argumentative rigour and \"impeccable clarity\".[27] Pinker later said, in 2018, \"In the end I still think that the hard problem is a meaningful conceptual problem, but agree with Dennett that it is not a meaningful scientific problem. No one will ever get a grant to study whether you are a zombie or whether the same Captain Kirk walks on the deck of the Enterprise and the surface of Zakdorn. And I agree with several other philosophers that it may be futile to hope for a solution at all, precisely because it is a conceptual problem, or, more accurately, a problem with our concepts.\"[28] Daniel Dennett and Patricia Churchland, among others, believe that the hard problem is best seen as a collection of easy problems that will be solved through further analysis of the brain and behaviour. [29][30]\n\nConsciousness is an ambiguous term. It can be used to mean self consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel's definition of consciousness: \"*the feeling of what it is like to be something.\"* Consciousness, in this sense, is synonymous with *experience.*[31][27]\n\n### **Chalmers' formulation**\n\n. . 
.even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: *Why is the performance of these functions accompanied by experience?*\n\n—David Chalmers, Facing up to the problem of consciousness\n\nThe problems of consciousness, Chalmers argues, are of two kinds: the *easy problems* and the *hard problem*.\n\n#### **Easy problems**\n\nThe easy problems are amenable to reductive inquiry. They are a logical consequence of lower-level facts about the world, similar to how a clock's ability to tell time is a logical consequence of its clockwork and structure, or a hurricane being a logical consequence of the structures and functions of certain weather", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia2.pdf" - }, - { - "text": "# **Hard problem of consciousness**\n\nIn the philosophy of mind, the **hard problem of consciousness** is to explain why and how humans and other organisms have qualia, phenomenal consciousness, or subjective experience. 
[1][2] It is contrasted with the \"easy problems\" of explaining why and how physical systems give a (healthy) human being the ability to discriminate, to integrate information, and to perform behavioral functions such as watching, listening, speaking (including generating an utterance that appears to refer to personal behaviour or belief), and so forth.[1] The easy problems are amenable to functional explanation—that is, explanations that are mechanistic or behavioral—since each physical system can be explained (at least in principle) purely by reference to the \"structure and dynamics\" that underpin the phenomenon.[1][3]\n\nProponents of the hard problem argue that it is categorically different from the easy problems since no mechanistic or behavioral explanation could explain the character of an experience, not even in principle. Even after all the relevant functional facts are explicated, they argue, there will still remain a further question: \"why is the performance of these functions accompanied by experience?\"[1] To bolster their case, proponents of the hard problem frequently turn to various philosophical thought experiments, involving philosophical zombies (which, they claim, are conceivable) or inverted qualia, or the claimed ineffability of colour experiences, or the claimed unknowability of foreign states of consciousness, such as the experience of being a bat.\n\nThe terms \"hard problem\" and \"easy problems\" were coined by the philosopher David Chalmers in a 1994 talk given at The Science of Consciousness conference held in Tucson, Arizona.[4] The following year, the main talking points of Chalmers' talk were published in *The Journal of Consciousness Studies*. 
[1] The publication gained significant attention from consciousness researchers and became the subject of a special volume of the journal,[5][6] which was later published into a book.[7] In 1996, Chalmers published *The Conscious Mind*, a book-length treatment of the hard problem, in which he elaborated on his core arguments and responded to counterarguments. His use of the word *easy* is \"tongue-in-cheek\".[8] As the\n\nChalmers on stage for an Alan Turing Year event at De La Salle University, Manila, 27 March 2012\n\ncognitive psychologist Steven Pinker puts it, they are about as easy as going to Mars or curing cancer. \"That is, scientists more or less know what to look for, and with enough brainpower and funding, they would probably crack it in this century.\"[9]\n\nThe existence of the hard problem is disputed. It has been accepted by some philosophers of mind such as Joseph Levine, [10] Colin McGinn, [11] and Ned Block[12] and cognitive neuroscientists such as Francisco Varela, [13] Giulio Tononi, [14][15] and Christof Koch. [14][15] On the other hand, its existence is denied by other philosophers of mind, such as Daniel Dennett, [16] Massimo Pigliucci, [17] Thomas Metzinger, Patricia Churchland, [18] and Keith Frankish, [19] and by cognitive neuroscientists such as Stanislas Dehaene, [20] Bernard Baars, [21] Anil Seth, [22] and Antonio Damasio. [23] Clinical neurologist and skeptic", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia2.pdf" - }, - { - "text": "of being conscious is merely an error in perception, held by brains which evolved to hold erroneous and incomplete models of their own internal workings, just as they hold erroneous and incomplete models of their own bodies and of the external world.[77][78]\n\n#### **Criticisms**\n\nThe main criticisms of eliminative materialism and illusionism hinge on the counterintuitive nature of the view. Arguments of this form are called *Moorean Arguments*. 
A Moorean argument seeks to undermine the conclusion of an argument by asserting that the negation of that conclusion is more certain than the premises of the argument.[79]\n\nThe roots of the Moorean Argument against illusionism extend back to Augustine of Hippo who stated that he could not be deceived regarding his own existence, since the very act of being deceived secures the existence of a being there to be the recipient of that deception.[note 1][80]\n\nIn the Early-Modern era, these arguments were repopularized by René Descartes, who coined the now famous phrase *\"Je pense, donc je suis\"* (\"I think, therefore I am\").[81] Descartes argued that even if he was maximally deceived (because, for example, an evil demon was manipulating all his senses) he would still know with certainty that his mind exists, because the state of being deceived requires a mind as a prerequisite.[82]\n\nThis same general argumentative structure is still in use today. For example, in 2002 David Chalmers published an explicitly Moorean argument against illusionism. The argument goes like this: The reality of consciousness is more certain than any theoretical commitments (to, for example, physicalism) that may be motivating the illusionist to deny the existence of consciousness. The reason for this is because we have direct \"acquaintance\" with consciousness, but we do not have direct acquaintance with anything else (including anything that could inform our beliefs in consciousness being an illusion). In other words: consciousness can be known directly, so the reality of consciousness is more certain than any philosophical or scientific theory that says otherwise.[83] Chalmers concludes that \"there is little doubt that something like the Moorean argument is the reason that most people reject illusionism and many find it crazy.\"[84]\n\nEliminative materialism and illusionism have been the subject of criticism within the popular press. 
One highly cited example comes from the philosopher Galen Strawson who wrote an article in the New York Review of Books titled \"The Consciousness Deniers\". In it, Strawson describes illusionism as the \"silliest claim ever made\", next to which \"every known religious belief is only a little less sensible than the belief that the grass is green.\"[85] Another notable example comes from Christof Koch (a neuroscientist and one of the leading proponents of Integrated Information Theory) in his popular science book *The Feeling of Life Itself*. In the early pages of the book, Koch describes eliminativism as the \"metaphysical counterpart to Cotard's syndrome, a psychiatric condition in which patients deny being alive.\"[86] Koch takes the prevalence of eliminativism as evidence that \"much of twentieth-century analytic philosophy has gone to the dogs\".[87]\n\n### **Type-B Materialism**\n\nType-B Materialism, also known as *Weak Reductionism* or *A Posteriori Physicalism*, is the view that the hard problem stems from human psychology, and is therefore not indicative of a genuine ontological gap between consciousness and the physical world.[43] Like Type-A Materialists, Type-B Materialists are", - "page_start": 9, - "page_end": 9, - "source_file": "wikipedia2.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia2.pdf", - "query": "What is the role of the PhilPapers organization ?", - "target_page": 6, - "target_passage": " PhilPapers is an organization that archives academic philosophy papers and periodically surveys professional philosophers about their views.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- Blair, J. Anthony; Johnson, Ralph H. (2000). \"Informal Logic: An Overview\" (https://philpaper s.org/rec/BLAILA-3). *Informal Logic*. **20** (2): 93–107. doi:10.22329/il.v20i2.2262 (https://doi.o rg/10.22329%2Fil.v20i2.2262). 
Archived (https://web.archive.org/web/20211209195317/http s://philpapers.org/rec/BLAILA-3) from the original on 9 December 2021. Retrieved 29 December 2021.\n- Blair, J. Anthony (20 October 2011). *Groundwork in the Theory of Argumentation: Selected Papers of J. Anthony Blair*. Springer Science & Business Media. p. 47. ISBN 978-94-007- 2363-4.\n- Bobzien, Susanne (2020). \"Ancient Logic: 2. Aristotle\" (https://plato.stanford.edu/entries/logi c-ancient/#Ari). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20180828102117/https://plato.sta nford.edu/entries/logic-ancient/#Ari) from the original on 28 August 2018. Retrieved 3 January 2022.\n- Borchert, Donald, ed. (2006a). \"Computability Theory\". *Macmillan Encyclopedia of Philosophy Volume 2* (https://philpapers.org/rec/BORMEO) (2nd ed.). Macmillan. pp. 372– 390. ISBN 978-0-02-865782-0.\n- Borchert, Donald (2006b). \"Induction\". *Macmillan Encyclopedia of Philosophy Volume 4* (htt ps://philpapers.org/rec/BORMEO) (2nd ed.). Macmillan. pp. 635–648. ISBN 978-0-02- 865784-4. Archived (https://web.archive.org/web/20210112065913/https://philpapers.org/re c/BORMEO) from the original on 12 January 2021. Retrieved 4 January 2022.\n- Borchert, Donald (2006c). \"Logic, Non-Classical\". *Macmillan Encyclopedia of Philosophy Volume 5* (https://philpapers.org/rec/BORMEO) (2nd ed.). Macmillan. pp. 485–492. ISBN 978-0-02-865785-1. Archived (https://web.archive.org/web/20210112065913/https://ph ilpapers.org/rec/BORMEO) from the original on 12 January 2021. Retrieved 4 January 2022.\n- Boris, Kulik; Alexander, Fridman (30 November 2017). *N-ary Relations for Logical Analysis of Data and Knowledge*. IGI Global. p. 74. ISBN 978-1-5225-2783-1.\n- Bridges, Douglas; Ishihara, Hajime; Rathjen, Michael; Schwichtenberg, Helmut (30 April 2023). *Handbook of Constructive Mathematics*. Cambridge University Press. pp. 73–4. 
ISBN 978-1-316-51086-5.\n- Brody, Boruch A. (2006). *Encyclopedia of Philosophy*. Vol. 5. Donald M. Borchert (2nd ed.). Thomson Gale/Macmillan Reference US. pp. 535–536. ISBN 978-0-02-865780-6. OCLC 61151356 (https://search.worldcat.org/oclc/61151356). \"The two most important types of logical calculi are propositional (or sentential) calculi and functional (or predicate) calculi. A propositional calculus is a system containing propositional variables and connectives (some also contain propositional constants) but not individual or functional variables or constants. In the extended propositional calculus, quantifiers whose operator variables are propositional variables are added.\"\n- Bunnin, Nicholas; Yu, Jiyuan (27 January 2009). *The Blackwell Dictionary of Western Philosophy*. John Wiley & Sons. p. 179. ISBN 978-1-4051-9112-8.\n- Burgess, John P. (2009). \"1. Classical logic\". *Philosophical Logic* (https://philpapers.org/rec/ BURPL-3). Princeton, NJ: Princeton University Press. pp. 1–12. ISBN 978-0-691-15633-0. Archived (https://web.archive.org/web/20211216143954/https://philpapers.org/rec/BURPL-3) from the original on 16 December 2021. Retrieved 4 January 2022.\n- Bäck, Allan T. (2016). *Aristotle's Theory of Predication*. Brill. p. 317. ISBN 978-90-04-32109- 0.\n- Calderbank, Robert; Sloane, Neil J. A. (April 2001). \"Claude Shannon (1916–2001)\" (https:// doi.org/10.1038%2F35071223). *Nature*. **410** (6830): 768. doi:10.1038/35071223 (https://doi. org/10.1038%2F35071223). ISSN 1476-4687 (https://search.worldcat.org/issn/1476-4687). PMID 11298432 (https://pubmed.ncbi.nlm.nih.gov/11298432). S2CID 4402158 (https://api.s emanticscholar.org/CorpusID:4402158).\n- Carnielli, Walter; Pizzi, Claudio (2008). *Modalities and Multimodalities*. Springer Science & Business Media. p. 3. ISBN 978-1-4020-8590-1.", - "page_start": 25, - "page_end": 25, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Vidyabhusana, Satis Chandra (1988). 
*A History of Indian Logic: Ancient, Mediaeval and Modern Schools*. Motilal Banarsidass Publisher. p. 221. ISBN 978-81-208-0565-1.\n- Vleet, Van Jacob E. (2010). \"Introduction\". *Informal Logical Fallacies: A Brief Guide* (https://p hilpapers.org/rec/VLEILF). Upa. pp. ix–x. ISBN 978-0-7618-5432-6. Archived (https://web.ar chive.org/web/20220228035654/https://philpapers.org/rec/VLEILF) from the original on 28 February 2022. Retrieved 2 January 2022.\n- Väänänen, Jouko (2021). \"Second-order and Higher-order Logic\" (https://plato.stanford.edu/ entries/logic-higher-order/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20211030222316/ https://plato.stanford.edu/entries/logic-higher-order/) from the original on 30 October 2021. Retrieved 23 November 2021.\n- Walton, Douglas N. (1987). *Informal Fallacies: Towards a Theory of Argument Criticisms* (htt ps://philpapers.org/rec/WALIFT). John Benjamins. ISBN 978-1-55619-010-0. Archived (http s://web.archive.org/web/20220302001111/https://philpapers.org/rec/WALIFT) from the original on 2 March 2022. Retrieved 2 January 2022.\n- Warren, Jared (2020). *Shadows of Syntax: Revitalizing Logical and Mathematical Conventionalism* (https://global.oup.com/academic/product/shadows-of-syntax-9780190086 152). Oxford University Press. ISBN 978-0-19-008615-2.\n- Washell, Richard F. (1973). \"Logic, Language, and Albert the Great\" (https://philpapers.org/r ec/WASLLA-3). *Journal of the History of Ideas*. **34** (3): 445–50. doi:10.2307/2708963 (http s://doi.org/10.2307%2F2708963). JSTOR 2708963 (https://www.jstor.org/stable/2708963).\n- Wasilewska, Anita (2018). *Logics for Computer Science: Classical and Non-Classical*. Springer. pp. 145–6. ISBN 978-3-319-92591-2.\n- Weber, Zach. \"Paraconsistent Logic\" (https://iep.utm.edu/para-log/). *Internet Encyclopedia of Philosophy*. Retrieved 12 December 2021.\n- Weddle, Perry (2011). \"Chapter 36. 
Informal logic and the eductive-inductive distinction\". *Across the Lines of Disciplines* (https://www.degruyter.com/document/doi/10.1515/97831108 67718.383/html). De Gruyter Mouton. pp. 383–388. doi:10.1515/9783110867718.383 (http s://doi.org/10.1515%2F9783110867718.383). ISBN 978-3-11-086771-8. Archived (https://w eb.archive.org/web/20211231172343/https://www.degruyter.com/document/doi/10.1515/978 3110867718.383/html) from the original on 31 December 2021. Retrieved 2 January 2022.\n- Westerståhl, Dag (1989). \"Aristotelian Syllogisms and Generalized Quantifiers\" (https://philp apers.org/rec/WESASA). *Studia Logica*. **48** (4): 577–585. doi:10.1007/BF00370209 (https:// doi.org/10.1007%2FBF00370209). S2CID 32089424 (https://api.semanticscholar.org/Corpu sID:32089424). Archived (https://web.archive.org/web/20220104182746/https://philpapers.o rg/rec/WESASA) from the original on 4 January 2022. Retrieved 4 January 2022.\n- Wilbanks, Jan J. (1 March 2010). \"Defining Deduction, Induction, and Validity\" (https://link.sp ringer.com/article/10.1007/s10503-009-9131-5). *Argumentation*. **24** (1): 107–124. doi:10.1007/s10503-009-9131-5 (https://doi.org/10.1007%2Fs10503-009-9131-5). ISSN 1572-8374 (https://search.worldcat.org/issn/1572-8374). S2CID 144481717 (https://ap i.semanticscholar.org/CorpusID:144481717). Archived (https://web.archive.org/web/202201 08171721/https://link.springer.com/article/10.1007/s10503-009-9131-5) from the original on 8 January 2022. Retrieved 8 January 2022.\n- Wilce, Alexander (2021). \"Quantum Logic and Probability Theory: 2.1 Realist Quantum Logic\" (https://plato.stanford.edu/entries/qt-quantlog/#RealQuanLogi). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Retrieved 11 March 2023.\n- Wile, Bruce; Goss, John; Roesner, Wolfgang (2005). *Comprehensive Functional Verification: The Complete Industry Cycle*. Elsevier. p. 447. ISBN 978-0-08-047664-3.\n- Willman, Marshall D. (2022). 
\"Logic and Language in Early Chinese Philosophy\" (https://plat o.stanford.edu/entries/chinese-logic-language/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Introduction. Retrieved 11 March 2023.", - "page_start": 36, - "page_end": 36, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Haack, Susan (1978). \"1. 'Philosophy of logics' \". *Philosophy of Logics* (https://philpapers.or g/rec/HAAPOL-2). London and New York: Cambridge University Press. pp. 1–10. ISBN 978- 0-521-29329-7. Archived (https://web.archive.org/web/20211207200551/https://philpapers.o rg/rec/HAAPOL-2) from the original on 7 December 2021. Retrieved 29 December 2021.\n- Haack, Susan (1996). *Deviant Logic, Fuzzy Logic: Beyond the Formalism*. University of Chicago Press. ISBN 978-0-226-31133-3.\n- Haaparanta, Leila (2009). \"1. Introduction\". *The Development of Modern Logic*. Oxford University Press. pp. 4–6. ISBN 978-0-19-513731-6.\n- Hansen, Hans (2020). \"Fallacies\" (https://plato.stanford.edu/entries/fallacies/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (http s://web.archive.org/web/20210329182946/https://plato.stanford.edu/entries/fallacies/) from the original on 29 March 2021. Retrieved 18 March 2021.\n- Hartmann, Stephan; Sprenger, Jan (2010). \"Bayesian Epistemology\". *The Routledge Companion to Epistemology* (https://philpapers.org/rec/BOVSIO). London: Routledge. pp. 609–620. ISBN 978-0-415-96219-3. Archived (https://web.archive.org/web/2021051609 5047/https://philpapers.org/rec/BOVSIO) from the original on 16 May 2021. Retrieved 4 January 2022.\n- Hasse, Dag Nikolaus (2008). \"Influence of Arabic and Islamic Philosophy on the Latin West\" (https://plato.stanford.edu/entries/arabic-islamic-influence/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Retrieved 19 July 2023.\n- Hawthorne, James (2021). 
\"Inductive Logic\" (https://plato.stanford.edu/entries/logic-inductiv e/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20220121081805/https://plato.stanford.ed u/entries/logic-inductive/) from the original on 21 January 2022. Retrieved 6 January 2022.\n- Hintikka, Jaakko J. (2019). \"Philosophy of logic\" (https://www.britannica.com/topic/philosoph y-of-logic). *Encyclopædia Britannica*. Archived (https://web.archive.org/web/2015042810173 2/http://www.britannica.com/EBchecked/topic/346240/philosophy-of-logic) from the original on 28 April 2015. Retrieved 21 November 2021.\n- Hintikka, Jaakko J. (2023). \"Logical systems\" (https://www.britannica.com/topic/logic/Logical -systems). *Encyclopædia Britannica*. Archived (https://web.archive.org/web/2021120718465 6/https://www.britannica.com/topic/logic/Logical-systems) from the original on 7 December 2021. Retrieved 4 December 2021.\n- Hintikka, Jaakko (1970). \"Information, Deduction, and the A Priori\". *Noûs*. **4** (2): 135–152. doi:10.2307/2214318 (https://doi.org/10.2307%2F2214318). ISSN 0029-4624 (https://searc h.worldcat.org/issn/0029-4624). JSTOR 2214318 (https://www.jstor.org/stable/2214318).\n- Hintikka, Jaakko; Sandu, Gabriel (2006). \"What is Logic?\". In Jacquette, D. (ed.). *Philosophy of Logic* (https://philpapers.org/rec/JAAWIL). North Holland. pp. 13–39. ISBN 978-0-444-51541-4. Archived (https://web.archive.org/web/20211207235525/https://ph ilpapers.org/rec/JAAWIL) from the original on 7 December 2021. Retrieved 29 December 2021.\n- Hintikka, Jaakko J.; Spade, Paul Vincent. \"History of logic\" (https://www.britannica.com/topi c/history-of-logic). *Encyclopædia Britannica*. Retrieved 23 September 2022.\n- Honderich, Ted (2005). *The Oxford Companion to Philosophy* (https://philpapers.org/rec/HO NTOC-2). Oxford University Press. ISBN 978-0-19-926479-7. Archived (https://web.archive. 
org/web/20210129082636/https://philpapers.org/rec/HONTOC-2) from the original on 29 January 2021. Retrieved 2 January 2022.\n- Hurley, Patrick J. (2015). \"4. Categorical Syllogisms\". *Logic: The Essentials*. Wadsworth. pp. 189–237. ISBN 978-1-305-59041-0.\n- IEP Staff. \"Deductive and Inductive Arguments\" (https://iep.utm.edu/ded-ind/). Archived (http s://web.archive.org/web/20100528032124/https://iep.utm.edu/ded-ind/) from the original on 28 May 2010. Retrieved 6 January 2022.", - "page_start": 29, - "page_end": 29, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Iqbal, Mohammad (2013). \"The Spirit of Muslim Culture\". *The Reconstruction of Religious Thought in Islam* (http://www.allamaiqbal.com/works/prose/english/reconstruction/). Stanford University Press. pp. 99–115. ISBN 978-0-8047-8686-7.\n- Irvine, Andrew David (2022). \"Bertrand Russell\" (https://plato.stanford.edu/entries/russell/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Retrieved 29 September 2022.\n- Jacquette, Dale (2006). \"Introduction: Philosophy of logic today\". *Philosophy of Logic* (http s://philpapers.org/rec/JACPOL). North Holland. pp. 1–12. ISBN 978-0-444-51541-4. Archived (https://web.archive.org/web/20211207184932/https://philpapers.org/rec/JACPOL) from the original on 7 December 2021. Retrieved 29 December 2021.\n- Jago, Mark (2014). *The Impossible: An Essay on Hyperintensionality*. OUP Oxford. p. 41. ISBN 978-0-19-101915-9.\n- Janssen, Theo M. V.; Zimmermann, Thomas Ede (2021). \"Montague Semantics\" (https://plat o.stanford.edu/entries/montague-semantics/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. pp. 3–4. Retrieved 10 March 2023.\n- Johnson, Ralph H. (1999). \"The Relation Between Formal and Informal Logic\" (https://philpa pers.org/rec/JOHTRB-2). *Argumentation*. **13** (3): 265–274. doi:10.1023/A:1007789101256 (https://doi.org/10.1023%2FA%3A1007789101256). 
S2CID 141283158 (https://api.semantic scholar.org/CorpusID:141283158). Archived (https://web.archive.org/web/20211207184706/ https://philpapers.org/rec/JOHTRB-2) from the original on 7 December 2021. Retrieved 2 January 2022.\n- Johnson, Ralph H. (15 July 2014). *The Rise of Informal Logic: Essays on Argumentation, Critical Thinking, Reasoning and Politics*. University of Windsor. ISBN 978-0-920233-71-9.\n- Ketland, Jeffrey (2005). \"Second Order Logic\". *Macmillan Encyclopedia of Philosophy Volume 8* (https://www.encyclopedia.com/humanities/encyclopedias-almanacs-transcripts-a nd-maps/second-order-logic). Macmillan Reference USA. pp. 707–708. ISBN 978-0-02- 865788-2. Archived (https://web.archive.org/web/20211207184921/https://www.encyclopedi a.com/humanities/encyclopedias-almanacs-transcripts-and-maps/second-order-logic) from the original on 7 December 2021. Retrieved 4 January 2022.\n- King, Jeffrey C. (2 September 2009). \"Formal Semantics\". *The Oxford Handbook of Philosophy of Language*. pp. 557–8. doi:10.1093/oxfordhb/9780199552238.003.0023 (http s://doi.org/10.1093%2Foxfordhb%2F9780199552238.003.0023). ISBN 978-0-19-955223-8.\n- King, Jeffrey C. (2019). \"Structured Propositions\" (https://plato.stanford.edu/entries/propositi ons-structured/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20211025211706/https://plato.sta nford.edu/entries/propositions-structured/) from the original on 25 October 2021. Retrieved 4 December 2021.\n- Klement, Kevin C. (1995b). \"Propositional Logic\" (https://iep.utm.edu/prop-log/). *Internet Encyclopedia of Philosophy*. ISSN 2161-0002 (https://search.worldcat.org/issn/2161-0002). Retrieved 23 September 2022.\n- Kline, Morris (1972). *Mathematical Thought From Ancient to Modern Times*. Oxford University Press. ISBN 978-0-19-506135-2.\n- Kneale, William; Kneale, Martha (1962). *The Development of Logic*. Clarendon Press. 
ISBN 978-0-19-824773-9.\n- Knuuttila, Simo (1980). *Reforging the Great Chain of Being: Studies of the History of Modal Theories*. Springer Science & Business Media. p. 71. ISBN 978-90-277-1125-0.\n- Korb, Kevin (2004). \"Bayesian Informal Logic and Fallacy\" (https://philpapers.org/rec/KORBI L). *Informal Logic*. **24** (1): 41–70. doi:10.22329/il.v24i1.2132 (https://doi.org/10.22329%2Fil. v24i1.2132). Archived (https://web.archive.org/web/20211110075255/https://philpapers.org/r ec/KORBIL) from the original on 10 November 2021. Retrieved 2 January 2022.", - "page_start": 30, - "page_end": 30, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- 79. Scarfone, Matthew (2022). \"Using and Abusing Moorean Arguments\" (https://philpapers.org/ rec/SCAUAA-2). *Journal of the American Philosophical Association*. **8** (1): 52–71. doi:10.1017/apa.2020.47 (https://doi.org/10.1017%2Fapa.2020.47). S2CID 239672728 (http s://api.semanticscholar.org/CorpusID:239672728).\n- 80. Augustine of Hippo. \"Book 11, Chapter 26\". *City of God*.\n- 81. Descartes, René (1637). \"4\". *Discourse on the Method*.\n- 82. Descartes, René (1641). \"Second Meditation\". *Meditations on First Philosophy*.\n- 83. Chalmers, David (2020). \"Debunking Arguments for Illusionism\" (https://philpapers.org/rec/C HADAF-2). *Journal of Consciousness Studies*. **27** (5–6): 258–281.\n- 84. Chalmers, David (2002). \"Debunking Arguments for Illusionism\" (https://philpapers.org/rec/C HADAF-2). *Journal of Consciousness Studies*. **27** (5–6): 258–281.\n- 85. Strawson, G. (2018). \"The Consciousness Deniers\" (https://www.nybooks.com/daily/2018/0 3/13/the-consciousness-deniers/). *The New York Review of Books*.\n- 86. Koch, Christof (2019). *The Feeling of Life Itself: Why Consciousness is Everywhere But Can't be Computed*. MIT Press. p. 2.\n- 87. Koch, Christof (2019). *The Feeling of Life Itself: Why Consciousness is Everywhere But Can't be Computed*. MIT Press. p. 3.\n- 88. Balmer, A. (2020). 
\"Soft-Wired Illusionism vs. the Meta-Problem of Consciousness\" (https://p hilpapers.org/rec/BALSIV). *Journal of Consciousness Studies*. **27** (5–6): 26–37.\n- 89. Chalmers, David (2020). \"Is the Hard Problem of Consciousness Universal?\". *Journal of Consciousness Studies*. **27** (5–6): 227–257.\n- 90. Papineau, D. (2019). \"Response to Chalmers' 'The Meta-Problem of Consciousness' \" (http s://philpapers.org/rec/PAPRTC-6). *Journal of Consciousness Studies*. **26** (9–10): 173–181.\n- 91. J. Levine, \"Conceivability, Identity, and the Explanatory Gap\" in Stuart R. Hameroff, Alfred W. Kaszniak and David Chalmers (eds.), *Towards a Science of Consciousness III: The Third Tucson Discussions and Debates*, The MIT Press, 1999,. pp 3–12.\n- 92. Gennaro, Rocco J. \"Consciousness\" (https://www.iep.utm.edu/consciou). *Internet Encyclopedia of Philosophy*.\n- 93. Block, Ned; Stalnaker, Robert (1999). \"Conceptual Analysis, Dualism, and the Explanatory Gap\" (http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/ExplanatoryGap.pdf) (PDF). *The Philosophical Review*. **108** (1): 1–46. CiteSeerX 10.1.1.693.2421 (https://citeseerx.ist.ps u.edu/viewdoc/summary?doi=10.1.1.693.2421). doi:10.2307/2998259 (https://doi.org/10.230 7%2F2998259). JSTOR 2998259 (https://www.jstor.org/stable/2998259).\n- 94. Stoljar, Daniel (2005). \"Physicalism and Phenomenal Concepts\". *Mind & Language*. **20** (5): 469–494. doi:10.1111/j.0268-1064.2005.00296.x (https://doi.org/10.1111%2Fj.0268-1064.2 005.00296.x).\n- 95. Chalmers, David (2006). \"Phenomenal Concepts and the Explanatory Gap\" (http://consc.ne t/papers/pceg.pdf) (PDF). In Alter, Torin; Walter, Sven (eds.). *Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism*. Oxford University Press. ISBN 9780195171655. Retrieved 27 March 2019.\n- 96. Wierzbicka, A. (2019). \"From 'Consciousness' to 'I Think, I Feel, I Know': A Commentary on David Chalmers\". *Journal of Consciousness Studies*. 
**26** (9–10): 257–269.\n- 97. Lau, Hakwan; Michel, Matthias (2019). \"A Socio-Historical Take on the Meta-Problem of Consciousness\". *Journal of Consciousness Studies*. **26** (9–10): 136–147.\n- 98. \"Is the hard problem of consciousness really that hard? | Brian Greene and Pat Churchland lock horns\" (https://www.youtube.com/watch?v=hru5d_wsu7g). *YouTube*. 9 July 2022.\n- 99. \"Abiogenesis\" (https://www.allaboutscience.org/abiogenesis.htm).\n- 100. *Ignorance and Imagination: The Epistemic Origin of the Problem of Consciousness.* Daniel Stoljar. Oxford University Press.", - "page_start": 23, - "page_end": 23, - "source_file": "wikipedia2.pdf" - }, - { - "text": "- Shermer, Michael (25 October 2022). *Conspiracy: Why the Rational Believe the Irrational*. JHU Press. ISBN 978-1-4214-4445-1.\n- Sider, Theodore (2010). *Logic for Philosophy*. Oxford University Press. ISBN 978-0-19- 957558-9.\n- Siegel, Harvey; Biro, John (1997). \"Epistemic Normativity, Argumentation, and Fallacies\" (htt ps://philpapers.org/rec/SIEENA). *Argumentation*. **11** (3): 277–292. doi:10.1023/A:1007799325361 (https://doi.org/10.1023%2FA%3A1007799325361). S2CID 126269789 (https://api.semanticscholar.org/CorpusID:126269789). Archived (https:// web.archive.org/web/20220228035651/https://philpapers.org/rec/SIEENA) from the original on 28 February 2022. Retrieved 4 January 2022.\n- Simpson, R. L. (2008). *Essentials of Symbolic Logic* (3rd ed.). Broadview Press. p. 14. ISBN 978-1-77048-495-5.\n- Smith, Robin (2022). \"Aristotle's Logic\" (https://plato.stanford.edu/entries/aristotle-logic/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Retrieved 11 March 2023.\n- Spade, Paul Vincent; Panaccio, Claude (2019). \"William of Ockham\" (https://plato.stanford.e du/entries/ockham/#SummLogi). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University.\n- Spriggs, John (2012). 
*GSN The Goal Structuring Notation: A Structured Approach to Presenting Arguments*. Springer Science & Business Media. pp. 20–22. ISBN 978-1-4471- 2312-5.\n- Stairs, Allen (2017). *A Thinker's Guide to the Philosophy of Religion*. Routledge. p. 343. ISBN 978-1-351-21981-5.\n- Sternberg, Robert J. \"Thought\" (https://www.britannica.com/topic/thought). *Encyclopædia Britannica*. Archived (https://web.archive.org/web/20211013145532/https://www.britannica.c om/topic/thought) from the original on 13 October 2021. Retrieved 14 October 2021.\n- Stolyar, Abram Aronovich (1 January 1984). *Introduction to Elementary Mathematical Logic*. Courier Corporation. ISBN 978-0-486-64561-2.\n- Stone, Mark A. (2012). \"Denying the Antecedent: Its Effective Use in Argumentation\" (https:// philpapers.org/rec/STODTA). *Informal Logic*. **32** (3): 327–356. doi:10.22329/il.v32i3.3681 (ht tps://doi.org/10.22329%2Fil.v32i3.3681). Archived (https://web.archive.org/web/2022022812 3240/https://philpapers.org/rec/STODTA) from the original on 28 February 2022. Retrieved 8 January 2022.\n- Stump, David J. \"Fallacy, Logical\" (https://www.encyclopedia.com/history/dictionaries-thesau ruses-pictures-and-press-releases/fallacy-logical). *encyclopedia.com*. Archived (https://web. archive.org/web/20210215112403/https://www.encyclopedia.com/history/dictionaries-thesau ruses-pictures-and-press-releases/fallacy-logical) from the original on 15 February 2021. Retrieved 20 March 2021.\n- Talbott, William (2016). \"Bayesian Epistemology\" (https://plato.stanford.edu/entries/epistemo logy-bayesian/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20210401034856/https://plato.sta nford.edu/entries/epistemology-bayesian/) from the original on 1 April 2021. Retrieved 6 March 2021.\n- Tarski, Alfred (1994). *Introduction to Logic and to the Methodology of the Deductive Sciences*. Oxford University Press. p. 40. 
ISBN 978-0-19-802139-1.\n- Tondl, L. (2012). *Problems of Semantics: A Contribution to the Analysis of the Language Science*. Springer Science & Business Media. p. 111. ISBN 978-94-009-8364-9.\n- Velleman, Daniel J. (2006). *How to Prove It: A Structured Approach*. Cambridge University Press. p. 8, 103. ISBN 978-0-521-67599-4.\n- Vickers, John M. (2022). \"Inductive Reasoning\" (https://www.oxfordbibliographies.com/displ ay/document/obo-9780195396577/obo-9780195396577-0171.xml). *Oxford Bibliographies*. Oxford University Press. Retrieved 18 January 2023.", - "page_start": 35, - "page_end": 35, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Wolf, Robert G. (1978). \"Are Relevant Logics Deviant?\" (https://philpapers.org/rec/WOLAR L). *Philosophia*. **7** (2): 327–340. doi:10.1007/BF02378819 (https://doi.org/10.1007%2FBF02 378819). S2CID 143697796 (https://api.semanticscholar.org/CorpusID:143697796). Archived (https://web.archive.org/web/20211216143955/https://philpapers.org/rec/WOLAR L) from the original on 16 December 2021. Retrieved 4 January 2022.\n- Zegarelli, Mark (2010). *Logic For Dummies*. John Wiley & Sons. p. 30. ISBN 978-1-118- 05307-2.\n\n# **External links**\n\nRetrieved from \"https://en.wikipedia.org/w/index.php?title=Logic&oldid=1266818857\"", - "page_start": 37, - "page_end": 37, - "source_file": "wikipedia1.pdf" - }, - { - "text": "# **Bibliography**\n\n- Aloni, Maria; Dekker, Paul (7 July 2016). *The Cambridge Handbook of Formal Semantics*. Cambridge University Press. pp. 22–23. ISBN 978-1-316-55273-5.\n- Angell, Richard B. (1964). *Reasoning and Logic*. Ardent Media. p. 164. OCLC 375322 (http s://search.worldcat.org/oclc/375322).\n- Audi, Robert (1999a). \"Informal logic\". *The Cambridge Dictionary of Philosophy* (https://philp apers.org/rec/AUDTCD-2). Cambridge University Press. p. 435. ISBN 978-1-107-64379-6. Archived (https://web.archive.org/web/20210414132344/https://philpapers.org/rec/AUDTCD-2) from the original on 14 April 2021. 
Retrieved 29 December 2021.\n- Audi, Robert (1999b). \"Philosophy of logic\". *The Cambridge Dictionary of Philosophy* (http s://philpapers.org/rec/AUDTCD-2). Cambridge University Press. pp. 679–681. ISBN 978-1- 107-64379-6. Archived (https://web.archive.org/web/20210414132344/https://philpapers.org/ rec/AUDTCD-2) from the original on 14 April 2021. Retrieved 29 December 2021.\n- Backmann, Marius (1 June 2019). \"Varieties of Justification—How (Not) to Solve the Problem of Induction\" (https://doi.org/10.1007%2Fs12136-018-0371-6). *Acta Analytica*. **34** (2): 235–255. doi:10.1007/s12136-018-0371-6 (https://doi.org/10.1007%2Fs12136-018-037 1-6). ISSN 1874-6349 (https://search.worldcat.org/issn/1874-6349). S2CID 125767384 (http s://api.semanticscholar.org/CorpusID:125767384).\n- Bagaria, Joan (2021). \"Set Theory\" (https://plato.stanford.edu/entries/set-theory/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Retrieved 23 September 2022.\n- Barnes, Jonathan (25 January 2007). *Truth, etc.: Six Lectures on Ancient Logic*. Clarendon Press. p. 274. ISBN 978-0-19-151574-3.\n- Benthem, Johan van. \"Modal Logic: Contemporary View: 1. Modal Notions and Reasoning Patterns: a First Pass\" (https://iep.utm.edu/modal-lo/#H1). *Internet Encyclopedia of Philosophy*. Retrieved 11 March 2023.\n- Berlemann, Lars; Mangold, Stefan (10 July 2009). *Cognitive Radio and Dynamic Spectrum Access*. John Wiley & Sons. p. 194. ISBN 978-0-470-75443-6.\n- Berman, Harold J. (1 July 2009). *Law and Revolution, the Formation of the Western Legal Tradition*. Harvard University Press. ISBN 978-0-674-02085-6.\n- Bimbo, Katalin (2 April 2016). *J. Michael Dunn on Information Based Logics*. Springer. pp. 8–9. ISBN 978-3-319-29300-4.\n- Blackburn, Simon (1 January 2008). \"argument\". *The Oxford Dictionary of Philosophy* (http s://www.oxfordreference.com/view/10.1093/oi/authority.20110803095423356). Oxford University Press. ISBN 978-0-19-954143-0. 
Archived (https://web.archive.org/web/2022010 8194756/https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095423356) from the original on 8 January 2022. Retrieved 8 January 2022.\n- Blackburn, Simon (24 March 2016). \"rule of inference\". *The Oxford Dictionary of Philosophy* (https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100432990). Oxford University Press. ISBN 978-0-19-954143-0. Archived (https://web.archive.org/web/2022010 8194809/https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100432990) from the original on 8 January 2022. Retrieved 8 January 2022.\n- Blair, J. Anthony; Johnson, Ralph H. (1987). \"The Current State of Informal Logic\" (https://ph ilpapers.org/rec/BLATCS). *Informal Logic*. **9** (2): 147–51. doi:10.22329/il.v9i2.2671 (https://d oi.org/10.22329%2Fil.v9i2.2671). Archived (https://web.archive.org/web/20211230194638/ht tps://philpapers.org/rec/BLATCS) from the original on 30 December 2021. Retrieved 2 January 2022.", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Gamut, L.T.F. (1991). *Logic, Language and Meaning Vol 1: Introduction to Logic*. University of Chicago Press. 5.5. ISBN 978-0-226-28085-1.\n- Garson, James (2023). \"Modal Logic\" (https://plato.stanford.edu/entries/logic-modal/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Retrieved 11 March 2023.\n- Gensler, Harry J. (2006). *The A to Z of Logic*. Scarecrow Press. pp. xliii–xliv. ISBN 978-1- 4617-3182-5.\n- Goble, Lou (2001). \"Introduction\". *The Blackwell Guide to Philosophical Logic* (https://philpa pers.org/rec/GOBTBG-2). Wiley-Blackwell. pp. 1–8. ISBN 978-0-631-20692-7. Archived (htt ps://web.archive.org/web/20211207184959/https://philpapers.org/rec/GOBTBG-2) from the original on 7 December 2021. Retrieved 4 January 2022.\n- Goodman, Lenn Evan (1992). *Avicenna*. Routledge. p. 188. ISBN 978-0-415-01929-3.\n- Goodman, Lenn Evan (2003). 
*Islamic Humanism*. Oxford University Press. p. 155. ISBN 978-0-19-513580-0.\n- Groarke, Louis F. \"Aristotle: Logic\" (https://iep.utm.edu/aris-log/). *Internet Encyclopedia of Philosophy*. Archived (https://web.archive.org/web/20211229235433/https://iep.utm.edu/aris -log/) from the original on 29 December 2021. Retrieved 1 January 2022.\n- Groarke, Leo (2021). \"Informal Logic\" (https://plato.stanford.edu/entries/logic-informal/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20220112030519/https://plato.stanford.edu/entries/lo gic-informal/) from the original on 12 January 2022. Retrieved 31 December 2021.\n- Gómez-Torrente, Mario (2019). \"Logical Truth\" (https://plato.stanford.edu/entries/logical-trut h/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20211002190110/https://plato.stanford.ed u/entries/logical-truth/) from the original on 2 October 2021. Retrieved 22 November 2021.\n- Gödel, Kurt (1984). \"Russell's mathematical logic\". In Benacerraf, Paul; Putnam, Hilary (eds.). *Philosophy of Mathematics: Selected Readings* (https://www.cambridge.org/core/boo ks/abs/philosophy-of-mathematics/russells-mathematical-logic/4D82F215FABFE06149D03 EF1EF5BE7E4) (2nd ed.). Cambridge University Press. pp. 447–469. ISBN 978-0-521- 29648-9. Archived (https://web.archive.org/web/20220111091740/https://www.cambridge.or g/core/books/abs/philosophy-of-mathematics/russells-mathematical-logic/4D82F215FABFE 06149D03EF1EF5BE7E4) from the original on 11 January 2022. Retrieved 9 January 2022.\n- Hájek, Petr (3 September 2006). \"Fuzzy Logic\" (https://plato.stanford.edu/Archives/Win201 2/entries/logic-fuzzy/). *Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Retrieved 19 July 2023.\n- Hájek, Alan; Lin, Hanti (2017). 
\"A Tale of Two Epistemologies?\" (https://philpapers.org/rec/H JEATO). *Res Philosophica*. **94** (2): 207–232. doi:10.11612/resphil.1540 (https://doi.org/10.1 1612%2Fresphil.1540). S2CID 160029122 (https://api.semanticscholar.org/CorpusID:16002 9122). Archived (https://web.archive.org/web/20220104182746/https://philpapers.org/rec/HJ EATO) from the original on 4 January 2022. Retrieved 4 January 2022.\n- Hall, Cordelia; O'Donnell, John (2000). *Discrete Mathematics Using a Computer*. Springer Science & Business Media. p. 48. ISBN 978-1-85233-089-7.\n- Houde, R.; Camacho, L. (2003). \"Induction\". *New Catholic Encyclopedia* (https://www.encycl opedia.com/science-and-technology/computers-and-electrical-engineering/electrical-engine ering/induction). ISBN 978-0-7876-4004-0. Archived (https://web.archive.org/web/20220108 171720/https://www.encyclopedia.com/science-and-technology/computers-and-electrical-en gineering/electrical-engineering/induction) from the original on 8 January 2022. Retrieved 8 January 2022.\n- Haack, Susan (1974). *Deviant Logic: Some Philosophical Issues*. CUP Archive. p. 51. ISBN 978-0-521-20500-9.", - "page_start": 28, - "page_end": 28, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- 17. Massimo Pigliucci (2013). \"What hard problem?\" (http://philpapers.org/archive/PIGWHP.pdf) (PDF). *Philosophy Now* (99).\n- 18. Churchland, Patricia (1996). \"The Hornswoggle Problem\" (http://joelvelasco.net/teaching/23 00/hornswoggleprob.pdf) (PDF). *Journal of Consciousness Studies*. **3** (5–6): 402–408. Retrieved 10 January 2021.\n- 19. Frankish, Keith (2016). \"Illusionism as a Theory of Consciousness\" (https://nbviewer.jupyter. org/github/k0711/kf_articles/blob/master/Frankish_Illusionism%20as%20a%20theory%20o f%20consciousness_eprint.pdf) (PDF). *Journal of Consciousness Studies*. **23** (11–12): 11– 39. Retrieved 20 December 2018.\n- 20. Dehaene, Stanislas (2014). *Consciousness and the brain: deciphering how the brain codes our thoughts*. 
Viking Adult. pp. 259–266 (https://books.google.com/books?id=CWw2AAAAQ BAJ&pg=PT197). ISBN 978-0670025435.\n- 21. Edelman, Gerald; Gally, Joseph; Baars, Bernard (2011). \"Biology of Consciousness\" (https:// www.ncbi.nlm.nih.gov/pmc/articles/PMC3111444). *Frontiers in Psychology*. **2** (4): 4. doi:10.3389/fpsyg.2011.00004 (https://doi.org/10.3389%2Ffpsyg.2011.00004). PMC 3111444 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3111444). PMID 21713129 (https://pubmed.ncbi.nlm.nih.gov/21713129).\n- 22. Seth, Anil (November 2016). \"The real problem\" (https://aeon.co/essays/the-hard-problem-of -consciousness-is-a-distraction-from-the-real-one). *Aeon*. Retrieved 22 April 2018.\n- 23. Sean Carroll (29 April 2019). \"Sean Carroll's Mindscape\" (https://www.preposterousunivers e.com/podcast/2019/04/29/episode-44-antonio-damasio-on-feelings-thoughts-and-the-evolu tion-of-humanity/). *Preposterousuniverse.com* (Podcast). Sean Carroll. Event occurs at 1:04.46. \"I'm just saying that the idea of a hard problem that you cannot transpose, I think is wrong.\"\n- 24. \"Psychological Scales. The Hard Problem of Consciousness\" (https://scales.arabpsycholog y.com/2022/11/19/hard-problem-of-consciousness-2/). *arabpsychology.com*. Retrieved 2023-10-29.\n- 25. Bourget, David; Chalmers, David J. (2020). \"Philosophers on Philosophy: The 2020 PhilPapers Survey\" (https://survey2020.philpeople.org). *Philosophers' Imprint*.\n- 26. Roberts, Tim S. (September 2007). \"*The Even Harder Problem of Consciousness* by Roberts. Tim S.\" (https://www.researchgate.net/publication/228618472) *NeuroQuantology*. **5** (2): 214–221. doi:10.14704/nq.2007.5.2.129 (https://doi.org/10.14704%2Fnq.2007.5.2.129).\n- 27. Chalmers, David (1996). *The Conscious Mind*. New York: Oxford University Press. pp. xii– xiii, 95–106, backcover.\n- 28. Pinker, Steven (2018). *Enlightenment Now*. Viking. p. 481. ISBN 9780525427575.\n- 29. Dennett, Daniel; commentary on T. Moody, O. Flanagan and T. Polger. 
\"The Unimagined Preposterous of Zombies (https://ase.tufts.edu/cogstud/dennett/papers/unzombie.htm)\", *Journal of Consciousness Studies* vol. 2, no. 4, 1995, pp. 322–326.\n- 30. Churchland, Patricia Smith (2005). \"A neurophilosophical slant on consciousness research\". *Cortical Function: A View from the Thalamus*. Progress in Brain Research. Vol. 149. pp. 285–293. doi:10.1016/S0079-6123(05)49020-2 (https://doi.org/10.1016%2FS0079-612 3%2805%2949020-2). ISBN 9780444516794. PMID 16226591 (https://pubmed.ncbi.nlm.ni h.gov/16226591).\n- 31. Nagel, Thomas (October 1974). \"What is it like to be a bat?\". *The Philosophical Review*. **83** (4): 435–450. doi:10.2307/2183914 (https://doi.org/10.2307%2F2183914). JSTOR 2183914 (https://www.jstor.org/stable/2183914). S2CID 49125889 (https://api.semanticscholar.org/Co rpusID:49125889).\n- 32. \"Hard Problem of Consciousness\" (https://iep.utm.edu/hard-problem-of-conciousness/). *Internet Encyclopedia of Philosophy*. Retrieved 2024-10-09.\n- 33. Chalmers, David (January 1997). \"Moving forward on the problem of consciousness\" (http s://philpapers.org/rec/CHAMFO). *Journal of Consciousness Studies*. 
**4** (1): 3–46.", - "page_start": 19, - "page_end": 19, - "source_file": "wikipedia2.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0510.pdf", - "query": "What explains mostly the physical behavior that occurs in region iii of thin films ?", - "target_page": 5, - "target_passage": "The observed behaviour in region iii) can be reason- ably attributed to the decreasing relevance of the con- tribution to the total energy of the system coming from the competitive interactions among NNN planes as the film thickness decreases", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# Abstract\n\nWe review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. In particular, we focus on (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nModels (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed.\n\nFinally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. We employ coupled evolution equations for the film thickness profile and mean particle concentration. 
The model is used to discuss the self-pinning and de-pinning of a contact line related to the 'coffee-stain' effect.\n\nIn the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.\n\nThe paper is published in: *J. Phys.-Cond. Mat.* 21, 264016 (2009), in the Volume \"Nanofluids on solid substrates\" and can be obtained at http://dx.doi.org/10.1088/0953-8984/21/26/264016", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [5] F. Brochard-Wyart and J. Daillant, \"Drying of solids wetted by thin liquid films,\" Can. J. Phys. 68, 1084–1088 (1989).\n- [6] P. Müller-Buschbaum, \"Dewetting and pattern formation in thin polymer films as investigated in real and reciprocal space,\" J. Phys.-Condes. Matter 15, R1549–R1582 (2003).\n- [7] R. Seemann, S. Herminghaus, C. Neto, S. Schlagowski, D. Podzimek, R. Konrad, H. Mantz, and K. Jacobs, \"Dynamics and structure formation in thin polymer melt films,\" J. Phys.-Condes. Matter 17, S267–S290 (2005).\n- [8] U. Thiele, \"Structure formation in thin liquid films,\" in S. Kalliadasis and U. Thiele, editors, \"Thin films of Soft Matter,\" pages 25–93, Springer, Wien (2007).\n- [9] R. Xie, A. Karim, J. F. Douglas, C. C. Han, and R. A. Weiss, \"Spinodal dewetting of thin polymer films,\" Phys. Rev. Lett. 81, 1251–1254 (1998).\n- [10] R. Seemann, S. Herminghaus, and K. Jacobs, \"Dewetting patterns and molecular forces: A reconciliation,\" Phys. Rev. Lett. 86, 5534–5537 (2001).\n- [11] U. Thiele, M. G. Velarde, and K. Neuffer, \"Dewetting: Film rupture by nucleation in the spinodal regime,\" Phys. Rev. Lett. 87, 016104 (2001).\n- [12] M. Bestehorn and K. Neuffer, \"Surface patterns of laterally extended thin liquid films in three dimensions,\" Phys. Rev. Lett. 87, 046101 (2001).\n- [13] J. Becker, G. Grün, R. Seemann, H. Mantz, K. Jacobs, K. R. Mecke, and R. 
Blossey, \"Complex dewetting scenarios captured by thin-film models,\" Nat. Mater. 2, 59–63 (2003).\n- [14] C. Redon, F. Brochard-Wyart, and F. Rondelez, \"Dynamics of dewetting,\" Phys. Rev. Lett. 66, 715–718 (1991).\n- [15] R. Seemann, S. Herminghaus, and K. Jacobs, \"Shape of a liquid front upon dewetting,\" Phys. Rev. Lett. 87, 196101 (2001).\n- [16] R. Fetzer, K. Jacobs, A. Münch, B. Wagner, and T. P. Witelski, \"New slip regimes and the shape of dewetting thin liquid films,\" Phys. Rev. Lett. 95, 127801 (2005).\n- [17] F. Brochard-Wyart and C. Redon, \"Dynamics of liquid rim instabilities,\" Langmuir 8, 2324–2329 (1992).\n- [18] G. Reiter and A. Sharma, \"Auto-optimization of dewetting rates by rim instabilities in slipping polymer films,\" Phys. Rev. Lett. 87, 166103 (2001).\n- [19] A. Münch and B. Wagner, \"Contact-line instability of dewetting thin films,\" Physica D 209, 178–190 (2005).
After reviewing experiments on suspensions of thiol-coated gold nanoparticles in toluene we have focused on the modelling of the transport and phase change processes involved. A theoretical approach to the modelling of the hydrodynamics on the mesoscale has been described as well as more microscopic models for the dynamics in the observed nanoscopic 'postcursor' film. In particular, we have introduced (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nThe kinetic Monte Carlo model and the dynamical density functional theory can both be used to investigate and understand the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor' film that remains behind the mesoscopic dewetting front. They are, however, not capable of describing the dynamical processes in a meso", - "page_start": 22, - "page_end": 22, - "source_file": "1001.2669.pdf" - }, - { - "text": "also shift the spinodal and binodal lines as compared to the locations of these lines in the phase diagram for the pure solvent [41]. As a consequence, the solute concentration influences the hole nucleation rate. More importantly, the solute particles may also destabilise the dewetting fronts. As a result, one may find strongly ramified structures in all three systems [23, 25, 40, 42]. A selection of images exhibiting some of the possible structures is displayed in Fig.1.\n\nFor volatile solvents, the contact lines retract even for wetting fluids. It has been found that such evaporatively receding contact lines may deposit very regular line or ring patterns parallel to the moving contact line [24, 43]. The deposition of a single ring of colloids from a evaporating drop of colloidal suspension is well known as the 'coffee stain effect' [44]. 
Detailed investigations reveal the emergence of rich structures including multiple irregular rings, networks, regular droplet patterns, sawtooth patterns, Sierpinski carpets, and – in the case of DNA – liquid crystalline structures [22, 30, 45–49]. The deposition of regularly spaced straight lines orthogonal to the moving contact line has also been reported [50]. Droplet patterns may as well be created employing solvent-induced dewetting of glassy polymer layers below the glass transition temperature [51–53].\n\nNote that the dewetting of pure volatile liquids has also been studied experimentally [54] and theoretically [55–58]. In this case, different contact line instabilities have been observed for evaporating liquid drops [59, 60].\n\nIn the present article we review and preview the experiments and in particular the various modelling approaches for dewetting suspensions of (nano-)particles in volatile partially wetting solvents. After reviewing the basic experimental results in Section II, we discuss in Section III several theoretical approaches. In particular, we present a kinetic Monte Carlo model in Section III A, a dynamic density functional theory in Section III B, and a thin film evolution equation in Section III C. Finally, we conclude in Section IV by discussing advantages and shortcomings of the individual approaches and future challenges to all of them.\n\n### II. EXPERIMENT WITH NANOPARTICLE SOLUTIONS\n\nWe focus on experiments that use monodisperse colloidal suspensions of thiol-passivated gold nanoparticles in toluene [33, 34, 37–40, 61]. The gold core of 2 – 3 nm diameter is coated by a layer of alkyl-thiol molecules. The length of the carbon backbone of the thiol used in the experiments ranges from 6 to 12 carbon atoms (C6 to C12) [40]. By varying the chain length, one can control", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2669.pdf" - }, - { - "text": "fast evaporation [104, 105]. 
These complex experimental systems all represent systems of high practical interest that the theories presented here are not (yet) able to describe. Such experiments do, however, provide a strong motivation for further work to extend the theories presented here, as well as to develop new approaches.\n\nLet us finally mention that several topics were entirely excluded from our discussion here. First, we focused on a limited range of descriptions and did, for instance, not mention lattice Boltzmann, molecular dynamics or dissipative particle dynamics approaches that may also be employed to describe fluid suspensions [106–109]. Second, we have only discussed spatially homogeneous substrates. Patterned substrates are widely used in dewetting experiments [38, 110–112]. Theoretical descriptions are well developed for the dewetting of films of pure non-volatile liquids on such substrates [68, 113–119]. However, in the case of volatile liquids on heterogeneous substrates, much less work has been done. A third topic that we did not touch upon are possible continuum thin film approaches to demixing dewetting suspensions. We believe it is feasible to extend the diffuse interface theories such as model-H [120] to include the influence of evaporation in dewetting nanoparticle suspensions. For instance, such models have already been adapted to describe demixing free surface films of polymer blends [121–123].\n\n## Acknowledgments\n\nAJA and MJR gratefully acknowledge RCUK and EPSRC, respectively, for financial support. We acknowledge support by the European Union via the FP6 and FP7 Marie Curie schemes [Grants MRTN-CT-2004005728 (PATTERNS) and PITN-GA-2008-214919 (MULTIFLOW)].\n\n- [1] G. Reiter, \"Dewetting of thin polymer films,\" Phys. Rev. Lett. 68, 75–78 (1992).\n- [2] G. Reiter, \"Mobility of polymers in films thinner than their unperturbed size,\" Europhys. Lett. 23, 579–584 (1993).\n- [3] A. Sharma and G. 
Reiter, \"Instability of thin polymer films on coated substrates: Rupture, dewetting and drop formation,\" J. Colloid Interface Sci. 178, 383–399 (1996).\n- [4] P.-G. de Gennes, \"Wetting: Statics and dynamics,\" Rev. Mod. Phys. 57, 827–863 (1985).", - "page_start": 24, - "page_end": 24, - "source_file": "1001.2669.pdf" - }, - { - "text": "scopic film. We have seen that the KMC model is able to describe the interplay of solute diffusion within the solvent and solvent evaporation/condensation. It also takes the liquid-liquid, liquidparticle and particle-particle interactions into account and therefore allows us to distinguish different regimes of the transverse (fingering) instability of the evaporative dewetting front: a transport regime where the instability is almost completely independent of the interaction strengths and a demixing regime where particles and liquid demix at the receding front thereby increasing its transverse instability.\n\nThe dynamical density functional theory describes the coupled dynamics of the density fields of the liquid and the nanoparticles. In the form described above (i.e. based on the two-dimensional hamiltonian (3)) we obtain a simple theory that allows us to study the time evolution of the evaporating ultrathin film and also to investigate the influence of processes such as surface diffusion by the liquid, which are not incorporated in the KMC model. However, it is straightforward to extend the theory to consider a fully three-dimensional fluid film, in which one can distinguish between short- and long-range interactions of solvent and/or solute with the substrate. We have, however, restricted the examples given here to situations that can also be described using the KMC model. A further exploration will be presented elsewhere.\n\nFinally, we have discussed a simple thin film model for the hydrodynamics on the mesoscale. 
It results from a long-wave approximation and consists of coupled evolution equations for the film thickness profile and the mean particle concentration. It has been used to discuss the self-pinning of receding contact lines that is related to the formation of rings of dried-in particles (coffeestain effect) that frequently occurs when films or drops of solutions or suspensions dewet by the combined effects of convection and evaporation.\n\nOne of the primary goals of researchers in this field, is the search for simple-to-use techniques that allow one to produce hierarchically structured functional layers for a wide range of applications such as, e.g., organic solar cells [98]. This means that the experiments advance very rapidly towards increasingly complex systems. For example, there have been investigations of the influence of the phase behaviour on the drying of droplets of a suspension of hard-sphere colloidal particles and non-adsorbing polymer [99], of the instabilities and the formation of drops in evaporating thin films of binary solutions [100] that may lead to treelike patterns [101], of effects of a secondary phase separation on evaporation-induced pattern formation in polymer films [102], and of the influence of an imposed flow on decomposition and deposition processes in a sliding ridge of evaporating solution of a binary polymer mixture [103] and of the influence of rather", - "page_start": 23, - "page_end": 23, - "source_file": "1001.2669.pdf" - }, - { - "text": "physical constituents. For example, water is nothing more than H2O molecules, and understanding everything about H2O molecules is to understand everything there is to know about water. But consciousness is not like this. Knowing everything there is to know about the brain, or any physical system, is not to know everything there is to know about consciousness. 
Consciousness, then, must not be purely physical.[27]\n\n#### **Implications for physicalism**\n\nChalmers's idea contradicts physicalism, sometimes labelled materialism. This is the view that everything that exists is a physical or material thing, so everything can be reduced to microphysical things. For example, the rings of Saturn are a physical thing because they are nothing more than a complex arrangement of a large\n\nnumber of subatomic particles interacting in a certain way. According to physicalism, everything, including consciousness, can be explained by appeal to its microphysical constituents. Chalmers's *hard problem* presents a counterexample to this view and to other phenomena like swarms of birds, since it suggests that consciousness, like swarms of birds, cannot be reductively explained by appealing to their physical constituents. Thus, if the hard problem is a real problem then physicalism must be false, and if physicalism is true then the hard problem must not be a real problem.\n\nThe hard problem is often illustrated by appealing to the logical possibility of inverted visible spectra. If there is no logical contradiction in supposing that one's colour vision could be inverted, it follows that mechanistic explanations of visual processing do not determine facts about what it is like to see colours.\n\nA swarm of birds showing high order structure emerging from simpler physical constituents\n\nThough Chalmers rejects physicalism, he is still a naturalist. [27]\n\n#### **Historical precedents**\n\nThe hard problem of consciousness has scholarly antecedents considerably earlier than Chalmers. 
Chalmers himself notes that \"a number of thinkers in the recent and distant past\" have \"recognised the particular difficulties of explaining consciousness.\"[33] He states that all his original 1996 paper contributed to the discussion was \"a catchy name, a minor reformulation of philosophically familiar points\".[33]\n\nAmong others, thinkers who have made arguments similar to Chalmers' formulation of the hard problem include Isaac Newton, [34] John Locke, [35] Gottfried Wilhelm Leibniz, [36][34] John Stuart Mill, [37] and Thomas Henry Huxley. [38][34] Likewise, Asian philosophers like Dharmakirti and Guifeng Zongmi discussed the problem of how consciousness arises from unconscious matter. [34][39][40][41]\n\n#### **Related concepts**\n\n#### **The mind–body problem**", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia2.pdf" - }, - { - "text": "- [110] L. Rockford, Y. Liu, P. Mansky, T. P. Russell, M. Yoon, and S. G. J. Mochrie, \"Polymers on nanoperiodic, heterogeneous surfaces,\" Phys. Rev. Lett. 82, 2602–2605 (1999).\n- [111] A. Sehgal, V. Ferreiro, J. F. Douglas, E. J. Amis, and A. Karim, \"Pattern-directed dewetting of ultrathin polymer films,\" Langmuir 18, 7041–7048 (2002).\n- [112] M. Geoghegan and G. Krausch, \"Wetting at polymer surfaces and interfaces,\" Prog. Polym. Sci. 28, 261–302 (2003).\n- [113] P. Lenz and R. Lipowsky, \"Morphological transitions of wetting layers on structured surfaces,\" Phys. Rev. Lett. 80, 1920–1923 (1998).\n- [114] C. Bauer, S. Dietrich, and A. O. Parry, \"Morphological phase transitions of thin fluid films on chemically structured substrates,\" Europhys. Lett. 47, 474–480 (1999).\n- [115] R. Konnur, K. Kargupta, and A. Sharma, \"Instability and morphology of thin liquid films on chemically heterogeneous substrates,\" Phys. Rev. Lett. 84, 931–934 (2000).\n- [116] M. Brinkmann and R. Lipowsky, \"Wetting morphologies on substrates with striped surface domains,\" J. Appl. Phys. 92, 4296–4306 (2002).\n- [117] L. 
Brusch, H. Kühne, U. Thiele, and M. Bär, \"Dewetting of thin films on heterogeneous substrates: Pinning vs. coarsening,\" Phys. Rev. E 66, 011602 (2002).\n- [118] U. Thiele, L. Brusch, M. Bestehorn, and M. Bär, \"Modelling thin-film dewetting on structured substrates and templates: Bifurcation analysis and numerical simulations,\" Eur. Phys. J. E 11, 255–271 (2003).\n- [119] U. Thiele, \"Open questions and promising new fields in dewetting,\" Eur. Phys. J. E 12, 409–416 (2003).\n- [120] D. M. Anderson, G. B. McFadden, and A. A. Wheeler, \"Diffuse-interface methods in fluid mechanics,\" Ann. Rev. Fluid Mech. 30, 139–165 (1998).\n- [121] U. Thiele, S. Madruga, and L. Frastia, \"Decomposition driven interface evolution for layers of binary mixtures: I. Model derivation and stratified base states,\" Phys. Fluids 19, 122106 (2007).\n- [122] O. A. Frolovskaya, A. A. Nepomnyashchy, A. Oron, and A. A. Golovin, \"Stability of a two-layer binary-fluid system with a diffuse interface,\" Phys. Fluids 20, 112105 (2008).\n- [123] S. Madruga and U. Thiele, \"Decomposition driven interface evolution for layers of binary mixtures: II. Influence of convective transport on linear stability,\" Phys. Fluids 21, 062104 (2009).
Planning**\n\nThis chapter describes steps that are required to plan the installation of an IBM System Storage SAN Volume Controller in your storage network.\n\nThis chapter includes the following topics:\n\n- -3.1, \"General planning rules\" on page 44\n- -3.2, \"Planning for availability\" on page 46\n- -3.3, \"Connectivity planning\" on page 47\n- -3.4, \"Physical planning\" on page 48\n- -3.5, \"Planning IP connectivity\" on page 48\n- -3.6, \"SAN configuration planning\" on page 50\n- -3.7, \"iSCSI configuration planning\" on page 60\n- -3.8, \"Back-end storage subsystem configuration\" on page 63\n- -3.9, \"Internal storage configuration\" on page 65\n- -3.10, \"Storage pool configuration\" on page 65\n- -3.11, \"Volume configuration\" on page 67\n- -3.12, \"Host attachment planning\" on page 70\n- -3.13, \"Host mapping and LUN masking\" on page 71\n- -3.14, \"NPIV planning\" on page 72\n- -3.15, \"Advanced Copy Services\" on page 72\n- -3.17, \"Data migration from a non-virtualized storage subsystem\" on page 79\n- -3.17, \"Data migration from a non-virtualized storage subsystem\" on page 79\n- -3.18, \"Storwize V7000 configuration backup procedure\" on page 80\n- -3.19, \"Performance considerations\" on page 80\n- -3.20, \"Storage Insights\" on page 83", - "page_start": 64, - "page_end": 64, - "source_file": "sg247938.pdf" - }, - { - "text": "# Interplay among helical order, surface effects and range of interacting layers in ultrathin films.\n\nF. Cinti(1,2,3), A. Rettori(2,3), and A. Cuccoli(2)\n\n(1) Department of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2J1\n\n(2)CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (FI), Italy. and\n\n(3)CNR-INFM S3 National Research Center, I-41100 Modena, Italy\n\n(Dated: June 8, 2022)\n\nThe properties of helical thin films have been thoroughly investigated by classical Monte Carlo simulations. 
The employed model assumes classical planar spins in a body-centered tetragonal lattice, where the helical arrangement along the film growth direction has been modeled by nearest neighbor and next-nearest neighbor competing interactions, the minimal requirement to get helical order. We obtain that, while the in-plane transition temperatures remain essentially unchanged with respect to the bulk ones, the helical/fan arrangement is stabilized at more and more low temperature when the film thickness, n, decreases; in the ordered phase, increasing the temperature, a softening of the helix pitch wave-vector is also observed. Moreover, we show also that the simulation data around both transition temperatures lead us to exclude the presence of a first order transition for all analyzed sizes. Finally, by comparing the results of the present work with those obtained for other models previously adopted in literature, we can get a deeper insight about the entwined role played by the number (range) of interlayer interactions and surface effects in non-collinear thin films.\n\nPACS numbers: 64.60.an,64.60.De,75.10.Hk,75.40.Cx,75.70.Ak.\n\n# I. INTRODUCTION\n\nThe study of low dimensional frustrated magnetic systems1 still raises great interest, both in consequence of theoretical aspects, related to their peculiar critical properties2 , and in view of possible technological applications3 . Indeed, beside conventional ferromagnetic or antiferromagnetic phase transitions, in many new materials other nontrivial and unconventional forms of ordering have been observed4,5. A quantity of particular interest in this context is the spin chirality, an order parameter which turned out to be extremely relevant in, e.g., magnetoelectric materials6 , itinerant MnSi7 , binary compounds as FeGe8 , glass transition of spins9 , and XY helimagnets, as Holmium, Terbium or Dysprosium10. 
In the latter case, a new universality class was predicted because a Z2 × SO(2) symmetry is spontaneously broken in the ordered phase2 : In fact, when dealing with such systems, in addition to the SO(2) symmetry of the spin degrees of freedom S~ i , one has to consider also the Z2 symmetry of the spin chirality κij ∝ h S~ i × S~ j iz .\n\nFor these rare-earth elements, the development of new and sophisticated experimental methods11 has allowed to obtain ultra-thin films where the non-collinear modulation is comparable with the film thickness. Under such conditions the lack of translational invariance due to the presence of surfaces results decisive in order to observe a drastic change of the magnetic structures12. Recent experimental data on ultra-thin Holmium films13 have been lately interpreted and discussed14,15 on the basis of detailed classical Monte Carlo (MC) simulations of a spin Hamiltonian, which is believed to give a realistic modeling of bulk Holmium. Such Hamiltonian, proposed by Bohr et al.16, allows for competitive middle-range interactions by including six different exchange constants along the c crystallographic axis, and gives a helix pitch wave-vector Qz such that Qzc ′ ≃ 30◦ , where c ′ = c/2 is the distance between nearest neighboring spin layers parallel to the ab crystallographic planes, henceforth denoted also as x − y planes, while z will be taken parallel to c. For n > 16, n being the number of spin layers in the film, a correct bulk limit is reached, while for lower n the film properties are clearly affected by the strong competition among the helical pitch and the surface effects, which involve the majority of the spin layers. In the thickness range n = 9 − 16, i.e. 
right for thickness values comparable with the helical pitch, three different magnetic phases emerged, with the high-temperature, disordered, paramagnetic phase and the low-temperature, long-range ordered one separated by an intriguing, intermediatetemperature block phase, where outer ordered layers coexist with some inner disordered ones, the phase transition of the latter eventually displaying the signatures of a Kosterlitz-Thouless one. Finally, for n ≤ 7 the film collapses once and for all to a quasi-collinear order.\n\nThe complex phase diagram unveiled by such MC simulations awaken however a further intriguing question: to what extent the observed behavior may be considered a simple consequence of the competition between helical order and surface effects? I.e., is it just a matter of having such a competition or does the range of interactions also play a relevant role? Indeed, when the range of the interactions is large enough we have a greater number of planes which can be thought of as \"surface planes\", i.e. for which the number of interacting neighbors are significantly reduced with respect to the bulk layers; therefore, we expect that the larger the interaction range, the stronger should be the surface effects. But, at the same time, the same modulation of the magnetic order can", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0510.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0510.pdf", - "query": "Where are located the magnetic ions in the lattice of the studied layers ?", - "target_page": 2, - "target_passage": "the magnetic ions are located on the sites of a body-centered tetragonal (BCT) lattice", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. Olejnik,1, 2 P. Wadley,3 J. Haigh,3 K. W. Edmonds,3 R. P. Campion,3 A. W. Rushforth,3 B. L. Gallagher,3\n\nC. T. Foxon,3 T. Jungwirth,2, 3 J. 
Wunderlich,1, 2 S. S. Dhesi,4 S. Cavill,4 G. van der Laan,4 and E. Arenholz5\n\n1Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\nInstitute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic 3School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom\n\n4Diamond Light Source, Harwell Science and Innovation Campus,\n\n5Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n(Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\n2\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p-type non-magnetic spacers2 . 
However, the Curie temperature TC of (Ga,Mn)As is currently limited to 185 K in single layers3 , and is typically much lower for layers embedded within a heterostructure2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively4,5. Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature7 . Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature8,9. Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition, which may further disrupt the interface order. The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples7 . Demonstration of coupling between the bulk of the layers, i.e., an exchange bias effect, would provide direct evidence of the interface magnetic order. 
Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.\n\nHere, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. As with previous studies of FM metal/FM semiconductor bilayers4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures10,11) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref.7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260◦C, using previously established methods3,8. A low Mn concentration of x ≈ 0.03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼0 ◦C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. 
Mn and Fe L2,3 x-ray absorption and XMCD\n\nDidcot, Oxfordshire, OX11 0DE, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "measurements were performed on beamline I06 at the Diamond Light Source, and on beamline 4.0.2 at the Advanced Light Source. Total-electron yield (TEY) and fluorescence yield (FY) were monitored simultaneously using the sample drain current and the photocurrent of a diode mounted at 90◦ to the incident beam, respectively.\n\nSQUID magnetometry measurements were first performed on control Fe/GaAs(001) and (Ga,Mn)As/GaAs(001) samples, grown under the same conditions as the bilayers, to determine the magnetic anisotropies of the individual layers and the Curie temperature of the (Ga,Mn)As layer. The Fe film has a uniaxial magnetic anisotropy with easy axis along the [110] orientation, similar to previous studies6 . For the (Ga,Mn)As control sample, there is a competition between cubic and uniaxial magnetic anisotropies, with the former dominant at low temperatures and favoring easy axes along the in-plane h100i orientations, and the latter dominant close to TC (∼35 K) giving an easy axis along the [1¯10] orientation. Figure 1 shows [110] magnetization versus temperature curves and low temperature hysteresis loops for a bilayer film containing a 20 nm thick (Ga,Mn)As layer. The total remnant moment of the bilayer film decreases on cooling under zero magnetic field below the TC of the (Ga,Mn)As, indicating that this layer aligns antiparallel to the Fe magnetization at zero field. The hysteresis curve shows a two-step magnetization reversal, indicating different behavior of the Fe and (Ga,Mn)As layers, with the smaller loop attributed to the dilute moment (Ga,Mn)As film. The minor hysteresis loop shown in Fig. 1 clearly shows a shift from zero field by a bias field HE, indicating that the Fe layer induces an exchange bias in the magnetic semiconductor. 
The shape and size of the minor loop is in agreement with the hysteresis loop for the control (Ga,Mn)As sample, also shown in Fig. 1. This strongly indicates that the exchange bias affects the whole of the (Ga,Mn)As layer in the bilayer sample.\n\nSimilar behavior is observed for bilayer samples containing a 10 nm or 50 nm (Ga,Mn)As layer, with a bias field which is approximately inversely proportional to the thickness d of the ferromagnetic semiconductor layer (Fig. 1, inset). This 1/d dependence of HE was found previously for MnAs/(Ga,Mn)As bilayers4 , and is generally observed in exchanged-biased thin films12 . From this dependence it is possible to describe the exchange bias in terms of an interface energy per unit area, ∆E = MF SHEd = 0.003 erg/cm2 . This value is rather small compared to typical exchange bias systems12, reflecting the low moment density MF S of the diluted FM semiconductor layer. However, the bias field for a given (Ga,Mn)As thickness is larger than is observed for MnO/(Ga,Mn)As structures13, while the reproducibility and flexibility of the present structures is much higher due to the single-crystalline ferromagnetic nature of the Fe layer.\n\nTo confirm the presence of AFM interlayer coupling, we performed XMCD measurements at the Mn and Fe L2,3 absorption edges in order to determine the magnetic response of the individual elements. In L2,3 XMCD, electrons are excited from a 2p core level to the unoccupied 3d valence states of the element of interest by circularly polarized x-rays at the resonance energies of the transitions. The difference in absorption for opposite polarizations gives a direct and element-specific measurement of the projection of the 3d magnetic moment along the xray polarization vector. The absorption cross-section is conventionally obtained by measuring the decay products – either fluorescent x-rays or electrons – of the photoexcited core hole. 
The type of decay product measured determines the probing depth of the technique. For Mn L2,3 absorption, the probing depths for FY and TEY detection are λF Y ≈ 100 nm and λT EY ≈ 3 nm. In the current experiment, the Mn XMCD measured using FY and TEY are thus sensitive to the bulk of the (Ga,Mn)As film and the near-interface layers, respectively.\n\nFigure 2(a)-(c) shows the magnetic field dependence of XMCD asymmetry, defined as (Il − Ir)/(Il + Ir) where Il(r) is the absorption for left- (right-) circularly polarized x-rays. This is measured at the Fe and Mn L3 absorption peaks for a Fe(2 nm)/(Ga,Mn)As(10 nm) sample at 2 K. The external field is applied along the photon incidence direction, which is at 70◦ to the surface normal with an in-plane projection along the [110] axis. The XMCD data show that the Fe film displays a square hysteresis loop with a single magnetization switch, as expected for a monocrystalline Fe film with strong uniaxial magnetic anisotropy. The Mn XMCD shows a more complicated loop due to the effect of the interlayer coupling. The projected Mn moment aligns antiparallel to the Fe moment at remanence, and undergoes a magnetization reversal of opposite sign to the Fe. With further increase of the external magnetic field, the Mn moment gradually rotates away from antiparallel alignment with the Fe layer, and into the field direction. Qualitatively similar behavior is observed for the Fe(2 nm)/(Ga,Mn)As(20 nm) sample: the (Ga,Mn)As layer is aligned antiparallel to the Fe layer at zero field, although the bias field is lower by approximately a factor of two.\n\nClear differences are observed between the Mn XMCD hysteresis loops obtained using TEY and FY detection modes. For FY the magnitude of the XMCD is similar (but of opposite sign) at remanence and at high magnetic fields, whereas for TEY at remanence it is approximately a factor of two larger than at 1000 Oe. The Mn L2,3 XMCD spectra recorded at remanence and at 1000 Oe, shown in Fig. 
3, confirm this result. At remanence the FY and TEY detected XMCD have similar magnitudes. However, under a large external field the XMCD is substantially smaller in TEY than in FY, confirming that the net magnetization of the Mn ions near the interface is significantly less than in the bulk of the (Ga,Mn)As film. This is the case even up to the highest field applied (20 kOe). By applying the XMCD sum rules14 to the TEY data, and by comparing the spectra to previous measurements on well-characterized (Ga,Mn)As", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2449.pdf" - }, - { - "text": "FIG. 1: (colors online) (a): body-centered tetragonal (BCT) lattice with J0 in-plane coupling constant, and out-of-plane J1, and J2 competing interactions.\n\nbe achieved with different number of interacting layers: notably, nearest and next-nearest layers competitive interactions are enough to get a helical structure with a whatever pitch wavevector. Such observation gives us a possible way to solve the conundrum previously emerged, as we have the possibility of varying the range of interactions without modifying the helical pitch, thus decoupling the two relevant length scales along the film growth direction, and making accessible a range of n of the order of, or smaller than, the helical pitch, but still large enough that a substantial number of layers can behave as \"bulk\" layers. Therefore, while in the previous papers we have studied the properties of ultrathin magnetic films of Ho assuming a model with six interlayer exchange interactions, here we investigate by MC simulations the properties of the same system by making use of the simplest model Hamiltonian able to describe the onset of a helical magnetic order in Holmium, i.e. we consider only two inter-layer coupling constants, as previously done in Ref. 11.\n\nThe paper is organized as follows: In Sec. 
II the model Hamiltonian will be defined, and the MC techniques, and all the thermodynamic quantities relevant for this study, will be introduced. In Sec. III the results obtained for different thicknesses will be presented, both in the matter of the critical properties of the model and of the magnetic ordered structures observed. Finally, in Sec. IV we shall discuss such results, drawing also some conclusions.\n\n# II. MODEL HAMILTONIAN AND MONTE CARLO OBSERVABLES\n\nThe model Hamiltonian we use in our simulations is the minimal one able to describe helimagnetic structures:\n\n$${\\mathcal{H}}=-\\left[J_{0}\\sum_{\\langle i j\\rangle}{\\vec{S}}_{i}\\cdot{\\vec{S}}_{j}+J_{1}\\sum_{\\langle i k\\rangle}{\\vec{S}}_{i}\\cdot{\\vec{S}}_{k}+J_{2}\\sum_{\\langle i l\\rangle}{\\vec{S}}_{i}\\cdot{\\vec{S}}_{l}\\right].\\tag{1}$$\n\nS~ i are classical planar unit vectors representing the direction of the total angular momentum of the magnetic ions, whose magnitude p j(j + 1) (j = 8 for Holmium ions) is already encompassed within the definition of the interaction constants J0,1,2. As sketched in Fig. 1, the magnetic ions are located on the sites of a body-centered tetragonal (BCT) lattice; the first sum appearing in the Hamiltonian describes the in-plane (xy) nearest neighbor (NN) interaction, which is taken ferromagnetic (FM), with exchange strength J0 > 0; the second sum represents the coupling, of exchange strength J1, between spins belonging to nearest neighbor (NN) planes along the z-direction (which we will assume to coincide with the film growth direction); finally, the third sum takes into account the interaction, of exchange strength J2, between spins lying on next-nearest neighbor (NNN) planes along z. In order to have frustration, giving rise to noncollinear order along z in the bulk, NN interaction J1 can be taken both ferro- or antiferromagnetic, but NNN coupling J2 has necessarily to be antiferromagnetic, and the condition |J2| > |J1|/4 must be fulfilled. 
Such simplified Hamiltonian was already employed to simulate helical ordering in bulk systems by Diep1,17 and Loison18 . In the bulk limit, the state of minimal energy of a system described by Eq.(1) corresponds to a helical arrangement of spins. The ground state energy per spin is equal to eg(Qz) = [−4J0 − 2J1 (4 cos (Qzc ′ ) + δ cos (2Qzc ′ ))] where c ′ is the distance between NN layers, δ = J2 J1 , and Qzc ′ = arccos − 1 δ is the angle between spins lying on adjacent planes along the z-direction. The observed helical arrangement in bulk holmium corresponds to Qzc ′ ≃ 30.5 ◦10: such value can be obtained from the formula above with the set of coupling constants J0=67.2 K, J1=20.9 K, and J2 = −24.2 K, that we have employed in our simulations. The given values for the exchange constants are the same already used by Weschke et al. in Ref. 13 to interpret experimental data on Holmium films on the basis of a J1 − J2 model, after a proper scaling by the numbers of NN and NNN on neighboring layers of a BCT lattice.\n\nIn the following we will denote with n the film thickness, i.e. the number of spin layers along the z direction, and with L×L the number of spins in each layer (i.e., L is the lattice size along both the x and y directions). In our simulations thickness values from 1 to 24 were considered, while the range of lateral size L was from 8 to 64. Periodic boundary conditions were applied along x and y, while free boundaries were obviously taken along the film growth direction z.\n\nThermal equilibrium was attained by the usual Metropolis algorithm19, supplemented by the overrelaxed technique20 in order to speed-up the sampling of the spin configuration space: a typical \"Monte Carlo step\" was composed by four Metropolis and four-five over-relaxed moves per particle. Such judicious mix of moves is able both to get faster the thermal equilibrium and to minimize the correlation \"time\" between successive samples, i.e. 
the undesired effects due to lack of in", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0510.pdf" - }, - { - "text": "# Interplay among helical order, surface effects and range of interacting layers in ultrathin films.\n\nF. Cinti(1,2,3), A. Rettori(2,3), and A. Cuccoli(2)\n\n(1) Department of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2J1\n\n(2)CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (FI), Italy. and\n\n(3)CNR-INFM S3 National Research Center, I-41100 Modena, Italy\n\n(Dated: June 8, 2022)\n\nThe properties of helical thin films have been thoroughly investigated by classical Monte Carlo simulations. The employed model assumes classical planar spins in a body-centered tetragonal lattice, where the helical arrangement along the film growth direction has been modeled by nearest neighbor and next-nearest neighbor competing interactions, the minimal requirement to get helical order. We obtain that, while the in-plane transition temperatures remain essentially unchanged with respect to the bulk ones, the helical/fan arrangement is stabilized at more and more low temperature when the film thickness, n, decreases; in the ordered phase, increasing the temperature, a softening of the helix pitch wave-vector is also observed. Moreover, we show also that the simulation data around both transition temperatures lead us to exclude the presence of a first order transition for all analyzed sizes. Finally, by comparing the results of the present work with those obtained for other models previously adopted in literature, we can get a deeper insight about the entwined role played by the number (range) of interlayer interactions and surface effects in non-collinear thin films.\n\nPACS numbers: 64.60.an,64.60.De,75.10.Hk,75.40.Cx,75.70.Ak.\n\n# I. 
INTRODUCTION\n\nThe study of low dimensional frustrated magnetic systems1 still raises great interest, both in consequence of theoretical aspects, related to their peculiar critical properties2 , and in view of possible technological applications3 . Indeed, beside conventional ferromagnetic or antiferromagnetic phase transitions, in many new materials other nontrivial and unconventional forms of ordering have been observed4,5. A quantity of particular interest in this context is the spin chirality, an order parameter which turned out to be extremely relevant in, e.g., magnetoelectric materials6 , itinerant MnSi7 , binary compounds as FeGe8 , glass transition of spins9 , and XY helimagnets, as Holmium, Terbium or Dysprosium10. In the latter case, a new universality class was predicted because a Z2 × SO(2) symmetry is spontaneously broken in the ordered phase2 : In fact, when dealing with such systems, in addition to the SO(2) symmetry of the spin degrees of freedom S~ i , one has to consider also the Z2 symmetry of the spin chirality κij ∝ h S~ i × S~ j iz .\n\nFor these rare-earth elements, the development of new and sophisticated experimental methods11 has allowed to obtain ultra-thin films where the non-collinear modulation is comparable with the film thickness. Under such conditions the lack of translational invariance due to the presence of surfaces results decisive in order to observe a drastic change of the magnetic structures12. Recent experimental data on ultra-thin Holmium films13 have been lately interpreted and discussed14,15 on the basis of detailed classical Monte Carlo (MC) simulations of a spin Hamiltonian, which is believed to give a realistic modeling of bulk Holmium. 
Such Hamiltonian, proposed by Bohr et al.16, allows for competitive middle-range interactions by including six different exchange constants along the c crystallographic axis, and gives a helix pitch wave-vector Qz such that Qzc ′ ≃ 30◦ , where c ′ = c/2 is the distance between nearest neighboring spin layers parallel to the ab crystallographic planes, henceforth denoted also as x − y planes, while z will be taken parallel to c. For n > 16, n being the number of spin layers in the film, a correct bulk limit is reached, while for lower n the film properties are clearly affected by the strong competition among the helical pitch and the surface effects, which involve the majority of the spin layers. In the thickness range n = 9 − 16, i.e. right for thickness values comparable with the helical pitch, three different magnetic phases emerged, with the high-temperature, disordered, paramagnetic phase and the low-temperature, long-range ordered one separated by an intriguing, intermediatetemperature block phase, where outer ordered layers coexist with some inner disordered ones, the phase transition of the latter eventually displaying the signatures of a Kosterlitz-Thouless one. Finally, for n ≤ 7 the film collapses once and for all to a quasi-collinear order.\n\nThe complex phase diagram unveiled by such MC simulations awaken however a further intriguing question: to what extent the observed behavior may be considered a simple consequence of the competition between helical order and surface effects? I.e., is it just a matter of having such a competition or does the range of interactions also play a relevant role? Indeed, when the range of the interactions is large enough we have a greater number of planes which can be thought of as \"surface planes\", i.e. for which the number of interacting neighbors are significantly reduced with respect to the bulk layers; therefore, we expect that the larger the interaction range, the stronger should be the surface effects. 
But, at the same time, the same modulation of the magnetic order can", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0510.pdf" - }, - { - "text": "the dominant dynamic process, but does not allow one to probe this assumption. In Section III B we show how one may develop a dynamical density functional theory (DDFT) that describes the system at a similar level to the KMC. However, the DDFT may also be easily extended to include other effects such as fluid diffusion, that the KMC does not incorporate.\n\n### A. Kinetic Monte Carlo model\n\nThe kinetic Monte Carlo model for two-dimensional dewetting nanofluids [33] was first proposed in Ref. [35] and extended to include next-nearest neighbour interactions in [37]. The two key assumptions used are: (i) the relevant processes can be mapped on to a two-dimensional lattice gas model, thereby neglecting continuous changes in the thickness of the evaporating film, and (ii) all relevant dynamics results from diffusing nanoparticles and evaporating/condensing solvent.\n\nThe model builds on an Ising-type model for the liquid-gas phase transition. The surface is divided up into a regular array of lattice sites whose size is dictated by the nanoparticles. One then considers each lattice site to be occupied either by a nanoparticle, liquid or vapour. This effectively maps the system onto a two-dimensional two-component lattice gas having two fields n and l. The resulting three possible states of a cell are: liquid (l = 1, n = 0), nanoparticle (l = 0, n = 1), and vapour (l = 0, n = 0, i.e., cell empty). 
The energy of an overall configuration is given by the hamiltonian\n\n$$E\\,=\\,-\\frac{\\varepsilon_{nn}}{2}\\sum_{}n_{i}n_{j}\\,-\\,\\frac{\\varepsilon_{nl}}{2}\\sum_{}n_{i}l_{j}\\,-\\,\\frac{\\varepsilon_{ll}}{2}\\sum_{}l_{i}l_{j}\\,-\\,\\mu\\sum_{i}l_{i}\\tag{3}$$\n\nwhere P denotes a sum over nearest neighbour pairs and εll, εnn and εnl are the liquid-liquid, particle-particle and liquid-particle interaction energies, respectively. Fixing the three interaction strength parameters εll, εnn, εnl and the effective chemical potential µ determines the equilibrium state of the system. We choose εll as unit of energy – i.e. we set εll = 1.\n\nThe hamiltonian determines the equilibrium state and the energy landscape of the system. However, as the system 'dries in' during the course of the solvent evaporation, the final nanoparticle configurations do not necessarily represent equilibrium structures. This implies that the system dynamics is of paramount importance. It is determined by the possible Monte Carlo moves, their relative frequencies, and the probabilities for their acceptance. Two types of moves are allowed: (i) evaporation/condensation of liquid and (ii) diffusion of nanoparticles within the liquid. A mobility M corresponds to the ratio of cycles of particle and solvent moves and reflects the physical ratio of", - "page_start": 8, - "page_end": 8, - "source_file": "1001.2669.pdf" - }, - { - "text": "# Realization of the Exactly Solvable Kitaev Honeycomb Lattice Model in a Spin Rotation Invariant System\n\nFa Wang1\n\n1Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA\n\nThe exactly solvable Kitaev honeycomb lattice model is realized as the low energy effect Hamiltonian of a spin-1/2 model with spin rotation and time-reversal symmetry. The mapping to low energy effective Hamiltonian is exact, without truncation errors in traditional perturbation series expansions. 
This model consists of a honeycomb lattice of clusters of four spin-1/2 moments, and contains short-range interactions up to six-spin(or eight-spin) terms. The spin in the Kitaev model is represented not as these spin-1/2 moments, but as pseudo-spin of the two-dimensional spin singlet sector of the four antiferromagnetically coupled spin-1/2 moments within each cluster. Spin correlations in the Kitaev model are mapped to dimer correlations or spin-chirality correlations in this model. This exact construction is quite general and can be used to make other interesting spin-1/2 models from spin rotation invariant Hamiltonians. We discuss two possible routes to generate the high order spin interactions from more natural couplings, which involves perturbative expansions thus breaks the exact mapping, although in a controlled manner.\n\nPACS numbers: 75.10.Jm, 75.10.Kt\n\n## Contents\n\n| I. Introduction. | 1 |\n| --- | --- |\n| II. Formulation of the Pseudo-spin-1/2 from | |\n| Four-spin Cluster. | 2 |\n| III. Realization of the Kitaev Model. | 3 |\n| IV. Generate the High Order Physical Spin | |\n| Interactions by Perturbative Expansion. | 5 |\n| A. Generate the High Order Terms by Coupling | |\n| to Optical Phonon. | 5 |\n| B. Generate the High Order Terms by Magnetic | |\n| Interactions between Clusters. | 7 |\n| V. Conclusions. | 8 |\n| Acknowledgments | 8 |\n| A. Coupling between Distortions of a | |\n| Tetrahedron and the Pseudo-spins | 8 |\n| B. Derivation of the Terms Generated by | |\n| Second Order Perturbation of Inter-cluster | |\n| Magnetic Interactions | 9 |\n| References | 10 |\n\n#### I. INTRODUCTION.\n\nKitaev's exactly solvable spin-1/2 honeycomb lattice model1 (noted as the Kitaev model hereafter) has inspired great interest since its debut, due to its exact solvability, fractionalized excitations, and the potential to realize non-Abelian anyons. 
The model simply reads\n\n$$H_{\\rm Kitaev}=-\\sum_{x-{\\rm links}\\ }J_{x}\\tau_{j}^{x}\\tau_{k}^{x}-\\sum_{y-{\\rm links}\\ }J_{y}\\tau_{j}^{y}\\tau_{k}^{y}$$\n \n$$-\\sum_{z-{\\rm links}\\ }J_{z}\\tau_{j}^{z}\\tau_{k}^{z}$$\n\nwhere τ x,y,z are Pauli matrices, and x, y, z-links are defined in FIG. 1. It was shown by Kitaev1 that this spin-1/2 model can be mapped to a model with one Majorana fermion per site coupled to Ising gauge fields on the links. And as the Ising gauge flux has no fluctuation, the model can be regarded as, under each gauge flux configuration, a free Majorana fermion problem. The ground state is achieved in the sector of zero gauge flux through each hexagon. The Majorana fermions in this sector have Dirac-like gapless dispersion resembling that of graphene, as long as |Jx|, |Jy|, and |Jz| satisfy the triangular relation, sum of any two of them is greater than the third one1 . It was further proposed by Kitaev1 that opening of fermion gap by magnetic field can give the Ising vortices non-Abelian anyonic statistics, because the Ising vortex will carry a zero-energy Majorana mode, although magnetic field destroys the exact solvability.\n\nGreat efforts have been invested to better understand the properties of the Kitaev model. For example, several groups have pointed out that the fractionalized Majorana fermion excitations may be understood from the more familiar Jordan-Wigner transformation of 1D spin systems2,3. The analogy between the non-Abelian Ising vortices and vortices in p + ip superconductors has been raised in serveral works4–7. Exact diagonalization has been used to study the Kitaev model on small lattices8 . And perturbative expansion methods have been developed to study the gapped phases of the Kitaev-type models9 .\n\nMany generalizations of the Kitaev model have been", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0266.pdf" - }, - { - "text": "time scales for evaporation and diffusion. 
A large mobility M indicates fast diffusion as compared to evaporation. A trial move is accepted with the probability pacc = min[1, exp(−∆E/kT)] where k is the Boltzmann constant, T the temperature and ∆E is the change in energy resulting from the potential move. Note that particles are only allowed to move into wet areas of the substrate, i.e., onto cells with l = 1. This models zero diffusivity of the particles on a dry substrate. The replaced liquid fills the site left by the nanoparticle.\n\nWithout nanoparticles, the behaviour of the model is well known as it reduces to the classical two-dimensional Ising model [74]. For kT < kTc ≈ 0.567 liquid and vapour coexist when µ = µcoex = −2. For µ > −2 [µ < −2] eventually the liquid [vapour] dominates. A straight liquidgas interface will recede [advance] for µ < −2 [µ > −2], i.e. one finds evaporative dewetting [wetting] fronts. If one starts, however, with a substrate covered homogeneously by the liquid, for µ < −2 the film will dewet via a nucleation or spinodal-like process. If the nanoparticles are present, they form dried-in structures when all the liquid evaporates. The final structures do not normally change any further – at least on short time scales. However, if the liquid wets the particles (i.e. is attracted to the particles), over long times there might be a coarsening of the structures, facilitated by the adsorbed liquid. The dried-in patterns depend on the particular pathway taken by the evaporative dewetting process. They range from labyrinthine to polygonal network structures or holes in a dense particle layer. Some typical patterns are displayed in Fig. 2, for cases when the average surface coverage of the nanoparticles ρ av n = 0.2. Panels (a) and (b) result from a spinodal-like and nucleation and growth process, respectively. 
At first sight they look very similar to the patterns seen for the pure solvent and one might argue that the particles solely act as passive tracers and preserve the transient volatile dewetting structures of the solvent. This was suggested in Refs. [26–28] for dewetting collagen solutions. However, panels (c) and (d) indicate that the particles may at times play a rather more significant role. When the diffusion of the particles is slow, the evaporative dewetting fronts become transversely unstable and may result in strongly ramified patterns. This instability is caused by the nanoparticles. The lower their mobility, the stronger the fingering effect, i.e., there are more fingers in (c) than in (d) because in the latter the mobility is larger.\n\nThe front instability is intriguing as it results in strongly branched structures. As the dewetting front moves, new branches are continuously created and existing branches merge at the moving contact line. However, the mean finger number in the streamwise direction of the resulting ramified pattern is a constant. This behaviour is in contrast to the front instabilities found for dewetting", - "page_start": 9, - "page_end": 9, - "source_file": "1001.2669.pdf" - }, - { - "text": "## Models of electrolyte solutions from molecular descriptions: The example of NaCl solutions\n\nJohn Jairo Molina1,2,3 , ∗ Jean-Fran¸cois Dufrˆeche1,2,3 , † Mathieu\n\nSalanne1,2 , Olivier Bernard1,2 , Marie Jardat1,2 , and Pierre Turq1,2\n\n1 UPMC-Universit´e Paris 06, UMR 7195, PECSA, F-75005 Paris, France\n\nUMR 5257 CEA–CNRS–Universit´e Montpellier 2, Site de Marcoule,\n\nBˆatiment 426, BP 17171, 30207 Bagnols-sur-C`eze Cedex, France\n\nWe present a method to derive implicit solvent models of electrolyte solutions from all-atom descriptions; providing analytical expressions of the thermodynamic and structural properties of the ions consistent with the underlying explicit solvent representation. 
Effective potentials between ions in solution are calculated to perform perturbation theory calculations, in order to derive the best possible description in terms of charged hard spheres. Applying this method to NaCl solutions yields excellent agreement with the all-atom model, provided ion association is taken into account.\n\nSince the pioneering works of Debye, H¨uckel, and Onsager, electrolyte solutions have been commonly described by continuous solvent models, for which the McMillan-Mayer theory [1] provides a rigorous statistical-mechanical foundation. Within that level of description, simple phenomenological models such as the primitive model (PM), for which the ions are assimilated to charged hard spheres [2], can lead to explicit formulas for the thermodynamic and structural properties (e.g., with the help of the mean spherical approximation (MSA) [3] or the binding MSA (BIMSA) [4]). These models are the most practical to use [5], since they allow for a direct link between the experimental measurements and the microscopic parameters of the system. Nevertheless, they ignore the molecular structure of the solvent. Consequently, they cannot properly account for the complex specific effects of the ions, which appear in numerous biological, chemical, and physical interfacial phenomena [6, 7], without further developments.\n\nAn alternative procedure consists in carrying out molecular simulations, where both the solvent and solute are treated explicitly. After a rigorous averaging over the solvent configurations, a coarse-grained description of the ions, which still includes the effect of the solvent structure, can be obtained [8–11]. However, this set of methods is purely numeric; they do not provide any analytical expression for thermodynamic quantities. They are therefore restricted to simple geometries [12, 13] (bulk solutions or planar interfaces). 
The description of complex systems, such as porous or electrochemical materials, is still based on continuous solvent models [14].\n\nIn this letter we present a method aimed at bridging the gap between analytical and numerical approaches. It is based on the application of liquid perturbation theory (LPT) [15] to effective ion-ion potentials extracted from molecular dynamics (MD) results. Different approximations of the PM are employed for the case of NaCl electrolyte solutions: a two component model (MSA2), that only takes free ions into account, and two different three component models (MSA3 and BIMSA3), which include a third species (the contact ion pair). As we proceed to show, LPT allows us to select the best simple model which accurately accounts for the thermodynamics and the physical-chemistry of the system.\n\nThe first stage consists in calculating the McMillan-Mayer effective ion-ion interaction potentials V eff ij (r), by inverting the radial distribution functions (RDF) gij (r) obtained by MD. The simulations were carried out on a box of 2000 water molecules and 48 NaCl pairs using the same interaction potentials as in reference [16]. This setup corresponds to a concentration of 0.64 mol l−1 . NPT ensemble sampling at standard pressure and temperature was enforced, with a time step of 1 fs and a pressure bath coupling constant of 1 ps. An equilibration run of 0.25 ns was followed by a production run of 0.6 ns for five different initial configurations. The averages of the resulting RDF were then used for the potential inversion via the HNC closure [15]. These effective potentials are assumed to be concentration independent and will be used for simulations at all concentrations.\n\nSubtracting the long-range Coulombic potential V LR ij (r) (which depends on the dielectric constant of the solvent) from V eff ij (r), we obtain the short-range contribution V SR ij (r) to the effective potentials. These are given in Fig. 
1 (species 1 and 2 refer to Na+ and Cl− free ions, respectively). All the short-range potentials exhibit oscillations corresponding to the solvent layering between the ions, but this effect is particularly important for the cation-anion interaction: a considerable potential barrier (& 2kBT ) separates the first two attractive wells. To serve as a reference, Monte Carlo (MC) simulations were performed with these effective potentials; a comparison between MD and MC RDF is also provided in Fig. 1. The excellent agreement between both sets of RDF validates the HNC inversion procedure [17], and allows us to com-\n\n2 CNRS, UMR 7195, PECSA, F-75005 Paris, France 3\n\nInstitut de Chimie S´eparative de Marcoule (ICSM),\n\nElectronic address: john.molina@etu.upmc.fr\n\nElectronic address: jean-francois.dufreche@upmc.fr", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2648.pdf" - }, - { - "text": "also shift the spinodal and binodal lines as compared to the locations of these lines in the phase diagram for the pure solvent [41]. As a consequence, the solute concentration influences the hole nucleation rate. More importantly, the solute particles may also destabilise the dewetting fronts. As a result, one may find strongly ramified structures in all three systems [23, 25, 40, 42]. A selection of images exhibiting some of the possible structures is displayed in Fig.1.\n\nFor volatile solvents, the contact lines retract even for wetting fluids. It has been found that such evaporatively receding contact lines may deposit very regular line or ring patterns parallel to the moving contact line [24, 43]. The deposition of a single ring of colloids from a evaporating drop of colloidal suspension is well known as the 'coffee stain effect' [44]. 
Detailed investigations reveal the emergence of rich structures including multiple irregular rings, networks, regular droplet patterns, sawtooth patterns, Sierpinski carpets, and – in the case of DNA – liquid crystalline structures [22, 30, 45–49]. The deposition of regularly spaced straight lines orthogonal to the moving contact line has also been reported [50]. Droplet patterns may as well be created employing solvent-induced dewetting of glassy polymer layers below the glass transition temperature [51–53].\n\nNote that the dewetting of pure volatile liquids has also been studied experimentally [54] and theoretically [55–58]. In this case, different contact line instabilities have been observed for evaporating liquid drops [59, 60].\n\nIn the present article we review and preview the experiments and in particular the various modelling approaches for dewetting suspensions of (nano-)particles in volatile partially wetting solvents. After reviewing the basic experimental results in Section II, we discuss in Section III several theoretical approaches. In particular, we present a kinetic Monte Carlo model in Section III A, a dynamic density functional theory in Section III B, and a thin film evolution equation in Section III C. Finally, we conclude in Section IV by discussing advantages and shortcomings of the individual approaches and future challenges to all of them.\n\n### II. EXPERIMENT WITH NANOPARTICLE SOLUTIONS\n\nWe focus on experiments that use monodisperse colloidal suspensions of thiol-passivated gold nanoparticles in toluene [33, 34, 37–40, 61]. The gold core of 2 – 3 nm diameter is coated by a layer of alkyl-thiol molecules. The length of the carbon backbone of the thiol used in the experiments ranges from 6 to 12 carbon atoms (C6 to C12) [40]. By varying the chain length, one can control", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2669.pdf" - }, - { - "text": "on the model (see above). 
The purely two-dimensional character of the KMC was extended to a 'pseudo three-dimensional' one by making the effective chemical potential dependent on the mean liquid coverage [38]. As the latter is related to a mean film thickness, this corresponds to the introduction of a 'global' thickness-dependent disjoining pressure into the evaporation term without an explicit consideration of a film thickness. The amended model can reproduce bimodal structures that are beyond the scope of the purely two-dimensional model [38, 39]. Fully threedimensional models are also discussed in the literature [76, 77].\n\n### B. Dynamical Density Functional theory\n\nThe limitations of the kinetic Monte Carlo model introduced in the previous Section are related to its character as a two-dimensional lattice gas with only three states: gas, liquid or particle. This implies that (i) no liquid can be transported to a site on the surface already filled with liquid, i.e., diffusion of the liquid can not be incorporated in a sensible way and (ii) one is not able to distinguish between the influence of the short- and the long-range parts of the interactions with the substrate, as all such interactions are absorbed into the effective chemical potential.\n\nHowever, using dynamical density functional theory (DDFT) [78–83] one can develop a model for the processes in the ultrathin postcursor film without these limitations, although here we limit ourselves to developing the theory at the level of the KMC and solely discuss how to extend it to incorporate the influence of the liquid diffusion over the surface. Such a DDFT model describes the coupled dynamics of the density fields of the liquid ρl and the nanoparticles ρn. The densities ρl and ρn are defined as the probabilities of finding a given lattice site on the surface to be occupied by a film of liquid or by a nanoparticle, respectively. 
Note that the probability densities correspond to number densities as we use the lattice spacing σ = 1 as our unit of length.\n\nTo develop the DDFT, one must first derive the underlying free energy functional F[ρl , ρn], and secondly, devise dynamical equations for both density fields that account for the conserved and the non-conserved aspects of their dynamics, i.e., transport and phase change processes, respectively. For a system governed by the hamiltonian (3), we may construct a mean-field (Bragg-Williams) approximation for the free energy of the system [78, 84] which contains an entropic contribution and contributions from the interactions between the different species (nanoparticles and liquid). The free energy is a semi-grand free energy, since the liquid is treated grand canonically (it is coupled to a reservoir with chemical potential µ), whereas the nanoparticles are treated in the", - "page_start": 13, - "page_end": 13, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0510.pdf", - "query": "What is the minimum number of spin layers in a film before a correct bulk is reached ?", - "target_page": 1, - "target_passage": "For n > 16, n being the number of spin layers in the film, a correct bulk limit is reached", - "chunk_present": { - "presence": true, - "index": 9 - } - }, - "top_chunk": [ - { - "text": "chirality interactions in cold atom optical lattices has been proposed38 .\n\nOur model (8) is achieved at second order of the perturbation series. Higher order terms become truncation errors but may be controlled by small parameters λx,y,z/Jcluster ∼ p |Jx,y,z|/Jcluster.\n\n### V. CONCLUSIONS.\n\nWe constructed the exactly solvable Kitaev honeycomb model1 as the exact low energy effective Hamiltonian of a spin-1/2 model [equations (8) or (9)] with spin-rotation and time reversal symmetry. 
The spin in Kitaev model is represented as the pseudo-spin in the two-fold degenerate spin singlet subspace of a cluster of four antiferromagnetically coupled spin-1/2 moments. The physical spin model is a honeycomb lattice of such four-spin clusters, with certain inter-cluster interactions. The machinery for the exact mapping to pseudo-spin Hamiltonian was developed (see e.g. TABLE I), which is quite general and can be used to construct other interesting (exactly solvable) spin-1/2 models from spin rotation invariant systems.\n\nIn this construction the pseudo-spin correlations in the Kitaev model will be mapped to dimer or spin-chirality correlations in the physical spin system. The corresponding picture of the fractionalized Majorana fermion excitations and Ising vortices still remain to be clarified.\n\nThis exact construction contains high order physical spin interactions, which is undesirable for practical implementation. We described two possible approaches to reduce this problem: generating the high order spin interactions by perturbative expansion of the coupling to optical phonon, or the magnetic coupling between clusters. This perturbative construction will introduce truncation error of perturbation series, which may be controlled by small expansion parameters. Whether these constructions can be experimentally engineered is however beyond the scope of this study. It is conceivable that other perturbative expansion can also generate these high order spin interactions, but this possibility will be left for future works.\n\n#### Acknowledgments\n\nThe author thanks Ashvin Vishwanath, Yong-Baek Kim and Arun Paramekanti for inspiring discussions, and Todadri Senthil for critical comments. The author is supported by the MIT Pappalardo Fellowship in Physics.\n\n# Appendix A: Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n\nIn this Appendix we reproduce from Ref.35 the couplings of all tetrahedron distortion modes to the spin system. 
And convert them to pseudo-spin notation in the physical spin singlet sector.\n\nConsider a general small distortion of the tetrahedron, the spin Hamiltonian becomes\n\n$$H_{\\rm cluster},\\ {\\rm SL}=(J_{\\rm cluster}/2)(\\sum_{\\ell}{\\bf S}_{\\ell})^{2}+J^{\\prime}\\sum_{\\ell}J_{x}\\tau_{j}^{x}\\tau_{k}^{x}-\\sum_{y-{\\rm links}\\ }J_{y}\\tau_{j}^{y}\\tau_{k}^{y}$$\n \n$$-\\sum_{z-{\\rm links}\\ }J_{z}\\tau_{j}^{z}\\tau_{k}^{z}$$\n\nwhere τ x,y,z are Pauli matrices, and x, y, z-links are defined in FIG. 1. It was shown by Kitaev1 that this spin-1/2 model can be mapped to a model with one Majorana fermion per site coupled to Ising gauge fields on the links. And as the Ising gauge flux has no fluctuation, the model can be regarded as, under each gauge flux configuration, a free Majorana fermion problem. The ground state is achieved in the sector of zero gauge flux through each hexagon. The Majorana fermions in this sector have Dirac-like gapless dispersion resembling that of graphene, as long as |Jx|, |Jy|, and |Jz| satisfy the triangular relation, sum of any two of them is greater than the third one1 . It was further proposed by Kitaev1 that opening of fermion gap by magnetic field can give the Ising vortices non-Abelian anyonic statistics, because the Ising vortex will carry a zero-energy Majorana mode, although magnetic field destroys the exact solvability.\n\nGreat efforts have been invested to better understand the properties of the Kitaev model. For example, several groups have pointed out that the fractionalized Majorana fermion excitations may be understood from the more familiar Jordan-Wigner transformation of 1D spin systems2,3. The analogy between the non-Abelian Ising vortices and vortices in p + ip superconductors has been raised in serveral works4–7. Exact diagonalization has been used to study the Kitaev model on small lattices8 . 
And perturbative expansion methods have been developed to study the gapped phases of the Kitaev-type models9 .\n\nMany generalizations of the Kitaev model have been", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0266.pdf" - }, - { - "text": "FIG. 1: The honeycomb lattice for the Kitaev model. Filled and open circles indicate two sublattices. x, y, z label the links along three different directions used in (1).\n\nderived as well. There have been several proposals to open the fermion gap for the non-Abelian phase without spoiling exact solvability4,6. And many generalizations to other(even 3D) lattices have been developed in the last few years10–16. All these efforts have significantly enriched our knowledge of exactly solvable models and quantum phases of matter.\n\nHowever, in the original Kitaev model and its later generalizations in the form of spin models, spin rotation symmetry is explicitly broken. This makes them harder to realize in solid state systems. There are many proposals to realized the Kitaev model in more controllable situations, e.g. in cold atom optical lattices17,18, or in superconducting circuits19. But it is still desirable for theoretical curiosity and practical purposes to realize the Kitaev-type models in spin rotation invariant systems.\n\nIn this paper we realize the Kitaev honeycomb lattice model as the low energy Hamiltonian for a spin rotation invariant system. The trick is not to use the physical spin as the spin in the Kitaev model, instead the spin-1/2 in Kitaev model is from some emergent two-fold degenerate low energy states in the elementary unit of physical system. This type of idea has been explored recently by Jackeli and Khaliullin20, in which the spin-1/2 in the Kitaev model is the low energy Kramers doublet created by strong spin-orbit coupling of t2g orbitals. 
In the model presented below, the Hilbert space of spin-1/2 in the Kitaev model is actually the two dimensional spin singlet sector of four antiferromagnetically coupled spin-1/2 moments, and the role of spin-1/2 operators(Pauli matrices) in the Kitaev model is replaced by certain combinations of Sj ·Sk [or the spin-chirality Sj ·(Sk ×Sℓ)] between the four spins.\n\nOne major drawback of the model to be presented is that it contains high order spin interactions(involves up to six or eight spins), thus is still unnatural. However it opens the possibility to realize exotic (exactly solvable) models from spin-1/2 Hamiltonian with spin rotation invariant interactions. We will discuss two possible routes to reduce this artificialness through controlled perturbative expansions, by coupling to optical phonons or by magnetic couplings between the elementary units.\n\nThe outline of this paper is as follows. In Section II we will lay out the pseudo-spin-1/2 construction. In Sec-\n\nFIG. 2: Left: the physical spin lattice for the model (8). The dash circles are honeycomb lattice sites, each of which is actually a cluster of four physical spins. The dash straight lines are honeycomb lattice bonds, with their type x, y, z labeled. The interaction between clusters connected by x, y, z bonds are the Jx,y,z terms in (8) or (9) respectively. Note this is not the 3-12 lattice used in Ref.9,10. Right: enlarged picture of the clusters with the four physical spins labeled as 1, . . . , 4. Thick solid bonds within one cluster have large antiferromagnetic Heisenberg coupling Jcluster.\n\ntion III the Kitaev model will be explicitly constructed using this formalism, and some properties of this construction will be discussed. In Section IV we will discuss two possible ways to generate the high order spin interactions involved in the construction of Section III by perturbative expansions. Conclusions and outlook will be summarized in Section V.\n\n# II. 
FORMULATION OF THE PSEUDO-SPIN-1/2 FROM FOUR-SPIN CLUSTER.\n\nIn this Section we will construct the pseudo-spin-1/2 from a cluster of four physical spins, and map the physical spin operators to pseudo-spin operators. The mapping constructed here will be used in later Sections to construct the effective Kitaev model. In this Section we will work entirely within the four-spin cluster, all unspecified physical spin subscripts take values 1, . . . , 4.\n\nConsider a cluster of four spin-1/2 moments(called physical spins hereafter), labeled by S1,...,4, antiferromagnetically coupled to each other (see the right bottom part of FIG. 2). The Hamiltonian within the cluster(up to a constant) is simply the Heisenberg antiferromagnetic(AFM) interactions,\n\n$$H_{\\rm cluster}=\\left(J_{\\rm cluster}/2\\right)\\left({\\bf S}_{1}+{\\bf S}_{2}+{\\bf S}_{3}+{\\bf S}_{4}\\right)^{2}\\tag{2}$$\n\nThe energy levels should be apparent from this form: one group of spin-2 quintets with energy 3Jcluster, three groups of spin-1 triplets with energy Jcluster, and two spin singlets with energy zero. We will consider large positive", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0266.pdf" - }, - { - "text": "Figure 4.32. Spin Characteristics", - "page_start": 327, - "page_end": 327, - "source_file": "00-80T-80.pdf" - }, - { - "text": "to a certain extent the particle-particle attraction. Normally, the solution is deposited on to a plain silicon substrate that is covered by the native oxide layer only [34]. However, one may locally change the wetting behaviour of the solvent by further oxidising the substrate [38]. By adding excess thiol one can also vary the properties of the solvent [40].\n\nTwo different procedures are employed for the deposition of the solution on to the substrate: spincoating or a meniscus technique [61, 62]. The choice is important as it strongly influences the evaporation rate and, as a result, the pattern formation process. 
When using spin-coating, one finds that directly after deposition, evaporation competes with dewetting until all the solvent has evaporated. The resulting deposits of nanoparticles are imaged by atomic force microscopy (AFM). For spin-coated films, the evaporation rate is high and structuring is normally finished before the spincoater is stopped. Conversely, the solvent evaporation rate is strongly decreased when employing the meniscus technique [61], i.e., by depositing a drop of solution on a Teflon ring that is wetted by the solvent. This allows for a better control of the process and enables the use of contrast-enhanced microscopy to observe the dewetting process in situ [40]. All pattern formation is confined to the region of the receding contact line of toluene, silicon and air. With both techniques one may find mono-modal or bi-modal polygonal networks [34], labyrinthine spinodal structures, or branched patterns (see Fig. 1). The meniscus technique allows for the study of branched structures in a more controlled manner. The work in Ref. [40] indicates that fingering strongly depends on the interaction strength of the particles, i.e., on the chain length of the thiol molecules coating the gold cores. For short chains (C5 and C8) no formation of branched structures is observed. At similar concentrations, well-developed branched structures are formed for longer chains (C10 and C12). For even longer chains (C14), however, one again finds less branching. It also depends on the amount of excess thiol in the solvent (for details see Ref. [40]).\n\nWhen following the evolution of the branched patterns in situ (see the complementary video material of Ref. [40]), one clearly observes that different processes occur on different lenght scales. First, a macroscopic dewetting front recedes, leaving behind a seemingly dry substrate. The macroscopic front can be transversely unstable resulting in large-scale (> 100µm) strongly anisotropic fingered structures. 
For fronts that move relatively quickly these macroscopic structures cover all the available substrate. However, when at a later stage the macroscopic front becomes slower, those fingers become scarce and 'macroscopic fingering' finally ceases. At this stage it is possible to appreciate that the seemingly dry region left behind by the front is not at all dry, but covered by an ultrathin 'postcursor' film that is itself unstable. The thickness of this film", - "page_start": 5, - "page_end": 5, - "source_file": "1001.2669.pdf" - }, - { - "text": "Another note to take is that it is not necessary to have such a highly symmetric cluster Hamiltonian (2). The mappings to pseudo-spin-1/2 should work as long as the ground states of the cluster Hamiltonian are the two-fold degenerate singlets. One generalization, which conforms the symmetry of the lattice in FIG. 2, is to have\n\n$$H_{\\rm cluster}=(J_{\\rm cluster}/2)(r\\cdot{\\bf S}_{1}+{\\bf S}_{2}+{\\bf S}_{3}+{\\bf S}_{4})^{2}\\tag{11}$$\n\nwith Jcluster > 0 and 0 < r < 3. However this is not convenient for later discussions and will not be used.\n\nWe briefly describe some of the properties of (8). Its low energy states are entirely in the space that each of the clusters is a physical spin singlet (called cluster singlet subspace hereafter). Therefore physical spin correlations are strictly confined within each cluster. The excitations carrying physical spin are gapped, and their dynamics are 'trivial' in the sense that they do not move from one cluster to another. But there are non-trivial low energy physical spin singlet excitations, described by the pseudospins defined above. The correlations of the pseudo-spins can be mapped to correlations of their corresponding physical spin observables (the inverse mappings are not unique, c.f. TABLE I). For example τ x,y correlations become certain dimer-dimer correlations, τ z correlation becomes chirality-chirality correlation, or four-dimer correlation. 
It will be interesting to see the corresponding picture of the exotic excitations in the Kitaev model, e.g. the Majorana fermion and the Ising vortex. However this will be deferred to future studies.\n\nIt is tempting to call this as an exactly solved spin liquid with spin gap (∼ Jcluster), an extremely short-range resonating valence bond(RVB) state, from a model with spin rotation and time reversal symmetry. However it should be noted that the unit cell of this model contains an even number of spin-1/2 moments (so does the original Kitaev model) which does not satisfy the stringent definition of spin liquid requiring odd number of electrons per unit cell. Several parent Hamiltonians of spin liquids have already been constructed. See for example, Ref.24–27 .\n\n# IV. GENERATE THE HIGH ORDER PHYSICAL SPIN INTERACTIONS BY PERTURBATIVE EXPANSION.\n\nOne major drawback of the present construction is that it involves high order interactions of physical spins[see (8) and (9)], thus is 'unnatural'. In this Section we will make compromises between exact solvability and naturalness. We consider two clusters j and k and try to generate the Jx,y,z interactions in (7) from perturbation series expansion of more natural(lower order) physical spin interactions. Two different approaches for this purpose will be laid out in the following two Subsections. In Subsection IV A we will consider the two clusters as two tetrahedra, and couple the spin system to certain optical phonons, further coupling between the phonon modes\n\nFIG. 3: Illustration of the tetragonal to orthorhombic Q E 1 (top) and Q E 2 (bottom) distortion modes. (a) Perspective view of the tetrahedron. 1, . . . , 4 label the spins. Arrows indicate the motion of each spin under the distortion mode. (b) Top view of (a). (c)(d) Side view of (a).\n\nof the two clusters can generate at lowest order the desired high order spin interactions. In Subsection IV B we will introduce certain magnetic, e.g. 
Heisenberg-type, interactions between physical spins of different clusters, at lowest order(second order) of perturbation theory the desired high order spin interactions can be achieved. These approaches involve truncation errors in the perturbation series, thus the mapping to low energy effect Hamiltonian will no longer be exact. However the error introduced may be controlled by small expansion parameters. In this Section we denote the physical spins on cluster j(k) as j1, . . . , j4 (k1, . . . , k4), and denote pseudo-spins on cluster j(k) as ~τj (~τk).\n\n# A. Generate the High Order Terms by Coupling to Optical Phonon.\n\nIn this Subsection we regard each four-spin cluster as a tetrahedron, and consider possible optical phonon modes(distortions) and their couplings to the spin system. The basic idea is that the intra-cluster Heisenberg coupling Jcluster can linearly depend on the distance between physical spins. Therefore certain distortions of the tetrahedron couple to certain linear combinations of Sℓ · Sm. Integrating out phonon modes will then generate high order spin interactions. This idea has been extensively studied and applied to several magnetic materials28–34. More details can be found in a recent review by Tchernyshyov and Chern35. And we will frequently use their notations. In this Subsection we will use the representation (5) for τ z .\n\nConsider first a single tetrahedron with four spins 1, . . . , 4. The general distortions of this tetrahedron can be classified by their symmetry (see for example Ref.35). Only two tetragonal to orthorhombic distortion modes, QE 1 and QE 2 (illustrated in FIG. 3), couple to the pseudospins defined in Section II. A complete analysis of all modes is given in Appendix A. The coupling is of the", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0266.pdf" - }, - { - "text": "excessive angles of attack. Of course, a low speed airplane could be: designed to be spinproof by making it stallproof. 
By limiting the amount of control deflection, the airplane may not have the longitudinal control power to trim to maximum lift angle of attack. Such a provision may be possible for certain light planes and commercial aircraft but would create an unrealistic and impractical limitation on the utility of a military airplane.\n\nThe modern high speed airplane configuration is typified by low aspect ratio, swept wing planforms with relatively large yaw and pitch inertia. The aerodynamic characteristics of such a configuration are shown in figure 4.32. The lift curve (C, versus U) is quite shallow at high angles of attack and maximum lift is not clearly defined. When this type of airplane is provided a rolling motion at high angles of attack, relatively small changes in C, take place. When this effect is combined with the relatively short span of this type airplane, it is apparent that the wing autorotation contribution will be quite weak and will not be a predominating pro-spin moment. The relatively large changes in drag coefficient with rolling motion imply .a predominance of yaw for the spin of the high speed airplane configuration.\n\nActually, various other factors contribute to the predominating yaw tendency for the spin of the modern airplane configuration. The static directional stability deteriorates at high angles of attack and may be so weak that extemely large yaw displacements result. In certain instances, very high angles of attack may bring such a decay in directional stability that a \"slice\" or extreme yaw displacement takes place before a true spin is apparent. At these high angles of attack, the adverse yaw due to roll and aileron deflection can be very strong and create large yaw displacements of the airplane prior to realizing a stall.\n\nThe aircraft with the relatively large, long fuselage can exhibit a significant moment contribution from the fuselage alone. 
The cross flow pattern on the fuselage at high angles of\n\nattack is capable of producing pro-spin moments of considerable magnitude which contribute to the self-sustaining nature of the spin. Also, the large distributed mass of the fuselage in rolling-yawing rotation contributes to inertia moments which flatten the spin and place the aircraft at extreme angles of attack.\n\nThe spin recovery of the modern high speed airplane involves principles which are similar to those of the spin recovery of the conventional airplane. However, the nature of the spin for the modern configuration may involve specific differences in technique necessary to reduce the sideslip and angle of attack. The use of opposite rudder to control the sideslip and effect recovery will depend on the effectiveness of the rudder when the airplane is in the spin. At high positive angles of attack and high sideslip the rudder effectiveness may be reduced and additional anti-spin moments must be provided for rapid recovery. The deflection of ailerons into the spin reduces the autorotation rolling moment and can produce adverse yaw to aid the rudder yawing moment in effecting recovery.\n\nThere may be many other specific differences in the technique necessary to effect spin recovery . The effectiveness of the rudder during recovery may be altered by the position of elevators or horizontal tail. Generally, full aft stick may be necessary during the initial phase of recovery to increase the effectiveness of the rudder. The use of power during the spin recovery of a propeller powered airplane may or may not aid recovery depending on the specific airplane and the particular nature of the slipstream effects. The use of power during the spin recovery of a jet powered airplane induces no significant or helpful flow but does offer the possibility of a severe compressor stall and adverse gyroscopic moments. 
Since the airplane is at high angle of attack and sideslip, the flow at the inlet may be very poor and the staI1 limits considerably reduced. These items serve to point out possible differences in technique required for various configurations. The spin recovery specific for", - "page_start": 328, - "page_end": 328, - "source_file": "00-80T-80.pdf" - }, - { - "text": "FIG. 8: (Colour online) Space-time plots are given for (left) the film thickness h and (right) the nanoparticle layer height hp = hφ. The plot corresponds to the complete evolution resulting in the ring profile of Fig. 6(b). In both panels bright [dark] parts denote high [low] regions. The prominent central dark-bright border in the left panel indicates the change of the position of the contact line in time. Over time, four regimes can be distinguished: (i) fast motion before pinning, (ii) nearly no front motion during self-pinning, (iii) slow motion after depinning, and (iv) final evaporation from the center.\n\nshould also be investigated further in the simple case presented here.\n\n### IV. CONCLUSION\n\nWe have discussed recent work on pattern formation processes in films and drops of evaporating suspensions/solutions of polymers and particles. After reviewing experiments on suspensions of thiol-coated gold nanoparticles in toluene we have focused on the modelling of the transport and phase change processes involved. A theoretical approach to the modelling of the hydrodynamics on the mesoscale has been described as well as more microscopic models for the dynamics in the observed nanoscopic 'postcursor' film. 
In particular, we have introduced (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nThe kinetic Monte Carlo model and the dynamical density functional theory can both be used to investigate and understand the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor' film that remains behind the mesoscopic dewetting front. They are, however, not capable of describing the dynamical processes in a meso", - "page_start": 22, - "page_end": 22, - "source_file": "1001.2669.pdf" - }, - { - "text": "time scales for evaporation and diffusion. A large mobility M indicates fast diffusion as compared to evaporation. A trial move is accepted with the probability pacc = min[1, exp(−∆E/kT)] where k is the Boltzmann constant, T the temperature and ∆E is the change in energy resulting from the potential move. Note that particles are only allowed to move into wet areas of the substrate, i.e., onto cells with l = 1. This models zero diffusivity of the particles on a dry substrate. The replaced liquid fills the site left by the nanoparticle.\n\nWithout nanoparticles, the behaviour of the model is well known as it reduces to the classical two-dimensional Ising model [74]. For kT < kTc ≈ 0.567 liquid and vapour coexist when µ = µcoex = −2. For µ > −2 [µ < −2] eventually the liquid [vapour] dominates. A straight liquidgas interface will recede [advance] for µ < −2 [µ > −2], i.e. one finds evaporative dewetting [wetting] fronts. If one starts, however, with a substrate covered homogeneously by the liquid, for µ < −2 the film will dewet via a nucleation or spinodal-like process. If the nanoparticles are present, they form dried-in structures when all the liquid evaporates. The final structures do not normally change any further – at least on short time scales. However, if the liquid wets the particles (i.e. 
is attracted to the particles), over long times there might be a coarsening of the structures, facilitated by the adsorbed liquid. The dried-in patterns depend on the particular pathway taken by the evaporative dewetting process. They range from labyrinthine to polygonal network structures or holes in a dense particle layer. Some typical patterns are displayed in Fig. 2, for cases when the average surface coverage of the nanoparticles ρ av n = 0.2. Panels (a) and (b) result from a spinodal-like and nucleation and growth process, respectively. At first sight they look very similar to the patterns seen for the pure solvent and one might argue that the particles solely act as passive tracers and preserve the transient volatile dewetting structures of the solvent. This was suggested in Refs. [26–28] for dewetting collagen solutions. However, panels (c) and (d) indicate that the particles may at times play a rather more significant role. When the diffusion of the particles is slow, the evaporative dewetting fronts become transversely unstable and may result in strongly ramified patterns. This instability is caused by the nanoparticles. The lower their mobility, the stronger the fingering effect, i.e., there are more fingers in (c) than in (d) because in the latter the mobility is larger.\n\nThe front instability is intriguing as it results in strongly branched structures. As the dewetting front moves, new branches are continuously created and existing branches merge at the moving contact line. However, the mean finger number in the streamwise direction of the resulting ramified pattern is a constant. This behaviour is in contrast to the front instabilities found for dewetting", - "page_start": 9, - "page_end": 9, - "source_file": "1001.2669.pdf" - }, - { - "text": "# Interplay among helical order, surface effects and range of interacting layers in ultrathin films.\n\nF. Cinti(1,2,3), A. Rettori(2,3), and A. 
Cuccoli(2)\n\n(1) Department of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2J1\n\n(2)CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (FI), Italy. and\n\n(3)CNR-INFM S3 National Research Center, I-41100 Modena, Italy\n\n(Dated: June 8, 2022)\n\nThe properties of helical thin films have been thoroughly investigated by classical Monte Carlo simulations. The employed model assumes classical planar spins in a body-centered tetragonal lattice, where the helical arrangement along the film growth direction has been modeled by nearest neighbor and next-nearest neighbor competing interactions, the minimal requirement to get helical order. We obtain that, while the in-plane transition temperatures remain essentially unchanged with respect to the bulk ones, the helical/fan arrangement is stabilized at more and more low temperature when the film thickness, n, decreases; in the ordered phase, increasing the temperature, a softening of the helix pitch wave-vector is also observed. Moreover, we show also that the simulation data around both transition temperatures lead us to exclude the presence of a first order transition for all analyzed sizes. Finally, by comparing the results of the present work with those obtained for other models previously adopted in literature, we can get a deeper insight about the entwined role played by the number (range) of interlayer interactions and surface effects in non-collinear thin films.\n\nPACS numbers: 64.60.an,64.60.De,75.10.Hk,75.40.Cx,75.70.Ak.\n\n# I. INTRODUCTION\n\nThe study of low dimensional frustrated magnetic systems1 still raises great interest, both in consequence of theoretical aspects, related to their peculiar critical properties2 , and in view of possible technological applications3 . Indeed, beside conventional ferromagnetic or antiferromagnetic phase transitions, in many new materials other nontrivial and unconventional forms of ordering have been observed4,5. 
A quantity of particular interest in this context is the spin chirality, an order parameter which turned out to be extremely relevant in, e.g., magnetoelectric materials6 , itinerant MnSi7 , binary compounds as FeGe8 , glass transition of spins9 , and XY helimagnets, as Holmium, Terbium or Dysprosium10. In the latter case, a new universality class was predicted because a Z2 × SO(2) symmetry is spontaneously broken in the ordered phase2 : In fact, when dealing with such systems, in addition to the SO(2) symmetry of the spin degrees of freedom S~ i , one has to consider also the Z2 symmetry of the spin chirality κij ∝ h S~ i × S~ j iz .\n\nFor these rare-earth elements, the development of new and sophisticated experimental methods11 has allowed to obtain ultra-thin films where the non-collinear modulation is comparable with the film thickness. Under such conditions the lack of translational invariance due to the presence of surfaces results decisive in order to observe a drastic change of the magnetic structures12. Recent experimental data on ultra-thin Holmium films13 have been lately interpreted and discussed14,15 on the basis of detailed classical Monte Carlo (MC) simulations of a spin Hamiltonian, which is believed to give a realistic modeling of bulk Holmium. Such Hamiltonian, proposed by Bohr et al.16, allows for competitive middle-range interactions by including six different exchange constants along the c crystallographic axis, and gives a helix pitch wave-vector Qz such that Qzc ′ ≃ 30◦ , where c ′ = c/2 is the distance between nearest neighboring spin layers parallel to the ab crystallographic planes, henceforth denoted also as x − y planes, while z will be taken parallel to c. For n > 16, n being the number of spin layers in the film, a correct bulk limit is reached, while for lower n the film properties are clearly affected by the strong competition among the helical pitch and the surface effects, which involve the majority of the spin layers. 
In the thickness range n = 9 − 16, i.e. right for thickness values comparable with the helical pitch, three different magnetic phases emerged, with the high-temperature, disordered, paramagnetic phase and the low-temperature, long-range ordered one separated by an intriguing, intermediatetemperature block phase, where outer ordered layers coexist with some inner disordered ones, the phase transition of the latter eventually displaying the signatures of a Kosterlitz-Thouless one. Finally, for n ≤ 7 the film collapses once and for all to a quasi-collinear order.\n\nThe complex phase diagram unveiled by such MC simulations awaken however a further intriguing question: to what extent the observed behavior may be considered a simple consequence of the competition between helical order and surface effects? I.e., is it just a matter of having such a competition or does the range of interactions also play a relevant role? Indeed, when the range of the interactions is large enough we have a greater number of planes which can be thought of as \"surface planes\", i.e. for which the number of interacting neighbors are significantly reduced with respect to the bulk layers; therefore, we expect that the larger the interaction range, the stronger should be the surface effects. But, at the same time, the same modulation of the magnetic order can", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0510.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_JWN_2014.pdf", - "query": "What the rough sales amount of the nordstrom.com website ?", - "target_page": 3, - "target_passage": "$2 billion in nordstrom.com sales", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### Net Sales (2014 vs. 2013)\n\nIn 2014, total company net sales increased 7.8%, which was attributable to the comparable sales increase of 4.0%. 
During the year, we opened three Nordstrom full-line stores, including our first store in Canada, and 27 Nordstrom Rack stores. Additionally, as a result of the acquisition of Trunk Club, we acquired four Trunk Club showrooms and opened one additional Trunk Club showroom in 2014. These additions increased our square footage by 5.5% and represented 2.8% of our total net sales for 2014.\n\nNordstrom net sales, which consist of the U.S. full-line and Nordstrom.com businesses, were $9,678 in 2014, an increase of 3.8% compared with 2013, with comparable sales up 3.6%. These increases reflected continued momentum in our Nordstrom.com channel. Both the number of items sold and the average selling price increased on a comparable basis in 2014. Category highlights included Accessories, Cosmetics and Men's Apparel.\n\nU.S. full-line net sales for 2014 were $7,682, a decrease of 0.3% compared with 2013 and comparable sales decreased by 0.5%. The topperforming geographic regions for full-line stores were the Southeast and Southwest.\n\nOur Nordstrom.com, Nordstromrack.com and HauteLook channels continued to experience outsized growth. Nordstrom.com net sales increased 23% and Nordstromrack.com and HauteLook net sales increased 22%, both driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales increased $477, or 17%, compared with 2013, reflecting incremental volume from existing stores and the impact of 27 new stores since fiscal 2013. Comparable sales increased 3.8% for the year. Shoes and Accessories were the top-performing categories for the year. On a comparable basis, the average selling price of Nordstrom Rack merchandise increased while the number of items sold was flat.\n\n#### Net Sales (2013 vs. 
2012)\n\nNet sales for 2013 increased 3.4% compared with 2012, driven by a comparable sales increase of 2.5%, attributable to growth at Nordstrom.com and Nordstrom Rack's accelerated store expansion. During 2013, we opened 22 Nordstrom Rack stores and relocated one Nordstrom full-line store and two Nordstrom Rack stores. These additions represented 1.6% of our total net sales for 2013 and increased our square footage by 2.9%. The 53rd week in 2012 contributed approximately $162 in additional net sales.\n\nNordstrom net sales for 2013 were $9,327, an increase of 1.0% compared with 2012, with comparable sales up 2.3%. Strong growth at Nordstrom.com was partially offset by sales decreases at our full-line stores. Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012. Category highlights included Cosmetics, Men's Shoes and Women's Apparel.\n\nFull-line net sales for 2013 were $7,705, a decrease of 3.3% compared with 2012, which was primarily driven by a comparable sales decrease of 2.1% for the year. The top-performing geographic regions for full-line stores for 2013 were the Southwest and Southeast. Nordstrom.com showed strong sales growth with net sales of $1,622, an increase of 28% compared with 2012, with comparable sales up 30% on a comparable 52-week basis. These increases were driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales were $2,738, up 12.0% compared with 2012, primarily due to 37 new store openings in 2012 and 2013. Comparable sales increased 2.7% for the year. Cosmetics and Shoes were the strongest-performing categories for the year. 
Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012.\n\n#### **Retail Business Gross Profit**\n\nThe following table summarizes the Retail Business gross profit:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Retail gross profit1 | $4,709 | $4,434 | $4,335 |\n| Retail gross profit as a % of net sales | 35.9% | 36.4% | 36.9% |\n| Ending inventory per square foot2 | $64.05 | $58.84 | $53.77 |\n| Inventory turnover rate3 | 4.67 | 5.07 | 5.37 |\n\n1 Retailers do not uniformly record the costs of buying and occupancy and supply chain operations (freight, purchasing, receiving, distribution, etc.) between gross profit and selling, general and administrative expense. As such, our gross profit and selling, general and administrative expenses and rates may not be comparable to other retailers' expenses and rates.\n\n2 Ending inventory includes pack and hold inventory of $222, $173 and $125 in 2014, 2013 and 2012, which represents strategic purchases of merchandise for upcoming selling seasons.\n\n3 Inventory turnover rate is calculated as annual cost of sales and related buying and occupancy costs (for all segments) divided by 4-quarter average inventory. Retailers do not uniformly calculate inventory turnover as buying and occupancy costs may be included in selling, general and administrative expenses. As such, our inventory turnover rates may not be comparable to other retailers.", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "# **Item 1. Business.**\n\n## **DESCRIPTION OF BUSINESS**\n\nFounded in 1901 as a retail shoe business in Seattle, Nordstrom later incorporated in Washington state in 1946 and went on to become one of the leading fashion specialty retailers based in the U.S. As of March 16, 2015, we operate 290 U.S. 
stores located in 38 states as well as a robust ecommerce business through Nordstrom.com, Nordstromrack.com and HauteLook and TrunkClub.com. We also operate two Nordstrom full-line stores in Canada. The west and east coasts of the U.S. are the areas in which we have the largest presence. We have two reportable segments: Retail and Credit.\n\nAs of March 16, 2015, the **Retail** segment includes our 115 \"Nordstrom\" branded full-line stores in the U.S. and Nordstrom.com, 167 off-price Nordstrom Rack stores, two Canada full-line stores, Nordstromrack.com and HauteLook, and other retail channels including five Trunk Club showrooms and TrunkClub.com, our two Jeffrey boutiques and one clearance store that operates under the name \"Last Chance.\" Through these multiple retail channels, we strive to deliver the best customer experience possible. We offer an extensive selection of high-quality brand-name and private label merchandise focused on apparel, shoes, cosmetics and accessories. Our integrated Nordstrom full-line stores and online store allow us to provide our customers with a seamless shopping experience. In-store purchases are primarily fulfilled from that store's inventory, but when inventory is unavailable at that store it may also be shipped to our customers from our fulfillment center in Cedar Rapids, Iowa, or from other Nordstrom full-line stores. Online purchases are primarily shipped to our customers from our Cedar Rapids fulfillment center, but may also be shipped from our Nordstrom full-line stores. Our customers can also pick up online orders in our Nordstrom full-line stores if inventory is available at one of our locations. These capabilities allow us to better serve customers across various channels and improve sales. 
Nordstrom Rack stores purchase high-quality brand-name merchandise primarily from the same vendors carried in Nordstrom full-line stores and also serve as outlets for clearance merchandise from our Nordstrom stores and other retail channels. During the year, we launched Nordstromrack.com and the associated mobile app. Nordstromrack.com combines the technology expertise of HauteLook with the merchant expertise of Nordstrom Rack. Nordstromrack.com and HauteLook offer limited-time sale events on fashion and lifestyle brands as well as a persistent selection of off-price, high-quality brand-name merchandise and are integrated with a single customer log-in, shared shopping cart and streamlined checkout process. Furthermore, we can accommodate returns from these sites by mail or at any Nordstrom Rack location.\n\nOur **Credit** segment includes our wholly owned federal savings bank, Nordstrom fsb, through which we provide a private label credit card, two Nordstrom Visa credit cards and a debit card. The credit and debit cards feature a loyalty program designed to increase customer visits and spending. Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\nFor more information about our business and our reportable segments, see Item 7: Management's Discussion and Analysis of Financial Condition and Results of Operations and Note 16: Segment Reporting in Item 8: Financial Statements and Supplementary Data.\n\n#### **FISCAL YEAR**\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31st. 
References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n#### **TRADEMARKS**\n\nWe have 156 trademarks, each of which is the subject of one or more trademark registrations and/or trademark applications. Our most notable trademarks include Nordstrom, Nordstrom Rack, HauteLook, Halogen, BP., Zella, Caslon and Trunk Club. Each of our trademarks is renewable indefinitely, provided that it is still used in commerce at the time of the renewal.\n\n#### **RETURN POLICY**\n\nWe have a fair and liberal approach to returns as part of our objective to provide high-quality customer service. We do not have a formal return policy at our Nordstrom full-line stores or online at Nordstrom.com. Our goal is to take care of our customers, which includes making returns and exchanges easy, whether in stores or online, where we offer free shipping and free returns. Our Nordstrom Rack stores generally accept returns up to 90 days from the date of purchase with the original price tag and sales receipt, and also accept returns of Nordstromrack.com and HauteLook merchandise. Nordstromrack.com and HauteLook generally accept returns of apparel, footwear and accessories within 90 days from the date of shipment.\n\n#### **SEASONALITY**\n\nDue to our Anniversary Sale in July and the holidays in December, our sales are typically higher in the second and fourth quarters than in the first and third quarters of the fiscal year.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "#### **Retail Business Net Sales**\n\nIn our ongoing effort to enhance the customer experience, we are focused on providing customers with a seamless experience across our channels. While our customers may engage with us through multiple channels, we know they value the overall Nordstrom brand experience and view us simply as Nordstrom, which is ultimately how we view our business. 
To provide additional transparency into our net sales by channel, we present the following summary of our Retail Business:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Net sales by channel: | | | |\n| Nordstrom full-line stores - U.S. | $7,682 | $7,705 | $7,964 |\n| Nordstrom.com | 1,996 | 1,622 | 1,269 |\n| Nordstrom | 9,678 | 9,327 | 9,233 |\n| Nordstrom Rack | 3,215 | 2,738 | 2,445 |\n| Nordstromrack.com and HauteLook | 360 | 295 | 236 |\n| Other retail1 | 116 | 35 | 35 |\n| Total Retail segment | 13,369 | 12,395 | 11,949 |\n| Corporate/Other | (259) | (229) | (187) |\n| Total net sales | $13,110 | $12,166 | $11,762 |\n| Net sales increase | 7.8% | 3.4% | 12.1% |\n| Comparable sales increase (decrease) by channel2: | | | |\n| Nordstrom full-line stores - U.S. | (0.5%) | (2.1%) | 3.9% |\n| Nordstrom.com | 23.1% | 29.5% | 37.1% |\n| Nordstrom | 3.6% | 2.3% | 7.5% |\n| Nordstrom Rack | 3.8% | 2.7% | 7.4% |\n| Nordstromrack.com and HauteLook | 22.1% | 27.3% | — |\n| Total company | 4.0% | 2.5% | 7.3% |\n| Sales per square foot3: | | | |\n| Total sales per square foot | $493 | $474 | $470 |\n| 4-wall sales per square foot | 413 | 408 | 417 |\n| Full-line sales per square foot - U.S. | 371 | 372 | 385 |\n| Nordstrom Rack sales per square foot | 552 | 553 | 568 |\n| Percentage of net sales by merchandise category: | | | |\n| Women's Apparel | 30% | 31% | 31% |\n| Shoes | 23% | 23% | 23% |\n| Men's Apparel | 16% | 16% | 16% |\n| Women's Accessories | 14% | 14% | 13% |\n| Cosmetics | 11% | 11% | 11% |\n| Kids' Apparel | 4% | 3% | 3% |\n| Other | 2% | 2% | 3% |\n| Total | 100% | 100% | 100% |\n\n1 Other retail includes our Jeffrey boutiques, Trunk Club and our Nordstrom Canada full-line store.\n\n2 Comparable sales include sales from stores that have been open at least one full year at the beginning of the year. 
We also include sales from our online channels (Nordstrom.com, Nordstromrack.com and HauteLook) in comparable sales because of the integration with our stores. Fiscal year 2012 includes an extra week (the 53rd week) as a result of our 4-5-4 retail reporting calendar. The 53rd week is not included in comparable sales calculations.\n\n3 Sales per square foot is calculated as net sales divided by weighted-average square footage. Weighted-average square footage includes a percentage of year-end square footage for new stores equal to the percentage of the year during which they were open. 4-wall sales per square foot is calculated as sales for Nordstrom U.S. full-line stores, Nordstrom Rack stores, Jeffrey boutiques, our Canada full-line store, Last Chance and Trunk Club showrooms divided by their weighted-average square footage.", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "»›**THE RACK GOES ONLINE** SHOPPING GENIUSES CAN NOW CONTINUE THEIR STYLE SEARCH AT NORDSTROMRACK.COM, WHERE CUSTOMERS CAN EASILY CHOOSE HOW THEY SHOP BOTH HAUTELOOK AND NORDSTROM RACK.\n\nour engagement with customers. In 2014, we added more than 1 million new Rewards accounts, a 15% increase from the previous year. We want to give customers more choices with our loyalty program, and our goal is to provide an integrated multi-tender program in all stores and online later this year. We know our Rewards members are many of our most loyal and best customers. So growing these relationships by oering programs that appeal to more customers will be beneficial in the long term.\n\n#### CONCLUSION\n\nOur strategy is based on the customer and will remain so. Customers' expectations of speed, convenience, personalization and mobile are increasing. 
As we continue on our journey, we recognize it's imperative for us to invest for the future and find ways to make our stores more\n\n21008 - 037404B 2014 ANNUAL REPORT pg 9\n\n8.375 X 10.875 - PDF X1A - KODAK\n\nconvenient and our online experience richer. We believe we are well positioned to deliver a great experience for our customers—no matter how they choose to shop with Nordstrom.\n\n**Blake W. Nordstrom** President, Nordstrom, Inc.\n\n**Peter E. Nordstrom** President of Merchandising, Nordstrom, Inc.\n\n**Erik B. Nordstrom** President of Nordstrom.com, Nordstrom, Inc.\n\n*I don't think I could've* **\"** *received better news today. Nordstrom Rack has now launched online!* **\"**\n\nOUR CUSTOMER, JOANNA D.", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "**OUR NEW LOOK** FROM WINDOWS THAT BRING THE OUTSIDE IN TO DEPARTMENTS THAT SEAMLESSLY FLOW TOGETHER— OUR NEW STORE DESIGN CREATES AN EXCITING SPACE THAT CAN CHANGE WITH HOW OUR CUSTOMERS SHOP.\n\nto be within two-day ground delivery of approximately half the population of the United States, which will help improve delivery times for customers and help us meet their rising expectations.\n\nFinally, in 2014, we acquired Trunk Club, a high-growth personalized men's clothing business based on a service model that is highly complementary to our own. We believe Trunk Club is a natural extension of our business, and together we will continue to evolve and bring together the online and oine worlds to deliver a great shopping experience.\n\n#### OFF-PRICE: NORDSTROM RACK, NORDSTROMRACK.COM AND HAUTELOOK\n\n21008 - 037404B 2014 ANNUAL REPORT pg 7\n\n8.375 X 10.875 - PDF X1A - KODAK\n\nWe opened a record 27 new Nordstrom Rack stores, ending 2014 with 167 stores and on track to meet our long-term growth plans\n\nof 300 stores by 2020. Customers continue to respond favorably to the treasure-hunt experience that defines Nordstrom Rack stores. 
As we expand in many markets for the first time, we hope to continue delivering a great experience, as this business represents a terrific opportunity for us to attract new customers. Last year, Nordstrom Rack was our biggest source of new customers, attracting nearly 4 million. Also, a year ago, we began accepting returns of HauteLook and Nordstromrack.com merchandise at any Nordstrom Rack store. This drove nearly 1 million trips to Nordstrom Rack stores in 2014. The Nordstrom Rack customer also tends to be younger than our full-line customer, and there is a meaningful opportunity for these customers to begin shopping our full-price channels as well. We plan to open 27 more Nordstrom Racks in 2015 across the U.S.\n\n*I love how you used models with* **\"** *physical challenges in your Anniversary catalog. Nice work!* **\"**\n\nOUR CUSTOMER, DONNA A.", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "Dollar and share amounts in millions except per share, per option and per unit amounts\n\n#### **NOTE 1: NATURE OF OPERATIONS AND SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES**\n\n#### **The Company**\n\nFounded in 1901 as a shoe store in Seattle, Washington, Nordstrom, Inc. is now a leading fashion specialty retailer that offers customers a well-edited selection of high-quality fashion brands focused on apparel, shoes, cosmetics and accessories for men, women and children. This breadth of merchandise allows us to serve a wide range of customers who appreciate quality fashion and a superior shopping experience. We offer an extensive selection of high-quality brand-name and private label merchandise through multiple retail channels, including 116 \"Nordstrom\" branded full-line stores in the U.S. 
and at Nordstrom.com (collectively, \"Nordstrom\"), one Canada full-line store, 167 off-price Nordstrom Rack stores, Nordstromrack.com and HauteLook, five Trunk Club showrooms and TrunkClub.com, two Jeffrey boutiques and one Last Chance clearance store. Our stores are located in 38 states throughout the U.S and in one province in Canada.\n\nThrough our Credit segment, we provide our customers with a variety of payment products and services, including a Nordstrom private label card, two Nordstrom Visa credit cards and a debit card for Nordstrom purchases. These products also allow our customers to participate in our loyalty program designed to increase customer visits and spending. Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\n#### **Fiscal Year**\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31st. References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n#### **Principles of Consolidation**\n\nThe consolidated financial statements include the balances of Nordstrom, Inc. and its subsidiaries. All intercompany transactions and balances are eliminated in consolidation.\n\n#### **Use of Estimates**\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the U.S. requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenues and expenses, and disclosure of contingent assets and liabilities during the reporting period. 
Uncertainties regarding such estimates and assumptions are inherent in the preparation of financial statements and actual results may differ from these estimates and assumptions. Our most significant accounting judgments and estimates include the allowance for credit losses, revenue recognition, inventory, goodwill, stock-based compensation and income taxes.\n\n#### **Net Sales**\n\nWe recognize revenue from sales at our retail stores at the point of sale, net of estimated returns and excluding sales taxes. Revenue from sales to customers shipped directly from our stores, website and catalog, which includes shipping revenue when applicable, is recognized upon estimated receipt by the customer. We estimate customer merchandise returns based on historical return patterns and reduce sales and cost of sales accordingly. Activity in the allowance for sales returns, net, for the past three fiscal years is as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Allowance at beginning of year | $128 | $116 | $103 |\n| Additions | 2,129 | 1,880 | 1,724 |\n| Returns, net1 | (2,097) | (1,868) | (1,711) |\n| Allowance at end of year | $160 | $128 | $116 |\n\n1 Returns, net consist of actual returns offset by the value of the merchandise returned and any related sales commission.\n\n#### **Credit Card Revenues**\n\nCredit card revenues include finance charges, late fees and other revenue generated by our combined Nordstrom private label card and Nordstrom Visa credit card programs, and interchange fees generated by the use of Nordstrom Visa credit cards at third-party merchants. Finance charges and late fees are assessed according to the terms of the related cardholder agreements and recognized as revenue when earned. 
Credit card revenues are recorded net of estimated uncollectible finance charges and fees.\n\n#### **Cost of Sales**\n\nCost of sales includes the purchase cost of inventory sold (net of vendor allowances), in-bound freight and certain costs of loyalty program benefits related to our credit and debit cards.", - "page_start": 52, - "page_end": 52, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "# **Item 7. Management's Discussion and Analysis of Financial Condition and Results of Operations.**\n\nDollar, share and square footage amounts in millions except percentages, per share and per square foot amounts\n\n#### **OVERVIEW**\n\nNordstrom is a leading fashion specialty retailer offering apparel, shoes, cosmetics and accessories for women, men and children. We offer an extensive selection of high-quality brand-name and private label merchandise through our various channels: \"Nordstrom\" branded full-line stores and online store at Nordstrom.com, Nordstrom Rack stores, Nordstromrack.com and HauteLook and other retail channels, including Trunk Club showrooms and TrunkClub.com, our Jeffrey boutiques and our clearance store that operates under the name \"Last Chance.\" As of January 31, 2015, our stores are located in 38 states throughout the United States and in one province in Canada. In addition, we offer our customers a Nordstrom Rewards™ loyalty program along with a variety of payment products and services, including credit and debit cards.\n\nWe continue to see the ongoing evolution of retail, with increasing customer interaction between our stores and ecommerce. We are making progress to meet customer expectations of a personalized experience that merges the richness of stores with the convenience of online. Because the customer views us simply as Nordstrom, we believe there is tremendous value in strengthening our platform for the customer experience that encompasses full-price, off-price, in-store and online. 
While each channel represents a substantial growth opportunity, there are significant synergies across channels to create a unique customer experience to gain market share.\n\nWe considered 2014 a watershed year in our company history, with our successful entry into Canada, continued expansion of our Nordstrom Rack business through store growth, the launch of Nordstromrack.com and the acquisition of Trunk Club. Our performance in 2014 reflected continued progress in executing our customer strategy through investments to drive growth across channels. We achieved total net sales growth of 7.8%, adding nearly $1 billion to our top-line and delivering record sales and earnings per diluted share. Our financial position remains strong and this marked the sixth consecutive year we generated over $1 billion in cash flow from operations.\n\nOur partnership with vendors and brands enhances our product offering. We offer Topshop merchandise at 53 full-line stores and online, with plans to reach over 80 stores in 2015. Our new partnership with Madewell in 2015, initially available at 15 of our stores and online, is another way to provide sought-after brands that appeal to new and existing customers.\n\nIn 2014, we opened our first full-line store in Canada in Calgary, Alberta, reflecting a multi-year effort from our team to address the unique challenges of crossing the border. With our store outperforming our expectations, we are encouraged with our customers' response in this market. We are looking forward to opening stores in 2015 in Ottawa, Ontario and Vancouver, British Columbia. In the U.S. we increased our presence with two full-line stores in The Woodlands, Texas and Jacksonville, Florida. In 2015, we plan to open three full-line stores in Puerto Rico, Minneapolis, Minnesota and Milwaukee, Wisconsin.\n\nAt Nordstrom Rack, we offer customers great brands at great prices, with 48 of the top 50 full-line brands represented. 
We opened 27 Nordstrom Rack stores in 2014, a record number of openings, contributing to Nordstrom Rack's total sales growth of 17%.\n\nOur online businesses continue to be our fastest-growing channels. In the spring of 2014, we expanded our capabilities through the launch of Nordstromrack.com, providing a seamless integration with HauteLook. We more than doubled our merchandise selection, which accelerated growth in this channel in the second half of 2014. Demonstrating synergies across our businesses, we enabled customers to return purchases from HauteLook and Nordstromrack.com to any of our Nordstrom Rack stores, which drove nearly one million incremental trips to Nordstrom Rack stores.\n\nNordstrom.com finished its fifth consecutive year of approximately 20% or more comparable sales growth, with a key driver being increased merchandise selection. In 2015, we plan to open our third fulfillment center, located in Pennsylvania, which will enhance the customer experience through faster delivery. Furthermore, we have extended our full-price offering with our acquisition of Trunk Club, a high-growth business offering a new approach to personalized service.\n\nOur credit business, through our Nordstrom Rewards program, continues to play an important role in attracting new customers and deepening our engagement with existing customers. The program contributes to our overall results, with members shopping more frequently and spending more on average than non-members. For the third consecutive year, we opened over one million new accounts. With over four million active members, 2014 sales from members represented approximately 40% of our sales.\n\nWe are confident in our ability to execute our customer strategy as we evolve with customers and continue to leverage capabilities across all channels to serve customers on their terms. 
To enhance the customer experience, we continue to make investments in our stores in new markets such as Canada, Puerto Rico and Manhattan, in our ecommerce and fulfillment capabilities and in technology to support growth across all channels. We believe these investments in our customer strategy will help us achieve long-term top-quartile shareholder returns through high single-digit total sales growth and mid-teens Return on Invested Capital.", - "page_start": 27, - "page_end": 27, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "Nordstrom Rack net sales for the quarter increased $130, or 17%, reflecting 27 new Nordstrom Rack store openings since the fourth quarter of 2013, while comparable sales increased 3.2%. On a comparable basis, the average selling price of Nordstrom Rack merchandise increased while the number of items sold was flat. Shoes and Accessories were the category highlights for Nordstrom Rack.\n\n#### Gross Profit\n\nOur total company gross profit rate decreased 53 basis points compared with the same period in the prior year, primarily due to increased markdowns at Nordstrom Rack.\n\n#### Retail Selling, General, and Administrative Expenses\n\nOur Retail SG&A rate increased 80 basis points primarily due to expenses related to the acquisition of Trunk Club and ongoing technology and fulfillment expenses.\n\n#### Credit Expenses\n\nIn the fourth quarter, expenses for our Credit segment of $54 increased from $38 in the prior year. The increase was primarily driven by higher operational expenses resulting from a 6% increase in credit volume during the fourth quarter of 2014. 
The fourth quarter of 2013 also included the impact of the conversion of our Nordstrom Rewards travel benefit into Nordstrom Notes, which decreased operational expenses in the prior year.\n\nFor further information on our quarterly results in 2014 and 2013, refer to Note 17: Selected Quarterly Data in the Notes to Consolidated Financial Statements in Item 8: Financial Statements and Supplementary Data.\n\n#### **2015 Outlook**\n\nOur expectations for 2015 are as follows:\n\n| Net sales | 7 percent to 9 percent increase |\n| --- | --- |\n| Comparable sales | 2 percent to 4 percent increase |\n| Earnings per diluted share1 | $3.65 to $3.80 |\n\n1 This outlook does not include the impact of any future share repurchases.\n\nCapital expenditures, net of property incentives, of approximately $1.2 billion are expected in 2015, an increase from $751 in 2014. The increase relates to store expansion, including Canada and Manhattan, and ongoing investments to improve the customer experience through flagship store remodels and a third fulfillment center expected to open in the second half of the year. To date in 2015, we have opened our second full-line store in Canada. We plan to open 27 Nordstrom Rack stores, three additional Nordstrom full-line stores in the U.S. and another full-line store in Canada during 2015. Planned net store openings are expected to increase our retail square footage by approximately 6.1%.", - "page_start": 36, - "page_end": 36, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "# DEAR CUSTOMERS, EMPLOYEES AND SHAREHOLDERS,\n\nFor 114 years, our focus has been on our customers. We have been most successful when we view our business through their eyes. In today's rapidly changing retail landscape, this approach has never been more important, so our strategy remains squarely focused on serving\n\ncustomers on their terms. 
Knowing customers increasingly desire an experience that's both personalized and convenient, we continue to make investments that further integrate our store and online experience to enable our customers to shop seamlessly any way they choose.\n\nA RECORD\n\n**IN TOTAL COMPANY SALES.** WITH SALES GROWTH OF 7.8% AND COMPARABLE SALES INCREASE OF 4%, WE BEAT OUR OWN EXPECTATIONS.\n\n**4 million NEW CUSTOMERS** SHOPPED AT NEARLY\n\nNORDSTROM RACK—THAT'S MORE THAN AT ANY OTHER CHANNEL.\n\n**27 NEW NORDSTROM RACK STORES.** PLUS, RACK SALES INCREASED 17% AND RACK COMPARABLE SALES GAINED 3.8%.\n\nMORE THAN\n\n# **1 million**\n\n**STORE VISITS** FROM CUSTOMERS RETURNING THEIR HAUTELOOK AND NORDSTROMRACK.COM PURCHASES TO NORDSTROM RACK.\n\n**IN NORDSTROM.COM SALES.** THAT'S MORE THAN DOUBLE OUR SALES FROM JUST THREE YEARS AGO.\n\n# **1 million** MORE THAN\n\n**NEW MEMBERS** JOINED OUR NORDSTROM REWARDS™ PROGRAM FOR THE THIRD YEAR IN A ROW.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "«‹**A PERFECT PAIR: SHOES AND SJP** ACTRESS AND STYLE ICON SARAH JESSICA PARKER DESIGNED HER OWN SHOE LINE, SJP, AND WE WERE THE EXCLUSIVE RETAILER FOR ITS LAUNCH.\n\n»›**THAT'S BRILLIANT!** WE'LL HAVE TOPSHOP IN 80 STORES BY THE END OF 2015—AND THAT'S JUST ONE OF THE WAYS WE'RE ATTRACTING NEW YOUNG CUSTOMERS WITH GREAT BRANDS AT ACCESSIBLE PRICE POINTS.\n\n*Praise the fashion gods.* **\"** *Nordstrom Downtown Portland is opening Topshop in the next month.* **\"**\n\nOUR CUSTOMER, KARLY T.\n\nIn addition to our new stores, we improved our online/off-price capabilities with the launch of Nordstromrack.com. 
Combined with HauteLook, the integrated ecommerce site offers a consistent merchandise selection as well as flash sales in a single web or mobile experience, providing customers a wide range of merchandise with one easy-to-use, shared checkout. Since the launch last spring, we've more than doubled the selection at Nordstromrack.com. We will continue to work on ways to further integrate our business to improve our customer experience.\n\n#### INCREASING RELEVANCE\n\nWe know ultimately customers come to Nordstrom for great merchandise. They continue to respond to fresh, relevant brands. Last year, we were the exclusive retail partner for the global launch of Sarah Jessica Parker's SJP line of shoes and launched Charlotte Tilbury in Beauty. We increased the number of full-line stores with Topshop to 53 and launched Kate Moss for Topshop, which helped us rapidly grow the number of Topshop customers, including a younger customer who in many cases is new to Nordstrom. By the end of 2015, we plan to have Topshop in more than 80 stores.\n\nThis March, we were excited to begin carrying Madewell, representing a new partnership with J.Crew. Our initial launch was on Nordstrom.com and in 15 of our stores in our t.b.d. department. This is a terrific example of our continued focus to bring great fashion brands to customers at accessible price points.\n\nFinally, Nordstrom Rewards has been a successful program enabling us to deepen", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_JWN_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_JWN_2014.pdf", - "query": "How many employees did Nordstrom count in 2014 ?", - "target_page": 17, - "target_passage": "During 2014, we employed approximately 67,000 employees on a full- or part-time basis.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# **Item 1. 
Business.**\n\n## **DESCRIPTION OF BUSINESS**\n\nFounded in 1901 as a retail shoe business in Seattle, Nordstrom later incorporated in Washington state in 1946 and went on to become one of the leading fashion specialty retailers based in the U.S. As of March 16, 2015, we operate 290 U.S. stores located in 38 states as well as a robust ecommerce business through Nordstrom.com, Nordstromrack.com and HauteLook and TrunkClub.com. We also operate two Nordstrom full-line stores in Canada. The west and east coasts of the U.S. are the areas in which we have the largest presence. We have two reportable segments: Retail and Credit.\n\nAs of March 16, 2015, the **Retail** segment includes our 115 \"Nordstrom\" branded full-line stores in the U.S. and Nordstrom.com, 167 off-price Nordstrom Rack stores, two Canada full-line stores, Nordstromrack.com and HauteLook, and other retail channels including five Trunk Club showrooms and TrunkClub.com, our two Jeffrey boutiques and one clearance store that operates under the name \"Last Chance.\" Through these multiple retail channels, we strive to deliver the best customer experience possible. We offer an extensive selection of high-quality brand-name and private label merchandise focused on apparel, shoes, cosmetics and accessories. Our integrated Nordstrom full-line stores and online store allow us to provide our customers with a seamless shopping experience. In-store purchases are primarily fulfilled from that store's inventory, but when inventory is unavailable at that store it may also be shipped to our customers from our fulfillment center in Cedar Rapids, Iowa, or from other Nordstrom full-line stores. Online purchases are primarily shipped to our customers from our Cedar Rapids fulfillment center, but may also be shipped from our Nordstrom full-line stores. Our customers can also pick up online orders in our Nordstrom full-line stores if inventory is available at one of our locations. 
These capabilities allow us to better serve customers across various channels and improve sales. Nordstrom Rack stores purchase high-quality brand-name merchandise primarily from the same vendors carried in Nordstrom full-line stores and also serve as outlets for clearance merchandise from our Nordstrom stores and other retail channels. During the year, we launched Nordstromrack.com and the associated mobile app. Nordstromrack.com combines the technology expertise of HauteLook with the merchant expertise of Nordstrom Rack. Nordstromrack.com and HauteLook offer limited-time sale events on fashion and lifestyle brands as well as a persistent selection of off-price, high-quality brand-name merchandise and are integrated with a single customer log-in, shared shopping cart and streamlined checkout process. Furthermore, we can accommodate returns from these sites by mail or at any Nordstrom Rack location.\n\nOur **Credit** segment includes our wholly owned federal savings bank, Nordstrom fsb, through which we provide a private label credit card, two Nordstrom Visa credit cards and a debit card. The credit and debit cards feature a loyalty program designed to increase customer visits and spending. Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\nFor more information about our business and our reportable segments, see Item 7: Management's Discussion and Analysis of Financial Condition and Results of Operations and Note 16: Segment Reporting in Item 8: Financial Statements and Supplementary Data.\n\n#### **FISCAL YEAR**\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31st. 
References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n#### **TRADEMARKS**\n\nWe have 156 trademarks, each of which is the subject of one or more trademark registrations and/or trademark applications. Our most notable trademarks include Nordstrom, Nordstrom Rack, HauteLook, Halogen, BP., Zella, Caslon and Trunk Club. Each of our trademarks is renewable indefinitely, provided that it is still used in commerce at the time of the renewal.\n\n#### **RETURN POLICY**\n\nWe have a fair and liberal approach to returns as part of our objective to provide high-quality customer service. We do not have a formal return policy at our Nordstrom full-line stores or online at Nordstrom.com. Our goal is to take care of our customers, which includes making returns and exchanges easy, whether in stores or online, where we offer free shipping and free returns. Our Nordstrom Rack stores generally accept returns up to 90 days from the date of purchase with the original price tag and sales receipt, and also accept returns of Nordstromrack.com and HauteLook merchandise. Nordstromrack.com and HauteLook generally accept returns of apparel, footwear and accessories within 90 days from the date of shipment.\n\n#### **SEASONALITY**\n\nDue to our Anniversary Sale in July and the holidays in December, our sales are typically higher in the second and fourth quarters than in the first and third quarters of the fiscal year.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "#### Net Sales (2014 vs. 2013)\n\nIn 2014, total company net sales increased 7.8%, which was attributable to the comparable sales increase of 4.0%. During the year, we opened three Nordstrom full-line stores, including our first store in Canada, and 27 Nordstrom Rack stores. 
Additionally, as a result of the acquisition of Trunk Club, we acquired four Trunk Club showrooms and opened one additional Trunk Club showroom in 2014. These additions increased our square footage by 5.5% and represented 2.8% of our total net sales for 2014.\n\nNordstrom net sales, which consist of the U.S. full-line and Nordstrom.com businesses, were $9,678 in 2014, an increase of 3.8% compared with 2013, with comparable sales up 3.6%. These increases reflected continued momentum in our Nordstrom.com channel. Both the number of items sold and the average selling price increased on a comparable basis in 2014. Category highlights included Accessories, Cosmetics and Men's Apparel.\n\nU.S. full-line net sales for 2014 were $7,682, a decrease of 0.3% compared with 2013 and comparable sales decreased by 0.5%. The topperforming geographic regions for full-line stores were the Southeast and Southwest.\n\nOur Nordstrom.com, Nordstromrack.com and HauteLook channels continued to experience outsized growth. Nordstrom.com net sales increased 23% and Nordstromrack.com and HauteLook net sales increased 22%, both driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales increased $477, or 17%, compared with 2013, reflecting incremental volume from existing stores and the impact of 27 new stores since fiscal 2013. Comparable sales increased 3.8% for the year. Shoes and Accessories were the top-performing categories for the year. On a comparable basis, the average selling price of Nordstrom Rack merchandise increased while the number of items sold was flat.\n\n#### Net Sales (2013 vs. 2012)\n\nNet sales for 2013 increased 3.4% compared with 2012, driven by a comparable sales increase of 2.5%, attributable to growth at Nordstrom.com and Nordstrom Rack's accelerated store expansion. 
During 2013, we opened 22 Nordstrom Rack stores and relocated one Nordstrom full-line store and two Nordstrom Rack stores. These additions represented 1.6% of our total net sales for 2013 and increased our square footage by 2.9%. The 53rd week in 2012 contributed approximately $162 in additional net sales.\n\nNordstrom net sales for 2013 were $9,327, an increase of 1.0% compared with 2012, with comparable sales up 2.3%. Strong growth at Nordstrom.com was partially offset by sales decreases at our full-line stores. Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012. Category highlights included Cosmetics, Men's Shoes and Women's Apparel.\n\nFull-line net sales for 2013 were $7,705, a decrease of 3.3% compared with 2012, which was primarily driven by a comparable sales decrease of 2.1% for the year. The top-performing geographic regions for full-line stores for 2013 were the Southwest and Southeast. Nordstrom.com showed strong sales growth with net sales of $1,622, an increase of 28% compared with 2012, with comparable sales up 30% on a comparable 52-week basis. These increases were driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales were $2,738, up 12.0% compared with 2012, primarily due to 37 new store openings in 2012 and 2013. Comparable sales increased 2.7% for the year. Cosmetics and Shoes were the strongest-performing categories for the year. 
Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012.\n\n#### **Retail Business Gross Profit**\n\nThe following table summarizes the Retail Business gross profit:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Retail gross profit1 | $4,709 | $4,434 | $4,335 |\n| Retail gross profit as a % of net sales | 35.9% | 36.4% | 36.9% |\n| Ending inventory per square foot2 | $64.05 | $58.84 | $53.77 |\n| Inventory turnover rate3 | 4.67 | 5.07 | 5.37 |\n\n1 Retailers do not uniformly record the costs of buying and occupancy and supply chain operations (freight, purchasing, receiving, distribution, etc.) between gross profit and selling, general and administrative expense. As such, our gross profit and selling, general and administrative expenses and rates may not be comparable to other retailers' expenses and rates.\n\n2 Ending inventory includes pack and hold inventory of $222, $173 and $125 in 2014, 2013 and 2012, which represents strategic purchases of merchandise for upcoming selling seasons.\n\n3 Inventory turnover rate is calculated as annual cost of sales and related buying and occupancy costs (for all segments) divided by 4-quarter average inventory. Retailers do not uniformly calculate inventory turnover as buying and occupancy costs may be included in selling, general and administrative expenses. As such, our inventory turnover rates may not be comparable to other retailers.", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "Dollar and share amounts in millions except per share, per option and per unit amounts\n\n#### **NOTE 1: NATURE OF OPERATIONS AND SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES**\n\n#### **The Company**\n\nFounded in 1901 as a shoe store in Seattle, Washington, Nordstrom, Inc. 
is now a leading fashion specialty retailer that offers customers a well-edited selection of high-quality fashion brands focused on apparel, shoes, cosmetics and accessories for men, women and children. This breadth of merchandise allows us to serve a wide range of customers who appreciate quality fashion and a superior shopping experience. We offer an extensive selection of high-quality brand-name and private label merchandise through multiple retail channels, including 116 \"Nordstrom\" branded full-line stores in the U.S. and at Nordstrom.com (collectively, \"Nordstrom\"), one Canada full-line store, 167 off-price Nordstrom Rack stores, Nordstromrack.com and HauteLook, five Trunk Club showrooms and TrunkClub.com, two Jeffrey boutiques and one Last Chance clearance store. Our stores are located in 38 states throughout the U.S and in one province in Canada.\n\nThrough our Credit segment, we provide our customers with a variety of payment products and services, including a Nordstrom private label card, two Nordstrom Visa credit cards and a debit card for Nordstrom purchases. These products also allow our customers to participate in our loyalty program designed to increase customer visits and spending. Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\n#### **Fiscal Year**\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31st. References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n#### **Principles of Consolidation**\n\nThe consolidated financial statements include the balances of Nordstrom, Inc. and its subsidiaries. 
All intercompany transactions and balances are eliminated in consolidation.\n\n#### **Use of Estimates**\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the U.S. requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenues and expenses, and disclosure of contingent assets and liabilities during the reporting period. Uncertainties regarding such estimates and assumptions are inherent in the preparation of financial statements and actual results may differ from these estimates and assumptions. Our most significant accounting judgments and estimates include the allowance for credit losses, revenue recognition, inventory, goodwill, stock-based compensation and income taxes.\n\n#### **Net Sales**\n\nWe recognize revenue from sales at our retail stores at the point of sale, net of estimated returns and excluding sales taxes. Revenue from sales to customers shipped directly from our stores, website and catalog, which includes shipping revenue when applicable, is recognized upon estimated receipt by the customer. We estimate customer merchandise returns based on historical return patterns and reduce sales and cost of sales accordingly. 
Activity in the allowance for sales returns, net, for the past three fiscal years is as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Allowance at beginning of year | $128 | $116 | $103 |\n| Additions | 2,129 | 1,880 | 1,724 |\n| Returns, net1 | (2,097) | (1,868) | (1,711) |\n| Allowance at end of year | $160 | $128 | $116 |\n\n1 Returns, net consist of actual returns offset by the value of the merchandise returned and any related sales commission.\n\n#### **Credit Card Revenues**\n\nCredit card revenues include finance charges, late fees and other revenue generated by our combined Nordstrom private label card and Nordstrom Visa credit card programs, and interchange fees generated by the use of Nordstrom Visa credit cards at third-party merchants. Finance charges and late fees are assessed according to the terms of the related cardholder agreements and recognized as revenue when earned. Credit card revenues are recorded net of estimated uncollectible finance charges and fees.\n\n#### **Cost of Sales**\n\nCost of sales includes the purchase cost of inventory sold (net of vendor allowances), in-bound freight and certain costs of loyalty program benefits related to our credit and debit cards.", - "page_start": 52, - "page_end": 52, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "#### **Retail Business Net Sales**\n\nIn our ongoing effort to enhance the customer experience, we are focused on providing customers with a seamless experience across our channels. While our customers may engage with us through multiple channels, we know they value the overall Nordstrom brand experience and view us simply as Nordstrom, which is ultimately how we view our business. To provide additional transparency into our net sales by channel, we present the following summary of our Retail Business:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Net sales by channel: | | | |\n| Nordstrom full-line stores - U.S. 
| $7,682 | $7,705 | $7,964 |\n| Nordstrom.com | 1,996 | 1,622 | 1,269 |\n| Nordstrom | 9,678 | 9,327 | 9,233 |\n| Nordstrom Rack | 3,215 | 2,738 | 2,445 |\n| Nordstromrack.com and HauteLook | 360 | 295 | 236 |\n| Other retail1 | 116 | 35 | 35 |\n| Total Retail segment | 13,369 | 12,395 | 11,949 |\n| Corporate/Other | (259) | (229) | (187) |\n| Total net sales | $13,110 | $12,166 | $11,762 |\n| Net sales increase | 7.8% | 3.4% | 12.1% |\n| Comparable sales increase (decrease) by channel2: | | | |\n| Nordstrom full-line stores - U.S. | (0.5%) | (2.1%) | 3.9% |\n| Nordstrom.com | 23.1% | 29.5% | 37.1% |\n| Nordstrom | 3.6% | 2.3% | 7.5% |\n| Nordstrom Rack | 3.8% | 2.7% | 7.4% |\n| Nordstromrack.com and HauteLook | 22.1% | 27.3% | — |\n| Total company | 4.0% | 2.5% | 7.3% |\n| Sales per square foot3: | | | |\n| Total sales per square foot | $493 | $474 | $470 |\n| 4-wall sales per square foot | 413 | 408 | 417 |\n| Full-line sales per square foot - U.S. | 371 | 372 | 385 |\n| Nordstrom Rack sales per square foot | 552 | 553 | 568 |\n| Percentage of net sales by merchandise category: | | | |\n| Women's Apparel | 30% | 31% | 31% |\n| Shoes | 23% | 23% | 23% |\n| Men's Apparel | 16% | 16% | 16% |\n| Women's Accessories | 14% | 14% | 13% |\n| Cosmetics | 11% | 11% | 11% |\n| Kids' Apparel | 4% | 3% | 3% |\n| Other | 2% | 2% | 3% |\n| Total | 100% | 100% | 100% |\n\n1 Other retail includes our Jeffrey boutiques, Trunk Club and our Nordstrom Canada full-line store.\n\n2 Comparable sales include sales from stores that have been open at least one full year at the beginning of the year. We also include sales from our online channels (Nordstrom.com, Nordstromrack.com and HauteLook) in comparable sales because of the integration with our stores. Fiscal year 2012 includes an extra week (the 53rd week) as a result of our 4-5-4 retail reporting calendar. 
The 53rd week is not included in comparable sales calculations.\n\n3 Sales per square foot is calculated as net sales divided by weighted-average square footage. Weighted-average square footage includes a percentage of year-end square footage for new stores equal to the percentage of the year during which they were open. 4-wall sales per square foot is calculated as sales for Nordstrom U.S. full-line stores, Nordstrom Rack stores, Jeffrey boutiques, our Canada full-line store, Last Chance and Trunk Club showrooms divided by their weighted-average square footage.",
          "page_start": 29,
          "page_end": 29,
          "source_file": "NYSE_JWN_2014.pdf"
        },
        {
          "text": "»›**THE RACK GOES ONLINE** SHOPPING GENIUSES CAN NOW CONTINUE THEIR STYLE SEARCH AT NORDSTROMRACK.COM, WHERE CUSTOMERS CAN EASILY CHOOSE HOW THEY SHOP BOTH HAUTELOOK AND NORDSTROM RACK.\n\nour engagement with customers. In 2014, we added more than 1 million new Rewards accounts, a 15% increase from the previous year. We want to give customers more choices with our loyalty program, and our goal is to provide an integrated multi-tender program in all stores and online later this year. We know our Rewards members are many of our most loyal and best customers. So growing these relationships by offering programs that appeal to more customers will be beneficial in the long term.\n\n#### CONCLUSION\n\nOur strategy is based on the customer and will remain so. Customers' expectations of speed, convenience, personalization and mobile are increasing. As we continue on our journey, we recognize it's imperative for us to invest for the future and find ways to make our stores more convenient and our online experience richer. We believe we are well positioned to deliver a great experience for our customers—no matter how they choose to shop with Nordstrom.\n\n**Blake W. Nordstrom** President, Nordstrom, Inc.\n\n**Peter E. 
Nordstrom** President of Merchandising, Nordstrom, Inc.\n\n**Erik B. Nordstrom** President of Nordstrom.com, Nordstrom, Inc.\n\n*I don't think I could've* **\"** *received better news today. Nordstrom Rack has now launched online!* **\"**\n\nOUR CUSTOMER, JOANNA D.",
          "page_start": 8,
          "page_end": 8,
          "source_file": "NYSE_JWN_2014.pdf"
        },
        {
          "text": "**OUR NEW LOOK** FROM WINDOWS THAT BRING THE OUTSIDE IN TO DEPARTMENTS THAT SEAMLESSLY FLOW TOGETHER— OUR NEW STORE DESIGN CREATES AN EXCITING SPACE THAT CAN CHANGE WITH HOW OUR CUSTOMERS SHOP.\n\nto be within two-day ground delivery of approximately half the population of the United States, which will help improve delivery times for customers and help us meet their rising expectations.\n\nFinally, in 2014, we acquired Trunk Club, a high-growth personalized men's clothing business based on a service model that is highly complementary to our own. We believe Trunk Club is a natural extension of our business, and together we will continue to evolve and bring together the online and offline worlds to deliver a great shopping experience.\n\n#### OFF-PRICE: NORDSTROM RACK, NORDSTROMRACK.COM AND HAUTELOOK\n\nWe opened a record 27 new Nordstrom Rack stores, ending 2014 with 167 stores and on track to meet our long-term growth plans of 300 stores by 2020. Customers continue to respond favorably to the treasure-hunt experience that defines Nordstrom Rack stores. As we expand in many markets for the first time, we hope to continue delivering a great experience, as this business represents a terrific opportunity for us to attract new customers. Last year, Nordstrom Rack was our biggest source of new customers, attracting nearly 4 million. Also, a year ago, we began accepting returns of HauteLook and Nordstromrack.com merchandise at any Nordstrom Rack store. This drove nearly 1 million trips to Nordstrom Rack stores in 2014. 
The Nordstrom Rack customer also tends to be younger than our full-line customer, and there is a meaningful opportunity for these customers to begin shopping our full-price channels as well. We plan to open 27 more Nordstrom Racks in 2015 across the U.S.\n\n*I love how you used models with* **\"** *physical challenges in your Anniversary catalog. Nice work!* **\"**\n\nOUR CUSTOMER, DONNA A.", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "# DEAR CUSTOMERS, EMPLOYEES AND SHAREHOLDERS,\n\nFor 114 years, our focus has been on our customers. We have been most successful when we view our business through their eyes. In today's rapidly changing retail landscape, this approach has never been more important, so our strategy remains squarely focused on serving\n\ncustomers on their terms. Knowing customers increasingly desire an experience that's both personalized and convenient, we continue to make investments that further integrate our store and online experience to enable our customers to shop seamlessly any way they choose.\n\nA RECORD\n\n**IN TOTAL COMPANY SALES.** WITH SALES GROWTH OF 7.8% AND COMPARABLE SALES INCREASE OF 4%, WE BEAT OUR OWN EXPECTATIONS.\n\n**4 million NEW CUSTOMERS** SHOPPED AT NEARLY\n\nNORDSTROM RACK—THAT'S MORE THAN AT ANY OTHER CHANNEL.\n\n**27 NEW NORDSTROM RACK STORES.** PLUS, RACK SALES INCREASED 17% AND RACK COMPARABLE SALES GAINED 3.8%.\n\n21008 - 037404B 2014 ANNUAL REPORT pg 3\n\n8.375 X 10.875 - PDF X1A - KODAK\n\nMORE THAN\n\n# **1 million**\n\n**STORE VISITS** FROM CUSTOMERS RETURNING THEIR HAUTELOOK AND NORDSTROMRACK.COM PURCHASES TO NORDSTROM RACK.\n\n**IN NORDSTROM.COM SALES.** THAT'S MORE THAN DOUBLE OUR SALES FROM JUST THREE YEARS AGO.\n\n# **1 million** MORE THAN\n\n**NEW MEMBERS** JOINED OUR NORDSTROM REWARDS™ PROGRAM FOR THE THIRD YEAR IN A ROW.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "Dollar and share amounts in millions except 
per share, per option and per unit amounts\n\n# **NOTE 16: SEGMENT REPORTING**\n\n### **Segments**\n\nWe have two reportable segments: **Retail** and **Credit**. Our **Retail** segment includes our \"Nordstrom\" operating segment, which is composed of our Nordstrom full-line stores in the U.S. and our online store at Nordstrom.com. Through our multi-channel initiatives, we have integrated the operations, merchandising and technology of our Nordstrom full-line and online stores, consistent with our customers' expectations of a seamless shopping experience regardless of channel. Our internal reporting to our president, who is our chief operating decision maker, is consistent with these multi-channel initiatives. We aggregate our Nordstrom Rack operating segment into the Retail reporting segment, based on similar economic and other qualitative characteristics. Additionally, we include Nordstromrack.com, HauteLook, Jeffrey, Trunk Club and our Canadian operations in the Retail reporting segment.\n\nThrough our **Credit** segment, we provide our customers with a variety of payment products and services, including a Nordstrom private label card, two Nordstrom Visa credit cards and a debit card for Nordstrom purchases. Our credit and debit card products also include a loyalty program that provides benefits to our cardholders based on their level of spending.\n\nAmounts in the **Corporate/Other** column include unallocated corporate expenses and assets, sales return reserve, inter-segment eliminations and other adjustments to segment results necessary for the presentation of consolidated financial results in accordance with generally accepted accounting principles.\n\n#### **Accounting Policy**\n\nIn general, we use the same measurements to compute earnings before income taxes for reportable segments as we do for the consolidated company. However, redemptions of our Nordstrom Notes are included in net sales for our Retail segment. 
The sales amount in our Corporate/Other column includes an entry to eliminate these transactions from our consolidated net sales. The related Nordstrom Notes expenses are included in our Retail segment at face value. Our Corporate/Other column includes an adjustment to reduce the Nordstrom Notes expense from face value to their estimated cost. In addition, our sales return reserve and other corporate adjustments are recorded in the Corporate/Other column. Other than as described above, the accounting policies of the operating segments are the same as those described in Note 1: Nature of Operations and Summary of Significant Accounting Policies.", - "page_start": 73, - "page_end": 73, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "Nordstrom Rack net sales for the quarter increased $130, or 17%, reflecting 27 new Nordstrom Rack store openings since the fourth quarter of 2013, while comparable sales increased 3.2%. On a comparable basis, the average selling price of Nordstrom Rack merchandise increased while the number of items sold was flat. Shoes and Accessories were the category highlights for Nordstrom Rack.\n\n#### Gross Profit\n\nOur total company gross profit rate decreased 53 basis points compared with the same period in the prior year, primarily due to increased markdowns at Nordstrom Rack.\n\n#### Retail Selling, General, and Administrative Expenses\n\nOur Retail SG&A rate increased 80 basis points primarily due to expenses related to the acquisition of Trunk Club and ongoing technology and fulfillment expenses.\n\n#### Credit Expenses\n\nIn the fourth quarter, expenses for our Credit segment of $54 increased from $38 in the prior year. The increase was primarily driven by higher operational expenses resulting from a 6% increase in credit volume during the fourth quarter of 2014. 
The fourth quarter of 2013 also included the impact of the conversion of our Nordstrom Rewards travel benefit into Nordstrom Notes, which decreased operational expenses in the prior year.\n\nFor further information on our quarterly results in 2014 and 2013, refer to Note 17: Selected Quarterly Data in the Notes to Consolidated Financial Statements in Item 8: Financial Statements and Supplementary Data.\n\n#### **2015 Outlook**\n\nOur expectations for 2015 are as follows:\n\n| Net sales | 7 percent to 9 percent increase |\n| --- | --- |\n| Comparable sales | 2 percent to 4 percent increase |\n| Earnings per diluted share1 | $3.65 to $3.80 |\n\n1 This outlook does not include the impact of any future share repurchases.\n\nCapital expenditures, net of property incentives, of approximately $1.2 billion are expected in 2015, an increase from $751 in 2014. The increase relates to store expansion, including Canada and Manhattan, and ongoing investments to improve the customer experience through flagship store remodels and a third fulfillment center expected to open in the second half of the year. To date in 2015, we have opened our second full-line store in Canada. We plan to open 27 Nordstrom Rack stores, three additional Nordstrom full-line stores in the U.S. and another full-line store in Canada during 2015. Planned net store openings are expected to increase our retail square footage by approximately 6.1%.", - "page_start": 36, - "page_end": 36, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "# **Item 7. Management's Discussion and Analysis of Financial Condition and Results of Operations.**\n\nDollar, share and square footage amounts in millions except percentages, per share and per square foot amounts\n\n#### **OVERVIEW**\n\nNordstrom is a leading fashion specialty retailer offering apparel, shoes, cosmetics and accessories for women, men and children. 
We offer an extensive selection of high-quality brand-name and private label merchandise through our various channels: \"Nordstrom\" branded full-line stores and online store at Nordstrom.com, Nordstrom Rack stores, Nordstromrack.com and HauteLook and other retail channels, including Trunk Club showrooms and TrunkClub.com, our Jeffrey boutiques and our clearance store that operates under the name \"Last Chance.\" As of January 31, 2015, our stores are located in 38 states throughout the United States and in one province in Canada. In addition, we offer our customers a Nordstrom Rewards™ loyalty program along with a variety of payment products and services, including credit and debit cards.\n\nWe continue to see the ongoing evolution of retail, with increasing customer interaction between our stores and ecommerce. We are making progress to meet customer expectations of a personalized experience that merges the richness of stores with the convenience of online. Because the customer views us simply as Nordstrom, we believe there is tremendous value in strengthening our platform for the customer experience that encompasses full-price, off-price, in-store and online. While each channel represents a substantial growth opportunity, there are significant synergies across channels to create a unique customer experience to gain market share.\n\nWe considered 2014 a watershed year in our company history, with our successful entry into Canada, continued expansion of our Nordstrom Rack business through store growth, the launch of Nordstromrack.com and the acquisition of Trunk Club. Our performance in 2014 reflected continued progress in executing our customer strategy through investments to drive growth across channels. We achieved total net sales growth of 7.8%, adding nearly $1 billion to our top-line and delivering record sales and earnings per diluted share. 
Our financial position remains strong and this marked the sixth consecutive year we generated over $1 billion in cash flow from operations.\n\nOur partnership with vendors and brands enhances our product offering. We offer Topshop merchandise at 53 full-line stores and online, with plans to reach over 80 stores in 2015. Our new partnership with Madewell in 2015, initially available at 15 of our stores and online, is another way to provide sought-after brands that appeal to new and existing customers.\n\nIn 2014, we opened our first full-line store in Canada in Calgary, Alberta, reflecting a multi-year effort from our team to address the unique challenges of crossing the border. With our store outperforming our expectations, we are encouraged with our customers' response in this market. We are looking forward to opening stores in 2015 in Ottawa, Ontario and Vancouver, British Columbia. In the U.S. we increased our presence with two full-line stores in The Woodlands, Texas and Jacksonville, Florida. In 2015, we plan to open three full-line stores in Puerto Rico, Minneapolis, Minnesota and Milwaukee, Wisconsin.\n\nAt Nordstrom Rack, we offer customers great brands at great prices, with 48 of the top 50 full-line brands represented. We opened 27 Nordstrom Rack stores in 2014, a record number of openings, contributing to Nordstrom Rack's total sales growth of 17%.\n\nOur online businesses continue to be our fastest-growing channels. In the spring of 2014, we expanded our capabilities through the launch of Nordstromrack.com, providing a seamless integration with HauteLook. We more than doubled our merchandise selection, which accelerated growth in this channel in the second half of 2014. 
Demonstrating synergies across our businesses, we enabled customers to return purchases from HauteLook and Nordstromrack.com to any of our Nordstrom Rack stores, which drove nearly one million incremental trips to Nordstrom Rack stores.\n\nNordstrom.com finished its fifth consecutive year of approximately 20% or more comparable sales growth, with a key driver being increased merchandise selection. In 2015, we plan to open our third fulfillment center, located in Pennsylvania, which will enhance the customer experience through faster delivery. Furthermore, we have extended our full-price offering with our acquisition of Trunk Club, a high-growth business offering a new approach to personalized service.\n\nOur credit business, through our Nordstrom Rewards program, continues to play an important role in attracting new customers and deepening our engagement with existing customers. The program contributes to our overall results, with members shopping more frequently and spending more on average than non-members. For the third consecutive year, we opened over one million new accounts. With over four million active members, 2014 sales from members represented approximately 40% of our sales.\n\nWe are confident in our ability to execute our customer strategy as we evolve with customers and continue to leverage capabilities across all channels to serve customers on their terms. To enhance the customer experience, we continue to make investments in our stores in new markets such as Canada, Puerto Rico and Manhattan, in our ecommerce and fulfillment capabilities and in technology to support growth across all channels. 
We believe these investments in our customer strategy will help us achieve long-term top-quartile shareholder returns through high single-digit total sales growth and mid-teens Return on Invested Capital.", - "page_start": 27, - "page_end": 27, - "source_file": "NYSE_JWN_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_JWN_2014.pdf", - "query": "How many stores did Nordstrom posses at the end of 2014 ?", - "target_page": 22, - "target_passage": "Number of stores, end of year : 292", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "# **Item 1. Business.**\n\n## **DESCRIPTION OF BUSINESS**\n\nFounded in 1901 as a retail shoe business in Seattle, Nordstrom later incorporated in Washington state in 1946 and went on to become one of the leading fashion specialty retailers based in the U.S. As of March 16, 2015, we operate 290 U.S. stores located in 38 states as well as a robust ecommerce business through Nordstrom.com, Nordstromrack.com and HauteLook and TrunkClub.com. We also operate two Nordstrom full-line stores in Canada. The west and east coasts of the U.S. are the areas in which we have the largest presence. We have two reportable segments: Retail and Credit.\n\nAs of March 16, 2015, the **Retail** segment includes our 115 \"Nordstrom\" branded full-line stores in the U.S. and Nordstrom.com, 167 off-price Nordstrom Rack stores, two Canada full-line stores, Nordstromrack.com and HauteLook, and other retail channels including five Trunk Club showrooms and TrunkClub.com, our two Jeffrey boutiques and one clearance store that operates under the name \"Last Chance.\" Through these multiple retail channels, we strive to deliver the best customer experience possible. We offer an extensive selection of high-quality brand-name and private label merchandise focused on apparel, shoes, cosmetics and accessories. 
Our integrated Nordstrom full-line stores and online store allow us to provide our customers with a seamless shopping experience. In-store purchases are primarily fulfilled from that store's inventory, but when inventory is unavailable at that store it may also be shipped to our customers from our fulfillment center in Cedar Rapids, Iowa, or from other Nordstrom full-line stores. Online purchases are primarily shipped to our customers from our Cedar Rapids fulfillment center, but may also be shipped from our Nordstrom full-line stores. Our customers can also pick up online orders in our Nordstrom full-line stores if inventory is available at one of our locations. These capabilities allow us to better serve customers across various channels and improve sales. Nordstrom Rack stores purchase high-quality brand-name merchandise primarily from the same vendors carried in Nordstrom full-line stores and also serve as outlets for clearance merchandise from our Nordstrom stores and other retail channels. During the year, we launched Nordstromrack.com and the associated mobile app. Nordstromrack.com combines the technology expertise of HauteLook with the merchant expertise of Nordstrom Rack. Nordstromrack.com and HauteLook offer limited-time sale events on fashion and lifestyle brands as well as a persistent selection of off-price, high-quality brand-name merchandise and are integrated with a single customer log-in, shared shopping cart and streamlined checkout process. Furthermore, we can accommodate returns from these sites by mail or at any Nordstrom Rack location.\n\nOur **Credit** segment includes our wholly owned federal savings bank, Nordstrom fsb, through which we provide a private label credit card, two Nordstrom Visa credit cards and a debit card. The credit and debit cards feature a loyalty program designed to increase customer visits and spending. 
Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\nFor more information about our business and our reportable segments, see Item 7: Management's Discussion and Analysis of Financial Condition and Results of Operations and Note 16: Segment Reporting in Item 8: Financial Statements and Supplementary Data.\n\n#### **FISCAL YEAR**\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31st. References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n#### **TRADEMARKS**\n\nWe have 156 trademarks, each of which is the subject of one or more trademark registrations and/or trademark applications. Our most notable trademarks include Nordstrom, Nordstrom Rack, HauteLook, Halogen, BP., Zella, Caslon and Trunk Club. Each of our trademarks is renewable indefinitely, provided that it is still used in commerce at the time of the renewal.\n\n#### **RETURN POLICY**\n\nWe have a fair and liberal approach to returns as part of our objective to provide high-quality customer service. We do not have a formal return policy at our Nordstrom full-line stores or online at Nordstrom.com. Our goal is to take care of our customers, which includes making returns and exchanges easy, whether in stores or online, where we offer free shipping and free returns. Our Nordstrom Rack stores generally accept returns up to 90 days from the date of purchase with the original price tag and sales receipt, and also accept returns of Nordstromrack.com and HauteLook merchandise. 
Nordstromrack.com and HauteLook generally accept returns of apparel, footwear and accessories within 90 days from the date of shipment.\n\n#### **SEASONALITY**\n\nDue to our Anniversary Sale in July and the holidays in December, our sales are typically higher in the second and fourth quarters than in the first and third quarters of the fiscal year.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "#### Net Sales (2014 vs. 2013)\n\nIn 2014, total company net sales increased 7.8%, which was attributable to the comparable sales increase of 4.0%. During the year, we opened three Nordstrom full-line stores, including our first store in Canada, and 27 Nordstrom Rack stores. Additionally, as a result of the acquisition of Trunk Club, we acquired four Trunk Club showrooms and opened one additional Trunk Club showroom in 2014. These additions increased our square footage by 5.5% and represented 2.8% of our total net sales for 2014.\n\nNordstrom net sales, which consist of the U.S. full-line and Nordstrom.com businesses, were $9,678 in 2014, an increase of 3.8% compared with 2013, with comparable sales up 3.6%. These increases reflected continued momentum in our Nordstrom.com channel. Both the number of items sold and the average selling price increased on a comparable basis in 2014. Category highlights included Accessories, Cosmetics and Men's Apparel.\n\nU.S. full-line net sales for 2014 were $7,682, a decrease of 0.3% compared with 2013 and comparable sales decreased by 0.5%. The topperforming geographic regions for full-line stores were the Southeast and Southwest.\n\nOur Nordstrom.com, Nordstromrack.com and HauteLook channels continued to experience outsized growth. 
Nordstrom.com net sales increased 23% and Nordstromrack.com and HauteLook net sales increased 22%, both driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales increased $477, or 17%, compared with 2013, reflecting incremental volume from existing stores and the impact of 27 new stores since fiscal 2013. Comparable sales increased 3.8% for the year. Shoes and Accessories were the top-performing categories for the year. On a comparable basis, the average selling price of Nordstrom Rack merchandise increased while the number of items sold was flat.\n\n#### Net Sales (2013 vs. 2012)\n\nNet sales for 2013 increased 3.4% compared with 2012, driven by a comparable sales increase of 2.5%, attributable to growth at Nordstrom.com and Nordstrom Rack's accelerated store expansion. During 2013, we opened 22 Nordstrom Rack stores and relocated one Nordstrom full-line store and two Nordstrom Rack stores. These additions represented 1.6% of our total net sales for 2013 and increased our square footage by 2.9%. The 53rd week in 2012 contributed approximately $162 in additional net sales.\n\nNordstrom net sales for 2013 were $9,327, an increase of 1.0% compared with 2012, with comparable sales up 2.3%. Strong growth at Nordstrom.com was partially offset by sales decreases at our full-line stores. Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012. Category highlights included Cosmetics, Men's Shoes and Women's Apparel.\n\nFull-line net sales for 2013 were $7,705, a decrease of 3.3% compared with 2012, which was primarily driven by a comparable sales decrease of 2.1% for the year. The top-performing geographic regions for full-line stores for 2013 were the Southwest and Southeast. 
Nordstrom.com showed strong sales growth with net sales of $1,622, an increase of 28% compared with 2012, with comparable sales up 30% on a comparable 52-week basis. These increases were driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales were $2,738, up 12.0% compared with 2012, primarily due to 37 new store openings in 2012 and 2013. Comparable sales increased 2.7% for the year. Cosmetics and Shoes were the strongest-performing categories for the year. Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012.\n\n#### **Retail Business Gross Profit**\n\nThe following table summarizes the Retail Business gross profit:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Retail gross profit1 | $4,709 | $4,434 | $4,335 |\n| Retail gross profit as a % of net sales | 35.9% | 36.4% | 36.9% |\n| Ending inventory per square foot2 | $64.05 | $58.84 | $53.77 |\n| Inventory turnover rate3 | 4.67 | 5.07 | 5.37 |\n\n1 Retailers do not uniformly record the costs of buying and occupancy and supply chain operations (freight, purchasing, receiving, distribution, etc.) between gross profit and selling, general and administrative expense. As such, our gross profit and selling, general and administrative expenses and rates may not be comparable to other retailers' expenses and rates.\n\n2 Ending inventory includes pack and hold inventory of $222, $173 and $125 in 2014, 2013 and 2012, which represents strategic purchases of merchandise for upcoming selling seasons.\n\n3 Inventory turnover rate is calculated as annual cost of sales and related buying and occupancy costs (for all segments) divided by 4-quarter average inventory. Retailers do not uniformly calculate inventory turnover as buying and occupancy costs may be included in selling, general and administrative expenses. 
As such, our inventory turnover rates may not be comparable to other retailers.",
          "page_start": 30,
          "page_end": 30,
          "source_file": "NYSE_JWN_2014.pdf"
        },
        {
          "text": "**OUR NEW LOOK** FROM WINDOWS THAT BRING THE OUTSIDE IN TO DEPARTMENTS THAT SEAMLESSLY FLOW TOGETHER— OUR NEW STORE DESIGN CREATES AN EXCITING SPACE THAT CAN CHANGE WITH HOW OUR CUSTOMERS SHOP.\n\nto be within two-day ground delivery of approximately half the population of the United States, which will help improve delivery times for customers and help us meet their rising expectations.\n\nFinally, in 2014, we acquired Trunk Club, a high-growth personalized men's clothing business based on a service model that is highly complementary to our own. We believe Trunk Club is a natural extension of our business, and together we will continue to evolve and bring together the online and offline worlds to deliver a great shopping experience.\n\n#### OFF-PRICE: NORDSTROM RACK, NORDSTROMRACK.COM AND HAUTELOOK\n\nWe opened a record 27 new Nordstrom Rack stores, ending 2014 with 167 stores and on track to meet our long-term growth plans of 300 stores by 2020. Customers continue to respond favorably to the treasure-hunt experience that defines Nordstrom Rack stores. As we expand in many markets for the first time, we hope to continue delivering a great experience, as this business represents a terrific opportunity for us to attract new customers. Last year, Nordstrom Rack was our biggest source of new customers, attracting nearly 4 million. Also, a year ago, we began accepting returns of HauteLook and Nordstromrack.com merchandise at any Nordstrom Rack store. This drove nearly 1 million trips to Nordstrom Rack stores in 2014. The Nordstrom Rack customer also tends to be younger than our full-line customer, and there is a meaningful opportunity for these customers to begin shopping our full-price channels as well. 
We plan to open 27 more Nordstrom Racks in 2015 across the U.S.\n\n*I love how you used models with* **\"** *physical challenges in your Anniversary catalog. Nice work!* **\"**\n\nOUR CUSTOMER, DONNA A.", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "Dollar and share amounts in millions except per share, per option and per unit amounts\n\n#### **NOTE 1: NATURE OF OPERATIONS AND SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES**\n\n#### **The Company**\n\nFounded in 1901 as a shoe store in Seattle, Washington, Nordstrom, Inc. is now a leading fashion specialty retailer that offers customers a well-edited selection of high-quality fashion brands focused on apparel, shoes, cosmetics and accessories for men, women and children. This breadth of merchandise allows us to serve a wide range of customers who appreciate quality fashion and a superior shopping experience. We offer an extensive selection of high-quality brand-name and private label merchandise through multiple retail channels, including 116 \"Nordstrom\" branded full-line stores in the U.S. and at Nordstrom.com (collectively, \"Nordstrom\"), one Canada full-line store, 167 off-price Nordstrom Rack stores, Nordstromrack.com and HauteLook, five Trunk Club showrooms and TrunkClub.com, two Jeffrey boutiques and one Last Chance clearance store. Our stores are located in 38 states throughout the U.S and in one province in Canada.\n\nThrough our Credit segment, we provide our customers with a variety of payment products and services, including a Nordstrom private label card, two Nordstrom Visa credit cards and a debit card for Nordstrom purchases. These products also allow our customers to participate in our loyalty program designed to increase customer visits and spending. Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. 
In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\n#### **Fiscal Year**\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31st. References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n#### **Principles of Consolidation**\n\nThe consolidated financial statements include the balances of Nordstrom, Inc. and its subsidiaries. All intercompany transactions and balances are eliminated in consolidation.\n\n#### **Use of Estimates**\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the U.S. requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenues and expenses, and disclosure of contingent assets and liabilities during the reporting period. Uncertainties regarding such estimates and assumptions are inherent in the preparation of financial statements and actual results may differ from these estimates and assumptions. Our most significant accounting judgments and estimates include the allowance for credit losses, revenue recognition, inventory, goodwill, stock-based compensation and income taxes.\n\n#### **Net Sales**\n\nWe recognize revenue from sales at our retail stores at the point of sale, net of estimated returns and excluding sales taxes. Revenue from sales to customers shipped directly from our stores, website and catalog, which includes shipping revenue when applicable, is recognized upon estimated receipt by the customer. We estimate customer merchandise returns based on historical return patterns and reduce sales and cost of sales accordingly. 
Activity in the allowance for sales returns, net, for the past three fiscal years is as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Allowance at beginning of year | $128 | $116 | $103 |\n| Additions | 2,129 | 1,880 | 1,724 |\n| Returns, net1 | (2,097) | (1,868) | (1,711) |\n| Allowance at end of year | $160 | $128 | $116 |\n\n1 Returns, net consist of actual returns offset by the value of the merchandise returned and any related sales commission.\n\n#### **Credit Card Revenues**\n\nCredit card revenues include finance charges, late fees and other revenue generated by our combined Nordstrom private label card and Nordstrom Visa credit card programs, and interchange fees generated by the use of Nordstrom Visa credit cards at third-party merchants. Finance charges and late fees are assessed according to the terms of the related cardholder agreements and recognized as revenue when earned. Credit card revenues are recorded net of estimated uncollectible finance charges and fees.\n\n#### **Cost of Sales**\n\nCost of sales includes the purchase cost of inventory sold (net of vendor allowances), in-bound freight and certain costs of loyalty program benefits related to our credit and debit cards.", - "page_start": 52, - "page_end": 52, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "# **Item 7. Management's Discussion and Analysis of Financial Condition and Results of Operations.**\n\nDollar, share and square footage amounts in millions except percentages, per share and per square foot amounts\n\n#### **OVERVIEW**\n\nNordstrom is a leading fashion specialty retailer offering apparel, shoes, cosmetics and accessories for women, men and children. 
We offer an extensive selection of high-quality brand-name and private label merchandise through our various channels: \"Nordstrom\" branded full-line stores and online store at Nordstrom.com, Nordstrom Rack stores, Nordstromrack.com and HauteLook and other retail channels, including Trunk Club showrooms and TrunkClub.com, our Jeffrey boutiques and our clearance store that operates under the name \"Last Chance.\" As of January 31, 2015, our stores are located in 38 states throughout the United States and in one province in Canada. In addition, we offer our customers a Nordstrom Rewards™ loyalty program along with a variety of payment products and services, including credit and debit cards.\n\nWe continue to see the ongoing evolution of retail, with increasing customer interaction between our stores and ecommerce. We are making progress to meet customer expectations of a personalized experience that merges the richness of stores with the convenience of online. Because the customer views us simply as Nordstrom, we believe there is tremendous value in strengthening our platform for the customer experience that encompasses full-price, off-price, in-store and online. While each channel represents a substantial growth opportunity, there are significant synergies across channels to create a unique customer experience to gain market share.\n\nWe considered 2014 a watershed year in our company history, with our successful entry into Canada, continued expansion of our Nordstrom Rack business through store growth, the launch of Nordstromrack.com and the acquisition of Trunk Club. Our performance in 2014 reflected continued progress in executing our customer strategy through investments to drive growth across channels. We achieved total net sales growth of 7.8%, adding nearly $1 billion to our top-line and delivering record sales and earnings per diluted share. 
Our financial position remains strong and this marked the sixth consecutive year we generated over $1 billion in cash flow from operations.\n\nOur partnership with vendors and brands enhances our product offering. We offer Topshop merchandise at 53 full-line stores and online, with plans to reach over 80 stores in 2015. Our new partnership with Madewell in 2015, initially available at 15 of our stores and online, is another way to provide sought-after brands that appeal to new and existing customers.\n\nIn 2014, we opened our first full-line store in Canada in Calgary, Alberta, reflecting a multi-year effort from our team to address the unique challenges of crossing the border. With our store outperforming our expectations, we are encouraged with our customers' response in this market. We are looking forward to opening stores in 2015 in Ottawa, Ontario and Vancouver, British Columbia. In the U.S. we increased our presence with two full-line stores in The Woodlands, Texas and Jacksonville, Florida. In 2015, we plan to open three full-line stores in Puerto Rico, Minneapolis, Minnesota and Milwaukee, Wisconsin.\n\nAt Nordstrom Rack, we offer customers great brands at great prices, with 48 of the top 50 full-line brands represented. We opened 27 Nordstrom Rack stores in 2014, a record number of openings, contributing to Nordstrom Rack's total sales growth of 17%.\n\nOur online businesses continue to be our fastest-growing channels. In the spring of 2014, we expanded our capabilities through the launch of Nordstromrack.com, providing a seamless integration with HauteLook. We more than doubled our merchandise selection, which accelerated growth in this channel in the second half of 2014. 
Demonstrating synergies across our businesses, we enabled customers to return purchases from HauteLook and Nordstromrack.com to any of our Nordstrom Rack stores, which drove nearly one million incremental trips to Nordstrom Rack stores.\n\nNordstrom.com finished its fifth consecutive year of approximately 20% or more comparable sales growth, with a key driver being increased merchandise selection. In 2015, we plan to open our third fulfillment center, located in Pennsylvania, which will enhance the customer experience through faster delivery. Furthermore, we have extended our full-price offering with our acquisition of Trunk Club, a high-growth business offering a new approach to personalized service.\n\nOur credit business, through our Nordstrom Rewards program, continues to play an important role in attracting new customers and deepening our engagement with existing customers. The program contributes to our overall results, with members shopping more frequently and spending more on average than non-members. For the third consecutive year, we opened over one million new accounts. With over four million active members, 2014 sales from members represented approximately 40% of our sales.\n\nWe are confident in our ability to execute our customer strategy as we evolve with customers and continue to leverage capabilities across all channels to serve customers on their terms. To enhance the customer experience, we continue to make investments in our stores in new markets such as Canada, Puerto Rico and Manhattan, in our ecommerce and fulfillment capabilities and in technology to support growth across all channels. 
We believe these investments in our customer strategy will help us achieve long-term top-quartile shareholder returns through high single-digit total sales growth and mid-teens Return on Invested Capital.", - "page_start": 27, - "page_end": 27, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "»›**THE RACK GOES ONLINE** SHOPPING GENIUSES CAN NOW CONTINUE THEIR STYLE SEARCH AT NORDSTROMRACK.COM, WHERE CUSTOMERS CAN EASILY CHOOSE HOW THEY SHOP BOTH HAUTELOOK AND NORDSTROM RACK.\n\nour engagement with customers. In 2014, we added more than 1 million new Rewards accounts, a 15% increase from the previous year. We want to give customers more choices with our loyalty program, and our goal is to provide an integrated multi-tender program in all stores and online later this year. We know our Rewards members are many of our most loyal and best customers. So growing these relationships by offering programs that appeal to more customers will be beneficial in the long term.\n\n#### CONCLUSION\n\nOur strategy is based on the customer and will remain so. Customers' expectations of speed, convenience, personalization and mobile are increasing. As we continue on our journey, we recognize it's imperative for us to invest for the future and find ways to make our stores more convenient and our online experience richer. We believe we are well positioned to deliver a great experience for our customers—no matter how they choose to shop with Nordstrom.\n\n**Blake W. Nordstrom** President, Nordstrom, Inc.\n\n**Peter E. Nordstrom** President of Merchandising, Nordstrom, Inc.\n\n**Erik B. Nordstrom** President of Nordstrom.com, Nordstrom, Inc.\n\n*I don't think I could've* **\"** *received better news today. 
Nordstrom Rack has now launched online!* **\"**\n\nOUR CUSTOMER, JOANNA D.", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "#### **Retail Business Net Sales**\n\nIn our ongoing effort to enhance the customer experience, we are focused on providing customers with a seamless experience across our channels. While our customers may engage with us through multiple channels, we know they value the overall Nordstrom brand experience and view us simply as Nordstrom, which is ultimately how we view our business. To provide additional transparency into our net sales by channel, we present the following summary of our Retail Business:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Net sales by channel: | | | |\n| Nordstrom full-line stores - U.S. | $7,682 | $7,705 | $7,964 |\n| Nordstrom.com | 1,996 | 1,622 | 1,269 |\n| Nordstrom | 9,678 | 9,327 | 9,233 |\n| Nordstrom Rack | 3,215 | 2,738 | 2,445 |\n| Nordstromrack.com and HauteLook | 360 | 295 | 236 |\n| Other retail1 | 116 | 35 | 35 |\n| Total Retail segment | 13,369 | 12,395 | 11,949 |\n| Corporate/Other | (259) | (229) | (187) |\n| Total net sales | $13,110 | $12,166 | $11,762 |\n| Net sales increase | 7.8% | 3.4% | 12.1% |\n| Comparable sales increase (decrease) by channel2: | | | |\n| Nordstrom full-line stores - U.S. | (0.5%) | (2.1%) | 3.9% |\n| Nordstrom.com | 23.1% | 29.5% | 37.1% |\n| Nordstrom | 3.6% | 2.3% | 7.5% |\n| Nordstrom Rack | 3.8% | 2.7% | 7.4% |\n| Nordstromrack.com and HauteLook | 22.1% | 27.3% | — |\n| Total company | 4.0% | 2.5% | 7.3% |\n| Sales per square foot3: | | | |\n| Total sales per square foot | $493 | $474 | $470 |\n| 4-wall sales per square foot | 413 | 408 | 417 |\n| Full-line sales per square foot - U.S. 
| 371 | 372 | 385 |\n| Nordstrom Rack sales per square foot | 552 | 553 | 568 |\n| Percentage of net sales by merchandise category: | | | |\n| Women's Apparel | 30% | 31% | 31% |\n| Shoes | 23% | 23% | 23% |\n| Men's Apparel | 16% | 16% | 16% |\n| Women's Accessories | 14% | 14% | 13% |\n| Cosmetics | 11% | 11% | 11% |\n| Kids' Apparel | 4% | 3% | 3% |\n| Other | 2% | 2% | 3% |\n| Total | 100% | 100% | 100% |\n\n1 Other retail includes our Jeffrey boutiques, Trunk Club and our Nordstrom Canada full-line store.\n\n2 Comparable sales include sales from stores that have been open at least one full year at the beginning of the year. We also include sales from our online channels (Nordstrom.com, Nordstromrack.com and HauteLook) in comparable sales because of the integration with our stores. Fiscal year 2012 includes an extra week (the 53rd week) as a result of our 4-5-4 retail reporting calendar. The 53rd week is not included in comparable sales calculations.\n\n3 Sales per square foot is calculated as net sales divided by weighted-average square footage. Weighted-average square footage includes a percentage of year-end square footage for new stores equal to the percentage of the year during which they were open. 4-wall sales per square foot is calculated as sales for Nordstrom U.S. full-line stores, Nordstrom Rack stores, Jeffrey boutiques, our Canada full-line store, Last Chance and Trunk Club showrooms divided by their weighted-average square footage.", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "# **Item 1B. Unresolved Staff Comments.**\n\nNone.\n\n# **Item 2. 
Properties.**\n\nThe following table summarizes the number of retail stores we own or lease, and the percentage of total store square footage represented by each listed category as of January 31, 2015:\n\n| | Number of stores | % of total store square footage |\n| --- | --- | --- |\n| Leased stores on leased land | 195 | 38% |\n| Owned stores on leased land | 61 | 40% |\n| Owned stores on owned land | 35 | 21% |\n| Partly owned and partly leased store | 1 | 1% |\n| Total | 292 | 100% |\n\nThe following table summarizes our store activity during the last three years:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Number of stores, beginning of year | 260 | 240 | 225 |\n| Stores opened | 31 | 22 | 16 |\n| Stores acquired | 4 | — | — |\n| Stores closed | (3) | (2) | (1) |\n| Number of stores, end of year | 292 | 260 | 240 |\n| Nordstrom full-line stores - U.S. | 116 | 117 | 117 |\n| Nordstrom Rack | 167 | 140 | 119 |\n| Other1 | 9 | 3 | 4 |\n\n1 Other includes Jeffrey boutiques, Trunk Club showrooms, our Nordstrom Canada full-line store and Last Chance.\n\nIn 2014, we opened three Nordstrom full-line stores (The Woodlands, Texas; Calgary, Alberta; and Jacksonville, Florida) and 27 Nordstrom Rack stores (Palm Desert, California; San Francisco, California; Chicago, Illinois; Riverside, California; Skokie, Illinois; Tulsa, Oklahoma; Wauwatosa, Wisconsin; Brooklyn, New York; Columbus, Ohio; Houston, Texas; Manhassett, New York; Chicago, Illinois; Dayton, Ohio; Houston, Texas; Queens, New York; Brentwood, Tennessee; Greenville, South Carolina; Madison, Wisconsin; Tempe, Arizona; Brooklyn, New York; Livingston, New Jersey; West Palm Beach, Florida; Brandon, Florida; Columbia, South Carolina; Des Moines, Iowa; Philadelphia, Pennsylvania; and Summerlin, Nevada). As part of our purchase of Trunk Club in August 2014, we acquired four Trunk Club showrooms (Los Angeles, California; Chicago, Illinois; Dallas, Texas; and Washington D.C.) 
and opened one additional Trunk Club showroom (New York City, New York) in December 2014. Additionally, in 2014, we closed three Nordstrom full-line stores (Orlando, Florida; Vancouver, Washington; and Portland, Oregon).\n\nTo date in 2015, we have opened one Nordstrom full-line store in Ottawa, Ontario. During the remainder of 2015, we have announced the opening of four additional Nordstrom full-line stores (San Juan, Puerto Rico; Vancouver, British Columbia; Minneapolis, Minnesota; and Wauwatosa, Wisconsin) and the opening of 27 additional Nordstrom Rack stores (Bakersfield, California; Redlands, California; Reno, Nevada; Princeton, New Jersey; Westwood, Massachusetts; Webster, Texas; Laguna Niguel, California; Miami, Florida; Springfield, Virginia; St. Louis Park, Minnesota; Dublin, California; Albany, New York; Anchorage, Alaska; Baton Rouge, Louisiana; Buffalo, New York; Cerritos, California; Clearwater, Florida; Eatontown, New Jersey; Emeryville, California; Fort Collins, Colorado; Long Beach, California; Mount Pleasant, South Carolina; Newark, Delaware; Rockaway, New Jersey; Syracuse, New York; Thousand Oaks, California; and Wayne, New Jersey).\n\nWe also own six merchandise distribution centers (Portland, Oregon; Dubuque, Iowa; Ontario, California; Newark, California; Upper Marlboro, Maryland; and Gainesville, Florida) and we own one fulfillment center on leased land (Cedar Rapids, Iowa), all of which are utilized by our Retail segment. Trunk Club and HauteLook, which are included in our Retail segment, lease three administrative offices (Chicago, Illinois; Los Angeles, California and New York City, New York) and one fulfillment center (San Bernardino, California). We plan to open a third, owned fulfillment center (Elizabethtown, Pennsylvania) in the second half of 2015. We lease office buildings in Centennial, Colorado and Scottsdale, Arizona, both for use by our Credit segment. 
Our administrative offices in Seattle, Washington are a combination of leased and owned space. We also lease a data center in Centennial, Colorado.", - "page_start": 21, - "page_end": 21, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "«‹**A PERFECT PAIR: SHOES AND SJP** ACTRESS AND STYLE ICON SARAH JESSICA PARKER DESIGNED HER OWN SHOE LINE, SJP, AND WE WERE THE EXCLUSIVE RETAILER FOR ITS LAUNCH.\n\n»›**THAT'S BRILLIANT!** WE'LL HAVE TOPSHOP IN 80 STORES BY THE END OF 2015—AND THAT'S JUST ONE OF THE WAYS WE'RE ATTRACTING NEW YOUNG CUSTOMERS WITH GREAT BRANDS AT ACCESSIBLE PRICE POINTS.\n\n*Praise the fashion gods.* **\"** *Nordstrom Downtown Portland is opening Topshop in the next month.* **\"**\n\nOUR CUSTOMER, KARLY T.\n\nIn addition to our new stores, we improved our online/off-price capabilities with the launch of Nordstromrack.com. Combined with HauteLook, the integrated ecommerce site offers a consistent merchandise selection as well as flash sales in a single web or mobile experience, providing customers a wide range of merchandise with one easy-to-use, shared checkout. Since the launch last spring, we've more than doubled the selection at Nordstromrack.com. We will continue to work on ways to further integrate our business to improve our customer experience.\n\n#### INCREASING RELEVANCE\n\nWe know ultimately customers come to Nordstrom for great merchandise. They continue to respond to fresh, relevant brands. Last year, we were the exclusive retail partner for the global launch of Sarah Jessica Parker's SJP line of shoes and launched Charlotte Tilbury in Beauty. We increased the number of full-line stores with Topshop to 53 and launched Kate Moss for Topshop, which helped us rapidly grow the number of Topshop customers, including a younger customer who in many cases is new to Nordstrom. 
By the end of 2015, we plan to have Topshop in more than 80 stores.\n\nThis March, we were excited to begin carrying Madewell, representing a new partnership with J.Crew. Our initial launch was on Nordstrom.com and in 15 of our stores in our t.b.d. department. This is a terrific example of our continued focus to bring great fashion brands to customers at accessible price points.\n\nFinally, Nordstrom Rewards has been a successful program enabling us to deepen", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "Nordstrom Rack net sales for the quarter increased $130, or 17%, reflecting 27 new Nordstrom Rack store openings since the fourth quarter of 2013, while comparable sales increased 3.2%. On a comparable basis, the average selling price of Nordstrom Rack merchandise increased while the number of items sold was flat. Shoes and Accessories were the category highlights for Nordstrom Rack.\n\n#### Gross Profit\n\nOur total company gross profit rate decreased 53 basis points compared with the same period in the prior year, primarily due to increased markdowns at Nordstrom Rack.\n\n#### Retail Selling, General, and Administrative Expenses\n\nOur Retail SG&A rate increased 80 basis points primarily due to expenses related to the acquisition of Trunk Club and ongoing technology and fulfillment expenses.\n\n#### Credit Expenses\n\nIn the fourth quarter, expenses for our Credit segment of $54 increased from $38 in the prior year. The increase was primarily driven by higher operational expenses resulting from a 6% increase in credit volume during the fourth quarter of 2014. 
The fourth quarter of 2013 also included the impact of the conversion of our Nordstrom Rewards travel benefit into Nordstrom Notes, which decreased operational expenses in the prior year.\n\nFor further information on our quarterly results in 2014 and 2013, refer to Note 17: Selected Quarterly Data in the Notes to Consolidated Financial Statements in Item 8: Financial Statements and Supplementary Data.\n\n#### **2015 Outlook**\n\nOur expectations for 2015 are as follows:\n\n| Net sales | 7 percent to 9 percent increase |\n| --- | --- |\n| Comparable sales | 2 percent to 4 percent increase |\n| Earnings per diluted share1 | $3.65 to $3.80 |\n\n1 This outlook does not include the impact of any future share repurchases.\n\nCapital expenditures, net of property incentives, of approximately $1.2 billion are expected in 2015, an increase from $751 in 2014. The increase relates to store expansion, including Canada and Manhattan, and ongoing investments to improve the customer experience through flagship store remodels and a third fulfillment center expected to open in the second half of the year. To date in 2015, we have opened our second full-line store in Canada. We plan to open 27 Nordstrom Rack stores, three additional Nordstrom full-line stores in the U.S. and another full-line store in Canada during 2015. 
Planned net store openings are expected to increase our retail square footage by approximately 6.1%.", - "page_start": 36, - "page_end": 36, - "source_file": "NYSE_JWN_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2538.pdf", - "query": "What type of nanostructured material works notably well to build gas nanosensors ?", - "target_page": 1, - "target_passage": "carbon nanotubes (CNT) [2] have been shown to work remarkably well as de- tectors of small gas molecules", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## arXiv:1001.2538v1 [cond-mat.mes-hall] 14 Jan 2010\n\n## Computational Design of Chemical Nanosensors: Metal Doped Carbon Nanotubes\n\nJ. M. García-Lastra1,2,∗ D. J. Mowbray1,2, K. S. Thygesen2, A. Rubio1,3, and K. W. Jacobsen2\n\n*1Nano-Bio Spectroscopy group and ETSF Scientific Development Centre,*\n\n*Centro de Física de Materiales CSIC-UPV/EHU-MPC and DIPC, Av. Tolosa 72, E-20018 San Sebastián, Spain*\n\n*2Center for Atomic-scale Materials Design, Department of Physics,*\n\n*Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark 3Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin, Germany*\n\nWe use computational screening to systematically investigate the use of transition metal doped carbon nanotubes for chemical gas sensing. For a set of relevant target molecules (CO, NH3, H2S) and the main components of air (N2, O2, H2O), we calculate the binding energy and change in conductance upon adsorption on a metal atom occupying a vacancy of a (6,6) carbon nanotube. Based on these descriptors, we identify the most promising dopant candidates for detection of a given target molecule. From the fractional coverage of the metal sites in thermal equilibrium with air, we estimate the change in the nanotube resistance per doping site as a function of the target molecule concentration assuming charge transport in the diffusive regime. 
Our analysis points to Ni-doped nanotubes as candidates for CO sensors working under typical atmospheric conditions.\n\nPACS numbers: 73.63.–b, 68.43.–h, 73.50.Lw\n\nThe ability to detect small concentrations of specific chemical species is fundamental for a variety of industrial and scientific processes as well as for medical applications and environmental monitoring [1]. In general, nanostructured materials should be well suited for sensor applications because of their large surface to volume ratio which makes them sensitive to molecular adsorption. Specifically, carbon nanotubes (CNT) [2] have been shown to work remarkably well as detectors of small gas molecules. This has been demonstrated both for individual CNTs [3–8] as well as for CNT networks [9, 10].\n\nPristine CNTs are known to be chemically inert – a property closely related to their high stability. As a consequence, only radicals bind strong enough to the CNT to notably affect its electrical properties [2, 5, 11–13]. To make CNTs attractive for sensor applications thus requires some kind of functionalization, e.g. through doping or decoration of the CNT sidewall [13–21]. Ideally, this type of functionalization could be used to control not only the reactivity of the CNT but also the selectivity towards specific chemical species.\n\nIn this work we consider the possibility of using CNTs doped by 3d transition metal atoms for chemical gas sensing. We use computational screening to systematically identify the most promising dopant candidates for detection of three different target molecules (CO, NH3, H2S) under typical atmospheric conditions. The screening procedure is based on the calculation of two microscopic descriptors: the binding energy and scattering resistance of the molecules when adsorbed on a doped CNT. These two quantities give a good indication of the gas coverage and impact on the resistance. For the most promising candidates we then employ a simple thermodynamic model of the CNT sensor. 
In this model, the binding energies are used to obtain the fractional coverage of the metallic sites as a function of the target molecule concentration under ambient conditions. Under the assumption of transport in the diffusive rather than localization regime, the change in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.\n\nWe find that oxidation of the active metal site passivates the sensor in the case of doping by Ti, V, Cr, and Mn under standard conditions (room temperature and 1 bar of pressure). Among the remaining metals, we identify Ni as the most promising candidate for CO detection. For this system the change in resistance per active site is generally significant (>1 Ω) for small changes in CO concentration in the relevant range of around 0.1–10 ppm. Our approach is quite general and is directly applicable to other nanostructures than CNTs, other functionalizations than metal doping, and other backgrounds than atmospheric air.\n\nAll total energy calculations and structure optimizations have been performed with the real-space density functional theory (DFT) code GPAW [22] which is based on the projector augmented wave method. We use a grid spacing of 0.2 Å for representing the density and wave functions and the PBE exchange correlation functional [23]. Transport calculations for the optimized structures have been performed using the nonequilibrium Green's function method [24] with an electronic Hamiltonian obtained from the SIESTA code [25] in a double zeta polarized (DZP) basis set. Spin polarization has been taken into account in all calculations.\n\nMetallic doping of a (6,6) CNT has been modeled in a supercell containing six repeated minimal unit cells along the CNT axis (dimensions: 15 Å × 15 Å × 14.622 Å). For this size of supercell a Γ-point sampling of the Brillouin zone was found to be sufficient. 
The formation energy for creating a vacancy (VC) occupied by a transition metal atom (M) was calculated using the relation\n\nEform[M@VC] = E[M@VC] + nE[C] − E[M@NT] (1)\n\nwhere E[M@VC] is the total energy of a transition metal atom occupying a vacancy in the nanotube, n is the number of carbon atoms removed to form the vacancy, E[C] is the energy per carbon atom in a pristine nanotube, and E[M@NT]\n\nDpto. Física de Materiales, Universidad del País Vasco,", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - }, - { - "text": "all N impurities. At this point it suffices to see that the conservative estimates obtained from Eq. (7) predict measurable signals in response to small changes in concentration of the target molecules.\n\nTo our knowledge, controlled doping of CNTs with transition metal atoms has so far not been achieved. It has, however, been found that metal atoms incorporated into the CNT lattice during catalytic growth are afterwards very difficult to remove [30]. Furthermore, it has been shown that CNT vacancies, which are needed for the metallic doping, may be formed in a controlled way by irradiation by Ar ions [31]. This suggests that metallic doping of CNTs should be possible.\n\nIn summary, we have presented a general model of nanostructured chemical sensors which takes the adsorption energies of the relevant chemical species and their individual scattering resistances as the only input. On the basis of this model we have performed a computational screening of transition metal doped CNTs, and found that Ni-doped CNTs are promising candidates for detecting CO in a background of air. 
The model may be applied straightforwardly to other nanostructures than CNTs, other functionalizations than metal doping and other gas compositions than air.\n\nThe authors acknowledge financial support from Spanish MEC (FIS2007-65702-C02-01), \"Grupos Consolidados UPV/EHU del Gobierno Vasco\" (IT-319-07), e-I3 ETSF project (Contract Number 211956), \"Red Espanola de Super- ˜ computacion\", NABIIT and the Danish Center for Scientific ´ Computing. The Center for Atomic-scale Materials Design (CAMD) is sponsored by the Lundbeck Foundation. JMG-L acknowledges funding from Spanish MICINN through Juan de la Cierva and Jose Castillejo programs. ´\n\n∗ Electronic address: juanmaria.garcia@ehu.es\n\n- [1] *Gas Sensing Materials, MRS Bull.*, vol. 24 (1999).\n- [2] J. C. Chalier, X. Blase, and S. Roche, \"Electronic and transport properties of nanotubes\", Rev. Mod. Phys. 79(2), 677 (May 2007), doi:10.1103/RevModPhys.79.677.\n- [3] J. Kong, N. R. Franklin, C. Zhou, M. G. Chapline, S. Peng, K. Cho, and H. Dai, \"Nanotube molecular wires as chemical sensors\", Science 287(5453), 622 (Jan. 2000), doi:10.1126/science.287.5453.622.\n- [4] P. G. Collins, K. Bradley, M. Ishigami, and A. Zettl, \"Extreme oxygen sensitivity of electronic properties of carbon nanotubes\", Science 287(5459), 1801 (Mar. 2000), doi:10.1126/science.287.5459.1801.\n- [5] C. Hierold, *Carbon Nanotube Devices: Properties, Modeling, Integration and Applications* (Wiley-VCH, Weinheim, 2008).\n- [6] F. Villalpando-Paez, A. H. Romero, E. Mu ´ noz-Sandoval, ˜ L. M. Mart´ınez, H. Terrones, and M. Terrones, \"Fabrication of vapor and gas sensors using films of aligned CNx nanotubes\", Chem. Phys. Lett. 386(1-3), 137 (Mar. 2004), doi:10.1016/j.cplett.2004.01.052.\n- [7] A. R. Rocha, M. Rossi, A. Fazzio, and A. J. R. da Silva, \"Designing real nanotube-based gas sensors\", Phys. Rev. Lett. 100(17), 176803 (May 2008), doi:10.1103/PhysRevLett.100.176803.\n- [8] S. Brahim, S. Colbern, R. Gump, and L. 
Grigorian, \"Tailoring gas sensing properties of carbon nanotubes\", J. Appl. Phys. 104(2), 024502 (Jul. 2008), doi:10.1063/1.2956395.\n- [9] C. Morgan, Z. Alemipour, and M. Baxendale, \"Variable range hopping in oxygen-exposed single-wall carbon nanotube networks\", Phys. Stat. Solidi A 205(6), 1394 (May 2008), doi:10.1002/pssa.200778113.\n- [10] D. J. Mowbray, C. Morgan, and K. S. Thygesen, \"Influence of O2 and N2 on the conductivity of carbon nanotube networks\", Phys. Rev. B 79(19), 195431 (May 2009), doi:10.1103/PhysRevB.79.195431.\n- [11] L. Valentini, F. Mercuri, I. Armentano, C. Cantalini, S. Picozzi, L. Lozzi, S. Santucci, A. Sgamellotti, and J. M. Kenny, \"Role of defects on the gas sensing properties of carbon nanotubes thin films: experiment and theory\", Chem. Phys. Lett. 387(4-6), 356 (Apr. 2004), doi:10.1016/j.cplett.2004.02.038.\n- [12] Z. Zanolli and J.-C. Charlier, \"Defective carbon nanotubes for single-molecule sensing\", Phys. Rev. B 80(15), 155447 (Oct. 2009), doi:10.1103/PhysRevB.80.155447.\n- [13] J. M. Garc´ıa-Lastra, K. S. Thygesen, M. Strange, and Angel Rubio, \"Conductance of sidewall-functionalized ´ carbon nanotubes: Universal dependence on adsorption sites\", Phys. Rev. Lett. 101(23), 236806 (Dec. 2008), doi:10.1103/PhysRevLett.101.236806.\n- [14] S. B. Fagan, R. Mota, A. J. R. da Silva, and A. Fazzio, \"*Ab initio* study of an iron atom interacting with single-wall carbon nanotubes\", Phys. Rev. B 67(20), 205414 (May 2003), doi:10.1103/PhysRevB.67.205414.\n- [15] Y. Yagi, T. M. Briere, M. H. F. Sluiter, V. Kumar, A. A. Farajian, and Y. Kawazoe, \"Stable geometries and magnetic properties of single-walled carbon nanotubes doped with 3d transition metals: A first-principles study\", Phys. Rev. B 69(7), 075414 (Feb 2004), doi:10.1103/PhysRevB.69.075414.\n- [16] S. H. Yang, W. H. Shin, J. W. Lee, S. Y. Kim, S. I. Woo, and J. K. Kang, \"Interaction of a transition metal atom with intrinsic defects in single-walled carbon nanotubes\", J. 
Phys. Chem. B 110(28), 13941 (Jun. 2006), doi:10.1021/jp061895q.\n- [17] K. T. Chan, J. B. Neaton, and M. L. Cohen, \"First-principles study of metal adatom adsorption on graphene\", Phys. Rev. B 77, 235430 (Jun. 2008), doi:10.1103/PhysRevB.77.235430.\n- [18] C. S. Yeung, L. V. Liu, and Y. A. Wang, \"Adsorption of small gas molecules onto Pt-doped single-walled carbon nanotubes\", J. Phys. Chem. C 112(19), 7401 (Apr. 2008), doi:10.1021/jp0753981.\n- [19] T. Vo, Y.-D. Wu, R. Car, and M. Robert, \"Structures, interactions, and ferromagnetism of Fe-carbon nanotube systems\", J. Phys. Chem. C 112(22), 400 (May 2008), doi:10.1021/jp0761968.\n- [20] J. A. Furst, M. Brandbyge, A.-P. Jauho, and K. Stokbro, \" ¨ *Ab initio* study of spin-dependent transport in carbon nanotubes with iron and vanadium adatoms\", Phys. Rev. B 78(19), 195405 (Nov. 2008), doi:10.1103/PhysRevB.78.195405.\n- [21] A. V. Krasheninnikov, P. O. Lehtinen, A. S. Foster, P. Pyykko, and R. M. Nieminen, \"Embedding transition- ¨ metal atoms in graphene: Structure, bonding, and magnetism\", Phys. Rev. Lett. 102(12), 126807 (Mar. 2009), doi:10.1103/PhysRevLett.102.126807.\n- [22] J. J. Mortensen, L. B. Hansen, and K. W. Jacobsen, \"Real-space grid implementation of the projector augmented wave method\", Phys. Rev. B 71(3), 035109 (Jan. 2005), doi:10.1103/PhysRevB.71.035109.\n- [23] J. P. Perdew, K. Burke, and M. Ernzerhof, \"Generalized gradient approximation made simple\", Phys. Rev. Lett. 77(18), 3865 (Oct. 1996), doi:10.1103/PhysRevLett.77.3865.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "FIG. 1: Structural schematics and formation energy for a 3d transition metal occupied monovacancy (black), divacancy I (gray), or divacancy II (white) in a (6,6) carbon nanotube. Formation energies of the empty vacancies are indicated by dashed lines.\n\nis the total energy of the pristine nanotube with a physisorbed transition metal atom. 
We have considered the monovacancy and two divacancies shown in Fig. 1. The energy required to form an empty vacancy is obtained from\n\n$$E_{\\rm form}[{\\rm VC}]=E[{\\rm VC}]+nE[{\\rm C}]-E[{\\rm NT}],\\tag{2}$$\n\nwhere E[VC] is the total energy of the nanotube with a vacancy of n atoms.\n\nThe calculated formation energies for the 3d transition metals are shown in Fig. 1. From the horizontal lines we see that both divacancies are more stable than the monovacancy. This may be attributed to the presence of a two-fold coordinated C atom in the monovacancy, while all C atoms remain three-fold coordinated in the divacancies. When a transition metal atom occupies a vacancy, the strongest bonding to the C atoms is through its d orbitals [26]. For this reason, Cu and Zn, which both have filled d-bands, are rather unstable in the CNT. For the remaining metals, adsorption in the monovacancies leads to quite stable structures. This is because the three-fold coordination of the C atoms and the CNT's hexagonal structure are recovered when the metal atom is inserted. On the other hand, metal adsorption in divacancies is slightly less stable because of the resulting pentagon defects, see upper panel in Fig. 1. A similar behaviour has been reported by Krasheninnikov *et al.* for transition metal atoms in graphene [21].\n\nThe adsorption energies for N2, O2, H2O, CO, NH3, and H2S on the metallic site of the doped (6,6) CNTs are shown in Fig. 2(a). The adsorption energy of a molecule X is defined by\n\n$$E_{\\rm ads}[X\\,\\mbox{\\small@M@VC}]=E[X\\,\\mbox{\\small@M@VC}]-E[X]-E[\\mbox{\\small@VC}],\\tag{3}$$\n\nFIG. 
2: Calculated (a) adsorption energy Eads in eV and (b) change in conductance ∆G in units of G0 =2e 2 /h for N2, O2, H2O, CO, NH3, and H2S on 3d transition metals occupying a monovacancy (top), divacancy I (middle), and divacancy II (bottom) in a (6,6) carbon nanotube.\n\nwhere E[X@M@VC] is the total energy of molecule X on a transition metal atom occupying a vacancy, and E[X] is the gas phase energy of the molecule.\n\nFrom the adsorption energies plotted in Fig. 2(a), we see that the earlier transition metals tend to bind the adsorbates stronger than the late transition metals. The latest metals in the series (Cu and Zn) bind adsorbates rather weakly in the divacancy structures. We also note that O2 binds significantly stronger than any of the three target molecules on Ti, V, Cr, and Mn (except for Cr in divacancy I where H2S is found to dissociate). Active sites containing these metals are therefore expected to be completely passivated if oxygen is present in the background. Further, we find H2O is rather weakly bound to most of the active sites. This ensures that these types of sensors are robust against changes in humidity.\n\nIn thermodynamic equilibrium [27], the coverage of the active sites follows from\n\n$$\\Theta[X]=\\frac{K[X]C[X]}{1+\\sum_{Y}K[Y]C[Y]},\\tag{4}$$\n\nwhere K = k+/k− is the ratio of forward and backward rate constants for the adsorption reaction,\n\n$$K[X]=\\exp\\left[-\\frac{E_{\\rm ads}[X]+TS[X]}{k_{B}T}\\right].\\tag{5}$$\n\nIn these expressions C[X] is the concentration of species X, S[X] is its gas phase entropy and T is the temperature. Experimental values for the gas phase entropies have been taken from Ref. [28].", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2538.pdf" - }, - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. Olejnik,1, 2 P. Wadley,3 J. Haigh,3 K. W. Edmonds,3 R. P. Campion,3 A. W. Rushforth,3 B. L. Gallagher,3\n\nC. T. Foxon,3 T. Jungwirth,2, 3 J. 
Wunderlich,1, 2 S. S. Dhesi,4 S. Cavill,4 G. van der Laan,4 and E. Arenholz5\n\n1Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\nInstitute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic 3School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom\n\n4Diamond Light Source, Harwell Science and Innovation Campus,\n\n5Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n(Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\n2\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p-type non-magnetic spacers2 . 
However, the Curie temperature TC of (Ga,Mn)As is currently limited to 185 K in single layers3 , and is typically much lower for layers embedded within a heterostructure2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively4,5. Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature7 . Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature8,9. Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition, which may further disrupt the interface order. The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples7 . Demonstration of coupling between the bulk of the layers, i.e., an exchange bias effect, would provide direct evidence of the interface magnetic order. 
Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.\n\nHere, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. As with previous studies of FM metal/FM semiconductor bilayers4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures10,11) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref.7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260◦C, using previously established methods3,8. A low Mn concentration of x ≈ 0.03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼0 ◦C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. Mn and Fe L2,3 x-ray absorption and XMCD\n\nDidcot, Oxfordshire, OX11 0DE, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "- [34] P. Moriarty, M. D. R. Taylor, and M. Brust, \"Nanostructured cellular networks,\" Phys. 
Rev. Lett. 89, 248303 (2002).\n- [35] E. Rabani, D. R. Reichman, P. L. Geissler, and L. E. Brus, \"Drying-mediated self-assembly of nanoparticles,\" Nature 426, 271–274 (2003).\n- [36] L. V. Govor, G. Reiter, J. Parisi, and G. H. Bauer, \"Self-assembled nanoparticle deposits formed at the contact line of evaporating micrometer-size droplets,\" Phys. Rev. E 69, 061609 (2004).\n- [37] C. P. Martin, M. O. Blunt, and P. Moriarty, \"Nanoparticle networks on silicon: Self-organized or disorganized?\" Nano Lett. 4, 2389–2392 (2004).\n- [38] C. P. Martin, M. O. Blunt, E. Pauliac-Vaujour, A. Stannard, P. Moriarty, I. Vancea, and U. Thiele, \"Controlling pattern formation in nanoparticle assemblies via directed solvent dewetting,\" Phys. Rev. Lett. 99, 116103 (2007).\n- [39] A. Stannard, C. P. Martin, E. Pauliac-Vaujour, P. Moriarty, and U. Thiele, \"Dual-scale pattern formation in nanoparticle assemblies,\" J. Chem. Phys. C 112, 15195–15203 (2008).\n- [40] E. Pauliac-Vaujour, A. Stannard, C. P. Martin, M. O. Blunt, I. Notingher, P. J. Moriarty, I. Vancea, and U. Thiele, \"Fingering instabilities in dewetting nanofluids,\" Phys. Rev. Lett. 100, 176102 (2008).\n- [41] I. Vancea, U. Thiele, E. Pauliac-Vaujour, A. Stannard, C. P. Martin, M. O. Blunt, and P. J. Moriarty, \"Front instabilities in evaporatively dewetting nanofluids,\" Phys. Rev. E 78, 041601 (2008).\n- [42] U. Thiele, *Entnetzung von Kollagenfilmen*, Ph.D. thesis, Technische Universitat Dresden (1998). ¨\n- [43] H. Yabu and M. Shimomura, \"Preparation of self-organized mesoscale polymer patterns on a solid substrate: Continuous pattern formation from a receding meniscus,\" Adv. Funct. Mater. 15, 575–581 (2005).\n- [44] R. D. Deegan, O. Bakajin, T. F. Dupont, G. Huber, S. R. Nagel, and T. A. Witten, \"Capillary flow as the cause of ring stains from dried liquid drops,\" Nature 389, 827–829 (1997).\n- [45] E. Adachi, A. S. Dimitrov, and K. 
Nagayama, \"Stripe patterns formed on a glass-surface during droplet evaporation,\" Langmuir 11, 1057–1060 (1995).\n- [46] R. D. Deegan, \"Pattern formation in drying drops,\" Phys. Rev. E 61, 475–485 (2000).\n- [47] R. D. Deegan, O. Bakajin, T. F. Dupont, G. Huber, S. R. Nagel, and T. A. Witten, \"Contact line deposits in an evaporating drop,\" Phys. Rev. E 62, 756–765 (2000).\n- [48] L. Shmuylovich, A. Q. Shen, and H. A. Stone, \"Surface morphology of drying latex films: Multiple ring formation,\" Langmuir 18, 3441–3445 (2002).\n- [49] V. X. Nguyen and K. J. Stebe, \"Patterning of small particles by a surfactant-enhanced Marangoni-", - "page_start": 27, - "page_end": 27, - "source_file": "1001.2669.pdf" - }, - { - "text": "FIG. 2: Typical KMC results for the final dried-in nanoparticle structures resulting from the evaporative dewetting processes of nanoparticle solutions (nanofluids) in the case of (a) a spinodal-like process at µ = −2.55, (b) nucleation and growth of holes at µ = −2.3, (c) unstable fronts at µ = −2.3 and low mobility M = 5, and (d) unstable fronts at µ = −2.3 and medium mobility M = 10. The starting configuration in (a) and (b) is a homogeneous liquid film with uniformly distributed particles whereas in (c) and (d) a hole at the center is nucleated 'by hand'. The remaining parameters are (a,b) M = 50, nl = 2.0, nn = 1.5, ρ av n = 0.2, kT = 0.3, MC steps= 500, domain size 1200 × 1200; (c,d) εnn = 2.0, nl = 1.5, ρ av n = 0.2, kT = 0.2, MC steps= 3000, domain size 1200 × 1200. Lattice sites occupied by particles are coloured black, and the empty sites are coloured white.", - "page_start": 10, - "page_end": 10, - "source_file": "1001.2669.pdf" - }, - { - "text": "Environmental Technologies is another growth business for us, and one in which we are focusing a significant portion of our overall R&D investment. We have helped shape this industry since our invention of the ceramic catalytic converter substrate in the 1970s. 
Today, we are working globally with the automotive and truck industries to develop and manufacture innovative new substrate and filter products to further reduce emissions from both gasoline- and diesel-powered vehicles. By around 2008, the diesel emission control business could be as big as our automotive emission control business is today. Our new clean-air products plant in Erwin, N.Y., is expected to be up and running by mid-decade in support of this great diesel opportunity.\n\nOur Semiconductor Optics business — with some exciting new developments in the production of calcium fluoride crystals and continued breakthroughs in the creation of HPFS® fused silica — is helping the semiconductor industry develop faster and more powerful integrated circuits. Our long-standing leadership in optical materials composition and optics design continues to be based on our unique set of skills in basic materials science, glass chemistry and metrology.\n\nE NABLING : MICROCIRCUIT LINES AT 1/1000 THE WIDTH OF A HUMAN HAIR", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "# Interplay among helical order, surface effects and range of interacting layers in ultrathin films.\n\nF. Cinti(1,2,3), A. Rettori(2,3), and A. Cuccoli(2)\n\n(1) Department of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2J1\n\n(2)CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (FI), Italy. and\n\n(3)CNR-INFM S3 National Research Center, I-41100 Modena, Italy\n\n(Dated: June 8, 2022)\n\nThe properties of helical thin films have been thoroughly investigated by classical Monte Carlo simulations. The employed model assumes classical planar spins in a body-centered tetragonal lattice, where the helical arrangement along the film growth direction has been modeled by nearest neighbor and next-nearest neighbor competing interactions, the minimal requirement to get helical order. 
We obtain that, while the in-plane transition temperatures remain essentially unchanged with respect to the bulk ones, the helical/fan arrangement is stabilized at more and more low temperature when the film thickness, n, decreases; in the ordered phase, increasing the temperature, a softening of the helix pitch wave-vector is also observed. Moreover, we show also that the simulation data around both transition temperatures lead us to exclude the presence of a first order transition for all analyzed sizes. Finally, by comparing the results of the present work with those obtained for other models previously adopted in literature, we can get a deeper insight about the entwined role played by the number (range) of interlayer interactions and surface effects in non-collinear thin films.\n\nPACS numbers: 64.60.an,64.60.De,75.10.Hk,75.40.Cx,75.70.Ak.\n\n# I. INTRODUCTION\n\nThe study of low dimensional frustrated magnetic systems1 still raises great interest, both in consequence of theoretical aspects, related to their peculiar critical properties2 , and in view of possible technological applications3 . Indeed, beside conventional ferromagnetic or antiferromagnetic phase transitions, in many new materials other nontrivial and unconventional forms of ordering have been observed4,5. A quantity of particular interest in this context is the spin chirality, an order parameter which turned out to be extremely relevant in, e.g., magnetoelectric materials6 , itinerant MnSi7 , binary compounds as FeGe8 , glass transition of spins9 , and XY helimagnets, as Holmium, Terbium or Dysprosium10. 
In the latter case, a new universality class was predicted because a Z2 × SO(2) symmetry is spontaneously broken in the ordered phase2 : In fact, when dealing with such systems, in addition to the SO(2) symmetry of the spin degrees of freedom S~ i , one has to consider also the Z2 symmetry of the spin chirality κij ∝ h S~ i × S~ j iz .\n\nFor these rare-earth elements, the development of new and sophisticated experimental methods11 has allowed to obtain ultra-thin films where the non-collinear modulation is comparable with the film thickness. Under such conditions the lack of translational invariance due to the presence of surfaces results decisive in order to observe a drastic change of the magnetic structures12. Recent experimental data on ultra-thin Holmium films13 have been lately interpreted and discussed14,15 on the basis of detailed classical Monte Carlo (MC) simulations of a spin Hamiltonian, which is believed to give a realistic modeling of bulk Holmium. Such Hamiltonian, proposed by Bohr et al.16, allows for competitive middle-range interactions by including six different exchange constants along the c crystallographic axis, and gives a helix pitch wave-vector Qz such that Qzc ′ ≃ 30◦ , where c ′ = c/2 is the distance between nearest neighboring spin layers parallel to the ab crystallographic planes, henceforth denoted also as x − y planes, while z will be taken parallel to c. For n > 16, n being the number of spin layers in the film, a correct bulk limit is reached, while for lower n the film properties are clearly affected by the strong competition among the helical pitch and the surface effects, which involve the majority of the spin layers. In the thickness range n = 9 − 16, i.e. 
right for thickness values comparable with the helical pitch, three different magnetic phases emerged, with the high-temperature, disordered, paramagnetic phase and the low-temperature, long-range ordered one separated by an intriguing, intermediatetemperature block phase, where outer ordered layers coexist with some inner disordered ones, the phase transition of the latter eventually displaying the signatures of a Kosterlitz-Thouless one. Finally, for n ≤ 7 the film collapses once and for all to a quasi-collinear order.\n\nThe complex phase diagram unveiled by such MC simulations awaken however a further intriguing question: to what extent the observed behavior may be considered a simple consequence of the competition between helical order and surface effects? I.e., is it just a matter of having such a competition or does the range of interactions also play a relevant role? Indeed, when the range of the interactions is large enough we have a greater number of planes which can be thought of as \"surface planes\", i.e. for which the number of interacting neighbors are significantly reduced with respect to the bulk layers; therefore, we expect that the larger the interaction range, the stronger should be the surface effects. But, at the same time, the same modulation of the magnetic order can", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0510.pdf" - }, - { - "text": "- [24] M. Strange, I. S. Kristensen, K. S. Thygesen, and K. W. Jacobsen, \"Benchmark density functional theory calculations for nanoscale conductance\", J. Chem. Phys. 128(11), 114714 (Mar. 2008), doi:10.1063/1.2839275.\n- [25] J. M. Soler, E. Artacho, J. D. Gale, A. Garcia, J. Junquera, P. Ordejon, and D. S ´ anchez-Portal, \"The SIESTA method for ´ *ab initio* order-n materials simulation\", J. Phys.: Condens. Matter 14(11), 2745 (Mar. 2002), doi:10.1088/0953-8984/14/11/302.\n- [26] J. S. 
Griffith, *The Theory of Transition-Metal Ions* (Cambridge University Press, London, 1961).\n- [27] P. Atkins and J. de Paula, *Physical Chemistry*, 8th ed. (Oxford University Press, London, 2006).\n- [28] D. Lide, *Handbook of Chemistry and Physics*, 87th ed. (CRC-Press, 2006–2007).\n- [29] T. Markussen, R. Rurali, A.-P. Jauho, and M. Brandbyge, \"Scal-\n\ning theory put into practice: First-principles modeling of transport in doped silicon wires\", Phys. Rev. Lett. 99(7), 076803 (Aug. 2007), doi:10.1103/PhysRevLett.99.076803.\n\n- [30] M. Ushiro, K. Uno, T. Fujikawa, Y. Sato, K. Tohji, F. Watari, W.-J. Chun, Y. Koike, and K. Asakura, \"X-ray absorption fine structure (XAFS) analyses of Ni species trapped in graphene sheet of carbon nanofibers\", Phys. Rev. B 73(14), 144103 (Apr. 2006), doi:10.1103/PhysRevB.73.144103.\n- [31] C. Gomez-Navarro, P. J. de Pablo, J. Gomez-Herrero, B. Biel, F. J. Garcia-Vidal, A. Rubio, and F. Flores, \"Tuning the conductance of single-walled carbon nanotubes by ion irradiation in the Anderson localization regime\", Nature Materials 4, 534 (Jun. 2005), doi:10.1038/nmat1414.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.2538.pdf" - }, - { - "text": "the dominant dynamic process, but does not allow one to probe this assumption. In Section III B we show how one may develop a dynamical density functional theory (DDFT) that describes the system at a similar level to the KMC. However, the DDFT may also be easily extended to include other effects such as fluid diffusion, that the KMC does not incorporate.\n\n### A. Kinetic Monte Carlo model\n\nThe kinetic Monte Carlo model for two-dimensional dewetting nanofluids [33] was first proposed in Ref. [35] and extended to include next-nearest neighbour interactions in [37]. 
The two key assumptions used are: (i) the relevant processes can be mapped on to a two-dimensional lattice gas model, thereby neglecting continuous changes in the thickness of the evaporating film, and (ii) all relevant dynamics results from diffusing nanoparticles and evaporating/condensing solvent.\n\nThe model builds on an Ising-type model for the liquid-gas phase transition. The surface is divided up into a regular array of lattice sites whose size is dictated by the nanoparticles. One then considers each lattice site to be occupied either by a nanoparticle, liquid or vapour. This effectively maps the system onto a two-dimensional two-component lattice gas having two fields n and l. The resulting three possible states of a cell are: liquid (l = 1, n = 0), nanoparticle (l = 0, n = 1), and vapour (l = 0, n = 0, i.e., cell empty). The energy of an overall configuration is given by the hamiltonian\n\n$$E\\,=\\,-\\frac{\\varepsilon_{nn}}{2}\\sum_{}n_{i}n_{j}\\,-\\,\\frac{\\varepsilon_{nl}}{2}\\sum_{}n_{i}l_{j}\\,-\\,\\frac{\\varepsilon_{ll}}{2}\\sum_{}l_{i}l_{j}\\,-\\,\\mu\\sum_{i}l_{i}\\tag{3}$$\n\nwhere P denotes a sum over nearest neighbour pairs and εll, εnn and εnl are the liquid-liquid, particle-particle and liquid-particle interaction energies, respectively. Fixing the three interaction strength parameters εll, εnn, εnl and the effective chemical potential µ determines the equilibrium state of the system. We choose εll as unit of energy – i.e. we set εll = 1.\n\nThe hamiltonian determines the equilibrium state and the energy landscape of the system. However, as the system 'dries in' during the course of the solvent evaporation, the final nanoparticle configurations do not necessarily represent equilibrium structures. This implies that the system dynamics is of paramount importance. It is determined by the possible Monte Carlo moves, their relative frequencies, and the probabilities for their acceptance. 
Two types of moves are allowed: (i) evaporation/condensation of liquid and (ii) diffusion of nanoparticles within the liquid. A mobility M corresponds to the ratio of cycles of particle and solvent moves and reflects the physical ratio of", - "page_start": 8, - "page_end": 8, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2538.pdf", - "query": "What seems to be a great technique to ensure vacancies are formed in carbon nanotubes (CNT) ?", - "target_page": 4, - "target_passage": "Furthermore, it has been shown that CNT vacan- cies, which are needed for the metallic doping, may be formed in a controlled way by irradiation by Ar ion", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## arXiv:1001.2538v1 [cond-mat.mes-hall] 14 Jan 2010\n\n## Computational Design of Chemical Nanosensors: Metal Doped Carbon Nanotubes\n\nJ. M. Garc´ıa-Lastra1,2 , ∗ D. J. Mowbray1,2, K. S. Thygesen2 , A. Rubio1,3, and K. W. Jacobsen2\n\n*1Nano-Bio Spectroscopy group and ETSF Scientific Development Centre,*\n\n*Centro de F´ısica de Materiales CSIC-UPV/EHU- MPC and DIPC, Av. Tolosa 72, E-20018 San Sebastian, Spain ´*\n\n*2Center for Atomic-scale Materials Design, Department of Physics,*\n\n*Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark 3Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin, Germany*\n\nWe use computational screening to systematically investigate the use of transition metal doped carbon nanotubes for chemical gas sensing. For a set of relevant target molecules (CO, NH3, H2S) and the main components of air (N2, O2, H2O), we calculate the binding energy and change in conductance upon adsorption on a metal atom occupying a vacancy of a (6,6) carbon nanotube. Based on these descriptors, we identify the most promising dopant candidates for detection of a given target molecule. 
From the fractional coverage of the metal sites in thermal equilibrium with air, we estimate the change in the nanotube resistance per doping site as a function of the target molecule concentration assuming charge transport in the diffusive regime. Our analysis points to Ni-doped nanotubes as candidates for CO sensors working under typical atmospheric conditions.\n\nPACS numbers: 73.63.–b, 68.43.–h, 73.50.Lw\n\nThe ability to detect small concentrations of specific chemical species is fundamental for a variety of industrial and scientific processes as well as for medical applications and environmental monitoring [1]. In general, nanostructured materials should be well suited for sensor applications because of their large surface to volume ratio which makes them sensitive to molecular adsorption. Specifically, carbon nanotubes (CNT) [2] have been shown to work remarkably well as detectors of small gas molecules. This has been demonstrated both for individual CNTs [3–8] as well as for CNT networks [9, 10].\n\nPristine CNTs are known to be chemically inert – a property closely related to their high stability. As a consequence, only radicals bind strong enough to the CNT to notably affect its electrical properties [2, 5, 11–13]. To make CNTs attractive for sensor applications thus requires some kind of functionalization, e.g. through doping or decoration of the CNT sidewall [13–21]. Ideally, this type of functionalization could be used to control not only the reactivity of the CNT but also the selectivity towards specific chemical species.\n\nIn this work we consider the possibility of using CNTs doped by 3d transition metal atoms for chemical gas sensing. We use computational screening to systematically identify the most promising dopant candidates for detection of three different target molecules (CO, NH3, H2S) under typical atmospheric conditions. 
The screening procedure is based on the calculation of two microscopic descriptors: the binding energy and scattering resistance of the molecules when adsorbed on a doped CNT. These two quantities give a good indication of the gas coverage and impact on the resistance. For the most promising candidates we then employ a simple thermodynamic model of the CNT sensor. In this model, the binding energies are used to obtain the fractional coverage of the metallic sites as a function of the target molecule concentration under ambient conditions. Under the assumption of transport in the diffusive rather than localization regime, the change in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.\n\nWe find that oxidation of the active metal site passivates the sensor in the case of doping by Ti, V, Cr, and Mn under standard conditions (room temperature and 1 bar of pressure). Among the remaining metals, we identify Ni as is the most promising candidate for CO detection. For this system the change in resistance per active site is generally significant (>1 Ω) for small changes in CO concentration in the relevant range of around 0.1–10 ppm. Our approach is quite general and is directly applicable to other nanostructures than CNTs, other functionalizations than metal doping, and other backgrounds than atmospheric air.\n\nAll total energy calculations and structure optimizations have been performed with the real-space density functional theory (DFT) code GPAW [22] which is based on the projector augmented wave method. We use a grid spacing of 0.2 A for ˚ representing the density and wave functions and the PBE exchange correlation functional [23]. Transport calculations for the optimized structures have been performed using the nonequilibrium Green's function method [24] with an electronic Hamiltonian obtained from the SIESTA code [25] in a double zeta polarized (DZP) basis set. 
Spin polarization has been taken into account in all calculations.\n\nMetallic doping of a (6,6) CNT has been modeled in a supercell containing six repeated minimal unit cells along the CNT axis (dimensions: 15 A˚ ×15 A˚ ×14.622 A). For this size ˚ of supercell a Γ-point sampling of the Brillouin zone was found to be sufficient. The formation energy for creating a vacancy (VC) occupied by a transition metal atom (M) was calculated using the relation\n\nEform[M@VC] = E[M@VC] + nE[C] − E[M@NT] (1)\n\nwhere E[M@VC] is the total energy of a transition metal atom occupying a vacancy in the nanotube, n is the number of carbon atoms removed to form the vacancy, E[C] is the energy per carbon atom in a pristine nanotube, and E[M@NT]\n\n<i>Dpto. F´ısica de Materiales, Universidad del Pa´ıs Vasco,", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - }, - { - "text": "all N impurities. At this point it suffices to see that the conservative estimates obtained from Eq. (7) predict measurable signals in response to small changes in concentration of the target molecules.\n\nTo our knowledge, controlled doping of CNTs with transition metal atoms has so far not been achieved. It has, however, been found that metal atoms incorporated into the CNT lattice during catalytic growth are afterwards very difficult to remove [30]. Furthermore, it has been shown that CNT vacancies, which are needed for the metallic doping, may be formed in a controlled way by irradiation by Ar ions [31]. This suggests that metallic doping of CNTs should be possible.\n\nIn summary, we have presented a general model of nanostructured chemical sensors which takes the adsorption energies of the relevant chemical species and their individual scattering resistances as the only input. On the basis of this model we have performed a computational screening of transition metal doped CNTs, and found that Ni-doped CNTs are promising candidates for detecting CO in a background of air. 
The model may be applied straightforwardly to other nanostructures than CNTs, other functionalizations than metal doping and other gas compositions than air.\n\nThe authors acknowledge financial support from Spanish MEC (FIS2007-65702-C02-01), \"Grupos Consolidados UPV/EHU del Gobierno Vasco\" (IT-319-07), e-I3 ETSF project (Contract Number 211956), \"Red Espanola de Super- ˜ computacion\", NABIIT and the Danish Center for Scientific ´ Computing. The Center for Atomic-scale Materials Design (CAMD) is sponsored by the Lundbeck Foundation. JMG-L acknowledges funding from Spanish MICINN through Juan de la Cierva and Jose Castillejo programs. ´\n\n∗ Electronic address: juanmaria.garcia@ehu.es\n\n- [1] *Gas Sensing Materials, MRS Bull.*, vol. 24 (1999).\n- [2] J. C. Chalier, X. Blase, and S. Roche, \"Electronic and transport properties of nanotubes\", Rev. Mod. Phys. 79(2), 677 (May 2007), doi:10.1103/RevModPhys.79.677.\n- [3] J. Kong, N. R. Franklin, C. Zhou, M. G. Chapline, S. Peng, K. Cho, and H. Dai, \"Nanotube molecular wires as chemical sensors\", Science 287(5453), 622 (Jan. 2000), doi:10.1126/science.287.5453.622.\n- [4] P. G. Collins, K. Bradley, M. Ishigami, and A. Zettl, \"Extreme oxygen sensitivity of electronic properties of carbon nanotubes\", Science 287(5459), 1801 (Mar. 2000), doi:10.1126/science.287.5459.1801.\n- [5] C. Hierold, *Carbon Nanotube Devices: Properties, Modeling, Integration and Applications* (Wiley-VCH, Weinheim, 2008).\n- [6] F. Villalpando-Paez, A. H. Romero, E. Mu ´ noz-Sandoval, ˜ L. M. Mart´ınez, H. Terrones, and M. Terrones, \"Fabrication of vapor and gas sensors using films of aligned CNx nanotubes\", Chem. Phys. Lett. 386(1-3), 137 (Mar. 2004), doi:10.1016/j.cplett.2004.01.052.\n- [7] A. R. Rocha, M. Rossi, A. Fazzio, and A. J. R. da Silva, \"Designing real nanotube-based gas sensors\", Phys. Rev. Lett. 100(17), 176803 (May 2008), doi:10.1103/PhysRevLett.100.176803.\n- [8] S. Brahim, S. Colbern, R. Gump, and L. 
Grigorian, \"Tailoring gas sensing properties of carbon nanotubes\", J. Appl. Phys. 104(2), 024502 (Jul. 2008), doi:10.1063/1.2956395.\n- [9] C. Morgan, Z. Alemipour, and M. Baxendale, \"Variable range hopping in oxygen-exposed single-wall carbon nanotube networks\", Phys. Stat. Solidi A 205(6), 1394 (May 2008), doi:10.1002/pssa.200778113.\n- [10] D. J. Mowbray, C. Morgan, and K. S. Thygesen, \"Influence of O2 and N2 on the conductivity of carbon nanotube networks\", Phys. Rev. B 79(19), 195431 (May 2009), doi:10.1103/PhysRevB.79.195431.\n- [11] L. Valentini, F. Mercuri, I. Armentano, C. Cantalini, S. Picozzi, L. Lozzi, S. Santucci, A. Sgamellotti, and J. M. Kenny, \"Role of defects on the gas sensing properties of carbon nanotubes thin films: experiment and theory\", Chem. Phys. Lett. 387(4-6), 356 (Apr. 2004), doi:10.1016/j.cplett.2004.02.038.\n- [12] Z. Zanolli and J.-C. Charlier, \"Defective carbon nanotubes for single-molecule sensing\", Phys. Rev. B 80(15), 155447 (Oct. 2009), doi:10.1103/PhysRevB.80.155447.\n- [13] J. M. Garc´ıa-Lastra, K. S. Thygesen, M. Strange, and Angel Rubio, \"Conductance of sidewall-functionalized ´ carbon nanotubes: Universal dependence on adsorption sites\", Phys. Rev. Lett. 101(23), 236806 (Dec. 2008), doi:10.1103/PhysRevLett.101.236806.\n- [14] S. B. Fagan, R. Mota, A. J. R. da Silva, and A. Fazzio, \"*Ab initio* study of an iron atom interacting with single-wall carbon nanotubes\", Phys. Rev. B 67(20), 205414 (May 2003), doi:10.1103/PhysRevB.67.205414.\n- [15] Y. Yagi, T. M. Briere, M. H. F. Sluiter, V. Kumar, A. A. Farajian, and Y. Kawazoe, \"Stable geometries and magnetic properties of single-walled carbon nanotubes doped with 3d transition metals: A first-principles study\", Phys. Rev. B 69(7), 075414 (Feb 2004), doi:10.1103/PhysRevB.69.075414.\n- [16] S. H. Yang, W. H. Shin, J. W. Lee, S. Y. Kim, S. I. Woo, and J. K. Kang, \"Interaction of a transition metal atom with intrinsic defects in single-walled carbon nanotubes\", J. 
Phys. Chem. B 110(28), 13941 (Jun. 2006), doi:10.1021/jp061895q.\n- [17] K. T. Chan, J. B. Neaton, and M. L. Cohen, \"First-principles study of metal adatom adsorption on graphene\", Phys. Rev. B 77, 235430 (Jun. 2008), doi:10.1103/PhysRevB.77.235430.\n- [18] C. S. Yeung, L. V. Liu, and Y. A. Wang, \"Adsorption of small gas molecules onto Pt-doped single-walled carbon nanotubes\", J. Phys. Chem. C 112(19), 7401 (Apr. 2008), doi:10.1021/jp0753981.\n- [19] T. Vo, Y.-D. Wu, R. Car, and M. Robert, \"Structures, interactions, and ferromagnetism of Fe-carbon nanotube systems\", J. Phys. Chem. C 112(22), 400 (May 2008), doi:10.1021/jp0761968.\n- [20] J. A. Furst, M. Brandbyge, A.-P. Jauho, and K. Stokbro, \" ¨ *Ab initio* study of spin-dependent transport in carbon nanotubes with iron and vanadium adatoms\", Phys. Rev. B 78(19), 195405 (Nov. 2008), doi:10.1103/PhysRevB.78.195405.\n- [21] A. V. Krasheninnikov, P. O. Lehtinen, A. S. Foster, P. Pyykko, and R. M. Nieminen, \"Embedding transition- ¨ metal atoms in graphene: Structure, bonding, and magnetism\", Phys. Rev. Lett. 102(12), 126807 (Mar. 2009), doi:10.1103/PhysRevLett.102.126807.\n- [22] J. J. Mortensen, L. B. Hansen, and K. W. Jacobsen, \"Real-space grid implementation of the projector augmented wave method\", Phys. Rev. B 71(3), 035109 (Jan. 2005), doi:10.1103/PhysRevB.71.035109.\n- [23] J. P. Perdew, K. Burke, and M. Ernzerhof, \"Generalized gradient approximation made simple\", Phys. Rev. Lett. 77(18), 3865 (Oct. 1996), doi:10.1103/PhysRevLett.77.3865.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "FIG. 1: Structural schematics and formation energy for a 3d transition metal occupied monovacancy (black), divacancy I (gray), or divacancy II (white) in a (6,6) carbon nanotube. Formation energies of the empty vacancies are indicated by dashed lines.\n\nis the total energy of the pristine nanotube with a physisorbed transition metal atom. 
We have considered the monovacancy and two divacancies shown in Fig. 1. The energy required to form an empty vacancy is obtained from\n\n$$E_{\\rm form}[{\\rm VC}]=E[{\\rm VC}]+nE[{\\rm C}]-E[{\\rm NT}],\\tag{2}$$\n\nwhere E[VC] is the total energy of the nanotube with a vacancy of n atoms.\n\nThe calculated formation energies for the 3d transition metals are shown in Fig. 1. From the horizontal lines we see that both divacancies are more stable than the monovacancy. This may be attributed to the presence of a two-fold coordinated C atom in the monovacancy, while all C atoms remain three-fold coordinated in the divacancies. When a transition metal atom occupies a vacancy, the strongest bonding to the C atoms is through its d orbitals [26]. For this reason, Cu and Zn, which both have filled d-bands, are rather unstable in the CNT. For the remaining metals, adsorption in the monovacancies leads to quite stable structures. This is because the three-fold coordination of the C atoms and the CNT's hexagonal structure are recovered when the metal atom is inserted. On the other hand, metal adsorption in divacancies is slightly less stable because of the resulting pentagon defects, see upper panel in Fig. 1. A similar behaviour has been reported by Krasheninnikov *et al.* for transition metal atoms in graphene [21].\n\nThe adsorption energies for N2, O2, H2O, CO, NH3, and H2S on the metallic site of the doped (6,6) CNTs are shown in Fig. 2(a). The adsorption energy of a molecule X is defined by\n\n$$E_{\\rm ads}[X\\,\\mbox{\\small@M@VC}]=E[X\\,\\mbox{\\small@M@VC}]-E[X]-E[\\mbox{\\small@VC}],\\tag{3}$$\n\nFIG. 
2: Calculated (a) adsorption energy Eads in eV and (b) change in conductance ∆G in units of G0 =2e 2 /h for N2, O2, H2O, CO, NH3, and H2S on 3d transition metals occupying a monovacancy (top), divacancy I (middle), and divacancy II (bottom) in a (6,6) carbon nanotube.\n\nwhere E[X@M@VC] is the total energy of molecule X on a transition metal atom occupying a vacancy, and E[X] is the gas phase energy of the molecule.\n\nFrom the adsorption energies plotted in Fig. 2(a), we see that the earlier transition metals tend to bind the adsorbates stronger than the late transition metals. The latest metals in the series (Cu and Zn) bind adsorbates rather weakly in the divacancy structures. We also note that O2 binds significantly stronger than any of the three target molecules on Ti, V, Cr, and Mn (except for Cr in divacancy I where H2S is found to dissociate). Active sites containing these metals are therefore expected to be completely passivated if oxygen is present in the background. Further, we find H2O is rather weakly bound to most of the active sites. This ensures that these types of sensors are robust against changes in humidity.\n\nIn thermodynamic equilibrium [27], the coverage of the active sites follows from\n\n$$\\Theta[X]=\\frac{K[X]C[X]}{1+\\sum_{Y}K[Y]C[Y]},\\tag{4}$$\n\nwhere K = k+/k− is the ratio of forward and backward rate constants for the adsorption reaction,\n\n$$K[X]=\\exp\\left[-\\frac{E_{\\rm ads}[X]+TS[X]}{k_{B}T}\\right].\\tag{5}$$\n\nIn these expressions C[X] is the concentration of species X, S[X] is its gas phase entropy and T is the temperature. Experimental values for the gas phase entropies have been taken from Ref. [28].", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2538.pdf" - }, - { - "text": "to a certain extent the particle-particle attraction. Normally, the solution is deposited on to a plain silicon substrate that is covered by the native oxide layer only [34]. 
However, one may locally change the wetting behaviour of the solvent by further oxidising the substrate [38]. By adding excess thiol one can also vary the properties of the solvent [40].\n\nTwo different procedures are employed for the deposition of the solution on to the substrate: spincoating or a meniscus technique [61, 62]. The choice is important as it strongly influences the evaporation rate and, as a result, the pattern formation process. When using spin-coating, one finds that directly after deposition, evaporation competes with dewetting until all the solvent has evaporated. The resulting deposits of nanoparticles are imaged by atomic force microscopy (AFM). For spin-coated films, the evaporation rate is high and structuring is normally finished before the spincoater is stopped. Conversely, the solvent evaporation rate is strongly decreased when employing the meniscus technique [61], i.e., by depositing a drop of solution on a Teflon ring that is wetted by the solvent. This allows for a better control of the process and enables the use of contrast-enhanced microscopy to observe the dewetting process in situ [40]. All pattern formation is confined to the region of the receding contact line of toluene, silicon and air. With both techniques one may find mono-modal or bi-modal polygonal networks [34], labyrinthine spinodal structures, or branched patterns (see Fig. 1). The meniscus technique allows for the study of branched structures in a more controlled manner. The work in Ref. [40] indicates that fingering strongly depends on the interaction strength of the particles, i.e., on the chain length of the thiol molecules coating the gold cores. For short chains (C5 and C8) no formation of branched structures is observed. At similar concentrations, well-developed branched structures are formed for longer chains (C10 and C12). For even longer chains (C14), however, one again finds less branching. 
It also depends on the amount of excess thiol in the solvent (for details see Ref. [40]).\n\nWhen following the evolution of the branched patterns in situ (see the complementary video material of Ref. [40]), one clearly observes that different processes occur on different lenght scales. First, a macroscopic dewetting front recedes, leaving behind a seemingly dry substrate. The macroscopic front can be transversely unstable resulting in large-scale (> 100µm) strongly anisotropic fingered structures. For fronts that move relatively quickly these macroscopic structures cover all the available substrate. However, when at a later stage the macroscopic front becomes slower, those fingers become scarce and 'macroscopic fingering' finally ceases. At this stage it is possible to appreciate that the seemingly dry region left behind by the front is not at all dry, but covered by an ultrathin 'postcursor' film that is itself unstable. The thickness of this film", - "page_start": 5, - "page_end": 5, - "source_file": "1001.2669.pdf" - }, - { - "text": "FIG. 3: Fractional coverage Θ in thermal equilibrium of Ni in a (a) monovacancy, (b) divacancy I, (c) divacancy II and (d) change in resistance ∆R per dopant site as a function of CO concentration in a background of air at room temperature and 1 bar of pressure. The reference concentration of CO is taken to be C0 =0.1 ppm. Note the change from linear to log scale on the y-axis at ∆R =10 Ω.\n\nFor a given background composition we may thus estimate the fractional coverages for each available adsorbate for a given type of doping. As an example, Fig. 3(a)-(c) shows the fractional coverage of a Ni atom occupying a monovacancy, divacancy I, and divacancy II, versus CO concentration in a background of air at room temperature and 1 bar of pressure. Due to the relatively small binding energy of N2 and H2O as compared to O2 and CO, all Ni sites will be either empty or occupied by O2 or CO. 
In particular, Ni in a monovacancy (top panel of Fig. 3) will be completely oxidized for all relevant CO concentrations. For the Ni occupied divacancy II structures we find the coverage of CO changes significantly around toxic concentrations (∼10 ppm).\n\nTo estimate the effect of adsorbates on the electrical conductance of doped CNTs, we first consider the change in conductance when a single molecule is adsorbed on a metal site of an otherwise pristine CNT. In Fig. 2(b) we show the calculated change in conductance relative to the metal site with no adsorbate. In contrast to the binding energies, there are no clear trends in the conductances. The sensitivity of the conductance is perhaps most clearly demonstrated by the absence of correlation between different types of vacancies, i.e. between the three panels in Fig. 2(b). Close to the Fermi level, the conductance of a perfect armchair CNT equals 2G0. The presence of the metal dopant leads to several dips in the transmission function known as Fano antiresonances [20]. The position and shape of these dips depend on the d-levels of the transition metal atom, the character of its bonding to the CNT, and is further affected by the presence of the adsorbate molecule. The coupling of all these factors is very complex and makes it difficult to estimate or rationalize the value of the conductance. For the spin polarized cases, we use the spin-averaged conductances, i.e. G = (G↑ + G↓)/2.\n\nNext, we estimate the resistance of a CNT containing several impurities (a specific metal dopant with different molecular adsorbates). Under the assumption that the electron phasecoherence length, lφ, is smaller than the average distance between the dopants, d, we may neglect quantum interference and obtain the total resistance by adding the scattering resistances due to each impurity separately. 
The scattering resistance due to a single impurity is given by\n\n$R_{s}(X)=1/G(X)-1/(2G_{0})$, (6)\n\nwhere G(X) is the Landauer conductance of the pristine CNT with a single metal dopant occupied by molecule X and 1/(2G0) is the contact resistance of a (6,6) CNT.\n\nWe may now obtain the total resistance per dopant site relative to the reference background signal as a function of the target molecule concentration\n\n∆R N ≈ X X Rs(X)(Θ[X, C] − Θ[X, C0]), (7)\n\nwhere N is the number of dopants, Θ[X, C] is the fractional coverage of species X at concentration C of the target and C0 is the reference concentration. Notice that the contact resistance drops out as we evaluate a change in resistance.\n\nIn Fig. 3(d) we show the change in resistance calculated from Eq. (7) as a function of CO concentration for Ni occupying the three types of vacancies. The background reference concentration of CO is taken to be C0 = 0.1 ppm. For the monovacancy there is very little change in resistivity. This is because most active sites are blocked by O2 at relevant CO concentrations, as shown in the upper panel of Fig. 3. For Ni in the divacancies there is, however, a change in resistance on the order of 1Ω per site. For concentrations above ∼1 ppm, the CO coverage of Ni in the divacancy II increases dramatically and this leads to a significant increase in resistance.\n\nWe now return to the discussion of the validity of Eq. (7). As mentioned, the series coupling of individual scatterers should be valid when lφ < d. However, even for lφ > d and assuming that the Anderson localization length, lloc in the system exceeds lφ, Eq. (7) remains valid if one replaces the actual resistance R by the sample averaged resistance hRi [29]. At room temperature under ambient conditions, interactions with external degrees of freedom such as internal CNT phonons and vibrational modes of the adsorbed molecules would rapidly randomize the phase of the electrons. Therefore Eq. 
(7) should certainly be valid in the limit of low doping concentrations. On the other hand, the total number of dopants, N, should be large enough for the statistical treatment of the coverage to hold. Finally, we stress that Eq. (7) represents a conservative estimate of the change in resistance. In fact, in the regime where lφ > lloc, i.e. in the Anderson localization regime, the resistance would be highly sensitive to changes in the fractional coverage of active sites. Calculation of the actual resistance of the CNT in this regime would, however, involve a full transport calculation in the presence of", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2538.pdf" - }, - { - "text": "# INVESTING IN OUR WORLD AND OUR PEOPLE »\n\nAs we explore for and produce clean, affordable, abundant, American natural gas, we provide an important solution to our nation's energy challenges and its quest for energy independence. With at least a 200 year supply of natural gas located right here in the U.S., this versatile fuel can be used to not only heat homes, create electricity and meet America's transportation needs, but also to fuel the country's future by creating jobs and stimulating local and national economies through investment and taxes.\n\n# **Environmentally Friendly Operations**\n\nAt Chesapeake, we realize that the way a great product is produced is as important as the product itself. For example, we have helped pioneer the use of multiwell padsites to drill up to 16 wells from a single location, greatly reducing our land and road use and overall environmental footprint. We use the latest horizontal and directional drilling technology to place wells at a safe distance from homes, schools and businesses. 
In addition, we build and maintain access roads and work to eliminate soil erosion near our sites, as well as restore local vegetation.\n\nWe implement advanced, modern protective measures known as Best Management Practices (BMPs) to help ensure energy development is conducted in an environmentally responsible manner. Procedures are implemented throughout our operations to protect freshwater aquifers and reduce environmental impacts. BMPs protect wildlife, air quality, water and landscapes as we work to develop vitally needed domestic energy sources.\n\nImplemented throughout the entire life cycle of a well, BMPs can be as simple as strategically placing a berm, or land barrier, on locations to control surface water runoff. Others involve cutting-edge operational technologies such as utilizing the most advanced techniques offered in drilling fluids, well casing and cement design. Regardless of complexity, all BMPs are based on the idea that the environmental footprint of energy development should be as small and temporary as possible. These practices are continually evolving and further improving as Chesapeake and the industry develop new innovative techniques and approaches to business.\n\nIn addition to our BMPs, Chesapeake has also initiated several innovative internal programs focused on water recycling and greener hydraulic fracturing processes.\n\n# *Aqua Renew***®**\n\nCreated to meet the challenge of reducing our water usage, Chesapeake's *Aqua Renew*® program uses state-of-the-art technology to recycle pro-\n\nduced water. Since the company's preliminary reclamation project in\n\n2006, our focus on water reuse and conservation has become a companywide endeavor, stretching from the Barnett Shale of North Texas to the Marcellus Shale of northern Pennsylvania.\n\nThe *Aqua Renew* program has yet to find a limit to how much recycled water could be used without compromising well production. 
In fact, our Marcellus Shale operations are treating and recycling virtually 100% of produced water (more than 10 million gallons per month) for reuse in our hydraulic fracturing operations. Properly conducted modern fracking is a highly engineered, controlled, sophisticated and safe procedure.\n\nWith such large volumes of recycled water, the company is seeing more than just environmental advantages. We estimate that this\n\n*Green operations — Chesapeake's Best Management Practices ensure our operations are as environmentally friendly as possible, while protecting our employees, neighbors and the areas where we operate.*", - "page_start": 27, - "page_end": 27, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "time scales for evaporation and diffusion. A large mobility M indicates fast diffusion as compared to evaporation. A trial move is accepted with the probability pacc = min[1, exp(−∆E/kT)] where k is the Boltzmann constant, T the temperature and ∆E is the change in energy resulting from the potential move. Note that particles are only allowed to move into wet areas of the substrate, i.e., onto cells with l = 1. This models zero diffusivity of the particles on a dry substrate. The replaced liquid fills the site left by the nanoparticle.\n\nWithout nanoparticles, the behaviour of the model is well known as it reduces to the classical two-dimensional Ising model [74]. For kT < kTc ≈ 0.567 liquid and vapour coexist when µ = µcoex = −2. For µ > −2 [µ < −2] eventually the liquid [vapour] dominates. A straight liquidgas interface will recede [advance] for µ < −2 [µ > −2], i.e. one finds evaporative dewetting [wetting] fronts. If one starts, however, with a substrate covered homogeneously by the liquid, for µ < −2 the film will dewet via a nucleation or spinodal-like process. If the nanoparticles are present, they form dried-in structures when all the liquid evaporates. The final structures do not normally change any further – at least on short time scales. 
However, if the liquid wets the particles (i.e. is attracted to the particles), over long times there might be a coarsening of the structures, facilitated by the adsorbed liquid. The dried-in patterns depend on the particular pathway taken by the evaporative dewetting process. They range from labyrinthine to polygonal network structures or holes in a dense particle layer. Some typical patterns are displayed in Fig. 2, for cases when the average surface coverage of the nanoparticles ρ av n = 0.2. Panels (a) and (b) result from a spinodal-like and nucleation and growth process, respectively. At first sight they look very similar to the patterns seen for the pure solvent and one might argue that the particles solely act as passive tracers and preserve the transient volatile dewetting structures of the solvent. This was suggested in Refs. [26–28] for dewetting collagen solutions. However, panels (c) and (d) indicate that the particles may at times play a rather more significant role. When the diffusion of the particles is slow, the evaporative dewetting fronts become transversely unstable and may result in strongly ramified patterns. This instability is caused by the nanoparticles. The lower their mobility, the stronger the fingering effect, i.e., there are more fingers in (c) than in (d) because in the latter the mobility is larger.\n\nThe front instability is intriguing as it results in strongly branched structures. As the dewetting front moves, new branches are continuously created and existing branches merge at the moving contact line. However, the mean finger number in the streamwise direction of the resulting ramified pattern is a constant. This behaviour is in contrast to the front instabilities found for dewetting", - "page_start": 9, - "page_end": 9, - "source_file": "1001.2669.pdf" - }, - { - "text": "FIG. 
2: Typical KMC results for the final dried-in nanoparticle structures resulting from the evaporative dewetting processes of nanoparticle solutions (nanofluids) in the case of (a) a spinodal-like process at µ = −2.55, (b) nucleation and growth of holes at µ = −2.3, (c) unstable fronts at µ = −2.3 and low mobility M = 5, and (d) unstable fronts at µ = −2.3 and medium mobility M = 10. The starting configuration in (a) and (b) is a homogeneous liquid film with uniformly distributed particles whereas in (c) and (d) a hole at the center is nucleated 'by hand'. The remaining parameters are (a,b) M = 50, nl = 2.0, nn = 1.5, ρ av n = 0.2, kT = 0.3, MC steps= 500, domain size 1200 × 1200; (c,d) εnn = 2.0, nl = 1.5, ρ av n = 0.2, kT = 0.2, MC steps= 3000, domain size 1200 × 1200. Lattice sites occupied by particles are coloured black, and the empty sites are coloured white.", - "page_start": 10, - "page_end": 10, - "source_file": "1001.2669.pdf" - }, - { - "text": "FIG. 1: (Colour online) Images of strongly ramified dewetting structures obtained using Atomic Force Microscopy in the case of (a) an aqueous collagen solution on graphite (courtesy of U. Thiele, M. Mertig and W. Pompe; see also Ref. [42]. Image size: 5µm×5µm); (b) poly(acrylic acid) in water spin-coated onto a polystyrene substrate (reprinted with permission of John Wiley & Sons, Inc. from Ref. [23]; copyright John Wiley & Sons, Inc. 2002; Image size: 2.5µm×2.5µm); and in both (c) and (d), a solution of gold nanoparticles in toluene, spin-coated onto native oxide terminated silicon substrates (scale bars given in panels). In all the images the lighter areas correspond to the deposited solute and the dark areas to the empty substrate.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.2669.pdf" - }, - { - "text": "### I. INTRODUCTION\n\nThe patterns formed in dewetting processes have attracted strong interest since Reiter analysed the process quantitatively in the early nineties. 
In these experiments, that proved to be a paradigm in our understanding of dewetting, a uniform thin film of polystyrene (tens of nanometers thick) is deposited on a flat silicon oxide substrate is brought above the glass transition temperature. The film ruptures in several places, forming holes which subsequently grow, competing for space. As a result, a random polygonal network of liquid rims emerges. The rims may further decay into lines of small drops due to a Rayleigh-type instability [1–3]. The related problems of retracting contact lines on partially wetting substrates and the opening of single holes in rather thick films have also been studied [4, 5].\n\nSubsequent work has mainly focused on many different aspects of the dewetting process for simple non-volatile liquids and polymers (for reviews see Refs. [6–8]). All stages of the dewetting of a film are studied: the initial film rupture via nucleation or a surface instability (called spinodal dewetting) [1, 9–13], the growth process of individual holes [14–16], the evolution of the resulting hole pattern [3, 13], and the stability of the individual dewetting fronts [17–19]. We note in passing, that descriptions of dewetting patterns may also be found in historic papers, particularly for the dewetting of a liquid film on a liquid substrate. Tomlinson [20, footnote 18 on p. 40] considered turpentine on water and Marangoni [21, p. 352f] oil on water.\n\nMore recently, interest has turned to the dewetting processes of solutions and suspensions. However, these systems have not yet been investigated in any great depth. Such systems are complicated because their behaviour is determined by the interplay between the various solute (or colloid) and solvent transport processes. Furthermore, the solvents that are used often evaporate, i.e., one has to distinguish between 'normal' convective dewetting and evaporative dewetting. 
A number of experiments have been performed employing (colloidal) solutions of polymers [22–25], macromolecules like collagen and DNA [26–31] and nanoparticles [32–40]. The latter are sometimes referred to as 'nanofluids'. The initial focus of much of the research in the field has been on investigating the structures that are formed which are similar to the ones observed in the 'classical' dewetting of non-volatile liquids. Labyrinthine structures and polygonal networks result from spinodal dewetting and heterogeneous nucleation and growth, respectively. They are 'decorated' with the solute and therefore conserve the transient dewetting pattern as a dried-in structure when all the solvent has evaporated [28, 34]. The picture is, however, not complete. The solute may", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HNI_2003.pdf", - "query": "How many employees did HON Industries count in 2003 ?", - "target_page": 15, - "target_passage": "Members (employees) at year-end : 8,926", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "### M A N A G E M E N T ' S D I S C U S S I O N A N D A N A L Y S I S\n\nThe following discussion of the Company's historical results of operations and of its liquidity and capital resources should be read in conjunction with the Consolidated Financial Statements of the Company and related notes.\n\n#### **Overview**\n\nThe Company has two reportable core operating segments: office furniture and hearth products. The Company is the second largest office furniture manufacturer in the United States and the nation's leading manufacturer and marketer of gas- and wood-burning fireplaces.\n\nFrom 2000 to 2003, the office furniture industry experienced an unprecedented three-year decline due to the challenging economic environment. In 2003, this decline negatively impacted the Company's office furniture segment. 
In contrast, the housing market was at record high levels during 2003, which positively impacted the Company's hearth segment. The Company outperformed its peers in both segments in which it competes. The Company gained market share by providing strong brands, innovative products and services, and greater value to its end-users. Fiscal 2003 also included an extra week of activity due to the Company's 52/53-week fiscal year.\n\nNet sales were $1.8 billion in 2003, as compared to $1.7 billion in 2002. The increase in net sales reflects the 9% increase in the hearth segment and the additional week of business activity. In 2003 and 2002, the Company recorded restructuring charges and accelerated depreciation related to the closure and consolidation of office furniture facilities totaling $15.2 million and $3.0 million, respectively. Gross margins increased to 36.4% in 2003 from 35.4% in 2002 due to benefits from restructuring initiatives and its rapid continuous improvement program, new products, and increased price realization. The Company also invested aggressively in brand building and selling initiatives in 2003. Net income was $98.1 million or $1.68 per diluted share in 2003, as compared to $91.4 million or $1.55 per diluted share in 2002.\n\nThe Company generated $141.3 million in cash flow from operating activities and increased its cash position, including shortterm investments, by $48.6 million to $204.2 million. The Company paid dividends of $30.3 million and repurchased $21.5 million of its common stock, while investing $35.7 million in net capital expenditures and repaying $20.2 million of debt.\n\n#### **Critical Accounting Policies and Estimates** *G E N E R A L*\n\nManagement's Discussion and Analysis of Financial Condition and Results of Operations is based upon the Consolidated Financial Statements, which have been prepared in accordance with GAAP. 
The preparation of these financial statements requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenue and expenses, and related disclosure of contingent assets and liabilities. Management bases its estimates on historical experience and on various other assumptions that are believed to be reasonable under the circumstances, the results of which form the basis for making judgments about the carrying values of assets and liabilities that are not readily apparent from other sources. Senior management has discussed the development, selection and disclosure of these estimates with the Audit Committee of our Board of Directors. Actual results may differ from these estimates under different assumptions or conditions.\n\nAn accounting policy is deemed to be critical if it requires an accounting estimate to be made based on assumptions about matters that are uncertain at the time the estimate is made, and if different estimates that reasonably could have been used, or changes in the accounting estimates that are reasonably likely to occur periodically, could materially impact the financial statements. Management believes the following critical accounting policies reflect its more significant estimates and assumptions used in the preparation of the Consolidated Financial Statements.\n\n*Fiscal year end* – The Company's fiscal year ends on the Saturday nearest December 31. Fiscal year 2003, the year ended January 3, 2004, contained 53 weeks, while fiscal year 2002, the year ended December 28, 2002, and fiscal year 2001, the year ended December 29, 2001, contained 52 weeks. A 53-week year occurs approximately every sixth year.\n\n*Revenue recognition* – Revenue is normally recognized upon shipment of goods to customers. In certain circumstances revenue is not recognized until the goods are received by the customer or upon installation and customer acceptance based on the terms of the sale agreement. 
Revenue includes freight charged to customers; related", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## A M E S S A G E F R O M T H E B O A R D O F D I R E C T O R S\n\n#### **Dear Shareholders:**\n\nWe, the members of the HON INDUSTRIES Board of Directors, believe that integrity is central to good corporate governance. This belief is reflected in the HON INDUSTRIES vision statement (shown on the back of this annual report), adopted many years ago. Our Vision statement represents much more than a traditional \"mission,\" and it goes much deeper than company policy. The beliefs and values represented in that document are the very foundation of our corporate culture, and guide the attitude and actions of every member, every day.\n\nFrom its beginnings, HON INDUSTRIES has sought to implement its vision through sound policies and practices, and by maintaining a strong Board composed predominantly of outside directors. We are fully committed to executing our responsibilities, and we will continue to maintain the company's long-standing tradition of an independent, well-informed, active, and engaged Board of Directors.\n\nOur board meetings and procedures have been developed and refined to encourage open and informed communication. The company's accounting policies have always been conservative and straightforward. The Board's three committees — Audit; Human Resources and Compensation; Public Policy and Corporate Governance — have consisted entirely of non-management directors for many years.\n\nDuring 2003, we have given significant attention to the newly released rules emanating from the Sarbanes-Oxley Act of 2002 and the New York Stock Exchange listing requirements — rules intended to improve corporate governance across the country. It is gratifying to report that HON INDUSTRIES governance practices were already in accord with the spirit of the rules.\n\nIt is an honor to serve as directors of HON INDUSTRIES. 
We are very proud to represent you, the shareholder, as we oversee the management of this great company. Please be assured that we intend to remain vigilant and focused on good corporate governance.\n\nSincerely, The HON INDUSTRIES Board of Directors\n\nStan A. Askren\n\nGary M. Christensen\n\nCheryl A. Francis\n\nRobert L. Katz\n\nDennis J. Martin\n\nJack D. Michaels\n\nJoseph Scalzo\n\nAbbie J. Smith\n\nRichard H. Stanley\n\nBrian E. Stern\n\nRonald V. Waters, III", - "page_start": 60, - "page_end": 60, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "#### *O F F I C E F U R N I T U R E*\n\nOffice furniture comprised 74% of consolidated net sales for 2003 and 76% of consolidated net sales for 2002 and 2001. Net sales for office furniture increased 2% in 2003 and decreased 6% in 2002. The increase in 2003 is due to the increased week from the Company's 52/53-week fiscal year. The office furniture industry has experienced an unprecedented three-year decline in shipments. The Business and Institutional Furniture Manufacturer's Association (BIFMA) reported 2003 shipments down over 5% and 2002 shipments down 19%. The Company's estimated share of the market based on reported office furniture shipments increased to 15.3% in 2003 compared to 14.4% in 2002 and 12.4% in 2001. This increase was achieved by providing strong brands, innovative products and services, and greater value to end-users.\n\nOperating profit as a percent of sales was 10.0% in 2003, 10.2% in 2002, and 8.2% in 2001. Included in 2003 were $15.2 million of net pretax charges related to the closure of two office furniture facilities, which impacted operating margins by 1.1 percentage points. Included in 2002 were $3.0 million of restructuring charges, which impacted operating margins by 0.2 percentage points, and 2001 included $22.5 million of restructuring charges, which impacted operating margins by 1.7 percentage points. 
The increase in operating margins is due to increased gross profit from the benefits of restructuring initiatives, rapid continuous improvement programs, and increased price realization, offset by additional investments in brand building and selling initiatives and increased freight expense.\n\n#### *H E A R T H P R O D U C T S*\n\nHearth products sales increased 9% in 2003 and decreased 3% in 2002, respectively. The growth in 2003 was attributable to strong housing starts, growth in market share in both the new construction and retail channels, strengthening alliances with key distributors and dealers, as well as focused new product introductions. The decrease in 2002 was mainly due to pruning out less profitable product lines.\n\nOperating profit as a percent of sales in 2003 was 12.1% compared to 10.8% and 9.2% in 2002 and 2001, respectively. The improved profitability in 2003 was the result of leveraging fixed costs over a higher sales volume and increased sales through company-owned distribution offset by increased freight costs and higher labor costs from increased use of overtime and temporary labor to meet record levels of demand. The increase in 2002 was mainly due to discontinuance of goodwill and indefinite-lived intangible amortization of approximately $7 million due to the adoption of SFAS 142.\n\n#### **Liquidity and Capital Resources**\n\nDuring 2003, cash flow from operations was $141.3 million, which along with funds from stock option exercises under employee stock plans, provided the funds necessary to meet working capital needs, invest in capital improvements, repay long-term debt, repurchase common stock, and pay increased dividends.\n\nCash, cash equivalents, and short-term investments totaled $204.2 million at the end of 2003 compared to $155.5 million at the end of 2002 and $78.8 million at the end of 2001. The Company used approximately $80 million of cash to acquire Paoli Inc. on January 5, 2004. 
These remaining funds, coupled with cash from future operations and additional long-term debt, if needed, are expected to be adequate to finance operations, planned improvements, and internal growth. The Company is not aware of any known trends or demands, commitments, events, or uncertainties that are reasonably likely to result in its liquidity increasing or decreasing in any material way.\n\nThe Company places special emphasis on the management and reduction of its working capital with a particular focus on trade receivables and inventory levels. The success achieved in managing receivables is in large part a result of doing business with quality customers and maintaining close communication with them. Trade receivables at year-end 2003 were virtually unchanged from the prior year. Trade receivable days outstanding have averaged approximately 37 to 38 days over the past three years. The Company's inventory turns were 23, 23, and 18 for 2003, 2002, and 2001, respectively. Increased imports of raw materials and finished goods may negatively affect inventory turns in the future but the Company is constantly looking for ways to add efficiency to its supply chain. The decrease in accounts payable and accrued expenses is due to timing of vendor and marketing program payments and the payment of additional purchase consideration and debenture earn out related to a prior acquisition. The Company also funded the retiree medical portion of its postretirement benefit obligation in 2003.\n\n#### *I N V E S T M E N T S*\n\nThe Company has investments in investment grade equity and debt securities. Management classifies investments in marketable securities at the time of purchase and reevaluates such classification at each balance sheet date. Equity securities are classified as available-for-sale and are stated at current market value with unrealized gains and losses included as a separate component of equity, net of any related tax effect. 
Debt securities are classified as held-to-maturity and are stated at amortized cost. A table of holdings as of year-end 2003 and 2002 is", - "page_start": 35, - "page_end": 35, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "#### **NOTE 11 — STOCKHOLDERS' EQUITY**\n\nShare repurchases are only conducted under repurchase programs approved by the Board of Directors and publicly announced. Share repurchase activity was as follows:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | | 2002 |\n| --- | --- | --- | --- | --- |\n| August 2001 authorization (0, 1.4 million | | | | |\n| and 6.4 million shares purchased) $ | — | $ 36,034 | $ 207,590 | |\n| February 2003 authorization | | | | |\n| (10 million shares purchased) | — | 335,911 | — | |\n| November 2003 authorization (8 million | | | | |\n| and 2 million shares purchased) | 348,895 | 70,919 | — | |\n| | $ 348,895 | $ 442,864 | $ 207,590 | |\n| Average price of shares repurchased $ | 43.59 | $ 33.17 | $ 32.28 | |\n\nAt December 31, 2004, we had 10 million shares available for repurchase under a July 2004 authorization.\n\nIn May 2002, the Board of Directors approved a restricted stock plan. The plan allowed for the issuance of up to 1 million shares of Company common stock to certain key employees. The restrictions on selling these shares lapse 50% on the third anniversary date from the grant date and 50% on the fourth anniversary date after the grant date. Through December 31, 2004, 903,000 shares were issued, with an aggregate value of $32 million. This amount was recorded as deferred compensation in the accompanying consolidated balance sheet and is being amortized to operating expenses on a straight-line basis through the period in which the restrictions fully lapse. Amortization of deferred compensation was $7 million, $8 million and $5 million for the years ended December 31, 2004, 2003 and 2002, respectively, and 855,000 shares were outstanding under the plan at December 31, 2004. 
In November 2002, the Board of Directors determined that no more awards would be granted under the plan.\n\n#### **NOTE 12 — EMPLOYEE BENEFIT PLANS**\n\nEmployees of the Company who are members of various unions are covered by union-sponsored, collectively bargained, multi-employer health and welfare and defined benefit pension plans. The Company recorded an expense of $86 million in 2004, $77 million in 2003 and $66 million in 2002 under such plans. The plans' sponsors have not provided sufficient information to permit the Company to determine its share of unfunded vested benefits, if any.\n\nThe Company is self-insured for most health care benefits for its non-union employees. The liability for claims filed and estimates of claims incurred but not reported is included in the \"Other accrued liabilities\" caption in the accompanying consolidated balance sheets.\n\nThe Company has a retirement savings plan under Section 401(k) of the Internal Revenue Code for eligible employees not covered by a collective bargaining agreement that does not specifically provide for participation in the plan. The plans allow employees to defer, within prescribed limits, up to 30% of their income on a pre-tax basis through contributions to the plans. The Company matches, within prescribed limits, a portion of eligible employees' contributions. In the case of certain union employees, the Company contributes to the plan are based on hours worked. The Company recorded charges for 401(k) contributions of $12 million in 2004, $10 million in 2003 and $12 million in 2002.\n\nThe Company maintains a nonqualified deferred retirement plan for certain key employees. The plan allows participants to defer, on a pre-tax basis, a portion of their salary and bonus and accumulate tax deferred earnings, plus investment earnings on the deferred balances, as a retirement fund. Participants receive a Company match of up to 4% of salary, net of any Company match received under the Company's 401(k) plan. 
All employee deferrals vest immediately. The Company matching contributions vest ratably over a three-year period. The Company recorded charges for matching contributions of $1 million in 2004, $2 million in 2003 and $1 million in 2002.", - "page_start": 72, - "page_end": 72, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "**Figure 18: Employment types in EU27, development 2005 to 202265 – Eurostat**\n\nThe minor deviation of the sum of the different types of employment to the 100% 'Employed persons' is due to 'No response' answers. The data of part-time employees and of employees with a temporary contract are for the full year 2019, not for Q4.\n\nThe group 'employees' is characterised by **two major contractual distinctions** that are important for OSH: 1) **full- or part-time** work, and 2) the **time limit of the contract** (indefinite or temporary). Moreover, in many Member States there are major differences between employment contracts of private employers in comparison to public employers.\n\n#### **Definitions Eurostat66**\n\n**Employers = self-employed with employee:** employing one or more employees: persons who work in their own business, professional practice or farm for the purpose of earning a profit and who employ at least one other person.\n\n**Self-employed:** not employing any employees (self-employed without employees): persons who work in their business, professional practices or farm for the purpose of earning a profit and who employ no other persons.\n\n**Employees:** persons who work for a public or private employer and who receive compensation in the form of wages, salaries, fees, gratuities, payment by result or in kind. 
Contributing family workers: persons who help another member of the family to run a farm or business, provided they are not classed as employees.", - "page_start": 46, - "page_end": 46, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "markets its turkey products through its own sales force and independent brokers.\n\nThe acquisitions of Diamond Crystal Brands Nutritional Products in fiscal 2001 and the Century Foods International business in July of fiscal 2003 strengthened the Company's presence in the nutritional food products and supplements market. The Company currently operates as one of the largest companies providing nutritional products to the U.S. healthcare industry.\n\nThe Company acquired the Diamond Crystal Brands business from Imperial Sugar Co. in December of fiscal 2003. Diamond Crystal Brands packages and sells various sugar, sugar substitute, salt and pepper products, savory products, drink mixes and dessert mixes to retail and foodservice customers.\n\nInternationally, the Company markets its products through Hormel Foods International Corporation (HFIC), a wholly owned subsidiary. HFIC has a presence in the international marketplace through joint ventures and placement of personnel in strategic foreign locations such as China, Spain, and the Philippines. HFIC also has a global presence with minority positions in food companies in Spain (Campofrio Alimentacion S.A., 15% holding) and the Philippines (Purefoods-Hormel, 40% holding).\n\nThe Company has not been involved in any bankruptcy, receivership or similar proceedings during its history. 
Substantially all of the assets of the Company have been acquired in the ordinary course of business.\n\nThe Company had no significant change in the type of products produced or services rendered, nor in the markets or methods of distribution since the beginning of the fiscal year.\n\n## **(b)** *Industry Segment*\n\nThe Company's business is reported in five segments: Grocery Products, Refrigerated Foods, Jennie-O Turkey Store, Specialty Foods, and All Other. The contributions of each segment to net sales to unaffiliated customers and operating profit, and the presentation of certain other financial information by segment are reported in Note K of the Notes to Consolidated Financial Statements and in the Management's Discussion and Analysis of the Annual Stockholder's Report for the year ended October 25, 2003, incorporated herein by reference.\n\n#### **(c)** *Description of Business*\n\n## **Products and Distribution**\n\nThe Company's products primarily consist of meat and other food products. The meat products are sold fresh, frozen, cured, smoked, cooked and canned. The percentages of total revenues contributed by classes of similar products for the last three fiscal years of the Company are as follows:\n\n| Perishable meat | 50.3% | 53.0% | 54.7% |\n| --- | --- | --- | --- |\n| Nonperishable meat | 18.9 | 19.8 | 21.0 |\n| Poultry | 22.1 | 22.6 | 20.3 |\n| Other | 8.7 | 4.6 | 4.0 |\n| | 100.0% | 100.0% | 100.0% |\n\nReporting of revenues from external customers is based on similarity of products, as the same or similar products are sold across multiple distribution channels such as retail, foodservice or international. Revenues reported are based on financial information used to produce the Company's generalpurpose financial statements.\n\nPerishable meat includes fresh meats, sausages, hams, wieners and bacon (excluding JOTS products.) 
Nonperishable meat includes canned luncheon meats, shelf stable microwaveable entrees, stews, chilies, hash, meat spreads and other items that do not require refrigeration as well as frozen processed products. The Poultry category is composed primarily of JOTS products. The Other category primarily consists of nutritional food products and supplements, sugar and sugar substitutes, salt and pepper products, dessert mixes, food packaging (casings for dry sausage), and industrial gelatin products. The Other category has increased over the past two years primarily due to the following acquisitions: Century Foods International (July 2003), Diamond Crystal Brands (December 2002), and Diamond Crystal Brands Nutritional Products (April 2001).\n\nNo new product in fiscal 2003 required a material investment of Company assets.\n\nDomestically, the Company sells its products in all 50 states. Hormel products are sold through Company sales personnel, operating in assigned territories coordinated from district sales offices located in most of the larger U.S. cities, as well as independent brokers and distributors. As of October 25, 2003, the Company had approximately 600 sales personnel engaged in selling its products. Distribution of products to customers is by common carrier.\n\nThrough HFIC, the Company markets its products in various locations throughout the world. Some of the larger markets include Australia, Canada, China, England, Japan, Mexico and Micronesia. The distribution of export sales to customers is by common carrier, while the China operations own and operate their own delivery system. The Company, through HFIC, has licensed companies to manufacture various Hormel products internationally on a royalty basis, with the primary licensees being Tulip International of Denmark and CJ Corp. 
of South Korea.\n\n### **Raw Materials**\n\nThe Company has, for the past several years, been concentrating on processed branded products for consumers with year-round demand to minimize the seasonal variation experienced with commodity type products. Pork continues to be the primary raw material for Company products. Although hog producers are moving toward larger, more efficient year-round confinement operations and supply contracts are becoming increasingly prevalent in the industry, there is still a seasonal variation in the supply of fresh pork materials. The Company's expanding line of processed items has reduced but not eliminated the sensitivity of Company results to raw material supply and price fluctuations.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "### **Production Growth**\n\nAverage mmcfe per day for year\n\n### **Proved Reserve Growth** Bcfe at end of year\n\n20,000 20,000\n\n### **Total Resource Base Growth*** Bcfe at end of year\n\n0\n\n0\n\n0\n\n0\n\n500\n\n100\n\n200\n\n300\n\n400\n\n500\n\n100\n\n200\n\n300\n\n400\n\n500\n\n5\n\n10\n\n15\n\n20\n\n0\n\n0\n\n**Chesapeake's Stock Price** Chesapeake's Stock Price at Month End Henry Hub Natural Gas Spot Price at Month End\n\n### **Chesapeake's Five-Year and Ten-Year Common Stock Performance**\n\n0\n\n0\n\n0\n\n80 The graphs below compare the performance of our common stock to the S&P 500 Stock Index and a group of peer companies for the past five and 10 years. The graph on the left assumes an investment of $100 on December 31, 2004 and the reinvestment of all dividends. The graph on the right assumes an investment of $100 on December 31, 1999 and the reinvestment of all dividends. 
The graphs show the value of the investment at the end of each year.\n\n0\n\n0\n\n30\n\n60\n\n90\n\n120\n\n0\n\n0\n\n30\n\n60\n\n90\n\n120\n\n150\n\n500\n\n150\n\n100\n\n200\n\n300\n\n400\n\n500\n\n100\n\n200\n\n300\n\n400\n\n0\n\n### **FIVE-YEAR PERFORMANCE** 70\n\n0\n\n0\n\n0\n\n0\n\n150\n\n30\n\n60\n\n90\n\n120\n\n150\n\n30\n\n60\n\n90\n\n120\n\n150\n\n0\n\n500\n\n1000\n\n1500\n\n2000\n\n2500\n\n3000\n\n500\n\n1000\n\n1500\n\n2000\n\n2500\n\n3000\n\n0\n\n0\n\n500\n\n1,000\n\n1,500\n\n2,000\n\n2,500\n\n3,000\n\n500\n\n1,000\n\n1,500\n\n2,000\n\n2,500\n\n3,000\n\n### **TEN-YEAR PERFORMANCE**\n\n(1) The 2010 peer group is comprised of Anadarko Petroleum Corp., Apache Corp., Devon Energy Corp., Encana Corp. and EOG Resources, Inc. XTO Energy, Inc. was not included in the 2010 peer group due to its acquisition by Exxon Mobil Corp. 500 150", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## March 22, 2004\n\n## Dear Shareholder:\n\nI am pleased to report on an excellent year for your Company, one in which we achieved strong financial results and reached several significant accomplishments.\n\nOur financial performance was very positive in 2003. For the first time in our history, revenues exceeded $100 million, reaching a total of $105.9 million for the year. While revenues increased by $12.9 million, our operating expenses only increased by $3.6 million, resulting in a $9.3 million improvement in operating income which reached $18.6 million. Helped by the one-time gain from the sale of our cellular partnership interest, our net income was a record $32.1 million. Our net income from continuing operations, which excludes the cellular impact, reached $9.8 million.\n\nWhile the last few years have not been kind to many companies in the telecommunications industry, your Company has not just survived, it has thrived. In addition to the operating results, our balance sheet became even\n\n*Christopher E. French*\n\nstronger. 
Total debt was again reduced, decreasing by $12.2 million to $43.3 million as of the end of the year. At the same time, our cash and equivalents at the end of the year was $28.7 million, while total assets were $185.4 million. With our total long-term debt equaling only 23.4 percent of total assets, your Company's balance sheet is envied in an industry where many companies have encountered problems just meeting their debt obligations, much less being able to invest in their future.\n\nAs previously announced, the Company completed the sale of our cellular partnership interest on February 28, 2003. While our participation in cellular, a subset of the wireless industry, had been very profitable, competitive pressures in the wireless industry were having an increasing impact. We had already lost half of our customers, and growth in revenues and profits had begun to slow. Exiting the cellular segment through the sale allows the Company to focus on our significantly larger digital PCS operation. It also made available a large source of cash to finance our other operating needs and future growth opportunities.\n\nOur wireless priorities are now focused on improving results within our PCS operation. After many years of multiplemillion dollar losses, our PCS business produced a slight profit in 2003. While many non-recurring factors contributed to this small profit, the basic operating results within this subsidiary showed significant improvement during 2003. PCS revenues grew 20.8 percent to a total of $67.0 million. Operating income in this subsidiary was $2.9 million, an $8.2 million change from the previous year's loss. Despite these improvements, we still have a long way to go before we are earning a satisfactory return on our investment.\n\nThe Company continued its efforts to successfully grow revenues and profits from other lines of business and by furnishing more and newer services in our enlarged footprint extending beyond Shenandoah County. 
Revenues from our information access services, which includes contract work on the 511Virginia Travel Project and Internet access services, increased $0.6 million, to $7.0 million during 2003. The Virginia Department of Transportation has requested proposals to continue the 511 project for future years, as well as to expand it to cover all the interstate highways throughout the Commonwealth. The success of the project to date has attracted many other bidders competing against our Company to win the contracts. Our recently launched regional phone book, Shentel Pages, exceeded our initial revenue expectations. It is hoped that a single source of phone listings and Yellow Page advertising, in both printed and online versions, will increasingly be demanded by residents and businesses in the northern Shenandoah Valley region.\n\nWhile our 2003 results have been good, we recognize we still have many challenges to overcome in order to continue our history of profitable long-term growth. Foremost is sustaining profitability in our PCS business which is so heavily dependent on Sprint's decisions and overall success with PCS. Our recently announced amendment to our Sprint agreements will provide some cost savings and allow us greater certainty in fees paid to Sprint. The recently announced merger between two of Sprint's competitors may provide some much needed consolidation in the U.S. wireless industry.", - "page_start": 2, - "page_end": 2, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### *P R O D U C T D E V E L O P M E N T C O S T S*\n\nProduct development costs relating to the development of new products and processes, including significant improvements and refinements to existing products, are expensed as incurred. These costs include salaries, contractor fees, building costs, utilities, and administrative fees. 
The amounts charged against income were $25,791,000 in 2003, $25,849,000 in 2002, and $21,415,000 in 2001.\n\n#### *S T O C K - B A S E D C O M P E N S A T I O N*\n\nThe Company accounts for its stock option plan using Accounting Principles Board Opinion No. 25, \"Accounting for Stock Issued to Employees,\" whereby stock-based employee compensation is reflected in net income as all options granted under the plan had an exercise price equal to the market value of the underlying common stock on the date of grant. SFAS No. 123, \"Accounting for Stock-Based Compensation\" issued subsequent to APB No. 25 and amended by SFAS No. 148, \"Accounting for Stock-Based Compensation — Transition and Disclosure\" defines a fair value-based method of accounting for employees' stock options but allows companies to continue to measure compensation cost for employee stock options using the intrinsic value-based method described in APB No. 25.\n\nThe following table illustrates the effect on net income and earnings per share if the Company had applied the fair value recognition provisions of SFAS No. 123, \"Accounting for Stock-Based Compensation,\" as amended by SFAS No. 
148 \"Accounting for Stock-Based Compensation — Transition and Disclosure,\" to stock-based employee compensation.\n\n| (In thousands) | 2003 | 2002 | 2001 |\n| --- | --- | --- | --- |\n| Net income, as reported | $ 98.1 | $ 91.4 | $ 74.4 |\n| Deduct: Total stock-based | | | |\n| employee compensation | | | |\n| expense determined under fair | | | |\n| value-based method for all | | | |\n| awards, net of related tax effects | (3.0) | (2.2) | (1.4) |\n| Pro forma net income | $ 95.1 | $ 89.2 | $ 73.0 |\n| Earnings per share: | | | |\n| Basic – as reported | $ 1.69 | $ 1.55 | $ 1.26 |\n| Basic – pro forma | $ 1.64 | $ 1.52 | $ 1.24 |\n| Diluted – as reported | $ 1.68 | $ 1.55 | $ 1.26 |\n| Diluted – pro forma | $ 1.62 | $ 1.51 | $ 1.24 |\n\nIncrease in expense in 2003 is due to accelerated vesting upon the retirement of plan participants.\n\n#### *I N C O M E T A X E S*\n\nThe Company accounts for income taxes under SFAS No. 109, \"Accounting for Income Taxes.\" This Statement uses an asset and liability approach that requires the recognition of deferred tax assets and liabilities for the expected future tax consequences of events that have been recognized in the Company's financial statements or tax returns. Deferred income taxes are provided to reflect the differences between the tax bases of assets and liabilities and their reported amounts in the financial statements.\n\n#### *E A R N I N G S P E R S H A R E*\n\nBasic earnings per share are based on the weighted-average number of common shares outstanding during the year. 
Shares potentially issuable under options and deferred restricted stock have been considered outstanding for purposes of the diluted earnings per share calculation.\n\n#### *U S E O F E S T I M A T E S*\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the United States requires management to make estimates and assumptions that affect the amounts reported in the financial statements and accompanying notes. The more significant areas requiring the use of management estimates relate to allowance for doubtful accounts, inventory reserves, marketing program accruals, warranty accruals, accruals for self-insured medical claims, workers' compensation, legal contingencies, general liability and auto insurance claims, and useful lives for depreciation and amortization. Actual results could differ from those estimates.\n\n#### *S E L F - I N S U R A N C E*\n\nThe Company is partially self-insured for general and product liability, workers' compensation, and certain employee health benefits. The general, product, and workers' compensation liabilities are managed using a wholly owned insurance captive; the related liabilities are included in the accompanying consolidated financial statements. The Company's policy is to accrue amounts in accordance with the actuarially determined liabilities. The actuarial valuations are based on historical information along with certain assumptions about future events. 
Changes in assumptions for such matters as legal actions, medical costs, and changes in actual experience could cause these estimates to change in the near term.\n\n#### *R E C E N T A C C O U N T I N G P R O N O U N C E M E N T S*\n\nIn December 2003, the Financial Accounting Standards Board issued Interpretation 46R (FIN 46R), a revision to Interpretation 46 (FIN 46), \"Consolidation of Variable Interest Entities.\" Fin 46R clarifies some of the provisions of FIN 46 and exempts certain entities from its requirements.", - "page_start": 44, - "page_end": 44, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "### **5.3 Workforce structure**\n\nThe **workforce** was often set identical with employed workers under a permanent contract, on average mostly male, mainly national, and most of the skills were achieved during apprenticeships or studying. During the past three decades a rapid economic, technological and demographic development took place: the variety of contracts has grown, and the share of women and of an international workforce increased; moreover, the average age of the workforce is rapidly increasing; and technological developments require repeated and often permanent acquisition of new skills. All these developments have shattered traditional ideas and conceptions of working life.297 This also has an impact on OSH.\n\n#### **Figure 38: Workforce structure, demography – Eurostat**\n\n**In 2005, approximately 80 million women and 101 million men** were employed in the EU. This was a rate of female workforce of 44.1%; in 2019, this rate went up to 46.1%, with 90 million women and 106 million men making up a total of 196 million workers. The employment rate of women between 15 and 64 years stood in 2019 at 67.9% and the employment rate of men at 78.9%.298\n\nDuring the past 15 years the number of women in the Eurostat category 'Employed persons' (Employed persons = employees and employers including self-employed) grew by 12.9%. 
The number of female employees grew by 16.3% and the number of male employees by 7.8%.", - "page_start": 108, - "page_end": 108, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed8.pdf", - "query": "Did automating the writing of EM-to-IP handoffs notes using LLM lead to life-threatening outputs ?", - "target_page": 8, - "target_passage": "none of the incorrect output text elements reached life-threatening risk", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "#### Abstract (continued)\n\nand safety via a novel evaluation framework. This study suggests the importance of a physician-inloop implementation design for this model and demonstrates an effective strategy to measure preimplementation patient safety of LLM models.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723\n\n## **Introduction**\n\nHandoffs, where patient information is exchanged between health professionals during a transfer of clinical responsibility, have been identified as a critical source of medical errors.1,2 The Joint Commission, the Accreditation Council for Graduate Medical Education, and the Association of American Medical Colleges have all recommended the development of high-quality and standardized handoff processes to address the substantial patient risk of this ubiquitous event.3,4 Implementing handoff tools has previously demonstrated significant reductions in medical errors.5,6 High-quality handoffs from emergency medicine (EM) to inpatient (IP) services (EM-to-IP) are challenged by medical complexity, diagnostic uncertainty, rapidly evolving care plans, and time constraints.7-10 The EM-to-IP handoff structure is not well standardized, frequently communicated verbally, and poorly adhered to in emergency departments (EDs), including in medical centers with formalized handoff systems.11-14 Prior research has demonstrated that 
suboptimal EM-to-IP handoff is associated with adverse events, EM leaders and front-line clinicians themselves view the EM-to-IP handoff as high risk, and an electronic health record (EHR)-based technology is commonly mentioned as the most desired assistive tool in improving ED transitions of care.15-18 Limited work to date has demonstrated EM electronic handoff tools as feasible, efficient, and effective.19-21 In April 2023, EM and internal medicine leadership of the study site collaboratively developed and launched a mandatory, EHR-based handoff workflow via a standardized EM-to-IP handoff note template, designed for realtime completion by the EM care team at time of admission. At 3 and 6 months postlaunch, informal evaluation of new EM-to-IP handoff notes through random medical record review and unstructured clinician feedback sessions revealed variable completeness, quality, and subsequent usefulness of the handoff notes.\n\nIn recent years there has been an accelerated interest in using LLMs to automate clinical tasks in an effort to unburden physicians and reduce burnout.22 Computer-generated text within clinical notes using natural language processing (NLP) have been overall shown to improve note completion rates, physician satisfaction, and patient outcomes.23 Since 2018, NLP has made rapid advancements in health care with the discovery of the transformer model architecture, the building block of large language models (LLMs). 
LLMs can automate workflows such as discharge summaries,24 radiology reports,25 patient messaging,26 after-visit summaries,27 and ambient dictation28 with various levels of perceived quality in each workflow.29 LLMs are particularly effective at summarizing large unstructured clinical datasets, such as ED patient medical records.30 A common concern of LLMs is their ability to hallucinate data, or LLMs generating output text that is not factually consistent with the original source content.31 Much work has been done in health care to reduce hallucinations through building larger-parameter models trained on trillions of datasets, and then instruction finetuning the LLM on smaller, well-curated datasets.32,33 LLMs can also be designed with explainability by citing inferred content back to the reference source notes.34 For short-context length notes, using few-shot prompt engineering approaches with large language models like GPT-4 can produce summaries that outperform standard physician documentation in completeness and error frequency.35 However, factual inconsistencies in the summaries produced by LLMs increase as the context length increases,36 and for medium- to long-context tasks, fine-tuning an open-source model has been shown to perform better than a prompt-learning approach.37 In prior work, members of this study team demonstrated 62% of LLM-generated hospital course summaries met standard-of-care for a formal inpatient discharge summary.24 However, recently published clinical\n\nJAMA Network Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 2/12", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - }, - { - "text": "# **Original Investigation | Emergency Medicine** Developing and Evaluating Large LanguageModel–Generated EmergencyMedicine Handoff Notes\n\nVince Hartman, MS; Xinyuan Zhang, PhD; Ritika Poddar, MS; Matthew McCarty, MD; Alexander Fortenko, MD, MPH; Evan Sholle, MS; Rahul Sharma, MD, MBA; Thomas Campion Jr, PhD; Peter A. D. Steel, MA, MBBS\n\n# **Abstract**\n\n**IMPORTANCE** An emergency medicine (EM) handoff note generated by a large language model (LLM) has the potential to reduce physician documentation burden without compromising the safety of EM-to-inpatient (IP) handoffs.\n\n**OBJECTIVE** To develop LLM-generated EM-to-IP handoff notes and evaluate their accuracy and safety compared with physician-written notes.\n\n**DESIGN, SETTING, AND PARTICIPANTS** This cohort study used EM patient medical records with acute hospital admissions that occurred in 2023 at NewYork-Presbyterian/Weill Cornell Medical Center. A customized clinical LLM pipeline was trained, tested, and evaluated to generate templated EM-to-IP handoff notes. Using both conventional automated methods (ie, recall-oriented understudy for gisting evaluation [ROUGE], bidirectional encoder representations from transformers score [BERTScore], and source chunking approach for large-scale inconsistency evaluation [SCALE]) and a novel patient safety-focused framework, LLM-generated handoff notes vs physician-written notes were compared. 
Data were analyzed from October 2023 to March 2024.\n\n**EXPOSURE** LLM-generated EM handoff notes.\n\n**MAIN OUTCOMES AND MEASURES** LLM-generated handoff notes were evaluated for (1) lexical similarity with respect to physician-written notes using ROUGE and BERTScore; (2) fidelity with respect to source notes using SCALE; and (3) readability, completeness, curation, correctness, usefulness, and implications for patient safety using a novel framework.\n\n**RESULTS** In this study of 1600 EM patient records (832 [52%] female and mean [SD] age of 59.9 [18.9] years), LLM-generated handoff notes, compared with physician-written ones, had higher ROUGE (0.322 vs 0.088), BERTScore (0.859 vs 0.796), and SCALE scores (0.691 vs 0.456), indicating the LLM-generated summaries exhibited greater similarity and more detail. As reviewed by 3 board-certified EM physicians, a subsample of 50 LLM-generated summaries had a mean (SD) usefulness score of 4.04 (0.86) out of 5 (compared with 4.36 [0.71] for physician-written) and mean (SD) patient safety scores of 4.06 (0.86) out of 5 (compared with 4.50 [0.56] for physician-written). 
None of the LLM-generated summaries were classified as a critical patient safety risk.\n\n**CONCLUSIONS AND RELEVANCE** In this cohort study of 1600 EM patient medical records, LLM-generated EM-to-IP handoff notes were determined superior compared with physician-written summaries via conventional automated evaluation methods, but marginally inferior in usefulness\n\n(continued)\n\n# **Key Points**\n\n**Question** Can a large language model (LLM) generate emergency medicine (EM)-to-inpatient (IP) handoff notes that are useful and safe for EM care?\n\n**Findings** In this cohort study of 1600 EM patient medical records using a novel evaluation framework, the LLM-generated EM-to-IP handoff notes had a mean usefulness of 4.04 out of 5 (compared with 4.36 for physician-written) and a mean patient safety of 4.06 out of 5 (compared with 4.50 for physician-written) with no critical patient safety risks.\n\n**Meaning** These findings suggest the value of a manual, patient safety– focused clinical evaluation of LLM models and the potential of LLM-generated handoff notes to create a new standard of care in EM.\n\n# **+ Invited Commentary**\n\n# **+ Supplemental content**\n\nAuthor affiliations and article information are listed at the end of this article.\n\n**Open Access.** This is an open access article distributed under the terms of the CC-BY License.\n\nJAMA Network Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 1/12", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed8.pdf" - }, - { - "text": "evaluation frameworks may not address the anticipated effect LLM performance limitations could have on patient safety.38-41\n\nIn this study, we aim to expand on prior work of clinical summarization to rigorously evaluate the outcomes of a fine-tuned model developed to generate accurate and safe summaries of the care rendered during an ED visit, with the long-term goal of integrating automated, structured EM-to-IP handoff notes into an EHR-based electronic handoff admission workflow (see eAppendix 1 in Supplement 1). We fine-tune pretrained LLMs on well curated datasets of structured and unstructured EHR data from the ED encounter to summarize the patient's ED care. We improved the correctness of model generations and customized the summaries in a structured format designed by a team of EM and internal medicine physician leaders for optimal usefulness. We proposed a novel patient safety-focused LLM evaluation framework to examine the LLM-generated handoff notes' quality and accuracy and the downstream patient safety implications of any identified inaccuracies. To evaluate noninferiority, we compared the LLM-generated handoff notes with the preexisting physician-written EM-to-IP handoff notes as the active control, using both the proposed patient safety-focused clinical evaluation framework and automated benchmark-driven methods. We used the physician-written EM-to-IP handoff notes as the active control and used the scores from both evaluation frameworks for the margin of inferiority of the intervention.\n\n# **Methods**\n\n### **Data Collection**\n\nThe study, with review and approval from the Weill Cornell institutional review board (IRB), was conducted at an urban academic 840-bed quaternary-care hospital in New York City, with approximately 71 000 adult ED visits and 21 000 admissions annually. 
EHR data from 1600 individual EM patient encounters leading to acute hospital admission were randomly selected from visits occurring between April and September of 2023. We limited our analysis to EM patient encounters occurring after April 2023, as the study site had updated the EM-handoff at that time. Encounters before this date used an earlier version of the EM-handoff note that would have provided suboptimal data for training labels. We used these data to fine-tune a pretrained LLM, which then generated an abstractive EM-handoff note. For the 1600 patient encounters (the study participants), Weill Cornell Medicine IRB approved a waiver of informed consent because the study used retrospective data and posed minimal risk to patients. We used Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines.\n\n### **EM-to-IP Handoff Note Template**\n\nThe EM-to-IP handoff note template used in the study is a replication of the current manual handoff note structure used at the study site. The generated EM handoff note consists of components generated by a rule-based pattern-matching approach (laboratory tests, vitals, medications, consult orders, and radiology impressions) and components generated by the trained abstractive summarization model (history of present illness [HPI], differential diagnoses, immediate care plans, in-ED events, and disposition). Each summary also included a header with the timestamp of ED triage and discharge, patient's birth date, patient's unique identifier, patient's encounter number, and the total time of patient's stay in the ED.\n\n### **Data Curation for Automated ED Note Generation**\n\nThe EHR data were bifurcated into 2 datasets linked by the patient encounter number: 1 for the rulebased pattern-matching approach and the other for the LLM fine-tuning discussed in further detail in eAppendix 1 in Supplement 1. 
The rule-based framework was designed by the 3 board certified EM physicians (M.M., A.F., and P.S.). Fine tuning of the pretrained LLM consisted of the notes in **Table 1**: EM clinician notes, consultation notes, EM progress note entries, and EM procedure notes. The EM-to-IP handoff notes were used as the labels. As the preexisting labels were of variable quality for\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 3/12", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed8.pdf" - }, - { - "text": "curation (4.24 [0.58] vs 4.76 [0.48]), readability (4.00 [0.64] vs 4.64 [0.49]), correctness (4.52 [0.64] vs 4.90 [0.39]), and patient safety (4.06 [0.86] vs 4.50 [0.56]).\n\nIn extrapolating the estimated worst-case scenario impact of these performance gaps on patient safety, the 3 expert clinicians determined none of the identified model performance issues were anticipated to create a level 1 (life-threatening) safety event (see examples of worst case scenarios in eTable 2 in Supplement 1). While the incompleteness and faulty logic identified in the automated summaries received mean (SD) safety scores of 4.20 (0.93) and 4.60 (0.75), respectively; 13 (8.7%) and 11 (7.3%) of these events, respectively, were determined to have the potential to create a level 2 patient safety event following EM-to-IP handoff, substantially higher compared with the physician-written summaries (0%). All of the 5 hallucinations had patient safety scores between 4 and 5 and a mean (SD) score of 4.96 (0.14), which is defined as the hallucinations posing mild to no patient safety risk. LLM-generated notes demonstrated a higher rate of incorrectness (9.6%) compared with the physician-written notes (2.0%), although very few hallucinations.\n\nICC were 0.79 for completeness, 0.70 for curation, 0.59 for readability, 0.76 for correctness, and 0.74 for usefulness. 
These numbers suggest good reliability of agreement for completeness, curation, correctness, and usefulness and suggest fair reliability for readability among the 3 raters.\n\n## **Discussion**\n\nThe study demonstrated success in generating EM-to-IP handoff notes using both a fine tuned, pretrained LLM and rule-based approaches within an end user–developed note template. It is important to note that (largely due to time constraints within the EM care delivery model) the performance of EM-to-IP handoff notes was not the current standard of care in EM. The study site's unique electronic handoff process enabled a comparison between physician-written and LLM-generated handoff notes. Traditional automated evaluations of the model output suggested\n\n| | | | Table 3. Mean Clinical Quality Evaluation, Large Language Model (LLM)–Generated and Physician-Written | | | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | LLM-generated | | | | | | Physician-written | | | | | |\n| | | | Likert rating 1-5, No. (%)a | | | | | | Likert rating 1-5, No. (%)a | | | |\n| Criteria | Mean score (SD) | 1 | 2 | 3 | 4 | 5 | Mean score (SD) | 1 | 2 | 3 | 4 | 5 |\n| Completeness | 4.00 (0.88) | 0 | 12 (8) | 31 (20.7) | 69 (46) | 38 (25.3) | 4.16 (0.84) | 0 | 3 (2) | 31 (20.7) | 48 (32) | 68 (45.3) |\n| Curation | 4.24 (0.58) | 0 | 1 (0.7) | 13 (8.7) | 85 (56.7) | 51 (34) | 4.76 (0.48) | 0 | 0 | 6 (4) | 39 (26) | 105 (70) |\n| Readability | 4.00 (0.64) | 0 | 8 (5.3) | 17 (11.3) | 87 (58) | 38 (25.3) | 4.64 (0.49) | 0 | 0 | 5 (3.3) | 38 (25.3) | 107 (71.3) |\n| Correctness | 4.52 (0.64) | 0 | 0 | 13 (8.7) | 39 (26) | 98 (65.3) | 4.90 (0.39) | 0 | 0 | 2 (1.3) | 12 (8) | 136 (90.7) |\n| Usefulness | 4.04 (0.86) | 0 | 12 (8) | 30 (20) | 59 (39.3) | 49 (32.7) | 4.36 (0.71) | 0 | 5 (3.3) | 13 (8.7) | 53 (35.3) | 79 (52.7) |\n\na Likert scores and score distributions over 50 notes for 3 annotators. 
There are no 1 ratings for either physician or LLM summaries in the 150 evaluation results.\n\n#### Table 4. Mean Clinical Safety Evaluation, Large Language Model (LLM)–Generated and Physician-Written\n\n| | LLM-generated | | | | | | Physician-written | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | Likert score 1-5, No. (%)a | | | | | | Likert score 1-5, No. (%)a | | | |\n| Criteria | Mean (SD) | 1 | 2 | 3 | 4 | 5 | Mean (SD) | 1 | 2 | 3 | 4 | 5 |\n| Completeness | 4.20 (0.93) | 0 | 13 (8.7) | 19 (12.7) | 58 (38.7) | 60 (40) | 4.50 (0.65) | 0 | 0 | 17 (11.3) | 43 (28.7) | 90 (60) |\n| Curation | 4.82 (0.32) | 0 | 1 (0.7) | 3 (2) | 21 (14) | 125 (83.3) | 4.90 (0.31) | 0 | 0 | 3 (2) | 8 (5.3) | 139 (92.7) |\n| Readability | 4.74 (0.37) | 0 | 1 (0.7) | 6 (4) | 23 (15.3) | 120 (80) | 4.94 (0.14) | 0 | 0 | 0 | 10 (6.7) | 140 (93.3) |\n| Correctness: hallucination | 4.96 (0.14) | 0 | 0 | 0 | 5 (3.3) | 145 (96.7) | 5.00 | 0 | 0 | 0 | 0 | 150 (100) |\n| Correctness: knowledge gap | 4.88 (0.48) | 0 | 3 (2) | 2 (1.3) | 6 (4) | 139 (92.7) | 4.90 (0.42) | 0 | 1 (0.7) | 5 (3.3) | 3 (2) | 141 (94) |\n| Correctness: faulty logic | 4.60 (0.75) | 0 | 11 (7.3) | 12 (8) | 13 (8.7) | 114 (76) | 4.94 (0.24) | 0 | 0 | 2 (1.3) | 2 (1.3) | 146 (97.3) |\n| Correctness: bias | 5.00 | 0 | 0 | 0 | 0 | 150 (100) | 5.00 | 0 | 0 | 0 | 0 | 150 (100) |\n| Overall safety risk | 4.06 (0.86) | 0 | 11 (7.3) | 27 (18) | 60 (40) | 52 (34.7) | 4.50 (0.56) | 0 | 1 (0.7) | 16 (10.7) | 41 (27.3) | 92 (61.3) |\n| | | | | | a Likert scores and score distributions over 50 notes for 3 annotators. There are no 1 ratings for either physician or AI summaries in the 150 evaluation results. | | | | | | | |\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 7/12", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed8.pdf" - }, - { - "text": "superior performance. 
However, while the manual clinical evaluation demonstrated the majority of the LLM-generated notes were of promising comparative quality (scores of 4-5), they were, on average, inferior to the clinician-written notes.\n\nOur novel clinical evaluation's findings suggest the majority of identified quality limitations and incorrectness would have minimal impact on patient safety, even when extrapolated to the worstcase scenario of the LLM-generated summary content not being reviewed and edited by a clinician before completion. This was designed to address contemporary LLM concerns of user trust, reliance and expertise.49 As such, none of the incorrect output text elements reached life-threatening risk. However, incompleteness and faulty logic identified in the automated summaries were not always negligible, with just under 1 in 10 of these performance gaps determined to have the potential to create significant patient safety risk compared with the physician-written summaries. These critical implementation safety findings will inform (1) directionality of further model refinement; (2) further clinical evaluation of postrefinement model output; and (3) irrespective of downstream model performance, an EHR-implementation plan constrained to a user-interface design that will allow EM clinicians to review and edit the LLM-generated handoff note as a draft before finalizing (see eAppendix 1 in Supplement 1). This physician-in-the-loop process has also been identified as critical in other recent work implementing LLMs into clinical workflows.29,53\n\nWhile the automated methods of SCALE and MPNet-based sentence transformers demonstrated a cursory view of the faithfulness performance of the models, the clinical evaluation provided the nuanced context of the true factuality of our system on a word by word level. 
When comparing with the source notes, the automatic evaluations rewarded the summaries with more details, more semantic similarities, and more entailment logics, while physician-written notes tended to be more concise with more shortcuts and clinical jargon, which are penalized by automatic evaluation metrics. In addition, LLM-generated summaries are completely based on the source notes, while physician-written summaries are often composed with additional knowledge that cannot be found from the source notes.\n\nThe divergence of the automated and clinical evaluation results of an LLM intended for integration into a critical clinical workflow is an important finding. First, this observed finding validates the importance of clinical evaluations in addition to conventional automated evaluations to determine accuracy.54 While other LLM clinical evaluation frameworks have been described to measure conventional model output quality categories (such as incorrectness domains and other performance gaps),30,35 to our knowledge, our novel framework is the first to incorporate anticipated patient safety implications for each individual category deficiency.\n\n### **Limitations**\n\nThere were several limitations to the study that were primarily driven from constraints of infrastructure, as well as regulations, legal governance, and labor requirements. At the study location, the data were required to remain on premise at all times and the infrastructure that was provided had a GPU limitation of 24 GB. Given these infrastructure restrictions, the best open-source model available during the study was LLM 2. Furthermore, we were not able to demonstrate the comparable difference between our fine-tuned LLM 2 model and third party LLMs32,55 because of the study location's restrictions and concerns with the data retention policies. 
Nevertheless, our study demonstrates the potential capability of integrating state-of-the-art open source LLMs at organizations that are less open to integrating third-party LLMs.\n\nWhile the dataset was smaller, we made significant efforts to reduce model variance and prevent overfitting by allocating more data to the training cohort and using k-fold cross validation. And while our ratio split choice implies the testing results will have slightly greater variance than expected, this is mitigated through the extensive manual clinical assessment that was performed. The study's multidimensional clinical evaluation was labor intensive, requiring more than 200 hours from expert informaticists and quality trained clinician experts to both curate the dataset of 1600\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 8/12", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed8.pdf" - }, - { - "text": "LLM-model training, an informatics professional (V.H.) worked over a period of 200 hours with 3 board certified emergency medicine physician leaders with experience in formal quality and patient safety review processes (M.M., A.F., and P.S.) to improve the dataset through manual curation and annotation. As the task of EM-handoff note generation is not dependent on racial characteristics of the patients, we removed all mentions of race during the annotation stage as a means to avoid race bias; therefore, the model was trained to generate text without race-based assumptions. Although resource intensive, a small and carefully curated dataset of at least 1000 examples has been shown to be sufficient to produce remarkable results for the language model chosen.42 Given the size of our dataset, we created a train and test dataset with a ratio of 1500:100, with a higher ratio of data placed in the training set and eschewed a validation set to lower the variance of the models. 
We used k-fold cross validation on the training dataset to avoid sampling bias for the hyperparameter optimization of the LLMs.\n\n### **Models**\n\nFor this study, we chose the LLMs Robustly Optimized BERT Approach (RoBERTa; hereafter referred to as LLM 1)43 for saliency content selection and Large Language Model Meta AI 2 (Llama-2; hereafter referred to as LLM 2) 7B44 for abstractive summarization. Further information about the models and technology specifications is provided in detail in eAppendix 1 in Supplement 1.\n\n#### **Data Processing**\n\nAs LLM 2 only has a context size of 4096 tokens,44 we used 2 steps to process the EM notes to both shorten the input size while maintaining content salience. First, we adopted a number of heuristic strategies for prioritization and filtration: (1) clinical note types (hierarchy presented in Table 1), (2) time of authorship, and (3) duplicate sentence detection. Second, we used an LLM 1–based saliency model to infer EM note sentences based on likelihood of content contribution to the EM-to-IP handoff notes.\n\n#### **Model Training and Inference**\n\nOur summarization model is a fine-tuned decoder-only causal language model based on LLM 2. We used different prompts for the separate types of summarization: HPI and EM handoff. Additional information about the model training and inference process is provided in eAppendix 1 in Supplement 1.\n\nUsing a combination of generative AI powered by our fine-tuned LLM 2 model and a set of heuristic rules, our summarization system produced ED handoff notes with various sections for downstream clinical tasks. The inference process is shown in the **Figure**.\n\n| | Table 1. 
Types of Data Included From the Emergency Department (ED) Patient Electronic Health Recorda |\n| --- | --- |\n| Type of data | Description |\n| Descriptive | Date of birth, medical record number, encounter number, and total time of stay in ED |\n| Encounter | ED arrival date and time, IP admit date and time |\n| Laboratory tests | Examples: hemoglobin, hematocrit, white blood cell count, neutrophil count, platelets, sodium, |\n| (all results available) | potassium, chloride, bicarbonate, creatinine, blood urea nitrogen, troponin, D dimer, lactate, |\n| | urinalysis, ketone, blood, nitrite, leucocytes, and red blood cells |\n| Laboratory tests | Examples: β-human chorionic gonadotropin hormone, all serum drug levels (alcohol level, |\n| (only if abnormal) | salicylate level, Tylenol level), magnesium, lipase, and erythrocyte sedimentation rate |\n| Notes (in order of | EM clinician notes, consultation notes, EM progress notes, and EM procedure notes |\n| hierarchy) | |\n| Vitals | Height, weight, temperature, heart rate, blood pressure, and peripheral capillary |\n| | oxygen saturation |\n| Orders | Medications, consults, and radiology results |\n\nAbbreviations: EM, emergency medicine; IP, inpatient.\n\na Automated EM handoff notes are generated from the curation of the data through both rule-based and large language model–summarization approaches.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 4/12", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed8.pdf" - }, - { - "text": "During the VLAN configuration for each IP address, the VLAN settings for the local and failover ports on two nodes of an I/O Group can differ. To avoid any service disruption, switches must be configured so that the failover VLANs are configured on the local switch ports and the failover of IP addresses from a failing node to a surviving node succeeds. 
If failover VLANs are not configured on the local switch ports, there are no paths to the IBM Spectrum Virtualize system nodes during a node failure and the replication fails.\n\nConsider the following requirements and procedures when implementing VLAN tagging:\n\n- -VLAN tagging is supported for IP partnership traffic between two systems.\n- -VLAN provides network traffic separation at the layer 2 level for Ethernet transport.\n- - VLAN tagging by default is disabled for any IP address of a node port. You can use the CLI or GUI to optionally set the VLAN ID for port IPs on both systems in the IP partnership.\n- - When a VLAN ID is configured for the port IP addresses that are used in remote copy port groups, appropriate VLAN settings on the Ethernet network must also be configured to prevent connectivity issues.\n\nSetting VLAN tags for a port is disruptive. Therefore, VLAN tagging requires that you stop the partnership first before you configure VLAN tags. Restart the partnership after the configuration is complete.\n\n# **11.8.5 IP partnership and terminology**\n\nThe IP partnership terminology and abbreviations that are used are listed in Table 11-12.\n\n| IP partnership terminology | Description |\n| --- | --- |\n| Remote copy group or Remote copy port group | The following numbers group a set of IP addresses that are |\n| | connected to the same physical link. Therefore, only IP |\n| | addresses that are part of the same remote copy group can |\n| | form remote copy connections with the partner system: |\n| | 0 – Ports that are not configured for remote copy |\n| | 1 – Ports that belong to remote copy port group 1 |\n| | 2 – Ports that belong to remote copy port group 2 |\n| | Each IP address can be shared for iSCSI host attach and |\n| | remote copy functionality. Therefore, appropriate settings must |\n| | be applied to each IP address. |\n| IP partnership | Two systems that are partnered to perform remote copy over |\n| | native IP links. 
|\n| FC partnership | Two systems that are partnered to perform remote copy over |\n| | native Fibre Channel links. |\n| Failover | Failure of a node within an I/O group causes the volume access |\n| | to go through the surviving node. The IP addresses fail over to |\n| | the surviving node in the I/O group. When the configuration |\n| | node of the system fails, management IPs also fail over to an |\n| | alternative node. |\n| Failback | When the failed node rejoins the system, all failed over IP |\n| | addresses are failed back from the surviving node to the |\n| | rejoined node, and volume access is restored through |\n| | this node. |\n| linkbandwidthmbits | Aggregate bandwidth of all physical links between two sites |\n| | in Mbps. |\n\n*Table 11-12 Terminology for IP partnership*", - "page_start": 574, - "page_end": 574, - "source_file": "sg247938.pdf" - }, - { - "text": "- c. Configure IP ports for remote copy on System A1 by using the following settings:\n\t- Node 1:\n\t\t- Port 1, remote copy port group 1\n\t\t- Host: Yes\n\t\t- Assign IP address\n\t- Node 2:\n\t\t- Port 4, Remote Copy Port Group 2\n\t\t- Host: Yes\n\t\t- Assign IP address\n- d. Configure IP ports for remote copy on System B1 by using the following settings:\n\t- Node 1:\n\t\t- Port 1, remote copy port group 1\n\t\t- Host: Yes\n\t\t- Assign IP address\n\t- Node 2:\n\t\t- Port 4, remote copy port group 2\n\t\t- Host: Yes\n\t\t- Assign IP address\n- e. Check the MTU levels across the network as set (default MTU is 1500 on IBM SAN Volume Controller and Storwize V7000).\n- f. Establish IP partnerships from both systems.\n- g. After the partnerships are in the Fully_Configured state, you can create the remote copy relationships.\n\n# **11.9 Managing Remote Copy by using the GUI**\n\nIt is often easier to control MM/GM with the GUI if you have few mappings. When many mappings are used, run your commands by using the CLI. 
This section describes the tasks that you can perform at a remote copy level.\n\n**Note:** The **Copy Services** → **Consistency Groups** menu relates to FlashCopy consistency groups only, not Remote Copy ones.", - "page_start": 590, - "page_end": 590, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 11-1 Select Content Manager OnDemand system parameters\n\n- 3. To choose a User Exit Logging option, select the option.\n**Tip:** The arslog exit file is run by the same user that owns the **arssockd** process that calls this exit. A common reason for receiving no response from this exit is *access permissions* on either the arslog file itself or files and directories that are accessed within **arslog**.\n\nContent Manager OnDemand provides an exit for each of the four system logging event points. Use these exits to filter the messages and act when a particular event occurs. For example, you can provide a user exit program that sends a message to a security administrator when an unsuccessful logon attempt occurs.\n\n# **System log exit samples**\n\nTo demonstrate the common uses for the system log exit, we provide two typical examples:\n\n- -Capturing failed logon attempts (AIX)\n- -Notifying another system when a load completes (AIX)\n\nFor simplicity, we do not demonstrate the system log exits across all supported platforms. We recognize that the scripting languages between platforms vary, but the principles that we describe here are uniform across all supported platforms; only the syntax differs.\n\n#### *Capturing failed logon attempts (AIX)*\n\nExample 11-4 on page 252 is an extract from a simple system logging exit that captures *message code 31* (a failed logon attempt) and writes the user ID that was used and information about the network address of this user to a file. In this case, the file name is a combination of the system date and the string failedlogon.log. 
This system log exit writes all of the failed logon attempts for each day to a file that can then be sorted and analyzed by other utilities to alert for possible security risks.", - "page_start": 274, - "page_end": 274, - "source_file": "sg246915.pdf" - }, - { - "text": "an extra potentially expensive LLM invocation for each query processed by the router. Second, it may degrade the quality of responses from the destination LLMs, which are sensitive to the phrasing of queries and prompts.\n\nDetecting anomalous user workloads. Another possible defense requires the router to monitor individual user workloads, and identify those users whose queries are routed to the strongest model with an abnormally high frequency. The router can then impose a user-specific threshold. Of course such workloads may have a benign explanation, e.g., the user's queries may be unusually complex. Even so, routers could potentially be designed to perform user-specific routing. For example, one could imagine using per-user thresholds that are calibrated dynamically to attempt to maintain a consistent fraction of queries being routed to the strong model.\n\nSuch user-specific routing would complicate implementations, and would make inaccurate decisions for a user until there is sufficient data about their queries. The latter is relevant in adversarial settings, since such an approach would still be circumventable should attackers be able to mount Sybil attacks in which the attacker creates a new user for, in the limit, each query.\n\n# 9 Related Work\n\nEvasion attacks against ML systems. A large body of work has investigated evasion attacks against ML systems [25, 43, 60], also referred to as adversarial examples [32, 48, 49], and these attacks are now being explored in the context of multi-modal LLMs [28] as well as text-only LLMs (for just one example, see [22]). 
We discussed in Section 3 how our results compare: LLM control plane integrity is a distinct AI safety issue, but related in that: (1) control plane integrity attacks may use evasion-style techniques, and (2) control plane integrity attacks might be useful for performing evasion.\n\nPrompt injection against LLMs. Prompt injection is a class of attacks against LLMs in which the adversary manipulates the prompt, i.e., the textual input fed directly to the LLM, causing the LLM to generate outputs that satisfy some adversarial objective [50, 64]. Evasion attacks as discussed above can use prompt injection, jailbreaking attacks being a widely explored example in which the adversary aims to bypass some safety guardrail included in the LLM system, such as \"do not output expletives\" [23, 42, 54, 66, 72, 73].\n\nPrompt injection is also used for extraction attacks that aim to infer some information from or about the model, for example, the system prompt [50, 54, 70], training data samples [46], or model parameters [18]. In indirect prompt injection attacks [33], the adversaries do not directly interact with the target LLM, and instead inject adversarial inputs into thirdparty data, which is then added to the LLM prompt (intentionally or unintentionally) by the victim application and/or its users. This relates to another category of attacks that target LLM-based applications, such as RAG systems, and invalidate their integrity by exploiting the weaknesses of the underlying LLM [19, 55].\n\nOur attacks also modify queries, but with a different aim than the above types of attacks: undermining the integrity of the control plane routing, rather than the LLM itself. Future work might investigate indirect control plane integrity attacks that, analogously to indirect prompt injection, serve to somehow trick users of a routing system into forming controlplane-confounding queries.\n\nAttacks against MoE. 
Mixture-of-Experts (MoE) architectures enable using multiple expert modules for processing a given query with a lower computational cost by including an inner routing mechanism that in every layer routes different tokens to a small number of experts [29, 30, 52, 56]. This can be thought of as an internal router within a single LLM, rather than an external control plane that orchestrates multiple LLMs. MoE has increased in popularity as it allows to build larger models at a fixed compute budget—not all parameters are used at the same time.\n\nHayes et al. [34] identified a vulnerability in MoE that can be exploited for a denial-of-service attack against MoE. Thus control plane integrity issues appear to extend to the context of single-LLM MoE systems, and future work could explore this connection further.\n\nYona et al. [67] presented a side-channel attack on MoE that enables an attacker to reveal other users' prompts. We expect that side-channel attacks against LLM control planes exist as well, for example, to infer which models are used via timing of responses. Such attacks, which target confidentiality, are outside the scope of control plane integrity.\n\n# 10 Conclusion\n\nLLM routers balance quality and cost of LLM inference by routing different queries to different LLMs. 
They are an example of a broader, emerging class of systems we call \"LLM control planes\" that aim to achieve various quality, efficiency, and cost objectives by orchestrating use of multiple LLMs to respond to a query.", - "page_start": 16, - "page_end": 16, - "source_file": "arxiv1.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed8.pdf", - "query": "How did automating the writing of EM-to-IP handoffs notes using LLM affect the usefulness of these notes ?", - "target_page": 1, - "target_passage": "LLM-generated EM-to-IP handoff notes were determined superior compared with physician-written summaries via conventional automated evaluation methods, but marginally inferior in usefulness", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# **Original Investigation | Emergency Medicine** Developing and Evaluating Large LanguageModel–Generated EmergencyMedicine Handoff Notes\n\nVince Hartman, MS; Xinyuan Zhang, PhD; Ritika Poddar, MS; Matthew McCarty, MD; Alexander Fortenko, MD, MPH; Evan Sholle, MS; Rahul Sharma, MD, MBA; Thomas Campion Jr, PhD; Peter A. D. Steel, MA, MBBS\n\n# **Abstract**\n\n**IMPORTANCE** An emergency medicine (EM) handoff note generated by a large language model (LLM) has the potential to reduce physician documentation burden without compromising the safety of EM-to-inpatient (IP) handoffs.\n\n**OBJECTIVE** To develop LLM-generated EM-to-IP handoff notes and evaluate their accuracy and safety compared with physician-written notes.\n\n**DESIGN, SETTING, AND PARTICIPANTS** This cohort study used EM patient medical records with acute hospital admissions that occurred in 2023 at NewYork-Presbyterian/Weill Cornell Medical Center. A customized clinical LLM pipeline was trained, tested, and evaluated to generate templated EM-to-IP handoff notes. 
Using both conventional automated methods (ie, recall-oriented understudy for gisting evaluation [ROUGE], bidirectional encoder representations from transformers score [BERTScore], and source chunking approach for large-scale inconsistency evaluation [SCALE]) and a novel patient safety-focused framework, LLM-generated handoff notes vs physician-written notes were compared. Data were analyzed from October 2023 to March 2024.\n\n**EXPOSURE** LLM-generated EM handoff notes.\n\n**MAIN OUTCOMES AND MEASURES** LLM-generated handoff notes were evaluated for (1) lexical similarity with respect to physician-written notes using ROUGE and BERTScore; (2) fidelity with respect to source notes using SCALE; and (3) readability, completeness, curation, correctness, usefulness, and implications for patient safety using a novel framework.\n\n**RESULTS** In this study of 1600 EM patient records (832 [52%] female and mean [SD] age of 59.9 [18.9] years), LLM-generated handoff notes, compared with physician-written ones, had higher ROUGE (0.322 vs 0.088), BERTScore (0.859 vs 0.796), and SCALE scores (0.691 vs 0.456), indicating the LLM-generated summaries exhibited greater similarity and more detail. As reviewed by 3 board-certified EM physicians, a subsample of 50 LLM-generated summaries had a mean (SD) usefulness score of 4.04 (0.86) out of 5 (compared with 4.36 [0.71] for physician-written) and mean (SD) patient safety scores of 4.06 (0.86) out of 5 (compared with 4.50 [0.56] for physician-written). 
None of the LLM-generated summaries were classified as a critical patient safety risk.\n\n**CONCLUSIONS AND RELEVANCE** In this cohort study of 1600 EM patient medical records, LLM-generated EM-to-IP handoff notes were determined superior compared with physician-written summaries via conventional automated evaluation methods, but marginally inferior in usefulness\n\n(continued)\n\n# **Key Points**\n\n**Question** Can a large language model (LLM) generate emergency medicine (EM)-to-inpatient (IP) handoff notes that are useful and safe for EM care?\n\n**Findings** In this cohort study of 1600 EM patient medical records using a novel evaluation framework, the LLM-generated EM-to-IP handoff notes had a mean usefulness of 4.04 out of 5 (compared with 4.36 for physician-written) and a mean patient safety of 4.06 out of 5 (compared with 4.50 for physician-written) with no critical patient safety risks.\n\n**Meaning** These findings suggest the value of a manual, patient safety– focused clinical evaluation of LLM models and the potential of LLM-generated handoff notes to create a new standard of care in EM.\n\n# **+ Invited Commentary**\n\n# **+ Supplemental content**\n\nAuthor affiliations and article information are listed at the end of this article.\n\n**Open Access.** This is an open access article distributed under the terms of the CC-BY License.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 1/12", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed8.pdf" - }, - { - "text": "#### Abstract (continued)\n\nand safety via a novel evaluation framework. This study suggests the importance of a physician-inloop implementation design for this model and demonstrates an effective strategy to measure preimplementation patient safety of LLM models.\n\nJAMA Network Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723\n\n## **Introduction**\n\nHandoffs, where patient information is exchanged between health professionals during a transfer of clinical responsibility, have been identified as a critical source of medical errors.1,2 The Joint Commission, the Accreditation Council for Graduate Medical Education, and the Association of American Medical Colleges have all recommended the development of high-quality and standardized handoff processes to address the substantial patient risk of this ubiquitous event.3,4 Implementing handoff tools has previously demonstrated significant reductions in medical errors.5,6 High-quality handoffs from emergency medicine (EM) to inpatient (IP) services (EM-to-IP) are challenged by medical complexity, diagnostic uncertainty, rapidly evolving care plans, and time constraints.7-10 The EM-to-IP handoff structure is not well standardized, frequently communicated verbally, and poorly adhered to in emergency departments (EDs), including in medical centers with formalized handoff systems.11-14 Prior research has demonstrated that suboptimal EM-to-IP handoff is associated with adverse events, EM leaders and front-line clinicians themselves view the EM-to-IP handoff as high risk, and an electronic health record (EHR)-based technology is commonly mentioned as the most desired assistive tool in improving ED transitions of care.15-18 Limited work to date has demonstrated EM electronic handoff tools as feasible, efficient, and effective.19-21 In April 2023, EM and internal medicine leadership of the study site collaboratively developed and launched a mandatory, EHR-based handoff workflow via a standardized EM-to-IP handoff note template, designed for realtime completion by the EM care team at time of admission. 
At 3 and 6 months postlaunch, informal evaluation of new EM-to-IP handoff notes through random medical record review and unstructured clinician feedback sessions revealed variable completeness, quality, and subsequent usefulness of the handoff notes.\n\nIn recent years there has been an accelerated interest in using LLMs to automate clinical tasks in an effort to unburden physicians and reduce burnout.22 Computer-generated text within clinical notes using natural language processing (NLP) have been overall shown to improve note completion rates, physician satisfaction, and patient outcomes.23 Since 2018, NLP has made rapid advancements in health care with the discovery of the transformer model architecture, the building block of large language models (LLMs). LLMs can automate workflows such as discharge summaries,24 radiology reports,25 patient messaging,26 after-visit summaries,27 and ambient dictation28 with various levels of perceived quality in each workflow.29 LLMs are particularly effective at summarizing large unstructured clinical datasets, such as ED patient medical records.30 A common concern of LLMs is their ability to hallucinate data, or LLMs generating output text that is not factually consistent with the original source content.31 Much work has been done in health care to reduce hallucinations through building larger-parameter models trained on trillions of datasets, and then instruction finetuning the LLM on smaller, well-curated datasets.32,33 LLMs can also be designed with explainability by citing inferred content back to the reference source notes.34 For short-context length notes, using few-shot prompt engineering approaches with large language models like GPT-4 can produce summaries that outperform standard physician documentation in completeness and error frequency.35 However, factual inconsistencies in the summaries produced by LLMs increase as the context length increases,36 and for medium- to long-context tasks, fine-tuning an open-source 
model has been shown to perform better than a prompt-learning approach.37 In prior work, members of this study team demonstrated 62% of LLM-generated hospital course summaries met standard-of-care for a formal inpatient discharge summary.24 However, recently published clinical\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 2/12", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - }, - { - "text": "evaluation frameworks may not address the anticipated effect LLM performance limitations could have on patient safety.38-41\n\nIn this study, we aim to expand on prior work of clinical summarization to rigorously evaluate the outcomes of a fine-tuned model developed to generate accurate and safe summaries of the care rendered during an ED visit, with the long-term goal of integrating automated, structured EM-to-IP handoff notes into an EHR-based electronic handoff admission workflow (see eAppendix 1 in Supplement 1). We fine-tune pretrained LLMs on well curated datasets of structured and unstructured EHR data from the ED encounter to summarize the patient's ED care. We improved the correctness of model generations and customized the summaries in a structured format designed by a team of EM and internal medicine physician leaders for optimal usefulness. We proposed a novel patient safety-focused LLM evaluation framework to examine the LLM-generated handoff notes' quality and accuracy and the downstream patient safety implications of any identified inaccuracies. To evaluate noninferiority, we compared the LLM-generated handoff notes with the preexisting physician-written EM-to-IP handoff notes as the active control, using both the proposed patient safety-focused clinical evaluation framework and automated benchmark-driven methods. 
We used the physician-written EM-to-IP handoff notes as the active control and used the scores from both evaluation frameworks for the margin of inferiority of the intervention.\n\n# **Methods**\n\n### **Data Collection**\n\nThe study, with review and approval from the Weill Cornell institutional review board (IRB), was conducted at an urban academic 840-bed quaternary-care hospital in New York City, with approximately 71 000 adult ED visits and 21 000 admissions annually. EHR data from 1600 individual EM patient encounters leading to acute hospital admission were randomly selected from visits occurring between April and September of 2023. We limited our analysis to EM patient encounters occurring after April 2023, as the study site had updated the EM-handoff at that time. Encounters before this date used an earlier version of the EM-handoff note that would have provided suboptimal data for training labels. We used these data to fine-tune a pretrained LLM, which then generated an abstractive EM-handoff note. For the 1600 patient encounters (the study participants), Weill Cornell Medicine IRB approved a waiver of informed consent because the study used retrospective data and posed minimal risk to patients. We used Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines.\n\n### **EM-to-IP Handoff Note Template**\n\nThe EM-to-IP handoff note template used in the study is a replication of the current manual handoff note structure used at the study site. The generated EM handoff note consists of components generated by a rule-based pattern-matching approach (laboratory tests, vitals, medications, consult orders, and radiology impressions) and components generated by the trained abstractive summarization model (history of present illness [HPI], differential diagnoses, immediate care plans, in-ED events, and disposition). 
Each summary also included a header with the timestamp of ED triage and discharge, patient's birth date, patient's unique identifier, patient's encounter number, and the total time of patient's stay in the ED.\n\n### **Data Curation for Automated ED Note Generation**\n\nThe EHR data were bifurcated into 2 datasets linked by the patient encounter number: 1 for the rulebased pattern-matching approach and the other for the LLM fine-tuning discussed in further detail in eAppendix 1 in Supplement 1. The rule-based framework was designed by the 3 board certified EM physicians (M.M., A.F., and P.S.). Fine tuning of the pretrained LLM consisted of the notes in **Table 1**: EM clinician notes, consultation notes, EM progress note entries, and EM procedure notes. The EM-to-IP handoff notes were used as the labels. As the preexisting labels were of variable quality for\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 3/12", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed8.pdf" - }, - { - "text": "curation (4.24 [0.58] vs 4.76 [0.48]), readability (4.00 [0.64] vs 4.64 [0.49]), correctness (4.52 [0.64] vs 4.90 [0.39]), and patient safety (4.06 [0.86] vs 4.50 [0.56]).\n\nIn extrapolating the estimated worst-case scenario impact of these performance gaps on patient safety, the 3 expert clinicians determined none of the identified model performance issues were anticipated to create a level 1 (life-threatening) safety event (see examples of worst case scenarios in eTable 2 in Supplement 1). While the incompleteness and faulty logic identified in the automated summaries received mean (SD) safety scores of 4.20 (0.93) and 4.60 (0.75), respectively; 13 (8.7%) and 11 (7.3%) of these events, respectively, were determined to have the potential to create a level 2 patient safety event following EM-to-IP handoff, substantially higher compared with the physician-written summaries (0%). 
All of the 5 hallucinations had patient safety scores between 4 and 5 and a mean (SD) score of 4.96 (0.14), which is defined as the hallucinations posing mild to no patient safety risk. LLM-generated notes demonstrated a higher rate of incorrectness (9.6%) compared with the physician-written notes (2.0%), although very few hallucinations.\n\nICC were 0.79 for completeness, 0.70 for curation, 0.59 for readability, 0.76 for correctness, and 0.74 for usefulness. These numbers suggest good reliability of agreement for completeness, curation, correctness, and usefulness and suggest fair reliability for readability among the 3 raters.\n\n## **Discussion**\n\nThe study demonstrated success in generating EM-to-IP handoff notes using both a fine tuned, pretrained LLM and rule-based approaches within an end user–developed note template. It is important to note that (largely due to time constraints within the EM care delivery model) the performance of EM-to-IP handoff notes was not the current standard of care in EM. The study site's unique electronic handoff process enabled a comparison between physician-written and LLM-generated handoff notes. Traditional automated evaluations of the model output suggested\n\n| | | | Table 3. Mean Clinical Quality Evaluation, Large Language Model (LLM)–Generated and Physician-Written | | | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | LLM-generated | | | | | | Physician-written | | | | | |\n| | | | Likert rating 1-5, No. (%)a | | | | | | Likert rating 1-5, No. 
(%)a | | | |\n| Criteria | Mean score (SD) | 1 | 2 | 3 | 4 | 5 | Mean score (SD) | 1 | 2 | 3 | 4 | 5 |\n| Completeness | 4.00 (0.88) | 0 | 12 (8) | 31 (20.7) | 69 (46) | 38 (25.3) | 4.16 (0.84) | 0 | 3 (2) | 31 (20.7) | 48 (32) | 68 (45.3) |\n| Curation | 4.24 (0.58) | 0 | 1 (0.7) | 13 (8.7) | 85 (56.7) | 51 (34) | 4.76 (0.48) | 0 | 0 | 6 (4) | 39 (26) | 105 (70) |\n| Readability | 4.00 (0.64) | 0 | 8 (5.3) | 17 (11.3) | 87 (58) | 38 (25.3) | 4.64 (0.49) | 0 | 0 | 5 (3.3) | 38 (25.3) | 107 (71.3) |\n| Correctness | 4.52 (0.64) | 0 | 0 | 13 (8.7) | 39 (26) | 98 (65.3) | 4.90 (0.39) | 0 | 0 | 2 (1.3) | 12 (8) | 136 (90.7) |\n| Usefulness | 4.04 (0.86) | 0 | 12 (8) | 30 (20) | 59 (39.3) | 49 (32.7) | 4.36 (0.71) | 0 | 5 (3.3) | 13 (8.7) | 53 (35.3) | 79 (52.7) |\n\na Likert scores and score distributions over 50 notes for 3 annotators. There are no 1 ratings for either physician or LLM summaries in the 150 evaluation results.\n\n#### Table 4. Mean Clinical Safety Evaluation, Large Language Model (LLM)–Generated and Physician-Written\n\n| | LLM-generated | | | | | | Physician-written | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | Likert score 1-5, No. (%)a | | | | | | Likert score 1-5, No. 
(%)a | | | |\n| Criteria | Mean (SD) | 1 | 2 | 3 | 4 | 5 | Mean (SD) | 1 | 2 | 3 | 4 | 5 |\n| Completeness | 4.20 (0.93) | 0 | 13 (8.7) | 19 (12.7) | 58 (38.7) | 60 (40) | 4.50 (0.65) | 0 | 0 | 17 (11.3) | 43 (28.7) | 90 (60) |\n| Curation | 4.82 (0.32) | 0 | 1 (0.7) | 3 (2) | 21 (14) | 125 (83.3) | 4.90 (0.31) | 0 | 0 | 3 (2) | 8 (5.3) | 139 (92.7) |\n| Readability | 4.74 (0.37) | 0 | 1 (0.7) | 6 (4) | 23 (15.3) | 120 (80) | 4.94 (0.14) | 0 | 0 | 0 | 10 (6.7) | 140 (93.3) |\n| Correctness: hallucination | 4.96 (0.14) | 0 | 0 | 0 | 5 (3.3) | 145 (96.7) | 5.00 | 0 | 0 | 0 | 0 | 150 (100) |\n| Correctness: knowledge gap | 4.88 (0.48) | 0 | 3 (2) | 2 (1.3) | 6 (4) | 139 (92.7) | 4.90 (0.42) | 0 | 1 (0.7) | 5 (3.3) | 3 (2) | 141 (94) |\n| Correctness: faulty logic | 4.60 (0.75) | 0 | 11 (7.3) | 12 (8) | 13 (8.7) | 114 (76) | 4.94 (0.24) | 0 | 0 | 2 (1.3) | 2 (1.3) | 146 (97.3) |\n| Correctness: bias | 5.00 | 0 | 0 | 0 | 0 | 150 (100) | 5.00 | 0 | 0 | 0 | 0 | 150 (100) |\n| Overall safety risk | 4.06 (0.86) | 0 | 11 (7.3) | 27 (18) | 60 (40) | 52 (34.7) | 4.50 (0.56) | 0 | 1 (0.7) | 16 (10.7) | 41 (27.3) | 92 (61.3) |\n| | | | | | a Likert scores and score distributions over 50 notes for 3 annotators. There are no 1 ratings for either physician or AI summaries in the 150 evaluation results. | | | | | | | |\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 7/12", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed8.pdf" - }, - { - "text": "superior performance. 
However, while the manual clinical evaluation demonstrated the majority of the LLM-generated notes were of promising comparative quality (scores of 4-5), they were, on average, inferior to the clinician-written notes.\n\nOur novel clinical evaluation's findings suggest the majority of identified quality limitations and incorrectness would have minimal impact on patient safety, even when extrapolated to the worstcase scenario of the LLM-generated summary content not being reviewed and edited by a clinician before completion. This was designed to address contemporary LLM concerns of user trust, reliance and expertise.49 As such, none of the incorrect output text elements reached life-threatening risk. However, incompleteness and faulty logic identified in the automated summaries were not always negligible, with just under 1 in 10 of these performance gaps determined to have the potential to create significant patient safety risk compared with the physician-written summaries. These critical implementation safety findings will inform (1) directionality of further model refinement; (2) further clinical evaluation of postrefinement model output; and (3) irrespective of downstream model performance, an EHR-implementation plan constrained to a user-interface design that will allow EM clinicians to review and edit the LLM-generated handoff note as a draft before finalizing (see eAppendix 1 in Supplement 1). This physician-in-the-loop process has also been identified as critical in other recent work implementing LLMs into clinical workflows.29,53\n\nWhile the automated methods of SCALE and MPNet-based sentence transformers demonstrated a cursory view of the faithfulness performance of the models, the clinical evaluation provided the nuanced context of the true factuality of our system on a word by word level. 
When comparing with the source notes, the automatic evaluations rewarded the summaries with more details, more semantic similarities, and more entailment logics, while physician-written notes tended to be more concise with more shortcuts and clinical jargon, which are penalized by automatic evaluation metrics. In addition, LLM-generated summaries are completely based on the source notes, while physician-written summaries are often composed with additional knowledge that cannot be found from the source notes.\n\nThe divergence of the automated and clinical evaluation results of an LLM intended for integration into a critical clinical workflow is an important finding. First, this observed finding validates the importance of clinical evaluations in addition to conventional automated evaluations to determine accuracy.54 While other LLM clinical evaluation frameworks have been described to measure conventional model output quality categories (such as incorrectness domains and other performance gaps),30,35 to our knowledge, our novel framework is the first to incorporate anticipated patient safety implications for each individual category deficiency.\n\n### **Limitations**\n\nThere were several limitations to the study that were primarily driven from constraints of infrastructure, as well as regulations, legal governance, and labor requirements. At the study location, the data were required to remain on premise at all times and the infrastructure that was provided had a GPU limitation of 24 GB. Given these infrastructure restrictions, the best open-source model available during the study was LLM 2. Furthermore, we were not able to demonstrate the comparable difference between our fine-tuned LLM 2 model and third party LLMs32,55 because of the study location's restrictions and concerns with the data retention policies. 
Nevertheless, our study demonstrates the potential capability of integrating state-of-the-art open source LLMs at organizations that are less open to integrating third-party LLMs.\n\nWhile the dataset was smaller, we made significant efforts to reduce model variance and prevent overfitting by allocating more data to the training cohort and using k-fold cross validation. And while our ratio split choice implies the testing results will have slightly greater variance than expected, this is mitigated through the extensive manual clinical assessment that was performed. The study's multidimensional clinical evaluation was labor intensive, requiring more than 200 hours from expert informaticists and quality trained clinician experts to both curate the dataset of 1600\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 8/12", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed8.pdf" - }, - { - "text": "LLM-model training, an informatics professional (V.H.) worked over a period of 200 hours with 3 board certified emergency medicine physician leaders with experience in formal quality and patient safety review processes (M.M., A.F., and P.S.) to improve the dataset through manual curation and annotation. As the task of EM-handoff note generation is not dependent on racial characteristics of the patients, we removed all mentions of race during the annotation stage as a means to avoid race bias; therefore, the model was trained to generate text without race-based assumptions. Although resource intensive, a small and carefully curated dataset of at least 1000 examples has been shown to be sufficient to produce remarkable results for the language model chosen.42 Given the size of our dataset, we created a train and test dataset with a ratio of 1500:100, with a higher ratio of data placed in the training set and eschewed a validation set to lower the variance of the models. 
We used k-fold cross validation on the training dataset to avoid sampling bias for the hyperparameter optimization of the LLMs.\n\n### **Models**\n\nFor this study, we chose the LLMs Robustly Optimized BERT Approach (RoBERTa; hereafter referred to as LLM 1)43 for saliency content selection and Large Language Model Meta AI 2 (Llama-2; hereafter referred to as LLM 2) 7B44 for abstractive summarization. Further information about the models and technology specifications is provided in detail in eAppendix 1 in Supplement 1.\n\n#### **Data Processing**\n\nAs LLM 2 only has a context size of 4096 tokens,44 we used 2 steps to process the EM notes to both shorten the input size while maintaining content salience. First, we adopted a number of heuristic strategies for prioritization and filtration: (1) clinical note types (hierarchy presented in Table 1), (2) time of authorship, and (3) duplicate sentence detection. Second, we used an LLM 1–based saliency model to infer EM note sentences based on likelihood of content contribution to the EM-to-IP handoff notes.\n\n#### **Model Training and Inference**\n\nOur summarization model is a fine-tuned decoder-only causal language model based on LLM 2. We used different prompts for the separate types of summarization: HPI and EM handoff. Additional information about the model training and inference process is provided in eAppendix 1 in Supplement 1.\n\nUsing a combination of generative AI powered by our fine-tuned LLM 2 model and a set of heuristic rules, our summarization system produced ED handoff notes with various sections for downstream clinical tasks. The inference process is shown in the **Figure**.\n\n| | Table 1. 
Types of Data Included From the Emergency Department (ED) Patient Electronic Health Recorda |\n| --- | --- |\n| Type of data | Description |\n| Descriptive | Date of birth, medical record number, encounter number, and total time of stay in ED |\n| Encounter | ED arrival date and time, IP admit date and time |\n| Laboratory tests | Examples: hemoglobin, hematocrit, white blood cell count, neutrophil count, platelets, sodium, |\n| (all results available) | potassium, chloride, bicarbonate, creatinine, blood urea nitrogen, troponin, D dimer, lactate, |\n| | urinalysis, ketone, blood, nitrite, leucocytes, and red blood cells |\n| Laboratory tests | Examples: β-human chorionic gonadotropin hormone, all serum drug levels (alcohol level, |\n| (only if abnormal) | salicylate level, Tylenol level), magnesium, lipase, and erythrocyte sedimentation rate |\n| Notes (in order of | EM clinician notes, consultation notes, EM progress notes, and EM procedure notes |\n| hierarchy) | |\n| Vitals | Height, weight, temperature, heart rate, blood pressure, and peripheral capillary |\n| | oxygen saturation |\n| Orders | Medications, consults, and radiology results |\n\nAbbreviations: EM, emergency medicine; IP, inpatient.\n\na Automated EM handoff notes are generated from the curation of the data through both rule-based and large language model–summarization approaches.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 4/12", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed8.pdf" - }, - { - "text": "subsequently evaluated 2 ED-to-inpatient handoff notes for each patient: (1) the physician-written note and (2) the LLM-generated note.\n\nOn a Likert scale of 1 to 5, where 1 is unacceptable and 5 is excellent, the 3 physicians rated the completeness, curation, readability, and correctness of the summary as shown in eTable 1 in Supplement 1. 
Physicians rated the usefulness of the summary, defined as the capability of the summary being incorporated into a workflow where a physician would make edits before final completion, mitigating potential future self-referential learning loops and the downstream adverse consequences.51 Likewise, the raters assessed potential patient safety implications of unmitigated model errors using a scale from 1 to 5, where 1 denotes life-threatening risks and 5 denotes no identified patient safety risk for completeness, curation, readability, and the 4 subcategories within correctness (hallucination, faulty logic, knowledge gap, and bias), as well as the overall patient safety risk.45 Evaluators arrived at prestudy consensus that a usefulness Likert score of at least a 3 out of 5 indicated that the LLM-generated summary likely demonstrated baseline acceptability for such a workflow. To extrapolate a theoretical worst case scenario, the physicians rated the safety of the LLM-generated summary as defined as the capability of the summary to fully replace a physicianwritten note (unmitigated).\n\nTo improve consistency and agreement, the 3 reviewers met to familiarize themselves with the framework and evaluated 10 separate cases from the test dataset that were not included in the clinical evaluation results. Additionally, after independently scoring the summaries, they met to ensure consensus interpretation of the multidimensional scoring framework. Interrater reliability was calculated using intraclass correlation coefficient (ICC), using a 2-way random effects model for consistency with the Pingouin statistical package version 0.5.4 in Python (Python Software Foundation). 
The ICC measures the similarity of the 3 raters to confirm the consistency and validity of the evaluation protocol; the scores are from 0 to 1, where 1 indicates unanimous agreement and 0 represents no agreement.52 Data were analyzed from October 2023 to March 2024.\n\n## **Results**\n\n#### **Automated Tasks**\n\nOf 1600 patients, the mean (SD) age was 59.8 (18.9) years and 832 (52%) were female. In **Table 2**, ROUGE and BERTScore compare the summaries with the testing set from our annotations, and SCALE score compares the summaries with the source notes. From automatic evaluation results, we observed that LLM-generated summaries had better scores than the physician summaries, such that ROUGE-2 was 0.322 vs 0.088, BERT-precision was 0.859 vs 0.796, and SCALE was 0.691 vs 0.456, suggesting the LLM-generated summaries were more similar and more detailed than the physician summaries.\n\n### **Clinical Evaluation Tasks**\n\nThe clinical evaluation results for LLM-generated summaries and physician-written summaries are shown in **Table 3** and **Table 4**. The mean clinical quality scores of the automated summaries are in a comparable range (4-5) to those of the physician summaries. However, the automated summaries were observed to be of lower quality compared with the physician-written summaries with regards to mean (SD) usefulness (4.04 [0.85] vs 4.36 [0.71]), completeness (4.00 [0.88] vs 4.16 [0.84]),\n\n| | Table 2. 
Automated Evaluation Scores, Large Language Model (LLM)–Generated and Physician-Written | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Summary type | R-1a | R-2a | R-La | BERT-p | BERT-r | SCALE |\n| LLM-generated | 0.494 | 0.322 | 0.391 | 0.859 | 0.876 | 0.691 |\n| Physician-written | 0.251 | 0.088 | 0.154 | 0.796 | 0.827 | 0.456 |\n\nAbbreviations: BERT, bidirectional encoder representations from transformers; p, precision-based scores; r, recall-based scores; R, recall-oriented understudy for gisting evaluation; SCALE, source chunking approach for large-scale inconsistency evaluation.\n\na R-1, R-2, R-L are the 3 types of recall-oriented understudy for gisting evaluation scores. Higher is better for all metrics.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 6/12", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed8.pdf" - }, - { - "text": "In contrast to routers motivated by controlling costs, several LLM router designs focus solely on improving quality of responses [31, 45, 57, 58].\n\nThe LLM routers described thus far do not modify the queries or individual LLM responses. Other types of control planes do. Ensemble approaches such as mixture-of-expert (MoE) [29, 30, 52, 56] architectures select a subset of underlying models to apply to each token of a query and merge their responses. LLM synthesis [40] architectures operate similarly, but route the entire query to a subset of underlying LLMs and merge their responses. These approaches reduce inference costs by using fewer and/or less complex underlying models.\n\nApplications of LLM routers. A key use case for LLM routers is to help LLM-based application reduce cost. Several commercial routers, including Unify [12], Martian [5], NotDiamond [7], and others, offer this as a service. By replacing a few lines of code, the application can send user queries to a router service, rather than directly to some LLM provider. 
The service selects the optimal LLM and forwards the queries. Commercial router services claim that this results in significant cost savings: up to 98% in the case of Martian [5], and 10× in the case of NotDiamond [7].\n\n### 3 LLM Control Plane Integrity\n\nIn this section, we define *LLM control plane integrity*. Informally, it means that decisions made about underlying LLM queries made by the control plane algorithms cannot be subverted by adversarial queries. Looking ahead, we will focus on one class of control plane: predictive LLM routing as used to manage cost.\n\nFormalizing control planes. An LLM control plane Rω is a potentially randomized algorithm. It is parameterized by a string ω, called the parameters. It utilizes some number n of LLMs denoted by M. We will mostly focus on the case of n = 2, and, for reasons that will be clear in a moment, use Ms (\"strong\") and Mw (\"weak\") to denote the two underlying LLMs. Then inference on an input x ∈ X for some set X of allowed queries is performed by computing a response via y ←$ RMω (x). Here we use ←$ to denote running R with fresh random coins; we use ← when R is deterministic. We focus on inference for a single query, but it is straightforward to extend our abstraction for control planes to include sessions: the controller would maintain state across invocations, potentially adapting its behavior as a function of a sequence of queries and responses.\n\nLLM control planes should, in general, be relatively computationally lightweight, at least compared to the underlying LLMs. This is particularly so in the cost-motivated usage of control planes, as a computationally or financially expensive control plane would eat into cost savings incurred by utilizing cheaper underlying LLMs for some queries. For example, predictive binary routers use relatively simple classifiers to determine which of Ms or Mw should be used to respond to a query.\n\nInference flow. 
Given a set of LLMs M, a control plane Rω, and an input x, an LLM inference flow is the sequence of LLM invocations Mij (zj ) for 1 ≤ j ≤ m and ij ∈ {w, s} made when executing RMω (x). Here m is the total number of LLM invocations, and z1, . . . , zm are the queries made to the underlying LLMs. Should R be randomized, the sequence and its length are random variables. An inference flow can be written as a transcript\n\n$$T=(i_{1},z_{1}),(i_{2},z_{2}),\\ldots,(i_{m},z_{m})$$\n\nof pairs of model indexes ij ∈ {w, s} and model inputs zj . Note that for simplicity we ignore the potential for parallelization, assuming execution proceeds serially. For binary routers, we have m = 1 and T ∈ {(w, x),(s, x)}. We write submitting a sequence of inferences ⃗x = ⃗x1, . . . , ⃗xq to a control plane as\n\n$$R_{\\omega}^{\\mathcal{M}}(\\vec{x})=(R_{\\omega}^{\\mathcal{M}}(\\vec{x}_{1}),\\ldots,R_{\\omega}^{\\mathcal{M}}(\\vec{x}_{q}))$$\n\nwhere note that each invocation could result in multiple underlying LLM invocations. In the binary router case, however, each invocation results in a single LLM invocation.\n\nAn *inference flow policy* dictates the control plane designer's intention regarding use of the underlying models. For example, an application may want to ensure that only a small fraction of queries go to the expensive model Ms. We can define this as a predicate over a sequence of transcripts. In our binary router example, the policy can be more simply defined as a predicate P over (input, model) pairs (⃗x1, i1), . . . ,(⃗xq, iq) since this fully defines the sequence of transcripts. 
For example, a policy might specify that the strong model is used in at most an ϵ fraction of inferences:\n\n$${\\mathcal{P}}(({\\vec{x}}_{1},i_{1}),\\ldots,({\\vec{x}}_{q},i_{q}))=\\left(\\sum_{j=1}^{q}{\\frac{\\mathbb{I}(i_{j})}{q}}\\leq\\epsilon\\right)$$", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv1.pdf" - }, - { - "text": "**Note:** Make sure that your PC or notebook has a network route to the system IP address that you specified. In particular, you can access the management GUI from any management console that is connected to the same subnet as the system. Enter the system IP address on a supported browser to access the management GUI.\n\n# **4.3 System setup**\n\nThis section provides instructions about how to define the basic settings of the system with the system setup wizard, and how to add nodes and optional expansion enclosures.\n\n# **4.3.1 System setup wizard**\n\nWhether you are redirected from your PC or notebook after completing system initialization or you browse to the management IP address manually, you must complete the system setup wizard to define the basic settings of the system.\n\n**Note:** The first time that you connect to the management GUI, you are prompted to accept untrusted certificates because the system certificates are self-signed.\n\nYou can install certificates that are signed by a trusted certificate authority after you complete system setup. For more information about how to perform this task, see 4.5, \"Configuring secure communications\" on page 117.", - "page_start": 113, - "page_end": 113, - "source_file": "sg247938.pdf" - }, - { - "text": "**Note:** ACIF exits are called for every input, indexing, output, and resource record. ACIF exits are not limited to being called only one time for each file.\n\nIn Multiplatforms, ACIF user exits must be written in C. In z/OS, ACIF user exits can be written in C, COBOL, or assembler. 
For more information, see the \"Special considerations for APKACIF exits written in COBOL\" section in the IBM Content Manager OnDemand for z/OS, V9.0, Administration Guide, SC19-3364. ACIF exits do not exist in Content Manager OnDemand for IBM i.\n\nFor detailed documentation about each exit point, see IBM Content Manager OnDemand for Multiplatforms - Indexing Reference, SC19-3354, and IBM Content Manager OnDemand for z/OS and OS/390 - Indexing Reference, SC27-1375.\n\n# **11.2.1 New macro for user exits**\n\nBecause the default installation directory changed for Content Manager OnDemand V9, the **arsload** program supports a new macro to make user exits more portable.\n\nFor example, instead of specifying the exit as\n\n**INPEXIT=/opt/IBM/ondemand/V9.0/exits/acif/asciinpe**, specify the following items in the indexing parameters:\n\nINPEXIT=$(OD_ACIF_EXIT_DIR)asciinpe\n\nThe **arsload** program substitutes the correct path for the platforms.\n\nThis macro works for all four ACIF user exits. The macro is not supported if ACIF is run outside of the **arsload** program.\n\n# **11.2.2 Input record exit**\n\nACIF provides the input record exit so that you can add, delete, or modify records in the input file before they are processed by ACIF. The primary purpose of this exit is to modify input records before ACIF accesses the records. The exit program is started by the ACIF **inpexit** parameter.\n\nThe input exit can be used to insert indexing information. More common uses are to remove null characters, truncate records, add carriage control, and change code pages. In general, indexer parameters need to reflect what the input record looks like *after* the input exit runs. The only exception is the **FILEFORMAT** indexer parameter, which needs to correspond to the input record *before* it is passed to the input exit. 
For example, if the file contains ASCII data and uses the ASCII stream delimiter x'0A', specify (NEWLINE=x'0A'), not (NEWLINE=x'25'), even if you use the **apka2e** exit to convert ASCII to EBCDIC. Otherwise, ACIF does not pass the correct record to the **apka2e** input exit.\n\nContent Manager OnDemand provides three input record exits:\n\n- **apka2e**\n- **asciinp**\n- **asciinpe**\n\nYou can either use these input record exits as samples to build from, or you can compile them and run them as is. These programs are documented in IBM Content Manager OnDemand for Multiplatforms - Indexing Reference, SC18-9235, and are described briefly in the following sections.", - "page_start": 266, - "page_end": 266, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv5_ccby4license.pdf", - "query": "What company released MegatronLM ?", - "target_page": 2, - "target_passage": "NVIDIA released the MegatronLM", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "| | RSW | RMF | RCLS | RLLM |\n| --- | --- | --- | --- | --- |\n| MT-Bench | 100 | 100 | 100 | 100 |\n| MMLU | 100 | 96 | 100 | 100 |\n| GSM8K | 100 | 100 | 100 | 100 |\n\nTable 8: Upgrade rates for query-specific gadgets, in the white-box setting. Results are nearly perfect, i.e. nearly all confounded queries are routed to the strong model.\n\n| Surrogate | RˆSW | | | RˆMF | | | RˆCLS | | | RˆLLM | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Target | RMF RCLS | | RLLM | RSW RCLS | | RLLM | RSW SFM | | RLLM | RSW RMF | | RCLS |\n| MT-Bench | 100 | 83 | 71 | 100 | 83 | 48 | 100 | 73 | 52 | 100 | 67 | 83 |\n| MMLU | 96 | 57 | 89 | 95 | 43 | 83 | 74 | 13 | 83 | 77 | 11 | 30 |\n| GSM8K | 100 | 68 | 74 | 100 | 73 | 68 | 81 | 65 | 70 | 88 | 54 | 64 |\n\nTable 9: Upgrade rates for query-specific gadgets, in the black-box setting. 
In most cases results are better than in the query-independent setting, at the cost of a more resource intensive process.\n\neach query using a neural scoring function that was trained on prompts from several open datasets (e.g., Open Hermes [62]) and labeled using an LLM-as-a-judge [71].\n\nFor our evaluation, we configure the router to choose between GPT-4o [2] as the strong model and Mixtral 8x7B [39] as the weak model. We focus on the cost and quality metrics, and set the weight of time and latency to 0 so that they are not factored into routing decisions. We manually calibrate the weights to 1 for the quality metric and 0.02 for the cost metric. These weights result in 49% of the original, unmodified queries being routed to the strong model and 51% to the weak model, resulting in a total cost of $0.13 for the 72 MT-bench queries. Adding confounder gadgets generated for the four open-sourced evaluated routers results in upgrade rates of 79%, 88%, 91%, and 89%, respectively, averaged across 10 gadgets. The downgrade rate is zero in all cases. In terms of costs, the addition of the confounder gadget increased the cost to $0.22, $0.23, $0.22, and $0.21, respectively, averaged across 10 gadgets. In other words, the rerouting attack increased the cost of processing the queries, on average, by a factor of 1.7×.\n\nNotDiamond. This router lets users route their queries to a list of predefined models. Available objectives are to maximize quality, or balance quality and cost, or balance quality and latency. The exact details of the routing logic are not specified. We focus on cost-aware routing, for which the API docs state that \"NotDiamond will automatically determine when a query is simple enough to use a cheaper model without degrading the quality of the response.\" NotDiamond provides a router selection tool which gives the routing decision for a particular query without forwarding the query to the chosen model (thereby incurring no costs). 
We use this for our evaluation—of course a real attack would target the NotDiamond API when used for actual routing.\n\nSimilar to the Unify experiments, we set GPT-4o as the strong model and Mixtral-8x7b as the weak model. Cost-aware routing routes 82% of the original queries to the strong model, 18% to the weak model. Confounded queries generated for RSW , RMF , RCLS, and RLLM achieve upgrade rates of 21%, 18%, 21%, and 15%, respectively. The downgrade rates are 1–3%.\n\nAs opposed to our calibrated routers, NotDiamond aggressively routes to the stronger model even for unmodified queries in most settings. We tried several strong/weak model pairs including GPT-4o/Mistral-7B-Instruct-v0.2, GPT-4o/GPT-4omini, and Claude-3-Opus/Claude-3-Sonnet, and observed a similar 20%–80% split between strong and weak.\n\nWhen we changed the strong model to OpenAI's o1-mini and kept Mixtral-8x7b as the weak model, 54% of the original queries were routed to the strong model, 46% to the weak model. In this setting, confounder gadgets yield 13–16% upgrade rates and, on average, 3–6% downgrade rates. We conclude that while the attack is still effective, NotDiamond is more robust than Unify.\n\nOpenRouter. This framework offers a unified interface for LLMs, and additionally offers a system that routes users' queries between three specific models: Llama-3-70b, Claude-3.5-Sonnet, and GPT-4o. Queries are routed \"depending on their size, subject, and complexity,\" as described in the documentation.2\n\nWith OpenRouter, 96% of the original queries are routed to Llama, 4% to GPT, and none to Claude. Based on the pricing and number of input-output tokens, the queries' total cost is $0.03 for processing all evaluated queries. After adding\n\n2 https://openrouter.ai/openrouter/auto", - "page_start": 12, - "page_end": 12, - "source_file": "arxiv1.pdf" - }, - { - "text": "energy will be produced. 
The cost for re-opening and upgrading is estimated at $1.6 billion (US) and is dependent on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act. [210] The US government and the state of Michigan are investing almost $2 billion (US) to reopen the Palisades Nuclear reactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO of Exelon who was responsible for Exelon spinoff of Constellation.[211]\n\nAfter the last approval in September 2023, Taiwan suspended the approval of data centers north of Taoyuan with a capacity of more than 5 MW in 2024, due to power supply shortages.[212] Taiwan aims to phase out nuclear power by 2025.[212] On the other hand, Singapore imposed a ban on the opening of data centers in 2019 due to electric power, but in 2022, lifted this ban.[212]\n\nAlthough most nuclear plants in Japan have been shut down after the 2011 Fukushima nuclear accident, according to an October 2024 *Bloomberg* article in Japanese, cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near nuclear power plant for a new data center for generative AI.[213] Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheap and stable power for AI.[213]\n\nOn 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center. [214] According to the Commission Chairman Willie L. Phillips, it is a burden on the electricity grid as well as a significant cost shifting concern to households and other business sectors.[214]\n\n#### **Misinformation**\n\nYouTube, Facebook and others use recommender systems to guide users to more content. 
These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation.[215] This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.[216] The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem .\n\nIn 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.[217] AI pioneer Geoffrey Hinton expressed concern about AI enabling \"authoritarian leaders to manipulate their electorates\" on a large scale, among other risks.[218]\n\n#### **Algorithmic bias and fairness**\n\nMachine learning applications will be biased[k] if they learn from biased data.[220] The developers may not be aware that the bias exists.[221] Bias can be introduced by the way training data is selected and by the way a model is deployed.[222][220] If a biased algorithm is used to make decisions that can seriously", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia3.pdf" - }, - { - "text": "## Understanding Our Business\n\nRogers Communications is one of Canada's leading diversified communications and media companies.\n\n**Our vision** is to be known for leading the enablement and delivery of seamless, customer-driven communications, entertainment, information and transactional 
experiences across any device, place or time.\n\n**Wireless** provides wireless voice and data communication services, including machine to machine to both consumer and enterprise businesses, governments and other telecommunications service providers. **Cable** provides voice and data communications, home monitoring, television and high-speed Internet services to both consumers and businesses. **Business Solutions** provides voice and data communications and advanced services including data centre based solutions and cloud computing services to a wide range of medium to large businesses, including other service providers, and government either wirelessly or over our terrestrial network. Revenue generated from these segments is generally based on monthly subscription and network usage rates. Costs include attracting, setting-up and retaining customers, content, and the costs of upgrading and maintaining the underlying network.\n\nOur wireless network is currently one of the most extensive and advanced independent high-speed wireless data networks in Canada, capable of supporting wireless services on smartphones, tablets, computers and a broad variety of machine-to-machine and specialized devices. We built the first Long Term Evolution (LTE) high speed network in Canada, reaching nearly 73% of the Canadian population at December 31, 2013. We also have roaming agreements with international carriers in more than 200 other countries, including 5 LTE roaming operators and have network sharing arrangements with several carriers in Canada.\n\nOur expansive fibre and hybrid fibre coaxial infrastructure delivers services to consumers and businesses in Ontario, New Brunswick and Newfoundland. We also operate a North American transcontinental fibre-optic network that extends over 41,000 route kilometres that is used to serve enterprise customers, including government and other telecommunications service providers. 
In Canada, the network extends coast to coast and includes local and regional fibre, transmission electronics and systems, hubs, POPs and IP Routing and switching infrastructure. The network also extends to the US, from Vancouver south to Seattle, from the Manitoba-Minnesota border through Minneapolis, Milwaukee and Chicago, and from Toronto, through Buffalo, and Montreal, through Albany, to New York City, allowing us to connect Canada's largest markets, while also reaching key US markets for the exchange of data and voice traffic.\n\n**Media** provides television and radio broadcasting services to end customers over both traditional broadcast networks and new digital networks as well as multi-platform shopping, consumer and trade publications and sports media and entertainment experiences, primarily through its ownership of the Toronto Blue Jays. Revenue is largely driven by advertising and, in the case of TV broadcasting and publishing by additional revenues from monthly subscriptions. Revenue is also generated by the sale of merchandise and event tickets. 
Costs include sports programming, broadcast content (including TV studios, writers and on air and on field talent), the cost of merchandise and the production costs associated with each medium.\n\nWe report our results of operations in four segments, which reflect how we manage our operations and measure our performance.\n\n**WIRELESS** see page 37\n\n#### **CABLE**\n\nsee page 41\n\nCanada's largest provider of wireless communications services.\n\n## One of Canada's leading\n\nproviders of cable television, high-speed Internet and cable telephony services to consumers and businesses.\n\n#### **BUSINESS SOLUTIONS** see page 45\n\nProvides Canadian enterprises, government and other telecommunications service providers and partners with highly reliable network and data centre solutions.\n\n#### **MEDIA** see page 47\n\nA diversified Canadian media company that engages in television and radio broadcasting, multi-platform shopping, publishing, digital, and sports media and entertainment.", - "page_start": 32, - "page_end": 32, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "For every Remote Copy relationship that is created on an IBM Spectrum Virtualize system, a bitmap table is created to track the copied grains. By default, the system allocates 20 MiB of memory for a minimum of 2 TiB of remote copied source volume capacity.\n\nEvery 1 MiB of memory provides volume capacity for the specified I/O group. 
So, for 256 KiB grains size, 2 TiB of total Metro Mirror, Global Mirror, or active-active volume capacity is provided.\n\nSee Table 11-14 to calculate the memory requirements and confirm that your system is able to accommodate the total installation size.\n\n| Minimum allocated | Default allocated | Maximum allocated | Minimum |\n| --- | --- | --- | --- |\n| bitmap space | bitmap space | bitmap space | functionality when using the default |\n| | | | values1 |\n| 0 | 20 MiB | 512 MiB | 40 TiB of remote |\n| | | | mirroring volume |\n| | | | capacity |\n| 1Remote copy includes Metro Mirror, Global Mirror, and active-active relationships. | | | |\n\n*Table 11-14 Memory allocation for FlashCopy services*\n\nWhen you configure change volumes for use with Global Mirror, two internal FlashCopy mappings are created for each change volume.\n\nFor Metro Mirror, Global Mirror, and HyperSwap active-active relationships, two bitmaps exist. For MM/GM relationships, one is used for the master clustered system and one is used for the auxiliary system because the direction of the relationship can be reversed. For active-active relationships, which are configured automatically when HyperSwap volumes are created, one bitmap is used for the volume copy on each site because the direction of these relationships can be reversed.\n\nMM/GM relationships do not automatically increase the available bitmap space. You might need to use the **chiogrp** command to manually increase the space in one or both of the master and auxiliary systems.\n\nYou can modify the resource allocation for each I/O group of an IBM SAN Volume Controller system by opening the **Settings** → **System** panel and clicking the **Resources** menu, as shown in Figure 11-154.\n\n| Dashboard | Date and Time | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | | Resources | | | | |\n| Monitaring | Licensed Functions | Memory limits for | 1/0 Group 0 | . 
| | Save Reset |\n| H Pocls | | FlashCopy | | 20 | MiB (0.1 MiB in use) | |\n| | Update System | Total memory: | 20 of 2048 MiB | | | |\n| 00 Volumes | VVOL | Remote Mirroring | | 20 | MIB (0.1 MiB in use) | |\n| E Hosts | Resources | Volume Mirroring | | 20 | MiB (0.1 MiB in use) | |\n| | | RAID | | 40 | MiB (0 MiB in use) | |\n| Copy Services | IP Quorum | Total memory: | 80 of 552 MiB | | | |\n| | I/O Groups | | | | | |\n| DI Access | | | | | | |\n| 505 Settings | DNS | | | | | |\n| | Transparent Cloud Tiering | | | | | |\n\n*Figure 11-154 Modifying resources allocation*", - "page_start": 619, - "page_end": 619, - "source_file": "sg247938.pdf" - }, - { - "text": "# *people*\n\n**T**he worst of 2001 brought out the best in The Hartford's people.\n\nAs the world watched the horrors of Sept. 11, some 330 of our New York employees fled their offices in 7 World Trade Center. Though many were caught in the debris and dust from the nearby Twin Towers, all escaped safely.\n\nBy the time the 47-story 7 World Trade Center building collapsed at about 5:20 p.m., The Hartford had already arranged for temporary space in several of the company's other offices. Employees and suppliers immediately began working around the clock to get the business up and running again. Despite the destruction, back-up systems kept distributors' and customers' data secure.\n\nA hundred miles from Ground Zero, home office employees in Hartford, Conn., began shuttling equipment and supplies to our temporary offices. Some booked Long Island Sound ferries from Connecticut to Long Island within 48 hours of the attack. Others spent the weekend driving supplies to the new locations so employees could concentrate on customers instead of on finding pens and paper. Employees and suppliers were determined to get the company, its distributors and its customers through the crisis.\n\nBy Monday, Sept. 17, all of The Hartford's business units in New York were serving customers again. 
Employees had new furniture, phones, servers and PCs. Distributors' and customers' access to company e-mail was never interrupted. Calls to old phone numbers were rerouted to cell phones or new office phones. Print and radio ads—along with The Hartford's Web site gave customers instructions for filing claims quickly. Customer relationships were stronger than ever. The Hartford Experience—customer solutions, ease of doing business and extraordinary service—was never better demonstrated.", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "| vdisk_w_io | 0 | 0 | 181101215930 |\n| --- | --- | --- | --- |\n| vdisk_w_ms | 0 | 0 | 181101215930 |\n| mdisk_r_mb | 0 | 0 | 181101215930 |\n| mdisk_r_io | 0 | 0 | 181101215930 |\n| mdisk_r_ms | 0 | 0 | 181101215930 |\n| mdisk_w_mb | 1 | 2 | 181101215920 |\n| mdisk_w_io | 2 | 4 | 181101215900 |\n| mdisk_w_ms | 6 | 7 | 181101215720 |\n| drive_r_mb | 18 | 34 | 181101215825 |\n| drive_r_io | 77 | 140 | 181101215525 |\n| drive_r_ms | 4 | 13 | 181101215545 |\n| drive_w_mb | 708 | 752 | 181101215510 |\n| drive_w_io | 2800 | 2971 | 181101215510 |\n| drive_w_ms | 11 | 13 | 181101215855 |\n| power_w | 374 | 384 | 181101215555 |\n| temp_c | 24 | 24 | 181101215930 |\n| temp_f | 75 | 75 | 181101215930 |\n| iplink_mb | 0 | 0 | 181101215930 |\n| iplink_io | 0 | 0 | 181101215930 |\n| iplink_comp_mb | 0 | 0 | 181101215930 |\n| cloud_up_mb | 0 | 0 | 181101215930 |\n| cloud_up_ms | 0 | 0 | 181101215930 |\n| cloud_down_mb | 0 | 0 | 181101215930 |\n| cloud_down_ms | 0 | 0 | 181101215930 |\n| iser_mb | 0 | 0 | 181101215930 |\n| iser_io | 0 | 0 | 181101215930 |\n\nTable A-1 gives the description of the different counters that are presented by the **lssystemstats** and **lsnodecanisterstats** commands.\n\n| Value | Description |\n| --- | --- |\n| compression_cpu_pc | Displays the percentage of allocated CPU capacity that is used for |\n| | compression. 
|\n| cpu_pc | Displays the percentage of allocated CPU capacity that is used for the |\n| | system. |\n| fc_mb | Displays the total number of megabytes transferred per second for Fibre |\n| | Channel traffic on the system. This value includes host I/O and any |\n| | bandwidth that is used for communication within the system. |\n| fc_io | Displays the total I/O operations that are transferred per second for Fibre |\n| | Channel traffic on the system. This value includes host I/O and any |\n| | bandwidth that is used for communication within the system. |\n| sas_mb | Displays the total number of megabytes transferred per second for |\n| | serial-attached SCSI (SAS) traffic on the system. This value includes host |\n| | I/O and bandwidth that is used for background RAID activity. |\n| sas_io | Displays the total I/O operations that are transferred per second for SAS |\n| | traffic on the system. This value includes host I/O and bandwidth that is |\n| | used for background RAID activity. |\n| iscsi_mb | Displays the total number of megabytes transferred per second for iSCSI |\n| | traffic on the system. |\n\n*Table A-1 List of counters in lssystemstats and lsnodecanisterstats*", - "page_start": 767, - "page_end": 767, - "source_file": "sg247938.pdf" - }, - { - "text": "# Observations of Soft Gamma Ray Sources > 100 keV Using Earth Occultation with GBM\n\nG.L. Case, M.L. Cherry, J. Rodi\n\nDept. of Physics & Astronomy, Louisiana State Univ., Baton Rouge, LA 70803, USA\n\nA. Camero-Arranz\n\nFundaci´on Espa˜nola de Ciencia y Tecnolog´ıa (MICINN), C/Rosario Pino,14-16, 28020-Madrid, Spain\n\nE. Beklen\n\nMiddle East Technical University (METU), 06531, Ankara, Turkey\n\nC. A. Wilson-Hodge\n\nNASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP. Jenke\n\nNASA Postdoctoral Program Fellow, NASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP.N. Bhat, M.S. Briggs, V. Chaplin, V. Connaughton, R. 
Preece University of Alabama in Huntsville, Huntsville, AL 35899\n\nM.H. Finger\n\nUSRA, National Space Science and Technology Center, Huntsville, AL 35899\n\nThe NaI and BGO detectors on the Gamma ray Burst Monitor (GBM) on Fermi are now being used for long term monitoring of the hard X-ray/low energy gamma ray sky. Using the Earth occultation technique demonstrated previously by the BATSE instrument on the Compton Gamma Ray Observatory, GBM produces multiband light curves and spectra for known sources and transient outbursts in the 8 keV - 1 MeV band with its NaI detectors and up to 40 MeV with its BGO. Coverage of the entire sky is obtained every two orbits, with sensitivity exceeding that of BATSE at energies below ∼ 25 keV and above ∼ 1.5 MeV. We describe the technique and present preliminary results after the first ∼ 17 months of observations at energies above 100 keV. Seven sources are detected: the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105, and the transient source XTE J1752-223.\n\n### I. INTRODUCTION\n\nThe Gamma ray Burst Monitor (GBM) on Fermi is currently the only instrument in orbit providing nearly continuous full sky coverage in the hard X-ray/low energy gamma ray energy range. The Earth occultation technique, used very successfully on BATSE, has been adapted to GBM. An initial catalog of 64 sources is currently being monitored and continuously augmented. At energies above 100 keV, six steady sources (the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105) and one transient source (XTE J1752-223) have been detected in the first year of observation. We describe the instrument, outline the technique, and present light curves for the seven sources.\n\n## II. GBM AND THE EARTH OCCULTATION OBSERVATIONAL TECHNIQUE\n\nThe Gamma ray Burst Monitor is the secondary instrument onboard the Fermi satellite [1, 2]. 
It consists of 12 NaI detectors 500 in diameter by 0.500 thick mounted on the corners of the spacecraft and oriented such that they view the entire sky not occulted by the Earth. GBM also contains 2 BGO detectors 500 in diameter by 500 thick located on opposite sides of the spacecraft. None of the GBM detectors have direct imaging capability.\n\nKnown sources of gamma ray emission can be monitored with non-imaging detectors using the Earth occultation technique, as was successfully demonstrated with BATSE [3, 4]. When a source of gamma rays is occulted by the Earth, the count rate measured by the detector will drop, producing a step-like feature. When the source reappears from behind the Earths limb, the count rate will increase, producing another step. The diameter of the Earth seen from Fermi is ∼ 140◦ , so roughly 30% of the sky is occulted by the Earth at any one time. Coupled with the ±35◦ slewing of the pointing direction every orbit, this means that the entire sky is occulted every two orbits. With an altitude of 565 km, a period of 96 minutes, and an orbital inclination of 26.5 ◦ , individual occultation steps last for ∼10 seconds (Fig. 1).", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0955.pdf" - }, - { - "text": "cell death and apoptosis with more than 10 genes were examined. Filtered count data of expressed and nondifferentially expressed genes were used as a background.\n\n#### 2.8. 
Dorsal root ganglion culture\n\nDorsal root ganglia were dissected from MrgDCreERT2;Ai32 and CalcaCreERT2;Ai32 mice .1 week after dosing with tamoxifen and enzymatically digested at 37˚˚C for 80 minutes in dispase type II (4.7 mg/mL) plus collagenase type II (4 mg/mL) (Worthington Biochemical), as described previously.63 Mechanically dissociated cells were plated onto laminin/poly-D-lysine (R&D Systems, Minneapolis, MN) treated coverslips in complete Neurobasal Plus medium (Neurobasal Plus media supplemented with 2% (vol/vol) B27 Plus, 1% N2, 1% Glutamax, and 1% antibiotic–antimycotic [ThermoFisher Scientific, Waltham, MA]). Mouse nerve growth factor (GF) (50 ng/mL; nerve growth factor (NGF), PeproTech, Cranbury, NJ) and 10 ng/mL glial-derived neurotrophic factor (GDNF, PeproTech) were added to the media under some conditions. Cytosine b-D-arabinofuranoside (4 mM) was added to the media for 24 hours the day after plating to reduce the proliferation of nonneuronal cells. Media was refreshed 3 times per week thereafter. Cultures were fixed for 10 minutes at room temperature with 4% paraformaldehyde and subsequently processed by immunocytochemistry (described earlier).\n\n#### 2.9. Statistical analysis\n\nData are expressed as mean 6 SEM unless otherwise specified, and P values of less than 0.05 were considered significant. Power calculations were performed using G*Power 3.1.9.7.15 A quantitative Venn diagram was created using BioVenn.25 All other statistical analyses were performed in Prism 10 (GraphPad Software, Inc, Boston, MA) or R using paired t tests or 1- or 2-way RM ANOVAs (repeated measures analysis of variance), where appropriate. Normality was assessed by the Shapiro–Wilk test. If the main analysis of variance effect was significant, Sˇ ´ıd ´ak or Tukey multiple comparisons tests were performed. To compare population distributions of soma cross-sectional area or volume, Kolmogorov–Smirnov tests were performed.\n\n#### 3. Results\n\n# 3.1. 
Peripheral nerve injury induces a loss of small neurons from the dorsal root ganglion\n\nTo assess the gross loss of neurons from DRG following nerve injury, we generated the AvilFlpO;Atf3CreERT2;RC::FLTG mouse line in which na¨ıve and axotomized sensory neurons were differentially labelled. In this mouse line, all neurons express tdTomato (Flp-dependent) in the na¨ıve state and switch to expressing green fluorescent protein (GFP) upon axonal damage and concurrent tamoxifen treatment (Flp- and Cre-dependent) (Figs. 1A and B). Following pilot experiments to optimize tamoxifen dosing regimen, this approach was both highly efficient and specific (with the caveat that it was necessary to wait for several days after nerve injury for Cre-induced GFP expression): 14 days after SNItrans surgery, GFP was expressed by 99.1 6 0.6% of Atf3-expressing ipsilateral L4 DRG neurons, while we observed GFP in only 4.6 6 0.7% of contralateral DRG neurons (Figs. S2A–D, http://links.lww.com/PAIN/C84). We then used a stereological approach to quantify the total number of neurons in L4 DRG ipsilateral to injury 1, 2, 4, and 8 weeks after SNItrans, as well as contralateral to injury. One week after SNItrans, we observed 7809 6 153 neurons per DRG; this was not significantly different to the number of neurons in the contralateral DRG (7917 6 349), whereas cell number approximately halved by 8 weeks postinjury to 3963 6 410 neurons per DRG (Fig. 1C). Separating analysis into intact vs axotomized afferents revealed that only axotomized afferents were lost, with no difference observed in numbers of intact afferents (Fig. 1D). Between 1 and 8 weeks after injury, we observed a 61.0 6 7.0% decrease in the number of GFP1 neurons. This loss of injured afferents resulted in a loss of neuron-containing (ie, excluding white matter regions) DRG volume (Fig. 1E), but not neuron density (Fig. 1F). Cell loss predominantly occurred between 1 and 2 weeks postinjury and stabilized after this timepoint. 
Population distributions of the cross-sectional area of nucleated, tdTomato-expressing cell profiles were not significantly different at 1 vs 8 weeks post-SNItrans, in contrast to GFP-expressing/injured afferents, in which a loss of a population of small afferents at 8 weeks postinjury was observed (Fig. 1G).\n\nSNItrans resulted in a mixed population of axotomized and intact afferents within the L4 DRG. Therefore, we developed an approach to restrict our analysis to axotomized afferents, without relying on transgenic labelling, and used this as a complementary approach to confirm our findings. We injected the neuronal tracer FB into the glabrous, tibial innervation territory of both hindpaws 1 week before common peroneal and tibial transection (SNItrans) or crush (SNIcrush) surgeries (Figs. 2A and B). FastBlue-uptake was complete across neurons of all sizes by 1 week (Fig. S3, http://links.lww.com/PAIN/ C84), so this approach allowed us to profile a sample of the axotomized afferents. Both SNItrans (Fig. 2C) and SNIcrush (Fig. 2D) injuries resulted in a rightward shift in population distributions of the cross-sectional area of nucleated, FB-labelled DRG neurons when compared with contralateral DRG, consistent with a loss of small afferents post–nerve injury.\n\nAs a third complementary approach, we applied semiautomated volumetric analyses of nuclei size following tissue clearing. In this study, whole DRGs were cleared 4 weeks after SNItrans for nuclei counting in \"complete\" tissue (Figs. 2E–H). Nuclei were labelled by TDP-43, in line with the study by West et al.,67 and were quantified using Imaris software (Fig. 2F, Video 1). We observed a slight but significant rightward shift in nuclear spot volume population distribution 4 weeks after SNItrans (Fig. 2G). In addition, there was a significant reduction in the number of small but not medium or large nuclear spots, in support of a loss of small-diameter neuron populations (Fig. 
2H).\n\nTogether, our data derived from several different experimental approaches show that a population of small-diameter afferents are lost following peripheral nerve injury.\n\n# 3.2. Spared nerve crush or transection results in death of Mrgprd-expressing neurons\n\nTo date, determining cell loss among specific populations of afferent neurons has proved challenging due to the downregulation of subpopulation-specific marker genes following axonal transection.37,44 To overcome this issue, we took advantage of transgenic strategies to label populations in a manner that persisted after injury. Owing to the bias for the loss of small neurons and the known loss of IB4-binding central terminals postinjury,36 we initially focused on nonpeptidergic nociceptive neurons. We used MrgDChR2-YFP mice to identify neurons belonging to the largest of the 3 classes of nonpeptidergic nociceptors, NP1.55,59 To determine whether these neurons are lost following nerve injury, we used a stereological method to quantify L4 DRG MrgD-YFP1 (yellow fluorescent", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed2.pdf" - }, - { - "text": "## about emmis\n\nEmmis Communications (NASDAQ: EMMS) owns 23 FM and 4 AM domestic radio stations serving the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. In addition, Emmis owns 16 television stations, award-winning regional and specialty magazines, a radio network, international radio interests, and ancillary businesses in broadcast sales and publishing.\n\nEmmis was founded in 1980, and the company launched its first radio station, WENS-FM, in July 1981. As Emmis (the Hebrew word for \"truth\") acquired more radio stations across the nation, it established a reputation for sound operations and emerged as a radio industry leader and innovator. Emmis was the first broadcast company to own toprated radio stations in both L.A. 
and New York, and it pioneered such concepts as the all-sports format.\n\nThe company launched its magazine division in 1988 with the purchase of *Indianapolis Monthly*, and moved into the world of international radio in 1997, when it was awarded a license to operate a national radio network in Hungary. In 1998, Emmis expanded into television by buying six television stations in markets throughout the United States. In the last six years, the company has added properties in each of its divisions.\n\nWith its emphasis on solid operations, integrity, community involvement and fun, the company's culture has been repeatedly lauded by both its employees and its peers. Trade publications have regularly cited the company's leaders as being among the best in the business.\n\nEmmis became a public company in 1994. It maintains its worldwide headquarters in Indianapolis, where the company was founded.\n\n*This annual report contains certain non-GAAP measures. For a presentation of the directly comparable GAAP measure and a reconciliation of the non-GAAP measures to the GAAP measures, see the attachment to the back of our Form 10-K in this Annual Report.*", - "page_start": 1, - "page_end": 1, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "| | H922 | H924 |\n| --- | --- | --- |\n| Key features | Optimized for SAP HANA | High performance for SAP HANA |\n| | High performance, tight security | Strong security with large memory |\n| | Dense form factor with large | footprint |\n| | memory footprint | For Linux-focused customers |\n| | For Linux-focused customers | |\n| Machine type and model | 9223-22H | 9223-42H |\n| (MTM) | | |\n| Form factors | 2U | 4U |\n| Sockets | 1 upgradeable or 2 | 2 |\n| Cores per socket | 4, 8, or 10 | 8, 10, or 12 |\n| Memory slots | 32 | 32 |\n| Memory maximum | 4 TB | 4 TB |\n| PCIe G4 slots | 4 | 4 |\n| Supported operating | AIX, IBM i, and Linux | AIX, IBM i, and Linux |\n| systems | | |\n\n*Table 5-3 IBM Power Systems: Scale-out 
servers for SAP HANA* \n\n#### **5.1.2 Big data workloads**\n\nAcross industries, organizations are poised to capitalize on big data to generate new business insights, improve the customer experience, enhance efficiencies and gain competitive advantage. But to make the most of growing data volumes, they need servers with the performance and capacity for big data and AI workloads.\n\nIBM Power Systems Scale-out servers for big data, as shown in Table 5-4, deliver the outstanding performance and scalable capacity for intensive big data and AI workloads. Purpose-built with a storage-rich server design and industry-leading compute capabilities, these servers explore and analyze a tremendous amount of data, all at a lower cost than equivalent x86 alternatives.\n\n| | LC921 | LC922 |\n| --- | --- | --- |\n| Key features | High performance in a | Highest storage capacity in |\n| | space-saving design | the IBM Power Systems |\n| | Industry-leading compute in | portfolio |\n| | a dense form factor | Up to 44 cores and 2 TB of |\n| | | memory |\n| | | High performance at lower |\n| | | cost than comparable x86 |\n| | | systems |\n| Machine type and model | 9006-12P | 9006-22P |\n| (MTM) | | |\n| Form factors | 1U | 2U |\n| Sockets | 1 upgradeable or 2 | 2 |\n| Microprocessors | 1x or 2x POWER9 CPUs; 16 or | 1x or 2x POWER9 CPUs; 16, |\n| | 20 cores | 20 or 22 cores |\n\n*Table 5-4 IBM Power Systems: Scale-out servers for big data*", - "page_start": 90, - "page_end": 90, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv5_ccby4license.pdf", - "query": "What is the average emission of a human being per year in terms of CO2eq ?", - "target_page": 3, - "target_passage": "the average human is responsible for an estimated 5t CO2e per year", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "but also a result of the different forcings influencing the atmosphere model at the time of passing each 
GWL, and the interaction with the climate sensitivity of HadGEM3. The radiative forcing of non-CO2 forcings has previously been highlighted as a potentially important influence on patterns of climate change at 1.5°C and 2°C global warming [39]. Furthermore, despite some differences in regional climate responses between ensemble members, there were also some remarkable consistencies especially in the changes that might be considered inconsistent with a warming climate, such as regions such as northern South America where heavy rainfall (Rx5day) decreases rather increasing as might be expected under a warming climate. Again, these consistencies point to some common forcing of all simulations.\n\nOne key factor is the different times of passing a particular GWL, because the net radiative forcing would be different even though the same emissions and concentration scenario was used in all simulations. A given GWL was reached at a different time in each ensemble member, so the CO2 and aerosol concentrations vary between ensemble members; in members reaching a GWL early, such as that driven by IPSL-CM5A-LR, the CO2 concentration is relatively lower than in other members, and the total aerosol concentration would be relatively higher (CO2 concentrations are projected to increase in RCP8.5, but aerosol concentrations are projected decline). The net radiative forcing is smaller, because in RCP8.5 the increase positive radiative forcing from CO2 is greater than the decrease in net negative radiative forcing from aerosols. Moreover, the physiological effect of CO2 is also smaller, meaning that the consequent reduction in transpiration and associated additional land surface warming influence would also be expected to be smaller.\n\nConversely, in members reaching the same GWL later, such as that driven by GFDL-ESM2M, CO2 concentration is relatively higher, and aerosol concentrations are lower. 
So, net radiative forcing, CO2 physiological effects and the regional-scale radiative forcings from individual aerosol types could, therefore, be quite different in the GFDL-driven HadGEM3 simulation when it reaches 2°C global warming 25 years later than the IPSL-CM5A-LR-driven simulation.\n\nThe spatial pattern of changes in the different ensemble members may also play a role in influencing the global mean changes, for example, with large changes in some regions due to faster snow-melt or changes in cloud cover in one ensemble member leading to particular changes in regional warming that are not seen in other ensemble members. Moreover, the individual forcings of the different aerosol components such as sulfate and black carbon differ in sign and spatial pattern, so the overall impact on local radiative forcing and hence regional temperature patterns is more complex. Therefore, the global mean changes may not necessarily be expected to relative to global mean forcings.\n\nA further complexity in identifying precise mechanisms for regional changes is the experimental design used here, with one atmospheric model and concentration/emissions scenario but six different SST and SIC patterns, means that the impact of spatial heterogeneity in radiative forcings is complex and involves a mix of effects in HadGEM3 and the original CMIP5 models. In the case of aerosols, for example, our HadGEM3 simulations are driven with RCP8.5 aerosol emissions and the aerosol concentrations are then calculated within the model itself. The spatial distributions of aerosol optical depth and radiative forcing can, therefore, be expected to be reasonably similar, because they arise from the same emissions scenario, although some differences may occur due to the different regional climate-change patterns. However, the impact of aerosols is also seen in the SST and SIC changes, because these will have responded to changes in regional aerosol radiative forcing in the original CMIP5 simulations. 
Therefore, these SST and SIC patterns will carry the 'memory' of aerosol changes in the original CMIP5 projections.\n\nOne example of an impact of changing aerosol radiative forcing could be the precipitation changes in northern South America including Amazonia. All ensemble members show a general drying in this region, as seen in RX5day and mean run-off results. The reduction in Rx5day is particularly notable, because the general expectation would be for an increase in heavy rainfall events in a warmer climate, as is seen in most other regions in these projections. This reduced rainfall in the Amazon region may be associated with the reducing net negative aerosol radiative forcing in the North Atlantic [40]. CO2 physiological forcing may also play a role here [41,42].", - "page_start": 23, - "page_end": 23, - "source_file": "pubmed11.pdf" - }, - { - "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where \"tcot\", short for \"Top Conservatives on Twitter\", was the node ranked highest, and \"p2\", short for \"Progressives 2.0\", is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. Action toward the global climate issue was the third-largest cluster (16%), including both domestic efforts, such as \"us\", \"trump\", \"climatechangeisreal\", \"climateaction\", and \"epa\", and two international items, like \"china\" and \"india\". The fourth cluster (in blue) referred to emissions, including hashtags like \"co2\", \"green\", and \"carbon\". The smallest cluster (8%) was composed of \"snow\", \"winter\", \"heatwave\", and \"summer\", referring to the temperature abnormalities on the earth.\n\n#### *4.3. Temporal Analysis of the Associations in the Two Discourses*\n\nThe online presentations of the climate change and global warming discourses are dynamic. 
As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change\"discourse. By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. We found \"pollution\" and \"earth\" were unique to the keyword list of the global warming discourse, and \"economy\", \"water\", \"china\", \"coal\", \"solar\", \"sustainability\", and \"food\" only occurred on the critical list for the climate change discourse.\n\n**Table 2.** Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n| --- | --- | --- |\n| #climatechange | china, solar, water, food, economy, coal, sustainability | co2, news, carbon, green, climate, |\n| #globalwarming | pollution, earth | us, energy, science, environment |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. Vector graphics with the label of nodes are provided in the Supplementary Materials. Four themes were identified in each discourse according to the nodes' associations. To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.\n\nFigure 3 depicts the associations of hashtags in the climate change discourse for each year from 2009 to 2018. The scientific hashtags cluster (in green) was the most important theme in the climate change discourse, especially more recently. 
However, some scientific hashtags, such as \"ghg\" (greenhouse gas), \"co2\", and \"forests\", were not identified in the scientific cluster but in the global actions cluster (in yellow) because these hashtags were frequently used in the global action context and identified with a closer semantic association to global action by Gephi. In addition to these hashtags, the global action cluster included a series of international activities, such as \"ipcc\" (Intergovernmental Panel on Climate Change), \"unfccc\" (United Nations Framework Convention on Climate Change), and \"cop\" (Conferences of the Parties) for almost every year. The blue cluster includes political hashtags, such as \"uniteblue\", \"sgp\", \"p2\", and \"tcot\". In 2017 and 2018, the associations with political hashtags disappeared among the top 50 hashtags. The small red cluster had a mixed theme, combining \"technology\", \"innovation\", \"education\", \"africa\", \"healthcare\", and \"politics\". The centrality sum of the nodes in the red cluster remained rather low throughout the 10-year period but obviously increased in the last two years of the period according to Figure 5a.\n\nFigure 4 describes the evolution of concepts' associations in the global warming discourse during the 10 years. The red cluster included concepts such as \"2012\", \"hot\", \"summer\", \"elnino\", and \"snow\", describing the weather abnormalities related to global warming. A notable finding is that before 2012, global warming's association with temperature abnormalities and extreme weather was not salient,
By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as \"earth\" and \"pollution\", whereas \"climate change\" was more associated to specific issues like \"solar\", \"coal\", \"china\", and \"food\".\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, \"snow\", \"summer\", \"winter\", or \"heatwave\" in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. 
As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' differences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n#### 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag \"tcot\", favored by right-leaning users and \"p2\", favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n#### 5.1.3. Discourse Structure\n\nIn the discourse surrounding #climatechange, \"environment\", \"energy\", and \"global action\" represented the themes of the three largest clusters in the network. However, three popularly recurring hashtags, \"#environment\", \"#energy\", and \"#climateaction\", did not belong to any of the three clusters above, but formed another small tight cluster together, sitting in the most central part of the semantic network, as shown in Figure 2b. 
As each of the three hashtags can almost represent one sub-theme of the climate change topic, the fact that these three hashtags were tightly bundled might indicate an attempt by #climatechange users to address all three communities together [91], consolidating climate change as a topic rather than a loosely organized topic. Previous communication studies also confirmed hashtags' function of serving as a hybrid forum [68], where heterogeneous individuals coordinate to solve", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "In the present study, processing errors in the input data for one ensemble member, the HadGEM2-ES-driven member, caused the results to be invalid. Results for this member for the HCVI are, therefore, not presented here.\n\n### (d) Freshwater resources: run-off\n\nImpacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem–hydrology–surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way, typically applied at global scales. Variants of JULES form the land surface scheme of Met Office Hadley Centre Earth System Models [26,27] and have been used to assess impacts of climate change on global terrestrial ecosystems and hydrology [28–30] within such models. JULES can also be used outside of the Earth System Model (ESM), driven by meteorological outputs of other ESMs to assess impacts of a wider range of climate projections [6,8]. Here we use a new, higher-resolution configuration of JULES on a global grid of 0.5° resolution [31].\n\nIt has been noted that hydrological impacts models driven by climate-change projections from climate models tend to give more severe drying than simulated in the climate models themselves [32–34]. 
This is largely attributed to the inclusion of plant stomatal closure in response to elevated CO2 in the climate model land surface schemes, which generally reduces evapotranspiration relative to climate projections without this process and hence further increases run-off/streamflow or ameliorates decreases [34]. This process is often omitted from standard hydrological models. Plant physiological responses to CO2 are included in the JULES model, so our projections of changes in run-off here do account for this process.\n\nWe used each HadGEM3 simulation to drive JULES to simulate changes in run-off due to the effects of climate change and CO2 rise on precipitation, evaporation and transpiration. We analysed 30 year periods centred around the year of crossing GWLs of 1.5°C and 2°C relative to pre-industrial. We examined changes in both mean flows and low flows (defined as the flows for the lowest 10% of time).\n\n## (e) Correcting biases in climate model output and implications for defining levels of global warming\n\nThe ClimPACT extreme weather indices, HCVI and JULES run-off simulations were all performed using outputs from the higher-resolution HadGEM3 projections described in §2a. However, there were some differences in how these data were applied, with different approaches to the treatment of systematic biases in the climate model output. For the ClimPACT analysis, it was considered important to assess changes in the raw climate model output, because this directly represents the behaviour of the model itself. The main focus was on the changes relative to the presentday baseline climate, defined as 1981–2010, with absolute values in either the baseline or the GWLs of 1.5°C and 2°C being only of secondary interest. For the HCVI and JULES run-off analyses, however, it was considered important to correct for systematic biases in the climate model output, because these can lead to unrealistic representations of the key quantities in the present-day simulation [35]. 
A bias-correction methodology was, therefore, applied for these two parts of the analysis, whereby the model output was adjusted to make it consistent with an observed climatology [36]. We used a multi-segment statistical bias-correction methodology for precipitation [37], and a modification of this for other variables [37].\n\nThis difference in approach led to inconsistencies in the definitions of the dates of GWLs in the two parts of the study. In the extremes analysis using raw model output, the dates of passing GWLs were defined on the basis of the global mean temperatures in the driving CMIP5 models relative to those models' simulations of global mean temperature in 1870–1899 (table 3). However, in the HCVI and JULES analyses which used bias-corrected data, it was considered more appropriate for the GWLs to be defined using the warming in the observational dataset", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed11.pdf" - }, - { - "text": "Altima Hybrid Sentra CA (USA)\n\n#### **The Environment**\n\nWe have extended the Vision Zero concept to our work on environmental technologies. In the area of emissions, for example, we are investigating CO2 and other substances with an environmental impact, either indirect or direct. Nissan's goal is simple: zero emissions. The primary focus for us has been CO2 reduction, and we have been quite successful in this area.\n\nNissan is developing new hybrid technology as well. However, we do not believe this technology is sufficiently mature enough yet for wide application in the market. It would be easy to sell 1,000 or 10,000 cars, but that is neither an effective solution for the environment nor a financially viable proposition for a manufacturer.\n\nNissan's greatest strength is in current technologies such as the CVT, or continuously variable transmission. The CVT is a low-cost, advanced technology that can be applied to all types of vehicles to significantly and immediately reduce CO2 emissions. 
In comparison to a hybrid electric vehicle, or HEV, a CVT-equipped car reduces CO2 emissions by 20 percent. So if we sell five CVT-equipped vehicles, the effect would be the same as selling one hybrid car. Our current plan is to sell a million CVT vehicles, which would be equivalent to 200,000 HEVs—a significant figure.\n\nWe have to meet certain CO2 emission levels. The first pillar in our efforts is to develop strong future technologies. To do this, Nissan must have a clear, precise vision of the future. We have come up with a number of specific scenarios for the next 40 or 50 years and considered the\n\ntechnologies we will need to introduce to meet that vision. Over the next two decades we will significantly improve gasoline and diesel engines, which is where we can make an immediate impact. We are currently developing the next generation of gasoline HEVs. The generation after that will be diesel HEVs, which have even lower CO2 emissions than gas-powered HEVs. The next step will be FCVs and pure electric vehicles. Nissan is actively working to further the diffusion of the fuel-cell stack we've developed in-house.\n\nAt the same time, we are taking a practical, proactive approach to the environment. We want to do good rather than just look good. The second pillar is to consider how we can actually introduce these new technologies to the world market. Advanced but expensive technology can't be applied on smaller, more economical cars, which are the cars most of us drive.\n\nIn the meantime, we have already had some notable successes in the environmental area. The Nissan Sentra CA, for example, was certified as the cleanest gasolinepowered car in the world, and our Bluebird Sylphy was recognized as the first-ever SU-LEV, or super-ultra-low emission vehicle. 
We are proud of these successes, but we are focusing on even more significant emissions reductions and fuel economy, continuing to develop advanced technologies that will bring us to our Vision Zero goal.\"\n\nFor more on environment at Nissan, please see the *2005 Nissan Sustainability Report*\n\nX-TRAIL FCV\n\nNissan-original fuel cell stack Continuously Variable Transmission (CVT) CVT (Continuously Variable Transmission) enables a smooth, continuous transmission which not only enhances acceleration, but which also improves fuel economy for better environmental performance", - "page_start": 48, - "page_end": 48, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## **OPEN**\n\n# **The impact of 1.5 °C and 2.0 °C global warming on global maize production and trade**\n\n**Kuo Li1*****, Jie Pan1 , Wei Xiong2 , Wei Xie3 & TariqAli3**\n\n**Climate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by 5 climate models recommended by ISI-MIP under 4 RCP scenarios, in which the approximate scenarios with global warming by 1.5 °C and 2 °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by 1.5 °C and 2.0 °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under 2.0 °C scenario was much more serious than 1.5 °C scenario; the ratios of yield changes were separately 0.18% and − 10.8% under 1.5 °C and 2.0 °C scenarios. The reduction trend of total maize production is obvious in the top fve countries and the main producing regions of the world, especially under the 2.0 °C scenario. The market price of maize would increase by around 0.7% and 3.4% under 1.5 °C and 2.0 °C scenarios. 
With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.**\n\nIn the past hundred years, the global climate has experienced great changes1–4. According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming5. Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health6–10. Global warming has gradually changed from a scientific issue to a major social issue of common concern to governments and people of all countries11–13. In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris14. Paris Agreement has indicated that it is urgent to hold the increase in global average temperature well below 2.0 °C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels.\n\nFaced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food security in the whole world15–20. Meanwhile, global production losses might lead to price shocks and trigger export restrictions21–24; an increasingly interconnected global food system25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security worldwide27–29. So, the impacts of climate changes on crop yields and prices have been of high concern. 
Numerous studies have revealed that the warming trend has negative impact on crop yields and global trade in most regions all over the world30–32. There are three main methods for impacts assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations17,33. Environment-controlled experiments are designed to observe the influence of climate factors on crops, such as drought, flood, heat stress, cold damage, elevated CO2 concentration, through which the impact mechanism of climate change on crops would be revealed and established23,34,35. Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected field sites or in selected regions36–39. The statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in different sites or counties to establish regression functions for crop responses predictions40–43. These researches have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\n1 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China. 2 International Maize and Wheat Improvement Center, Texcoco, Mexico. 3 Peking University, Beijing, China. *email: hqlk2000@163.com", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "Model Intercomparison Project (CMIP5) ensemble, forced with the RCP8.5 concentration scenario. To provide more detailed representations of climate processes and impacts, the spatial resolution was N216 (approx. 60 km grid length in mid-latitudes), a higher resolution than the CMIP5 models. 
We used a set of impacts-relevant indices and a global land surface model to examine the projected changes in weather extremes and their implications for freshwater availability and vulnerability to food insecurity. Uncertainties in regional climate responses are assessed, examining ranges of outcomes in impacts to inform risk assessments. Despite some degree of inconsistency between components of the study due to the need to correct for systematic biases in some aspects, the outcomes from different ensemble members could be compared for several different indicators. The projections for weather extremes indices and biophysical impacts quantities support expectations that the magnitude of change is generally larger for 2°C global warming than 1.5°C. Hot extremes become even hotter, with increases being more intense than seen in CMIP5 projections. Precipitation-related extremes show more geographical variation with some increases and some decreases in both heavy precipitation and drought. There are substantial regional uncertainties in hydrological impacts at local scales due to different climate models producing different outcomes. Nevertheless, hydrological impacts generally point towards wetter conditions on average, with increased mean river flows, longer heavy rainfall events, particularly in South and East Asia with the most extreme projections suggesting more than a doubling of flows in the Ganges at 2°C global warming. Some areas are projected to experience shorter meteorological drought events and less severe low flows, although longer droughts and/or decreases in low flows are projected in many other areas, particularly southern Africa and South America. Flows in the Amazon are projected to decline by up to 25%. 
Increases in either heavy rainfall or drought events imply increased vulnerability to food insecurity, but if global warming is limited to 1.5°C, this vulnerability is projected to remain smaller than at 2°C global warming in approximately 76% of developing countries. At 2°C, four countries are projected to reach unprecedented levels of vulnerability to food insecurity.\n\nThis article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.\n\n## 1. Introduction\n\nThe majority of climate-change impacts assessments have tended to be framed in terms of future time horizons, e.g. impacts by the middle or end of the twenty-first century [1,2]. However, with international climate policy now largely focused on limiting warming to specific levels of global mean temperature such as 2°C [3] or 1.5°C [4], policy-relevant climate impacts assessments increasingly need to be framed in terms of such warming levels.\n\nThere are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.\n\n- (i) How much larger are the impacts at 2°C compared to 1.5°C? This is the primary question arising from the Paris Agreement [4] and is relevant to mitigation policy, informing judgements and actions on holding the global temperature rise to 'well below 2°C' and 'pursuing efforts to limit the temperature increase to 1.5°C'.\n- (ii) What regional climate conditions and related hydrological and ecological conditions could occur at a particular level of global warming, such as 2°C? This is relevant to adaptation policy and planning—exploring the possible outcomes for these levels of warming will help facilitate adaptation and improved resilience to account for a 1.5°C or 2°C world. 
It is recognized that many adaptation decisions require information on timing of specific impacts or risks, but nevertheless, framing regional impacts assessments in terms of associated global warming levels (GWLs) may help provide context of the levels of climate change that may be avoidable or unavoidable (and hence require adaptation).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Table 3.** Time of reaching GWLs of 1.5°C and 2°C in the raw output from the HadGEM3 climate simulations, driven by different sets of CMIP5sea-surface temperatures. The dates are the centre year of a 20-year period for which the climate data are applied to the calculation of the ClimPACT indices.\n\n| 1.5°C | driving SSTs | | 2.0°C |\n| --- | --- | --- | --- |\n| | IPSL-CM5A-LR | 2015 | 2030 |\n| | GFDL-ESM2M | 2040 | 2055 |\n| | HadGEM2-ES | 2027 | 2039 |\n| | IPSL-CM5A-MR | 2020 | 2034 |\n| | MIROC-ESM-CHEM | 2023 | 2035 |\n| | ACCESS1–0 | 2034 | 2046 |\n\nup to present-day plus model-projected warming thereafter (table 4). While this does lead to inconsistent definitions of dates of the GWLs for applications of the climate model output with and without bias correction, the focus here is on the level of warming relative to pre-industrial rather than the timing of this warming. Therefore, priority is given to an accurate quantification of GWLs in all parts of the study, at the expense of inconsistencies in the dates of these warming levels. The inconsistency between the dates of the GWLs ranged from 2 to 9 years depending on the model and warming level. This inconsistency would have consequences if these results were applied to time-dependent impacts and adaptation assessments, but that is not the case here so this concern does not apply. 
However, one issue is that the time-dependent nature of the aerosol forcing means that the spatial pattern of regional climate responses varies over time, so this will lead to some degree of inconsistency between the analysis of the ClimPACT extremes and the HCVI and JULES impacts projections.\n\n## 3. Results\n\nFor a world at 2°C global warming, we present a range of outcomes to provide insight into the level of agreement between models for a particular projected change, and hence an indication of potential robustness of the projected changes for informing adaptation. We then make a comparison of impacts at global warming 1.5°C to investigate the level of impact that would be avoided by limiting global warming to different levels. Bearing in mind the uncertainty in regional climate outcomes, we address this in a number of ways. For individual realizations, we compare the impacts at different warming levels to see if they are systematically smaller at 1.5°C, even if the sign of the change is uncertain. We also compare the range of outcomes at different GWLs, to see if the regional-scale uncertainty itself increases with global warming.\n\n## (a) Climate-change impacts at 2°C global warming\n\nFor 2°C global warming, the ensemble-mean increase in annual daily maximum temperature was above 2°C for most of the land surface, with the exception of the Indian subcontinent, most of Australia and Antarctica (figure 2). The increase was higher still in many regions; most of North America, much of China and north Asia, northwestern South America and all of Europe. In the northern and eastern USA and much of northern and western Europe, the annual daily maximum temperature increased by over 4°C for 2°C global warming. 
The global mean TXx increased by more than 2°C in all ensemble members (table 5), so the maximum temperature warming more than the global annual mean is a consistent result across all projections here, as found in previous studies with other models [9] (table 5).\n\nThe different ensemble members give somewhat different results at regional scales, although there is a strong consensus on the temperature extremes examined here becoming warmer. In the simulations driven by SSTs and SICs from the two IPSL CMIP5 models, most of the global", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed11.pdf" - }, - { - "text": "Firstly, the period of 1986–2005 is defined as the baseline, of which the simulated average value is recognized as 0.61 °C above pre-industrial (the period of 1850–1900) levels; the baseline is selected according to the accessibility and operability of data, which is used for the determination of the periods with global warming by 1.5 °C and 2.0 °C and the comparison of maize yield between different periods. Secondly, the simulated values of global mean temperature in the future years are subtracted from the simulated average value of 1986–2005; then the values should be plus with 0.61 °C, which are the global warming results above pre-industrial levels; then 20 years moving average of the above results are calculated. Thirdly, the climate data of global warming by 1.5 °C is defined according to the principles provided in the fifth IPCC Assessment Report, for which it should be within 1.5–2.0 °C above pre-industrial levels at the end of the twenty-first century; the climate data of global warming by 2.0 °C is defined according to the principles provided in the fifth IPCC Assessment Report, for which it should be within 2.0–2.5 °C above pre-industrial levels at the end of the twenty-first century and the period of global warming by 2.0 °C should not be earlier than 2050. 
Finally, the climate models, scenarios and periods of global warming by 1.5 °C and 2.0 °C are separately confirmed; the data of global warming by 1.5 °C, simulated by IPSL-CM5A-LR under RCP2.6 scenario during 2020–2039 and simulated by GFDL-ESM2M under RCP4.5 scenario during 2041–2060; the data of global warming by 2.0 °C, simulated by NorESM1-M under RCP4.5 scenario during 2060–2079 and simulated by GFDL-ESM2M under RCP6.0 scenario during 2065–2084.\n\n**Simulation of maize yield using DSSAT.** According to the data of global warming by 1.5 °C and 2.0 °C selected above, we simulated global maize yield changes compared with the average yield during 1986–2005 on grid level using CERES-Maize, which is part of DSSAT version 4.649.\n\nThe inputs for DSSAT simulation include daily weather data, soil parameters, crop calendar data and management information. All the inputs are formatted at a 0.5°×0.5° grid resolution which are computed by high-performance computers. Weather data is from the AgMERRA dataset, including maximum and minimum temperatures, precipitation, total radiation and humidity. Crop calendar data were from the Center for Sustainability and Global Environment (SAGE), in which the existing observations of crop planting and harvesting dates are gridded formatted at a resolution of 5 min50. For management information, fertilizer applications, irrigation and other management practices are required. A crop-specific gridded dataset of nitrogen fertilizer application for the world was developed by integrating national and subnational fertilizer application data from a variety of sources, which is used to set up current fertilizer application rates for maize in each grid cell. Soil parameters are from the International Soil Profile Dataset (WISE), including soil texture, bulk density, pH, organic carbon content and fraction of calcium carbonate for each of five 20 cm thick soil layers51. 
All the soil data is allocated to be in accordance with the request of DSSAT simulation; the missing soil parameters for organic soils were adopted from FAO soil dataset.\n\nFirst, maize yields across the world during the historical period 1986–2005 were simulated at the 0.5°×0.5° grid scale with two main production systems, including Spring maize and Summer maize. Historical national maize production is aggregated from simulated gridded yield and weighted by grid cell maize areas in 2000 from the gridded global dataset by combining two data products47. Second, genetic parameters of specific cultivars of maize from previous works were adopted for the initial parameters; model parameters related to crop genotype characteristics were calibrated and tuned following the method in Xiong et al.52, in which the simulated yields from 1986–2005 were comparable to the statistical data. Third, maize yields across the world were simulated under global warming by 1.5 °C and 2.0 °C. Finally, global and national maize yields were aggregated from gridded values; changes in national and global yields under global warming by 1.5 °C and 2.0 °C were calculated, comparing maize yield average for 1986–2005.\n\n**Simulation of market price using GTAP.** The yield changes for maize from the DSSAT models under 1.5 °C and 2.0 °C temperature increase are used to carry out simulations using competitive market for changes in production, market price, and self-sufficiency ratio of maize at national and global levels53,54. For this study, we use a comparative static analysis approach to simulate the impact of climate changes on the prices and trade of the major food crops under current economic conditions. 
Utilizing current economic conditions has the advantage of minimizing assumptions and model uncertainties related to future economic conditions55,56.\n\nThe original GTAP database doesn't include maize as a separate sector, rather it is combined with other coarse grains to form an \"other coarse grain\" sector. For this study, we updated the GTAP database by splitting maize from the original sector in the database, and designed an appropriate sectoral and regional aggregation scheme for the original database. The detailed method is given as follows:\n\nFirst, we improved the database by splitting maize from the existing sector \"other coarse grain\", following similar work using GTAP57–59 based on the routines from the Splitcom method60. In this procedure, the old flows of data both at national and trade levels are allocated between the new flows using weights. The national weights include the division of each unsplit user's use of the original split commodity among the new commodities; the division of unsplit inputs to the original industry between the new industries; the splitting of new industry's use of each new commodity. Maize use is mainly shared between feed, food, processing and others (seed, waste, etc.).\n\nTrade shares allocate the original slice of the split commodity into the new commodity for all elements of basic price value, tax, and margin. Finally, we used the RAS method for balancing the newly created database. The values for the national shares matrix were obtained from FAOSTAT. The trade shares matrix was calculated based on the data from UN Comtrade Database.\n\nSecond, our sectoral aggregation scheme for GTAP ensures that all the competing and complementing sectors for maize are present in the most disaggregated form. For example, for maize, other crops compete for inputs of production and both livestock and households are major users of maize. 
For regional aggregation, we kept the details for all the main producing, consuming, and trading regions, for maize.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed9.pdf" - }, - { - "text": "complex changes in the state of the climate [7], which may be caused by natural process, external forces, or human interventions [8]. By randomly assigning respondents to climate change or global warming questionnaires, scholars confirmed that the different connotations contained in the two definitions are likely to evoke distinct interpretations of the causes and impacts of the global climate issue [9], which may inhibit collaboration and joint efforts to mitigate the global challenge.\n\nPublic preference between climate change and global warming is even more apparent when considering the ideology spectrum [10]. Some scholars concluded that conservatives, who are less concerned with environmental issues, tended to use global warming as a narrative strategy because global warming has a more direct connection with temperature rise, making it easier to find contradictory cues such as freezing weather or heavy snowstorms to deny global climate change facts [11]. The associations between global warming and human activities may contribute to more controversies as well [12], connecting global warming more with the \"hoax\" frame [5] and evoking greater negative sentiment [13].\n\nAlthough these existing studies have often attempted to identify the differences between these two terminologies, only a particular few perspectives, such as sentiment, ideological preference, or cause and effect, were examined in each study [3,9,13]. However, the associate network model introduced by psychologists suggests that human recognition and memory have a network-shaped architecture [14], where individual understanding of particular objects is connected with numerous other objects in the mind. 
According to the associate network model, individual understanding of the global climate concern is a network composed of numerous inter-connected concepts, including climate change and global warming. As the two terminologies concern the primary mechanism of the global climate issue, the preference between the two understandings may represent two distinct climate discourses by differently organizing numerous climate concepts. Examining the differences between two discourses with an associative perspective may provide communicators with unique insights into narrowing the cognitive discrepancy. The temporal dimension was lacking in existing studies, necessitating the study of how concepts associated with each other have evolved with time.\n\nLarge amounts of user-generated data on social media, which have been valued in computer science, communication, and environmental studies [5,9,15–18], have enabled the acquisition of the social media representation of the two discourses in a decade. In this study, by analyzing hashtag co-occurrence patterns in 6,662,478 tweets containing \"climate change\" and \"global warming\" between 1 January 2009 and 31 December 2018, two semantic networks of public climate discourse were constructed to identify the critical concepts and links surrounding the two terminologies. We conducted temporal analysis to observe the evolution of the two discourses and to measure whether the discrepancy between the two has widened or narrowed within the 10-year period.\n\nTo be specific, we formulated three research questions (RQs) to be explored in this study:\n\nRQ1: What is the difference in how the two discourses are associated with important climate concepts in people's minds?\n\nRQ2: How did the two competing climate discourses evolve from 2009 to 2018?\n\nRQ3: Did the two competing discourses converge or diverge in this decade?\n\n#### **2. Background**\n\n#### *2.1. 
Climate Change, Global Warming, and Frames*\n\nExisting studies have noted that the subtle difference between climate change and global warming evokes different public cognitive responses, where global warming indicates heat-related impacts, human causes, increased UV light penetration, ozone depletion, and the greenhouse effect, whereas climate change is more associated with a wide range of influences on climate, including drought and agriculture [9]. An N-gram analysis suggested that global warming showed a closer connection with ice, snow, and sea, whereas climate change was always connected with scientific investigations, such as
[43]\n\nWhile Type-B Materialists all agree that intuitions about the hard problem are psychological rather than ontological in origin, they differ as to whether our intuitions about the hard problem are innate or culturally conditioned. This has been dubbed the \"hard-wired/soft-wired distinction.\"[88][89] In relation to Type-B Materialism, those who believe that our intuitions about the hard problem are innate (and therefore common to all humans) subscribe to the \"hard-wired view\".[89] Those that believe our intuitions are culturally conditioned subscribe to the \"soft-wired view\". Unless otherwise specified, the term *Type-B Materialism* refers to the hard-wired view. [89]\n\nNotable philosophers who subscribe to Type-B Materialism include David Papineau, [90] Joseph Levine, [91] and Janet Levine.[55]\n\n#### **The \"hard-wired view\"**\n\nJoseph Levine (who formulated the notion of the explanatory gap) states: \"The explanatory gap argument doesn't demonstrate a gap in nature, but a gap in our understanding of nature.\"[91] He nevertheless contends that full scientific understanding will not close the gap,[43] and that analogous gaps do not exist for other identities in nature, such as that between water and H2O.[92] The philosophers Ned Block and Robert Stalnaker agree that facts about what a conscious experience is like to the one experiencing it cannot be deduced from knowing all the facts about the underlying physiology, but by contrast argue that such gaps of knowledge *are* also present in many other cases in nature, such as the distinction between water and H2O.[93][12]\n\nTo explain why these two ways of knowing (i.e. 
third-person scientific observation and first-person introspection) yield such different understandings of consciousness, weak reductionists often invoke the *phenomenal concepts strategy*, which argues the difference stems from our inaccurate phenomenal concepts (i.e., how we think about consciousness), not from the nature of consciousness itself.[94][95] By this view, the hard problem of consciousness stems from a dualism of concepts, not from a dualism of properties or substances.[43]\n\n#### **The \"soft-wired view\"**\n\nSome consciousness researchers have argued that the hard problem is a cultural artifact, unique to contemporary Western Culture. This is similar to Type-B Materialism, but it makes the further claim that the psychological facts that cause us to intuit the hard problem are not innate, but culturally conditioned. Notable researchers who hold this view include Anna Wierzbicka, [96] Hakwan Lau and Matthias Michel.[97]\n\nWierzbicka (who is a linguist) argues that the vocabulary used by consciousness researchers (including words like *experience* and *consciousness*) are not universally translatable, and are \"parochially English.\"[96] Wierzbicka calls David Chalmers out by name for using these words, arguing that if
One taking this view would admit that there is an explanatory gap for which no answer to date may be satisfactory, but trust that inevitably the gap will be closed.[52] This is described by analogy to progression in other areas of science, such as mass-energy equivalence which would have been unfathomable in ancient times,[52] abiogenesis which was once considered paradoxical from an evolutionary framework,[99][98] or a suspected future theory of everything combining relativity and quantum mechanics. Similarly, type-C materialism posits that the problem of consciousness is a consequence of our ignorance[71][100] but just as resolvable as any other question in neuroscience.\n\nBecause the explanatory question of consciousness is evaded, type-C materialism does not presuppose[101] the descriptive question, for instance that there is any self-consciousness, wakefulness, or even sentience[102] in a rock. Principally, the basis for the argument arises from the apparently high correlation of consciousness with living brain tissue,[103] thereby rejecting panpsychism[101] without explicitly formulating physical causation. More specifically this position denies the existence of philosophical zombies[64] for which there is an absence of data and no proposed method of testing.[104][105] Whether via the inconceivability or actual nonexistence of zombies, a contradiction is exposed nullifying the premise of the consciousness problem's \"hardness\".\n\nType-C materialism is compatible with several cases and could collapse into one of these other metaphysical views[52] depending on scientific discovery and its interpretation. With evidence of emergence, it resolves to strong reductionism under type A. 
With a different, possibly cultural paradigm for understanding consciousness, it resolves to type-B materialism.[32] If consciousness is explained by the quantum mind, then it resolves to property dualism under type D.[106] With characterization of intrinsic properties in physics extending beyond structure and dynamics, it could resolve to type-F monism.[52]\n\n# **Type-D Dualism**\n\nDualism views consciousness as either a non-physical substance separate from the brain or a non-physical property of the physical brain.[107] Dualism is the view that the mind is irreducible to the physical body. [107] There are multiple dualist accounts of the causal relationship between the mental and the physical, of which interactionism and epiphenomenalism are the most common today. Interactionism posits that the mental and physical causally impact one another, and is associated with the thought of René Descartes (1596–1650).[52] Epiphenomenalism holds the mental is causally dependent on the physical, but does not in turn causally impact it.[52]\n\nIn contemporary philosophy, interactionism has been defended by philosophers including Martine Nida-Rümelin, [108] while epiphenomenalism has been defended by philosophers including Frank Jackson[109][110] (although Jackson later changed his stance to physicalism).[111] Chalmers has also", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia2.pdf" - }, - { - "text": "most similar to the ones used in GPT-2's training data, i.e. documents linked to from Reddit [25], plus Wikipedia and a collection of books. While this was reportedly effective at filtering out documents that previous work characterized as \"unintelligible\" [134], what is unmeasured (and thus unknown) is what else it filtered out. 
The Colossal Clean Crawled Corpus [107], used to train a trillion parameter LM in [43], is cleaned, inter alia, by discarding any page containing one of a list of about 400 \"Dirty, Naughty, Obscene or Otherwise Bad Words\" [p.6].14 This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika, white power) included. While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites [125]) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink, the influence of online spaces built by and for LGBTQ people.15 If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light.\n\nThus at each step, from initial participation in Internet fora, to continued presence there, to the collection and finally the filtering of training data, current practice privileges the hegemonic viewpoint. In accepting large amounts of web text as 'representative' of 'all' of humanity we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality. We instead propose practices that actively seek to include communities underrepresented on the Internet. 
For instance, one can take inspiration from movements to decolonize education by moving towards oral histories due to the overrepresentation of colonial views in text [35, 76, 127], and curate training datasets through a thoughtful process of deciding what to put in, rather than aiming solely for scale and trying haphazardly to weed out, post-hoc, flotsam deemed 'dangerous', 'unintelligible', or 'otherwise bad'.\n\n#### 4.2 Static Data/Changing Social Views\n\nA central aspect of social movement formation involves using language strategically to destabilize dominant narratives and call attention to underrepresented social perspectives. Social movements produce new norms, language, and ways of communicating. This adds challenges to the deployment of LMs, as methodologies reliant on LMs run the risk of 'value-lock', where the LM-reliant technology reifies older, less-inclusive understandings.\n\nFor instance, the Black Lives Matter movement (BLM) influenced Wikipedia article generation and editing such that, as the BLM movement grew, articles covering shootings of Black people increased in coverage and were generated with reduced latency [135]. Importantly, articles describing past shootings and incidents of police brutality were created and updated as articles for new events were created, reflecting how social movements make connections between events in time to form cohesive narratives [102]. More generally, Twyman et al. [135] highlight how social movements actively influence framings and reframings of minority narratives\n\nin the type of online discourse that potentially forms the data that underpins LMs.\n\nAn important caveat is that social movements which are poorly documented and which do not receive significant media attention will not be captured at all. Media coverage can fail to cover protest events and social movements [41, 96] and can distort events that challenge state power [36]. 
This is exemplified by media outlets that tend to ignore peaceful protest activity and instead focus on dramatic or violent events that make for good television but nearly always result in critical coverage [81]. As a result, the data underpinning LMs stands to misrepresent social movements and disproportionately align with existing regimes of power.\n\nDeveloping and shifting frames stand to be learned in incomplete ways or lost in the big-ness of data used to train large LMs — particularly if the training data isn't continually updated. Given the compute costs alone of training large LMs, it likely isn't feasible for even large corporations to fully retrain them frequently enough to keep up with the kind of language change discussed here. Perhaps fine-tuning approaches could be used to retrain LMs, but here again, what would be required is thoughtful curation practices to find appropriate data to capture reframings and techniques for evaluating whether such fine-tuning appropriately captures the ways in which new framings contest hegemonic representations.\n\n## 4.3 Encoding Bias\n\nIt is well established by now that large LMs exhibit various kinds of bias, including stereotypical associations [11, 12, 69, 119, 156, 157], or negative sentiment towards specific groups [61]. Furthermore, we see the effects of intersectionality [34], where BERT, ELMo, GPT and GPT-2 encode more bias against identities marginalized along more than one dimension than would be expected based on just the combination of the bias along each of the axes [54, 132]. Many of these works conclude that these issues are a reflection of training data characteristics. For instance, Hutchinson et al. find that BERT associates phrases referencing persons with disabilities with more negative sentiment words, and that gun violence, homelessness, and drug addiction are overrepresented in texts discussing mental illness [61]. Similarly, Gehman et al. 
show that models like GPT-3 trained with at least 570GB of data derived mostly from Common Crawl16 can generate sentences with high toxicity scores even when prompted with non-toxic sentences [53]. Their investigation of GPT-2's training data17 also finds 272K documents from unreliable news sites and 63K from banned subreddits.\n\nThese demonstrations of biases learned by LMs are extremely valuable in pointing out the potential for harm when such models are deployed, either in generating text or as components of classification systems, as explored further in §6. However, they do not represent a methodology that can be used to exhaustively discover all such risks, for several reasons.\n\nFirst, model auditing techniques typically rely on automated systems for measuring sentiment, toxicity, or novel metrics such as 'regard' to measure attitudes towards a specific demographic group [119]. But these systems themselves may not be reliable\n\n14Available at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/blob/master/en, accessed Jan 18, 2021\n\n15This observation is due to William Agnew.\n\n16https://commoncrawl.org/the-data/\n\n17GPT-3's training data is not openly available, but GPT-2's training data was used indirectly to construct GPT-3's [53].", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "physical constituents. For example, water is nothing more than H2O molecules, and understanding everything about H2O molecules is to understand everything there is to know about water. But consciousness is not like this. Knowing everything there is to know about the brain, or any physical system, is not to know everything there is to know about consciousness. Consciousness, then, must not be purely physical.[27]\n\n#### **Implications for physicalism**\n\nChalmers's idea contradicts physicalism, sometimes labelled materialism. 
This is the view that everything that exists is a physical or material thing, so everything can be reduced to microphysical things. For example, the rings of Saturn are a physical thing because they are nothing more than a complex arrangement of a large\n\nnumber of subatomic particles interacting in a certain way. According to physicalism, everything, including consciousness, can be explained by appeal to its microphysical constituents. Chalmers's *hard problem* presents a counterexample to this view and to other phenomena like swarms of birds, since it suggests that consciousness, like swarms of birds, cannot be reductively explained by appealing to their physical constituents. Thus, if the hard problem is a real problem then physicalism must be false, and if physicalism is true then the hard problem must not be a real problem.\n\nThe hard problem is often illustrated by appealing to the logical possibility of inverted visible spectra. If there is no logical contradiction in supposing that one's colour vision could be inverted, it follows that mechanistic explanations of visual processing do not determine facts about what it is like to see colours.\n\nA swarm of birds showing high order structure emerging from simpler physical constituents\n\nThough Chalmers rejects physicalism, he is still a naturalist. [27]\n\n#### **Historical precedents**\n\nThe hard problem of consciousness has scholarly antecedents considerably earlier than Chalmers. 
Chalmers himself notes that \"a number of thinkers in the recent and distant past\" have \"recognised the particular difficulties of explaining consciousness.\"[33] He states that all his original 1996 paper contributed to the discussion was \"a catchy name, a minor reformulation of philosophically familiar points\".[33]\n\nAmong others, thinkers who have made arguments similar to Chalmers' formulation of the hard problem include Isaac Newton, [34] John Locke, [35] Gottfried Wilhelm Leibniz, [36][34] John Stuart Mill, [37] and Thomas Henry Huxley. [38][34] Likewise, Asian philosophers like Dharmakirti and Guifeng Zongmi discussed the problem of how consciousness arises from unconscious matter. [34][39][40][41]\n\n#### **Related concepts**\n\n#### **The mind–body problem**", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia2.pdf" - }, - { - "text": "defended versions of both positions as plausible.[52] Traditional dualists such as Descartes believed the mental and the physical to be two separate substances, or fundamental types of entities (hence \"substance dualism\"); some more recent dualists, however, accept only one substance, the physical, but state it has both mental and physical properties (hence \"property dualism\").[107]\n\n### **Type-E Dualism**\n\n### **Type-F Monism**\n\nMeanwhile, panpsychism and neutral monism, broadly speaking, view consciousness as intrinsic to matter. 
[52] In its most basic form, panpsychism holds that all physical entities have minds (though its proponents take more qualified positions),[112] while neutral monism, in at least some variations, holds that entities are composed of a substance with mental and physical aspects—and is thus sometimes described as a type of panpsychism.[113]\n\nForms of panpsychism and neutral monism were defended in the early twentieth century by the psychologist William James, [114][115][note 2] the philosopher Alfred North Whitehead, [115] the physicist Arthur Eddington, [116][117] and the philosopher Bertrand Russell, [112][113] and interest in these views has been revived in recent decades by philosophers including Thomas Nagel, [115] Galen Strawson, [115][118] Philip Goff, [115] and David Chalmers.[112] Chalmers describes his overall view as \"naturalistic dualism\",[1] but he says panpsychism is in a sense a form of physicalism,[52] as does Strawson.[118] Proponents of panpsychism argue it solves the hard problem of consciousness parsimoniously by making consciousness a fundamental feature of reality. [43][119]\n\n#### **Idealism and cosmopsychism**\n\nA traditional solution to the hard problem is idealism, according to which consciousness is fundamental and not simply an emergent property of matter. It is claimed that this avoids the hard problem entirely. [120] Objective idealism and cosmopsychism consider mind or consciousness to be the fundamental substance of the universe. Proponents claim that this approach is immune to both the hard problem of consciousness and the combination problem that affects panpsychism.[121][122][123]\n\nFrom an idealist perspective, matter is a representation or image of mental processes. 
Supporters suggest that this avoids the problems associated with the materialist view of mind as an emergent property of a physical brain.[124] Critics argue that this then leads to a decombination problem: how is it possible to split a single, universal conscious experience into multiple, distinct conscious experiences? In response, Bernardo Kastrup claims that nature hints at a mechanism for this in the condition dissociative identity disorder (previously known as Multiple Personality Disorder).[125] Kastrup proposes dissociation as an example from nature showing that multiple minds with their own individual subjective experience could develop within a single universal mind.\n\nCognitive psychologist Donald D. Hoffman uses a mathematical model based around conscious agents, within a fundamentally conscious universe, to support conscious realism as a description of nature—one that falls within the objective idealism approaches to the hard problem: \"The objective world, i.e., the world whose existence does not depend on the perceptions of a particular conscious agent, consists entirely of conscious agents.\"[126]", - "page_start": 12, - "page_end": 12, - "source_file": "wikipedia2.pdf" - }, - { - "text": "of being conscious is merely an error in perception, held by brains which evolved to hold erroneous and incomplete models of their own internal workings, just as they hold erroneous and incomplete models of their own bodies and of the external world.[77][78]\n\n#### **Criticisms**\n\nThe main criticisms of eliminative materialism and illusionism hinge on the counterintuitive nature of the view. Arguments of this form are called *Moorean Arguments*. 
A Moorean argument seeks to undermine the conclusion of an argument by asserting that the negation of that conclusion is more certain than the premises of the argument.[79]\n\nThe roots of the Moorean Argument against illusionism extend back to Augustine of Hippo who stated that he could not be deceived regarding his own existence, since the very act of being deceived secures the existence of a being there to be the recipient of that deception.[note 1][80]\n\nIn the Early-Modern era, these arguments were repopularized by René Descartes, who coined the now famous phrase *\"Je pense, donc je suis\"* (\"I think, therefore I am\").[81] Descartes argued that even if he was maximally deceived (because, for example, an evil demon was manipulating all his senses) he would still know with certainty that his mind exists, because the state of being deceived requires a mind as a prerequisite.[82]\n\nThis same general argumentative structure is still in use today. For example, in 2002 David Chalmers published an explicitly Moorean argument against illusionism. The argument goes like this: The reality of consciousness is more certain than any theoretical commitments (to, for example, physicalism) that may be motivating the illusionist to deny the existence of consciousness. The reason for this is because we have direct \"acquaintance\" with consciousness, but we do not have direct acquaintance with anything else (including anything that could inform our beliefs in consciousness being an illusion). In other words: consciousness can be known directly, so the reality of consciousness is more certain than any philosophical or scientific theory that says otherwise.[83] Chalmers concludes that \"there is little doubt that something like the Moorean argument is the reason that most people reject illusionism and many find it crazy.\"[84]\n\nEliminative materialism and illusionism have been the subject of criticism within the popular press. 
One highly cited example comes from the philosopher Galen Strawson who wrote an article in the New York Review of Books titled \"The Consciousness Deniers\". In it, Strawson describes illusionism as the \"silliest claim ever made\", next to which \"every known religious belief is only a little less sensible than the belief that the grass is green.\"[85] Another notable example comes from Christof Koch (a neuroscientist and one of the leading proponents of Integrated Information Theory) in his popular science book *The Feeling of Life Itself*. In the early pages of the book, Koch describes eliminativism as the \"metaphysical counterpart to Cotard's syndrome, a psychiatric condition in which patients deny being alive.\"[86] Koch takes the prevalence of eliminativism as evidence that \"much of twentieth-century analytic philosophy has gone to the dogs\".[87]\n\n### **Type-B Materialism**\n\nType-B Materialism, also known as *Weak Reductionism* or *A Posteriori Physicalism*, is the view that the hard problem stems from human psychology, and is therefore not indicative of a genuine ontological gap between consciousness and the physical world.[43] Like Type-A Materialists, Type-B Materialists are", - "page_start": 9, - "page_end": 9, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Today there is a strong tendency to simply *equate* consciousness with the qualia. Yet there is clearly something not quite right about this. The \"itchiness of itches\" and the \"hurtfulness of pain\" are qualities we are conscious *of*. So philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely *consciousness* of contents, the very givenness of whatever is subjectively given. And therefore the problem of consciousness does not pertain so much to some alleged \"mysterious, nonpublic objects\", i.e. 
objects that seem to be only \"visible\" to the respective subject, but rather to the nature of \"seeing\" itself (and in today's philosophy of mind astonishingly little is said about the latter).[129]\n\n# **Relationship to scientific frameworks**\n\nMost neuroscientists and cognitive scientists believe that Chalmers' alleged \"hard problem\" will be solved, or be shown to not be a real problem, in the course of the solution of the so-called \"easy problems\", although a significant minority disagrees.[9][130]\n\n#### **Neural correlates of consciousness**\n\nSince 1990, researchers including the molecular biologist Francis Crick and the neuroscientist Christof Koch have made significant progress toward identifying which neurobiological events occur concurrently to the experience of subjective consciousness.[131] These postulated events are referred to as *neural correlates of consciousness* or NCCs. However, this research arguably addresses the question of *which* neurobiological mechanisms are linked to consciousness but not the question of *why* they should give rise to consciousness at all, the latter being the hard problem of consciousness as Chalmers formulated it. In \"On the Search for the Neural Correlate of Consciousness\", Chalmers said he is confident that, granting the principle that something such as what he terms \"global availability\" can be used as an indicator of consciousness, the neural correlates will be discovered \"in a century or two\".[132] Nevertheless, he stated regarding their relationship to the hard problem of consciousness:\n\n> One can always ask why these processes of availability should give rise to consciousness in the first place. As yet we cannot explain why they do so, and it may well be that full details about the processes of availability will still fail to answer this question. 
Certainly, nothing in the standard methodology I have outlined answers the question; that methodology assumes a relation between availability and consciousness, and therefore does nothing to explain it. [...] So the hard problem remains. But who knows: Somewhere along the line we may be led to the relevant insights that show why the link is there, and the hard problem may then be solved.[132]\n\nThe neuroscientist and Nobel laureate Eric Kandel wrote that locating the NCCs would not solve the hard problem, but rather one of the so-called easy problems to which the hard problem is contrasted.[133] Kandel went on to note Crick and Koch's suggestion that once the binding problem—understanding what accounts for the unity of experience—is solved, it will be possible to solve the hard problem empirically. [133] However, neuroscientist Anil Seth argued that emphasis on the so-called hard problem is a distraction from what he calls the \"real problem\": understanding the neurobiology underlying", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia2.pdf" - }, - { - "text": "The philosophers Glenn Carruthers and Elizabeth Schier said in 2012 that the main arguments for the existence of a hard problem—philosophical zombies, Mary's room, and Nagel's bats—are only persuasive if one already assumes that \"consciousness must be independent of the structure and function of mental states, i.e. that there is a hard problem.\" Hence, the arguments beg the question. 
The authors suggest that \"instead of letting our conclusions on the thought experiments guide our theories of consciousness, we should let our theories of consciousness guide our conclusions from the thought experiments.\"[64]\n\nThe philosopher Massimo Pigliucci argued in 2013 that the hard problem is misguided, resulting from a \"category mistake\".[17] He said: \"Of course an explanation isn't the same as an experience, but that's because the two are completely independent categories, like colors and triangles. It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you.\"[17]\n\nIn 2017, the philosopher Marco Stango, in a paper on John Dewey's approach to the problem of consciousness (which preceded Chalmers' formulation of the hard problem by over half a century), noted that Dewey's approach would see the hard problem as the consequence of an unjustified assumption that feelings and functional behaviors are not the same physical process: \"For the Deweyan philosopher, the 'hard problem' of consciousness is a 'conceptual fact' only in the sense that it is a *philosophical mistake*: the mistake of failing to see that the physical can be had as an episode of immediate sentiency.\"[65]\n\nThe philosopher Thomas Metzinger likens the hard problem of consciousness to vitalism, a formerly widespread view in biology which was not so much solved as abandoned.[66] Brian Jonathan Garrett has also argued that the hard problem suffers from flaws analogous to those of vitalism.[67]\n\nThe philosopher Peter Hacker argues that the hard problem is misguided in that it asks how consciousness can emerge from matter, whereas in fact sentience emerges from the evolution of living organisms.[68] He states: \"The hard problem isn't a hard problem at all. The really hard problems are the problems the scientists are dealing with. [...] 
The philosophical problem, like all philosophical problems, is a confusion in the conceptual scheme.\"[68] Hacker's critique extends beyond Chalmers and the hard problem, being directed against contemporary philosophy of mind and neuroscience more broadly. Along with the neuroscientist Max Bennett, he has argued that most of contemporary neuroscience remains implicitly dualistic in its conceptualizations and is predicated on the *mereological fallacy* of ascribing psychological concepts to the brain that can properly be ascribed only to the person as a whole.[69] Hacker further states that \"consciousness studies\", as it exists today, is \"literally a total waste of time\" and that \"the conception of consciousness which they have is incoherent\".[68]\n\n#### **Eliminative materialism / Illusionism**\n\nEliminative materialism or eliminativism is the view that many or all of the mental states used in folk psychology (i.e., common-sense ways of discussing the mind) do not, upon scientific examination, correspond to real brain mechanisms.[59] According the 2020 PhilPapers survey, 4.51% of philosophers surveyed subscribe to eliminativism.[25]\n\nWhile Patricia Churchland and Paul Churchland have famously applied eliminative materialism to propositional attitudes, philosophers including Daniel Dennett, Georges Rey, and Keith Frankish have applied it to qualia or phenomenal consciousness (i.e., conscious experience).[59] On their view, it is mistaken not only to believe there is a hard problem of consciousness, but to believe phenomenal consciousness exists at all.[19][61]", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Attitudes towards physicalism also differ among professionals. In the 2009 PhilPapers survey, 56.5% of philosophers surveyed subscribed to physicalism and 27.1% of philosophers surveyed rejected physicalism. 16.4% fell into the \"other\" category. 
[51] In the 2020 PhilPapers survey, 51.93% of philosophers surveyed indicated that they \"accept or lean towards\" physicalism and 32.08% indicated that they reject physicalism. 6.23% were \"agnostic\" or \"undecided\".[25]\n\nDifferent solutions have been proposed to the hard problem of consciousness. The sections below taxonomizes the various responses to the hard problem. The shape of this taxonomy was first introduced by Chalmers in a 2003 literature review on the topic.[52] The labelling convention of this taxonomy has been incorporated into the technical vocabulary of analytic philosophy, being used by philosophers such as Adrian Boutel,[53] Raamy Majeed,[54] Janet Levin,[55] Pete Mandik & Josh Weisberg,[56] Roberto Pereira,[57] and Helen Yetter-Chappell.[58]\n\n### **Type-A Materialism**\n\nType-A materialism (also known as *reductive materialism* or *a priori physicalism*) is a view characterized by a commitment to physicalism and a full rejection of the hard problem. By this view, the hard problem either does not exist or is just another easy problem, because every fact about the mind is a fact about the performance of various functions or behaviours. So, once all the relevant functions and behaviours have been accounted for, there will not be any facts left over in need of explanation.[52] Thinkers who subscribe to type-A materialism include Paul and Patricia Churchland, Daniel Dennett, Keith Frankish, and Thomas Metzinger.\n\nSome type-A materialists believe in the reality of phenomenal consciousness but believe it is nothing extra in addition to certain functions or behaviours. This view is sometimes referred to as *strong reductionism*. [43][52] Other type-A materialists may reject the existence of phenomenal consciousness entirely. This view is referred to as eliminative materialism or illusionism. 
[59][60][61]\n\n#### **Strong reductionism**\n\nMany philosophers have disputed that there is a hard problem of consciousness distinct from what Chalmers calls the easy problems of consciousness. Some among them, who are sometimes termed *strong reductionists*, hold that phenomenal consciousness (i.e., conscious experience) does exist but that it can be fully understood as reducible to the brain.[43]\n\nBroadly, strong reductionists accept that conscious experience is real but argue it can be fully understood in functional terms as an emergent property of the material brain.[43] In contrast to weak reductionists (see above), strong reductionists reject ideas used to support the existence of a hard problem (that the same functional organization could exist without consciousness, or that a blind person who understood vision through a textbook would not know everything about sight) as simply mistaken intuitions.[43][52]\n\nA notable family of strong reductionist accounts are the higher-order theories of consciousness. [62][43] In 2005, the philosopher Peter Carruthers wrote about \"recognitional concepts of experience\", that is, \"a capacity to recognize [a] type of experience when it occurs in one's own mental life,\" and suggested that such a capacity could explain phenomenal consciousness without positing qualia.[63] On the higher-order view, since consciousness is a representation, and representation is fully functionally analyzable, there is no hard problem of consciousness.[43]", - "page_start": 6, - "page_end": 6, - "source_file": "wikipedia2.pdf" - }, - { - "text": "from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit \"audience\").[140] The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene. 
[141]\n\nIn his original paper outlining the hard problem of consciousness, Chalmers discussed GWT as a theory that only targets one of the \"easy problems\" of consciousness.[1] In particular, he said GWT provided a promising account of how information in the brain could become globally accessible, but argued that \"now the question arises in a different form: why should global accessibility give rise to conscious experience? As always, this bridging question is unanswered.\"[1] J. W. Dalton similarly criticized GWT on the grounds that it provides, at best, an account of the cognitive *function* of consciousness, and fails to explain its experiential aspect.[142] By contrast, A. C. Elitzur argued: \"While [GWT] does not address the 'hard problem', namely, the very nature of consciousness, it constrains any theory that attempts to do so and provides important insights into the relation between consciousness and cognition.\"[143]\n\nFor his part, Baars writes (along with two colleagues) that there is no hard problem of explaining qualia over and above the problem of explaining causal functions, because qualia are entailed by neural activity and themselves causal.[21] Dehaene, in his 2014 book *Consciousness and the Brain*, rejected the concept of qualia and argued that Chalmers' \"easy problems\" of consciousness are actually the hard problems.[20] He further stated that the \"hard problem\" is based only upon ill-defined intuitions that are continually shifting as understanding evolves:[20]\n\n> Once our intuitions are educated by cognitive neuroscience and computer simulations, Chalmers' hard problem will evaporate. The hypothetical concept of qualia, pure mental experience, detached from any information-processing role, will be viewed as a peculiar idea of the prescientific era, much like vitalism... 
[Just as science dispatched vitalism] the science of consciousness will keep eating away at the hard problem of consciousness until it vanishes.\n\n# **Meta-problem**\n\nIn 2018, Chalmers highlighted what he calls the \"**meta-problem of consciousness**\", another problem related to the hard problem of consciousness:[76]\n\n> The meta-problem of consciousness is (to a first approximation) the problem of explaining why we think that there is a [hard] problem of consciousness.\n\nIn his \"second approximation\", he says it is the problem of explaining the behavior of \"phenomenal reports\", and the behavior of expressing a belief that there is a hard problem of consciousness.[76]\n\nExplaining its significance, he says:[76]\n\nAlthough the meta-problem is strictly speaking an easy problem, it is deeply connected to the hard problem. We can reasonably hope that a solution to the meta-problem will shed significant light on the hard problem. A particularly strong line holds that a solution to the", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia2.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2648.pdf", - "query": "Concerning electrolyte solutions, what assumption makes the primitive model (PM) regarding ions?", - "target_page": 1, - "target_passage": "simple phenomenological models such as the primitive model (PM), for which the ions are assimi- lated to charged hard spheres", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Models of electrolyte solutions from molecular descriptions: The example of NaCl solutions\n\nJohn Jairo Molina1,2,3 , ∗ Jean-Fran¸cois Dufrˆeche1,2,3 , † Mathieu\n\nSalanne1,2 , Olivier Bernard1,2 , Marie Jardat1,2 , and Pierre Turq1,2\n\n1 UPMC-Universit´e Paris 06, UMR 7195, PECSA, F-75005 Paris, France\n\nUMR 5257 CEA–CNRS–Universit´e Montpellier 2, Site de Marcoule,\n\nBˆatiment 426, BP 17171, 30207 Bagnols-sur-C`eze Cedex, France\n\nWe present a method to 
derive implicit solvent models of electrolyte solutions from all-atom descriptions; providing analytical expressions of the thermodynamic and structural properties of the ions consistent with the underlying explicit solvent representation. Effective potentials between ions in solution are calculated to perform perturbation theory calculations, in order to derive the best possible description in terms of charged hard spheres. Applying this method to NaCl solutions yields excellent agreement with the all-atom model, provided ion association is taken into account.\n\nSince the pioneering works of Debye, H¨uckel, and Onsager, electrolyte solutions have been commonly described by continuous solvent models, for which the McMillan-Mayer theory [1] provides a rigorous statistical-mechanical foundation. Within that level of description, simple phenomenological models such as the primitive model (PM), for which the ions are assimilated to charged hard spheres [2], can lead to explicit formulas for the thermodynamic and structural properties (e.g., with the help of the mean spherical approximation (MSA) [3] or the binding MSA (BIMSA) [4]). These models are the most practical to use [5], since they allow for a direct link between the experimental measurements and the microscopic parameters of the system. Nevertheless, they ignore the molecular structure of the solvent. Consequently, they cannot properly account for the complex specific effects of the ions, which appear in numerous biological, chemical, and physical interfacial phenomena [6, 7], without further developments.\n\nAn alternative procedure consists in carrying out molecular simulations, where both the solvent and solute are treated explicitly. After a rigorous averaging over the solvent configurations, a coarse-grained description of the ions, which still includes the effect of the solvent structure, can be obtained [8–11]. 
However, this set of methods is purely numeric; they do not provide any analytical expression for thermodynamic quantities. They are therefore restricted to simple geometries [12, 13] (bulk solutions or planar interfaces). The description of complex systems, such as porous or electrochemical materials, is still based on continuous solvent models [14].\n\nIn this letter we present a method aimed at bridging the gap between analytical and numerical approaches. It is based on the application of liquid perturbation theory (LPT) [15] to effective ion-ion potentials extracted from molecular dynamics (MD) results. Different approximations of the PM are employed for the case of NaCl electrolyte solutions: a two component model (MSA2), that only takes free ions into account, and two different three component models (MSA3 and BIMSA3), which include a third species (the contact ion pair). As we proceed to show, LPT allows us to select the best simple model which accurately accounts for the thermodynamics and the physical-chemistry of the system.\n\nThe first stage consists in calculating the McMillan-Mayer effective ion-ion interaction potentials V eff ij (r), by inverting the radial distribution functions (RDF) gij (r) obtained by MD. The simulations were carried out on a box of 2000 water molecules and 48 NaCl pairs using the same interaction potentials as in reference [16]. This setup corresponds to a concentration of 0.64 mol l−1 . NPT ensemble sampling at standard pressure and temperature was enforced, with a time step of 1 fs and a pressure bath coupling constant of 1 ps. An equilibration run of 0.25 ns was followed by a production run of 0.6 ns for five different initial configurations. The averages of the resulting RDF were then used for the potential inversion via the HNC closure [15]. 
These effective potentials are assumed to be concentration independent and will be used for simulations at all concentrations.\n\nSubtracting the long-range Coulombic potential V LR ij (r) (which depends on the dielectric constant of the solvent) from V eff ij (r), we obtain the short-range contribution V SR ij (r) to the effective potentials. These are given in Fig. 1 (species 1 and 2 refer to Na+ and Cl− free ions, respectively). All the short-range potentials exhibit oscillations corresponding to the solvent layering between the ions, but this effect is particularly important for the cation-anion interaction: a considerable potential barrier (& 2kBT ) separates the first two attractive wells. To serve as a reference, Monte Carlo (MC) simulations were performed with these effective potentials; a comparison between MD and MC RDF is also provided in Fig. 1. The excellent agreement between both sets of RDF validates the HNC inversion procedure [17], and allows us to com-\n\n2 CNRS, UMR 7195, PECSA, F-75005 Paris, France 3\n\nInstitut de Chimie S´eparative de Marcoule (ICSM),\n\nElectronic address: john.molina@etu.upmc.fr\n\nElectronic address: jean-francois.dufreche@upmc.fr", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2648.pdf" - }, - { - "text": "FIG. 1: Effective McMillan-Mayer short-range pair potentials extracted from explicit solvent simulations using the HNC closure. (a) Cation anion, (b) cation cation, (c) anion anion, (d) cation anion RDF obtained from explicit solvent MD and implicit solvent MC simulations.\n\npute all ion thermodynamic properties through implicit solvent MC simulations.\n\nThe second stage of our coarse-graining procedure consists in applying LPT, in order to deduce the best analytical model of electrolyte solutions which reproduces this molecular description. 
The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the difference between them treated as a perturbation in the reference potential. Assuming pairwise additive potentials, Vij = V (0) ij + ∆Vij , a first-order truncated expression for the free energy density of the system βfv is obtained,\n\n$$\\beta f_{v}\\lesssim\\beta f_{v}^{(0)}+\\frac{1}{2}\\beta\\sum_{i,j}\\rho_{i}\\rho_{j}\\int\\mathrm{d}\\mathbf{r}\\,g_{i j}^{(0)}(r)\\Delta V_{i j}(r)\\qquad(1)$$\n\nwhich depends only on the free-energy density f (0) v and RDF g (0) of the reference fluid, with β = (kBT ) −1 and ρi the concentration of species i. The Gibbs-Bogoliubov inequality [15] ensures that the right-hand side of Eq. (1) is actually a strict upper bound. Once a reference system has been chosen, the expression on the right-hand side of Eq. (1) must be minimized with respect to the parameters defining the reference. This procedure yields the best first-order approximation to the free energy of the system under consideration.\n\nFor a system of charged particles in solution, the natural reference is the PM, defined in terms of the charge and diameter (σi) of each species. In this case, the perturbing potentials are just the short-range effective potentials computed above (∆Vij = V SR ij ). We use the MSA [3] solution to the PM, since it provides analytical expressions for both the free energy and the RDF. The perturbation term is evaluated using an exponential approximation to the RDF obtained within the MSA, g(r) = exp [gMSA(r) − 1], which removes any unphysical negative regions and improves the comparison with HNC calculations.\n\nFIG. 2: (Color online) (a) Osmotic coefficient Φ in the McMillan-Mayer frame of reference. (diamond) MC simulations, (dot dashed) MSA2, (dot) Debye H¨uckel Limiting law (DHLL), (cross) experiments (Ref. [18] with the McMillan-Mayer to Lewis Randall conversion). (b) Minimization diameters. 
(dot dashed) MSA2 and (diamond) MSA-fit.\n\nWe first used LPT for a two-component system (Na+ and Cl− free ions) within the MSA (model MSA2), for concentrations ranging from 0.1 to 2.0 mol l−1 . The minimization leads to almost constant diameters on the whole range of concentration: σ1 = 3.67 ˚A and σ2 = 4.78 ˚A. As shown in Fig. 2, these parameters yield osmotic coefficients close to MC calculations only at very low concentration, i.e., c ≤ 0.1 mol l−1 (experimental values are given for indicative purposes only, since a perfect model will exactly match the MC results). For molar solutions, the LPT results differ considerably from MC calculations. This discrepancy can easily be understood by comparing the diameters found within the MSA2 calculation with the effective potentials given in Fig. 1. The anion/cation contact distance obtained within the MSA2 calculation is 4.2 ˚A, which is in the region of the second minimum of the effective potential and corresponds to the situation where there is a single layer of water molecules between the ions. The first minimum of the potential, which corresponds to the contact ion pair (CIP) is thus completely ignored by the MSA2 calculation. If the MSA diameters are directly fitted to reproduce the MC osmotic pressure, much smaller values are obtained. These MSA-fit hydrated diameters, which are compared to the MSA2 diameters in the bottom part of Fig. 2, are averages of the CIP and the solvent-separated ion pair.\n\nTo overcome this difficulty, we have explicitly introduced the CIP in our model (species 3). Straightforward calculations, based on a characteristic-function formalism, allow us to define an equivalent model in which the free ions and the CIP are explicitly taken into account [19, 20]. We apply this formalism by defining a pair as an anion and a cation at a distance less than 4 ˚A, which corresponds to the position of the effective potential maximum. 
The interaction between free, like charges in this new system remains unchanged, and the cation-anion interactions are easily approximated by ex-", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2648.pdf" - }, - { - "text": "FIG. 5: (Color online) RDF obtained from MC simulations (diamond), BIMSA3 (solid line), and MSA-fit (dot dashed) at two concentrations.\n\nThe RDF obtained within BIMSA3 are compared with the MC and MSA-fit results in Fig. 5. Our BIMSA3 model accounts for the strong molecular peak of the CIP and provides the correct distances of minimal approach; whereas the naive MSA-fit procedure ignores the former and gives poor estimates for the latter. At larger separations, the BIMSA3 results do not reproduce the oscillations observed in the MC simulations, but the corresponding energy oscillations in the effective potentials are less than kBT . In addition, the perturbation term of the BIMSA3 appears to be negligible compared to the reference term for concentrations less than 1 mol l−1 . The perturbation can then be omitted to obtain a fully analytical theory, determined by the hard sphere diameters and the pair fraction given by LPT; with the free energy and the RDF given in terms of the BIMSA and MSA solutions, as described above. While the procedure we have followed uses two different approximations for the reference and perturbation terms (MSA vs BIMSA), these are known to be accurate for the systems under consideration and do not appear to be inconsistent with each other.\n\nTo conclude, we have combined MD simulations with LPT to construct simple models of electrolyte solutions which account for the molecular nature of the solvent. The final result is fully analytical and it yields the thermodynamic and structural properties of the solution, in agreement with the original molecular description. 
The methodology can in principle be adapted to any molecular description of the system (MD simulations involving interaction potentials accounting for polarization effects or Car-Parrinello MD simulations for example) as long as the ion-ion RDF are known. It can also be generalized to study interfaces. The method appears to be a promising approach toward the description of the specific effects of ions, especially for complex systems whose modeling requires an analytic solution.\n\nThe authors are particularly grateful to Werner Kunz for fruitful discussions.\n\n- [1] W. G. McMillan and J. E. Mayer, J. Chem. Phys. 13, 276 (1945).\n- [2] J. M. G. Barthel, H. Krienke, and W. Kunz, Physical Chemistry of Electrolyte Solutions (Springer, 1998).\n- [3] L. Blum, in Theoretical Chemistry: Advances and Perspectives, edited by H. Eyring and D. Henderson (Academic Press, 1980), vol. 5, pp. 1–66.\n- [4] L. Blum and O. Bernard, J. Stat. Phys. 79, 569 (1995).\n- [5] J.-F. Dufrˆeche et al., J. Phys. Chem. B 109, 9873 (2005).\n- [6] P. Jungwirth and D. J. Tobias, Chem. Rev. 106, 1259 (2006).\n- [7] W. Kunz, P. LoNostro, and B. W. Ninham, Curr. Opin. Colloid Interface Sci. 9, 1 (2004).\n- [8] B. Hess, C. Holm, and N. van der Vegt, Phys. Rev. Lett. 96, 147801 (2006).\n- [9] I. Kalcher and J. Dzubiella, J. Chem. Phys. 130, 134507 (2009).\n- [10] S. Gavryushov and P. Linse, J. Phys. Chem. B 110, 10878 (2006)\n- [11] A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52, 3730 (1995).\n- [12] D. Horinek and R. R. Netz, Phys. Rev. Lett. 99, 226104 (2007).\n- [13] M. Lund, P. Jungwirth, and C. E. Woodward, Phys. Rev. Lett. 100, 258105 (2008).\n- [14] S. Van Damme et al., J. Phys. Chem. B 113, 3105 (2009).\n- [15] J.-P. Hansen and I. R. McDonald, Theory of Simple Liquids (Academic Press, 1986).\n- [16] J. C. Rasaiah and R. M. Lynden-Bell, Philos. Trans. R. Soc. London, Ser. A 359, 1545 (2001).\n- [17] A. P. Lyubartsev and S. Marcelja, Phys. Rev. E 65, 041202 (2002).\n- [18] V. M. M. 
Lobo, Electrolyte Solutions, Data on Thermodynamic and Transport Properties, vol. I-II (Coimbra Editora, Lisbon, Portugal, 1984).\n- [19] G. Ciccotti, P. Turq, and F. Lantelme, Chem. Phys. 88, 333 (1984).\n- [20] J.-F. Dufrˆeche, T. O. White, and J.-P. Hansen, Mol. Phys. 101, 1741 (2003).\n- [21] The average contact distance between a symmetric dumbbell and an infinite plane at β = 0.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2648.pdf" - }, - { - "text": "ing the temporal dynamics of belief changes in experimental participants. Dynamic belief trajectories can then be related to other (for example, physiological) measures, as is usual in model-based neuroscience [65]. This method can also, in principle, be used for fitting models to other types of experimentally observable systems, like animals, organoids [66], and simulated or emergent systems [67]. The package can also be used for agent-based modelling in general, for repeating earlier analyses with sampling based model-fitting and for comparing POMDP-based AIF models directly to other types of models.\n\nSince they implement full approximate Bayesian inferences, AIF models are computationally more demanding than many approaches traditionally used in cognitive and agent-based modelling, in particular when the dimensionality of the generative model is large. This means that models with highly multidimensional or complex behaviour and large numbers of agents can be computationally infeasible to implement, especially given the additional computational demands introduced by fitting these models to empirical data. Avenues for addressing this implicit scaling problem were proposed in the context of machine learning applications [68,69], and with the use of simplifying assumptions—the use of which are ubiquitous in computational modelling—AIF has been used to model multi-agent phenomena, such as opinion dynamics [15,70], coordinated foraging [71] and fish school movements [12]. 
It remains to be explored how AIF models can be applied to highly complex natural phenomena, such as a concrete election, which underscores the need for efficient but flexible and accessible software tools in the field.\n\nThere are many ways in which ActiveInference can be improved. It would be useful to extend the set of dynamic belief states to include prediction errors since they are often used for model-based neuroscience. This would entail departing from discrete state-space (i.e., POMDP) models to consider continuous state-space models apt for Bayesian filtering or predictive coding (see below). An alternative would be to generate prediction errors from belief updating under discrete models, where prediction errors can be read as the (KL) divergence between posterior and prior beliefs (i.e., complexity or information gain). A simple interface could be added for creating custom parametrisations of the requisite parameters that could be parametrised with Boltzmann or Gibbs distributions, as opposed to Dirichlet distributions. Parameter learning could be extended to all generative model parameters, as well as in parametrised forms (e.g., so that the Boltzmann parameter or temperature of the parameters that are learned); similarly for the precision over expected free energies *γ*. Preference priors should also be implementable for environmental states, in addition to observations, and **A** can be made action dependent.\n\nA library of pre-made canonical POMDP models could be created so that users can easily implement them directly. Alternatives to the fixed-point iteration method for updating posteriors over environmental states could be included, like the marginal message passing algorithm. There are various ways in which the package can be made more computationally efficient, and it could be compared with other software implementations. 
There are plenty of utility and plotting functions that could be added to the package to make it easier to use and to facilitate integration with the model-fitting packages it relies on; for example, to allow for combining the models with linear regressions to compare parameters values of different populations in a single model. More complex types of POMDP models can also be added, like hierarchical and temporally deep POMDPs. Model structure learning could be considered, where different model structures are compared and chosen between by evaluating their free energies. Sophisticated inference, where predictions are also made about changes in one's own beliefs—depending on expected action-dependent observations in the future—could also be implemented [58]. Finally, the package could be extended to other types of generative models than POMDPs, including other universal models, like generalised filtering [17] and Hierarchical Gaussian Filter models [41], as well as custom", - "page_start": 28, - "page_end": 28, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "| Core Concepts | |\n| --- | --- |\n| AIF | Active inference is a formal framework for modelling behaviour and cog |\n| | nition. Perception and action are cast as minimising free energy—the VFE |\n| | and EFE, respectively—given a generative model of the environment. |\n| VFE | The variational free energy F quantifies how well a generative model |\n| | explains incoming sensory observations. It can be rewritten as the negative |\n| | log model evidence (called surprise) upper-bounded by the divergence |\n| | from the optimal posterior p(s o). Perception as inference is accomplished |\n| | by selecting the approximate posterior q(s) with the lowest associated |\n| | VFE. |\n| | F[q(s), o] ≜ DKL[q(s)∥p(o,s)] = DKL[q(s)∥p(s o)] − ln p(o) |\n| | {z } {z } Divergence Surprise |\n| EFE | The expected free energy G quantifies the expected future free energy |\n| | under an action policy π. 
It consists of an information gain term and a |\n| | pragmatic value term that provide a natural balance between exploratory |\n| | and goal-seeking behaviour. Action as inference is accomplished by select |\n| | ing the action policy with the lowest associated EFE. |\n| | = − Eq(o˜,s˜ π) [ln q(s˜ o˜, π) − ln q(s˜ π)] − Eq(o˜ π) [ln p(o˜ C)] Gπ |\n| | {z } {z } Information gain Pragmatic value |\n| Generative | The generative model is an agent's formal assumptions about the structure |\n| model | and dynamics of its environment, based on which perceptual and active |\n| | inferences are carried out. Many types of generative models exist that are |\n| | suitable for different environments and tasks. |\n| POMDP | The Partially Observable Markov Decision Process is a type of flexible |\n| | generative model that is widely used in the AIF literature. In discrete time |\n| | and usually a discrete state space, this model type is parametrised to fit a |\n| | given task by a set matrices containing probability distributions. |\n\n## **2. Active Inference with POMDPs**\n\nIn this section, we briefly describe the core concepts of AIF and POMDPs. This should familiarise the reader with the vernacular used in the later sections regarding the functionalities of the package. While various extensions, such as structure learning, which enables an agent to learn the structure or shape of its environment through model comparison [44–47], or hierarchical and temporally deep POMDPs [48,49], are relevant for future work, describing these in detail is beyond the scope of this foundational paper.\n\nAt the core of AIF lies the minimisation of a variational free energy upper bound on surprise for perception, as well as action. This is motivated by the free energy principle [4–8], which states that self-organising systems can be described as minimising the variational free energy of their sensory states. 
The minimisation of free energy generally takes two", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "processesing business. And, I get excited about the opportunity to expand our transaction base with our mobile banking, bill payment and mobile operator solutions.\n\nThe real value of our company is in our transaction processing. Because of the low incremental cost of connecting to a new customer, anytime we sign a new contract most of the incremental revenue will now be flowing to our bottom line. The infrastructure is in place to leverage additional growth and bring us closer to being EBITDA and cash flow positive in the near term.\n\n## **What role will strategic alliances play in extending your reach into new markets?**\n\nAlliances are an important part of our strategic direction. Recently, we announced several partnerships that help us expand sales channels and distribution of our products and services. Our partners were looking for wireless transaction solutions to complement their own offerings, and they selected Euronet's products, proving that our solutions are rock solid.\n\nGemplus, the world's number one provider of smart card-based solutions, chose us as their global partner to provide electronic recharge solutions to mobile operators. We also have agreements with Sila Communications to help us market our suite of mobile banking solutions throughout Europe, the Middle East and Asia Pacific and with Aether Systems which is offering our mobile banking solutions in the United States.\n\n## **Why did you change your corporate name to Euronet Worldwide last year?**\n\nWe became Euronet Worldwide to more accurately reflect the company's growing presence in the global marketplace. 
We are no longer focused solely on Europe, and today, deliver comprehensive solutions to more than 200 customers in over 60 countries.\n\n## **What was your biggest challenge in 2000?**\n\nI think it was restructuring our software business late in the year. When Euronet purchased Arkansas Systems, Inc. over two years ago, the division was expected to\n\nachieve high growth. As banks began moving to outsourcing rather than purchasing software to manage their transactions, we realized that this high growth would not materialize. We've basically downsized to reduce expenses to better correspond to revenue expectations, so we\n\nexpect this division to be an EBITDA contributor from this point forward. The trend towards outsourcing negatively impacted our software business, but positively benefits our network services division.\n\nIt's important to point out that our software is an asset to our business of\n\nselling transactions. For example, our software sales doubled in the Asia Pacific region over 1999. Relationships with large financial institutions like Westpac Banking Corporation have cemented our position in Asia Pacific as a leading supplier of transaction processing solutions.\n\n#### **Why is ATM outsourcing important?**\n\nIncreasingly, financial institutions are choosing to outsource their ATM operations to free up resources\n\n> and concentrate on their core banking business. Some analysts predict that outsourcing by the European banking and finance sector will total $91 billion by 2003. We are expanding our outsourcing business with wireless and Internet banking services.\n\nOur outsourcing business is thriving. Currently we provide ATM outsourcing for some of the biggest banks in the world – banks like Citibank, ABN AMRO, Deutsche Bank, Millennium\n\nand Raiffeisenbank – as they expand into emerging markets. 
We have contracts with Citibank in five countries, most recently in Greece and the Czech Republic.", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "where γ is the liquid-gas surface tension and f(h) is a local free energy term that describes the wettability of the surface. Since µ corresponds to a chemical potential, the term µh may either bias the system towards the liquid or towards the gas state. The variation of F w.r.t. h gives the pressure. It contains the curvature (Laplace) pressure −γ∆h and the disjoining pressure Π(h) = −∂hf(h). Many different forms for the latter are in use (see, e.g., Refs. [4, 8, 63, 70–73]).\n\nFor the present system a thin film description using Eq. (1) is not appropriate because the nanoparticles are not taken into account. However, under certain conditions one can augment equation (1) for the evolution of the film thickness by coupling it to an equation for the evolution of the mean particle concentration. The resulting model is able to describe the behaviour of an evaporating solution on the meso- and macroscale. Such an approach is briefly discussed below in Section III C. We should expect such a model to describe the mesoscopic dewetting front discussed above. However, the theory is less suited to a description of the dewetting dynamics of the ultrathin postcursor film.\n\nThe dewetting of the ultrathin film of highly concentrated suspension may be described by a discrete stochastic model such as, for instance, a kinetic Monte Carlo (KMC) model based solely on evaporation/condensation dynamics of the solvent and diffusion of the solute [35, 39, 41]. The validity of this strong assumption regarding the relevant transport processes can be confirmed from an estimate based on Eq. (1): The pressure p = δF/δh drives convection and evaporation. The convective mobility is proportional to h 3 , i.e., it is large for thick films but decreases strongly with reduced film thickness. 
The evaporative mobility, however, is a constant, implying that evaporation will dominate below a certain (cross-over) thickness. For the parameter values of Ref. [57] and a small contact angle (≈ 0.01), the cross-over thickness is in the range of 1-5 nanometers. This estimate justifies the neglect of convective transport in a description of the postcursor film and may explain why one has such good agreement between the experimentally observed patterns and the patterns obtained from a purely two-dimensional (single layer) kinetic Monte Carlo model [35]. We introduce the KMC model below in Section III A.\n\nIn several respects, however, the kinetic Monte Carlo model is rather simplistic, limiting its potential applications. For instance, the thermodynamic chemical potential as well as any wetting interaction of the solvent with the substrate are collected in a single parameter – an effective chemical potential. This implies that any influence of a disjoining pressure is 'smeared out' over the whole system and that no distinction between the short- and the long-range parts of the disjoining pressure is possible. It is furthermore based on the assumption that evaporation/condensation is", - "page_start": 7, - "page_end": 7, - "source_file": "1001.2669.pdf" - }, - { - "text": "## Discussion\n\nCurrent theories of predictive processing and active inference assume that, to steer adaptive perception and action, the brain forms internal generative models of the environment and of the body within it. Various studies reveal that the brain has rich models of the body; for example, it integrates somatosensory and proprioceptive information into a coherent representation of things like body size and limb position—i.e. a \"body schema.\" More recently, this model-based perspective has been extended to interoception—and the rich sensations we constantly receive from the internal body. 
Theories of interoceptive processing propose that the brain continuously estimates key bodily and homeostatic variables, such as thirst or fatigue levels, perhaps forming something like an \"interoceptive schema.\"\n\nA key reason for forming bodily or interoceptive models is that they permit us to exert accurate control over the variety of signals (e.g. somatosensory and interoceptive) that the body produces. Forming an accurate body schema is prominent for motor control, whereas modeling interoceptive variables (e.g. thirst) is key to keeping them under control by engaging autonomic refexes (e.g. vasodilation) and allostatic or goal-directed actions (e.g. drinking) when they have incorrect values. The generative modeling perspective can also be extended hierarchically to consider richer models of multimodal experiences and \"embodied self\" that persists in time and anchors our experiences, permitting us to select adaptive courses of action to achieve our favorite goals.\n\nWhile it seems obvious that controlling bodily variables and achieving goals are crucial for survival, this perspective poses a fundamental challenge. In control theory and active inference, \"controlling\" the body ensures that the body generates the preferred outcomes with high (hedonic or pragmatic) value, e.g. safe levels for thirst and fatigue. This idea applies naturally to many of our activities that pursue some form of biologically adaptive function or well-being, such as ensuring that we keep our bodies healthy and consume good food (Sterling and Eyer 1988, Sterling 2012). However, it fails to explain why we engage in some activities that are apparently maladaptive and contradict our primary biological imperative to ensure body health. Perhaps the most puzzling examples are pathological behaviors (e.g. non-suicidal self-harm or starvation), which are common across psychopathological conditions. 
In these cases, the control exerted over the body and its sensations might serve the purpose of generating outcomes with high (hedonic or pragmatic) values that nevertheless run against our homeostatic and survival imperatives (e.g. pain and excessive levels of hunger).\n\nIn this article, we started with formal accounts of brain processing based on active inference to discuss the mechanisms and functional purpose of the (apparently) maladaptive ways to \"control the body\" that arise in these and other psychopathological behaviors. We frst discussed how we build models of the world, of our bodily and interoceptive processes, of our emotions, and of the embodied self, which provides a sense of understanding of reality and affords adaptive control at many levels, from the allostatic regulation of our physiological states to the achievement of our individual and social goals. Then, we discussed under which conditions we can become highly uncertain about our current state and the future course of action. These conditions include both contextual factors (e.g. periods of noteworthy changes or stress) and factors related to the person's internal models (e.g. poor models in which precision parameters are incorrectly set). We next turned to active inference and discussed how reducing uncertainty (not just maximizing utility) is a key imperative in this framework. This implies that an active inference agent can sometimes privilege uncertainty minimization over utility maximization. In extreme conditions, such as when interoceptive uncertainty is excessive or diffcult to reduce, a person could develop maladaptive strategies to deal with it, such as acting on the body to produce interoceptive sensations of pain or starvation that reduce interoceptive uncertainty.\n\nThe centrality of physiological processes and bodily information for the sense of self has been widely discussed by interoceptive research (Seth et al. 2012, Quigley et al. 2021). 
Here, in continuity with previous works (Barca and Pezzulo 2020), we suggest that (i) some pathological behaviors—that \"act on the body\" in maladaptive ways—might be considered as strategies for modifying internal models and the sense of self when it is defcient, through bodily sensations and (ii) the sense of self can be defcient when bodily information is uncertain, and this can happen not only in clinical conditions but also during pivotal periods of developmental transition, e.g. in adolescence.\n\nThe theoretical perspective offered here leaves several important questions unaddressed. First, even if uncertainty reduction might be a central drive in self-injury behaviors, it is unclear what kinds of uncertainty (if any) specifcally trigger the paradoxical behaviors. It may be only the uncertainty at deep hierarchical levels (e.g. at the level of self-models) that promotes paradoxical behaviors. Alternatively, it could be possible that it is not so much the kind of uncertainty that matters but somewhat its associated distress, which in turn could be amplifed by conditions like the intolerance of uncertainty. While these and alternative hypotheses remain to be tested in future research, they might in the future lead to novel tailored interventions. Current reviews of NSSI interventions (see, e.g. Turner et al. 2014, Witt et al. 2021) outline the various treatments currently available (e.g. psychological and psychosocial interventions, pharmacological treatments, and a combination of both), but underline the need for further data on their effectiveness. The use of formal models of brain function to characterize the mechanisms of psychopathology (Friston et al. 2014, Stephan and Mathys 2014) might help conceptualize dysfunctional behaviors in operationalizable terms. 
In this vein, one might delineate interventions aimed at reducing the uncertainty of self-models by starting from the bodily self and the defnition of self-other boundaries (if these turn out to be the critical aspects for the patient). In this endeavor, techniques such as virtual reality and robotics might help elucidate which levels of the multisensory integration process of the bodily self might be compromised (Dieguez and Lopez 2017, Tsakiris 2017, Serino et al. 2018). Virtual reality along with role-playing sessions and the use of avatars are increasingly considered effective tools for the training of clinicians who deal with individuals engaging in NSSI (Taliaferro et al. 2023). It remains to be tested whether the use of virtual reality or similar interventions—and the defnition of contexts and tasks aimed at reducing the uncertainty of the bodily self—might also be viable for individuals engaging in NSSI.\n\nSecond, in this paper, we have mainly focused on uncertainty reduction, but as we reviewed earlier, there are other alternative (or complementary) perspectives on the genesis of NSSI that considers elements such as affective regulation. In addition to the studies discussed earlier, other insights into the pathological mechanisms that might underlie NSSI come from the analysis of clinical populations. For example, dysregulations of the", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed1.pdf" - }, - { - "text": "**Figure 5. A** learning for the actual reward condition (reward condition left). The agent correctly learned the probability of receiving rewards in the rewarding arm. It did not learn the probabilities of the non-rewarding arm since it did not explore that option. The color grading signifies the likelihood of an observation being generated by a specific state. The more saturated the color, the higher the likelihood.\n\n## *4.3. 
Fitting the Model to the Data*\n\nSimulations are useful for a variety of purposes, like exploring the consequences of different priors and parameters and establishing the face validity of hypothetical mechanisms underlying behavioural phenomena. However, we often want to use models to make inferences about specific observed phenomena, like the differences in behaviour between various populations, as in computational psychiatry [14]. One standard method here is model fitting, where we estimate the parameter values (e.g., prior beliefs) of an AIF model that are the most likely given some observed behaviour of a participant. This is often performed with approximate Bayesian methods. In the cognitive and behavioural sciences, the predominant method is Markov Chain Monte Carlo (MCMC) methods [34], which are slower but in the limit can estimate parameter posteriors without making assumptions about their functional form. An alternative, which is more often used in other fields and also available in ActiveInference is variational methods, which are faster but require making assumptions about the functional form of the posterior. In general, MCMC methods are favourable when making parameter inferences (i.e., comparing parameters of the same model fitted to different data, like two groups of subjects). When performing a Bayesian model comparison (i.e., comparing different models fitted to the same data), the different approaches rely on different approximations of the model evidence, with the variational", - "page_start": 23, - "page_end": 23, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "## **Our Solutions**\n\n## **NETWORK SERVICES**\n\nEuronet's Network Services division provides complete solutions for management and outsourcing of distribution channels and transaction processing. 
These solutions include ATM networks, point-of-sale (POS) services and card management, as well as access to all major payment gateways and mobile operators.\n\n### **EFT AND PAYMENTS SOFTWARE**\n\nEuronet's suite of EFT and payment software offers one of the most secure, seamlessly integrated, real-time solutions for financial institutions. Integration is essential for delivering data and electronic transactions for multiple touchpoints, such as ATMs, POS devices, interactive voice response (IVR) systems, Internet and mobile devices.\n\n### **MOBILE OPERATOR SOLUTIONS**\n\nWith mobile phone ownership at an all time high, Euronet's mobile operator solutions provide their customers easy access to payment options. Our transactions expertise helps mobile operators supply consumers with the convenience of any time, any place transactions.\n\n## **Offerings**\n\n- ATM, POS and card outsourcing\n- Europe's largest independent ATM owner\n- Euronet transaction network Europe\n- Dash transaction network USA\n- Cakra transaction network Asia Pacific\n- ATM management\n- Bill payment\n- Credit card solutions\n- Debit card management\n- EMV support\n- POS and merchant management\n- Switching and settlement software\n- Telephone banking\n- Bank account access\n- Mobile phone recharge\n\n## **M- & E-COMMERCE SOLUTIONS**\n\nConsumers are expecting more personalized service than ever before with instant access to financial account information. Euronet's Account Access and Event Messaging products meet these demands with secure, efficient, integrated transaction and information delivery functions via mobile devices and the Internet.\n\n## **PROFESSIONAL SERVICES**\n\nEuronet Worldwide is uniquely qualified to offer professional consulting services because of our day-to-day expertise as a secure transaction provider. 
Euronet's Professional Services Organization (PSO) supports institutions with EDGE, our proprietary, structured and phased methodology for implementing solutions.\n\n- Account access\n- Bill payment\n- ePOS\n- Event messaging service\n- Internet banking\n- Design\n- Gap analysis\n- Implementation\n- Management\n- Planning\n- Purchasing", - "page_start": 7, - "page_end": 7, - "source_file": "NASDAQ_EEFT_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2648.pdf", - "query": "What is the principle of the liquid perturbation theory (LPT) ?", - "target_page": 2, - "target_passage": "The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the differ- ence between them treated as a perturbation in the ref- erence potential", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "scopic film. We have seen that the KMC model is able to describe the interplay of solute diffusion within the solvent and solvent evaporation/condensation. It also takes the liquid-liquid, liquidparticle and particle-particle interactions into account and therefore allows us to distinguish different regimes of the transverse (fingering) instability of the evaporative dewetting front: a transport regime where the instability is almost completely independent of the interaction strengths and a demixing regime where particles and liquid demix at the receding front thereby increasing its transverse instability.\n\nThe dynamical density functional theory describes the coupled dynamics of the density fields of the liquid and the nanoparticles. In the form described above (i.e. based on the two-dimensional hamiltonian (3)) we obtain a simple theory that allows us to study the time evolution of the evaporating ultrathin film and also to investigate the influence of processes such as surface diffusion by the liquid, which are not incorporated in the KMC model. 
However, it is straightforward to extend the theory to consider a fully three-dimensional fluid film, in which one can distinguish between short- and long-range interactions of solvent and/or solute with the substrate. We have, however, restricted the examples given here to situations that can also be described using the KMC model. A further exploration will be presented elsewhere.\n\nFinally, we have discussed a simple thin film model for the hydrodynamics on the mesoscale. It results from a long-wave approximation and consists of coupled evolution equations for the film thickness profile and the mean particle concentration. It has been used to discuss the self-pinning of receding contact lines that is related to the formation of rings of dried-in particles (coffeestain effect) that frequently occurs when films or drops of solutions or suspensions dewet by the combined effects of convection and evaporation.\n\nOne of the primary goals of researchers in this field, is the search for simple-to-use techniques that allow one to produce hierarchically structured functional layers for a wide range of applications such as, e.g., organic solar cells [98]. This means that the experiments advance very rapidly towards increasingly complex systems. 
For example, there have been investigations of the influence of the phase behaviour on the drying of droplets of a suspension of hard-sphere colloidal particles and non-adsorbing polymer [99], of the instabilities and the formation of drops in evaporating thin films of binary solutions [100] that may lead to treelike patterns [101], of effects of a secondary phase separation on evaporation-induced pattern formation in polymer films [102], and of the influence of an imposed flow on decomposition and deposition processes in a sliding ridge of evaporating solution of a binary polymer mixture [103] and of the influence of rather", - "page_start": 23, - "page_end": 23, - "source_file": "1001.2669.pdf" - }, - { - "text": "where γ is the liquid-gas surface tension and f(h) is a local free energy term that describes the wettability of the surface. Since µ corresponds to a chemical potential, the term µh may either bias the system towards the liquid or towards the gas state. The variation of F w.r.t. h gives the pressure. It contains the curvature (Laplace) pressure −γ∆h and the disjoining pressure Π(h) = −∂hf(h). Many different forms for the latter are in use (see, e.g., Refs. [4, 8, 63, 70–73]).\n\nFor the present system a thin film description using Eq. (1) is not appropriate because the nanoparticles are not taken into account. However, under certain conditions one can augment equation (1) for the evolution of the film thickness by coupling it to an equation for the evolution of the mean particle concentration. The resulting model is able to describe the behaviour of an evaporating solution on the meso- and macroscale. Such an approach is briefly discussed below in Section III C. We should expect such a model to describe the mesoscopic dewetting front discussed above. 
However, the theory is less suited to a description of the dewetting dynamics of the ultrathin postcursor film.\n\nThe dewetting of the ultrathin film of highly concentrated suspension may be described by a discrete stochastic model such as, for instance, a kinetic Monte Carlo (KMC) model based solely on evaporation/condensation dynamics of the solvent and diffusion of the solute [35, 39, 41]. The validity of this strong assumption regarding the relevant transport processes can be confirmed from an estimate based on Eq. (1): The pressure p = δF/δh drives convection and evaporation. The convective mobility is proportional to h 3 , i.e., it is large for thick films but decreases strongly with reduced film thickness. The evaporative mobility, however, is a constant, implying that evaporation will dominate below a certain (cross-over) thickness. For the parameter values of Ref. [57] and a small contact angle (≈ 0.01), the cross-over thickness is in the range of 1-5 nanometers. This estimate justifies the neglect of convective transport in a description of the postcursor film and may explain why one has such good agreement between the experimentally observed patterns and the patterns obtained from a purely two-dimensional (single layer) kinetic Monte Carlo model [35]. We introduce the KMC model below in Section III A.\n\nIn several respects, however, the kinetic Monte Carlo model is rather simplistic, limiting its potential applications. For instance, the thermodynamic chemical potential as well as any wetting interaction of the solvent with the substrate are collected in a single parameter – an effective chemical potential. This implies that any influence of a disjoining pressure is 'smeared out' over the whole system and that no distinction between the short- and the long-range parts of the disjoining pressure is possible. 
It is furthermore based on the assumption that evaporation/condensation is", - "page_start": 7, - "page_end": 7, - "source_file": "1001.2669.pdf" - }, - { - "text": "on the model (see above). The purely two-dimensional character of the KMC was extended to a 'pseudo three-dimensional' one by making the effective chemical potential dependent on the mean liquid coverage [38]. As the latter is related to a mean film thickness, this corresponds to the introduction of a 'global' thickness-dependent disjoining pressure into the evaporation term without an explicit consideration of a film thickness. The amended model can reproduce bimodal structures that are beyond the scope of the purely two-dimensional model [38, 39]. Fully threedimensional models are also discussed in the literature [76, 77].\n\n### B. Dynamical Density Functional theory\n\nThe limitations of the kinetic Monte Carlo model introduced in the previous Section are related to its character as a two-dimensional lattice gas with only three states: gas, liquid or particle. This implies that (i) no liquid can be transported to a site on the surface already filled with liquid, i.e., diffusion of the liquid can not be incorporated in a sensible way and (ii) one is not able to distinguish between the influence of the short- and the long-range parts of the interactions with the substrate, as all such interactions are absorbed into the effective chemical potential.\n\nHowever, using dynamical density functional theory (DDFT) [78–83] one can develop a model for the processes in the ultrathin postcursor film without these limitations, although here we limit ourselves to developing the theory at the level of the KMC and solely discuss how to extend it to incorporate the influence of the liquid diffusion over the surface. Such a DDFT model describes the coupled dynamics of the density fields of the liquid ρl and the nanoparticles ρn. 
The densities ρl and ρn are defined as the probabilities of finding a given lattice site on the surface to be occupied by a film of liquid or by a nanoparticle, respectively. Note that the probability densities correspond to number densities as we use the lattice spacing σ = 1 as our unit of length.\n\nTo develop the DDFT, one must first derive the underlying free energy functional F[ρl , ρn], and secondly, devise dynamical equations for both density fields that account for the conserved and the non-conserved aspects of their dynamics, i.e., transport and phase change processes, respectively. For a system governed by the hamiltonian (3), we may construct a mean-field (Bragg-Williams) approximation for the free energy of the system [78, 84] which contains an entropic contribution and contributions from the interactions between the different species (nanoparticles and liquid). The free energy is a semi-grand free energy, since the liquid is treated grand canonically (it is coupled to a reservoir with chemical potential µ), whereas the nanoparticles are treated in the", - "page_start": 13, - "page_end": 13, - "source_file": "1001.2669.pdf" - }, - { - "text": "the dominant dynamic process, but does not allow one to probe this assumption. In Section III B we show how one may develop a dynamical density functional theory (DDFT) that describes the system at a similar level to the KMC. However, the DDFT may also be easily extended to include other effects such as fluid diffusion, that the KMC does not incorporate.\n\n### A. Kinetic Monte Carlo model\n\nThe kinetic Monte Carlo model for two-dimensional dewetting nanofluids [33] was first proposed in Ref. [35] and extended to include next-nearest neighbour interactions in [37]. 
The two key assumptions used are: (i) the relevant processes can be mapped on to a two-dimensional lattice gas model, thereby neglecting continuous changes in the thickness of the evaporating film, and (ii) all relevant dynamics results from diffusing nanoparticles and evaporating/condensing solvent.\n\nThe model builds on an Ising-type model for the liquid-gas phase transition. The surface is divided up into a regular array of lattice sites whose size is dictated by the nanoparticles. One then considers each lattice site to be occupied either by a nanoparticle, liquid or vapour. This effectively maps the system onto a two-dimensional two-component lattice gas having two fields n and l. The resulting three possible states of a cell are: liquid (l = 1, n = 0), nanoparticle (l = 0, n = 1), and vapour (l = 0, n = 0, i.e., cell empty). The energy of an overall configuration is given by the hamiltonian\n\n$$E\\,=\\,-\\frac{\\varepsilon_{nn}}{2}\\sum_{}n_{i}n_{j}\\,-\\,\\frac{\\varepsilon_{nl}}{2}\\sum_{}n_{i}l_{j}\\,-\\,\\frac{\\varepsilon_{ll}}{2}\\sum_{}l_{i}l_{j}\\,-\\,\\mu\\sum_{i}l_{i}\\tag{3}$$\n\nwhere P denotes a sum over nearest neighbour pairs and εll, εnn and εnl are the liquid-liquid, particle-particle and liquid-particle interaction energies, respectively. Fixing the three interaction strength parameters εll, εnn, εnl and the effective chemical potential µ determines the equilibrium state of the system. We choose εll as unit of energy – i.e. we set εll = 1.\n\nThe hamiltonian determines the equilibrium state and the energy landscape of the system. However, as the system 'dries in' during the course of the solvent evaporation, the final nanoparticle configurations do not necessarily represent equilibrium structures. This implies that the system dynamics is of paramount importance. It is determined by the possible Monte Carlo moves, their relative frequencies, and the probabilities for their acceptance. 
Two types of moves are allowed: (i) evaporation/condensation of liquid and (ii) diffusion of nanoparticles within the liquid. A mobility M corresponds to the ratio of cycles of particle and solvent moves and reflects the physical ratio of", - "page_start": 8, - "page_end": 8, - "source_file": "1001.2669.pdf" - }, - { - "text": "is similar to the size of the nanoparticles. At a certain distance from the macroscopic front, the ultrathin film starts to evolve a locally isotropic pattern of holes. The holes themselves grow in an unstable manner resulting in an array of isotropically branched structures as shown, e.g., above in Fig. 1. This indicates that at least some of the patterns described in the literature may have arisen from processes in similar ultrathin 'postcursor' films.\n\nThe existence of the ultrathin 'postcursor' film is an experimental finding that can be drawn on when choosing a theoretical approach to account for the pattern formation (see below). Note however, that at the moment there exists no explanation for its existence. A possible hypothesis is that the substrate strongly attracts the nanoparticles. As a result they form a dense suspension layer having a thickness roughly equal to the diameter of the nanoparticles. The observed mesoscopic dewetting front then actually correspond to an autophobic dewetting of a low concentration suspension from the higher concentration suspension on the surface of the substrate.\n\n### III. MODELLING APPROACHES\n\nModels of dewetting thin films of pure liquids or polymers are often based on thin film hydrodynamics. Starting from the Stokes equations, together with continuity and boundary conditions at the substrate and free surface, one applies a long-wave approximation (assuming small surface slopes and contact angles) [8, 63] and obtains a non-linear evolution equation for the film thickness profile h(x, y, t). 
In the case of volatile liquids one finds [55–58, 64]\n\n$$\\partial_{t}h\\,=\\,\\nabla\\cdot\\left[Q_{\\mathrm{e}}\\nabla\\frac{\\delta F}{\\delta h}\\right]\\,-\\,Q_{\\mathrm{e}}\\frac{\\delta F}{\\delta h},\\tag{1}$$\n\nwith the mobility functions Qc(h) = h 3/3η ≥ 0 (assuming Poiseuille flow in the film and no slip at the substrate; η is the dynamic viscosity) and Qe ≥ 0 for the convective and evaporative part of the dynamics, respectively. Qe is a rate constant that can be obtained from gas kinetic theory or from experiment [57]. Note that Eq. (1) only applies if the pressure in the vapour above the film is close to the saturation pressure. For alternative expressions that are used to describe the non-conserved evaporative dynamics see, e.g., Refs. [56, 57, 65–69]. Finally, ∇ = (∂x, ∂y), and ∂t , ∂x and ∂y denote partial derivatives w.r.t. time and the coordinates.\n\nFocusing on the influence of capillarity and wettability only, the energy functional F[h] is given by\n\n$$F[h]\\,=\\,\\int dx\\int dy\\left[\\frac{\\gamma}{2}(\\nabla h)^{2}+f(h)-\\mu h\\right]\\tag{2}$$", - "page_start": 6, - "page_end": 6, - "source_file": "1001.2669.pdf" - }, - { - "text": "small holes. The competition for space results in a fine-meshed polygonal network of nanoparticle deposits. The concentration of particles is much higher at the network nodes – an effect that can not been seen within the KMC model. As the particles attract the liquid there remains some liquid on the substrate where the nanoparticles are.\n\nFig. 5 gives snapshots of the evolution of a fingering instability for a retracting dewetting front. At early times the straight front shows a rather short-wave instability, about 16 wiggles can be seen. However, they are only a transient: the finger pattern coarsens rapidly till only about 7 fingers remain. 
The fingering then becomes stationary, i.e., just as in the KMC, the mean finger number remains constant, although new branches are continuously created and old branches join each other. In general, the results on fingering agree well with results obtained using the KMC model [41]. From this we conclude that jamming of discrete particles is not a necessary factor for causing the instability, since the fingering is seen here in a continuum model with a diffusion constant that is independent of the nanoparticle concentration. The DDFT is better suited than the KMC for investigations of the early instability stages: they are more easy to discern without the discrete background noise of the KMC. Furthermore, one may perform a linear stability analysis of the one-dimensional undisturbed streamwise front profiles with respect to transverse perturbations (in analogy to the approach used in Refs. [19, 86, 87]).\n\n## C. Thin film hydrodynamics\n\nThe previous two sections focused on two approaches to describe the experimentally observed patterning dynamics in the ultrathin postcursor film left behind by a mesoscopic receding dewetting front. Although both the kinetic Monte Carlo model and the dynamical density functional theory are able to describe well the processes in the ultrathin film, they can not be employed to describe mesoscale hydrodynamics. A relatively simple model for the latter can be derived in the framework of a long-wave or lubrication equation [8, 63]. We will illustrate here the approach by considering an isothermal situation where the nanoparticles are not surface active, i.e., they do not act as surfactants. For a model incorporating the effects of latent heat generation and surfaceactive particles resulting in thermal and solutal Marangoni stresses, see Ref. [88]. A description of spreading particle solutions incorporating a structural disjoining pressure has also been considered [89]. 
For related work on particle-laden film flow on an incline see Refs. [90, 91].\n\nOne starts from the Stokes equations, together with continuity, no-slip boundary conditions at the", - "page_start": 17, - "page_end": 17, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [65] J. P. Burelbach, S. G. Bankoff, and S. H. Davis, \"Nonlinear stability of evaporating/condensing liquid films,\" J. Fluid Mech. 195, 463–494 (1988).\n- [66] A. Oron and S. G. Bankoff, \"Dewetting of a heated surface by an evaporating liquid film under conjoining/disjoining pressures,\" J. Colloid Interface Sci. 218, 152–166 (1999).\n- [67] L. W. Schwartz, R. V. Roy, R. R. Eley, and S. Petrash, \"Dewetting patterns in a drying liquid film,\" J. Colloid Interface Sci. 214, 363–374 (2001).\n- [68] K. Kargupta, R. Konnur, and A. Sharma, \"Spontaneous dewetting and ordered patterns in evaporating thin liquid films on homogeneous and heterogeneous substrates,\" Langmuir 17, 1294–1305 (2001).\n- [69] M. Bestehorn and D. Merkt, \"Regular surface patterns on Rayleigh-Taylor unstable evaporating films heated from below,\" Phys. Rev. Lett. 97, 127802 (2006).\n- [70] G. F. Teletzke, H. T. Davis, and L. E. Scriven, \"Wetting hydrodynamics,\" Rev. Phys. Appl. 23, 989– 1007 (1988).\n- [71] J. N. Israelachvili, *Intermolecular and Surface Forces*, Academic Press, London (1992).\n- [72] V. S. Mitlin, \"Dewetting of solid surface: Analogy with spinodal decomposition,\" J. Colloid Interface Sci. 156, 491–497 (1993).\n- [73] L. M. Pismen and Y. Pomeau, \"Disjoining potential and spreading of thin liquid layers in the diffuse interface model coupled to hydrodynamics,\" Phys. Rev. E 62, 2480–2492 (2000).\n- [74] L. Onsager, \"Crystal statistics. I. A two-dimensional model with an order-disorder transition,\" Phys. Rev. 65, 117–149 (1944).\n- [75] G. Reiter, \"Unstable thin polymer films: Rupture and dewetting processes,\" Langmuir 9, 1344–1351 (1993).\n- [76] C. G. Sztrum, O. Hod, and E. 
Rabani, \"Self-assembly of nanoparticles in three-dimensions: Formation of stalagmites,\" J. Phys. Chem. B 109, 6741–6747 (2005).\n- [77] G. Yosef and E. Rabani, \"Self-assembly of nanoparticles into rings: A lattice-gas model,\" J. Phys. Chem. B 110, 20965–20972 (2006).\n- [78] J. F. Gouyet, M. Plapp, W. Dieterich, and P. Maass, \"Description of far-from-equilibrium processes by mean-field lattice gas models,\" Adv. Phys. 52, 523–638 (2003).\n- [79] U. M. B. Marconi and P. Tarazona, \"Dynamic density functional theory of fluids,\" J. Chem. Phys. 110, 8032–8044 (1999).\n- [80] U. M. B. Marconi and P. Tarazona, \"Dynamic density functional theory of fluids,\" J. Phys.-Condes. Matter 12, A413–A418 (2000).", - "page_start": 29, - "page_end": 29, - "source_file": "1001.2669.pdf" - }, - { - "text": "# Abstract\n\nWe review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. In particular, we focus on (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nModels (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed.\n\nFinally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. 
We employ coupled evolution equations for the film thickness profile and mean particle concentration. The model is used to discuss the self-pinning and de-pinning of a contact line related to the 'coffee-stain' effect.\n\nIn the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.\n\nThe paper is published in: *J. Phys.-Cond. Mat.* 21, 264016 (2009), in the Volume \"Nanofluids on solid substrates\" and can be obtained at http://dx.doi.org/10.1088/0953-8984/21/26/264016", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2669.pdf" - }, - { - "text": "time scales for evaporation and diffusion. A large mobility M indicates fast diffusion as compared to evaporation. A trial move is accepted with the probability pacc = min[1, exp(−∆E/kT)] where k is the Boltzmann constant, T the temperature and ∆E is the change in energy resulting from the potential move. Note that particles are only allowed to move into wet areas of the substrate, i.e., onto cells with l = 1. This models zero diffusivity of the particles on a dry substrate. The replaced liquid fills the site left by the nanoparticle.\n\nWithout nanoparticles, the behaviour of the model is well known as it reduces to the classical two-dimensional Ising model [74]. For kT < kTc ≈ 0.567 liquid and vapour coexist when µ = µcoex = −2. For µ > −2 [µ < −2] eventually the liquid [vapour] dominates. A straight liquidgas interface will recede [advance] for µ < −2 [µ > −2], i.e. one finds evaporative dewetting [wetting] fronts. If one starts, however, with a substrate covered homogeneously by the liquid, for µ < −2 the film will dewet via a nucleation or spinodal-like process. If the nanoparticles are present, they form dried-in structures when all the liquid evaporates. The final structures do not normally change any further – at least on short time scales. However, if the liquid wets the particles (i.e. 
is attracted to the particles), over long times there might be a coarsening of the structures, facilitated by the adsorbed liquid. The dried-in patterns depend on the particular pathway taken by the evaporative dewetting process. They range from labyrinthine to polygonal network structures or holes in a dense particle layer. Some typical patterns are displayed in Fig. 2, for cases when the average surface coverage of the nanoparticles ρ av n = 0.2. Panels (a) and (b) result from a spinodal-like and nucleation and growth process, respectively. At first sight they look very similar to the patterns seen for the pure solvent and one might argue that the particles solely act as passive tracers and preserve the transient volatile dewetting structures of the solvent. This was suggested in Refs. [26–28] for dewetting collagen solutions. However, panels (c) and (d) indicate that the particles may at times play a rather more significant role. When the diffusion of the particles is slow, the evaporative dewetting fronts become transversely unstable and may result in strongly ramified patterns. This instability is caused by the nanoparticles. The lower their mobility, the stronger the fingering effect, i.e., there are more fingers in (c) than in (d) because in the latter the mobility is larger.\n\nThe front instability is intriguing as it results in strongly branched structures. As the dewetting front moves, new branches are continuously created and existing branches merge at the moving contact line. However, the mean finger number in the streamwise direction of the resulting ramified pattern is a constant. This behaviour is in contrast to the front instabilities found for dewetting", - "page_start": 9, - "page_end": 9, - "source_file": "1001.2669.pdf" - }, - { - "text": "distance between particle clusters resulting from the demixing process that occurs already in the bulk liquid and is not related to the front instability at all. 
Note that one finds a similar sequence of regimes (i) to (iv) when increasing the particle-particle interaction strengths for fixed εnl (see Ref. [41]) for further details.\n\nFIG. 3: (Colour online) Dependence of the mean finger number left behind by the unstable dewetting front on the particle-liquid interaction strength εnl. The regions marked (i) to (iv) are discussed in the main text. The insets display typical snapshots obtained in the four different regions. Particles are black, liquid is grey (green online) and the empty substrate is white. The remaining parameters are kT = 0.2, M = 20, µ = −2.2, ρ av n = 0.1, nn = 2.0, domain size 1200 × 1200. For the insets, from left to right, nl = 1.2, 1.4, 1.45, 1.8.\n\nWe note also that the fingering process may be viewed as self-optimising the front motion – i.e. the front keeps its average velocity constant by expelling particles into the fingers. A similar effect exists for dewetting polymer films [18], where liquid is expelled from the growing moving rim which collects the dewetted polymer. There, the surplus liquid is left on the surface as a droplet pattern.\n\nThe kinetic Monte Carlo model is a very useful tool that helps one to understand the pattern formation in drying nanoparticle suspensions. One has, however, to keep in mind the restrictions", - "page_start": 12, - "page_end": 12, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HIG_2001.pdf", - "query": "By how much did the Hartford group's link to AARP website account concerning buisness made over the internet ?", - "target_page": 16, - "target_passage": "In 2001 the company’s link to AARP’s Web site accounted for much of the $55 million worth of auto business The Hartford generated over the Internet", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "most dynamic sources of business growth. 
In 2001 the company's link to AARP's Web site accounted for much of the $55 million worth of auto business The Hartford generated over the Internet.\n\nBecause The Hartford quotes and issues this business online (and added online billing in 2001), acquisition and processing costs are 15 to 20 percent lower than those of traditional direct-marketing or face-toface sales. Because of this and other factors, the expense ratio for AARP business is 30 percent below that of the industry in general. And the customer renewal rate is 96 percent, versus the industry's 88 percent, making the AARP program yield some of the most profitable auto business The Hartford writes.\n\nThe relationship also has The Hartford thinking ahead toward new business and an even stronger relationship with AARP members. The Hartford can crossmarket auto insurance to homeowner's customers and homeowner's insurance to auto customers, which presents a tremendous growth opportunity. In addition,\n\nThe Hartford is committed to providing value to AARP members in many ways. An example: The Hartford and AARP work with the MIT Age Lab to produce information—available in print and on both partners' Web sites—advising AARP members about Alzheimer's disease and other forms of dementia as they affect driving ability. The information guides caregivers struggling with difficult decisions about family members' safety behind the wheel. The resource—a customer solution like no other—helps enhance the superior value The Hartford provides to AARP members.\n\nAlthough it's the most comprehensive, the AARP relationship isn't The Hartford's only affinity program. The company also has affinity arrangements with USAA and other companies. 
Regardless of the program's size, the affinity partners share the right qualities: strong name-brand recognition, first-class marketing and a broad and loyal customer base.\n\nIn other words, they share some of The Hartford's core attributes.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "**\"P**artnering\" is a popular business buzzword that may vanish as quickly as it appeared. The Hartford's partnerships, on the other hand, are built for the long term and have played a major role in the company's growth and success.\n\nThe company enjoys outstanding partnerships with several of the world's top asset managers. It also values its thousands of relationships with financial intermediaries such as large broker-dealers, banks and independent financial planners—and with affinity partners who extend The Hartford's reach into large, growing markets.\n\n\"A lot of people talk about having the right partners, but The Hartford views it differently from most,\" says Gary Trippe, CEO of Fort Myers, Fla., propertycasualty agency Oswald, Trippe and Company, Inc. \"They look for partners who share their core values, and the relationship is based on trust and respect. It's all about compatibility.\" Trippe should know. His agency writes three times as much business with The Hartford, in both personal and commercial lines, as it writes with any other insurer.\n\nMutually beneficial partnerships with successful businesses of all sizes are the foundation of The Hartford's business model.\n\nPerhaps no relationship represents shared values and shared success better than the one with AARP, which signed a new eight-year contract with The Hartford that began Jan. 1, 2002. The AARP insurance program with The Hartford is a model of affinity marketing and distribution savvy. AARP's membership those age 50 and over—is the fastest-growing segment of the U.S. population. 
Computer use among this group is growing by an estimated 20 percent per year, and the population segment respects established brands and seeks value, convenience and extraordinary service.\n\nThat right combination of factors helps make AARP's World Wide Web site one of The Hartford's", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "**N**ew technology tools made The Hartford Experience customer solutions, ease of doing business and extraordinary service—more real than ever for our customers in 2001.\n\nIt was a year that saw the debut of life operations' Hartford Investor Web portal, expanded Web portals for group benefits administrators, and enhancements to technology for The Hartford's property-casualty agents and customers.\n\nHartford Investor is both a versatile personal assistant and an aid in wholesaling, especially for the independent financial planner channel. Broker-dealers and financial advisors can use it to research The Hartford's full complement of individual life and investment products, update their books of business in seconds, track daily fund performance, run financialplanning models, receive online product training, produce customized presentations and even submit business electronically.\n\nIn short, the portal allows The Hartford to bring products and functions from a variety of sources into one convenient online environment.\n\nHartford Investor has two strategic objectives: One, deepen current intermediaries' loyalty to The Hartford by extending The Hartford Experience right to their desktops. Two, expand the network of intermediaries by giving them the technological support they need to grow their businesses.\n\nMore than 153,000 licensed intermediaries—from solo advisors to members of large financial institutions—are appointed to sell The Hartford's products. Yet fewer than 60,000 actively write business for the company. 
The untapped potential is vast, especially among independents, the fastest-growing distribution channel and the only one in which The Hartford doesn't hold the largest market share.\n\nThat's bound to change. With Hartford Investor available on their desktops, intermediaries will have far", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "#### **Corporate Information**\n\n**Corporate Headquarters** The Hartford Financial Services Group, Inc. 690 Asylum Avenue Hartford, Connecticut 06115 860-547-5000\n\n#### **Internet Address**\n\nhttp://www.thehartford.com\n\n#### **Annual Meeting**\n\nShareholders are cordially invited to attend The Hartford's Annual Meeting of Shareholders, which will be held on Thursday, April 18, 2002 at 9:00a.m. in the Wallace Stevens Theater at The Hartford Financial Services Group, Inc.'s home office at 690 Asylum Avenue, Hartford, Connecticut. Shareholders of record as of February 28, 2002 are entitled to notice of, and to vote at, the Annual Meeting.\n\n#### **Form 10-K and Other Information**\n\nShareholders may receive, without charge, a copy of The Hartford's Form 10-K (without exhibits) filed with the Securities and Exchange Commission for the year ended December 31, 2001 by contacting 1-888-FACT-HIG. Forms 10-Q, press releases, and other shareholder communications are also available through this toll-free number.\n\n#### **Transfer Agent/Shareholder Records**\n\nFor information or assistance regarding stock records, dividend checks or stock certificates, please contact The Hartford's transfer agent:\n\nThe Bank of New York Shareholder Relations Department–11E P.O. Box 11258 Church Street Station New York, NY 10286 800-254-2823\n\nTo send certificates for transfer and address changes:\n\nThe Bank of New York Receive and Deliver Department–11W P.O. 
Box 11002 Church Street Station New York, NY 10286\n\nAddress inquiries about The Hartford's Dividend Reinvestment and Cash Payment Plan to:\n\nThe Bank of New York Dividend Reinvestment Department P.O. Box 1958 Newark, NJ 07101-9774\n\nE-mail: shareowner-svcs@bankofny.com\n\nInternet address: www.stockbny.com\n\n#### **Investor Relations**\n\nThe Hartford Financial Services Group, Inc. Hartford Plaza, HO-1-01 Hartford, Connecticut 06115 Attn: Investor Relations 860-547-2537\n\n#### **Media Inquiries**\n\nThe Hartford Financial Services Group, Inc. Media Relations Hartford Plaza, T-12-56 Hartford, CT 06115 860-547-5200\n\n**Common Stock and Dividend Information**\n\nThe Hartford's common stock is traded on the New York Stock Exchange (NYSE) under the trading symbol \"HIG.\" The following table presents the high and low closing prices for the common stock of The Hartford on the NYSE for the periods indicated, and the quarterly dividends declared per share.\n\n| | Common Stock Price | | Dividends |\n| --- | --- | --- | --- |\n| | High | Low | Declared |\n| 2001 | | | |\n| First quarter | $ 67.75 | $ 55.15 | $0.25 |\n| Second quarter | 70.46 | 56.88 | 0.25 |\n| Third quarter | 69.28 | 50.10 | 0.25 |\n| Fourth quarter | 62.83 | 53.91 | 0.26 |\n| 2000 | | | |\n| First quarter | $ 52.75 | $ 29.38 | $0.24 |\n| Second quarter | 64.00 | 44.25 | 0.24 |\n| Third quarter | 73.75 | 56.38 | 0.24 |\n| Fourth quarter | 79.31 | 65.44 | 0.25 |\n\nAs of February 28, 2002 there were approximately 120,000 shareholders of The Hartford.", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "- - *John Belisle, right, is senior vice president of Oswald, Trippe and Company, Inc. in Fort Myers, Fla., one of The Hartford's largest sellers of Select Customer commercial insurance. 
David van der Merwe, president of electronics manufacturer Saftronics, Inc., depends on him for reliable counsel, as well as products tailored to Saftronics' business.*\n- *The Hartford signed a new eightyear contract, beginning Jan.1, 2002, to continue its highly successful relationship with AARP. Property & Casualty Operations President and CEO Dave Zwiener, second from left, works closely with, left to right, Bill Farris, director, financial products, AARP Services, Inc.; Leisha Spaulding, manager, financial products, AARP Services, Inc.; and Steve Zaleznick, CEO, AARP Services, Inc.*", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "| The Hartford Financial Services Group, Inc. |\n| --- |\n| Hartford Plaza, 690 Asylum Avenue |\n| Hartford, Connecticut 06115 |\n\n*There's only*\n\n**The Hartford Financial Services Group, Inc. 2001 Summary Annual Report**\n\n*to run a business...*", - "page_start": 39, - "page_end": 39, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "- *The Hartford's acquisition of Fortis Financial Group in 2001 enhanced the company's market share and distribution advantage. Most importantly, the acquisition brought into The Hartford's family powerful sales professionals like Allen Chinoy of Darien, Ill., left, the nation's fifthleading producer of The Hartford's variable universal life insurance. Chinoy is a vocal supporter of Hartford Investor, which makes it easier for him to show customers such as Dr. Dilip Patel how his portfolio is performing.*\n- *Joe Smith, right, and Kim Connolly, left, are a brother-sister team heading Smith Brothers Insurance, Inc. of Glastonbury, Conn. These VIP agents are enthusiastic users of The Hartford's Electronic Business Center (EBC) and other technological tools for propertycasualty agents. 
They piloted the EBC and have given valuable feedback to Senior Commercial Underwriter Tracey Kamenash and others at The Hartford to help develop the EBC standards and navigational model.*", - "page_start": 23, - "page_end": 23, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "Intermediary Service Award and the first-ever Life Insurance Service Award. The triple win reflected the overall excellence of The Hartford's service, a natural complement to the company's quality products. DAL-BAR also recognized The Hartford's mutual funds as the industry leader in several categories, including investment management.\n\nIn managing its product portfolio, The Hartford follows its own advice: think ahead and diversify. The company's earnings base derives from a variety of businesses. Diversification is a key element in managing risk and ensuring profitability—a time-tested philosophy that held especially true in 2001, as the company's other businesses evolved to anticipate changing market demands and to offer protection from new risks.\n\nThe property-casualty Business Insurance group, for example, extended its coverage to include common risks associated with e-commerce. Hartford Financial Products' (HFP) coverage continued to meet emerging risks in an extremely volatile business environment.\n\nThe Hartford helped customers manage risk by developing a new category of commercial coverage called CyberFlex.TM This targets the previously unmet needs of small and mid-sized businesses that are integrating the Internet and other communications tools into their regular operations.\n\nA 2001 survey by The Hartford revealed that 80 percent of small and mid-sized businesses weren't sure if their current insurance policies covered specific—and increasingly common—risks such as e-mail viruses, Web site business interruption and online copyright infringement. 
CyberFlex coverage protects middle-market and small-business policyholders against the risk of those potentially debilitating conditions.\n\nCyberFlex is part of a broad array of industryspecific coverages in The Hartford's SPECTRUM® business-owner's policy, including protection against employment practices liability, equipment breakdown and business interruption. As the economic environment changes rapidly, The Hartford thinks ahead by providing those flexible coverages. And the company's", - "page_start": 19, - "page_end": 19, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "**The Hartford Financial Services Group, Inc.**\n\n**Hartford Plaza, 690 Asylum Avenue**\n\n**Hartford, Connecticut 06115**\n\nFORM 100025[2001]", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "streamlined product-development process maximizes speed-to-market so agents have the right products to sell at the right time. That's one reason why we estimate The Hartford's small-business insurance growth is five to six times the industry average.\n\nDeveloping products for a changing business environment is also a proven skill of HFP. The unit completed its first full year as part of The Hartford after our 2000 acquisition of Reliance Group Holdings, Inc.'s financial products and excess and surplus lines.\n\nIt was quite a year after quite a decade. Demand for HFP's mainstay directors and officers liability insurance was high during the 1990s as the number of U.S. public corporations tripled. Amid the past year's corporate retrenchment, loss activity led to industrywide premium price increases of up to 30 percent. A flight to quality was inevitable under such conditions, and a strong brand and superior ratings helped HFP distance itself from lesser competitors. Even the horrific collapse of its World Trade Center headquarters couldn't hold HFP back in 2001. 
It renewed $43 million worth of business in September alone, fulfilling its commitment to protecting customers against uncertainty.\n\n- *A strong brand and superior ratings help Hartford Financial Products (HFP) differentiate its directors and officers liability insurance from those of competitors. HFP's Boston Regional Manager Doreen Lukowski-Rizza*\n*works with HFP Underwriting Manager David Garrison, far right, and financial professionals such as William Gallagher Associates President and CEO Philip Edmundson, second from left, and Principal Richard Leavitt.*\n\n- *Hartford Investment Management Co., which specializes in fixedincome asset management, has nearly $75 billion under management. Marcie Hayden, money market trader, and Peter Perrotti, government portfolio manager, are two members of a professional organization whose annual trading volume exceeds $50 billion.*", - "page_start": 20, - "page_end": 20, - "source_file": "NYSE_HIG_2001.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HIG_2001.pdf", - "query": "How many licensed intermediaries did Hartford group have in 2001 ?", - "target_page": 23, - "target_passage": "More than 153,000 licensed intermediaries", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "**N**ew technology tools made The Hartford Experience customer solutions, ease of doing business and extraordinary service—more real than ever for our customers in 2001.\n\nIt was a year that saw the debut of life operations' Hartford Investor Web portal, expanded Web portals for group benefits administrators, and enhancements to technology for The Hartford's property-casualty agents and customers.\n\nHartford Investor is both a versatile personal assistant and an aid in wholesaling, especially for the independent financial planner channel. 
Broker-dealers and financial advisors can use it to research The Hartford's full complement of individual life and investment products, update their books of business in seconds, track daily fund performance, run financialplanning models, receive online product training, produce customized presentations and even submit business electronically.\n\nIn short, the portal allows The Hartford to bring products and functions from a variety of sources into one convenient online environment.\n\nHartford Investor has two strategic objectives: One, deepen current intermediaries' loyalty to The Hartford by extending The Hartford Experience right to their desktops. Two, expand the network of intermediaries by giving them the technological support they need to grow their businesses.\n\nMore than 153,000 licensed intermediaries—from solo advisors to members of large financial institutions—are appointed to sell The Hartford's products. Yet fewer than 60,000 actively write business for the company. The untapped potential is vast, especially among independents, the fastest-growing distribution channel and the only one in which The Hartford doesn't hold the largest market share.\n\nThat's bound to change. With Hartford Investor available on their desktops, intermediaries will have far", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "| The Hartford Financial Services Group, Inc. |\n| --- |\n| Hartford Plaza, 690 Asylum Avenue |\n| Hartford, Connecticut 06115 |\n\n*There's only*\n\n**The Hartford Financial Services Group, Inc. 
2001 Summary Annual Report**\n\n*to run a business...*", - "page_start": 39, - "page_end": 39, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "**The Hartford Financial Services Group, Inc.**\n\n**Hartford Plaza, 690 Asylum Avenue**\n\n**Hartford, Connecticut 06115**\n\nFORM 100025[2001]", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "- *The Hartford's acquisition of Fortis Financial Group in 2001 enhanced the company's market share and distribution advantage. Most importantly, the acquisition brought into The Hartford's family powerful sales professionals like Allen Chinoy of Darien, Ill., left, the nation's fifthleading producer of The Hartford's variable universal life insurance. Chinoy is a vocal supporter of Hartford Investor, which makes it easier for him to show customers such as Dr. Dilip Patel how his portfolio is performing.*\n- *Joe Smith, right, and Kim Connolly, left, are a brother-sister team heading Smith Brothers Insurance, Inc. of Glastonbury, Conn. These VIP agents are enthusiastic users of The Hartford's Electronic Business Center (EBC) and other technological tools for propertycasualty agents. They piloted the EBC and have given valuable feedback to Senior Commercial Underwriter Tracey Kamenash and others at The Hartford to help develop the EBC standards and navigational model.*", - "page_start": 23, - "page_end": 23, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "#### **Corporate Information**\n\n**Corporate Headquarters** The Hartford Financial Services Group, Inc. 690 Asylum Avenue Hartford, Connecticut 06115 860-547-5000\n\n#### **Internet Address**\n\nhttp://www.thehartford.com\n\n#### **Annual Meeting**\n\nShareholders are cordially invited to attend The Hartford's Annual Meeting of Shareholders, which will be held on Thursday, April 18, 2002 at 9:00a.m. 
in the Wallace Stevens Theater at The Hartford Financial Services Group, Inc.'s home office at 690 Asylum Avenue, Hartford, Connecticut. Shareholders of record as of February 28, 2002 are entitled to notice of, and to vote at, the Annual Meeting.\n\n#### **Form 10-K and Other Information**\n\nShareholders may receive, without charge, a copy of The Hartford's Form 10-K (without exhibits) filed with the Securities and Exchange Commission for the year ended December 31, 2001 by contacting 1-888-FACT-HIG. Forms 10-Q, press releases, and other shareholder communications are also available through this toll-free number.\n\n#### **Transfer Agent/Shareholder Records**\n\nFor information or assistance regarding stock records, dividend checks or stock certificates, please contact The Hartford's transfer agent:\n\nThe Bank of New York Shareholder Relations Department–11E P.O. Box 11258 Church Street Station New York, NY 10286 800-254-2823\n\nTo send certificates for transfer and address changes:\n\nThe Bank of New York Receive and Deliver Department–11W P.O. Box 11002 Church Street Station New York, NY 10286\n\nAddress inquiries about The Hartford's Dividend Reinvestment and Cash Payment Plan to:\n\nThe Bank of New York Dividend Reinvestment Department P.O. Box 1958 Newark, NJ 07101-9774\n\nE-mail: shareowner-svcs@bankofny.com\n\nInternet address: www.stockbny.com\n\n#### **Investor Relations**\n\nThe Hartford Financial Services Group, Inc. Hartford Plaza, HO-1-01 Hartford, Connecticut 06115 Attn: Investor Relations 860-547-2537\n\n#### **Media Inquiries**\n\nThe Hartford Financial Services Group, Inc. 
Media Relations Hartford Plaza, T-12-56 Hartford, CT 06115 860-547-5200\n\n**Common Stock and Dividend Information**\n\nThe Hartford's common stock is traded on the New York Stock Exchange (NYSE) under the trading symbol \"HIG.\" The following table presents the high and low closing prices for the common stock of The Hartford on the NYSE for the periods indicated, and the quarterly dividends declared per share.\n\n| | Common Stock Price | | Dividends |\n| --- | --- | --- | --- |\n| | High | Low | Declared |\n| 2001 | | | |\n| First quarter | $ 67.75 | $ 55.15 | $0.25 |\n| Second quarter | 70.46 | 56.88 | 0.25 |\n| Third quarter | 69.28 | 50.10 | 0.25 |\n| Fourth quarter | 62.83 | 53.91 | 0.26 |\n| 2000 | | | |\n| First quarter | $ 52.75 | $ 29.38 | $0.24 |\n| Second quarter | 64.00 | 44.25 | 0.24 |\n| Third quarter | 73.75 | 56.38 | 0.24 |\n| Fourth quarter | 79.31 | 65.44 | 0.25 |\n\nAs of February 28, 2002 there were approximately 120,000 shareholders of The Hartford.", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "# *products & services*\n\n**H**ow do you secure the future when the present is puzzling enough? It's a big challenge, and The Hartford's primary objective. Everything we do is designed to help our customers deal with the uncertainties that lie ahead.\n\nThe Hartford believes the best way to secure the future is to provide customers with the right products, and then back those products with outstanding performance and great service. Staying focused on this objective was never more important—or more challenging—than in 2001.\n\nTrue to form, The Hartford's life operations' annuities and mutual funds delivered high-quality performance in a time of market turmoil. Despite an anemic stock market, 87 percent of the funds in The Hartford's Director variable annuity remained in the first or second quartile of three-year returns within the Lipper Peer Group in 2001. 
Sixty-four percent of the funds in the Leaders suite of annuities and 91 percent of The Hartford's mutual funds remained in the first or second quartile over the three-year period.\n\nThe ability to deliver that kind of performance can be traced to our money managers—Wellington Management Co., American Funds, Franklin Templeton Investments, MFS Investment Management, AIM Funds Management, Inc., Putnam Investment Management and The Hartford's own Hartford Investment Management Co.\n\nAll of The Hartford's money managers have years of experience and are among the most respected firms in the industry. Their experience and expertise were especially important during the market volatility we saw in 2001. They always stay focused on long-term performance, which is the true measuring stick of The Hartford's value to its customers.\n\nBesides outstanding products and excellent management, great service is a critical component in delivering the right solutions to our customers. In 2001, The Hartford won an unprecedented sixth consecutive DALBAR Annuity Service Award, as well as the", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "the New York metropolitan area. In order to speed the payment of claims, GBD employees immediately contacted customers with offices in the towers and worked with industry organizations to expedite the issuing of death certificates.\n\nThe Hartford's individual life operations scoured airline manifests and missing-persons lists, looking for names of customers. When they spotted a potential match, they called agents to alert them to a possible claim and provided tips on how to proceed.\n\nFuture generations will measure the full impact of Sept. 11. But at The Hartford, one thing is known already. 
As they did after disasters such as the New York fire of 1835, the Chicago fire of 1871 and the 1906 San Francisco earthquake, The Hartford's people in 2001 ran their business the only way they know how the right way. They put customers first and kept promises. In so doing, they helped lay the foundation for a more confident future.\n\n- *New York employees admire a painting depicting the courage and resilience of The Hartford employees and the New York rescue teams. The montage, which now hangs in the lobby of The Hartford's New York offices, was painted by Andy Yelenak of The Hartford's Information Technology department.*\n- *The Hartford's New York staff got their businesses back up and running in less than a week after the Sept. 11 attack, despite the destruction of their offices. Among those who were instrumental in getting 330 employees situated in temporary office space were, left to right, Lucille T. Sgaglione, vice president, Hartford Financial Products; Linda Banks, administrative assistant, office support*\n\n*services, Business Insurance; Holly McCalmont, human resources manager, Business Insurance; Jim Norris, business technology solutions manager, Business Insurance; Craig Lowenthal, first vice president and chief information officer, Hartford Financial Products; and Susan Miranda, support services manager, Hartford Specialty Co.*", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "**\"P**artnering\" is a popular business buzzword that may vanish as quickly as it appeared. The Hartford's partnerships, on the other hand, are built for the long term and have played a major role in the company's growth and success.\n\nThe company enjoys outstanding partnerships with several of the world's top asset managers. 
It also values its thousands of relationships with financial intermediaries such as large broker-dealers, banks and independent financial planners—and with affinity partners who extend The Hartford's reach into large, growing markets.\n\n\"A lot of people talk about having the right partners, but The Hartford views it differently from most,\" says Gary Trippe, CEO of Fort Myers, Fla., propertycasualty agency Oswald, Trippe and Company, Inc. \"They look for partners who share their core values, and the relationship is based on trust and respect. It's all about compatibility.\" Trippe should know. His agency writes three times as much business with The Hartford, in both personal and commercial lines, as it writes with any other insurer.\n\nMutually beneficial partnerships with successful businesses of all sizes are the foundation of The Hartford's business model.\n\nPerhaps no relationship represents shared values and shared success better than the one with AARP, which signed a new eight-year contract with The Hartford that began Jan. 1, 2002. The AARP insurance program with The Hartford is a model of affinity marketing and distribution savvy. AARP's membership those age 50 and over—is the fastest-growing segment of the U.S. population. Computer use among this group is growing by an estimated 20 percent per year, and the population segment respects established brands and seeks value, convenience and extraordinary service.\n\nThat right combination of factors helps make AARP's World Wide Web site one of The Hartford's", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "Meanwhile, in midtown Manhattan, The Hartford's negotiations for permanent offices—a process that normally takes 12 to 15 months—were complete.\n\nThe feverish pace was in some ways therapeutic. 
It helped take people's minds off the tragedy and the monumental loss of life, including the lives of many good friends and business colleagues at Aon, Marsh & McLennan, Bank of America and Morgan Stanley—major partners of The Hartford with offices in the Twin Towers.\n\nLike many Americans watching the heroism of firefighters, police and emergency crews, thousands of our employees asked, \"How can we help?\" Fortunately, they found ways. Lots of them. Employees crowded into bloodmobiles and dropped food and supplies into overflowing bins. With the company's match, employees also donated more than $700,000 to relief efforts, and The Hartford provided a special telephone hotline for employees who needed counseling.\n\n\"Focused resolve\" is how New York-based Regional Vice President Brandon Hickey characterizes The Hartford's response. \"It solidified in my mind how strong the culture is at this company,\" he says. \"The emotional stress of Sept. 11 will be with us for a long time. But as a tribute to the people who were there, we came back as quickly as we did because we knew we had a job to do, and we were committed to succeed.\"\n\nBy early November—less than 60 days after the attack—The Hartford's New York employees were in their new permanent offices at 2 Park Ave.\n\nNo less impressive—and certainly no less swift was The Hartford's claims service during Sept. 11's aftermath. \"Catastrophe Team\"—CAT—adjusters were on the ground within days, fulfilling obligations to policyholders who suffered losses. As an example, The Hartford advanced $1 million within 72 hours of the disaster to help the Thacher, Proffitt & Wood law firm establish temporary midtown Manhattan offices. All the firm's employees had evacuated the World Trade Center's south tower before everything in their offices was destroyed. 
Within a week, Thacher, Proffitt & Wood was back in business.\n\nThe Hartford assigned extra resources to expedite service requests, and customers received premium payment extensions as needed. One adjuster wrote a $250,000 check on the spot to help a lower Manhattan software-development company begin its recovery. CAT team members and call center customer service representatives received special training to help them cope with traumatized customers, and the company distributed disaster-recovery literature and forms to help customers get back to business.\n\nThe Hartford's Group Benefits Division (GBD) offered crisis-counseling services to policyholders in", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "- *The Campbell Agency in Byron Center, Mich., found that by aligning its organization to mirror that of The Hartford, the two partners could work more closely—and grow more—together. For example, The Campbell Agency emulated The Hartford by dedicating a team to the small-business market.*\n*That made the agency more proficient at identifying potential customers and setting sales targets, according to Mary Lou Barna, vice president, sales and marketing. In other words, she says, the partnership with The Hartford has made Barna and her colleagues better managers.*\n\n- *Dalal Maria Salomon, right, is managing director, investment officer of Salomon Group, part of First Union Securities in Richmond, Va. She strives to maintain high service levels for clients such as Daniel Austin, whose life insurance from The Hartford is part of*\n*a diversified investment portfolio Salomon helps him manage. Salomon relies on The Hartford for outstanding service, versatile online tools and consistently strong returns. 
The Hartford's mutual funds are some of her first choices when designing a portfolio.*",
     "page_start": 16,
     "page_end": 16,
     "source_file": "NYSE_HIG_2001.pdf"
    }
   ]
  },
  {
   "references": {
    "source_file": "NYSE_HIG_2001.pdf",
    "query": "When did the annual shareholder meeting of The Hartford take place in 2002?",
    "target_page": 38,
    "target_passage": "Shareholders are cordially invited to attend The Hartford’s Annual Meeting of Shareholders, which will be held on Thursday, April 18, 2002 ",
    "chunk_present": {
     "presence": true,
     "index": 1
    }
   },
   "top_chunk": [
    {
     "text": "| The Hartford Financial Services Group, Inc. |\n| --- |\n| Hartford Plaza, 690 Asylum Avenue |\n| Hartford, Connecticut 06115 |\n\n*There's only*\n\n**The Hartford Financial Services Group, Inc. 2001 Summary Annual Report**\n\n*to run a business...*",
     "page_start": 39,
     "page_end": 39,
     "source_file": "NYSE_HIG_2001.pdf"
    },
    {
     "text": "#### **Corporate Information**\n\n**Corporate Headquarters** The Hartford Financial Services Group, Inc. 690 Asylum Avenue Hartford, Connecticut 06115 860-547-5000\n\n#### **Internet Address**\n\nhttp://www.thehartford.com\n\n#### **Annual Meeting**\n\nShareholders are cordially invited to attend The Hartford's Annual Meeting of Shareholders, which will be held on Thursday, April 18, 2002 at 9:00a.m. in the Wallace Stevens Theater at The Hartford Financial Services Group, Inc.'s home office at 690 Asylum Avenue, Hartford, Connecticut. Shareholders of record as of February 28, 2002 are entitled to notice of, and to vote at, the Annual Meeting.\n\n#### **Form 10-K and Other Information**\n\nShareholders may receive, without charge, a copy of The Hartford's Form 10-K (without exhibits) filed with the Securities and Exchange Commission for the year ended December 31, 2001 by contacting 1-888-FACT-HIG. 
Forms 10-Q, press releases, and other shareholder communications are also available through this toll-free number.\n\n#### **Transfer Agent/Shareholder Records**\n\nFor information or assistance regarding stock records, dividend checks or stock certificates, please contact The Hartford's transfer agent:\n\nThe Bank of New York Shareholder Relations Department–11E P.O. Box 11258 Church Street Station New York, NY 10286 800-254-2823\n\nTo send certificates for transfer and address changes:\n\nThe Bank of New York Receive and Deliver Department–11W P.O. Box 11002 Church Street Station New York, NY 10286\n\nAddress inquiries about The Hartford's Dividend Reinvestment and Cash Payment Plan to:\n\nThe Bank of New York Dividend Reinvestment Department P.O. Box 1958 Newark, NJ 07101-9774\n\nE-mail: shareowner-svcs@bankofny.com\n\nInternet address: www.stockbny.com\n\n#### **Investor Relations**\n\nThe Hartford Financial Services Group, Inc. Hartford Plaza, HO-1-01 Hartford, Connecticut 06115 Attn: Investor Relations 860-547-2537\n\n#### **Media Inquiries**\n\nThe Hartford Financial Services Group, Inc. 
Media Relations Hartford Plaza, T-12-56 Hartford, CT 06115 860-547-5200\n\n**Common Stock and Dividend Information**\n\nThe Hartford's common stock is traded on the New York Stock Exchange (NYSE) under the trading symbol \"HIG.\" The following table presents the high and low closing prices for the common stock of The Hartford on the NYSE for the periods indicated, and the quarterly dividends declared per share.\n\n| | Common Stock Price | | Dividends |\n| --- | --- | --- | --- |\n| | High | Low | Declared |\n| 2001 | | | |\n| First quarter | $ 67.75 | $ 55.15 | $0.25 |\n| Second quarter | 70.46 | 56.88 | 0.25 |\n| Third quarter | 69.28 | 50.10 | 0.25 |\n| Fourth quarter | 62.83 | 53.91 | 0.26 |\n| 2000 | | | |\n| First quarter | $ 52.75 | $ 29.38 | $0.24 |\n| Second quarter | 64.00 | 44.25 | 0.24 |\n| Third quarter | 73.75 | 56.38 | 0.24 |\n| Fourth quarter | 79.31 | 65.44 | 0.25 |\n\nAs of February 28, 2002 there were approximately 120,000 shareholders of The Hartford.", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "**The Hartford Financial Services Group, Inc.**\n\n**Hartford Plaza, 690 Asylum Avenue**\n\n**Hartford, Connecticut 06115**\n\nFORM 100025[2001]", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "the New York metropolitan area. In order to speed the payment of claims, GBD employees immediately contacted customers with offices in the towers and worked with industry organizations to expedite the issuing of death certificates.\n\nThe Hartford's individual life operations scoured airline manifests and missing-persons lists, looking for names of customers. When they spotted a potential match, they called agents to alert them to a possible claim and provided tips on how to proceed.\n\nFuture generations will measure the full impact of Sept. 11. But at The Hartford, one thing is known already. 
As they did after disasters such as the New York fire of 1835, the Chicago fire of 1871 and the 1906 San Francisco earthquake, The Hartford's people in 2001 ran their business the only way they know how the right way. They put customers first and kept promises. In so doing, they helped lay the foundation for a more confident future.\n\n- *New York employees admire a painting depicting the courage and resilience of The Hartford employees and the New York rescue teams. The montage, which now hangs in the lobby of The Hartford's New York offices, was painted by Andy Yelenak of The Hartford's Information Technology department.*\n- *The Hartford's New York staff got their businesses back up and running in less than a week after the Sept. 11 attack, despite the destruction of their offices. Among those who were instrumental in getting 330 employees situated in temporary office space were, left to right, Lucille T. Sgaglione, vice president, Hartford Financial Products; Linda Banks, administrative assistant, office support*\n\n*services, Business Insurance; Holly McCalmont, human resources manager, Business Insurance; Jim Norris, business technology solutions manager, Business Insurance; Craig Lowenthal, first vice president and chief information officer, Hartford Financial Products; and Susan Miranda, support services manager, Hartford Specialty Co.*", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "# *products & services*\n\n**H**ow do you secure the future when the present is puzzling enough? It's a big challenge, and The Hartford's primary objective. Everything we do is designed to help our customers deal with the uncertainties that lie ahead.\n\nThe Hartford believes the best way to secure the future is to provide customers with the right products, and then back those products with outstanding performance and great service. 
Staying focused on this objective was never more important—or more challenging—than in 2001.\n\nTrue to form, The Hartford's life operations' annuities and mutual funds delivered high-quality performance in a time of market turmoil. Despite an anemic stock market, 87 percent of the funds in The Hartford's Director variable annuity remained in the first or second quartile of three-year returns within the Lipper Peer Group in 2001. Sixty-four percent of the funds in the Leaders suite of annuities and 91 percent of The Hartford's mutual funds remained in the first or second quartile over the three-year period.\n\nThe ability to deliver that kind of performance can be traced to our money managers—Wellington Management Co., American Funds, Franklin Templeton Investments, MFS Investment Management, AIM Funds Management, Inc., Putnam Investment Management and The Hartford's own Hartford Investment Management Co.\n\nAll of The Hartford's money managers have years of experience and are among the most respected firms in the industry. Their experience and expertise were especially important during the market volatility we saw in 2001. They always stay focused on long-term performance, which is the true measuring stick of The Hartford's value to its customers.\n\nBesides outstanding products and excellent management, great service is a critical component in delivering the right solutions to our customers. 
In 2001, The Hartford won an unprecedented sixth consecutive DALBAR Annuity Service Award, as well as the", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "**N**ew technology tools made The Hartford Experience customer solutions, ease of doing business and extraordinary service—more real than ever for our customers in 2001.\n\nIt was a year that saw the debut of life operations' Hartford Investor Web portal, expanded Web portals for group benefits administrators, and enhancements to technology for The Hartford's property-casualty agents and customers.\n\nHartford Investor is both a versatile personal assistant and an aid in wholesaling, especially for the independent financial planner channel. Broker-dealers and financial advisors can use it to research The Hartford's full complement of individual life and investment products, update their books of business in seconds, track daily fund performance, run financialplanning models, receive online product training, produce customized presentations and even submit business electronically.\n\nIn short, the portal allows The Hartford to bring products and functions from a variety of sources into one convenient online environment.\n\nHartford Investor has two strategic objectives: One, deepen current intermediaries' loyalty to The Hartford by extending The Hartford Experience right to their desktops. Two, expand the network of intermediaries by giving them the technological support they need to grow their businesses.\n\nMore than 153,000 licensed intermediaries—from solo advisors to members of large financial institutions—are appointed to sell The Hartford's products. Yet fewer than 60,000 actively write business for the company. The untapped potential is vast, especially among independents, the fastest-growing distribution channel and the only one in which The Hartford doesn't hold the largest market share.\n\nThat's bound to change. 
With Hartford Investor available on their desktops, intermediaries will have far", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "Meanwhile, in midtown Manhattan, The Hartford's negotiations for permanent offices—a process that normally takes 12 to 15 months—were complete.\n\nThe feverish pace was in some ways therapeutic. It helped take people's minds off the tragedy and the monumental loss of life, including the lives of many good friends and business colleagues at Aon, Marsh & McLennan, Bank of America and Morgan Stanley—major partners of The Hartford with offices in the Twin Towers.\n\nLike many Americans watching the heroism of firefighters, police and emergency crews, thousands of our employees asked, \"How can we help?\" Fortunately, they found ways. Lots of them. Employees crowded into bloodmobiles and dropped food and supplies into overflowing bins. With the company's match, employees also donated more than $700,000 to relief efforts, and The Hartford provided a special telephone hotline for employees who needed counseling.\n\n\"Focused resolve\" is how New York-based Regional Vice President Brandon Hickey characterizes The Hartford's response. \"It solidified in my mind how strong the culture is at this company,\" he says. \"The emotional stress of Sept. 11 will be with us for a long time. But as a tribute to the people who were there, we came back as quickly as we did because we knew we had a job to do, and we were committed to succeed.\"\n\nBy early November—less than 60 days after the attack—The Hartford's New York employees were in their new permanent offices at 2 Park Ave.\n\nNo less impressive—and certainly no less swift was The Hartford's claims service during Sept. 11's aftermath. \"Catastrophe Team\"—CAT—adjusters were on the ground within days, fulfilling obligations to policyholders who suffered losses. 
As an example, The Hartford advanced $1 million within 72 hours of the disaster to help the Thacher, Proffitt & Wood law firm establish temporary midtown Manhattan offices. All the firm's employees had evacuated the World Trade Center's south tower before everything in their offices was destroyed. Within a week, Thacher, Proffitt & Wood was back in business.\n\nThe Hartford assigned extra resources to expedite service requests, and customers received premium payment extensions as needed. One adjuster wrote a $250,000 check on the spot to help a lower Manhattan software-development company begin its recovery. CAT team members and call center customer service representatives received special training to help them cope with traumatized customers, and the company distributed disaster-recovery literature and forms to help customers get back to business.\n\nThe Hartford's Group Benefits Division (GBD) offered crisis-counseling services to policyholders in", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "**\"P**artnering\" is a popular business buzzword that may vanish as quickly as it appeared. The Hartford's partnerships, on the other hand, are built for the long term and have played a major role in the company's growth and success.\n\nThe company enjoys outstanding partnerships with several of the world's top asset managers. It also values its thousands of relationships with financial intermediaries such as large broker-dealers, banks and independent financial planners—and with affinity partners who extend The Hartford's reach into large, growing markets.\n\n\"A lot of people talk about having the right partners, but The Hartford views it differently from most,\" says Gary Trippe, CEO of Fort Myers, Fla., propertycasualty agency Oswald, Trippe and Company, Inc. \"They look for partners who share their core values, and the relationship is based on trust and respect. It's all about compatibility.\" Trippe should know. 
His agency writes three times as much business with The Hartford, in both personal and commercial lines, as it writes with any other insurer.\n\nMutually beneficial partnerships with successful businesses of all sizes are the foundation of The Hartford's business model.\n\nPerhaps no relationship represents shared values and shared success better than the one with AARP, which signed a new eight-year contract with The Hartford that began Jan. 1, 2002. The AARP insurance program with The Hartford is a model of affinity marketing and distribution savvy. AARP's membership those age 50 and over—is the fastest-growing segment of the U.S. population. Computer use among this group is growing by an estimated 20 percent per year, and the population segment respects established brands and seeks value, convenience and extraordinary service.\n\nThat right combination of factors helps make AARP's World Wide Web site one of The Hartford's", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "- *The Hartford's acquisition of Fortis Financial Group in 2001 enhanced the company's market share and distribution advantage. Most importantly, the acquisition brought into The Hartford's family powerful sales professionals like Allen Chinoy of Darien, Ill., left, the nation's fifthleading producer of The Hartford's variable universal life insurance. Chinoy is a vocal supporter of Hartford Investor, which makes it easier for him to show customers such as Dr. Dilip Patel how his portfolio is performing.*\n- *Joe Smith, right, and Kim Connolly, left, are a brother-sister team heading Smith Brothers Insurance, Inc. of Glastonbury, Conn. These VIP agents are enthusiastic users of The Hartford's Electronic Business Center (EBC) and other technological tools for propertycasualty agents. 
They piloted the EBC and have given valuable feedback to Senior Commercial Underwriter Tracey Kamenash and others at The Hartford to help develop the EBC standards and navigational model.*", - "page_start": 23, - "page_end": 23, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "most dynamic sources of business growth. In 2001 the company's link to AARP's Web site accounted for much of the $55 million worth of auto business The Hartford generated over the Internet.\n\nBecause The Hartford quotes and issues this business online (and added online billing in 2001), acquisition and processing costs are 15 to 20 percent lower than those of traditional direct-marketing or face-toface sales. Because of this and other factors, the expense ratio for AARP business is 30 percent below that of the industry in general. And the customer renewal rate is 96 percent, versus the industry's 88 percent, making the AARP program yield some of the most profitable auto business The Hartford writes.\n\nThe relationship also has The Hartford thinking ahead toward new business and an even stronger relationship with AARP members. The Hartford can crossmarket auto insurance to homeowner's customers and homeowner's insurance to auto customers, which presents a tremendous growth opportunity. In addition,\n\nThe Hartford is committed to providing value to AARP members in many ways. An example: The Hartford and AARP work with the MIT Age Lab to produce information—available in print and on both partners' Web sites—advising AARP members about Alzheimer's disease and other forms of dementia as they affect driving ability. The information guides caregivers struggling with difficult decisions about family members' safety behind the wheel. The resource—a customer solution like no other—helps enhance the superior value The Hartford provides to AARP members.\n\nAlthough it's the most comprehensive, the AARP relationship isn't The Hartford's only affinity program. 
The company also has affinity arrangements with USAA and other companies. Regardless of the program's size, the affinity partners share the right qualities: strong name-brand recognition, first-class marketing and a broad and loyal customer base.\n\nIn other words, they share some of The Hartford's core attributes.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_HIG_2001.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed11.pdf", - "query": "Regarding climate change, to what corresponds the \"average length of flood events ?", - "target_page": 11, - "target_passage": "The average length of flood events (number of days in which the cumulative daily rainfall excess is positive, compared to the 95th percentile of the baseline", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "**Figure 1.** Hunger and Climate Vulnerability Index for 1981–2010 climate (ensemble mean across the bias-corrected HadGEM3 ensemble).\n\n**Table 2.** Proxies for flood and drought events used in the HCVI.\n\n| extreme weather event description of proxy |\n| --- |\n| average length of flood events number of days in which the cumulative daily rainfall excess is positive, |\n| compared with the 95th percentile in the 1981–2010 average |\n| |\n| average length of drought events number of days in which the cumulative daily rainfall deficit is positive, |\n| compared with the 20th percentile in the 1981–2010 average |\n| |\n\nUN Food and Agriculture Organization, UN Development Programme and UN Population Fund [22]. The exposure component comprised proxies for the average length of flood and drought events calculated with daily precipitation data [23] (table 2). These proxies were chosen above other possible metrics as they were required to replace self-reported instances of flood and drought events used in the original HCVI, which correlate with undernutrition data at the country-level [23]. 
The proxies were therefore masked to only include data where a significant proportion of people live and grow crops before aggregating to country level and combining to comprise a measure of exposure [23]; nevertheless, it is recognized that precipitation data alone may not always be adequate for representing flood and drought events, so the current method is regarded as preliminary.\n\nThe impacts of projected climate change, therefore, act through changes in these quantities. In the current version of the HCVI, climate-change impacts on other quantities such as crop yield are not considered. Socio-economic factors affecting sensitivity and adaptive capacity are fixed at present-day conditions.\n\nThe ensemble-mean baseline HCVI calculated with the high-resolution bias-corrected HadGEM3 ensemble is shown in figure 1. The spatial pattern is compatible with HCVI values calculated using reanalysis data at the CMIP5 grid-scale resolution [23]; the most vulnerable regions are sub-Saharan Africa and South Asia. This higher-resolution climate data enables inclusion of additional countries which were not resolved in the lower-resolution CMIP5 data.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed11.pdf" - }, - { - "text": "issues and re-constructing them differently. By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as \"earth\" and \"pollution\", whereas \"climate change\" was more associated to specific issues like \"solar\", \"coal\", \"china\", and \"food\".\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. 
These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, \"snow\", \"summer\", \"winter\", or \"heatwave\" in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' differences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n#### 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. 
The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag \"tcot\", favored by right-leaning users and \"p2\", favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n#### 5.1.3. Discourse Structure\n\nIn the discourse surrounding #climatechange, \"environment\", \"energy\", and \"global action\" represented the themes of the three largest clusters in the network. However, three popularly recurring hashtags, \"#environment\", \"#energy\", and \"#climateaction\", did not belong to any of the three clusters above, but formed another small tight cluster together, sitting in the most central part of the semantic network, as shown in Figure 2b. As each of the three hashtags can almost represent one sub-theme of the climate change topic and these three hashtags were tightly bundled might indicate an attempt by #climatechange users to address all three communities together [91], consolidating climate change as a topic rather than a loosely organized topic. Previous communication studies also confirmed hashtags' function of serving as a hybrid forum [68], where heterogeneous individuals coordinate to solve", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "Model Intercomparison Project (CMIP5) ensemble, forced with the RCP8.5 concentration scenario. 
To provide more detailed representations of climate processes and impacts, the spatial resolution was N216 (approx. 60 km grid length in mid-latitudes), a higher resolution than the CMIP5 models. We used a set of impacts-relevant indices and a global land surface model to examine the projected changes in weather extremes and their implications for freshwater availability and vulnerability to food insecurity. Uncertainties in regional climate responses are assessed, examining ranges of outcomes in impacts to inform risk assessments. Despite some degree of inconsistency between components of the study due to the need to correct for systematic biases in some aspects, the outcomes from different ensemble members could be compared for several different indicators. The projections for weather extremes indices and biophysical impacts quantities support expectations that the magnitude of change is generally larger for 2°C global warming than 1.5°C. Hot extremes become even hotter, with increases being more intense than seen in CMIP5 projections. Precipitation-related extremes show more geographical variation with some increases and some decreases in both heavy precipitation and drought. There are substantial regional uncertainties in hydrological impacts at local scales due to different climate models producing different outcomes. Nevertheless, hydrological impacts generally point towards wetter conditions on average, with increased mean river flows, longer heavy rainfall events, particularly in South and East Asia with the most extreme projections suggesting more than a doubling of flows in the Ganges at 2°C global warming. Some areas are projected to experience shorter meteorological drought events and less severe low flows, although longer droughts and/or decreases in low flows are projected in many other areas, particularly southern Africa and South America. Flows in the Amazon are projected to decline by up to 25%. 
Increases in either heavy rainfall or drought events imply increased vulnerability to food insecurity, but if global warming is limited to 1.5°C, this vulnerability is projected to remain smaller than at 2°C global warming in approximately 76% of developing countries. At 2°C, four countries are projected to reach unprecedented levels of vulnerability to food insecurity.\n\nThis article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.\n\n## 1. Introduction\n\nThe majority of climate-change impacts assessments have tended to be framed in terms of future time horizons, e.g. impacts by the middle or end of the twenty-first century [1,2]. However, with international climate policy now largely focused on limiting warming to specific levels of global mean temperature such as 2°C [3] or 1.5°C [4], policy-relevant climate impacts assessments increasingly need to be framed in terms of such warming levels.\n\nThere are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.\n\n- (i) How much larger are the impacts at 2°C compared to 1.5°C? This is the primary question arising from the Paris Agreement [4] and is relevant to mitigation policy, informing judgements and actions on holding the global temperature rise to 'well below 2°C' and 'pursuing efforts to limit the temperature increase to 1.5°C'.\n- (ii) What regional climate conditions and related hydrological and ecological conditions could occur at a particular level of global warming, such as 2°C? This is relevant to adaptation policy and planning—exploring the possible outcomes for these levels of warming will help facilitate adaptation and improved resilience to account for a 1.5°C or 2°C world. 
It is recognized that many adaptation decisions require information on timing of specific impacts or risks, but nevertheless, framing regional impacts assessments in terms of associated global warming levels (GWLs) may help provide context of the levels of climate change that may be avoidable or unavoidable (and hence require adaptation).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "A detailed investigation of these factors is beyond the scope of this paper; nevertheless, this result illustrates the important point that the nature and patterns of the climate forcing at a particular level of global warming can play an important role in determining the patterns of regional impacts.\n\n## 5. Conclusion\n\nThe higher-resolution HadGEM3 simulations project consistent increases in temperature-related extremes, with larger changes at 2°C compared to 1.5°C and local changes being larger than the global annual mean. There is a higher degree of spatial variation in our projections compared with CMIP5-based studies.\n\nIn the model projections examined here, changes relating to the water cycle are complex, both in their geographical pattern and in the variation between different models. The length of flooding events generally increases across world in all models, but maximum rainfall can either increase or decrease depending on locations. Global patterns of increase and decrease show some consistency between the different GWLs, but also some local differences. Worldwide, most impacts broadly tend to increase with global warming in most areas. For global mean changes, even when the sign of change is uncertain, individual realizations generally show reduced impact at 1.5°C compared with 2°C. However, this does not always hold even at the scale of major global river basins.\n\nVulnerability to food insecurity increases more at 2°C global warming than 1.5°C in approximately three-quarters of countries assessed. 
The vulnerability increase can arise from increases in either flooding or drought. Reduced drought leads to decreased vulnerability in a limited number of cases.\n\nMost simulations here project a general increase in mean streamflow in most of the basins examined, but with a number of notable exceptions in the tropics. While flows in the Ganges are consistently projected to increase by 30–110% at 2°C, Amazon flows could either increase by 3% or decrease by 25%. Ensemble-mean changes in river flow often do not give a full impression of the magnitude of changes that may be possible, so adaptation planning in particular should not rely on ensemble-mean projections and instead consider a range of outcomes. The seasonal low streamflows also increase in many basins, but not as many as for the mean flows—many basins see decreased low flows in some or all projections.\n\nBroadly, changes in weather extremes at 1.5°C global warming could be estimated by scalingback the impacts at 2°C, if this is done with individual ensemble members rather than the ensemble mean. However, this was not always the case for impacts that depend on more complex process or interactions between more than one climate variable, such as run-off and an indicator of vulnerability to food insecurity.\n\nData accessibility. This article has no additional data.\n\nCompeting interests. We declare we have no competing interests.\n\nFunding. This research received funding from the European Union Seventh Framework Programme FP7/2007– 2013 under grant agreement no. 603864 (HELIX: 'High-End cLimate Impacts and eXtremes'; www. helixclimate.eu). The work of R.A.B., C.B., J.C., L.G., K.L. and K.R. was additionally supported by the Joint UK BEIS/Defra Met Office Hadley Centre Climate Programme (GA01101).\n\nAcknowledgements. 
The authors thank Ed Pope, Jason Lowe and Dann Mitchell for advice and discussion, Alissa Haward and Maria Pearce for project management and administration of HELIX, and two anonymous reviewers whose comments substantially improved the paper.\n\n## References\n\n- 1. IPCC. 2014 Summary for policymakers. In *Climate change 2014: impacts, adaptation, and vulnerability. Part A: global and sectoral aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change* (eds CB Field *et al*.), pp. 1–32. Cambridge, UK: Cambridge University Press.", - "page_start": 24, - "page_end": 24, - "source_file": "pubmed11.pdf" - }, - { - "text": "# What can users expect from UKCP18?\n\nThere are three components to UKCP18: observations of historic climate, marine projections and projections over land. These components are described below and summarised in Table 1. UKCP18 will provide each of these components at a higher spatial and temporal resolution than UKCP09 and with more information on different types of uncertainty.\n\n# **OBSERVATIONS**\n\n### **Annual report: State of the UK Climate. Downloadable data.**\n\nThe \"State of the UK Climate\" report for 2017 will be included as part of the UKCP18 package, bringing the observed data right up to date. This annual update8 covers trends, the multidecade climate record and significant weather events such as the early July 2015 hot spell and the exceptionally mild and wet December of the same year.\n\nQuality controlled UK observational datasets from the Met Office observing network, provided at spatial resolutions to match the land projections and for pre-defined administrative regions and river basins, will be available under an Open Government Licence9. 
For variables such as temperature and precipitation these data sets will span the late 19th Century to the present day and will be provided for daily, monthly, seasonal, annual and long term averages.\n\n# **MARINE PROJECTIONS**\n\n#### **Sea level rise. Storm surge. Past event case studies.**\n\nSea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion. Outputs will include an estimate of the year-to-year changes in sea level rise and a \"plausible but highly unlikely\" scenario known as H++. A new feature of UKCP18 will be assessing the credibility of making sea level rise projections to 2300. The projections will use the latest information from the CMIP5 models and application of the methods used in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report10.\n\nThe UKCP09 storm surge projections will be updated to provide new estimates of the change in high water levels over the 21st Century. These estimates will be based on a combination of projected mean sea level change and projections of change in the extremes due to changes in atmospheric storminess. These \"storminess\" projections will use the same surge model used in operational weather forecasting, using the wind and pressure from the CMIP5 ensemble to drive the surge. New understanding of the modification of large-scale sea level change signals as they pass from the open ocean onto the shelf sea around the UK will be incorporated into the UKCP18 marine projections. 
UKCP18 will also include storm surge historical case studies derived from applying plausible future sea level change to historical extreme events.\n\n8 The latest update can be found at **http://www.metoffice.gov.uk/climate/uk/about/state-of-climate**\n\n- 9 **http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/**\n10 **https://www.ipcc.ch/report/ar5/**", - "page_start": 1, - "page_end": 1, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "**UK CLIMATE PROJECTIONS: A PROJECT OVERVIEW** \n\n# What is UKCP18 and why do we need it?\n\nFollowing the historic Paris Agreement on Climate Change in December 2015, the Department of Environment, Food and Rural Affairs announced a major upgrade to the UK Climate Projections.\n\nThe UKCP18 project will build upon the current set of projections (UKCP09) to provide the most up-todate assessment of how the climate of the UK may change over the 21st century. This information will be essential to future Climate Change Risk Assessments1 and to equip the UK with information to help adapt to the challenges and opportunities of climate change in line with the National Adaptation Programme2.\n\nOrganisations and individual users will use UKCP18 to inform risk assessments and adaptation plans to ensure they are resilient to extreme weather and climate change. 
Some organisations will use UKCP18 in responding to the Adaptation Reporting Power3 for example.\n\n# What improvements does UKCP18 deliver?\n\nUKCP18 will benefit from a range of developments since the release of UKCP09, including:\n\n• Greater understanding of user needs as a result of the adaptation community's use of UKCP09 projections and the subsequent feedback – user workshops indicated that users supported the continued use of probabilistic projections and the importance of spatially coherent information4.\n\n- Advances in climate models in recent years, such as the Met Office Hadley Centre HadGEM35 model and the CMIP56 set of models. Improvements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena such as the El-Niño Southern Oscillation (ENSO).\n• Groundbreaking Met Office research on modelling of extreme events in high resolution regional climate models7.\n\n• The increased quantity and range of observations available since 2009.\n\n• Use of the new Met Office supercomputer, enabling a credible range of climate projections to be generated in greater spatial detail.\n\n1 The 2008 Climate Change Act allows UK government to mandate or invite certain organisations to produce reports to assess the impacts of climate change on their operations and present proposals for adaptation. 
**https://www.gov.uk/government/collections/climate-changeadaptationreporting-second-round-reports**\n\n2 Expected in 2018, the National Adaptation Programme will be supported by the Evidence Report of the Adaptation Sub-Committee of the Committee on Climate Change (ASC): **https://www.theccc.org.uk/uk-climate-change-risk-assessment-2017/introduction-to-the-ccra/** 3 Under the 2008 Climate Change Act, organisations are invited to produce Adaptation Reporting Power reports to assess the impacts of climate change on their operations and present proposals for adaptation: **https://www.gov.uk/government/collections/climate-change-adaptationreporting-second-round-reports**\n\n4 Spatial coherence means that climate projections can be compared between locations and aggregated over larger areas, enabling climate change to be assessed consistently over larger study areas.\n\n- 5 **http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models/hadgem3**\n- 6 Coupled model intercomparison project phase 5, see **http://cmip-pcmdi.llnl.gov/cmip5/**\n\n7 Kendon, E. J., Roberts, N. M., Senior, C. A. & Roberts, M. J. Realism of rainfall in a very high resolution regional climate model. J. Clim. 25,\n\n5791–5806 (2012) **http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00562.1**", - "page_start": 0, - "page_end": 0, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where \"tcot\", short for \"Top Conservatives on Twitter\", was the node ranked highest, and \"p2\", short for \"Progressives 2.0\", is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. 
Action toward the global climate issue was the third-largest cluster (16%), including both domestic efforts, such as \"us\", \"trump\", \"climatechangeisreal\", \"climateaction\", and \"epa\", and two international items, like \"china\" and \"india\". The fourth cluster (in blue) referred to emissions, including hashtags like \"co2\", \"green\", and \"carbon\". The smallest cluster (8%) was composed of \"snow\", \"winter\", \"heatwave\", and \"summer\", referring to the temperature abnormalities on the earth.\n\n#### *4.3. Temporal Analysis of the Associations in the Two Discourses*\n\nThe online presentations of the climate change and global warming discourses are dynamic. As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change\"discourse. By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. We found \"pollution\" and \"earth\" were unique to the keyword list of the global warming discourse, and \"economy\", \"water\", \"china\", \"coal\", \"solar\", \"sustainability\", and \"food\" only occurred on the critical list for the climate change discourse.\n\n**Table 2.** Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n| --- | --- | --- |\n| #climatechange | china, solar, water, food, economy, coal, sustainability | co2, news, carbon, green, climate, |\n| #globalwarming | pollution, earth | us, energy, science, environment |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. Vector graphics with the label of nodes are provided in the Supplementary Materials. 
Four themes were identified in each discourse according to the nodes' associations. To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.\n\nFigure 3 depicts the associations of hashtags in the climate change discourse for each year from 2009 to 2018. The scientific hashtags cluster (in green) was the most important theme in the climate change discourse, especially more recently. However, some scientific hashtags, such as \"ghg\" (greenhouse gas), \"co2\", and \"forests\", were not identified in the scientific cluster but in the global actions cluster (in yellow) because these hashtags were frequently used in the global action context and identified with a closer semantic association to global action by Gephi. In addition to these hashtags, the global action cluster included a series of international activities, such as \"ipcc\" (Intergovernmental Panel on Climate Change), \"unfccc\" (United Nations Framework Convention on Climate Change), and \"cop\" (Conferences of the Parties) for almost every year. The blue cluster includes to political hashtags, such as \"uniteblue\", \"sgp\", \"p2\", and \"tcot\". In 2017 and 2018, the associations with political hashtags disappeared among the top 50 hashtags. The small red cluster had a mixed theme, combining \"technology\", \"innovation\", \"education\", \"africa\", \"healthcare\", and \"politics\". The centrality sum of the nodes in the red cluster remained rather low throughout the 10-year period but obviously increased in the last two years of the period according to Figure 5a.\n\nFigure 4 describes the evolution of concepts' associations in the global warming discourse during the 10 years. 
The red cluster included concepts such as \"2012\", \"hot\", \"summer\", \"elnino\", and \"snow\", describing the weather abnormalities related to global warming. A notable finding is that before 2012, global warming's association with temperature abnormalities and extreme weather was not salient,", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed10.pdf" - }, - { - "text": "In the present study, processing errors in the input data for one ensemble member, the HadGEM2-ES-driven member, caused the results to be invalid. Results for this member for the HCVI are, therefore, not presented here.\n\n### (d) Freshwater resources: run-off\n\nImpacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem–hydrology–surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way, typically applied at global scales. Variants of JULES form the land surface scheme of Met Office Hadley Centre Earth System Models [26,27] and have been used to assess impacts of climate change on global terrestrial ecosystems and hydrology [28–30] within such models. JULES can also be used outside of the Earth System Model (ESM), driven by meteorological outputs of other ESMs to assess impacts of a wider range of climate projections [6,8]. Here we use a new, higher-resolution configuration of JULES on a global grid of 0.5° resolution [31].\n\nIt has been noted that hydrological impacts models driven by climate-change projections from climate models tend to give more severe drying than simulated in the climate models themselves [32–34]. This is largely attributed to the inclusion of plant stomatal closure in response to elevated CO2 in the climate model land surface schemes, which generally reduces evapotranspiration relative to climate projections without this process and hence further increases run-off/streamflow or ameliorates decreases [34]. 
This process is often omitted from standard hydrological models. Plant physiological responses to CO2 are included in the JULES model, so our projections of changes in run-off here do account for this process.\n\nWe used each HadGEM3 simulation to drive JULES to simulate changes in run-off due to the effects of climate change and CO2 rise on precipitation, evaporation and transpiration. We analysed 30 year periods centred around the year of crossing GWLs of 1.5°C and 2°C relative to pre-industrial. We examined changes in both mean flows and low flows (defined as the flows for the lowest 10% of time).\n\n## (e) Correcting biases in climate model output and implications for defining levels of global warming\n\nThe ClimPACT extreme weather indices, HCVI and JULES run-off simulations were all performed using outputs from the higher-resolution HadGEM3 projections described in §2a. However, there were some differences in how these data were applied, with different approaches to the treatment of systematic biases in the climate model output. For the ClimPACT analysis, it was considered important to assess changes in the raw climate model output, because this directly represents the behaviour of the model itself. The main focus was on the changes relative to the presentday baseline climate, defined as 1981–2010, with absolute values in either the baseline or the GWLs of 1.5°C and 2°C being only of secondary interest. For the HCVI and JULES run-off analyses, however, it was considered important to correct for systematic biases in the climate model output, because these can lead to unrealistic representations of the key quantities in the present-day simulation [35]. A bias-correction methodology was, therefore, applied for these two parts of the analysis, whereby the model output was adjusted to make it consistent with an observed climatology [36]. 
We used a multi-segment statistical bias-correction methodology for precipitation [37], and a modification of this for other variables [37].\n\nThis difference in approach led to inconsistencies in the definitions of the dates of GWLs in the two parts of the study. In the extremes analysis using raw model output, the dates of passing GWLs were defined on the basis of the global mean temperatures in the driving CMIP5 models relative to those models' simulations of global mean temperature in 1870–1899 (table 3). However, in the HCVI and JULES analyses which used bias-corrected data, it was considered more appropriate for the GWLs to be defined using the warming in the observational dataset", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed11.pdf" - }, - { - "text": "A combination of the above questions is also relevant—how does the range of outcomes at 2°C compare to that at 1.5°C? This is also relevant to adaptation policy, as it can inform assessment on whether to adapt to potential impacts at 2°C or just 1.5°C. Putting in place adaptation measures to deal with potential impacts at 1.5°C and then increasing these to deal with 2°C later may be more expensive and difficult than adapting to potential risks at 2°C at the outset. On the other hand, because adaptation actions may themselves have consequences, unnecessary overadaptation may have undesirable effects which it may be preferable to avoid or at least delay until absolutely necessary.\n\nBoth questions require an appropriate assessment of uncertainty. There are considerable uncertainties in projections of regional climate change, with different climate models projecting regional climate changes that can differ in magnitude or even, in the case of precipitation and impacts quantities strongly related to this, differ in sign [5,6]. This may have important implications for regional impacts at specific levels of global warming. 
A common approach to exploring and presenting such uncertainties is to examine the ensemble mean and the level of consensus among the ensemble members on the sign of the change. While this can often be useful in informing an assessment of the level of confidence in future projections, it may not always be sufficient to fully inform decisions. Risk assessment approaches require consideration of a range of possible risks, not just the most likely. This paper explores a range of regional climate states and related impacts that occur at global warming of 2°C, and a range of differences with warming limited to 1.5°C.\n\nWe examine the implications of our new climate projections by applying some commonly used indices of climate extremes, and a further index quantifying relative vulnerability to food insecurity which combines climate extremes indices with information on a range of factors representing sensitivity and adaptability of food systems to climate hazards. We also use the climate projections to drive a global land surface model to simulate changes in run-off as an indicator of freshwater availability. We assess whether regional extremes are projected to increase or decrease at 2°C global warming, and whether the consequent impact on drought and vulnerability to food insecurity become greater or smaller. We also assess whether these changes are reduced by limiting global warming to 1.5°C. We explore some of the uncertainties in these projections, and, in particular, examine whether the use of ensemble-mean projections is a useful simple guide to impacts projections or whether this can lead to a misleading impression for some impacts. Regarding vulnerability to food insecurity, we consider the impacts of global warming at 1.5°C and 2°C alongside socio-economic influences that affect the sensitivity to climate change. 
We also consider our climate-change impacts results in comparison with other studies using older, lower-resolution climate projections.\n\nA large number of previous studies have assessed potential impacts of future climate change using the 5th Coupled Model Intercomparison Project (CMIP5) ensemble or subsets of this [7], and some have framed this in terms of impacts at global warming of 1.5°C and/or 2°C [8,9]. We also base our study on a subset of CMIP5 projections, but use a new, higher-resolution atmosphere model to provide greater spatial detail and improved representation of atmospheric processes.\n\n## 2. Methods and models\n\n## (a) Global climate simulations at 1.5°C and 2°C global warming\n\nThere are a number of ways in which 1.5°C or 2°C global warming can be defined—one could be the long-term climate state following a stabilization of warming at that level, another could be the state over a shorter period around the time of first reaching that level. Here we choose the second definition, which is what is seen first and hence needs to be adapted to. There are also a number of methods with which such changes can be assessed [10]. We take the opportunity of availability of a new set of higher-resolution transient climate and impacts simulations, and use a time-sampling methodology [10] to assess global-scale impacts at these resolutions for the first time.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed11.pdf" - }, - { - "text": "studies have noticed that the Maya inscription about doomsday, which seemed rather ridiculous for scientists, might lead to unexpected public associations with climate issues. However, science fiction may influence the public's attitude toward scientific issues. Frankenstein's monster, a well-known fictional character who was a human-built creature in the novel written by Mary Shelley, has long been linked to transgenic technology by referring to genetically-modified food as "Frankenstein Food" [98]. 
Scientists found that these associations successfully symbolized the public's uncertainty about the risk of transgenic technology, anxiety toward the human-made living creature, and moral discomfort about reducing life to genetic code [99], even though people all know Frankenstein was only a fictional character created 100 years ago. In the current study, we concluded that a similar mechanism may exist in global warming communication. Though "the end of world in 2012" and its adapted popular movie sounded unconvincing for scientists, the public, especially those who have limited scientific literacy, were defenceless against fiction [100]. Some of the public may accept the indications of temperature rise and extreme weather, and cannot help but strengthen their associations with global warming. However, no similar associations were discovered in the climate change discourse in 2012, which may suggest that global warming is more likely to be associated with disasters, risk, or negative sentiment compared with climate change.\n\n#### *5.3. Discrepancy between the Two Discourses*\n\nThe status of the two discourses varied significantly in the more recent years in the study period. Data from Google in a prior study suggested that the search record for global warming was larger than that of climate change in earlier times [13]. The authors found that in the battle to be the most representative hashtag for global climate concern, #climatechange showed growing popularity and became an overwhelming trending topic compared with #globalwarming. Also, #climatechange showed a stronger ability to incorporate diverse hashtags into its discourse in both relative and absolute dimensions. Comparatively, the popularity of the global warming discourse among social media users did not increase apparently in terms of tweets volume and hashtag diversity, especially when considering the yearly increase in Twitter users. 
The reason for the observed shift in public discourse toward climate change from global warming may be attributed to the high exposure of climate change in the media and scientific reports in recent years [13]. Previous studies noted that perceived scientific consensus can increase acceptance of science [101]. Though global warming has been commonly used since the 1980s to describe the world-wide temperature rise, climate change is preferred by scientists to refer to a range of complex changes of climate [102]. Pew found science-related accounts draw millions of followers on Facebook and the volume of posts they released climbed in past years [103]. Climate scientists are found to be opinion makers on Twitter [104]. As social media has become an emerging platform for science popularization, the scientific community might contribute to the prevalence of climate change discourse by talking about climate change facts and mitigating measures [75].\n\nHowever, differences between the two discourses were not eliminated. Even though the two discourses showed more similarities in the rank order of key concepts, the QAP analysis of the two semantic network matrices showed that the two discourses still embody distinct public perceptions of climate issues by associating these hashtags in different manners.\n\nTo be specific, although "ipcc", "cop", and "un" were mentioned in both discourses (yellow in Figures 3 and 4) in earlier years, the clusters to which they belonged had significantly different meanings. As mentioned in the results section, these hashtags were associated with a series of scientific hashtags in the climate change discourse, appealing to global efforts. In the global warming discourse, they were clustered with "hoax" and "frame", showing lack of belief in climate issue facts and hesitation about global efforts. 
More recently, when discussions about temperature, politics, and hesitation significantly shrank in the global warming discourse, the two discourses showed more similarities about the importance of scientific concepts according to Figure 5a,b. However, links between global efforts and scientific facts were not constructed in the global warming discourse. According to a network model for cognition, the lack of associations means fewer psychological activations will spread to", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed10.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed11.pdf", - "query": "What is the projected situation of India regarding HCVI (Hunger and Climate Vulnerability Index)?", - "target_page": 12, - "target_passage": "India is projected to see increased HCVI by all ensemble members, due to a consistent increase in length of flood events projected in all members, outweighing the beneficial impact of decreased length of drought which is again projected in all members", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "#### **Table 1.** ClimPACT weather extremes indices.\n\n| ID | definition | units | sector of relevance |\n| --- | --- | --- | --- |\n| TXx | annual maximum daily maximum temperature | °C | health, agriculture and food security |\n| TX90p | percentage of days above the 90th percentile of daily maximum temperature in the 1981–2010 average | % | health, agriculture and food security, water resources and hydrology |\n| CDD | maximum number of consecutive days with precipitation less than 1 mm | days | health, agriculture and food security, water resources and hydrology |\n| RX5day | maximum consecutive 5 day precipitation | mm | health, agriculture and food security, water resources and hydrology |\n\nmembers at any given date. 
Since specific levels of global warming such as 1.5°C or 2°C were reached at different times in the different ensemble members, according to the SST forcings used, any given level of global warming could be associated with different radiative forcings in different ensemble members. In any given ensemble member at any specific level of global warming, the CO2 concentration and SSTs were the same as in the driving CMIP5 model at that GWL. Land cover was fixed in this simulation—there was no dynamic vegetation nor any time-dependent anthropogenic land use change.\n\nSome comparison of the higher-resolution atmospheric simulations with the original CMIP5 simulations, is provided by Wyser *et al.* [20].\n\n### (b) Temperature and precipitation extremes: the ClimPACT indices\n\nTo quantify changes in weather extremes projected in our climate simulations, we calculated a number of indices designed to be relevant to sector-specific impacts using an established methodology, ClimPACT [21] (table 1)\n\n### (c) Food security: the Hunger and Climate Vulnerability Index\n\nTo assess implications of climate change for vulnerability to food insecurity, we used an adaptation of the Hunger and Climate Vulnerability Index (HCVI) [22]. The HCVI was developed by the United Nations World Food Programme to provide a country-level assessment of vulnerability to food insecurity as a result of climate-related events. We used a new iteration of the HCVI which makes use of gridded climate model projections to understand the impact of climate change on vulnerability to food insecurity, and the benefits that adaptation can bring via scenarios of adaptation investment [23]. This iteration of the HCVI only considers in-country production of food and does not account for food trade. For this reason, the HCVI is only calculated for 122 developing and least-developed countries (defined here as countries not in the OECD or EU which can be resolved by the scale of the climate model; i.e. 
larger than 500 km2).\n\nThe index provides quantification at the national level across the globe of the scale and direction of impact of climate change on food insecurity. As such, it aims to provide the following: (i) information to help policy-makers understand the level of challenge to global food security that climate change presents; (ii) information on the geography of the impacts and help to evaluate the relative benefits of mitigation and adaptation responses.\n\nThe index is not intended to be a detailed planning tool, but aims to help planners evaluate the nature of the top-level threat to food insecurity that climate change presents, thereby supporting prioritization of effort.\n\nThe HCVI consists of three equally weighted components: exposure to climate-related hazards, sensitivity of national agricultural production to climate-related hazards, and adaptive capacity, a measure of a country's ability to cope with climate-related food shocks. The sensitivity and adaptive capacity components are based on data from the World Bank, World Resources Institute,", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Figure 1.** Hunger and Climate Vulnerability Index for 1981–2010 climate (ensemble mean across the bias-corrected HadGEM3 ensemble).\n\n**Table 2.** Proxies for flood and drought events used in the HCVI.\n\n| extreme weather event | description of proxy |\n| --- | --- |\n| average length of flood events | number of days in which the cumulative daily rainfall excess is positive, compared with the 95th percentile in the 1981–2010 average |\n| average length of drought events | number of days in which the cumulative daily rainfall deficit is positive, compared with the 20th percentile in the 1981–2010 average |\n\nUN Food and Agriculture Organization, UN Development Programme and UN Population Fund [22]. 
The exposure component comprised proxies for the average length of flood and drought events calculated with daily precipitation data [23] (table 2). These proxies were chosen above other possible metrics as they were required to replace self-reported instances of flood and drought events used in the original HCVI, which correlate with undernutrition data at the country-level [23]. The proxies were therefore masked to only include data where a significant proportion of people live and grow crops before aggregating to country level and combining to comprise a measure of exposure [23]; nevertheless, it is recognized that precipitation data alone may not always be adequate for representing flood and drought events, so the current method is regarded as preliminary.\n\nThe impacts of projected climate change, therefore, act through changes in these quantities. In the current version of the HCVI, climate-change impacts on other quantities such as crop yield are not considered. Socio-economic factors affecting sensitivity and adaptive capacity are fixed at present-day conditions.\n\nThe ensemble-mean baseline HCVI calculated with the high-resolution bias-corrected HadGEM3 ensemble is shown in figure 1. The spatial pattern is compatible with HCVI values calculated using reanalysis data at the CMIP5 grid-scale resolution [23]; the most vulnerable regions are sub-Saharan Africa and South Asia. This higher-resolution climate data enables inclusion of additional countries which were not resolved in the lower-resolution CMIP5 data.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Figure 5.** Simulated changes in the annual maximum rainfall over 5 days relative to 1981–2010, at 2°C global warming, for individual HadGEM3 simulations driven by SSTs and SICs from different members of the CMIP5 ensemble, and the ensemble mean. 
The labels above each panel identify the driving CMIP5 model (or ensemble mean).\n\n2°C, although the geographical variation is still dominated by the non-climatic factors (figure 7). Therefore, the ensemble-mean change is a reasonable guide to the results.\n\nThe ensemble mean is higher in nearly all assessed countries relative to the baseline (figure 8). The greatest increase was in Oman, followed by India, Bangladesh and Saudi Arabia, then Brazil and a number of its neighbouring countries. Smaller increases in HCVI were seen across Africa. Southeastern Africa showed larger increases than Central Africa. The HCVI decreased in three countries: Mali, Burkina Faso and Sudan.\n\nThe ensemble members showed broadly consistent changes in HCVI at 2°C global warming, with increases in most assessed countries and generally similar sets of countries experiencing the largest and smallest changes. Southeastern Africa consistently showed larger increases in HCVI than Central Africa, due to increased length of drought events projected in all ensemble members (not shown). The length of flood events was not projected to increase in this region. The Sahel region consistently showed one or more countries with a small decrease in the HCVI, although the precise country or countries varied between ensemble members. The decrease in HCVI here was due to projected decreases in length of drought, with length of flood events projected to change little.\n\nIndia is projected to see increased HCVI by all ensemble members, due to a consistent increase in length of flood events projected in all members, outweighing the beneficial impact of decreased length of drought which is again projected in all members.\n\nBrazil is projected to see increased HCVI, but for reasons which vary between ensemble members. 
Although the location of projected longer flood events varies across the country in different members, the aggregation of the HCVI to the country level renders this geographical variability irrelevant for such a large country because only the median value across the country is used in the HCVI. Some ensemble members project longer drought for Brazil, which again contributed to increased HCVI.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Figure 8.** Change in Hunger and Climate Vulnerability Index relative to baseline calculated for simulated climate states at 2°C globalwarming,for five individualHadGEM3simulations driven by SSTs and SICsfrom differentmembers ofthe CMIP5 ensemble, and the ensemble mean.\n\nFour countries show ensemble-mean HCVI values at 2°C global warming that are higher than any seen in the baseline climate; these are Oman, Bangladesh, Mauritania and Yemen. The implication of such HCVI values is that climate change at 2°C is projected to cause levels of vulnerability to food insecurity that are greater than any seen in the present day. For individual ensemble members, the number of countries with 'unprecedented' HCVI values at 2°C varies from three to seven. Conversely, many countries in the baseline climate have levels of vulnerability to food insecurity that are greater than those expected in other countries under 2°C global warming. This suggests that other factors are already posing greater risk for food insecurity than 2°C climate change is expected to cause in other countries, so the increased risk from climate change should not overshadow the need to reduce vulnerability to food insecurity arising from non-climatic factors. 
There is scope to reduce vulnerability to food insecurity by addressing various socio-economic issues in such countries.\n\nThe JULES simulations show a general tendency towards increased run-off over approximately half of the land surface (figure 9) and the majority of the major river basins assessed (figure 10), but with large regional uncertainties including the possibility of decreased flows in many basins. The ensemble-mean change in mean streamflow shows an increase of between 5 and 25% over most of the Northern Hemisphere land surface, with some regions seeing an increase of over 50% at 2°C global warming. Notable exceptions to this are western Europe and south-central USA, which see less than a 5% change in run-off, and the already very dry region of the Sahara Desert where the existing very small run-off becomes even smaller.\n\nEnsemble-mean projected changes in low run-off flows are generally larger (figure 11), with the regions seeing an increase in mean run-off seeing a larger percentage increase in low run-off—over 75% increases over much of North America, Eastern Europe and Asia. Note that this does not necessarily imply a larger increase in absolute low flow compared to absolute mean flow, because the baseline is (by definition) smaller for low flows. In western Europe, where the changes in mean flows were less than 5%, the ensemble-mean low flow decreases by between 5", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Figure 12.** Comparison of global mean changes in climate extremes indices relative to 1981–2010 at 2°C and 1.5°C global warming for individual ensemble members and ensemble mean. 
(*a*) Change in annual daily maximum temperature; (*b*) percentage of days with maximum temperature above 90th percentile for 1981–2010; (*c*) change in consecutive dry days; (*d*) change in annual maximum 5-day rainfall.\n\nFor precipitation, generally similar changes are seen at 1.5°C global warming as at 2°C, but smaller in magnitude (compare figures 16 and 4), suggesting that most of these changes are a response to radiatively forced climate change as opposed to internal climate variability. However, some localized changes do vary in sign between the GWLs, such as in South Australia, suggesting a possible dominance of internal variability over the global warming signal in these places.\n\nWhere Rx5day increases, the increases are projected to be larger—in some cases approximately double—at 2°C global warming than 1.5°C. Where Rx5day decreases, again the decreases are projected to be larger at 2°C global warming than 1.5°C (figure 17).\n\nOf the 122 countries assessed, 93 have smaller ensemble-mean HCVI calculated at 1.5°C global warming than at 2°C, indicating an ensemble consensus that 76% of assessed countries would see a smaller increase in vulnerability to food insecurity if global warming were limited to 1.5°C (figures 18 and 19). Conversely, 24% of countries would, by this metric, see the same or higher vulnerability to food insecurity at 1.5°C than 2°C. Of these, some are countries where HCVI is projected to be lower at 2°C global warming than in the baseline. For example, in Mali the ensemble-mean baseline HCVI of 0.83 increased slightly to 0.85 at 1.5°C then reduced to 0.81 at 2°C. In some countries, the ensemble-mean HCVI happened to be identical at both warming levels. In Chad, for example, the baseline HCVI of 0.89 increased to 0.91 at both 1.5°C and 2°C.\n\nAs noted above, four countries saw ensemble-mean HCVI values at 2°C above any seen in the baseline, and this number increased to seven at 1.5°C. 
The same four countries with 'unprecedented' HCVI values at 2°C also saw 'unprecedented' values at 1.5°C; these were Oman, Bangladesh, Mauritania and Yemen. These were joined by Myanmar, India and Cambodia as having 'unprecedented' values at 1.5°C. The role of internal climate variability in the HCVI results needs to be assessed, as does the effect of potential nonlinear interactions between the flood and drought metric. Until the reasons behind these country-specific results are understood,", - "page_start": 17, - "page_end": 17, - "source_file": "pubmed11.pdf" - }, - { - "text": "Model Intercomparison Project (CMIP5) ensemble, forced with the RCP8.5 concentration scenario. To provide more detailed representations of climate processes and impacts, the spatial resolution was N216 (approx. 60 km grid length in mid-latitudes), a higher resolution than the CMIP5 models. We used a set of impacts-relevant indices and a global land surface model to examine the projected changes in weather extremes and their implications for freshwater availability and vulnerability to food insecurity. Uncertainties in regional climate responses are assessed, examining ranges of outcomes in impacts to inform risk assessments. Despite some degree of inconsistency between components of the study due to the need to correct for systematic biases in some aspects, the outcomes from different ensemble members could be compared for several different indicators. The projections for weather extremes indices and biophysical impacts quantities support expectations that the magnitude of change is generally larger for 2°C global warming than 1.5°C. Hot extremes become even hotter, with increases being more intense than seen in CMIP5 projections. Precipitation-related extremes show more geographical variation with some increases and some decreases in both heavy precipitation and drought. 
There are substantial regional uncertainties in hydrological impacts at local scales due to different climate models producing different outcomes. Nevertheless, hydrological impacts generally point towards wetter conditions on average, with increased mean river flows, longer heavy rainfall events, particularly in South and East Asia with the most extreme projections suggesting more than a doubling of flows in the Ganges at 2°C global warming. Some areas are projected to experience shorter meteorological drought events and less severe low flows, although longer droughts and/or decreases in low flows are projected in many other areas, particularly southern Africa and South America. Flows in the Amazon are projected to decline by up to 25%. Increases in either heavy rainfall or drought events imply increased vulnerability to food insecurity, but if global warming is limited to 1.5°C, this vulnerability is projected to remain smaller than at 2°C global warming in approximately 76% of developing countries. At 2°C, four countries are projected to reach unprecedented levels of vulnerability to food insecurity.\n\nThis article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.\n\n## 1. Introduction\n\nThe majority of climate-change impacts assessments have tended to be framed in terms of future time horizons, e.g. impacts by the middle or end of the twenty-first century [1,2]. 
However, with international climate policy now largely focused on limiting warming to specific levels of global mean temperature such as 2°C [3] or 1.5°C [4], policy-relevant climate impacts assessments increasingly need to be framed in terms of such warming levels.\n\nThere are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.\n\n- (i) How much larger are the impacts at 2°C compared to 1.5°C? This is the primary question arising from the Paris Agreement [4] and is relevant to mitigation policy, informing judgements and actions on holding the global temperature rise to 'well below 2°C' and 'pursuing efforts to limit the temperature increase to 1.5°C'.\n- (ii) What regional climate conditions and related hydrological and ecological conditions could occur at a particular level of global warming, such as 2°C? This is relevant to adaptation policy and planning—exploring the possible outcomes for these levels of warming will help facilitate adaptation and improved resilience to account for a 1.5°C or 2°C world. It is recognized that many adaptation decisions require information on timing of specific impacts or risks, but nevertheless, framing regional impacts assessments in terms of associated global warming levels (GWLs) may help provide context of the levels of climate change that may be avoidable or unavoidable (and hence require adaptation).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Figure 13.** Global mean percentage changes relative to 1981–2010 in (*a*) precipitation over land, (*b*) mean run-off flows, (*c*) low run-off flows (10th percentile), at 2°C and 1.5°C global warming.\n\nthis comparison of the number of 'unprecedented' HCVI values at 1.5°C and 2°C should be treated with caution. 
Nevertheless, the finding that some countries see HCVI values higher at either or both 1.5°C and 2°C compared to the baseline may indicate that climate change has the potential to lead to unprecedented levels of vulnerability to food insecurity in some countries. More robustly, it can be concluded that by this metric, overall worldwide vulnerability to food insecurity generally increases with global warming, and for approximately three-quarters of countries assessed, this increase is larger at 2°C than 1.5°C.\n\nIn the ensemble mean, changes in mean, low and high flows are generally larger at 2°C global warming compared to 1.5°C (figure 20). This is often the case for both increases and decreases in flows—increasing the level of global warming magnifies the pattern of river flow changes, although not in all cases.\n\nThe range of projected mean run-off changes is larger for 2°C than 1.5°C in many basins, but this was not always the case, with many basins showing similar or smaller ranges at 2°C compared with 1.5°C. Moreover, the ranges overlap substantially, so in terms of the set of
There are considerable uncertainties in projections of regional climate change, with different climate models projecting regional climate changes that can differ in magnitude or even, in the case of precipitation and impacts quantities strongly related to this, differ in sign [5,6]. This may have important implications for regional impacts at specific levels of global warming. A common approach to exploring and presenting such uncertainties is to examine the ensemble mean and the level of consensus among the ensemble members on the sign of the change. While this can often be useful in informing an assessment of the level of confidence in future projections, it may not always be sufficient to fully inform decisions. Risk assessment approaches require consideration of a range of possible risks, not just the most likely. This paper explores a range of regional climate states and related impacts that occur at global warming of 2°C, and a range of differences with warming limited to 1.5°C.\n\nWe examine the implications of our new climate projections by applying some commonly used indices of climate extremes, and a further index quantifying relative vulnerability to food insecurity which combines climate extremes indices with information on a range of factors representing sensitivity and adaptability of food systems to climate hazards. We also use the climate projections to drive a global land surface model to simulate changes in run-off as an indicator of freshwater availability. We assess whether regional extremes are projected to increase or decrease at 2°C global warming, and whether the consequent impact on drought and vulnerability to food insecurity become greater or smaller. We also assess whether these changes are reduced by limiting global warming to 1.5°C. 
We explore some of the uncertainties in these projections, and, in particular, examine whether the use of ensemble-mean projections is a useful simple guide to impacts projections or whether this can lead to a misleading impression for some impacts. Regarding vulnerability to food insecurity, we consider the impacts of global warming at 1.5°C and 2°C alongside socio-economic influences that affect the sensitivity to climate change. We also consider our climate-change impacts results in comparison with other studies using older, lower-resolution climate projections.\n\nA large number of previous studies have assessed potential impacts of future climate change using the 5th Coupled Model Intercomparison Project (CMIP5) ensemble or subsets of this [7], and some have framed this in terms of impacts at global warming of 1.5°C and/or 2°C [8,9]. We also base our study on a subset of CMIP5 projections, but use a new, higher-resolution atmosphere model to provide greater spatial detail and improved representation of atmospheric processes.\n\n## 2. Methods and models\n\n## (a) Global climate simulations at 1.5°C and 2°C global warming\n\nThere are a number of ways in which 1.5°C or 2°C global warming can be defined—one could be the long-term climate state following a stabilization of warming at that level, another could be the state over a shorter period around the time of first reaching that level. Here we choose the second definition, which is what is seen first and hence needs to be adapted to. There are also a number of methods with which such changes can be assessed [10]. 
We take the opportunity of availability of a new set of higher-resolution transient climate and impacts simulations, and use a time-sampling methodology [10] to assess global-scale impacts at these resolutions for the first time.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed11.pdf" - }, - { - "text": "## **OPEN**\n\n# **The impact of 1.5 °C and 2.0 °C global warming on global maize production and trade**\n\n**Kuo Li1*****, Jie Pan1 , Wei Xiong2 , Wei Xie3 & Tariq Ali3**\n\n**Climate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by 5 climate models recommended by ISI-MIP under 4 RCP scenarios, in which the approximate scenarios with global warming by 1.5 °C and 2 °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by 1.5 °C and 2.0 °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that the risk of maize yield reduction under the 2.0 °C scenario was much more serious than the 1.5 °C scenario; the ratios of yield changes were separately 0.18% and − 10.8% under 1.5 °C and 2.0 °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the 2.0 °C scenario. The market price of maize would increase by around 0.7% and 3.4% under 1.5 °C and 2.0 °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.**\n\nIn the past hundred years, the global climate has experienced great changes1–4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming5 .
Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health6–10. Global warming has gradually changed from a scientific issue to a major social issue of common concern to governments and people of all countries11–13. In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris14. The Paris Agreement has indicated that it is urgent to hold the increase in global average temperature well below 2.0 °C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels.\n\nFaced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food insecurity in the whole world15–20. Meanwhile, global production losses might lead to price shocks and trigger export restrictions21–24; an increasingly interconnected global food system25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security worldwide27–29. So, the impacts of climate change on crop yields and prices have been of high concern. Numerous studies have revealed that the warming trend has a negative impact on crop yields and global trade in most regions all over the world30–32. There are three main methods for impact assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations17,33.
Environment-controlled experiments are designed to observe the influence of climate factors on crops, such as drought, flood, heat stress, cold damage, elevated CO2 concentration, through which the impact mechanism of climate change on crops would be revealed and established23,34,35. Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected field sites or in selected regions36–39. The statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in different sites or counties to establish regression functions for crop response predictions40–43. These studies have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\n1 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China. 2 International Maize and Wheat Improvement Center, Texcoco, Mexico. 3 Peking University, Beijing, China. *email: hqlk2000@163.com", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "**Figure 6.** Simulated changes in the average length of flood events (number of days in which the cumulative daily rainfall excess is positive, compared with the 95th percentile in 1981–2010), at 2°C global warming, for individual HadGEM3 simulations driven by SSTs and SICs from different members of the CMIP5 ensemble, and the ensemble mean.
The labels above each panel identify the driving CMIP5 model (or ensemble mean).\n\n**Figure 7.** Hunger and Climate Vulnerability Index calculated for simulated climate states at 2°C global warming for five individual HadGEM3 simulations driven by SSTs and SICs from different members of the CMIP5 ensemble, and the ensemble mean.", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed11.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed11.pdf", - "query": "Regarding climate change simulation, what is JULES ?", - "target_page": 7, - "target_passage": "Impacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem–hydrology–surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "In the present study, processing errors in the input data for one ensemble member, the HadGEM2-ES-driven member, caused the results to be invalid. Results for this member for the HCVI are, therefore, not presented here.\n\n### (d) Freshwater resources: run-off\n\nImpacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem–hydrology–surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way, typically applied at global scales. Variants of JULES form the land surface scheme of Met Office Hadley Centre Earth System Models [26,27] and have been used to assess impacts of climate change on global terrestrial ecosystems and hydrology [28–30] within such models. JULES can also be used outside of the Earth System Model (ESM), driven by meteorological outputs of other ESMs to assess impacts of a wider range of climate projections [6,8]. 
Here we use a new, higher-resolution configuration of JULES on a global grid of 0.5° resolution [31].\n\nIt has been noted that hydrological impacts models driven by climate-change projections from climate models tend to give more severe drying than simulated in the climate models themselves [32–34]. This is largely attributed to the inclusion of plant stomatal closure in response to elevated CO2 in the climate model land surface schemes, which generally reduces evapotranspiration relative to climate projections without this process and hence further increases run-off/streamflow or ameliorates decreases [34]. This process is often omitted from standard hydrological models. Plant physiological responses to CO2 are included in the JULES model, so our projections of changes in run-off here do account for this process.\n\nWe used each HadGEM3 simulation to drive JULES to simulate changes in run-off due to the effects of climate change and CO2 rise on precipitation, evaporation and transpiration. We analysed 30 year periods centred around the year of crossing GWLs of 1.5°C and 2°C relative to pre-industrial. We examined changes in both mean flows and low flows (defined as the flows for the lowest 10% of time).\n\n## (e) Correcting biases in climate model output and implications for defining levels of global warming\n\nThe ClimPACT extreme weather indices, HCVI and JULES run-off simulations were all performed using outputs from the higher-resolution HadGEM3 projections described in §2a. However, there were some differences in how these data were applied, with different approaches to the treatment of systematic biases in the climate model output. For the ClimPACT analysis, it was considered important to assess changes in the raw climate model output, because this directly represents the behaviour of the model itself. 
The main focus was on the changes relative to the present-day baseline climate, defined as 1981–2010, with absolute values in either the baseline or the GWLs of 1.5°C and 2°C being only of secondary interest. For the HCVI and JULES run-off analyses, however, it was considered important to correct for systematic biases in the climate model output, because these can lead to unrealistic representations of the key quantities in the present-day simulation [35]. A bias-correction methodology was, therefore, applied for these two parts of the analysis, whereby the model output was adjusted to make it consistent with an observed climatology [36]. We used a multi-segment statistical bias-correction methodology for precipitation [37], and a modification of this for other variables [37].\n\nThis difference in approach led to inconsistencies in the definitions of the dates of GWLs in the two parts of the study. In the extremes analysis using raw model output, the dates of passing GWLs were defined on the basis of the global mean temperatures in the driving CMIP5 models relative to those models' simulations of global mean temperature in 1870–1899 (table 3). However, in the HCVI and JULES analyses which used bias-corrected data, it was considered more appropriate for the GWLs to be defined using the warming in the observational dataset", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed11.pdf" - }, - { - "text": "issues and re-constructing them differently.
By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as \"earth\" and \"pollution\", whereas \"climate change\" was more associated to specific issues like \"solar\", \"coal\", \"china\", and \"food\".\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, \"snow\", \"summer\", \"winter\", or \"heatwave\" in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. 
As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' differences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n#### 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag \"tcot\", favored by right-leaning users and \"p2\", favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n#### 5.1.3. Discourse Structure\n\nIn the discourse surrounding #climatechange, \"environment\", \"energy\", and \"global action\" represented the themes of the three largest clusters in the network. However, three popularly recurring hashtags, \"#environment\", \"#energy\", and \"#climateaction\", did not belong to any of the three clusters above, but formed another small tight cluster together, sitting in the most central part of the semantic network, as shown in Figure 2b. 
That each of the three hashtags can almost represent one sub-theme of the climate change topic and that these three hashtags were tightly bundled might indicate an attempt by #climatechange users to address all three communities together [91], consolidating climate change as a topic rather than a loosely organized topic. Previous communication studies also confirmed hashtags' function of serving as a hybrid forum [68], where heterogeneous individuals coordinate to solve", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "studies have noticed that the Maya inscription about doomsday, which seemed rather ridiculous for scientists, might lead to unexpected public associations with climate issues. However, science fiction may influence the public's attitude toward scientific issues. Frankenstein's monster, a well-known fictional character who was a human-built creature in the novel written by Mary Shelley, has long been linked to transgenic technology by referring to genetically-modified food as \"Frankenstein Food\" [98]. Scientists found that these associations successfully symbolized the public's uncertainty about the risk of transgenic technology, anxiety toward the human-made living creature, and moral discomfort about reducing life to genetic code [99], even though people all know Frankenstein was only a fictional character created 100 years ago. In the current study, we concluded that a similar mechanism may exist in global warming communication. Though \"the end of the world in 2012\" and its adapted popular movie sounded unconvincing for scientists, the public, especially those who have limited scientific literacy, were defenceless against fiction [100]. Some of the public may accept the indications of temperature rise and extreme weather, and cannot help but strengthen their associations with global warming.
However, no similar associations were discovered in the climate change discourse in 2012, which may suggest that global warming is more likely to be associated with disasters, risk, or negative sentiment compared with climate change.\n\n#### *5.3. Discrepancy between the Two Discourses*\n\nThe status of the two discourses varied significantly in the more recent years in the study period. Data from Google in prior study suggested that the search record for global warming was larger than that of climate change in earlier times [13]. The authors found that in the battle to be the most representative hashtag for global climate concern, #climatechange showed growing popularity and became an overwhelming trending topic compared with #globalwarming. Also, #climatechange showed a stronger ability to incorporate diverse hashtags into its discourse in both relative and absolute dimensions. Comparatively, the popularity of the global warming discourse among social media users did not increase apparently in terms of tweets volume and hashtag diversity, especially when considering the yearly increase in Twitter users. The reason for the observed shift in public discourse toward climate change from global warming may be attributed to the high exposure of climate change in the media and scientific reports in recent years [13]. Previous studies noted that perceived scientific consensus can increase acceptance of science [101]. Though global warming has been commonly used since the 1980s to describe the world-wide temperature rise, climate change is preferred by scientists to refer a range of complex changes of climate [102]. Pew found science-related accounts draw millions of followers on Facebook and volume of posts they released climbed in past years [103]. Climate scientists are found to be opinion makers on Twitter [104]. 
As social media has become an emerging platform for science popularization, the scientific community might contribute to the prevalence of climate change discourse by talking about climate change facts and mitigating measures [75].\n\nHowever, differences between the two discourses were not eliminated. Even though the two discourses showed more similarities in the rank order of key concepts, the QAP analysis of the two matrices of the semantic network showed that the two discourses still embody distinct public perceptions of climate issues by associating these hashtags in different manners.\n\nTo be specific, although \"ipcc\", \"cop\", and \"un\" were mentioned in both discourses (yellow in Figures 3 and 4) in earlier years, the clusters to which they belonged had significantly different meanings. As mentioned in the results section, these hashtags were associated with a series of scientific hashtags in the climate change discourse, appealing to global efforts. In the global warming discourse, they were clustered with \"hoax\" and \"frame\", showing lack of belief in climate issue facts and hesitation about global efforts. More recently, when discussions about temperature, politics, and hesitation significantly shrank in the global warming discourse, the two discourses showed more similarities about the importance of scientific concepts according to Figure 5a,b. However, links between global efforts and scientific facts were not constructed in the global warming discourse. According to a network model for cognition, the lack of associations means fewer psychological activations will spread to", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed10.pdf" - }, - { - "text": "## **OPEN**\n\n# **The impact of 1.5 °C and 2.0 °C global warming on global maize production and trade**\n\n**Kuo Li1*****, Jie Pan1 , Wei Xiong2 , Wei Xie3 & Tariq Ali3**\n\n**Climate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world.
Future climate scenario data was simulated by 5 climate models recommended by ISI-MIP under 4 RCP scenarios, in which the approximate scenarios with global warming by 1.5 °C and 2 °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by 1.5 °C and 2.0 °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that the risk of maize yield reduction under the 2.0 °C scenario was much more serious than the 1.5 °C scenario; the ratios of yield changes were separately 0.18% and − 10.8% under 1.5 °C and 2.0 °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the 2.0 °C scenario. The market price of maize would increase by around 0.7% and 3.4% under 1.5 °C and 2.0 °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.**\n\nIn the past hundred years, the global climate has experienced great changes1–4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health6–10. Global warming has gradually changed from a scientific issue to a major social issue of common concern to governments and people of all countries11–13. In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris14.
The Paris Agreement has indicated that it is urgent to hold the increase in global average temperature well below 2.0 °C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels.\n\nFaced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food insecurity in the whole world15–20. Meanwhile, global production losses might lead to price shocks and trigger export restrictions21–24; an increasingly interconnected global food system25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security worldwide27–29. So, the impacts of climate change on crop yields and prices have been of high concern. Numerous studies have revealed that the warming trend has a negative impact on crop yields and global trade in most regions all over the world30–32. There are three main methods for impact assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations17,33. Environment-controlled experiments are designed to observe the influence of climate factors on crops, such as drought, flood, heat stress, cold damage, elevated CO2 concentration, through which the impact mechanism of climate change on crops would be revealed and established23,34,35. Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected field sites or in selected regions36–39. The statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in different sites or counties to establish regression functions for crop response predictions40–43.
These studies have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\n1 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China. 2 International Maize and Wheat Improvement Center, Texcoco, Mexico. 3 Peking University, Beijing, China. *email: hqlk2000@163.com", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "**UK CLIMATE PROJECTIONS: A PROJECT OVERVIEW** \n\n# What is UKCP18 and why do we need it?\n\nFollowing the historic Paris Agreement on Climate Change in December 2015, the Department for Environment, Food and Rural Affairs announced a major upgrade to the UK Climate Projections.\n\nThe UKCP18 project will build upon the current set of projections (UKCP09) to provide the most up-to-date assessment of how the climate of the UK may change over the 21st century. This information will be essential to future Climate Change Risk Assessments1 and to equip the UK with information to help adapt to the challenges and opportunities of climate change in line with the National Adaptation Programme2.\n\nOrganisations and individual users will use UKCP18 to inform risk assessments and adaptation plans to ensure they are resilient to extreme weather and climate change.
Some organisations will use UKCP18 in responding to the Adaptation Reporting Power3 for example.\n\n# What improvements does UKCP18 deliver?\n\nUKCP18 will benefit from a range of developments since the release of UKCP09, including:\n\n• Greater understanding of user needs as a result of the adaptation community's use of UKCP09 projections and the subsequent feedback – user workshops indicated that users supported the continued use of probabilistic projections and the importance of spatially coherent information4.\n\n- Advances in climate models in recent years, such as the Met Office Hadley Centre HadGEM35 model and the CMIP56 set of models. Improvements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena such as the El-Niño Southern Oscillation (ENSO).\n• Groundbreaking Met Office research on modelling of extreme events in high resolution regional climate models7.\n\n• The increased quantity and range of observations available since 2009.\n\n• Use of the new Met Office supercomputer, enabling a credible range of climate projections to be generated in greater spatial detail.\n\n1 The 2008 Climate Change Act allows UK government to mandate or invite certain organisations to produce reports to assess the impacts of climate change on their operations and present proposals for adaptation. 
**https://www.gov.uk/government/collections/climate-changeadaptationreporting-second-round-reports**\n\n2 Expected in 2018, the National Adaptation Programme will be supported by the Evidence Report of the Adaptation Sub-Committee of the Committee on Climate Change (ASC): **https://www.theccc.org.uk/uk-climate-change-risk-assessment-2017/introduction-to-the-ccra/** 3 Under the 2008 Climate Change Act, organisations are invited to produce Adaptation Reporting Power reports to assess the impacts of climate change on their operations and present proposals for adaptation: **https://www.gov.uk/government/collections/climate-change-adaptationreporting-second-round-reports**\n\n4 Spatial coherence means that climate projections can be compared between locations and aggregated over larger areas, enabling climate change to be assessed consistently over larger study areas.\n\n- 5 **http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models/hadgem3**\n- 6 Coupled model intercomparison project phase 5, see **http://cmip-pcmdi.llnl.gov/cmip5/**\n\n7 Kendon, E. J., Roberts, N. M., Senior, C. A. & Roberts, M. J. Realism of rainfall in a very high resolution regional climate model. J. Clim. 25,\n\n5791–5806 (2012) **http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00562.1**", - "page_start": 0, - "page_end": 0, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where \"tcot\", short for \"Top Conservatives on Twitter\", was the node ranked highest, and \"p2\", short for \"Progressives 2.0\", is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. 
Action toward the global climate issue was the third-largest cluster (16%), including both domestic efforts, such as \"us\", \"trump\", \"climatechangeisreal\", \"climateaction\", and \"epa\", and two international items, like \"china\" and \"india\". The fourth cluster (in blue) referred to emissions, including hashtags like \"co2\", \"green\", and \"carbon\". The smallest cluster (8%) was composed of \"snow\", \"winter\", \"heatwave\", and \"summer\", referring to the temperature abnormalities on the earth.\n\n#### *4.3. Temporal Analysis of the Associations in the Two Discourses*\n\nThe online presentations of the climate change and global warming discourses are dynamic. As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change discourse. By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. We found \"pollution\" and \"earth\" were unique to the keyword list of the global warming discourse, and \"economy\", \"water\", \"china\", \"coal\", \"solar\", \"sustainability\", and \"food\" only occurred on the critical list for the climate change discourse.\n\n**Table 2.** Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n| --- | --- | --- |\n| #climatechange | china, solar, water, food, economy, coal, sustainability | co2, news, carbon, green, climate, us, energy, science, environment |\n| #globalwarming | pollution, earth | |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. Vector graphics with the label of nodes are provided in the Supplementary Materials.
Four themes were identified in each discourse according to the nodes' associations. To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.\n\nFigure 3 depicts the associations of hashtags in the climate change discourse for each year from 2009 to 2018. The scientific hashtags cluster (in green) was the most important theme in the climate change discourse, especially more recently. However, some scientific hashtags, such as \"ghg\" (greenhouse gas), \"co2\", and \"forests\", were not identified in the scientific cluster but in the global actions cluster (in yellow) because these hashtags were frequently used in the global action context and identified with a closer semantic association to global action by Gephi. In addition to these hashtags, the global action cluster included a series of international activities, such as \"ipcc\" (Intergovernmental Panel on Climate Change), \"unfccc\" (United Nations Framework Convention on Climate Change), and \"cop\" (Conferences of the Parties) for almost every year. The blue cluster includes political hashtags, such as \"uniteblue\", \"sgp\", \"p2\", and \"tcot\". In 2017 and 2018, the associations with political hashtags disappeared among the top 50 hashtags. The small red cluster had a mixed theme, combining \"technology\", \"innovation\", \"education\", \"africa\", \"healthcare\", and \"politics\". The centrality sum of the nodes in the red cluster remained rather low throughout the 10-year period but obviously increased in the last two years of the period according to Figure 5a.\n\nFigure 4 describes the evolution of concepts' associations in the global warming discourse during the 10 years.
The red cluster included concepts such as \"2012\", \"hot\", \"summer\", \"elnino\", and \"snow\", describing the weather abnormalities related to global warming. A notable finding is that before 2012, global warming's association with temperature abnormalities and extreme weather was not salient,", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed10.pdf" - }, - { - "text": "**Table 3.** Time of reaching GWLs of 1.5°C and 2°C in the raw output from the HadGEM3 climate simulations, driven by different sets of CMIP5 sea-surface temperatures. The dates are the centre year of a 20-year period for which the climate data are applied to the calculation of the ClimPACT indices.\n\n| | driving SSTs | 1.5°C | 2.0°C |\n| --- | --- | --- | --- |\n| | IPSL-CM5A-LR | 2015 | 2030 |\n| | GFDL-ESM2M | 2040 | 2055 |\n| | HadGEM2-ES | 2027 | 2039 |\n| | IPSL-CM5A-MR | 2020 | 2034 |\n| | MIROC-ESM-CHEM | 2023 | 2035 |\n| | ACCESS1–0 | 2034 | 2046 |\n\nup to present-day plus model-projected warming thereafter (table 4). While this does lead to inconsistent definitions of dates of the GWLs for applications of the climate model output with and without bias correction, the focus here is on the level of warming relative to pre-industrial rather than the timing of this warming. Therefore, priority is given to an accurate quantification of GWLs in all parts of the study, at the expense of inconsistencies in the dates of these warming levels. The inconsistency between the dates of the GWLs ranged from 2 to 9 years depending on the model and warming level. This inconsistency would have consequences if these results were applied to time-dependent impacts and adaptation assessments, but that is not the case here so this concern does not apply. 
However, one issue is that the time-dependent nature of the aerosol forcing means that the spatial pattern of regional climate responses varies over time, so this will lead to some degree of inconsistency between the analysis of the ClimPACT extremes and the HCVI and JULES impacts projections.\n\n## 3. Results\n\nFor a world at 2°C global warming, we present a range of outcomes to provide insight into the level of agreement between models for a particular projected change, and hence an indication of potential robustness of the projected changes for informing adaptation. We then make a comparison of impacts at global warming 1.5°C to investigate the level of impact that would be avoided by limiting global warming to different levels. Bearing in mind the uncertainty in regional climate outcomes, we address this in a number of ways. For individual realizations, we compare the impacts at different warming levels to see if they are systematically smaller at 1.5°C, even if the sign of the change is uncertain. We also compare the range of outcomes at different GWLs, to see if the regional-scale uncertainty itself increases with global warming.\n\n## (a) Climate-change impacts at 2°C global warming\n\nFor 2°C global warming, the ensemble-mean increase in annual daily maximum temperature was above 2°C for most of the land surface, with the exception of the Indian subcontinent, most of Australia and Antarctica (figure 2). The increase was higher still in many regions; most of North America, much of China and north Asia, northwestern South America and all of Europe. In the northern and eastern USA and much of northern and western Europe, the annual daily maximum temperature increased by over 4°C for 2°C global warming. 
The global mean TXx increased by more than 2°C in all ensemble members (table 5), so the maximum temperature warming more than the global annual mean is a consistent result across all projections here, as found in previous studies with other models [9].\n\nThe different ensemble members give somewhat different results at regional scales, although there is a strong consensus on the temperature extremes examined here becoming warmer. In the simulations driven by SSTs and SICs from the two IPSL CMIP5 models, most of the global", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed11.pdf" - }, - { - "text": "from 5 climate models under 4 RCP scenarios, future climate situations were selected as the approximate scenarios of global warming by 1.5 °C and 2.0 °C at the end of the 21st century relative to pre-industrial levels, which could minimize the uncertainties of future climate data. The inputs for the DSSAT simulation, including soil parameters, crop calendar data and management information, are handled carefully to improve the effectiveness and reliability of the maize yield simulation.\n\nThere are also several uncertainties and limitations. Firstly, there is no unified understanding worldwide of how to calculate the temperature rise of 1.5 °C and 2.0 °C relative to pre-industrial levels. At present, research on climate prediction and impact assessment under global warming of 1.5 °C and 2.0 °C usually adopts multi-model ensemble average methods61,62, which could obtain the warming response under the condition of instantaneous change, rather than the warming process under the stable state expected by the long-term goal. To obtain accurate results, the model prediction test should be estimated to form proprietary scenarios for global warming by 1.5 °C and 2.0 °C63,64, which could support the impact assessment of different sectors. 
Some institutions are carrying out climate change predictions under the lower emission scenarios (global warming of 1.5 °C or 2.0 °C). At the same time, in order to achieve the goal of controlling temperature rise to 1.5 °C at the end of the twenty-first century, it is urgent to take actions to reduce emissions and develop along the track of low energy consumption65,66, but it is a great challenge for human society to achieve this goal.\n\nSecondly, our methodological approach in this study also has some important limitations, including our use of a single crop model to estimate maize yields. There are some limitations in the DSSAT model's simulation of yield loss caused by extreme climate events67, in which the impacts of pests and diseases are also ignored68. However, the DSSAT model has been applied in many studies to simulate historical maize yield69–71, in which the results are reliable and credible72. The results of this research could be an important reference for other studies that simulate global maize yield in the future using crop models such as APSIM, WOFOST, ORYZA and so on.\n\nThirdly, there is relatively more research on the prediction of climate change trends under the background of 1.5 °C and 2.0 °C, but research on the impact assessment of the main grain crops, including global trade, worldwide is scarce. In the meantime, we do not assess the effect of future changes on agriculture, such as increases in farm productivity due to new technology. The maize planting area in the future is assumed to be the same as the current situation of maize cultivation in the world.\n\n**Conclusion.** According to the simulation results, the yield of maize under global warming by 2.0 °C would decrease between 3.0 and 18.7% worldwide relative to 1986–2005; the maize yield would fluctuate between − 6.8 and 7.2% under global warming by 1.5 °C. 
From the spatial distribution, the gross maize yield in the top 5 high-yield countries (including the United States, China, Brazil, Argentina and Mexico) would decrease by 2% under global warming by 1.5 °C and 11.4% under global warming by 2.0 °C. At the global level, the market price for maize would increase by 0.7% and 3.4% under the 1.5 °C and 2.0 °C scenarios, respectively, which would vary considerably among different countries and regions. So, it is urgent for all countries to pay enough attention to the loss risk of maize yield and take mitigation and adaptation actions in response to climate change. The time left for changing our minds and actions is becoming less and less.\n\n#### **Data availability**\n\nThe historical weather data (1986–2005) that support the analysis with ESMs in this study are publicly available online at https://data.giss.nasa.gov/impacts/agmipcf/; the future climate scenario data (2006–2099) that support the analysis with ESMs in this study are publicly available online at https://pcmdi.llnl.gov/?cmip5 and https://esgf-node.llnl.gov/projects/esgf-llnl/. The spatial data of harvest area, yield, crop calendar, irrigation portion and chemical N input for maize that support the simulation with the crop model (DSSAT) in this study are publicly available at http://mapspam.info/ (SPAM) and http://www.sage.wisc.edu (SAGE); the soil data that support the simulation with the crop model (DSSAT) in this study are publicly available from the WISE database (https://www.isric.online/index.php/) and the Digital Soil Map of the World (DSMW) (http://www.fao.org/land-water/land/land-governance/land-resources-planning-toolbox/category/details/en/c/1026564/). All other relevant data are available from the corresponding authors.\n\nReceived: 6 June 2022; Accepted: 11 October 2022\n\n#### **References**\n\n- 1. Angélil, O. *et al.* An independent assessment of anthropogenic attribution statements for recent extreme temperature and rainfall events. *J. 
Clim.* **30**(1), 5–16 (2017).\n- 2. Rosenzweig, C. *et al.* Coordinating AgMIP data and models across global and regional scales for 1.5°C and 2.0°C assessments. *Philos. Trans. R. Soc. A.* **376**, 20160455 (2018).\n- 3. Mitchell, D. *et al.* Half a degree additional warming, prognosis and projected impacts (HAPPI): Background and experimental design. *Geosci. Model Dev.* **10**, 571–583 (2017).\n- 4. Coumou, D. & Rahmstorf, S. A decade of weather extremes. *Nat. Clim. Change* **2**, 491–496 (2012).\n- 5. IPCC: Summary for Policymakers. In *Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change* 4–6 (Cambridge University Press, 2013).\n- 6. Diffenbaugh, N. S. *et al.* Quantifying the influence of global warming on unprecedented extreme climate events. *PNAS* **114**(19), 4881–4886 (2016).\n- 7. Tai, A. P. K., Martin, M. V. & Heald, C. L. Threat to future global food security from climate change and ozone air pollution. *Nat. Clim. Change* **4**, 817–821 (2014).\n- 8. Román-Palacios, C. & Wiens, J. J. Recent responses to climate change reveal the drivers of species extinction and survival. *PNAS* **117**(8), 4211–4217 (2020).", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed9.pdf" - }, - { - "text": "complex changes in the state of the climate [7], which may be caused by natural processes, external forces, or human interventions [8]. By randomly assigning respondents to climate change or global warming questionnaires, scholars confirmed that the different connotations contained in the two definitions are likely to evoke distinct interpretations of the causes and impacts of the global climate issue [9], which may inhibit collaboration and joint efforts to mitigate the global challenge.\n\nPublic preference between climate change and global warming is even more apparent when considering the ideology spectrum [10]. 
Some scholars concluded that conservatives, who are less concerned with environmental issues, tended to use global warming as a narrative strategy because global warming has a more direct connection with temperature rise, making it easier to find contradictory cues such as freezing weather or heavy snowstorms to deny global climate change facts [11]. The associations between global warming and human activities may contribute to more controversies as well [12], connecting global warming more with the \"hoax\" frame [5] and evoking greater negative sentiment [13].\n\nAlthough these existing studies have often attempted to identify the differences between these two terminologies, only a few particular perspectives, such as sentiment, ideological preference, or cause and effect, were examined in each study [3,9,13]. However, the associate network model introduced by psychologists suggests that human recognition and memory have a network-shaped architecture [14], where individual understanding of particular objects is connected with numerous other objects in the mind. According to the associate network model, individual understanding of the global climate concern is a network composed of numerous inter-connected concepts, among which are climate change and global warming. As the two terminologies concern the primary mechanism of the global climate issue, the preference between the two understandings may represent two distinct climate discourses by differently organizing numerous climate concepts. Examining the differences between the two discourses with an associative perspective may provide communicators with unique insights into narrowing the cognitive discrepancy. 
The temporal dimension was lacking in existing studies, necessitating the study of how concepts associated with each other have evolved with time.\n\nLarge amounts of user-generated data on social media, which have been valued in computer science, communication, and environmental studies [5,9,15–18], have enabled the acquisition of the social media representation of the two discourses in a decade. In this study, by analyzing hashtag co-occurrence patterns in 6,662,478 tweets containing \"climate change\" and \"global warming\" between 1 January 2009 and 31 December 2018, two semantic networks of public climate discourse were constructed to identify the critical concepts and links surrounding the two terminologies. We conducted temporal analysis to observe the evolution of the two discourses and to measure whether the discrepancy between the two has widened or narrowed within the 10-year period.\n\nTo be specific, we formulated three research questions (RQs) to be explored in this study:\n\nRQ1: What is the difference in how the two discourses are associated with important climate concepts in people's minds?\n\nRQ2: How did the two competing climate discourses evolve from 2009 to 2018?\n\nRQ3: Did the two competing discourses converge or diverge in this decade?\n\n#### **2. Background**\n\n#### *2.1. Climate Change, Global Warming, and Frames*\n\nExisting studies have noted that the subtle difference between climate change and global warming evokes different public cognitive responses, where global warming indicates heat-related impacts, human causes, increased UV light penetration, ozone depletion, and the greenhouse effect, whereas climate change is more associated with a wide range of influences on climate, including drought and agriculture [9]. 
An N-gram analysis suggested that global warming showed a closer connection with ice, snow, and sea, whereas climate change was always connected with scientific investigations, such as", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - }, - { - "text": "but also a result of the different forcings influencing the atmosphere model at the time of passing each GWL, and the interaction with the climate sensitivity of HadGEM3. The radiative forcing of non-CO2 forcings has previously been highlighted as a potentially important influence on patterns of climate change at 1.5°C and 2°C global warming [39]. Furthermore, despite some differences in regional climate responses between ensemble members, there were also some remarkable consistencies, especially in the changes that might be considered inconsistent with a warming climate, such as in regions like northern South America where heavy rainfall (Rx5day) decreases rather than increasing as might be expected under a warming climate. Again, these consistencies point to some common forcing of all simulations.\n\nOne key factor is the different times of passing a particular GWL, because the net radiative forcing would be different even though the same emissions and concentration scenario was used in all simulations. A given GWL was reached at a different time in each ensemble member, so the CO2 and aerosol concentrations vary between ensemble members; in members reaching a GWL early, such as that driven by IPSL-CM5A-LR, the CO2 concentration is relatively lower than in other members, and the total aerosol concentration would be relatively higher (CO2 concentrations are projected to increase in RCP8.5, but aerosol concentrations are projected to decline). The net radiative forcing is smaller, because in RCP8.5 the increase in positive radiative forcing from CO2 is greater than the decrease in net negative radiative forcing from aerosols. 
Moreover, the physiological effect of CO2 is also smaller, meaning that the consequent reduction in transpiration and associated additional land surface warming influence would also be expected to be smaller.\n\nConversely, in members reaching the same GWL later, such as that driven by GFDL-ESM2M, CO2 concentration is relatively higher, and aerosol concentrations are lower. So, net radiative forcing, CO2 physiological effects and the regional-scale radiative forcings from individual aerosol types could, therefore, be quite different in the GFDL-driven HadGEM3 simulation when it reaches 2°C global warming 25 years later than the IPSL-CM5A-LR-driven simulation.\n\nThe spatial pattern of changes in the different ensemble members may also play a role in influencing the global mean changes, for example, with large changes in some regions due to faster snow-melt or changes in cloud cover in one ensemble member leading to particular changes in regional warming that are not seen in other ensemble members. Moreover, the individual forcings of the different aerosol components such as sulfate and black carbon differ in sign and spatial pattern, so the overall impact on local radiative forcing and hence regional temperature patterns is more complex. Therefore, the global mean changes may not necessarily be expected to scale with global mean forcings.\n\nA further complexity in identifying precise mechanisms for regional changes is that the experimental design used here, with one atmospheric model and concentration/emissions scenario but six different SST and SIC patterns, means that the impact of spatial heterogeneity in radiative forcings is complex and involves a mix of effects in HadGEM3 and the original CMIP5 models. In the case of aerosols, for example, our HadGEM3 simulations are driven with RCP8.5 aerosol emissions and the aerosol concentrations are then calculated within the model itself. 
The spatial distributions of aerosol optical depth and radiative forcing can, therefore, be expected to be reasonably similar, because they arise from the same emissions scenario, although some differences may occur due to the different regional climate-change patterns. However, the impact of aerosols is also seen in the SST and SIC changes, because these will have responded to changes in regional aerosol radiative forcing in the original CMIP5 simulations. Therefore, these SST and SIC patterns will carry the 'memory' of aerosol changes in the original CMIP5 projections.\n\nOne example of an impact of changing aerosol radiative forcing could be the precipitation changes in northern South America including Amazonia. All ensemble members show a general drying in this region, as seen in RX5day and mean run-off results. The reduction in Rx5day is particularly notable, because the general expectation would be for an increase in heavy rainfall events in a warmer climate, as is seen in most other regions in these projections. This reduced rainfall in the Amazon region may be associated with the reducing net negative aerosol radiative forcing in the North Atlantic [40]. CO2 physiological forcing may also play a role here [41,42].", - "page_start": 23, - "page_end": 23, - "source_file": "pubmed11.pdf" - }, - { - "references": { - "source_file": "pubmed10.pdf", - "query": "Which of #climatechange and #globalwarming is the most used ?", - "target_page": 5, - "target_passage": "A total of 6,662,478 tweets were retained, of which 5,774,747 contained #climatechange, and 887,731 contained #globalwarming", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "**Figure 5.** The sum of centrality for nodes in four clusters in the climate change discourse from 2009 to 2018 (**a**); the sum of centrality for nodes in four clusters in the global warming discourse from 2009 to 2018 (**b**). 
\n\nAs the climate change and global warming discourses evolved over the past years, their relative statuses in public discourse also changed. Although from 2009 to 2018, increasing numbers of people started to use Twitter, resulting in an overall rise in the number of tweets and hashtags, the ratio of #climatechange frequency and #globalwarming frequency still indicated the public's change in frame preference. Figure 1a displays that in 2009, the number of tweets with #climatechange was 2.69 times that of the tweets with #globalwarming, whereas the ratio increased significantly after 2013 and reached 13.02 in 2018. The climate change network showed a stronger ability to incorporate diverse hashtags into discussions, according to Figure 1b. In 2009, the hashtags that co-occurred with #climatechange were 2.44 times those that co-occurred with #globalwarming, and the ratio climbed to 6.36 in 2018.\n\nThe rank–order correlation coefficient of nodes between the two networks maintained a stable level and showed a slight climbing trend starting 2009, as shown in Figure 6a, except for 2010 and 2011, when the *p*-values were larger than 0.05 and no significant correlations were identified. The QAP analysis showed that the associations between the two discourses were correlated in the 10-year period (the *p*-value for 2015 was 0.011; *p*-values for all the other years were less than 0.001). Figure 6b reveals that the similarity of associations between the top 50 nodes in the two discourses fluctuated and did not show a rising trend with the correlation of nodes' rank order.\n\n**Figure 6.** Rank order correlation between hashtags in the climate change and global warming discourses from 2009 to 2018 (**a**); correlation between matrices of the climate change discourse and the global warming discourse from 2009 to 2018 (**b**).\n\n#### **5. Discussion**\n\n#### *5.1. Themes and Structure of the Two Discourses*\n\n#### 5.1.1. Phenomenon vs. Mechanism of Action\n\nClimate change and global warming have long been two competing frameworks shaping the public's perceptions, memory, and interpretations of the climate issue by highlighting different aspects of", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed10.pdf" - }, - { - "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where \"tcot\", short for \"Top Conservatives on Twitter\", was the node ranked highest, and \"p2\", short for \"Progressives 2.0\", is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. Action toward the global climate issue was the third-largest cluster (16%), including both domestic efforts, such as \"us\", \"trump\", \"climatechangeisreal\", \"climateaction\", and \"epa\", and two international items, like \"china\" and \"india\". The fourth cluster (in blue) referred to emissions, including hashtags like \"co2\", \"green\", and \"carbon\". 
The smallest cluster (8%) was composed of \"snow\", \"winter\", \"heatwave\", and \"summer\", referring to the temperature abnormalities on the earth.\n\n#### *4.3. Temporal Analysis of the Associations in the Two Discourses*\n\nThe online presentations of the climate change and global warming discourses are dynamic. As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change discourse. By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. We found \"pollution\" and \"earth\" were unique to the keyword list of the global warming discourse, and \"economy\", \"water\", \"china\", \"coal\", \"solar\", \"sustainability\", and \"food\" only occurred on the critical list for the climate change discourse.\n\n**Table 2.** Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n| --- | --- | --- |\n| #climatechange | china, solar, water, food, economy, coal, sustainability | co2, news, carbon, green, climate, |\n| #globalwarming | pollution, earth | us, energy, science, environment |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. Vector graphics with the label of nodes are provided in the Supplementary Materials. Four themes were identified in each discourse according to the nodes' associations. 
To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.\n\nFigure 3 depicts the associations of hashtags in the climate change discourse for each year from 2009 to 2018. The scientific hashtags cluster (in green) was the most important theme in the climate change discourse, especially more recently. However, some scientific hashtags, such as \"ghg\" (greenhouse gas), \"co2\", and \"forests\", were not identified in the scientific cluster but in the global actions cluster (in yellow) because these hashtags were frequently used in the global action context and identified with a closer semantic association to global action by Gephi. In addition to these hashtags, the global action cluster included a series of international activities, such as \"ipcc\" (Intergovernmental Panel on Climate Change), \"unfccc\" (United Nations Framework Convention on Climate Change), and \"cop\" (Conferences of the Parties) for almost every year. The blue cluster included political hashtags, such as \"uniteblue\", \"sgp\", \"p2\", and \"tcot\". In 2017 and 2018, the associations with political hashtags disappeared among the top 50 hashtags. The small red cluster had a mixed theme, combining \"technology\", \"innovation\", \"education\", \"africa\", \"healthcare\", and \"politics\". The centrality sum of the nodes in the red cluster remained rather low throughout the 10-year period but increased noticeably in the last two years of the period according to Figure 5a.\n\nFigure 4 describes the evolution of concepts' associations in the global warming discourse during the 10 years. The red cluster included concepts such as \"2012\", \"hot\", \"summer\", \"elnino\", and \"snow\", describing the weather abnormalities related to global warming. 
A notable finding is that before 2012, global warming's association with temperature abnormalities and extreme weather was not salient,", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed10.pdf" - }, - { - "text": "column name to create two matrices. One matrix was created for the climate change discourse, and we filled the cell whose column name and row name were among the top 50 list in the climate change discourse with the frequency at which the two hashtags were associated in this discourse, and the other cells were filled with 0. This was repeated for the global warming matrix. We thus obtained two matrices with the same row and column names but different values in the cells. Then, the two matrices were input to the quadratic assignment procedure (QAP) [85] analysis provided by UCINET software [86] to assess their correlation for each year.\n\n### **4. Results**\n\n### *4.1. General Descriptions*\n\nAssociation networks surrounding #climatechange and #globalwarming showed different properties. The climate change discourse included 38,821 hashtags, whereas the global warming discourse only contained 8788 hashtags. Table 1 displays the 50 most significant hashtags in the two discourses based on centrality. As some hashtags were used in the form of an abbreviation or phrase, explanations are provided in the table. Two networks shared 32 out of the 50 most significant words. Hashtags \"canada\", \"cdnpoli\", \"sdgs\", \"biodiversity\", \"education\", \"environmental\", \"cop24\", \"sustainable\", \"auspol\", \"food\", \"agriculture\", \"cleanenergy\", \"renewableenergy\", \"renewables\", \"emissions\", \"coal\", \"fossilfuels\", and \"cop21\" only showed up on the top 50 list of the \"climate change\" network. 
Hashtags \"tcot\", \"california\", \"p2\", \"nyc\", \"snow\", \"agw\", \"summer\", \"global\", \"winter\", \"india\", \"planet\", \"heatwave\", \"hoax\", \"nasa\", \"algore\", \"world\", \"oil\", and \"eco\" were unique on the top 50 list of the global warming network. The two lists only shared three out of the top five hashtags. In the #climatechange network, \"climateaction\" was ranked third place and \"sustainability\" was ranked fourth place, whereas they were ranked significantly lower, 17th and 22nd, respectively, in the #globalwarming network. In the #globalwarming network, \"earth\" and \"weather\" were among the top five nodes, whereas they were ranked 14th and 24th in the #climatechange network, respectively.\n\n| No. | #Climatechange | | #Globalwarming | |\n| --- | --- | --- | --- | --- |\n| | Hashtag | Centrality | Hashtag | Centrality |\n| 1 | climate | 0.466 | climate | 0.530 |\n| 2 | environment | 0.465 | environment | 0.446 |\n| 3 | climateaction | 0.391 | science | 0.319 |\n| 4 | sustainability | 0.316 | earth | 0.296 |\n| 5 | science | 0.314 | weather | 0.280 |\n| 6 | energy | 0.283 | us * | 0.280 |\n| 7 | trump | 0.257 | trump | 0.263 |\n| 8 | us * | 0.247 | pollution | 0.256 |\n| 9 | cop21 * | 0.232 | co2 | 0.244 |\n| 10 | parisagreement * | 0.232 | green | 0.239 |\n| 11 | actonclimate * | 0.225 | tcot * | 0.229 |\n| 12 | water | 0.221 | nature | 0.213 |\n| 13 | pollution | 0.210 | news | 0.198 |\n| 14 | earth | 0.207 | energy | 0.192 |\n| 15 | green | 0.200 | climatechangeisreal | 0.187 |\n| 16 | climatechangeisreal | 0.195 | obama | 0.181 |\n| 17 | renewableenergy * | 0.194 | climateaction | 0.175 |\n| 18 | health | 0.193 | algore * | 0.174 |\n| 19 | nature | 0.187 | water | 0.171 |\n| 20 | renewables | 0.186 | agw * | 0.164 |\n| 21 | cleanenergy | 0.176 | carbon | 0.164 |\n| 22 | carbon | 0.175 | sustainability | 0.163 |\n\n**Table 1.** The top 50 central hashtags on Twitter surrounding #climatechange and #globalwarming from 2009 to 2018. 
The hashtag with * is explained in Appendix A in ascending alphabetical order.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed10.pdf" - }, - { - "text": "International Journal of *Environmental Research and Public Health*\n\n# *Article* **#Climatechange vs. #Globalwarming: Characterizing Two Competing Climate Discourses on Twitter with Semantic Network and Temporal Analyses**\n\n**Wen Shi 1 , Haohuan Fu 1,2, Peinan Wang 3 , Changfeng Chen 3 and Jie Xiong 4,***\n\n- 1 Ministry of Education Key Laboratory for Earth System Modeling, Department of Earth System Science, Tsinghua University, Beijing 100084, China; shi-w18@mails.tsinghua.edu.cn (W.S.); haohuan@tsinghua.edu.cn (H.F.)\n- 2 National Supercomputing Center in Wuxi, Wuxi 214000, China\n- 3 School of Journalism and Communication, Tsinghua University, Beijing 100084, China; wpn17@mails.tsinghua.edu.cn (P.W.); chencf@mail.tsinghua.edu.cn (C.C.)\n- 4 Strategy and Innovation Department, Rennes School of Business, 35065 Rennes, France\n- ***** Correspondence: jie.xiong@rennes-sb.com; Tel.:+ 33-(0)-2-99-54-46-79\n\nReceived: 5 December 2019; Accepted: 3 February 2020; Published: 7 February 2020\n\n**Abstract:** Distinct perceptions of the global climate is one of the factors preventing society from achieving consensus or taking collaborative actions on this issue. The public has not even reached an agreement on the naming of the global concern, showing preference for either \"climate change\" or \"global warming\", and few previous studies have addressed these two competing discourses resulting from distinct climate concerns by differently linking numerous climate concepts. Based on the 6,662,478 tweets containing #climatechange or #globalwarming generated between 1 January 2009 and 31 December 2018, we constructed the semantic networks of the two discourses and examined their evolution over the decade. 
The findings indicate that climate change demonstrated a more scientific perspective and showed an attempt to condense climate discussions rather than diffuse the topic by frequently addressing sub-topics simultaneously. Global warming triggered more political responses and showed a greater connection with phenomena. Temporal analysis suggests that traditional political discussions were gradually fading in both discourses but more recently started to revive in the form of discourse alliance in the climate change discourse. The associations between global warming and weather abnormalities suddenly strengthened around 2012. Climate change is becoming more dominant than global warming in public discussions. Although the two discourses have shown more similarities in the rank order of important climate concepts, apparent disagreements continue about how these concepts are associated. These findings lay the groundwork for researchers and communicators to narrow the discrepancy between diverse climate perceptions.\n\n**Keywords:** climate change; global warming; semantic network analysis; temporal analysis; public discourse; Twitter\n\n# **1. Introduction**\n\nThe public's distinct understanding of the cause and effect of the global climate issue is an obstacle to joint mitigation actions. In addition to a diversity of views co-existing in the public discourse [1,2], previous studies noticed that the public had even failed to reach an agreement on whether \"climate change\" or \"global warming\" is the most appropriate definition of the global climate concern [3–5]. 
According to the definition provided by [6], global warming describes global climate issues as a continuous increase in the average temperature of Earth's surface due to anthropogenic emissions of greenhouse gases, whereas climate change includes not only temperature rise but also a range of", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed10.pdf" - }, - { - "text": "issues and re-constructing them differently. By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as \"earth\" and \"pollution\", whereas \"climate change\" was more associated to specific issues like \"solar\", \"coal\", \"china\", and \"food\".\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. 
However, none of the four words, \"snow\", \"summer\", \"winter\", or \"heatwave\" in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' differences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n#### 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag \"tcot\", favored by right-leaning users and \"p2\", favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. 
Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of the climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n#### 5.1.3. Discourse Structure\n\nIn the discourse surrounding #climatechange, \"environment\", \"energy\", and \"global action\" represented the themes of the three largest clusters in the network. However, three popularly recurring hashtags, \"#environment\", \"#energy\", and \"#climateaction\", did not belong to any of the three clusters above, but formed another small tight cluster together, sitting in the most central part of the semantic network, as shown in Figure 2b. As each of the three hashtags can almost represent one sub-theme of the climate change topic, the fact that these three hashtags were tightly bundled might indicate an attempt by #climatechange users to address all three communities together [91], consolidating climate change as a coherent topic rather than a loosely organized one. Previous communication studies also confirmed hashtags' function of serving as a hybrid forum [68], where heterogeneous individuals coordinate to solve", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "studies have noticed that the Maya inscription about doomsday, which seemed rather ridiculous for scientists, might lead to unexpected public associations with climate issues. However, science fiction may influence the public's attitude toward scientific issues. Frankenstein's monster, a well-known fictional character who was a human-built creature in the novel written by Mary Shelley, has long been linked to transgenic technology by referring to genetically-modified food as \"Frankenstein Food\" [98]. 
Scientists found that these associations successfully symbolized the public's uncertainty about the risk of transgenic technology, anxiety toward the human-made living creature, and moral discomfort about reducing life to genetic code [99], even though people know Frankenstein was only a fictional character created 100 years ago. In the current study, we conclude that a similar mechanism may exist in global warming communication. Though \"the end of the world in 2012\" and the popular movie adapted from it sounded unconvincing to scientists, the public, especially those with limited scientific literacy, were defenceless against fiction [100]. Some of the public may accept the indications of temperature rise and extreme weather, and cannot help but strengthen their associations with global warming. However, no similar associations were discovered in the climate change discourse in 2012, which may suggest that global warming is more likely to be associated with disasters, risk, or negative sentiment compared with climate change.\n\n#### *5.3. Discrepancy between the Two Discourses*\n\nThe status of the two discourses varied significantly in the more recent years of the study period. Data from Google in a prior study suggested that the search record for global warming was larger than that of climate change in earlier times [13]. The authors found that in the battle to be the most representative hashtag for global climate concern, #climatechange showed growing popularity and became an overwhelming trending topic compared with #globalwarming. Also, #climatechange showed a stronger ability to incorporate diverse hashtags into its discourse in both relative and absolute dimensions. Comparatively, the popularity of the global warming discourse among social media users did not increase appreciably in terms of tweet volume and hashtag diversity, especially when considering the yearly increase in Twitter users. 
The reason for the observed shift in public discourse toward climate change from global warming may be attributed to the high exposure of climate change in the media and scientific reports in recent years [13]. Previous studies noted that perceived scientific consensus can increase acceptance of science [101]. Though global warming has been commonly used since the 1980s to describe the world-wide temperature rise, climate change is preferred by scientists to refer to a range of complex changes of climate [102]. Pew found that science-related accounts draw millions of followers on Facebook and that the volume of posts they released climbed in past years [103]. Climate scientists have been found to be opinion makers on Twitter [104]. As social media has become an emerging platform for science popularization, the scientific community might contribute to the prevalence of the climate change discourse by talking about climate change facts and mitigating measures [75].\n\nHowever, differences between the two discourses were not eliminated. Even though the two discourses showed more similarities in the rank order of key concepts, the QAP analysis of the two semantic network matrices showed that the two discourses still embody distinct public perceptions of climate issues by associating these hashtags in different manners.\n\nTo be specific, although \"ipcc\", \"cop\", and \"un\" were mentioned in both discourses (yellow in Figures 3 and 4) in earlier years, the clusters to which they belonged had significantly different meanings. As mentioned in the results section, these hashtags were associated with a series of scientific hashtags in the climate change discourse, appealing to global efforts. In the global warming discourse, they were clustered with \"hoax\" and \"frame\", showing a lack of belief in climate issue facts and hesitation about global efforts. 
More recently, when discussions about temperature, politics, and hesitation significantly shrank in the global warming discourse, the two discourses showed more similarities about the importance of scientific concepts according to Figure 5a,b. However, links between global efforts and scientific facts were not constructed in the global warming discourse. According to a network model for cognition, the lack of associations means fewer psychological activations will spread to", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed10.pdf" - }, - { - "text": "## **OPEN**\n\n# **The impact of 1.5 °C and 2.0 °C global warming on global maize production and trade**\n\n**Kuo Li1*****, Jie Pan1 , Wei Xiong2 , Wei Xie3 & Tariq Ali3**\n\n**Climate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by 5 climate models recommended by ISI-MIP under 4 RCP scenarios, in which the approximate scenarios with global warming by 1.5 °C and 2 °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by 1.5 °C and 2.0 °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under 2.0 °C scenario was much more serious than 1.5 °C scenario; the ratios of yield changes were separately 0.18% and − 10.8% under 1.5 °C and 2.0 °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the 2.0 °C scenario. The market price of maize would increase by around 0.7% and 3.4% under 1.5 °C and 2.0 °C scenarios. 
With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.**\n\nIn the past hundred years, the global climate has experienced great changes1–4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health6–10. Global warming has gradually changed from a scientific issue to a major social issue of common concern to governments and people of all countries11–13. In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris14. The Paris Agreement has indicated that it is urgent to hold the increase in global average temperature well below 2.0 °C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels.\n\nFaced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food insecurity in the whole world15–20. Meanwhile, global production losses might lead to price shocks and trigger export restrictions21–24; an increasingly interconnected global food system25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security worldwide27–29. So, the impacts of climate changes on crop yields and prices have been of high concern. 
Numerous studies have revealed that the warming trend has a negative impact on crop yields and global trade in most regions all over the world30–32. There are three main methods for impact assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations17,33. Environment-controlled experiments are designed to observe the influence of climate factors on crops, such as drought, flood, heat stress, cold damage, and elevated CO2 concentration, through which the impact mechanism of climate change on crops would be revealed and established23,34,35. Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected field sites or in selected regions36–39. The statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in different sites or counties to establish regression functions for crop response predictions40–43. These studies have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\n1 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China. 2 International Maize and Wheat Improvement Center, Texcoco, Mexico. 3 Peking University, Beijing, China. *email: hqlk2000@163.com", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "A detailed investigation of these factors is beyond the scope of this paper; nevertheless, this result illustrates the important point that the nature and patterns of the climate forcing at a particular level of global warming can play an important role in determining the patterns of regional impacts.\n\n## 5. 
Conclusion\n\nThe higher-resolution HadGEM3 simulations project consistent increases in temperature-related extremes, with larger changes at 2°C compared to 1.5°C and local changes being larger than the global annual mean. There is a higher degree of spatial variation in our projections compared with CMIP5-based studies.\n\nIn the model projections examined here, changes relating to the water cycle are complex, both in their geographical pattern and in the variation between different models. The length of flooding events generally increases across the world in all models, but maximum rainfall can either increase or decrease depending on location. Global patterns of increase and decrease show some consistency between the different GWLs, but also some local differences. Worldwide, most impacts broadly tend to increase with global warming in most areas. For global mean changes, even when the sign of change is uncertain, individual realizations generally show reduced impact at 1.5°C compared with 2°C. However, this does not always hold even at the scale of major global river basins.\n\nVulnerability to food insecurity increases more at 2°C global warming than 1.5°C in approximately three-quarters of countries assessed. The vulnerability increase can arise from increases in either flooding or drought. Reduced drought leads to decreased vulnerability in a limited number of cases.\n\nMost simulations here project a general increase in mean streamflow in most of the basins examined, but with a number of notable exceptions in the tropics. While flows in the Ganges are consistently projected to increase by 30–110% at 2°C, Amazon flows could either increase by 3% or decrease by 25%. Ensemble-mean changes in river flow often do not give a full impression of the magnitude of changes that may be possible, so adaptation planning in particular should not rely on ensemble-mean projections and instead consider a range of outcomes. 
The seasonal low streamflows also increase in many basins, but not as many as for the mean flows—many basins see decreased low flows in some or all projections.\n\nBroadly, changes in weather extremes at 1.5°C global warming could be estimated by scaling back the impacts at 2°C, if this is done with individual ensemble members rather than the ensemble mean. However, this was not always the case for impacts that depend on more complex processes or interactions between more than one climate variable, such as run-off and an indicator of vulnerability to food insecurity.\n\nData accessibility. This article has no additional data.\n\nCompeting interests. We declare we have no competing interests.\n\nFunding. This research received funding from the European Union Seventh Framework Programme FP7/2007– 2013 under grant agreement no. 603864 (HELIX: 'High-End cLimate Impacts and eXtremes'; www.helixclimate.eu). The work of R.A.B., C.B., J.C., L.G., K.L. and K.R. was additionally supported by the Joint UK BEIS/Defra Met Office Hadley Centre Climate Programme (GA01101).\n\nAcknowledgements. The authors thank Ed Pope, Jason Lowe and Dann Mitchell for advice and discussion, Alissa Haward and Maria Pearce for project management and administration of HELIX, and two anonymous reviewers whose comments substantially improved the paper.\n\n## References\n\n- 1. IPCC. 2014 Summary for policymakers. In *Climate change 2014: impacts, adaptation, and vulnerability. Part A: global and sectoral aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change* (eds CB Field *et al*.), pp. 1–32. Cambridge, UK: Cambridge University Press.", - "page_start": 24, - "page_end": 24, - "source_file": "pubmed11.pdf" - }, - { - "text": "but also a result of the different forcings influencing the atmosphere model at the time of passing each GWL, and the interaction with the climate sensitivity of HadGEM3. 
The radiative forcing of non-CO2 forcings has previously been highlighted as a potentially important influence on patterns of climate change at 1.5°C and 2°C global warming [39]. Furthermore, despite some differences in regional climate responses between ensemble members, there were also some remarkable consistencies, especially in the changes that might be considered inconsistent with a warming climate, such as regions like northern South America where heavy rainfall (Rx5day) decreases rather than increasing as might be expected under a warming climate. Again, these consistencies point to some common forcing of all simulations.\n\nOne key factor is the different times of passing a particular GWL, because the net radiative forcing would be different even though the same emissions and concentration scenario was used in all simulations. A given GWL was reached at a different time in each ensemble member, so the CO2 and aerosol concentrations vary between ensemble members; in members reaching a GWL early, such as that driven by IPSL-CM5A-LR, the CO2 concentration is relatively lower than in other members, and the total aerosol concentration would be relatively higher (CO2 concentrations are projected to increase in RCP8.5, but aerosol concentrations are projected to decline). The net radiative forcing is smaller, because in RCP8.5 the increase in positive radiative forcing from CO2 is greater than the decrease in net negative radiative forcing from aerosols. Moreover, the physiological effect of CO2 is also smaller, meaning that the consequent reduction in transpiration and associated additional land surface warming influence would also be expected to be smaller.\n\nConversely, in members reaching the same GWL later, such as that driven by GFDL-ESM2M, CO2 concentration is relatively higher, and aerosol concentrations are lower. 
So, net radiative forcing, CO2 physiological effects and the regional-scale radiative forcings from individual aerosol types could, therefore, be quite different in the GFDL-driven HadGEM3 simulation when it reaches 2°C global warming 25 years later than the IPSL-CM5A-LR-driven simulation.\n\nThe spatial pattern of changes in the different ensemble members may also play a role in influencing the global mean changes, for example, with large changes in some regions due to faster snow-melt or changes in cloud cover in one ensemble member leading to particular changes in regional warming that are not seen in other ensemble members. Moreover, the individual forcings of the different aerosol components such as sulfate and black carbon differ in sign and spatial pattern, so the overall impact on local radiative forcing and hence regional temperature patterns is more complex. Therefore, the global mean changes may not necessarily be expected to scale with the global mean forcings.\n\nA further complexity in identifying precise mechanisms for regional changes is that the experimental design used here, with one atmospheric model and concentration/emissions scenario but six different SST and SIC patterns, means that the impact of spatial heterogeneity in radiative forcings is complex and involves a mix of effects in HadGEM3 and the original CMIP5 models. In the case of aerosols, for example, our HadGEM3 simulations are driven with RCP8.5 aerosol emissions and the aerosol concentrations are then calculated within the model itself. The spatial distributions of aerosol optical depth and radiative forcing can, therefore, be expected to be reasonably similar, because they arise from the same emissions scenario, although some differences may occur due to the different regional climate-change patterns. However, the impact of aerosols is also seen in the SST and SIC changes, because these will have responded to changes in regional aerosol radiative forcing in the original CMIP5 simulations. 
Therefore, these SST and SIC patterns will carry the 'memory' of aerosol changes in the original CMIP5 projections.\n\nOne example of an impact of changing aerosol radiative forcing could be the precipitation changes in northern South America including Amazonia. All ensemble members show a general drying in this region, as seen in RX5day and mean run-off results. The reduction in Rx5day is particularly notable, because the general expectation would be for an increase in heavy rainfall events in a warmer climate, as is seen in most other regions in these projections. This reduced rainfall in the Amazon region may be associated with the reducing net negative aerosol radiative forcing in the North Atlantic [40]. CO2 physiological forcing may also play a role here [41,42].", - "page_start": 23, - "page_end": 23, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Table 3.** Time of reaching GWLs of 1.5°C and 2°C in the raw output from the HadGEM3 climate simulations, driven by different sets of CMIP5 sea-surface temperatures. The dates are the centre year of a 20-year period for which the climate data are applied to the calculation of the ClimPACT indices.\n\n| driving SSTs | 1.5°C | 2.0°C |\n| --- | --- | --- |\n| IPSL-CM5A-LR | 2015 | 2030 |\n| GFDL-ESM2M | 2040 | 2055 |\n| HadGEM2-ES | 2027 | 2039 |\n| IPSL-CM5A-MR | 2020 | 2034 |\n| MIROC-ESM-CHEM | 2023 | 2035 |\n| ACCESS1–0 | 2034 | 2046 |\n\nup to present-day plus model-projected warming thereafter (table 4). While this does lead to inconsistent definitions of dates of the GWLs for applications of the climate model output with and without bias correction, the focus here is on the level of warming relative to pre-industrial rather than the timing of this warming. Therefore, priority is given to an accurate quantification of GWLs in all parts of the study, at the expense of inconsistencies in the dates of these warming levels. 
The inconsistency between the dates of the GWLs ranged from 2 to 9 years depending on the model and warming level. This inconsistency would have consequences if these results were applied to time-dependent impacts and adaptation assessments, but that is not the case here so this concern does not apply. However, one issue is that the time-dependent nature of the aerosol forcing means that the spatial pattern of regional climate responses varies over time, so this will lead to some degree of inconsistency between the analysis of the ClimPACT extremes and the HCVI and JULES impacts projections.\n\n## 3. Results\n\nFor a world at 2°C global warming, we present a range of outcomes to provide insight into the level of agreement between models for a particular projected change, and hence an indication of potential robustness of the projected changes for informing adaptation. We then make a comparison of impacts at global warming 1.5°C to investigate the level of impact that would be avoided by limiting global warming to different levels. Bearing in mind the uncertainty in regional climate outcomes, we address this in a number of ways. For individual realizations, we compare the impacts at different warming levels to see if they are systematically smaller at 1.5°C, even if the sign of the change is uncertain. We also compare the range of outcomes at different GWLs, to see if the regional-scale uncertainty itself increases with global warming.\n\n## (a) Climate-change impacts at 2°C global warming\n\nFor 2°C global warming, the ensemble-mean increase in annual daily maximum temperature was above 2°C for most of the land surface, with the exception of the Indian subcontinent, most of Australia and Antarctica (figure 2). The increase was higher still in many regions; most of North America, much of China and north Asia, northwestern South America and all of Europe. 
In the northern and eastern USA and much of northern and western Europe, the annual daily maximum temperature increased by over 4°C for 2°C global warming. The global mean TXx increased by more than 2°C in all ensemble members (table 5), so the maximum temperature warming more than the global annual mean is a consistent result across all projections here, as found in previous studies with other models [9] (table 5).\n\nThe different ensemble members give somewhat different results at regional scales, although there is a strong consensus on the temperature extremes examined here becoming warmer. In the simulations driven by SSTs and SICs from the two IPSL CMIP5 models, most of the global", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed11.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed10.pdf", - "query": "Is the #climateaction hashtag more bound the #globalwarming of #climatechange ?", - "target_page": 7, - "target_passage": "In the #climatechange network, \"climateaction\" was ranked third place and \"sustainability\" was ranked fourth place, whereas they were ranked significantly lower, 17th and 22nd, respectxively, in the #globalwarming network", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "**Figure 5.** The sum of centrality for nodes in four clusters in the climate change discourse from 2009 to 2018 (**a**); the sum of centrality for nodes in four clusters in the global warming discourse from 2009 to 2018 (**b**). 
As the climate change and global warming discourses evolved over the past years, their relative statuses in public discourse also changed. Although from 2009 to 2018, increasing numbers of people started to use Twitter, resulting in an overall rise in the number of tweets and hashtags, the ratio of #climatechange frequency and #globalwarming frequency still indicated the public's change in frame preference. Figure 1a displays that in 2009, the number of tweets with #climatechange was 2.69 times that of the tweets with #globalwarming, whereas the ratio increased significantly since 2013 and reached 13.02 in 2018. The climate change network showed a stronger ability to incorporate diverse hashtags into discussions, according to Figure 1b. In 2009, the hashtags that co-occurred with #climatechange were 2.44 times those that co-occurred with #globalwarming, and the ratio climbed to 6.36 in 2018.\n\nThe rank–order correlation coefficient of nodes between the two networks maintained a stable level and showed a slight climbing trend starting 2009, as shown in Figure 6a, except for 2010 and 2011, when the *p*-values were larger than 0.05 and no significant correlations were identified. The QAP analysis showed that the associations between the two discourses were correlated in the 10-year period (the *p*-value for 2015 was 0.011; *p*-values for all the other years were less than 0.001). Figure 6b reveals that the similarity of associations between the top 50 nodes in the two discourses fluctuated and did not show a rising trend with the correlation of nodes' rank order.
\n\n**Figure 6.** Rank order correlation between hashtags in the climate change and global warming discourses from 2009 to 2018 (**a**); correlation between matrices of the climate change discourse and the global warming discourse from 2009 to 2018 (**b**).\n\n#### **5. Discussion**\n\n#### *5.1. Themes and Structure of the Two Discourses*\n\n#### 5.1.1. Phenomenon vs. Mechanism of Action\n\nClimate change and global warming have long been two competing frameworks shaping the public's perceptions, memory, and interpretations of climate issue by highlighting different aspects of", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed10.pdf" - }, - { - "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where \"tcot\", short for \"Top Conservatives on Twitter\", was the node ranked highest, and \"p2\", short for \"Progressives 2.0\", is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. Action toward the global climate issue was the third-largest cluster (16%), including both domestic efforts, such as \"us\", \"trump\", \"climatechangeisreal\", \"climateaction\", and \"epa\", and two international items, like \"china\" and \"india\". The fourth cluster (in blue) referred to emissions, including hashtags like \"co2\", \"green\", and \"carbon\".
The smallest cluster (8%) was composed of \"snow\", \"winter\", \"heatwave\", and \"summer\", referring to the temperature abnormalities on the earth.\n\n#### *4.3. Temporal Analysis of the Associations in the Two Discourses*\n\nThe online presentations of the climate change and global warming discourses are dynamic. As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change discourse. By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. We found \"pollution\" and \"earth\" were unique to the keyword list of the global warming discourse, and \"economy\", \"water\", \"china\", \"coal\", \"solar\", \"sustainability\", and \"food\" only occurred on the critical list for the climate change discourse.\n\n**Table 2.** Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n| --- | --- | --- |\n| #climatechange | china, solar, water, food, economy, coal, sustainability | co2, news, carbon, green, climate, us, energy, science, environment |\n| #globalwarming | pollution, earth | co2, news, carbon, green, climate, us, energy, science, environment |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. Vector graphics with the label of nodes are provided in the Supplementary Materials. Four themes were identified in each discourse according to the nodes' associations.
To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.\n\nFigure 3 depicts the associations of hashtags in the climate change discourse for each year from 2009 to 2018. The scientific hashtags cluster (in green) was the most important theme in the climate change discourse, especially more recently. However, some scientific hashtags, such as \"ghg\" (greenhouse gas), \"co2\", and \"forests\", were not identified in the scientific cluster but in the global actions cluster (in yellow) because these hashtags were frequently used in the global action context and identified with a closer semantic association to global action by Gephi. In addition to these hashtags, the global action cluster included a series of international activities, such as \"ipcc\" (Intergovernmental Panel on Climate Change), \"unfccc\" (United Nations Framework Convention on Climate Change), and \"cop\" (Conferences of the Parties) for almost every year. The blue cluster includes political hashtags, such as \"uniteblue\", \"sgp\", \"p2\", and \"tcot\". In 2017 and 2018, the associations with political hashtags disappeared among the top 50 hashtags. The small red cluster had a mixed theme, combining \"technology\", \"innovation\", \"education\", \"africa\", \"healthcare\", and \"politics\". The centrality sum of the nodes in the red cluster remained rather low throughout the 10-year period but obviously increased in the last two years of the period according to Figure 5a.\n\nFigure 4 describes the evolution of concepts' associations in the global warming discourse during the 10 years. The red cluster included concepts such as \"2012\", \"hot\", \"summer\", \"elnino\", and \"snow\", describing the weather abnormalities related to global warming.
A notable finding is that before 2012, global warming's association with temperature abnormalities and extreme weather was not salient,", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed10.pdf" - }, - { - "text": "issues and re-constructing them differently. By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as \"earth\" and \"pollution\", whereas \"climate change\" was more associated with specific issues like \"solar\", \"coal\", \"china\", and \"food\".\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89].
However, none of the four words, \"snow\", \"summer\", \"winter\", or \"heatwave\" in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' differences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n#### 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag \"tcot\", favored by right-leaning users and \"p2\", favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. 
Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of the climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n#### 5.1.3. Discourse Structure\n\nIn the discourse surrounding #climatechange, \"environment\", \"energy\", and \"global action\" represented the themes of the three largest clusters in the network. However, three popularly recurring hashtags, \"#environment\", \"#energy\", and \"#climateaction\", did not belong to any of the three clusters above, but formed another small tight cluster together, sitting in the most central part of the semantic network, as shown in Figure 2b. As each of the three hashtags can almost represent one sub-theme of the climate change topic, the fact that they were tightly bundled might indicate an attempt by #climatechange users to address all three communities together [91], consolidating climate change as a coherent topic rather than a loosely organized one. Previous communication studies also confirmed hashtags' function of serving as a hybrid forum [68], where heterogeneous individuals coordinate to solve", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "column name to create two matrices. One matrix was created for the climate change discourse, and we filled the cell whose column name and row name were among the top 50 list in the climate change discourse with the frequency at which the two hashtags were associated in this discourse, and the other cells were filled with 0. This was repeated for the global warming matrix. We thus obtained two matrices with the same row and column names but different values in the cells.
Then, the two matrices were input to the quadratic assignment procedure (QAP) [85] analysis provided by UCINET software [86] to assess their correlation for each year.\n\n### **4. Results**\n\n### *4.1. General Descriptions*\n\nAssociation networks surrounding #climatechange and #globalwarming showed different properties. The climate change discourse included 38,821 hashtags, whereas the global warming discourse only contained 8788 hashtags. Table 1 displays the 50 most significant hashtags in the two discourses based on centrality. As some hashtags were used in the form of an abbreviation or phrase, explanations are provided in the table. The two networks shared 32 out of the 50 most significant words. Hashtags \"canada\", \"cdnpoli\", \"sdgs\", \"biodiversity\", \"education\", \"environmental\", \"cop24\", \"sustainable\", \"auspol\", \"food\", \"agriculture\", \"cleanenergy\", \"renewableenergy\", \"renewables\", \"emissions\", \"coal\", \"fossilfuels\", and \"cop21\" only showed up on the top 50 list of the \"climate change\" network. Hashtags \"tcot\", \"california\", \"p2\", \"nyc\", \"snow\", \"agw\", \"summer\", \"global\", \"winter\", \"india\", \"planet\", \"heatwave\", \"hoax\", \"nasa\", \"algore\", \"world\", \"oil\", and \"eco\" were unique on the top 50 list of the global warming network. The two lists only shared three out of the top five hashtags. In the #climatechange network, \"climateaction\" was ranked third place and \"sustainability\" was ranked fourth place, whereas they were ranked significantly lower, 17th and 22nd, respectively, in the #globalwarming network. In the #globalwarming network, \"earth\" and \"weather\" were among the top five nodes, whereas they were ranked 14th and 24th in the #climatechange network, respectively.\n\n| No. 
| #Climatechange | | #Globalwarming | |\n| --- | --- | --- | --- | --- |\n| | Hashtag | Centrality | Hashtag | Centrality |\n| 1 | climate | 0.466 | climate | 0.530 |\n| 2 | environment | 0.465 | environment | 0.446 |\n| 3 | climateaction | 0.391 | science | 0.319 |\n| 4 | sustainability | 0.316 | earth | 0.296 |\n| 5 | science | 0.314 | weather | 0.280 |\n| 6 | energy | 0.283 | us * | 0.280 |\n| 7 | trump | 0.257 | trump | 0.263 |\n| 8 | us * | 0.247 | pollution | 0.256 |\n| 9 | cop21 * | 0.232 | co2 | 0.244 |\n| 10 | parisagreement * | 0.232 | green | 0.239 |\n| 11 | actonclimate * | 0.225 | tcot * | 0.229 |\n| 12 | water | 0.221 | nature | 0.213 |\n| 13 | pollution | 0.210 | news | 0.198 |\n| 14 | earth | 0.207 | energy | 0.192 |\n| 15 | green | 0.200 | climatechangeisreal | 0.187 |\n| 16 | climatechangeisreal | 0.195 | obama | 0.181 |\n| 17 | renewableenergy * | 0.194 | climateaction | 0.175 |\n| 18 | health | 0.193 | algore * | 0.174 |\n| 19 | nature | 0.187 | water | 0.171 |\n| 20 | renewables | 0.186 | agw * | 0.164 |\n| 21 | cleanenergy | 0.176 | carbon | 0.164 |\n| 22 | carbon | 0.175 | sustainability | 0.163 |\n\n**Table 1.** The top 50 central hashtags on Twitter surrounding #climatechange and #globalwarming from 2009 to 2018. The hashtag with * is explained in Appendix A in ascending alphabetical order.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed10.pdf" - }, - { - "text": "studies have noticed that the Maya inscription about doomsday, which seemed rather ridiculous for scientists, might lead to unexpected public associations with climate issues. However, science fiction may influence the public's attitude toward scientific issues. Frankenstein's monster, a well-known fictional character who was a human-built creature in the novel written by Mary Shelley, has long been linked to transgenic technology by referring genetically-modified food as \"Frankenstein Food\" [98]. 
Scientists found that these associations successfully symbolized the public's uncertainty about the risk of transgenic technology, anxiety toward the human-made living creature, and moral discomfort about reducing life to genetic code [99], even though people all know Frankenstein was only a fictional character created 100 years ago. In the current study, we concluded that a similar mechanism may exist in global warming communication. Though \"the end of world in 2012\" and its adapted popular movie sounded unconvincing for scientists, the public, especially those who have limited scientific literacy, were defenceless against fiction [100]. Some of the public may accept the indications of temperature rise and extreme weather, and cannot help but strengthen their associations with global warming. However, no similar associations were discovered in the climate change discourse in 2012, which may suggest that global warming is more likely to be associated with disasters, risk, or negative sentiment compared with climate change.\n\n#### *5.3. Discrepancy between the Two Discourses*\n\nThe status of the two discourses varied significantly in the more recent years in the study period. Data from Google in a prior study suggested that the search record for global warming was larger than that of climate change in earlier times [13]. The authors found that in the battle to be the most representative hashtag for global climate concern, #climatechange showed growing popularity and became an overwhelming trending topic compared with #globalwarming. Also, #climatechange showed a stronger ability to incorporate diverse hashtags into its discourse in both relative and absolute dimensions. Comparatively, the popularity of the global warming discourse among social media users did not increase apparently in terms of tweets volume and hashtag diversity, especially when considering the yearly increase in Twitter users.
The reason for the observed shift in public discourse toward climate change from global warming may be attributed to the high exposure of climate change in the media and scientific reports in recent years [13]. Previous studies noted that perceived scientific consensus can increase acceptance of science [101]. Though global warming has been commonly used since the 1980s to describe the world-wide temperature rise, climate change is preferred by scientists to refer to a range of complex changes in climate [102]. Pew found that science-related accounts draw millions of followers on Facebook and that the volume of posts they released climbed in past years [103]. Climate scientists are found to be opinion makers on Twitter [104]. As social media has become an emerging platform for science popularization, the scientific community might contribute to the prevalence of climate change discourse by talking about climate change facts and mitigating measures [75].\n\nHowever, differences between the two discourses were not eliminated. Even though the two discourses showed more similarities in the rank order of key concepts, the QAP analysis of the two semantic network matrices showed that the two discourses still embody distinct public perceptions of climate issues by associating these hashtags in different manners.\n\nTo be specific, although \"ipcc\", \"cop\", and \"un\" were mentioned in both discourses (yellow in Figures 3 and 4) in earlier years, the clusters to which they belonged had significantly different meanings. As mentioned in the results section, these hashtags were associated with a series of scientific hashtags in the climate change discourse, appealing to global efforts. In the global warming discourse, they were clustered with \"hoax\" and \"frame\", showing a lack of belief in climate issue facts and hesitation about global efforts.
More recently, when discussions about temperature, politics, and hesitation significantly shrank in the global warming discourse, the two discourses showed more similarities about the importance of scientific concepts according to Figure 5a,b. However, links between global efforts and scientific facts were not constructed in the global warming discourse. According to a network model for cognition, the lack of associations means fewer psychological activations will spread to", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed10.pdf" - }, - { - "text": "complex changes in the state of the climate [7], which may be caused by natural processes, external forces, or human interventions [8]. By randomly assigning respondents to climate change or global warming questionnaires, scholars confirmed that the different connotations contained in the two definitions are likely to evoke distinct interpretations of the causes and impacts of the global climate issue [9], which may inhibit collaboration and joint efforts to mitigate the global challenge.\n\nPublic preference between climate change and global warming is even more apparent when considering the ideology spectrum [10]. Some scholars concluded that conservatives, who are less concerned with environmental issues, tended to use global warming as a narrative strategy because global warming has a more direct connection with temperature rise, making it easier to find contradictory cues such as freezing weather or heavy snowstorms to deny global climate change facts [11].
The associations between global warming and human activities may contribute to more controversies as well [12], connecting global warming more with the \"hoax\" frame [5] and evoking greater negative sentiment [13].\n\nAlthough these existing studies have often attempted to identify the differences between these two terminologies, only a particular few perspectives, such as sentiment, ideological preference, or cause and effect, were examined in each study [3,9,13]. However, the associate network model introduced by psychologists suggests that human recognition and memory have a network-shaped architecture [14], where individual understanding of particular objects is connected with numerous other objects in the mind. According to the associate network model, individual understanding of the global climate concern is a network composed of numerous inter-connected concepts, among which are climate change and global warming. As the two terminologies concern the primary mechanism of the global climate issue, the preference between the two understandings may represent two distinct climate discourses by differently organizing numerous climate concepts. Examining the differences between two discourses with an associative perspective may provide communicators with unique insights into narrowing the cognitive discrepancy. The temporal dimension was lacking in existing studies, necessitating the study of how concepts associated with each other have evolved with time.\n\nLarge amounts of user-generated data on social media, which have been valued in computer science, communication, and environmental studies [5,9,15–18], have enabled the acquisition of the social media representation of the two discourses in a decade.
In this study, by analyzing hashtag co-occurrence patterns in 6,662,478 tweets containing \"climate change\" and \"global warming\" between 1 January 2009 and 31 December 2018, two semantic networks of public climate discourse were constructed to identify the critical concepts and links surrounding the two terminologies. We conducted temporal analysis to observe the evolution of the two discourses and to measure whether the discrepancy between the two has widened or narrowed within the 10-year period.\n\nTo be specific, we formulated three research questions (RQs) to be explored in this study:\n\nRQ1: What is the difference in how the two discourses are associated with important climate concepts in people's minds?\n\nRQ2: How did the two competing climate discourses evolve from 2009 to 2018?\n\nRQ3: Did the two competing discourses converge or diverge in this decade?\n\n#### **2. Background**\n\n#### *2.1. Climate Change, Global Warming, and Frames*\n\nExisting studies have noted that the subtle difference between climate change and global warming evokes different public cognitive responses, where global warming indicates heat-related impacts, human causes, increased UV light penetration, ozone depletion, and the greenhouse effect, whereas climate change is more associated with a wide range of influences on climate, including drought and agriculture [9]. An N-gram analysis suggested that global warming showed a closer connection with ice, snow, and sea, whereas climate change was always connected with scientific investigations, such as", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - }, - { - "text": "## **OPEN**\n\n# **The impact of 1.5 °C and 2.0 °C global warming on global maize production and trade**\n\n**Kuo Li1*****, Jie Pan1 , Wei Xiong2 , Wei Xie3 & Tariq Ali3**\n\n**Climate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world.
Future climate scenario data was simulated by 5 climate models recommended by ISI-MIP under 4 RCP scenarios, in which the approximate scenarios with global warming by 1.5 °C and 2 °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by 1.5 °C and 2.0 °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that the risk of maize yield reduction under 2.0 °C scenario was much more serious than 1.5 °C scenario; the ratios of yield changes were separately 0.18% and − 10.8% under 1.5 °C and 2.0 °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the 2.0 °C scenario. The market price of maize would increase by around 0.7% and 3.4% under 1.5 °C and 2.0 °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.**\n\nIn the past hundred years, the global climate has experienced great changes1–4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health6–10. Global warming has gradually changed from a scientific issue to a major social issue of common concern to governments and people of all countries11–13. In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris14.
Paris Agreement has indicated that it is urgent to hold the increase in global average temperature well below 2.0 °C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels.\n\nFaced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food security in the whole world15–20. Meanwhile, global production losses might lead to price shocks and trigger export restrictions21–24; an increasingly interconnected global food system25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security in the worldwide27–29. So, the impacts of climate changes on crop yields and prices have been of high concern. Numerous studies have revealed that the warming trend has negative impact on crop yields and global trade in most regions all over the world30–32. There are three main methods for impacts assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations17,33. Environment-controlled experiments are designed to observe the influence of climate factors on crops, such as drought, flood, heat stress, cold damage, elevated CO2 concentration, through which the impact mechanism of climate change on crops would be revealed and established23,34,35. Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected field sites or in selected regions36–39. The statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in different sites or counties to establish regression functions for crop responses predictions40–43.
These researches have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\n1 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China. 2 International Maize and Wheat Improvement Center, Texcoco, Mexico. 3 Peking University, Beijing, China. *email: hqlk2000@163.com", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "International Journal of *Environmental Research and Public Health*\n\n# *Article* **#Climatechange vs. #Globalwarming: Characterizing Two Competing Climate Discourses on Twitter with Semantic Network and Temporal Analyses**\n\n**Wen Shi 1 , Haohuan Fu 1,2, Peinan Wang 3 , Changfeng Chen 3 and Jie Xiong 4,***\n\n- 1 Ministry of Education Key Laboratory for Earth System Modeling, Department of Earth System Science, Tsinghua University, Beijing 100084, China; shi-w18@mails.tsinghua.edu.cn (W.S.); haohuan@tsinghua.edu.cn (H.F.)\n- 2 National Supercomputing Center in Wuxi, Wuxi 214000, China\n- 3 School of Journalism and Communication, Tsinghua University, Beijing 100084, China; wpn17@mails.tsinghua.edu.cn (P.W.); chencf@mail.tsinghua.edu.cn (C.C.)\n- 4 Strategy and Innovation Department, Rennes School of Business, 35065 Rennes, France\n- ***** Correspondence: jie.xiong@rennes-sb.com; Tel.:+ 33-(0)-2-99-54-46-79\n\nReceived: 5 December 2019; Accepted: 3 February 2020; Published: 7 February 2020\n\n**Abstract:** Distinct perceptions of the global climate is one of the factors preventing society from achieving consensus or taking collaborative actions on this issue. 
The public has not even reached an agreement on the naming of the global concern, showing preference for either \"climate change\" or \"global warming\", and few previous studies have addressed these two competing discourses resulting from distinct climate concerns by differently linking numerous climate concepts. Based on the 6,662,478 tweets containing #climatechange or #globalwarming generated between 1 January 2009 and 31 December 2018, we constructed the semantic networks of the two discourses and examined their evolution over the decade. The findings indicate that climate change demonstrated a more scientific perspective and showed an attempt to condense climate discussions rather than diffuse the topic by frequently addressing sub-topics simultaneously. Global warming triggered more political responses and showed a greater connection with phenomena. Temporal analysis suggests that traditional political discussions were gradually fading in both discourses but more recently started to revive in the form of discourse alliance in the climate change discourse. The associations between global warming and weather abnormalities suddenly strengthened around 2012. Climate change is becoming more dominant than global warming in public discussions. Although two discourses have shown more similarities in the rank order of important climate concepts, apparent disagreements continue about how these concepts are associated. These findings lay the groundwork for researchers and communicators to narrow the discrepancy between diverse climate perceptions.\n\n**Keywords:** climate change; global warming; semantic network analysis; temporal analysis; public discourse; Twitter\n\n# **1. Introduction**\n\nThe public's distinct understanding of the cause and effect of the global climate issue is an obstacle to joint mitigation actions. 
In addition to a diversity of views co-existing in the public discourse [1,2], previous studies noticed that the public had even failed to reach an agreement on whether \"climate change\" or \"global warming\" is the most appropriate definition of the global climate concern [3–5]. According to the definition provided by [6], global warming describes global climate issues as a continuous increase in the average temperature of Earth's surface due to anthropogenic emissions of greenhouse gases, whereas climate change includes not only temperature rise but also a range of", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed10.pdf" - }, - { - "text": "#### **3. Methods**\n\n#### *3.1. Data Source*\n\nAs Twitter has been recognized as a popular discussion forum [75] and a social activity platform [76] for climate issues, we followed the literature [5,8,18] and used tweets to investigate distinct perceptions of climate issues and evolution on social media. Although Twitter's ecosystem has been changing in terms of the number of active users, user demographics, and tweeting conventions in the past years [77,78], the problem is unavoidable for all the information ecosystems on the Internet. 
As Twitter is one of the most popular social websites, we defined our study as characterizing the perception of climate issues among social media users rather than all the netizens or the whole population.\n\n*Int. J. Environ. Res. Public Health* **2020**, *xx*, 5 5 of 22\n\n#### *3.2. Data*\n\nIn this research, we were interested in tweets containing either #climatechange or #globalwarming, as these two hashtags exactly correspond to climate change and global warming, respectively, the two competing definitions of climate issues. We did not follow [79] to include #AGW (anthropogenic global warming) as query hashtags in our research because we think that this refers to global warming in a defined category so cannot be regarded in parallel with the two considered hashtags. We limited the scope of the search to English-language tweets generated between 1 January 2009 and 31 December 2018. 
We only collected tweets containing either of the two hashtags in the body of the tweets rather than those containing these hashtags in the retweeted or quoted text, as we think that retweeted text or quoted texts cannot directly represent the tweeter's usage pattern of the two terminologies.\n\nTo collect these tweets, we used a Python-based crawler to send requests to the Twitter server to select hashtags, language, start date, and end date as inputs. Once the first request was completed, the server responded with a file in json format and the first 20 qualified tweets in a time-descending order. By parsing the json file, we obtained a string for the crawler to build the next request and obtain the next 20 tweets. Thus, a loop was written to keep the crawler sending requests and the crawler was automatically terminated when all the qualified tweets publicly available were collected. Our crawler respected Twitter's robot.txt and we did not collect, analyze or display any user information in our study.\n\nGiven our goal of exploring the difference between the two discourses, the 615,816 tweets containing both hashtags simultaneously were excluded to differentiate between the two datasets following [67,80]. 
A total of 6,662,478 tweets were retained, of which 5,774,747 contained #climatechange, and 887,731 contained \"#globalwarming\". The number of qualified tweets containing #climatechange and #globalwarming in each year is displayed in Figure 1a.\n\n**Figure 1.** The number of tweets containing #climatechange or #globalwarming, and their ratio from 2009 to 2018 (**a**). The number of hashtags contained in the \"climate change\" or \"global warming\" datasets, and their ratio from 2009 to 2018 (**b**).", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed10.pdf" - }, - { - "text": "make global action salient for people talking about global warming than people talking about climate change [40], even though the facts of climate issues are highly recognized in both discourses.\n\n### **6. Conclusions**\n\nAs social media is gradually overtaking the role of legacy media providing a forum for public discussion, the semantic associations contained in social media discussions reflect and reinforce how individuals portray global climate issues. 
By examining hashtag co-occurrence patterns on Twitter between 2009 and 2018, we identified distinct climate perceptions hidden behind two competing climate discourses and discovered how these two discourses evolved.\n\nWe found that broad scientific, social, political, and international discussions are the topics of public climate discourse. Although the semantic difference between climate change and global warming seems subtle, the differences in their cognitive associations are not trivial. Despite some shared concerns between the two discourses, \"global warming\" is more politicized and focuses more on general phenomena, especially temperature abnormalities, whereas climate change is a more compact topic with a more scientific perspective and tends to refer to specific issues. The temporal analysis revealed that traditional political discussions decreased in both discourses but climate change started to build a discourse alliance with diverse domestic issues to show political intentions. Global warming's associations to extreme events and temperature change were suddenly strengthened around 2012. Climate change is becoming dominant compared with global warming in public discussions. Although the two discourses are becoming increasingly similar in the rank order of climate concepts, a notable discrepancy still exists in the way in which they get concepts associated. These observations may provide climate communicators with theoretical and practical hints to narrow the discrepancy between diverse climate perceptions.\n\n#### *Limitation and Future Directions*\n\nThough big data allowed us to decrease the bias by dealing with the whole set of social media data rather than samples, discrepancies still exist between social media users and the public. As most Twitter users do not disclose their age, education, income, and gender in users' profile, demographics were not introduced as moderator factors in this study. 
Previous studies noted that in the 1970s, global cooling was a prominent climate concern amongst the public [105], while in the 1980s, ozone layer depletion, species extinction and rainforest destruction became salient on the mass media agenda [106]. Considering the historical background of climate issues, age might influence how individuals perceive climate issues. According to the statistics in 2017 [107], only 16% of older people (older than 60) in America use Twitter, while the proportion is 39% for people between 30–59 years old and 47% for people younger than 30 years old (Statista, 2017). Our results reflect the climate perception of older people who use Twitter, as well as younger people amongst whom Twitter is more popular. Although some scholars reported that it is statistically reliable to take data on Twitter as a substitute and supplement for polling [108], we thought our results should be further examined before being generalized to the whole population.\n\nIn this study, we characterized the differences between two popular climate discourses and examined how two discourses evolved over a 10-year period. We did not focus on the interactions between public climate discourse and external factors. However, the evolution of climate discourse might be driven by several external forces such as scientific efforts, natural events, politics and online information (or misinformation) campaigns. The prevalence of certain climate concepts may in turn be weaponized to cause rhetorical shifts in politics and science popularization. For instance, previous studies noted that in the 2016 U.S. Presidential Election, state-supported misinformation campaigns took place to manipulate public opinion [109] and fake accounts were involved in spreading low-credibility news on Twitter [110]. How social media climate discourse reflects and interacts with other sub-systems of our society should be noticed and explored in the future. 
More studies like [2], who examined the influence of several extreme events on public climate change perception, should be", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed10.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed10.pdf", - "query": "What are two main reasons for one's low climate concern ?", - "target_page": 13, - "target_passage": "As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "issues and re-constructing them differently. By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as \"earth\" and \"pollution\", whereas \"climate change\" was more associated to specific issues like \"solar\", \"coal\", \"china\", and \"food\".\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. 
However, none of the four words, \"snow\", \"summer\", \"winter\", or \"heatwave\" in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' differences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n#### 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag \"tcot\", favored by right-leaning users and \"p2\", favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. 
Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n#### 5.1.3. Discourse Structure\n\nIn the discourse surrounding #climatechange, \"environment\", \"energy\", and \"global action\" represented the themes of the three largest clusters in the network. However, three popularly recurring hashtags, \"#environment\", \"#energy\", and \"#climateaction\", did not belong to any of the three clusters above, but formed another small tight cluster together, sitting in the most central part of the semantic network, as shown in Figure 2b. As each of the three hashtags can almost represent one sub-theme of the climate change topic, the fact that these three hashtags were tightly bundled might indicate an attempt by #climatechange users to address all three communities together [91], consolidating climate change as a topic rather than a loosely organized topic. Previous communication studies also confirmed hashtags' function of serving as a hybrid forum [68], where heterogeneous individuals coordinate to solve
Some scholars concluded that conservatives, who are less concerned with environmental issues, tended to use global warming as a narrative strategy because global warming has a more direct connection with temperature rise, making it easier to find contradictory cues such as freezing weather or heavy snowstorms to deny global climate change facts [11]. The associations between global warming and human activities may contribute to more controversies as well [12], connecting global warming more with the \"hoax\" frame [5] and evoking greater negative sentiment [13].\n\nAlthough these existing studies have often attempted to identify the differences between these two terminologies, only a particular few perspectives, such as sentiment, ideological preference, or cause and effect, were examined in each study [3,9,13]. However, the associate network model introduced by psychologists suggests that human recognition and memory have a network-shaped architecture [14], where individual understanding of particular objects is connected with numerous other objects in the mind. According to the associate network model, individual understanding of the global climate concern is a network composed of numerous inter-connected concepts, in which climate change and global warming are included. As the two terminologies concern the primary mechanism of the global climate issue, the preference between the two understandings may represent two distinct climate discourses by differently organizing numerous climate concepts. Examining the differences between two discourses with an associative perspective may provide communicators with unique insights into narrowing the cognitive discrepancy. 
The temporal dimension was lacking in existing studies, necessitating the study of how concepts associated with each other have evolved with time.\n\nLarge amounts of user-generated data on social media, which have been valued in computer science, communication, and environmental studies [5,9,15–18], have enabled the acquisition of the social media representation of the two discourses in a decade. In this study, by analyzing hashtag co-occurrence patterns in 6,662,478 tweets containing \"climate change\" and \"global warming\" between 1 January 2009 and 31 December 2018, two semantic networks of public climate discourse were constructed to identify the critical concepts and links surrounding the two terminologies. We conducted temporal analysis to observe the evolution of the two discourses and to measure whether the discrepancy between the two has widened or narrowed within the 10-year period.\n\nTo be specific, we formulated three research questions (RQs) to be explored in this study:\n\nRQ1: What is the difference in how the two discourses are associated with important climate concepts in people's minds?\n\nRQ2: How did the two competing climate discourses evolve from 2009 to 2018?\n\nRQ3: Did the two competing discourses converge or diverge in this decade?\n\n#### **2. Background**\n\n#### *2.1. Climate Change, Global Warming, and Frames*\n\nExisting studies have noted that the subtle difference between climate change and global warming evokes different public cognitive responses, where global warming indicates heat-related impacts, human causes, increased UV light penetration, ozone depletion, and the greenhouse effect, whereas climate change is more associated with a wide range of influences on climate, including drought and agriculture [9]. 
An N-gram analysis suggested that global warming showed a closer connection with ice, snow, and sea, whereas climate change was always connected with scientific investigations, such as", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - }, - { - "text": "Model Intercomparison Project (CMIP5) ensemble, forced with the RCP8.5 concentration scenario. To provide more detailed representations of climate processes and impacts, the spatial resolution was N216 (approx. 60 km grid length in mid-latitudes), a higher resolution than the CMIP5 models. We used a set of impacts-relevant indices and a global land surface model to examine the projected changes in weather extremes and their implications for freshwater availability and vulnerability to food insecurity. Uncertainties in regional climate responses are assessed, examining ranges of outcomes in impacts to inform risk assessments. Despite some degree of inconsistency between components of the study due to the need to correct for systematic biases in some aspects, the outcomes from different ensemble members could be compared for several different indicators. The projections for weather extremes indices and biophysical impacts quantities support expectations that the magnitude of change is generally larger for 2°C global warming than 1.5°C. Hot extremes become even hotter, with increases being more intense than seen in CMIP5 projections. Precipitation-related extremes show more geographical variation with some increases and some decreases in both heavy precipitation and drought. There are substantial regional uncertainties in hydrological impacts at local scales due to different climate models producing different outcomes. 
Nevertheless, hydrological impacts generally point towards wetter conditions on average, with increased mean river flows, longer heavy rainfall events, particularly in South and East Asia with the most extreme projections suggesting more than a doubling of flows in the Ganges at 2°C global warming. Some areas are projected to experience shorter meteorological drought events and less severe low flows, although longer droughts and/or decreases in low flows are projected in many other areas, particularly southern Africa and South America. Flows in the Amazon are projected to decline by up to 25%. Increases in either heavy rainfall or drought events imply increased vulnerability to food insecurity, but if global warming is limited to 1.5°C, this vulnerability is projected to remain smaller than at 2°C global warming in approximately 76% of developing countries. At 2°C, four countries are projected to reach unprecedented levels of vulnerability to food insecurity.\n\nThis article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.\n\n## 1. Introduction\n\nThe majority of climate-change impacts assessments have tended to be framed in terms of future time horizons, e.g. impacts by the middle or end of the twenty-first century [1,2]. However, with international climate policy now largely focused on limiting warming to specific levels of global mean temperature such as 2°C [3] or 1.5°C [4], policy-relevant climate impacts assessments increasingly need to be framed in terms of such warming levels.\n\nThere are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.\n\n- (i) How much larger are the impacts at 2°C compared to 1.5°C? 
This is the primary question arising from the Paris Agreement [4] and is relevant to mitigation policy, informing judgements and actions on holding the global temperature rise to 'well below 2°C' and 'pursuing efforts to limit the temperature increase to 1.5°C'.\n- (ii) What regional climate conditions and related hydrological and ecological conditions could occur at a particular level of global warming, such as 2°C? This is relevant to adaptation policy and planning—exploring the possible outcomes for these levels of warming will help facilitate adaptation and improved resilience to account for a 1.5°C or 2°C world. It is recognized that many adaptation decisions require information on timing of specific impacts or risks, but nevertheless, framing regional impacts assessments in terms of associated global warming levels (GWLs) may help provide context of the levels of climate change that may be avoidable or unavoidable (and hence require adaptation).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "make global action salient for people talking about global warming than people talking about climate change [40], even though the facts of climate issues are highly recognized in both discourses.\n\n### **6. Conclusions**\n\nAs social media is gradually overtaking the role of legacy media providing a forum for public discussion, the semantic associations contained in social media discussions reflect and reinforce how individuals portray global climate issues. By examining hashtag co-occurrence patterns on Twitter between 2009 and 2018, we identified distinct climate perceptions hidden behind two competing climate discourses and discovered how these two discourses evolved.\n\nWe found that broad scientific, social, political, and international discussions are the topics of public climate discourse. 
Although the semantic difference between climate change and global warming seems subtle, the differences in their cognitive associations are not trivial. Despite some shared concerns between the two discourses, \"global warming\" is more politicized and focuses more on general phenomena, especially temperature abnormalities, whereas climate change is a more compact topic with a more scientific perspective and tends to refer to specific issues. The temporal analysis revealed that traditional political discussions decreased in both discourses but climate change started to build a discourse alliance with diverse domestic issues to show political intentions. Global warming's associations to extreme events and temperature change were suddenly strengthened around 2012. Climate change is becoming dominant compared with global warming in public discussions. Although the two discourses are becoming increasingly similar in the rank order of climate concepts, a notable discrepancy still exists in the way in which they get concepts associated. These observations may provide climate communicators with theoretical and practical hints to narrow the discrepancy between diverse climate perceptions.\n\n#### *Limitation and Future Directions*\n\nThough big data allowed us to decrease the bias by dealing with the whole set of social media data rather than samples, discrepancies still exist between social media users and the public. As most Twitter users do not disclose their age, education, income, and gender in users' profile, demographics were not introduced as moderator factors in this study. Previous studies noted that in the 1970s, global cooling was a prominent climate concern amongst the public [105], while in the 1980s, ozone layer depletion, species extinction and rainforest destruction became salient on the mass media agenda [106]. Considering the historical background of climate issues, age might influence how individuals perceive climate issues. 
According to the statistics in 2017 [107], only 16% of older people (older than 60) in America use Twitter, while the proportion is 39% for people between 30–59 years old and 47% for people younger than 30 years old (Statista, 2017). Our results reflect the climate perception of older people who use Twitter, as well as younger people amongst whom Twitter is more popular. Although some scholars reported that it is statistically reliable to take data on Twitter as a substitute and supplement for polling [108], we thought our results should be further examined before being generalized to the whole population.\n\nIn this study, we characterized the differences between two popular climate discourses and examined how two discourses evolved over a 10-year period. We did not focus on the interactions between public climate discourse and external factors. However, the evolution of climate discourse might be driven by several external forces such as scientific efforts, natural events, politics and online information (or misinformation) campaigns. The prevalence of certain climate concepts may in turn be weaponized to cause rhetorical shifts in politics and science popularization. For instance, previous studies noted that in the 2016 U.S. Presidential Election, state-supported misinformation campaigns took place to manipulate public opinion [109] and fake accounts were involved in spreading low-credibility news on Twitter [110]. How social media climate discourse reflects and interacts with other sub-systems of our society should be noticed and explored in the future. 
More studies like [2], who examined the influence of several extreme events on public climate change perception, should be", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed10.pdf" - }, - { - "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where \"tcot\", short for \"Top Conservatives on Twitter\", was the node ranked highest, and \"p2\", short for \"Progressives 2.0\", is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. Action toward the global climate issue was the third-largest cluster (16%), including both domestic efforts, such as \"us\", \"trump\", \"climatechangeisreal\", \"climateaction\", and \"epa\", and two international items, like \"china\" and \"india\". The fourth cluster (in blue) referred to emissions, including hashtags like \"co2\", \"green\", and \"carbon\". The smallest cluster (8%) was composed of \"snow\", \"winter\", \"heatwave\", and \"summer\", referring to the temperature abnormalities on the earth.\n\n#### *4.3. Temporal Analysis of the Associations in the Two Discourses*\n\nThe online presentations of the climate change and global warming discourses are dynamic. As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change\"discourse. By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. 
We found \"pollution\" and \"earth\" were unique to the keyword list of the global warming discourse, and \"economy\", \"water\", \"china\", \"coal\", \"solar\", \"sustainability\", and \"food\" only occurred on the critical list for the climate change discourse.\n\n**Table 2.** Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n| --- | --- | --- |\n| #climatechange | china, solar, water, food, economy, coal, sustainability | co2, news, carbon, green, climate, |\n| #globalwarming | pollution, earth | us, energy, science, environment |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. Vector graphics with the label of nodes are provided in the Supplementary Materials. Four themes were identified in each discourse according to the nodes' associations. To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.\n\nFigure 3 depicts the associations of hashtags in the climate change discourse for each year from 2009 to 2018. The scientific hashtags cluster (in green) was the most important theme in the climate change discourse, especially more recently. However, some scientific hashtags, such as \"ghg\" (greenhouse gas), \"co2\", and \"forests\", were not identified in the scientific cluster but in the global actions cluster (in yellow) because these hashtags were frequently used in the global action context and identified with a closer semantic association to global action by Gephi. 
In addition to these hashtags, the global action cluster included a series of international activities, such as \"ipcc\" (Intergovernmental Panel on Climate Change), \"unfccc\" (United Nations Framework Convention on Climate Change), and \"cop\" (Conferences of the Parties) for almost every year. The blue cluster includes to political hashtags, such as \"uniteblue\", \"sgp\", \"p2\", and \"tcot\". In 2017 and 2018, the associations with political hashtags disappeared among the top 50 hashtags. The small red cluster had a mixed theme, combining \"technology\", \"innovation\", \"education\", \"africa\", \"healthcare\", and \"politics\". The centrality sum of the nodes in the red cluster remained rather low throughout the 10-year period but obviously increased in the last two years of the period according to Figure 5a.\n\nFigure 4 describes the evolution of concepts' associations in the global warming discourse during the 10 years. The red cluster included concepts such as \"2012\", \"hot\", \"summer\", \"elnino\", and \"snow\", describing the weather abnormalities related to global warming. A notable finding is that before 2012, global warming's association with temperature abnormalities and extreme weather was not salient,", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed10.pdf" - }, - { - "text": "A combination of the above questions is also relevant—how does the range of outcomes at 2°C compare to that at 1.5°C? This is also relevant to adaptation policy, as it can inform assessment on whether to adapt to potential impacts at 2°C or just 1.5°C. Putting in place adaptation measures to deal with potential impacts at 1.5°C and then increasing these to deal with 2°C later may be more expensive and difficult than adapting to potential risks at 2°C at the outset. 
On the other hand, because adaptation actions may themselves have consequences, unnecessary overadaptation may have undesirable effects which it may be preferable to avoid or at least delay until absolutely necessary.\n\nBoth questions require an appropriate assessment of uncertainty. There are considerable uncertainties in projections of regional climate change, with different climate models projecting regional climate changes that can differ in magnitude or even, in the case of precipitation and impacts quantities strongly related to this, differ in sign [5,6]. This may have important implications for regional impacts at specific levels of global warming. A common approach to exploring and presenting such uncertainties is to examine the ensemble mean and the level of consensus among the ensemble members on the sign of the change. While this can often be useful in informing an assessment of the level of confidence in future projections, it may not always be sufficient to fully inform decisions. Risk assessment approaches require consideration of a range of possible risks, not just the most likely. This paper explores a range of regional climate states and related impacts that occur at global warming of 2°C, and a range of differences with warming limited to 1.5°C.\n\nWe examine the implications of our new climate projections by applying some commonly used indices of climate extremes, and a further index quantifying relative vulnerability to food insecurity which combines climate extremes indices with information on a range of factors representing sensitivity and adaptability of food systems to climate hazards. We also use the climate projections to drive a global land surface model to simulate changes in run-off as an indicator of freshwater availability. We assess whether regional extremes are projected to increase or decrease at 2°C global warming, and whether the consequent impact on drought and vulnerability to food insecurity become greater or smaller. 
We also assess whether these changes are reduced by limiting global warming to 1.5°C. We explore some of the uncertainties in these projections, and, in particular, examine whether the use of ensemble-mean projections is a useful simple guide to impacts projections or whether this can lead to a misleading impression for some impacts. Regarding vulnerability to food insecurity, we consider the impacts of global warming at 1.5°C and 2°C alongside socio-economic influences that affect the sensitivity to climate change. We also consider our climate-change impacts results in comparison with other studies using older, lower-resolution climate projections.\n\nA large number of previous studies have assessed potential impacts of future climate change using the 5th Coupled Model Intercomparison Project (CMIP5) ensemble or subsets of this [7], and some have framed this in terms of impacts at global warming of 1.5°C and/or 2°C [8,9]. We also base our study on a subset of CMIP5 projections, but use a new, higher-resolution atmosphere model to provide greater spatial detail and improved representation of atmospheric processes.\n\n## 2. Methods and models\n\n## (a) Global climate simulations at 1.5°C and 2°C global warming\n\nThere are a number of ways in which 1.5°C or 2°C global warming can be defined—one could be the long-term climate state following a stabilization of warming at that level, another could be the state over a shorter period around the time of first reaching that level. Here we choose the second definition, which is what is seen first and hence needs to be adapted to. There are also a number of methods with which such changes can be assessed [10]. 
We take the opportunity of availability of a new set of higher-resolutions transient climate and impacts simulations, and use a time-sampling methodology [10] to assess global-scale impacts at these resolutions for the first time.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Figure 8.** Change in Hunger and Climate Vulnerability Index relative to baseline calculated for simulated climate states at 2°C globalwarming,for five individualHadGEM3simulations driven by SSTs and SICsfrom differentmembers ofthe CMIP5 ensemble, and the ensemble mean.\n\nFour countries show ensemble-mean HCVI values at 2°C global warming that are higher than any seen in the baseline climate; these are Oman, Bangladesh, Mauritania and Yemen. The implication of such HCVI values is that climate change at 2°C is projected to cause levels of vulnerability to food insecurity that are greater than any seen in the present day. For individual ensemble members, the number of countries with 'unprecedented' HCVI values at 2°C varies from three to seven. Conversely, many countries in the baseline climate have levels of vulnerability to food insecurity that are greater than those expected in other countries under 2°C global warming. This suggests that other factors are already posing greater risk for food insecurity than 2°C climate change is expected to cause in other countries, so the increased risk from climate change should not overshadow the need to reduce vulnerability to food insecurity arising from non-climatic factors. There is scope to reduce vulnerability to food insecurity by addressing various socio-economic issues in such counties.\n\nThe JULES simulations show a general tendency towards increased run-off over approximately half of the land surface (figure 9) and the majority of the major river basins assessed (figure 10), but with large regional uncertainties including the possibility of decreased flows in many basins. 
The ensemble-mean change in mean streamflow shows an increase of between 5 and 25% over most of the Northern Hemisphere land surface, with some regions seeing an increase of over 50% at 2°C global warming. Notable exceptions to this are western Europe and southcentral USA, which see less than a 5% change in run-off, and the already very dry region of the Sahara Desert where the existing very small run-off become even smaller.\n\nEnsemble-mean projected changes in low run-off flows are generally larger (figure 11), with the regions seeing an increase in mean run-off seeing a larger percentage increase in low run-off—over 75% increases over much of North America, Eastern Europe and Asia. Note that this does not necessarily imply a larger increase in absolute low flow compared to absolute mean flow, because the baseline is (by definition) smaller for low flows. In western Europe, where the changes in mean flows were less than 5%, the ensemble-mean low flow decreases by between 5", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed11.pdf" - }, - { - "text": "reports, the environment, and science [13]. Some respondents even hold the belief that global warming results in climate change [9].\n\nThe two distinct climate discourses being produced based on the same reality can be explained by the framing theory in communication study. Framing refers to the phenomenon where the reality is always partially selected or highlighted when described by the public or media [19]. By distinctly defining problems, suggesting solutions, and indicating casual interpretations [20], different frames tell the audience different stories and influence how they observe facts [21,22]. Two types of frames, equivalency frames and emphasis frames, are commonly studied by scholars to examine how framing effects influence individuals' attitudes and beliefs [23]. 
Equivalency frames describe the same fact or logic with different words and may suggest that the audience perceives facts in psychologically different ways [24]. For example, a cup can be described as \"half full\" and \"half empty\", where the former is a positive frame indicating a reference point lower than current status, and the latter is negative, meaning that the reference point is above the current situation [25]. Emphasis frames employ words selectively associated with parts of reality to shift the audience's attention to particular attributes [26]. Climate change and global warming have been noted to highlight different aspects of an issue by activating distinct cognitive accessibility patterns [27].\n\nDifferent frames concerning the global climate concern are popular among the public, politicians, environmentalists, and the media [1,28,29]. Big data analyses have indicated that when interpreting climate events, individuals' preference for frameworks was influenced by demographics [5] and social-political background [2]. Different choices of frameworks can evoke different psychological processes [30], promote or inhibit engagement intentions [31], or gain approval on various levels [32].\n\nStudies have noted that the frameworks of climate change and global warming may result from different political indications. The American Republican-leaning states show more preference for global warming than climate change compared with Democratic-leaning states, and global warming is more connected with \"hoax\" in questioning the reality of the global climate issue [5]. Conservatives are more likely to link heat-related phenomena to global warming, whereas liberals associate these facts equally with both frames [27]. An earlier survey conducted by [4] argued that wording choice might not influence the whole population similarly. 
For the whole sample and politically independent individuals, the two terminologies seemed equally serious, but climate change seemed more serious compared with global warming among the Republicans, and the Democrats held the opposite opinion.\n\n#### *2.2. Network Model for Cognition*\n\nDifferent framework choices may create even more differences than have already been noticed. Psychologists think that human beings are a collection of learned associations [33], and associative responses rather than simple linear logic form the structural basis of thought [34]. Associative learning [35] is a long-standing assumption underlying cognitive science [14], suggesting that human cognition toward the world forms a network pattern, where the world is organized into several groups of related items and stored in a network model in the mind. When messages are processed by humans, they are first encoded into a temporary memory network and then linked to an existing associative memory network for long-term storage [36]. In the network, a node represents a certain concept, and edges refer to particular relationships, such as time sequences [37], similarity [38], semantic connections [37], or cause and effect [33] between two nodes.\n\nWhen individuals search their memory for a particular piece of a message in their mind, the targeted node becomes salient and activated in the temporary memory [39]. If two messages are always activated simultaneously, their connection tends to be more robust and the messages are regarded as associated [36]. If a link is recorded between two concepts, activations are likely to spread through the link from one concept to another with or without conscious awareness [40]. 
Whereas associations of nodes in the mind may not necessarily reflect the actual relationships of objects, in reality, several factors, including media usage, personal experience, and political stance [34,41,42], may help bundle different sets of concepts.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed10.pdf" - }, - { - "text": "Biodiversity is also crucial for safeguarding **EU and global food security.** Biodiversity loss threatens our food systems6 , putting our food security and nutrition at risk. Biodiversity also underpins healthy and nutritious diets and improves rural livelihoods and agricultural productivity7 . For instance, more than 75% of global food crop types rely on animal pollination8 .\n\nDespite this urgent moral, economic and environmental imperative, **nature is in a state of crisis**. The five main direct drivers of biodiversity loss9 – changes in land and sea use, overexploitation, climate change, pollution, and invasive alien species – are making nature disappear quickly. We see the changes in our everyday lives: concrete blocks rising up on green spaces, wilderness disappearing in front of our eyes, and more species being put at risk of extinction than at any point in human history. In the last four decades, global wildlife populations fell by 60% as a result of human activities10. And almost three quarters of the Earth's surface have been altered11, squeezing nature into an eversmaller corner of the planet.\n\nThe biodiversity crisis and the climate crisis are intrinsically linked. Climate change accelerates the destruction of the natural world through droughts, flooding and wildfires, while the loss and unsustainable use of nature are in turn key drivers of climate change. But just as the crises are linked, so are the solutions. **Nature is a vital ally in the fight against climate change**12. 
Nature regulates the climate, and nature-based solutions13 , such as protecting and restoring wetlands, peatlands and coastal ecosystems, or sustainably managing marine areas, forests, grasslands and agricultural soils, will be essential for emission reduction and climate adaptation. Planting trees and deploying green infrastructure will help us to cool urban areas and mitigate the impact of natural disasters.\n\nBiodiversity loss and ecosystem collapse are one of the biggest threats facing humanity in the next decade14. They also threaten the foundations of our economy and the **costs of inaction** are high and are anticipated to increase15. The world lost an estimated €3.5-18.5 trillion per year in ecosystem services from 1997 to 2011 owing to land-cover change, and an estimated €5.5-10.5 trillion per year from land degradation. Specifically, biodiversity loss results in reduced crop yields and fish catches, increased economic losses from flooding and other disasters, and the loss of potential new sources of medicine16 .\n\nThe EU is ready to show ambition to reverse biodiversity loss, lead the world by example and by action, and help agree and adopt a transformative post-2020 global framework at the 15th Conference of the Parties to the Convention on Biological Diversity. This should\n\n6 World Economic Forum (2020), The Global Risks Report 2020.\n\n7 Food and Agriculture Organization (2019), State of the World's Biodiversity for Food and Agriculture.\n\n8 IPBES (2019), Summary for policymakers, p. 3, A1.\n\n9 IPBES (2019), Summary for policymakers, pp. 17-19, B.10-B.14; European Environment Agency (2019), The European environment – state and outlook 2020.\n\n10 World Wildlife Fund (2018), Living Planet Report - 2018: Aiming Higher.\n\n11 IPBES (2019), Summary for policymakers, p. 
4, A4.\n\n12 Idem.\n\n13 https://ec.europa.eu/research/environment/index.cfm?pg=nbs\n\n14 World Economic Forum (2020), The Global Risks Report 2020.\n\n15 Organisation for Economic Co-operation and Development (OECD) (2019), Biodiversity: Finance and the Economic and Business Case for Action.\n\n16 Idem.", - "page_start": 2, - "page_end": 2, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## **OPEN**\n\n# **The impact of 1.5 °C and 2.0 °C global warming on global maize production and trade**\n\n**Kuo Li1*****, Jie Pan1 , Wei Xiong2 , Wei Xie3 & TariqAli3**\n\n**Climate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by 5 climate models recommended by ISI-MIP under 4 RCP scenarios, in which the approximate scenarios with global warming by 1.5 °C and 2 °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by 1.5 °C and 2.0 °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under 2.0 °C scenario was much more serious than 1.5 °C scenario; the ratios of yield changes were separately 0.18% and − 10.8% under 1.5 °C and 2.0 °C scenarios. The reduction trend of total maize production is obvious in the top fve countries and the main producing regions of the world, especially under the 2.0 °C scenario. The market price of maize would increase by around 0.7% and 3.4% under 1.5 °C and 2.0 °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.**\n\nIn the past hundred years, the global climate has experienced great changes1–4 . 
According to the sixth assessment report of the IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming5 . Due to global warming, extreme climate events have become more and more frequent, and the ecological and environmental problems caused by climate change are more and more serious, which restricts the sustainable development of human society and health6–10. Global warming has gradually changed from a scientific issue to a major social issue of common concern to governments and people of all countries11–13. In 2016, nearly 200 parties of the United Nations Framework Convention on Climate Change reached the Paris Agreement at the climate change conference in Paris14. The Paris Agreement indicated that it is urgent to hold the increase in global average temperature well below 2.0 °C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels.\n\nFaced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food insecurity in the whole world15–20. Meanwhile, global production losses might lead to price shocks and trigger export restrictions21–24; an increasingly interconnected global food system25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security worldwide27–29. So, the impacts of climate change on crop yields and prices have been of great concern. Numerous studies have revealed that the warming trend has a negative impact on crop yields and global trade in most regions all over the world30–32. There are three main methods for assessing the impacts of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations17,33. 
Environment-controlled experiments are designed to observe the infuence of climate factors on crops, such as drought, food, heat stress, cold damage, elevated CO2 concentration, through which the impact mechanism of climate change on crops would be revealed and established23,34,35. Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected feld sites or in selected regions36–39. Te statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in diferent sites or counties to establish regression functions for crop responses predictions40–43. Tese researches have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\n1 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China. 2 International Maize and Wheat Improvement Center, Texcoco, Mexico. 3 Peking University, Beijing, China. *email: hqlk2000@163.com", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - } - ] - }, - { - "references": { - "source_file": "infographic3.pdf", - "query": "How many scholarly articles are published every year ?", - "target_page": 1, - "target_passage": "over 3 million scholarly articles published per year", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "institutional requirements. The participants provided their written informed consent to participate in this study.\n\n#### Author contributions\n\nSD: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Resources, Visualization, Writing – original draft, Writing – review & editing. 
EA: Conceptualization, Formal Analysis, Methodology, Supervision, Writing – review & editing. BN: Conceptualization, Formal Analysis, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing – review & editing.\n\n### Funding\n\nThe author(s) declare that financial support was received for the research, authorship, and/or publication of this article.\n\nThe development of the CoreDISTparticipation and the RCT is funded by the Northern Norway Health Authority (Helse Nord RHF). This interview study was funded by Nord University (PhD salary).\n\n## Acknowledgments\n\nThe authors would like to thank the participants in this study and the user representatives from Nordland MS Association for their valuable contributions. The authors also acknowledge philosopher of the mind and cognitive sciences Hanne De Jaegher for the valuable comments on the interpretations and discussions of the results.\n\n## Conflict of interest\n\nThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\n\n### Publisher's note\n\nAll claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.\n\n## References\n\n1. Walton C, King R, Rechtman L, Kaye W, Leray E, Marrie RA, et al. Rising prevalence of multiple sclerosis worldwide: insights from the Atlas of MS, third edition. Mult Scler. (2020) 26(14):1816–21. doi: 10.1177/1352458520970841\n\n2. Casey B, Coote S, Galvin R, Donnelly A. Objective physical activity levels in people with multiple sclerosis: meta-analysis. Scand J Med Sci Sports. (2018) 28 (9):1960–9. doi: 10.1111/sms.13214\n\n3. 
Kinnett-Hopkins D, Adamson B, Rougeau K, Motl RW. People with MS are less physically active than healthy controls but as active as those with other chronic diseases: an updated meta-analysis. Mult Scler Relat Disord. (2017) 13:38–43. doi: 10.1016/j.msard.2017.01.016\n\n4. Hoang PD, Lord S, Gandevia S, Menant J. Exercise and sports science Australia (ESSA) position statement on exercise for people with mild to moderate multiple sclerosis. J Sci Med Sport. (2022) 25(2):146–54. doi: 10.1016/j.jsams.2021.08.015\n\n5. Dalgas U, Langeskov-Christensen M, Stenager E, Riemenschneider M, Hvid LG. Exercise as medicine in multiple sclerosis—time for a paradigm shift: preventive, symptomatic, and disease-modifying aspects and perspectives. Curr Neurol Neurosci Rep. (2019) 19(11):1–12. doi: 10.1007/s11910-019-1002-3\n\n6. Riemenschneider M, Hvid LG, Ringgaard S, Nygaard MKE, Eskildsen SF, Gaemelke T, et al. Investigating the potential disease-modifying and neuroprotective efficacy of exercise therapy early in the disease course of multiple sclerosis: the early multiple sclerosis exercise study (EMSES). Mult Scler. (2022) 28(10):1620–9. doi: 10. 1177/13524585221079200\n\n7. Kalb R, Brown TR, Coote S, Costello K, Dalgas U, Garmon E, et al. Exercise and lifestyle physical activity recommendations for people with multiple sclerosis throughout the disease course. Mult Scler. (2020) 26(12):1459–69. doi: 10.1177/ 1352458520915629\n\n8. Moreno-Navarro P, Manca A, Martinez G, Ventura L, Barbado D, Vera-García FJ, et al. Test-retest reliability and known-groups validity of trunk muscle tests in people with multiple sclerosis: a cross-sectional, case-control study. Phys Ther. (2021) 101 (5):1–9. doi: 10.1093/ptj/ptzab049\n\n9. Raats J, Arntzen EC, Lamers I, Feys P, Normann B. What is the distribution of trunk impairments and its relationship with disability level in individuals with multiple sclerosis? Mul Scler Relat Disord. (2021) 57:103325. doi: 10.1016/j.msard. 2021.103325\n\n10. 
Normann B, Arntzen EC. What are the relationships between trunk control, balance and walking in individuals with multiple sclerosis with minor to moderate disability? Eur J Physiother. (2021) 23(6):377–83. doi: 10.1080/21679169.2020.1772870\n\n11. Unluer NO, Ozkan T, Yasa ME, Ates Y, Anlar O. Investigation of the relationship between trunk motor control and balance, functional mobility, and gait capacity in patients with multiple sclerosis/multipl sklerozlu hastalarda govde motor kontrolu ile denge, fonksiyonel mobilite ve yuruyus kapasitesi arasindaki iliskinin incelenmesi. Türk Nöroloji Dergisi. (2021) 27(3):283. doi: 10.4274/tdn.2021.41017\n\n12. Learmonth YC, Motl RW. Physical activity and exercise training in multiple sclerosis: a review and content analysis of qualitative research identifying perceived determinants and consequences. Disabil Rehabil. (2016) 38(13):1227–42. doi: 10. 3109/09638288.2015.1077397\n\n13. Fikke HK, Normann B, Sivertsen M, Dahl SSH, Arntzen EC. Optimizing sensorimotor function, physical activity and employment for people with MS—a feasibility study. Fysioterapeuten. (2023) 90(1):32–42. doi: 10.52705/ c14a8ca05f7546dabc18bd0275cf2edd\n\n14. Arntzen EC, Straume B, Odeh F, Feys P, Normann B. Group-based, individualized, comprehensive core stability and balance intervention provides immediate and long-term improvements in walking in individuals with multiple sclerosis: a randomized controlled trial. Physiother Res Int. (2019) 25(1):e1798. doi: 10.1002/pri.1798\n\n15. Arntzen EC, Straume BK, Odeh F, Feys P, Zanaboni P, Normann B. Groupbased individualized comprehensive core stability intervention improves balance in persons with multiple sclerosis: a randomized controlled trial. Phys Ther. (2019) 99 (8):1027–38. doi: 10.1093/ptj/pzz017\n\n16. Arntzen EC, Øberg GK, Gallagher S, Normann B. 
Group-based, individualized exercises can provide perceived bodily changes and strengthen aspects of self in individuals with MS: a qualitative interview study. Physiother Theory Pract. (2019) 37(10):1080–95. doi: 10.1080/09593985.2019.1683923\n\n17. Florio-Smith J, Ayer M, Colhoun S, Daykin N, Hamill B, Liu X, et al. The importance of the patient's perspective in decision-making in multiple sclerosis: results of the OwnMS patient perspectives study. Mult Scler Relat Disord. (2023) 75:104757. doi: 10.1016/j.msard.2023.104757\n\n18. Kleim JA, Jones TA. Principles of experience-dependent neural plasticity: implications for rehabilitation after brain damage. J Speech Lang Hear Res. (2008) 51(1):225–39. doi: 10.1044/1092-4388(2008/018)\n\n19. Thompson E. Mind in Life: Biology, Phenomenology, and The Sciences of Mind. Cambridge, Mass: Harvard University Press (2007).\n\n20. Merleau-Ponty M. Phenomenology of Perception. London: Routledge Classics (2008).", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed13.pdf" - }, - { - "text": "All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publisher. Subject to any applicable licensing terms and conditions in the case of electronically supplied publications, a person may engage in fair dealing with a copy of this publication for his or her personal or private use, or his or her research or private study. See Section 12(1)(a) of the Copyright Act 98 of 1978.\n\nThe authors and the publisher have made every effort to obtain permission for and to acknowledge the use of copyright material. 
Should any infringement of copyright have occurred, please contact the publisher, and every effort will be made to rectify omissions or errors in the event of a reprint or new edition.\n\nDeveloped for Oxbridge Academy - 2015", - "page_start": 1, - "page_end": 1, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "Neither the European Agency for Safety and Health at Work nor any person acting on behalf of the agency is responsible for the use that might be made of the following information.\n\nLuxembourg: Publications Office of the European Union, 2023\n\nPrint ISBN 978-92-9479-934-0 doi: 10.2802/26873 PDF ISBN 978-92-9479-935-7 doi: 10.2802/56459\n\n© European Agency for Safety and Health at Work, 2023\n\nReproduction is authorised provided the source is acknowledged.\n\nFor any use or reproduction of photos or other material that is not under the copyright of the European Agency for Safety and Health at Work, permission must be sought directly from the copyright holders.\n\nThe photographs used in this publication illustrate a range of work activities. They do not necessarily show good practices or compliance with legislative requirements.\n\nFor one-click access to websites and references please consult the online version of this publication https://osha.europa.eu/en/publications/occupational-safety-and-health-europe-state-and-trends-2023", - "page_start": 1, - "page_end": 1, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- Paulson, Lawrence C. (February 2018). \"Computational Logic: Its Origins and Applications\" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5832843). *Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences*. **474** (2210): 1–14. arXiv:1712.04375 (https://arxiv.org/abs/1712.04375). Bibcode:2018RSPSA.47470872P (https://ui.adsabs.harv ard.edu/abs/2018RSPSA.47470872P). doi:10.1098/rspa.2017.0872 (https://doi.org/10.109 8%2Frspa.2017.0872). 
PMC 5832843 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5832 843). PMID 29507522 (https://pubmed.ncbi.nlm.nih.gov/29507522). S2CID 3805901 (http s://api.semanticscholar.org/CorpusID:3805901).\n- Pedemonte, Bettina (25 June 2018). \"Strategic vs Definitory Rules: Their Role in Abductive Argumentation and their Relationship with Deductive Proof\" (https://www.ejmste.com/article/ strategic-vs-definitory-rules-their-role-in-abductive-argumentation-and-their-relationship-with -5539). *Eurasia Journal of Mathematics, Science and Technology Education*. **14** (9): 1–17. doi:10.29333/ejmste/92562 (https://doi.org/10.29333%2Fejmste%2F92562). ISSN 1305- 8215 (https://search.worldcat.org/issn/1305-8215). S2CID 126245285 (https://api.semantics cholar.org/CorpusID:126245285). Archived (https://web.archive.org/web/20211207195246/h ttps://www.ejmste.com/article/strategic-vs-definitory-rules-their-role-in-abductive-argumentati on-and-their-relationship-with-5539) from the original on 7 December 2021. Retrieved 8 January 2022.\n- Pickel, Bryan (1 July 2020). \"Structured Propositions and Trivial Composition\" (https://doi.or g/10.1007%2Fs11229-018-1853-1). *Synthese*. **197** (7): 2991–3006. doi:10.1007/s11229- 018-1853-1 (https://doi.org/10.1007%2Fs11229-018-1853-1). hdl:20.500.11820/3427c028 f2cb-4216-a199-9679a49ce71c (https://hdl.handle.net/20.500.11820%2F3427c028-f2cb-42 16-a199-9679a49ce71c). ISSN 1573-0964 (https://search.worldcat.org/issn/1573-0964). S2CID 49729020 (https://api.semanticscholar.org/CorpusID:49729020).\n- Pietroski, Paul (2021). \"Logical Form: 1. Patterns of Reason\" (https://plato.stanford.edu/entri es/logical-form/#pat). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20211002190116/https://plato.sta nford.edu/entries/logical-form/#pat) from the original on 2 October 2021. Retrieved 4 December 2021.\n- Planty-Bonjour, Guy (2012). 
*The Categories of Dialectical Materialism: Contemporary Soviet Ontology*. Springer Science & Business Media. p. 62. ISBN 978-94-010-3517-0.\n- Possin, Kevin (2016). \"Conductive Arguments: Why is This Still a Thing?\" (https://philpapers. org/rec/POSCAW-4). *Informal Logic*. **36** (4): 563–593. doi:10.22329/il.v36i4.4527 (https://do i.org/10.22329%2Fil.v36i4.4527). Archived (https://web.archive.org/web/20220108171723/ht tps://philpapers.org/rec/POSCAW-4) from the original on 8 January 2022. Retrieved 8 January 2022.\n- Priest, Graham; Tanaka, Koji; Weber, Zach (2018). \"Paraconsistent Logic\" (https://plato.stan ford.edu/entries/logic-paraconsistent/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Retrieved 14 December 2021.\n- Pépin, Jean (2004). \"Logos\". *Encyclopedia of Religion* (https://www.encyclopedia.com/philo sophy-and-religion/philosophy/philosophy-terms-and-concepts/logos). ISBN 978-0-02- 865733-2. Archived (https://web.archive.org/web/20211229134626/https://www.encyclopedi a.com/philosophy-and-religion/philosophy/philosophy-terms-and-concepts/logos) from the original on 29 December 2021. Retrieved 29 December 2021.\n- Putnam, H. (1969). \"Is Logic Empirical?\". *Boston Studies in the Philosophy of Science*. Vol. 5. pp. 216–241. doi:10.1007/978-94-010-3381-7_5 (https://doi.org/10.1007%2F978-94- 010-3381-7_5). ISBN 978-94-010-3383-1.\n- Quine, Willard Van Orman (1981). *Mathematical Logic*. Harvard University Press. p. 1. ISBN 978-0-674-55451-1.\n- Rathjen, Michael; Sieg, Wilfried (2022). \"Proof Theory\" (https://plato.stanford.edu/entries/pro of-theory/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Retrieved 4 March 2023.", - "page_start": 33, - "page_end": 33, - "source_file": "wikipedia1.pdf" - }, - { - "text": "## *4. 
Copyright, Licensing, & Access to Books for Training*\n\nEven if books can be acquired, digitized, and made technically useful for AI training, the development of a books data commons would necessarily need to navigate and comply with copyright law.\n\n**Out-of-Copyright Books:** A minority of books are old enough to be in the public domain and out of copyright, and an AI developer could use them in training without securing any copyright permission. In the United States, all books published or released before 1929 are in the public domain. While use of these books provides maximal certainty for the AI developer to train on, it is worth noting that the status of whether a book is in the public domain can be difficult to determine. For instance, books released between 1929 and 1963 in the U.S. are 14 out of copyright if they were not subject to a copyright renewal; however, data on copyright renewals is not easily accessible.\n\nWhat's more, copyright definitions and term lengths vary among countries. Even if a work is in the public domain in the US, it may not be in other countries. Countries generally use the 15 life of the last living author + \"x\" years to determine the term of copyright protection. For most countries, \"x\" is either 50 years (the minimum required by the Berne Convention) or 70 years (this is the case for all member states of the European Union and for all works published in the U.S. after 1978). This approach makes it difficult to determine copyright terms with certainty because it requires information about the date of death of each author, which is often not readily available.\n\n**In-Copyright Books:** The vast majority of books are in copyright, and, insofar as the training process requires making a copy of the book, the use in AI training may implicate copyright law. Our workshop covered three possible paths for incorporating such works.\n\n#### **Direct licensing**\n\nOne could directly license books from rightsholders. 
There may be some publishers who are willing to license their works for this purpose, but it is hard to determine the scale of such access, and, in any event, there are significant limits on this approach. Along with the challenge (and expense) of reaching agreements with relevant rightsholders, there is also the practical difficulty of simply identifying and finding the rightsholder that one must negotiate\n\nFor a sense of the complexity, see e.g. Melissa Levine, Richard C. Adler. *Finding the Public Domain:* 14 *Copyright Review Management System Toolkit*. 2016, quod.lib.umich.edu/c/crmstoolkit/\n\n14616082.0001.001. Accessed 20 Mar. 2024.; Kopel, Matthew. \"LibGuides: Copyright at Cornell Libraries: Copyright Term and the Public Domain.\" guides.library.cornell.edu/copyright/publicdomain; Mannapperuma, Menesha, et al. *Is It in the Public Domain? A HANDBOOK for EVALUATING the COPYRIGHT STATUS of a WORK CREATED in the UNITED STATES*. 1923.\n\nSee e.g. Moody, Glyn. \"Project Gutenberg Blocks Access in Germany to All Its Public Domain Books 15 because of Local Copyright Claim on 18 of Them.\" *Techdirt*, 7 Mar. 2018, www.techdirt.com/ 2018/03/07/project-gutenberg-blocks-access-germany-to-all-public-domain-books-because-localcopyright-claim-18-them/. Accessed 20 Mar. 2024.", - "page_start": 8, - "page_end": 8, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# *3. Why Books are Important to Training AI*\n\nDespite the proliferation of online content and some speculating that books would simply die out with the advent of the Internet,9 books remain a critical vehicle for disseminating knowledge. The more scientists study how books can impact people, the less surprising this is. Our brains have been shown to interact with longform books in meaningful ways: we develop bigger vocabularies when we read books; we develop more empathy when we read literary fiction; and connectivity between different regions of our brain increases when we read. 
10\n\nIn that light, it might be unsurprising that books are important for training AI models. A broadly accessible books dataset could be useful not only for building LLMs, but also for many other types of AI research and development.\n\n## *Performance and Quality*\n\nThe performance and versatility of an AI model can significantly depend on whether the training corpus includes books or not. Books are uniquely valuable for AI training due to several characteristics.\n\n- **Length:** Books tend to represent longer-form content, and fiction books, in particular, represent long-form narrative. An AI trained on this longer-form, narrative type of content is able to make connections over a longer context, so instead of putting words together to form a single sentence, the AI becomes more able to string concepts together into a coherent whole; even after a book is divided into many \"chunks\" before the process of tokenization, that will still provide long stretches of text that are longer than the average web page. While Web documents, for instance, tend to be longer than a single sentence, they are not typically hundreds of pages long like a book.\n- **Quality:** The qualities of the training data impact the outputs a tool can produce. Consider an LLM trained on gibberish; it can learn the patterns of that gibberish and, in turn, produce related gibberish, but will not be very useful for writing an argument or a story, for instance. In contrast, training an LLM on books with well-constructed arguments or crafted stories could serve those purposes. While \"well-constructed\" and \"crafted\" are necessarily subjective, the traditional role of editors and the publishing process can provide a useful indicator for the quality of writing inside of books. 
What's more, metadata for books — information such as the title, author and year of publication — is often more comprehensive than metadata for information\n\n&quot;the novel, too, as we know it, has come to its end\" — \"The End of Books.\" *Archive.nytimes.com*, 21 June 9 1992, archive.nytimes.com/www.nytimes.com/books/98/09/27/specials/coover-end.html. Accessed 27 Aug. 2021.\n\nStanborough, Rebecca Joy. \"Benefits of Reading Books: For Your Physical and Mental Health.\" 10 *Healthline*, 15 Oct. 2019, www.healthline.com/health/benefits-of-reading-books#prevents-cognitivedecline.", - "page_start": 5, - "page_end": 5, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "ISBN: 978-1-78655-073-6\n\nISSN: 1756-3666\n\n© Crown copyright 2016\n\nThis publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gsi.gov.uk.\n\nWhere we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.", - "page_start": 44, - "page_end": 44, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "# *7. Conclusion*\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. 
Those learnings might inform substantive decisions about how to build a books data commons for AI training. For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development.41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception — it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else — independent researchers, entrepreneurs, and smaller entities — will have access. The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.\n\nFor other existing and past examples, one might look to the work of Europeana, https:// 41 www.europeana.eu/en, as well as the mountain of commentary on the failed class action settlement between Google, the Authors Guild, and the Association of American Publishers — see e.g. 
the excellent collection of court filings created by James Grimmelmann and colleagues (now archived at the Internet Archive) — https://web.archive.org/web/20140425012526/http://thepublicindex.org/. The Settlement expressly would have set up a \"Research Corpus\" for non-consumptive research. HathiTrust created a Research Center, with the intention of becoming one of the hosts for the \"Research Corpus.\" The Settlement was criticized and was ultimately rejected by the district court for both substantive reasons (that is, what the settlement would specifically do) and procedural (in the sense of violating class-action law, but also in a broader sense of representing a \"backroom deal\" without sufficient participation from impacted interests). The Research Corpus was not a core locus of critique, though it did receive concern in terms of providing too much control to Google, for example. Our purpose in mentioning this is not to relitigate the issue, but rather to call out that design decisions of this sort have been considered in the past.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "It is also an example predicated on copyright's limitations and exceptions — in this case, on U.S. fair use. 
While the Authors Guild filed a copyright infringement suit against HathiTrust, federal courts in 2012 and 2014 ruled that HathiTrust's use of books was fair use.32\n\nA nonprofit founded in 2008, HathiTrust grew out of a partnership among major US university libraries and today is \"an international community of research libraries committed to the long-term curation and availability of the cultural record.\" It started in what it calls the \"early 33 days of mass digitization\" — that is, at a time when it started to become economical to take existing physical artifacts in libraries and turn them into digital files at a large scale.\n\nThe founding members of HathiTrust were among the initial partners for Google's Book Search product, which allows people to search across and view small snippets of text from in-copyright books and read full copies of public domain books scanned from libraries' 34 collections. The libraries provided Google with books from their collections, Google would then scan the books for use in Book Search, and return to the libraries a digital copy for their own uses. These uses included setting up HathiTrust not only to ensure long-term preservation of the digital books and their metadata, but also to facilitate other uses, including full text search of books and accessibility for people with print disabilities. In separate court cases, both Google and HathiTrust's uses of the books were deemed consistent with copyright law.\n\nThe uses most relevant to this paper are those enabled by what HathiTrust refers to today as the Research Center. The Center grew in part out of a research discipline called \"digital humanities,\" which, among other things, seeks to use computational resources or other digital technologies to analyze information and contribute to the study of literature, media, history, and other areas. 
For instance, imagine you want to understand how a given term (e.g., \"war on drugs\") became used; one might seek to analyze when the term was first used and how often it was used over time by analyzing a vast quantity of sources, searching out the term's use. The insight here is that there is much to be learned not just from reading or otherwise consuming specific material, but also from \"non-consumptive research,\" or \"research in which computational analysis is performed on one or more volumes (textual or image objects)\" to derive other sorts of insights. AI training is a type of non-consumptive use.\n\nToday, the Center \"[s]upports large-scale computational analysis of the works in the HathiTrust Digital Library to facilitate non-profit and educational research.\" It includes over 18 million books in over 400 languages from the HathiTrust Digital Library collection. Roughly 58% of the corpus is in copyright. HathiTrust notes that, while this corpus is large, it has limitations in terms of its representation across subject matter, language, geography, and other dimensions. In terms of subject matter, the corpus is skewed towards humanities (64.9%) and social sciences (14.3%). In terms of language, 51% of the books are in English,\n\n<i>Authors Guild v. HathiTrust, 902 F.Supp.2d 445 (SDNY October 10, 2012) and *Authors Guild v.* 32 *HathiTrust*, 755 F.3d 87 (2d Cir. 2014).\n\nSee https://www.hathitrust.org/member-libraries/member-list/ — the membership is principally US 33 institutions, and most of the non-US members are from English speaking countries or institutions that use English as the primary language of operations.\n\nThis functionality is limited to scanned books provided by library partners in the US. 34", - "page_start": 14, - "page_end": 14, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "content repositories, like libraries, with that of AI developers. 
A \"books data commons\" needs to be both responsibly managed, and useful for developers of AI models.\n\nWe use \"commons\" here in the sense of a resource that is broadly shared and accessible, and thus obviates the need for each individual actor to acquire, digitize, and format their own corpus of books for AI training. This resource could be collectively and intentionally managed, though we do not mean to select a particular form of governance in this paper. 4\n\nThis paper is descriptive, rather than prescriptive, mapping possible paths to building a books data commons as defined above and key questions relevant to developers, repositories, and other stakeholders, building on our workshop discussions. We first explain why books matter for AI training and how broader access could be beneficial. We then summarize two tracks that might be considered for developing such a resource, highlighting existing projects that help foreground both the potential and challenges. Finally, we present several key design choices, and next steps that could advance further development of this approach.5\n\nIn this way, we do not use \"commons\" in the narrow sense of permissively licensed. What's more, this 4 resource could also be governed as more of a data \"trust,\" and, indeed, we discuss extensively the work of HathiTrust as a relevant project in this domain. However, our use of the word \"commons\" is not meant to preclude this or other arrangements.\n\nThere are, of course, a range of other types of texts that are not on the web and/or not digital at all - 5 e.g., periodicals, journals, government documents. 
These are out of scope for this paper, but also worthy of further analysis.", - "page_start": 2, - "page_end": 2, - "source_file": "creative_common_ai.pdf" - } - ] - }, - { - "references": { - "source_file": "infographic3.pdf", - "query": "For what reason a researcher's name is not a good tools to track back its works and affiliations ?", - "target_page": 1, - "target_passage": "Many people have the same name Names may change through marriage or other circumstances Individuals use different alphabets, abbreviations, or naming conventions People use different versions of their name during their career", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# Occupational safety and health in Europe: state and trends 2023\n\nSafety and health at work is everyone's concern. It's good for you. It's good for business.", - "page_start": 0, - "page_end": 0, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "# P R A C T I C A L A N D P R O F E S S I O N A L\n\nSomething of a paradox, too; highly competitive but approachable; stylish but never a slave to fashion. I have a true talent for leadership. I'm stable, steady, reliable, and efficient. At the same time, I'm good-looking, good-natured, and good-humored. Seek successful business person driven by values, with a \"whatever it takes\" attitude — just like me, practical and professional.\n\n## T H E H O N C O M P A N Y", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "#### **Safer and healthier technologies and organisation**\n\nTo support the **practical implementation of preventive safety and health measures**, numerous actors (e.g. 
organisations of OSH professionals and practitioners, and standardisation institutes such as the European Committee for Standardisation and the International Organisation for Standardisation) issued safety and health guidance or standards, or developed new and advanced OSH management systems, the engineering sciences worked on better technical preventive technologies, on measuring and monitoring technologies, the medical sciences introduced better medical diagnosis and treatment of work-related diseases, and the social sciences contributed with better knowledge on the legal and economic determinants of OSH, or analysed the characteristics of awareness raising, knowledge development and healthy work organisation.\n\nIt is obvious that **better technical and organisational prevention at work** contributed to more safety and the evident strong reduction in accidents. **Prominent fields and examples** of such improvements are: technically safer design of moving vehicles (e.g. for fork lifts or heavy trucks and machines, light and noise warning signals for moving vehicles); safer design of machines like automatic shutdowns or disconnections, two-hand operating of machines (e.g. for pressing and punching), safer cranes including better technologies for communication between co-workers, coverage of moving parts, safer company cars (e.g. safety belts and airbags), safer tools (e.g. for drilling or cutting); improved personal protective equipment like air-supplied breathing apparatus, steel mesh gloves for meat workers, trousers for forest workers that resist a chainsaw; minimum safety requirements for buildings (e.g. forms and size of stairs and handrails, fire exits and fire alarms, safer ladders and scaffolds), emergency equipment like eye wash and emergency showers; better monitoring of acute hazards (e.g. 
in sewage water systems), exhaust and ventilation technologies to avoid fumes, dusts, chemicals or contact with hazardous biological agents; strong safety obligations for work in confined spaces, or for work at height and work in trenches; introduction of explosion zones and of non-sparking tools, a comprehensive system of warning signals, warning signals for slippery floors and unsafe grounds, better warning systems and equipment in particularly dangerous work environments like road maintenance, combined with better organisational measures; quality systems that promote continuous repair and maintenance of tools; regular instructions by safety representatives and safety coordinators, and guarantee of minimum safety standards of machines and products by European standards like CE ('European Conformity').\n\n#### **Major technological developments**\n\nThe widespread **introduction of new or advanced technologies** — automation, digitalisation/ICT, green technologies, new material technologies and so on — results in substantial changes in work organisation and work processes, and replacement of (traditional) materials (screws by glues, metal and wood by plastics, nanomaterials). For OSH regulators and practitioners, it is a constant challenge to assess these changes regarding their impact on risks for health and safety and to develop adequate risk prevention and mitigation measures.\n\n**Foresight studies** (e.g. by EU-OSHA) have shown that such technological change can help improve working conditions, for example, by taking over heavy, dangerous or routine work (automation, robotisation, exoskeletons), or by better communication and remote control via ICT tools. At the same time, they can also pose new risks, creating rigid work processes without much decision latitude, along with technical options for extreme surveillance and control (e.g. 
by constant geolocation), or pose new safety risks like working at height (renewable energies) or by exposure to materials with widely unknown health effects (e.g. nano).\n\nEU-OSHA has **published several foresight studies** to emphasise possible safety and health concerns. Examples are the reports and fact sheets about new safety risks in green jobs (green buildings, solar energy, wind energy) published more than 10 years ago. Since 2015, EU-OSHA has been publishing reviews and discussion papers on emerging risks and foresight topics. This work covers topics like robotics, performance-enhancing drugs, 3D printing, monitoring technologies, developments in the eretail sector, artificial intelligence, platform work, Long COVID, exoskeletons and so on. In 2018, the Agency published a foresight report on new and emerging OSH risks associated with digitalisation.\n\nA well-known example of such changes in work processes causing new OSH challenges is the **growing number of workers outside the premises of the employer**, that is, at non-stationary or mobile workplaces or at home. This refers to the increasing amount of mobile work in transport, traffic and", - "page_start": 13, - "page_end": 13, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "When we think about our careers, and what we need to do to establish them, we often forget about the need to develop an essential skill: communication. If you start reading through the job descriptions in a industry, you will find that the vast majority of jobs require one or more of the following:\n\n- Effective communication skills\n- Interpersonal skills\n- Ability to work in a team\n- Negotiation skills\n- Conflict resolution skills\n- Report writing skills\n\nWhat all of these skills have in common is that they involve the use of language to achieve a particular purpose. 
And for this reason, having good language skills is essential in any working environment.\n\n#### In a career context, good language skills can also:\n\n- Affect your credibility. Poor grammar indicates to a prospective employer that you are sloppy, while flawless grammar indicates that you pay attention to detail.\n- Improve your relationships with your co- workers. If you are able to express yourself clearly, you can eliminate the confusion and misunderstanding that often leads to conflict.\n- Increase your chances of being promoted.\n- Help you to create a good impression.\n- Improve your ability to persuade others (which is a valuable skill in the working world).", - "page_start": 4, - "page_end": 4, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "Some important questions remain at the end of such a report:\n\n- The **quality of statistics and surveys fades the more irregular are the working** conditions being studied. Which research methods are adequate for a clearer and more reliable evidence base on these working conditions? It might require research methods different from those used today, for example, more investigative case studies; it might also be helpful to evaluate the **existing national working conditions surveys or statistics** under this aspect.\n- **Fading employer–employee relations.** There are special research efforts necessary to study the application of OSH regulations of work with weak or no employer–employee relations, for example, for the self-employed and new forms of employment.\n- **Surveys usually suffer a participation bias, for example, for the migrant workforce.** The low participation rate of migrants can contribute to a particular underestimation regarding their often unfavourable working conditions.\n- **Workers in manual occupations** report **better health than administrative workers** but **less expectations to do the job until being 60 years old**. What are the reasons behind this? 
Is it the healthy worker effect, strong occupation-related differences regarding the perception of health and the expression of health problems?502,503\n- High work intensity is a major cause for low wellbeing and high psychosocial risks. Survey data suggest that **work intensification stopped after 2005**. What might be the reasons? Are the current indicators not specific enough to measure developments of work intensity? Has since then the major burden of intensification been put on other types of workers, for example, subcontracted or self-employed, temporary and seasonal workers, or on workers in the global supply chain?\n- How much evidence is there that **dangerous work has been increasingly contracted out to small and medium-size enterprises and the self-employed**? Are there sufficiently detailed data on whether a larger share of service and client-related work at atypical times or work requiring long working hours has been taken over by self-employed or subcontractors?\n- The **influence of enterprise size** is often difficult to explain. In several aspects, the SMEs perform better, and in other important aspects worse. 
What might be the reason for this?\n- **How is it possible to overcome the 'prevention gap' that in general exists between mobile and stationary workplaces?** Can the solutions be technical or must there be organisational and legal measures, for example, a limitation of the prolonged use of ergonomically inadequate equipment like mobile phones?\n- Impact of **international and global supply chains on OSH: Does it improve or worsen the working conditions in the EU?** Research could try to estimate the risk-reducing impact of the shift of some high-risk productions to enterprises outside the EU, for example, mining, base chemicals, recycling and so on (export of risks), and to estimate the OSH impact of EU export production, for example, vehicles, specialty chemicals, machines for risks at work inside the EU (import of risks).\n- It would also be a big step forward if research could achieve an agreed **standard value or a standard range** (as reliable as possible) for the **attributable fraction of work** to widespread diseases, that is, cardiovascular diseases, mental and behavioural disorders, musculoskeletal diseases and cancer.\n- **Compliance** with and impact of legislation. Currently, there are data on the percentage of enterprises with a risk assessment but very limited information about the **quality of these risk assessments and of implemented risk management and reduction measures**. Previous studies indicate that in many cases the risk assessment is conducted by an enterprise just to comply with legal obligations (paper compliance). A possible approach could be an **anonymous evaluation of the quality of a representative share** of risk assessments.", - "page_start": 139, - "page_end": 139, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Great brands are like great people. The best ones blend a distinctive personality with a strong character. 
They combine a \"can-do\" attitude with a \"can't-wait-to-try-somethingnew\" enthusiasm. They know themselves as well as they know the people who associate with them. They know that while good looks are important, beauty is only skin deep; it's what's inside that counts.\n\nBecause all of our brands have something unique and valuable to offer, we're letting them speak for themselves. As for the people who know and love our brands, we've invited a few to share an \"up close and personal\" look into why and how HON INDUSTRIES is …\n\n# T H E P E R F E C T M A T C H", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "workers and those with care duties at home. Digitalisation also offers opportunities for more effective OSH training, advanced workplace risk assessment, communication and OSH inspections.\n\n**Digital technologies can worsen the OSH situation at workplaces.** Depending on how technologies are designed and implemented, on the organisational context and on the employment status, digitalisation may result in workers being more exposed to OSH risks such as ergonomic and psychosocial risks, with an increase in work-related stress, increasing performance pressure and work complexity, facilitating irregular working hours, reducing social interaction and support at work, blurred boundaries between work and private life, and new forms of dislocated work with unclear employment status. Technical concerns relate to aspects like safe interaction of workers with robots and semiautonomous machines and vehicles. The extensive use of data has the potential to harm privacy interests. 
**Digitalisation can create abrupt (disruptive) and emerging changes at workplaces** and with that very different challenges for OSH.275 Eurofound summarised the opportunities and risks of **ICTbased mobile work** in a table format.276\n\n| Opportunities | Risks |\n| --- | --- |\n| Potential transformation of work organisation | |\n| Contribution to inclusive labour markets | Potential exclusion of certain groups from the labour market |\n| Addressing (regional) labour shortages | (for example, low-skilled workers, older people, place-bound |\n| Job creation and retention | occupations) |\n| Flexibility and autonomy | Advanced monitoring and control |\n| Increased work intensity and stress | |\n| Improved work-life balance | 'Limitless work' |\n| Potential expected 24/7 availability | |\n| Long working hours, limited rest time | |\n| Blurring spheres of work and private life | |\n| Productivity, costs, results-based remuneration | |\n| Improved communication and collaboration | Information overload |\n| Conflicts due to a lack of coordination | |\n| Skills development (technical applications) | Social and professional isolation |\n| High demands for self-management and self-organisation | |\n| Outsourcing of employer responsibilities (equipment, health | |\n| and safety data protection) | |\n\n#### **Table 27: Opportunities and risks of ICT-based mobile work – Eurofound**\n\nEU-OSHA observes particular risks for safety and health in:277\n\n- low standards of OSH (particularly ergonomic) in mobile and home-based work,\n- safety of robots, cobots and autonomous vehicles,\n- platform work with low OSH standards,\n- enhanced and detailed surveillance,\n- permanent availability, and\n- physical inactivity, permanent sitting and focusing on digital equipment.\n\nEU-OSHA included in its ESENER 2019 survey several questions regarding **digitalisation and OSH** in enterprises. 
There is a great diversity when it comes to the types of digital technologies reported by the establishments. PCs at fixed workplaces (86% of surveyed establishments in the EU27) and laptops, tablets, smartphones or other mobile devices (77%) are frequently reported across all activity sectors and business size classes. Only 6% of surveyed establishments in the EU27 reported using none of the digital technologies.278", - "page_start": 104, - "page_end": 104, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "# CHAPTER 11:\n\n### LANGUAGE SKILLS AT WORK HOW TO WRITE A RESIGNATION LETTER\n\nNo matter what the reason, resigning from your job is likely to be an uncomfortable experience.\n\nIf you are leaving for personal reasons (such as moving away, starting a family, or retiring), you may feel sad about leaving. But if you are leaving for a better opportunity, or you've simply had enough of your current job, you may be glad to be moving on.\n\nEither way, it's always going to be in your best interests to leave on a positive note, and to resign in a professional manner.", - "page_start": 47, - "page_end": 47, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### **4.3 Wellbeing and health status**\n\nExisting concepts of **wellbeing** cover **more aspects of work than working conditions or safety and health** at workplaces. 
Eurofound mentions as the most relevant components: *income, working time arrangements, possibilities for skills development and career advancement, and the degree of individual control over work*.243 The United Nations Economic Commission for Europe (UNECE) developed a scheme of quality of employment that covers these aspects: *safety and ethics of employment, income benefits and employment, working hours and balancing working and non-working life, security of employment and social protection, social dialogue, skills development and training, workplace relationships and work motivation.*244\n\nThis chapter **focuses on the health and safety aspects** of wellbeing, although the OSH aspect is often not clearly separable from the above-mentioned aspects, that is, when surveys are intending to identify the level of 'satisfaction at work'. Still, due to its serious impact on all other aspects of working conditions, the consequences of insufficient health are regarded as critical:\n\n*'While OHS is only one substantive working condition, like earnings and job insecurity it is arguably a critical one for many workers. In terms of scope and severity, even official data … suggests poor OHS is something most workers will experience at some point and many far more frequently.'*245\n\nA common methodology to collect data on **health status** and wellbeing is **self-reporting and selfassessment** of workplace risks, health risks and health problems, absence, job satisfaction and working life perspective from a health point of view. The data are in general collected by EU-wide surveys, for example, by the EWCS, the Flash Eurobarometer, ESENER or the LFS Ad hoc modules. The description of working conditions in the OSH Barometer starts with responses regarding the **'Overall opinion'** on working conditions. 
This allows insight into the subjective assessment of health risks at work and wellbeing.\n\n### *4.3.1 Satisfaction at work*\n\nIn the EWCS of 2015, at EU level 86% of the workers respond that they are **'satisfied'** (60%) or **'very satisfied'** (26%) with their work. Country differences exist but are not striking. The EU Member States with the highest satisfaction rates are Austria, the Netherlands, Finland, Czechia, Denmark, Belgium and Estonia; they range between 93% and 90%. The six countries with the lowest sum of satisfied and very satisfied responses are Greece, Croatia, France, Spain, Italy and Latvia; their values range between 77% and 82%.\n\n#### **Figure 28: Satisfaction with working conditions in the main paid job – EWCS 2015246**", - "page_start": 88, - "page_end": 88, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "way by OSH legislation or OSH practice. The principle of employer responsibility for working conditions of workers is undermined or at least blurred in such situations.\n\nFuture solutions could focus on several aspects — a **new definition of 'work' or of 'employment', stronger individual responsibility, or extended state interventions to guarantee OSH** also in such working and employment conditions. There are some examples of such solutions but to date most of them focus on better information, that is, stronger individual responsibility.\n\n**Undeclared and illegal employment is scarcely visible** in the statistics. Due to the difficult conditions for research, the overall OSH situation in these types of work is widely unknown; in case study-based investigative studies, the working conditions — including safety and health — for this group are mostly regarded as worse compared to workers with a regular work contract. 
It seems to be necessary to consider different research and action initiatives for this type of work, also in collaboration with other state supervising authorities.\n\nThe health data clearly show an ever-growing **share of work tasks that go along with or even require physical inactivity**. Inactive work is often characterised by permanent sitting combined with high requirements for visual and mental focusing during work, for example, towards digital equipment or to traffic situations. Serious indirect health consequences of such inactivity can be seen in the strong increase in certain widespread diseases or disease-supporting factors, like obesity.\n\nEven 15 years after the enlargement of the EU in 2004, **significant differences between Member States** can still be observed regarding several working conditions. The data demonstrate that the worst status concerning physical risks, wellbeing, and expectations to do the job until the age of 60 — is almost always present in eastern EU Member States, followed by southern Member States, all compared to the status in central, western and northern Member States**.** For psychosocial risks, it is just the other way around, these are more often reported in central, western and northern Member States.\n\n**International organisations complain about an unfair divide of OSH risks in globalised supply chains**, be it in mining, metallurgy, textile production, disposal of hazardous waste or other sectors. The ILO decided in June 2022 to make OSH one of the Fundamental Principles and Rights at Work. In this context, 10 ILO conventions and instruments are considered now as fundamental, including two OSH conventions: the Occupational Safety and Health Convention, of 1981 (No. 155) and the Promotional Framework for Occupational Safety and Health Convention, of 2006 (No. 187). 
Ethical, fairness and justice considerations have led to more activities on decent, safe and healthy work in developing countries and a fair share of risks at work in global supply chains. These are important initiatives, but until now they only slightly changed the overall situation when looking at the global scale of the issue.", - "page_start": 18, - "page_end": 18, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "infographic3.pdf", - "query": "What is an ORCID iD ?", - "target_page": 1, - "target_passage": "ORCID iD is a 16-digit identifier that researchers can register for and use for free.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# **The Value of Using Unique Identifiers for Researchers**\n\n#### **Researchers are mobile!**\n\n#### **30% OF THE SCIENTISTS WHO GOT THEIR PhD IN THE UNITED KINGDOM NOW LIVE ELSEWHERE** Source: Science Magazine\n\nResearch institutions and organizations therefore find it hard to\n\n- **Benchmark their organization against others**\n**Identify, track, and report on researchers' aliations and contributions (publications, peer reviews, grants, and more)** \n\n#### **Institutions Face a Rising Tide of Research**\n\n**Institutions must increasingly recognize and demonstrate the impact of all types of research contributions** \n\n# **Tackling Information Overload**\n\nORCID is a non-profit organization, which provides a fully open and interoperable identifier to reliably connect researchers with their research contributions. 
The ORCID iD is a 16-digit identifier that researchers can register for and use for free.\n\n### **How ORCID Works**\n\n- **It's a registry of unique persistent identifiers for researchers**\n- **It's a hub that connects researchers with their professional activities and contributions**\n- **It's a global community that enables researchers to share their data with other individuals, organizations, and systems**\n\n### **Why Connect with ORCID?**\n\n**Hundreds of members and systems use ORCID globally**\n\n# **5.5 MILLION+**\n\n**live ORCID iDs registered since its 2012 launch**\n\n## **Evidence of Institutional Value**\n\nExamples of time/sta savings achieved by implementing ORCID from around the world\n\n**UK:** 0.2 – 0.4 FTEs per institution1 **Portugal:** 100,000 researcher hours per year2 **Australia:** 15-30 minutes per grant application3 **1. Jisc/ARMA Institutional ORCID Implementation and Cost Benefit Analysis Report 2015 2. Cátia Laranjeira, FCT - Fundação para a Ciência e a Tecnologia 2017 3. 
Australian Research Council governance meeting, September 2018**\n\n\"Having ORCID iDs for most of our researchers has helped in providing authoritative accounts in our various databases, ensuring accuracy in reviewer identities, and helping editors find reviewers and check expertise.\"\n\n**—Brooks Hanson, Executive Vice President, Science, American Geophysical Union**\n\n#### **How Organizations and Researchers Benefit**\n\n#### **INSTITUTIONS RESEARCHERS**\n\n- Save time and reduce errors with automated information-sharing and cross-system interoperability\n- Manage your organization name and your researchers' connections with it\n\t-\n- Maintain links with your researchers - past, present, and future\n\n- Improve recognition and discoverability of their research\n- Spend more time doing research, less time managing it\n- Control and manage a trusted and easily shareable record of their research activities and aliations – for free\n- **Three Ways to Get Involved**\n\t- **1. Encourage and support your researchers in getting, sharing, and using their ORCID iD**\n\t- **2. Invest in integrating ORCID into your systems**\n\t- **3. Connect data to and from your researchers' ORCID records to support information use and reuse across organizations**\n\nSponsored by ORCID\n\n**To learn more go to https://orcid.org**\n\nAll IDC research is © 2018 by IDC. All rights reserved. All IDC materials are licensed with IDC's permission and in no way does the use or publication of IDC research indicate IDC's endorsement of ORCID's products/or strategies.", - "page_start": 0, - "page_end": 0, - "source_file": "infographic3.pdf" - }, - { - "text": "The following scenario shows the how to create a new copy of a volume in a different storage pool. 
As shown in Example 7-7, the volume initially has a single copy with copy_id 0 that is provisioned in pool Pool0.\n\n*Example 7-7 The lsvdisk command*\n\nIBM_Storwize:ITSO:superuser>lsvdisk 2 id 2 name vdisk0 IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 0 mdisk_grp_name Pool0 capacity 10.00GB type striped formatted yes formatting no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 6005076400F580049800000000000004 preferred_node_id 2 fast_write_state empty cache readonly udid fc_map_count 0 sync_rate 50 copy_count 1 se_copy_count 0 File system mirror_write_priority latency RC_change no compressed_copy_count 0 access_IO_group_count 1 last_access_time parent_mdisk_grp_id 0 parent_mdisk_grp_name Pool0 owner_type none owner_id owner_name encrypt yes volume_id 2 volume_name vdisk0 function throttle_id throttle_name IOPs_limit bandwidth_limit_MB volume_group_id volume_group_name cloud_backup_enabled no cloud_account_id cloud_account_name backup_status off last_backup_time restore_status none backup_grain_size", - "page_start": 315, - "page_end": 315, - "source_file": "sg247938.pdf" - }, - { - "text": "- 2. To map a volume, select it and click **Next** to map it to the host. The volume is assigned the next available SCSI ID if you leave **System Assign** selected. However, by selecting **Self Assign**, you can manually set SCSI IDs, as shown in Figure 8-30.\n\n| | | | Map Volumes to ITSO-VMHOST-01: Select SCSI IDs | | | | | | × |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Name | SCSI ID | → | Caching I/O Group ID | Ili | Type of Mapping | SCSI ID | → | lli | |\n| VMware2 | 2 | イト | 0 | | Private | 0 | | | |\n| VMware3 | 3 | - | 0 | | | | | | |\n| Cancel | | | | | | | Back | Next ▶ | |\n\n*Figure 8-30 Modify Host Volume Mappings: Assign SCSI ID*\n\nIf you select a SCSI ID that is in use for the host, you cannot proceed. As shown in Figure 8-29 on page 347, we selected SCSI ID 0. 
However, you can see in the right column SCSI ID 0 is allocated. By changing to SCSI ID 1, we can click **Next**.", - "page_start": 369, - "page_end": 369, - "source_file": "sg247938.pdf" - }, - { - "text": "When you are ready to create your striped volume, use the **mkvdisk** command. The command shown in Example 7-2 creates a 10 gigabyte (GB) striped volume with volume ID 8 within the storage pool Pool0 and assigns it to the I/O group io_grp0. Its preferred node is node 1.\n\n*Example 7-2 The mkvdisk command*\n\nIBM_Storwize:ITSO:superuser>mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 10 -unit gb -name Tiger Virtual Disk, id [8], successfully created\n\nTo verify the results, use the **lsvdisk** command providing the volume ID as the command parameter, as shown in Example 7-3.\n\n*Example 7-3 The lsvdisk command*\n\nIBM_Storwize:ITSO:superuser>lsvdisk 8 id 8 name Tiger IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 0 mdisk_grp_name Pool0 capacity 10.00GB type striped formatted no formatting yes mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 6005076400F580049800000000000010 preferred_node_id 2 fast_write_state not_empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 se_copy_count 0 File system mirror_write_priority latency RC_change no compressed_copy_count 0 access_IO_group_count 1 last_access_time parent_mdisk_grp_id 0 parent_mdisk_grp_name Pool0 owner_type none owner_id owner_name encrypt yes volume_id 8 volume_name Tiger function throttle_id throttle_name IOPs_limit", - "page_start": 311, - "page_end": 311, - "source_file": "sg247938.pdf" - }, - { - "text": "- AI & ML in Fusion (https://suli.pppl.gov/2023/course/Rea-PPPL-SULI2023.pdf)\n- AI & ML in Fusion, video lecture (https://drive.google.com/file/d/1npCTrJ8XJn20ZGDA_DfMpAN uQZFMzKPh/view?usp=drive_link) Archived (https://web.archive.org/web/20230702164332/ https://drive.google.com/file/d/1npCTrJ8XJn20ZGDA_DfMpANuQZFMzKPh/view?usp=drive _link) 2 July 2023 at the 
Wayback Machine\n- Alter, Alexandra; Harris, Elizabeth A. (20 September 2023), \"Franzen, Grisham and Other Prominent Authors Sue OpenAI\" (https://www.nytimes.com/2023/09/20/books/authors-open ai-lawsuit-chatgpt-copyright.html?campaign_id=2&emc=edit_th_20230921&instance_id=103 259&nl=todaysheadlines®i_id=62816440&segment_id=145288&user_id=ad24f3545dae 0ec44284a38bb4a88f1d), *The New York Times*, archived (https://web.archive.org/web/2024 0914155020/https://www.nytimes.com/2023/09/20/books/authors-openai-lawsuit-chatgpt-co pyright.html?campaign_id=2&emc=edit_th_20230921&instance_id=103259&nl=todaysheadl ines®i_id=62816440&segment_id=145288&user_id=ad24f3545dae0ec44284a38bb4a88 f1d) from the original on 14 September 2024, retrieved 5 October 2024\n- Altman, Sam; Brockman, Greg; Sutskever, Ilya (22 May 2023). \"Governance of Superintelligence\" (https://openai.com/blog/governance-of-superintelligence). *openai.com*. Archived (https://web.archive.org/web/20230527061619/https://openai.com/blog/governanc e-of-superintelligence) from the original on 27 May 2023. Retrieved 27 May 2023.\n- Anderson, Susan Leigh (2008). \"Asimov's \"three laws of robotics\" and machine metaethics\". *AI & Society*. **22** (4): 477–493. doi:10.1007/s00146-007-0094-5 (https://doi.org/10.1007%2Fs0 0146-007-0094-5). S2CID 1809459 (https://api.semanticscholar.org/CorpusID:1809459).\n- Anderson, Michael; Anderson, Susan Leigh (2011). *Machine Ethics*. Cambridge University Press.\n- Arntz, Melanie; Gregory, Terry; Zierahn, Ulrich (2016), \"The risk of automation for jobs in OECD countries: A comparative analysis\", *OECD Social, Employment, and Migration Working Papers 189*\n- Asada, M.; Hosoda, K.; Kuniyoshi, Y.; Ishiguro, H.; Inui, T.; Yoshikawa, Y.; Ogino, M.; Yoshida, C. (2009). \"Cognitive developmental robotics: a survey\". *IEEE Transactions on Autonomous Mental Development*. **1** (1): 12–34. doi:10.1109/tamd.2009.2021702 (https://doi.org/10.110 9%2Ftamd.2009.2021702). 
S2CID 10168773 (https://api.semanticscholar.org/CorpusID:101 68773).\n- \"Ask the AI experts: What's driving today's progress in AI?\" (https://www.mckinsey.com/business -functions/mckinsey-analytics/our-insights/ask-the-ai-experts-whats-driving-todays-progressin-ai). *McKinsey & Company*. Archived (https://web.archive.org/web/20180413190018/http s://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/ask-the-ai-expert s-whats-driving-todays-progress-in-ai) from the original on 13 April 2018. Retrieved 13 April 2018.\n- Barfield, Woodrow; Pagallo, Ugo (2018). *Research handbook on the law of artificial intelligence*. Cheltenham, UK: Edward Elgar Publishing. ISBN 978-1-7864-3904-8. OCLC 1039480085 (https://search.worldcat.org/oclc/1039480085).\n- Beal, J.; Winston, Patrick (2009), \"The New Frontier of Human-Level Artificial Intelligence\", *IEEE Intelligent Systems*, vol. 24, pp. 21–24, doi:10.1109/MIS.2009.75 (https://doi.org/10.11 09%2FMIS.2009.75), hdl:1721.1/52357 (https://hdl.handle.net/1721.1%2F52357), S2CID 32437713 (https://api.semanticscholar.org/CorpusID:32437713)\n- Berdahl, Carl Thomas; Baker, Lawrence; Mann, Sean; Osoba, Osonde; Girosi, Federico (7 February 2023). \"Strategies to Improve the Impact of Artificial Intelligence on Health Equity: Scoping Review\" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11041459). *JMIR AI*. **2**: e42936. doi:10.2196/42936 (https://doi.org/10.2196%2F42936). ISSN 2817-1705 (https://se arch.worldcat.org/issn/2817-1705). PMC 11041459 (https://www.ncbi.nlm.nih.gov/pmc/articl es/PMC11041459). PMID 38875587 (https://pubmed.ncbi.nlm.nih.gov/38875587). S2CID 256681439 (https://api.semanticscholar.org/CorpusID:256681439).", - "page_start": 52, - "page_end": 52, - "source_file": "wikipedia3.pdf" - }, - { - "text": "You can manually assign a SCSI ID to the LUNs you are mapping. This technique is particularly useful when the host needs to have the same LUN ID for a LUN before and after it is migrated. 
To assign the SCSI ID manually, select the **Self Assign** option and follow the instructions as shown in Figure 9-11.\n\n| Map Volumes to iSCSI_Host: Select SCSI IDs | | | | | | × |\n| --- | --- | --- | --- | --- | --- | --- |\n| Select SCSI ID these mappings will be placed on: | | These SCSI IDs are already occupied: | | | | |\n| → Name SCSI ID Caching I/O Group ID | llí | Type of Mapping | SCSI ID | → | | lli |\n| イ controller6_0 ... 1 0 ▼ | | Private | 0 | | | |\n| Cancel | | | | Back | Next ▶ | |\n\n*Figure 9-11 Manually assign a LUN SCSI ID to mapped Volume*\n\nWhen your LUN mapping is ready, click **Next**. A new dialog is displayed with a summary of the new and existing mappings, as shown in Figure 9-12.\n\n| Map Volumes to iSCSI_Host: Summary | | | | × |\n| --- | --- | --- | --- | --- |\n| The following volumes will be mapped to iSCSI_Host: | | | | |\n| Name | SCSI ID | NVMe NSID | Caching I/O Group ID | Ne î |\n| controller6_000000000000002 | ਹ | | 0 | New |\n| ITSO_Vol001 | 0 | | 0 | |\n| < | | | | > |\n| Cancel | | | Back | Map Volumes |\n\n*Figure 9-12 Volumes mapping summary before migration*", - "page_start": 418, - "page_end": 418, - "source_file": "sg247938.pdf" - }, - { - "text": "Another example is a sales team that generates a monthly sales report for each person on the sales team. The sales manager needs a copy of these reports. A distribution can be set up to email the documents to the appropriate sales manager.\n\nThe applications for using ODF are endless, but the basis for using it is the same. Documents are loaded regularly and are needed by one or more users as they become available in Content Manager OnDemand. Let us look at a specific example from our fictitious company that was introduced in 1.2.1, \"Background information of an example company\" on page 6.\n\nAFinancial Co generates monthly credit card statements for all its customers. 
These customers can choose to receive a hardcopy of the statement or have the statement sent to them as an email attachment.\n\nIn this example, even though separate customer statements are created each month, they are loaded into the system at the same time, so only one load occurs each month. This information is important when you are determining the best way to set up the distribution. Before a distribution is set up, ask yourself the following questions:\n\n- -What documents are needed?\n- -Who receives the documents?\n- -When are the documents retrieved and delivered?\n- -Where are they delivered?\n\n# **14.1.1 What documents are needed**\n\nIn our example, we identified our documents as the customer statements. How do you identify the customer report that you need from the hundreds of thousands of documents that are stored in Content Manager OnDemand? Certain customers might receive multiple monthly statements.\n\nIn general, you identify the documents by creating an SQL query that uses index fields and values that uniquely identify the documents that you want to retrieve when they are loaded. You can then define the distribution to include multiple report bundles with different SQL queries for each bundle. If the SQL must retrieve the document that is the same except for a value that identifies the recipient, a single distribution can be used with a recipient list. In this case, the SQL specifies a wildcard value. When processing, ODF fills in the recipient ID in the SQL statement. For example, a recipient list contains recipients 100001, 100002, and 100003 and an SQL statement of \"Where branch_id = '$ODF_RECIPIENT'\". 
When this recipient list is processed, ODF creates a distribution for recipient 100001 with all reports where branch_id = '100001', recipient 100002 will receive a distribution that contains all reports where branch_id = '100002', and so on.\n\n# **14.1.2 Who receives the documents**\n\nIn our example, each customer needs a statement copy every month. To identify the customers to Content Manager OnDemand, an ODF recipient must be created for each customer. Depending on how the documents are delivered, a destination must be set up. For example, if a set of documents will be delivered to a recipient by using email, an email address must be specified in the recipient definition.\n\n# **14.1.3 When the documents are retrieved and delivered**\n\nODF operates throughout the 24-hour day. You can schedule your distributions to be processed at a specific time of day or processed as they are loaded. To specify when the distribution is delivered, choose the method, which is either Loaded, All Ready, Time of Day, Time of Print, or external.", - "page_start": 341, - "page_end": 341, - "source_file": "sg246915.pdf" - }, - { - "text": "*Example 7-9 Synchronization*\n\n```\nIBM_Storwize:ITSO:superuser>lsvdisksyncprogress\nvdisk_id vdisk_name copy_id progress estimated_completion_time\n2 vdisk0 1 0 171018232305\nIBM_Storwize:ITSO:superuser>lsvdisksyncprogress\nvdisk_id vdisk_name copy_id progress estimated_completion_time\n2 vdisk0 1 100\n```\nAs shown in Example 7-10, the new volume copy (copy_id 1) was added and appears in the output of the **lsvdisk** command.\n\n*Example 7-10 The lsvdisk command*\n\nIBM_Storwize:ITSO:superuser>lsvdisk vdisk0 id 2 name vdisk0 IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id many mdisk_grp_name many capacity 10.00GB type many formatted yes formatting no mdisk_id many mdisk_name many FC_id FC_name RC_id RC_name vdisk_UID 6005076400F580049800000000000004 preferred_node_id 2 fast_write_state empty cache readonly udid fc_map_count 0 
sync_rate 50 copy_count 2 se_copy_count 0 File system mirror_write_priority latency RC_change no compressed_copy_count 0 access_IO_group_count 1 last_access_time parent_mdisk_grp_id many parent_mdisk_grp_name many owner_type none owner_id owner_name encrypt yes volume_id 2 volume_name vdisk0 function throttle_id throttle_name IOPs_limit bandwidth_limit_MB volume_group_id", - "page_start": 317, - "page_end": 317, - "source_file": "sg247938.pdf" - }, - { - "text": "Example 7-15 shows the **splitvdiskcopy** command, which is used to split a mirrored volume. It creates a volume named SPLIT_VOL from copy with ID 1 of the volume named VOLUME_WITH_MIRRORED_COPY.\n\n*Example 7-15 Split volume*\n\nIBM_Storwize:ITSO:superuser>splitvdiskcopy -copy 1 -iogrp 0 -name SPLIT_VOL VOLUME_WITH_MIRRORED_COPY Virtual Disk, id [1], successfully created\n\nAs you can see in Example 7-16, the new volume is created as an independent volume.\n\n*Example 7-16 The lsvdisk command*\n\nIBM_Storwize:ITSO:superuser>lsvdisk SPLIT_VOL id 1 name SPLIT_VOL IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 1 mdisk_grp_name Pool1 capacity 10.00GB type striped formatted yes formatting no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 6005076400F580049800000000000012 preferred_node_id 1 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 se_copy_count 0 File system mirror_write_priority latency RC_change no compressed_copy_count 0 access_IO_group_count 1 last_access_time parent_mdisk_grp_id 1 parent_mdisk_grp_name Pool1 owner_type none owner_id owner_name encrypt yes volume_id 1 volume_name SPLIT_VOL function throttle_id throttle_name IOPs_limit bandwidth_limit_MB", - "page_start": 321, - "page_end": 321, - "source_file": "sg247938.pdf" - }, - { - "text": 
"```\narn:partition:service:region:account-id:resource-id\narn:partition:service:region:account-id:resource-type/resource-id\narn:partition:service:region:account-id:resource-type:resource-id\n```\n- arn: literally, the string \"arn\"\n- partition is one of the three partitions: AWS Regions, AWS China Regions, or AWS GovCloud (US) Regions\n- service is the specific service such as Amazon EC2 or DynamoDB\n- region is the AWS region like us-east-1 (North Virginia)\n- account-id is the AWS account ID\n- resource-id is the unique resource ID. Other forms for resource IDs like resource-type/ resource-id, are used by services like IAM where IAM users have resource-type of user and resource-id a username like MyUsername,\n\nTry to identify the service, region, and resource for the following example ARNs:\n\n```\narn:aws::dynamodb:us-west-2:123456789012:table/myDynamoDBTable\narn:aws::lambda:us-east-2:123456789012:function:my-function:1\n```\nIf you are interested in learning more, check out a map of Regions and Availability Zones, a view of our data centers, and the complete list of regional service endpoints.\n\n# **Security model**\n\nSecurity is a top priority for AWS. Before you start building serverless solutions, you need to know how security factors into AWS solutions.\n\nAmazon Web Services has a *shared responsibility model:*", - "page_start": 18, - "page_end": 18, - "source_file": "serverless-core.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2669.pdf", - "query": "What type of instability causes rims in ruptured polystyrene thin films to decay into small drops ?", - "target_page": 3, - "target_passage": " The rims may further decay into lines of small drops due to a Rayleigh-type instability", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "scopic film. We have seen that the KMC model is able to describe the interplay of solute diffusion within the solvent and solvent evaporation/condensation. 
It also takes the liquid-liquid, liquidparticle and particle-particle interactions into account and therefore allows us to distinguish different regimes of the transverse (fingering) instability of the evaporative dewetting front: a transport regime where the instability is almost completely independent of the interaction strengths and a demixing regime where particles and liquid demix at the receding front thereby increasing its transverse instability.\n\nThe dynamical density functional theory describes the coupled dynamics of the density fields of the liquid and the nanoparticles. In the form described above (i.e. based on the two-dimensional hamiltonian (3)) we obtain a simple theory that allows us to study the time evolution of the evaporating ultrathin film and also to investigate the influence of processes such as surface diffusion by the liquid, which are not incorporated in the KMC model. However, it is straightforward to extend the theory to consider a fully three-dimensional fluid film, in which one can distinguish between short- and long-range interactions of solvent and/or solute with the substrate. We have, however, restricted the examples given here to situations that can also be described using the KMC model. A further exploration will be presented elsewhere.\n\nFinally, we have discussed a simple thin film model for the hydrodynamics on the mesoscale. It results from a long-wave approximation and consists of coupled evolution equations for the film thickness profile and the mean particle concentration. 
It has been used to discuss the self-pinning of receding contact lines that is related to the formation of rings of dried-in particles (coffeestain effect) that frequently occurs when films or drops of solutions or suspensions dewet by the combined effects of convection and evaporation.\n\nOne of the primary goals of researchers in this field, is the search for simple-to-use techniques that allow one to produce hierarchically structured functional layers for a wide range of applications such as, e.g., organic solar cells [98]. This means that the experiments advance very rapidly towards increasingly complex systems. For example, there have been investigations of the influence of the phase behaviour on the drying of droplets of a suspension of hard-sphere colloidal particles and non-adsorbing polymer [99], of the instabilities and the formation of drops in evaporating thin films of binary solutions [100] that may lead to treelike patterns [101], of effects of a secondary phase separation on evaporation-induced pattern formation in polymer films [102], and of the influence of an imposed flow on decomposition and deposition processes in a sliding ridge of evaporating solution of a binary polymer mixture [103] and of the influence of rather", - "page_start": 23, - "page_end": 23, - "source_file": "1001.2669.pdf" - }, - { - "text": "### I. INTRODUCTION\n\nThe patterns formed in dewetting processes have attracted strong interest since Reiter analysed the process quantitatively in the early nineties. In these experiments, that proved to be a paradigm in our understanding of dewetting, a uniform thin film of polystyrene (tens of nanometers thick) is deposited on a flat silicon oxide substrate is brought above the glass transition temperature. The film ruptures in several places, forming holes which subsequently grow, competing for space. As a result, a random polygonal network of liquid rims emerges. 
The rims may further decay into lines of small drops due to a Rayleigh-type instability [1–3]. The related problems of retracting contact lines on partially wetting substrates and the opening of single holes in rather thick films have also been studied [4, 5].\n\nSubsequent work has mainly focused on many different aspects of the dewetting process for simple non-volatile liquids and polymers (for reviews see Refs. [6–8]). All stages of the dewetting of a film are studied: the initial film rupture via nucleation or a surface instability (called spinodal dewetting) [1, 9–13], the growth process of individual holes [14–16], the evolution of the resulting hole pattern [3, 13], and the stability of the individual dewetting fronts [17–19]. We note in passing, that descriptions of dewetting patterns may also be found in historic papers, particularly for the dewetting of a liquid film on a liquid substrate. Tomlinson [20, footnote 18 on p. 40] considered turpentine on water and Marangoni [21, p. 352f] oil on water.\n\nMore recently, interest has turned to the dewetting processes of solutions and suspensions. However, these systems have not yet been investigated in any great depth. Such systems are complicated because their behaviour is determined by the interplay between the various solute (or colloid) and solvent transport processes. Furthermore, the solvents that are used often evaporate, i.e., one has to distinguish between 'normal' convective dewetting and evaporative dewetting. A number of experiments have been performed employing (colloidal) solutions of polymers [22–25], macromolecules like collagen and DNA [26–31] and nanoparticles [32–40]. The latter are sometimes referred to as 'nanofluids'. The initial focus of much of the research in the field has been on investigating the structures that are formed which are similar to the ones observed in the 'classical' dewetting of non-volatile liquids. 
Labyrinthine structures and polygonal networks result from spinodal dewetting and heterogeneous nucleation and growth, respectively. They are 'decorated' with the solute and therefore conserve the transient dewetting pattern as a dried-in structure when all the solvent has evaporated [28, 34]. The picture is, however, not complete. The solute may", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2669.pdf" - }, - { - "text": "also shift the spinodal and binodal lines as compared to the locations of these lines in the phase diagram for the pure solvent [41]. As a consequence, the solute concentration influences the hole nucleation rate. More importantly, the solute particles may also destabilise the dewetting fronts. As a result, one may find strongly ramified structures in all three systems [23, 25, 40, 42]. A selection of images exhibiting some of the possible structures is displayed in Fig.1.\n\nFor volatile solvents, the contact lines retract even for wetting fluids. It has been found that such evaporatively receding contact lines may deposit very regular line or ring patterns parallel to the moving contact line [24, 43]. The deposition of a single ring of colloids from a evaporating drop of colloidal suspension is well known as the 'coffee stain effect' [44]. Detailed investigations reveal the emergence of rich structures including multiple irregular rings, networks, regular droplet patterns, sawtooth patterns, Sierpinski carpets, and – in the case of DNA – liquid crystalline structures [22, 30, 45–49]. The deposition of regularly spaced straight lines orthogonal to the moving contact line has also been reported [50]. Droplet patterns may as well be created employing solvent-induced dewetting of glassy polymer layers below the glass transition temperature [51–53].\n\nNote that the dewetting of pure volatile liquids has also been studied experimentally [54] and theoretically [55–58]. 
In this case, different contact line instabilities have been observed for evaporating liquid drops [59, 60].\n\nIn the present article we review and preview the experiments and in particular the various modelling approaches for dewetting suspensions of (nano-)particles in volatile partially wetting solvents. After reviewing the basic experimental results in Section II, we discuss in Section III several theoretical approaches. In particular, we present a kinetic Monte Carlo model in Section III A, a dynamic density functional theory in Section III B, and a thin film evolution equation in Section III C. Finally, we conclude in Section IV by discussing advantages and shortcomings of the individual approaches and future challenges to all of them.\n\n### II. EXPERIMENT WITH NANOPARTICLE SOLUTIONS\n\nWe focus on experiments that use monodisperse colloidal suspensions of thiol-passivated gold nanoparticles in toluene [33, 34, 37–40, 61]. The gold core of 2 – 3 nm diameter is coated by a layer of alkyl-thiol molecules. The length of the carbon backbone of the thiol used in the experiments ranges from 6 to 12 carbon atoms (C6 to C12) [40]. By varying the chain length, one can control", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [5] F. Brochard-Wyart and J. Daillant, \"Drying of solids wetted by thin liquid films,\" Can. J. Phys. 68, 1084–1088 (1989).\n- [6] P. Muller-Buschbaum, \"Dewetting and pattern formation in thin polymer films as investigated in real ¨ and reciprocal space,\" J. Phys.-Condes. Matter 15, R1549–R1582 (2003).\n- [7] R. Seemann, S. Herminghaus, C. Neto, S. Schlagowski, D. Podzimek, R. Konrad, H. Mantz, and K. Jacobs, \"Dynamics and structure formation in thin polymer melt films,\" J. Phys.-Condes. Matter 17, S267–S290 (2005).\n- [8] U. Thiele, \"Structure formation in thin liquid films,\" in S. Kalliadasis and U. Thiele, editors, \"Thin films of Soft Matter,\" pages 25–93, Springer, Wien (2007).\n- [9] R. 
Xie, A. Karim, J. F. Douglas, C. C. Han, and R. A. Weiss, \"Spinodal dewetting of thin polymer films,\" Phys. Rev. Lett. 81, 1251–1254 (1998).\n- [10] R. Seemann, S. Herminghaus, and K. Jacobs, \"Dewetting patterns and molecular forces: A reconciliation,\" Phys. Rev. Lett. 86, 5534–5537 (2001).\n- [11] U. Thiele, M. G. Velarde, and K. Neuffer, \"Dewetting: Film rupture by nucleation in the spinodal regime,\" Phys. Rev. Lett. 87, 016104 (2001).\n- [12] M. Bestehorn and K. Neuffer, \"Surface patterns of laterally extended thin liquid films in three dimensions,\" Phys. Rev. Lett. 87, 046101 (2001).\n- [13] J. Becker, G. Grun, R. Seemann, H. Mantz, K. Jacobs, K. R. Mecke, and R. Blossey, \"Complex ¨ dewetting scenarios captured by thin-film models,\" Nat. Mater. 2, 59–63 (2003).\n- [14] C. Redon, F. Brochard-Wyart, and F. Rondelez, \"Dynamics of dewetting,\" Phys. Rev. Lett. 66, 715– 718 (1991).\n- [15] R. Seemann, S. Herminghaus, and K. Jacobs, \"Shape of a liquid front upon dewetting,\" Phys. Rev. Lett. 87, 196101 (2001).\n- [16] R. Fetzer, K. Jacobs, A. Munch, B. Wagner, and T. P. Witelski, \"New slip regimes and the shape of ¨ dewetting thin liquid films,\" Phys. Rev. Lett. 95, 127801 (2005).\n- [17] F. Brochard-Wyart and C. Redon, \"Dynamics of liquid rim instabilities,\" Langmuir 8, 2324–2329 (1992).\n- [18] G. Reiter and A. Sharma, \"Auto-optimization of dewetting rates by rim instabilities in slipping polymer films,\" Phys. Rev. Lett. 87, 166103 (2001).\n- [19] A. Munch and B. Wagner, \"Contact-line instability of dewetting thin films,\" Physica D ¨ 209, 178–190 (2005).", - "page_start": 25, - "page_end": 25, - "source_file": "1001.2669.pdf" - }, - { - "text": "fast evaporation [104, 105]. These complex experimental systems all represent systems of high practical interest that the theories presented here are not (yet) able to describe. 
Such experiments do, however, provide a strong motivation for further work to extend the theories presented here, as well as to develop new approaches.\n\nLet us finally mention that several topics were entirely excluded from our discussion here. First, we focused on a limited range of descriptions and did, for instance, not mention lattice Boltzmann, molecular dynamics or dissipative particle dynamics approaches that may also be employed to describe fluid suspensions [106–109]. Second, we have only discussed spatially homogeneous substrates. Patterned substrates are widely used in dewetting experiments [38, 110–112]. Theoretical descriptions are well developed for the dewetting of films of pure non-volatile liquids on such substrates [68, 113–119]. However, in the case of volatile liquids on heterogeneous substrates, much less work has been done. A third topic that we did not touch upon are possible continuum thin film approaches to demixing dewetting suspensions. We believe it is feasible to extend the diffuse interface theories such as model-H [120] to include the influence of evaporation in dewetting nanoparticle suspensions. For instance, such models have already been adapted to describe demixing free surface films of polymer blends [121–123].\n\n## Acknowledgments\n\nAJA and MJR gratefully acknowledge RCUK and EPSRC, respectively, for financial support. We acknowledge support by the European Union via the FP6 and FP7 Marie Curie schemes [Grants MRTN-CT-2004005728 (PATTERNS) and PITN-GA-2008-214919 (MULTIFLOW)].\n\n- [1] G. Reiter, \"Dewetting of thin polymer films,\" Phys. Rev. Lett. 68, 75–78 (1992).\n- [2] G. Reiter, \"Mobility of polymers in films thinner than their unperturbed size,\" Europhys. Lett. 23, 579–584 (1993).\n- [3] A. Sharma and G. Reiter, \"Instability of thin polymer films on coated substrates: Rupture, dewetting and drop formation,\" J. Colloid Interface Sci. 178, 383–399 (1996).\n- [4] P.-G. 
de Gennes, \"Wetting: Statics and dynamics,\" Rev. Mod. Phys. 57, 827–863 (1985).", - "page_start": 24, - "page_end": 24, - "source_file": "1001.2669.pdf" - }, - { - "text": "polymers which only result in fingers without side-branches [75] or fields of droplets left behind [18].\n\nA quantitative analysis shows that the mean number of fingers depends only very weakly on the average concentration of the nanoparticles ρ av n ; only the mean finger width increases with increasing concentration. However, decreasing the mobility (i.e., decreasing the diffusivity of the particles) leads to a much denser finger pattern and also causes the front instability to appear at an earlier stage, i.e., when the front instability is in its initial linear regime, it has a higher growth rate and a smaller characteristic wavelength (cf. Fig. 2(c) and (d)). Decreasing the effective chemical potential (increasing its absolute value) has a similar but less strong effect. For details see [41]. These findings lead to the conclusion that the determining factor for the front instability is the ratio of the time-scales of the different transport processes. In particular, the front becomes more unstable when the velocity of the dewetting front increases as compared to the mean diffusion velocity of the nanoparticles.\n\nIf the particle diffusivity is low, the front 'collects' the particles, resulting in a build up of the particles at the front that itself is slowed down. This makes the front unstable and any fluctuation along the front will trigger a transverse instability that results in an evolving fingering pattern. This happens even when the particle-liquid and particle-particle attractive interactions do not favour clustering (i.e. demixing of the liquid and the nanoparticles). In this regime, the instability is a purely dynamic effect and energetics plays no role in determining the number of fingers. 
We call this the 'transport regime'.\n\nTo illustrate the influence of energetics (characterized by the interaction parameters εij ) on fingering in Fig. 3 we display the dependence of the mean finger number on particle-liquid interaction strength εnl. For εnl ≥ 1.5 the mean finger number < f > is nearly constant; this is the transport regime. However, on decreasing εnl below 1.5, we observe a marked increase in the value of < f >, indicating that energy plays an important role in determining the number of fingers in this regime. In this parameter range, demixing of particles and liquid occurs at the moving front and increases its transverse instability. In this 'demixing regime', the wavelength of the fingering instability is determined by the dynamics *and* the energetics of the system. Decreasing εnl further (below 1.4 in Fig. 3) one first observes in regime (iii) a slight decrease in the average finger number. This is a geometric effect resulting from our one-dimensional finger counting routine: The fingers increasingly break up and the dried-in pattern looks progressively isotropic. In regime (iv), the measure hfi does not represent a finger number but instead indicates a decrease in the typical", - "page_start": 11, - "page_end": 11, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [81] A. J. Archer and M. Rauscher, \"Dynamical density functional theory for interacting brownian particles: Stochastic or deterministic?\" J. Phys. A-Math. Gen. 37, 9325–9333 (2004).\n- [82] A. J. Archer and R. Evans, \"Dynamical density functional theory and its application to spinodal decomposition,\" J. Chem. Phys. 121, 4246–4254 (2004).\n- [83] P. A. Monson, \"Mean field kinetic theory for a lattice gas model of fluids confined in porous materials,\" J. Chem. Phys. 128, 084701 (2008).\n- [84] P. M. Chaikin and T. C. Lubensky, *Principles of condensed matter physics*, Cambridge University Press (1997).\n- [85] J. S. 
Langer, \"An introduction to the kinetics of first-order phase transitions,\" in C. Godreche, editor, \"Solids far from Equilibrium,\" pages 297–363, Cambridge University Press (1992).\n- [86] M. A. Spaid and G. M. Homsy, \"Stability of Newtonian and viscoelastic dynamic contact lines,\" Phys. Fluids 8, 460–478 (1996).\n- [87] U. Thiele and E. Knobloch, \"Front and back instability of a liquid film on a slightly inclined plate,\" Phys. Fluids 15, 892–907 (2003).\n- [88] M. R. E. Warner, R. V. Craster, and O. K. Matar, \"Surface patterning via evaporation of ultrathin films containing nanoparticles,\" J. Colloid Interface Sci. 267, 92–110 (2003).\n- [89] O. K. Matar, R. V. Craster, and K. Sefiane, \"Dynamic spreading of droplets containing nanoparticles,\" Phys. Rev. E 76, 056315 (2007).\n- [90] J. J. Zhou, B. Dupuy, A. L. Bertozzi, and A. E. Hosoi, \"Theory for shock dynamics in particle-laden thin films,\" Phys. Rev. Lett. 94, 117803 (2005).\n- [91] B. P. Cook, A. L. Bertozzi, and A. E. Hosoi, \"Shock solutions for particle-laden thin films,\" SIAM J. Appl. Math. 68, 760–783 (2008).\n- [92] R. V. Craster, O. K. Matar, and K. Sefiane, \"Pinning, retraction, and terracing of evaporating droplets containing nanoparticles,\" Langmuir (2009), online available.\n- [93] D. Quemada, \"Rheology of concentrated disperse systems and minimum energy-dissipation principle I. Viscosity-concentration relationship,\" Rheol. Acta 16, 82–94 (1977).\n- [94] D. Quemada and C. Berli, \"Energy of interaction in colloids and its implications in rheological modeling,\" Adv. Colloid Interface Sci. 98, 51–85 (2002).\n- [95] J. J. Stickel and R. L. Powell, \"Fluid mechanics and rheology of dense suspensions,\" Annu. Rev. Fluid Mech. 37, 129–149 (2005).\n- [96] J. K. G. Dhont, *An Introduction to Dynamics of Colloids*, Elsevier, Amsterdam (1996).", - "page_start": 30, - "page_end": 30, - "source_file": "1001.2669.pdf" - }, - { - "text": "is similar to the size of the nanoparticles. 
At a certain distance from the macroscopic front, the ultrathin film starts to evolve a locally isotropic pattern of holes. The holes themselves grow in an unstable manner resulting in an array of isotropically branched structures as shown, e.g., above in Fig. 1. This indicates that at least some of the patterns described in the literature may have arisen from processes in similar ultrathin 'postcursor' films.\n\nThe existence of the ultrathin 'postcursor' film is an experimental finding that can be drawn on when choosing a theoretical approach to account for the pattern formation (see below). Note however, that at the moment there exists no explanation for its existence. A possible hypothesis is that the substrate strongly attracts the nanoparticles. As a result they form a dense suspension layer having a thickness roughly equal to the diameter of the nanoparticles. The observed mesoscopic dewetting front then actually correspond to an autophobic dewetting of a low concentration suspension from the higher concentration suspension on the surface of the substrate.\n\n### III. MODELLING APPROACHES\n\nModels of dewetting thin films of pure liquids or polymers are often based on thin film hydrodynamics. Starting from the Stokes equations, together with continuity and boundary conditions at the substrate and free surface, one applies a long-wave approximation (assuming small surface slopes and contact angles) [8, 63] and obtains a non-linear evolution equation for the film thickness profile h(x, y, t). In the case of volatile liquids one finds [55–58, 64]\n\n$$\\partial_{t}h\\,=\\,\\nabla\\cdot\\left[Q_{\\mathrm{e}}\\nabla\\frac{\\delta F}{\\delta h}\\right]\\,-\\,Q_{\\mathrm{e}}\\frac{\\delta F}{\\delta h},\\tag{1}$$\n\nwith the mobility functions Qc(h) = h 3/3η ≥ 0 (assuming Poiseuille flow in the film and no slip at the substrate; η is the dynamic viscosity) and Qe ≥ 0 for the convective and evaporative part of the dynamics, respectively. 
Qe is a rate constant that can be obtained from gas kinetic theory or from experiment [57]. Note that Eq. (1) only applies if the pressure in the vapour above the film is close to the saturation pressure. For alternative expressions that are used to describe the non-conserved evaporative dynamics see, e.g., Refs. [56, 57, 65–69]. Finally, ∇ = (∂x, ∂y), and ∂t , ∂x and ∂y denote partial derivatives w.r.t. time and the coordinates.\n\nFocusing on the influence of capillarity and wettability only, the energy functional F[h] is given by\n\n$$F[h]\\,=\\,\\int dx\\int dy\\left[\\frac{\\gamma}{2}(\\nabla h)^{2}+f(h)-\\mu h\\right]\\tag{2}$$", - "page_start": 6, - "page_end": 6, - "source_file": "1001.2669.pdf" - }, - { - "text": "dewetted liquid. The front recedes until all liquid is collected in a central drop. Since no liquid evaporates [Qnc = 0 in Eq. (1)], the particle concentration does not change during the process.\n\nThe situation changes when allowing for evaporation (Qnc > 0). Now the front may retract by convection *and/or* evaporation. Evaporation leads to the possibility of a strong increase in the particle concentration at the contact line as evaporation is strongest there. Due to the strong nonlinear dependence of the viscosity on the particle concentration, this may lead to a dramatic decrease of the convective contribution to the front velocity. For moderate evaporation rates, this may result in a (temporary) self-pinning of the front. Within the present basic model, the process can (after complete dry-in) result in three different basic deposition patterns: (i) for very fast evaporation rates, all other processes occur over time scales that are much larger. In particular, the effects of convective redistribution of the liquid are neglectable. As a result one finds that a nearly homogeneous film of nanoparticles of thickness hp = φ0h0 is deposited (see Fig. 6(a)). Convection only results in the small heap of material visible at the left hand side of Fig. 
6(a). The decrease in hp on the right side of Fig. 6(a) arises due to the diffusion of particles to the right of the initial front position; (ii) for very low evaporation rates, the film dynamics is dominated by convective dewetting as this process acts on a much shorter time scale than evaporation. As a result, all the liquid is collected into a drop before evaporation slowly removes the remaining solvent. Under these conditions most of the nanoparticles are deposited in a single heap (see Fig. 6(c)). Depending on the diffusivity, the heap might be highest at the centre or show a depression there; (iii) at intermediate evaporation rates, one may observe the deposition of a nanoparticle ring around a region with a nanoparticle film of much lower height. At the centre deposition might increase again (see Fig. 6(b)).\n\nThe most intriguing feature is the ring formation that has been observed experimentally for suspensions of very different particle sizes ranging from nanometers [32, 36, 46, 47] to hundreds of micrometers. Pinning of the contact line and thermal Marangoni effects are often mentioned as necessary conditions for the ring formation. The contact line pinning is often assumed to result from substrate heterogeneities. Film height and concentration profiles at various instants during the dewetting process are displayed in Fig. 7. The profiles are from before, at and after self-pinning of the contact line. In Fig. 8 we display a space-time plot for the complete process. At first, the front recedes in the same manner as when there is no evaporation, but now driven by convection and evaporation. A small capillary rim forms that collects all the dewetted liquid that does not evaporate. The particle concentration slowly increases at the contact line (Fig. 7(a) and regime", - "page_start": 20, - "page_end": 20, - "source_file": "1001.2669.pdf" - }, - { - "text": "FIG. 
8: (Colour online) Space-time plots are given for (left) the film thickness h and (right) the nanoparticle layer height hp = hφ. The plot corresponds to the complete evolution resulting in the ring profile of Fig. 6(b). In both panels bright [dark] parts denote high [low] regions. The prominent central dark-bright border in the left panel indicates the change of the position of the contact line in time. Over time, four regimes can be distinguished: (i) fast motion before pinning, (ii) nearly no front motion during self-pinning, (iii) slow motion after depinning, and (iv) final evaporation from the center.\n\nshould also be investigated further in the simple case presented here.\n\n### IV. CONCLUSION\n\nWe have discussed recent work on pattern formation processes in films and drops of evaporating suspensions/solutions of polymers and particles. After reviewing experiments on suspensions of thiol-coated gold nanoparticles in toluene we have focused on the modelling of the transport and phase change processes involved. A theoretical approach to the modelling of the hydrodynamics on the mesoscale has been described as well as more microscopic models for the dynamics in the observed nanoscopic 'postcursor' film. In particular, we have introduced (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nThe kinetic Monte Carlo model and the dynamical density functional theory can both be used to investigate and understand the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor' film that remains behind the mesoscopic dewetting front. 
They are, however, not capable of describing the dynamical processes in a meso", - "page_start": 22, - "page_end": 22, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2669.pdf", - "query": "Concerning the dewetting of nanoparticle solutions, how does the concentration of nanoparticle affect the main finger's width ?", - "target_page": 12, - "target_passage": "A quantitative analysis shows that the mean number of fingers depends only very weakly on the av- erage concentration of the nanoparticles ; only the mean finger width increases with increasing concentration", - "chunk_present": { - "presence": true, - "index": 9 - } - }, - "top_chunk": [ - { - "text": "### I. INTRODUCTION\n\nThe patterns formed in dewetting processes have attracted strong interest since Reiter analysed the process quantitatively in the early nineties. In these experiments, that proved to be a paradigm in our understanding of dewetting, a uniform thin film of polystyrene (tens of nanometers thick) is deposited on a flat silicon oxide substrate is brought above the glass transition temperature. The film ruptures in several places, forming holes which subsequently grow, competing for space. As a result, a random polygonal network of liquid rims emerges. The rims may further decay into lines of small drops due to a Rayleigh-type instability [1–3]. The related problems of retracting contact lines on partially wetting substrates and the opening of single holes in rather thick films have also been studied [4, 5].\n\nSubsequent work has mainly focused on many different aspects of the dewetting process for simple non-volatile liquids and polymers (for reviews see Refs. [6–8]). 
All stages of the dewetting of a film are studied: the initial film rupture via nucleation or a surface instability (called spinodal dewetting) [1, 9–13], the growth process of individual holes [14–16], the evolution of the resulting hole pattern [3, 13], and the stability of the individual dewetting fronts [17–19]. We note in passing, that descriptions of dewetting patterns may also be found in historic papers, particularly for the dewetting of a liquid film on a liquid substrate. Tomlinson [20, footnote 18 on p. 40] considered turpentine on water and Marangoni [21, p. 352f] oil on water.\n\nMore recently, interest has turned to the dewetting processes of solutions and suspensions. However, these systems have not yet been investigated in any great depth. Such systems are complicated because their behaviour is determined by the interplay between the various solute (or colloid) and solvent transport processes. Furthermore, the solvents that are used often evaporate, i.e., one has to distinguish between 'normal' convective dewetting and evaporative dewetting. A number of experiments have been performed employing (colloidal) solutions of polymers [22–25], macromolecules like collagen and DNA [26–31] and nanoparticles [32–40]. The latter are sometimes referred to as 'nanofluids'. The initial focus of much of the research in the field has been on investigating the structures that are formed which are similar to the ones observed in the 'classical' dewetting of non-volatile liquids. Labyrinthine structures and polygonal networks result from spinodal dewetting and heterogeneous nucleation and growth, respectively. They are 'decorated' with the solute and therefore conserve the transient dewetting pattern as a dried-in structure when all the solvent has evaporated [28, 34]. The picture is, however, not complete. The solute may", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2669.pdf" - }, - { - "text": "time scales for evaporation and diffusion. 
A large mobility M indicates fast diffusion as compared to evaporation. A trial move is accepted with the probability pacc = min[1, exp(−∆E/kT)] where k is the Boltzmann constant, T the temperature and ∆E is the change in energy resulting from the potential move. Note that particles are only allowed to move into wet areas of the substrate, i.e., onto cells with l = 1. This models zero diffusivity of the particles on a dry substrate. The replaced liquid fills the site left by the nanoparticle.\n\nWithout nanoparticles, the behaviour of the model is well known as it reduces to the classical two-dimensional Ising model [74]. For kT < kTc ≈ 0.567 liquid and vapour coexist when µ = µcoex = −2. For µ > −2 [µ < −2] eventually the liquid [vapour] dominates. A straight liquidgas interface will recede [advance] for µ < −2 [µ > −2], i.e. one finds evaporative dewetting [wetting] fronts. If one starts, however, with a substrate covered homogeneously by the liquid, for µ < −2 the film will dewet via a nucleation or spinodal-like process. If the nanoparticles are present, they form dried-in structures when all the liquid evaporates. The final structures do not normally change any further – at least on short time scales. However, if the liquid wets the particles (i.e. is attracted to the particles), over long times there might be a coarsening of the structures, facilitated by the adsorbed liquid. The dried-in patterns depend on the particular pathway taken by the evaporative dewetting process. They range from labyrinthine to polygonal network structures or holes in a dense particle layer. Some typical patterns are displayed in Fig. 2, for cases when the average surface coverage of the nanoparticles ρ av n = 0.2. Panels (a) and (b) result from a spinodal-like and nucleation and growth process, respectively. 
At first sight they look very similar to the patterns seen for the pure solvent and one might argue that the particles solely act as passive tracers and preserve the transient volatile dewetting structures of the solvent. This was suggested in Refs. [26–28] for dewetting collagen solutions. However, panels (c) and (d) indicate that the particles may at times play a rather more significant role. When the diffusion of the particles is slow, the evaporative dewetting fronts become transversely unstable and may result in strongly ramified patterns. This instability is caused by the nanoparticles. The lower their mobility, the stronger the fingering effect, i.e., there are more fingers in (c) than in (d) because in the latter the mobility is larger.\n\nThe front instability is intriguing as it results in strongly branched structures. As the dewetting front moves, new branches are continuously created and existing branches merge at the moving contact line. However, the mean finger number in the streamwise direction of the resulting ramified pattern is a constant. This behaviour is in contrast to the front instabilities found for dewetting", - "page_start": 9, - "page_end": 9, - "source_file": "1001.2669.pdf" - }, - { - "text": "also shift the spinodal and binodal lines as compared to the locations of these lines in the phase diagram for the pure solvent [41]. As a consequence, the solute concentration influences the hole nucleation rate. More importantly, the solute particles may also destabilise the dewetting fronts. As a result, one may find strongly ramified structures in all three systems [23, 25, 40, 42]. A selection of images exhibiting some of the possible structures is displayed in Fig.1.\n\nFor volatile solvents, the contact lines retract even for wetting fluids. It has been found that such evaporatively receding contact lines may deposit very regular line or ring patterns parallel to the moving contact line [24, 43]. 
The deposition of a single ring of colloids from a evaporating drop of colloidal suspension is well known as the 'coffee stain effect' [44]. Detailed investigations reveal the emergence of rich structures including multiple irregular rings, networks, regular droplet patterns, sawtooth patterns, Sierpinski carpets, and – in the case of DNA – liquid crystalline structures [22, 30, 45–49]. The deposition of regularly spaced straight lines orthogonal to the moving contact line has also been reported [50]. Droplet patterns may as well be created employing solvent-induced dewetting of glassy polymer layers below the glass transition temperature [51–53].\n\nNote that the dewetting of pure volatile liquids has also been studied experimentally [54] and theoretically [55–58]. In this case, different contact line instabilities have been observed for evaporating liquid drops [59, 60].\n\nIn the present article we review and preview the experiments and in particular the various modelling approaches for dewetting suspensions of (nano-)particles in volatile partially wetting solvents. After reviewing the basic experimental results in Section II, we discuss in Section III several theoretical approaches. In particular, we present a kinetic Monte Carlo model in Section III A, a dynamic density functional theory in Section III B, and a thin film evolution equation in Section III C. Finally, we conclude in Section IV by discussing advantages and shortcomings of the individual approaches and future challenges to all of them.\n\n### II. EXPERIMENT WITH NANOPARTICLE SOLUTIONS\n\nWe focus on experiments that use monodisperse colloidal suspensions of thiol-passivated gold nanoparticles in toluene [33, 34, 37–40, 61]. The gold core of 2 – 3 nm diameter is coated by a layer of alkyl-thiol molecules. The length of the carbon backbone of the thiol used in the experiments ranges from 6 to 12 carbon atoms (C6 to C12) [40]. 
By varying the chain length, one can control", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2669.pdf" - }, - { - "text": "small holes. The competition for space results in a fine-meshed polygonal network of nanoparticle deposits. The concentration of particles is much higher at the network nodes – an effect that can not been seen within the KMC model. As the particles attract the liquid there remains some liquid on the substrate where the nanoparticles are.\n\nFig. 5 gives snapshots of the evolution of a fingering instability for a retracting dewetting front. At early times the straight front shows a rather short-wave instability, about 16 wiggles can be seen. However, they are only a transient: the finger pattern coarsens rapidly till only about 7 fingers remain. The fingering then becomes stationary, i.e., just as in the KMC, the mean finger number remains constant, although new branches are continuously created and old branches join each other. In general, the results on fingering agree well with results obtained using the KMC model [41]. From this we conclude that jamming of discrete particles is not a necessary factor for causing the instability, since the fingering is seen here in a continuum model with a diffusion constant that is independent of the nanoparticle concentration. The DDFT is better suited than the KMC for investigations of the early instability stages: they are more easy to discern without the discrete background noise of the KMC. Furthermore, one may perform a linear stability analysis of the one-dimensional undisturbed streamwise front profiles with respect to transverse perturbations (in analogy to the approach used in Refs. [19, 86, 87]).\n\n## C. Thin film hydrodynamics\n\nThe previous two sections focused on two approaches to describe the experimentally observed patterning dynamics in the ultrathin postcursor film left behind by a mesoscopic receding dewetting front. 
Although both the kinetic Monte Carlo model and the dynamical density functional theory are able to describe well the processes in the ultrathin film, they can not be employed to describe mesoscale hydrodynamics. A relatively simple model for the latter can be derived in the framework of a long-wave or lubrication equation [8, 63]. We will illustrate here the approach by considering an isothermal situation where the nanoparticles are not surface active, i.e., they do not act as surfactants. For a model incorporating the effects of latent heat generation and surfaceactive particles resulting in thermal and solutal Marangoni stresses, see Ref. [88]. A description of spreading particle solutions incorporating a structural disjoining pressure has also been considered [89]. For related work on particle-laden film flow on an incline see Refs. [90, 91].\n\nOne starts from the Stokes equations, together with continuity, no-slip boundary conditions at the", - "page_start": 17, - "page_end": 17, - "source_file": "1001.2669.pdf" - }, - { - "text": "FIG. 5: (Colour online) Density profiles for the situation where the substrate is covered by nanoparticles with average density ρ av n = 0.3 and with the liquid excluded from the region y < 0. The top row shows the nanoparticle density profiles and bottom row the corresponding liquid density profiles at the times t/tl = 1000 (left), 10000 (middle) and 30000 (right), where tl = 1/kTMnc l σ 2 . The parameters are kT /εll = 0.8, εnl/εll = 0.6, εnn = 0, α = 0.2Mnc l σ 4 , Mc l = 0, ρl(t = 0) = 0.9 ± ξ (where ξ represents white noise of amplitude 0.05) and (µ − µcoex)/kT = −0.78.\n\nThis theory allows us to study the time evolution of the evaporating film of nanoparticle suspension without some of the restrictions of the kinetic Monte Carlo model. Here, however, we illustrate its application in similar parameter regimes as used above for the KMC. 
We focus on two examples: (i) the spinodal dewetting of a initially flat film of nanoparticle suspension characterised by constant ρl and ρn (Fig. 4); and (ii) the retraction of a dewetting front that is unstable with respect to a fingering instability (Fig. 5).\n\nFig. 4 presents two pairs of snapshots from a purely evaporative dewetting process deep inside the parameter region of the phase diagram where spinodal dewetting occurs. For small times the film becomes unstable showing a typical spinodal labyrinthine pattern with a typical wavelength. The nanoparticles concentrate where the remaining liquid is situated. However, they are 'slow' in their reaction: when ρl already takes values in the range 0.08 – 0.83, the nanoparticle concentration has only deviated by about 25% from its initial value. The film thins strongly forming many", - "page_start": 16, - "page_end": 16, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [34] P. Moriarty, M. D. R. Taylor, and M. Brust, \"Nanostructured cellular networks,\" Phys. Rev. Lett. 89, 248303 (2002).\n- [35] E. Rabani, D. R. Reichman, P. L. Geissler, and L. E. Brus, \"Drying-mediated self-assembly of nanoparticles,\" Nature 426, 271–274 (2003).\n- [36] L. V. Govor, G. Reiter, J. Parisi, and G. H. Bauer, \"Self-assembled nanoparticle deposits formed at the contact line of evaporating micrometer-size droplets,\" Phys. Rev. E 69, 061609 (2004).\n- [37] C. P. Martin, M. O. Blunt, and P. Moriarty, \"Nanoparticle networks on silicon: Self-organized or disorganized?\" Nano Lett. 4, 2389–2392 (2004).\n- [38] C. P. Martin, M. O. Blunt, E. Pauliac-Vaujour, A. Stannard, P. Moriarty, I. Vancea, and U. Thiele, \"Controlling pattern formation in nanoparticle assemblies via directed solvent dewetting,\" Phys. Rev. Lett. 99, 116103 (2007).\n- [39] A. Stannard, C. P. Martin, E. Pauliac-Vaujour, P. Moriarty, and U. Thiele, \"Dual-scale pattern formation in nanoparticle assemblies,\" J. Chem. Phys. C 112, 15195–15203 (2008).\n- [40] E. 
Pauliac-Vaujour, A. Stannard, C. P. Martin, M. O. Blunt, I. Notingher, P. J. Moriarty, I. Vancea, and U. Thiele, \"Fingering instabilities in dewetting nanofluids,\" Phys. Rev. Lett. 100, 176102 (2008).\n- [41] I. Vancea, U. Thiele, E. Pauliac-Vaujour, A. Stannard, C. P. Martin, M. O. Blunt, and P. J. Moriarty, \"Front instabilities in evaporatively dewetting nanofluids,\" Phys. Rev. E 78, 041601 (2008).\n- [42] U. Thiele, *Entnetzung von Kollagenfilmen*, Ph.D. thesis, Technische Universitat Dresden (1998). ¨\n- [43] H. Yabu and M. Shimomura, \"Preparation of self-organized mesoscale polymer patterns on a solid substrate: Continuous pattern formation from a receding meniscus,\" Adv. Funct. Mater. 15, 575–581 (2005).\n- [44] R. D. Deegan, O. Bakajin, T. F. Dupont, G. Huber, S. R. Nagel, and T. A. Witten, \"Capillary flow as the cause of ring stains from dried liquid drops,\" Nature 389, 827–829 (1997).\n- [45] E. Adachi, A. S. Dimitrov, and K. Nagayama, \"Stripe patterns formed on a glass-surface during droplet evaporation,\" Langmuir 11, 1057–1060 (1995).\n- [46] R. D. Deegan, \"Pattern formation in drying drops,\" Phys. Rev. E 61, 475–485 (2000).\n- [47] R. D. Deegan, O. Bakajin, T. F. Dupont, G. Huber, S. R. Nagel, and T. A. Witten, \"Contact line deposits in an evaporating drop,\" Phys. Rev. E 62, 756–765 (2000).\n- [48] L. Shmuylovich, A. Q. Shen, and H. A. Stone, \"Surface morphology of drying latex films: Multiple ring formation,\" Langmuir 18, 3441–3445 (2002).\n- [49] V. X. Nguyen and K. J. Stebe, \"Patterning of small particles by a surfactant-enhanced Marangoni-", - "page_start": 27, - "page_end": 27, - "source_file": "1001.2669.pdf" - }, - { - "text": "distance between particle clusters resulting from the demixing process that occurs already in the bulk liquid and is not related to the front instability at all. Note that one finds a similar sequence of regimes (i) to (iv) when increasing the particle-particle interaction strengths for fixed εnl (see Ref. 
[41]) for further details.\n\nFIG. 3: (Colour online) Dependence of the mean finger number left behind by the unstable dewetting front on the particle-liquid interaction strength εnl. The regions marked (i) to (iv) are discussed in the main text. The insets display typical snapshots obtained in the four different regions. Particles are black, liquid is grey (green online) and the empty substrate is white. The remaining parameters are kT = 0.2, M = 20, µ = −2.2, ρ av n = 0.1, nn = 2.0, domain size 1200 × 1200. For the insets, from left to right, nl = 1.2, 1.4, 1.45, 1.8.\n\nWe note also that the fingering process may be viewed as self-optimising the front motion – i.e. the front keeps its average velocity constant by expelling particles into the fingers. A similar effect exists for dewetting polymer films [18], where liquid is expelled from the growing moving rim which collects the dewetted polymer. There, the surplus liquid is left on the surface as a droplet pattern.\n\nThe kinetic Monte Carlo model is a very useful tool that helps one to understand the pattern formation in drying nanoparticle suspensions. One has, however, to keep in mind the restrictions", - "page_start": 12, - "page_end": 12, - "source_file": "1001.2669.pdf" - }, - { - "text": "Benard instability,\" Phys. Rev. Lett. 88, 164501 (2002).\n\n- [50] J. Huang, F. Kim, A. R. Tao, S. Connor, and P. Yang, \"Spontaneous formation of nanoparticle stripe patterns through dewetting,\" Nat. Mater. 4, 896–900 (2005).\n- [51] S. H. Lee, P. J. Yoo, S. J. Kwon, and H. H. Lee, \"Solvent-driven dewetting and rim instability,\" J. Chem. Phys. 121, 4346–4351 (2004).\n- [52] L. Xu, T. F. Shi, P. K. Dutta, and L. An, \"Rim instability by solvent-induced dewetting,\" J. Chem. Phys. 127, 144704 (2007).\n- [53] L. Xu, T. F. Shi, and L. J. An, \"The dewetting dynamics of the polymer thin film by solvent annealing,\" J. Chem. Phys. 129, 044904 (2008).\n- [54] M. Elbaum and S. G. 
Lipson, \"How does a thin wetted film dry up?\" Phys. Rev. Lett. 72, 3562–3565 (1994).\n- [55] N. Samid-Merzel, S. G. Lipson, and D. S. Tannhauser, \"Pattern formation in drying water films,\" Phys. Rev. E 57, 2906–2913 (1998).\n- [56] A. Padmakar, K. Kargupta, and A. Sharma, \"Instability and dewetting of evaporating thin water films on partially and completely wettable substrates,\" J. Chem. Phys. 110, 1735–1744 (1999).\n- [57] A. V. Lyushnin, A. A. Golovin, and L. M. Pismen, \"Fingering instability of thin evaporating liquid films,\" Phys. Rev. E 65, 021602 (2002).\n- [58] L. M. Pismen, \"Spinodal dewetting in a volatile liquid film,\" Phys. Rev. E 70, 021601 (2004).\n- [59] C. Poulard, O. Benichou, and A. M. Cazabat, \"Freely receding evaporating droplets,\" Langmuir 19, 8828–8834 (2003).\n- [60] Y. Gotkis, I. Ivanov, N. Murisic, and L. Kondic, \"Dynamic structure formation at the fronts of volatile liquid drops,\" Phys. Rev. Lett. 97, 186101 (2006).\n- [61] E. Pauliac-Vaujour and P. Moriarty, \"Meniscus-mediated organization of colloidal nanoparticles,\" J. Phys. Chem. C 111, 16255–16260 (2007).\n- [62] C. Gigault, K. Dalnoki-Veress, and J. R. Dutcher, \"Changes in the morphology of self-assembled polystyrene microsphere monolayers produced by annealing,\" J. Colloid Interface Sci. 243, 143–155 (2001).\n- [63] A. Oron, S. H. Davis, and S. G. Bankoff, \"Long-scale evolution of thin liquid films,\" Rev. Mod. Phys. 69, 931–980 (1997).\n- [64] U. Thiele, \"Thin film evolution equations from (evaporating) dewetting liquid layers to epitaxial growth,\" J. Phys.-Cond. Mat. (2010), (at press).", - "page_start": 28, - "page_end": 28, - "source_file": "1001.2669.pdf" - }, - { - "text": "# Abstract\n\nWe review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. 
These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. In particular, we focus on (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nModels (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed.\n\nFinally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. We employ coupled evolution equations for the film thickness profile and mean particle concentration. The model is used to discuss the self-pinning and de-pinning of a contact line related to the 'coffee-stain' effect.\n\nIn the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.\n\nThe paper is published in: *J. Phys.-Cond. Mat.* 21, 264016 (2009), in the Volume \"Nanofluids on solid substrates\" and can be obtained at http://dx.doi.org/10.1088/0953-8984/21/26/264016", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2669.pdf" - }, - { - "text": "polymers which only result in fingers without side-branches [75] or fields of droplets left behind [18].\n\nA quantitative analysis shows that the mean number of fingers depends only very weakly on the average concentration of the nanoparticles ρ av n ; only the mean finger width increases with increasing concentration. 
However, decreasing the mobility (i.e., decreasing the diffusivity of the particles) leads to a much denser finger pattern and also causes the front instability to appear at an earlier stage, i.e., when the front instability is in its initial linear regime, it has a higher growth rate and a smaller characteristic wavelength (cf. Fig. 2(c) and (d)). Decreasing the effective chemical potential (increasing its absolute value) has a similar but less strong effect. For details see [41]. These findings lead to the conclusion that the determining factor for the front instability is the ratio of the time-scales of the different transport processes. In particular, the front becomes more unstable when the velocity of the dewetting front increases as compared to the mean diffusion velocity of the nanoparticles.\n\nIf the particle diffusivity is low, the front 'collects' the particles, resulting in a build up of the particles at the front that itself is slowed down. This makes the front unstable and any fluctuation along the front will trigger a transverse instability that results in an evolving fingering pattern. This happens even when the particle-liquid and particle-particle attractive interactions do not favour clustering (i.e. demixing of the liquid and the nanoparticles). In this regime, the instability is a purely dynamic effect and energetics plays no role in determining the number of fingers. We call this the 'transport regime'.\n\nTo illustrate the influence of energetics (characterized by the interaction parameters εij ) on fingering in Fig. 3 we display the dependence of the mean finger number on particle-liquid interaction strength εnl. For εnl ≥ 1.5 the mean finger number < f > is nearly constant; this is the transport regime. However, on decreasing εnl below 1.5, we observe a marked increase in the value of < f >, indicating that energy plays an important role in determining the number of fingers in this regime. 
In this parameter range, demixing of particles and liquid occurs at the moving front and increases its transverse instability. In this 'demixing regime', the wavelength of the fingering instability is determined by the dynamics *and* the energetics of the system. Decreasing εnl further (below 1.4 in Fig. 3) one first observes in regime (iii) a slight decrease in the average finger number. This is a geometric effect resulting from our one-dimensional finger counting routine: The fingers increasingly break up and the dried-in pattern looks progressively isotropic. In regime (iv), the measure hfi does not represent a finger number but instead indicates a decrease in the typical", - "page_start": 11, - "page_end": 11, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2669.pdf", - "query": "Which of ultrathin film or mesoscale hydrodynamics are best explained by kinetic Monte Carlo models ? ", - "target_page": 18, - "target_passage": "lthough both the kinetic Monte Carlo model and the dynamical density functional theory are able to describe well the processes in the ultrathin film, they can not be employed to describe mesoscale hydrodynamics", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "FIG. 8: (Colour online) Space-time plots are given for (left) the film thickness h and (right) the nanoparticle layer height hp = hφ. The plot corresponds to the complete evolution resulting in the ring profile of Fig. 6(b). In both panels bright [dark] parts denote high [low] regions. The prominent central dark-bright border in the left panel indicates the change of the position of the contact line in time. Over time, four regimes can be distinguished: (i) fast motion before pinning, (ii) nearly no front motion during self-pinning, (iii) slow motion after depinning, and (iv) final evaporation from the center.\n\nshould also be investigated further in the simple case presented here.\n\n### IV. 
CONCLUSION\n\nWe have discussed recent work on pattern formation processes in films and drops of evaporating suspensions/solutions of polymers and particles. After reviewing experiments on suspensions of thiol-coated gold nanoparticles in toluene we have focused on the modelling of the transport and phase change processes involved. A theoretical approach to the modelling of the hydrodynamics on the mesoscale has been described as well as more microscopic models for the dynamics in the observed nanoscopic 'postcursor' film. In particular, we have introduced (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nThe kinetic Monte Carlo model and the dynamical density functional theory can both be used to investigate and understand the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor' film that remains behind the mesoscopic dewetting front. They are, however, not capable of describing the dynamical processes in a meso", - "page_start": 22, - "page_end": 22, - "source_file": "1001.2669.pdf" - }, - { - "text": "where γ is the liquid-gas surface tension and f(h) is a local free energy term that describes the wettability of the surface. Since µ corresponds to a chemical potential, the term µh may either bias the system towards the liquid or towards the gas state. The variation of F w.r.t. h gives the pressure. It contains the curvature (Laplace) pressure −γ∆h and the disjoining pressure Π(h) = −∂hf(h). Many different forms for the latter are in use (see, e.g., Refs. [4, 8, 63, 70–73]).\n\nFor the present system a thin film description using Eq. (1) is not appropriate because the nanoparticles are not taken into account. However, under certain conditions one can augment equation (1) for the evolution of the film thickness by coupling it to an equation for the evolution of the mean particle concentration. 
The resulting model is able to describe the behaviour of an evaporating solution on the meso- and macroscale. Such an approach is briefly discussed below in Section III C. We should expect such a model to describe the mesoscopic dewetting front discussed above. However, the theory is less suited to a description of the dewetting dynamics of the ultrathin postcursor film.\n\nThe dewetting of the ultrathin film of highly concentrated suspension may be described by a discrete stochastic model such as, for instance, a kinetic Monte Carlo (KMC) model based solely on evaporation/condensation dynamics of the solvent and diffusion of the solute [35, 39, 41]. The validity of this strong assumption regarding the relevant transport processes can be confirmed from an estimate based on Eq. (1): The pressure p = δF/δh drives convection and evaporation. The convective mobility is proportional to h 3 , i.e., it is large for thick films but decreases strongly with reduced film thickness. The evaporative mobility, however, is a constant, implying that evaporation will dominate below a certain (cross-over) thickness. For the parameter values of Ref. [57] and a small contact angle (≈ 0.01), the cross-over thickness is in the range of 1-5 nanometers. This estimate justifies the neglect of convective transport in a description of the postcursor film and may explain why one has such good agreement between the experimentally observed patterns and the patterns obtained from a purely two-dimensional (single layer) kinetic Monte Carlo model [35]. We introduce the KMC model below in Section III A.\n\nIn several respects, however, the kinetic Monte Carlo model is rather simplistic, limiting its potential applications. For instance, the thermodynamic chemical potential as well as any wetting interaction of the solvent with the substrate are collected in a single parameter – an effective chemical potential. 
This implies that any influence of a disjoining pressure is 'smeared out' over the whole system and that no distinction between the short- and the long-range parts of the disjoining pressure is possible. It is furthermore based on the assumption that evaporation/condensation is", - "page_start": 7, - "page_end": 7, - "source_file": "1001.2669.pdf" - }, - { - "text": "small holes. The competition for space results in a fine-meshed polygonal network of nanoparticle deposits. The concentration of particles is much higher at the network nodes – an effect that can not been seen within the KMC model. As the particles attract the liquid there remains some liquid on the substrate where the nanoparticles are.\n\nFig. 5 gives snapshots of the evolution of a fingering instability for a retracting dewetting front. At early times the straight front shows a rather short-wave instability, about 16 wiggles can be seen. However, they are only a transient: the finger pattern coarsens rapidly till only about 7 fingers remain. The fingering then becomes stationary, i.e., just as in the KMC, the mean finger number remains constant, although new branches are continuously created and old branches join each other. In general, the results on fingering agree well with results obtained using the KMC model [41]. From this we conclude that jamming of discrete particles is not a necessary factor for causing the instability, since the fingering is seen here in a continuum model with a diffusion constant that is independent of the nanoparticle concentration. The DDFT is better suited than the KMC for investigations of the early instability stages: they are more easy to discern without the discrete background noise of the KMC. Furthermore, one may perform a linear stability analysis of the one-dimensional undisturbed streamwise front profiles with respect to transverse perturbations (in analogy to the approach used in Refs. [19, 86, 87]).\n\n## C. 
Thin film hydrodynamics\n\nThe previous two sections focused on two approaches to describe the experimentally observed patterning dynamics in the ultrathin postcursor film left behind by a mesoscopic receding dewetting front. Although both the kinetic Monte Carlo model and the dynamical density functional theory are able to describe well the processes in the ultrathin film, they can not be employed to describe mesoscale hydrodynamics. A relatively simple model for the latter can be derived in the framework of a long-wave or lubrication equation [8, 63]. We will illustrate here the approach by considering an isothermal situation where the nanoparticles are not surface active, i.e., they do not act as surfactants. For a model incorporating the effects of latent heat generation and surfaceactive particles resulting in thermal and solutal Marangoni stresses, see Ref. [88]. A description of spreading particle solutions incorporating a structural disjoining pressure has also been considered [89]. For related work on particle-laden film flow on an incline see Refs. [90, 91].\n\nOne starts from the Stokes equations, together with continuity, no-slip boundary conditions at the", - "page_start": 17, - "page_end": 17, - "source_file": "1001.2669.pdf" - }, - { - "text": "# Abstract\n\nWe review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. 
In particular, we focus on (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nModels (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed.\n\nFinally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. We employ coupled evolution equations for the film thickness profile and mean particle concentration. The model is used to discuss the self-pinning and de-pinning of a contact line related to the 'coffee-stain' effect.\n\nIn the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.\n\nThe paper is published in: *J. Phys.-Cond. Mat.* 21, 264016 (2009), in the Volume \"Nanofluids on solid substrates\" and can be obtained at http://dx.doi.org/10.1088/0953-8984/21/26/264016", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2669.pdf" - }, - { - "text": "scopic film. We have seen that the KMC model is able to describe the interplay of solute diffusion within the solvent and solvent evaporation/condensation. 
It also takes the liquid-liquid, liquidparticle and particle-particle interactions into account and therefore allows us to distinguish different regimes of the transverse (fingering) instability of the evaporative dewetting front: a transport regime where the instability is almost completely independent of the interaction strengths and a demixing regime where particles and liquid demix at the receding front thereby increasing its transverse instability.\n\nThe dynamical density functional theory describes the coupled dynamics of the density fields of the liquid and the nanoparticles. In the form described above (i.e. based on the two-dimensional hamiltonian (3)) we obtain a simple theory that allows us to study the time evolution of the evaporating ultrathin film and also to investigate the influence of processes such as surface diffusion by the liquid, which are not incorporated in the KMC model. However, it is straightforward to extend the theory to consider a fully three-dimensional fluid film, in which one can distinguish between short- and long-range interactions of solvent and/or solute with the substrate. We have, however, restricted the examples given here to situations that can also be described using the KMC model. A further exploration will be presented elsewhere.\n\nFinally, we have discussed a simple thin film model for the hydrodynamics on the mesoscale. It results from a long-wave approximation and consists of coupled evolution equations for the film thickness profile and the mean particle concentration. 
It has been used to discuss the self-pinning of receding contact lines that is related to the formation of rings of dried-in particles (coffeestain effect) that frequently occurs when films or drops of solutions or suspensions dewet by the combined effects of convection and evaporation.\n\nOne of the primary goals of researchers in this field, is the search for simple-to-use techniques that allow one to produce hierarchically structured functional layers for a wide range of applications such as, e.g., organic solar cells [98]. This means that the experiments advance very rapidly towards increasingly complex systems. For example, there have been investigations of the influence of the phase behaviour on the drying of droplets of a suspension of hard-sphere colloidal particles and non-adsorbing polymer [99], of the instabilities and the formation of drops in evaporating thin films of binary solutions [100] that may lead to treelike patterns [101], of effects of a secondary phase separation on evaporation-induced pattern formation in polymer films [102], and of the influence of an imposed flow on decomposition and deposition processes in a sliding ridge of evaporating solution of a binary polymer mixture [103] and of the influence of rather", - "page_start": 23, - "page_end": 23, - "source_file": "1001.2669.pdf" - }, - { - "text": "is similar to the size of the nanoparticles. At a certain distance from the macroscopic front, the ultrathin film starts to evolve a locally isotropic pattern of holes. The holes themselves grow in an unstable manner resulting in an array of isotropically branched structures as shown, e.g., above in Fig. 1. This indicates that at least some of the patterns described in the literature may have arisen from processes in similar ultrathin 'postcursor' films.\n\nThe existence of the ultrathin 'postcursor' film is an experimental finding that can be drawn on when choosing a theoretical approach to account for the pattern formation (see below). 
Note however, that at the moment there exists no explanation for its existence. A possible hypothesis is that the substrate strongly attracts the nanoparticles. As a result they form a dense suspension layer having a thickness roughly equal to the diameter of the nanoparticles. The observed mesoscopic dewetting front then actually correspond to an autophobic dewetting of a low concentration suspension from the higher concentration suspension on the surface of the substrate.\n\n### III. MODELLING APPROACHES\n\nModels of dewetting thin films of pure liquids or polymers are often based on thin film hydrodynamics. Starting from the Stokes equations, together with continuity and boundary conditions at the substrate and free surface, one applies a long-wave approximation (assuming small surface slopes and contact angles) [8, 63] and obtains a non-linear evolution equation for the film thickness profile h(x, y, t). In the case of volatile liquids one finds [55–58, 64]\n\n$$\\partial_{t}h\\,=\\,\\nabla\\cdot\\left[Q_{\\mathrm{e}}\\nabla\\frac{\\delta F}{\\delta h}\\right]\\,-\\,Q_{\\mathrm{e}}\\frac{\\delta F}{\\delta h},\\tag{1}$$\n\nwith the mobility functions Qc(h) = h 3/3η ≥ 0 (assuming Poiseuille flow in the film and no slip at the substrate; η is the dynamic viscosity) and Qe ≥ 0 for the convective and evaporative part of the dynamics, respectively. Qe is a rate constant that can be obtained from gas kinetic theory or from experiment [57]. Note that Eq. (1) only applies if the pressure in the vapour above the film is close to the saturation pressure. For alternative expressions that are used to describe the non-conserved evaporative dynamics see, e.g., Refs. [56, 57, 65–69]. Finally, ∇ = (∂x, ∂y), and ∂t , ∂x and ∂y denote partial derivatives w.r.t. 
time and the coordinates.\n\nFocusing on the influence of capillarity and wettability only, the energy functional F[h] is given by\n\n$$F[h]\\,=\\,\\int dx\\int dy\\left[\\frac{\\gamma}{2}(\\nabla h)^{2}+f(h)-\\mu h\\right]\\tag{2}$$", - "page_start": 6, - "page_end": 6, - "source_file": "1001.2669.pdf" - }, - { - "text": "FIG. 5: (Colour online) Density profiles for the situation where the substrate is covered by nanoparticles with average density ρ av n = 0.3 and with the liquid excluded from the region y < 0. The top row shows the nanoparticle density profiles and bottom row the corresponding liquid density profiles at the times t/tl = 1000 (left), 10000 (middle) and 30000 (right), where tl = 1/kTMnc l σ 2 . The parameters are kT /εll = 0.8, εnl/εll = 0.6, εnn = 0, α = 0.2Mnc l σ 4 , Mc l = 0, ρl(t = 0) = 0.9 ± ξ (where ξ represents white noise of amplitude 0.05) and (µ − µcoex)/kT = −0.78.\n\nThis theory allows us to study the time evolution of the evaporating film of nanoparticle suspension without some of the restrictions of the kinetic Monte Carlo model. Here, however, we illustrate its application in similar parameter regimes as used above for the KMC. We focus on two examples: (i) the spinodal dewetting of a initially flat film of nanoparticle suspension characterised by constant ρl and ρn (Fig. 4); and (ii) the retraction of a dewetting front that is unstable with respect to a fingering instability (Fig. 5).\n\nFig. 4 presents two pairs of snapshots from a purely evaporative dewetting process deep inside the parameter region of the phase diagram where spinodal dewetting occurs. For small times the film becomes unstable showing a typical spinodal labyrinthine pattern with a typical wavelength. The nanoparticles concentrate where the remaining liquid is situated. However, they are 'slow' in their reaction: when ρl already takes values in the range 0.08 – 0.83, the nanoparticle concentration has only deviated by about 25% from its initial value. 
The film thins strongly forming many", - "page_start": 16, - "page_end": 16, - "source_file": "1001.2669.pdf" - }, - { - "text": "on the model (see above). The purely two-dimensional character of the KMC was extended to a 'pseudo three-dimensional' one by making the effective chemical potential dependent on the mean liquid coverage [38]. As the latter is related to a mean film thickness, this corresponds to the introduction of a 'global' thickness-dependent disjoining pressure into the evaporation term without an explicit consideration of a film thickness. The amended model can reproduce bimodal structures that are beyond the scope of the purely two-dimensional model [38, 39]. Fully threedimensional models are also discussed in the literature [76, 77].\n\n### B. Dynamical Density Functional theory\n\nThe limitations of the kinetic Monte Carlo model introduced in the previous Section are related to its character as a two-dimensional lattice gas with only three states: gas, liquid or particle. This implies that (i) no liquid can be transported to a site on the surface already filled with liquid, i.e., diffusion of the liquid can not be incorporated in a sensible way and (ii) one is not able to distinguish between the influence of the short- and the long-range parts of the interactions with the substrate, as all such interactions are absorbed into the effective chemical potential.\n\nHowever, using dynamical density functional theory (DDFT) [78–83] one can develop a model for the processes in the ultrathin postcursor film without these limitations, although here we limit ourselves to developing the theory at the level of the KMC and solely discuss how to extend it to incorporate the influence of the liquid diffusion over the surface. Such a DDFT model describes the coupled dynamics of the density fields of the liquid ρl and the nanoparticles ρn. 
The densities ρl and ρn are defined as the probabilities of finding a given lattice site on the surface to be occupied by a film of liquid or by a nanoparticle, respectively. Note that the probability densities correspond to number densities as we use the lattice spacing σ = 1 as our unit of length.\n\nTo develop the DDFT, one must first derive the underlying free energy functional F[ρl , ρn], and secondly, devise dynamical equations for both density fields that account for the conserved and the non-conserved aspects of their dynamics, i.e., transport and phase change processes, respectively. For a system governed by the hamiltonian (3), we may construct a mean-field (Bragg-Williams) approximation for the free energy of the system [78, 84] which contains an entropic contribution and contributions from the interactions between the different species (nanoparticles and liquid). The free energy is a semi-grand free energy, since the liquid is treated grand canonically (it is coupled to a reservoir with chemical potential µ), whereas the nanoparticles are treated in the", - "page_start": 13, - "page_end": 13, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [20] C. Tomlinson, \"On the motion of certain liquids on the surface of water,\" Phil. Mag. Ser. 4 39, 32–48 (1870).\n- [21] C. G. Marangoni, \"Ueber die Ausbreitung der Tropfen einer Flussigkeit auf der Oberfl ¨ ache einer ¨ anderen,\" Ann. Phys. (Poggendorf) 143, 337–354 (1871).\n- [22] O. Karthaus, L. Grasjo, N. Maruyama, and M. Shimomura, \"Formation of ordered mesoscopic poly- ¨ mer arrays by dewetting,\" Chaos 9, 308–314 (1999).\n- [23] X. Gu, D. Raghavan, J. F. Douglas, and A. Karim, \"Hole-growth instability in the dewetting of evaporating polymer solution films,\" J. Polym. Sci. Pt. B-Polym. Phys. 40, 2825–2832 (2002).\n- [24] S. W. Hong, J. F. Xia, and Z. Q. Lin, \"Spontaneous formation of mesoscale polymer patterns in an evaporating bound solution,\" Adv. Mater. 19, 1413–1417 (2007).\n- [25] G. 
Liu, C. F. Zhang, J. Zhao, and Y. X. Zhu, \"Study of the morphology of the three-phase contact line and its evolution by morphological examination after droplet evaporation of aqueous polymer solutions,\" Langmuir 24, 7923–7930 (2008).\n- [26] M. Mertig, U. Thiele, J. Bradt, G. Leibiger, W. Pompe, and H. Wendrock, \"Scanning force microscopy and geometrical analysis of two-dimensional collagen network formation,\" Surface and Interface Analysis 25, 514–521 (1997).\n- [27] M. Mertig, U. Thiele, J. Bradt, D. Klemm, and W. Pompe, \"Dewetting of thin collagenous precursor films,\" Appl. Phys. A 66, S565–S568 (1998).\n- [28] U. Thiele, M. Mertig, and W. Pompe, \"Dewetting of an evaporating thin liquid film: Heterogeneous nucleation and surface instability,\" Phys. Rev. Lett. 80, 2869–2872 (1998).\n- [29] H. Maeda, \"An atomic force microscopy study of ordered molecular assemblies and concentric ring patterns from evaporating droplets of collagen solutions,\" Langmuir 15, 8505–8513 (1999).\n- [30] I. I. Smalyukh, O. V. Zribi, J. C. Butler, O. D. Lavrentovich, and G. C. L. Wong, \"Structure and dynamics of liquid crystalline pattern formation in drying droplets of DNA,\" Phys. Rev. Lett. 96, 177801 (2006).\n- [31] L. Zhang, S. Maheshwari, H. C. Chang, and Y. X. Zhu, \"Evaporative self-assembly from complex DNA-colloid suspensions,\" Langmuir 24, 3911–3917 (2008).\n- [32] M. Maillard, L. Motte, A. T. Ngo, and M. P. Pileni, \"Rings and hexagons made of nanocrystals: A Marangoni effect,\" J. Phys. Chem. B 104, 11871–11877 (2000).\n- [33] G. L. Ge and L. Brus, \"Evidence for spinodal phase separation in two-dimensional nanocrystal selfassembly,\" J. Phys. Chem. B 104, 9573–9575 (2000).", - "page_start": 26, - "page_end": 26, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [81] A. J. Archer and M. Rauscher, \"Dynamical density functional theory for interacting brownian particles: Stochastic or deterministic?\" J. Phys. A-Math. Gen. 37, 9325–9333 (2004).\n- [82] A. J. 
Archer and R. Evans, \"Dynamical density functional theory and its application to spinodal decomposition,\" J. Chem. Phys. 121, 4246–4254 (2004).\n- [83] P. A. Monson, \"Mean field kinetic theory for a lattice gas model of fluids confined in porous materials,\" J. Chem. Phys. 128, 084701 (2008).\n- [84] P. M. Chaikin and T. C. Lubensky, *Principles of condensed matter physics*, Cambridge University Press (1997).\n- [85] J. S. Langer, \"An introduction to the kinetics of first-order phase transitions,\" in C. Godreche, editor, \"Solids far from Equilibrium,\" pages 297–363, Cambridge University Press (1992).\n- [86] M. A. Spaid and G. M. Homsy, \"Stability of Newtonian and viscoelastic dynamic contact lines,\" Phys. Fluids 8, 460–478 (1996).\n- [87] U. Thiele and E. Knobloch, \"Front and back instability of a liquid film on a slightly inclined plate,\" Phys. Fluids 15, 892–907 (2003).\n- [88] M. R. E. Warner, R. V. Craster, and O. K. Matar, \"Surface patterning via evaporation of ultrathin films containing nanoparticles,\" J. Colloid Interface Sci. 267, 92–110 (2003).\n- [89] O. K. Matar, R. V. Craster, and K. Sefiane, \"Dynamic spreading of droplets containing nanoparticles,\" Phys. Rev. E 76, 056315 (2007).\n- [90] J. J. Zhou, B. Dupuy, A. L. Bertozzi, and A. E. Hosoi, \"Theory for shock dynamics in particle-laden thin films,\" Phys. Rev. Lett. 94, 117803 (2005).\n- [91] B. P. Cook, A. L. Bertozzi, and A. E. Hosoi, \"Shock solutions for particle-laden thin films,\" SIAM J. Appl. Math. 68, 760–783 (2008).\n- [92] R. V. Craster, O. K. Matar, and K. Sefiane, \"Pinning, retraction, and terracing of evaporating droplets containing nanoparticles,\" Langmuir (2009), online available.\n- [93] D. Quemada, \"Rheology of concentrated disperse systems and minimum energy-dissipation principle I. Viscosity-concentration relationship,\" Rheol. Acta 16, 82–94 (1977).\n- [94] D. Quemada and C. 
Berli, \"Energy of interaction in colloids and its implications in rheological modeling,\" Adv. Colloid Interface Sci. 98, 51–85 (2002).\n- [95] J. J. Stickel and R. L. Powell, \"Fluid mechanics and rheology of dense suspensions,\" Annu. Rev. Fluid Mech. 37, 129–149 (2005).\n- [96] J. K. G. Dhont, *An Introduction to Dynamics of Colloids*, Elsevier, Amsterdam (1996).", - "page_start": 30, - "page_end": 30, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed9.pdf", - "query": "What is AgMERRA ?", - "target_page": 2, - "target_passage": " historical daily weather data (1986–2005) are from the AgMERRA dataset. AgMERRA is a post-processing of the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. The dataset is proved to be suitable for agricultural modelling and features consistent, daily time-series data", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "premises within the complex. The agreement is subject to the implementation of proposed gaming law reforms and a tax structure acceptable to the Company, and obtaining required planning and other approvals.\n\n**Macau.** In connection with the Company's pending joint venture in Macau (see Note 1), the Company has committed to invest up to $280 million in the entity in the form of capital contributions and shareholder loans.\n\n**New York Racing Association.** The Company has an understanding with the New York Racing Association (\"NYRA\") to manage video lottery terminals (\"VLTs\") at NYRA's Aqueduct horseracing facility in metropolitan New York. The Company would assist in the development of the facility, including providing project financing, and would manage the facility for a fee. Work was halted on the VLT facility in August 2003 pending the outcome of an investigation of certain aspects of NYRA's operations by Federal prosecutors. 
In December 2003, NYRA reached agreement with the Justice Department whereby NYRA was indicted with prosecution deferred. NYRA agreed to pay a fine and the indictment will be dismissed with prejudice upon NYRA implementing certain reforms and otherwise complying with the terms of the agreement. The Company's participation is subject to a definitive agreement, regulatory approvals and certain legislative changes by the State of New York.\n\n**The Residences at MGM Grand.** In July 2004, the venture obtained construction financing for up to $210 million for the development of the first tower. The Company has provided a guaranty for up to 50% of the interest and principal payment obligations on the construction financing as well as a joint and several completion guaranty with its partners. The Company recorded the value of the guaranty obligation, approximately $2 million, in other long-term liabilities.\n\n**Other Guarantees.** The Company is party to various guarantee contracts in the normal course of business, which are generally supported by letters of credit issued by financial institutions. The Company's Senior Credit Facility limits the amount of letters of credit that can be issued to $200 million, and the amount of available borrowings under the Senior Credit Facility is reduced by any outstanding letters of credit. At December 31, 2004, the Company had provided a $50 million letter of credit to support the Economic Development Corporation of the City of Detroit bonds referred to above, which are a liability of the Company.\n\n**Litigation.** The Company is a party to various legal proceedings, most of which relate to routine matters incidental to its business. 
Management does not believe that the outcome of such proceedings will have a material adverse effect on the Company's financial position or results of operations.", - "page_start": 71, - "page_end": 71, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "*Unconcerned by a Chesapeake drilling rig, antelope continue their daily routines in southeastern Wyoming's Powder River Basin where the company is developing the promising Niobrara Play.*", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "- Lee, Timothy B. (22 August 2014). \"Will artificial intelligence destroy humanity? Here are 5 reasons not to worry\" (https://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worr y-about-super-intelligent-computers-taking). *Vox*. Archived (https://web.archive.org/web/201 51030092203/http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-s uper-intelligent-computers-taking) from the original on 30 October 2015. Retrieved 30 October 2015.\n- Lenat, Douglas; Guha, R. V. (1989). *Building Large Knowledge-Based Systems*. Addison-Wesley. ISBN 978-0-2015-1752-1.\n- Lighthill, James (1973). \"Artificial Intelligence: A General Survey\". *Artificial Intelligence: a paper symposium*. Science Research Council.\n- Lipartito, Kenneth (6 January 2011), *The Narrative and the Algorithm: Genres of Credit Reporting from the Nineteenth Century to Today* (https://mpra.ub.uni-muenchen.de/28142/1/ MPRA_paper_28142.pdf) (PDF) (Unpublished manuscript), doi:10.2139/ssrn.1736283 (http s://doi.org/10.2139%2Fssrn.1736283), S2CID 166742927 (https://api.semanticscholar.org/C orpusID:166742927), archived (https://ghostarchive.org/archive/20221009/https://mpra.ub.u ni-muenchen.de/28142/1/MPRA_paper_28142.pdf) (PDF) from the original on 9 October 2022\n- Lohr, Steve (2017). 
\"Robots Will Take Jobs, but Not as Fast as Some Fear, New Report Says\" (https://www.nytimes.com/2017/01/12/technology/robots-will-take-jobs-but-not-as-fast-as-so me-fear-new-report-says.html). *The New York Times*. Archived (https://web.archive.org/web/ 20180114073704/https://www.nytimes.com/2017/01/12/technology/robots-will-take-jobs-butnot-as-fast-as-some-fear-new-report-says.html) from the original on 14 January 2018. Retrieved 13 January 2018.\n- Lungarella, M.; Metta, G.; Pfeifer, R.; Sandini, G. (2003). \"Developmental robotics: a survey\". *Connection Science*. **15** (4): 151–190. CiteSeerX 10.1.1.83.7615 (https://citeseerx.ist.psu.ed u/viewdoc/summary?doi=10.1.1.83.7615). doi:10.1080/09540090310001655110 (https://doi. org/10.1080%2F09540090310001655110). S2CID 1452734 (https://api.semanticscholar.or g/CorpusID:1452734).\n- \"Machine Ethics\" (https://web.archive.org/web/20141129044821/http://www.aaai.org/Library/Sy mposia/Fall/fs05-06). *aaai.org*. Archived from the original (http://www.aaai.org/Library/Symp osia/Fall/fs05-06) on 29 November 2014.\n- Madrigal, Alexis C. (27 February 2015). \"The case against killer robots, from a guy actually working on artificial intelligence\" (https://www.hrw.org/report/2012/11/19/losing-humanity/cas e-against-killer-robots). *Fusion.net*. Archived (https://web.archive.org/web/20160204175716/ http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai) from the original on 4 February 2016. Retrieved 31 January 2016.\n- Mahdawi, Arwa (26 June 2017). \"What jobs will still be around in 20 years? Read this to prepare your future\" (https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robo ts-skills-creative-health). *The Guardian*. Archived (https://web.archive.org/web/20180114021 804/https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skillscreative-health) from the original on 14 January 2018. 
Retrieved 13 January 2018.\n- Maker, Meg Houston (2006), *AI@50: AI Past, Present, Future* (https://web.archive.org/web/200 81008120238/http://www.engagingexperience.com/2006/07/ai50_ai_past_pr.html), Dartmouth College, archived from the original (http://www.engagingexperience.com/2006/0 7/ai50_ai_past_pr.html) on 8 October 2008, retrieved 16 October 2008\n- Marmouyet, Françoise (15 December 2023). \"Google's Gemini: is the new AI model really better than ChatGPT?\" (https://theconversation.com/googles-gemini-is-the-new-ai-model-really-bet ter-than-chatgpt-219526). *The Conversation*. Archived (https://web.archive.org/web/202403 04215625/https://theconversation.com/googles-gemini-is-the-new-ai-model-really-better-tha n-chatgpt-219526) from the original on 4 March 2024. Retrieved 25 December 2023.\n\nMinsky, Marvin (1986), *The Society of Mind*, Simon and Schuster", - "page_start": 59, - "page_end": 59, - "source_file": "wikipedia3.pdf" - }, - { - "text": "There are no mandatory naming conventions for OWL entities. In chapter 7, we will discuss names and labels in more detail. A best practice is to select one set of naming conventions and then abide by that convention across your organization. For this tutorial we will follow the standard where class and individual names start with a capital letter for each word and do not contain spaces. This is known as CamelBack notation. For example: Pizza, PizzaTopping, etc. Also, we will follow the standard that class names are always singular rather than plural. E.g., Pizza rather than Pizzas, PizzaTopping rather than PizzaToppings.\n\n#### 4.2 Using a Reasoner\n\nYou may notice that one or more of your classes is highlighted in red as in Figure 4.5. This is because we haven't run the reasoner yet so Protégé has not been able to verify that our new classes have no inconsistencies. When just creating classes and subclasses in a new ontology there is little chance of an inconsistency. 
However, it is a good idea to run the reasoner often. When there is an inconsistency the sooner it is discovered the easier it is to fix. One common mistake that new users make is to do a lot of development and then run the reasoner only to find that there are multiple inconsistencies which can make debugging significantly more difficult. So let's get into the good habit of running the reasoner often. Protégé comes with some reasoners bundled in and others available as plugins. Since we are going to write some SWRL rules later in the tutorial, we want to use the Pellet reasoner. It has the best support for SWRL at the time this tutorial is being written.\n\n#### **Exercise 5: Install and Run the Pellet Reasoner**\n\n1. Check to see if the Pellet reasoner is installed. Click on the Reasoner menu. At the bottom of the menu there will be a list of the installed reasoners such as Hermit and possibly Pellet. If Pellet is visible in that menu then select it and skip to step 3.\n\n_____________________________________________________________________________________\n\n2. If Pellet is not visible then do File>Check for plugins and select Pellet from the list of available plugins and then select Install. This will install Pellet and you should get a message that says it will take effect the next time you start Protégé. Do a File>Save to save your work then quit Protégé and restart it. Then go to File>Open recent. You should see your saved Pizza tutorial in the list of recent ontologies. Select it to load it. Now you should see Pellet under the Reasoner menu and be able to select it so do so.\n\n3. With Pellet selected in the Reasoner menu execute the command Reasoner>Start reasoner. The reasoner should run very quickly since the ontology is so simple. You will notice that the little text message in the lower right corner of the Protégé window has changed to now say Reasoner active. 
The next time you make a change to the ontology that text will change to say: Reasoner state out of sync with active ontology. With small ontologies the reasoner runs very quickly, and it is a good idea to get into the habit of running it often, as much as after every change.\n\n4. It is possible that one or more of your classes will still be highlighted in red after you run the reasoner. If that happens do: Window>Refresh user interface and any red highlights should go away. Whenever your user interface seems to show something you don't expect the first thing to do is to try this command.", - "page_start": 15, - "page_end": 15, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## Remuneration Report\n\n#### Dear Shareholder\n\nI am pleased to present our Remuneration Report for 2013.\n\nAs you would be aware, at last year's Annual General Meeting (\"AGM\") 30% of the votes cast in respect of the resolution to adopt the 2012 Remuneration Report voted 'against' the resolution. As this was greater than the 25% threshold under the executive remuneration legislation, we received what is referred to as a 'first strike.' 
Our formal response to issues raised by shareholders at the AGM with respect to the 2012 Remuneration Report is set out on page 50 of this Report.\n\nVoting at AGMs is not compulsory and results of the 2012 AGM reflected this with only 59% of issued shares that were eligible to vote on the resolution to adopt the Remuneration Report doing so, meaning the 'against' vote represented 18% of eligible issued shares.\n\nWhile we believe our remuneration practices are sound and demonstrate a clear link between executive and shareholder returns, we have taken the first strike seriously and have undertaken an extensive review of the remuneration principles for Key Management Personnel.\n\nThe changes that the Board have implemented as a result of this review include:\n\n- 〉 A structural review of the Company resulting in the appointment in December 2012 of a senior human resources specialist as a direct report to the Managing Director and Executive Committee member;\n- 〉 Fees / base salary packages for Directors and Key Management Personnel were frozen from 1 July 2012;\n- 〉 Directors and Key Management Personnel have agreed to a 10% reduction in fees and remuneration;\n- 〉 The Managing Director and Key Management Personnel agreed to not accept any of their entitled Short Term Incentive (\"STI\") equivalent to a minimum of 10% of their base salary for the 2013 financial year;\n- 〉 A revised Performance Management System, including 'at risk' remuneration, has been introduced at all levels in corporate and site based operations including at risk remuneration for Key Management Personnel in the form of short term and long term incentive programs described in detail in this report; and\n- 〉 A broadening of the remuneration benchmarking processes for Directors and Key Management Personnel.\n\nFurther details on each of the changes outlined above are provided in specific sections of this Remuneration Report. 
We believe that these changes will be welcomed by our shareholders.\n\nWe will continue to review our remuneration polices and framework in consideration of a changing industry environment and your feedback.\n\nThank you for your interest in this report.\n\nRoss Smyth-Kirk Chairman Remuneration Committee", - "page_start": 50, - "page_end": 50, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "processesing business. And, I get excited about the opportunity to expand our transaction base with our mobile banking, bill payment and mobile operator solutions.\n\nThe real value of our company is in our transaction processing. Because of the low incremental cost of connecting to a new customer, anytime we sign a new contract most of the incremental revenue will now be flowing to our bottom line. The infrastructure is in place to leverage additional growth and bring us closer to being EBITDA and cash flow positive in the near term.\n\n## **What role will strategic alliances play in extending your reach into new markets?**\n\nAlliances are an important part of our strategic direction. Recently, we announced several partnerships that help us expand sales channels and distribution of our products and services. Our partners were looking for wireless transaction solutions to complement their own offerings, and they selected Euronet's products, proving that our solutions are rock solid.\n\nGemplus, the world's number one provider of smart card-based solutions, chose us as their global partner to provide electronic recharge solutions to mobile operators. 
We also have agreements with Sila Communications to help us market our suite of mobile banking solutions throughout Europe, the Middle East and Asia Pacific and with Aether Systems which is offering our mobile banking solutions in the United States.\n\n## **Why did you change your corporate name to Euronet Worldwide last year?**\n\nWe became Euronet Worldwide to more accurately reflect the company's growing presence in the global marketplace. We are no longer focused solely on Europe, and today, deliver comprehensive solutions to more than 200 customers in over 60 countries.\n\n## **What was your biggest challenge in 2000?**\n\nI think it was restructuring our software business late in the year. When Euronet purchased Arkansas Systems, Inc. over two years ago, the division was expected to\n\nachieve high growth. As banks began moving to outsourcing rather than purchasing software to manage their transactions, we realized that this high growth would not materialize. We've basically downsized to reduce expenses to better correspond to revenue expectations, so we\n\nexpect this division to be an EBITDA contributor from this point forward. The trend towards outsourcing negatively impacted our software business, but positively benefits our network services division.\n\nIt's important to point out that our software is an asset to our business of\n\nselling transactions. For example, our software sales doubled in the Asia Pacific region over 1999. Relationships with large financial institutions like Westpac Banking Corporation have cemented our position in Asia Pacific as a leading supplier of transaction processing solutions.\n\n#### **Why is ATM outsourcing important?**\n\nIncreasingly, financial institutions are choosing to outsource their ATM operations to free up resources\n\n> and concentrate on their core banking business. Some analysts predict that outsourcing by the European banking and finance sector will total $91 billion by 2003. 
We are expanding our outsourcing business with wireless and Internet banking services.\n\nOur outsourcing business is thriving. Currently we provide ATM outsourcing for some of the biggest banks in the world – banks like Citibank, ABN AMRO, Deutsche Bank, Millennium\n\nand Raiffeisenbank – as they expand into emerging markets. We have contracts with Citibank in five countries, most recently in Greece and the Czech Republic.", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## JAMES HENRY CARVER **Executive Director - Appointed 29 June 1998**\n\nCaptain James Carver is a Ships Master with over 30 years direct experience in the marine industry. He was Woodside Petroleum's first ships master, carrying out marine operations in the LNG development. Captain Carver was involved in exploration, construction and production of most oil and gas projects on the North West Shelf. He has in-depth knowledge of the industry, its needs and its future. Establishing\n\nthe company in 1982, Jim pursued a \"can do\" attitude at sea and on shore. Under his direction the fleet grew from 1 to 15 vessels and the Base at Dampier secured for the present expansion and exiting future.\n\n#### DERRICE-ANN DILLON\n\n**Executive Director - Corporate - Appointed 12 August 1998** Derrice Dillon has considerable experience in management, administration and finance acquired over the last 22 years and has held a number of senior positions in Australia and overseas. From the early 1990's Derrice developed a strong knowledge of the oil and gas industry from her previous position as a director and head of administration of Slimdrill Pty Ltd. She was responsible for the design\n\nand implementation of all accounting and administration systems, including complex databases to track information for the construction and manufacture of the Slimdrill oil drilling rigs. 
She was also responsible for all legal matters and the production of promotional and marketing material for worldwide distribution.\n\nDerrice took a leading role in the listing of Mermaid Marine in 1999 and has since headed up accounting, systems and administration. As Chairman of the Board of Management she plays a senior role in Mermaid's operations.\n\nDerrice has also recently been appointed to the Seacare Authority by the Minister of Employment, Workplace Relations and Small Business as National Representative for the Offshore Maritime Industry.", - "page_start": 30, - "page_end": 30, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "- 160. Alex McFarland: *7 Best AI for Math Tools.* (https://www.unite.ai/best-ai-for-math-tools/) Archived (https://web.archive.org/web/20240911125615/https://www.unite.ai/best-ai-for-mat h-tools/) 11 September 2024 at the Wayback Machine unite.ai. Retrieved 2024-08-07\n- 161. Matthew Finio & Amanda Downie: IBM Think 2024 Primer, \"What is Artificial Intelligence (AI) in Finance?\" 8 Dec. 2023\n- 162. M. Nicolas, J. Firzli: Pensions Age/European Pensions magazine, \"Artificial Intelligence: Ask the Industry\" May June 2024 https://videovoice.org/ai-in-finance-innovationentrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligence-act-wont-work-asintended/ Archived (https://web.archive.org/web/20240911125502/https://videovoice.org/ai-i n-finance-innovation-entrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligenceact-wont-work-as-intended/) 11 September 2024 at the Wayback Machine.\n- 163. Congressional Research Service (2019). *Artificial Intelligence and National Security* (https://f as.org/sgp/crs/natsec/R45178.pdf) (PDF). Washington, DC: Congressional Research Service.PD-notice\n- 164. Slyusar, Vadym (2019). Artificial intelligence as the basis of future control networks (Preprint). doi:10.13140/RG.2.2.30247.50087 (https://doi.org/10.13140%2FRG.2.2.30247.5 0087).\n- 165. 
Iraqi, Amjad (3 April 2024). \" 'Lavender': The AI machine directing Israel's bombing spree in Gaza\" (https://www.972mag.com/lavender-ai-israeli-army-gaza/). *+972 Magazine*. Retrieved 6 April 2024.\n- 166. Davies, Harry; McKernan, Bethan; Sabbagh, Dan (1 December 2023). \" 'The Gospel': how Israel uses AI to select bombing targets in Gaza\" (https://www.theguardian.com/world/2023/ dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets). *The Guardian*. Retrieved 4 December 2023.\n- 167. Marti, J Werner (10 August 2024). \"Drohnen haben den Krieg in der Ukraine revolutioniert, doch sie sind empfindlich auf Störsender – deshalb sollen sie jetzt autonom operieren\" (http s://www.nzz.ch/international/die-ukraine-setzt-auf-drohnen-die-autonom-navigieren-und-toet en-koennen-ld.1838731). *Neue Zürcher Zeitung* (in German). Retrieved 10 August 2024.\n- 168. Newsom, Gavin; Weber, Shirley N. (6 September 2023). \"Executive Order N-12-23\" (https:// www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pdf) (PDF). Executive Department, State of California. Archived (https://web.archive.org/web/202402212 22035/https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pd f) (PDF) from the original on 21 February 2024. Retrieved 7 September 2023.\n- 169. Pinaya, Walter H. L.; Graham, Mark S.; Kerfoot, Eric; Tudosiu, Petru-Daniel; Dafflon, Jessica; Fernandez, Virginia; Sanchez, Pedro; Wolleb, Julia; da Costa, Pedro F.; Patel, Ashay (2023). \"Generative AI for Medical Imaging: extending the MONAI Framework\". arXiv:2307.15208 (https://arxiv.org/abs/2307.15208) [eess.IV (https://arxiv.org/archive/eess.I V)].\n- 170. Griffith, Erin; Metz, Cade (27 January 2023). \"Anthropic Said to Be Closing In on $300 Million in New A.I. Funding\" (https://www.nytimes.com/2023/01/27/technology/anthropic-ai-fu nding.html). *The New York Times*. 
Archived (https://web.archive.org/web/20231209074235/h ttps://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html) from the original on 9 December 2023. Retrieved 14 March 2023.\n- 171. Lanxon, Nate; Bass, Dina; Davalos, Jackie (10 March 2023). \"A Cheat Sheet to AI Buzzwords and Their Meanings\" (https://news.bloomberglaw.com/tech-and-telecom-law/a-c heat-sheet-to-ai-buzzwords-and-their-meanings-quicktake). *Bloomberg News*. Archived (http s://web.archive.org/web/20231117140835/https://news.bloomberglaw.com/tech-and-telecom -law/a-cheat-sheet-to-ai-buzzwords-and-their-meanings-quicktake) from the original on 17 November 2023. Retrieved 14 March 2023.", - "page_start": 38, - "page_end": 38, - "source_file": "wikipedia3.pdf" - }, - { - "text": "When the relationship or Consistency Group becomes connected again, the relationship or Consistency Group becomes ConsistentSynchronized only if this action does not lead to a loss of consistency. The following conditions must be true:\n\n- -The relationship was ConsistentSynchronized when it became disconnected.\n- -No writes received successful completion at the master while disconnected.\n\nOtherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.\n\n#### *Empty*\n\nThis state applies only to Consistency Groups. It is the state of a Consistency Group that has no relationships and no other state information to show.\n\nIt is entered when a Consistency Group is first created. It is exited when the first relationship is added to the Consistency Group, at which point the state of the relationship becomes the state of the Consistency Group.\n\n# **11.7 Remote Copy commands**\n\nThis section presents commands that need to be issued to create and operate remote copy services.\n\n# **11.7.1 Remote Copy process**\n\nThe MM/GM process includes the following steps:\n\n- 1. A system partnership is created between two IBM Spectrum Virtualize systems (for intercluster MM/GM).\n- 2. 
A MM/GM relationship is created between two volumes of the same size.\n- 3. To manage multiple MM/GM relationships as one entity, the relationships can be made part of a MM/GM Consistency Group to ensure data consistency across multiple MM/GM relationships, or for ease of management.\n- 4. The MM/GM relationship is started. When the background copy completes, the relationship is consistent and synchronized. When synchronized, the auxiliary volume holds a copy of the production data at the master that can be used for disaster recovery.\n- 5. To access the auxiliary volume, the MM/GM relationship must be stopped with the access option enabled before write I/O is submitted to the auxiliary.\n\nFollowing these steps, the remote host server is mapped to the auxiliary volume and the disk is available for I/O.\n\nFor more information about MM/GM commands, see *IBM System Storage SAN Volume Controller and IBM Storwize V7000 Command-Line Interface User's Guide,* GC27-2287*.*\n\nThe command set for MM/GM contains the following broad groups:\n\n- -Commands to create, delete, and manipulate relationships and Consistency Groups\n- -Commands to cause state changes\n\nIf a configuration command affects more than one system, MM/GM coordinates configuration activity between the systems. Certain configuration commands can be performed only when the systems are connected, and fail with no effect when they are disconnected.", - "page_start": 562, - "page_end": 562, - "source_file": "sg247938.pdf" - }, - { - "text": "#### Brett Dunstone\n\nDip. Catering and Hotel Management – William Angliss College, B.Bus. Victoria University (part complete)\n\n#### General Manager – Human Resources\n\nBrett Dunstone joined Kingsgate in December 2012 and has over 25 years experience in senior human resource management roles across a diverse industry portfolio. 
Brett was formerly head of Human Resources for Crown Casino, Melbourne, the Myer group, key Village Roadshow entities and head of Employee Relations for the Coles Myer group. Brett has experience in supporting both large and emerging resource company development projects locally and overseas (BHP Billiton, Woodside, Equinox Minerals and Chalice Gold).\n\n### Michael Monaghan\n\nDip Eng (Mining) Dip Business MAusIMM MAICD SME\n\n#### Chief Operating Officer and General Manager – Akara Resources PCL\n\nMike Monaghan joined Kingsgate as the General Manager of Chatree Gold Mine in October 2012. He is a mining engineer with 28 years of management experience in both underground and open cut opeartions across a number of commodities as well as commissioning, mine management, turnaround management and environmental and safety compliance in Australia, Africa and Europe. Mike was most recently Mining Manager at Geita Gold mine in Tanzania for AngloGold Ashanti Limited. Prior to that he held General Manager and Mining Manager positions at Etruscan Resources Youga Gold Mine in Burkina Faso and Red back Mining's Chirano Gold Mine in Ghana.\n\n### Pakorn Sukhum\n\nBSc (Hons) University of London, UK MBA Sasin Graduate Institute of Business Administration Thailand\n\n#### Chief Executive Officer – Akara Resources PCL\n\nPakorn Sukhum joined the management team of Akara Resources PCL as Chief Executive Officer at the end of 2009. He brings to Akara over 24 years of industrial commercial managerial experience in various industries such as metallurgy, chemicals and ceramics in international and domestic markets of Thailand, having held senior management positions in both Thai and Multinational joint venture companies such as Basell Poyolefins, Bayer AG as well as Padeang Industry of Thailand. 
His major contributions and responsibilities have ranged from project management, commercial marketing and sales to business development.", - "page_start": 41, - "page_end": 41, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed9.pdf", - "query": "In 2018, what was the global proportion of maize grown in the US ?", - "target_page": 5, - "target_passage": "According to statistics in 2018, the gross maize yield in the top 5 countries is almost 80% of the total maize yield of the whole world. The United States accounts for more than 32%", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "that maize yield would decrease severely. For the whole world more mitigation and adaptation actions should be taken from now on. Food security would be a signifcant challenge in this century.\n\n**Yield change of maize in main countries.** Tere are huge diferences in impacts on maize yield under climate change, which would infuence the food crisis in diferent regions. Tere are 159 countries in the whole world which plant maize. Te gross yield of maize the top 20 countries accounts for more than 90% of the total yield in the 159 countries. So, the changes in the top 20 countries under future scenarios would infuence the food security of the whole world (Fig. 5). From the results of simulated by CRESE-maize under global warming by 1.5 °C, there would be 75 countries facing with yield loss of maize; the mean yield loss rate would become 33.5%. Tere would be 84 countries experiencing yield increases. Overall, the global maize yield would slightly increase. Under global warming by 2.0 °C, there would be 82 countries facing with yield loss of maize, for which the mean yield loss rate is approximate to that under global warming by 1.5 °C. Tere would be 77 countries experiencing yield increase; however, the mean yield increase is apparently smaller than that under global warming by 1.5 °C. 
Generally, the global maize yield would decrease. Te results show that the adverse efect of warming up 2.0 °C on global maize production is far greater than warming up 1.5 °C. It is important to take actions to develop forward-looking adaptation measures to cope with future climate change.\n\nAccording to statistics in 2018, the gross maize yield in the top 5 countries is almost 80% of the total maize yield of the whole world. Te United States accounts for more than 32%; China accounts for about 24%; Brazil, Argentina and Mexico account for about 23%. Te fuctuation of maize production in these fve top countries will have a signifcant impact on the global maize trade. Based on the simulation results, comparing to 1986–2005, the maize yield in China, Brazil and Argentina would decrease under global warming by 1.5 °C; the yield loss rate would reach more than 20% in Brazil; Argentina would decrease by 14.7%; China would decrease by 3.7%. However, there would be increasing trends in the United States and Mexico; the change in the United States would not be signifcant and the maize yield would increase by 0.5%; the yield increasing rate would exceed 50% in Mexico. Overall, the gross maize yield in the top 5 countries would decrease by 2% under global warming\n\nVol:.(1234567890)", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed9.pdf" - }, - { - "text": "by 1.5 °C. According to the simulation results, comparing to 1986–2005, the maize yield in the United States, China and Brazil would decrease under global warming by 2.0 °C; the yield loss rate would reach more than 24% in Brazil; the United States would decrease by 13.3%; China would decrease by 11.5%. However, there would be increasing trends in Argentina and Mexico; the maize yield would increase by 16.8% in Argentina; the yield increasing rate would exceed 40% in Mexico. Overall, the gross maize yield in the top 5 countries would decrease by 11.4% under global warming by 2.0 °C. 
By comparing the maize production in diferent countries, it can be found that the reduction trend of total maize production in the top fve countries is more obvious, especially under the scenario of global warming by 2.0 °C, the global food trade and food security may face greater risks.\n\nFrom the view of continents, there are diferent trends of maize yield changes in the 6 continents (except Antarctica) under global warming by 1.5 °C and 2.0 °C (Fig. 6). From the results of simulated by CRESE-maize under global warming by 1.5 °C, the maize yield in 3 continents would decline apparently, including South America, Europe and Oceania; the average yield loss rates are respectively − 15.6%, − 12.4%, − 36.4%; in the other 3 continents the average maize yield would go up, especially in Africa more than 30%; the increasing trends are slight in Asia and North America, in which the yield increasing rates are separately 0.7% and 0.4%. However, the yield change trends simulated by IPSL-CM5A-LR and GFDL-ESM2M models are diferent in 2 continents, including Asia and North America. From the results of simulated by CRESE-maize under global warming by 2.0 °C, the maize yield in 5 continents would decline apparently, except Africa; the average yield loss rates are respectively − 7.9% (Asia), − 14.1% (North America), − 9.3% (South America), − 22.5% (Europe), − 25.5% (Oceania); only in Africa the average maize yield would go up also more than 30%; meanwhile the yield change trends simulated by IPSL-CM5A-LR and GFDL-ESM2M models are the same in each continent. Comparing the two global warming scenarios, there would be apparent variations in maize yield in Asia and North America, in which the annual maize yield accounts for a great proportion of the whole world, leading to a much more serious yield loss under global warming by 2.0 °C than that under global warming by 1.5 °C. 
Tere would be an obvious crisis of food supply under global warming by 2.0 °C with the increasing population in the future. So, it is important to make full preparation for adaptation to climate change in the whole world.\n\n<b>Figure 5. (continued)", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed9.pdf" - }, - { - "text": "is 16.9% in which the temperature would go up more than 3.0 °C, most located in the high latitude regions of Northern Hemisphere; the area is rarely in which the temperature would go up between 0 and 1.0 °C.\n\nTere are apparent trends of humidifcation in most regions under global warming by 1.5 °C and 2.0 °C; but the drought risk also should be taken seriously in the other regions. Under global warming by 1.5 °C the area is 73.6% of the whole world in which the precipitation would increase, most located in the Northern Hemisphere; the area is 53.7% of the whole world in which the precipitation would increase by less than 50 mm; however, the area is 26.4% of whole world in which the rainfall would decrease, mainly located in the Southern Hemisphere and the middle regions of Northern Hemisphere. Te distribution of precipitation under global warming by 2.0 °C is similar with the situation under global warming by 1.5 °C. Te drought-threatened area would increase by 28.5% under global warming by 2.0 °C, especially in the middle and low latitude of the Northern Hemisphere; the area would expand to 26%, in which the precipitation increases more than 50 mm. In other words, the extreme rainfall events (such as drought, rainstorm) under global warming by 2.0 °C would be more serious than those under global warming by 1.5 °C, which is what we should be pay more attention to.\n\n**Yield change of maize under global warming by 1.5 °C and 2.0 °C.** Maize production is afected by climate change apparently. 
According to the simulation results of CERES-maize, the yield of maize would decrease in the worldwide relative to 1986–2005 under global warming by 2.0 °C; it would increase little under global warming by 1.5 °C. Te distributions of maize yield loss under the two scenarios are similar to each other, mostly located in the middle and low latitude, which are the main regions for maize planting in the world. Te loss risk of maize under global warming by 2.0 °C is much more serious than that under global warming of 1.5 °C. However, there are increasing potentials of maize yield in many regions, nearly half of the whole maize planting area in the world, in which the climate situation would become more proper for maize under global", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed9.pdf" - }, - { - "text": "**Figure 3.** Distribution of yield loss rate on maize in the world under global warming by 1.5 °C (up: IPSL-CM5A-LR model, RCP 2.6; down: GFDL-ESM2M model, RCP 4.5). Te fgure has been generated using ArcGIS 10.2 and Natural Earth-Free vector and raster map data @ https://naturalearthdata.com.\n\nwarming by 1.5 °C and 2.0 °C. So, there are apparent challenges and opportunities for maize production in the whole world under climate change. We should grasp the opportunities and expand the yield increasing potentials; meanwhile, the threat of maize yield loss should be controlled and compressed to the minimum in the high-risk regions.\n\nFrom the results simulated by IPSL-CM5A-LR model under RCP 2.6 scenario, the gross yield of maize in the world between 2020 and 2039 would decrease by 6.8% relative to 1986–2005. Te area is 37.7% of the whole maize planting regions in the world, in which the yield loss would be less than 50%, mainly located in the low and middle latitude of South America and Asia, and the middle latitude of Africa and North America. 
Te area is 16.4% of the whole maize planting regions, in which the yield loss would be more than 50%, mainly located in the low latitude of South America and the middle latitude of Asia and Europe. Te area is 45.8% of the whole maize planting regions, in which the yield would increase, mainly located in the low latitude of Africa, Asia and North America, the high latitude of Europe. From the results simulated by the GFDL-ESM2M model under RCP 4.5 scenario, the gross yield of maize in the world between 2041 and 2060 would increase by 7.2% relative to 1986–2005. Tere are opposite trends of maize yield under global warming by 1.5 °C, which are simulated by diferent global climate models. However, the spatial distributions of maize yield change are similar to each other. Te diference is that the regions of high yield loss rate are decreasing, and the regions of yield increasing are going up. In a comprehensive perspective, under global warming by 1.5 °C, maize yield in the whole world would increase 0.18% relative to 1986–2005 (Fig. 3). According to Paris Agreement, all countries should do their best to limit the global warming by 1.5 °C until the end of 21 century. If that objective could be accomplished, gross maize production of the whole world would not be infuenced so much by climate change, but the food\n\nVol:.(1234567890)", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed9.pdf" - }, - { - "text": "**Figure 4.** Distribution of yield loss rates on maize in the world under global warming by 2.0 °C (up: NorESM1-M model, RCP 4.5; down: GFDL-ESM2M model, RCP 6.0). Te fgure has been generated using ArcGIS 10.2 and Natural Earth-Free vector and raster map data @ https://naturalearthdata.com.\n\nsecurity of the whole world would still be attacked violently. Tere are huge diferences among the continents; South America, Asia and the Middle East are threatened seriously by yield loss seriously under global warming by 1.5 °C. 
Te changes in maize yield in diferent regions would infuence the maize price and food trades. So, it should be cautious to cope with the maize changes under global warming by 1.5 °C.\n\nFrom the results of simulated by the NorESM1-M model under RCP 4.5 scenario, the gross yield of maize in the world between 2060 and 2079 would decrease by 18.7% relative to 1986–2005. Te area is 41.7% of the whole maize planting regions in the world, in which the yield loss would be less than 50%. Te area is 15.6% of the whole maize planting regions, in which the yield loss would be more than 50%. Te area is 42.7% of the whole maize planting regions, in which the yield would increase. Te distribution of maize yield change is similar to that under global warming by 1.5 °C. From the results simulated by the GFDL-ESM2M model under RCP 6.0 scenario, the gross yield of maize in the world between 2065 and 2084 would decrease by 3% relative to 1986–2005. Comparing to the results of the NorESM1-M model, the regions of high yield loss rate are increasing, and the regions of yield increases are going down; but the per unit area yields are increasing quickly in the regions of yield increasing. So, the gross maize yield in the whole world simulated by the GFDL-ESM2M model is more than the NorESM1-M model. In a comprehensive perspective, under global warming by 2.0 °C, maize yield in the whole world would decrease 10.8% relative to 1986–2005 (Fig. 4). Compared to the results under global warming by 1.5 °C, the risk of yield loss is much higher. According to the new results from the Emission Gap Report in 2019, the target of global warming by 1.5 °C would not be implemented according to the reality of mitigation actions; the chance become much bigger for all countries in the world, who will be facing the severe challenge of global temperature rise of 2.0 °C or even higher (3.0 °C or 4.0 °C) in the future. 
So it is critical to cope with the serious condition", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed9.pdf" - }, - { - "text": "**Figure 7.** Price change on maize in main continents under global warming by 1.5 °C and 2.0 °C.\n\n**Figure 8.** Changes in Self-sufciency ratio of maize in main countries under global warming by 1.5 °C and 2.0 °C.\n\nmeantime, the huge diferences in yield changes in diferent regions provide a small chance for the world, especially under global warming by 1.5 °C. In the near future, if the global temperature can be efectively controlled under 1.5 °C warming scenario, there would be an increase in the potential for maize yield in the worldwide. All regions and countries should take actions to reduce the yield loss risk. For the yield-increasing regions, the potentials of climate resources should be fully utilized to guarantee maize yield under future scenarios; for the yield-reducing regions, the targeted adaptation actions should be taken in advance under global warming by 1.5 °C and 2.0 °C.\n\nMeanwhile, the risk of price fuctuations caused by global corn trade due to future climate change should be paid more attention to, especially for developing and undeveloped countries. In the view of supply and demand, the population would go up quickly in the next 30 years; the demand for maize would increase hugely; however, the supply of maize would go down in the future, especially under global warming by 2.0 °C; it would intensify the contradiction between supply and demand, which would threaten the food security and sustainable development in the whole world.\n\nIn this study, 5 climate models are selected, which are recommended by ISI-MIP (Te Inter-Sectoral Impact Model Intercomparison Project); compared with other climate models, the fve models could more efectively support impact assessment in diferent sectors and provide more reliable results. 
Based on the simulation results", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed9.pdf" - }, - { - "text": "## **OPEN**\n\n# **The impact of 1.5 °C and 2.0 °C global warming on global maize production and trade**\n\n**Kuo Li1*****, Jie Pan1 , Wei Xiong2 , Wei Xie3 & TariqAli3**\n\n**Climate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by 5 climate models recommended by ISI-MIP under 4 RCP scenarios, in which the approximate scenarios with global warming by 1.5 °C and 2 °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by 1.5 °C and 2.0 °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under 2.0 °C scenario was much more serious than 1.5 °C scenario; the ratios of yield changes were separately 0.18% and − 10.8% under 1.5 °C and 2.0 °C scenarios. The reduction trend of total maize production is obvious in the top fve countries and the main producing regions of the world, especially under the 2.0 °C scenario. The market price of maize would increase by around 0.7% and 3.4% under 1.5 °C and 2.0 °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.**\n\nIn the past hundred years, the global climate has experienced great changes1–4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming5 . 
Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health6–10. Global warming has gradually changed from a scientifc issue to a major social issue of common concern to governments and people of all countries11–13. In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris14. Paris Agreement has indicated that it is urgent to hold the increase in global average temperature well below 2.0 °C above pre-industrial levels and pursue eforts to limit the temperature increase to 1.5 °C above pre-industrial levels.\n\nFaced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food security in the whole world15–20. Meanwhile, global production losses might lead to price shocks and trigger export restrictions21–24; an increasingly interconnected global food system25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security in the worldwide27–29. So, the impacts of climate changes on crop yields and prices have been of highly concerned. Numerous studies have revealed that the warming trend has negative impact on crop yields and global trade in most regions all over the world30–32. Tere are three main methods for impacts assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations17,33. 
Environment-controlled experiments are designed to observe the infuence of climate factors on crops, such as drought, food, heat stress, cold damage, elevated CO2 concentration, through which the impact mechanism of climate change on crops would be revealed and established23,34,35. Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected feld sites or in selected regions36–39. Te statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in diferent sites or counties to establish regression functions for crop responses predictions40–43. Tese researches have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\n1 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China. 2 International Maize and Wheat Improvement Center, Texcoco, Mexico. 3 Peking University, Beijing, China. *email: hqlk2000@163.com", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "**Figure 6.** Yield loss rates on maize in 6 continents under global warming by 1.5 °C and 2.0 °C.\n\n**Market price of maize in main countries.** In this study, we elaborate on the endogenous response of our economic models. Tis response can be theoretically elaborated as: due to the efect of climate change on yield reduction (improvement), the supply curve moves lefward (rightward), reducing (increasing) production and raising (lowering) prices. In response, the consumers decrease (increase) their consumption of more expensive (cheaper) crops and shifing to other (increase the use of the same) crops. 
Producers, at the same time, respond by changing farm-level management practices and increasing (decreasing) the amount of acreage under these crops. At a global scale, the reallocation of production and consumption through international trade further alters climate change impacts on global agriculture. This also alters the self-sufficiency ratios of each country/region due to climate change.\n\nIn response to production changes, the price of each commodity changes under both scenarios. At the global level, the market price for maize would increase by 0.7% and 3.4% under 1.5 °C scenario and 2.0 °C scenario, respectively, which would vary quite largely among different countries and regions under both climate change scenarios (Fig. 7). Particularly, the market price would increase by around 22% and 27% in Iran under 2.0 °C scenario and 1.5 °C scenario, respectively. Iran is also the region where the highest yield reduction is observed due to climate change. Market prices for maize in India, Mexico, Russia, South Africa and the Rest of Africa would decrease significantly under both scenarios, as their yields improve due to climate effects. Along with the domestic production, the climate change will also induce changes in international trade of maize, resulting in changing levels of self-sufficiency ratios (SSR) for each country/region. By SSR, we mean the ratio of domestically produced commodity, to the sum of net imports and domestic production. In our scenario analysis, generally, the countries that face positive effects on yields and/or are relatively less dependent on imports, are positively (less negatively) affected by climate change. For example, maize SSR for Ukraine, India, Russia and Mexico would improve under both scenarios (Fig. 8). Whereas the self-sufficiency ratios of maize for Southeast Asia, Bangladesh and Iran will worsen under both scenarios. 
China's SSR for maize stays almost similar to the level as the baseline.\n\n#### **Discussion and conclusion**\n\n**Discussion.** Our analysis highlights the effects of climate change on global- and regional-specific maize yields and the associated economic consequences in 1.5 °C and 2.0 °C -warming scenarios. We find that the reduction risk of maize yield under global warming by 2.0 °C is much more serious than that under global warming by 1.5 °C. On the one hand, the larger the temperature rise, the greater the evapotranspiration would be. Although the precipitation is also increasing, the evapotranspiration would become more intense. The limitation of water supply for maize growth leads to the decline of yield. On the other hand, relative to global warming by 1.5 °C, maize production would be faced with more serious and frequent extreme climate events, such as drought and heat waves, which would increase the risk of corn yield reduction under global warming by 2.0 °C. In the\n\nVol:.(1234567890)", "page_start": 9, "page_end": 9, "source_file": "pubmed9.pdf" }, { "text": "| Model | Research institute | Country | Horizontal resolution |\n| --- | --- | --- | --- |\n| GFDL-ESM2M | Geophysical Fluid Dynamics Laboratory | The United States | 144×90 |\n| HadGEM2-ES | Hadley Center for Climate Prediction and Research | The United Kingdom | 192×145 |\n| IPSL-CM5A-LR | L'Institut Pierre-Simon Laplace | France | 96×96 |\n| NorESM1-M | Norway Climate Center | Norway | 144×96 |\n| MIROC-ESM | Center for Climate System Research, National Institute for Environmental Studies, and Frontier Research Center for Global Change | Japan | 128×64 |\n\n**Table 1.** Basic information of 5 ESMs in CMIP5. Horizontal resolution means the number of longitudinal grids×the number of latitudinal grids.\n\n**Figure 1.** Changes of global temperature of 20 years moving average from 2020 to 2099 simulated by 5 ESMs under 4 RCP scenarios. 
Note: The black horizontal dashed lines: global warming by 1.5 °C and 2.0 °C; the black vertical solid line: the years when global warming reaches 1.5 °C and 2.0 °C simulated by the selected models and scenarios.\n\nAlthough, so far there are plenty of research on the impacts of global warming by 1.5 °C temperature, including the impacts comparison of global warming by 1.5 °C versus 2.0 °C44. It is necessary to do more quantitative impacts assessments of global warming by 1.5 °C and 2.0 °C on crops yield and market price to address research gaps and support the requirement of the scientific community and governments. In this paper, the future climate situations were selected and analyzed which are the approximate scenarios with global warming by 1.5 °C and 2.0 °C, based on the simulation results from 5 climate models recommended by ISI-MIP under 4 RCP scenarios. Then the per unit yield changes of maize all over the world under global warming by 1.5 °C and 2.0 °C were analyzed and the spatial distributions of changes in maize yield were revealed relative to the baseline from 1985 to 2006, applying crop model DSSAT (Decision Support System for Agrotechnology Transfer). Next, we examine the effects of the resulting maize production shocks in different countries; the market price of maize is simulated using GTAP to reveal the impacts of climate change on global crop trade. Finally, the future trend of maize yield and market price in the main breadbasket is assessed and the adaptation suggestions are put forward for maize cultivation.\n\n#### **Materials and methods**\n\n**Data processing.** In this study, historical daily weather data (1986–2005) are from the AgMERRA dataset. AgMERRA is a post-processing of the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. 
The dataset is proved to be suitable for agricultural modelling and features consistent, daily time-series data45.\n\nFor future (2020–2099), the original climate scenario data (Table 1) were extracted from output archives of five ESMs (including GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M) under four RCPs (RCP2.6, RCP4.5, RCP6.0, RCP8.5) retrieved from the CMIP website. The climate scenario data was interpolated into 0.5°×0.5° horizontal resolution and bias-corrected with respect to historical observations to remove systematic errors46. The data of maize-planting regions are from the gridded global dataset in 2000 by combining two data products47,48.\n\n**Simulation of climate scenarios with global warming by 1.5 °C and 2.0 °C.** In this study, climate data of global warming by 1.5 °C and 2.0 °C are determined according to the results of global climate models driven by typical concentration paths (RCPs) of greenhouse gas emissions. Eligible data are selected from a total of 20 sets of data under four RCP scenarios of five ESMs (including GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M), which estimate the temperature, precipitation and sunshine hours (Fig. 1).", "page_start": 1, "page_end": 1, "source_file": "pubmed9.pdf" }, { "text": "Firstly, the period of 1986–2005 is defined as the baseline, of which the simulated average value is recognized as 0.61 °C above pre-industrial (the period of 1850–1900) levels; the baseline is selected according to the accessibility and operability of data, which is used for the determination of the periods with global warming by 1.5 °C and 2.0 °C and the comparison of maize yield between different periods. 
Secondly, the simulated average value of 1986–2005 is subtracted from the simulated values of global mean temperature in the future years; then the values should be increased by 0.61 °C, which are the global warming results above pre-industrial levels; then 20 years moving average of the above results are calculated. Thirdly, the climate data of global warming by 1.5 °C is defined according to the principles provided in the fifth IPCC Assessment Report, for which it should be within 1.5–2.0 °C above pre-industrial levels at the end of the twenty-first century; the climate data of global warming by 2.0 °C is defined according to the principles provided in the fifth IPCC Assessment Report, for which it should be within 2.0–2.5 °C above pre-industrial levels at the end of the twenty-first century and the period of global warming by 2.0 °C should not be earlier than 2050. Finally, the climate models, scenarios and periods of global warming by 1.5 °C and 2.0 °C are separately confirmed; the data of global warming by 1.5 °C, simulated by IPSL-CM5A-LR under RCP2.6 scenario during 2020–2039 and simulated by GFDL-ESM2M under RCP4.5 scenario during 2041–2060; the data of global warming by 2.0 °C, simulated by NorESM1-M under RCP4.5 scenario during 2060–2079 and simulated by GFDL-ESM2M under RCP6.0 scenario during 2065–2084.\n\n**Simulation of maize yield using DSSAT.** According to the data of global warming by 1.5 °C and 2.0 °C selected above, we simulated global maize yield changes compared with the average yield during 1986–2005 on grid level using CERES-Maize, which is part of DSSAT version 4.649.\n\nThe inputs for DSSAT simulation include daily weather data, soil parameters, crop calendar data and management information. All the inputs are formatted at a 0.5°×0.5° grid resolution which are computed by high-performance computers. Weather data is from the AgMERRA dataset, including maximum and minimum temperatures, precipitation, total radiation and humidity. 
Crop calendar data were from the Center for Sustainability and Global Environment (SAGE), in which the existing observations of crop planting and harvesting dates are gridded formatted at a resolution of 5 min50. For management information, fertilizer applications, irrigation and other management practices are required. A crop-specific gridded dataset of nitrogen fertilizer application for the world was developed by integrating national and subnational fertilizer application data from a variety of sources, which is used to set up current fertilizer application rates for maize in each grid cell. Soil parameters are from the International Soil Profile Dataset (WISE), including soil texture, bulk density, pH, organic carbon content and fraction of calcium carbonate for each of five 20 cm thick soil layers51. All the soil data is allocated to be in accordance with the request of DSSAT simulation; the missing soil parameters for organic soils were adopted from FAO soil dataset.\n\nFirst maize yields across the world during the historical period 1986–2005 were simulated at the 0.5°×0.5° grid scale with two main production systems, including Spring maize and Summer maize. Historical national maize production is aggregated from simulated gridded yield and weighted by grid cell maize areas in 2000 from the gridded global dataset by combining two data products47. Second, genetic parameters of specific cultivars of maize from previous works were adopted for the initial parameters; model parameters related to crop genotype characteristics were calibrated and tuned following the method in Xiong et al.52, in which the simulated yields from 1986–2005 were comparable to the statistical data. Third, maize yields across the world were simulated under global warming by 1.5 °C and 2.0 °C. 
Finally, global and national maize yields were aggregated from gridded values; changes in national and global yields under global warming by 1.5 °C and 2.0 °C were calculated, comparing maize yield average for 1986–2005.\n\n**Simulation of market price using GTAP.** The yield changes for maize from the DSSAT models under 1.5 °C and 2.0 °C temperature increase are used to carry out simulations using competitive market for changes in production, market price, and self-sufficiency ratio of maize at national and global levels53,54. For this study, we use a comparative static analysis approach to simulate the impact of climate changes on the prices and trade of the major food crops under current economic conditions. Utilizing current economic conditions has the advantage of minimizing assumptions and model uncertainties related to future economic conditions55,56.\n\nThe original GTAP database doesn't include maize as a separate sector, rather it is combined with other coarse grains to form an \"other coarse grain\" sector. For this study, we updated the GTAP database by splitting maize from the original sector in the database, design an appropriate sectoral and regional aggregation scheme to the original database. The detailed method is given as follows:\n\nFirst, we improved the database by splitting maize from the existing sector \"other coarse grain\", following similar work using GTAP57–59 based on the routines from the Splitcom method60. In this procedure, the old flows of data both at national and trade levels are allocated between the new flows using weights. The national weights include the division of each unsplit user's use of the original split commodity among the new commodities; the division of unsplit inputs to the original industry between the new industries; the splitting of new industry's use of each new commodity. 
Maize use is mainly shared between feed, food, processing and others (seed, waste, etc.).\n\nTrade shares allocate the original slice of the split commodity into the new commodity for all elements of basic price value, tax, and margin. Finally, we used the RAS method for balancing the newly created database. The values for the national shares matrix were obtained from FAOSTAT. The trade shares matrix was calculated based on the data from UN Comtrade Database.\n\nSecond, our sectoral aggregation scheme for GTAP ensures that all the competing and complementing sectors for maize are present in the most disaggregated form. For example, for maize, other crops compete for inputs of production and both livestock and households are major users of maize. For regional aggregation, we kept the details for all the main producing, consuming, and trading regions, for maize.", "page_start": 2, "page_end": 2, "source_file": "pubmed9.pdf" } ] }, { "references": { "source_file": "pubmed9.pdf", "query": "What would be the price increase resulting from maize production changes due to 1.5°C and 2°C global temperature increase ?", "target_page": 10, "target_passage": "In response to production changes, the price of each commodity changes under both scenarios. At the global level, the market price for maize would increase by 0.7% and 3.4% under 1.5 °C scenario and 2.0 °C scenario, respectively", "chunk_present": { "presence": true, "index": 6 } }, "top_chunk": [ { "text": "## **OPEN**\n\n# **The impact of 1.5 °C and 2.0 °C global warming on global maize production and trade**\n\n**Kuo Li1*****, Jie Pan1 , Wei Xiong2 , Wei Xie3 & Tariq Ali3**\n\n**Climate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. 
Future climate scenario data was simulated by 5 climate models recommended by ISI-MIP under 4 RCP scenarios, in which the approximate scenarios with global warming by 1.5 °C and 2 °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by 1.5 °C and 2.0 °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under 2.0 °C scenario was much more serious than 1.5 °C scenario; the ratios of yield changes were separately 0.18% and − 10.8% under 1.5 °C and 2.0 °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the 2.0 °C scenario. The market price of maize would increase by around 0.7% and 3.4% under 1.5 °C and 2.0 °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.**\n\nIn the past hundred years, the global climate has experienced great changes1–4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health6–10. Global warming has gradually changed from a scientific issue to a major social issue of common concern to governments and people of all countries11–13. In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris14. 
Paris Agreement has indicated that it is urgent to hold the increase in global average temperature well below 2.0 °C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels.\n\nFaced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food security in the whole world15–20. Meanwhile, global production losses might lead to price shocks and trigger export restrictions21–24; an increasingly interconnected global food system25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security in the worldwide27–29. So, the impacts of climate changes on crop yields and prices have been of highly concerned. Numerous studies have revealed that the warming trend has negative impact on crop yields and global trade in most regions all over the world30–32. There are three main methods for impacts assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations17,33. Environment-controlled experiments are designed to observe the influence of climate factors on crops, such as drought, flood, heat stress, cold damage, elevated CO2 concentration, through which the impact mechanism of climate change on crops would be revealed and established23,34,35. Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected field sites or in selected regions36–39. The statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in different sites or counties to establish regression functions for crop responses predictions40–43. 
These researches have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\n1 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China. 2 International Maize and Wheat Improvement Center, Texcoco, Mexico. 3 Peking University, Beijing, China. *email: hqlk2000@163.com", "page_start": 0, "page_end": 0, "source_file": "pubmed9.pdf" }, { "text": "is 16.9% in which the temperature would go up more than 3.0 °C, most located in the high latitude regions of Northern Hemisphere; the area is rarely in which the temperature would go up between 0 and 1.0 °C.\n\nThere are apparent trends of humidification in most regions under global warming by 1.5 °C and 2.0 °C; but the drought risk also should be taken seriously in the other regions. Under global warming by 1.5 °C the area is 73.6% of the whole world in which the precipitation would increase, most located in the Northern Hemisphere; the area is 53.7% of the whole world in which the precipitation would increase by less than 50 mm; however, the area is 26.4% of whole world in which the rainfall would decrease, mainly located in the Southern Hemisphere and the middle regions of Northern Hemisphere. The distribution of precipitation under global warming by 2.0 °C is similar with the situation under global warming by 1.5 °C. The drought-threatened area would increase by 28.5% under global warming by 2.0 °C, especially in the middle and low latitude of the Northern Hemisphere; the area would expand to 26%, in which the precipitation increases more than 50 mm. 
In other words, the extreme rainfall events (such as drought, rainstorm) under global warming by 2.0 °C would be more serious than those under global warming by 1.5 °C, which is what we should pay more attention to.\n\n**Yield change of maize under global warming by 1.5 °C and 2.0 °C.** Maize production is affected by climate change apparently. According to the simulation results of CERES-maize, the yield of maize would decrease in the worldwide relative to 1986–2005 under global warming by 2.0 °C; it would increase little under global warming by 1.5 °C. The distributions of maize yield loss under the two scenarios are similar to each other, mostly located in the middle and low latitude, which are the main regions for maize planting in the world. The loss risk of maize under global warming by 2.0 °C is much more serious than that under global warming of 1.5 °C. However, there are increasing potentials of maize yield in many regions, nearly half of the whole maize planting area in the world, in which the climate situation would become more proper for maize under global", "page_start": 4, "page_end": 4, "source_file": "pubmed9.pdf" }, { "text": "**Figure 10.** Distributions of changes in run-off for mean flows simulated by the JULES ecosystem–hydrology model under the ensemble of six climate projections at 1.5°C (blue) and 2°C (orange) global warming. Boxes show the 25th and 75th percentile changes, whiskers show the range, circles show the four projections that do not define the ends of the range, and crosses show the ensemble means. Numbers in square brackets show the ensemble-mean flow in the baseline, in millimetres of rain equivalent.\n\nall members (figure 12). 
This is not the case for the precipitation and run-off results; for those quantities, there is substantial overlap in the ranges of changes at 2°C and 1.5°C, so there is not a consistent picture of how much wetter or drier the world is projected to be in this ensemble, even though it involves a single atmosphere model.\n\nFor TXx, the difference between 2°C and 1.5°C global warming is larger than the 0.5°C difference in global mean temperature across most of the land surface in all ensemble members (figure 14). Although some ensemble members simulate local temperatures to be higher at 1.5°C global warming than 2°C in some small regions, these are relatively localized and most regions are cooler at 1.5°C global warming than 2°C. In many regions, the difference is between 0.5°C and 1.0°C, but many other regions see larger differences. In several ensemble members, the difference is 1.5°C, 2°C or larger in large parts of North America, South America, Europe and China. For example, over parts of Europe, where annual maximum daily temperature was projected to increase by over 5°C for a 2°C global warming, the local increase is limited to 3–4°C for 1.5°C global warming. Limiting global warming by half a degree Celsius would, therefore, limit maximum temperatures by three or four times as much in those areas (figure 14).\n\nAt 1.5°C global warming, although the increases in TXx are smaller than at 2°C, these increases show similar geographical patterns as for 2°C in all ensemble members, with larger changes in continental interiors especially in the mid-latitudes (not shown).\n\nThe percentage of days exceeding the 90th percentile of daily temperature (Tx90p) also increases less at 1.5°C global warming than at 2°C (figure 15). The largest reductions are in the tropics, where the largest increase was seen at 2°C; whereas at 2°C global warming, 50% or more rsta.royalsocietypublishing.org\n\n *Phil. Trans. R. Soc. 
A* **376**: 20160452", "page_start": 15, "page_end": 15, "source_file": "pubmed11.pdf" }, { "text": "**Figure 7.** Price change on maize in main continents under global warming by 1.5 °C and 2.0 °C.\n\n**Figure 8.** Changes in Self-sufficiency ratio of maize in main countries under global warming by 1.5 °C and 2.0 °C.\n\nmeantime, the huge differences in yield changes in different regions provide a small chance for the world, especially under global warming by 1.5 °C. In the near future, if the global temperature can be effectively controlled under 1.5 °C warming scenario, there would be an increase in the potential for maize yield in the worldwide. All regions and countries should take actions to reduce the yield loss risk. For the yield-increasing regions, the potentials of climate resources should be fully utilized to guarantee maize yield under future scenarios; for the yield-reducing regions, the targeted adaptation actions should be taken in advance under global warming by 1.5 °C and 2.0 °C.\n\nMeanwhile, the risk of price fluctuations caused by global corn trade due to future climate change should be paid more attention to, especially for developing and undeveloped countries. In the view of supply and demand, the population would go up quickly in the next 30 years; the demand for maize would increase hugely; however, the supply of maize would go down in the future, especially under global warming by 2.0 °C; it would intensify the contradiction between supply and demand, which would threaten the food security and sustainable development in the whole world.\n\nIn this study, 5 climate models are selected, which are recommended by ISI-MIP (The Inter-Sectoral Impact Model Intercomparison Project); compared with other climate models, the five models could more effectively support impact assessment in different sectors and provide more reliable results. 
Based on the simulation results", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed9.pdf" - }, - { - "text": "**Figure 13.** Global mean percentage changes relative to 1981–2010 in (*a*) precipitation over land, (*b*) mean run-off flows, (*c*) low run-off lows (10th percentile), at 2°C and 1.5°C global warming.\n\nthis comparison of the number of 'unprecedented' HCVI values at 1.5°C and 2°C should be treated with caution. Nevertheless, the finding that some countries see HCVI values higher at either or both 1.5°C and 2°C compared to the baseline may indicate that climate change has the potential to lead to unprecedented levels of vulnerability to food insecurity in some countries. More robustly, it can be concluded that by this metric, overall worldwide vulnerability to food insecurity generally increases with global warming, and for approximately three-quarters of countries assessed, this increase is larger at 2°C than 1.5°C.\n\nIn the ensemble mean, changes in mean, low and high flows are generally larger at 2°C global warming compared to 1.5°C (figure 20). This is often the case for both increases and decreases in flows—increasing the level of global warming magnifies the pattern of river flow changes, although not in all cases.\n\nThe range of projected mean run-off changes is larger for 2°C than 1.5°C in many basins, but this was not always the case, with many basins showing similar or smaller ranges at 2°C compared with 1.5°. Moreover, the ranges overlap substantially, so in terms of the set of", - "page_start": 18, - "page_end": 18, - "source_file": "pubmed11.pdf" - }, - { - "text": "**18**\n\n**Figure 12.** Comparison of global mean changes in climate extremes indices relative to 1981–2010 at 2°C and 1.5°C global warming for individual ensemble members and ensemble mean. 
(*a*) Change in annual daily maximum temperature; (*b*) percentage of days with maximum temperature above 90th percentile for 1981–2010; (*c*) change in consecutive dry days; (*d*) change in annual maximum 5-day rainfall.\n\nFor precipitation, generally similar changes are seen at 1.5°C global warming as at 2°C, but smaller in magnitude (compare figures 16 and 4), suggesting that most of these changes are a response to radiatively forced climate change as opposed to internal climate variability. However, some localized changes do vary in sign between the GWLs, such as in South Australia, suggesting a possible dominance of internal variability over the global warming signal in these places.\n\nWhere Rx5day increases, the increases are projected to be larger—in some cases approximately double—at 2°C global warming than 1.5°C. Where Rx5day decreases, again the decreases are projected to be larger at 2°C global warming than 1.5°C (figure 17).\n\nOf the 122 countries assessed, 93 have smaller ensemble-mean HCVI calculated at 1.5°C global warming than at 2°C, indicating an ensemble consensus that 76% of assessed countries would see a smaller increase in vulnerability to food insecurity if global warming were limited to 1.5°C (figures 18 and 19). Conversely, 24% of countries would, by this metric, see the same or higher vulnerability to food insecurity at 1.5°C than 2°C. Of these, some are countries where HCVI is projected to be lower at 2°C global warming than in the baseline. For example, in Mali the ensemble-mean baseline HCVI of 0.83 increased slightly to 0.85 at 1.5°C then reduced to 0.81 at 2°C. In some countries, the ensemble-mean HCVI happened to be identical at both warming levels. In Chad, for example, the baseline HCVI of 0.89 increased to 0.91 at both 1.5°C and 2°C.\n\nAs noted above, four countries saw ensemble-mean HCVI values at 2°C above any seen in the baseline, and this number increased to seven at 1.5°C. 
The same four countries with 'unprecedented' HCVI values at 2°C also saw 'unprecedented' values at 1.5°C; these were Oman, Bangladesh, Mauritania and Yemen. These were joined by Myanmar, India and Cambodia as having 'unprecedented' values at 1.5°C. The role of internal climate variability in the HCVI results needs to be assessed, as does the effect of potential nonlinear interactions between the flood and drought metric. Until the reasons behind these country-specific results are understood,", - "page_start": 17, - "page_end": 17, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Figure 6.** Yield loss rates on maize in 6 continents under global warming by 1.5 °C and 2.0 °C.\n\n**Market price of maize in main countries.** In this study, we elaborate on the endogenous response of our economic models. Tis response can be theoretically elaborated as: due to the efect of climate change on yield reduction (improvement), the supply curve moves lefward (rightward), reducing (increasing) production and raising (lowering) prices. In response, the consumers decrease (increase) their consumption of more expensive (cheaper) crops and shifing to other (increase the use of the same) crops. Producers, at the same time, respond by changing farm-level management practices and increasing (decreasing) the amount of acreage under these crops. At a global scale, the reallocation of production and consumption through international trade further alters climate change impacts on global agriculture. Tis also alters the self-sufciency ratios of each country/ region due to climate change.\n\nIn response to production changes, the price of each commodity changes under both scenarios. At the global level, the market price for maize would increase by 0.7% and 3.4% under 1.5 °C scenario and 2.0 °C scenario, respectively, which would vary quite largely among diferent countries and regions under both climate change scenarios (Fig. 7). 
Particularly, the market price would increase by around 22% and 27% in Iran under 2.0 °C scenario and 1.5 °C scenario, respectively. Iran is also the region where the highest yield reduction is observed due to climate change. Market prices for maize in India, Mexico, Russia, South Africa and the Rest of Africa would decrease significantly under both scenarios, as their yields improve due to climate effects. Along with the domestic production, the climate change will also induce changes in international trade of maize, resulting in changing levels of self-sufficiency ratios (SSR) for each country/region. By SSR, we mean the ratio of domestically produced commodity, to the sum of net imports and domestic production. In our scenario analysis, generally, the countries that face positive effects on yields and/or are relatively less dependent on imports, are positively (less negatively) affected by climate change. For example, maize SSR for Ukraine, India, Russia and Mexico would improve under both scenarios (Fig. 8). Whereas the self-sufficiency ratios of maize for Southeast Asia, Bangladesh and Iran will worsen under both scenarios. China's SSR for maize stays almost similar to the level as the baseline.\n\n#### **Discussion and conclusion**\n\n**Discussion.** Our analysis highlights the effects of climate change on global- and regional-specific maize yields and the associated economic consequences in 1.5 °C and 2.0 °C -warming scenarios. We find that the reduction risk of maize yield under global warming by 2.0 °C is much more serious than that under global warming by 1.5 °C. On the one hand, the larger the temperature rise, the greater the evapotranspiration would be. Although the precipitation is also increasing, the evapotranspiration would become more intense. The limitation of water supply for maize growth leads to the decline of yield. 
On the other hand, relative to global warming by 1.5 °C, maize production would be faced with more serious and frequent extreme climate events, such as drought and heat waves, which would increase the risk of corn yield reduction under global warming by 2.0 °C. In the\n\nVol:.(1234567890)",
      "page_start": 9,
      "page_end": 9,
      "source_file": "pubmed9.pdf"
    },
    {
      "text": "that maize yield would decrease severely. For the whole world, more mitigation and adaptation actions should be taken from now on. Food security would be a significant challenge in this century.\n\n**Yield change of maize in main countries.** There are huge differences in impacts on maize yield under climate change, which would influence the food crisis in different regions. There are 159 countries in the whole world which plant maize. The gross yield of maize in the top 20 countries accounts for more than 90% of the total yield in the 159 countries. So, the changes in the top 20 countries under future scenarios would influence the food security of the whole world (Fig. 5). From the results simulated by CRESE-maize under global warming by 1.5 °C, there would be 75 countries facing yield loss of maize; the mean yield loss rate would become 33.5%. There would be 84 countries experiencing yield increases. Overall, the global maize yield would slightly increase. Under global warming by 2.0 °C, there would be 82 countries facing yield loss of maize, for which the mean yield loss rate is approximate to that under global warming by 1.5 °C. There would be 77 countries experiencing yield increase; however, the mean yield increase is apparently smaller than that under global warming by 1.5 °C. Generally, the global maize yield would decrease. The results show that the adverse effect of warming up 2.0 °C on global maize production is far greater than warming up 1.5 °C. 
It is important to take actions to develop forward-looking adaptation measures to cope with future climate change.\n\nAccording to statistics in 2018, the gross maize yield in the top 5 countries is almost 80% of the total maize yield of the whole world. The United States accounts for more than 32%; China accounts for about 24%; Brazil, Argentina and Mexico account for about 23%. The fluctuation of maize production in these five top countries will have a significant impact on the global maize trade. Based on the simulation results, compared to 1986–2005, the maize yield in China, Brazil and Argentina would decrease under global warming by 1.5 °C; the yield loss rate would reach more than 20% in Brazil; Argentina would decrease by 14.7%; China would decrease by 3.7%. However, there would be increasing trends in the United States and Mexico; the change in the United States would not be significant and the maize yield would increase by 0.5%; the yield increasing rate would exceed 50% in Mexico. Overall, the gross maize yield in the top 5 countries would decrease by 2% under global warming",
      "page_start": 7,
      "page_end": 7,
      "source_file": "pubmed9.pdf"
    },
    {
      "text": "A detailed investigation of these factors is beyond the scope of this paper; nevertheless, this result illustrates the important point that the nature and patterns of the climate forcing at a particular level of global warming can play an important role in determining the patterns of regional impacts.\n\n## 5. Conclusion\n\nThe higher-resolution HadGEM3 simulations project consistent increases in temperature-related extremes, with larger changes at 2°C compared to 1.5°C and local changes being larger than the global annual mean. 
There is a higher degree of spatial variation in our projections compared with CMIP5-based studies.\n\nIn the model projections examined here, changes relating to the water cycle are complex, both in their geographical pattern and in the variation between different models. The length of flooding events generally increases across the world in all models, but maximum rainfall can either increase or decrease depending on location. Global patterns of increase and decrease show some consistency between the different GWLs, but also some local differences. Worldwide, most impacts broadly tend to increase with global warming in most areas. For global mean changes, even when the sign of change is uncertain, individual realizations generally show reduced impact at 1.5°C compared with 2°C. However, this does not always hold even at the scale of major global river basins.\n\nVulnerability to food insecurity increases more at 2°C global warming than 1.5°C in approximately three-quarters of countries assessed. The vulnerability increase can arise from increases in either flooding or drought. Reduced drought leads to decreased vulnerability in a limited number of cases.\n\nMost simulations here project a general increase in mean streamflow in most of the basins examined, but with a number of notable exceptions in the tropics. While flows in the Ganges are consistently projected to increase by 30–110% at 2°C, Amazon flows could either increase by 3% or decrease by 25%. Ensemble-mean changes in river flow often do not give a full impression of the magnitude of changes that may be possible, so adaptation planning in particular should not rely on ensemble-mean projections and instead consider a range of outcomes. 
The seasonal low streamflows also increase in many basins, but not as many as for the mean flows—many basins see decreased low flows in some or all projections.\n\nBroadly, changes in weather extremes at 1.5°C global warming could be estimated by scaling back the impacts at 2°C, if this is done with individual ensemble members rather than the ensemble mean. However, this was not always the case for impacts that depend on more complex processes or interactions between more than one climate variable, such as run-off and an indicator of vulnerability to food insecurity.\n\nData accessibility. This article has no additional data.\n\nCompeting interests. We declare we have no competing interests.\n\nFunding. This research received funding from the European Union Seventh Framework Programme FP7/2007–2013 under grant agreement no. 603864 (HELIX: 'High-End cLimate Impacts and eXtremes'; www.helixclimate.eu). The work of R.A.B., C.B., J.C., L.G., K.L. and K.R. was additionally supported by the Joint UK BEIS/Defra Met Office Hadley Centre Climate Programme (GA01101).\n\nAcknowledgements. The authors thank Ed Pope, Jason Lowe and Dann Mitchell for advice and discussion, Alissa Haward and Maria Pearce for project management and administration of HELIX, and two anonymous reviewers whose comments substantially improved the paper.\n\n## References\n\n- 1. IPCC. 2014 Summary for policymakers. In *Climate change 2014: impacts, adaptation, and vulnerability. Part A: global and sectoral aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change* (eds CB Field *et al*.), pp. 1–32. 
Cambridge, UK: Cambridge University Press.",
      "page_start": 24,
      "page_end": 24,
      "source_file": "pubmed11.pdf"
    },
    {
      "text": "| Model | Research institute | Country | Horizontal resolution |\n| --- | --- | --- | --- |\n| GFDL-ESM2M | Geophysical Fluid Dynamics Laboratory | The United States | 144×90 |\n| HadGEM2-ES | Hadley Center for Climate Prediction and Research | The United Kingdom | 192×145 |\n| IPSL-CM5A-LR | L'Institute Pierre-Simon Laplace | France | 96×96 |\n| NorESM1-M | Norway Climate Center | Norway | 144×96 |\n| MIROC-ESM | Center for Climate System Research, National Institute for Environmental Studies, and Frontier Research Center for Global Change | Japan | 128×64 |\n\n**Table 1.** Basic information of 5 ESMs in CMIP5. Horizontal resolution means the number of longitudinal grids×the number of latitudinal grids.\n\n**Figure 1.** Changes of global temperature of 20 years moving average from 2020 to 2099 simulated by 5 ESMs under 4 RCP scenarios. Note: The black horizontal dashed lines: global warming by 1.5 °C and 2.0 °C; the black vertical solid line: the years when global warming reaches 1.5 °C and 2.0 °C simulated by the selected models and scenarios.\n\nAlthough there is already plenty of research on the impacts of global warming by 1.5 °C, including comparisons of the impacts of global warming by 1.5 °C versus 2.0 °C44, it is necessary to do more quantitative impact assessments of global warming by 1.5 °C and 2.0 °C on crop yield and market price to address research gaps and support the requirement of the scientific community and governments. In this paper, the future climate situations were selected and analyzed which are the approximate scenarios with global warming by 1.5 °C and 2.0 °C, based on the simulation results from 5 climate models recommended by ISI-MIP under 4 RCP scenarios. 
Then the per unit yield changes of maize all over the world under global warming by 1.5 °C and 2.0 °C were analyzed and the spatial distributions of changes in maize yield were revealed relative to the baseline from 1985 to 2006, applying the crop model DSSAT (Decision Support System for Agrotechnology Transfer). Next, we examine the effects of the resulting maize production shocks in different countries; the market price of maize is simulated using GTAP to reveal the impacts of climate change on global crop trade. Finally, the future trend of maize yield and market price in the main breadbasket is assessed and the adaptation suggestions are put forward for maize cultivation.\n\n#### **Materials and methods**\n\n**Data processing.** In this study, historical daily weather data (1986–2005) are from the AgMERRA dataset. AgMERRA is a post-processing of the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. The dataset is proved to be suitable for agricultural modelling and features consistent, daily time-series data45.\n\nFor the future (2020–2099), the original climate scenario data (Table 1) were extracted from output archives of five ESMs (including GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M) under four RCPs (RCP2.6, RCP4.5, RCP6.0, RCP8.5) retrieved from the CMIP website. The climate scenario data was interpolated into 0.5°×0.5° horizontal resolution and bias-corrected with respect to historical observations to remove systematic errors46. The data of maize-planting regions are from the gridded global dataset in 2000, created by combining two data products47,48.\n\n**Simulation of climate scenarios with global warming by 1.5 °C and 2.0 °C.** In this study, climate data of global warming by 1.5 °C and 2.0 °C are determined according to the results of global climate models driven by representative concentration pathways (RCPs) of greenhouse gas emissions. 
Eligible data are selected from a total of 20 sets of data under four RCP scenarios of five ESMs (including GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M), which estimate the temperature, precipitation and sunshine hours (Fig. 1).",
      "page_start": 1,
      "page_end": 1,
      "source_file": "pubmed9.pdf"
    }
  ]
  },
  {
    "references": {
      "source_file": "wikipedia1.pdf",
      "query": "What is a formal fallacy ?",
      "target_page": 8,
      "target_passage": "For formal fallacies, the source of the error is found in the form of the argument",
      "chunk_present": {
        "presence": true,
        "index": 0
      }
    },
    "top_chunk": [
      {
        "text": "burglar broke into the house last night, got hungry on the job, and had a midnight snack, would also explain the state of the kitchen. But this conclusion is not justified because it is not the best or most likely explanation.[82][83]\n\n# **Fallacies**\n\nNot all arguments live up to the standards of correct reasoning. When they do not, they are usually referred to as fallacies. Their central aspect is not that their conclusion is false but that there is some flaw with the reasoning leading to this conclusion.[84] So the argument \"it is sunny today; therefore spiders have eight legs\" is fallacious even though the conclusion is true. Some theorists, like John Stuart Mill, give a more restrictive definition of fallacies by additionally requiring that they appear to be correct.[85] This way, genuine fallacies can be distinguished from mere mistakes of reasoning due to carelessness. 
This explains why people tend to commit fallacies: because they have an alluring element that seduces people into committing and accepting them.[86] However, this reference to appearances is controversial because it belongs to the field of psychology, not logic, and because appearances may be different for different people.[87]\n\nFallacies are usually divided into formal and informal fallacies.[38] For formal fallacies, the source of the error is found in the *form* of the argument. For example, denying the antecedent is one type of formal fallacy, as in \"if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore Othello is not male\".[88] But most fallacies fall into the category of informal fallacies, of which a great variety is discussed in the academic literature. The source of their error is usually found in the *content* or the *context* of the argument.[89] Informal fallacies are sometimes categorized as fallacies of ambiguity, fallacies of presumption, or fallacies of relevance. For fallacies of ambiguity, the ambiguity and vagueness of natural language are responsible for their flaw, as in \"feathers are light; what is light cannot be dark; therefore feathers cannot be dark\".[90] Fallacies of presumption have a wrong or unjustified premise but may be valid otherwise.[91] In the case of fallacies of relevance, the premises do not support the conclusion because they are not relevant to it.[92]\n\nYoung America's dilemma: Shall I be wise and great, or rich and powerful? (poster from 1901) This is an example of a false dilemma: an informal fallacy using a disjunctive premise that excludes viable alternatives.\n\n# **Definitory and strategic rules**\n\nThe main focus of most logicians is to study the criteria according to which an argument is correct or incorrect. A fallacy is committed if these criteria are violated. In the case of formal logic, they are known as *rules of inference*. 
[93] They are definitory rules, which determine whether an inference is correct or which inferences are allowed. Definitory rules contrast with strategic rules. Strategic rules specify which inferential moves are necessary to reach a given conclusion based on a set of premises. This distinction does not just apply to logic but also to games. In chess, for example, the definitory rules dictate that bishops may only move diagonally. The strategic rules, on the other hand, describe how the allowed", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia1.pdf" - }, - { - "text": "# **Logic**\n\n**Logic** is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language. When used as a countable noun, the term \"a logic\" refers to a specific logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics.\n\nLogic studies valid forms of inference like *modus ponens*.\n\nLogic studies arguments, which consist of a set of premises that leads to a conclusion. An example is the argument from the premises \"it's Sunday\" and \"if it's Sunday then I don't have to work\" leading to the conclusion \"I don't have to work\".[1] Premises and conclusions express propositions or claims that can be true or false. An important feature of propositions is their internal structure. For example, complex propositions are made up of simpler propositions linked by logical vocabulary like (and) or (if...then). 
Simple propositions also have parts, like \"Sunday\" or \"work\" in the example. The truth of a proposition usually depends on the meanings of all of its parts. However, this is not the case for logically true propositions. They are true only because of their logical structure independent of the specific meanings of the individual parts.\n\nArguments can be either correct or incorrect. An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. This is not the case for ampliative arguments, which arrive at genuinely new information not found in the premises. Many arguments in everyday discourse and the sciences are ampliative arguments. They are divided into inductive and abductive arguments. Inductive arguments are statistical generalization—such as inferring that all ravens are black, based on many individual observations of black ravens.[2] Abductive arguments are inferences to the best explanation—for example, when a doctor concludes that a patient has a certain disease, as the best explanation for the symptoms that they are observed to suffer. [3] Arguments that fall short of the standards of correct reasoning often embody fallacies. Systems of logic are theoretical frameworks for assessing the correctness of arguments.\n\nLogic has been studied since antiquity. Early approaches include Aristotelian logic, Stoic logic, Nyaya, and Mohism. Aristotelian logic focuses on reasoning in the form of syllogisms. It was considered the main system of logic in the Western world until it was replaced by modern formal logic, which has its roots in the work of late 19th-century mathematicians such as Gottlob Frege. Today, the most commonly used system is classical logic. It consists of propositional logic and first-order logic. Propositional logic only considers logical relations between full propositions. 
First-order logic also takes the internal parts of",
      "page_start": 0,
      "page_end": 0,
      "source_file": "wikipedia1.pdf"
    },
    {
      "text": "new formal systems have been proposed. There are disagreements about what makes a formal system a logic.[22] For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense.[23]\n\nFormal logic needs to translate natural language arguments into a formal language, like first-order logic, to assess whether they are valid. In this example, the letter \"c\" represents Carmen while the letters \"M\" and \"T\" stand for \"Mexican\" and \"teacher\". The symbol \"∧\" has the meaning of \"and\".\n\n# **Informal logic**\n\nWhen understood in a wide sense, logic encompasses both formal and informal logic.[24] Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. Its main focus is on everyday discourse.[25] Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments.[26] In this regard, it considers problems that formal logic on its own is unable to address.[27] Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies.[28]\n\nMany characterizations of informal logic have been suggested but there is no general agreement on its precise definition.[29] The most literal approach sees the terms \"formal\" and \"informal\" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language.[30] Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form.[31] On this view, the argument \"Birds fly. Tweety is a bird. 
Therefore, Tweety flies.\" belongs to natural language and is examined by informal logic. But the formal translation \"(1) ; (2) ; (3) \" is studied by formal logic.[32] The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent.[33] Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation.[34]\n\nAnother characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic.[35] Non-deductive arguments make their conclusion probable but do not ensure that it is true. An example is the inductive argument from the empirical observation that \"all ravens I have seen so far are black\" to the conclusion \"all ravens are black\".[36]\n\nA further approach is to define informal logic as the study of informal fallacies. [37] Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument.[38] A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy \"you are either with us or against us; you are not with us; therefore, you are against us\".[39] Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia1.pdf" - }, - { - "text": "Paraconsistent logics are logical systems that can deal with contradictions. 
They are formulated to avoid the principle of explosion: for them, it is not the case that anything follows from a contradiction.[139] They are often motivated by dialetheism, the view that contradictions are real or that reality itself is contradictory. Graham Priest is an influential contemporary proponent of this position and similar views have been ascribed to Georg Wilhelm Friedrich Hegel. [140]\n\n# **Informal**\n\nInformal logic is usually carried out in a less systematic way. It often focuses on more specific issues, like investigating a particular type of fallacy or studying a certain aspect of argumentation. Nonetheless, some frameworks of informal logic have also been presented that try to provide a systematic characterization of the correctness of arguments.[141]\n\nThe *pragmatic* or *dialogical approach* to informal logic sees arguments as speech acts and not merely as a set of premises together with a conclusion.[142] As speech acts, they occur in a certain context, like a dialogue, which affects the standards of right and wrong arguments.[143] A prominent version by Douglas N. Walton understands a dialogue as a game between two players. The initial position of each player is characterized by the propositions to which they are committed and the conclusion they intend to prove. Dialogues are games of persuasion: each player has the goal of convincing the opponent of their own conclusion.[144] This is achieved by making arguments: arguments are the moves of the game.[145] They affect to which propositions the players are committed. A winning move is a successful argument that takes the opponent's commitments as premises and shows how one's own conclusion follows from them. This is usually not possible straight away. For this reason, it is normally necessary to formulate a sequence of arguments as intermediary steps, each of which brings the opponent a little closer to one's intended conclusion. 
Besides these positive arguments leading one closer to victory, there are also negative arguments preventing the opponent's victory by denying their conclusion.[144] Whether an argument is correct depends on whether it promotes the progress of the dialogue. Fallacies, on the other hand, are violations of the standards of proper argumentative rules.[146] These standards also depend on the type of dialogue. For example, the standards governing the scientific discourse differ from the standards in business negotiations.[147]\n\nThe *epistemic approach* to informal logic, on the other hand, focuses on the epistemic role of arguments.[148] It is based on the idea that arguments aim to increase our knowledge. They achieve this by linking justified beliefs to beliefs that are not yet justified.[149] Correct arguments succeed at expanding knowledge while fallacies are epistemic failures: they do not justify the belief in their conclusion.[150] For example, the fallacy of begging the question is a *fallacy* because it fails to provide independent justification for its conclusion, even though it is deductively valid.[151] In this sense, logical normativity consists in epistemic success or rationality. [149] The Bayesian approach is one example of an epistemic approach.[152] Central to Bayesianism is not just whether the agent believes something but the degree to which they believe it, the so-called *credence*. Degrees of belief are seen as subjective probabilities in the believed proposition, i.e. how certain the agent is that the proposition is true.[153] On this view, reasoning can be interpreted as a process of changing one's credences, often in reaction to new", - "page_start": 12, - "page_end": 12, - "source_file": "wikipedia1.pdf" - }, - { - "text": "propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. 
Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.\n\n# **Definition**\n\nThe word \"logic\" originates from the Greek word *logos*, which has a variety of translations, such as reason, discourse, or language. [4] Logic is traditionally defined as the study of the laws of thought or correct reasoning, [5] and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences.[6] An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion.[7] These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments.[8] Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic.[9]\n\n# **Formal logic**\n\nFormal logic is also known as symbolic logic and is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content.[10]\n\nFormal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false.[11] For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. 
[12] For example, modus ponens is a rule of inference according to which all arguments of the form \"(1) *p*, (2) if *p* then *q*, (3) therefore *q*\" are valid, independent of what the terms *p* and *q* stand for. [13] In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. [14] A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim \"either it is raining, or it is not\".[15] These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from *p* to *q* is deductively valid then the claim \"if *p* then *q*\" is a logical truth.[16]\n\nFormal logic uses formal languages to express and analyze arguments.[17] They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. [18] This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid.[19] Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed.[20]\n\nThe term \"logic\" can also be used in a slightly different sense as a countable noun. In this sense, *a logic* is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them.[21] Starting in the late 19th century, many", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia1.pdf" - }, - { - "text": "argument is made up of a chain of simple arguments. 
This means that the conclusion of one argument acts as a premise of later arguments. For a complex argument to be successful, each link of the chain has to be successful.[43]\n\nArguments and inferences are either correct or incorrect. If they are correct then their premises support their conclusion. In the incorrect case, this support is missing. It can take different forms corresponding to the different types of reasoning.[62] The strongest form of support corresponds to deductive reasoning. But even arguments that are not deductively valid may still be good arguments because their premises offer nondeductive support to their conclusions. For such cases, the term *ampliative* or *inductive reasoning* is used.[63] Deductive arguments are associated with formal logic in contrast to the relation between ampliative arguments and informal logic.[64]\n\nArgument terminology used in logic\n\n#### **Deductive**\n\nA deductively valid argument is one whose premises guarantee the truth of its conclusion.[11] For instance, the argument \"(1) all frogs are amphibians; (2) no cats are amphibians; (3) therefore no cats are frogs\" is deductively valid. For deductive validity, it does not matter whether the premises or the conclusion are actually true. So the argument \"(1) all frogs are mammals; (2) no cats are mammals; (3) therefore no cats are frogs\" is also valid because the conclusion follows necessarily from the premises.[65]\n\nAccording to an influential view by Alfred Tarski, deductive arguments have three essential features: (1) they are formal, i.e. they depend only on the form of the premises and the conclusion; (2) they are a priori, i.e. no sense experience is needed to determine whether they obtain; (3) they are modal, i.e. 
that they hold by logical necessity for the given propositions, independent of any other circumstances.[66]\n\nBecause of the first feature, the focus on formality, deductive inference is usually identified with rules of inference.[67] Rules of inference specify the form of the premises and the conclusion: how they have to be structured for the inference to be valid. Arguments that do not follow any rule of inference are deductively invalid.[68] The modus ponens is a prominent rule of inference. It has the form \"*p*; if *p*, then *q*; therefore *q*\".[69] Knowing that it has just rained ( ) and that after rain the streets are wet ( ), one can use modus ponens to deduce that the streets are wet ( ).[70]\n\nThe third feature can be expressed by stating that deductively valid inferences are truth-preserving: it is impossible for the premises to be true and the conclusion to be false.[71] Because of this feature, it is often asserted that deductive inferences are uninformative since the conclusion cannot arrive at new information not already present in the premises.[72] But this point is not always accepted since it would mean, for example, that most of mathematics is uninformative. A different characterization distinguishes", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia1.pdf" - }, - { - "text": "moves may be used to win a game, for instance, by controlling the center and by defending one's king. [94] It has been argued that logicians should give more emphasis to strategic rules since they are highly relevant for effective reasoning.[93]\n\n### **Formal systems**\n\nA formal system of logic consists of a formal language together with a set of axioms and a proof system used to draw inferences from these axioms.[95] In logic, axioms are statements that are accepted without proof. 
They are used to justify other statements.[96] Some theorists also include a semantics that specifies how the expressions of the formal language relate to real objects.[97] Starting in the late 19th century, many new formal systems have been proposed.[98]\n\nA *formal language* consists of an *alphabet* and syntactic rules. The alphabet is the set of basic symbols used in expressions. The syntactic rules determine how these symbols may be arranged to result in wellformed formulas.[99] For instance, the syntactic rules of propositional logic determine that \" \" is a well-formed formula but \" \" is not since the logical conjunction requires terms on both sides.[100]\n\nA *proof system* is a collection of rules to construct formal proofs. It is a tool to arrive at conclusions from a set of axioms. Rules in a proof system are defined in terms of the syntactic form of formulas independent of their specific content. For instance, the classical rule of conjunction introduction states that follows from the premises and . Such rules can be applied sequentially, giving a mechanical procedure for generating conclusions from premises. There are different types of proof systems including natural deduction and sequent calculi. [101]\n\nA *semantics* is a system for mapping expressions of a formal language to their denotations. In many systems of logic, denotations are truth values. For instance, the semantics for classical propositional logic assigns the formula the denotation \"true\" whenever and are true. From the semantic point of view, a premise entails a conclusion if the conclusion is true whenever the premise is true.[102]\n\nA system of logic is sound when its proof system cannot derive a conclusion from a set of premises unless it is semantically entailed by them. In other words, its proof system cannot lead to false conclusions, as defined by the semantics. 
A system is complete when its proof system can derive every conclusion that is semantically entailed by its premises. In other words, its proof system can lead to any true conclusion, as defined by the semantics. Thus, soundness and completeness together describe a system whose notions of validity and entailment line up perfectly. [103]\n\n# **Systems of logic**\n\nSystems of logic are theoretical frameworks for assessing the correctness of reasoning and arguments. For over two thousand years, Aristotelian logic was treated as the canon of logic in the Western world,[104] but modern developments in this field have led to a vast proliferation of logical systems.[105] One prominent categorization divides modern formal logical systems into classical logic, extended logics, and deviant logics. [106]\n\n# **Aristotelian**", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia1.pdf" - }, - { - "text": "sentence would be true or false. One of its central methodological assumptions is the principle of compositionality. It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. For example, the meaning of the verb phrase \"walk and sing\" depends on the meanings of the individual expressions \"walk\" and \"sing\". Many theories in formal semantics rely on model theory. This means that they employ set theory to construct a model and then interpret the meanings of expression in relation to the elements in this model. For example, the term \"walk\" may be interpreted as the set of all individuals in the model that share the property of walking. 
Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language.[173]\n\n#### **Epistemology of logic**\n\nThe epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true.[174] This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false.[175] The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori. [176] In this regard, it\n\nConjunction (AND) is one of the basic operations of Boolean logic. It can be electronically implemented in several ways, for example, by using two transistors.\n\nis often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths.[177] A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary. [178]\n\nSome theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. According to this view, they may be explored by studying general patterns of the fundamental sciences. For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula is equivalent to . This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic.[179]\n\n# **History**\n\nLogic was developed independently in several cultures during antiquity. 
One major early contributor was Aristotle, who developed *term logic* in his *Organon* and *Prior Analytics*. [183] He was responsible for the introduction of the hypothetical syllogism[184] and temporal modal logic.[185] Further innovations include inductive logic[186] as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. [187] It has now been superseded by later work, though many of its key insights are still present in modern systems of logic.[188]", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia1.pdf" - }, - { - "text": "incoming information.[154] Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws.[155]\n\n# **Areas of research**\n\nLogic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science.[156] In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. 
Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems.[157]\n\n# **Philosophy of logic and philosophical logic**\n\n*Philosophy of logic* is the philosophical discipline studying the scope and nature of logic.[59] It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them.[158] It is also concerned with how to classify logical systems and considers the ontological commitments they incur. [159] *Philosophical logic* is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. [160] This application usually happens in the form of extended or deviant logical systems. [161]\n\n# **Metalogic**\n\nMetalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. 
Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics.[162]\n\n# **Mathematical logic**\n\nThe term \"mathematical logic\" is sometimes used as a synonym of \"formal logic\". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. [164] Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia1.pdf" - }, - { - "text": "logical constants for correct inferences while informal logic also takes the meaning of substantive concepts into account. Further approaches focus on the discussion of logical topics with or without formal devices and on the role of epistemology for the assessment of arguments.[40]\n\n# **Basic concepts**\n\n### **Premises, conclusions, and truth**\n\n#### **Premises and conclusions**\n\n*Premises* and *conclusions* are the basic parts of inferences or arguments and therefore play a central role in logic. In the case of a valid inference or a correct argument, the conclusion follows from the premises, or in other words, the premises support the conclusion.[41] For instance, the premises \"Mars is red\" and \"Mars is a planet\" support the conclusion \"Mars is a red planet\". For most types of logic, it is accepted that premises and conclusions have to be truth-bearers. [41][a] This means that they have a truth value: they are either true or false. Contemporary philosophy generally sees them either as *propositions* or as *sentences*. 
[43] Propositions are the denotations of sentences and are usually seen as abstract objects. [44] For example, the English sentence \"the tree is green\" is different from the German sentence \"der Baum ist grün\" but both express the same proposition.[45]\n\nPropositional theories of premises and conclusions are often criticized because they rely on abstract objects. For instance, philosophical naturalists usually reject the existence of abstract objects. Other arguments concern the challenges involved in specifying the identity criteria of propositions.[43] These objections are avoided by seeing premises and conclusions not as propositions but as sentences, i.e. as concrete linguistic objects like the symbols displayed on a page of a book. But this approach comes with new problems of its own: sentences are often context-dependent and ambiguous, meaning an argument's validity would not only depend on its parts but also on its context and on how it is interpreted.[46] Another approach is to understand premises and conclusions in psychological terms as thoughts or judgments. This position is known as psychologism. It was discussed at length around the turn of the 20th century but it is not widely accepted today. [47]\n\n#### **Internal structure**\n\nPremises and conclusions have an internal structure. As propositions or sentences, they can be either simple or complex.[48] A complex proposition has other propositions as its constituents, which are linked to each other through propositional connectives like \"and\" or \"if...then\". Simple propositions, on the other hand, do not have propositional parts. But they can also be conceived as having an internal structure: they are made up of subpropositional parts, like singular terms and predicates. [49][48] For example, the simple proposition \"Mars is red\" can be formed by applying the predicate \"red\" to the singular term \"Mars\". 
In contrast, the complex proposition \"Mars is red and Venus is white\" is made up of two simple propositions connected by the propositional connective \"and\".[49]\n\nWhether a proposition is true depends, at least in part, on its constituents. For complex propositions formed using truth-functional propositional connectives, their truth only depends on the truth values of their parts. [49][50] But this relation is more complicated in the case of simple propositions and their", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia1.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia1.pdf", - "query": "In early Chinese philosophy, what were the major influences regarding the philosophy of logic ?", - "target_page": 18, - "target_passage": "In Chinese philosophy, the School of Names and Mohism were particularly influential", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "In Chinese philosophy, the School of Names and Mohism were particularly influential. The School of Names focused on the use of language and on paradoxes. For example, Gongsun Long proposed the white horse paradox, which defends the thesis that a white horse is not a horse. The school of Mohism also acknowledged the importance of language for logic and tried to relate the ideas in these fields to the realm of ethics.[197]\n\nIn India, the study of logic was primarily pursued by the schools of Nyaya, Buddhism, and Jainism. It was not treated as a separate academic discipline and discussions of its topics usually happened in the context of epistemology and theories of dialogue or argumentation.[198] In Nyaya, inference is understood as a source of knowledge (pramāṇa). 
It follows the perception of an object and tries to arrive at conclusions, for example, about the cause of this object.[199] A similar emphasis on the relation to epistemology is also found in Buddhist and Jainist schools of logic, where inference is used to expand the knowledge gained through other sources.[200] Some of the later theories of Nyaya, belonging to the Navya-Nyāya school, resemble modern forms of logic, such as Gottlob Frege's distinction between sense and reference and his definition of number. [201]\n\nThe syllogistic logic developed by Aristotle predominated in the West until the mid-19th century, when interest in the foundations of mathematics stimulated the development of modern symbolic logic.[202] Many see Gottlob Frege's *Begriffsschrift* as the birthplace of modern logic. Gottfried Wilhelm Leibniz's idea of a universal formal language is often considered a forerunner. Other pioneers were George Boole, who invented Boolean algebra as a mathematical system of logic, and Charles Peirce, who developed the logic of relatives. Alfred North Whitehead and Bertrand Russell, in turn, condensed many of these insights in their work *Principia Mathematica*. Modern logic introduced novel concepts, such as functions, quantifiers, and relational predicates. A hallmark of modern symbolic logic is its use of formal language to precisely codify its insights. In this regard, it departs from earlier logicians, who relied mainly on natural language.[203] Of particular influence was the development of first-order logic, which is usually treated as the standard system of modern logic.[204] Its analytical generality allowed the formalization of mathematics and drove the investigation of set theory. 
It also made Alfred Tarski's approach to model theory possible and provided the foundation of modern mathematical logic.[205]\n\n# **See also**\n\n*Philosophy portal*", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia1.pdf" - }, - { - "text": "Ibn Sina (Avicenna) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world. [189] It influenced Western medieval writers such as Albertus Magnus and William of Ockham. [190] Ibn Sina wrote on the hypothetical syllogism[191] and on the propositional calculus. [192] He developed an original \"temporally modalized\" syllogistic theory, involving temporal logic and modal logic.[193] He also made use of inductive logic, such as his methods of agreement, difference, and concomitant variation, which are critical to the scientific method. [191] Fakhr al-Din al-Razi was another influential Muslim logician. He criticized Aristotelian syllogistics and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill.[194]\n\nDuring the Middle Ages, many translations and interpretations of Aristotelian logic were made. The works of Boethius were particularly influential. Besides translating Aristotle's work into Latin, he also produced textbooks on logic.[195] Later, the works of Islamic philosophers such as Ibn Sina and Ibn Rushd (Averroes) were drawn on. This expanded the range of ancient works available to medieval Christian scholars since more Greek work was available to Muslim scholars that had been preserved in Latin commentaries. In 1323, William of Ockham's influential *Summa Logicae* was released. 
It is a comprehensive treatise on logic that discusses many basic concepts of logic and provides a systematic exposition of types of propositions and their truth conditions.[196]", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia1.pdf" - }, - { - "text": "incoming information.[154] Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws.[155]\n\n# **Areas of research**\n\nLogic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science.[156] In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems.[157]\n\n# **Philosophy of logic and philosophical logic**\n\n*Philosophy of logic* is the philosophical discipline studying the scope and nature of logic.[59] It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them.[158] It is also concerned with how to classify logical systems and considers the ontological commitments they incur. [159] *Philosophical logic* is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. [160] This application usually happens in the form of extended or deviant logical systems. [161]\n\n# **Metalogic**\n\nMetalogic is the field of inquiry studying the properties of formal logical systems. 
For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics.[162]\n\n# **Mathematical logic**\n\nThe term \"mathematical logic\" is sometimes used as a synonym of \"formal logic\". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. [164] Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia1.pdf" - }, - { - "text": "sentence would be true or false. One of its central methodological assumptions is the principle of compositionality. 
It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. For example, the meaning of the verb phrase \"walk and sing\" depends on the meanings of the individual expressions \"walk\" and \"sing\". Many theories in formal semantics rely on model theory. This means that they employ set theory to construct a model and then interpret the meanings of expression in relation to the elements in this model. For example, the term \"walk\" may be interpreted as the set of all individuals in the model that share the property of walking. Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language.[173]\n\n#### **Epistemology of logic**\n\nThe epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true.[174] This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false.[175] The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori. [176] In this regard, it\n\nConjunction (AND) is one of the basic operations of Boolean logic. It can be electronically implemented in several ways, for example, by using two transistors.\n\nis often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths.[177] A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary. [178]\n\nSome theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. 
This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. According to this view, they may be explored by studying general patterns of the fundamental sciences. For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula is equivalent to . This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic.[179]\n\n# **History**\n\nLogic was developed independently in several cultures during antiquity. One major early contributor was Aristotle, who developed *term logic* in his *Organon* and *Prior Analytics*. [183] He was responsible for the introduction of the hypothetical syllogism[184] and temporal modal logic.[185] Further innovations include inductive logic[186] as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. [187] It has now been superseded by later work, though many of its key insights are still present in modern systems of logic.[188]", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia1.pdf" - }, - { - "text": "mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance. Examples of concepts it overlooks are the contrast between necessity and possibility and the problem of ethical obligation and permission. Similarly, it does not address the relations between past, present, and future.[119] Such issues are addressed by extended logics. They build on the basic intuitions of classical logic and expand it by introducing new logical vocabulary. 
This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics.[120]\n\n#### **Propositional logic**\n\nPropositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. For instance, propositional logic represents the conjunction of two atomic propositions and as the complex formula . Unlike predicate logic where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic component.[121] Thus, propositional logics can only represent logical relationships that arise from the way complex propositions are built from simpler ones. But it cannot represent inferences that result from the inner structure of a proposition.[122]\n\n#### **First-order logic**\n\nFirst-order logic includes the same propositional connectives as propositional logic but differs from it because it articulates the internal structure of propositions. This happens through devices such as singular terms, which refer to particular objects, predicates, which refer to properties and relations, and quantifiers, which treat notions like \"some\" and \"all\".[123] For example, to express the proposition \"this raven is black\", one may use the predicate for the property \"black\" and the singular term referring to the raven to form the expression . To express that some objects are black, the existential quantifier is combined\n\nGottlob Frege's *Begriffschrift* introduced the notion of quantifier in a graphical notation, which here represents the judgment that is true.\n\nwith the variable to form the proposition . First-order logic contains various rules of inference that determine how expressions articulated this way can form valid arguments, for example, that one may infer from . [124]\n\n#### **Extended**\n\nExtended logics are logical systems that accept the basic principles of classical logic. 
They introduce additional symbols and principles to apply it to fields like metaphysics, ethics, and epistemology. [125]\n\n#### **Modal logic**\n\nModal logic is an extension of classical logic. In its original form, sometimes called \"alethic modal logic\", it introduces two new symbols: expresses that something is possible while expresses that something is necessary. [126] For example, if the formula stands for the sentence \"Socrates is a banker\" then the formula articulates the sentence \"It is possible that Socrates is a banker\".[127] To include these symbols in the logical formalism, modal logic introduces new rules of inference that govern", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia1.pdf" - }, - { - "text": "Bertrand Russell made various contributions to mathematical logic. [163]\n\nto use logic to analyze mathematical reasoning or to establish logic-based foundations of mathematics. [165] The latter was a major concern in early 20th-century mathematical logic, which pursued the program of logicism pioneered by philosopherlogicians such as Gottlob Frege, Alfred North Whitehead, and Bertrand Russell. Mathematical theories were supposed to be logical tautologies, and their program was to show this by means of a reduction of mathematics to logic. Many attempts to realize this program failed, from the crippling of Frege's project in his *Grundgesetze* by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems. [166]\n\nSet theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic. They include Cantor's theorem, the status of the Axiom of Choice, the question of the independence of the continuum hypothesis, and the modern debate on large cardinal axioms.[167]\n\nComputability theory is the branch of mathematical logic that studies effective procedures to solve calculation problems. 
One of\n\nits main goals is to understand whether it is possible to solve a given problem using an algorithm. For instance, given a certain claim about the positive integers, it examines whether an algorithm can be found to determine if this claim is true. Computability theory uses various theoretical tools and models, such as Turing machines, to explore this type of issue.[168]\n\n# **Computational logic**\n\nComputational logic is the branch of logic and computer science that studies how to implement mathematical reasoning and logical formalisms using computers. This includes, for example, automatic theorem provers, which employ rules of inference to construct a proof step by step from a set of premises to the intended conclusion without human intervention.[169] Logic programming languages are designed specifically to express facts using logical formulas and to draw inferences from these facts. For example, Prolog is a logic programming language based on predicate logic.[170] Computer scientists also apply concepts from logic to problems in computing. The works of Claude Shannon were influential in this regard. He showed how Boolean logic can be used to understand and implement computer circuits.[171] This can be achieved using electronic logic gates, i.e. electronic circuits with one or more inputs and usually one output. The truth values of propositions are represented by voltage levels. In this way, logic functions can be simulated by applying the corresponding voltages to the inputs of the circuit and determining the value of the function by measuring the voltage of the output.[172]\n\n### **Formal semantics of natural language**\n\nFormal semantics is a subfield of logic, linguistics, and the philosophy of language. The discipline of semantics studies the meaning of language. Formal semantics uses formal tools from the fields of symbolic logic and mathematics to give precise theories of the meaning of natural language expressions. 
It understands meaning usually in relation to truth conditions, i.e. it examines in which situations a", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia1.pdf" - }, - { - "text": "propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.\n\n# **Definition**\n\nThe word \"logic\" originates from the Greek word *logos*, which has a variety of translations, such as reason, discourse, or language. [4] Logic is traditionally defined as the study of the laws of thought or correct reasoning, [5] and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences.[6] An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion.[7] These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments.[8] Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic.[9]\n\n# **Formal logic**\n\nFormal logic is also known as symbolic logic and is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content.[10]\n\nFormal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. 
This means that it is impossible for the premises to be true and the conclusion to be false.[11] For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. [12] For example, modus ponens is a rule of inference according to which all arguments of the form \"(1) *p*, (2) if *p* then *q*, (3) therefore *q*\" are valid, independent of what the terms *p* and *q* stand for. [13] In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. [14] A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim \"either it is raining, or it is not\".[15] These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from *p* to *q* is deductively valid then the claim \"if *p* then *q*\" is a logical truth.[16]\n\nFormal logic uses formal languages to express and analyze arguments.[17] They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. [18] This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid.[19] Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed.[20]\n\nThe term \"logic\" can also be used in a slightly different sense as a countable noun. In this sense, *a logic* is a logical formal system. 
Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them.[21] Starting in the late 19th century, many", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia1.pdf" - }, - { - "text": "new formal systems have been proposed. There are disagreements about what makes a formal system a logic.[22] For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense.[23]\n\nFormal logic needs to translate natural language arguments into a formal language, like first-order logic, to assess whether they are valid. In this example, the letter \"c\" represents Carmen while the letters \"M\" and \"T\" stand for \"Mexican\" and \"teacher\". The symbol \"∧\" has the meaning of \"and\".\n\n# **Informal logic**\n\nWhen understood in a wide sense, logic\n\nencompasses both formal and informal logic.[24] Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. Its main focus is on everyday discourse.[25] Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments.[26] In this regard, it considers problems that formal logic on its own is unable to address.[27] Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies.[28]\n\nMany characterizations of informal logic have been suggested but there is no general agreement on its precise definition.[29] The most literal approach sees the terms \"formal\" and \"informal\" as applying to the language used to express arguments. 
On this view, informal logic studies arguments that are in informal or natural language.[30] Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form.[31] On this view, the argument \"Birds fly. Tweety is a bird. Therefore, Tweety flies.\" belongs to natural language and is examined by informal logic. But the formal translation \"(1) ; (2) ; (3) \" is studied by formal logic.[32] The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent.[33] Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation.[34]\n\nAnother characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic.[35] Non-deductive arguments make their conclusion probable but do not ensure that it is true. An example is the inductive argument from the empirical observation that \"all ravens I have seen so far are black\" to the conclusion \"all ravens are black\".[36]\n\nA further approach is to define informal logic as the study of informal fallacies. [37] Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument.[38] A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy \"you are either with us or against us; you are not with us; therefore, you are against us\".[39] Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. 
Another approach is to hold that formal logic only considers the role of", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia1.pdf" - }, - { - "text": "what role they play in inferences. One rule of inference states that, if something is necessary, then it is also possible. This means that follows from . Another principle states that if a proposition is necessary then its negation is impossible and vice versa. This means that is equivalent to . [128]\n\nOther forms of modal logic introduce similar symbols but associate different meanings with them to apply modal logic to other fields. For example, deontic logic concerns the field of ethics and introduces symbols to express the ideas of obligation and permission, i.e. to describe whether an agent has to perform a certain action or is allowed to perform it.[129] The modal operators in temporal modal logic articulate temporal relations. They can be used to express, for example, that something happened at one time or that something is happening all the time.[129] In epistemology, epistemic modal logic is used to represent the ideas of knowing something in contrast to merely believing it to be the case.[130]\n\n#### **Higher order logic**\n\nHigher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification.[131] Quantifiers correspond to terms like \"all\" or \"some\". In classical first-order logic, quantifiers are only applied to individuals. The formula \" \" (*some* apples are sweet) is an example of the existential quantifier \" \" applied to the individual variable \" \". In higherorder logics, quantification is also allowed over predicates. This increases its expressive power. For example, to express the idea that Mary and John share some qualities, one could use the formula \" \". In this case, the existential quantifier is applied to the predicate variable \" \". 
[132] The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories.[43] But it has drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still more commonly used.[133]\n\n#### **Deviant**\n\nDeviant logics are logical systems that reject some of the basic intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue.[134]\n\nIntuitionistic logic is a restricted version of classical logic.[135] It uses the same symbols but excludes some rules of inference. For example, according to the law of double negation elimination, if a sentence is not not true, then it is true. This means that follows from . This is a valid rule of inference in classical logic but it is invalid in intuitionistic logic. Another classical principle not part of intuitionistic logic is the law of excluded middle. It states that for every sentence, either it or its negation is true. This means that every proposition of the form is true.[135] These deviations from classical logic are based on the idea that truth is established by verification using a proof. Intuitionistic logic is especially prominent in the field of constructive mathematics, which emphasizes the need to find or construct a specific example to prove its existence.[136]\n\nMulti-valued logics depart from classicality by rejecting the principle of bivalence, which requires all propositions to be either true or false. For instance, Jan Łukasiewicz and Stephen Cole Kleene both proposed ternary logics which have a third truth value representing that a statement's truth value is indeterminate.[137] These logics have been applied in the field of linguistics. 
Fuzzy logics are multivalued logics that have an infinite number of \"degrees of truth\", represented by a real number between 0 and 1.[138]", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia1.pdf" - }, - { - "text": "# **Logic**\n\n**Logic** is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language. When used as a countable noun, the term \"a logic\" refers to a specific logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics.\n\nLogic studies valid forms of inference like *modus ponens*.\n\nLogic studies arguments, which consist of a set of premises that leads to a conclusion. An example is the argument from the premises \"it's Sunday\" and \"if it's Sunday then I don't have to work\" leading to the conclusion \"I don't have to work\".[1] Premises and conclusions express propositions or claims that can be true or false. An important feature of propositions is their internal structure. For example, complex propositions are made up of simpler propositions linked by logical vocabulary like (and) or (if...then). Simple propositions also have parts, like \"Sunday\" or \"work\" in the example. The truth of a proposition usually depends on the meanings of all of its parts. However, this is not the case for logically true propositions. They are true only because of their logical structure independent of the specific meanings of the individual parts.\n\nArguments can be either correct or incorrect. 
An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. This is not the case for ampliative arguments, which arrive at genuinely new information not found in the premises. Many arguments in everyday discourse and the sciences are ampliative arguments. They are divided into inductive and abductive arguments. Inductive arguments are statistical generalization—such as inferring that all ravens are black, based on many individual observations of black ravens.[2] Abductive arguments are inferences to the best explanation—for example, when a doctor concludes that a patient has a certain disease, as the best explanation for the symptoms that they are observed to suffer. [3] Arguments that fall short of the standards of correct reasoning often embody fallacies. Systems of logic are theoretical frameworks for assessing the correctness of arguments.\n\nLogic has been studied since antiquity. Early approaches include Aristotelian logic, Stoic logic, Nyaya, and Mohism. Aristotelian logic focuses on reasoning in the form of syllogisms. It was considered the main system of logic in the Western world until it was replaced by modern formal logic, which has its roots in the work of late 19th-century mathematicians such as Gottlob Frege. Today, the most commonly used system is classical logic. It consists of propositional logic and first-order logic. Propositional logic only considers logical relations between full propositions. 
First-order logic also takes the internal parts of", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia1.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia1.pdf", - "query": "What is considered a deductively valid argument regarding logic ?", - "target_page": 6, - "target_passage": "A deductively valid argument is one whose premises guarantee the truth of its conclusion", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "argument is made up of a chain of simple arguments. This means that the conclusion of one argument acts as a premise of later arguments. For a complex argument to be successful, each link of the chain has to be successful.[43]\n\nArguments and inferences are either correct or incorrect. If they are correct then their premises support their conclusion. In the incorrect case, this support is missing. It can take different forms corresponding to the different types of reasoning. [62] The strongest form of support corresponds to deductive reasoning. But even arguments that are not deductively valid may still be good arguments because their premises offer nondeductive support to their conclusions. For such cases, the term *ampliative* or *inductive reasoning* is used.[63] Deductive arguments are associated with formal logic in contrast to the\n\nArgument terminology used in logic\n\nrelation between ampliative arguments and informal logic.[64]\n\n#### **Deductive**\n\nA deductively valid argument is one whose premises guarantee the truth of its conclusion.[11] For instance, the argument \"(1) all frogs are amphibians; (2) no cats are amphibians; (3) therefore no cats are frogs\" is deductively valid. For deductive validity, it does not matter whether the premises or the conclusion are actually true. 
So the argument \"(1) all frogs are mammals; (2) no cats are mammals; (3) therefore no cats are frogs\" is also valid because the conclusion follows necessarily from the premises.[65]\n\nAccording to an influential view by Alfred Tarski, deductive arguments have three essential features: (1) they are formal, i.e. they depend only on the form of the premises and the conclusion; (2) they are a priori, i.e. no sense experience is needed to determine whether they obtain; (3) they are modal, i.e. that they hold by logical necessity for the given propositions, independent of any other circumstances.[66]\n\nBecause of the first feature, the focus on formality, deductive inference is usually identified with rules of inference.[67] Rules of inference specify the form of the premises and the conclusion: how they have to be structured for the inference to be valid. Arguments that do not follow any rule of inference are deductively invalid.[68] The modus ponens is a prominent rule of inference. It has the form \"*p*; if *p*, then *q*; therefore *q*\".[69] Knowing that it has just rained ( ) and that after rain the streets are wet ( ), one can use modus ponens to deduce that the streets are wet ( ).[70]\n\nThe third feature can be expressed by stating that deductively valid inferences are truth-preserving: it is impossible for the premises to be true and the conclusion to be false.[71] Because of this feature, it is often asserted that deductive inferences are uninformative since the conclusion cannot arrive at new information not already present in the premises.[72] But this point is not always accepted since it would mean, for example, that most of mathematics is uninformative. A different characterization distinguishes", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia1.pdf" - }, - { - "text": "propositions into account, like predicates and quantifiers. 
Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.\n\n# **Definition**\n\nThe word \"logic\" originates from the Greek word *logos*, which has a variety of translations, such as reason, discourse, or language. [4] Logic is traditionally defined as the study of the laws of thought or correct reasoning, [5] and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences.[6] An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion.[7] These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments.[8] Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic.[9]\n\n# **Formal logic**\n\nFormal logic is also known as symbolic logic and is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content.[10]\n\nFormal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false.[11] For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. 
[12] For example, modus ponens is a rule of inference according to which all arguments of the form \"(1) *p*, (2) if *p* then *q*, (3) therefore *q*\" are valid, independent of what the terms *p* and *q* stand for. [13] In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. [14] A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim \"either it is raining, or it is not\".[15] These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from *p* to *q* is deductively valid then the claim \"if *p* then *q*\" is a logical truth.[16]\n\nFormal logic uses formal languages to express and analyze arguments.[17] They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. [18] This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid.[19] Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed.[20]\n\nThe term \"logic\" can also be used in a slightly different sense as a countable noun. In this sense, *a logic* is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them.[21] Starting in the late 19th century, many", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia1.pdf" - }, - { - "text": "# **Logic**\n\n**Logic** is the study of correct reasoning. It includes both formal and informal logic. 
Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language. When used as a countable noun, the term \"a logic\" refers to a specific logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics.\n\nLogic studies valid forms of inference like *modus ponens*.\n\nLogic studies arguments, which consist of a set of premises that leads to a conclusion. An example is the argument from the premises \"it's Sunday\" and \"if it's Sunday then I don't have to work\" leading to the conclusion \"I don't have to work\".[1] Premises and conclusions express propositions or claims that can be true or false. An important feature of propositions is their internal structure. For example, complex propositions are made up of simpler propositions linked by logical vocabulary like (and) or (if...then). Simple propositions also have parts, like \"Sunday\" or \"work\" in the example. The truth of a proposition usually depends on the meanings of all of its parts. However, this is not the case for logically true propositions. They are true only because of their logical structure independent of the specific meanings of the individual parts.\n\nArguments can be either correct or incorrect. An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. This is not the case for ampliative arguments, which arrive at genuinely new information not found in the premises. 
Many arguments in everyday discourse and the sciences are ampliative arguments. They are divided into inductive and abductive arguments. Inductive arguments are statistical generalization—such as inferring that all ravens are black, based on many individual observations of black ravens.[2] Abductive arguments are inferences to the best explanation—for example, when a doctor concludes that a patient has a certain disease, as the best explanation for the symptoms that they are observed to suffer. [3] Arguments that fall short of the standards of correct reasoning often embody fallacies. Systems of logic are theoretical frameworks for assessing the correctness of arguments.\n\nLogic has been studied since antiquity. Early approaches include Aristotelian logic, Stoic logic, Nyaya, and Mohism. Aristotelian logic focuses on reasoning in the form of syllogisms. It was considered the main system of logic in the Western world until it was replaced by modern formal logic, which has its roots in the work of late 19th-century mathematicians such as Gottlob Frege. Today, the most commonly used system is classical logic. It consists of propositional logic and first-order logic. Propositional logic only considers logical relations between full propositions. First-order logic also takes the internal parts of", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia1.pdf" - }, - { - "text": "incoming information.[154] Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws.[155]\n\n# **Areas of research**\n\nLogic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science.[156] In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. 
For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems.[157]\n\n# **Philosophy of logic and philosophical logic**\n\n*Philosophy of logic* is the philosophical discipline studying the scope and nature of logic.[59] It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them.[158] It is also concerned with how to classify logical systems and considers the ontological commitments they incur. [159] *Philosophical logic* is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. [160] This application usually happens in the form of extended or deviant logical systems. [161]\n\n# **Metalogic**\n\nMetalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. 
The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics.[162]\n\n# **Mathematical logic**\n\nThe term \"mathematical logic\" is sometimes used as a synonym of \"formal logic\". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. [164] Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia1.pdf" - }, - { - "text": "new formal systems have been proposed. There are disagreements about what makes a formal system a logic.[22] For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense.[23]\n\nFormal logic needs to translate natural language arguments into a formal language, like first-order logic, to assess whether they are valid. In this example, the letter \"c\" represents Carmen while the letters \"M\" and \"T\" stand for \"Mexican\" and \"teacher\". The symbol \"∧\" has the meaning of \"and\".\n\n# **Informal logic**\n\nWhen understood in a wide sense, logic\n\nencompasses both formal and informal logic.[24] Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. 
Its main focus is on everyday discourse.[25] Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments.[26] In this regard, it considers problems that formal logic on its own is unable to address.[27] Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies.[28]\n\nMany characterizations of informal logic have been suggested but there is no general agreement on its precise definition.[29] The most literal approach sees the terms \"formal\" and \"informal\" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language.[30] Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form.[31] On this view, the argument \"Birds fly. Tweety is a bird. Therefore, Tweety flies.\" belongs to natural language and is examined by informal logic. But the formal translation \"(1) ; (2) ; (3) \" is studied by formal logic.[32] The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent.[33] Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation.[34]\n\nAnother characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic.[35] Non-deductive arguments make their conclusion probable but do not ensure that it is true. 
An example is the inductive argument from the empirical observation that \"all ravens I have seen so far are black\" to the conclusion \"all ravens are black\".[36]\n\nA further approach is to define informal logic as the study of informal fallacies. [37] Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument.[38] A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy \"you are either with us or against us; you are not with us; therefore, you are against us\".[39] Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia1.pdf" - }, - { - "text": "sentence would be true or false. One of its central methodological assumptions is the principle of compositionality. It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. For example, the meaning of the verb phrase \"walk and sing\" depends on the meanings of the individual expressions \"walk\" and \"sing\". Many theories in formal semantics rely on model theory. This means that they employ set theory to construct a model and then interpret the meanings of expression in relation to the elements in this model. For example, the term \"walk\" may be interpreted as the set of all individuals in the model that share the property of walking. 
Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language.[173]\n\n#### **Epistemology of logic**\n\nThe epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true.[174] This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false.[175] The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori. [176] In this regard, it\n\nConjunction (AND) is one of the basic operations of Boolean logic. It can be electronically implemented in several ways, for example, by using two transistors.\n\nis often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths.[177] A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary. [178]\n\nSome theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. According to this view, they may be explored by studying general patterns of the fundamental sciences. For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula is equivalent to . This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic.[179]\n\n# **History**\n\nLogic was developed independently in several cultures during antiquity. 
One major early contributor was Aristotle, who developed *term logic* in his *Organon* and *Prior Analytics*. [183] He was responsible for the introduction of the hypothetical syllogism[184] and temporal modal logic.[185] Further innovations include inductive logic[186] as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. [187] It has now been superseded by later work, though many of its key insights are still present in modern systems of logic.[188]", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia1.pdf" - }, - { - "text": "mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance. Examples of concepts it overlooks are the contrast between necessity and possibility and the problem of ethical obligation and permission. Similarly, it does not address the relations between past, present, and future.[119] Such issues are addressed by extended logics. They build on the basic intuitions of classical logic and expand it by introducing new logical vocabulary. This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics.[120]\n\n#### **Propositional logic**\n\nPropositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. For instance, propositional logic represents the conjunction of two atomic propositions and as the complex formula . Unlike predicate logic where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic component.[121] Thus, propositional logics can only represent logical relationships that arise from the way complex propositions are built from simpler ones. 
But it cannot represent inferences that result from the inner structure of a proposition.[122]\n\n#### **First-order logic**\n\nFirst-order logic includes the same propositional connectives as propositional logic but differs from it because it articulates the internal structure of propositions. This happens through devices such as singular terms, which refer to particular objects, predicates, which refer to properties and relations, and quantifiers, which treat notions like \"some\" and \"all\".[123] For example, to express the proposition \"this raven is black\", one may use the predicate for the property \"black\" and the singular term referring to the raven to form the expression . To express that some objects are black, the existential quantifier is combined\n\nGottlob Frege's *Begriffschrift* introduced the notion of quantifier in a graphical notation, which here represents the judgment that is true.\n\nwith the variable to form the proposition . First-order logic contains various rules of inference that determine how expressions articulated this way can form valid arguments, for example, that one may infer from . [124]\n\n#### **Extended**\n\nExtended logics are logical systems that accept the basic principles of classical logic. They introduce additional symbols and principles to apply it to fields like metaphysics, ethics, and epistemology. [125]\n\n#### **Modal logic**\n\nModal logic is an extension of classical logic. In its original form, sometimes called \"alethic modal logic\", it introduces two new symbols: expresses that something is possible while expresses that something is necessary. 
[126] For example, if the formula stands for the sentence \"Socrates is a banker\" then the formula articulates the sentence \"It is possible that Socrates is a banker\".[127] To include these symbols in the logical formalism, modal logic introduces new rules of inference that govern", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia1.pdf" - }, - { - "text": "Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[77]\n\n## **Logic**\n\nFormal logic is used for reasoning and knowledge representation. [78] Formal logic comes in two main forms: propositional logic (which operates on statements that are true or false and uses logical connectives such as \"and\", \"or\", \"not\" and \"implies\")[79] and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as \"*Every X* is a *Y*\" and \"There are *some X*s that are *Y*s\").[80]\n\nIllustration of gradient descent for 3 different starting points; two parameters (represented by the plan coordinates) are adjusted in order to minimize the loss function (the height)\n\nDeductive reasoning in logic is the process of proving a new\n\nstatement (conclusion) from other statements that are given and assumed to be true (the premises).[81] Proofs can be structured as proof trees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes by inference rules.\n\nGiven a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. 
In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem.[82] In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved.[83]\n\nInference in both Horn clause logic and first-order logic is undecidable, and therefore intractable. However, backward reasoning with Horn clauses, which underpins computation in the logic programming language Prolog, is Turing complete. Moreover, its efficiency is competitive with computation in other symbolic programming languages.[84]\n\nFuzzy logic assigns a \"degree of truth\" between 0 and 1. It can therefore handle propositions that are vague and partially true.[85]\n\nNon-monotonic logics, including logic programming with negation as failure, are designed to handle default reasoning. [28] Other specialized versions of logic have been developed to describe many complex domains.\n\n## **Probabilistic methods for uncertain reasoning**\n\nMany problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics.[86] Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, [87] and information value theory. [88] These tools include models such as Markov decision processes, [89] dynamic decision networks, [90] game theory and mechanism design. [91]", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Bertrand Russell made various contributions to mathematical logic. 
[163]\n\nto use logic to analyze mathematical reasoning or to establish logic-based foundations of mathematics. [165] The latter was a major concern in early 20th-century mathematical logic, which pursued the program of logicism pioneered by philosopherlogicians such as Gottlob Frege, Alfred North Whitehead, and Bertrand Russell. Mathematical theories were supposed to be logical tautologies, and their program was to show this by means of a reduction of mathematics to logic. Many attempts to realize this program failed, from the crippling of Frege's project in his *Grundgesetze* by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems. [166]\n\nSet theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic. They include Cantor's theorem, the status of the Axiom of Choice, the question of the independence of the continuum hypothesis, and the modern debate on large cardinal axioms.[167]\n\nComputability theory is the branch of mathematical logic that studies effective procedures to solve calculation problems. One of\n\nits main goals is to understand whether it is possible to solve a given problem using an algorithm. For instance, given a certain claim about the positive integers, it examines whether an algorithm can be found to determine if this claim is true. Computability theory uses various theoretical tools and models, such as Turing machines, to explore this type of issue.[168]\n\n# **Computational logic**\n\nComputational logic is the branch of logic and computer science that studies how to implement mathematical reasoning and logical formalisms using computers. 
This includes, for example, automatic theorem provers, which employ rules of inference to construct a proof step by step from a set of premises to the intended conclusion without human intervention.[169] Logic programming languages are designed specifically to express facts using logical formulas and to draw inferences from these facts. For example, Prolog is a logic programming language based on predicate logic.[170] Computer scientists also apply concepts from logic to problems in computing. The works of Claude Shannon were influential in this regard. He showed how Boolean logic can be used to understand and implement computer circuits.[171] This can be achieved using electronic logic gates, i.e. electronic circuits with one or more inputs and usually one output. The truth values of propositions are represented by voltage levels. In this way, logic functions can be simulated by applying the corresponding voltages to the inputs of the circuit and determining the value of the function by measuring the voltage of the output.[172]\n\n### **Formal semantics of natural language**\n\nFormal semantics is a subfield of logic, linguistics, and the philosophy of language. The discipline of semantics studies the meaning of language. Formal semantics uses formal tools from the fields of symbolic logic and mathematics to give precise theories of the meaning of natural language expressions. It understands meaning usually in relation to truth conditions, i.e. it examines in which situations a", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia1.pdf" - }, - { - "text": "what role they play in inferences. One rule of inference states that, if something is necessary, then it is also possible. This means that follows from . Another principle states that if a proposition is necessary then its negation is impossible and vice versa. This means that is equivalent to . 
[128]\n\nOther forms of modal logic introduce similar symbols but associate different meanings with them to apply modal logic to other fields. For example, deontic logic concerns the field of ethics and introduces symbols to express the ideas of obligation and permission, i.e. to describe whether an agent has to perform a certain action or is allowed to perform it.[129] The modal operators in temporal modal logic articulate temporal relations. They can be used to express, for example, that something happened at one time or that something is happening all the time.[129] In epistemology, epistemic modal logic is used to represent the ideas of knowing something in contrast to merely believing it to be the case.[130]\n\n#### **Higher order logic**\n\nHigher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification.[131] Quantifiers correspond to terms like \"all\" or \"some\". In classical first-order logic, quantifiers are only applied to individuals. The formula \" \" (*some* apples are sweet) is an example of the existential quantifier \" \" applied to the individual variable \" \". In higherorder logics, quantification is also allowed over predicates. This increases its expressive power. For example, to express the idea that Mary and John share some qualities, one could use the formula \" \". In this case, the existential quantifier is applied to the predicate variable \" \". [132] The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories.[43] But it has drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still more commonly used.[133]\n\n#### **Deviant**\n\nDeviant logics are logical systems that reject some of the basic intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. 
Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue.[134]\n\nIntuitionistic logic is a restricted version of classical logic.[135] It uses the same symbols but excludes some rules of inference. For example, according to the law of double negation elimination, if a sentence is not not true, then it is true. This means that follows from . This is a valid rule of inference in classical logic but it is invalid in intuitionistic logic. Another classical principle not part of intuitionistic logic is the law of excluded middle. It states that for every sentence, either it or its negation is true. This means that every proposition of the form is true.[135] These deviations from classical logic are based on the idea that truth is established by verification using a proof. Intuitionistic logic is especially prominent in the field of constructive mathematics, which emphasizes the need to find or construct a specific example to prove its existence.[136]\n\nMulti-valued logics depart from classicality by rejecting the principle of bivalence, which requires all propositions to be either true or false. For instance, Jan Łukasiewicz and Stephen Cole Kleene both proposed ternary logics which have a third truth value representing that a statement's truth value is indeterminate.[137] These logics have been applied in the field of linguistics. 
Fuzzy logics are multivalued logics that have an infinite number of \"degrees of truth\", represented by a real number between 0 and 1.[138]", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia1.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed8.pdf", - "query": "What was the mean correctness score for LLM-generated handoff notes ?", - "target_page": 7, - "target_passage": "Correctness 4.52", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "# **Original Investigation | Emergency Medicine** Developing and Evaluating Large LanguageModel–Generated EmergencyMedicine Handoff Notes\n\nVince Hartman, MS; Xinyuan Zhang, PhD; Ritika Poddar, MS; Matthew McCarty, MD; Alexander Fortenko, MD, MPH; Evan Sholle, MS; Rahul Sharma, MD, MBA; Thomas Campion Jr, PhD; Peter A. D. Steel, MA, MBBS\n\n# **Abstract**\n\n**IMPORTANCE** An emergency medicine (EM) handoff note generated by a large language model (LLM) has the potential to reduce physician documentation burden without compromising the safety of EM-to-inpatient (IP) handoffs.\n\n**OBJECTIVE** To develop LLM-generated EM-to-IP handoff notes and evaluate their accuracy and safety compared with physician-written notes.\n\n**DESIGN, SETTING, AND PARTICIPANTS** This cohort study used EM patient medical records with acute hospital admissions that occurred in 2023 at NewYork-Presbyterian/Weill Cornell Medical Center. A customized clinical LLM pipeline was trained, tested, and evaluated to generate templated EM-to-IP handoff notes. Using both conventional automated methods (ie, recall-oriented understudy for gisting evaluation [ROUGE], bidirectional encoder representations from transformers score [BERTScore], and source chunking approach for large-scale inconsistency evaluation [SCALE]) and a novel patient safety-focused framework, LLM-generated handoff notes vs physician-written notes were compared. 
Data were analyzed from October 2023 to March 2024.\n\n**EXPOSURE** LLM-generated EM handoff notes.\n\n**MAIN OUTCOMES AND MEASURES** LLM-generated handoff notes were evaluated for (1) lexical similarity with respect to physician-written notes using ROUGE and BERTScore; (2) fidelity with respect to source notes using SCALE; and (3) readability, completeness, curation, correctness, usefulness, and implications for patient safety using a novel framework.\n\n**RESULTS** In this study of 1600 EM patient records (832 [52%] female and mean [SD] age of 59.9 [18.9] years), LLM-generated handoff notes, compared with physician-written ones, had higher ROUGE (0.322 vs 0.088), BERTScore (0.859 vs 0.796), and SCALE scores (0.691 vs 0.456), indicating the LLM-generated summaries exhibited greater similarity and more detail. As reviewed by 3 board-certified EM physicians, a subsample of 50 LLM-generated summaries had a mean (SD) usefulness score of 4.04 (0.86) out of 5 (compared with 4.36 [0.71] for physician-written) and mean (SD) patient safety scores of 4.06 (0.86) out of 5 (compared with 4.50 [0.56] for physician-written). 
None of the LLM-generated summaries were classified as a critical patient safety risk.\n\n**CONCLUSIONS AND RELEVANCE** In this cohort study of 1600 EM patient medical records, LLM-generated EM-to-IP handoff notes were determined superior compared with physician-written summaries via conventional automated evaluation methods, but marginally inferior in usefulness\n\n(continued)\n\n# **Key Points**\n\n**Question** Can a large language model (LLM) generate emergency medicine (EM)-to-inpatient (IP) handoff notes that are useful and safe for EM care?\n\n**Findings** In this cohort study of 1600 EM patient medical records using a novel evaluation framework, the LLM-generated EM-to-IP handoff notes had a mean usefulness of 4.04 out of 5 (compared with 4.36 for physician-written) and a mean patient safety of 4.06 out of 5 (compared with 4.50 for physician-written) with no critical patient safety risks.\n\n**Meaning** These findings suggest the value of a manual, patient safety– focused clinical evaluation of LLM models and the potential of LLM-generated handoff notes to create a new standard of care in EM.\n\n# **+ Invited Commentary**\n\n# **+ Supplemental content**\n\nAuthor affiliations and article information are listed at the end of this article.\n\n**Open Access.** This is an open access article distributed under the terms of the CC-BY License.\n\nJAMA Network Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 1/12", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed8.pdf" - }, - { - "text": "curation (4.24 [0.58] vs 4.76 [0.48]), readability (4.00 [0.64] vs 4.64 [0.49]), correctness (4.52 [0.64] vs 4.90 [0.39]), and patient safety (4.06 [0.86] vs 4.50 [0.56]).\n\nIn extrapolating the estimated worst-case scenario impact of these performance gaps on patient safety, the 3 expert clinicians determined none of the identified model performance issues were anticipated to create a level 1 (life-threatening) safety event (see examples of worst case scenarios in eTable 2 in Supplement 1). While the incompleteness and faulty logic identified in the automated summaries received mean (SD) safety scores of 4.20 (0.93) and 4.60 (0.75), respectively; 13 (8.7%) and 11 (7.3%) of these events, respectively, were determined to have the potential to create a level 2 patient safety event following EM-to-IP handoff, substantially higher compared with the physician-written summaries (0%). All of the 5 hallucinations had patient safety scores between 4 and 5 and a mean (SD) score of 4.96 (0.14), which is defined as the hallucinations posing mild to no patient safety risk. LLM-generated notes demonstrated a higher rate of incorrectness (9.6%) compared with the physician-written notes (2.0%), although very few hallucinations.\n\nICC were 0.79 for completeness, 0.70 for curation, 0.59 for readability, 0.76 for correctness, and 0.74 for usefulness. These numbers suggest good reliability of agreement for completeness, curation, correctness, and usefulness and suggest fair reliability for readability among the 3 raters.\n\n## **Discussion**\n\nThe study demonstrated success in generating EM-to-IP handoff notes using both a fine tuned, pretrained LLM and rule-based approaches within an end user–developed note template. 
It is important to note that (largely due to time constraints within the EM care delivery model) the performance of EM-to-IP handoff notes was not the current standard of care in EM. The study site's unique electronic handoff process enabled a comparison between physician-written and LLM-generated handoff notes. Traditional automated evaluations of the model output suggested\n\n| | | | Table 3. Mean Clinical Quality Evaluation, Large Language Model (LLM)–Generated and Physician-Written | | | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | LLM-generated | | | | | | Physician-written | | | | | |\n| | | | Likert rating 1-5, No. (%)a | | | | | | Likert rating 1-5, No. (%)a | | | |\n| Criteria | Mean score (SD) | 1 | 2 | 3 | 4 | 5 | Mean score (SD) | 1 | 2 | 3 | 4 | 5 |\n| Completeness | 4.00 (0.88) | 0 | 12 (8) | 31 (20.7) | 69 (46) | 38 (25.3) | 4.16 (0.84) | 0 | 3 (2) | 31 (20.7) | 48 (32) | 68 (45.3) |\n| Curation | 4.24 (0.58) | 0 | 1 (0.7) | 13 (8.7) | 85 (56.7) | 51 (34) | 4.76 (0.48) | 0 | 0 | 6 (4) | 39 (26) | 105 (70) |\n| Readability | 4.00 (0.64) | 0 | 8 (5.3) | 17 (11.3) | 87 (58) | 38 (25.3) | 4.64 (0.49) | 0 | 0 | 5 (3.3) | 38 (25.3) | 107 (71.3) |\n| Correctness | 4.52 (0.64) | 0 | 0 | 13 (8.7) | 39 (26) | 98 (65.3) | 4.90 (0.39) | 0 | 0 | 2 (1.3) | 12 (8) | 136 (90.7) |\n| Usefulness | 4.04 (0.86) | 0 | 12 (8) | 30 (20) | 59 (39.3) | 49 (32.7) | 4.36 (0.71) | 0 | 5 (3.3) | 13 (8.7) | 53 (35.3) | 79 (52.7) |\n\na Likert scores and score distributions over 50 notes for 3 annotators. There are no 1 ratings for either physician or LLM summaries in the 150 evaluation results.\n\n#### Table 4. Mean Clinical Safety Evaluation, Large Language Model (LLM)–Generated and Physician-Written\n\n| | LLM-generated | | | | | | Physician-written | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | Likert score 1-5, No. (%)a | | | | | | Likert score 1-5, No. 
(%)a | | | |\n| Criteria | Mean (SD) | 1 | 2 | 3 | 4 | 5 | Mean (SD) | 1 | 2 | 3 | 4 | 5 |\n| Completeness | 4.20 (0.93) | 0 | 13 (8.7) | 19 (12.7) | 58 (38.7) | 60 (40) | 4.50 (0.65) | 0 | 0 | 17 (11.3) | 43 (28.7) | 90 (60) |\n| Curation | 4.82 (0.32) | 0 | 1 (0.7) | 3 (2) | 21 (14) | 125 (83.3) | 4.90 (0.31) | 0 | 0 | 3 (2) | 8 (5.3) | 139 (92.7) |\n| Readability | 4.74 (0.37) | 0 | 1 (0.7) | 6 (4) | 23 (15.3) | 120 (80) | 4.94 (0.14) | 0 | 0 | 0 | 10 (6.7) | 140 (93.3) |\n| Correctness: hallucination | 4.96 (0.14) | 0 | 0 | 0 | 5 (3.3) | 145 (96.7) | 5.00 | 0 | 0 | 0 | 0 | 150 (100) |\n| Correctness: knowledge gap | 4.88 (0.48) | 0 | 3 (2) | 2 (1.3) | 6 (4) | 139 (92.7) | 4.90 (0.42) | 0 | 1 (0.7) | 5 (3.3) | 3 (2) | 141 (94) |\n| Correctness: faulty logic | 4.60 (0.75) | 0 | 11 (7.3) | 12 (8) | 13 (8.7) | 114 (76) | 4.94 (0.24) | 0 | 0 | 2 (1.3) | 2 (1.3) | 146 (97.3) |\n| Correctness: bias | 5.00 | 0 | 0 | 0 | 0 | 150 (100) | 5.00 | 0 | 0 | 0 | 0 | 150 (100) |\n| Overall safety risk | 4.06 (0.86) | 0 | 11 (7.3) | 27 (18) | 60 (40) | 52 (34.7) | 4.50 (0.56) | 0 | 1 (0.7) | 16 (10.7) | 41 (27.3) | 92 (61.3) |\n| | | | | | a Likert scores and score distributions over 50 notes for 3 annotators. There are no 1 ratings for either physician or AI summaries in the 150 evaluation results. | | | | | | | |\n\nJAMA Network Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 7/12", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed8.pdf" - }, - { - "text": "evaluation frameworks may not address the anticipated effect LLM performance limitations could have on patient safety.38-41\n\nIn this study, we aim to expand on prior work of clinical summarization to rigorously evaluate the outcomes of a fine-tuned model developed to generate accurate and safe summaries of the care rendered during an ED visit, with the long-term goal of integrating automated, structured EM-to-IP handoff notes into an EHR-based electronic handoff admission workflow (see eAppendix 1 in Supplement 1). We fine-tune pretrained LLMs on well curated datasets of structured and unstructured EHR data from the ED encounter to summarize the patient's ED care. We improved the correctness of model generations and customized the summaries in a structured format designed by a team of EM and internal medicine physician leaders for optimal usefulness. We proposed a novel patient safety-focused LLM evaluation framework to examine the LLM-generated handoff notes' quality and accuracy and the downstream patient safety implications of any identified inaccuracies. To evaluate noninferiority, we compared the LLM-generated handoff notes with the preexisting physician-written EM-to-IP handoff notes as the active control, using both the proposed patient safety-focused clinical evaluation framework and automated benchmark-driven methods. We used the physician-written EM-to-IP handoff notes as the active control and used the scores from both evaluation frameworks for the margin of inferiority of the intervention.\n\n# **Methods**\n\n### **Data Collection**\n\nThe study, with review and approval from the Weill Cornell institutional review board (IRB), was conducted at an urban academic 840-bed quaternary-care hospital in New York City, with approximately 71 000 adult ED visits and 21 000 admissions annually. 
EHR data from 1600 individual EM patient encounters leading to acute hospital admission were randomly selected from visits occurring between April and September of 2023. We limited our analysis to EM patient encounters occurring after April 2023, as the study site had updated the EM-handoff at that time. Encounters before this date used an earlier version of the EM-handoff note that would have provided suboptimal data for training labels. We used these data to fine-tune a pretrained LLM, which then generated an abstractive EM-handoff note. For the 1600 patient encounters (the study participants), Weill Cornell Medicine IRB approved a waiver of informed consent because the study used retrospective data and posed minimal risk to patients. We used Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines.\n\n### **EM-to-IP Handoff Note Template**\n\nThe EM-to-IP handoff note template used in the study is a replication of the current manual handoff note structure used at the study site. The generated EM handoff note consists of components generated by a rule-based pattern-matching approach (laboratory tests, vitals, medications, consult orders, and radiology impressions) and components generated by the trained abstractive summarization model (history of present illness [HPI], differential diagnoses, immediate care plans, in-ED events, and disposition). Each summary also included a header with the timestamp of ED triage and discharge, patient's birth date, patient's unique identifier, patient's encounter number, and the total time of patient's stay in the ED.\n\n### **Data Curation for Automated ED Note Generation**\n\nThe EHR data were bifurcated into 2 datasets linked by the patient encounter number: 1 for the rulebased pattern-matching approach and the other for the LLM fine-tuning discussed in further detail in eAppendix 1 in Supplement 1. 
The rule-based framework was designed by the 3 board certified EM physicians (M.M., A.F., and P.S.). Fine tuning of the pretrained LLM consisted of the notes in **Table 1**: EM clinician notes, consultation notes, EM progress note entries, and EM procedure notes. The EM-to-IP handoff notes were used as the labels. As the preexisting labels were of variable quality for\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 3/12", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed8.pdf" - }, - { - "text": "subsequently evaluated 2 ED-to-inpatient handoff notes for each patient: (1) the physician-written note and (2) the LLM-generated note.\n\nOn a Likert scale of 1 to 5, where 1 is unacceptable and 5 is excellent, the 3 physicians rated the completeness, curation, readability, and correctness of the summary as shown in eTable 1 in Supplement 1. Physicians rated the usefulness of the summary, defined as the capability of the summary being incorporated into a workflow where a physician would make edits before final completion, mitigating potential future self-referential learning loops and the downstream adverse consequences.51 Likewise, the raters assessed potential patient safety implications of unmitigated model errors using a scale from 1 to 5, where 1 denotes life-threatening risks and 5 denotes no identified patient safety risk for completeness, curation, readability, and the 4 subcategories within correctness (hallucination, faulty logic, knowledge gap, and bias), as well as the overall patient safety risk.45 Evaluators arrived at prestudy consensus that a usefulness Likert score of at least a 3 out of 5 indicated that the LLM-generated summary likely demonstrated baseline acceptability for such a workflow. 
To extrapolate a theoretical worst case scenario, the physicians rated the safety of the LLM-generated summary as defined as the capability of the summary to fully replace a physicianwritten note (unmitigated).\n\nTo improve consistency and agreement, the 3 reviewers met to familiarize themselves with the framework and evaluated 10 separate cases from the test dataset that were not included in the clinical evaluation results. Additionally, after independently scoring the summaries, they met to ensure consensus interpretation of the multidimensional scoring framework. Interrater reliability was calculated using intraclass correlation coefficient (ICC), using a 2-way random effects model for consistency with the Pingouin statistical package version 0.5.4 in Python (Python Software Foundation). The ICC measures the similarity of the 3 raters to confirm the consistency and validity of the evaluation protocol; the scores are from 0 to 1, where 1 indicates unanimous agreement and 0 represents no agreement.52 Data were analyzed from October 2023 to March 2024.\n\n## **Results**\n\n#### **Automated Tasks**\n\nOf 1600 patients, the mean (SD) age was 59.8 (18.9) years and 832 (52%) were female. In **Table 2**, ROUGE and BERTScore compare the summaries with the testing set from our annotations, and SCALE score compares the summaries with the source notes. From automatic evaluation results, we observed that LLM-generated summaries had better scores than the physician summaries, such that ROUGE-2 was 0.322 vs 0.088, BERT-precision was 0.859 vs 0.796, and SCALE was 0.691 vs 0.456, suggesting the LLM-generated summaries were more similar and more detailed than the physician summaries.\n\n### **Clinical Evaluation Tasks**\n\nThe clinical evaluation results for LLM-generated summaries and physician-written summaries are shown in **Table 3** and **Table 4**. 
The mean clinical quality scores of the automated summaries are in a comparable range (4-5) to those of the physician summaries. However, the automated summaries were observed to be of lower quality compared with the physician-written summaries with regards to mean (SD) usefulness (4.04 [0.85] vs 4.36 [0.71]), completeness (4.00 [0.88] vs 4.16 [0.84]),\n\n| | Table 2. Automated Evaluation Scores, Large Language Model (LLM)–Generated and Physician-Written | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Summary type | R-1a | R-2a | R-La | BERT-p | BERT-r | SCALE |\n| LLM-generated | 0.494 | 0.322 | 0.391 | 0.859 | 0.876 | 0.691 |\n| Physician-written | 0.251 | 0.088 | 0.154 | 0.796 | 0.827 | 0.456 |\n\nAbbreviations: BERT, bidirectional encoder representations from transformers; p, precision-based scores; r, recall-based scores; R, recall-oriented understudy for gisting evaluation; SCALE, source chunking approach for large-scale inconsistency evaluation.\n\na R-1, R-2, R-L are the 3 types of recall-oriented understudy for gisting evaluation scores. Higher is better for all metrics.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 6/12", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed8.pdf" - }, - { - "text": "#### Abstract (continued)\n\nand safety via a novel evaluation framework. This study suggests the importance of a physician-inloop implementation design for this model and demonstrates an effective strategy to measure preimplementation patient safety of LLM models.\n\nJAMA Network Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723\n\n## **Introduction**\n\nHandoffs, where patient information is exchanged between health professionals during a transfer of clinical responsibility, have been identified as a critical source of medical errors.1,2 The Joint Commission, the Accreditation Council for Graduate Medical Education, and the Association of American Medical Colleges have all recommended the development of high-quality and standardized handoff processes to address the substantial patient risk of this ubiquitous event.3,4 Implementing handoff tools has previously demonstrated significant reductions in medical errors.5,6 High-quality handoffs from emergency medicine (EM) to inpatient (IP) services (EM-to-IP) are challenged by medical complexity, diagnostic uncertainty, rapidly evolving care plans, and time constraints.7-10 The EM-to-IP handoff structure is not well standardized, frequently communicated verbally, and poorly adhered to in emergency departments (EDs), including in medical centers with formalized handoff systems.11-14 Prior research has demonstrated that suboptimal EM-to-IP handoff is associated with adverse events, EM leaders and front-line clinicians themselves view the EM-to-IP handoff as high risk, and an electronic health record (EHR)-based technology is commonly mentioned as the most desired assistive tool in improving ED transitions of care.15-18 Limited work to date has demonstrated EM electronic handoff tools as feasible, efficient, and effective.19-21 In April 2023, EM and internal medicine leadership of the study site collaboratively developed and launched a mandatory, EHR-based handoff workflow via a standardized EM-to-IP handoff note template, designed for real-time completion by the EM care team at time of admission. 
At 3 and 6 months postlaunch, informal evaluation of new EM-to-IP handoff notes through random medical record review and unstructured clinician feedback sessions revealed variable completeness, quality, and subsequent usefulness of the handoff notes.\n\nIn recent years there has been an accelerated interest in using LLMs to automate clinical tasks in an effort to unburden physicians and reduce burnout.22 Computer-generated text within clinical notes using natural language processing (NLP) have been overall shown to improve note completion rates, physician satisfaction, and patient outcomes.23 Since 2018, NLP has made rapid advancements in health care with the discovery of the transformer model architecture, the building block of large language models (LLMs). LLMs can automate workflows such as discharge summaries,24 radiology reports,25 patient messaging,26 after-visit summaries,27 and ambient dictation28 with various levels of perceived quality in each workflow.29 LLMs are particularly effective at summarizing large unstructured clinical datasets, such as ED patient medical records.30 A common concern of LLMs is their ability to hallucinate data, or LLMs generating output text that is not factually consistent with the original source content.31 Much work has been done in health care to reduce hallucinations through building larger-parameter models trained on trillions of datasets, and then instruction finetuning the LLM on smaller, well-curated datasets.32,33 LLMs can also be designed with explainability by citing inferred content back to the reference source notes.34 For short-context length notes, using few-shot prompt engineering approaches with large language models like GPT-4 can produce summaries that outperform standard physician documentation in completeness and error frequency.35 However, factual inconsistencies in the summaries produced by LLMs increase as the context length increases,36 and for medium- to long-context tasks, fine-tuning an open-source 
model has been shown to perform better than a prompt-learning approach.37 In prior work, members of this study team demonstrated 62% of LLM-generated hospital course summaries met standard-of-care for a formal inpatient discharge summary.24 However, recently published clinical\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 2/12", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - }, - { - "text": "superior performance. However, while the manual clinical evaluation demonstrated the majority of the LLM-generated notes were of promising comparative quality (scores of 4-5), they were, on average, inferior to the clinician-written notes.\n\nOur novel clinical evaluation's findings suggest the majority of identified quality limitations and incorrectness would have minimal impact on patient safety, even when extrapolated to the worst-case scenario of the LLM-generated summary content not being reviewed and edited by a clinician before completion. This was designed to address contemporary LLM concerns of user trust, reliance and expertise.49 As such, none of the incorrect output text elements reached life-threatening risk. However, incompleteness and faulty logic identified in the automated summaries were not always negligible, with just under 1 in 10 of these performance gaps determined to have the potential to create significant patient safety risk compared with the physician-written summaries. These critical implementation safety findings will inform (1) directionality of further model refinement; (2) further clinical evaluation of postrefinement model output; and (3) irrespective of downstream model performance, an EHR-implementation plan constrained to a user-interface design that will allow EM clinicians to review and edit the LLM-generated handoff note as a draft before finalizing (see eAppendix 1 in Supplement 1). 
This physician-in-the-loop process has also been identified as critical in other recent work implementing LLMs into clinical workflows.29,53\n\nWhile the automated methods of SCALE and MPNet-based sentence transformers demonstrated a cursory view of the faithfulness performance of the models, the clinical evaluation provided the nuanced context of the true factuality of our system on a word by word level. When comparing with the source notes, the automatic evaluations rewarded the summaries with more details, more semantic similarities, and more entailment logics, while physician-written notes tended to be more concise with more shortcuts and clinical jargon, which are penalized by automatic evaluation metrics. In addition, LLM-generated summaries are completely based on the source notes, while physician-written summaries are often composed with additional knowledge that cannot be found from the source notes.\n\nThe divergence of the automated and clinical evaluation results of an LLM intended for integration into a critical clinical workflow is an important finding. First, this observed finding validates the importance of clinical evaluations in addition to conventional automated evaluations to determine accuracy.54 While other LLM clinical evaluation frameworks have been described to measure conventional model output quality categories (such as incorrectness domains and other performance gaps),30,35 to our knowledge, our novel framework is the first to incorporate anticipated patient safety implications for each individual category deficiency.\n\n### **Limitations**\n\nThere were several limitations to the study that were primarily driven from constraints of infrastructure, as well as regulations, legal governance, and labor requirements. At the study location, the data were required to remain on premise at all times and the infrastructure that was provided had a GPU limitation of 24 GB. 
Given these infrastructure restrictions, the best open-source model available during the study was LLM 2. Furthermore, we were not able to demonstrate the comparable difference between our fine-tuned LLM 2 model and third party LLMs32,55 because of the study location's restrictions and concerns with the data retention policies. Nevertheless, our study demonstrates the potential capability of integrating state-of-the-art open source LLMs at organizations that are less open to integrating third-party LLMs.\n\nWhile the dataset was smaller, we made significant efforts to reduce model variance and prevent overfitting by allocating more data to the training cohort and using k-fold cross validation. And while our ratio split choice implies the testing results will have slightly greater variance than expected, this is mitigated through the extensive manual clinical assessment that was performed. The study's multidimensional clinical evaluation was labor intensive, requiring more than 200 hours from expert informaticists and quality trained clinician experts to both curate the dataset of 1600\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 8/12", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed8.pdf" - }, - { - "text": "LLM-model training, an informatics professional (V.H.) worked over a period of 200 hours with 3 board certified emergency medicine physician leaders with experience in formal quality and patient safety review processes (M.M., A.F., and P.S.) to improve the dataset through manual curation and annotation. As the task of EM-handoff note generation is not dependent on racial characteristics of the patients, we removed all mentions of race during the annotation stage as a means to avoid race bias; therefore, the model was trained to generate text without race-based assumptions. 
Although resource intensive, a small and carefully curated dataset of at least 1000 examples has been shown to be sufficient to produce remarkable results for the language model chosen.42 Given the size of our dataset, we created a train and test dataset with a ratio of 1500:100, with a higher ratio of data placed in the training set and eschewed a validation set to lower the variance of the models. We used k-fold cross validation on the training dataset to avoid sampling bias for the hyperparameter optimization of the LLMs.\n\n### **Models**\n\nFor this study, we chose the LLMs Robustly Optimized BERT Approach (RoBERTa; hereafter referred to as LLM 1)43 for saliency content selection and Large Language Model Meta AI 2 (Llama-2; hereafter referred to as LLM 2) 7B44 for abstractive summarization. Further information about the models and technology specifications is provided in detail in eAppendix 1 in Supplement 1.\n\n#### **Data Processing**\n\nAs LLM 2 only has a context size of 4096 tokens,44 we used 2 steps to process the EM notes to both shorten the input size while maintaining content salience. First, we adopted a number of heuristic strategies for prioritization and filtration: (1) clinical note types (hierarchy presented in Table 1), (2) time of authorship, and (3) duplicate sentence detection. Second, we used an LLM 1–based saliency model to infer EM note sentences based on likelihood of content contribution to the EM-to-IP handoff notes.\n\n#### **Model Training and Inference**\n\nOur summarization model is a fine-tuned decoder-only causal language model based on LLM 2. We used different prompts for the separate types of summarization: HPI and EM handoff. 
Additional information about the model training and inference process is provided in eAppendix 1 in Supplement 1.\n\nUsing a combination of generative AI powered by our fine-tuned LLM 2 model and a set of heuristic rules, our summarization system produced ED handoff notes with various sections for downstream clinical tasks. The inference process is shown in the **Figure**.\n\n| | Table 1. Types of Data Included From the Emergency Department (ED) Patient Electronic Health Recorda |\n| --- | --- |\n| Type of data | Description |\n| Descriptive | Date of birth, medical record number, encounter number, and total time of stay in ED |\n| Encounter | ED arrival date and time, IP admit date and time |\n| Laboratory tests | Examples: hemoglobin, hematocrit, white blood cell count, neutrophil count, platelets, sodium, |\n| (all results available) | potassium, chloride, bicarbonate, creatinine, blood urea nitrogen, troponin, D dimer, lactate, |\n| | urinalysis, ketone, blood, nitrite, leucocytes, and red blood cells |\n| Laboratory tests | Examples: β-human chorionic gonadotropin hormone, all serum drug levels (alcohol level, |\n| (only if abnormal) | salicylate level, Tylenol level), magnesium, lipase, and erythrocyte sedimentation rate |\n| Notes (in order of | EM clinician notes, consultation notes, EM progress notes, and EM procedure notes |\n| hierarchy) | |\n| Vitals | Height, weight, temperature, heart rate, blood pressure, and peripheral capillary |\n| | oxygen saturation |\n| Orders | Medications, consults, and radiology results |\n\nAbbreviations: EM, emergency medicine; IP, inpatient.\n\na Automated EM handoff notes are generated from the curation of the data through both rule-based and large language model–summarization approaches.\n\nJAMA Network Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 4/12", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed8.pdf" - }, - { - "text": "In contrast to routers motivated by controlling costs, several LLM router designs focus solely on improving quality of responses [31, 45, 57, 58].\n\nThe LLM routers described thus far do not modify the queries or individual LLM responses. Other types of control planes do. Ensemble approaches such as mixture-of-expert (MoE) [29, 30, 52, 56] architectures select a subset of underlying models to apply to each token of a query and merge their responses. LLM synthesis [40] architectures operate similarly, but route the entire query to a subset of underlying LLMs and merge their responses. These approaches reduce inference costs by using fewer and/or less complex underlying models.\n\nApplications of LLM routers. A key use case for LLM routers is to help LLM-based application reduce cost. Several commercial routers, including Unify [12], Martian [5], NotDiamond [7], and others, offer this as a service. By replacing a few lines of code, the application can send user queries to a router service, rather than directly to some LLM provider. The service selects the optimal LLM and forwards the queries. Commercial router services claim that this results in significant cost savings: up to 98% in the case of Martian [5], and 10× in the case of NotDiamond [7].\n\n### 3 LLM Control Plane Integrity\n\nIn this section, we define *LLM control plane integrity*. Informally, it means that decisions made about underlying LLM queries made by the control plane algorithms cannot be subverted by adversarial queries. Looking ahead, we will focus on one class of control plane: predictive LLM routing as used to manage cost.\n\nFormalizing control planes. An LLM control plane Rω is a potentially randomized algorithm. It is parameterized by a string ω, called the parameters. It utilizes some number n of LLMs denoted by M. 
We will mostly focus on the case of n = 2, and, for reasons that will be clear in a moment, use Ms (\"strong\") and Mw (\"weak\") to denote the two underlying LLMs. Then inference on an input x ∈ X for some set X of allowed queries is performed by computing a response via y ←$ RMω (x). Here we use ←$ to denote running R with fresh random coins; we use ← when R is deterministic. We focus on inference for a single query, but it is straightforward to extend our abstraction for control planes to include sessions: the controller would maintain state across invocations, potentially adapting its behavior as a function of a sequence of queries and responses.\n\nLLM control planes should, in general, be relatively computationally lightweight, at least compared to the underlying LLMs. This is particularly so in the cost-motivated usage of control planes, as a computationally or financially expensive control plane would eat into cost savings incurred by utilizing cheaper underlying LLMs for some queries. For example, predictive binary routers use relatively simple classifiers to determine which of Ms or Mw should be used to respond to a query.\n\nInference flow. Given a set of LLMs M, a control plane Rω, and an input x, an LLM inference flow is the sequence of LLM invocations Mij (zj ) for 1 ≤ j ≤ m and ij ∈ {w, s} made when executing RMω (x). Here m is the total number of LLM invocations, and z1, . . . , zm are the queries made to the underlying LLMs. Should R be randomized, the sequence and its length are random variables. An inference flow can be written as a transcript\n\n$$T=(i_{1},z_{1}),(i_{2},z_{2}),\\ldots,(i_{m},z_{m})$$\n\nof pairs of model indexes ij ∈ {w, s} and model inputs zj . Note that for simplicity we ignore the potential for parallelization, assuming execution proceeds serially. For binary routers, we have m = 1 and T ∈ {(w, x),(s, x)}. We write submitting a sequence of inferences ⃗x = ⃗x1, . . . 
, ⃗xq to a control plane as\n\n$$R_{\\omega}^{\\mathcal{M}}(\\vec{x})=(R_{\\omega}^{\\mathcal{M}}(\\vec{x}_{1}),\\ldots,R_{\\omega}^{\\mathcal{M}}(\\vec{x}_{q}))$$\n\nwhere note that each invocation could result in multiple underlying LLM invocations. In the binary router case, however, each invocation results in a single LLM invocation.\n\nAn *inference flow policy* dictates the control plane designer's intention regarding use of the underlying models. For example, an application may want to ensure that only a small fraction of queries go to the expensive model Ms. We can define this as a predicate over a sequence of transcripts. In our binary router example, the policy can be more simply defined as a predicate P over (input, model) pairs (⃗x1, i1), . . . ,(⃗xq, iq) since this fully defines the sequence of transcripts. For example, a policy might specify that the strong model is used in at most an ϵ fraction of inferences:\n\n$${\\mathcal{P}}(({\\vec{x}}_{1},i_{1}),\\ldots,({\\vec{x}}_{q},i_{q}))=\\left(\\sum_{j=1}^{q}{\\frac{\\mathbb{I}(i_{j})}{q}}\\leq\\epsilon\\right)$$", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv1.pdf" - }, - { - "text": "#### **Evaluation**\n\nIt is critical to ensure that AI systems are safe, ethical, and without bias in the clinical domain. For the proposed approach, we performed comprehensive automatic evaluations and a novel, rigorous, patient safety-focused clinical evaluation. 
The unique clinical evaluation framework was designed to (1) screen for and identify the common, specific correctness issues in LLMs observed in longform clinical summarization and (2) assess the potential patient safety implications associated with any incorrectness identified using a modified version of the World Health Organization's International Classification for Patient Safety.45\n\n### **Automated Evaluations**\n\nWe used the summarization evaluation metrics of recall-oriented understudy for gisting evaluation (ROUGE),46 bidirectional encoder representations from transformers score (BERTScore),47 and source chunking approach for large-scale inconsistency evaluation (SCALE).48 ROUGE computes the overlap of n-grams between the generated and reference summaries. For longform document summarization, the following ROUGE scores are considered to be close to the reference summaries: ROUGE-1, above 0.4; ROUGE-2, above 0.2; and ROUGE-L, above 0.3.46 BERTScore leverages the pretrained contextual embeddings from BERT and matches words to compute a similarity score for each token in the candidate sentence with each token in the reference sentence. We used SCALE,48 a natural language inference–based approach, to measure the faithfulness between the source document and the generated text. Further background is provided about SCALE in eAppendix 2 in Supplement 1.\n\n#### **Statistical Analysis**\n\nBased on prior work, 3 board certified EM physician leaders (M.M., A.F., and P.S.) 
with experience in formal quality and patient safety review processes performed retrospective reviews of ED-based EHR records of 50 individual ED patient encounters, randomly selected from the test dataset.49 Based on prior published clinical evaluations of LLM, as well as the study feasibility of using EM physician quality and patient safety leaders, 50 ED patient encounters were evaluated.50 Reviewers\n\nCBC indicates complete blood count; CMP, comprehensive metabolic panel; CTH, computed tomography of the head; EHR, electronic health record; Hct, hematocrit; Hgb, hemoglobin; HPI, history of present illness; HR, heart rate; IP, inpatient; IVF, intravenous fluid; N/V/D, nausea, vomiting, and diarrhea; RR, respiratory rate; SDU, step down unit; SPO2, peripheral capillary oxygen saturation; WBC, white blood cell; WBG, whole blood glucose.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 5/12", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed8.pdf" - }, - { - "text": "# REROUTING LLM ROUTERS\n\nA PREPRINT\n\nAvital Shafran The Hebrew University of Jerusalem\n\nRoei Schuster Wild Moose\n\nThomas Ristenpart Cornell Tech\n\nVitaly Shmatikov Cornell Tech\n\n### ABSTRACT\n\nLLM routers aim to balance quality and cost of generation by classifying queries and routing them to a cheaper or more expensive LLM depending on their complexity. Routers represent one type of what we call LLM control planes: systems that orchestrate use of one or more LLMs. In this paper, we investigate routers' adversarial robustness.\n\nWe first define LLM control plane integrity, i.e., robustness of LLM orchestration to adversarial inputs, as a distinct problem in AI safety. 
Next, we demonstrate that an adversary can generate query-independent token sequences we call \"confounder gadgets\" that, when added to any query, cause LLM routers to send the query to a strong LLM.\n\nOur quantitative evaluation shows that this attack is successful both in white-box and black-box settings against a variety of open-source and commercial routers, and that confounding queries do not affect the quality of LLM responses. Finally, we demonstrate that gadgets can be effective while maintaining low perplexity, thus perplexity-based filtering is not an effective defense. We finish by investigating alternative defenses.\n\n### 1 Introduction\n\nLarge language models (LLMs) exhibit remarkable capabilities on many tasks. Today, hundreds of open-source and proprietary LLMs are available at different prices, ranging from expensive, state-of-the-art models to cheaper, smaller, less capable ones. LLM operators typically provide API access to their models (especially higher-quality models) on a pay-per-query basis. This imposes non-trivial costs on LLM-based applications and systems.\n\nDevelopers who want to integrate LLMs into their applications must therefore consider both utility and cost. They want to maximize the quality of responses to their queries while minimizing the cost. The two objectives conflict with each other: larger models tend to generate higher-quality answers but charge more per query. For example, at the time of this writing, GPT-3.5-turbo costs $0.5/$1.5 per 1M input/output tokens, GPT-4o-mini $0.15/$0.6, GPT-4o $2.5/$10, o1-preview $15/$60. The difference in quality between models is not uniform across queries. For some queries, even a cheap model can generate an acceptable response. More complex queries require an expensive model to obtain a quality answer.\n\nA natural solution to balancing performance and economic considerations is to take advantage of the availability of multiple LLMs at different price-performance points. 
Recently proposed *LLM routing* systems [5, 12, 27, 47, 53] orchestrate two or more LLMs and adaptively route each query to the cheapest LLM they deem likely to generate a response of sufficient quality. In the two-LLM case, let Ms be an expensive, high-quality model and Mw a weaker, lower-grade one. Given query q, the routing algorithm R(·) applies a classifier to q that outputs 0 if Mw is sufficient for answering q, or 1 if Ms is required. The system then routes q accordingly.\n\nLLM routing is an example of a general class of systems we call LLM control planes, which orchestrate the use of multiple LLMs to process inputs, as further described in Section 2.\n\nOur contributions. First, we introduce *LLM control plane integrity* as a novel problem in AI safety. Recently proposed LLM control-plane algorithms are learned, calibrated classifiers (see Section 2). Their inputs are queries from potentially adversarial users. Robustness of control-plane algorithms to adversarial queries is a new problem, distinct from adversarial robustness of the underlying LLMs.", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv1.pdf" - } - ] - }, - { - "references": { - "source_file": "legal1_opengouvernementlicense.pdf", - "query": "What are the improvements made to possible to the HadGEM3 and CMIP5 climate change models by UKCP18 ?", - "target_page": 1, - "target_passage": "mprovements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "**UK CLIMATE PROJECTIONS: A PROJECT OVERVIEW** \n\n# What is UKCP18 and why do we need it?\n\nFollowing the historic Paris Agreement on Climate Change in December 2015, the Department of Environment, Food and Rural Affairs announced a major upgrade to the UK Climate Projections.\n\nThe UKCP18 project will build upon the current set of 
projections (UKCP09) to provide the most up-to-date assessment of how the climate of the UK may change over the 21st century. This information will be essential to future Climate Change Risk Assessments1 and to equip the UK with information to help adapt to the challenges and opportunities of climate change in line with the National Adaptation Programme2.\n\nOrganisations and individual users will use UKCP18 to inform risk assessments and adaptation plans to ensure they are resilient to extreme weather and climate change. Some organisations will use UKCP18 in responding to the Adaptation Reporting Power3 for example.\n\n# What improvements does UKCP18 deliver?\n\nUKCP18 will benefit from a range of developments since the release of UKCP09, including:\n\n• Greater understanding of user needs as a result of the adaptation community's use of UKCP09 projections and the subsequent feedback – user workshops indicated that users supported the continued use of probabilistic projections and the importance of spatially coherent information4.\n\n- Advances in climate models in recent years, such as the Met Office Hadley Centre HadGEM35 model and the CMIP56 set of models. Improvements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena such as the El-Niño Southern Oscillation (ENSO).\n• Groundbreaking Met Office research on modelling of extreme events in high resolution regional climate models7.\n\n• The increased quantity and range of observations available since 2009.\n\n• Use of the new Met Office supercomputer, enabling a credible range of climate projections to be generated in greater spatial detail.\n\n1 The 2008 Climate Change Act allows UK government to mandate or invite certain organisations to produce reports to assess the impacts of climate change on their operations and present proposals for adaptation. 
**https://www.gov.uk/government/collections/climate-changeadaptationreporting-second-round-reports**\n\n2 Expected in 2018, the National Adaptation Programme will be supported by the Evidence Report of the Adaptation Sub-Committee of the Committee on Climate Change (ASC): **https://www.theccc.org.uk/uk-climate-change-risk-assessment-2017/introduction-to-the-ccra/** 3 Under the 2008 Climate Change Act, organisations are invited to produce Adaptation Reporting Power reports to assess the impacts of climate change on their operations and present proposals for adaptation: **https://www.gov.uk/government/collections/climate-change-adaptationreporting-second-round-reports**\n\n4 Spatial coherence means that climate projections can be compared between locations and aggregated over larger areas, enabling climate change to be assessed consistently over larger study areas.\n\n- 5 **http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models/hadgem3**\n- 6 Coupled model intercomparison project phase 5, see **http://cmip-pcmdi.llnl.gov/cmip5/**\n\n7 Kendon, E. J., Roberts, N. M., Senior, C. A. & Roberts, M. J. Realism of rainfall in a very high resolution regional climate model. J. Clim. 25,\n\n5791–5806 (2012) **http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00562.1**", - "page_start": 0, - "page_end": 0, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "**4**\n\nRather than using the original CMIP5 ensemble as in previous studies, the aim is to allow for an improved representation of atmospheric and land surface processes including extremes by using higher spatial resolution [11].\n\nHadGEM3 (Hadley Centre Global Environment Model version 3) is a configuration of the UK Met Office Unified Model (MetUM) which has been developed for use for both climate research and weather prediction applications. 
It is the result of converging the development of the Met Office's weather and climate global atmospheric model components so that, where possible, atmospheric processes are modelled or parametrized seamlessly across spatial resolutions and timescales.\n\nThe high-resolution simulations were performed using the HadGEM3A Global Atmosphere (GA) 3.0 model [12–14] at a resolution of N216 (0.556° of latitude by 0.833° of longitude with gridboxes of approx. 60 km length in mid-latitudes). This is the atmospheric component of the HadGEM3-GC2 coupled climate model [15,16], which is part of the HadGEM3 family of climate models [12]. This represents the third generation of HadGEM configurations, leading on from the HadGEM2 family of climate model configurations [13] which was used for CMIP5. Key improvements over the previous model, HadGEM2, include increased vertical levels in the atmosphere (85 compared to 38) and substantial changes to the model dynamics (ENDGame) [17]. This version of the HadGEM3 model lies in the transition from CMIP5 to CMIP6 versions. The Met Office is currently operationally running the coupled HadGEM3-GC2 model at N216 resolution for seasonal and decadal forecasting and clear benefits are emerging from this use at higher resolution [18,19].\n\nWe ran the model using only its atmosphere and land components, with time-varying sea-surface temperatures (SSTs) and sea-ice concentrations (SICs) prescribed as input quantities. This approach was taken for two reasons: (i) to provide a rapid first analysis of the implications of the higher resolution for projections of climate extremes and impacts—an atmosphere-only simulation requires considerably less computing time than a coupled ocean–atmosphere general circulation model (GCM); (ii) to allow us to explore, to some degree, uncertainties in regional climate changes by using SSTs and SICs from different climate models. 
To explore these uncertainties in the regional impacts of climate change, we carried out six HadGEM3 atmospheric simulations driven by time-varying SSTs and SICs from a subset of projections from the CMIP5 with the RCP8.5 scenario. The assumption here is that SSTs and SICs provide a substantial influence on regional patterns of climate change over land, so using a range of SST and SIC patterns in a single atmosphere model goes some way towards representing the range of regional climate changes that would arise in a set of different coupled ocean–atmosphere GCMs. This approach will not capture the full range of uncertainty affecting regional climate changes over land, because it still relies on one atmosphere model and one land surface scheme, so responses to radiative forcing that depend mainly on atmospheric process or land-atmosphere interactions will still be constrained by the behaviour of that single model. Nevertheless, we consider that our experimental design avoids the reliance on one single realization of climate and hence allows some of the uncertainties in regional climate-change impacts to be illustrated and explored.\n\nThe SSTs and SICs were taken from a subset of the CMIP5 transient projections performed with the RCP8.5 scenario from 1979 to 2100—the CMIP5 members were selected as representative of a range of outcomes for future climate change, including high and low climate sensitivity, different biases in baseline precipitation climatology, and different global patterns of precipitation change. Specific levels of global warming such as 1.5°C or 2°C were defined on the basis of the global mean temperature in the original CMIP5 projections. The time of reaching a specific level of global warming, therefore, varied between ensemble members. 
The CMIP5 SSTs were not bias-corrected, which means that the results here may be sensitive to systematic errors arising from biases in the present-day SST patterns.\n\nAtmospheric greenhouse gas concentrations were prescribed from the standard RCP8.5 concentration scenario. Aerosol concentrations were calculated within the model, with aerosol emissions prescribed again from the standard RCP8.5 scenario. This means that the greenhouse gas and aerosol concentrations, and hence radiative forcing, were the same in all ensemble", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Figure 20.** Difference between 2°C and 1.5°C global warming in percentage changes in mean (top) run-off in JULES simulations driven by the ensemble of HadGEM3 simulations. Note that the use of percentage changes emphasizes changes in regions where the baseline streamflow is small.\n\nThe largest regional differences between 2°C and 1.5°C global warming tend to be in the regions where the local impact is largest relative to the baseline. For TXx this is generally the mid-latitudes, whereas for TX90p it is generally the tropics. So, broadly, the impacts at 1.5°C global warming could be estimated by scaling-back the impacts at 2°C.\n\nThese results show some similarities with those from the CMIP5 models [9,38], but also some notable differences. The CMIP5 models were at lower spatial resolution than the models used here. Although the general patterns of change in TXx are broadly similar in our study and CMIP5, with greater warming in many continental interiors, it is notable that our results show more marked geographical variation than those from CMIP5 projections ([9], among others), with the continental interior warming being more intense in our projections. 
In particular, our results with HadGEM3 show more intense increases in maximum temperature in North America and Europe.\n\nOur projections of changes in consecutive dry days (CDD) are broadly consistent with those found in a subset of the CMIP5 ensemble [9], although there are some differences. Our ensemble mean suggests shorter dry spells in the central Amazon, whereas ISIMIP indicated longer dry spells. Also, as with the temperature indices, our results show greater geographical differentiation in the intensity of changes.\n\nThe decrease in Rx5day in some regions in our simulations contrasts with the subset of CMIP5 models used for the ISIMIP Fast-Track projections [9] which suggested an increase in Rx5day almost everywhere where at least 66% of the model ensemble agreed on the sign of the change, including all of northern South America. The reasons for these differences require further investigation, but some insight into possible reasons may be gained by examining the similarities and differences between our own individual ensemble members.\n\nFor all the ClimPACT variables, the variations in global means between the ensemble members were consistent at 1.5°C and 2°C. That is, the members with the largest changes at 2°C also showed the largest changes at 1.5°C, and the same was true for the smallest changes, and the relative proportions of changes in other ensemble members. This suggests that variations between the ensemble members at any particular GWL were not merely a consequence of internal variability", - "page_start": 22, - "page_end": 22, - "source_file": "pubmed11.pdf" - }, - { - "text": "# What can users expect from UKCP18?\n\nThere are three components to UKCP18: observations of historic climate, marine projections and projections over land. These components are described below and summarised in Table 1. 
UKCP18 will provide each of these components at a higher spatial and temporal resolution than UKCP09 and with more information on different types of uncertainty.\n\n# **OBSERVATIONS**\n\n### **Annual report: State of the UK Climate. Downloadable data.**\n\nThe \"State of the UK Climate\" report for 2017 will be included as part of the UKCP18 package, bringing the observed data right up to date. This annual update8 covers trends, the multidecade climate record and significant weather events such as the early July 2015 hot spell and the exceptionally mild and wet December of the same year.\n\nQuality controlled UK observational datasets from the Met Office observing network, provided at spatial resolutions to match the land projections and for pre-defined administrative regions and river basins, will be available under an Open Government Licence9. For variables such as temperature and precipitation these data sets will span the late 19th Century to the present day and will be provided for daily, monthly, seasonal, annual and long term averages.\n\n# **MARINE PROJECTIONS**\n\n#### **Sea level rise. Storm surge. Past event case studies.**\n\nSea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion. Outputs will include an estimate of the year-to-year changes in sea level rise and a \"plausible but highly unlikely\" scenario known as H++. A new feature of UKCP18 will be assessing the credibility of making sea level rise projections to 2300. The projections will use the latest information from the CMIP5 models and application of the methods used in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report10.\n\nThe UKCP09 storm surge projections will be updated to provide new estimates of the change in high water levels over the 21st Century. 
These estimates will be based on a combination of projected mean sea level change and projections of change in the extremes due to changes in atmospheric storminess. These \"storminess\" projections will use the same surge model used in operational weather forecasting, using the wind and pressure from the CMIP5 ensemble to drive the surge. New understanding of the modification of large-scale sea level change signals as they pass from the open ocean onto the shelf sea around the UK will be incorporated into the UKCP18 marine projections. UKCP18 will also include storm surge historical case studies derived from applying plausible future sea level change to historical extreme events.\n\n8 The latest update can be found at **http://www.metoffice.gov.uk/climate/uk/about/state-of-climate**\n\n- 9 **http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/**\n10 **https://www.ipcc.ch/report/ar5/**", - "page_start": 1, - "page_end": 1, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "**Figure 6.** Simulated changes in the average length of flood events (number of days in which the cumulative daily rainfall excess is positive, compared with the 95th percentile in 1981–2010), at 2°C global warming, for individual HadGEM3 simulations driven by SSTs and SICs from different members of the CMIP5 ensemble, and the ensemble mean. The labels above each panel identify the driving CMIP5 model (or ensemble mean).\n\n**Figure 7.** Hunger and Climate Vulnerability Index calculated for simulated climate states at 2°C global warming for five individual HadGEM3 simulations driven by SSTs and SICs from different members of the CMIP5 ensemble, and the ensemble mean.", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed11.pdf" - }, - { - "text": "# **PROJECTIONS OVER LAND**\n\nThe land projections comprise three components:\n\n### **60KM GLOBAL PROJECTIONS**\n\n#### **20 plausible climate futures. Latest Hadley Centre climate model. 
Simulations of extreme weather. Simultaneous impacts captured at multiple locations.**\n\nThis resolution will enable more realistic simulations of climate for the UK and capture the drivers of extreme weather, a significant advance on the 300 km-resolution simulations of UKCP09. A set of 20 plausible global projections of 21st century climate will be generated using an ensemble of the Met Office Hadley Centre HadGEM3 climate model. These projections will be selected to represent a wide range of possible future climate states to reflect key uncertainties, informing a risk-based approach to planning. They will be generated to provide spatially coherent daily data at a horizontal resolution of 60 km for two greenhouse gas concentration scenarios. These will be compared with an ensemble of CMIP5 models to provide additional information on uncertainties in the projections relative to other climate models.\n\n## **25KM PROBABILISTIC PROJECTIONS**\n\n**Captures natural variability and climate change. Updated models and observations. Provides seasonal scale projections.**\n\nBased on the established, peer-reviewed, ground-breaking method of UKCP09 for estimating uncertainty for use in risk-based analysis. Probabilistic projections will be updated using an up-to-date collection of Met Office climate simulations and the latest IPCC-assessed simulations to estimate the model uncertainties, incorporate the latest observations and estimate carbon cycle feedbacks. Projections will be on a 25 km grid for the UK at monthly intervals for several emission scenarios, including one used in UKCP0911. The new probabilistic projections will indicate the range of uncertainty in our knowledge of the climate system and natural variability through the 21st century, using probability density functions to provide information on how climate varies from month to month. 
This contrasts with UKCP09 for which only 30-year means were provided12.\n\n### **DOWNSCALED HIGH RESOLUTION PROJECTIONS**\n\n**Downscaled versions of the global model for the UK. For the most spatially detailed downscaling this includes hourly data. Simultaneous impacts captured at multiple UK locations.**\n\nThe high resolution projections will provide information on types of weather of relevance to adaptation at two different resolutions. The 12 km model provides a downscaled product that is similar to UKCP09's 25 km simulations but driven by an improved global model and at a higher resolution. This may be especially useful for those interested in water availability and some aspects of agriculture. A key reason for providing this data is that users will be able to compare it directly with EURO-CORDEX13.\n\nThe global projections will also be downscaled to 2.2 km using a process of nesting models at finer resolution that maintains the integrity of the representation of evolving atmospheric processes. Key benefits of simulations at this resolution will be the information provided on high impact events such as localised heavy rainfall in summer and potential improvements in the diurnal cycle.\n\nThe output will be available at a time resolution of 3-hourly, possibly higher for some output, for a high emission scenario. Spatial coherence will be maintained. Specific time slices (e.g. 2061-2080) will be made available with the exact nature of these still to be confirmed.\n\n11 SRESA1B: IPCC future scenario based on rapid economic growth and a balance of energy sources\n\n12 30-year means can be created using the UKCP18 PDF data\n\n13 http://www.euro-cordex.net/", - "page_start": 2, - "page_end": 2, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "In the present study, processing errors in the input data for one ensemble member, the HadGEM2-ES-driven member, caused the results to be invalid. 
Results for this member for the HCVI are, therefore, not presented here.\n\n### (d) Freshwater resources: run-off\n\nImpacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem–hydrology–surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way, typically applied at global scales. Variants of JULES form the land surface scheme of Met Office Hadley Centre Earth System Models [26,27] and have been used to assess impacts of climate change on global terrestrial ecosystems and hydrology [28–30] within such models. JULES can also be used outside of the Earth System Model (ESM), driven by meteorological outputs of other ESMs to assess impacts of a wider range of climate projections [6,8]. Here we use a new, higher-resolution configuration of JULES on a global grid of 0.5° resolution [31].\n\nIt has been noted that hydrological impacts models driven by climate-change projections from climate models tend to give more severe drying than simulated in the climate models themselves [32–34]. This is largely attributed to the inclusion of plant stomatal closure in response to elevated CO2 in the climate model land surface schemes, which generally reduces evapotranspiration relative to climate projections without this process and hence further increases run-off/streamflow or ameliorates decreases [34]. This process is often omitted from standard hydrological models. Plant physiological responses to CO2 are included in the JULES model, so our projections of changes in run-off here do account for this process.\n\nWe used each HadGEM3 simulation to drive JULES to simulate changes in run-off due to the effects of climate change and CO2 rise on precipitation, evaporation and transpiration. We analysed 30 year periods centred around the year of crossing GWLs of 1.5°C and 2°C relative to pre-industrial. 
We examined changes in both mean flows and low flows (defined as the flows for the lowest 10% of time).\n\n## (e) Correcting biases in climate model output and implications for defining levels of global warming\n\nThe ClimPACT extreme weather indices, HCVI and JULES run-off simulations were all performed using outputs from the higher-resolution HadGEM3 projections described in §2a. However, there were some differences in how these data were applied, with different approaches to the treatment of systematic biases in the climate model output. For the ClimPACT analysis, it was considered important to assess changes in the raw climate model output, because this directly represents the behaviour of the model itself. The main focus was on the changes relative to the present-day baseline climate, defined as 1981–2010, with absolute values in either the baseline or the GWLs of 1.5°C and 2°C being only of secondary interest. For the HCVI and JULES run-off analyses, however, it was considered important to correct for systematic biases in the climate model output, because these can lead to unrealistic representations of the key quantities in the present-day simulation [35]. A bias-correction methodology was, therefore, applied for these two parts of the analysis, whereby the model output was adjusted to make it consistent with an observed climatology [36]. We used a multi-segment statistical bias-correction methodology for precipitation [37], and a modification of this for other variables [37].\n\nThis difference in approach led to inconsistencies in the definitions of the dates of GWLs in the two parts of the study. In the extremes analysis using raw model output, the dates of passing GWLs were defined on the basis of the global mean temperatures in the driving CMIP5 models relative to those models' simulations of global mean temperature in 1870–1899 (table 3). 
However, in the HCVI and JULES analyses which used bias-corrected data, it was considered more appropriate for the GWLs to be defined using the warming in the observational dataset", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Table 3.** Time of reaching GWLs of 1.5°C and 2°C in the raw output from the HadGEM3 climate simulations, driven by different sets of CMIP5 sea-surface temperatures. The dates are the centre year of a 20-year period for which the climate data are applied to the calculation of the ClimPACT indices.\n\n| driving SSTs | 1.5°C | 2.0°C |\n| --- | --- | --- |\n| IPSL-CM5A-LR | 2015 | 2030 |\n| GFDL-ESM2M | 2040 | 2055 |\n| HadGEM2-ES | 2027 | 2039 |\n| IPSL-CM5A-MR | 2020 | 2034 |\n| MIROC-ESM-CHEM | 2023 | 2035 |\n| ACCESS1–0 | 2034 | 2046 |\n\nup to present-day plus model-projected warming thereafter (table 4). While this does lead to inconsistent definitions of dates of the GWLs for applications of the climate model output with and without bias correction, the focus here is on the level of warming relative to pre-industrial rather than the timing of this warming. Therefore, priority is given to an accurate quantification of GWLs in all parts of the study, at the expense of inconsistencies in the dates of these warming levels. The inconsistency between the dates of the GWLs ranged from 2 to 9 years depending on the model and warming level. This inconsistency would have consequences if these results were applied to time-dependent impacts and adaptation assessments, but that is not the case here so this concern does not apply. However, one issue is that the time-dependent nature of the aerosol forcing means that the spatial pattern of regional climate responses varies over time, so this will lead to some degree of inconsistency between the analysis of the ClimPACT extremes and the HCVI and JULES impacts projections.\n\n## 3. 
Results\n\nFor a world at 2°C global warming, we present a range of outcomes to provide insight into the level of agreement between models for a particular projected change, and hence an indication of potential robustness of the projected changes for informing adaptation. We then make a comparison of impacts at global warming 1.5°C to investigate the level of impact that would be avoided by limiting global warming to different levels. Bearing in mind the uncertainty in regional climate outcomes, we address this in a number of ways. For individual realizations, we compare the impacts at different warming levels to see if they are systematically smaller at 1.5°C, even if the sign of the change is uncertain. We also compare the range of outcomes at different GWLs, to see if the regional-scale uncertainty itself increases with global warming.\n\n## (a) Climate-change impacts at 2°C global warming\n\nFor 2°C global warming, the ensemble-mean increase in annual daily maximum temperature was above 2°C for most of the land surface, with the exception of the Indian subcontinent, most of Australia and Antarctica (figure 2). The increase was higher still in many regions; most of North America, much of China and north Asia, northwestern South America and all of Europe. In the northern and eastern USA and much of northern and western Europe, the annual daily maximum temperature increased by over 4°C for 2°C global warming. The global mean TXx increased by more than 2°C in all ensemble members (table 5), so the maximum temperature warming more than the global annual mean is a consistent result across all projections here, as found in previous studies with other models [9] (table 5).\n\nThe different ensemble members give somewhat different results at regional scales, although there is a strong consensus on the temperature extremes examined here becoming warmer. 
In the simulations driven by SSTs and SICs from the two IPSL CMIP5 models, most of the global", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Figure 2.** Simulated changes in annual daily maximum temperature relative to 1981–2010 at 2°C global warming, for individual HadGEM3 simulations driven by SSTs and SICs from different members of the CMIP5 ensemble, and the ensemble mean. The labels above each panel identify the driving CMIP5 model (or ensemble mean).\n\n**Table 4.** Time of reaching GWLs of 1.5°C and 2°C in each bias-corrected output from the HadGEM3 climate simulations, driven by different sets of CMIP5 sea-surface temperatures. The dates are the centre year of a 20-year period for which the climate data are applied to the HCVI calculation and JULES simulations.\n\n| driving SSTs | 1.5°C | 2.0°C |\n| --- | --- | --- |\n| IPSL-CM5A-LR | 2024 | 2035 |\n| GFDL-ESM2M | 2036 | 2051 |\n| HadGEM2-ES | 2019 | 2033 |\n| IPSL-CM5A-MR | 2023 | 2036 |\n| MIROC-ESM-CHEM | 2020 | 2032 |\n| ACCESS1-0 | 2026 | 2040 |\n\nland surface sees an increase in annual daily maximum temperature which is similar to the global annual mean temperature increase. In the IPSL-driven simulations, increases in TXx substantially larger than the GWL are confined to the eastern USA, Europe and part of northeast Asia. By contrast, the GFDL-driven simulation shows much of the global land surface seeing increases in annual daily maximum temperature larger than the global mean warming. Much of the mid-latitudes experience an increase in TXx of over 4°C. The very largest increases of 5°C or more are seen in central North America, Europe and northwestern Asia. Similar results are seen in the MIROC and ACCESS models.\n\nThe percentage of days exceeding the 90th percentile of daily maximum temperature increases more in tropical areas (figure 3). 
Some areas show over 60% of days above this level at 2°C global warming compared with present day, whereas in the mid-latitudes between 20% and 30% of days exceed this level. The global mean is between 20% and 30% in all ensemble members (table 3).\n\nrsta.royalsocietypublishing.org\n\n *Phil. Trans. R. Soc. A* **376**: 20160452", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed11.pdf" - }, - { - "text": "Model Intercomparison Project (CMIP5) ensemble, forced with the RCP8.5 concentration scenario. To provide more detailed representations of climate processes and impacts, the spatial resolution was N216 (approx. 60 km grid length in mid-latitudes), a higher resolution than the CMIP5 models. We used a set of impacts-relevant indices and a global land surface model to examine the projected changes in weather extremes and their implications for freshwater availability and vulnerability to food insecurity. Uncertainties in regional climate responses are assessed, examining ranges of outcomes in impacts to inform risk assessments. Despite some degree of inconsistency between components of the study due to the need to correct for systematic biases in some aspects, the outcomes from different ensemble members could be compared for several different indicators. The projections for weather extremes indices and biophysical impacts quantities support expectations that the magnitude of change is generally larger for 2°C global warming than 1.5°C. Hot extremes become even hotter, with increases being more intense than seen in CMIP5 projections. Precipitation-related extremes show more geographical variation with some increases and some decreases in both heavy precipitation and drought. There are substantial regional uncertainties in hydrological impacts at local scales due to different climate models producing different outcomes. 
Nevertheless, hydrological impacts generally point towards wetter conditions on average, with increased mean river flows, longer heavy rainfall events, particularly in South and East Asia with the most extreme projections suggesting more than a doubling of flows in the Ganges at 2°C global warming. Some areas are projected to experience shorter meteorological drought events and less severe low flows, although longer droughts and/or decreases in low flows are projected in many other areas, particularly southern Africa and South America. Flows in the Amazon are projected to decline by up to 25%. Increases in either heavy rainfall or drought events imply increased vulnerability to food insecurity, but if global warming is limited to 1.5°C, this vulnerability is projected to remain smaller than at 2°C global warming in approximately 76% of developing countries. At 2°C, four countries are projected to reach unprecedented levels of vulnerability to food insecurity.\n\nThis article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.\n\n## 1. Introduction\n\nThe majority of climate-change impacts assessments have tended to be framed in terms of future time horizons, e.g. impacts by the middle or end of the twenty-first century [1,2]. However, with international climate policy now largely focused on limiting warming to specific levels of global mean temperature such as 2°C [3] or 1.5°C [4], policy-relevant climate impacts assessments increasingly need to be framed in terms of such warming levels.\n\nThere are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.\n\n- (i) How much larger are the impacts at 2°C compared to 1.5°C? 
This is the primary question arising from the Paris Agreement [4] and is relevant to mitigation policy, informing judgements and actions on holding the global temperature rise to 'well below 2°C' and 'pursuing efforts to limit the temperature increase to 1.5°C'.\n- (ii) What regional climate conditions and related hydrological and ecological conditions could occur at a particular level of global warming, such as 2°C? This is relevant to adaptation policy and planning—exploring the possible outcomes for these levels of warming will help facilitate adaptation and improved resilience to account for a 1.5°C or 2°C world. It is recognized that many adaptation decisions require information on timing of specific impacts or risks, but nevertheless, framing regional impacts assessments in terms of associated global warming levels (GWLs) may help provide context of the levels of climate change that may be avoidable or unavoidable (and hence require adaptation).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - } - ] - }, - { - "references": { - "source_file": "legal1_opengouvernementlicense.pdf", - "query": "Which causes of the rise of sea level will be considered by UKCP18 ?", - "target_page": 2, - "target_passage": "Sea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# What can users expect from UKCP18?\n\nThere are three components to UKCP18: observations of historic climate, marine projections and projections over land. These components are described below and summarised in Table 1. UKCP18 will provide each of these components at a higher spatial and temporal resolution than UKCP09 and with more information on different types of uncertainty.\n\n# **OBSERVATIONS**\n\n### **Annual report: State of the UK Climate. 
Downloadable data.**\n\nThe \"State of the UK Climate\" report for 2017 will be included as part of the UKCP18 package, bringing the observed data right up to date. This annual update8 covers trends, the multidecade climate record and significant weather events such as the early July 2015 hot spell and the exceptionally mild and wet December of the same year.\n\nQuality controlled UK observational datasets from the Met Office observing network, provided at spatial resolutions to match the land projections and for pre-defined administrative regions and river basins, will be available under an Open Government Licence9. For variables such as temperature and precipitation these data sets will span the late 19th Century to the present day and will be provided for daily, monthly, seasonal, annual and long term averages.\n\n# **MARINE PROJECTIONS**\n\n#### **Sea level rise. Storm surge. Past event case studies.**\n\nSea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion. Outputs will include an estimate of the year-to-year changes in sea level rise and a \"plausible but highly unlikely\" scenario known as H++. A new feature of UKCP18 will be assessing the credibility of making sea level rise projections to 2300. The projections will use the latest information from the CMIP5 models and application of the methods used in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report10.\n\nThe UKCP09 storm surge projections will be updated to provide new estimates of the change in high water levels over the 21st Century. These estimates will be based on a combination of projected mean sea level change and projections of change in the extremes due to changes in atmospheric storminess. 
These \"storminess\" projections will use the same surge model used in operational weather forecasting, using the wind and pressure from the CMIP5 ensemble to drive the surge. New understanding of the modification of large-scale sea level change signals as they pass from the open ocean onto the shelf sea around the UK will be incorporated into the UKCP18 marine projections. UKCP18 will also include storm surge historical case studies derived from applying plausible future sea level change to historical extreme events.\n\n8 The latest update can be found at **http://www.metoffice.gov.uk/climate/uk/about/state-of-climate**\n\n- 9 **http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/**\n10 **https://www.ipcc.ch/report/ar5/**", - "page_start": 1, - "page_end": 1, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "**UK CLIMATE PROJECTIONS: A PROJECT OVERVIEW** \n\n# What is UKCP18 and why do we need it?\n\nFollowing the historic Paris Agreement on Climate Change in December 2015, the Department of Environment, Food and Rural Affairs announced a major upgrade to the UK Climate Projections.\n\nThe UKCP18 project will build upon the current set of projections (UKCP09) to provide the most up-todate assessment of how the climate of the UK may change over the 21st century. This information will be essential to future Climate Change Risk Assessments1 and to equip the UK with information to help adapt to the challenges and opportunities of climate change in line with the National Adaptation Programme2.\n\nOrganisations and individual users will use UKCP18 to inform risk assessments and adaptation plans to ensure they are resilient to extreme weather and climate change. 
Some organisations will use UKCP18 in responding to the Adaptation Reporting Power3 for example.\n\n# What improvements does UKCP18 deliver?\n\nUKCP18 will benefit from a range of developments since the release of UKCP09, including:\n\n• Greater understanding of user needs as a result of the adaptation community's use of UKCP09 projections and the subsequent feedback – user workshops indicated that users supported the continued use of probabilistic projections and the importance of spatially coherent information4.\n\n- Advances in climate models in recent years, such as the Met Office Hadley Centre HadGEM35 model and the CMIP56 set of models. Improvements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena such as the El-Niño Southern Oscillation (ENSO).\n• Groundbreaking Met Office research on modelling of extreme events in high resolution regional climate models7.\n\n• The increased quantity and range of observations available since 2009.\n\n• Use of the new Met Office supercomputer, enabling a credible range of climate projections to be generated in greater spatial detail.\n\n1 The 2008 Climate Change Act allows UK government to mandate or invite certain organisations to produce reports to assess the impacts of climate change on their operations and present proposals for adaptation. 
**https://www.gov.uk/government/collections/climate-changeadaptationreporting-second-round-reports**\n\n2 Expected in 2018, the National Adaptation Programme will be supported by the Evidence Report of the Adaptation Sub-Committee of the Committee on Climate Change (ASC): **https://www.theccc.org.uk/uk-climate-change-risk-assessment-2017/introduction-to-the-ccra/** 3 Under the 2008 Climate Change Act, organisations are invited to produce Adaptation Reporting Power reports to assess the impacts of climate change on their operations and present proposals for adaptation: **https://www.gov.uk/government/collections/climate-change-adaptationreporting-second-round-reports**\n\n4 Spatial coherence means that climate projections can be compared between locations and aggregated over larger areas, enabling climate change to be assessed consistently over larger study areas.\n\n- 5 **http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models/hadgem3**\n- 6 Coupled model intercomparison project phase 5, see **http://cmip-pcmdi.llnl.gov/cmip5/**\n\n7 Kendon, E. J., Roberts, N. M., Senior, C. A. & Roberts, M. J. Realism of rainfall in a very high resolution regional climate model. J. Clim. 25,\n\n5791–5806 (2012) **http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00562.1**", - "page_start": 0, - "page_end": 0, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "**Table 3.** Time of reaching GWLs of 1.5°C and 2°C in the raw output from the HadGEM3 climate simulations, driven by different sets of CMIP5 sea-surface temperatures. 
The dates are the centre year of a 20-year period for which the climate data are applied to the calculation of the ClimPACT indices.\n\n| 1.5°C | driving SSTs | | 2.0°C |\n| --- | --- | --- | --- |\n| | IPSL-CM5A-LR | 2015 | 2030 |\n| | GFDL-ESM2M | 2040 | 2055 |\n| | HadGEM2-ES | 2027 | 2039 |\n| | IPSL-CM5A-MR | 2020 | 2034 |\n| | MIROC-ESM-CHEM | 2023 | 2035 |\n| | ACCESS1–0 | 2034 | 2046 |\n\nup to present-day plus model-projected warming thereafter (table 4). While this does lead to inconsistent definitions of dates of the GWLs for applications of the climate model output with and without bias correction, the focus here is on the level of warming relative to pre-industrial rather than the timing of this warming. Therefore, priority is given to an accurate quantification of GWLs in all parts of the study, at the expense of inconsistencies in the dates of these warming levels. The inconsistency between the dates of the GWLs ranged from 2 to 9 years depending on the model and warming level. This inconsistency would have consequences if these results were applied to time-dependent impacts and adaptation assessments, but that is not the case here so this concern does not apply. However, one issue is that the time-dependent nature of the aerosol forcing means that the spatial pattern of regional climate responses varies over time, so this will lead to some degree of inconsistency between the analysis of the ClimPACT extremes and the HCVI and JULES impacts projections.\n\n## 3. Results\n\nFor a world at 2°C global warming, we present a range of outcomes to provide insight into the level of agreement between models for a particular projected change, and hence an indication of potential robustness of the projected changes for informing adaptation. We then make a comparison of impacts at global warming 1.5°C to investigate the level of impact that would be avoided by limiting global warming to different levels. 
Bearing in mind the uncertainty in regional climate outcomes, we address this in a number of ways. For individual realizations, we compare the impacts at different warming levels to see if they are systematically smaller at 1.5°C, even if the sign of the change is uncertain. We also compare the range of outcomes at different GWLs, to see if the regional-scale uncertainty itself increases with global warming.\n\n## (a) Climate-change impacts at 2°C global warming\n\nFor 2°C global warming, the ensemble-mean increase in annual daily maximum temperature was above 2°C for most of the land surface, with the exception of the Indian subcontinent, most of Australia and Antarctica (figure 2). The increase was higher still in many regions; most of North America, much of China and north Asia, northwestern South America and all of Europe. In the northern and eastern USA and much of northern and western Europe, the annual daily maximum temperature increased by over 4°C for 2°C global warming. The global mean TXx increased by more than 2°C in all ensemble members (table 5), so the maximum temperature warming more than the global annual mean is a consistent result across all projections here, as found in previous studies with other models [9] (table 5).\n\nThe different ensemble members give somewhat different results at regional scales, although there is a strong consensus on the temperature extremes examined here becoming warmer. 
In the simulations driven by SSTs and SICs from the two IPSL CMIP5 models, most of the global", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed11.pdf" - }, - { - "text": "Firstly, the period of 1986–2005 is defined as the baseline, of which the simulated average value is recognized as 0.61 °C above pre-industrial (the period of 1850–1900) levels; the baseline is selected according to the accessibility and operability of data, which is used for the determination of the periods with global warming by 1.5 °C and 2.0 °C and the comparison of maize yield between different periods. Secondly, the simulated values of global mean temperature in the future years are subtracted from the simulated average value of 1986–2005; then the values should be plus with 0.61 °C, which are the global warming results above pre-industrial levels; then 20 years moving average of the above results are calculated. Thirdly, the climate data of global warming by 1.5 °C is defined according to the principles provided in the fifth IPCC Assessment Report, for which it should be within 1.5–2.0 °C above pre-industrial levels at the end of the twenty-first century; the climate data of global warming by 2.0 °C is defined according to the principles provided in the fifth IPCC Assessment Report, for which it should be within 2.0–2.5 °C above pre-industrial levels at the end of the twenty-first century and the period of global warming by 2.0 °C should not be earlier than 2050. 
Finally, the climate models, scenarios and periods of global warming by 1.5 °C and 2.0 °C are separately confirmed; the data of global warming by 1.5 °C, simulated by IPSL-CM5A-LR under RCP2.6 scenario during 2020–2039 and simulated by GFDL-ESM2M under RCP4.5 scenario during 2041–2060; the data of global warming by 2.0 °C, simulated by NorESM1-M under RCP4.5 scenario during 2060–2079 and simulated by GFDL-ESM2M under RCP6.0 scenario during 2065–2084.\n\n**Simulation of maize yield using DSSAT.** According to the data of global warming by 1.5 °C and 2.0 °C selected above, we simulated global maize yield changes compared with the average yield during 1986–2005 on grid level using CERES-Maize, which is part of DSSAT version 4.649.\n\nThe inputs for DSSAT simulation include daily weather data, soil parameters, crop calendar data and management information. All the inputs are formatted at a 0.5°×0.5° grid resolution which are computed by high-performance computers. Weather data is from the AgMERRA dataset, including maximum and minimum temperatures, precipitation, total radiation and humidity. Crop calendar data were from the Center for Sustainability and Global Environment (SAGE), in which the existing observations of crop planting and harvesting dates are gridded formatted at a resolution of 5 min50. For management information, fertilizer applications, irrigation and other management practices are required. A crop-specific gridded dataset of nitrogen fertilizer application for the world was developed by integrating national and subnational fertilizer application data from a variety of sources, which is used to set up current fertilizer application rates for maize in each grid cell. Soil parameters are from the International Soil Profile Dataset (WISE), including soil texture, bulk density, pH, organic carbon content and fraction of calcium carbonate for each of five 20 cm thick soil layers51. 
All the soil data is allocated to be in accordance with the request of DSSAT simulation; the missing soil parameters for organic soils were adopted from FAO soil dataset.\n\nFirst maize yields across the world during the historical period 1986–2005 were simulated at the 0.5°×0.5° grid scale with two main production systems, including Spring maize and Summer maize. Historical national maize production is aggregated from simulated gridded yield and weighted by grid cell maize areas in 2000 from the gridded global dataset by combining two data products47. Second, genetic parameters of specific cultivars of maize from previous works were adopted for the initial parameters; model parameters related to crop genotype characteristics were calibrated and tuned following the method in Xiong et al.52, in which the simulated yields from 1986–2005 were comparable to the statistical data. Third, maize yields across the world were simulated under global warming by 1.5 °C and 2.0 °C. Finally, global and national maize yields were aggregated from gridded values; changes in national and global yields under global warming by 1.5 °C and 2.0 °C were calculated, comparing maize yield average for 1986–2005.\n\n**Simulation of market price using GTAP.** The yield changes for maize from the DSSAT models under 1.5 °C and 2.0 °C temperature increase are used to carry out simulations using competitive market for changes in production, market price, and self-sufficiency ratio of maize at national and global levels53,54. For this study, we use a comparative static analysis approach to simulate the impact of climate changes on the prices and trade of the major food crops under current economic conditions. 
Utilizing current economic conditions has the advantage of minimizing assumptions and model uncertainties related to future economic conditions55,56.\n\nThe original GTAP database doesn't include maize as a separate sector, rather it is combined with other coarse grains to form an \"other coarse grain\" sector. For this study, we updated the GTAP database by splitting maize from the original sector in the database, design an appropriate sectoral and regional aggregation scheme to the original database. The detailed method is given as follows:\n\nFirst, we improved the database by splitting maize from the existing sector \"other coarse grain\", following similar work using GTAP57–59 based on the routines from the Splitcom method60. In this procedure, the old flows of data both at national and trade levels are allocated between the new flows using weights. The national weights include the division of each unsplit user's use of the original split commodity among the new commodities; the division of unsplit inputs to the original industry between the new industries; the splitting of new industry's use of each new commodity. Maize use is mainly shared between feed, food, processing and others (seed, waste, etc.).\n\nTrade shares allocate the original slice of the split commodity into the new commodity for all elements of basic price value, tax, and margin. Finally, we used the RAS method for balancing the newly created database. The values for the national shares matrix were obtained from FAOSTAT. The trade shares matrix was calculated based on the data from UN Comtrade Database.\n\nSecond, our sectoral aggregation scheme for GTAP ensures that all the competing and complimenting sectors for maize are present in the most disaggregated form. For example, for maize, other crops compete for inputs of production and both livestock and households are major users of maize. 
For regional aggregation, we kept the details for all the main producing, consuming, and trading regions, for maize.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed9.pdf" - }, - { - "text": "The assumptions used are based on consultation with policy and operational experts at the Ministry of Justice and the National Offender Management Service. They also take into account observed data trends:\n\n- These projections represent a change from last year where the 2013 Scenario 2 (central) saw the population gradually falling over the six year lifetime of the projection. The Central Scenario in the projections this year shows the population rising over the next six years. This change arises from the fact that the latest projections capture a recent upward trend in prosecutions of more serious offences.\n- Despite the fact that overall crime is falling there has been an increase in recorded crime for certain offence types:\n\t- o Prosecutions for sexual offences are the highest in the decade and increased by 19% in the 12 months ending June 2014, in line with a 21% increase in recorded crime. Offenders sentenced for sexual offences had an Average Custodial Sentence Length (ASCL) of 59.7 months, a rise of 2.4 months, compared with year ending June 2013.\n\t- o Violence against the person proceedings for indictable offences have increased by 7% in the 12 months ending June 2014. This is in line with an 11% increase in recorded crime.\n\nFurther statistics and commentary on the changes seen in Court proceedings and sentencing over the last year is presented in the Criminal Justice System Statistics Quarterly publication. 
This is available online on GOV.UK at: www.gov.uk/government/collections/criminal-justice-statistics-quarterly", - "page_start": 4, - "page_end": 4, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "but also a result of the different forcings influencing the atmosphere model at the time of passing each GWL, and the interaction with the climate sensitivity of HadGEM3. The radiative forcing of non-CO2 forcings has previously been highlighted as a potentially important influence on patterns of climate change at 1.5°C and 2°C global warming [39]. Furthermore, despite some differences in regional climate responses between ensemble members, there were also some remarkable consistencies especially in the changes that might be considered inconsistent with a warming climate, such as regions such as northern South America where heavy rainfall (Rx5day) decreases rather increasing as might be expected under a warming climate. Again, these consistencies point to some common forcing of all simulations.\n\nOne key factor is the different times of passing a particular GWL, because the net radiative forcing would be different even though the same emissions and concentration scenario was used in all simulations. A given GWL was reached at a different time in each ensemble member, so the CO2 and aerosol concentrations vary between ensemble members; in members reaching a GWL early, such as that driven by IPSL-CM5A-LR, the CO2 concentration is relatively lower than in other members, and the total aerosol concentration would be relatively higher (CO2 concentrations are projected to increase in RCP8.5, but aerosol concentrations are projected decline). The net radiative forcing is smaller, because in RCP8.5 the increase positive radiative forcing from CO2 is greater than the decrease in net negative radiative forcing from aerosols. 
Moreover, the physiological effect of CO2 is also smaller, meaning that the consequent reduction in transpiration and associated additional land surface warming influence would also be expected to be smaller.\n\nConversely, in members reaching the same GWL later, such as that driven by GFDL-ESM2M, CO2 concentration is relatively higher, and aerosol concentrations are lower. So, net radiative forcing, CO2 physiological effects and the regional-scale radiative forcings from individual aerosol types could, therefore, be quite different in the GFDL-driven HadGEM3 simulation when it reaches 2°C global warming 25 years later than the IPSL-CM5A-LR-driven simulation.\n\nThe spatial pattern of changes in the different ensemble members may also play a role in influencing the global mean changes, for example, with large changes in some regions due to faster snow-melt or changes in cloud cover in one ensemble member leading to particular changes in regional warming that are not seen in other ensemble members. Moreover, the individual forcings of the different aerosol components such as sulfate and black carbon differ in sign and spatial pattern, so the overall impact on local radiative forcing and hence regional temperature patterns is more complex. Therefore, the global mean changes may not necessarily be expected to relative to global mean forcings.\n\nA further complexity in identifying precise mechanisms for regional changes is the experimental design used here, with one atmospheric model and concentration/emissions scenario but six different SST and SIC patterns, means that the impact of spatial heterogeneity in radiative forcings is complex and involves a mix of effects in HadGEM3 and the original CMIP5 models. In the case of aerosols, for example, our HadGEM3 simulations are driven with RCP8.5 aerosol emissions and the aerosol concentrations are then calculated within the model itself. 
The spatial distributions of aerosol optical depth and radiative forcing can, therefore, be expected to be reasonably similar, because they arise from the same emissions scenario, although some differences may occur due to the different regional climate-change patterns. However, the impact of aerosols is also seen in the SST and SIC changes, because these will have responded to changes in regional aerosol radiative forcing in the original CMIP5 simulations. Therefore, these SST and SIC patterns will carry the 'memory' of aerosol changes in the original CMIP5 projections.\n\nOne example of an impact of changing aerosol radiative forcing could be the precipitation changes in northern South America including Amazonia. All ensemble members show a general drying in this region, as seen in RX5day and mean run-off results. The reduction in Rx5day is particularly notable, because the general expectation would be for an increase in heavy rainfall events in a warmer climate, as is seen in most other regions in these projections. This reduced rainfall in the Amazon region may be associated with the reducing net negative aerosol radiative forcing in the North Atlantic [40]. CO2 physiological forcing may also play a role here [41,42].", - "page_start": 23, - "page_end": 23, - "source_file": "pubmed11.pdf" - }, - { - "text": "A detailed investigation of these factors is beyond the scope of this paper; nevertheless, this result illustrates the important point that the nature and patterns of the climate forcing at a particular level of global warming can play an important role in determining the patterns of regional impacts.\n\n## 5. Conclusion\n\nThe higher-resolution HadGEM3 simulations project consistent increases in temperature-related extremes, with larger changes at 2°C compared to 1.5°C and local changes being larger than the global annual mean. 
There is a higher degree of spatial variation in our projections compared with CMIP5-based studies.\n\nIn the model projections examined here, changes relating to the water cycle are complex, both in their geographical pattern and in the variation between different models. The length of flooding events generally increases across world in all models, but maximum rainfall can either increase or decrease depending on locations. Global patterns of increase and decrease show some consistency between the different GWLs, but also some local differences. Worldwide, most impacts broadly tend to increase with global warming in most areas. For global mean changes, even when the sign of change is uncertain, individual realizations generally show reduced impact at 1.5°C compared with 2°C. However, this does not always hold even at the scale of major global river basins.\n\nVulnerability to food insecurity increases more at 2°C global warming than 1.5°C in approximately three-quarters of countries assessed. The vulnerability increase can arise from increases in either flooding or drought. Reduced drought leads to decreased vulnerability in a limited number of cases.\n\nMost simulations here project a general increase in mean streamflow in most of the basins examined, but with a number of notable exceptions in the tropics. While flows in the Ganges are consistently projected to increase by 30–110% at 2°C, Amazon flows could either increase by 3% or decrease by 25%. Ensemble-mean changes in river flow often do not give a full impression of the magnitude of changes that may be possible, so adaptation planning in particular should not rely on ensemble-mean projections and instead consider a range of outcomes. 
The seasonal low streamflows also increase in many basins, but not as many as for the mean flows—many basins see decreased low flows in some or all projections.\n\nBroadly, changes in weather extremes at 1.5°C global warming could be estimated by scaling back the impacts at 2°C, if this is done with individual ensemble members rather than the ensemble mean. However, this was not always the case for impacts that depend on more complex process or interactions between more than one climate variable, such as run-off and an indicator of vulnerability to food insecurity.\n\nData accessibility. This article has no additional data.\n\nCompeting interests. We declare we have no competing interests.\n\nFunding. This research received funding from the European Union Seventh Framework Programme FP7/2007–2013 under grant agreement no. 603864 (HELIX: 'High-End cLimate Impacts and eXtremes'; www.helixclimate.eu). The work of R.A.B., C.B., J.C., L.G., K.L. and K.R. was additionally supported by the Joint UK BEIS/Defra Met Office Hadley Centre Climate Programme (GA01101).\n\nAcknowledgements. The authors thank Ed Pope, Jason Lowe and Dann Mitchell for advice and discussion, Alissa Haward and Maria Pearce for project management and administration of HELIX, and two anonymous reviewers whose comments substantially improved the paper.\n\n## References\n\n- 1. IPCC. 2014 Summary for policymakers. In *Climate change 2014: impacts, adaptation, and vulnerability. Part A: global and sectoral aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change* (eds CB Field *et al*.), pp. 1–32. Cambridge, UK: Cambridge University Press.", - "page_start": 24, - "page_end": 24, - "source_file": "pubmed11.pdf" - }, - { - "text": "**energy**41. 
It will also review in 2021 the data on biofuels with high indirect land-use change risk and establish a trajectory for their gradual phase out by 2030.\n\nThe overall objective is to ensure that EU regulatory framework on bioenergy is in line with the increased ambition set out in the European Green Deal.\n\n#### *2.2.6. Restoring the good environmental status of marine ecosystems*\n\n**Restored and properly protected marine ecosystems** bring substantial health, social and economic benefits to coastal communities and the EU as a whole. The need for stronger action is all the more acute as marine and coastal ecosystem biodiversity loss is severely exacerbated by global warming42 .\n\nAchieving good environmental status of marine ecosystems, including through strictly protected areas, must involve the restoration of carbon-rich ecosystems as well as important fish spawning and nursery areas. Some of today's sea uses endanger food security, fishers' livelihoods, and the fishery and seafood sectors. **Marine resources must be harvested sustainably and there must be zero-tolerance for illegal practices**. In this regard, the full implementation of the EU's Common Fisheries Policy, the Marine Strategy Framework Directive and the Birds and Habitats Directives is essential.\n\nThe application of an ecosystem-based management approach under EU legislation43 will reduce the adverse impacts of fishing, extraction and other human activities, especially on sensitive species and seabed habitats. To support this, **national maritime spatial plans**, which Member States have to deliver in 2021, should aim at covering all maritime sectors and activities, as well as area-based conservation-management measures.44 The Commission will also propose a **new action plan to conserve fisheries resources and protect marine ecosystems** by 2021. Where necessary, measures will be introduced to limit the use of fishing gear most harmful to biodiversity, including on the seabed. 
It will also look at how to reconcile the use of bottom-contacting fishing gear with biodiversity goals, given it is now the most damaging activity to the seabed. This must be done in a fair and just way for all. The European Maritime and Fisheries Fund should also support the transition to more selective and less damaging fishing techniques.\n\nHealthy fish stocks are key to the long-term prosperity of fishermen and the health of our oceans and biodiversity. This makes it all the more important to maintain or reduce fishing mortality at or under **Maximum Sustainable Yield levels**. This will help achieve a healthy population age and size distribution for fish stocks.\n\nThe **by-catch of species threatened with extinction** must also be eliminated or reduced to a level that allows full recovery. This should also be the case for those in bad conservation status or not in good environmental status. Furthermore, the by-catch of other species45 must be eliminated or, where this is not possible, minimised so as not to\n\n41 Article 29 of the EU Renewable Energy Directive 2018/2001.\n\n42 See for example Intergovernmental Panel on Climate Change (2019), Special Report on the Ocean and the Cryosphere in a Changing Climate.\n\n43 The Common Fisheries Policy, the Marine Strategy Framework Directive (2008/56/EC) and the Maritime Spatial Planning Directive (2014/89/EU).\n\n44 The Commission will report on the implementation of the Maritime Spatial Planning Directive by March 2022 at the latest, including the application of ecosystem-based management.\n\n45 Protected by international and EU law.", - "page_start": 11, - "page_end": 11, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## **OPEN**\n\n# **The impact of 1.5 °C and 2.0 °C global warming on global maize production and trade**\n\n**Kuo Li1*****, Jie Pan1 , Wei Xiong2 , Wei Xie3 & TariqAli3**\n\n**Climate change is becoming more and more remarkable which has an obvious impact on crop yields all 
over the world. Future climate scenario data was simulated by 5 climate models recommended by ISI-MIP under 4 RCP scenarios, in which the approximate scenarios with global warming by 1.5 °C and 2 °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by 1.5 °C and 2.0 °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under 2.0 °C scenario was much more serious than 1.5 °C scenario; the ratios of yield changes were separately 0.18% and − 10.8% under 1.5 °C and 2.0 °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the 2.0 °C scenario. The market price of maize would increase by around 0.7% and 3.4% under 1.5 °C and 2.0 °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.**\n\nIn the past hundred years, the global climate has experienced great changes1–4. According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming5. Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health6–10. Global warming has gradually changed from a scientific issue to a major social issue of common concern to governments and people of all countries11–13. In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris14. 
Paris Agreement has indicated that it is urgent to hold the increase in global average temperature well below 2.0 °C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels.\n\nFaced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food security in the whole world15–20. Meanwhile, global production losses might lead to price shocks and trigger export restrictions21–24; an increasingly interconnected global food system25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security in the worldwide27–29. So, the impacts of climate changes on crop yields and prices have been of highly concerned. Numerous studies have revealed that the warming trend has negative impact on crop yields and global trade in most regions all over the world30–32. There are three main methods for impacts assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations17,33. Environment-controlled experiments are designed to observe the influence of climate factors on crops, such as drought, flood, heat stress, cold damage, elevated CO2 concentration, through which the impact mechanism of climate change on crops would be revealed and established23,34,35. Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected field sites or in selected regions36–39. The statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in different sites or counties to establish regression functions for crop responses predictions40–43. 
These researches have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\n1 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China. 2 International Maize and Wheat Improvement Center, Texcoco, Mexico. 3 Peking University, Beijing, China. *email: hqlk2000@163.com", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "**Figure 2.** Simulated changes in annual daily maximum temperature relative to 1981–2010 at 2°C global warming, for individual HadGEM3 simulations driven by SSTs and SICs from different members of the CMIP5 ensemble, and the ensemble mean. The labels above each panel identify the driving CMIP5 model (or ensemble mean).\n\n**Table 4.** Time of reaching GWLs of 1.5°C and 2°C in each bias-corrected output from the HadGEM3 climate simulations, driven by different sets of CMIP5 sea-surface temperatures. The dates are the centre year of a 20 year period for which the climate data is applied to the HCVI calculation and JULES simulations.\n\n| 1.5°C | driving SSTs | | 2.0°C |\n| --- | --- | --- | --- |\n| | IPSL-CM5A-LR | 2024 | 2035 |\n| | GFDL-ESM2M | 2036 | 2051 |\n| | HadGEM2-ES | 2019 | 2033 |\n| | IPSL-CM5A-MR | 2023 | 2036 |\n| | MIROC-ESM-CHEM | 2020 | 2032 |\n| ACCESS1-0 | | 2026 | 2040 |\n| | | | |\n\nland surface sees an increase in annual daily maximum temperature which is similar to the global annual mean temperature increase. In the IPSL-driven simulations, increases in TXx substantially larger than the GWL are confined to the eastern USA, Europe and part of northeast Asia. By contrast, the GFDL-driven simulation shows much of the global land surface seeing increases in annual daily maximum temperature larger than the global mean warming. 
Much of the midlatitudes experience an increase in TXx of over 4°C. The very largest increases of 5°C or more are seen in central North America, Europe and northwestern Asia. Similar results are seen in the MIROC and ACCESS models.\n\nThe percentage of days exceeding the 90th percentile of daily maximum temperature increase more in tropical areas (figure 3). Some areas show over 60% of days above this level at 2°C global warming compared with present day, whereas in the mid-latitudes between 20% and 30% of days exceed this level. The global mean is between 20% and 30% in all ensemble members (table 3).\n\nrsta.royalsocietypublishing.org\n\n *Phil. Trans. R. Soc. A* **376**: 20160452\n\n........................................................", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed11.pdf" - } - ] - }, - { - "references": { - "source_file": "legal1_opengouvernementlicense.pdf", - "query": "What perdiod is covered by the 12 km resolution projection data of the UKCP18 ?", - "target_page": 4, - "target_passage": "1981-2080", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# What can users expect from UKCP18?\n\nThere are three components to UKCP18: observations of historic climate, marine projections and projections over land. These components are described below and summarised in Table 1. UKCP18 will provide each of these components at a higher spatial and temporal resolution than UKCP09 and with more information on different types of uncertainty.\n\n# **OBSERVATIONS**\n\n### **Annual report: State of the UK Climate. Downloadable data.**\n\nThe \"State of the UK Climate\" report for 2017 will be included as part of the UKCP18 package, bringing the observed data right up to date. 
This annual update8 covers trends, the multidecade climate record and significant weather events such as the early July 2015 hot spell and the exceptionally mild and wet December of the same year.\n\nQuality controlled UK observational datasets from the Met Office observing network, provided at spatial resolutions to match the land projections and for pre-defined administrative regions and river basins, will be available under an Open Government Licence9. For variables such as temperature and precipitation these data sets will span the late 19th Century to the present day and will be provided for daily, monthly, seasonal, annual and long term averages.\n\n# **MARINE PROJECTIONS**\n\n#### **Sea level rise. Storm surge. Past event case studies.**\n\nSea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion. Outputs will include an estimate of the year-to-year changes in sea level rise and a \"plausible but highly unlikely\" scenario known as H++. A new feature of UKCP18 will be assessing the credibility of making sea level rise projections to 2300. The projections will use the latest information from the CMIP5 models and application of the methods used in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report10.\n\nThe UKCP09 storm surge projections will be updated to provide new estimates of the change in high water levels over the 21st Century. These estimates will be based on a combination of projected mean sea level change and projections of change in the extremes due to changes in atmospheric storminess. These \"storminess\" projections will use the same surge model used in operational weather forecasting, using the wind and pressure from the CMIP5 ensemble to drive the surge. 
New understanding of the modification of large-scale sea level change signals as they pass from the open ocean onto the shelf sea around the UK will be incorporated into the UKCP18 marine projections. UKCP18 will also include storm surge historical case studies derived from applying plausible future sea level change to historical extreme events.\n\n8 The latest update can be found at **http://www.metoffice.gov.uk/climate/uk/about/state-of-climate**\n\n- 9 **http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/**\n10 **https://www.ipcc.ch/report/ar5/**", - "page_start": 1, - "page_end": 1, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "**UK CLIMATE PROJECTIONS: A PROJECT OVERVIEW** \n\n# What is UKCP18 and why do we need it?\n\nFollowing the historic Paris Agreement on Climate Change in December 2015, the Department of Environment, Food and Rural Affairs announced a major upgrade to the UK Climate Projections.\n\nThe UKCP18 project will build upon the current set of projections (UKCP09) to provide the most up-todate assessment of how the climate of the UK may change over the 21st century. This information will be essential to future Climate Change Risk Assessments1 and to equip the UK with information to help adapt to the challenges and opportunities of climate change in line with the National Adaptation Programme2.\n\nOrganisations and individual users will use UKCP18 to inform risk assessments and adaptation plans to ensure they are resilient to extreme weather and climate change. 
Some organisations will use UKCP18 in responding to the Adaptation Reporting Power3 for example.\n\n# What improvements does UKCP18 deliver?\n\nUKCP18 will benefit from a range of developments since the release of UKCP09, including:\n\n• Greater understanding of user needs as a result of the adaptation community's use of UKCP09 projections and the subsequent feedback – user workshops indicated that users supported the continued use of probabilistic projections and the importance of spatially coherent information4.\n\n- Advances in climate models in recent years, such as the Met Office Hadley Centre HadGEM35 model and the CMIP56 set of models. Improvements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena such as the El-Niño Southern Oscillation (ENSO).\n• Groundbreaking Met Office research on modelling of extreme events in high resolution regional climate models7.\n\n• The increased quantity and range of observations available since 2009.\n\n• Use of the new Met Office supercomputer, enabling a credible range of climate projections to be generated in greater spatial detail.\n\n1 The 2008 Climate Change Act allows UK government to mandate or invite certain organisations to produce reports to assess the impacts of climate change on their operations and present proposals for adaptation. 
**https://www.gov.uk/government/collections/climate-changeadaptationreporting-second-round-reports**\n\n2 Expected in 2018, the National Adaptation Programme will be supported by the Evidence Report of the Adaptation Sub-Committee of the Committee on Climate Change (ASC): **https://www.theccc.org.uk/uk-climate-change-risk-assessment-2017/introduction-to-the-ccra/** 3 Under the 2008 Climate Change Act, organisations are invited to produce Adaptation Reporting Power reports to assess the impacts of climate change on their operations and present proposals for adaptation: **https://www.gov.uk/government/collections/climate-change-adaptationreporting-second-round-reports**\n\n4 Spatial coherence means that climate projections can be compared between locations and aggregated over larger areas, enabling climate change to be assessed consistently over larger study areas.\n\n- 5 **http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models/hadgem3**\n- 6 Coupled model intercomparison project phase 5, see **http://cmip-pcmdi.llnl.gov/cmip5/**\n\n7 Kendon, E. J., Roberts, N. M., Senior, C. A. & Roberts, M. J. Realism of rainfall in a very high resolution regional climate model. J. Clim. 25,\n\n5791–5806 (2012) **http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00562.1**", - "page_start": 0, - "page_end": 0, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "# **PROJECTIONS OVER LAND**\n\nThe land projections comprise three components:\n\n### **60KM GLOBAL PROJECTIONS**\n\n#### **20 plausible climate futures. Latest Hadley Centre climate model. Simulations of extreme weather. Simultaneous impacts captured at multiple locations.**\n\nThis resolution will enable more realistic simulations of climate for the UK and capture the drivers of extreme weather, a significant advance on the 300 km-resolution simulations of UKCP09. 
A set of 20 plausible global projections of 21st century climate will be generated using an ensemble of the Met Office Hadley Centre HadGEM3 climate model. These projections will be selected to represent a wide range of possible future climate states to reflect key uncertainties, informing a risk-based approach to planning. They will be generated to provide spatially coherent daily data at a horizontal resolution of 60 km for two greenhouse gas concentration scenarios. These will be compared with an ensemble of CMIP5 models to provide additional information on uncertainties in the projections relative to other climate models.\n\n## **25KM PROBABILISTIC PROJECTIONS**\n\n**Captures natural variability and climate change. Updated models and observations. Provides seasonal scale projections.**\n\nBased on the established, peer-reviewed, ground-breaking method of UKCP09 for estimating uncertainty for use in risk-based analysis. Probabilistic projections will be updated using an up-to-date collection of Met Office climate simulations and the latest IPCC-assessed simulations to estimate the model uncertainties, incorporate the latest observations and estimate carbon cycle feedbacks. Projections will be on a 25 km grid for the UK at monthly intervals for several emission scenarios, including one used in UKCP0911. The new probabilistic projections will indicate the range of uncertainty in our knowledge of the climate system and natural variability through the 21st century, using probability density functions to provide information on how climate varies from month to month. This contrasts with UKCP09 for which only 30-year means were provided12.\n\n### **DOWNSCALED HIGH RESOLUTION PROJECTIONS**\n\n**Downscaled versions of the global model for the UK. For the most spatially detailed downscaling this includes hourly data. 
Simultaneous impacts captured at multiple UK locations.**\n\nThe high resolution projections will provide information on types of weather of relevance to adaptation at two different resolutions. The 12 km model provides a downscaled product that is similar to UKCP09's 25 km simulations but driven by an improved global model and at a higher resolution. This may be especially useful for those interested in water availability and some aspects of agriculture. A key reason for providing this data is that users will be able to compare it directly with EURO-CORDEX13.\n\nThe global projections will also be downscaled to 2.2 km using a process of nesting models at finer resolution that maintains the integrity of the representation of evolving atmospheric processes. Key benefits of simulations at this resolution will be the information provided on high impact events such as localised heavy rainfall in summer and potential improvements in the diurnal cycle.\n\nThe output will be available at a time resolution of 3-hourly, possibly higher for some output, for a high emission scenario. Spatial coherence will be maintained. Specific time slices (e.g. 2061-2080) will be made available with the exact nature of these still to be confirmed.\n\n11 SRESA1B: IPCC future scenario based on rapid economic growth and a balance of energy sources\n\n12 30-year means can be created using the UKCP18 PDF data\n\n13 http://www.euro-cordex.net/", - "page_start": 2, - "page_end": 2, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "# **Windows client viewers**\n\nThe Content Manager OnDemand Windows client contains native capabilities for viewing typical archive data types:\n\n- -Line Data and SCS\n- -AFP\n- -Images\n\nThe Windows client reflects the richest set of capabilities in terms of viewing these data types. 
Because it directly communicates with the Content Manager OnDemand server, we reference the Windows client for all of its features that relate to document display.\n\nThe Line Data viewer of the Windows client is the most sophisticated viewer that is available for Content Manager OnDemand from the selection of readily available viewers.\n\nThe viewing of these primary data types happens within the same application. The Windows client provides other features, such as thumbnails, and configurable and saveable views.\n\nThe Content Manager OnDemand Windows client also contains other capabilities for viewing archive data types, such as Portable Document Format (*PDF*) and *User-Defined*.\n\nStarting with Content Manager OnDemand version 9.5, for both DocType=PDF and user-defined PDF, the Windows Client will attempt to view a PDF document with Adobe Acrobat, if it is installed. If Adobe Acrobat is not installed, for DocType=PDF, Adobe Acrobat Reader will be used instead when the PDF document is viewed.\n\nBefore Content Manager OnDemand version 9.5, PDF documents can be viewed by the Windows client in two ways:\n\n- - If they are configured in the application as data type \"PDF\", the rich feature set of the AFP and Line Data viewer applies, but Adobe Acrobat Professional is required.\n- - If the data type is configured as \"User Defined\" and \".pdf\" as the extension, the documents are started externally. Therefore, you can view the documents with the no-charge Adobe Acrobat viewer or any other installed PDF viewer.\n\nAny data type can be specified as \"User Defined\", for example, Word documents (.docx). User-defined data is viewed by invoking its associated application.\n\n#### **Web-based viewing options**\n\nThe web-based viewing options for Content Manager OnDemand are provided primarily by ODWEK. 
ODWEK includes different viewers that are dedicated to Content Manager OnDemand documents that can use Content Manager OnDemand functions, such as the segment-wise retrieval of large objects or annotations. These viewers are used in web applications, such as Content Navigator or any other custom-developed web client:\n\n- -Line Data applet\n- -Browser plug-in for image viewing\n- -AFP browser plug-in\n- -AFP Transforms\n- -Generic Transforms\n\nDetailed information about ODWEK's viewers and transforms is in IBM Content Manager OnDemand Web Enablement Kit Java APIs: The Basics and Beyond, SG24-7646. Only a brief overview is provided in this chapter.", - "page_start": 210, - "page_end": 210, - "source_file": "sg246915.pdf" - }, - { - "text": "Depending on the data that you are working with, consider these options:\n\n- - For Line Data:\n\t- The line data applet supports annotations. It can work with large object (LOB) reports if the large object functionality is employed at load time.\n\t- The Ajax viewer and direct rendering capabilities of Content Navigator work only on shorter reports. Additionally, the viewing of annotations and large object documents is not supported.\n- - For AFP data:\n\t- The AFP plug-in is the best choice, because it is almost identical to the client. However, it does not support annotations.\n\nThe only viewers that use this functionality are the line data applet, the AFP plug-in viewer, and the Content Manager OnDemand Windows client.\n\n- AFP to PDF is a choice that does not require a plug-in rollout at the users' computers if the Acrobat plug-in is installed on their workstations. Font mappings must be configured at a central location. The additional workload on a rendering system and additional license costs must be considered. 
Large reports might not be able to be rendered or viewed.\n**Note:** The AFP viewer plug-in, which is available with ODWEK and Content Manager OnDemand, is a version of the AFP viewer plug-in from the InfoPrint Solutions Company. Although the standard InfoPrint viewer can be used for viewing AFP, the ODWEK version uses direct communication with the Content Manager OnDemand server, enabling segmented document transfer for LOB documents.\n\n# **Annotations**\n\nOnly the native ODWEK viewers and the Windows client support annotations. These viewers and Windows clients support annotations in the following ways:\n\n- - Line data applet: Supports text. Starting with version 9, the viewer can work with graphical annotations, also.\n- -Windows Client: Supports maximum capabilities for all data types.\n- - Other viewers, for example, the AFP plug-in viewer: Do not support and are not aware of annotations.\n\nWeb clients, such as Content Navigator or the ODWEK Java API, can work with annotations and provide access to them through the hit list. Graphical annotations cannot be accessed that way because they are not exposed through the Java API.\n\n# **Large object support**\n\nLarge object (LOB) support is the methodology for working with large reports. For more information about how LOB affects your reports, see \"Large object\" on page 52.\n\nFrom a viewer's perspective, if a large document is transferred, it generates high network traffic, resource consumption, and long wait times for users. If the viewer supports LOB documents, the viewer communicates with the server to transfer only the chunk of data that the user is looking at (for example, a 200 page chunk out of a 10,000 page report). 
If the user scrolls to a different chunk of pages, the viewer downloads only that relevant portion of the document that the user scrolled to.", - "page_start": 212, - "page_end": 212, - "source_file": "sg246915.pdf" - }, - { - "text": "The OS/390 indexer is enhanced to allow the storage of documents (or large object segments) that exceed 2 GB. A report might contain multiple documents (or large object segments), each of which exceeds 2 GB. This enhancement does not affect the limitations that are imposed by other indexers. The limitations on the document size are based on the available hardware and any other limitations that are placed on the operating environment.\n\nFor more information about the use of the OS/390 indexer, see IBM Content Manager OnDemand - Indexing Reference, SC19-3354.\n\n# **7.6 OS/400 indexer on Content Manager OnDemand on IBM i**\n\nThe OS/400 indexer is a powerful tool to index the print data streams of IBM i application programs. Supported data streams include SCS, AFP, and the less common SCS-Extended and Line Data.\n\nThe OS/400 indexer provides three major functions:\n\n- - Print data stream processing: The OS/400 indexer processes the output print data streams of application programs, for example, SCS, AFP, and Line Data reports. The output can be viewed, printed, and archived by Content Manager OnDemand.\n- - Sophisticated indexing functions: The OS/400 indexer can logically divide reports into individual items, such as statements, policies, and bills. You can define up to 32 index fields for each item in a report if you are running a Content Manager OnDemand server version that is earlier than version 9.0.0.1. 
Beginning at version 9.0.0.1 of the server, 128 index fields can be defined.\n- - AFP resource collection: For AFP spooled files, the OS/400 indexer determines the resources that are necessary to view, print, and archive the print data stream and collect the resources (except fonts, which are not stored but are mapped by the client during display). Resources allow users to view the report as it displayed in the original printed version, regardless of when or where the report was created.\n\nThe OS/400 indexer supports many advanced features:\n\n- -Multi-key indexes\n- -Spool File Archive compatibility\n- -Start Indexing on Page\n- -Translate Print Control\n- -AFP support with or without TLEs\n- -Large object support\n\nThe OS/400 indexer processes three input sources:\n\n- - Indexing parameters that specify how the data needs to be indexed. The indexing parameters are created when you define a Content Manager OnDemand application.\n- - AFP resources that are required to view and print the data if the application created an AFP print data stream.\n- - The print data stream, which can be in a spooled file (all data types) or in a physical file (Line Data or SCS data that was converted to Line Data with First Character Forms Control (FCFC) characters in column one of the data).", - "page_start": 203, - "page_end": 203, - "source_file": "sg246915.pdf" - }, - { - "text": "# **9.1 Overview of data conversion**\n\nTo work with data conversion, understand the data conversions that are required, and when and how to convert the data. 
Perform detailed planning before you build your solution so that you achieve a design that remains efficient for many years.\n\nIn this section, we describe why you might need data conversion, when to convert the data stream, and how to convert the data.\n\n# **9.1.1 Why convert data streams**\n\nYou might want to convert data streams for many reasons:\n\n- - Certain data streams, such as Hewlett-Packard (HP) Printer Command Language (PCL) or Xerox metacode, are printer-specific and cannot be displayed. Before you archive or display the documents, these data streams must be transformed into a compatible format.\n- - The archived data stream might need to comply with a company's internal rules or regulations. Therefore, the produced data streams must be transformed into the defined and required final format before they are archived.\n- - The documents might need to be accessible by a user that is outside of the company. The document must be displayed through standard tools that are available on any or at least most of the clients, such as an Internet browser or Adobe Acrobat Reader.\n- - The documents might need to be manipulated so that only part of the document is displayed in a personalized way.\n\n# **9.1.2 When to convert data streams**\n\nThe decision of *when* to convert data streams relies mainly on the use of the system. Typically, converting data at load time requires more time to process the print stream file, and converting data at retrieval time causes the user retrieval to be a little slower. The decision might depend on how many documents are retrieved, compared to how many documents are loaded daily. It might also depend on legal requirements about the format of stored data.\n\n# **AFP to PDF**\n\nIf a requirement exists to present AFP documents in the Portable Document Format (PDF) format over the web, from a storage perspective, it is more efficient to store the documents in their native format and then convert them to PDF at retrieval time. 
AFP documents are stored more efficiently than PDF documents.\n\nThe PDF print stream, when it is divided into separate customer statements, is larger than AFP because each statement contains its own set of structures that are required by the PDF architecture to define a document.\n\nElapsed time and processor time are also essential factors in the decision-making process. The amount of time (elapsed and CPU) that is needed to convert the document depends on how large the document is and how many resources or fonts are associated with the document.", - "page_start": 231, - "page_end": 231, - "source_file": "sg246915.pdf" - }, - { - "text": "- The *XML Indexer* allows the rapid increase in XML archiving mandates that are based on ISO 20022 standards with XML (including SEPA in Europe). The XML Indexer is optimized for high-volume batch archiving of XML, batch PDF, AFP, Line Data, and check images.\n- The *Full Text Indexer* provides the capability to index the full text of a document (or report). You can search through an indexed document.\n- - *Data loading programs* can be set up to automatically store report data into application groups and update the database. The data loading programs can run on any Content Manager OnDemand server.\n- - *Report Distribution Facility* provides an easy way to automatically group reports and portions of reports and distribute the reports to multiple users. Distributions can be printed, created as an output file, or emailed as an attachment.\n- - Both the archived reports and their resources are stored in the Content Manager OnDemand Archive. The Content Manager OnDemand system manages the stored data throughout its lifetime. It provides authorized users rapid access to the data and allows the data to be converted into different formats for display or print.\n- - A *server print* facility allows users to reprint a large volume of documents at high speed. 
Print servers, such as Infoprint (on AIX), can be started to manage the server print devices. These print servers are not part of Content Manager OnDemand and must be purchased separately.\n- - Content Manager OnDemand *management programs* maintain the Content Manager OnDemand database and documents in cache storage.\n- - A *system logging* facility provides administrators with tools to monitor server activity and respond to specific events as they occur. The interface to the system logging facility is through the system log folder and the system log user exit.", - "page_start": 35, - "page_end": 35, - "source_file": "sg246915.pdf" - }, - { - "text": "The size of the input file and the output file can create problems during the load process:\n\n- -The temporary space that is used during indexing can be too small and the load fails.\n- - The maximum input file size that the PDF Indexer can process is 4 GB, but the recommended maximum size for a single document (after indexing) is 50 MB. If this size is exceeded, the system might run out of disk space or memory.\n\nCreate PDF data with the base 14 fonts, which do not need to be included in the PDF file. Because they are not included in the PDF file, they are not extracted during resource collection, which improves performance. For more information about the PDF data stream and fonts, see 7.3.1, \"PDF fonts and output file size\" on page 166.\n\n# **13.4.2 Line data**\n\nLine data (ASCII or EBCDIC text-based reports) is the most common type of data that is stored in Content Manager OnDemand. The type of line data that we describe here is a special form of transaction-style report, where it is necessary to search on a value that appears on every line of the report. 
This transaction data has a transaction number that appears on every line and must be sorted either by column or row and either ascending or descending.\n\nWhen you index transaction data, if each transaction number from each line of the report is treated as a database index, such as date or customer name, the database becomes large quickly. Content Manager OnDemand has a special type of field for transaction data, which is illustrated in Figure 13-4 by the boxed data on the left of the window.\n\n| CENTRAL ADJUSTMENTS DEPARTMENT | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| INCOMING | OUTGOING | KB | ENDPOINT | DT | ITEM | CASH LETTER | ROUTING |\n| SEQ NO. | ID | SEQ | ID | | AMOUNT | AMOUNT | TRANSIT |\n| P000000072 | 21593.34 | | 1 0000-0032 TR | | 50.00 | 21593.34 | 0000-0371 |\n| P000000073 | 2151.39 | 1 0000-0194 | | TR | 50.00 | 2151.39 | 0000-0040 |\n| P000000074 | 2151.39 | | 2 0000-0194 TR | | 20.00 | 2151.39 | 0000-0040 |\n| P000000075 | 2151.39 | 3 0000-0194 | | TR | 10.00 | 2151.39 | 0000-0040 |\n| P000000076 | 2151.39 | 4 | 0000-0194 TR | | 40.00 | 2151.39 | 0000-0040 |\n| P000000077 | 2151.39 | 5 0000-0194 | | TR | 296.00 | 2151.39 | 0000-0040 |\n| P000000078 | 2151.39 | 6 0000-0194 | | TR | 77.33 | 2151.39 | 0000-0040 |\n| P000000080 | 2151.39 | 7 0000-0194 | | TR | 127.00 | 2151.39 | 0000-0040 |\n| P000000081 | 2151.39 | 8 | 0000-0194 TR | | 25.00 | 2151.39 | 0000-0040 |\n| P000000082 | 2151.39 | | 9 0000-0194 TR | | 135.00 | 2151.39 | 0000-0040 |\n| P000000084 | 2151.39 | 10 0000-0194 | | TR | 300.00 | 2151.39 | 0000-0040 |\n| P000000085 | 2151.39 | 11 0000-0194 | | TR | 25.00 | 2151.39 | 0000-0040 |\n| P000000086 | 2151.39 | 12 0000-0194 | | TR | 11.00 | 2151.39 | 0000-0040 |\n| P000000089 | 2151.39 | 13 0000-0194 | | TR | 206.00 | 2151.39 | 0000-0040 |\n| P000000091 | 8175.12 | | 1 0000-0372 TR | | 264.75 | 8175.12 | 0000-7083 |\n| P000000093 | 2151.39 | 14 0000-0194 | | TR | 233.00 | 2151.39 | 0000-0040 |\n| 
P000000094 | 2151.39 | 15 0000-0194 TR | | | 96.90 | 2151.39 | 0000-0040 |\n| P000000095 | 1802.24 | | 1 0000-0502 TR | | 638.00 | 1802.24 | 0000-1544 |\n| P000000097 | 21593.34 | | 2 0000-0032 TR | | 341.54 | 21593.34 | 0000-0589 |\n\nFigure 13-4 Transaction data in graphical indexer\n\nThe transaction data field selects the *first* and *last* values from a group of pages and only these group level values are inserted into the database. Content Manager OnDemand queries the database by comparing the search value that is entered by the user to two database fields, the beginning value and the ending value. If the value that is entered by the user falls within the range of both database fields, Content Manager OnDemand adds the item to the document list.", - "page_start": 332, - "page_end": 332, - "source_file": "sg246915.pdf" - }, - { - "text": "| enabled | True | |\n| --- | --- | --- |\n| id | 385ab82655074660b07bb0757e116e39 | |\n| is_domain | False | |\n| name | ocp-project | |\n| parent_id | default | |\n| tags | [] | |\n| | +-------------+----------------------------------+ | |\n\n- 3. Create a user and assign admin role for the ocp-project:\n\n```\ngroupadd -g 3030 ocpadmin\nuseradd -g ocpadmin -u 3030 -d /home/ocpadmin -c \"OpenShift Container Platform \nAdmin\" ocpadmin\necho \"ocpadmin:\" | chpasswd\nusermod -a -G powervc-filter ocpadmin\nopenstack role add --project ocp-project --user ocpadmin admin\n```\n#### **6.2.3 Creating the virtual machine to host deployment tools**\n\nComplete the following steps to create a virtual machine (LPAR) by using PowerVC CLI:\n\n- 1. 
Set PowerVC access variables using the new user and project:\n\n| source /opt/ibm/powervc/powervcrc |\n| --- |\n| export OS_PROJECT_NAME=ocp-project |\n| export OS_USERNAME=ocpadmin |\n| export OS_PASSWORD= |\n| openstack project list |\n| +----------------------------------+-------------+ |\n| ID Name |\n| +----------------------------------+-------------+ |\n| 385ab82655074660b07bb0757e116e39 ocp-project |\n| +----------------------------------+-------------+ |\n\n#### 2. List the flavors:\n\nopenstack flavor list\n\n| +--------------------------------------+---------+--------+------+-----------+-------+-----------+ | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| ID RAM Disk Ephemeral VCPUs Is Public | Name | | | | | | |\n| +--------------------------------------+---------+--------+------+-----------+-------+-----------+ | | | | | | | |\n| 16f4c456-debe-4b7f-a59e-d024667fb74b medium 16384 | | | | 0 | 0 | 4 True | |\n| 3f2f851b-1ae9-4604-bde9-f81984a924fa xxlarge 131072 | | | | 0 | 0 | 32 True | |\n| 43032930-4dfb-417e-9501-80b5408076fc large | | | 32768 | 0 | 0 | 8 True | |\n| 962979ba-2ee7-464f-a0ea-856248765758 tiny | | | 4096 | 0 | 0 | 1 True | |\n| d7409716-49e2-426d-acb1-9de141b8d8ea small 8192 | | | | 0 | 0 | 2 True | |\n| e93908f5-b6f9-4bb9-bbff-9136c7a80211 xlarge 65536 0 | | | | 0 | | 16 True | |\n| +--------------------------------------+---------+--------+------+-----------+-------+-----------+ | | | | | | | |\n\n#### 3. 
List the images:\n\nopenstack image list\n\n| +--------------------------------------+---------------------+--------+ |\n| --- |\n| ID Name Status |\n| +--------------------------------------+---------------------+--------+ |\n| 09ba0030-b6c3-4631-b9f9-19eb3333289c xiv_p8_image_rhel76 active |\n| 77ad197b-0a4e-4792-a3c4-ea634ffa0fd3 xiv_p9_image_rhel76 active |\n| +--------------------------------------+---------------------+--------+ |", - "page_start": 119, - "page_end": 119, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv4.pdf", - "query": "How many articles compose the Syntec French collective bargaining agreement ?", - "target_page": 2, - "target_passage": "The Syntec French collective bargaining agree- ment comprises around 90 articles", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "# **6 OSH legislation and OSH infrastructure in the EU**\n\n### **6.1 Foundation, legislation, compliance and supervision**\n\nThe **ethical and economic importance of safe and healthy working conditions** led to an integration of this target in international conventions and agreements; it is also embedded in the treaties of the EU.\n\n**UN** has included **'Safe and secure work environment'** as an indicator for **Goal 8** of their 17 global **'Sustainable Development Goals**' for 2030. Goal 8 aims to *'Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all'*.334 It requests in its target 8.8 to *'Protect labour rights and promote safe and secure working environments for all workers, including migrant workers, in particular women migrants, and those in precarious employment.'*\n\nThe **Preamble to the Constitution**335 **of the ILO** includes as an objective *'*… *the protection of the worker against sickness, disease and injury arising out of his employment ...'*. 
In 2022, the objective of a safe and healthy working environment became part of the 'Declaration on Fundamental Principles and Rights at Work', adding OSH to the existing four basic principles, that is, 1) freedom of association and right to collective bargaining, 2) the elimination of all forms of forced or compulsory labour, 3) the effective abolition of child labour, and 4) the elimination of discrimination. Between the year of the foundation in 1919 and today, the ILO agreed on more than 40 conventions and recommendations addressing OSH, be it either general provisions or provisions for specific groups and sectors or specific risks.336\n\nThe **EU and its predecessors** have enshrined health and safety of workers in their **founding treaties**. Already in 1951, it was stated in Article 3 of the European Coal and Steel Community (ECSC) Treaty that *'The institutions of the Community shall, within the limits of their respective powers, in the common interest … promote improved working conditions and an improved standard of living for the workers in each of the industries for which it is responsible …'*. 337 During the development of the European institutions and the EU from those years until today, references to working conditions and safety and health were always part of the treaties, and also in the latest Treaty of Lisbon from 2009.338\n\nIn **Article 151 of the Lisbon Treaty,** it is stated that *'The Union and the Member States, shall have as their objectives the promotion of employment, improved living and working conditions …'*. 
The areas of such promotion are set out in **Article 153**, where two bullet points refer to OSH: (a) *improvement in particular of the working environment to protect workers' health and safety; (b) working conditions.* In 2017, the European Commission launched an initiative to agree on the **'European Pilar of Social Rights'** (EPSR), comprising 20 key principles guiding the EU in the field of social policy.339 These pillars were agreed by the Member States; **Principle 10 refers to a** '**Healthy, safe and well-adapted work environment** and data protection.'\n\nThese European and international agreements and treaties regard **safety and health** as essential for human development, a **basic human right**. The main reasoning is to eliminate or reduce as much as possible suffering, sickness, disability and death of workers. Often the reasoning refers to intertwined objectives, that is, to economic growth (UN), or to reduce the economic burden of incomplete health and safety at work, be it the burden for enterprises or the society as a whole, that is, by *'Promotion of employment'* (Lisbon Treaty) or by *'Prolongation of the participation in the labour market'* (EPSR) or *'Data protection'* (EPSR).\n\nThe EU treaties form the legal background for the development of specific EU legislation, related to working conditions in general and OSH in particular. In 1989, the EU agreed on the **Framework Directive**, a major step regarding OSH.340 This directive introduced a distinguished preventive approach, based on a comprehensive risk assessment, as a dominant legal standard across all Member States. 
Its legal obligations prescribe several basic principles:\n\n- the **responsibility of employers** for OSH, that is, *'the employer shall take the measures necessary for the safety and health protection of workers, including prevention of occupational risks and provision of information and training',*341 and the **obligation of workers** '*to take care as far as possible of his own safety and health and that of other persons affected …'*;342\n- the obligation to **evaluate all risks** (risk assessment);\n- the preference of the **risk elimination at source** (combating the risk at source), a hierarchy of prevention measures, replacing the dangerous by the non- or the less dangerous;", - "page_start": 117, - "page_end": 117, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 3. Carefully read the license agreement. Select **I agree with the terms in the license agreement** when you are ready, as shown in Figure 4-9. Click **Next**.\n*Figure 4-9 System setup: License agreement*", - "page_start": 116, - "page_end": 116, - "source_file": "sg247938.pdf" - }, - { - "text": "#### Contents\n\n| Consolidated Five-Year Summary | 70 |\n| --- | --- |\n| Business and Other Risks | 71 |\n| Consolidated Balance Sheets | 72 |\n| Consolidated Statements of Income | 74 |\n| Consolidated Statements of Shareholders' Equity | 75 |\n| Consolidated Statements of Cash Flows | 76 |\n| Notes to Consolidated Financial Statements | 77 |\n| Report of Independent Auditors | 104 |\n| Non-consolidated Five-Year Summary | 105 |", - "page_start": 70, - "page_end": 70, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "| Dataset | Syntec | HAL | SummEvalFr |\n| --- | --- | --- | --- |\n| Samples | 100 queries | 26233 samples | 100 texts |\n| | 90 documents | 10 classes | 1100 human summaries |\n| | | | 1600 machine summaries |\n| Creation process | Scraping of Syntec col | Scraping of HAL arti | Translation from English |\n| | lective bargaining 
agree | cles with id, title and do | to French with Deepl of |\n| | ment with articles as doc | main. Further cleaning | the SummEval dataset. |\n| | uments. Writing queries | with deduplication, lan | |\n| | corresponding to articles. | guage filtering and class | |\n| | | subsampling. | |\n| Annotation process | 4 annotators divided into | Annotations provided by | Detailed annotation pro |\n| | 2 groups. Each group was | authors when submitting | cess provided in Fabbri |\n| | given half of the articles | their paper. They choose | et al. (2021). |\n| | and asked to choose an ar | the domain between exist | |\n| | ticle and ask a question | ing academic fields. | |\n| | about it. Each annotator | | |\n| | wrote 25 questions. | | |\n| Quality checks | Human verification of an | Baseline models for clas | Correlation between |\n| | notations. | sification and topic model | BLEU and ROUGE |\n| | | ing. | scores of the French |\n| | | | and the original English |\n| | | | datasets. LLM as-a-judge |\n| | | | translation rating and |\n| | | | human verification. |\n\nTable 1: New datasets details with the number of samples, the creation process, the annotation process and the quality checks. All datasets are test splits.\n\n- Samples belonging to *domain* classes with less than 500 samples were removed, which leads us to keep only 10 classes.\n- Subsampling was performed on 2 classes containing more than 10k samples each to lower the number of samples and mitigate the unbalance of the dataset.\n\nMore details about this process are provided in the appendix A.2 along with some extracts in Figure 6. We make the dataset publicly available in both their raw and clean versions. We use this dataset in a clustering setup to cluster publications by their title and use the domain as ground truth. 
To ensure the quality of this dataset, we run 3 baseline models for classification: *TF-IDF + SVM*, a fine-tuned *Camembert* (Martin et al., 2019) and *GPT-4* leveraging In-Context Learning (ICL). Furthermore, we run one baseline model for topic modeling: Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and report scores in the appendix A.2.\n\n#### 3.1.3 SummEvalFr (Summarization)\n\nThe original SummEval dataset (Fabbri et al., 2021) consists of 100 news articles from the CNN/Dai-\n\nlyMail dataset. Each article has 11 human-written summaries and 16 machine-generated summaries annotated by 8 people with a score for coherence, consistency, fluency, and relevance. We translated it from English to French using DeepL API6 . Since MTEB evaluation is based on the embedding similarity between machine-generated and humangenerated summaries, we propose to compute the ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002) metrics between machine and human summaries for both French and English version. In Table 2, we report the average of the scores as well as their correlations between the two languages. The correlation is high (above 0.7), showing that the word and n-gram overlap between human and machine summaries is highly preserved in the French version. One may argue that computing the metric on fully translated texts (human and machine summaries are both translated from English) may introduce biases and not assess the quality of the translations. For this purpose, we ensure the French human summaries are correctly translated from English. We use an LLM as-a-judge (Zheng et al.,\n\n6 https://www.deepl.com", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv4.pdf" - }, - { - "text": "# Q4: Are there any correlations between datasets with respect to model ranking?\n\nThe datasets correlation w.r.t model ranking are presented in appendix Figure 12. Except for two datasets (*MasakhaNEWSClusteringP2P*, *SummEvalFr*), the correlations, on average, are high. 
There is still enough diversity to make each dataset interesting for the French MTEB benchmark. Two groups (*SyntecReranking*/ *SyntecRetrieval*, *MassiveScenarioClassification*/ *MTOPDomainClassification*/ *MassiveIntentClassification*) exhibit notably high correlations (∼0.97). It is interesting to point out some sub-diagonal correlation blocks. The datasets being arranged by task indicate that models behave slightly more similarly within the same task than between two different tasks. This underscores the importance of having multiple tasks in the benchmark to select general-purpose models. For readers interested in specific tasks, it is more relevant to examine task-specific rankings rather than the overall one. The complementary results of model correlations w.r.t to strengths and weaknesses on datasets are displayed in appendix Figure 11. Strong correlations in behavior emerge among the variants of the same models (e.g. DistilBERT, sentence-croissant, sentence-t5, e5, etc.). Correlations are also generally observed among numerous models trained using the sentence transformers framework (Reimers and Gurevych, 2019), as well as proprietary models, e.g. from Cohere and OpenAI. Conversely, these models finetuned for sentence similarity, show minimal correlation with pre-trained models for which tokenembedding pooling techniques are employed.\n\n# 5 Conclusion and perspectives\n\nIn this work, we introduce a large-scale embedding benchmark for French to enable the research community and industry to select the most relevant embedding methods based on their specific needs. We undertake significant efforts in collecting 15 datasets and create 3 new quality-checked ones to enhance this collection. The whole French benchmark runs on 26 tasks. We select a diverse range of 51 models, including prominent French and multilingual models deemed most efficient to conduct a broad comparison. 
Our implementation is open to the community and features a public leaderboard, allowing the results to evolve with new models or datasets. After an in-depth analysis of the results, OpenAI models perform significantly better than\n\nthe other models. However, other models should be considered for their performance on specific tasks, being open source or having a small embedding dimension.\n\nThis work opens several doors for future improvements. By examining dataset diversity in terms of topics and model ranking, we observe that the benchmark would benefit from additional datasets that introduce higher diversity. Beyond classification, many tasks focus on semantic similarity, explaining the strong performance of models trained for similarity. Exploring novel tasks in the generative spectrum or evaluating token embeddings (contextualized or not) on tasks like Named Entity Recognition could be an interesting path for future exploration. There are also opportunities for improvements on the model side. With numerous existing models that could be added to the leaderboard and many new proposals awaiting. For instance, we can already see the promising capabilities of early variants of recent models (Faysse et al., 2024) and expect that future proposals will come to compete strongly with closed-source models. Ultimately, we hope to see the emergence of other language-specific MTEB variants (e.g. for high-resource languages like Spanish and German), enabling a more comprehensive evaluation of multilingual model performance.\n\n# 6 Limitations\n\nNative French resources unavailability The availability of resources natively in French is an obvious limitation of our work. Regarding models, there are far fewer options than with more widespread languages such as English. 
Indeed, most of the existing French embedding models we found are trained using either older architectures or methods, unlike most recent multilingual models such as *NV-Embed-v1* (Lee et al., 2024) or *e5 mistral-7b-instruct* (Wang et al., 2023). Comparing models by family would be beneficial, particularly for evaluating French models against multilingual models on the same architecture using the same training technique. Resource limitations also apply to datasets. For example, the summarization task dataset is translated, which can be less relevant than a natively French dataset. We have also built datasets for reranking tasks using existing ones from retrieval task because we could not find any in French. This construction process introduces a bias as the model performance on both tasks may be", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv4.pdf" - }, - { - "text": "#### **I.10.3. Provision of list of pre-existing rights and documentary evidence**\n\nThe contractor must provide the contracting authority with a list of *pre-existing rights* as set out in Article II.13.4 together with the invoice for payment of the balance at the latest.\n\n#### **I.11. Termination by either party2**\n\nEither party may terminate the FWC and/or the FWC and specific contracts by sending *formal notification* to the other party with three months written notice.\n\nIf the FWC or a specific contract is terminated:\n\n- a) neither party is entitled to compensation;\nb) the contractor is entitled to payment only for the services provided before termination takes effect.\n\nThe second, third and fourth paragraphs of Article II.18.4 apply.\n\n#### **I.12. Applicable law and settlement of disputes**\n\n- **I.12.1** The FWC is governed by Union law, complemented, where necessary, by the law of Finland.\n- **I.12.2** The courts of Finland have exclusive jurisdiction over any dispute regarding the interpretation, application or validity of the FWC.\n\n#### **I.13. 
Interinstitutional FWC**\n\nNot applicable\n\n#### **I.14. Service provided on the premises of the contracting authority**\n\nNot applicable.\n\n#### **I.15. Other special conditions**\n\nElectronic documents exchange\n\nIt is intended that the documents exchange (e.g. invoices, deliverables) between the Agency and the Contractor will have to be carried out via electronic means.\n\nAt the request of the Agency, the use of such electronic applications will become mandatory, upon mutual agreement, during the performance of the contract, at no additional cost for the Agency.\n\n2 This article may be deleted on the basis of a risk assessment taking into account the specific market and the need for business continuity", - "page_start": 10, - "page_end": 10, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "| Document | |\n| --- | --- |\n| id article-14 | |\n| url https://www.syntec.fr/convention | |\n| collective/resiliation-du-contrat | |\n| de-travail/#article-14 | |\n| title Article 14 : Préavis pendant la péri | |\n| ode d'essai | |\n| section Résiliation du contrat de travail | |\n| content Modification | Avenant n° 7 du |\n| 5/07/1991 Au cours de cette péri | |\n| ode, les deux parties peuvent se sé | |\n| parer avec un préavis d'une journée | |\n| de travail pendant le premier mois. | |\n| Après le premier mois, le temps | |\n| de préavis réciproque sera d'une | |\n| semaine par mois complet passé | |\n| dans l'entreprise. | Après le pre |\n| mier mois, le temps de préavis ré | |\n| ciproque sera d'une semaine par | |\n| mois passé dans l'entreprise. | Le |\n| préavis donne droit au salarié de | |\n| s'absenter pour la recherche d'un | |\n| emploi dans les conditions fixées à | |\n| l'article 16. Le salarié sera payé au | |\n| prorata du temps passé pendant la | |\n| période d'essai. 
| |\n| Query | |\n\n| article | article-14 | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| question | Quel | est | le | préavis | en | période |\n| | d'essai ? | | | | | |\n\nFigure 5: Extracts of Syntec dataset.\n\n| hal_id | Domain | Title |\n| --- | --- | --- |\n| hal-02899209 | shs | La transformation |\n| | | digitale du manage |\n| | | ment des ressources |\n| | | humaines et de ses |\n| | | enjeux pour les |\n| | | entreprises |\n| tel-03993881 | math | Sur l'approximation |\n| | | numérique de |\n| | | quelques problèmes |\n| | | en mécanique des |\n| | | fluides |\n\nFigure 6: Extracts of HAL dataset.\n\nFigure 7: Distribution of the word count per title in HAL dataset, *mteb_eval* subset.\n\n| \"\"\" |\n| --- |\n| You will be given a couple of texts in |\n| English and their translation in French. |\n| Your task is to provide a 'rating' score on |\n| how well the system translated the |\n| English text into French. |\n| Give your answer as a float on a scale of 0 |\n| to 10, where 0 means that the |\n| system_translation is bad and does not |\n| represent what is being said in the |\n| original English text, and 10 means that |\n| the translation is good and represents |\n| the original English text. |\n| No need to mind the quality of the text as |\n| original English text may be of bad |\n| quality. |\n| Provide your feedback as follows: |\n| Feedback::: |\n| Total rating: (your rating, as a float |\n| between 0 and 10) |\n| Now here are the English and French texts. 
|\n| Original text in English: {english_text} |\n| Translation in French: {french_translation} |\n| Feedback::: |\n| Total rating: |\n| \"\"\" |\n\nFigure 8: Prompt used for LLM as-judge evaluation of SummEval dataset translation.", - "page_start": 14, - "page_end": 14, - "source_file": "arxiv4.pdf" - }, - { - "text": "subsequently evaluated 2 ED-to-inpatient handoff notes for each patient: (1) the physician-written note and (2) the LLM-generated note.\n\nOn a Likert scale of 1 to 5, where 1 is unacceptable and 5 is excellent, the 3 physicians rated the completeness, curation, readability, and correctness of the summary as shown in eTable 1 in Supplement 1. Physicians rated the usefulness of the summary, defined as the capability of the summary being incorporated into a workflow where a physician would make edits before final completion, mitigating potential future self-referential learning loops and the downstream adverse consequences.51 Likewise, the raters assessed potential patient safety implications of unmitigated model errors using a scale from 1 to 5, where 1 denotes life-threatening risks and 5 denotes no identified patient safety risk for completeness, curation, readability, and the 4 subcategories within correctness (hallucination, faulty logic, knowledge gap, and bias), as well as the overall patient safety risk.45 Evaluators arrived at prestudy consensus that a usefulness Likert score of at least a 3 out of 5 indicated that the LLM-generated summary likely demonstrated baseline acceptability for such a workflow. To extrapolate a theoretical worst case scenario, the physicians rated the safety of the LLM-generated summary as defined as the capability of the summary to fully replace a physicianwritten note (unmitigated).\n\nTo improve consistency and agreement, the 3 reviewers met to familiarize themselves with the framework and evaluated 10 separate cases from the test dataset that were not included in the clinical evaluation results. 
Additionally, after independently scoring the summaries, they met to ensure consensus interpretation of the multidimensional scoring framework. Interrater reliability was calculated using intraclass correlation coefficient (ICC), using a 2-way random effects model for consistency with the Pingouin statistical package version 0.5.4 in Python (Python Software Foundation). The ICC measures the similarity of the 3 raters to confirm the consistency and validity of the evaluation protocol; the scores are from 0 to 1, where 1 indicates unanimous agreement and 0 represents no agreement.52 Data were analyzed from October 2023 to March 2024.\n\n## **Results**\n\n#### **Automated Tasks**\n\nOf 1600 patients, the mean (SD) age was 59.8 (18.9) years and 832 (52%) were female. In **Table 2**, ROUGE and BERTScore compare the summaries with the testing set from our annotations, and SCALE score compares the summaries with the source notes. From automatic evaluation results, we observed that LLM-generated summaries had better scores than the physician summaries, such that ROUGE-2 was 0.322 vs 0.088, BERT-precision was 0.859 vs 0.796, and SCALE was 0.691 vs 0.456, suggesting the LLM-generated summaries were more similar and more detailed than the physician summaries.\n\n### **Clinical Evaluation Tasks**\n\nThe clinical evaluation results for LLM-generated summaries and physician-written summaries are shown in **Table 3** and **Table 4**. The mean clinical quality scores of the automated summaries are in a comparable range (4-5) to those of the physician summaries. However, the automated summaries were observed to be of lower quality compared with the physician-written summaries with regards to mean (SD) usefulness (4.04 [0.85] vs 4.36 [0.71]), completeness (4.00 [0.88] vs 4.16 [0.84]),\n\n| | Table 2. 
Automated Evaluation Scores, Large Language Model (LLM)–Generated and Physician-Written | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Summary type | R-1a | R-2a | R-La | BERT-p | BERT-r | SCALE |\n| LLM-generated | 0.494 | 0.322 | 0.391 | 0.859 | 0.876 | 0.691 |\n| Physician-written | 0.251 | 0.088 | 0.154 | 0.796 | 0.827 | 0.456 |\n\nAbbreviations: BERT, bidirectional encoder representations from transformers; p, precision-based scores; r, recall-based scores; R, recall-oriented understudy for gisting evaluation; SCALE, source chunking approach for large-scale inconsistency evaluation.\n\na R-1, R-2, R-L are the 3 types of recall-oriented understudy for gisting evaluation scores. Higher is better for all metrics.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 6/12", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed8.pdf" - }, - { - "text": "- (a) travel, subsistence, accommodation and shipment expenses; and\n- (b) any other expenses provided for in the tender specifications.\n\nThe daily subsistence allowance referred to in Article II.22.4 (d) and the accommodation flat-rate ceiling referred to in Article II.22.4(e) are listed in Annex IV\n\n#### **I.6. Payment arrangements**\n\n#### **I.6.1. Pre-financing**\n\nPre-financing is not applicable to this FWC.\n\n#### **I.6.2. 
Interim payments**\n\nInterim payment is not applicable to this FWC, unless it is provided for under a specific contract.\n\nIf provided for, the contractor (or leader in the case of a joint tender) may claim the interim payment equal to the amount specified in the relevant specific contract in accordance with Article II.21.6.\n\nThe contractor (or leader in the case of a joint tender) must send an invoice in paper format or via *e-PRIOR* for the interim payment as provided for in the tender specifications, accompanied by the following:\n\n- a list of all *pre-existing rights* to the *results* or parts of the *results* or a declaration stating that there are no such *pre-existing rights*, as provided for in Article II.13.4;\n- the relevant progress report or deliverable accepted by ECHA\n- statements of reimbursable expenses in accordance with Article II.22.\n\nThe contracting authority must approve the submitted documents or deliverables and pay within 30 days from receipt of the invoice.\n\n# **I.6.3. Payment of the balance**\n\n1. The contractor (or leader in the case of a joint tender) may claim the payment of the balance in accordance with Article II.21.6.\n\nThe contractor (or leader in the case of a joint tender) must send an invoice in paper format or via *e-PRIOR* for payment of the balance due under a specific contract, as provided for in the tender specifications and accompanied by the following:\n\n- a list of all *pre-existing rights* to the *results* or parts of the *results* or a declaration stating that there are no such *pre-existing rights*, as provided for in Article II.13.4;\n- document of acceptance by ECHA of the deliverables as defined in the *tender specifications or specific contract*\n- statements of reimbursable expenses in accordance with Article II.22.\n\n2. 
The contracting authority must approve the submitted documents and pay within 30 days from receipt of the invoice.", - "page_start": 6, - "page_end": 6, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "Assessment).396 Many good examples of support for micro and small enterprises (MSEs) are available as identified in the comprehensive EU-OSHA reports on OSH in MSEs.397\n\nOften, guidance documents show the difference between good practice in prevention and risky practices, for example, the SLIC guidance on measures against exposure to respirable crystalline silica at construction sites.398 There are many of these good or best practice examples in literature, but there is rarely a **quantitative estimate of the occurrence of good** (or moderate) **versus poor practice before and after** the publication and promotion of such guidance documents, which would be crucial to estimate the impact of guidance and tools.\n\nOften the support of a proper implementation is done by European national, sectoral and regional employers' and workers' associations. They contribute to supervision and implementation by consultation or participation in steering committees and so on. Some of them produce specific OSH information or guidance for their members, adapted to the main topics of the organisation.399 They participate in the development of national strategic approaches or OSH campaigns. In all EU Member States there exist fora for social dialogue at regional, sectoral or national level (overview in OSHWiki articles on OSH national systems400). At EU level more than 40 sectoral Social Dialogue Committees and a cross-industry social dialogue committee is working on topics of EU-wide relevance.401\n\nIn the frame of social dialogue, employer federations and trade unions agree on the regulation of **working conditions in collective agreements** without intervention or close reference to state regulations, for example, on working time or telework rules. 
The Eurofound 'Database of wages, working time and collective disputes' provides an EU-wide overview on such agreements.402\n\nIn some countries, **employers' and workers' associations are governing widely independent OSH institutions** (e.g. Austrian AUVA or German Berufsgenossenschaften) that act in the frame of state regulation but with quite considerable independent decision power.403 In some cases they dispose of significant resources and are major players for some areas, like training of OSH professionals, or compensation of occupational diseases. They can even implement financial incentives to initiate better OSH practices.404\n\n**Management systems** and policies contribute to better prevention; they include ethical considerations (corporate responsibility programmes, sustainability and environmental reports), or quality objectives (quality management) particularly in global and large companies. Most of them cover all aspects of the business activities and **OSH is one of these aspects**. Well known are the standards of the International Organisation for Standardization (ISO), namely ISO 9001, Quality management systems, ISO 14001 Environmental management systems - Requirements with guidance for use, and ISO 31000, Risk management - Principles and guidelines.405\n\nIf the OSH aspects in such systems are not sufficiently covered, enterprises can introduce **specific OSH management** systems. ISO published the global standard ISO 45.000-2018 *Occupational health and safety management systems - Requirements with guidance for use* developed by ISO. According to ISO, these systems have the following function:\n\n*'OH&S management controls the conditions and factors that affect, or could affect, the health and safety of workers (including temporary workers and contractor personnel), visitors, or any other person in the workplace, to avoid their ill health and/or injury.'406*\n\nEnterprises can also use other OSH management standards. 
At international level, the ILO published in 2001 'Guidelines on occupational safety and health management systems'.407 In EU Member States there exist several systems that target specific sectors or SMEs, for example, scorecard systems.408 Critics of OSH management systems refer to the risk of a focus on 'paper compliance'.409", - "page_start": 125, - "page_end": 125, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv4.pdf", - "query": "In the context of research publication, what is HAL ?", - "target_page": 3, - "target_passage": "Hyper Articles en Ligne (HAL) is a French open archive of scholarly documents from all academic fields.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "It is also an example predicated on copyright's limitations and exceptions — in this case, on U.S. fair use. While the Authors Guild filed a copyright infringement suit against HathiTrust, federal courts in 2012 and 2014 ruled that HathiTrust's use of books was fair use.32\n\nA nonprofit founded in 2008, HathiTrust grew out of a partnership among major US university libraries and today is \"an international community of research libraries committed to the long-term curation and availability of the cultural record.\" It started in what it calls the \"early 33 days of mass digitization\" — that is, at a time when it started to become economical to take existing physical artifacts in libraries and turn them into digital files at a large scale.\n\nThe founding members of HathiTrust were among the initial partners for Google's Book Search product, which allows people to search across and view small snippets of text from in-copyright books and read full copies of public domain books scanned from libraries' 34 collections. 
The libraries provided Google with books from their collections, Google would then scan the books for use in Book Search, and return to the libraries a digital copy for their own uses. These uses included setting up HathiTrust not only to ensure long-term preservation of the digital books and their metadata, but also to facilitate other uses, including full text search of books and accessibility for people with print disabilities. In separate court cases, both Google and HathiTrust's uses of the books were deemed consistent with copyright law.\n\nThe uses most relevant to this paper are those enabled by what HathiTrust refers to today as the Research Center. The Center grew in part out of a research discipline called \"digital humanities,\" which, among other things, seeks to use computational resources or other digital technologies to analyze information and contribute to the study of literature, media, history, and other areas. For instance, imagine you want to understand how a given term (e.g., \"war on drugs\") became used; one might seek to analyze when the term was first used and how often it was used over time by analyzing a vast quantity of sources, searching out the term's use. The insight here is that there is much to be learned not just from reading or otherwise consuming specific material, but also from \"non-consumptive research,\" or \"research in which computational analysis is performed on one or more volumes (textual or image objects)\" to derive other sorts of insights. AI training is a type of non-consumptive use.\n\nToday, the Center \"[s]upports large-scale computational analysis of the works in the HathiTrust Digital Library to facilitate non-profit and educational research.\" It includes over 18 million books in over 400 languages from the HathiTrust Digital Library collection. Roughly 58% of the corpus is in copyright. 
HathiTrust notes that, while this corpus is large, it has limitations in terms of its representation across subject matter, language, geography, and other dimensions. In terms of subject matter, the corpus is skewed towards humanities (64.9%) and social sciences (14.3%). In terms of language, 51% of the books are in English,\n\n<i>Authors Guild v. HathiTrust, 902 F.Supp.2d 445 (SDNY October 10, 2012) and *Authors Guild v.* 32 *HathiTrust*, 755 F.3d 87 (2d Cir. 2014).\n\nSee https://www.hathitrust.org/member-libraries/member-list/ — the membership is principally US 33 institutions, and most of the non-US members are from English speaking countries or institutions that use English as the primary language of operations.\n\nThis functionality is limited to scanned books provided by library partners in the US. 34", - "page_start": 14, - "page_end": 14, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "institutional requirements. The participants provided their written informed consent to participate in this study.\n\n#### Author contributions\n\nSD: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Resources, Visualization, Writing – original draft, Writing – review & editing. EA: Conceptualization, Formal Analysis, Methodology, Supervision, Writing – review & editing. BN: Conceptualization, Formal Analysis, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing – review & editing.\n\n### Funding\n\nThe author(s) declare that financial support was received for the research, authorship, and/or publication of this article.\n\nThe development of the CoreDISTparticipation and the RCT is funded by the Northern Norway Health Authority (Helse Nord RHF). 
This interview study was funded by Nord University (PhD salary).\n\n## Acknowledgments\n\nThe authors would like to thank the participants in this study and the user representatives from Nordland MS Association for their valuable contributions. The authors also acknowledge philosopher of the mind and cognitive sciences Hanne De Jaegher for the valuable comments on the interpretations and discussions of the results.\n\n## Conflict of interest\n\nThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\n\n### Publisher's note\n\nAll claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.\n\n## References\n\n1. Walton C, King R, Rechtman L, Kaye W, Leray E, Marrie RA, et al. Rising prevalence of multiple sclerosis worldwide: insights from the Atlas of MS, third edition. Mult Scler. (2020) 26(14):1816–21. doi: 10.1177/1352458520970841\n\n2. Casey B, Coote S, Galvin R, Donnelly A. Objective physical activity levels in people with multiple sclerosis: meta-analysis. Scand J Med Sci Sports. (2018) 28 (9):1960–9. doi: 10.1111/sms.13214\n\n3. Kinnett-Hopkins D, Adamson B, Rougeau K, Motl RW. People with MS are less physically active than healthy controls but as active as those with other chronic diseases: an updated meta-analysis. Mult Scler Relat Disord. (2017) 13:38–43. doi: 10.1016/j.msard.2017.01.016\n\n4. Hoang PD, Lord S, Gandevia S, Menant J. Exercise and sports science Australia (ESSA) position statement on exercise for people with mild to moderate multiple sclerosis. J Sci Med Sport. (2022) 25(2):146–54. doi: 10.1016/j.jsams.2021.08.015\n\n5. 
Dalgas U, Langeskov-Christensen M, Stenager E, Riemenschneider M, Hvid LG. Exercise as medicine in multiple sclerosis—time for a paradigm shift: preventive, symptomatic, and disease-modifying aspects and perspectives. Curr Neurol Neurosci Rep. (2019) 19(11):1–12. doi: 10.1007/s11910-019-1002-3\n\n6. Riemenschneider M, Hvid LG, Ringgaard S, Nygaard MKE, Eskildsen SF, Gaemelke T, et al. Investigating the potential disease-modifying and neuroprotective efficacy of exercise therapy early in the disease course of multiple sclerosis: the early multiple sclerosis exercise study (EMSES). Mult Scler. (2022) 28(10):1620–9. doi: 10. 1177/13524585221079200\n\n7. Kalb R, Brown TR, Coote S, Costello K, Dalgas U, Garmon E, et al. Exercise and lifestyle physical activity recommendations for people with multiple sclerosis throughout the disease course. Mult Scler. (2020) 26(12):1459–69. doi: 10.1177/ 1352458520915629\n\n8. Moreno-Navarro P, Manca A, Martinez G, Ventura L, Barbado D, Vera-García FJ, et al. Test-retest reliability and known-groups validity of trunk muscle tests in people with multiple sclerosis: a cross-sectional, case-control study. Phys Ther. (2021) 101 (5):1–9. doi: 10.1093/ptj/ptzab049\n\n9. Raats J, Arntzen EC, Lamers I, Feys P, Normann B. What is the distribution of trunk impairments and its relationship with disability level in individuals with multiple sclerosis? Mul Scler Relat Disord. (2021) 57:103325. doi: 10.1016/j.msard. 2021.103325\n\n10. Normann B, Arntzen EC. What are the relationships between trunk control, balance and walking in individuals with multiple sclerosis with minor to moderate disability? Eur J Physiother. (2021) 23(6):377–83. doi: 10.1080/21679169.2020.1772870\n\n11. Unluer NO, Ozkan T, Yasa ME, Ates Y, Anlar O. 
Investigation of the relationship between trunk motor control and balance, functional mobility, and gait capacity in patients with multiple sclerosis/multipl sklerozlu hastalarda govde motor kontrolu ile denge, fonksiyonel mobilite ve yuruyus kapasitesi arasindaki iliskinin incelenmesi. Türk Nöroloji Dergisi. (2021) 27(3):283. doi: 10.4274/tdn.2021.41017\n\n12. Learmonth YC, Motl RW. Physical activity and exercise training in multiple sclerosis: a review and content analysis of qualitative research identifying perceived determinants and consequences. Disabil Rehabil. (2016) 38(13):1227–42. doi: 10. 3109/09638288.2015.1077397\n\n13. Fikke HK, Normann B, Sivertsen M, Dahl SSH, Arntzen EC. Optimizing sensorimotor function, physical activity and employment for people with MS—a feasibility study. Fysioterapeuten. (2023) 90(1):32–42. doi: 10.52705/ c14a8ca05f7546dabc18bd0275cf2edd\n\n14. Arntzen EC, Straume B, Odeh F, Feys P, Normann B. Group-based, individualized, comprehensive core stability and balance intervention provides immediate and long-term improvements in walking in individuals with multiple sclerosis: a randomized controlled trial. Physiother Res Int. (2019) 25(1):e1798. doi: 10.1002/pri.1798\n\n15. Arntzen EC, Straume BK, Odeh F, Feys P, Zanaboni P, Normann B. Groupbased individualized comprehensive core stability intervention improves balance in persons with multiple sclerosis: a randomized controlled trial. Phys Ther. (2019) 99 (8):1027–38. doi: 10.1093/ptj/pzz017\n\n16. Arntzen EC, Øberg GK, Gallagher S, Normann B. Group-based, individualized exercises can provide perceived bodily changes and strengthen aspects of self in individuals with MS: a qualitative interview study. Physiother Theory Pract. (2019) 37(10):1080–95. doi: 10.1080/09593985.2019.1683923\n\n17. Florio-Smith J, Ayer M, Colhoun S, Daykin N, Hamill B, Liu X, et al. 
The importance of the patient's perspective in decision-making in multiple sclerosis: results of the OwnMS patient perspectives study. Mult Scler Relat Disord. (2023) 75:104757. doi: 10.1016/j.msard.2023.104757\n\n18. Kleim JA, Jones TA. Principles of experience-dependent neural plasticity: implications for rehabilitation after brain damage. J Speech Lang Hear Res. (2008) 51(1):225–39. doi: 10.1044/1092-4388(2008/018)\n\n19. Thompson E. Mind in Life: Biology, Phenomenology, and The Sciences of Mind. Cambridge, Mass: Harvard University Press (2007).\n\n20. Merleau-Ponty M. Phenomenology of Perception. London: Routledge Classics (2008).", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed13.pdf" - }, - { - "text": "# *Acknowledgements*\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus Strategies) in collaboration with Creative Commons.\n\nWe are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/ NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. All mistakes or errors are the authors'.\n\nThis report is published under the terms of the Creative Commons Attribution License.", - "page_start": 21, - "page_end": 21, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# A Primer in BERTology: What We Know About How BERT Works\n\nAnna Rogers Center for Social Data Science University of Copenhagen arogers@sodas.ku.dk\n\nOlga Kovaleva Dept. of Computer Science University of Massachusetts Lowell okovalev@cs.uml.edu\n\n### Anna Rumshisky\n\nDept. 
of Computer Science University of Massachusetts Lowell arum@cs.uml.edu\n\n### Abstract\n\nTransformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.\n\n### 1 Introduction\n\nSince their introduction in 2017, Transformers (Vaswani et al., 2017) have taken NLP by storm, offering enhanced parallelization and better modeling of long-range dependencies. The best known Transformer-based model is BERT (Devlin et al., 2019); it obtained state-of-the-art results in numerous benchmarks and is still a must-have baseline.\n\nWhile it is clear that BERT works remarkably well, it is less clear *why*, which limits further hypothesis-driven improvement of the architecture. Unlike CNNs, the Transformers have little cognitive motivation, and the size of these models limits our ability to experiment with pre-training and perform ablation studies. This explains a large number of studies over the past year that attempted to understand the reasons behind BERT's performance.\n\nIn this paper, we provide an overview of what has been learned to date, highlighting the questions which are still unresolved. We first consider the linguistic aspects of it, i.e., the current evidence regarding the types of linguistic and world knowledge learned by BERT, as well as where and how this knowledge may be stored in the model. We then turn to the technical aspects of the model and provide an overview of the current proposals to\n\nimprove BERT's architecture, pre-training and finetuning. 
We conclude by discussing the issue of overparameterization, the approaches to compressing BERT, and the nascent area of pruning as a model analysis technique.\n\n### 2 Overview of BERT architecture\n\nFundamentally, BERT is a stack of Transformer encoder layers (Vaswani et al., 2017) which consist of multiple self-attention \"heads\". For every input token in a sequence, each head computes key, value and query vectors, used to create a weighted representation. The outputs of all heads in the same layer are combined and run through a fully-connected layer. Each layer is wrapped with a skip connection and followed by layer normalization.\n\nThe conventional workflow for BERT consists of two stages: pre-training and fine-tuning. Pretraining uses two self-supervised tasks: masked language modeling (MLM, prediction of randomly masked input tokens) and next sentence prediction (NSP, predicting if two input sentences are adjacent to each other). In fine-tuning for downstream applications, one or more fully-connected layers are typically added on top of the final encoder layer.\n\nThe input representations are computed as follows: each word in the input is first tokenized into wordpieces (Wu et al., 2016), and then three embedding layers (token, position, and segment) are combined to obtain a fixed-length vector. Special token [CLS] is used for classification predictions, and [SEP] separates input segments.\n\nGoogle1 and HuggingFace (Wolf et al., 2020) provide many variants of BERT, including the original \"base\" and \"large\" versions. They vary in the number of heads, layers, and hidden state size.\n\n1https://github.com/ google-research/bert", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "- Haack, Susan (1978). \"1. 'Philosophy of logics' \". *Philosophy of Logics* (https://philpapers.or g/rec/HAAPOL-2). London and New York: Cambridge University Press. pp. 1–10. ISBN 978- 0-521-29329-7. 
Archived (https://web.archive.org/web/20211207200551/https://philpapers.o rg/rec/HAAPOL-2) from the original on 7 December 2021. Retrieved 29 December 2021.\n- Haack, Susan (1996). *Deviant Logic, Fuzzy Logic: Beyond the Formalism*. University of Chicago Press. ISBN 978-0-226-31133-3.\n- Haaparanta, Leila (2009). \"1. Introduction\". *The Development of Modern Logic*. Oxford University Press. pp. 4–6. ISBN 978-0-19-513731-6.\n- Hansen, Hans (2020). \"Fallacies\" (https://plato.stanford.edu/entries/fallacies/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (http s://web.archive.org/web/20210329182946/https://plato.stanford.edu/entries/fallacies/) from the original on 29 March 2021. Retrieved 18 March 2021.\n- Hartmann, Stephan; Sprenger, Jan (2010). \"Bayesian Epistemology\". *The Routledge Companion to Epistemology* (https://philpapers.org/rec/BOVSIO). London: Routledge. pp. 609–620. ISBN 978-0-415-96219-3. Archived (https://web.archive.org/web/2021051609 5047/https://philpapers.org/rec/BOVSIO) from the original on 16 May 2021. Retrieved 4 January 2022.\n- Hasse, Dag Nikolaus (2008). \"Influence of Arabic and Islamic Philosophy on the Latin West\" (https://plato.stanford.edu/entries/arabic-islamic-influence/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Retrieved 19 July 2023.\n- Hawthorne, James (2021). \"Inductive Logic\" (https://plato.stanford.edu/entries/logic-inductiv e/). *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20220121081805/https://plato.stanford.ed u/entries/logic-inductive/) from the original on 21 January 2022. Retrieved 6 January 2022.\n- Hintikka, Jaakko J. (2019). \"Philosophy of logic\" (https://www.britannica.com/topic/philosoph y-of-logic). *Encyclopædia Britannica*. 
Archived (https://web.archive.org/web/2015042810173 2/http://www.britannica.com/EBchecked/topic/346240/philosophy-of-logic) from the original on 28 April 2015. Retrieved 21 November 2021.\n- Hintikka, Jaakko J. (2023). \"Logical systems\" (https://www.britannica.com/topic/logic/Logical -systems). *Encyclopædia Britannica*. Archived (https://web.archive.org/web/2021120718465 6/https://www.britannica.com/topic/logic/Logical-systems) from the original on 7 December 2021. Retrieved 4 December 2021.\n- Hintikka, Jaakko (1970). \"Information, Deduction, and the A Priori\". *Noûs*. **4** (2): 135–152. doi:10.2307/2214318 (https://doi.org/10.2307%2F2214318). ISSN 0029-4624 (https://searc h.worldcat.org/issn/0029-4624). JSTOR 2214318 (https://www.jstor.org/stable/2214318).\n- Hintikka, Jaakko; Sandu, Gabriel (2006). \"What is Logic?\". In Jacquette, D. (ed.). *Philosophy of Logic* (https://philpapers.org/rec/JAAWIL). North Holland. pp. 13–39. ISBN 978-0-444-51541-4. Archived (https://web.archive.org/web/20211207235525/https://ph ilpapers.org/rec/JAAWIL) from the original on 7 December 2021. Retrieved 29 December 2021.\n- Hintikka, Jaakko J.; Spade, Paul Vincent. \"History of logic\" (https://www.britannica.com/topi c/history-of-logic). *Encyclopædia Britannica*. Retrieved 23 September 2022.\n- Honderich, Ted (2005). *The Oxford Companion to Philosophy* (https://philpapers.org/rec/HO NTOC-2). Oxford University Press. ISBN 978-0-19-926479-7. Archived (https://web.archive. org/web/20210129082636/https://philpapers.org/rec/HONTOC-2) from the original on 29 January 2021. Retrieved 2 January 2022.\n- Hurley, Patrick J. (2015). \"4. Categorical Syllogisms\". *Logic: The Essentials*. Wadsworth. pp. 189–237. ISBN 978-1-305-59041-0.\n- IEP Staff. \"Deductive and Inductive Arguments\" (https://iep.utm.edu/ded-ind/). Archived (http s://web.archive.org/web/20100528032124/https://iep.utm.edu/ded-ind/) from the original on 28 May 2010. 
Retrieved 6 January 2022.", - "page_start": 29, - "page_end": 29, - "source_file": "wikipedia1.pdf" - }, - { - "text": "## **Better information, Better decision**\n\n#### **Market Intelligence**\n\nMARKETING\n\nASAKO HOSHINO Vice President\n\n\"Why does a company conduct market research on consumers? It is not just about asking the customer if they prefer A or B, which is often what managers want to know. Accumulating knowledge on consumer behavior and emerging trends is how you come up with ideas that are truly focused on the customer. Our aim is to gain the deepest understanding of the customer possible, and use that insight to identify future trends.\n\nThe Market Intelligence department is relatively new, formed by combining the research functions once carried out separately by various divisions. The merger and our independent status have brought several practical benefits. We now have uniform procedures for conducting research, better research methodologies, and greater objectivity in the interpretation of the data. Today, we're a team of experts in this field, not simply coordinators between research organizations and the decision makers. We are often benchmarked by other industries.\n\nWhen the department was first established, Mr. Ghosn made one thing very clear: Do not attack the methodology! Different business areas may complain when we release information that is negative or differs from their objectives. However, they cannot attack how we came to our conclusions, because our methodology is considered the best within the organization. We are transparent in our\n\nselection of methodologies and how we approach conclusions. Among the various areas, we aim to be the department that most effectively utilizes the PDCA—plan, do, check and action—cycle. We are always working to get better and more accurate information to upgrade our methodology. Every year we hold a PDCA session to review our methodology with other departments. 
Anyone can assess Market Intelligence at this time. This is also a great opportunity to share methodologies and approaches with various functions.\n\nWe also conduct trend review meetings with all decision-makers, including non-marketing officers, to understand social, consumer and value trends so that we can identify sources of innovation for all areas. This makes us unique. Our analysts enrich the analysis, interpretation and forecast because they are aware of global social and consumer trends. The trend review meetings also remind people in all departments—even those not directly involved with sales and marketing—that customers are truly the center of our business.\n\nWe work with different research experts and companies as our partners. They offer a variety of hightech techniques such as glasses with cameras that track eye movement, instruments that measure brainwaves or pupil dilation to detect preferences, and non-categorical measures that help us find personal evaluations of perceived quality or design. Our job is to evaluate these research companies and their output, and to develop the best methodology for our issues. We are always refining the tools we have and looking for new ones that will boost our accuracy. Our strong ties with outside experts are a source of competitive advantage for Nissan.\n\nAgain, it all goes back to being customer-oriented. Confirming that customer-oriented stance will create value for Nissan. Market Intelligence must be a dedicated evangelist for this change.\"", - "page_start": 41, - "page_end": 41, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "content repositories, like libraries, with that of AI developers. 
A \"books data commons\" needs to be both responsibly managed, and useful for developers of AI models.\n\nWe use \"commons\" here in the sense of a resource that is broadly shared and accessible, and thus obviates the need for each individual actor to acquire, digitize, and format their own corpus of books for AI training. This resource could be collectively and intentionally managed, though we do not mean to select a particular form of governance in this paper. 4\n\nThis paper is descriptive, rather than prescriptive, mapping possible paths to building a books data commons as defined above and key questions relevant to developers, repositories, and other stakeholders, building on our workshop discussions. We first explain why books matter for AI training and how broader access could be beneficial. We then summarize two tracks that might be considered for developing such a resource, highlighting existing projects that help foreground both the potential and challenges. Finally, we present several key design choices, and next steps that could advance further development of this approach.5\n\nIn this way, we do not use \"commons\" in the narrow sense of permissively licensed. What's more, this 4 resource could also be governed as more of a data \"trust,\" and, indeed, we discuss extensively the work of HathiTrust as a relevant project in this domain. However, our use of the word \"commons\" is not meant to preclude this or other arrangements.\n\nThere are, of course, a range of other types of texts that are not on the web and/or not digital at all - 5 e.g., periodicals, journals, government documents. These are out of scope for this paper, but also worthy of further analysis.", - "page_start": 2, - "page_end": 2, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "Policy information about availability of data\n\nAll manuscripts must include a data availability statement. 
This statement should provide the following information, where applicable:\n\n- Accession codes, unique identifiers, or web links for publicly available datasets\n- A description of any restrictions on data availability\n- For clinical datasets or third party data, please ensure that the statement adheres to our policy\n\nThe dataset consists of 26 MRI scans (T1w, T2w, and diffusion scans) alongside state-dependent measures and serum assessments of ovarian sex hormones for each session. The data is publicly available on https://openneuro.org/datasets/ds005299.\n\n# Research involving human participants, their data, or biological material\n\nPolicy information about studies with human participants or human data. See also policy information about sex, gender (identity/presentation), and sexual orientation and race, ethnicity and racism.\n\n| Reporting on sex and gender | Our study focused on a single female participant to explore how pregnancy shapes the human brain. |\n| --- | --- |\n| Reporting on race, ethnicity, or | The subject was white. |\n| other socially relevant | |\n| groupings | |\n| Population characteristics | This was a precision imaging study of one 38-year old primiparous woman. |\n| Recruitment | Our participant (corresponding author E.R.C.) was a healthy primiparous woman who underwent in-vitro fertilization (IVF) to |\n| | achieve pregnancy. The project was conceived by E.R.C. and she wished to use herself as the participant, as has been done in |\n| | previous \"dense-sampling\" studies (cf. Poldrack et al., 2015; Pritschet et al., 2020). |\n| Ethics oversight | The participant gave written informed consent and the study was approved by the University of California, Irvine Human |\n| | Subjects Committee. |\n\nNote that full information on the approval of the study protocol must also be provided in the manuscript.\n\n# Field-specific reporting\n\nPlease select the one below that is the best fit for your research. 
If you are not sure, read the appropriate sections before making your selection.\n\n|\n| |\n\nFor a reference copy of the document with all sections, see nature.com/documents/nr-reporting-summary-flat.pdf\n\n# Life sciences study design\n\nAll studies must disclose on these points even when the disclosure is negative.\n\n| Sample size | We used precision imaging to deeply-phenotype, densely-sample an individual over the gestational window. As this study was the first of it's |\n| --- | --- |\n| | kind, our sample size was an N=1 design. Although this limits the generalizability of our findings, this project serves as a proof-of-concept, |\n| | showcasing the value and feasibility of studying a woman's brain during the transition to motherhood. |\n| Data exclusions | no history of neuropsychiatric diagnosis, endocrine disorders, prior head trauma or history of smoking |\n| Replication | This is the first study of it's kind; therefore, there are no study replications as of yet. However, to reproduce our results internally across |\n| | software packages, we also ran the T1w data through the longitudinal FreeSurfer cortical thickness pipeline (Dale et al., 1999), which |\n| | corroborated our finding that gray matter volume declines throughout gestation (e.g., successful internal replication). This pattern of results |\n| | not only held across software packages, but also brain parcellations (e.g., Schaefer 400-cortical atlas and Desikan-Killiany cortical atlas). |\n| Randomization | This was an observational study design, and therefore not randomized. |\n| Blinding | For medial temporal lobe segmentation, scans were randomized and segmentation was performed in a random order, blind to pregnancy |\n| | stage. No other blinding was applicable, given the observational study of brain changes in response to advancing gestational week. 
|\n\n# Reporting for specific materials, systems and methods\n\nWe require information from authors about some types of materials, experimental systems and methods used in many studies. Here, indicate whether each material, system or method listed is relevant to your study. If you are not sure if a list item applies to your research, read the appropriate section before selecting a response.", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed4.pdf" - }, - { - "text": "# *7. Conclusion*\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development.41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. 
Google is an exception — it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else — independent researchers, entrepreneurs, and smaller entities — will have access. The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.\n\nFor other existing and past examples, one might look to the work of Europeana, https:// 41 www.europeana.eu/en, as well as the mountain of commentary on the failed class action settlement between Google, the Authors Guild, and the Association of American Publishers — see e.g. the excellent collection of court filings created by James Grimmelmann and colleagues (now archived at the Internet Archive) — https://web.archive.org/web/20140425012526/http://thepublicindex.org/. The Settlement expressly would have set up a \"Research Corpus\" for non-consumptive research. HathiTrust created a Research Center, with the intention of becoming one of the hosts for the \"Research Corpus.\" The Settlement was criticized and was ultimately rejected by the district court for both substantive reasons (that is, what the settlement would specifically do) and procedural (in the sense of violating class-action law, but also in a broader sense of representing a \"backroom deal\" without sufficient participation from impacted interests). The Research Corpus was not a core locus of critique, though it did receive concern in terms of providing too much control to Google, for example. 
Our purpose in mentioning this is not to relitigate the issue, but rather to call out that design decisions of this sort have been considered in the past.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "These factors can also affect our objectives, strategies and intentions. Many of these factors are beyond our control or our current expectations. Should one or more of these risks, uncertainties or other factors materialize, our objectives, strategies or intentions change, or any other factors or assumptions underlying the forward-looking information prove incorrect, our actual results and our plans could vary significantly from what we currently foresee.\n\nAccordingly, we warn investors to exercise caution when considering statements containing forward-looking information and that it would be unreasonable to rely on such statements as creating legal rights regarding our future results or plans. We are under no obligation (and we expressly disclaim any such obligation) to update or alter any statements containing forward-looking information or the factors or assumptions underlying them, whether as a result of new information, future events or otherwise, except as required by law. 
All of the forward-looking information in this MD&A is qualified by the cautionary statements herein.\n\n#### BEFORE MAKING AN INVESTMENT DECISION\n\nBefore making any investment decisions and for a detailed discussion of the risks, uncertainties and environment associated with our business, fully review \"Regulation in Our Industry\" and \"Governance and Risk Management\", in this MD&A, as well as our various other filings with Canadian and US securities regulators which can be found at sedar.com and sec.gov.\n\n#### FOR MORE INFORMATION\n\nYou can find more information about us, including our Information Circular and Annual Information Form, on our website (rogers.com/ investors), on SEDAR (sedar.com) and on EDGAR (sec.gov), or you can e-mail us at investor.relations@rci.rogers.com. Information on or connected to these and any other websites referenced in this document is not part of this MD&A.\n\nYou can also go to rogers.com/investors for information about our governance practices, corporate social responsibility reporting, a glossary of communications and media industry terms, and additional information about our business.", - "page_start": 28, - "page_end": 28, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv4.pdf", - "query": "What is the effect of embedding dimension on embedding representation quality ?", - "target_page": 6, - "target_passage": "we observe a performance correla- tion with the embedding dimension and the model’s number of parameters, which are often correlated themselves", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "# MTEB-French: Resources for French Sentence Embedding Evaluation and Analysis\n\nMathieu Ciancone\n\nWikit, France mathieu@wikit.ai\n\nMarion Schaeffer Wikit, France marion@wikit.ai\n\n#### Abstract\n\nRecently, numerous embedding models have been made available and widely used for various NLP tasks. 
The Massive Text Embedding Benchmark (MTEB) has primarily simplified the process of choosing a model that performs well for several tasks in English, but extensions to other languages remain challenging. This is why we expand MTEB to propose the first massive benchmark of sentence embeddings for French. We gather 15 existing datasets in an easy-to-use interface and create three new French datasets for a global evaluation of 8 task categories. We compare 51 carefully selected embedding models on a large scale, conduct comprehensive statistical tests, and analyze the correlation between model performance and many of their characteristics. We find out that even if no model is the best on all tasks, large multilingual models pre-trained on sentence similarity perform exceptionally well. Our work comes with open-source code, new datasets and a public leaderboard1 .\n\n#### 1 Introduction\n\nEmbeddings are dense vector representations that capture the semantics of an input. The first emblematic example is Word2Vec, introduced by Mikolov et al. (2013). It consists of neural architectures trained to learn high-quality word representations from contextual relationships in vast amounts of text. Other models were proposed since then, leveraging the transformer architecture (Vaswani et al., 2017) to produce both generic and contextualized word embeddings using self-attention. 
Many models now exist with various architectures, monolingual or multilingual, pre-trained or fine-tuned (Naseem et al., 2021; Ding et al., 2023).\n\nIn this work, our primary objective is to introduce a large-scale embedding benchmark for\n\nImene Kerboua Esker, France imene.kerboua@esker.com\n\nWissam Siblini wissam.siblini92@gmail.com\n\nFrench to enable the research community and industry to select the most relevant embedding methods based on one's specific needs, such as being opensource, versatile or targeted toward a particular task, having a small embedding dimension, the ability to process long texts or their performance. To achieve this goal, we undertake significant efforts in collecting datasets to conduct a broad comparison of models. We ensure that the datasets cover various tasks within a common, easy-to-use framework, and we create three new quality-checked datasets to enhance this collection. We select a diverse range of models, including prominent French and multilingual models deemed most efficient. The results of our study already enable the community to make informed model selections, whether for general purposes or specific tasks. Additionally, our implementation is open to the community and features a public leaderboard, allowing the results to evolve with new models or datasets. With this first large-scale comparison, we perform an in-depth analysis of the results, confirming well-known findings such as the correlation between performance and model/embedding dimensions and uncovering interesting nuances.\n\n#### 2 Related Work\n\nSentence Embeddings Sentence embeddings are required for many language tasks, such as Semantic Textual Similarity (STS) and knowledge retrieval. 
Many models have been proposed in the literature, leveraging pooling strategies (Devlin et al., 2019; Muennighoff, 2022) or similarity fine-tuning (Reimers and Gurevych, 2019) using a contrastive framework (Gao et al., 2021; Neelakantan et al., 2022; Ni et al., 2021; Wang et al., 2022; Zhang et al., 2023), leveraging prompts (Wang et al., 2023) or a two steps training process (Chen et al., 2024; Lee et al., 2024). Few French-language models have been proposed in the literature (Martin et al.,\n\n1 French table on: https://huggingface.co./spaces/ mteb/leaderboard", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv4.pdf" - }, - { - "text": "with respect to model ranking?\n\nTo go further than the correlation analysis among datasets regarding their topics (see section 3.1.5), subsequent analysis will be conducted regarding how they rank models. Additionally, complementary insights will be derived from examining correlations of models relative to their strengths and weaknesses across different datasets.\n\n### 4 Results and discussion\n\nIn this section, we present the results through the prism of our research questions.\n\n### Q1: Is there a model that outstands on all tasks?\n\nModels performances for each task are presented in appendix Tables 9, 10, 11, 12 and 13. Figure 1 shows the critical difference diagram of average score ranks.\n\nAs in MTEB (Muennighoff et al., 2022), no model claims state-of-the-art in all tasks even if the *text-embedding-3-large* model is in first place on average on all tasks (see Table 9). It ranks first for the classification and reranking tasks. For the clustering task, *text-embedding-ada-002* is the best model. The models *voyage-code-2*, *textembedding-3-small* and *mistral-embed* share the top positions in the retrieval task ranking. For the pair classification task, *laser2* is ahead of its competitors. 
Finally, *sentence-camembert-large* leads on the STS task and *multilingual-e5-small* has the best results for summarization.\n\nFigure 1 shows a global model comparison across all datasets. The models are arranged horizontally according to their performance, with the best models on the left. The black bars represent the statistical equivalence between the models' performances. The statistically equivalent top performers for this benchmark are OpenAI's models *text-embedding-3-large*, *text-embedding-3 small* and *text-embedding-ada-002*. Interestingly, many models do not show a significant performance gap between their base and large flavours. Some French models stand out among the multilingual models, such as *Solon-embeddings-large-0.1*, *sentence_croissant_alpha_v0.3* and *sentencecamembert-large*.\n\n### Q2: Are there any links between model characteristics and performance?\n\nThe Spearman correlations between the average rank of the models and their characteristics are the following:\n\n- *Tuned for sentence similarity*: 0.727\n- *Finetuned vs pretrained*: 0.544\n- *Model number of parameters*: 0.49\n- *Embedding dimension*: 0.452\n- *Closed source*: 0.449\n- *Max sequence length*: 0.336\n- *Multilingual*: 0.103\n- *English*: 0.025\n- *English but tuned on other languages*: -0.025\n- *French*: -0.134\n- *Bilingual*: -0.135\n\nAdditionally, all cross-correlations between characteristics are reported in appendix Figure 10.\n\nAs expected, the score most strongly correlates with whether the evaluated models were trained on a sentence similarity task. Of course, this criterion is connected to the more general *Finetuned* one. The only top-performing models solely pre-trained are from the *E5* family, where the pre-training is, in fact, contrastive and optimized for similarity. 
Conversely, models pre-trained on token-level tasks and generating embeddings via pooling appear less well-suited for the benchmark tasks.\n\nFurthermore, we observe a performance correlation with the embedding dimension and the model's number of parameters, which are often correlated themselves. This appears very clearly on the relative ranking of *E5* and *T5* models (see Figure 1). However, some small models perform very well on the benchmark, such as the standard version of the multilingual universal sentence encoder or *Solon-embeddings-base-1.0*. Notably, the maximum sequence length, while an important criterion for generative tasks with LLMs, is less correlated with performance than the other dimensions. This can be explained by many datasets containing relatively small texts (see appendix Table 3 showing that 14 datasets have less than 50 tokens).\n\nRegarding language, it is surprising that good performance is not particularly correlated with French models in particular. In reality, the other aspects of the models, such as being fine-tuned", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv4.pdf" - }, - { - "text": "Figure 3: Attention patterns in BERT (Kovaleva et al., 2019)\n\nies) insufficient (Warstadt et al., 2019). A given method might also favor one model over another, e.g., RoBERTa trails BERT with one tree extraction method, but leads with another (Htut et al., 2019). The choice of linguistic formalism also matters (Kuznetsov and Gurevych, 2020).\n\nIn view of all that, the alternative is to focus on identifying what BERT actually relies on at inference time. This direction is currently pursued both at the level of architecture blocks (to be discussed in detail in subsection 6.3), and at the level of information encoded in model weights. 
Amnesic probing (Elazar et al., 2020) aims to specifically remove certain information from the model and see how it changes performance, finding, for example, that language modeling does rely on part-of-speech information.\n\nAnother direction is information-theoretic probing. Pimentel et al. (2020) operationalize probing as estimating mutual information between the learned representation and a given linguistic property, which highlights that the focus should be not on the amount of information contained in a representation, but rather on how easily it can be extracted from it. Voita and Titov (2020) quantify the amount of effort needed to extract information from a given representation as minimum description length needed to communicate both the probe size and the amount of data required for it to do well on a task.\n\n### 4 Localizing linguistic knowledge\n\n#### 4.1 BERT embeddings\n\nIn studies of BERT, the term \"embedding\" refers to the output of a Transformer layer (typically, the final one). Both conventional static embeddings (Mikolov et al., 2013) and BERT-style embeddings can be viewed in terms of mutual information maximization (Kong et al., 2019), but the latter are contextualized. Every token is represented by a vector dependent on the particular context of occurrence, and contains at least some information about that context (Miaschi and Dell'Orletta, 2020).\n\nSeveral studies reported that distilled contextualized embeddings better encode lexical semantic information (i.e. they are better at traditional word-level tasks such as word similarity). The methods to distill a contextualized representation into static include aggregating the information across multiple contexts (Akbik et al., 2019; Bommasani et al., 2020), encoding \"semantically bleached\" sentences that rely almost exclusively on the meaning of a given word (e.g. 
\"This is <>\") (May et al., 2019), and even using contextualized embeddings to train static embeddings (Wang et al., 2020d).\n\nBut this is not to say that there is no room for improvement. Ethayarajh (2019) measure how similar the embeddings for identical words are in every layer, reporting that later BERT layers produce more context-specific representations3 . They also find that BERT embeddings occupy a narrow cone in the vector space, and this effect increases from the earlier to later layers. That is, two random words will on average have a much higher cosine similarity than expected if embeddings were directionally uniform (isotropic). Since isotropy was shown to be beneficial for static word embeddings (Mu and Viswanath, 2018), this might be a fruitful direction to explore for BERT.\n\nSince BERT embeddings are contextualized, an interesting question is to what extent they capture phenomena like polysemy and homonymy. There is indeed evidence that BERT's contextualized embeddings form distinct clusters corresponding to word senses (Wiedemann et al., 2019; Schmidt and Hofmann, 2020), making BERT successful at word sense disambiguation task. However, Mickus et al. (2019) note that the representations of the same word depend on the position of the sentence in which it occurs, likely due to the NSP objective. This is not desirable from the linguistic point of view, and could be a promising\n\n3Voita et al. (2019a) look at the evolution of token embeddings, showing that in the earlier Transformer layers, MLM forces the acquisition of contextual information at the expense of the token identity, which gets recreated in later layers.", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Feature Prediction versus Pixel Reconstruction. Approaches that predict in pixel space must dedicate significant model capacity and compute to capture all the low-level detail in the visual input. 
By contrast, approaches that predict in latent space have the flexibility to eliminate irrelevant or unpredictable pixel-level details from the target representation (Vondrick et al., 2016). Predicting in representation space has been shown to lead to versatile representations that perform well across many downstream tasks through linear probing or lowshot adaptation (Assran et al., 2023; Oquab et al., 2023; Assran et al., 2022), while demonstrating an efficiency gain during pretraining compared to pixel level reconstruction (Assran et al., 2023; Baevski et al., 2022b,a). The works of Baevski et al. (2022a,b) additionally show that predicting in representation space results in competitive end-to-end fine-tuning performance in the image, audio and text domains. In this work, we extend these findings to the video modality.\n\n# 3 Methodology: Video-JEPA\n\nFigure 2 Joint-Embedding Predictive Architectures are trained to predict the representation of an input y from the representation of another input x. The additional variable z provides the predictor with information about the transformation that computes y from x.\n\nOur goal is to explore the effectiveness of feature prediction as a stand-alone objective for learning visual representations from video. To that end, we use a joint-embedding predictive architecture (JEPA) (LeCun, 2022); see Figure 2. The main idea behind a JEPA is to learn by predicting the representation of an input y from the representation of another input x. The basic architecture is made up of an encoder, Eθ(·), which computes the representation of the inputs, and a predictor, Pϕ(·), which predicts the representation of y from the representation of x, conditioned on a variable z indicating the transformation (or corruption) between x and y. 
Conditioning on z enables the generation of distinct predictions for various transformations of x.\n\n### 3.1 Training Objective\n\nWe train our visual encoder Eθ(·) to satisfy the constraint that representations computed from one part of the video, y, should be predictable from representations\n\ncomputed from another part of the video, x. The predictor network Pϕ(·), which maps the representation of x to the representation of y, is trained simultaneously with the encoder, and is provided specification of the spatio-temporal positions of y through the conditioning variable z ← ∆y.\n\nNaively implementing the objective using the regression\n\n$$\\begin{array}{r l}{{\\mathrm{minimize}_{\\theta,\\phi}}}&{{}\\|P_{\\phi}(E_{\\theta}(x),\\Delta_{y})-E_{\\theta}(y)\\|_{1},}\\end{array}$$\n\nwould admit a trivial solution, where the encoder outputs a constant representation, regardless of its input. In practice, we use the following modified objective to prevent representation collapse,\n\nminimize${}_{\\theta,\\phi}\\quad||P_{\\phi}(E_{\\theta}(x),\\Delta_{y})-\\mbox{sg}(\\overline{E}_{\\theta}(y))||_{1},$ (1)\n\nwhere sg(·) denotes a stop-gradient operation, which does not backpropagate through its argument, and Eθ(·) is an exponential moving average of the network Eθ(·). The use of an exponential-moving average feature extractor along with a stop-gradient and a predictor has been used as a collapse prevention strategy for image pretraining (Grill et al., 2020), and studied empirically (Xie et al., 2021) and theoretically (Tian et al., 2021). In fact, the objective in equation (1) is similar to the loss of Assran et al. (2023) used for image pretraining, but we modify it to use an ℓ1 regression, which we found to be more stable.\n\nTheoretical motivation. A theoretical motivation for the effectiveness of this collapse prevention strategy was proposed in Grill et al. (2020) for the BYOL method. We provide a simple adaptation of their analysis for our ℓ1 loss. 
For ease of exposition, we will disregard the effect of the conditioning variable z and consider one dimensional representations. Denote the representation Eθ(y) by a random variable Y . The optimal predictor under equation (1) is thus given by the following functional expression,\n\n$P^{\\star}(E_{\\theta}(x))=\\text{argmin}_{P}\\|P(E_{\\theta}(x))-Y\\|_{1}$ \n \n$=\\text{median}(Y|E_{\\theta}(x))$. \n \n\nSubstituting this expression for the optimal predictor into the loss function and evaluating the expected gradient of the encoder gives\n\n$$\\nabla_{\\theta}\\mathbb{E}\\|P^{\\star}(E_{\\theta}(x))-Y\\|_{1}=\\nabla_{\\theta}\\mathrm{MAD}(Y|E_{\\theta}(x)),$$\n\nwhere MAD(· |Eθ(x)) is the median absolute deviation of a random variable conditioned on Eθ(x). Thus, in the case where the predictor is optimal, the encoder must learn to capture as much information about the video as possible to minimize the deviation of the target. The hypothesis is that incorporating an exponential moving average to compute the representation of y ensures that the predictor evolves faster than the encoder and remains close to optimal, thereby preventing collapse.", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv3.pdf" - }, - { - "text": "Figure 3 V-JEPA. Training operates on a video clip of T frames with spatial resolution H × W, flattened into a sequence of L tokens. (Left to right): We first obtain the input of the x-encoder by dropping tokens from the video clip. The x-encoder then processes the masked video sequence, and outputs an embedding vector for each input token. Next, the outputs of the x-encoder are concatenated with a set of learnable mask tokens containing positional embeddings of the masked spatio-temporal patches. The predictor network processes the combined token sequence, and outputs an embedding vector for each mask token. The outputs of the predictor are then regressed to the prediction targets using an L1 loss. 
The prediction targets correspond to the output of the y-encoder.\n\n### 3.2 Prediction Task: Predicting y from x\n\nThe feature prediction task is based on a masked modeling formulation (He et al., 2021; Tong et al., 2022); i.e., regions x and y from the video are sampled using masking. To sample y from a video, we sample several (possibly overlapping) spatially continuous blocks with various aspect ratios and repeat the spatial blocks across the entire temporal dimension of the video; x is taken to be the complement. Masking a large continuous block that covers the full temporal dimension limits information leakage due to the spatial and temporal redundancy of videos, and results in a harder prediction task (Tong et al., 2022).\n\nWe leverage two types of masks: short-range masks, where we take the union of 8 randomly sampled target blocks covering 15% of each frame, and long-range masks, where we take the union of 2 randomly sampled target blocks covering 70% of each frame. In both cases, the aspect ratio for all sampled blocks is randomly chosen in the range (0.75, 1.5). Given that both short-range and long-range masks are produced by sampling many blocks and taking their union, the result is an average masking ratio of ∼ 90%. We refer to our masking strategy as multi-block, and compare it to other possible masking strategies in Section 4.\n\n### 3.3 Network Parameterization\n\nWe use a Vision Transformer (ViT) (Dosovitskiy et al., 2020; Arnab et al., 2021) as our video backbone. To process a video with a transformer network, we split the video clip into a 3D grid of L spatio-temporal patches, where a patch consists of a 16 × 16 pixel block spanning 2 consecutive frames; we refer to these spatio-temporal patches as tokens. This sequence of tokens is then directly processed by the stack of transformer blocks. In-\n\nputs x and y correspond to masked regions of a video, we apply the video masks by simply dropping a subset of the tokens. 
We apply masking at the input of the x-encoder, and at the output of the y-encoder to construct contextualized targets (Baevski et al., 2022b). The encoder is parameterized using standard ViT networks, while the predictor is a narrow transformer implemented using 12 blocks with an embedding dimension of 384. Taking inspiration from masked autoencoders (He et al., 2021), our predictor takes as input the sequence of embeddings produced by the x-encoder as well as a sequence of learnable mask tokens with positional embeddings indicating the spatio-temporal positions of the y tokens. The output of the predictor is an embedding vector for each mask token; see Figure 3 and refer to Appendix B for more details.\n\n### 3.4 Pretraining Data and Evaluation Setup\n\nPretraining. We combine several public datasets to construct an unsupervised video pretraining dataset, which we refer to as VideoMix2M. Specifically, we combine the videos from HowTo100M (HT) (Miech et al., 2019), Kinetics-400/600/700 (K710) (Kay et al., 2017), and Something-Something-v2 (SSv2) (Goyal et al., 2017), and remove any overlap with the validation sets of Kinetics-400/600/700 and Something-Something-v2, resulting in approximately 2 million videos. We train a ViT-L/16, a ViT-H/16, and a ViT-H/16384 transformer model on VideoMix2M. We use a batch size of 3072 for the ViT-L/16 and ViT-H/16 models, and a batch size of 2400 for the ViT-H/16384 model. Each model takes as input a video clip of 16 frames sampled with a frameskip of 4, corresponding to roughly 3 second clips on average. The ViT-L/16 and ViT-H/16 process the video at a spatial resolution of 224, while the ViT-H/16384 uses an input resolution of 384; cf. Appendix C.", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv3.pdf" - }, - { - "text": "Figure 1: Critical difference diagram representing the significant rank gaps between models. The axis represents the normalized average rank of the models (lower is better). 
The black bars indicate that the difference in models' rank is not statistically significant, i.e. lower than the critical difference.\n\nfor similarity, prevail. Nevertheless, we can highlight the excellent performance of a few French models such as *sentence-camembert* and *sentencecroissant* and *Solon-embeddings*.\n\nLastly, we emphasize that closed-source models perform well on this benchmark (*text-embeddings*, *mistral-embed* and *voyage*), but we lack information about their characteristics. As more opensource well-performing models get added in the future, we could expect this correlation to decrease. Note that the correlation between sequence length and performance could be dragged by closedsource models that have generally larger sequence lengths.\n\n### Q3: Do monolingual models have multilingual capabilities?\n\nFigure 2: Model performance depending on the language of the data they have been trained on.\n\nWe also studied the capabilities of models on the French language when the language of the training data varies. It is surprising to note the absence of a clear correlation between the language the model is trained on and its performance on French, as shown by the large standard deviation in Figure 2. Furthermore, monolingual models trained exclusively on English such as *voyage-code-2* show very good results on French datasets compared to models trained exclusively on French such as *flaubert* derivatives and *distilbert-base-fr-cased* (see Table D.1).\n\nThis is explained by the fact that a large part of the selected French models generate embeddings using a pooling strategy. Only a few are sentence transformer models, for which the pooled representation is part of the model and trained with it, leading to higher-quality embeddings. 
This is endorsed by the excellent results of *sentence-camembert-large*, a sentence transformer model trained on French corpus and confirms the recent findings in terms of model architecture (Gao et al., 2021).\n\nFinally, it should be noted that a significant portion of the French data used to train the selected French models actually comes from English datasets that have been machine translated (May, 2021). Despite the tremendous progress of machine translation, it is well known that the generated data may be unrepresentative of the language used by native speakers and cause a reduced final performance (Barbosa et al., 2021).", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv4.pdf" - }, - { - "text": "# Q4: Are there any correlations between datasets with respect to model ranking?\n\nThe datasets correlation w.r.t model ranking are presented in appendix Figure 12. Except for two datasets (*MasakhaNEWSClusteringP2P*, *SummEvalFr*), the correlations, on average, are high. There is still enough diversity to make each dataset interesting for the French MTEB benchmark. Two groups (*SyntecReranking*/ *SyntecRetrieval*, *MassiveScenarioClassification*/ *MTOPDomainClassification*/ *MassiveIntentClassification*) exhibit notably high correlations (∼0.97). It is interesting to point out some sub-diagonal correlation blocks. The datasets being arranged by task indicate that models behave slightly more similarly within the same task than between two different tasks. This underscores the importance of having multiple tasks in the benchmark to select general-purpose models. For readers interested in specific tasks, it is more relevant to examine task-specific rankings rather than the overall one. The complementary results of model correlations w.r.t to strengths and weaknesses on datasets are displayed in appendix Figure 11. Strong correlations in behavior emerge among the variants of the same models (e.g. DistilBERT, sentence-croissant, sentence-t5, e5, etc.). 
Correlations are also generally observed among numerous models trained using the sentence transformers framework (Reimers and Gurevych, 2019), as well as proprietary models, e.g. from Cohere and OpenAI. Conversely, these models finetuned for sentence similarity, show minimal correlation with pre-trained models for which tokenembedding pooling techniques are employed.\n\n# 5 Conclusion and perspectives\n\nIn this work, we introduce a large-scale embedding benchmark for French to enable the research community and industry to select the most relevant embedding methods based on their specific needs. We undertake significant efforts in collecting 15 datasets and create 3 new quality-checked ones to enhance this collection. The whole French benchmark runs on 26 tasks. We select a diverse range of 51 models, including prominent French and multilingual models deemed most efficient to conduct a broad comparison. Our implementation is open to the community and features a public leaderboard, allowing the results to evolve with new models or datasets. After an in-depth analysis of the results, OpenAI models perform significantly better than\n\nthe other models. However, other models should be considered for their performance on specific tasks, being open source or having a small embedding dimension.\n\nThis work opens several doors for future improvements. By examining dataset diversity in terms of topics and model ranking, we observe that the benchmark would benefit from additional datasets that introduce higher diversity. Beyond classification, many tasks focus on semantic similarity, explaining the strong performance of models trained for similarity. Exploring novel tasks in the generative spectrum or evaluating token embeddings (contextualized or not) on tasks like Named Entity Recognition could be an interesting path for future exploration. There are also opportunities for improvements on the model side. 
With numerous existing models that could be added to the leaderboard and many new proposals awaiting. For instance, we can already see the promising capabilities of early variants of recent models (Faysse et al., 2024) and expect that future proposals will come to compete strongly with closed-source models. Ultimately, we hope to see the emergence of other language-specific MTEB variants (e.g. for high-resource languages like Spanish and German), enabling a more comprehensive evaluation of multilingual model performance.\n\n# 6 Limitations\n\nNative French resources unavailability The availability of resources natively in French is an obvious limitation of our work. Regarding models, there are far fewer options than with more widespread languages such as English. Indeed, most of the existing French embedding models we found are trained using either older architectures or methods, unlike most recent multilingual models such as *NV-Embed-v1* (Lee et al., 2024) or *e5 mistral-7b-instruct* (Wang et al., 2023). Comparing models by family would be beneficial, particularly for evaluating French models against multilingual models on the same architecture using the same training technique. Resource limitations also apply to datasets. For example, the summarization task dataset is translated, which can be less relevant than a natively French dataset. We have also built datasets for reranking tasks using existing ones from retrieval task because we could not find any in French. This construction process introduces a bias as the model performance on both tasks may be", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv4.pdf" - }, - { - "text": "tation and, in practical applications, the underlying storage and compute costs. 
We selected models with embedding dimensions ranging from 384 to 4096.\n\n- *Sequence length:* Being the number of tokens that a model can consider as input, the sequence length is important as it impacts the unit that can be encoded (sentence, paragraph, document). However, encoding overly long sequences requires efficiently storing the relevant information into a single vector. Among the selected methods, this criterion varies from 128 tokens to 32768.\n- *Model parameters:* Often correlated with the two first characteristics, parameter count is important for practical applications as it affects usability on resource-efficient machines. The selected models have a number of parameters ranging from 20 million (∼100Mb in float32) to 7 billion (∼28Gb).\n- *Language:* This is a major feature of language models. Some are monolingual, and others are multilingual. Language is usually acquired during pre-training, but sometimes, models familiarize themselves with new languages at tuning. For the benchmark, we selected French models, as well as bilingual or multilingual models. We also included a few ones that claimed to be English (e.g. *all-MiniLM-L12-v2*9 ).\n- *Model types:* There are several strategies to generate text embeddings such as aggregating (e.g. with average pooling) token-level embeddings from raw pre-trained models, or adding an extra contrastive learning step on a sentence similarity task with, optionally, additional transformation layers. We included models of all types in our benchmark, summarizing the model type information under two relevant criteria: finetuned vs pretrained, and trained for sentence similarity or not.\n\nThe selected models are visible in Figure 1, and all of their characteristics are summarized in appendix Table 7. 
Overall, the selection includes the best models from the sentence transformers framework (Reimers and Gurevych, 2019), the most popular French NLP models (Le et al., 2020; Martin\n\net al., 2019), their variants optimized for semantic similarity (Reimers and Gurevych, 2019), numerous multilingual models performing at the top on MTEB (e.g *E5* and *T5*), *Bloom* variants (Zhang et al., 2023), models based on very recent powerful LLMs (Wang et al., 2023; Faysse et al., 2024) and finally the proprietary models of OpenAI, Cohere and Voyage. Certain models were selected in multiple sizes to isolate the dimensionality effect effectively. We provide information on the models' licenses as reported in the Hugging Face hub10 . However, we encourage readers to conduct further research before utilizing a model.\n\n#### 3.3 Evaluation\n\nFor the sake of homogeneity, models are evaluated using the same metrics per task as in MTEB (Muennighoff et al., 2022): Classification (Accuracy), Bitext mining (F1 score), Pair classification (AP), Clustering (V measure), Reranking (MAP), Retrieval (NDCG@10), Summarization and STS (Spearman correlation based on cosine similarity). BitextMining tasks are excluded from the average performance scores and therefore the figures, as this task evaluates 2 languages instead of one, and this benchmark focuses only on one language (French). 
We present the results for both *DiaBlaBitextMining* and *FloresBitextMining* in Table 12.\n\nUsing the overall benchmark results, our goal will be to answer the following research questions: Q1: Is a model outstanding on all tasks?\n\nAs we are trying to find out whether one embedding model is statistically better than the others for French, the objective will also be to analyze the performance of the models by tasks to facilitate model choice for specific applications.\n\nQ2: Are there any links between the model characteristics and performance?\n\nIn section 3.2, we undertook the substantial task of gathering the characteristics of all evaluated models. The goal here will be to analyze their impact on performance and draw conclusions about, for example, the relationship between embedding dimension and model ranking on the benchmark.\n\nQ3: Do monolingual models have multilingual capabilities?\n\nWe interrogate the ability of a model trained exclusively in one language to perform well in another language.\n\nQ4: Are there any correlations between datasets\n\n9 https://huggingface.co./sentence-transformers/ all-MiniLM-L12-v2\n\n10https://huggingface.co./models", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv4.pdf" - }, - { - "text": "# B Extended Description of V-JEPA\n\nIn this section, we provide an in-depth description of our approach V-JEPA that is illustrated in Figure 3.\n\nInput. Unless stated otherwise, during during pretraining, we always randomly sample a clip of 16 frames from each input video with a temporal stride of 4 between sampled frames. An input video clip therefore covers 64 frames in total, or roughly 2 seconds of a given video running at 30 frames per second. We then resize the video's spatial dimensions to 224 × 224, resulting in an overall shape of 16 × 224 × 224 × 3 for the entire clip. Since ViT networks process a 1D sequence of tokens, we must convert an input video clip into a 1D token sequence. 
To do so, we apply a 3D convolution comprising d filters of size 2 × 16 × 16 with a temporal stride of 2 and a spatial stride of 16, resulting in a tensor of shape 8 × 14 × 14 × d. Next we add absolute 3D sin-cos positional embeddings to the spatio-temporal feature map and flatten it, resulting in a 1D token sequence of shape 1568 × d. This process is demonstrated in Figure 7.\n\nFigure 7 V-JEPA training operates on a video clip flattened into a sequence of tokens. To convert a video clip of size 16 × 224 × 224 × 3 into a 1D token sequence, we apply a 3D convolution comprising d filters of size 2 × 16 × 16 with a temporal stride of 2 and a spatial stride of 16, resulting in a tensor of shape 8 × 14 × 14 × d. Next we add absolute 3D sin-cos positional embeddings to the spatio-temporal feature map and flatten it, resulting in a 1D token sequence of shape 1568 × d.\n\nV-JEPA. We sample both a video clip, and a video mask in each iteration. We denote a video clip represented as a 1D token sequence of length L = 1568 by xL = (x1, . . . , xL). Similarly, given a mask of M < L patches, leaving N = L − M patches unmasked, we denote the indices of masked patches by (i1, . . . , iM) and its complement (the indices of unmasked patches) by (j1, . . . , jN ).\n\nComputing the x-representations. To compute the V-JEPA loss, we first produce the x-representations by masking the video clip and feeding it into the x-encoder; we denote the masked video by xN = (xj1 , . . . , xjN ). Applying the xencoder Eθ(·) to the masked clip gives a sequence of patch representations, denoted as zN = Eθ(xN ) = (zj1 , . . . , zjN ).\n\nPredicting the target. Next, the V-JEPA predictor network Pϕ(·, ·) takes as input the tokens produced by the x-encoder and predicts the missing regions in the video clip, which are specified by a set of learnable mask tokens. 
Specifically, the mask tokens are parameterized as the sum of a shared learnable vector and an absolute 3D sin-cos positional embedding, denoted by mM = (mi1 , . . . , miM ). The output of the predictor is thus given by, sˆM = Pϕ(zN , mM) = (ˆsi1 , . . . , sˆiM ), corresponding to a d-dimensional output for each of the M masked patches.\n\nComputing the y-representations. Finally to compute the prediction targets, the entire unmasked video clip is processed by the y-encoder to obtain a set of target representations, denoted by sL = Eθ(xL) = (s1, . . . , sL). The V-JEPA loss is now computed as\n\n$$\\text{Loss}=\\frac{1}{M}\\sum_{k\\in(i_{1},...,i_{M})}\\|\\hat{s}_{k}-s_{k}\\|_{1},\\tag{2}$$\n\nwhich is simply the average L1 distance between the output of the predictor and the y-encoder. We then compute a gradient update with respect to the parameters of the x-encoder, θ, and the predictor, ϕ, and subsequently update the parameters of the y-encoder as an exponential moving average of the context encoder weights (Polyak average).", - "page_start": 15, - "page_end": 15, - "source_file": "arxiv3.pdf" - }, - { - "text": "free energy having some claims to being a better approximation than the information criteria classically used with MCMC methods (although see other approximations, like the Pareto-Smoothed Importance Sampling [59] or Thermodynamic Integration methods [60]; see [35] for a further review). Note that independently of which of these approaches one might take, the process involves inverting a generative model of the mental processes underlying the behaviour of a given subject, a generative model which itself is an inversion of the subject's generative model of the environment. 
We can call the generative model that the agent has of its environment the *subjective* generative model, and the model we have of the agent the *objective* generative model, in what has been called a meta-Bayesian approach or \"observing the observer\" [1,61].\n\nHere, we demonstrated model fitting by fitting the POMDP model to the synthetic behaviour that it generated; this is called a parameter recovery study since we can then compare the estimated parameters to the generative values used for creating the simulated data [62,63]. Here, we used the simulation method shown in the previous section to produce a synthetic dataset with known parameter values for each agent (in practice, these are often participants in an experiment), here with a focus on estimating the *α* parameter. We then used MCMC methods to estimate the parameters for each agent and compared the estimated values with the correct values. Here, we simulated two groups of five synthetic subjects agents with different *α* values (the parameters for the first group were sampled from a Gaussian distribution with mean = 8 and SD = 2, and the second group with with mean = 24 and SD = 2). Each agent interacted with the T-maze environment for 300 time steps. We produced the following data frame, containing the data of each of the agents: their observations, actions and an identifier, a format suitable for cognitive and behavioural modelling.\n\n| Row | Location | Reward | Cue | Action_Location | Action_Reward | SubjectID |\n| --- | --- | --- | --- | --- | --- | --- |\n| | Int64 | Int64 | Int64 | Int64 | Int64 | Int64 |\n| 1 | 1 | 1 | 1 | 4 | 1 | 1 |\n| 2 | 4 | 1 | 2 | 3 | 1 | 1 |\n| 3 | 3 | 3 | 2 | 2 | 1 | 1 |\n| . | . | . | . | . | . | . |\n| . | . | . | . | . | . | . |\n| 3000 | 2 | 2 | 2 | 2 | 1 | 10 |\n\n3000×5 DataFrame\n\nWe used ActionModels to fit the AIF model created above to each of the agents in the dataset. 
We began by initialising an ActionModels agent:\n\n✞ ☎\n\n```\nusing ActionModels\n # Initialize ActionModels Agent with the action model and created active inference agent\n agent = init_agent (\n action_model = action_pomdp !, # Action model function\n substruct = aif, # Active inference agent as a substruct\n )\n✝ ✆\n```\nWe then set the prior for the parameter we wanted to estimate: the *α* action precision. As an example, we chose a wide, weakly informative prior: a Gaussian distribution with mean 5 and standard deviation 5, truncated at 0 and 20:", - "page_start": 24, - "page_end": 24, - "source_file": "pubmed7_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Draft FWC for services 0142.pdf", - "query": "What is the maximum amount covered by the FWC of the europeean chemical agency ?", - "target_page": 6, - "target_passage": "The maximum amount covering all purchases under this FWC, including all renewals and reimbursement of expenses is EUR 1 000 000 (one million)", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "- I.4.2. Period of provision of the services\nThe period for the provision of the services starts to run from the date on which the specific contract is signed by the last party.\n\n- I.4.3. 
Implementation of FWC in cascade\nThe FWC is implemented as follows: the contracting authority orders services by sending a request for offer for a specific contract to the contractor who is ranked first in the cascade.\n\nWithin 5 working days (unless otherwise stated in the request for offer), the contractor must either:\n\n- (a) send the specific tender back to the contracting authority; or\n- (b) send an explanation of why it cannot accept the order.\n\nIf the contractor does not accept the order or fails to observe the deadline or to submit an acceptable offer for the Agency, or if it is in a situation of conflicting interests that may negatively affect the *performance of the specific contract* (see Article II.7), the contracting authority may place the order with the next contractor on the cascade.\n\nIf the contractor repeatedly refuses to accept requests for offer or repeatedly fails to send them back on time, the contractor may be considered in breach of its obligations under this FWC as set out in Article II.18.1 (c).\n\nWithin a maximum of 5 working days of a specific contract or order form being sent by the Agency to the contractor, the Agency shall receive it back, duly signed and dated. The period allowed for the execution of the tasks shall start to run on the date of signature of the specific contract or order form by both parties.\n\n# **I.5. Prices**\n\n# **I.5.1. Maximum amount of the FWC and maximum prices**\n\nThe maximum amount covering all purchases under this FWC, including all renewals and reimbursement of expenses is **EUR 1 000 000** (one million). However, this does not bind the contracting authority to purchase for the maximum amount.\n\nThe maximum unit prices of the services are:\n\n| Senior experts: | [ | ] EUR per man-day |\n| --- | --- | --- |\n| Experts: | [ | ] EUR per man-day |\n\n# **I.5.2. 
Price revision index**\n\nPrice revision is determined by the formula set out in Article II.20 and using the trend in the harmonised indices of consumer prices (HICP) 'Euro area (19 countries)' published at http://ec.europa.eu/eurostat/web/hicp/data/database under HICP (2015 = 100) - monthly data (index) (prc_hicp_midx).]\n\n# **I.5.3. Reimbursement of expenses**\n\nIn addition to the maximum price specified in each specific contract, if applicable, the contracting authority shall reimburse the following in accordance with Article II.22:", - "page_start": 5, - "page_end": 5, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "Annex IV – Daily subsistence allowances and accommodation flat rates for Finland\n\n- Annex V (a) Declaration on list of pre-exisiting rights\n\t- (b) Statement of the contractor concerning rights to delivered results and (c) Statement of creator (or right holder)\n\nwhich form an integral part of this framework contract ('the FWC').\n\nThis FWC sets out:\n\n- 1. the procedure by which the contracting authority may order services from the contractor;\n- 2. the provisions that apply to any specific contract which the contracting authority and the contractor may conclude under this FWC; and\n- 3. the obligations of the parties during and after the duration of this FWC.\n\nAll documents issued by the contractor (end-user agreements, general terms and conditions, etc.) except its tender are held inapplicable, unless explicitly mentioned in the special conditions of this FWC. In all circumstances, in the event of contradiction between this FWC and documents issued by the contractor, this FWC prevails, regardless of any provision to the contrary in the contractor's documents.", - "page_start": 1, - "page_end": 1, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "# **I. Special Conditions**\n\n# **I.1. 
Order of priority of provisions**\n\nIf there is any conflict between different provisions in this FWC, the following rules must be applied:\n\n- (a) The provisions set out in the special conditions take precedence over those in the other parts of the FWC.\n- (b) The provisions set out in the general conditions take precedence over those in the *order form* and specific contract (Annex III)\n- (c) The provisions set out in the *order form* and specific contract (Annex III) take precedence over those in the other annexes.\n- (d) The provisions set out in the tender specifications (Annex I) take precedence over those in the tender (Annex II).\n- (e) The provisions set out in the FWC take precedence over those in the specific contracts.\n- (f) The provisions set out in the specific contracts take precedence over those in the requests for services.\n\nAny reference to specific contracts applies also to order forms.\n\n# **I.2. Subject matter**\n\nThe subject matter of the FWC is scientific support to ECHA for work on restrictions, dose-response functions, Annex XIV, POPs and dossier evaluation.\n\n# **I.3. Entry into force and duration of the FWC**\n\n- **I.3.1** The FWC enters into force on the date on which the last party signs it.\n- **I.3.2** The *implementation of the FWC* cannot start before its entry into force.\n- **I.3.3** The FWC is concluded for a period of 24 months with effect from the date of its entry into force.\n- **I.3.4** The parties must sign any specific contract before the FWC expires.\n\nThe FWC continues to apply to such specific contracts after its expiry. The services relating to such specific contracts must be performed no later than six months after the expiry of the FWC.\n\n#### **I.3.5** Renewal of the FWC\n\nThe FWC is renewed automatically 2 times for 12 months each, unless one of the parties receives *formal notification* to the contrary at least three months before the end of the ongoing duration. 
Renewal does not change or postpone any existing obligations.\n\n# **I.4. Appointment of the contractor and implementation of the FWC**\n\n- I.4.1. Appointment of the contractor\nThe contracting authority appoints the contractor for a multiple FWC in cascade in [*complete*] position.", - "page_start": 4, - "page_end": 4, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "**The European Agency for Safety and Health at Work (EU-OSHA)** contributes to making Europe a safer, healthier and more productive place to work. The Agency researches, develops and distributes reliable, balanced and impartial safety and health information and organises pan-European awareness-raising campaigns. Set up by the European Union in 1994 and based in Bilbao, Spain, the Agency brings together representatives from the European Commission, Member State governments and employers' and workers' organisations, as well as leading experts in each of the EU Member States and beyond.\n\n#### **European Agency for Safety and Health at Work (EU-OSHA)**\n\nSantiago de Compostela 12, 5th floor 48003 Bilbao Spain Tel: (+34) 944 358 400 Email: information@osha.europa.eu\n\n#### **https://osha.europa.eu**", - "page_start": 163, - "page_end": 163, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "429 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A31994R2062\n\n430 Communication from the Commission - Adapting to change in work and society: a new Community strategy on health and safety at work 2002-2006 /COM/2002/0118 final\n\n431 European Commission Brussels, 31.5.2013 SWD (2013) 202 final COMMISSION STAFF WORKING DOCUMENT Evaluation of the European Strategy 2007-2012 on health and safety at work\n\n432 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Improving quality and productivity at work: Community strategy 2007-2012 
on health and safety at work {SEC(2007) 214} {SEC(2007) 215} {SEC(2007) 216}\n\n433 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on an EU Strategic Framework on Health and Safety at Work 2014-2020, Brussels, 6.6.2014 COM (2014) 332 final\n\n434 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: EU strategic framework on health and safety at work 2021- 2027: Occupational safety and health in a changing world of work, {SWD(2021) 148 final} - {SWD(2021) 149 final, Brussels, 28.6.2021\n\n435 European Agency for Safety and Health at Work, 2019: National Strategies in the field of Occupational Safety and Health in the EU, 2019, https://osha.europa.eu/en/safety-and-health-legislation/osh-strategies\n\n436 See as overview: https://osha.europa.eu/en/emerging-risks, examples e.g. EU-OSHA, 2014: Current and emerging issues in the healthcare sector, including home and community care, European Risk Observatory report, European Risk Observatory Report;\n\nEU-OSHA, 2014: Green jobs, new risks? 
New and emerging risks to occupational safety and health in the electricity sectors, Workshop for European Sectoral Social Dialogue Committee Electricity'; EU OSHA 2019: A Review on the Future of Work: Performance Enhancing Drugs\n\n437 Eurostat: Accident at work statistics\n\n438 OSH related LFS Ad hoc modules were: 1999 - Accidents at work and occupational diseases; 2007 - Work related accidents, health problems and hazardous exposure; 2013 Accidents at work and other work-related health problems; 2020 - Accidents at work- and work-related health problems.\n\n439 Eurofound Website: https://www.eurofound.europa.eu/about-eurofound\n\n440Eurofound, European Working Conditions Survey\n\n441 Eurofound, First European Survey on the Work Environment 1991-1992\n\nhttps://www.eurofound.europa.eu/sites/default/files/ef_publication/field_ef_document/ef9211en.pdf\n\n442 European Agency for Safety and Health at Work, ESENER, https://visualisation.osha.europa.eu/esener#!/en 443 Europeans and Health and Safety at Work, June 1992, https://europa.eu/eurobarometer/surveys/detail/113 ; Europeans and Health and Safety at Work, August 1996 https://europa.eu/eurobarometer/surveys/detail/158\n\n444 Eurobarometer: Working Conditions in Europe, June 1997, https://europa.eu/eurobarometer/surveys/detail/151 Eurobarometer: Working Conditions, April 2014, https://europa.eu/eurobarometer/surveys/detail/2044\n\n445 Eurobarometer: Work-Life Balance, October 2018, https://europa.eu/eurobarometer/surveys/detail/2185 446 Eurobarometer: Undeclared work in the European Union, February 2020\n\nhttps://europa.eu/eurobarometer/surveys/detail/2250\n\n447 The Horizon-project INGRID2 offers links to searchable databases on surveys related to working conditions. 
https://www.ingridportal.eu/en Supporting expertise in inclusive growth, e-portal 'Dataset on Working conditions', 448 e.g.: International Benchmarking on Occupational Safety and Health (OSH) Regulation, revised version 2018, http://www.iali-aiit.org/ ,\n\n449 The Horizon-project INGRID2 also provides an overview on these types of databases\n\n(https://www.ingridportal.eu/en ) 450 European Centre for the Development of Vocational Training CEDEFOP (https://www.cedefop.europa.eu/) Skills Panorama: https://skillspanorama.cedefop.europa.eu/en\n\n451 European Institute for Gender Equality EIGE (https://eige.europa.eu/ ) Gender Statistics Database, Work and Labour market, https://eige.europa.eu/gender-statistics/dgs, Gender Equality Index, e.g. index of digitalisation in the world of work (2020)\n\n452 European Chemical Agency ECHA (https://echa.europa.eu/home) Exposure scenario examples\n\n*453 European Centre for Disease Prevention and Control, https://www.ecdc.europa.eu/en*\n\n454 European Maritime Safety Agency EMSA (http://www.emsa.europa.eu/ ), Section on Safety and Security http://www.emsa.europa.eu/we-do/safety.html\n\n455 Fundamental Rights Agency FRA, https://fra.europa.eu/en, Section on 'Trafficking and labour exploitation, e.g the report from June 2021 titled: Protecting migrants in an irregular situation from labour exploitation – Role of the\n\nEmployers Sanctions Directive 456 European Monitoring Centre for Drugs and Drug Addiction EMCDDA (https://www.emcdda.europa.eu/), Section 'Best practice', Policy and practice briefings: Work places, https://www.emcdda.europa.eu/bestpractice/briefings/workplace_en\n\nQuite unknown and difficult to estimate: between one and nine percent of the employees take so-called neuro", - "page_start": 157, - "page_end": 157, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Neither the European Agency for Safety and Health at Work nor any person acting on behalf of the agency is 
responsible for the use that might be made of the following information.\n\nLuxembourg: Publications Office of the European Union, 2023\n\nPrint ISBN 978-92-9479-934-0 doi: 10.2802/26873 PDF ISBN 978-92-9479-935-7 doi: 10.2802/56459\n\n© European Agency for Safety and Health at Work, 2023\n\nReproduction is authorised provided the source is acknowledged.\n\nFor any use or reproduction of photos or other material that is not under the copyright of the European Agency for Safety and Health at Work, permission must be sought directly from the copyright holders.\n\nThe photographs used in this publication illustrate a range of work activities. They do not necessarily show good practices or compliance with legislative requirements.\n\nFor one-click access to websites and references please consult the online version of this publication https://osha.europa.eu/en/publications/occupational-safety-and-health-europe-state-and-trends-2023", - "page_start": 1, - "page_end": 1, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "**'Result'**: any intended outcome of the *implementation of the FWC*, whatever its form or nature. A *result* may be further defined in this FWC as a deliverable. A *result* may, in addition to newly created materials produced specifically for the contracting authority by the contractor or at its request, also include *pre-existing materials*;\n\n**'Specific contract'**: a contract implementing the FWC and specifying details of a service to be provided;\n\n**'Supplier portal'**: the *e-PRIOR* portal, which allows the contractor to exchange electronic business documents, such as invoices, through a graphical user interface.\n\n# **II.2. 
Roles and responsibilities in the event of a joint tender**\n\nIn the event of a joint tender submitted by a group of economic operators and where the group does not have legal personality or legal capacity, one member of the group is appointed as leader of the group.\n\n# **II.3. Severability**\n\nEach provision of this FWC is severable and distinct from the others. If a provision is or becomes illegal, invalid or unenforceable to any extent, it must be severed from the remainder of the FWC. This does not affect the legality, validity or enforceability of any other provisions of the FWC, which continue in full force and effect. The illegal, invalid or unenforceable provision must be replaced by a legal, valid and enforceable substitute provision which corresponds as closely as possible with the actual intent of the parties under the illegal, invalid or unenforceable provision. The replacement of such a provision must be made in accordance with Article II.11. The FWC must be interpreted as if it had contained the substitute provision as from its entry into force.\n\n# **II.4. Provision of services**\n\n- **II.4.1** Signature of the FWC does not guarantee any actual purchase. The contracting authority is bound only by specific contracts implementing the FWC.\n- **II.4.2** The contractor must provide services of high quality standards, in accordance with the state of the art in the industry and the provisions of this FWC, in particular the tender specifications and the terms of its tender. Where the contracting authority has the right to make modifications to the *results*, they must be delivered in a format and with the necessary information which effectively allow such modifications to be made in a convenient manner.\n- **II.4.3** The contractor must comply with the minimum requirements provided for in the tender specifications. 
This includes compliance with applicable obligations under environmental, social and labour law established by Union law, national law and collective agreements or by the international environmental, social and labour law provisions listed in Annex X to Directive 2014/24/EU43, compliance with data protection obligations resulting from Regulation (EU) 2016/67954 and Regulation (EU) 2018/17255.\n\n3 OJ L 94 of 28.03.2014, p. 65\n\n4 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, OJ L 119, 4.5.2016, p. 1, https://eurlex.europa.eu/legalcontent/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG\n\n5 Regulation (EU) 2018/1725 of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement", - "page_start": 14, - "page_end": 14, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "# **I.9. Data controller**\n\n# **I.9.1 Processing of personal data by the contracting authority**\n\nFor the purpose of Article II.9.1,\n\n(a) the data controller is ECHA;\n\n(b) the data protection notice is available at https://www.echa.europa.eu/documents/10162/13612/annex2_privacy_statement_procu rement_procedures_en.pdf/ed81cf95-4056-4bdf-adb5-c0f1bd0d57a5\n\n#### **I.9.2 Processing of personal data by the contractor**\n\nThis clause is not applicable to this FWC.\n\n# **I.10. Exploitation of the results of the FWC**\n\n#### **I.10.1. 
Detailed list of modes of exploitation of the results**\n\nIn accordance with Article II.13.1 whereby the contracting authority acquires ownership of the *results* as defined in this FWC, including the tender specifications, these *results* may be used for any of the following modes of exploitation:\n\n(a) use for its own purposes:\n\n- making available to the staff of the contracting authority;\n- making available to the persons and entities working for the contracting authority or cooperating with it, including contractors, subcontractors whether legal or natural persons, Union institutions, agencies and bodies, Member States' institutions;\n- installing, uploading, processing;\n- arranging, compiling, combining, retrieving;\n- copying, reproducing in whole or in part and in unlimited number of copies.\n\n(b) distribution to the public in hard copies, in electronic or digital format, on the internet including social networks as a downloadable or non-downloadable file;\n\n(c) communication through press information services;\n\n(d) inclusion in widely accessible databases or indexes, such as via 'open access' or 'open data' portals, or similar repositories, whether freely accessible or accessible only upon subscription;\n\n(e) modifications by the contracting authority or by a third party in the name of the contracting authority, including:\n\n- shortening;\n- summarising;", - "page_start": 8, - "page_end": 8, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "476 E European Agency for Safety and Health at Work,2013: European Risk Observatory, Analysis of the determinants of workplace occupational safety and health practice in a selection of EU Member States, https://osha.europa.eu/en/publications/reports/analysis-determinants-workplace-OSH-in-EU\n\n477 European Agency for Safety and Health at Work, 2019: The value of OSH and the societal costs of workrelated injuries and disease, Luxembourg;\n\n478 E European Agency for Safety and Health at 
Work, 2021: Improving compliance with occupational safety and health regulations: an overarching review\n\n479 Walters D, Johnstone R, Bluff E, Limborg HJ, Gensby U.: Improving compliance with occupational safety and health regulations: an overarching review EU-OSHA, 2021. Improving compliance with occupational safety and health regulations\n\n480 ILO and integration of OSH Into decent work https://www.ilo.org/global/topics/dw4sd/themes/osh/lang- en/index.htm\n\n481 Dijk, F., Yohama Caraballo-Arias, Y.: Where to Find Evidence-Based Information on Occupational Safety and Health? https://www.annalsofglobalhealth.org/articles/10.5334/aogh.3131/\n\nTrade union position, one example: Vogel L (2014), The point of view of the European trade unions: It is urgent to revitalise the EU occupational health and safety policy, http://www.osha.mddsz.gov.si/.../Laurent_VOGEL_EN.pdf Employer position, one example: Safer and healthier work for all - Modernisation of the EU occupational safety and health legislation and policy,\n\n482 Eurofound; Labour market change New forms of employment: 2020 update\n\n483 Eurofound, 2020: Working conditions in sectors, Publications Office of the European Union, Luxembourg, doi:10.2806/024695, p. 
41\n\n484 Norway, STAMI: https://noa.stami.no/ National monitoring of work environment (National overvåkning af arbeidmiljø)\n\n485 Detailed Action Plan for the 4 Main Strategies to Create Safe Workplaces (update from 2020) https://kosha.or.kr/english/publications/Resources.do?mode=view&articleNo=277001&article.offset=0&articleLi mit=10\n\n486 Sakurai, H.: Occupational Safety and Health in Japan: Current Situations and the Future https://www.jstage.jst.go.jp/article/indhealth/50/4/50_MS1375/_pdf/-char/ja\n\n487 Occupational Safety and Health Administration Ministry of Labor, Republic of China (Taiwan): National Occupational Safety and Health Profile of Taiwan, 2014, Chapter 8 National Occupational Safety and Health Profile of Taiwan\n\n488 See: https://www.mom.gov.sg/workplace-safety-and-health/wsh-reports-and-statistics\n\n489 E.g.: National Academies of Sciences, Engineering, and Medicine. 2018. A Smarter National Surveillance System for Occupational Safety and Health in the 21st Century. Washington, DC: The National Academies Press. https://doi.org/10.17226/24835, 2018\n\n490 E.g.: Publications from the Association of Workers' Compensation Boards of Canada, https://awcbc.org/en/ 491 Australian Safety and Compensation Council, Report on indicators for occupational disease*,* Australian Government, 2006, p1-45\n\n492 E.g.: https://data.worksafe.govt.nz/\n\n493 International Labour Organisation ILO (no publishing date available). Decent work: Measuring decent work. http://www.ilo.org/integration/themes/mdw/lang--en/index.htm\n\n494 Country profiles on Occupational Safety and Health, https://www.ilo.org/safework/countries/lang- en/index.htm\n\n495 Work Health Organisation WHO (2011). 
Global Health Observatory: WHO indicator registry., from: http://www.who.int/gho/indicator_registry/en/index.html\n\n496 United Nations: Sustainable Development Goals, https://sustainingdevelopment.com/sdg8-indicators/ 497 UNECE, 2010: Measuring Quality of Employment, https://unece.org/statistics/publications/measuring-qualityemployment\n\n498https://ec.europa.eu/eurostat/web/labour-market/quality-of-employment/database\n\n499 Eurostat overview on their data related to quality of employment https://ec.europa.eu/eurostat/web/labourmarket/quality-of-employment\n\n500 Andersen, J. H., et al., 2019: Systematic literature review on the effects of occupational safety and health (OSH) interventions at the workplace. Scandinavian Journal of Work Environment and Health, 45(2): 103-113\n\nElsler D. et al: A review of case studies evaluating economic incentives to promote occupational safety and health. Scand J Work Environ Health 2010; 36: 289–298", - "page_start": 159, - "page_end": 159, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "3. The contracting authority may suspend the time limit for payment specified in point 2 in accordance with Article II.21.7. Once the suspension is lifted, the contracting authority shall give its approval and pay within the remainder of the time-limit indicated in point 2 unless it rejects partially or fully the submitted documents.\n\n# **I.6.4. Performance guarantee**\n\nPerformance guarantee is not applicable to this FWC.\n\n# **I.6.5. Retention money guarantee**\n\nRetention money guarantee is not applicable to this FWC.\n\n# **I.7. Bank account**\n\nPayments must be made to the contractor's (or leader's in the case of a joint tender) bank account denominated in euro, identified as follows:\n\nName of bank:\n\nFull address of branch:\n\nExact denomination of account holder:\n\nFull account number including bank codes:\n\n[IBAN1 code:]\n\n# **I.8. 
Communication details**\n\nFor the purpose of this FWC, communications must be sent to the following addresses:\n\nContracting authority:\n\nEuropean Chemicals Agency Directorate and Unit D3, Risk Management I Telakkakatu 6 00150 Helsinki Finland E-mail: [insert functional mailbox]\n\nContractor (or leader in the case of a joint tender):\n\n[*Full name*] [*Function*] [*Company name*] [*Full official address*] E-mail: [*complete*]\n\nBy derogation from this Article, different contact details for the contracting authority or the contractor may be provided in specific contracts.\n\n1 BIC or SWIFT code for countries with no IBAN code", - "page_start": 7, - "page_end": 7, - "source_file": "EN-Draft FWC for services 0142.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Draft FWC for services 0142.pdf", - "query": "How can I get compensation if the european chemical agency terminates a contract we have ?", - "target_page": 11, - "target_passage": "If the FWC or a specific contract is terminated: a) neither party is entitled to compensation", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "- I.4.2. Period of provision of the services\nThe period for the provision of the services starts to run from the date on which the specific contract is signed by the last party.\n\n- I.4.3. 
Implementation of FWC in cascade\nThe FWC is implemented as follows: the contracting authority orders services by sending a request for offer for a specific contract to the contractor who is ranked first in the cascade.\n\nWithin 5 working days (unless otherwise stated in the request for offer), the contractor must either:\n\n- (a) send the specific tender back to the contracting authority; or\n- (b) send an explanation of why it cannot accept the order.\n\nIf the contractor does not accept the order or fails to observe the deadline or to submit an acceptable offer for the Agency, or if it is in a situation of conflicting interests that may negatively affect the *performance of the specific contract* (see Article II.7), the contracting authority may place the order with the next contractor on the cascade.\n\nIf the contractor repeatedly refuses to accept requests for offer or repeatedly fails to send them back on time, the contractor may be considered in breach of its obligations under this FWC as set out in Article II.18.1 (c).\n\nWithin a maximum of 5 working days of a specific contract or order form being sent by the Agency to the contractor, the Agency shall receive it back, duly signed and dated. The period allowed for the execution of the tasks shall start to run on the date of signature of the specific contract or order form by both parties.\n\n# **I.5. Prices**\n\n# **I.5.1. Maximum amount of the FWC and maximum prices**\n\nThe maximum amount covering all purchases under this FWC, including all renewals and reimbursement of expenses is **EUR 1 000 000** (one million). However, this does not bind the contracting authority to purchase for the maximum amount.\n\nThe maximum unit prices of the services are:\n\n| Senior experts: | [ | ] EUR per man-day |\n| --- | --- | --- |\n| Experts: | [ | ] EUR per man-day |\n\n# **I.5.2. 
Price revision index**\n\nPrice revision is determined by the formula set out in Article II.20 and using the trend in the harmonised indices of consumer prices (HICP) 'Euro area (19 countries)' published at http://ec.europa.eu/eurostat/web/hicp/data/database under HICP (2015 = 100) - monthly data (index) (prc_hicp_midx).]\n\n# **I.5.3. Reimbursement of expenses**\n\nIn addition to the maximum price specified in each specific contract, if applicable, the contracting authority shall reimburse the following in accordance with Article II.22:", - "page_start": 5, - "page_end": 5, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "#### **I.10.3. Provision of list of pre-existing rights and documentary evidence**\n\nThe contractor must provide the contracting authority with a list of *pre-existing rights* as set out in Article II.13.4 together with the invoice for payment of the balance at the latest.\n\n#### **I.11. Termination by either party2**\n\nEither party may terminate the FWC and/or the FWC and specific contracts by sending *formal notification* to the other party with three months written notice.\n\nIf the FWC or a specific contract is terminated:\n\n- a) neither party is entitled to compensation;\nb) the contractor is entitled to payment only for the services provided before termination takes effect.\n\nThe second, third and fourth paragraphs of Article II.18.4 apply.\n\n#### **I.12. Applicable law and settlement of disputes**\n\n- **I.12.1** The FWC is governed by Union law, complemented, where necessary, by the law of Finland.\n- **I.12.2** The courts of Finland have exclusive jurisdiction over any dispute regarding the interpretation, application or validity of the FWC.\n\n#### **I.13. Interinstitutional FWC**\n\nNot applicable\n\n#### **I.14. Service provided on the premises of the contracting authority**\n\nNot applicable.\n\n#### **I.15. 
Other special conditions**\n\nElectronic documents exchange\n\nIt is intended that the documents exchange (e.g. invoices, deliverables) between the Agency and the Contractor will have to be carried out via electronic means.\n\nAt the request of the Agency, the use of such electronic applications will become mandatory, upon mutual agreement, during the performance of the contract, at no additional cost for the Agency.\n\n2 This article may be deleted on the basis of a risk assessment taking into account the specific market and the need for business continuity", - "page_start": 10, - "page_end": 10, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "quality or continuity of the services. The parties may agree to draw up a transition plan detailing the contractor's assistance unless such plan is already detailed in other contractual documents or in the tender specifications. The contractor must provide such assistance at no additional cost, except if it can demonstrate that it requires substantial additional resources or means, in which case it must provide an estimate of the costs involved and the parties will negotiate an arrangement in good faith.\n\n#### **II.18.4. Effects of termination**\n\nThe contractor is liable for damage incurred by the contracting authority as a result of the termination of the FWC or a specific contract, including the additional cost of appointing and contracting another contractor to provide or complete the services, except if the damage is a result of a termination in accordance with Article II.18.1(j), (k) or (l) or Article II.18.2. 
The contracting authority may claim compensation for such damage.\n\nThe contractor is not entitled to compensation for any loss resulting from the termination of the FWC or a specific contract, including loss of anticipated profits, unless the loss was caused by the situation specified in Article II.18.2.\n\nThe contractor must take all appropriate measures to minimise costs, prevent damage and cancel or reduce its commitments.\n\nWithin 60 days of the date of termination, the contractor must submit any report, deliverable or *result* and any invoice required for services that were provided before the date of termination.\n\nIn the case of joint tenders, the contracting authority may terminate the FWC or a specific contract with each member of the group separately on the basis of points (d), (e) or (g) of Article II.18.1, under the conditions set out in Article II.11.2\n\n# **II.19. Invoices, value added tax and e-invoicing**\n\n#### **II.19.1. Invoices and value added tax**\n\nInvoices must contain the contractor's (or leader's in the case of a joint tender) identification data, the amount, the currency and the date, as well as the FWC reference and reference to the specific contract.\n\nInvoices must indicate the place of taxation of the contractor (or leader in the case of a joint tender) for value added tax (VAT) purposes and must specify separately amounts not including VAT and amounts including VAT.\n\nThe contracting authority is exempt from all taxes and duties, including VAT, in accordance with Articles 3 and 4 of the Protocol 7 of the Treaty on the Functioning of the European Union on the privileges and immunities of the European Union.\n\nThe contractor (or leader in the case of a joint tender) must complete the necessary formalities with the relevant authorities to ensure that the supplies and services required for *implementation of the FWC* are exempt from taxes and duties, including VAT.\n\n#### **II.19.2. 
E-invoicing**\n\nIf provided for in the special conditions, the contractor (or leader in the case of a joint tender) submits invoices in electronic format if the conditions regarding electronic signature specified by Directive 2006/112/EC on VAT are fulfilled, i.e. using a qualified", - "page_start": 31, - "page_end": 31, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "*a specific contract* or any part of it:\n\n- (a) if the procedure for awarding the FWC or a specific contract or the *implementation of the FWC* proves to have been subject to *irregularities, fraud or breach of obligations*;\n- (b) in order to verify whether the presumed *irregularities, fraud* or *breach of obligations* have actually occurred.\n\nThe contracting authority must *formally notify* the contractor of the suspension and the reasons for it. Suspension takes effect on the date of *formal notification*, or at a later date if the *formal notification* so provides.\n\nThe contracting authority must *notify* the contractor as soon as the verification is completed whether:\n\n- (a) it is lifting the suspension; or\n- (b) it intends to terminate the FWC or a specific contract under Article II.18.1(f) or (j).\n\nThe contractor is not entitled to compensation for suspension of any part of the FWC or a specific contract.\n\nThe contracting authority may in addition suspend the time allowed for payments in accordance with Article II.21.7.\n\n# **II.18. Termination of the FWC**\n\n#### **II.18.1. 
Grounds for termination by the contracting authority**\n\nThe contracting authority may terminate the FWC or any on-going specific contract in the following circumstances:\n\n- (a) if provision of the services under an on-going specific contract has not actually started within 15 days of the scheduled date and the contracting authority considers that the new date proposed, if any, is unacceptable, taking into account Article II.11.2;\n- (b) if the contractor is unable, through its own fault, to obtain any permit or licence required for *implementation of the FWC*;\n- (c) if the contractor does not implement the FWC or perform the specific contract in accordance with the tender specifications or *request for service* or is in breach of another substantial contractual obligation or repeatedly refuses to sign specific contracts. Termination of three or more specific contracts in these circumstances also constitutes grounds for termination of the FWC;\n- (d) if the contractor or any person that assumes unlimited liability for the debts of the contractor is in one of the situations provided for in points (a) and (b) of Article 136(1) of the Financial Regulation6;\n- (e) if the contractor or any *related person* is in one of the situations provided for in points (c) to (h) of Article 136(1) or to Article 136(2) of the Financial Regulation;\n- (f) if the procedure for awarding the FWC or the *implementation of the FWC* prove to have been subject to *irregularities*, *fraud* or *breach of obligations*;\n\n6 Regulation (EU, Euratom) 2018/1046 of the European Parliament and of the Council of 18 July 2018 on the financial rules applicable to the general budget of the Union, amending Regulations (EU) No 1296/2013, (EU) No 1301/2013, (EU) No 1303/2013, (EU) No 1304/2013, (EU) No 1309/2013, (EU) No 1316/2013, (EU) No 223/2014, (EU) No 283/2014, and Decision No 541/2014/EU and repealing Regulation (EU, Euratom) No 966/2012, OJ L 193 of 30.7.2018, p.1 
https://eur-lex.europa.eu/legalcontent/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG", - "page_start": 29, - "page_end": 29, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- (g) if the contractor does not comply with applicable obligations under environmental, social and labour law established by Union law, national law, collective agreements or by the international environmental, social and labour law provisions listed in Annex X to Directive 2014/24/EU;\n- (h) if the contractor is in a situation that could constitute a *conflict of interest* or a *professional conflicting interest* as referred to in Article II.7;\n- (i) if a change to the contractor's legal, financial, technical, organisational or ownership situation is likely to substantially affect the *implementation of the FWC* or substantially modify the conditions under which the FWC was initially awarded;\n- (j) in the event of *force majeure*, where either resuming implementation is impossible or the necessary ensuing amendments to the FWC or a specific contract would mean that the tender specifications are no longer fulfilled or result in unequal treatment of tenderers or contractors;\n- (k) if the needs of the contracting authority change and it no longer requires new services under the FWC; in such cases ongoing specific contracts remain unaffected;\n- (l) if the termination of the FWC with one or more of the contractors means that the multiple FWC with reopening of competition no longer has the minimum required level of competition;\n- (m) if the contractor is in breach of the data protection obligations resulting from Article II.9.2;\n- (n) if the contractor does not comply with the applicable data protection obligations resulting from Regulation (EU) 2016/679.\n\n# **II.18.2. 
Grounds for termination by the contractor**\n\nThe contractor may terminate the FWC or any on-going specific contract if the contracting authority fails to comply with its obligations, in particular the obligation to provide the information needed for the contractor to implement the FWC or to perform a specific contract as provided for in the tender specifications.\n\n# **II.18.3. Procedure for termination**\n\nA party must *formally notify* the other party of its intention to terminate the FWC or a specific contract and the grounds for termination.\n\nThe other party has 30 days following the date of receipt to submit observations, including the measures it has taken or will take to continue fulfilling its contractual obligations. Failing that, the decision to terminate becomes enforceable the day after the time limit for submitting observations has elapsed.\n\nIf the other party submits observations, the party intending to terminate must *formally notify* it either of the withdrawal of its intention to terminate or of its final decision to terminate.\n\nIn the cases referred to in points (a) to (d), (g) to (i), (k) and (l) of Article II.18.1 and in Article II.18.2, the date on which the termination takes effect must be specified in the *formal notification*.\n\nIn the cases referred to in points (e), (f) and (j) of Article II.18.1, the termination takes effect on the day following the date on which the contractor receives *notification* of termination.\n\nIn addition, at the request of the contracting authority and regardless of the grounds for termination, the contractor must provide all necessary assistance, including information, documents and files, to allow the contracting authority to complete, continue or transfer the services to a new contractor or internally, without interruption or adverse effect on the", - "page_start": 30, - "page_end": 30, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "# **II.16. 
Reduction in price**\n\n# **II.16.1. Quality standards**\n\nIf the contractor fails to provide the service in accordance with the FWC or a specific contract ('unperformed obligations') or if it fails to provide the service in accordance with the expected quality levels specified in the tender specifications ('low quality delivery'), the contracting authority may reduce or recover payments proportionally to the seriousness of the unperformed obligations or low quality delivery. This includes in particular cases where the contracting authority cannot approve a *result*, report or deliverable as defined in Article I.6 after the contractor has submitted the required additional information, correction or new version.\n\nA reduction in price may be imposed together with liquidated damages under the conditions of Article II.15.\n\n# **II.16.2. Procedure**\n\nThe contracting authority must *formally notify* the contractor of its intention to reduce payment and the corresponding calculated amount.\n\nThe contractor has 30 days following the date of receipt to submit observations. Failing that, the decision becomes enforceable the day after the time limit for submitting observations has elapsed.\n\nIf the contractor submits observations, the contracting authority, taking into account the relevant observations, must *notify* the contractor:\n\n(a) of the withdrawal of its intention to reduce payment; or\n\n(b) of its final decision to reduce payment and the corresponding amount,.\n\n# **II.16.3. Claims and liability**\n\nAny reduction in price does not affect the contractor's actual or potential liability or the contracting authority's rights under Article II.18.\n\n# **II.17. Suspension of the implementation of the FWC**\n\n# **II.17.1. 
Suspension by the contractor**\n\nIf the contractor is affected by *force majeure*, it may suspend the provision of the services under a specific contract.\n\nThe contractor must immediately *notify* the contracting authority of the suspension. The *notification* must include a description of the *force majeure* and state when the contractor expects to resume the provision of services.\n\nThe contractor must *notify* the contracting authority as soon as it is able to resume *performance of the specific contract*, unless the contracting authority has already terminated the FWC or the specific contract.\n\n# **II.17.2. Suspension by the contracting authority**\n\nThe contracting authority may suspend the *implementation of the FWC* or *performance of*", - "page_start": 28, - "page_end": 28, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "entities working for it or cooperating with it, including contractors and subcontractors, whether legal or natural persons, but only for the purpose of their mission for the contracting authority;\n\n(b) if the *result* is a \"document\" such as a report or a study, and it is meant to be published, the existence of *pre-existing materials* in the *result* may not prevent the publication of the document, its translation or its \"reuse\", it being understood however that the \"reuse\" may only be made of the *result* as a whole and not of the *pre-existing materials* taken separately from the *result*; for the sake of this provision, \"reuse\" and \"document\" have the meaning given by the Commission Decision of 12 December 2011 on the reuse of Commission documents (2011/833/EU).\n\nAll *pre-existing rights* are licensed to the contracting authority from the moment the *results* are delivered and approved by the contracting authority.\n\nThe licensing of *pre-existing rights* to the contracting authority under this FWC covers all territories worldwide and is valid for the duration of intellectual property 
rights protection.\n\nThe payment of the price as set out in the specific contracts is deemed to also include any fees payable to the contractor in relation to the licensing of *pre-existing rights* to the contracting authority, including for all forms of exploitation and of use of the *results*.\n\nWhere *implementation of the FWC* requires that the contractor uses *pre-existing materials* belonging to the contracting authority, the contracting authority may request that the contractor signs an adequate licence agreement. Such use by the contractor will not entail any transfer of rights to the contractor and is limited to the needs of this FWC.\n\n# **II.13.3. Exclusive rights**\n\nThe Contracting Authority acquires the following exclusive rights:\n\n- (a) reproduction: the right to authorise or prohibit direct or indirect, temporary or permanent reproduction of the *results* by any means (mechanical, digital or other) and in any form, in whole or in part;\n- (b) communication to the public: the exclusive right to authorise or prohibit any display, performance or communication to the public, by wire or wireless means, including the making available to the public of the *results* in such a way that members of the public may access them from a place and at a time individually chosen by them; this also includes the communication on Internet and broadcasting by cable or by satellite;\n- (c) distribution: the exclusive right to authorise or prohibit any form of distribution of *results* or copies of the *results* to the public, by sale or otherwise;\n- (d) rental: the exclusive right to authorise or prohibit rental or lending of the *results* or of copies of the *results*;\n- (e) adaptation: the exclusive right to authorise or prohibit any modification of the *results*;\n- (f) translation: the exclusive right to authorise or prohibit any translation, adaptation, arrangement, creation of derivative works based on the *results*, and any other alteration of the *results*, 
subject to the respect of moral rights of authors, where applicable;\n- (g) where the *results* are or include a database: the exclusive right to authorise or prohibit the extraction of all or a substantial part of the contents of the database to another medium by any means or in any form; and the exclusive right to authorise or prohibit the re-utilization of all or a substantial part of the contents of the database by the distribution of copies, by renting, by on-line or other forms of transmission;\n- (h) where the *results* are or include a patentable subject-matter: the right to register them as a patent and to further exploit such patent to the fullest extent;", - "page_start": 23, - "page_end": 23, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "3. The contracting authority may suspend the time limit for payment specified in point 2 in accordance with Article II.21.7. Once the suspension is lifted, the contracting authority shall give its approval and pay within the remainder of the time-limit indicated in point 2 unless it rejects partially or fully the submitted documents.\n\n# **I.6.4. Performance guarantee**\n\nPerformance guarantee is not applicable to this FWC.\n\n# **I.6.5. Retention money guarantee**\n\nRetention money guarantee is not applicable to this FWC.\n\n# **I.7. Bank account**\n\nPayments must be made to the contractor's (or leader's in the case of a joint tender) bank account denominated in euro, identified as follows:\n\nName of bank:\n\nFull address of branch:\n\nExact denomination of account holder:\n\nFull account number including bank codes:\n\n[IBAN1 code:]\n\n# **I.8. 
Communication details**\n\nFor the purpose of this FWC, communications must be sent to the following addresses:\n\nContracting authority:\n\nEuropean Chemicals Agency Directorate and Unit D3, Risk Management I Telakkakatu 6 00150 Helsinki Finland E-mail: [insert functional mailbox]\n\nContractor (or leader in the case of a joint tender):\n\n[*Full name*] [*Function*] [*Company name*] [*Full official address*] E-mail: [*complete*]\n\nBy derogation from this Article, different contact details for the contracting authority or the contractor may be provided in specific contracts.\n\n1 BIC or SWIFT code for countries with no IBAN code", - "page_start": 7, - "page_end": 7, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- **II.4.4** The contractor must obtain any permit or licence required in the State where the services are to be provided.\n- **II.4.5** All periods specified in the FWC are calculated in calendar days, unless otherwise specified.\n- **II.4.6** The contractor must not present itself as a representative of the contracting authority and must inform third parties that it is not part of the European public service.\n- **II.4.7** The contractor is responsible for the *personnel* who carry out the services and exercises its authority over its *personnel* without interference by the contracting authority. 
The contractor must inform its *personnel* that:\n\t- (a) they may not accept any direct instructions from the contracting authority; and\n\t- (b) their participation in providing the services does not result in any employment or contractual relationship with the contracting authority.\n- **II.4.8** The contractor must ensure that the *personnel* implementing the FWC and any future replacement personnel possess the professional qualifications and experience required to provide the services, as the case may be on the basis of the selection criteria set out in the tender specifications.\n- **II.4.9** At the contracting authority's reasoned request, the contractor must replace any member of *personnel* who:\n\t- (a) does not have the expertise required to provide the services; or\n\t- (b) has caused disruption at the premises of the contracting authority.\n\nThe contractor bears the cost of replacing its *personnel* and is responsible for any delay in providing the services resulting from the replacement of *personnel*.\n\n- **II.4.10**The contractor must record and report to the contracting authority any problem that affects its ability to provide the services. The report must describe the problem, state when it started and what action the contractor is taking to resolve it.\n# **II.5. Communication between the parties**\n\n# **II.5.1. 
Form and means of communication**\n\nAny communication of information, notices or documents under the FWC must:\n\n- (a) be made in writing in paper or electronic format in the language of the contract;\n- (b) bear the FWC number and, if applicable, the specific contract number;\n- (c) be made using the relevant communication details set out in Article I.8; and\n- (d) be sent by mail, email or, for the documents specified in the special conditions, via *e-PRIOR*.\n\nIf a party requests written confirmation of an e-mail within a reasonable time, the other party must provide an original signed paper version of the communication as soon as possible.\n\nof such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC, OJ L 295/39, 21.11.2018, https://eur-lex.europa.eu/legalcontent/EN/TXT/PDF/?uri=CELEX:32018R1725&from=EN", - "page_start": 15, - "page_end": 15, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- **II.14.3** The parties must take all necessary measures to limit any damage due to *force majeure*.\n# **II.15. Liquidated damages**\n\n# **II.15.1. Delay in delivery**\n\nIf the contractor fails to perform its contractual obligations within the applicable time limits set out in this FWC, the contracting authority may claim liquidated damages for each day of delay using the following formula:\n\n0.3 x (*V/d*)\n\nwhere:\n\n*V* is the price of the relevant purchase or deliverable or *result*;\n\n*d* is the duration specified in the relevant specific contract for delivery of the relevant purchase or deliverable or *result* or, failing that, the period between the date specified in Article I.4.2 and the date of delivery or performance specified in the relevant specific contract, expressed in days.\n\nLiquidated damages may be imposed together with a reduction in price under the conditions laid down in Article II.16.\n\n# **II.15.2. 
Procedure**\n\nThe contracting authority must *formally notify* the contractor of its intention to apply liquidated damages and the corresponding calculated amount.\n\nThe contractor has 30 days following the date of receipt to submit observations. Failing that, the decision becomes enforceable the day after the time limit for submitting observations has elapsed.\n\nIf the contractor submits observations, the contracting authority, taking into account the relevant observations, must *notify* the contractor:\n\n(a) of the withdrawal of its intention to apply liquidated damages; or\n\n(b) of its final decision to apply liquidated damages and the corresponding amount.\n\n# **II.15.3. Nature of liquidated damages**\n\nThe parties expressly acknowledge and agree that any amount payable under this Article is not a penalty and represents a reasonable estimate of fair compensation for the damage incurred due to failure to provide the services within the applicable time limits set out in this FWC.\n\n# **II.15.4. 
Claims and liability**\n\nAny claim for liquidated damages does not affect the contractor's actual or potential liability or the contracting authority's rights under Article II.18.", - "page_start": 27, - "page_end": 27, - "source_file": "EN-Draft FWC for services 0142.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Draft FWC for services 0142.pdf", - "query": "According to the european chemical agency contracts, what is considers a grave professional misconduct ?", - "target_page": 14, - "target_passage": "'Grave professional misconduct': a violation of applicable laws or regulations or ethical standards of the profession to which a contractor or a related person belongs, including any conduct leading to sexual or other exploitation or abuse, or any wrongful conduct of the contractor or a related person which has an impact on its professional credibility where such conduct denotes wrongful intent or gross negligence.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "**The European Agency for Safety and Health at Work (EU-OSHA)** contributes to making Europe a safer, healthier and more productive place to work. The Agency researches, develops and distributes reliable, balanced and impartial safety and health information and organises pan-European awareness-raising campaigns. 
Set up by the European Union in 1994 and based in Bilbao, Spain, the Agency brings together representatives from the European Commission, Member State governments and employers' and workers' organisations, as well as leading experts in each of the EU Member States and beyond.\n\n#### **European Agency for Safety and Health at Work (EU-OSHA)**\n\nSantiago de Compostela 12, 5th floor 48003 Bilbao Spain Tel: (+34) 944 358 400 Email: information@osha.europa.eu\n\n#### **https://osha.europa.eu**", - "page_start": 163, - "page_end": 163, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Union budget, ii) the non-disclosure of information in violation of a specific obligation, with the same effect or iii) the misapplication of such funds or assets for purposes other than those for which they were originally granted, which damages the Union's financial interests;\n\n**'Grave professional misconduct':** a violation of applicable laws or regulations or ethical standards of the profession to which a contractor or a related person belongs, including any conduct leading to sexual or other exploitation or abuse, or any wrongful conduct of the contractor or a related person which has an impact on its professional credibility where such conduct denotes wrongful intent or gross negligence.\n\n**'Implementation of the FWC'**: the purchase of services envisaged in the FWC through the signature and *performance of specific contracts*;\n\n**'Interface control document'**: the guideline document which lays down the technical specifications, message standards, security standards, checks of syntax and semantics, etc. to facilitate machine-to-machine connection. 
This document is updated on a regular basis;\n\n**'Irregularity'**: any infringement of a provision of Union law resulting from an act or omission by an economic operator, which has, or would have, the effect of prejudicing the Union's budget.\n\n**'Notification'** (or 'notify'): form of communication between the parties made in writing including by electronic means;\n\n**'Order form'**: a simplified form of specific contract by which the contracting authority orders services under this FWC;\n\n**'Performance of a specific contract'**: the execution of tasks and delivery of the purchased services by the contractor to the contracting authority;\n\n**'Personnel'**: persons employed directly or indirectly or contracted by the contractor to implement the FWC;\n\n**'Pre-existing material'**: any material, document, technology or know-how which exists prior to the contractor using it for the production of a *result* in the *implementation of the FWC*;\n\n**'Pre-existing right'**: any industrial and intellectual property right on *pre-existing material*; it may consist in a right of ownership, a licence right and/or right of use belonging to the contractor, the *creator*, the contracting authority as well as to any other third parties;\n\n**'Professional conflicting interest'**: a situation in which the contractor's previous or ongoing professional activities affect its capacity to implement the FWC or to perform a specific contract to an appropriate quality standard.\n\n**'Related person'**: any natural or legal person who is a member of the administrative, management or supervisory body of the contractor, or who has powers of representation, decision or control with regard to the contractor;\n\n**'Request for services'**: a document from the contracting authority requesting that the contractors in a multiple FWC with re-opening of competition provide a specific tender for services whose terms are not entirely defined under the FWC;", - "page_start": 13, - "page_end": 
13, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "Obviously, **most informal, and** — in particular — **irregular and illegal types of work do not respect** legal OSH obligations — and at the same time legal monitoring obligations also fail. The EU Fundamental Rights Agency (FRA) published several case studies and examples in a series called 'Severe labour exploitation reports; 359 these studies provide an insight into most irregular working conditions.\n\n**Undeclared work** is defined as paid and lawful (not criminal) activity but undeclared to public authorities. ('paid activities that are lawful as regards their nature but not declared to public authorities, taking into account the differences in the regulatory systems of Member States'.)\n\nIn 2018, the European Commission estimated the scale of **undeclared work** in the EU. According to this estimate, on average, 11.6% of total labour input in the private sector is undeclared, and undeclared work constitutes on average 16.4% of gross value added. The main sectors according to the Special Flash Eurobarometer from 2019360 are personal services (childcare/elderly care/cleaning) followed by construction and hospitality services.361 The 'European Platform tackling undeclared work' provides fact sheets about the type and quantity of undeclared work in all EU Member States.362\n\nThe compliance of enterprises with OSH regulations is **supervised by state institutions, mainly the Labour Inspectorates**.363 At EU level, the SLIC developed common principles for their work. 
These common principles aim at harmonising their work and facilitate collaboration; they include planning and monitoring, inspectors' competencies and independence, prevention, protection, and assistance and guidance for inspectors, and internal and external communication.364\n\nPractically all labour inspections in the EU Member States worked in the past two decades on **organisational and strategic measures to achieve an effective and broad impact**, and also to better adapt to new and emerging risks.365 To enhance the level of implementation in terms of coverage and quality, many labour inspections developed **smart enforcement** and **supervision concepts**.366\n\nOn average, two million visits per year were made by labour inspectorates, in approximately 22 million businesses in the EU, in the decade 2010-2020, with a steady decline over the years.367 .368 Many enterprises that are regarded as low-risk establishments have never been inspected by a labour inspectorate. Often more than one inspection is done in large enterprises, for example, as a follow-up inspection; there might also be more than one annual inspection in enterprises with high risks. The labour inspection is also tasked to supervise enterprises with many separated sites or establishments, for example, construction companies and shops of supermarket chains. The visit of one headquarter or one shop cannot be regarded as a visit of a representative selection of enterprises' locations, which possibly show different levels of safety and health.\n\nIn the decade between 2000 and 2010, the development of the resources of labour inspections show a mixed picture, **some countries extended the capacities of labour inspections, others cut resources**. 
369 For the period between 2010 and 2020, the European Trade Union Institute (ETUI) counted a decrease of labour inspectors and inspections in 20 of 27 Member States, a drop of 7% for inspectors and of 18% for inspections.370 Again, the picture between Member States differs but, in general, budget or staff cuts dominate. ESENER findings show that there was a significant decline between 2014 and 2019 regarding the number of visits by Labour Inspectorates.371\n\nAlthough labour inspections are at the core of supervision of working conditions, **other state authorities have similar or related tasks**, for example, regarding the control of undeclared work, checking minimum wages and social insurance contributions, and performing control of environmental or hygiene standards, of fire safety, or technical control of particularly dangerous production sites or equipment.\n\nThe **shift in working conditions towards psychosocial risks** generates new challenges for state supervision. SLIC recommends in its labour inspectors' guide for assessing the quality of risk assessments and risk management measures with regard to prevention of psychosocial risks:372\n\n*'When striving to prevent psychosocial risks, labour inspectors should take into account the fact that there is no single, across-the-board solution, and should recommend expert advice, for example, external OSH services, if needed for unusual or serious problems. 
A holistic approach is necessary in order to address psychosocial risks.'*\n\nPsychosocial risks at work were a topic in campaigns (EU-OSHA,373 European Commission,374 ILO,375 WHO, 376) in many national OSH strategies (see OSH Barometer 377), or in guiding regulations, for", - "page_start": 122, - "page_end": 122, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "OLAF's own staff or by any outside body authorised to do so on its behalf.\n\nSuch checks and audits may be initiated at any moment during the provision of the services and up to five years starting from the payment of the balance of the last specific contract issued under this FWC\n\nThe audit procedure is initiated on the date of receipt of the relevant letter sent by the contracting authority. Audits are carried out on a confidential basis.\n\n- **II.24.2** The contractor must keep all original documents stored on any appropriate medium, including digitised originals if authorised under national law, for a period of five years starting from the payment of the balance of the last specific contract issued under this FWC.\n- **II.24.3** The contractor must grant the contracting authority's staff and outside personnel authorised by the contracting authority the appropriate right of access to sites and premises where the FWC is implemented and to all the information, including information in electronic format, needed to conduct such checks and audits. The contractor must ensure that the information is readily available at the moment of the check or audit and, if so requested, that information is handed over in an appropriate format.\n- **II.24.4** On the basis of the findings made during the audit, a provisional report is drawn up. The contracting authority or its authorised representative must send it to the contractor, who has 30 days following the date of receipt to submit observations. 
The contractor must receive the final report within 60 days following the expiry of the deadline to submit observations.\n\nOn the basis of the final audit findings, the contracting authority may recover all or part of the payments made in accordance with Article II.23 and may take any other measures which it considers necessary.\n\n- **II.24.5** In accordance with Council Regulation (Euratom, EC) No 2185/96 of 11 November 1996 concerning on-the-spot checks and inspection carried out by the Commission in order to protect the European Communities' financial interests against *fraud* and other *irregularities* and Regulation (EU, Euratom) No 883/2013 of the European Parliament and of the Council of 11 September 2013 concerning investigations conducted by the European Anti-Fraud Office, the European Anti-Fraud Office may carry out investigations, including on the spot checks and inspections, to establish whether there has been *fraud*, corruption or any other illegal activity under the contract affecting the financial interests of the Union. Findings arising from an investigation may lead to criminal prosecution under national law.\nThe investigations may be carried out at any moment during the provision of the services and up to five years starting from the payment of the balance of the last specific contract issued under this FWC.\n\n- **II.24.6** The Court of Auditors, the European Public Prosecutor's Office established by Council Regulation (EU) 2017/193977 ('the EPPO') and, for the processing of personal data, the European Data Protection Supervisor have the same rights as the contracting authority, particularly right of access, for the purpose of checks,\n7 Council Regulation (EU) 2017/1939 of 12 October 2017 implementing enhanced cooperation on the establishment of the European Public Prosecutor's Office", - "page_start": 37, - "page_end": 37, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- I.4.2. 
Period of provision of the services\nThe period for the provision of the services starts to run from the date on which the specific contract is signed by the last party.\n\n- I.4.3. Implementation of FWC in cascade\nThe FWC is implemented as follows: the contracting authority orders services by sending a request for offer for a specific contract to the contractor who is ranked first in the cascade.\n\nWithin 5 working days (unless otherwise stated in the request for offer), the contractor must either:\n\n- (a) send the specific tender back to the contracting authority; or\n- (b) send an explanation of why it cannot accept the order.\n\nIf the contractor does not accept the order or fails to observe the deadline or to submit an acceptable offer for the Agency, or if it is in a situation of conflicting interests that may negatively affect the *performance of the specific contract* (see Article II.7), the contracting authority may place the order with the next contractor on the cascade.\n\nIf the contractor repeatedly refuses to accept requests for offer or repeatedly fails to send them back on time, the contractor may be considered in breach of its obligations under this FWC as set out in Article II.18.1 (c).\n\nWithin a maximum of 5 working days of a specific contract or order form being sent by the Agency to the contractor, the Agency shall receive it back, duly signed and dated. The period allowed for the execution of the tasks shall start to run on the date of signature of the specific contract or order form by both parties.\n\n# **I.5. Prices**\n\n# **I.5.1. Maximum amount of the FWC and maximum prices**\n\nThe maximum amount covering all purchases under this FWC, including all renewals and reimbursement of expenses is **EUR 1 000 000** (one million). 
However, this does not bind the contracting authority to purchase for the maximum amount.\n\nThe maximum unit prices of the services are:\n\n| Senior experts: | [ | ] EUR per man-day |\n| --- | --- | --- |\n| Experts: | [ | ] EUR per man-day |\n\n# **I.5.2. Price revision index**\n\nPrice revision is determined by the formula set out in Article II.20 and using the trend in the harmonised indices of consumer prices (HICP) 'Euro area (19 countries)' published at http://ec.europa.eu/eurostat/web/hicp/data/database under HICP (2015 = 100) - monthly data (index) (prc_hicp_midx).]\n\n# **I.5.3. Reimbursement of expenses**\n\nIn addition to the maximum price specified in each specific contract, if applicable, the contracting authority shall reimburse the following in accordance with Article II.22:", - "page_start": 5, - "page_end": 5, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "less environmentally critical processes (see for example, the principles of 'green engineering', like prevention instead of treatment of waste288).\n\n**Chemical technologies have ousted traditional materials and processes.** The United Nations' (UNEP) 'Global Chemical Outlook' 289 documents a strong growth of chemical production between 1970 and 2010. The value of the global chemical production grew from US$171 billion in 1970, to approximately US$ 5.7 trillion in 2019, roughly 33 times more.290 The EU had a share of $1.3 trillion or about 20% of the global value. In less than two decades between 2000 and 2017, the capacity doubled and grew from 1,186 million tons to 2,276 million tons.291,292\n\nThe reasons for this strong growth are: a) the **replacement of traditional materials** (wood, stone, iron and other metals, paper, natural fibres) by chemically based products (foremost plastics and multimaterial products); b) **the replacement of traditional technologies by chemical processes** (e.g. 
gluing instead of screwing of connections in metal, two-component paints); c) the development of **new products** (e.g. electronic devices, new types of batteries, nano); and d) **new applications** (e.g. specific fertilisers and pesticides).\n\nApproximately 300 million tons of synthetic chemicals were consumed in the EU in 2019, 223 million tons, or 74%, were regarded as hazardous to health.\n\n| HAZARD (Labels) | 2021 |\n| --- | --- |\n| Hazardous to health | 214.3 |\n| Carcinogenic, mutagenic and reprotoxic (CMR) health hazard | 39.9 |\n| Chronic toxic health hazard | 25.4 |\n| Very toxic health hazard | 59.2 |\n| Toxic health hazard | 35.5 |\n| Harmful health hazard | 54.5 |\n| All labels referring to: Hazardous to the environment | 169.6 |\n| Hazardous and non-hazardous - Total | 278.9 |\n\n#### **Table 29: Production and consumption of chemicals by hazard class in the EU in 2019 – Eurostat293**\n\nAccording to the detailed register data of the Swedish Chemicals Agency, 10 million tonnes of synthetic chemicals were used in Sweden in 2019 that were classified as hazardous to health and the environment (not counting petrol). That equals approximately 1 ton per citizen of such chemicals.294\n\nThe ESENER 2019 survey provides information about **sectors that reported a particularly high prevalence of dangerous substances**. The percentage of enterprises reporting handling or exposure to chemicals are: 50% in 'Manufacturing', 49% in 'Construction, waste management, and water and electricity supply', and 47% in 'Human health and social work activities'.295\n\nThe prevention of risks from the use of chemicals at workplaces is done according to extensive regulatory frameworks. The most relevant pieces of legislation at the EU level are the OSH Framework Directive, the Chemical Agents Directive, and the Carcinogens and Mutagens Directive. 
Legislation in other policy areas contributes to the reduction of risks from dangerous substances in workplaces, such as EU legislation on chemical substances and mixtures (CLP, the regulation on classification, labelling and packaging of chemicals, its predecessor directive was already issued in 1967; REACH the regulation on Registration, Evaluation, Authorisation and Restriction of Chemicals from 2007; and also specific EU and international legislation on specific aspects such as chemicals in waste, storage and transport, in specific products like batteries and cars, in specific sectors like agriculture, in natural environments like in water and soil, and in consumer products like food, detergents and cosmetics).", - "page_start": 106, - "page_end": 106, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- **II.6.3** The contractor is liable for any loss or damage caused to the contracting authority during or as a consequence of *implementation of the FWC*, including in the event of subcontracting, but only up to an amount not exceeding three times the total amount of the relevant specific contract. However, if the damage or loss is caused by the gross negligence or wilful misconduct of the contractor or of its *personnel* or subcontractors, as well as in the case of an action brought against the contracting authority by a third party for breach of its intellectual property rights, the contractor is liable for the whole amount of the damage or loss.\n- **II.6.4** If a third party brings any action against the contracting authority in connection with the *implementation of the FWC*, including any action for alleged breach of intellectual property rights, the contractor must assist the contracting authority in the legal proceedings, including by intervening in support of the contracting authority upon request. 
If the contracting authority's liability towards the third party is established and that such liability is caused by the contractor during or as a consequence of the *implementation of the FWC*, Article II.6.3 applies.\n- **II.6.5** If the contractor is composed of two or more economic operators (i.e. who submitted a joint tender), they are all jointly and severally liable to the contracting authority for the *implementation of the FWC*.\n- **II.6.6** The contracting authority is not liable for any loss or damage caused to the contractor during or as a consequence of *implementation of the FWC*, unless the loss or damage was caused by wilful misconduct or gross negligence of the contracting authority.\n\n# **II.7. Conflict of interest and professional conflicting interests**\n\n- **II.7.1** The contractor must take all the necessary measures to prevent any situation of *conflict of interest* or *professional conflicting interest*.\n- **II.7.2** The contractor must *notify* the contracting authority in writing as soon as possible of any situation that could constitute a *conflict of interest* or a *professional conflicting interest* during the *implementation of the FWC*. 
The contractor must immediately take action to rectify the situation.\n\nThe contracting authority may do any of the following:\n\n- (a) verify that the contractor's action is appropriate;\n- (b) require the contractor to take further action within a specified deadline;\n- (c) decide not to award a specific contract to the contractor.\n- **II.7.3** The contractor must pass on all the relevant obligations in writing to:\n\t- (a) its *personnel*;\n\t- (b) any natural person with the power to represent it or take decisions on its behalf;\n\t- (c) third parties involved in the *implementation of the FWC*, including subcontractors.\n\nThe contractor must also ensure that the persons referred to above are not placed in a situation which could give rise to conflicts of interest.", - "page_start": 18, - "page_end": 18, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "*a specific contract* or any part of it:\n\n- (a) if the procedure for awarding the FWC or a specific contract or the *implementation of the FWC* proves to have been subject to *irregularities, fraud or breach of obligations*;\n- (b) in order to verify whether the presumed *irregularities, fraud* or *breach of obligations* have actually occurred.\n\nThe contracting authority must *formally notify* the contractor of the suspension and the reasons for it. Suspension takes effect on the date of *formal notification*, or at a later date if the *formal notification* so provides.\n\nThe contracting authority must *notify* the contractor as soon as the verification is completed whether:\n\n- (a) it is lifting the suspension; or\n- (b) it intends to terminate the FWC or a specific contract under Article II.18.1(f) or (j).\n\nThe contractor is not entitled to compensation for suspension of any part of the FWC or a specific contract.\n\nThe contracting authority may in addition suspend the time allowed for payments in accordance with Article II.21.7.\n\n# **II.18. 
Termination of the FWC**\n\n#### **II.18.1. Grounds for termination by the contracting authority**\n\nThe contracting authority may terminate the FWC or any on-going specific contract in the following circumstances:\n\n- (a) if provision of the services under an on-going specific contract has not actually started within 15 days of the scheduled date and the contracting authority considers that the new date proposed, if any, is unacceptable, taking into account Article II.11.2;\n- (b) if the contractor is unable, through its own fault, to obtain any permit or licence required for *implementation of the FWC*;\n- (c) if the contractor does not implement the FWC or perform the specific contract in accordance with the tender specifications or *request for service* or is in breach of another substantial contractual obligation or repeatedly refuses to sign specific contracts. Termination of three or more specific contracts in these circumstances also constitutes grounds for termination of the FWC;\n- (d) if the contractor or any person that assumes unlimited liability for the debts of the contractor is in one of the situations provided for in points (a) and (b) of Article 136(1) of the Financial Regulation6;\n- (e) if the contractor or any *related person* is in one of the situations provided for in points (c) to (h) of Article 136(1) or to Article 136(2) of the Financial Regulation;\n- (f) if the procedure for awarding the FWC or the *implementation of the FWC* prove to have been subject to *irregularities*, *fraud* or *breach of obligations*;\n\n6 Regulation (EU, Euratom) 2018/1046 of the European Parliament and of the Council of 18 July 2018 on the financial rules applicable to the general budget of the Union, amending Regulations (EU) No 1296/2013, (EU) No 1301/2013, (EU) No 1303/2013, (EU) No 1304/2013, (EU) No 1309/2013, (EU) No 1316/2013, (EU) No 223/2014, (EU) No 283/2014, and Decision No 541/2014/EU and repealing Regulation (EU, Euratom) No 966/2012, OJ L 193 of 
30.7.2018, p.1 https://eur-lex.europa.eu/legalcontent/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG", - "page_start": 29, - "page_end": 29, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "476 E European Agency for Safety and Health at Work,2013: European Risk Observatory, Analysis of the determinants of workplace occupational safety and health practice in a selection of EU Member States, https://osha.europa.eu/en/publications/reports/analysis-determinants-workplace-OSH-in-EU\n\n477 European Agency for Safety and Health at Work, 2019: The value of OSH and the societal costs of workrelated injuries and disease, Luxembourg;\n\n478 E European Agency for Safety and Health at Work, 2021: Improving compliance with occupational safety and health regulations: an overarching review\n\n479 Walters D, Johnstone R, Bluff E, Limborg HJ, Gensby U.: Improving compliance with occupational safety and health regulations: an overarching review EU-OSHA, 2021. Improving compliance with occupational safety and health regulations\n\n480 ILO and integration of OSH Into decent work https://www.ilo.org/global/topics/dw4sd/themes/osh/lang- en/index.htm\n\n481 Dijk, F., Yohama Caraballo-Arias, Y.: Where to Find Evidence-Based Information on Occupational Safety and Health? https://www.annalsofglobalhealth.org/articles/10.5334/aogh.3131/\n\nTrade union position, one example: Vogel L (2014), The point of view of the European trade unions: It is urgent to revitalise the EU occupational health and safety policy, http://www.osha.mddsz.gov.si/.../Laurent_VOGEL_EN.pdf Employer position, one example: Safer and healthier work for all - Modernisation of the EU occupational safety and health legislation and policy,\n\n482 Eurofound; Labour market change New forms of employment: 2020 update\n\n483 Eurofound, 2020: Working conditions in sectors, Publications Office of the European Union, Luxembourg, doi:10.2806/024695, p. 
41\n\n484 Norway, STAMI: https://noa.stami.no/ National monitoring of work environment (National overvåkning af arbeidmiljø)\n\n485 Detailed Action Plan for the 4 Main Strategies to Create Safe Workplaces (update from 2020) https://kosha.or.kr/english/publications/Resources.do?mode=view&articleNo=277001&article.offset=0&articleLi mit=10\n\n486 Sakurai, H.: Occupational Safety and Health in Japan: Current Situations and the Future https://www.jstage.jst.go.jp/article/indhealth/50/4/50_MS1375/_pdf/-char/ja\n\n487 Occupational Safety and Health Administration Ministry of Labor, Republic of China (Taiwan): National Occupational Safety and Health Profile of Taiwan, 2014, Chapter 8 National Occupational Safety and Health Profile of Taiwan\n\n488 See: https://www.mom.gov.sg/workplace-safety-and-health/wsh-reports-and-statistics\n\n489 E.g.: National Academies of Sciences, Engineering, and Medicine. 2018. A Smarter National Surveillance System for Occupational Safety and Health in the 21st Century. Washington, DC: The National Academies Press. https://doi.org/10.17226/24835, 2018\n\n490 E.g.: Publications from the Association of Workers' Compensation Boards of Canada, https://awcbc.org/en/ 491 Australian Safety and Compensation Council, Report on indicators for occupational disease*,* Australian Government, 2006, p1-45\n\n492 E.g.: https://data.worksafe.govt.nz/\n\n493 International Labour Organisation ILO (no publishing date available). Decent work: Measuring decent work. http://www.ilo.org/integration/themes/mdw/lang--en/index.htm\n\n494 Country profiles on Occupational Safety and Health, https://www.ilo.org/safework/countries/lang- en/index.htm\n\n495 Work Health Organisation WHO (2011). 
Global Health Observatory: WHO indicator registry., from: http://www.who.int/gho/indicator_registry/en/index.html\n\n496 United Nations: Sustainable Development Goals, https://sustainingdevelopment.com/sdg8-indicators/ 497 UNECE, 2010: Measuring Quality of Employment, https://unece.org/statistics/publications/measuring-qualityemployment\n\n498https://ec.europa.eu/eurostat/web/labour-market/quality-of-employment/database\n\n499 Eurostat overview on their data related to quality of employment https://ec.europa.eu/eurostat/web/labourmarket/quality-of-employment\n\n500 Andersen, J. H., et al., 2019: Systematic literature review on the effects of occupational safety and health (OSH) interventions at the workplace. Scandinavian Journal of Work Environment and Health, 45(2): 103-113\n\nElsler D. et al: A review of case studies evaluating economic incentives to promote occupational safety and health. Scand J Work Environ Health 2010; 36: 289–298", - "page_start": 159, - "page_end": 159, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "quality or continuity of the services. The parties may agree to draw up a transition plan detailing the contractor's assistance unless such plan is already detailed in other contractual documents or in the tender specifications. The contractor must provide such assistance at no additional cost, except if it can demonstrate that it requires substantial additional resources or means, in which case it must provide an estimate of the costs involved and the parties will negotiate an arrangement in good faith.\n\n#### **II.18.4. 
Effects of termination**\n\nThe contractor is liable for damage incurred by the contracting authority as a result of the termination of the FWC or a specific contract, including the additional cost of appointing and contracting another contractor to provide or complete the services, except if the damage is a result of a termination in accordance with Article II.18.1(j), (k) or (l) or Article II.18.2. The contracting authority may claim compensation for such damage.\n\nThe contractor is not entitled to compensation for any loss resulting from the termination of the FWC or a specific contract, including loss of anticipated profits, unless the loss was caused by the situation specified in Article II.18.2.\n\nThe contractor must take all appropriate measures to minimise costs, prevent damage and cancel or reduce its commitments.\n\nWithin 60 days of the date of termination, the contractor must submit any report, deliverable or *result* and any invoice required for services that were provided before the date of termination.\n\nIn the case of joint tenders, the contracting authority may terminate the FWC or a specific contract with each member of the group separately on the basis of points (d), (e) or (g) of Article II.18.1, under the conditions set out in Article II.11.2\n\n# **II.19. Invoices, value added tax and e-invoicing**\n\n#### **II.19.1. 
Invoices and value added tax**\n\nInvoices must contain the contractor's (or leader's in the case of a joint tender) identification data, the amount, the currency and the date, as well as the FWC reference and reference to the specific contract.\n\nInvoices must indicate the place of taxation of the contractor (or leader in the case of a joint tender) for value added tax (VAT) purposes and must specify separately amounts not including VAT and amounts including VAT.\n\nThe contracting authority is exempt from all taxes and duties, including VAT, in accordance with Articles 3 and 4 of the Protocol 7 of the Treaty on the Functioning of the European Union on the privileges and immunities of the European Union.\n\nThe contractor (or leader in the case of a joint tender) must complete the necessary formalities with the relevant authorities to ensure that the supplies and services required for *implementation of the FWC* are exempt from taxes and duties, including VAT.\n\n#### **II.19.2. E-invoicing**\n\nIf provided for in the special conditions, the contractor (or leader in the case of a joint tender) submits invoices in electronic format if the conditions regarding electronic signature specified by Directive 2006/112/EC on VAT are fulfilled, i.e. using a qualified", - "page_start": 31, - "page_end": 31, - "source_file": "EN-Draft FWC for services 0142.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_GLW_2002.pdf", - "query": "What or Corning's corporate values ?", - "target_page": 12, - "target_passage": "Quality, Integrity, Performance, Leadership, Innovation, Independence, and The Individual", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "#### I NVESTOR I NFORMATION :\n\n#### A NNUAL M EETING\n\nThe annual meeting of shareholders will be held on\n\nThursday, April 24, 2003, in Corning, NY. 
A formal notice of the meeting together with a proxy statement will be mailed to shareholders on or about March 12, 2003. The proxy statement can also be accessed electronically through the Investor Relations category of the Corning home page on the Internet at www.corning.com. A summary report of the proceedings at the annual meeting will be available without charge upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831.\n\n#### A DDITIONAL I NFORMATION\n\nA copy of Corning's 2002 Annual Report on Form 10-K filed with the Securities and Exchange Commission is available upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831. The Annual Report on Form 10-K can also be accessed electronically through the Investor Relations category of the home page on the Internet at: www.corning.com\n\n#### I NVESTOR I NFORMATION\n\nInvestment analysts who need additional information may contact Mr. Kenneth C. Sofio, Manager of Investor Relations, Corning Incorporated, HQ-E2-25, Corning, NY 14831; Telephone 607.974.9000\n\n#### C OMMON S TOCK\n\nCorning Incorporated common stock is listed on the New York Stock Exchange and the SWX Swiss Exchange. In addition, it is traded on the Boston, Midwest, Pacific and Philadelphia stock exchanges. Common stock options are traded on the Chicago Board Options Exchange. The abbreviated ticker symbol for Corning Incorporated is \"GLW.\"\n\n#### TRANSFER AGENT AND REGISTRAR Computershare Investor Services LLC P.O. 
Box A-3504 Chicago, IL 60690-3504 Telephone: 800.255.0461 Website: www.computershare.com\n\nCHANGE OF ADDRESS Report change of address to Computershare Investor Services at the above address.\n\n#### I NDEPENDENT A CCOUNTANTS\n\nPricewaterhouseCoopers LLP 1301 Avenue of the Americas New York, NY 10019\n\n#### **Corning Incorporated**\n\nOne Riverfront Plaza Corning, NY 14831-0001 607 974 9000 www.corning.com\n\n02BR24601EN\n\n\"Safe Harbor\" Statement under the Private Securities Litigation Reform Act of 1995 The statements in this annual report that are not historical facts or information are forward-looking statements. These forward-looking statements involve risks and uncertainties that may cause the outcome to be materially different. Such risks and uncertainties include, but are not limited to:\n\n- global economic and political conditions,\n- currency fluctuations,\n- product demand and industry capacity,\n- competitive products and pricing,\n- sufficiency of manufacturing capacity and efficiencies,\n- cost reductions,\n- availability and costs of critical materials,\n- new product development and commercialization,\n- attracting and retaining key personnel,\n- order activity and demand from major customers,\n- fluctuations in capital spending by customers in the telecommunications industry and other business segments,\n- financial condition of customers,\n- changes in the mix of sales between premium and non-premium products,\n- facility expansions and new plant start-up costs,\n- adverse litigation or regulatory developments, including future or pending tax legislation,\n- adequacy and availability of insurance,\n- capital resource and cash flow activities,\n- capital spending,\n- equity company activities,\n- interest costs,\n- acquisition and divestiture activity,\n- the rate of technology change,\n- the ability to enforce patents,\n- product performance issues,\n- stock price fluctuations, and\n- other risks detailed in Corning's SEC filings.\n\nNeither 
this report nor any statement contained herein is furnished in connection with any offering of securities or for the purpose of promoting or influencing the sale of securities.\n\nCorning is an equal opportunity employer. Printed in USA\n\n© Corning Incorporated 2003", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "## BALA NC E Corning Annual Report 2002", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "Corporate Governance", - "page_start": 47, - "page_end": 47, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "#### T O O U R S HAREHOLDERS :\n\nJ AMES R. H OUGHTON\n\nC HAIRMAN AND C HIEF E XECUTIVE O FFICER\n\nWe will long remember 2002 as one of the most challenging years — if not the most challenging — in Corning Incorporated's long history. I quickly became even more steeped in these challenges in April when, at the request of our Board of Directors, I returned to the company as Chairman and Chief Executive Officer.\n\nSince that time, I am increasingly convinced that, despite our downturn, the long-term future of Corning remains bright and filled with opportunity.\n\nBut in the meantime, we have been living in a very difficult reality – one marked by ongoing quarterly losses and drops in revenue. You, our shareholders—along with our employees and our friends in the communities we serve—felt the pain. We all watched our businesses retrench, battered by a weakened global economy and Wall Street turmoil. And we could only wonder what bad news would be next as our stock value continued its seemingly relentless decline.\n\nWith the severe drop-off in revenues from our telecommunications customers, we knew we could no longer afford to keep up the costly infrastructure of facilities and staff we had in place. Put simply, we couldn't spend more than we were making.\n\nWe also knew our strengths — and they were many! 
We knew we were not — nor had we ever been — merely a telecommunications company. Rather, we are a technology company, with the materials and process expertise to create life-changing products. That's what we've been for all of our 152 years; that's what we'll continue to be.\n\nAnd we knew something else … that our Values, the historic strength of our company, were alive and well. Quality, Integrity, Performance, Leadership, Innovation, Independence and The Individual continue to guide our every move, and continue to set us apart from other companies— especially those caught in the accounting scandals that marred the business world this past year.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "## A M E S S A G E F R O M T H E B O A R D O F D I R E C T O R S\n\n#### **Dear Shareholders:**\n\nWe, the members of the HON INDUSTRIES Board of Directors, believe that integrity is central to good corporate governance. This belief is reflected in the HON INDUSTRIES vision statement (shown on the back of this annual report), adopted many years ago. Our Vision statement represents much more than a traditional \"mission,\" and it goes much deeper than company policy. The beliefs and values represented in that document are the very foundation of our corporate culture, and guide the attitude and actions of every member, every day.\n\nFrom its beginnings, HON INDUSTRIES has sought to implement its vision through sound policies and practices, and by maintaining a strong Board composed predominantly of outside directors. We are fully committed to executing our responsibilities, and we will continue to maintain the company's long-standing tradition of an independent, well-informed, active, and engaged Board of Directors.\n\nOur board meetings and procedures have been developed and refined to encourage open and informed communication. The company's accounting policies have always been conservative and straightforward. 
The Board's three committees — Audit; Human Resources and Compensation; Public Policy and Corporate Governance — have consisted entirely of non-management directors for many years.\n\nDuring 2003, we have given significant attention to the newly released rules emanating from the Sarbanes-Oxley Act of 2002 and the New York Stock Exchange listing requirements — rules intended to improve corporate governance across the country. It is gratifying to report that HON INDUSTRIES governance practices were already in accord with the spirit of the rules.\n\nIt is an honor to serve as directors of HON INDUSTRIES. We are very proud to represent you, the shareholder, as we oversee the management of this great company. Please be assured that we intend to remain vigilant and focused on good corporate governance.\n\nSincerely, The HON INDUSTRIES Board of Directors\n\nStan A. Askren\n\nGary M. Christensen\n\nCheryl A. Francis\n\nRobert L. Katz\n\nDennis J. Martin\n\nJack D. Michaels\n\nJoseph Scalzo\n\nAbbie J. Smith\n\nRichard H. Stanley\n\nBrian E. Stern\n\nRonald V. Waters, III", - "page_start": 60, - "page_end": 60, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## THE INTEGRATION OF OUR BELIEFS, WISDOM, CURIOSITY, & KNOWLEDGE PROVIDES BALANCE & STABILITY.\n\ni\n\n#### C O R P O R AT E VA L U E S :\n\nCorning's Values provide an unchanging moral and ethical compass that guides the actions of everyone in the company. The corporate values are: Quality, Integrity, Performance, Leadership, Innovation, Independence, and The Individual.\n\n#### T OTAL Q UALITY :\n\nIn alignment with the quality policy of the corporation, our policy is to achieve Total Quality performance. 
Total Quality performance means understanding who the customer is, what the requirements are, and meeting those requirements better than anyone else, without error, on time, every time.\n\n# quality integrity the individual performance leadership innovation independence i i i i i i", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "details of which are described in Note 18 to the financial statements.\n\nShares granted to the Specified Executives during the year are as follows, and were valued at $6.95 each, being their market value (as defined in the Income Tax Assessment Act).\n\n| Granted to: | Shares: |\n| --- | --- |\n| Gouadain, Jacques Elie | 6,216 |\n| Moore, Paul Derek | 5,827 |\n| Wasow, Peter Christopher | 7,770 |\n| Wilkinson, Richard John | 5,827 |\n| Wood, Bruce James | 5,439 |\n| Young, Jonathon Terence | 8,547 |\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 40\n\n- (b) Includes the notional value of shares and options granted by the Company during previous financial years (and which had not vested as at 1 January 2004) which have been amortized over the relevant vesting period. All options have been valued by independent valuers using the modified Black-Scholes or Binomial option pricing model.\n- (c) No options were granted by the Company during the year to the Directors or to the Specified Executives.\n- (d) No options or shares have been granted by the Company since the end of the financial year.\n- (6) This amount reflects the value during the current reporting period of the 1,000,000 Restricted Shares granted to Mr J C Ellice-Flint in 2000, further details of which are described in note 18(h) to the financial statements.", - "page_start": 41, - "page_end": 41, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# BOARD OF DIRECTORS »\n\n#### STANDING (LEFT TO RIGHT)\n\n### Merrill A. \"Pete\" Miller, Jr. (1,2)\n\nChairman, President and CEO National Oilwell Varco, Inc. 
Houston, Texas\n\n### SEATED (LEFT TO RIGHT)\n\n### Don Nickles(4)\n\nFormer U.S. Senator, Oklahoma Founder and President The Nickles Group, LLC Washington, D.C.\n\n# V. Burns Hargis(1)\n\nPresident Oklahoma State University Stillwater, Oklahoma\n\n> Charles T. Maxwell (3,4) Senior Energy Analyst Weeden & Co. Greenwich, Connecticut\n\nAubrey K. McClendon Chairman of the Board and Chief Executive Officer Chesapeake Energy Corporation Oklahoma City, Oklahoma\n\nFrederick B. Whittemore(3,4) Advisory Director Morgan Stanley New York, New York *Retiring from the Board in June 2011*\n\n# Richard K Davidson(1)\n\nRetired Chairman and CEO Union Pacific Corporation Bonita Springs, Florida\n\nFrank Keating(3) Former Governor, Oklahoma President and CEO American Bankers Association Washington, D.C.\n\n- (1) Audit Committee\nFounder and CEO Next Decade The Woodlands, Texas\n\n- (2) Lead Independent Director\nKathleen M. Eisbrenner (3,4)\n\n- (3) Compensation Committee\n- (4) Nominating and Corporate Governance Committee\n\nLouis A. Simpson Chairman SQ Advisors, LLC Naples, Florida *Nominated for* \n\n*election in June 2011*\n\n# Governance\n\nOur Board of Directors is responsible to our shareholders for the oversight of the company and for the implementation and operation of an effective and sound corporate governance environment. We believe that effective corporate governance contributes to long-term corporate performance. An effective governance structure should reinforce a culture of corporate integrity, foster the company's pursuit of long-term strategic goals of growth and profit and ensure quality and continuity of corporate leadership. Our directors will continue to be diligent in their efforts to preserve the public trust while fostering the long-term success of the company.", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "W ENDELL P. W EEKS J AMES B. 
F LAWS\n\nPRESIDENT AND CHIEF OPERATING OFFICER\n\nIn our business operations during 2002 we invested a great deal of energy aligning our cost structure and business plans with our priority of restoring profitability. After massive restructuring — following restructuring efforts we launched in 2001—we feel we now have our cost structure and growth strategies in place to accomplish this goal.\n\nWe have re-balanced the company to take advantage of our broad and diverse set of businesses. And in charting our strategies, we have focused on ensuring that both our segments have solid business plans in place, enabling them to grow. Our people are rigorously committed to executing against these plans.\n\nAs you saw earlier in this report, our Corning Technologies businesses are in markets with solid growth potential. We have leading market positions in attractive businesses … we are ready to capitalize on that position of strength. Meanwhile, we are making these businesses even more cost-effective through significant manufacturing efficiency gains.\n\nIn telecommunications, we are not planning on a market recovery in 2003. We have aligned our cost structure to meet current demand levels after two very tough years of ongoing restructuring.\n\nWithin the context of our financial realities, however, we have not lost our sense of self. We will meet our goals … but the path we are taking to get there has been, and will continue to be, consistent with our Values. Integrity … quality … treating individuals with dignity and respect … these are the guiding principles of the decisions we make. We know that in adhering to our Values, solid business performance will follow.\n\nVICE CHAIRMAN AND CHIEF FINANCIAL OFFICER\n\nWe take great pride in saying that Corning continues to be a financially sound company, thanks to the aggressive strategies we executed throughout 2002. Although it has been a very painful process, we have dramatically slowed the rate at which we are spending cash. 
We ended the year with a balance of cash and short-term investments of $2.1 billion. And we have access to $2 billion in credit that we haven't touched — and don't plan to. We also continue to pay down debt each quarter. This, combined with our plan to return to profitability in 2003, gives us a high degree of confidence in our ability to meet any future financial obligations. So, we feel very good about our liquidity position right now.\n\nThe ongoing economic weakness and uncertainty in world events continue to make the overall business environment a volatile one. Still, we have greatly improved our ability to forecast revenues and expenses quarter-to-quarter, and we are encouraged by the near-term growth potential of our non-telecommunications businesses — especially our liquid-crystal display, environmental and semiconductor businesses. If these markets continue to grow as we expect, we are confident that we will be able to meet our goals.\n\nWe know that our shareholders are most eager to see a greater return on their investment with Corning, and of course our return to profitability will be key to building back Wall Street's confidence. We are 100 percent committed to reaching that goal of profitability in 2003 — and doing so within the rigorous compliance rules by which we have always been guided. Integrity characterizes all our relationships, both inside and outside of Corning, and we will never compromise that foundation of our reputation.", - "page_start": 9, - "page_end": 9, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "# **CORPORATE GOVERNANCE**\n\nThe Board of Sundance Energy Australia Limited (\"Sundance\" or \"the Company\") is committed to the Principles and Recommendations underpinning best practices in corporate governance as specified by the Australian Securities Exchange (the \"ASX\") Corporate Governance Council's 2nd Edition of *Corporate Governance Principles and Recommendations with 2010 Amendments*. 
Sundance's Board has taken, and will continue to take, all necessary actions to adopt the amended Principles in each instance where that is appropriate, or to design policies and procedures to adopt them in a fashion modified appropriately to the Company's particular circumstances.\n\nSundance's Board has carefully reviewed the *Corporate Governance Principles and Recommendations*. As is set forth below, the vast majority of these have already been achieved in total accordance with the Principles and Recommendations. In a few instances, the Company has adopted hybrid methodologies of compliance deemed appropriate to its size, structure and situation. The Board is comfortable that its practices are satisfactory for an entity of its structure and size. In some instances disclosures recommended by the ASX have been made in other areas of the Annual Report, namely the Directors' Report, and therefore will not be restated under this section.\n\nIn March 2014, the ASX Corporate Governance Council release the 3rd edition of the *Corporate Governance Principles and Recommendations*, which applies to ASX listed companies in respect of their first full financial year commencing on or after 1 July 2014. Accordingly, the 3rd edition of the Corporate Governance Principles and Recommendations will apply to Sundance for its financial year ended 31 December 2015. 
Sundance will report its compliance against those recommendations in the Company's Corporate Governance Statement for fiscal year 2015.\n\nDuring fiscal year 2014, the Company's corporate governance practices and policies discussed below have complied with those outlined in the *Corporate Governance Principles and Recommendations* (2nd Edition), except as noted otherwise.\n\n# **Principle 1: Lay Solid Foundations for Management and Oversight**\n\nThe respective roles and responsibilities of the Board and management, including those matter expressly reserved to the Board, are set out in the Board Charter, which is available in the corporate governance section of Sundance's website.\n\n#### **1.1 Roles and Responsibilities**\n\nThe Board is responsible for the corporate governance of the Company, including the setting and monitoring of objectives, goals and corporate strategy. Management is responsible for the implementation of the strategy and running the day to day business of the Company's affairs.\n\nResponsibilities of the board include –\n\n- Providing input into and final approval of management's development of corporate strategy and performance objectives;\n- Monitoring senior executives' performance and implementation of the Company's strategy;\n- Approving and monitoring the business plan, budget and corporate policies;\n- Monitoring and the approval of financial and other reporting;\n- Ensuring an effective system of internal controls exists and is functioning as required;\n- Establishing Sundance's vision, mission, values and ethical standards as reflected in a Code of Conduct;\n- Delegating an appropriate level of authority to management and approving any additional change to those delegations;\n- Ensuring appropriate resources are available to senior executives;\n- Appointment, succession, performance assessment, remuneration and dismissal of the Managing Director;\n- Reviewing, ratifying and monitoring systems of risk management and internal control, 
codes of conduct, and legal compliance; and\n- Approving and monitoring the progress of major capital expenditure, capital management, and acquisitions and divestitures.\n\nThe Board has delegated responsibility to the Managing Director (\"MD\") and the executive management team to manage the day-to-day operations and administration of the Company. In carrying out this delegation, the MD, supported by the senior executives, routinely reports to the Board regarding Sundance's progress on achieving both the short and long-term plans for the Company. The MD is accountable to the Board for the authority that is delegated by the Board.", - "page_start": 48, - "page_end": 48, - "source_file": "ASX_SEA_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_GLW_2002.pdf", - "query": "As a Corning's investor, how can I get a summary of the annual meeting of shareholders ?", - "target_page": 11, - "target_passage": "A summary report of the proceedings at the annual meeting will be available without charge upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "#### I NVESTOR I NFORMATION :\n\n#### A NNUAL M EETING\n\nThe annual meeting of shareholders will be held on\n\nThursday, April 24, 2003, in Corning, NY. A formal notice of the meeting together with a proxy statement will be mailed to shareholders on or about March 12, 2003. The proxy statement can also be accessed electronically through the Investor Relations category of the Corning home page on the Internet at www.corning.com. A summary report of the proceedings at the annual meeting will be available without charge upon written request to Ms. Denise A. 
Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831.\n\n#### A DDITIONAL I NFORMATION\n\nA copy of Corning's 2002 Annual Report on Form 10-K filed with the Securities and Exchange Commission is available upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831. The Annual Report on Form 10-K can also be accessed electronically through the Investor Relations category of the home page on the Internet at: www.corning.com\n\n#### I NVESTOR I NFORMATION\n\nInvestment analysts who need additional information may contact Mr. Kenneth C. Sofio, Manager of Investor Relations, Corning Incorporated, HQ-E2-25, Corning, NY 14831; Telephone 607.974.9000\n\n#### C OMMON S TOCK\n\nCorning Incorporated common stock is listed on the New York Stock Exchange and the SWX Swiss Exchange. In addition, it is traded on the Boston, Midwest, Pacific and Philadelphia stock exchanges. Common stock options are traded on the Chicago Board Options Exchange. The abbreviated ticker symbol for Corning Incorporated is \"GLW.\"\n\n#### TRANSFER AGENT AND REGISTRAR Computershare Investor Services LLC P.O. Box A-3504 Chicago, IL 60690-3504 Telephone: 800.255.0461 Website: www.computershare.com\n\nCHANGE OF ADDRESS Report change of address to Computershare Investor Services at the above address.\n\n#### I NDEPENDENT A CCOUNTANTS\n\nPricewaterhouseCoopers LLP 1301 Avenue of the Americas New York, NY 10019\n\n#### **Corning Incorporated**\n\nOne Riverfront Plaza Corning, NY 14831-0001 607 974 9000 www.corning.com\n\n02BR24601EN\n\n\"Safe Harbor\" Statement under the Private Securities Litigation Reform Act of 1995 The statements in this annual report that are not historical facts or information are forward-looking statements. These forward-looking statements involve risks and uncertainties that may cause the outcome to be materially different. 
Such risks and uncertainties include, but are not limited to:\n\n- global economic and political conditions,\n- currency fluctuations,\n- product demand and industry capacity,\n- competitive products and pricing,\n- sufficiency of manufacturing capacity and efficiencies,\n- cost reductions,\n- availability and costs of critical materials,\n- new product development and commercialization,\n- attracting and retaining key personnel,\n- order activity and demand from major customers,\n- fluctuations in capital spending by customers in the telecommunications industry and other business segments,\n- financial condition of customers,\n- changes in the mix of sales between premium and non-premium products,\n- facility expansions and new plant start-up costs,\n- adverse litigation or regulatory developments, including future or pending tax legislation,\n- adequacy and availability of insurance,\n- capital resource and cash flow activities,\n- capital spending,\n- equity company activities,\n- interest costs,\n- acquisition and divestiture activity,\n- the rate of technology change,\n- the ability to enforce patents,\n- product performance issues,\n- stock price fluctuations, and\n- other risks detailed in Corning's SEC filings.\n\nNeither this report nor any statement contained herein is furnished in connection with any offering of securities or for the purpose of promoting or influencing the sale of securities.\n\nCorning is an equal opportunity employer. Printed in USA\n\n© Corning Incorporated 2003", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "The Audit and Risk Management Committee's charter and information on the selection and appointment of the Company's external auditor is available in the corporate governance section on the Company's website. 
Information regarding qualifications and meeting attendance can be found in the Directors' Report of this Annual Report.\n\n# **Principle 5: Make Timely and Balanced Disclosure**\n\nThe Company has adopted a Market Disclosure Policy to ensure compliance with its continuous disclosure obligations whereby relevant information that could cause a reasonable person to expect a material effect on, or lead to a substantial movement in, the value of Sundance's share price, is immediately made available to shareholders and the public as a release to the ASX. D Connor, as Company Secretary, has been nominated as the person primarily responsible for communications with the ASX. All material information concerning the Company, including its financial situation, performance, ownership and governance is posted on the Company's web site to ensure all investors have equal and timely access. The Market Disclosure Policy is available in the corporate governance section on Sundance's website.\n\n# **Principle 6: Respect the Rights of Shareholders**\n\nThe Board fully recognises its responsibility to ensure that its shareholders are informed of all major developments affecting the Company. All shareholders, who have elected to do so, receive a copy of the Company's Annual Report and the Annual, Half Yearly and Quarterly Reports are prepared and posted on the Company's website in accordance with the ASX Listing Rules. Regular updates on operations are made via ASX releases. All information disclosed to the ASX is posted on Sundance's website as soon as possible after it is disclosed to the ASX. When analysts are briefed on aspects of the Company's operation, the material used in the presentation is immediately released to the ASX and posted on the Company's website. Sundance encourages its shareholders to attend its annual meetings and to discuss and question its Board and management. 
The Company's external auditor is requested to attend the annual general meeting and be available to answer shareholder questions about the conduct of the audit and the preparation and content of the audit report. The Shareholder Communications Policy is published on the Company's website under the corporate governance section.\n\n# **Principle 7: Recognise and Manage Risk**\n\n### **7.1 Risk Assessment and Management**\n\nSundance has established a Risk Management Policy whereby the primary purpose of the policy is to ensure that:\n\n- Appropriate systems are in place to identify, to the extent that is reasonably practical, all material risks that the Company faces in conducting its business;\n- The financial impact of those risks is understood and appropriate controls are in place to limit exposures to them;\n- Appropriate responsibilities are delegated to control the risks; and\n- Any material changes to the Company's risk profile are disclosed in accordance with the Company's continuous Market Disclosure Policy.\n\nThe Audit and Risk Management Committee is responsible for approving and monitoring the overall financial and operational business risk profile of the Company, and reporting its findings to the full Board. In addition, by the nature of the upstream oil and gas business this topic is intrinsically covered during each Board meeting. 
The Board requires the executives to design and implement the risk management and internal control system to manage the Company, and to report to the Board.\n\nThe Board has received assurance from the Managing Director and Chief Financial Officer that the declaration provided in accordance with section 295A of the *Corporations Act 2001* is founded on a sound system of risk management and internal control which is operating effectively.", - "page_start": 53, - "page_end": 53, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "### **NOTES TO CONSOLIDATED FINANCIAL STATEMENTS (All tables in millions, except per share data) Ì (Continued)**\n\n### **7. STOCKHOLDERS' EQUITY**\n\nDuring 2000 through 2004, the Board of Directors authorized the repurchase of up to $1,025.0 million of the Company's Common Stock. As of December 31, 2004, the Company had paid $750.4 million to repurchase 35.2 million shares of its common stock, of which 9.6 million shares were acquired during the year ended December 31, 2004 for $266.1 million.\n\nIn July 2003, the Company announced that its board of directors initiated a quarterly cash dividend of $.06 per share, which was increased to $.12 per share in the third quarter of 2004. Dividends declared were $54.6 million and $19.0 million during 2004 and 2003, respectively. As of December 31, 2004, the Company recorded a dividend payable of approximately $18.1 million to shareholders of record at the close of business on January 3, 2005.\n\n### **8. EMPLOYEE BENEFIT PLANS**\n\nIn July 1998, the Company adopted the 1998 Stock Incentive Plan (\"\"Stock Incentive Plan'') to provide for grants of options to purchase shares of common stock, restricted stock and other equity-based compensation (\"\"Equity-Based Compensation Units'') to employees and non-employee directors of the Company who are eligible to participate in the Stock Incentive Plan. 
The Company accounts for stock-based compensation in accordance with Accounting Principles Board Opinion No. 25, \"\"Accounting for Stock Issued to Employees'' (\"\"APB 25''), and related interpretations. Stock options are granted at prices equal to the fair market value of the Company's common stock on the date of grant; therefore, no compensation expense is recognized. Compensation expense resulting from grants of restricted stock or stock units is recognized during the vesting period.\n\nOptions granted under the Stock Incentive Plan are non-qualiÑed and are granted at a price equal to the fair market value of the Company's common stock at the date of grant. Generally, options granted have a term of ten years from the date of grant, and vest in increments of 25% per year over a four year period beginning on the Ñrst anniversary date of the grant. Options granted to non-employee directors have a term of ten years and vest immediately at the date of grant. In May 2002, the Company's stockholders approved and adopted an amendment and restatement of the Stock Incentive Plan, which modiÑed a number of its provisions, including an increase in the number of shares of common stock reserved for issuance under the Stock Incentive Plan from 20.0 million to 27.0 million. As of December 31, 2004, there were 6.0 million stock options reserved for future grants under the Stock Incentive Plan.\n\nDuring the three months ended March 31, 2004, the Company awarded 20,000 deferred stock units to its non-employee directors under its Stock Incentive Plan. An additional 5,000 deferred stock units were granted to a new director during the three months ended December 31, 2004. These stock units vest immediately but the directors receive the underlying shares only after their board service ends. 
The stock units do not carry any voting or dividend rights, except the right to receive additional stock units in lieu of dividends.\n\nAlso during the three months ended March 31, 2004, the Company awarded 79,500 shares of restricted stock to its executive oÇcers. 7,500 of these restricted shares vest eÅective January 1, 2005. The remaining 72,000 shares vest in four equal annual installments beginning one year from the date of grant except that vesting may be accelerated based upon the achievement of certain performance targets. During the vesting period, the participants have voting rights and receive dividends, but the shares may not be sold, assigned, transferred, pledged or otherwise encumbered. Additionally, granted but unvested shares are forfeited upon termination of employment.\n\nThe fair value of stock units and restricted shares on the date of grant is amortized ratably over the vesting period, or the accelerated vesting period if certain performance targets are achieved. The fair value of stock units and restricted shares granted during the year ended December 31, 2004 was $2.8 million, of which", - "page_start": 85, - "page_end": 85, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## BALA NC E Corning Annual Report 2002", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "#### **22. 
SUBSEQUENT EVENTS**\n\nOn 25 August 2000 the Company announced that it had reached two agreements for the placement of a total of 16,666,666 ordinary fully paid shares in the Company at an issue price of 30 cents each (Shares).\n\nThe first agreement was with Mr Mark Bradley, who agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, a further 3,441,666 within 7 days of that meeting.\n\nOn Mr Bradley being appointed a Director of the Company, in order to comply with the requirements of the Corporations Law and the ASX Listing Rules, the Company and Mr Bradley agreed to defer the first issue of Shares, making both issues conditional on shareholder approval.\n\nThe second agreement was with Clough Engineering Limited, pursuant to which it agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, 6,775,000 shares, within 7 days of that meeting.\n\nOn 15 June 2000 the Company announced that with effect from 1 July 2000 it acquired a 50% interest in OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Shares in the Company. OIS MOC Joint Venture Pty Ltd owns the goodwill of a successful labour hire company. That company is to be renamed Mermaid Labour and Management Limited (MLML).\n\nMLML offers a full labour hire service inclusive of industrial relations consultancy, negotiating agreements and awards and were appropriate, provides ongoing management of the labour force.\n\nThe financial effect of the above events have not been reflected in these financial statements.\n\n# **2000 1999 Cents per Cents per Share Share** Basic earnings per share (0.62) 8.09 Diluted earnings per share (0.21) 8.05 **2000 1999 No. No.** Weighted average number of ordinary shares on issue used in the calculation of basic earnings per share 43,000,000 30,356,164\n\n#### **23. 
EARNINGS PER SHARE**", - "page_start": 56, - "page_end": 56, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "requirements of the Corporations Law and the ASX Listing Rules, the Company and Mr Bradley agreed to defer the first issue of Shares, making both issues conditional on shareholder approval.\n\nThe second agreement was with Clough Engineering Limited, pursuant to which it agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, 6,775,000 shares, within 7 days of that meeting.\n\nOn 15 June 2000 the Company announced that with effect from 1 July 2000 it acquired a 50% interest in OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Shares in the Company. OIS MOC Joint Venture Pty Ltd owns the goodwill of a successful labour hire company. That company is to be renamed Mermaid Labour and Management Limited (MLML).\n\nMLML offers a full labour hire service inclusive of industrial relations consultancy, negotiating agreements and awards and, where appropriate, provides ongoing management of the labour force.\n\nThe effective date is 1 July 2000. The Company will issue 800,000 ordinary fully paid shares in Mermaid Marine Australia Limited.\n\nThere have not been any other matters or circumstances, other than those referred to in the Chairman's and Operations Reviews and/or in the financial statements and notes attached thereto, that have arisen since the end of the Financial Year that have significantly affected, or may significantly affect Mermaid's operations, the results of those operations or its state of affairs in future financial years.\n\nThe Chairman's and Operations Reviews give indications, in general terms, of likely developments in Mermaid's operations in future financial years and the expected results of those operations. 
FUTURE DEVELOPMENTS\n\nThe development of the Company's Dampier and Broome bases is subject to the approval of the Western Australian Environmental Protection Authority. ENVIRONMENTAL REGULATION\n\nAs at the date of this report the Company had a total of 7,115,000 unissued shares under option as follows: **30 November 2000 Options** SHARE OPTIONS\n\n> As at the date of this report there are outstanding 6,500,000 options to acquire 6,500,000 ordinary shares in the Company at an issue price of 0.75 cents per ordinary share. Each of these options expires on 30 November 2000.", - "page_start": 33, - "page_end": 33, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "of, non-executive, independent Directors, except for the Environmental and Safety Committee, which includes the CEO as a member.\n\nThe Board Guidelines prescribe that the Board is to meet at least eight times a year, including a strategy meeting of two days duration. The number of meetings of the Board and of each of its Committees and the names of attendees at those meetings are set out on page 47 of this Annual Report. Board Meetings are structured in two separate sessions, without management present for one of those sessions. The agenda for meetings is prepared by the Company Secretary in conjunction with the Chairman and CEO, with periodic input from the Board. Comprehensive Board papers are distributed to Directors in advance of scheduled meetings. Board meetings take place both at the Company's head office and at key operating sites, to assist the Board in its understanding of operational issues.\n\nExecutive management attend Board and Committee meetings, at which they report to Directors within their respective areas of responsibility. This assists the Board in maintaining its understanding of the Company's business and assessing the executive management team. 
Where appropriate, advisors to the Company attend meetings of the Board and of its Committees.\n\n**2.3 Composition of the Board** The composition of the Board is determined in accordance with the Company's Constitution and the Board Guidelines which, among other things, require that:\n\n- the Board is to comprise a minimum of five and a maximum of ten Directors (exclusive of the CEO);\n- the Board should comprise a substantial majority of independent, non-executive Directors;\n- there should be a separation of the roles of Chairman and Chief Executive Officer of the Company; and\n- the Chairman of the Board should be an independent, non-executive Director.\n\nUnder the Company's Constitution approximately onethird of Directors retire by rotation each year and Directors appointed during the year are required to submit themselves for election by shareholders at the Company's next Annual General Meeting. The Board Guidelines encourage Directors to retire at the first Annual General Meeting after reaching the age of 72 years and not seek reappointment.\n\nCurrently, the Board comprises eight non-executive Directors and one executive Director. The Board has adopted the definition set out in the ASX Best Practice Recommendations and as defined in the 2002 guidelines of the Investment and Financial Services Association Limited and considers all current nonexecutive Directors, including the Chairman, to be independent directors.\n\nGenerally, the Board considers a Director to be independent if he or she is not a member of management and is free of any business or other relationship that could materially interfere with, or could reasonably be\n\nperceived to materially interfere with, the Director's ability to act in the best interests of the Company. The Board will assess the materiality of any given relationship that may affect independence on a case by case basis and has adopted materiality guidelines to assist in that assessment. 
Under these guidelines, the following interests are regarded as material in the absence of any mitigating factors:\n\n- a holding of 5% or more of the Company's voting shares or a direct association with an entity that holds more than 5% of the Company's voting shares;\n- an affiliation with an entity which accounts for 5% or more of the revenue or expense of the Company.\n\nThe Board has determined that there should not be any arbitrary length of tenure that should be considered to materially interfere with a Director's ability to act in the best interests of the Company, as it believes this assessment must be made on a case by case basis with reference to the length of service of all members of the Board.\n\nEach Director's independence is assessed by the Board on an individual basis, with reference to the above materiality guidelines and focussing on an assessment of each Director's capacity to bring independence of judgment to Board decisions. In this context, as mentioned below, Directors are required to promptly disclose their interests in contracts and other directorships and offices held.\n\nThe names and details of the experience, qualifications, special responsibilities, and term of office of each Director of the Company are set out on page 41 of this Annual Report. Details of each Director's attendance at Board and Committee Meetings and their shareholdings are also set out on page 47 of this Annual Report.\n\n## **2.4 Nomination Committee**\n\nThe role, responsibilities and membership requirements of the Nomination Committee are documented in the Board Guidelines and in a separate Charter, approved by the Board.\n\nUnder the Board Guidelines, it is the responsibility of the Nomination Committee to devise the criteria for, and review membership of, and nominations to, the Board. 
The primary criteria adopted in selection of suitable Board candidates is their capacity to contribute to the ongoing development of the Company having regard to the location and nature of the Company's significant business interests and to the candidates' age and experience by reference to the attributes of existing Board members.\n\nWhen a Board vacancy exists or where it is considered that the Board would benefit from the services of a new Director with particular skills, the Nomination Committee has responsibility for proposing candidates for consideration by the Board and, where appropriate, engages the services of external consultants.\n\nPrior to appointment, each Director is provided with a letter of appointment which encloses a copy of the Company's Constitution and of the relevant policies. Additionally, the expectations of the Board in", - "page_start": 31, - "page_end": 31, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "In May 1997, the Company registered 400,000 shares of its common stock under its 1997 Equity Plan for Non-Employee Directors. This plan permits the Company to issue to its non-employee directors options to purchase shares of Company common stock, restricted stock of the Company, and awards of Company stock. The plan also permits non-employee directors to elect to receive all or a portion of their annual retainers and other compensation in the form of shares of Company common stock. During 2003, 2002, and 2001, 10,922; 11,958; and 8,662 shares of Company common stock were issued under the plan, respectively.\n\nCash dividends declared and paid per share for each year are:\n\n| (In dollars) | 2003 | 2002 | 2001 |\n| --- | --- | --- | --- |\n| Common shares | $ .52 | $ .50 | $ .48 |\n\nDuring 2002, shareholders approved the 2002 Members' Stock Purchase Plan. Under the new plan, 800,000 shares of common stock were registered for issuance to participating members. 
Beginning on June 30, 2002, rights to purchase stock are granted on a quarterly basis to all members who have one year of employment eligibility and work a minimum of 20 hours a week. The price of the stock purchased under the plan is 85% of the closing price on the applicable purchase date. No member may purchase stock under the plan in an amount which exceeds the lesser of 20% of his/her gross earnings or a maximum fair value of $25,000 in any calendar year. During 2003, 79,237 shares of common stock were issued under the plan at an average price of $29.25. During 2002, 47,419 shares of common stock were issued under the plan at an average price of $22.58. An additional 673,344 shares were available for issuance under the plan at January 3, 2004. This plan replaced the 1994 Members' Stock Purchase Plan. Under this plan, during 2002 and 2001, 43,388 shares at an average price of $23.63 and 85,385 shares at an average price of $20.51 were issued, respectively.\n\nThe Company has a shareholders' rights plan which will expire August 20, 2008. The plan becomes operative if certain events occur involving the acquisition of 20% or more of the Company's common stock by any person or group in a transaction not approved by the Company's Board of Directors. Upon the occurrence of such an event, each right entitles its holder to purchase an amount of common stock of the Company with a market value of $400 for $200, unless the Board authorizes the rights be redeemed. The rights may be redeemed for $0.01 per right at any time before the rights become exercisable. In certain instances, the right to purchase applies to the capital stock of the acquirer instead of the common stock of the Company. The Company has reserved preferred shares necessary for issuance should the rights be exercised.\n\nThe Company has entered into change in control employment agreements with corporate officers and certain other key employees. 
According to the agreements, a change in control occurs when a third person or entity becomes the beneficial owner of 20% or more of the Company's common stock or when more than one-third of the Company's Board of Directors is composed of persons not recommended by at least three-fourths of the incumbent Board of Directors. Upon a change in control, a key employee is deemed to have a two-year employment with the Company, and all his or her benefits are vested under Company plans. If, at any time within two years of the change in control, his or her position, salary, bonus, place of work, or Companyprovided benefits are modified, or employment is terminated by the Company for any reason other than cause or by the key employee for good reason, as such terms are defined in the agreement, then the key employee is entitled to receive a severance payment equal to two times annual salary and the average of the prior two years' bonuses.\n\n#### **Stock-Based Compensation**\n\nUnder the Company's 1995 Stock-Based Compensation Plan, as amended and restated effective November 10, 2000, the Company may award options to purchase shares of the Company's common stock and grant other stock awards to executives, managers, and key personnel. The Plan is administered by the Human Resources and Compensation Committee of the Board of Directors. Restricted stock awarded under the plan is expensed ratably over the vesting period of the awards. Stock options awarded to employees under the Plan must be at exercise prices equal to or exceeding the fair market value of the Company's common stock on the date of grant. Stock options are generally subject to four-year cliff vesting and must be exercised within 10 years from the date of grant.\n\nThe weighted-average fair value of options granted during 2003, 2002, and 2001, estimated on the date of grant using the Black-Scholes option-pricing model, was $10.74, $11.74, and $9.70, respectively. 
The fair value of 2003, 2002, and 2001 options granted is estimated on the date of grant using the following assumptions: dividend yield of 1.2% to 2.1%, expected volatility of 34.9% to 38.4%, risk-free interest rate of 4.2% to 5.4%, and an expected life of 10 to 12 years, depending on grant date.\n\nThe status of the Company's stock option plans is summarized in the following table:", - "page_start": 49, - "page_end": 49, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## **18. Contributed Equity (continued)**\n\n#### **(d) Santos Executive Share Option Plan**\n\nThe Santos Executive Share Option Plan was approved by shareholders at the Annual General Meeting on 15 May 1997 and its continuation, with amendment, approved at the Annual General Meeting on 5 May 2000.\n\nThe Plan provides for the grant of options to subscribe for or purchase ordinary shares in the capital of the Company to eligible executives selected by the Board. Participation will be limited to those executives who, in the opinion of the Board, are able to significantly influence the generation of shareholder wealth. Directors envisage the Plan applying to up to 50 executives.\n\nEach option is a right to acquire one share, subject to adjustment in accordance with the Rules of the Plan. The options entitle the holder to participate in any bonus issue conducted by the Company, upon exercise of the options. The exercise price of each option will be adjusted in the event of a rights issue.\n\nThere are no voting or dividend rights attached to the options. There are no voting rights attached to the unissued ordinary shares. Voting rights will be attached to the unissued ordinary shares when the options have been exercised.\n\nThe exercise price of the options and other conditions, including any performance hurdles, will be determined by the Board. No consideration is provided by Executives for the options. 
The Plan provides for options with a life of up to ten years.\n\nThe ability to exercise the options is generally conditional on the Company achieving a prescribed performance hurdle or exercise condition. To reach the performance hurdle, the Company's Total Shareholder Return (broadly, growth in share price plus dividends reinvested) (\"TSR Growth\") over a minimum three-year period must equal or exceed 10% per annum calculated on a compound basis. If Total Shareholder Return does not reach the performance hurdle at the end of those respective periods, the options may nevertheless be exercisable if the hurdle is subsequently reached within the remaining life of the options. In assessing the performance against the hurdle, the Board may apply on a consistent basis an averaging method over a period of three months to allow for short-term volatility.\n\nThe fair value of shares issued as a result of exercising the options during the reporting period at their issue date is the market price of shares of the Company on the Australian Stock Exchange as at close of trading.\n\nDuring the financial year, the Company granted 330,148 options over unissued shares as set out below. 
The ability to exercise 200,000 of these options is generally conditional on the Company achieving the performance hurdle described above and the balance are subject to the forfeiture provision described in the Senior Executive Long Term Incentive section of the Santos Executive Share Purchase Plan described above.\n\nThe amounts recognised in the financial statements of the Santos Group and the Company in relation to executive share options exercised during the financial year were:\n\n| | Consolidated | | Santos Ltd | |\n| --- | --- | --- | --- | --- |\n| | 2004 | 2003 | 2004 | 2003 |\n| | $million | $million | $million | $million |\n| Issued ordinary share capital | 4.1 | 5.7 | 4.1 | 5.7 |", - "page_start": 66, - "page_end": 66, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "#### **Item 11.** *EXECUTIVE COMPENSATION*\n\nInformation for the year ended October 25, 2003, commencing with \"Summary Compensation Table\" on page 12 through page 15 and \"Compensation of Directors\" on page 5 of the definitive proxy statement for the Annual Meeting of Stockholders to be held January 27, 2004, is incorporated herein by reference.\n\n## **Item 12.** *SECURITY OWNERSHIP OF CERTAIN BENEFICIAL OWNERS AND MANAGEMENT AND RELATED STOCKHOLDER MATTERS*\n\nInformation for the year ended October 25, 2003, under \"Principal Stockholders\" and \"Security Ownership of Management\" on pages 7 through 9 and information under \"Equity Compensation Plan Information\" on page 15 of the definitive proxy statement for the Annual Meeting of Stockholders to be held January 27, 2004, is incorporated herein by reference.\n\n## **Item 13.** *CERTAIN RELATIONSHIPS AND RELATED TRANSACTIONS*\n\nInformation under \"Other Information Relating to Directors, Nominees, and Executive Officers\" for the year ended October 25, 2003, as set forth on page 17 of the definitive proxy statement for the Annual Meeting of Stockholders to be held January 27, 2004, is incorporated herein by reference.\n\n## 
**Item 14.** *PRINCIPAL ACCOUNTING FEES AND SERVICES*\n\nThe information under the \"Audit Committee Report and Ratification of Appointment of Auditors—Audit Fees\" through \"—Audit Committee Preapproval Policies and Procedures\" on page 7 of the Company's definitive proxy statement for the Annual Meeting of Stockholders to be held January 27, 2004, is incorporated herein by reference.\n\n## **PART IV**\n\n### **Item 15.** *EXHIBITS, FINANCIAL STATEMENT SCHEDULES AND REPORTS ON FORM 8-K*\n\n- (a) (1) and (2) The response to this portion of Item 15 is submitted as a separate section of this report. (3) List of Exhibits—The response to this portion of Item 15 is submitted as a separate section of this report.\n- (b) The following reports on Form 8-K were filed during the fourth quarter:\n\nForm 8-K was filed on August 1, 2003, announcing a January 24, 2004 retirement of Eric Brown, Group Vice President of Prepared Foods and member of the Board of Directors.\n\nForm 8-K was furnished on August 21, 2003, disclosing the issuance of the Company's earnings release for the third quarter ended July 26, 2003.\n\nForm 8-K was filed on October 7, 2003, announcing union workers from five of the Company's production facilities voted to ratify a new four-year labor contract.\n\nForm 8-K was filed on October 23, 2003, announcing the Company entered into an unsecured 3-year revolving credit facility in the amount of $150,000,000, which replaced an existing $150,000,000 credit facility entered into on October 25, 2001.\n\n- (c) The response to this portion of Item 15 is submitted as a separate section of this report.\n- (d) The response to this portion of Item 15 is submitted as a separate section of this report.\n\n## **SIGNATURES**\n\nPursuant to the requirements of Section 13 or 15(d) of the Securities Exchange Act of 1934, the Registrant has duly caused this report to be signed on its behalf by the undersigned, thereunto duly authorized.\n\n### **HORMEL FOODS 
CORPORATION**\n\nBy: /s/ JOEL W. JOHNSON\n\nDate: January 23, 2004\n\nJOEL W. JOHNSON Chairman of the Board, President and Chief Executive Officer\n\nPursuant to the requirements of the Securities Exchange Act of 1934, this report has been signed below by the following persons on behalf of the Registrant and in the capacities and on the dates indicated. Each person whose signature to this report on Form 10-K appears below hereby constitutes and appoints each of Michael J. McCoy, Jody H. Feragen and Mark P. Kalvoda as his or her true and lawful attorney-in-fact and agent, with full power of substitution, to sign on his or her behalf individually and in the capacity stated below and to perform any acts necessary to be done in order to file the Annual Report on Form 10-K and all amendments to this report on Form 10-K, and any and all instruments or documents filed as part of or in connection with this report on Form 10-K or the amendments hereto, and each of the undersigned does hereby ratify and confirm all that said attorney-in-fact and agent, or his substitutes, shall do or cause to be done by virtue hereof.", - "page_start": 9, - "page_end": 9, - "source_file": "NYSE_HRL_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_GLW_2002.pdf", - "query": "How many employees did Corning company count at the end of 2002 ?", - "target_page": 5, - "target_passage": "We are continuing to invest in our people — all 23,200 of them", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### I NVESTOR I NFORMATION :\n\n#### A NNUAL M EETING\n\nThe annual meeting of shareholders will be held on\n\nThursday, April 24, 2003, in Corning, NY. A formal notice of the meeting together with a proxy statement will be mailed to shareholders on or about March 12, 2003. The proxy statement can also be accessed electronically through the Investor Relations category of the Corning home page on the Internet at www.corning.com. 
A summary report of the proceedings at the annual meeting will be available without charge upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831.\n\n#### A DDITIONAL I NFORMATION\n\nA copy of Corning's 2002 Annual Report on Form 10-K filed with the Securities and Exchange Commission is available upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831. The Annual Report on Form 10-K can also be accessed electronically through the Investor Relations category of the home page on the Internet at: www.corning.com\n\n#### I NVESTOR I NFORMATION\n\nInvestment analysts who need additional information may contact Mr. Kenneth C. Sofio, Manager of Investor Relations, Corning Incorporated, HQ-E2-25, Corning, NY 14831; Telephone 607.974.9000\n\n#### C OMMON S TOCK\n\nCorning Incorporated common stock is listed on the New York Stock Exchange and the SWX Swiss Exchange. In addition, it is traded on the Boston, Midwest, Pacific and Philadelphia stock exchanges. Common stock options are traded on the Chicago Board Options Exchange. The abbreviated ticker symbol for Corning Incorporated is \"GLW.\"\n\n#### TRANSFER AGENT AND REGISTRAR Computershare Investor Services LLC P.O. Box A-3504 Chicago, IL 60690-3504 Telephone: 800.255.0461 Website: www.computershare.com\n\nCHANGE OF ADDRESS Report change of address to Computershare Investor Services at the above address.\n\n#### I NDEPENDENT A CCOUNTANTS\n\nPricewaterhouseCoopers LLP 1301 Avenue of the Americas New York, NY 10019\n\n#### **Corning Incorporated**\n\nOne Riverfront Plaza Corning, NY 14831-0001 607 974 9000 www.corning.com\n\n02BR24601EN\n\n\"Safe Harbor\" Statement under the Private Securities Litigation Reform Act of 1995 The statements in this annual report that are not historical facts or information are forward-looking statements. 
These forward-looking statements involve risks and uncertainties that may cause the outcome to be materially different. Such risks and uncertainties include, but are not limited to:\n\n- global economic and political conditions,\n- currency fluctuations,\n- product demand and industry capacity,\n- competitive products and pricing,\n- sufficiency of manufacturing capacity and efficiencies,\n- cost reductions,\n- availability and costs of critical materials,\n- new product development and commercialization,\n- attracting and retaining key personnel,\n- order activity and demand from major customers,\n- fluctuations in capital spending by customers in the telecommunications industry and other business segments,\n- financial condition of customers,\n- changes in the mix of sales between premium and non-premium products,\n- facility expansions and new plant start-up costs,\n- adverse litigation or regulatory developments, including future or pending tax legislation,\n- adequacy and availability of insurance,\n- capital resource and cash flow activities,\n- capital spending,\n- equity company activities,\n- interest costs,\n- acquisition and divestiture activity,\n- the rate of technology change,\n- the ability to enforce patents,\n- product performance issues,\n- stock price fluctuations, and\n- other risks detailed in Corning's SEC filings.\n\nNeither this report nor any statement contained herein is furnished in connection with any offering of securities or for the purpose of promoting or influencing the sale of securities.\n\nCorning is an equal opportunity employer. 
Printed in USA\n\n© Corning Incorporated 2003", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "## BALA NC E Corning Annual Report 2002", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "### M A N A G E M E N T ' S D I S C U S S I O N A N D A N A L Y S I S\n\nThe following discussion of the Company's historical results of operations and of its liquidity and capital resources should be read in conjunction with the Consolidated Financial Statements of the Company and related notes.\n\n#### **Overview**\n\nThe Company has two reportable core operating segments: office furniture and hearth products. The Company is the second largest office furniture manufacturer in the United States and the nation's leading manufacturer and marketer of gas- and wood-burning fireplaces.\n\nFrom 2000 to 2003, the office furniture industry experienced an unprecedented three-year decline due to the challenging economic environment. In 2003, this decline negatively impacted the Company's office furniture segment. In contrast, the housing market was at record high levels during 2003, which positively impacted the Company's hearth segment. The Company outperformed its peers in both segments in which it competes. The Company gained market share by providing strong brands, innovative products and services, and greater value to its end-users. Fiscal 2003 also included an extra week of activity due to the Company's 52/53-week fiscal year.\n\nNet sales were $1.8 billion in 2003, as compared to $1.7 billion in 2002. The increase in net sales reflects the 9% increase in the hearth segment and the additional week of business activity. In 2003 and 2002, the Company recorded restructuring charges and accelerated depreciation related to the closure and consolidation of office furniture facilities totaling $15.2 million and $3.0 million, respectively. 
Gross margins increased to 36.4% in 2003 from 35.4% in 2002 due to benefits from restructuring initiatives and its rapid continuous improvement program, new products, and increased price realization. The Company also invested aggressively in brand building and selling initiatives in 2003. Net income was $98.1 million or $1.68 per diluted share in 2003, as compared to $91.4 million or $1.55 per diluted share in 2002.\n\nThe Company generated $141.3 million in cash flow from operating activities and increased its cash position, including shortterm investments, by $48.6 million to $204.2 million. The Company paid dividends of $30.3 million and repurchased $21.5 million of its common stock, while investing $35.7 million in net capital expenditures and repaying $20.2 million of debt.\n\n#### **Critical Accounting Policies and Estimates** *G E N E R A L*\n\nManagement's Discussion and Analysis of Financial Condition and Results of Operations is based upon the Consolidated Financial Statements, which have been prepared in accordance with GAAP. The preparation of these financial statements requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenue and expenses, and related disclosure of contingent assets and liabilities. Management bases its estimates on historical experience and on various other assumptions that are believed to be reasonable under the circumstances, the results of which form the basis for making judgments about the carrying values of assets and liabilities that are not readily apparent from other sources. Senior management has discussed the development, selection and disclosure of these estimates with the Audit Committee of our Board of Directors. 
Actual results may differ from these estimates under different assumptions or conditions.\n\nAn accounting policy is deemed to be critical if it requires an accounting estimate to be made based on assumptions about matters that are uncertain at the time the estimate is made, and if different estimates that reasonably could have been used, or changes in the accounting estimates that are reasonably likely to occur periodically, could materially impact the financial statements. Management believes the following critical accounting policies reflect its more significant estimates and assumptions used in the preparation of the Consolidated Financial Statements.\n\n*Fiscal year end* – The Company's fiscal year ends on the Saturday nearest December 31. Fiscal year 2003, the year ended January 3, 2004, contained 53 weeks, while fiscal year 2002, the year ended December 28, 2002, and fiscal year 2001, the year ended December 29, 2001, contained 52 weeks. A 53-week year occurs approximately every sixth year.\n\n*Revenue recognition* – Revenue is normally recognized upon shipment of goods to customers. In certain circumstances revenue is not recognized until the goods are received by the customer or upon installation and customer acceptance based on the terms of the sale agreement. Revenue includes freight charged to customers; related", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "#### **NOTE 11 — STOCKHOLDERS' EQUITY**\n\nShare repurchases are only conducted under repurchase programs approved by the Board of Directors and publicly announced. 
Share repurchase activity was as follows:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | | 2002 |\n| --- | --- | --- | --- | --- |\n| August 2001 authorization (0, 1.4 million | | | | |\n| and 6.4 million shares purchased) $ | — | $ 36,034 | $ 207,590 | |\n| February 2003 authorization | | | | |\n| (10 million shares purchased) | — | 335,911 | — | |\n| November 2003 authorization (8 million | | | | |\n| and 2 million shares purchased) | 348,895 | 70,919 | — | |\n| | $ 348,895 | $ 442,864 | $ 207,590 | |\n| Average price of shares repurchased $ | 43.59 | $ 33.17 | $ 32.28 | |\n\nAt December 31, 2004, we had 10 million shares available for repurchase under a July 2004 authorization.\n\nIn May 2002, the Board of Directors approved a restricted stock plan. The plan allowed for the issuance of up to 1 million shares of Company common stock to certain key employees. The restrictions on selling these shares lapse 50% on the third anniversary date from the grant date and 50% on the fourth anniversary date after the grant date. Through December 31, 2004, 903,000 shares were issued, with an aggregate value of $32 million. This amount was recorded as deferred compensation in the accompanying consolidated balance sheet and is being amortized to operating expenses on a straight-line basis through the period in which the restrictions fully lapse. Amortization of deferred compensation was $7 million, $8 million and $5 million for the years ended December 31, 2004, 2003 and 2002, respectively, and 855,000 shares were outstanding under the plan at December 31, 2004. In November 2002, the Board of Directors determined that no more awards would be granted under the plan.\n\n#### **NOTE 12 — EMPLOYEE BENEFIT PLANS**\n\nEmployees of the Company who are members of various unions are covered by union-sponsored, collectively bargained, multi-employer health and welfare and defined benefit pension plans. 
The Company recorded an expense of $86 million in 2004, $77 million in 2003 and $66 million in 2002 under such plans. The plans' sponsors have not provided sufficient information to permit the Company to determine its share of unfunded vested benefits, if any.\n\nThe Company is self-insured for most health care benefits for its non-union employees. The liability for claims filed and estimates of claims incurred but not reported is included in the \"Other accrued liabilities\" caption in the accompanying consolidated balance sheets.\n\nThe Company has a retirement savings plan under Section 401(k) of the Internal Revenue Code for eligible employees not covered by a collective bargaining agreement that does not specifically provide for participation in the plan. The plans allow employees to defer, within prescribed limits, up to 30% of their income on a pre-tax basis through contributions to the plans. The Company matches, within prescribed limits, a portion of eligible employees' contributions. In the case of certain union employees, the Company contributes to the plan are based on hours worked. The Company recorded charges for 401(k) contributions of $12 million in 2004, $10 million in 2003 and $12 million in 2002.\n\nThe Company maintains a nonqualified deferred retirement plan for certain key employees. The plan allows participants to defer, on a pre-tax basis, a portion of their salary and bonus and accumulate tax deferred earnings, plus investment earnings on the deferred balances, as a retirement fund. Participants receive a Company match of up to 4% of salary, net of any Company match received under the Company's 401(k) plan. All employee deferrals vest immediately. The Company matching contributions vest ratably over a three-year period. 
The Company recorded charges for matching contributions of $1 million in 2004, $2 million in 2003 and $1 million in 2002.", - "page_start": 72, - "page_end": 72, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "#### T O O U R S HAREHOLDERS :\n\nJ AMES R. H OUGHTON\n\nC HAIRMAN AND C HIEF E XECUTIVE O FFICER\n\nWe will long remember 2002 as one of the most challenging years — if not the most challenging — in Corning Incorporated's long history. I quickly became even more steeped in these challenges in April when, at the request of our Board of Directors, I returned to the company as Chairman and Chief Executive Officer.\n\nSince that time, I am increasingly convinced that, despite our downturn, the long-term future of Corning remains bright and filled with opportunity.\n\nBut in the meantime, we have been living in a very difficult reality – one marked by ongoing quarterly losses and drops in revenue. You, our shareholders—along with our employees and our friends in the communities we serve—felt the pain. We all watched our businesses retrench, battered by a weakened global economy and Wall Street turmoil. And we could only wonder what bad news would be next as our stock value continued its seemingly relentless decline.\n\nWith the severe drop-off in revenues from our telecommunications customers, we knew we could no longer afford to keep up the costly infrastructure of facilities and staff we had in place. Put simply, we couldn't spend more than we were making.\n\nWe also knew our strengths — and they were many! We knew we were not — nor had we ever been — merely a telecommunications company. Rather, we are a technology company, with the materials and process expertise to create life-changing products. That's what we've been for all of our 152 years; that's what we'll continue to be.\n\nAnd we knew something else … that our Values, the historic strength of our company, were alive and well. 
Quality, Integrity, Performance, Leadership, Innovation, Independence and The Individual continue to guide our every move, and continue to set us apart from other companies— especially those caught in the accounting scandals that marred the business world this past year.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "The wireless industry in the late 1990's became increasingly competitive and the Company was not immune to these industry issues. The Clear PaySM program, introduced by Sprint as a no-deposit offering in 2001, attracted high credit risk customers in the Company's markets. As the results began to materialize, the Company implemented deposits on this program (mid-April 2002), and experienced high levels of customer turnover (churn) and uncollectable accounts. The write-offs of uncollectable accounts peaked in the third quarter of 2002. During the fourth quarter of 2002 there was some evidence that the strengthened credit policy was having a favorable impact. Nonetheless, the 2002 net loss in the PCS operation was $5.4 million, as compared to $5.5 million in 2001. Despite the disappointing financial results for 2002, the PCS customer base grew by over 40%. While the PCS operation was adding customers, the cellular operation continued to lose its local customer base.\n\nThe growing belief that national branding was critical to our wireless operations, the expectation that roaming revenues from our analog cellular operation would not continue to grow, and the increase in the number of wireless competitors in our markets, prompted the Company to exit the cellular business in order to focus on our PCS operations. The Company entered into an agreement on November 21, 2002, to sell its 66% ownership interest in the Virginia 10 RSA cellular operation which was classified as a discontinued operation. The closing occurred February 28, 2003. 
The Company received $37.0 million in proceeds, including $5.0 million in escrow for two years and $1.7 million for working capital.\n\nIn many respects, 2003 was a successful year. Churn and levels of uncollectable accounts in the PCS operation returned to more acceptable levels. PCS revenues reached $67.0 million, and total revenues reached $105.9 million. The PCS operation recognized a small profit for the year, including favorable adjustments associated with settlement of disputed items with Sprint. Excluding the favorable adjustments, the PCS operation recognized a profit in the fourth quarter. With improved operating cash flow and reduced capital spending in 2003, the Company prepaid $4.6 million in debt, selecting those notes with nominal prepayment penalties. Additionally, after receiving the cash and paying taxes on the gain of the sale of the Virginia 10 partnership interest, the Company invested the remaining proceeds in liquid financial instruments, available for future deployment. Additionally, the Company has been successful at decreasing its dependency on wireline revenues. Wireline revenues, at $29.0 million in 2003 compared to $18.6 million in 1998, were 27.4% of total revenues in 2003 compared to 76.6% in 1998.\n\nEntering 2004, the Company is pleased with the milestone of a profitable quarter in the PCS operation, but recognizes that much work remains to ultimately earn a reasonable return on this investment. The recently announced signing of an addendum to the management and services agreements with Sprint is expected to lead to cost savings and greater certainty in fees paid to Sprint. However, the consolidation predicted for the wireless industry in recent years, including the recently announced Cingular/ATT deal and anticipated improvements in the overall economics of wireless services, has not yet materialized. 
Future Sprint marketing efforts, designed to meet the competition, could potentially have an unfavorable impact on the Company and lead to additional losses. The risks associated with the Sprint PCS affiliation are described in further detail elsewhere in this document. The Company is now reviewing alternatives for other businesses to further diversify our revenue base, from either a services platform or a geographic concentration.\n\n#### **Significant Transactions**\n\nThe Company had several significant transactions during 2003. The largest was the sale of its 66% interest in the Virginia 10 RSA cellular operation, as described above. The Company originally entered into the agreement with Verizon Wireless in November 2002. The Company was the general partner of the limited partnership which operated an analog cellular network in the six-county area of Northwestern Virginia, including Clarke, Frederick, Page, Rappahannock, Shenandoah, and Warren counties, and the city of Winchester. The sales price was $37.0 million plus the Company's 66% share of the partnership's working capital, which was approximately $1.7 million. The Company was required to do a working capital true up following the closing, from which the Company recorded a charge for $23 thousand after taxes. 
In the fourth quarter the Company recorded an additional charge for taxes of $0.2 million to reflect the consolidated effective tax rate based on the final operating results for the year.\n\nThe sale of this business is reflected in the discontinued operations section of the income statement along with the results of operations for the two months of 2003 that the operation remained a part of the Company.", - "page_start": 41, - "page_end": 41, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "### **Production Growth**\n\nAverage mmcfe per day for year\n\n### **Proved Reserve Growth** Bcfe at end of year\n\n20,000 20,000\n\n### **Total Resource Base Growth*** Bcfe at end of year\n\n0\n\n0\n\n0\n\n0\n\n500\n\n100\n\n200\n\n300\n\n400\n\n500\n\n100\n\n200\n\n300\n\n400\n\n500\n\n5\n\n10\n\n15\n\n20\n\n0\n\n0\n\n**Chesapeake's Stock Price** Chesapeake's Stock Price at Month End Henry Hub Natural Gas Spot Price at Month End\n\n### **Chesapeake's Five-Year and Ten-Year Common Stock Performance**\n\n0\n\n0\n\n0\n\n80 The graphs below compare the performance of our common stock to the S&P 500 Stock Index and a group of peer companies for the past five and 10 years. The graph on the left assumes an investment of $100 on December 31, 2004 and the reinvestment of all dividends. The graph on the right assumes an investment of $100 on December 31, 1999 and the reinvestment of all dividends. 
The graphs show the value of the investment at the end of each year.\n\n0\n\n0\n\n30\n\n60\n\n90\n\n120\n\n0\n\n0\n\n30\n\n60\n\n90\n\n120\n\n150\n\n500\n\n150\n\n100\n\n200\n\n300\n\n400\n\n500\n\n100\n\n200\n\n300\n\n400\n\n0\n\n### **FIVE-YEAR PERFORMANCE** 70\n\n0\n\n0\n\n0\n\n0\n\n150\n\n30\n\n60\n\n90\n\n120\n\n150\n\n30\n\n60\n\n90\n\n120\n\n150\n\n0\n\n500\n\n1000\n\n1500\n\n2000\n\n2500\n\n3000\n\n500\n\n1000\n\n1500\n\n2000\n\n2500\n\n3000\n\n0\n\n0\n\n500\n\n1,000\n\n1,500\n\n2,000\n\n2,500\n\n3,000\n\n500\n\n1,000\n\n1,500\n\n2,000\n\n2,500\n\n3,000\n\n### **TEN-YEAR PERFORMANCE**\n\n(1) The 2010 peer group is comprised of Anadarko Petroleum Corp., Apache Corp., Devon Energy Corp., Encana Corp. and EOG Resources, Inc. XTO Energy, Inc. was not included in the 2010 peer group due to its acquisition by Exxon Mobil Corp. 500 150", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "#### **Note 1. Summary of Significant Accounting Policies (Continued)**\n\n*Reclassifications:* Certain amounts reported in the 2002 and 2001 financial statements have been reclassified to conform with the 2003 presentation, with no effect on net income or shareholders' equity.\n\n#### **Note 2. Discontinued Operations**\n\nIn November 2002, the Company entered into an agreement to sell its 66% General Partner interest in the Virginia 10 RSA Limited Partnership (cellular operation) to Verizon Wireless for $37.0 million. The closing of the sale took place at the close of business on February 28, 2003. The total proceeds received were $38.7 million, including $5.0 million held in escrow, and a $1.7 million adjustment for estimated working capital at the time of closing. There was a post closing adjustment based on the actual working capital balance as of the closing date, which resulted in a $39 thousand charge for the Company. 
The $5.0 million escrow was established for any contingencies and indemnification issues that may arise during the two-year post-closing period and is included in deferred charges and other assets in the 2003 consolidated balance sheet. The Company's gain on the transaction was approximately $35 million. Post closing, the Company provided transition services to Verizon for a period of approximately three months, with compensation for those services being approximately $40 thousand per month during the transition period.\n\nThe assets and liabilities attributable to the cellular operation have been classified as held for sale in the consolidated balance sheets and consist of the following at December 31, 2002 and 2001:\n\n| | | 2002 | | 2002 | 2002 | 2001 | | | 2001 2001 |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Assets | | (in thousands) | | | (in thousands) | | (in thousands) (in thousands) | | |\n| Accounts receivable | $ | 2,608 | $ | $ $ | 2,608 2,608 $ | | 2,759 | $ $ | 2,759 |\n| Other current assets | | 309 | | | 309 | | 309 214 | | 214 |\n| Property, plant and equipment, (net) | | 2,631 | | | 2,631 2,631 | | 3,272 | | 3,272 |\n| Total assets | $ | 5,548 | $ | $ $ | 5,548 $ 5,548 | | 6,245 | $ $ | 6,245 |\n| Liabilities and minority interest | | | | | | | | | |\n| Accounts payable and accrued expenses | $ | 381 | $ | $ $ | 381 $ | | 381 499 | $ $ | 499 |\n| Deferred revenue and deposits | | 161 | | | 161 | | 161 236 | | 236 |\n| Minority interest | | 1,666 | | | 1,666 1,666 | | 1,838 | | 1,838 |\n| Total liabilities and minority interest | $ | 2,208 | $ | $ $ | 2,208 $ 2,208 | | 2,573 | $ $ | 2,573 |\n\nThe operations of the cellular partnership including the minority interest have been reclassified as discontinued operations, net of taxes in the consolidated statements of income for all periods presented. 
Operating results and the sale of the discontinued operations are summarized as follows:\n\n| | 2003 2003 | | 2002 2002 | | 2001 2001 |\n| --- | --- | --- | --- | --- | --- |\n| | $ | | $ (in thousands) | 5 | $ |\n| Revenues | $ | 3,056 | | $ 20,895 | $ 20,012 |\n| Operating expenses | | 453 | | 3,618 | 4,674 |\n| Other income | | - | | 3 | 16 |\n| Income before minority interest and taxes | $ | 2,603 | | $ 17,280 | $ 15,354 |\n| Minority interests | | (773) | | (5,200) | (4,526) |\n| Sale of partnership interest | | 34,973 | | - | - |\n| Income taxes | (14,414) | | | (4,668) | (4,150) |\n| Net income from discontinued operations | $ 22,389 | | $ | 7,412 | $ 6,678 |", - "page_start": 24, - "page_end": 24, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "markets its turkey products through its own sales force and independent brokers.\n\nThe acquisitions of Diamond Crystal Brands Nutritional Products in fiscal 2001 and the Century Foods International business in July of fiscal 2003 strengthened the Company's presence in the nutritional food products and supplements market. The Company currently operates as one of the largest companies providing nutritional products to the U.S. healthcare industry.\n\nThe Company acquired the Diamond Crystal Brands business from Imperial Sugar Co. in December of fiscal 2003. Diamond Crystal Brands packages and sells various sugar, sugar substitute, salt and pepper products, savory products, drink mixes and dessert mixes to retail and foodservice customers.\n\nInternationally, the Company markets its products through Hormel Foods International Corporation (HFIC), a wholly owned subsidiary. HFIC has a presence in the international marketplace through joint ventures and placement of personnel in strategic foreign locations such as China, Spain, and the Philippines. 
HFIC also has a global presence with minority positions in food companies in Spain (Campofrio Alimentacion S.A., 15% holding) and the Philippines (Purefoods-Hormel, 40% holding).\n\nThe Company has not been involved in any bankruptcy, receivership or similar proceedings during its history. Substantially all of the assets of the Company have been acquired in the ordinary course of business.\n\nThe Company had no significant change in the type of products produced or services rendered, nor in the markets or methods of distribution since the beginning of the fiscal year.\n\n## **(b)** *Industry Segment*\n\nThe Company's business is reported in five segments: Grocery Products, Refrigerated Foods, Jennie-O Turkey Store, Specialty Foods, and All Other. The contributions of each segment to net sales to unaffiliated customers and operating profit, and the presentation of certain other financial information by segment are reported in Note K of the Notes to Consolidated Financial Statements and in the Management's Discussion and Analysis of the Annual Stockholder's Report for the year ended October 25, 2003, incorporated herein by reference.\n\n#### **(c)** *Description of Business*\n\n## **Products and Distribution**\n\nThe Company's products primarily consist of meat and other food products. The meat products are sold fresh, frozen, cured, smoked, cooked and canned. The percentages of total revenues contributed by classes of similar products for the last three fiscal years of the Company are as follows:\n\n| Perishable meat | 50.3% | 53.0% | 54.7% |\n| --- | --- | --- | --- |\n| Nonperishable meat | 18.9 | 19.8 | 21.0 |\n| Poultry | 22.1 | 22.6 | 20.3 |\n| Other | 8.7 | 4.6 | 4.0 |\n| | 100.0% | 100.0% | 100.0% |\n\nReporting of revenues from external customers is based on similarity of products, as the same or similar products are sold across multiple distribution channels such as retail, foodservice or international. 
Revenues reported are based on financial information used to produce the Company's generalpurpose financial statements.\n\nPerishable meat includes fresh meats, sausages, hams, wieners and bacon (excluding JOTS products.) Nonperishable meat includes canned luncheon meats, shelf stable microwaveable entrees, stews, chilies, hash, meat spreads and other items that do not require refrigeration as well as frozen processed products. The Poultry category is composed primarily of JOTS products. The Other category primarily consists of nutritional food products and supplements, sugar and sugar substitutes, salt and pepper products, dessert mixes, food packaging (casings for dry sausage), and industrial gelatin products. The Other category has increased over the past two years primarily due to the following acquisitions: Century Foods International (July 2003), Diamond Crystal Brands (December 2002), and Diamond Crystal Brands Nutritional Products (April 2001).\n\nNo new product in fiscal 2003 required a material investment of Company assets.\n\nDomestically, the Company sells its products in all 50 states. Hormel products are sold through Company sales personnel, operating in assigned territories coordinated from district sales offices located in most of the larger U.S. cities, as well as independent brokers and distributors. As of October 25, 2003, the Company had approximately 600 sales personnel engaged in selling its products. Distribution of products to customers is by common carrier.\n\nThrough HFIC, the Company markets its products in various locations throughout the world. Some of the larger markets include Australia, Canada, China, England, Japan, Mexico and Micronesia. The distribution of export sales to customers is by common carrier, while the China operations own and operate their own delivery system. 
The Company, through HFIC, has licensed companies to manufacture various Hormel products internationally on a royalty basis, with the primary licensees being Tulip International of Denmark and CJ Corp. of South Korea.\n\n### **Raw Materials**\n\nThe Company has, for the past several years, been concentrating on processed branded products for consumers with year-round demand to minimize the seasonal variation experienced with commodity type products. Pork continues to be the primary raw material for Company products. Although hog producers are moving toward larger, more efficient year-round confinement operations and supply contracts are becoming increasingly prevalent in the industry, there is still a seasonal variation in the supply of fresh pork materials. The Company's expanding line of processed items has reduced but not eliminated the sensitivity of Company results to raw material supply and price fluctuations.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "# *people*\n\n**T**he worst of 2001 brought out the best in The Hartford's people.\n\nAs the world watched the horrors of Sept. 11, some 330 of our New York employees fled their offices in 7 World Trade Center. Though many were caught in the debris and dust from the nearby Twin Towers, all escaped safely.\n\nBy the time the 47-story 7 World Trade Center building collapsed at about 5:20 p.m., The Hartford had already arranged for temporary space in several of the company's other offices. Employees and suppliers immediately began working around the clock to get the business up and running again. Despite the destruction, back-up systems kept distributors' and customers' data secure.\n\nA hundred miles from Ground Zero, home office employees in Hartford, Conn., began shuttling equipment and supplies to our temporary offices. Some booked Long Island Sound ferries from Connecticut to Long Island within 48 hours of the attack. 
Others spent the weekend driving supplies to the new locations so employees could concentrate on customers instead of on finding pens and paper. Employees and suppliers were determined to get the company, its distributors and its customers through the crisis.\n\nBy Monday, Sept. 17, all of The Hartford's business units in New York were serving customers again. Employees had new furniture, phones, servers and PCs. Distributors' and customers' access to company e-mail was never interrupted. Calls to old phone numbers were rerouted to cell phones or new office phones. Print and radio ads—along with The Hartford's Web site gave customers instructions for filing claims quickly. Customer relationships were stronger than ever. The Hartford Experience—customer solutions, ease of doing business and extraordinary service—was never better demonstrated.", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_HIG_2001.pdf" - } - ] - }, - { - "references": { - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf", - "query": "What is the shortcut to mute myself in MS teams ?", - "target_page": 3, - "target_passage": "Use [Ctrl]+[Shift]+[M] for a shortcut to mute and unmute during meetings.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "#### **Up button:**\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n#### **Button down:**\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n#### **Charging instructions:**\n\nWireless charging, as shown in the picture below.\n\n#### **1.1 Shortcut function:**\n\n1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n\n2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, 
brightness adjustment and other functions.", - "page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - }, - { - "text": "# **Meeting essentials**\n\n### **Create meetings**\n\n- Select **+ New meeting** or double-click on a time in your calendar to create a new meeting. 1.\n- 2. Add people, a location and any notes.\n- 3. Send your invite.\n\n### **Join meetings**\n\n- From the calendar tab, select the meeting you intend to join, then select join. . 1.\n- A new screen will show up. Here you can choose how you want to appear in the meeting, and your audio preferences. 2.\n- 3. Then select join now. .\n\n### **Present in meetings**\n\n- Screen share from the Share button at the top of your meeting window. 1.\n- Choose what screen or window you want to share. Don't forget to include audio if you're sharing something with sound. 2.\n- When you are finished, use the share button at the top of your meeting window to stop sharing. 3.\n\n# **Meeting controls**\n\nWhen you join meetings, a different window will pop-up. These are the controls you need to know:\n\nClick to see who has been invited to the meeting, or to add new people.\n\nUse chat to share files, ideas, and notes.\n\nStay involved without breaking the flow—you can share an emoji reaction to let the presenter know how you feel. Reactions also allow you to raise your hand, which will signal that you'd like an opportunity to speak.\n\nMute and unmute your microphone when you want to speak.\n\nTurn your camera on or off. 
You can also select the … button near the camera to access audio and video settings.\n\nUse this to share your screen with others.\n\n**Tip** Use [Ctrl]+[Shift]+[M] for a shortcut to mute and unmute during meetings.", - "page_start": 2, - "page_end": 2, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "## Make your meaning more visual by formatting text\n\nTo format text, select it, and then select a button in the Font or Paragraph area on the Home tab.\n\nTry it: Select text in the lines below and choose formatting options so that the text is an example of the formatting it's describing:\n\n| Bold (keyboard shortcut: Ctrl+B) |\n| --- |\n| Italic (keyboard shortcut: Ctrl+I) |\n| Highlight |\n| Font color |\n| Bullets |\n| Numbering |\n\nPro tip: If you selected whole words for this exercise, did you notice that Word popped up a little toolbar, with the font formatting options?\n\n| Segoe UI - 11 | - A A | Aa - | Po |\n| --- | --- | --- | --- |\n| B I U v abe X2 X2 | | A - all - A - | |\n\nBetween that and keyboard shortcuts like Ctrl+B and Ctrl+I, you save time by not having to go up to the Home tab all the time.", - "page_start": 3, - "page_end": 3, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## Create something\n\nBegin with a **Blank document** to get right to work. Or start with a template to save yourself time and steps. Just select **File** > **New**, and then select or search for the template you want.\n\n| | New |\n| --- | --- |\n| (n) Home | |\n| New | |\n| Open | |\n| Info | |\n| Save a Copy | |\n| Save as Adobe PDF | Blank document |\n| Print | |\n| Share | Search for online templates Q |\n| Export | Suggested searches Business Cards Flyers Letters Education Resumes and Cover Letters Holiday |\n| Transform | Aa NAME |\n| Clase | Take a tour |\n\n### Access files anywhere\n\nNeed to work on the go and across different devices? 
Click **File** > **Account** to sign in with your Microsoft account and access your recently used files anywhere, on any device, through seamless integration between Office, OneDrive, OneDrive for Business, and SharePoint.\n\n#### Find recent files\n\nWhether you only work with files stored on your PC's local hard drive or you store files in multiple shared locations, selecting **File** > **Open** takes you to your recently used documents and any files that you may have pinned to your list.\n\n| € | Open | | | | |\n| --- | --- | --- | --- | --- | --- |\n| (2 Home | | | | | |\n| D New | L Recent | | 0 Search | | |\n| | | | Documents Folders | | |\n| Open | 08 | Shared with Me | | | |\n| | Contass | | 13 Name | | Date modified |\n| Info | | OneDrive - Contoso | Pinned | Pin files you want to easily find later. Click the pin icon that appears when you hover over a file. | |\n| Save a Copy | | MeganB@contoso.com | | | |\n| | | | Today | | |\n| Save as Adobe PCC | | Sites - Contoso MeganB@contoso.com | 四元 Connector - Elbow.doco Desktop | | 11/4/2021 3:01 AM |\n| Print | | | | | |\n| Share | This PC | | CE Annual Report.docx W OneDrive - Contoso | | 11/4/2021 2:48 AM |\n| | Add a Place | | | | |\n| Export | | | Older | | |\n| Transform | Browse | | Document (8).doco W | | 10/S/2021 4:48 PM |\n| | | | OneOrive - Contaso | | |\n| Close | | | 8 | Voice Capture Document.docx | 10/5/2021 4:37 PM |\n| | | | OneOrive - Contoso | | |\n| | | | W | Manufacturing and delivery plan.docx Mark 8 Project Team > Research and Development | 9/16/2021 8:28 AM |\n\n### Discover related options\n\nWhen you select objects in your document, options related to your selection will appear. 
For example, selecting a table displays the **Table Design** and **Layout** tabs, which offer additional options.\n\n| Review | View | Help | Acrobat | Table Design | | Layout | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | | | | | 1/2 pt | |\n| | | | | | Shading | Border | | Borders Border |\n| | | | | | | | Styles × | Painter |\n| Table Styles | | | | | | | Borders | 7 |", - "page_start": 1, - "page_end": 1, - "source_file": "Word QS.pdf" - }, - { - "text": "#### **Charge for managed self-isolation package**\n\n**9.** The Secretary of State or a person designated by the Secretary of State may impose a charge in relation to the accommodation, transport and testing package mentioned in the definition of a \"managed self-isolation package\" and the Secretary of State may recover any sum owed by P pursuant to such a charge as a debt.\n\n#### **Duty to self-isolate and period of self-isolation**\n\n**10.** Unless P leaves the common travel area where P is permitted to do so under these Regulations, P must self-isolate in the place in the accommodation designated in the managed selfisolation package until whichever is the later of—\n\n- (a) the end of the period of 10 days beginning with the day after P's arrival in England;\n- (b) the end of the period for which P is required to self-isolate under Schedule 8 (mandatory testing after arrival in England).\n\n#### **Exceptions from duty to self-isolate**\n\n**11.** Paragraph 10 does not require P to remain in self-isolation—\n\n- (a) from any person with whom they were travelling when they arrived in England and who is also self-isolating in the place where P is self-isolating;\n- (b) from any person who is staying in the place where P is self-isolating whose assistance P reasonably requires by reason of—\n\t- (i) P being a child, or\n\t- (ii) any disability of P's.\n\n**12.** Paragraph 10 does not require P to remain in self-isolation from a person (\"V\") when V is at the place where P is 
self-isolating in exceptional circumstances such as—\n\n- (a) to provide emergency assistance;\n- (b) to provide care or assistance, including relevant personal care within the meaning of paragraph 1(1B) or 7(3B) of Schedule 4 to the Safeguarding Vulnerable Groups Act 2006(**a**);\n- (c) to provide medical assistance to P or to any other person who is staying in the place where P is self-isolating where this is required urgently or on the advice of a registered medical practitioner;\n- (d) to provide veterinary services where this is required urgently or on the advice of a veterinary surgeon;\n- (e) to provide critical public services including social services or services provided to victims (such as victims of crime).\n\n#### **Permitted reasons to leave or be outside place of self-isolation**\n\n**13.**—(1) During the period of their self-isolation P may not leave or be outside of the place where P is self-isolating except—\n\n- (a) to travel directly to a port to leave the common travel area;\n- (b) to fulfil a legal obligation, including attending court or satisfying bail conditions or to participate in legal proceedings;\n- (c) to take exercise;\n\n(<b>a) 2006 c. 47 paragraph 1(1B) was inserted by section 64 the Protection of Freedoms Act 2012 (c. 9) and paragraph 7(3B) was inserted by section 66 of that Act.", - "page_start": 76, - "page_end": 76, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "# **Welcome to Microsoft Teams**\n\nMicrosoft Teams is the app that brings your conversations, meetings, and files together in one place. This guide will help you get started with Teams, learn the basics, get tips to practice on your own, and discover ways to engage your team.\n\n**Download** the app for desktop and mobile to access Teams with the best performance anywhere you go.\n\n**Hit the ground running now!** Build confidence by trying things on your own. 
Go to the meet now button (at the top right corner on the Calendar tab) to play around and test all the meetings functionalities before you're in the spotlight!", - "page_start": 0, - "page_end": 0, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "# Share and collaborate\n\nWith this document saved in OneDrive, you can share it with others. They don't even need Word to open it.\n\nTry it: Select Share, and send a link to this document. (keyboard shortcut – Alt+F+Z or Alt+Z+S)\n\nYou can send the link by typing someone's email address or by copying the link and pasting it into a message or chat. If you want them to read the document but not edit it, set their permission to view-only.\n\nIf they don't have Word, the document will open in their web browser, in Word Online.\n\n# Add visuals with pictures from the web\n\nWord works with Bing to give you access to thousands of pictures you can use in your documents.\n\nTry it: Hit enter after this line to make a blank line:\n\n- 1. With your cursor in the blank space above, go to the Insert tab, select Online Pictures, and then search for something, like puppy clip art.\n- 2. 
Select the picture you want, and select Insert.", - "page_start": 2, - "page_end": 2, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "After either activation process is complete, you can see a green check mark in the column labeled Licensed for the control enclosure, as shown in Figure 12-11.\n\n| ▼ Encryption Licenses | | | | | |\n| --- | --- | --- | --- | --- | --- |\n| Add the license keys for the following enclosures | | | | | |\n| 三 Actions | | | | | |\n| 个 | | M/T-M | S/N | Licensed Ili | Type |\n| Control Enclosure | | 2076-624 | 7822DFF | | |\n\n*Figure 12-11 Successful encryption license activation on a running system*\n\n# **12.3.4 Activate the license automatically**\n\nThe automatic license activation is the faster method to activate the encryption license for IBM Spectrum Virtualize. You need the authorization code and the workstation that is used to access the GUI that can access the external network.\n\n**Important:** To perform this operation, the personal computer that was used to connect to the GUI and activate the license must connect to the internet.\n\nTo activate the encryption license for a control enclosure automatically, complete the following steps:\n\n- 1. Select **Activate License Automatically** to open the Activate License Automatically window, as shown in Figure 12-12.\n\n| Activate License Automatically |\n| --- |\n| Enter the authorization code for node: |\n| Type or paste the authorization code here |\n| Cancel Activate |\n\n*Figure 12-12 Encryption license Activate License Automatically window*\n\n- 2. Enter the authorization code that is specific to the control enclosure that you selected, as shown in Figure 12-13 on page 615. You can now click **Activate**.", - "page_start": 635, - "page_end": 635, - "source_file": "sg247938.pdf" - }, - { - "text": "# **12.10 Disabling encryption**\n\nYou are prevented from disabling encryption if any encrypted objects are defined apart from self-encrypting MDisks. 
You can disable encryption in the same way whether you use USB flash drives, key server, or both providers.\n\nTo disable encryption, complete the following steps:\n\n- 1. Click **Settings** → **Security** → **Encryption** and click **Enabled**. If no encrypted objects exist, a menu is displayed. Click **Disabled** to disable encryption on the system. Figure 12-90 shows an example for a system with both encryption key providers configured.\n\n| Remote Authentication | | | |\n| --- | --- | --- | --- |\n| | Encryption | | |\n| Encryption | State: | Enabled > | |\n| | Encryption Keys: | Enabled | |\n| Secure Communications | - ▶ 4 Key Servers | Disabled | |\n| | | | VVVV |\n| | - > 3 USB Flash Drives Detected | | |\n| | | | Configured |\n| | - ▶ Certificate | | |\n\n*Figure 12-90 Disabling encryption on a system with both providers*\n\n- 2. You receive a message confirming that encryption was disabled. Figure 12-91 shows the message when a key server is used.\n\n| Disable Encryption | |\n| --- | --- |\n| Task completed. | 100% |\n| V View more details | |\n| Task started. | 12:58 AM |\n| Running command: | 12:58 AM |\n| svctask chencryption -keyserver disable | 12:58 AM |\n| Running command: | 12:58 AM |\n| svctask chencryption -usb disable | 12:58 AM |\n| Synchronizing memory cache. | 12:58 AM |\n| The task is 100% complete. | 12:58 AM |\n| Task completed. | 12:58 AM |\n| Cancel | Close |\n\n*Figure 12-91 Encryption disabled*", - "page_start": 692, - "page_end": 692, - "source_file": "sg247938.pdf" - }, - { - "text": "#### **Connecting to the lab environment**\n\nTo connect to the lab environment, you need access to the account credentials to your admin OpenShift web console and the terminal window with root privileges, as shown in Figure B-1.\n\n*Figure B-1 Access OpenShift web console*\n\nComplete the following steps:\n\n- 1. 
Modify the hosts file on your local computer.\nThe containers that are deployed in OpenShift are on a private network within the OpenShift cluster. Accessing applications from an external connection requires network and deployment planning, which is beyond the scope of this lab exercise. In this step, you modify the hosts file on your local machine to access the deployments that were created for this lab, as shown in Example B-1.\n\n*Example B-1 Modifying the hosts file*\n\nIP_ADDRESS console.router.default.svc.cluster.local app2-http-git\n\n- 2. Open a terminal window to the OCP machine by using a user with root privileges. Run the **ssh** command or PuTTY from your local computer.\n- 3. Verify the release of Red Hat and other pieces of information about your operating system, as shown in Example B-2.\n\n*Example B-2 Verifying the release of Red Hat*\n\n$ uname -srm Linux 3.10.0-957.21.3.el7.ppc64le ppc64le", - "page_start": 235, - "page_end": 235, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf", - "query": "How can I make a channel visible to an invited member ?", - "target_page": 4, - "target_passage": "Channels can be: • Shared (visible to invited team members and external members of your organization who are not on the team)", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# **Meeting essentials**\n\n### **Create meetings**\n\n- Select **+ New meeting** or double-click on a time in your calendar to create a new meeting. 1.\n- 2. Add people, a location and any notes.\n- 3. Send your invite.\n\n### **Join meetings**\n\n- From the calendar tab, select the meeting you intend to join, then select join. . 1.\n- A new screen will show up. Here you can choose how you want to appear in the meeting, and your audio preferences. 2.\n- 3. Then select join now. 
.\n\n### **Present in meetings**\n\n- Screen share from the Share button at the top of your meeting window. 1.\n- Choose what screen or window you want to share. Don't forget to include audio if you're sharing something with sound. 2.\n- When you are finished, use the share button at the top of your meeting window to stop sharing. 3.\n\n# **Meeting controls**\n\nWhen you join meetings, a different window will pop-up. These are the controls you need to know:\n\nClick to see who has been invited to the meeting, or to add new people.\n\nUse chat to share files, ideas, and notes.\n\nStay involved without breaking the flow—you can share an emoji reaction to let the presenter know how you feel. Reactions also allow you to raise your hand, which will signal that you'd like an opportunity to speak.\n\nMute and unmute your microphone when you want to speak.\n\nTurn your camera on or off. You can also select the … button near the camera to access audio and video settings.\n\nUse this to share your screen with others.\n\n**Tip** Use [Ctrl]+[Shift]+[M] for a shortcut to mute and unmute during meetings.", - "page_start": 2, - "page_end": 2, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "# **Next Steps**\n\nYou will **get the most out of Teams** when you get to truly connect with your team and collaborate together. Keep practicing until each step of your workflow feels natural.\n\n| Test meetings | | |\n| --- | --- | --- |\n| 1. | Use the Meet now button in the | Calendar tab |\n| Then select \"Start meeting\" | 2. | |\n| 3. | And then \"Join now\" | |\n| Here you can try to share your screen, | start a whiteboard or even record | |\n| yourself while you are practicing a | presentation. This is your safe space | |\n| to test everything out! | | |\n\n# **Share knowledge**\n\nTeamwork is all about collaboration! 
**Share with your team best practices** you learn along the way, tips and tricks for how you can best organize your workflows and ask for their own advice to define how you can best use Teams together.\n\n# **Keep learning**\n\nNo matter how you like to learn and practice, we've got resources to support and inspire you:\n\n- Virtual classes: We have instructors to answer your questions and walk you through all the details. •\n- Training series: Complete the beginner series of videos at your own pace.\n- Support articles and step-by-step guides: To get answers to your most common questions.\n- Feature overviews, tutorials, and announcements: Our YouTube channel has carefully curated content to get you excited and show how you can use Teams effortlessly.", - "page_start": 5, - "page_end": 5, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "# **Creating Fibre Channel hosts**\n\nTo create Fibre Channel hosts, complete the following steps:\n\n- 1. Select **Fibre Channel**. 
The Fibre Channel configuration window opens (see Figure 8-4).\n\n| Add Host | | | | × |\n| --- | --- | --- | --- | --- |\n| Required Fields | | | | |\n| Name: | Windows-Host-01 | | | |\n| Host connections: | Fibre Channel (SCSI) | | | |\n| Host port (WWPN): | | C | ન ન | |\n| Optional Fields | | | | |\n| Host type: | Generic | D | | |\n| I/O groups: | All | ▼ | | |\n| Host cluster: | No Host Cluster Selected | ▼ | | |\n| | | Cancel | Add | |\n\n*Figure 8-4 Fibre Channel host configuration*", - "page_start": 350, - "page_end": 350, - "source_file": "sg247938.pdf" - }, - { - "text": "I am aware of the above [framework] [specific] contract, especially Articles [I.10 and II.13] concerning intellectual property rights and exploitation of the results and I confirm that I transferred all the relevant rights to [*insert name of contractor or other intermediary right holder*].\n\nI declare that [I have received full remuneration] [I agreed to receive remuneration by [*insert date*]].\n\n[As creator, I also confirm that I do not object to the following:\n\n- (a) that my name be mentioned or not mentioned when the results are presented to the public;\n- (b) that the results be divulged or not after they have been delivered in their final version to the contracting authority;\n- (c) that the results be adapted, provided that this is done in a manner which is not prejudicial to my honour or reputation.]\n\nDate, place, signature", - "page_start": 48, - "page_end": 48, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "# **Adding a Fibre Channel port**\n\nTo add a Fibre Channel port, complete the following steps:\n\n- 1. Click **Fibre Channel Port** (see Figure 8-52 on page 362). The Add Fibre Channel Ports window opens (see Figure 8-53).\n\n| Add Fibre Channel (SCSI) Ports |\n| --- |\n| Host Name: |\n| ITSO- |\n| VMHOST-02 |\n| Fibre Channel (SCSI) Ports |\n| Add Port to List Rescan . |\n| Port Definitions |\n| You have not added any WWPNs yet. 
|\n| Cancel Add Ports to Host |\n\n*Figure 8-53 Add Fibre Channel Ports window*\n\n- 2. Click the drop-down menu to display a list of all discovered Fibre Channel WWPNs. If the WWPN of your host is not available in the menu, enter it manually or check the SAN zoning to ensure that connectivity is configured. Then, rescan storage from the host.", - "page_start": 384, - "page_end": 384, - "source_file": "sg247938.pdf" - }, - { - "text": "- 10.If the host that needs access to the migrated data is not configured, select **Add Host** to begin the Add Host wizard. Enter the host connection type, name, and connection details. Optionally, click **Advanced** to modify the host type and I/O group assignment. Figure 9-9 shows the Add Host wizard with the details completed.\nFor more information about the Add Host wizard, see Chapter 8, \"Hosts\" on page 317.\n\n| Add Host | | | | × |\n| --- | --- | --- | --- | --- |\n| Required Fields | | | | |\n| Name: | ISCSI HOST | | | |\n| Host connections: | ISCSI (SCSI) | ー | | |\n| Site: | None | ▼ | | |\n| Host IQN: | 24532 | | ① Θ | |\n| Optional Fields | | | | |\n| CHAP authentication: | | | | |\n| CHAP secret: | Enter 1 to 79 characters | | | |\n| CHAP username: | Enter 1 to 31 characters | | | |\n| Host type: | Generic | ৰ | | |\n| I/O groups: | All | ৰ | | |\n| Host cluster: | No Host Cluster Selected | ▼ | | |\n| | | Cancel | Add | |\n\n*Figure 9-9 If not already defined, you can create a host during the migration process*\n\n- 11.Click **Add**. The host is created and is now listed in the Configure Hosts window, as shown in Figure 9-8 on page 394. Click **Next** to proceed.\n- 12.The next window lists the new volumes and enables you to map them to hosts. The volumes are listed with names that were automatically assigned by the system. 
The names can be changed to reflect something more meaningful to the user by selecting the volume and clicking **Rename** in the **Actions** menu.", - "page_start": 416, - "page_end": 416, - "source_file": "sg247938.pdf" - }, - { - "text": "| Add Host | | | | × |\n| --- | --- | --- | --- | --- |\n| Name: | Windows-Host-01 | | | |\n| Host connections: | Fibre Channel (SCSI) | | | |\n| Host port (WWPN): | 2100000E1E09E3E9 | | +) (-) | |\n| | 2100000E1E30E60F | C - | (ન) ૯ | |\n| Optional Fields | | | | |\n| Host type: | Generic | | | |\n| I/O groups: | | | | |\n| Host cluster: | HP/UX | | | |\n| | OpenVMS | | | |\n| | TPGS | | | |\n| | VVOL | | Add | |\n\n*Figure 8-6 Host type selection*\n\n- 6. Click **Add** to create the host object.\n- 7. Click **Close** to return to the host window. Repeat these steps for all of your Fibre Channel hosts. Figure 8-7 shows the **All Hosts** window after creating a second host.\n\n| + Add Host | 三 Actions ▼ | > | | | Default | > Contains | Filter |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| Name | | Status | Host Type | # of Ports | Host Mappings | Host Cluster ID | Host Cluster Nam ₪ |\n| Windows-Host-01 | | V Online | Generic | 2 | No | | |\n\n*Figure 8-7 Hosts view after creating a host*\n\nAfter you complete the adding of Fibre Channel hosts, see Chapter 7, \"Volumes\" on page 241 to create volumes and map them to the created hosts.\n\n# **Creating iSCSI hosts**\n\nWhen creating an iSCSI attached host, consider the following points:\n\n- iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This design reduces the need for multipathing support in the iSCSI host.\n- - The IQN of the host is added to a Storwize V7000 host object in the same way that you add FC WWPNs.\n- -Host objects can have WWPNs and IQNs.", - "page_start": 352, - "page_end": 352, - "source_file": "sg247938.pdf" - }, - { - "text": "#### First Paragraph\n\nIntroduce yourself, and explain why you are writing the letter. 
If you are responding to a job advertisement, state which advertisement you are responding to, and indicate where you found it.\n\n#### For example:\n\n\"I would like to apply for the position of Graphic Designer, as advertised in the Career Times on 1 March 2015.\"\n\nIf possible, mention a mutual contact or acquaintance.\n\nFor example:\n\n\"Samantha Stevens mentioned that you are looking for an experienced Graphic Designer with a keen interest in the fashion industry.\"\n\n#### Second Paragraph\n\nMention your qualifications, skills and experience, and relate them to the needs of the company. Give relevant examples of how you have used your skills in the past to perform similar tasks and responsibilities to those set out in the job description.\n\n#### Third Paragraph\n\nExplain why you want to work for this organisation in particular. Where relevant, explain any gaps in your CV. If you don't have the required academic qualifications, for example, you can explain how your practical work experience makes up for it.\n\n#### Fourth paragraph\n\nMention any documents or attachments that you have included with your cover letter, and state your availability for an interview.\n\n#### Close\n\nThank the recipient for taking the time to read your letter, and sign off with a professional greeting, such as \"Yours sincerely\" or \"Kind regards\", followed by your full name, telephone number and e-mail address.", - "page_start": 46, - "page_end": 46, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "By default, your chats will be arranged along the left-hand side of the chat panel, with the most recent messages at the top. You can right-click on any chat and select \"Pin,\" which will keep it at the top of your list for quick access.\n\nWhen you create group chats you can edit the name of the group by selecting the pen symbol next to the group icon in the chat. 
This will help you give it context and make it easier to find.\n\n# **Chat Teams and channels**\n\nWhen you are invited to a new Team, it will automatically appear on the left panel along with all its associated channels. You can choose to \"show\" the most relevant chanels and \"hide\" the rest.", - "page_start": 3, - "page_end": 3, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "- 2. Confirm that the window displays the correct list of hosts that you want to remove by entering the number of hosts to remove, and clicking **Delete** (see Figure 8-43).\n\n| × Remove Host |\n| --- |\n| Removing host(s) cannot be undone. Are you sure you want to continue? |\n| You selected 1 host to remove. Verify the host to remove. |\n| ITS0-VMHOST-02 |\n| Verify the number of hosts that you are removing: |\n| 1 |\n| Remove the hosts even if volumes are mapped to them. These volumes will no longer be accessible to the hosts. |\n| Cancel Remove |\n\n*Figure 8-43 Confirm the removal of the host*\n\n- 3. If the host that you are removing has volumes mapped to it, force the removal by selecting the **Remove the hosts even if volumes are mapped to them** option in the lower part of the window. By selecting this option, the host is removed and it no longer can access any volumes on this system.\n- 4. After the task is completed, click **Close**.", - "page_start": 377, - "page_end": 377, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf", - "query": "How can I notify a collegue mentionned in a chat message in Teams ?", - "target_page": 5, - "target_passage": "Tag a teammate in a message by typing the @ symbol followed by their name. 
They will receive a special notification calling for their attention.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "By default, your chats will be arranged along the left-hand side of the chat panel, with the most recent messages at the top. You can right-click on any chat and select \"Pin,\" which will keep it at the top of your list for quick access.\n\nWhen you create group chats you can edit the name of the group by selecting the pen symbol next to the group icon in the chat. This will help you give it context and make it easier to find.\n\n# **Chat Teams and channels**\n\nWhen you are invited to a new Team, it will automatically appear on the left panel along with all its associated channels. You can choose to \"show\" the most relevant chanels and \"hide\" the rest.", - "page_start": 3, - "page_end": 3, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "# **Connect through messages**\n\nWhether you're in a meeting, channel, or a chat, your messaging box will look the same.\n\n# **Compose**\n\n- **Format** your messages, add bullet points, charts or hyperlinks.\n- **Mark as important** to call attention to specific messages.\n- **Attach files** to share with your teammates.\n- **Include gifs**, emojis, stickers to bring lightness to your conversations.\n\n# **Respond**\n\n- **Tag a teammate** in a message by typing the **@ symbol** followed by their name. They will receive a special notification calling for their attention. 
**@**\n- React to individual messages or **quote** them in a response.\n\n**Tip** Going into format mode will prevent your message from sending when you hit [Enter], so it's a great way to draft and preview messages before sending them.\n\n**Tip** If you want to revisit an important message later, hover on that message, select the three d , then choose \"Save.\" Saved messages will be found under your profile picture dropdown menu.", - "page_start": 4, - "page_end": 4, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "| Add SNMP Server | | | |\n| --- | --- | --- | --- |\n| Server IP: | 10.18.228.61 | | |\n| Port: | 162 | | |\n| Community: | public | | |\n| Events: | × Error Warning Info | | |\n| | | Cancel | Add |\n\n*Figure 13-63 Add SNMP Server*\n\n# **13.7.4 Syslog notifications**\n\nThe syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog messages that notify personnel about an event.\n\nYou can configure a syslog server to receive log messages from various systems and store them in a central repository by entering the following information (see Figure 13-64 on page 725):\n\n- -IP Address\nThe IP address for the syslog server.\n\n- -Facility\nThe facility determines the format for the syslog messages. The facility can be used to determine the source of the message.\n\n- -Message Format\nThe message format depends on the facility. 
The system can transmit syslog messages in the following formats:\n\n- The concise message format provides standard detail about the event.", - "page_start": 745, - "page_end": 745, - "source_file": "sg247938.pdf" - }, - { - "text": "**1**\n\n**2**\n\n**3**\n\n**4**\n\n**5**\n\n**6**\n\n**7**\n\n**8**\n\n# **Getting around**\n\nNavigate Teams using the menu along the left side and the top bar of your Teams desktop app.\n\n## **Activity**\n\n**1**\n\n**2**\n\n**3**\n\n**4**\n\n**5**\n\nFind notifications for all recent actions to stay on top of things. You can manage your notifications according to your preferences.\n\n## **Chat**\n\nMessage someone or a group of people. This tab brings up the list of all your chats.\n\n## **Teams**\n\nCreate teams and channels to gather people together in focused spaces with conversations and files. This tab brings up a list of all the teams you are a part of.\n\n## **Calendar**\n\nBring up your calendar to view, create, and respond to meetings.\n\n## **Calls**\n\nStart video and audio calls by dialing a phone number or placing a call over the internet. View your call history and voicemail.\n\n### **Files 6**\n\n**9 10**\n\nFiles shared in chats, meetings, or channels are consolidated under this tab. Files will appear in a list view and can be sorted by type, name, date, or location.\n\n### **Apps**\n\n**7**\n\n**8**\n\n**9**\n\n**10**\n\nSearch for, choose, and integrate apps to optimize how you work in Teams. Apps can appear in chat, channels, or meetings.\n\n### **Help**\n\nLearn more about Teams with articles and training content. 
Stay up to date with the latest features, and report problems when things aren't working out.\n\n### **Search**\n\nSearch for people, files, meetings, or conversations in Teams, then filter results to find just what you need.\n\n**Profile**\n\nSelecting your profile picture shows you a menu where you can customize your profile, find saved messages, or set your status and a message people can see when they try to reach you.", - "page_start": 1, - "page_end": 1, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "- The expanded format provides more details about the event.\n- -Event Notifications\n\nConsider the following points about event notifications:\n\n- Select **Error** if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.\n**Important:** Browse to **Recommended Actions** to run the fix procedures on these notifications.\n\n- Select **Warning** if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine whether any corrective action is necessary.\n**Important:** Browse to **Recommended Actions** to run the fix procedures on these notifications.\n\n- Select **Info** if you want the user to receive messages about expected events. No action is required for these events.\n\n| SNMP | Syslog | A syslog server receives log messages from various systems and stores them in a central repository. Use this panel to define, modify, or remove syslog |\n| --- | --- | --- |\n| Syslog | servers. | |\n| | Syslog Servers | Notifications |\n| | IP Address | Facility Message Format Error Warning Info |\n| | 10.18.228.90 | Level 0 ▶ Concise > 0 0 �Θ |\n| | 6 Changes to the syslog servers are pending. 
| |\n| | Apply Changes | |\n\n*Figure 13-64 Syslog configuration*\n\nTo remove a syslog server, click the Minus sign (**-**).\n\nTo add another syslog server, click the Plus sign (**+**).\n\nThe syslog messages can be sent in concise message format or expanded message format.\n\nExample 13-4 shows a compact format syslog message.\n\n*Example 13-4 Compact syslog message example*\n\n```\nIBM2076 #NotificationType=Error #ErrorID=077102 #ErrorCode=1091 #Description=Node\nDouble fan failed #ClusterName=V7000G2_1 #Timestamp=Wed Jul 02 08:00:00 2017 BST\n#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=120\n```\nExample 13-5 shows an expanded format syslog message.\n\n#### *Example 13-5 Full format syslog message example*\n\n```\nIBM2076 #NotificationType=Error #ErrorID=077102 #ErrorCode=1091 #Description=Node\nDouble fan failed #ClusterName=V7000G2_1 #Timestamp=Wed Jul 02 08:00:00 2017 BST\n#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=120 #ObjectID=2\n#NodeID=2 #MachineType=2076624#SerialNumber=1234567 #SoftwareVersion=8.1.0.0\n```", - "page_start": 746, - "page_end": 746, - "source_file": "sg247938.pdf" - }, - { - "text": "From this window, you can view and configure a syslog server to receive log messages from various systems and store them in a central repository by entering the following information:\n\n- -IP Address\nThe IP address for the syslog server.\n\n- -Facility\nThe facility determines the format for the syslog messages. The facility can be used to determine the source of the message.\n\n- -Message Format\nThe message format depends on the facility. 
The system can transmit syslog messages in the following formats:\n\n- The concise message format provides standard detail about the event.\n- The expanded format provides more details about the event.\n- -Event Notifications\n\nConsider the following points about event notifications:\n\n- Select **Error** if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.\n**Important:** Browse to **Recommended Actions** to run the fix procedures on these notifications.\n\n- Select **Warning** if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine whether any corrective action is necessary.\n**Important:** Browse to **Recommended Actions** to run the fix procedures on these notifications.\n\n- Select **Info** if you want the user to receive messages about expected events. No action is required for these events.\nTo remove a syslog server, click the Minus sign (**-**). 
To add another syslog server, click the Plus sign (**+**).\n\nThe syslog messages can be sent in concise message format or expanded message format.\n\nExample 5-1 shows a compact format syslog message.\n\n*Example 5-1 Compact syslog message example*\n\n```\nIBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node\nCPU fan failed #ClusterName=V7kCluster1 #Timestamp=Wed Oct 02 08:00:00 2018 BST\n#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100\n```\nExample 5-2 shows an expanded format syslog message.\n\n#### *Example 5-2 Full format syslog message example*\n\n```\nIBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node\nCPU fan failed #ClusterName=V7kCluster1 #Timestamp=Wed Oct 02 08:00:00 2018 BST\n#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2\n#NodeID=2 #MachineType=2076524#SerialNumber=1234567 #SoftwareVersion=8.2.1.0\n```", - "page_start": 186, - "page_end": 186, - "source_file": "sg247938.pdf" - }, - { - "text": "From this window (see Figure 5-54 on page 163), you can view and configure an SNMP server to receive various informational, error, or warning notifications by setting the following information:\n\n- -IP Address\nThe address for the SNMP server.\n\n- -Server Port\nThe remote port number for the SNMP server. The remote port number must be a value of 1 - 65535.\n\n- -Community\nThe SNMP community is the name of the group to which devices and management stations that run SNMP belong.\n\n- -Event Notifications\nConsider the following points about event notifications:\n\n- Select **Error** if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.\n**Important:** Browse to **Recommended Actions** to run the fix procedures on these notifications.\n\n- Select **Warning** if you want the user to receive messages about problems and unexpected conditions. 
Investigate the cause immediately to determine any corrective action.\n**Important:** Browse to **Recommended Actions** to run the fix procedures on these notifications.\n\n- Select **Info** if you want the user to receive messages about expected events. No action is required for these events.\nTo remove an SNMP server, click the Minus sign (**-**). To add another SNMP server, click the Plus sign (**+**).\n\n# **Syslog notifications**\n\nThe syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog messages that notify personnel about an event. You can use the Syslog pane to view the Syslog messages that are sent by the IBM Storwize V7000. To view the Syslog configuration, use the System pane and point to **Settings** and click **Notification** → **Syslog** (see Figure 5-55).\n\n| SNMP | Syslog | A syslog server receives log messages from various systems and stores them in a central repository. Use this panel to define, modify, or remove syslog | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Syslog | servers. | | | | | |\n| | Syslog Servers | | | | | |\n| | | Notifications | | | | |\n| | IP Address | Warning Info | Facility | Message Format | Error | |\n| | 10.18.228.10 | ▶ | Level 7 | Expanded | > | € G |\n\n*Figure 5-55 Setting Syslog messages*", - "page_start": 185, - "page_end": 185, - "source_file": "sg247938.pdf" - }, - { - "text": "*Figure 8 – Error message dialog.*", - "page_start": 41, - "page_end": 41, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "Enable the SMS notification in the app. When one or more SMS messages are received on the mobile phone, the watch will receive one or more SMS reminders at the same time.\n\n1.5.3. 
Other application message notifications:\n\nTurn on the corresponding application message notification in the app, such as WeChat, QQ, Outlook, Facebook and other applications. When the mobile phone receives one/multiple application message notifications, the watch will receive one/multiple corresponding message reminders at the same time.\n\n#### **1.6 Frequently used contacts**\n\nThe watch binds to the app, and once you allow the watch to access the phone book of your mobile phone, you can synchronize the contacts of your mobile phone to the smartwatch.\n\n#### **1.7 Fitness data**\n\nFitness data is turned on by default. When you enter the fitness data interface and scroll up the screen, the smartwatch will display the current data of steps, distance, and calories. The data will be wiped out at 00:00 every day in the morning.\n\n#### **1.8 Sports modes** (walking, running, cycling, rope skipping, badminton, basketball, football)\n\n1.8.1 Select the corresponding exercise mode, click the \"Start\" button on the screen to start the exercise; click the \"Start\" button again to pause the recording of the exercise; click the \"End\" button to end the recording and save the data.\n\n1.8.2 The data can only be saved when the recording of the exercise is more than 1 minute; if the recording time is less than 1 minute, the smartwatch will remind you that the data is too little to be saved.\n\n#### **1.9 Heart rate**\n\nAfter wearing the smartwatch correctly, you can measure your heart rate when you enter the heart rate function. If you don't wear the smartwatch properly, it will remind you to wear it firmly for the measurement.\n\n#### **1.10 ECG**\n\nAfter wearing the smartwatch correctly and entering the ECG function (you need to turn on the ECG interface in the app), you can take a single measurement at a time. The data of the ECG will be saved in the mobile phone. 
This function should be used with the app.\n\n#### **2.0 My QR code**\n\nConnect the watch to the APP, find My QR Code in the APP, select WeChat/QQ/Alipay and other \"Receive money QR code\" to sync to the watch (Please follow the instructions of the app to operate the function).\n\n#### **2.1 Remote control music**", - "page_start": 3, - "page_end": 3, - "source_file": "6126797.pdf" - }, - { - "text": "# **2.7.3 Simple Network Management Protocol traps**\n\nSNMP is a standard protocol for managing networks and exchanging messages. The IBM Spectrum Virtualize can send SNMP messages that notify personnel about an event. You can use an SNMP manager to view the SNMP messages that IBM Spectrum Virtualize sends. You can use the management GUI or the IBM Storwize V7000 CLI to configure and modify your SNMP settings.\n\nYou can use the Management Information Base (MIB) file for SNMP to configure a network management program to receive SNMP messages that are sent by the IBM Spectrum Virtualize.\n\n# **2.7.4 Syslog messages**\n\nThe syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be IPv4 or IPv6.\n\nIBM Storwize V7000 can send syslog messages that notify personnel about an event. The event messages can be sent in expanded or concise format. You can use a syslog manager to view the syslog messages that IBM Storwize V7000 sends.\n\nIBM Spectrum Virtualize uses the User Datagram Protocol (UDP) to transmit the syslog message. You can use the management GUI or the IBM Storwize V7000 CLI to configure and modify your syslog settings.\n\n# **2.7.5 Call Home email**\n\nThe Call Home feature transmits operational and error-related data to you and IBM through an SMTP server connection in the form of an event notification email. When configured, this function alerts IBM service personnel about hardware failures and potentially serious configuration or environmental issues. 
You can use the Call Home function if you have a maintenance contract with IBM or if the IBM Storwize V7000 is within the warranty period.\n\nTo send email, at least one SMTP server must be configured. The system support as many as five more SMTP servers for backup purposes. The SMTP server must accept the relaying of email from the IBM Storwize V7000 clustered system IP address.\n\nUse the management GUI or the IBM Storwize V7000 CLI to configure the email settings, including contact information and email recipients. Set the reply address to a valid email address.\n\nSend a test email to check that all connections and infrastructure are set up correctly. The Call Home function can be disabled at any time by using the management GUI or the IBM Storwize V7000 CLI.", - "page_start": 62, - "page_end": 62, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "Botswana-constitution.pdf", - "query": "What are the 3 prerequisites to be elligible as president of Botswana ?", - "target_page": 18, - "target_passage": "A person shall be qualified for election as President if, and shall not be qualified unless, he or she- (a) is a citizen of Botswana by birth or descent; (b) has attained the age of 30 years; and (c) is qualified to be elected as a Member of the National Assembly", - "chunk_present": { - "presence": true, - "index": 9 - } - }, - "top_chunk": [ - { - "text": "her lawful detention shall not be held to be inconsistent with or in contravention of this\n\nsection.(3) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision-\n\n- (a) for the imposition of restrictions that are reasonably required in the interests of defence, public safety, public order, public morality or public health or the imposition of restrictions on the acquisition or use by any person of land or other property in Botswana and except 
so far as that provision or, as the case may be, the thing done under the authority thereof, is shown not to be reasonably justifiable in a democratic society;\n- (b) for the imposition of restrictions on the freedom of movement of any person who is not a citizen of Botswana;\n- (c) for the imposition of restrictions on the entry into or residence within defined areas of Botswana of persons who are not Bushmen to the extent that such restrictions are reasonably required for the protection or well-being of Bushmen;\n- (d) for the imposition of restrictions upon the movement or residence within Botswana of public officers; or\n- (e) .......\n\n(4) If any person whose freedom of movement has been restricted by order under such a provision as is referred to in subsection (3)(a) of this section (other than a restriction which is applicable to persons generally or to general classes of persons) so requests at any time during the period of that restriction not earlier than six months after the order was made or six months after he or she last made such request, as the case may be, his or her case shall be reviewed by an independent and impartial tribunal presided over by a person, qualified to be enrolled as an advocate in Botswana, appointed by the Chief Justice.\n\n(5) On any review by a tribunal in pursuance of this section of the case of a person whose freedom of movement has been restricted, the tribunal may make recommendations, concerning the necessity or expediency of continuing the restriction to the authority by which it was ordered but, unless it is otherwise provided by law, that authority shall not be obliged to act in accordance with any such recommendations.\n\n# **15. 
Protection from discrimination on the grounds of race, etc.**\n\n(1) Subject to the provisions of subsections (4), (5) and (7) of this section, no law shall make any provision that is discriminatory either of itself or in its effect.\n\n(2) Subject to the provisions of subsections (6), (7) and (8) of this section, no person shall be treated in a discriminatory manner by any person acting by virtue of any written law or in the performance of the functions of any public office or any public authority.\n\n(3) In this section, the expression \"discriminatory\" means affording different treatment to different persons, attributable wholly or mainly to their respective descriptions by race, tribe, place of origin, political opinions, colour, creed or sex whereby persons of one such description are subjected to disabilities or restrictions to which persons of another such description are not made subject or are accorded privileges or advantages which are not accorded to persons of another such description.\n\n(4) Subsection (1) of this section shall not apply to any law so far as that law makes provision-\n\n- (a) for the appropriation of public revenues or other public funds;\n- (b) with respect to persons who are not citizens of Botswana;\n- (c) with respect to adoption, marriage, divorce, burial, devolution of property on death or other matters of personal law;", - "page_start": 12, - "page_end": 12, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (d) if he or she is elected as Speaker;\n- (e) if he or she is removed from office by a resolution of the Assembly supported by the votes of not less than two-thirds of all the Members of the Assembly; or\n- (f) when the Assembly first sits after any dissolution of Parliament.\n\n# **61. 
Qualifications for election to National Assembly**\n\nSubject to the provisions of section 62 of this Constitution, a person shall be qualified to be elected as a Member of the National Assembly if, and shall not be qualified to be so elected unless-\n\n- (a) he or she is a citizen of Botswana;\n- (b) he or she has attained the age of 18 years;\n- (c) he or she is qualified for registration as a voter for the purposes of the election of the Elected Members of the National Assembly and is so registered; and\n- (d) he or she is able to speak, and, unless incapacitated by blindness or other physical cause, to read English well enough to take an active part in the proceedings of the Assembly.\n\n# **62. Disqualifications for membership of National Assembly**\n\n(1) No person shall be qualified to be elected as a Member of the National Assembly who-\n\n- (a) is, by virtue of his or her own act, under any acknowledgement of allegiance, obedience or adherence to a foreign power or state;\n- (b) has been declared insolvent or adjudged or otherwise declared bankrupt under any law for the time being in force in Botswana and has not been discharged, or has made a composition with his or her creditors and has not paid his or her debts in full;\n- (c) is certified to be insane or otherwise adjudged or declared to be of unsound mind under any law for the time being in force in Botswana;\n- (d) is a Member of the Ntlo ya Dikgosi;\n- (e) subject to such exceptions as may be prescribed by Parliament, holds any public office, or is acting in any public office by virtue of a contract of service expressed to continue for a period exceeding six months;\n- (f) is under sentence of death imposed on him or her by a court in any part of the Commonwealth, or is under a sentence of imprisonment (by whatever name called) exceeding six months imposed on him or her by such a court or substituted by competent authority for some other sentence imposed on him or her by such a court;\n- (g) holds, 
or is acting in, any office the functions of which involve any responsibility for, or in connection with, the conduct of any elections to the Assembly or the compilation or revision of any electoral register for the purposes of such elections.\n\n(2) Parliament may provide that a person shall not be qualified for election to the National Assembly for such period (not exceeding five years) as may be prescribed if he or she is convicted of any such offence connected with elections to the Assembly as may be prescribed.\n\n(3) For the purposes of this section two or more terms of imprisonment that are required to be served consecutively shall be regarded as a single term of imprisonment for the aggregate period of those terms, and no account shall be taken of a sentence of imprisonment imposed as an alternative to or in default of the payment of a fine.\n\n# **63. Constituencies**\n\nBotswana shall be divided into as many constituencies as there are Elected Members of the National Assembly and each of those constituencies shall return one Member to the National Assembly.", - "page_start": 27, - "page_end": 27, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "# **32. 
Election of President after dissolution of Parliament**\n\n(1) Whenever Parliament is dissolved an election shall be held to the office of President in such manner as is prescribed by this section and, subject thereto, by or under an Act of Parliament.\n\n(2) Nominations in the election of a President shall be delivered to the returning officer on such day and at such time as may be prescribed by or under any law for the time being in force in Botswana; the nomination of a candidate in an election of a President shall not be valid unless it is supported, in such manner as may be prescribed by or under an Act of Parliament, by not less than 1000 persons registered as voters for the purpose of elections to the Assembly.\n\n(3) The following provisions shall then apply-\n\n- (a) a person nominated as a Parliamentary candidate may, at the time of his or her nomination and subject to the provisions of paragraph (b), declare in such manner as may be prescribed by or under an Act of Parliament which of the candidates in the election of President he or she supports, but the nomination of a Parliamentary candidate shall be valid notwithstanding that the nomination paper does not contain such a declaration;\n- (b) such a declaration shall not be made in relation to any Presidential candidate unless that candidate has signified, in such manner as may be prescribed by or under an Act of Parliament, his or her consent to the making of a declaration in his or her favour by that Parliamentary candidate;\n- (c) where the Parliamentary election is contested in any constituency a poll shall be taken in that constituency at which the votes shall be given by ballot, and for the purposes of that poll any Parliamentary candidate who declared support in accordance with paragraph (a) for a particular Presidential candidate shall use the same voting colour and symbol, if any, as may have been allocated under any law for the time being in force in Botswana to that Presidential candidate for 
the purposes of the Presidential election;\n- (d) the returning officer shall declare to be elected as President any candidate for whom support has been declared in accordance with paragraph (a) above by not less than such number of persons elected as Members of the National Assembly in the Parliamentary election as corresponds to more than half the total number of seats for Elected Members in the Assembly, and if there is no such person the returning officer shall declare that no candidate has been elected.\n\n(4) Parliament may make provision whereby the time for nominating Presidential candidates may be extended in the event of there being no qualified candidate nominated at the expiration of the time for the delivery of such nominations.\n\n(5) Where, at the expiration of the time for the delivery of nominations in the election of a President, more than one qualified candidate is validly nominated and any of those candidates dies before the commencement of the poll in the Parliamentary election, the poll in the Parliamentary election shall be countermanded, fresh nominations of Parliamentary candidates shall take place in every constituency and a fresh election of a President shall be held in accordance with the foregoing provisions of this section.\n\n(6) Where-\n\n- (a) any candidate in an election of a President dies during the period commencing with the taking of the poll in the Parliamentary election and ending when the result of the election has been ascertained and that candidate would, but for his or her death, have been entitled to have been declared elected as President under subsection (3) of this section; or", - "page_start": 16, - "page_end": 16, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "for his or her education or welfare during any period ending not later than the date when he or she attains the age of 18 years;\n\n- (g) for the purpose of preventing the spread of an infectious or contagious disease;\n- (h) in the case of a 
person who is, or is reasonably suspected to be, of unsound mind, addicted to drugs or alcohol, or a vagrant, for the purpose of his or her care or treatment or the protection of the community;\n- (i) for the purpose of preventing the unlawful entry of that person into Botswana, or for the purpose of effecting the expulsion, extradition or other lawful removal of that person from Botswana, or for the purpose of restricting that person while he or she is being conveyed through Botswana in the course of his or her extradition or removal as a convicted prisoner from one country to another;\n- (j) to such extent as may be necessary in the execution of a lawful order requiring that person to remain within a specified area within Botswana or prohibiting him or her from being within such an area, or to such extent as may be reasonably justifiable for the taking of proceedings against that person relating to the making of any such order, or to such extent as may be reasonably justifiable for restraining that person during any visit that he or she is permitted to make to any part of Botswana in which, in consequence of any such order, his or her presence would otherwise be unlawful; or\n- (k) for the purpose of ensuring the safety of aircraft in flight.\n\n(2) Any person who is arrested or detained shall be informed as soon as reasonably practicable, in a language that he or she understands, of the reasons for his or her arrest or detention.\n\n(3) Any person who is arrested or detained-\n\n- (a) for the purpose of bringing him or her before a court in execution of the order of a court; or\n- (b) upon reasonable suspicion of his or her having committed, or being about to commit, a criminal offence under the law in force in Botswana,\n\nand who is not released, shall be brought as soon as is reasonably practicable before a court; and if any person arrested or detained as mentioned in paragraph (b) of this subsection is not tried within a reasonable time, then, without 
prejudice to any further proceedings that may be brought against him or her, he or she shall be released either unconditionally or upon reasonable conditions, including in particular such conditions as are reasonably necessary to ensure that he or she appears at a later date for trial or for proceedings preliminary to trial.\n\n(4) Any person who is unlawfully arrested or detained by any other person shall be entitled to compensation therefor from that other person.\n\n# **6. Protection from slavery and forced labour**\n\n(1) No person shall be held in slavery or servitude.\n\n(2) No person shall be required to perform forced labour.\n\n(3) For the purposes of this section, the expression \"forced labour\" does not include-\n\n- (a) any labour required in consequence of the sentence or order of a court;\n- (b) labour required of any person while he or she is lawfully detained that, though not required in consequence of the sentence or order of a court, is reasonably necessary in the interests of hygiene or for the maintenance of the place at which he or she is detained;\n- (c) any labour required of a member of a disciplined force in pursuance of his or her duties as such or, in the case of a person who has conscientious objections to service as a member of a naval, military or air force, any labour that that person is required by law to perform in place of such service;", - "page_start": 5, - "page_end": 5, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "for local government; and\n\n- (c) select a Member to the Ntlo ya Dikgosi for that region by election or in such other manner as the Regional Electoral College may agree.\n(5) Notwithstanding the provisions of section 77(1)(a) and subsections (2) and (4)(c) of this section, the areas of Ghanzi and Kgalagadi shall each have the option of either selecting one Member under subsection (2) of this section or of each selecting two regional Members under subsection (4)(c) of this section, but may not 
select Members under both subsections.\n\n# **79. Qualifications for membership of Ntlo ya Dikgosi**\n\n(1) A person shall be qualified to be appointed under section 77(1)(b) as a Member of the Ntlo ya Dikgosi if he or she-\n\n- (a) is a citizen of Botswana; and\n- (b) has attained the age of 21 years.\n\n(2) No person shall be qualified to be appointed, selected or designated as a Member of the Ntlo ya Dikgosi if he or she-\n\n- (a) is, by virtue of his or her own act, under any acknowledgement of allegiance, obedience or adherence to a foreign power or state;\n- (b) has been declared insolvent or adjudged or otherwise declared bankrupt under any law in force in any part of the Commonwealth or any country with a comparable legal system and has not been discharged, or has made a composition with his or her creditors and has not paid his or her debts in full;\n- (c) is certified insane or otherwise adjudged or declared to be of unsound mind under any law for the time being in force in Botswana;\n- (d) subject to such exceptions as may be prescribed by Parliament, holds any public office, or is acting in any public office by virtue of a contract of service expressed to continue for a period exceeding six months;\n- (e) is under sentence of death imposed on him or her by a court in any part of the Commonwealth or any country with a comparable legal system, or is under a sentence of imprisonment (by whatever name called) exceeding six months imposed on him or her by such a court or substituted by a competent authority for some other sentence imposed on him or her by such a court;\n- (f) holds, or is acting in, any office the functions of which involve any responsibility for, or in connection with, the conduct of any elections to the National Assembly or the compilation or revision of any electoral register for the purposes of such elections; or\n- (g) is disqualified for election to the National Assembly by virtue of provision made in pursuance of section 62 (2) of this 
Constitution.\n\n(3) For the purposes of this section, two or more terms of imprisonment that are required to be served consecutively shall be regarded as a single term of imprisonment for the aggregate period of those terms, and no account shall be taken of a sentence of imprisonment imposed as an alternative to or in default of the payment of a fine.\n\n(4) A Member of the Ntlo ya Dikgosi shall not, while he or she is such a Member, participate in party politics, but active participation in politics prior to being a Member of the Ntlo ya Dikgosi shall not bar any person from being such a Member.\n\n# **80. Oath of allegiance**\n\nEvery Member of the Ntlo ya Dikgosi shall, before taking his or her seat therein, take and subscribe before the Ntlo ya Dikgosi the oath of allegiance.\n\n# **81. Secretary to Ntlo ya Dikgosi**\n\nThere shall be a Secretary to the Ntlo ya Dikgosi whose office shall be an office in the public service.\n\n**82. Tenure of office of Members of Ntlo ya Dikgosi** (1) A Member of the Ntlo ya", - "page_start": 35, - "page_end": 35, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "person or authority.\n\n(3) Nothing in this section shall prevent Parliament from conferring functions on persons or authorities other than the President.\n\n# **48. 
Command of armed forces**\n\n(1) The supreme command of the armed forces of the Republic shall vest in the President and he or she shall hold the office of Commander in Chief.\n\n(2) The powers conferred on the President by subsection (1) of this section shall include-\n\n- (a) the power to determine the operational use of the armed forces;\n- (b) the power to appoint members of the armed forces, to make appointments on promotion to any office in the armed forces and to dismiss any member of the armed forces.\n\n(3) The President may, by directions in writing and subject to such conditions as he or she may think fit, delegate to any member of the armed forces any of the powers mentioned in subsection (2) of this section.\n\n(4) Parliament may regulate the exercise of the powers conferred by or under this section.\n\n# **49. Functions of Vice-President**\n\nThe Vice-President shall be the principal assistant of the President in the discharge of his or her executive functions and shall be responsible, under the directions of the President, for such business of the government of Botswana (including the administration of any department of Government) as the President may assign to him or her.\n\n# **50. 
Functions of Cabinet Ministers and Assistant Ministers**\n\n(1) The Cabinet shall be responsible for advising the President with respect to the policy of the Government and with respect to such other matters as may be referred to it by the President and shall, subject to the provisions of this Constitution, be responsible to the National Assembly for all things done by or under the authority of the President, Vice-President or any Minister in the execution of his or her office.\n\n(2) The President shall, so far as practicable and subject to the provisions of this Constitution, consult the Cabinet on matters of policy and the exercise of his or her functions.\n\n(3) The obligation of the President to consult his or her Cabinet and for the Cabinet to accept responsibility under this section shall not apply to the exercise by the President of his or her powers in relation to the appointment or removal of the Vice- President, Ministers and Assistant Ministers, the dissolution of Parliament, the Prerogative of Mercy, the assignment of responsibility to the Vice-President or any Minister and the specification of the functions of an Assistant Minister.\n\n(4) A Minister shall be responsible, under the direction of the President, for such business of the government of Botswana (including the administration of any department of Government) as the President may assign to him or her.\n\n(5) An Assistant Minister shall-\n\n- (a) assist the President or the Vice-President in the discharge of such of the functions of the office of President or Vice-President as the President may specify; or\n- (b) assist such Minister in the discharge of the functions assigned to him or her under subsection (4) of this section as the President may specify.\n\n# **51. 
Attorney-General**\n\n(1) There shall be an Attorney-General appointed by the President whose office shall be a public office.\n\n(2) A person shall not be qualified to be appointed to the Office of Attorney-", - "page_start": 23, - "page_end": 23, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "discharging the functions of his or her office and the infirmity is of such a nature that the President is unable to authorize another person under this section to perform the functions of his or her office; or\n\n- (b) the Vice-President is by reason of physical or mental infirmity unable to discharge the functions of his or her office,\nshall, in respect of any period for which it is in force, be conclusive and shall not be questioned in any court:\n\nProvided that any such certificate as is referred to in paragraph (a) of this subsection shall cease to have effect if the President notifies any person under subsection (4) of this section that he or she is about to resume the functions of the office of President.\n\n# **37. Oath of President**\n\nA person assuming the office of President shall, before entering upon the duties of that office, take and subscribe such oaths as may be prescribed by Parliament.\n\n## **38. Returning officer at elections of President**\n\n(1) The Chief Justice shall be the returning officer for the purposes of elections to the office of President.\n\n(2) Any question which may arise as to whether-\n\n- (a) any provision of this Constitution or any law relating to the election of a President under section 32 or 35 of this Constitution has been complied with; or\n(b) any person has been validly elected as President under those sections, shall be referred to and determined by the returning officer whose decision shall not be questioned in any court.\n\n# **39. 
Vice President**\n\n(1) There shall be a Vice-President who shall be appointed by the President from among the Elected Members of the National Assembly who are citizens of Botswana by birth or descent, which appointment shall be endorsed by the said Elected Members.\n\n(2) The Vice-President shall continue in office until a person elected at the next election of President under section 32 or 35 of this Constitution assumes office:\n\nProvided that the office of Vice-President shall become vacant-\n\n- (i) if the appointment of the holder of the office is revoked by the President; or\n- (ii) if the holder of the office ceases to be a Member of the National Assembly for any other reason than a dissolution of Parliament.\n\n(3) The Vice-President shall not enter upon the duties of his or her office unless he or she has taken and subscribed the oath of allegiance and such oath for the due execution of his or her office as may be prescribed by Parliament.\n\n(4) If the Vice-President is absent from Botswana or is incapable by reason of illness or any other cause of discharging the functions of his or her office, the President may appoint a person, from among the Members of the Assembly, to perform the functions of the office of Vice-President and any person so appointed may discharge those functions accordingly:\n\nProvided that a person appointed under this subsection shall cease to perform the functions of the office of Vice-President-\n\n- (i) if his or her appointment is revoked by the President;\n- (ii) if he or she ceases to be a Member of the Assembly otherwise than by reason of a dissolution of Parliament;\n- (iii) upon the assumption by any person of the office of President; or\n- (iv) upon the President giving him or her notice that the Vice-President is about to resume his or her functions.\n\n(5) Where the Vice-President is performing the functions of the office of President in accordance with section 35 or 36 of this Constitution he or she may appoint a 
person,", - "page_start": 20, - "page_end": 20, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "provisions of sections 3 to 16 (inclusive) of this Constitution.\n\n(3) If in any proceedings in any subordinate court any question arises as to the contravention of any of the provisions of sections 3 to 16 (inclusive) of this Constitution, the person presiding in that court may, and shall if any party to the proceedings so requests, refer the question to the High Court unless, in his or her opinion, the raising of the question is merely frivolous or vexatious.\n\n(4) Parliament may confer upon the High Court such powers in addition to those conferred by this section as may appear to be necessary or desirable for the purpose of enabling that court more effectively to exercise the jurisdiction conferred upon it by this\n\nsection.(5) Rules of court making provision with respect to the practice and procedure of the High Court for the purposes of this section may be made by the person or authority for the time being having power to make rules of court with respect to the practice and procedure of that court generally.\n\n## **19. 
Interpretation and savings**\n\n(1) In this Chapter, unless the context otherwise requires-\n\n**\"court\"** means any court of law having jurisdiction in Botswana other than a court established by a disciplinary law, and in sections 4 and 6 of this Constitution a court established by a disciplinary law;\n\n> **\"disciplinary law\"** means a law regulating the discipline of any disciplined force; **\"disciplined force\"** means-\n\n- (a) a naval, military or air force;\n- (b) a police force; or\n- (c) a prison service;\n\n**\"legal representative\"** means a person entitled to practise in Botswana as an advocate or attorney;\n\n**\"member\"**, in relation to a disciplined force, includes any person who, under the law regulating the discipline of that force, is subject to that discipline.\n\n(2) In relation to any person who is a member of a disciplined force raised under an Act of Parliament, nothing contained in or done under the authority of the disciplinary law of that force shall be held to be inconsistent with or in contravention of any of the provisions of this Chapter other than sections 4, 6 and 7.\n\n(3) In relation to any person who is a member of a disciplined force raised otherwise than as aforesaid and lawfully present in Botswana, nothing contained in or done under the authority of the disciplinary law of that force shall be held to be inconsistent with or in contravention of any of the provisions of this Chapter.\n\n## **CHAPTER III**\n\n### **Citizenship (ss 20-29: repealed)**\n\n**20 to 29 inclusive. [Repealed.]**\n\n# **CHAPTER IV**\n\n# **The Executive (ss 30-56)**\n\n### **PART I The President and the Vice-President (ss 30-41)**\n\n# **30. Office of President**\n\nThere shall be a President of the Republic of Botswana who shall be the Head of State.\n\n### **31. 
First President**\n\n(1) The first President shall be the person who immediately before 30th September, 1966 holds the office of Prime Minister under the Constitution.\n\n(2) The first President shall be deemed to have assumed office at the coming into operation of this Constitution.", - "page_start": 15, - "page_end": 15, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "has more than one residence in Botswana in the constituency in which he or she has his or her principal residence; or\n\n- (b) in the case of a person who does not have a residence in Botswana but is able to register in person, in the constituency in which he or she last resided, or in which he or she was born; or\n- (c) in the case of a person who is not resident in Botswana and is unable to register in person, at such place as may be prescribed by Parliament and registration at such place shall be treated as registration in the constituency in which he or she last resided, or in which he or she was born in Botswana.\n\t- (4) A person shall be entitled to be registered as a voter in one constituency only.\n\n(5) Every person who is registered in any constituency as a voter for the purposes of elections of the Elected Members of the National Assembly shall, unless he or she is disqualified by Parliament from voting in such elections on the grounds of his or her having been convicted of an offence in connection with the elections or on the grounds of his or her having been reported guilty of such an offence by the court trying an election petition or on the grounds of his or her being in lawful custody at the date of the election, be entitled so to vote in that constituency in accordance with the provisions made by or under a law in that behalf; and no other person may so vote.\n\n## **68. 
Tenure of office of Members**\n\n(1) The seat of an Elected Member or a Specially Elected Member of the National Assembly shall become vacant-\n\n- (a) upon the dissolution of Parliament;\n- (b) if he or she is absent from the sittings of the Assembly for such period and in such circumstances as may be prescribed in the rules of procedure of the Assembly;\n- (c) subject to the provisions of subsections (2) to (3) of this section, if any circumstances arise that, if he or she were not a Member of the Assembly, would cause him or her to be disqualified for election thereto.\n\n(2) If circumstances such as are referred to in paragraph (c) of the preceding subsection arise in relation to a Member of the Assembly by virtue of the fact that he or she is declared insolvent, adjudged to be of unsound mind, sentenced to death or imprisonment, or convicted of an election offence and it is open to the Member to appeal against the decision (either with the leave of the court or other authority or without such leave), he or she shall forthwith cease to perform his or her functions as a Member of the Assembly but, subject to the next following subsection, he or she shall not vacate his or her seat until the expiration of a period of 30 days thereafter:\n\nProvided that the Speaker may, at the request of the Member, from time to time extend that period for further periods of 30 days to enable the Member to pursue an appeal against the decision, so, however, that extensions of time exceeding in the aggregate 150 days shall not be given without the approval of the Assembly signified by resolution.\n\n(3) If, on the determination of any appeal, such circumstances continue to exist and no further appeal is open to the Member of the Assembly, or if, by reason of the expiration of any period for entering an appeal or notice thereof or the refusal of leave to appeal or for any other reason, it ceases to be open to the Member to appeal, he or she shall forthwith vacate his or her 
seat.\n\n(4) If at any time before the Member of the Assembly vacates his or her seat such circumstances as aforesaid cease to exist, his or her seat shall not become vacant by reason of those circumstances, and he or she may resume the performance of his or her functions as a Member of the Assembly.\n\n**69. Determination of questions as to membership of National Assembly**", - "page_start": 32, - "page_end": 32, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "citizenship in force at that time, shall be regarded as a citizen by descent.\n\n# **34. Tenure of office of President**\n\n(1) The President shall, subject to the provisions of this section, hold office for an aggregate period not exceeding 10 years beginning from the date of his or her first assumption of office of President after the commencement of this Act.\n\n(2) The President shall cease to hold the office of President if at any time during his or her tenure of office any circumstances arise that would, if he or she were not a member of the National Assembly, cause him or her to be disqualified for election\n\nthereto.(3) The President shall cease to hold office of President at the expiry of the period prescribed under subsection (1) of this section, or when the person elected at the next election of President following a dissolution of Parliament assumes office.\n\n# **35. 
Vacancy in office of President**\n\n(1) Whenever the President dies, resigns or ceases to hold office, the Vice- President shall assume office as President with effect from the date of the death, resignation or ceasing to be President.\n\n(2) If the office of President-\n\n- (a) becomes vacant in circumstances in which there is no Vice-President; or\n- (b) is vacant whilst the Vice-President is absent from Botswana or is, by reason of physical or mental infirmity unable to perform the functions of his or her office,\n\nthe functions of the office of President shall, until such time as a new President assumes office in accordance with this section or section 32 of this Constitution, be performed by such Minister as the Cabinet shall appoint. For the purposes of this subsection, a certificate of the Chief Justice that the Vice-President is by reason of physical or mental infirmity unable to discharge the functions of his or her office, shall, in respect of any period for which it is in force, be conclusive and shall not be questioned in any court.\n\n(3) Any person performing the functions of the office of President by virtue of subsection (1) or (2) of this section shall not exercise the power of the President to revoke the appointment of Vice-President or to dissolve Parliament.\n\n(4) If the office of President becomes vacant, the National Assembly shall, unless Parliament is dissolved, and notwithstanding that it may be prorogued, meet on the seventh day after the office of President becomes vacant, or on such earlier day as may be appointed by the Speaker, and shall elect a person to the office in such manner as is prescribed by the next following subsection and, subject thereto, by or under an Act of Parliament.\n\n- (5) In an election of a President under this section-\n- (a) the Speaker shall preside at the meeting and conduct the election;\n- (b) a person may be a candidate if and shall not be a candidate unless he or she has been nominated as a candidate 
with his or her consent prior to the sitting of the National Assembly at which the election takes place, by not less than 10 Members of the National Assembly entitled to vote in that election;\n- (c) at the election every Member of the Assembly except the Speaker shall be entitled to vote;\n- (d) the votes of the Members of the Assembly who are entitled to vote shall be given by ballot in such manner as not to disclose how any particular Member voted, and any person who receives the votes of more than one half of the total number of persons entitled to vote shall be declared elected as President;\n- (e) a person elected as President under this section shall assume the office of President on the day upon which he or she is declared to be elected;\n- (f) not more than three ballots shall be taken unless in the opinion of the Speaker the holding of further ballots is likely to result in the election of a President, in", - "page_start": 18, - "page_end": 18, - "source_file": "Botswana-constitution.pdf" - } - ] - }, - { - "references": { - "source_file": "Botswana-constitution.pdf", - "query": "What is the condition to be allowing to access the position of Director of public prosecution in Botswana ?", - "target_page": 25, - "target_passage": "A person shall not be qualified to be appointed to the Office of Director of Public Prosecutions unless he or she is qualified to be appointed to the Office of a Judge of the High Court", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "for his or her education or welfare during any period ending not later than the date when he or she attains the age of 18 years;\n\n- (g) for the purpose of preventing the spread of an infectious or contagious disease;\n- (h) in the case of a person who is, or is reasonably suspected to be, of unsound mind, addicted to drugs or alcohol, or a vagrant, for the purpose of his or her care or treatment or the protection of the community;\n- (i) for the 
purpose of preventing the unlawful entry of that person into Botswana, or for the purpose of effecting the expulsion, extradition or other lawful removal of that person from Botswana, or for the purpose of restricting that person while he or she is being conveyed through Botswana in the course of his or her extradition or removal as a convicted prisoner from one country to another;\n- (j) to such extent as may be necessary in the execution of a lawful order requiring that person to remain within a specified area within Botswana or prohibiting him or her from being within such an area, or to such extent as may be reasonably justifiable for the taking of proceedings against that person relating to the making of any such order, or to such extent as may be reasonably justifiable for restraining that person during any visit that he or she is permitted to make to any part of Botswana in which, in consequence of any such order, his or her presence would otherwise be unlawful; or\n- (k) for the purpose of ensuring the safety of aircraft in flight.\n\n(2) Any person who is arrested or detained shall be informed as soon as reasonably practicable, in a language that he or she understands, of the reasons for his or her arrest or detention.\n\n(3) Any person who is arrested or detained-\n\n- (a) for the purpose of bringing him or her before a court in execution of the order of a court; or\n- (b) upon reasonable suspicion of his or her having committed, or being about to commit, a criminal offence under the law in force in Botswana,\n\nand who is not released, shall be brought as soon as is reasonably practicable before a court; and if any person arrested or detained as mentioned in paragraph (b) of this subsection is not tried within a reasonable time, then, without prejudice to any further proceedings that may be brought against him or her, he or she shall be released either unconditionally or upon reasonable conditions, including in particular such conditions as are 
reasonably necessary to ensure that he or she appears at a later date for trial or for proceedings preliminary to trial.\n\n(4) Any person who is unlawfully arrested or detained by any other person shall be entitled to compensation therefor from that other person.\n\n# **6. Protection from slavery and forced labour**\n\n(1) No person shall be held in slavery or servitude.\n\n(2) No person shall be required to perform forced labour.\n\n(3) For the purposes of this section, the expression \"forced labour\" does not include-\n\n- (a) any labour required in consequence of the sentence or order of a court;\n- (b) labour required of any person while he or she is lawfully detained that, though not required in consequence of the sentence or order of a court, is reasonably necessary in the interests of hygiene or for the maintenance of the place at which he or she is detained;\n- (c) any labour required of a member of a disciplined force in pursuance of his or her duties as such or, in the case of a person who has conscientious objections to service as a member of a naval, military or air force, any labour that that person is required by law to perform in place of such service;", - "page_start": 5, - "page_end": 5, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "her lawful detention shall not be held to be inconsistent with or in contravention of this\n\nsection.(3) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision-\n\n- (a) for the imposition of restrictions that are reasonably required in the interests of defence, public safety, public order, public morality or public health or the imposition of restrictions on the acquisition or use by any person of land or other property in Botswana and except so far as that provision or, as the case may be, the thing done under the authority thereof, is shown not to be reasonably 
justifiable in a democratic society;\n- (b) for the imposition of restrictions on the freedom of movement of any person who is not a citizen of Botswana;\n- (c) for the imposition of restrictions on the entry into or residence within defined areas of Botswana of persons who are not Bushmen to the extent that such restrictions are reasonably required for the protection or well-being of Bushmen;\n- (d) for the imposition of restrictions upon the movement or residence within Botswana of public officers; or\n- (e) .......\n\n(4) If any person whose freedom of movement has been restricted by order under such a provision as is referred to in subsection (3)(a) of this section (other than a restriction which is applicable to persons generally or to general classes of persons) so requests at any time during the period of that restriction not earlier than six months after the order was made or six months after he or she last made such request, as the case may be, his or her case shall be reviewed by an independent and impartial tribunal presided over by a person, qualified to be enrolled as an advocate in Botswana, appointed by the Chief Justice.\n\n(5) On any review by a tribunal in pursuance of this section of the case of a person whose freedom of movement has been restricted, the tribunal may make recommendations, concerning the necessity or expediency of continuing the restriction to the authority by which it was ordered but, unless it is otherwise provided by law, that authority shall not be obliged to act in accordance with any such recommendations.\n\n# **15. 
Protection from discrimination on the grounds of race, etc.**\n\n(1) Subject to the provisions of subsections (4), (5) and (7) of this section, no law shall make any provision that is discriminatory either of itself or in its effect.\n\n(2) Subject to the provisions of subsections (6), (7) and (8) of this section, no person shall be treated in a discriminatory manner by any person acting by virtue of any written law or in the performance of the functions of any public office or any public authority.\n\n(3) In this section, the expression \"discriminatory\" means affording different treatment to different persons, attributable wholly or mainly to their respective descriptions by race, tribe, place of origin, political opinions, colour, creed or sex whereby persons of one such description are subjected to disabilities or restrictions to which persons of another such description are not made subject or are accorded privileges or advantages which are not accorded to persons of another such description.\n\n(4) Subsection (1) of this section shall not apply to any law so far as that law makes provision-\n\n- (a) for the appropriation of public revenues or other public funds;\n- (b) with respect to persons who are not citizens of Botswana;\n- (c) with respect to adoption, marriage, divorce, burial, devolution of property on death or other matters of personal law;", - "page_start": 12, - "page_end": 12, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "## **122. 
Remuneration of certain officers**\n\n(1) There shall be paid to the holders of the offices to which this section applies such salaries and such allowances as may be prescribed by Parliament.\n\n(2) The salaries and any allowances payable to the holders of the offices to which this section applies shall be a charge on the Consolidated Fund.\n\n(3) The salary payable to the holder of any office to which this section applies and his or her terms of office, other than allowances, shall not be altered to his or her disadvantage after his or her appointment.\n\n(4) Where a person's salary or terms of office depend upon his or her option, the salary or terms for which he or she opts shall, for the purposes of subsection (3) of this section, be deemed to be more advantageous to him or her than any others for which he or she might have opted.\n\n(5) This section applies to the offices of judge of the Court of Appeal, judge of the High Court, member of the Public Service Commission, member of the Judicial Service Commission, member of the Delimitation Commission, Auditor-General, Director of Public Prosecutions and Attorney-General.\n\n## **123. Public debt**\n\n(1) There shall be charged on the Consolidated Fund all debt charges for which Botswana is liable.\n\n(2) For the purposes of this section debt charges include interest, sinking fund charges, the repayment or amortization of debt, and all expenditure in connection with the raising of loans on the security of the revenues or the Consolidated Fund of the former Protectorate of Bechuanaland or Botswana, and the service and redemption of debt thereby created.\n\n### **124. 
Auditor-General**\n\n(1) There shall be an Auditor-General, whose office shall be a public office.\n\n(2) The public accounts of Botswana and of all officers, courts and authorities of the Government of Botswana shall be audited and reported on by the Auditor-General and for that purpose the Auditor-General or any person authorized by him or her in that behalf shall have access to all books, records, reports and other documents relating to those accounts:\n\nProvided that, if it is so provided by Parliament in the case of any body corporate directly established by law, the accounts of that body corporate shall be audited and reported on by such person as may be specified by or under that law.\n\n(3) The Auditor-General shall submit his or her reports to the Minister responsible for finance, who shall cause them to be laid before the National Assembly.\n\n(4) The Auditor-General shall perform such other duties and exercise such other powers in relation to the accounts of the Government or the accounts of other public authorities or other bodies as may be prescribed by or under any Act of Parliament.\n\n(5) In the exercise of his or her functions the Auditor-General shall not be subject to the direction or control of any other person or authority.\n\n# **CHAPTER IX Miscellaneous (ss 125-127)**\n\n## **125. 
Resignations**\n\n(1) Any person who is appointed or elected to any office established by this Constitution may resign from that office by writing under his or her hand addressed to the person or authority by whom he or she was appointed or elected:\n\nProvided that in the case of a person who holds office as President his or her resignation from that office shall be addressed to the Chief Justice, in the case of a person who holds office as Speaker or Deputy Speaker of the National Assembly his or her resignation from that office shall be addressed to the Assembly, in the case of an", - "page_start": 52, - "page_end": 52, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "holding or acting in such offices shall, subject to the provisions of sections 113 and 114 of this Constitution, vest in the President.\n\n(2) The offices to which this section applies are-\n\n- (a) Ambassador, High Commissioner or other principal representative of Botswana in any other country or accredited to any international organisation;\n- (b) Secretary to the Cabinet;\n- (c) Attorney-General;\n- (cA) Director of Public Prosecutions;\n- (d) Permanent Secretary;\n- (e) Commissioner of Police; and\n- (f) any other superscale office (other than an office to which this Constitution makes specific provision for appointment or an office to which appointment is made under the provisions of section 104 of this Constitution) which may be prescribed by Act of Parliament.\n\n**113. 
Tenure of office of Director of Public Prosecutions** (1) Subject to the provisions of this section, a person appointed as Director of Public Prosecutions shall hold office for a 5 year renewable term or until he or she attains the age of 60 years, whichever is the earlier.\n\n(2) A person holding the office of Director of Public Prosecutions may be removed from office only for inability to perform the functions of his or her office (whether arising from infirmity of body or mind or any other cause) or for misbehaviour or for incompetence and shall not be so removed except in accordance with the provisions of this section.\n\n(3) If the President considers that the question of removing a person holding the office of Director of Public Prosecutions from office ought to be investigated then-\n\n- (a) he or she shall appoint a tribunal which shall consist of a Chairman and not less than two other members, who hold or have held high judicial office; and\n- (b) the tribunal shall enquire into the matter and report on the facts thereof to the President and advise the President whether the person holding the office of Director of Public Prosecutions ought to be removed from office under this section for inability as aforesaid or for misbehaviour or for incompetence.\n- (4) Where a tribunal appointed under subsection (3) of this section advises the President that a person holding the office of Director of Public Prosecutions ought to be removed from office for inability as aforesaid or for misbehaviour or for incompetence, the President shall remove such person from office.\n\n(5) If the question of removing a person holding the office of Director of Public Prosecutions from office has been referred to a tribunal under this section, the President may suspend that person from performing the functions of his or her office, and any such suspension may at any time be revoked by the President and shall in any case cease to have effect if the tribunal advises the President that 
the person ought not to be removed from office.\n\n# **114. Tenure of office of Auditor-General**\n\n(1) Subject to the provisions of this section, a person holding the office of Auditor- General shall vacate his or her office when he or she attains the age of 60 years or such other age as may be prescribed by Parliament.\n\n(2) A person holding the office of Auditor-General may be removed from office only for inability to perform the functions of his or her office (whether arising from infirmity of body or mind or any other cause) or for misbehaviour and shall not be so removed except in accordance with the provisions of this section.\n\n(3) If the National Assembly resolves that the question of removing a person holding the office of Auditor-General from office under this section ought to be", - "page_start": 48, - "page_end": 48, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (d) if he or she is elected as Speaker;\n- (e) if he or she is removed from office by a resolution of the Assembly supported by the votes of not less than two-thirds of all the Members of the Assembly; or\n- (f) when the Assembly first sits after any dissolution of Parliament.\n\n# **61. Qualifications for election to National Assembly**\n\nSubject to the provisions of section 62 of this Constitution, a person shall be qualified to be elected as a Member of the National Assembly if, and shall not be qualified to be so elected unless-\n\n- (a) he or she is a citizen of Botswana;\n- (b) he or she has attained the age of 18 years;\n- (c) he or she is qualified for registration as a voter for the purposes of the election of the Elected Members of the National Assembly and is so registered; and\n- (d) he or she is able to speak, and, unless incapacitated by blindness or other physical cause, to read English well enough to take an active part in the proceedings of the Assembly.\n\n# **62. 
Disqualifications for membership of National Assembly**\n\n(1) No person shall be qualified to be elected as a Member of the National Assembly who-\n\n- (a) is, by virtue of his or her own act, under any acknowledgement of allegiance, obedience or adherence to a foreign power or state;\n- (b) has been declared insolvent or adjudged or otherwise declared bankrupt under any law for the time being in force in Botswana and has not been discharged, or has made a composition with his or her creditors and has not paid his or her debts in full;\n- (c) is certified to be insane or otherwise adjudged or declared to be of unsound mind under any law for the time being in force in Botswana;\n- (d) is a Member of the Ntlo ya Dikgosi;\n- (e) subject to such exceptions as may be prescribed by Parliament, holds any public office, or is acting in any public office by virtue of a contract of service expressed to continue for a period exceeding six months;\n- (f) is under sentence of death imposed on him or her by a court in any part of the Commonwealth, or is under a sentence of imprisonment (by whatever name called) exceeding six months imposed on him or her by such a court or substituted by competent authority for some other sentence imposed on him or her by such a court;\n- (g) holds, or is acting in, any office the functions of which involve any responsibility for, or in connection with, the conduct of any elections to the Assembly or the compilation or revision of any electoral register for the purposes of such elections.\n\n(2) Parliament may provide that a person shall not be qualified for election to the National Assembly for such period (not exceeding five years) as may be prescribed if he or she is convicted of any such offence connected with elections to the Assembly as may be prescribed.\n\n(3) For the purposes of this section two or more terms of imprisonment that are required to be served consecutively shall be regarded as a single term of imprisonment for the aggregate 
period of those terms, and no account shall be taken of a sentence of imprisonment imposed as an alternative to or in default of the payment of a fine.\n\n# **63. Constituencies**\n\nBotswana shall be divided into as many constituencies as there are Elected Members of the National Assembly and each of those constituencies shall return one Member to the National Assembly.", - "page_start": 27, - "page_end": 27, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "General unless he or she is qualified to be appointed to the Office of a Judge of the High Court.\n\n(3) The Attorney-General shall be the principal legal adviser to the Government.\n\n(4) A person holding the Office of Attorney-General shall vacate his or her office when he or she attains the age of 60 years or such other age as may be prescribed by Parliament.\n\n# **51A. Director of Public Prosecutions**\n\n(1) There shall be a Director of Public Prosecutions appointed by the President whose office shall be a public office and who shall be subject to the administrative supervision of the Attorney-General.\n\n(2) A person shall not be qualified to be appointed to the Office of Director of Public Prosecutions unless he or she is qualified to be appointed to the Office of a Judge of the High Court.\n\n(3) The Director of Public Prosecutions shall have power in any case in which he or she considers it desirable to do so-\n\n- (a) to institute and undertake criminal proceedings against any person before any court (other than a court martial) in respect of any offence alleged to have been committed by that person;\n- (b) to take over and continue any such criminal proceedings that have been instituted or undertaken by any other person or authority; and\n- (c) to discontinue, at any stage before judgment is delivered, any such criminal proceedings instituted or undertaken by himself or herself or any other person or authority.\n\n(4) The powers of the Director of Public Prosecutions under subsection 
(3) may be exercised by him or her in person or by officers subordinate to him or her acting in accordance with his or her general or special authority.\n\n(5) For the purposes of this section any appeal from any judgment in any criminal proceedings before any court, or any case stated or question of law reserved for the purpose of any such proceedings, to any other court shall be deemed to be part of those proceedings:\n\nProvided that the power conferred on the Director of Public Prosecutions by subsection (3)(c) of this section shall not be exercised in relation to any appeal by a person convicted in any criminal proceedings or to any case stated or question of law reserved at the instance of such person.\n\n(6) In the exercise of the functions vested in him or her by subsection (3) of this section the Director of Public Prosecutions shall not be subject to the direction or control of any other person or authority:\n\nProvided that-\n\n- (a) where any other person or authority has instituted criminal proceedings, nothing in this subsection shall prevent the withdrawal of those proceedings by or at the instance of that person or authority, and with the leave of the court; and\n- (b) before exercising his or her powers in relation to cases considered by the Attorney-General to be of national importance, the Director of Public Prosecutions shall consult the Attorney-General.\n\n# **52. Permanent Secretaries**\n\nWhere any Minister has been charged with responsibility for any department of Government, he or she shall exercise general direction and control over that department and, subject to such direction and control, the department shall be under the supervision of a Permanent Secretary whose office shall be a public office.\n\n## **53. 
Prerogative of Mercy**\n\nThe President may-", - "page_start": 24, - "page_end": 24, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "has more than one residence in Botswana in the constituency in which he or she has his or her principal residence; or\n\n- (b) in the case of a person who does not have a residence in Botswana but is able to register in person, in the constituency in which he or she last resided, or in which he or she was born; or\n- (c) in the case of a person who is not resident in Botswana and is unable to register in person, at such place as may be prescribed by Parliament and registration at such place shall be treated as registration in the constituency in which he or she last resided, or in which he or she was born in Botswana.\n\t- (4) A person shall be entitled to be registered as a voter in one constituency only.\n\n(5) Every person who is registered in any constituency as a voter for the purposes of elections of the Elected Members of the National Assembly shall, unless he or she is disqualified by Parliament from voting in such elections on the grounds of his or her having been convicted of an offence in connection with the elections or on the grounds of his or her having been reported guilty of such an offence by the court trying an election petition or on the grounds of his or her being in lawful custody at the date of the election, be entitled so to vote in that constituency in accordance with the provisions made by or under a law in that behalf; and no other person may so vote.\n\n## **68. 
Tenure of office of Members**\n\n(1) The seat of an Elected Member or a Specially Elected Member of the National Assembly shall become vacant-\n\n- (a) upon the dissolution of Parliament;\n- (b) if he or she is absent from the sittings of the Assembly for such period and in such circumstances as may be prescribed in the rules of procedure of the Assembly;\n- (c) subject to the provisions of subsections (2) to (3) of this section, if any circumstances arise that, if he or she were not a Member of the Assembly, would cause him or her to be disqualified for election thereto.\n\n(2) If circumstances such as are referred to in paragraph (c) of the preceding subsection arise in relation to a Member of the Assembly by virtue of the fact that he or she is declared insolvent, adjudged to be of unsound mind, sentenced to death or imprisonment, or convicted of an election offence and it is open to the Member to appeal against the decision (either with the leave of the court or other authority or without such leave), he or she shall forthwith cease to perform his or her functions as a Member of the Assembly but, subject to the next following subsection, he or she shall not vacate his or her seat until the expiration of a period of 30 days thereafter:\n\nProvided that the Speaker may, at the request of the Member, from time to time extend that period for further periods of 30 days to enable the Member to pursue an appeal against the decision, so, however, that extensions of time exceeding in the aggregate 150 days shall not be given without the approval of the Assembly signified by resolution.\n\n(3) If, on the determination of any appeal, such circumstances continue to exist and no further appeal is open to the Member of the Assembly, or if, by reason of the expiration of any period for entering an appeal or notice thereof or the refusal of leave to appeal or for any other reason, it ceases to be open to the Member to appeal, he or she shall forthwith vacate his or her 
seat.\n\n(4) If at any time before the Member of the Assembly vacates his or her seat such circumstances as aforesaid cease to exist, his or her seat shall not become vacant by reason of those circumstances, and he or she may resume the performance of his or her functions as a Member of the Assembly.\n\n**69. Determination of questions as to membership of National Assembly**", - "page_start": 32, - "page_end": 32, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "for local government; and\n\n- (c) select a Member to the Ntlo ya Dikgosi for that region by election or in such other manner as the Regional Electoral College may agree.\n(5) Notwithstanding the provisions of section 77(1)(a) and subsections (2) and (4)(c) of this section, the areas of Ghanzi and Kgalagadi shall each have the option of either selecting one Member under subsection (2) of this section or of each selecting two regional Members under subsection (4)(c) of this section, but may not select Members under both subsections.\n\n# **79. 
Qualifications for membership of Ntlo ya Dikgosi**\n\n(1) A person shall be qualified to be appointed under section 77(1)(b) as a Member of the Ntlo ya Dikgosi if he or she-\n\n- (a) is a citizen of Botswana; and\n- (b) has attained the age of 21 years.\n\n(2) No person shall be qualified to be appointed, selected or designated as a Member of the Ntlo ya Dikgosi if he or she-\n\n- (a) is, by virtue of his or her own act, under any acknowledgement of allegiance, obedience or adherence to a foreign power or state;\n- (b) has been declared insolvent or adjudged or otherwise declared bankrupt under any law in force in any part of the Commonwealth or any country with a comparable legal system and has not been discharged, or has made a composition with his or her creditors and has not paid his or her debts in full;\n- (c) is certified insane or otherwise adjudged or declared to be of unsound mind under any law for the time being in force in Botswana;\n- (d) subject to such exceptions as may be prescribed by Parliament, holds any public office, or is acting in any public office by virtue of a contract of service expressed to continue for a period exceeding six months;\n- (e) is under sentence of death imposed on him or her by a court in any part of the Commonwealth or any country with a comparable legal system, or is under a sentence of imprisonment (by whatever name called) exceeding six months imposed on him or her by such a court or substituted by a competent authority for some other sentence imposed on him or her by such a court;\n- (f) holds, or is acting in, any office the functions of which involve any responsibility for, or in connection with, the conduct of any elections to the National Assembly or the compilation or revision of any electoral register for the purposes of such elections; or\n- (g) is disqualified for election to the National Assembly by virtue of provision made in pursuance of section 62 (2) of this Constitution.\n\n(3) For the purposes of this 
section, two or more terms of imprisonment that are required to be served consecutively shall be regarded as a single term of imprisonment for the aggregate period of those terms, and no account shall be taken of a sentence of imprisonment imposed as an alternative to or in default of the payment of a fine.\n\n(4) A Member of the Ntlo ya Dikgosi shall not, while he or she is such a Member, participate in party politics, but active participation in politics prior to being a Member of the Ntlo ya Dikgosi shall not bar any person from being such a Member.\n\n# **80. Oath of allegiance**\n\nEvery Member of the Ntlo ya Dikgosi shall, before taking his or her seat therein, take and subscribe before the Ntlo ya Dikgosi the oath of allegiance.\n\n# **81. Secretary to Ntlo ya Dikgosi**\n\nThere shall be a Secretary to the Ntlo ya Dikgosi whose office shall be an office in the public service.\n\n**82. Tenure of office of Members of Ntlo ya Dikgosi** (1) A Member of the Ntlo ya", - "page_start": 35, - "page_end": 35, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "(2) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision-\n\n- (a) that is reasonably required in the interests of defence, public safety, public order, public morality, public health, town and country planning, the development and utilization of mineral resources, for the purpose of any census or in order to secure the development or utilization of any property for a purpose beneficial to the community;\n- (b) that is reasonably required for the purpose of protecting the rights or freedoms of other persons;\n- (c) that authorizes an officer or agent of the Government of Botswana, a local government authority or a body corporate established by law for a public purpose to enter on the premises of any person in order to inspect those premises or anything thereon for 
the purpose of any tax, rate or duty or in order to carry out work connected with any property that is lawfully on those premises and that belongs to that Government, authority or body corporate, as the case may be; or\n- (d) that authorizes, for the purpose of enforcing the judgment or order of a court in any civil proceedings, the search of any person or property by order of a court or entry upon any premises by such order,\n\nand except so far as that provision or, as the case may be, anything done under the authority thereof is shown not to be reasonably justifiable in a democratic society.\n\n# **10. Provisions to secure protection of law**\n\n(1) If any person is charged with a criminal offence, then, unless the charge is withdrawn, the case shall be afforded a fair hearing within a reasonable time by an independent and impartial court established or recognized by law.\n\n(2) Every person who is charged with a criminal offence-\n\n- (a) shall be presumed to be innocent until he or she is proved or has pleaded guilty;\n- (b) shall be informed as soon as reasonably practicable, in a language that he or she understands and in detail, of the nature of the offence charged;\n- (c) shall be given adequate time and facilities for the preparation of his or her defence;\n- (d) shall be permitted to defend himself or herself before the court in person or, at his or her own expense, by a legal representative of his or her own choice;\n- (e) shall be afforded facilities to examine in person or by his or her legal representative the witnesses called by the prosecution before the court, and to obtain the attendance and carry out the examination of witnesses to testify on his or her behalf before the court on the same conditions as those applying to witnesses called by the prosecution; and\n- (f) shall be permitted to have without payment the assistance of an interpreter if he or she cannot understand the language used at the trial of the charge,\n\nand except with his or 
her own consent the trial shall not take place in his or her absence unless he or she so conducts himself or herself as to render the continuance of the proceedings in his or her presence impracticable and the court has ordered him or her to be removed and the trial to proceed in his or her absence.\n\n(3) When a person is tried for any criminal offence, the accused person or any person authorized by him or her in that behalf shall, if he or she so requires and subject to payment of such reasonable fee as may be prescribed by law, be given within a reasonable time after judgment a copy for the use of the accused person of any record of the proceedings made by or on behalf of the court.", - "page_start": 8, - "page_end": 8, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "### own procedure.\n\n(14) Except as may be otherwise provided in its rules or procedure, the Commission may act notwithstanding any vacancy in its membership or the absence of any member and its proceedings shall not be invalidated by the presence or participation of any person not entitled to be present at or to participate in those proceedings.\n\n(15) Any decision of the Commission shall require the concurrence of a majority of all the members thereof.\n\n(16) A member of the Commission shall not, during the tenure of his or her office or during the three years immediately following such tenure, be eligible for appointment to any public office other than that of Ambassador, High Commissioner or other principal representative of Botswana in any other country or accredited to any international organization.\n\n### **110. 
Appointment, etc., of public officers**\n\n(1) Subject to the provisions of this section and of sections 111, 113 and 114 of this Constitution, power to appoint persons to hold or to act in any office in the public service, to exercise disciplinary control over persons holding or acting in such offices and to remove from such offices shall vest in such person or persons as may be prescribed by Act of Parliament.\n\n(2) The provisions of this section shall not apply in relation to the following offices, that is to say-\n\n- (a) the office of judge of the Court of Appeal or of the High Court;\n- (b) any office to which section 104 or 112 of the Constitution applies.\n\n(3) Before any person or persons as may have been prescribed under the provisions of subsection (1) exercise power to appoint to or to act in any public office any person who holds or is acting in any office the power to make appointments to which is vested by this Constitution in the President acting in accordance with the advice of the Judicial Service Commission such person shall consult with the Judicial Service Commission.\n\n### **111. 
Appeals to President**\n\n(1) Any person other than a member of the Botswana Police Force or the Prison Service who has been removed from office or subjected to any other punishment by the exercise of any powers conferred on any person under the provisions of section 110 of this Constitution may appeal to the Public Service Commission who may dismiss such appeal or allow it wholly or in part.\n\n(2) Subject to the provisions of subsection (3) every decision of the Public Service Commission under the provisions of this section shall be final.\n\n(3) Notwithstanding anything contained in subsection (2) if the Public Service Commission dismisses an appeal or allows it in part only the person who appealed may appeal to the President.\n\n(4) If any person appeals to the President in accordance with the provisions of subsection (3) of this section the President shall either dismiss the appeal or shall order that it be heard by a tribunal appointed by the President, the Chairman of which shall be a person who holds or has held high judicial office or is qualified to be appointed as a judge of the High Court.\n\n(5) If the President appoints a tribunal to hear an appeal in accordance with subsection (4) of this section the tribunal shall hear the appeal and shall advise the President whether or not the appeal should be allowed either wholly or in part, and the President shall act in accordance with that advice.\n\n### **112. 
Powers of President in relation to certain public offices**\n\n(1) The power to appoint a person to hold or act in offices to which this section applies and to remove from office and to exercise disciplinary control over persons", - "page_start": 47, - "page_end": 47, - "source_file": "Botswana-constitution.pdf" - } - ] - }, - { - "references": { - "source_file": "Botswana-constitution.pdf", - "query": "What are considered \"disciplined force\" according to Botswana constitution ?", - "target_page": 16, - "target_passage": "\"disciplined force\" means- (a) a naval, military or air force; (b) a police force; or (c) a prison service", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "provisions of sections 3 to 16 (inclusive) of this Constitution.\n\n(3) If in any proceedings in any subordinate court any question arises as to the contravention of any of the provisions of sections 3 to 16 (inclusive) of this Constitution, the person presiding in that court may, and shall if any party to the proceedings so requests, refer the question to the High Court unless, in his or her opinion, the raising of the question is merely frivolous or vexatious.\n\n(4) Parliament may confer upon the High Court such powers in addition to those conferred by this section as may appear to be necessary or desirable for the purpose of enabling that court more effectively to exercise the jurisdiction conferred upon it by this\n\nsection.(5) Rules of court making provision with respect to the practice and procedure of the High Court for the purposes of this section may be made by the person or authority for the time being having power to make rules of court with respect to the practice and procedure of that court generally.\n\n## **19. 
Interpretation and savings**\n\n(1) In this Chapter, unless the context otherwise requires-\n\n**\"court\"** means any court of law having jurisdiction in Botswana other than a court established by a disciplinary law, and in sections 4 and 6 of this Constitution a court established by a disciplinary law;\n\n> **\"disciplinary law\"** means a law regulating the discipline of any disciplined force; **\"disciplined force\"** means-\n\n- (a) a naval, military or air force;\n- (b) a police force; or\n- (c) a prison service;\n\n**\"legal representative\"** means a person entitled to practise in Botswana as an advocate or attorney;\n\n**\"member\"**, in relation to a disciplined force, includes any person who, under the law regulating the discipline of that force, is subject to that discipline.\n\n(2) In relation to any person who is a member of a disciplined force raised under an Act of Parliament, nothing contained in or done under the authority of the disciplinary law of that force shall be held to be inconsistent with or in contravention of any of the provisions of this Chapter other than sections 4, 6 and 7.\n\n(3) In relation to any person who is a member of a disciplined force raised otherwise than as aforesaid and lawfully present in Botswana, nothing contained in or done under the authority of the disciplinary law of that force shall be held to be inconsistent with or in contravention of any of the provisions of this Chapter.\n\n## **CHAPTER III**\n\n### **Citizenship (ss 20-29: repealed)**\n\n**20 to 29 inclusive. [Repealed.]**\n\n# **CHAPTER IV**\n\n# **The Executive (ss 30-56)**\n\n### **PART I The President and the Vice-President (ss 30-41)**\n\n# **30. Office of President**\n\nThere shall be a President of the Republic of Botswana who shall be the Head of State.\n\n### **31. 
First President**\n\n(1) The first President shall be the person who immediately before 30th September, 1966 holds the office of Prime Minister under the Constitution.\n\n(2) The first President shall be deemed to have assumed office at the coming into operation of this Constitution.", - "page_start": 15, - "page_end": 15, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (d) if he or she is elected as Speaker;\n- (e) if he or she is removed from office by a resolution of the Assembly supported by the votes of not less than two-thirds of all the Members of the Assembly; or\n- (f) when the Assembly first sits after any dissolution of Parliament.\n\n# **61. Qualifications for election to National Assembly**\n\nSubject to the provisions of section 62 of this Constitution, a person shall be qualified to be elected as a Member of the National Assembly if, and shall not be qualified to be so elected unless-\n\n- (a) he or she is a citizen of Botswana;\n- (b) he or she has attained the age of 18 years;\n- (c) he or she is qualified for registration as a voter for the purposes of the election of the Elected Members of the National Assembly and is so registered; and\n- (d) he or she is able to speak, and, unless incapacitated by blindness or other physical cause, to read English well enough to take an active part in the proceedings of the Assembly.\n\n# **62. 
Disqualifications for membership of National Assembly**\n\n(1) No person shall be qualified to be elected as a Member of the National Assembly who-\n\n- (a) is, by virtue of his or her own act, under any acknowledgement of allegiance, obedience or adherence to a foreign power or state;\n- (b) has been declared insolvent or adjudged or otherwise declared bankrupt under any law for the time being in force in Botswana and has not been discharged, or has made a composition with his or her creditors and has not paid his or her debts in full;\n- (c) is certified to be insane or otherwise adjudged or declared to be of unsound mind under any law for the time being in force in Botswana;\n- (d) is a Member of the Ntlo ya Dikgosi;\n- (e) subject to such exceptions as may be prescribed by Parliament, holds any public office, or is acting in any public office by virtue of a contract of service expressed to continue for a period exceeding six months;\n- (f) is under sentence of death imposed on him or her by a court in any part of the Commonwealth, or is under a sentence of imprisonment (by whatever name called) exceeding six months imposed on him or her by such a court or substituted by competent authority for some other sentence imposed on him or her by such a court;\n- (g) holds, or is acting in, any office the functions of which involve any responsibility for, or in connection with, the conduct of any elections to the Assembly or the compilation or revision of any electoral register for the purposes of such elections.\n\n(2) Parliament may provide that a person shall not be qualified for election to the National Assembly for such period (not exceeding five years) as may be prescribed if he or she is convicted of any such offence connected with elections to the Assembly as may be prescribed.\n\n(3) For the purposes of this section two or more terms of imprisonment that are required to be served consecutively shall be regarded as a single term of imprisonment for the aggregate 
period of those terms, and no account shall be taken of a sentence of imprisonment imposed as an alternative to or in default of the payment of a fine.\n\n# **63. Constituencies**\n\nBotswana shall be divided into as many constituencies as there are Elected Members of the National Assembly and each of those constituencies shall return one Member to the National Assembly.", - "page_start": 27, - "page_end": 27, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "her lawful detention shall not be held to be inconsistent with or in contravention of this\n\nsection.(3) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision-\n\n- (a) for the imposition of restrictions that are reasonably required in the interests of defence, public safety, public order, public morality or public health or the imposition of restrictions on the acquisition or use by any person of land or other property in Botswana and except so far as that provision or, as the case may be, the thing done under the authority thereof, is shown not to be reasonably justifiable in a democratic society;\n- (b) for the imposition of restrictions on the freedom of movement of any person who is not a citizen of Botswana;\n- (c) for the imposition of restrictions on the entry into or residence within defined areas of Botswana of persons who are not Bushmen to the extent that such restrictions are reasonably required for the protection or well-being of Bushmen;\n- (d) for the imposition of restrictions upon the movement or residence within Botswana of public officers; or\n- (e) .......\n\n(4) If any person whose freedom of movement has been restricted by order under such a provision as is referred to in subsection (3)(a) of this section (other than a restriction which is applicable to persons generally or to general classes of persons) so requests at any time during the 
period of that restriction not earlier than six months after the order was made or six months after he or she last made such request, as the case may be, his or her case shall be reviewed by an independent and impartial tribunal presided over by a person, qualified to be enrolled as an advocate in Botswana, appointed by the Chief Justice.\n\n(5) On any review by a tribunal in pursuance of this section of the case of a person whose freedom of movement has been restricted, the tribunal may make recommendations, concerning the necessity or expediency of continuing the restriction to the authority by which it was ordered but, unless it is otherwise provided by law, that authority shall not be obliged to act in accordance with any such recommendations.\n\n# **15. Protection from discrimination on the grounds of race, etc.**\n\n(1) Subject to the provisions of subsections (4), (5) and (7) of this section, no law shall make any provision that is discriminatory either of itself or in its effect.\n\n(2) Subject to the provisions of subsections (6), (7) and (8) of this section, no person shall be treated in a discriminatory manner by any person acting by virtue of any written law or in the performance of the functions of any public office or any public authority.\n\n(3) In this section, the expression \"discriminatory\" means affording different treatment to different persons, attributable wholly or mainly to their respective descriptions by race, tribe, place of origin, political opinions, colour, creed or sex whereby persons of one such description are subjected to disabilities or restrictions to which persons of another such description are not made subject or are accorded privileges or advantages which are not accorded to persons of another such description.\n\n(4) Subsection (1) of this section shall not apply to any law so far as that law makes provision-\n\n- (a) for the appropriation of public revenues or other public funds;\n- (b) with respect to persons who are not 
citizens of Botswana;\n- (c) with respect to adoption, marriage, divorce, burial, devolution of property on death or other matters of personal law;", - "page_start": 12, - "page_end": 12, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "for local government; and\n\n- (c) select a Member to the Ntlo ya Dikgosi for that region by election or in such other manner as the Regional Electoral College may agree.\n(5) Notwithstanding the provisions of section 77(1)(a) and subsections (2) and (4)(c) of this section, the areas of Ghanzi and Kgalagadi shall each have the option of either selecting one Member under subsection (2) of this section or of each selecting two regional Members under subsection (4)(c) of this section, but may not select Members under both subsections.\n\n# **79. Qualifications for membership of Ntlo ya Dikgosi**\n\n(1) A person shall be qualified to be appointed under section 77(1)(b) as a Member of the Ntlo ya Dikgosi if he or she-\n\n- (a) is a citizen of Botswana; and\n- (b) has attained the age of 21 years.\n\n(2) No person shall be qualified to be appointed, selected or designated as a Member of the Ntlo ya Dikgosi if he or she-\n\n- (a) is, by virtue of his or her own act, under any acknowledgement of allegiance, obedience or adherence to a foreign power or state;\n- (b) has been declared insolvent or adjudged or otherwise declared bankrupt under any law in force in any part of the Commonwealth or any country with a comparable legal system and has not been discharged, or has made a composition with his or her creditors and has not paid his or her debts in full;\n- (c) is certified insane or otherwise adjudged or declared to be of unsound mind under any law for the time being in force in Botswana;\n- (d) subject to such exceptions as may be prescribed by Parliament, holds any public office, or is acting in any public office by virtue of a contract of service expressed to continue for a period exceeding six months;\n- (e) is under 
sentence of death imposed on him or her by a court in any part of the Commonwealth or any country with a comparable legal system, or is under a sentence of imprisonment (by whatever name called) exceeding six months imposed on him or her by such a court or substituted by a competent authority for some other sentence imposed on him or her by such a court;\n- (f) holds, or is acting in, any office the functions of which involve any responsibility for, or in connection with, the conduct of any elections to the National Assembly or the compilation or revision of any electoral register for the purposes of such elections; or\n- (g) is disqualified for election to the National Assembly by virtue of provision made in pursuance of section 62 (2) of this Constitution.\n\n(3) For the purposes of this section, two or more terms of imprisonment that are required to be served consecutively shall be regarded as a single term of imprisonment for the aggregate period of those terms, and no account shall be taken of a sentence of imprisonment imposed as an alternative to or in default of the payment of a fine.\n\n(4) A Member of the Ntlo ya Dikgosi shall not, while he or she is such a Member, participate in party politics, but active participation in politics prior to being a Member of the Ntlo ya Dikgosi shall not bar any person from being such a Member.\n\n# **80. Oath of allegiance**\n\nEvery Member of the Ntlo ya Dikgosi shall, before taking his or her seat therein, take and subscribe before the Ntlo ya Dikgosi the oath of allegiance.\n\n# **81. Secretary to Ntlo ya Dikgosi**\n\nThere shall be a Secretary to the Ntlo ya Dikgosi whose office shall be an office in the public service.\n\n**82. 
Tenure of office of Members of Ntlo ya Dikgosi** (1) A Member of the Ntlo ya", - "page_start": 35, - "page_end": 35, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "for his or her education or welfare during any period ending not later than the date when he or she attains the age of 18 years;\n\n- (g) for the purpose of preventing the spread of an infectious or contagious disease;\n- (h) in the case of a person who is, or is reasonably suspected to be, of unsound mind, addicted to drugs or alcohol, or a vagrant, for the purpose of his or her care or treatment or the protection of the community;\n- (i) for the purpose of preventing the unlawful entry of that person into Botswana, or for the purpose of effecting the expulsion, extradition or other lawful removal of that person from Botswana, or for the purpose of restricting that person while he or she is being conveyed through Botswana in the course of his or her extradition or removal as a convicted prisoner from one country to another;\n- (j) to such extent as may be necessary in the execution of a lawful order requiring that person to remain within a specified area within Botswana or prohibiting him or her from being within such an area, or to such extent as may be reasonably justifiable for the taking of proceedings against that person relating to the making of any such order, or to such extent as may be reasonably justifiable for restraining that person during any visit that he or she is permitted to make to any part of Botswana in which, in consequence of any such order, his or her presence would otherwise be unlawful; or\n- (k) for the purpose of ensuring the safety of aircraft in flight.\n\n(2) Any person who is arrested or detained shall be informed as soon as reasonably practicable, in a language that he or she understands, of the reasons for his or her arrest or detention.\n\n(3) Any person who is arrested or detained-\n\n- (a) for the purpose of bringing him or her before a court in 
execution of the order of a court; or\n- (b) upon reasonable suspicion of his or her having committed, or being about to commit, a criminal offence under the law in force in Botswana,\n\nand who is not released, shall be brought as soon as is reasonably practicable before a court; and if any person arrested or detained as mentioned in paragraph (b) of this subsection is not tried within a reasonable time, then, without prejudice to any further proceedings that may be brought against him or her, he or she shall be released either unconditionally or upon reasonable conditions, including in particular such conditions as are reasonably necessary to ensure that he or she appears at a later date for trial or for proceedings preliminary to trial.\n\n(4) Any person who is unlawfully arrested or detained by any other person shall be entitled to compensation therefor from that other person.\n\n# **6. Protection from slavery and forced labour**\n\n(1) No person shall be held in slavery or servitude.\n\n(2) No person shall be required to perform forced labour.\n\n(3) For the purposes of this section, the expression \"forced labour\" does not include-\n\n- (a) any labour required in consequence of the sentence or order of a court;\n- (b) labour required of any person while he or she is lawfully detained that, though not required in consequence of the sentence or order of a court, is reasonably necessary in the interests of hygiene or for the maintenance of the place at which he or she is detained;\n- (c) any labour required of a member of a disciplined force in pursuance of his or her duties as such or, in the case of a person who has conscientious objections to service as a member of a naval, military or air force, any labour that that person is required by law to perform in place of such service;", - "page_start": 5, - "page_end": 5, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "# **The Republic (ss 1-2)**\n\n## **1. 
Declaration of Republic**\n\nBotswana is a sovereign Republic.\n\n# **2. Public Seal**\n\nThe Public Seal of the Republic shall be such device as may be prescribed by or under an Act of Parliament.\n\n## **CHAPTER II**\n\n## **Protection of Fundamental Rights and Freedoms of the Individual (ss 3-19) 3. Fundamental rights and freedoms of the individual**\n\nWhereas every person in Botswana is entitled to the fundamental rights and freedoms of the individual, that is to say, the right, whatever his or her race, place of origin, political opinions, colour, creed or sex, but subject to respect for the rights and freedoms of others and for the public interest to each and all of the following, namely-\n\n- (a) life, liberty, security of the person and the protection of the law;\n- (b) freedom of conscience, of expression and of assembly and association; and\n- (c) protection for the privacy of his or her home and other property and from deprivation of property without compensation,\n\nthe provisions of this Chapter shall have effect for the purpose of affording protection to those rights and freedoms subject to such limitations of that protection as are contained in those provisions, being limitations designed to ensure that the enjoyment of the said rights and freedoms by any individual does not prejudice the rights and freedoms of others or the public interest.\n\n# **4. 
Protection of right to life**\n\n(1) No person shall be deprived of his or her life intentionally save in execution of the sentence of a court in respect of an offence under the law in force in Botswana of which he or she has been convicted.\n\n(2) A person shall not be regarded as having been deprived of his or her life in contravention of subsection (1) of this section if he or she dies as the result of the use, to such extent and in such circumstances as are permitted by law, of such force as is reasonably justifiable-\n\n- (a) for the defence of any person from violence or for the defence of property;\n- (b) in order to effect a lawful arrest or to prevent the escape of a person lawfully detained;\n- (c) for the purpose of suppressing a riot, insurrection or mutiny; or\n\n(d) in order to prevent the commission by that person of a criminal offence,\n\nor if he or she dies as the result of a lawful act of war.\n\n# **5. Protection of right to personal liberty**\n\n(1) No person shall be deprived of his or her personal liberty save as may be authorized by law in any of the following cases, that is to say-\n\n- (a) in execution of the sentence or order of a court, whether established for Botswana or some other country, in respect of a criminal offence of which he or she has been convicted;\n- (b) in execution of the order of a court of record punishing him or her for contempt of that or another court;\n- (c) in execution of the order of a court made to secure the fulfilment of any obligation imposed on him or her by law;\n- (d) for the purpose of bringing him or her before a court in execution of the order of a court;\n- (e) upon reasonable suspicion of his or her having committed, or being about to commit, a criminal offence under the law in force in Botswana;\n- (f) under the order of a court or with the consent of his or her parent or guardian,", - "page_start": 4, - "page_end": 4, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "## **122. 
Remuneration of certain officers**\n\n(1) There shall be paid to the holders of the offices to which this section applies such salaries and such allowances as may be prescribed by Parliament.\n\n(2) The salaries and any allowances payable to the holders of the offices to which this section applies shall be a charge on the Consolidated Fund.\n\n(3) The salary payable to the holder of any office to which this section applies and his or her terms of office, other than allowances, shall not be altered to his or her disadvantage after his or her appointment.\n\n(4) Where a person's salary or terms of office depend upon his or her option, the salary or terms for which he or she opts shall, for the purposes of subsection (3) of this section, be deemed to be more advantageous to him or her than any others for which he or she might have opted.\n\n(5) This section applies to the offices of judge of the Court of Appeal, judge of the High Court, member of the Public Service Commission, member of the Judicial Service Commission, member of the Delimitation Commission, Auditor-General, Director of Public Prosecutions and Attorney-General.\n\n## **123. Public debt**\n\n(1) There shall be charged on the Consolidated Fund all debt charges for which Botswana is liable.\n\n(2) For the purposes of this section debt charges include interest, sinking fund charges, the repayment or amortization of debt, and all expenditure in connection with the raising of loans on the security of the revenues or the Consolidated Fund of the former Protectorate of Bechuanaland or Botswana, and the service and redemption of debt thereby created.\n\n### **124. 
Auditor-General**\n\n(1) There shall be an Auditor-General, whose office shall be a public office.\n\n(2) The public accounts of Botswana and of all officers, courts and authorities of the Government of Botswana shall be audited and reported on by the Auditor-General and for that purpose the Auditor-General or any person authorized by him or her in that behalf shall have access to all books, records, reports and other documents relating to those accounts:\n\nProvided that, if it is so provided by Parliament in the case of any body corporate directly established by law, the accounts of that body corporate shall be audited and reported on by such person as may be specified by or under that law.\n\n(3) The Auditor-General shall submit his or her reports to the Minister responsible for finance, who shall cause them to be laid before the National Assembly.\n\n(4) The Auditor-General shall perform such other duties and exercise such other powers in relation to the accounts of the Government or the accounts of other public authorities or other bodies as may be prescribed by or under any Act of Parliament.\n\n(5) In the exercise of his or her functions the Auditor-General shall not be subject to the direction or control of any other person or authority.\n\n# **CHAPTER IX Miscellaneous (ss 125-127)**\n\n## **125. 
Resignations**\n\n(1) Any person who is appointed or elected to any office established by this Constitution may resign from that office by writing under his or her hand addressed to the person or authority by whom he or she was appointed or elected:\n\nProvided that in the case of a person who holds office as President his or her resignation from that office shall be addressed to the Chief Justice, in the case of a person who holds office as Speaker or Deputy Speaker of the National Assembly his or her resignation from that office shall be addressed to the Assembly, in the case of an", - "page_start": 52, - "page_end": 52, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Elected or Specially Elected Member of the Assembly his or her resignation shall be addressed to the Speaker, and in the case of a Member of the Ntlo ya Dikgosi his or her resignation from that office shall be addressed to the Chairman of the Ntlo ya Dikgosi.\n\n(2) The resignation of any person from any office established by this Constitution shall take effect on the date or at the time indicated in the writing signifying the resignation or, if no such date or time is so indicated, at the time the writing is received by the person or authority to whom it is addressed or by any person authorized by that person or authority to receive it.\n\n### **126. 
Reappointments and concurrent appointments**\n\n(1) Where any person has vacated any office established by this Constitution, he or she may, if qualified, again be appointed or elected to hold that office in accordance with the provisions of this Constitution.\n\n(2) Where a power is conferred by this Constitution upon any person to make any appointment to any office, a person may be appointed to that office notwithstanding that some other person may be holding that office, when that other person is on leave of absence pending the relinquishment of the office; and where two or more persons are holding the same office by reason of an appointment made in pursuance of this subsection, then, for the purposes of any function conferred upon the holder of that office, the person last appointed shall be deemed to be the sole holder of the office.\n\n## **127. Interpretation**\n\n(1) In this Constitution unless the context otherwise requires-\n\n**\"the Assembly\"** means the National Assembly;\n\n**\"Botswana\"** means the territory that, on 29th September, 1966, was comprised in the former Protectorate of Bechuanaland;\n\n**\"financial year\"** means the period of 12 months ending on 31st March in any year or on such other day as Parliament may prescribe;\n\n**\"the Gazette\"** means the Botswana Government Gazette;\n\n**\"high judicial office\"** means the office of a judge of a court of unlimited jurisdiction in civil and criminal matters in Botswana, a Commonwealth country or in any country outside the Commonwealth that may be prescribed by Parliament or the office of judge of a court having jurisdiction in appeals from such a court;\n\n**''Kgosana'' (pl. Dikgosana)** means Headman;\n\n**''Kgosi'' (pl. 
Dikgosi)** means Chief or Sub-Chief as defined in the Chieftainship Act;\n\n**\"oath\"** includes affirmation;\n\n**\"the oath of allegiance\"** means such oath of allegiance as may be prescribed by law;\n\n**\"public office\"** means, subject to the provisions of subsections (2) and (3) of this section, an office of emolument in the public service;\n\n**\"public officer\"** means a person holding or acting in any public office;\n\n**\"the public service\"** means the civil service of the Government;\n\n**\"session\"** means the sittings of the National Assembly beginning when it first sits after the coming into operation of this Constitution or after Parliament is prorogued or dissolved at any time and ending when Parliament is prorogued or is dissolved without having been prorogued;\n\n**\"sitting\"** means a period during which the National Assembly is sitting without adjournment and includes any period during which it is in committee;\n\n**\"subordinate court\"** means any court established for Botswana other than-\n\n- (a) the Court of Appeal;\n- (b) the High Court;\n- (c) a court martial; or", - "page_start": 53, - "page_end": 53, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "National Assembly.\n\n(6) A person attending the proceedings of the Ntlo ya Dikgosi by virtue of the provisions of subsection (3) or (4) of this section shall be entitled to take part in the proceedings of the Ntlo ya Dikgosi relating to the matter in respect of which he or she attends as if he or she were a Member of the Ntlo ya Dikgosi:\n\nProvided that he or she shall not be entitled to vote in the Ntlo ya Dikgosi.\n\n## **PART IV**\n\n# **Powers of Parliament (ss 86-89)**\n\n# **86. Legislative powers**\n\nSubject to the provisions of this Constitution, Parliament shall have power to make laws for the peace, order and good government of Botswana.\n\n# **87. 
Mode of exercising legislative powers**\n\n(1) Subject to the provisions of section 89(4) of this Constitution the power of Parliament to make laws shall be exercised by Bills passed by the National Assembly, after reference in the cases specified in section 88(2) of this Constitution to the Ntlo ya Dikgosi, and assented to by the President.\n\n(2) When a Bill is presented to the President for assent he or she shall either assent or withhold his or her assent.\n\n(3) Where the President withholds his or her assent to a Bill, the Bill shall be returned to the National Assembly.\n\n(4) If where the President withholds his or her assent to a Bill the Assembly resolves within six months of the Bill being returned to it that the Bill should again be presented for assent, the President shall assent to the Bill within 21 days of its being again presented to him or her, unless he or she sooner dissolves Parliament.\n\n(5) When a Bill that has been duly passed and presented for assent is assented to in accordance with the provisions of this Constitution it shall become law and the President shall thereupon cause it to be published in the Gazette as a law.\n\n(6) No law made by Parliament shall come into operation until it has been published in the Gazette, but Parliament may postpone the coming into operation of any such law and may make laws with retrospective effect.\n\n(7) All laws made by Parliament shall be styled \"Acts\" and the words of enactment shall be \"enacted by the Parliament of Botswana\".\n\n# **88. 
Introduction of Bills**\n\n(1) Except upon the recommendation of the President, which recommendation may be signified by the Vice-President or a Minister, the National Assembly shall not-\n\n- (a) proceed upon any Bill (including any amendment to a Bill) that, in the opinion of the person presiding, makes provision for any of the following purposes-\n\t- (i) for the imposition of taxation or the alteration of taxation otherwise than by reduction;\n\t- (ii) for the imposition of any charge upon the revenues or other funds of Botswana or the alteration of any such charge otherwise than by reduction;\n\t- (iii) for the payment, issue or withdrawal from any public fund of Botswana of any moneys not charged thereon or any increase in the amount of such payment, issue or withdrawal; or\n\t- (iv) for the composition or remission of any debt to the Government of Botswana;\n- (b) proceed upon any motion (including any amendment to a motion) the effect of which, in the opinion of the person presiding, would be to make provision for any of those purposes.\n\n(2) The National Assembly shall not proceed upon any Bill (including any amendment to a Bill) that, in the opinion of the person presiding, would, if enacted, alter", - "page_start": 37, - "page_end": 37, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "court to try a member of a disciplined force for a criminal offence notwithstanding any trial and conviction or acquittal of that member under the disciplinary law of that force, so, however, that any court so trying such a member and convicting him or her shall in sentencing him or her to any punishment take into account any punishment awarded him or her under that disciplinary law;\n\n- (e) subsection (8) of this section to the extent that the law in question authorizes a court to convict a person of a criminal offence under any customary law to which, by virtue of that law, such person is subject.\n(13) In the case of any person who is held in lawful 
detention, the provisions of subsection (1), subsection (2)(d) and (e) and subsection (3) of this section shall not apply in relation to his or her trial for a criminal offence under the law regulating the discipline of persons held in such detention.\n\n(14) In this section \"criminal offence\" means a criminal offence under the law in force in Botswana.\n\n## **11. Protection of freedom of conscience**\n\n(1) Except with his or her own consent, no person shall be hindered in the enjoyment of his or her freedom of conscience, and for the purposes of this section the said freedom includes freedom of thought and of religion, freedom to change his or her religion or belief, and freedom, either alone or in community with others, and both in public and in private, to manifest and propagate his or her religion or belief in worship, teaching, practice and observance.\n\n(2) Every religious community shall be entitled, at its own expense, to establish and maintain places of education and to manage any place of education which it wholly maintains; and no such community shall be prevented from providing religious instruction for persons of that community in the course of any education provided at any place of education which it wholly maintains or in the course of any education which it otherwise provides.\n\n(3) Except with his or her own consent (or, if he or she is a minor, the consent of his or her guardian) no person attending any place of education shall be required to receive religious instruction or to take part in or attend any religious ceremony or observance if that instruction, ceremony or observance relates to a religion other than his or her own.\n\n(4) No person shall be compelled to take any oath which is contrary to his or her religion or belief or to take any oath in a manner which is contrary to his or her religion or belief.\n\n(5) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of 
this section to the extent that the law in question makes provision which is reasonably required-\n\n- (a) in the interests of defence, public safety, public order, public morality or public health; or\n- (b) for the purpose of protecting the rights and freedoms of other persons, including the right to observe and practise any religion without the unsolicited intervention of members of any other religion,\n\nand except so far as that provision or, as the case may be, the thing done under the authority thereof is shown not to be reasonably justifiable in a democratic society.\n\n## **12. Protection of freedom of expression**\n\n(1) Except with his or her own consent, no person shall be hindered in the enjoyment of his or her freedom of expression, that is to say, freedom to hold opinions without interference, freedom to receive ideas and information without interference, freedom to communicate ideas and information without interference (whether the", - "page_start": 10, - "page_end": 10, - "source_file": "Botswana-constitution.pdf" - } - ] - }, - { - "references": { - "source_file": "serverless-core.pdf", - "query": "How much does AWS lambda charge when the function is not running ?", - "target_page": 52, - "target_passage": "there is no charge when your code is not running", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker.\n\nTo help simplify troubleshooting, the AWS Serverless Application Model CLI (AWS SAM CLI) has a command called sam logs which will show you CloudWatch Logs generated by your Lambda function.\n\nFor example, the following terminal command would show the live tail of logs generated by the *YourLambdaFunctionName* Lambda function:\n\n```\nsam logs -n YourLambdaFunctionName --tail\n```\nLogging and debugging go hand in hand. 
Traces of events are available with Amazon X-Ray for debugging.\n\n### **Securing functions**\n\nAWS Identity and Access Management (IAM) is the service used to manage access to AWS services. Lambda is fully integrated with IAM, allowing you to control precisely what each Lambda function can do within the AWS Cloud. There are two important things that define the scope of permissions in Lambda functions:\n\n- *resource policy*: Defines which events are authorized to invoke the function.\n- *execution role policy*: Limits what the Lambda function is authorized to do.\n\nUsing IAM roles to describe a Lambda function's permissions, decouples security configuration from the code. This helps reduce the complexity of a lambda function, making it easier to maintain.\n\nA Lambda function's resource and execution policy should be granted the minimum required permissions for the function to perform it's task effectively. This is sometimes referred to as the rule of least privilege. As you develop a Lambda function, you expand the scope of this policy to allow access to other resources as required.", - "page_start": 59, - "page_end": 59, - "source_file": "serverless-core.pdf" - }, - { - "text": "*\"No Server Is Easier To Manage Than No Server\"* - Werner Vogels, VP and CTO\n\nThe Lambda service runs instances of your function only when needed and scales automatically from zero requests per day to thousands per second. You pay only for the compute time that's actually used — there is no charge when your code is not running.\n\n## **Fundamentals**\n\nServerless solutions are based on *event-driven architecture,* or EDA, where services send and receive *events*, which represent an update or change in state. The primary activity of Lambda functions is to process events.\n\nWithin the Lambda service, your function code is stored in a code package, deployed as a .zip or a container image. All interaction with the code occurs through the Lambda API. 
There is no direct invocation of functions from outside of the Lambda service.\n\nWhat you will learn on your journey to building applications with Lambda:\n\n- How the event-driven programming model invokes Lambda functions\n- How to create, invoke, test, update, package, and secure functions\n- How the execution and runtime environment runs your functions\n- How to view logs and monitor your functions\n- Where to find hands-on opportunities to learn how to invoke functions", - "page_start": 51, - "page_end": 51, - "source_file": "serverless-core.pdf" - }, - { - "text": "After the handler finishes processing the first event, the runtime sends it another, and another. Each instance of your function could process thousands of requests.\n\nUnlike traditional servers, Lambda functions do not run constantly. When a function is triggered by an event, this is called an *invocation*. Lambda functions are limited to 15 minutes in duration, but on average, across all AWS customers, most invocations last for less than a second.\n\nThere are many types of invocation events. 
Some examples:\n\n- HTTP request from API Gateway\n- Schedule managed by an EventBridge rule\n- Message from an IOT device\n- Notification that a file was uploaded to an S3 bucket\n\nEven the smallest Lambda-based application uses at least one event that invokes your function.\n\n### **How Lambda invokes your function (runtime environment)**\n\nLambda invokes your function in an *execution environment*, which contains a secure and isolated *runtime environment*.\n\n- A *runtime* provides a language-specific environment which relays invocation events, context information, and responses between the Lambda and your functions.\n- An *execution environment* manages the processes and resources that are required to run the function.", - "page_start": 55, - "page_end": 55, - "source_file": "serverless-core.pdf" - }, - { - "text": "#### **Related resource(s):**\n\n- Permissions boundaries for IAM entities official documentation.\n# **Additional resources**\n\nOfficial AWS documentation:\n\n- AWS Identity and Access Management Documentation\n- Example IAM identity-based policies an extensive list of example policies, including AWS Lambda: Allows a lambda function to access an Amazon DynamoDB table which is useful in microservices\n- Grant least privilege section of the *Policies and permissions* chapter suggests a method to refine permissions for increased security\n\nResources from the serverless community:\n\n- Simplifying serverless permissions with AWSAWS SAM Connectors AWS Compute blog post by Kurt Tometich, Senior Solutions Architect, AWS, from Oct 2022 that introduces a AWS SAM abstraction that creates minimally scoped IAM policies\n- Building AWS Lambda governance and guardrails AWS Compute blog post by Julian Wood, Senior Solutions Architect, AWS, from Aug 2022 that highlights how Lambda, as a serverless service, simplifies cloud security and compliance so you can concentrate on your business logic.\n\n# **Next Steps**\n\n- Work through the Getting Started Resource 
Center 30-45 min tutorial on Setting Up Your AWS Environment to properly set up your AWS account, secure the root user, create an IAM user, and setup AWS CLI and (optionally) Cloud9 environment.\n# **Get started with Lambda**\n\nAll projects need a compute capability to handle processing tasks. Here are some examples:\n\n- Handling web application and API requests\n- Transforming batches of data\n- Processing messages from a queue", - "page_start": 49, - "page_end": 49, - "source_file": "serverless-core.pdf" - }, - { - "text": "### **Deploy with containers**\n\nIf you need a custom runtime that is not provided by AWS, you can create and deploy a custom container image. AWS provides base images preloaded with a language runtime and other components that are required to run the image on Lambda. AWS provides a Dockerfile for each of the base images to help with building your container image.\n\nCustom containers are one way you might experiment with lift and shift of existing code to Lambda runtimes. If you do this, consider the architectural differences between always running containers, versus on demand nature of Lambda functions.\n\nRelated resource:\n\n- Deploy container images\n#### **Add code with Layers**\n\nA Lambda *layer* is a .zip file archive that can contain additional code or other content. A layer can contain libraries, a custom runtime, data, or configuration files. Layers are also necessary if your function .zip archive exceeds the size limit.\n\nLayers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions. Using layers reduces the size of uploaded deployment archives and makes it faster to deploy your code. Layers also promote code sharing and separation of responsibilities so that you can iterate faster on writing business logic.\n\nRelated resource:\n\n- Creating and sharing Lambda layers\n#### **Extensions**\n\nYou can use Lambda extensions to augment your Lambda functions. 
For example, use Lambda Extensions to integrate with your preferred monitoring, observability, security, and governance tools.\n\nLambda supports internal or external extensions. An internal extension runs as part of the runtime process. An external extension runs as an independent process in the execution environment and continues to run after the function invocation is fully processed.", - "page_start": 61, - "page_end": 61, - "source_file": "serverless-core.pdf" - }, - { - "text": "#### **Connect to functions with Function URLs**\n\nA function URL is a dedicated HTTP(S) endpoint for your Lambda function. You can create and configure a function URL through the Lambda console or the Lambda API. When you create a function URL, Lambda automatically generates a unique URL endpoint for you. Once you create a function URL, its URL endpoint never changes. Function URL endpoints have the following format:\n\nhttps://.lambda-url..on.aws\n\nAfter you configure a function URL for your function, you can invoke your function through its HTTP(S) endpoint with a web browser, curl, Postman, or any HTTP client.\n\nRelated resources:\n\n- Function URLs official documentation\n### **Additional resources**\n\nOfficial AWS documentation:\n\n- AWS Lambda Developer Guide extensive and complete documentation for Lambda\n#### **Next steps**\n\n#### **Learn serverless techniques in an online workshop**\n\nLearn by doing in the **Serverless Patterns Workshop**. The first module introduces a serverless microservice to retrieve data from DynamoDB with Lambda and API Gateway. 
Additional modules provide practical examples of unit and integration testing, using infrastructure as code to deploy resources, and how to build common architectural patterns used in serverless solutions.", - "page_start": 63, - "page_end": 63, - "source_file": "serverless-core.pdf" - }, - { - "text": "#### Related resources:\n\n- Datadog Lambda Extension an extension that supports submitting custom metrics, traces, and logs asynchronously while your Lambda function executes.\n- Lambda Extensions official documentation\n\n#### **Launch functions faster with SnapStart**\n\nLambda SnapStart for Java can improve startup performance by up to 10x at no extra cost, typically with no changes to your function code. The largest contributor to startup latency (often referred to as cold start time) is the time that Lambda spends initializing the function, which includes loading the function's code, starting the runtime, and initializing the function code.\n\nWith SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker microVM snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access.\n\nNote: You can use SnapStart only on published function versions and aliases that point to versions. 
You can't use SnapStart on a function's unpublished version ($LATEST).\n\n#### Related resources:\n\n- Accelerate Your Lambda Functions with Lambda SnapStart an AWS Compute blog article by Jeff Barr from Nov 2022 that shows the configuration change and vast difference from roughly six seconds init time to 142 milliseconds of restore time with SnapStart", - "page_start": 62, - "page_end": 62, - "source_file": "serverless-core.pdf" - }, - { - "text": "This guide will highlight what you need to know right away and link to service documentation for more service-specific details.\n\nFor example, you will learn that the Lambda service creates an *execution environment* to run compute functions. For more information on how Lambda manages function scaling or reduces start-up time, we will link you to relevant sections of the Lambda developer guide.\n\nThe topics in this guide will cover the prerequisites for understanding serverless development on AWS, such as account creation and an overview of AWS cloud infrastructure. Then, you will learn how to shift from a traditional development model to a serverless, event-driven architecture with which to develop applications on the cloud.\n\nAlong the way, this guide will introduce core services, workshops, and tutorials, you can choose to reinforce your learning with hands-on activities.\n\n- AWS Identity and Access Management for securely accessing resources on AWS.", - "page_start": 5, - "page_end": 5, - "source_file": "serverless-core.pdf" - }, - { - "text": "- Policies that grant least privilege to your functions\n**Workshop - Intro to Serverless** - Before diving too deep, you can choose to try out serverless in a workshop or tutorial. Connect to a data source and create a REST API with your first Lambda function.\"\n\n- Services used: AWS Management Console, Lambda, DynamoDB, API Gateway\n#### **Programming Model**\n\nThe Lambda service provides the same event-based programming model for all languages. 
The Lambda runtime passes an *invocation event* and *context* to your Lambda function *handler* which does some work and produces a resulting event:\n\nThe *invocation event* contains data, as a JSON packet, which varies from service to service. For example, API gateway events include path, HTTP method, query string parameters, headers, cookies, and more. DynamoDB events could contain updated or delete record data. S3 events include the bucket name and object key, among other things.\n\n*The context* contains information about the environment the function is running inside. Additional contextual information can be set in familiar environment variables (ENV).\n\nThe function *handler* is a method in your function code that processes the inbound event. The handler, which is a standard function in your language of choice, does some work and emits a *result event*.", - "page_start": 54, - "page_end": 54, - "source_file": "serverless-core.pdf" - }, - { - "text": "The architecture of traditional monolithic web applications tends to become more complex over time. Complexity increases ramp-up time for new developers, makes tracking down the source of bugs more challenging, and delays the delivery of new features.\n\n# **Use services instead of custom code**\n\nServerless applications usually comprise several AWS services, integrated with custom code run in Lambda functions. While Lambda can be integrated with most AWS services, the services most commonly used in serverless applications are:\n\n|\n| |\n\n| Category | AWS service |\n| --- | --- |\n| Compute | Lambda |\n| Data storage | Amazon S3, DynamoDB, Amazon RDS |\n| API | API Gateway |\n| Application integration | EventBridge, Amazon SNS, Amazon SQS |\n| Orchestration | Step Functions |\n| Streaming data and analytics | Amazon Data Firehose |\n\nThere are many well-established, common patterns in distributed architectures that you can build yourself or implement using AWS services. 
For most customers, there is little commercial value in investing time to develop these patterns from scratch. When your application needs one of these patterns, use the corresponding AWS service:\n\n#### **Common patterns and corresponding AWS services**\n\n| Pattern | AWS service |\n| --- | --- |\n| Queue | Amazon SQS |\n| Event bus | EventBridge |\n| Publish/subscribe (fan-out) | Amazon SNS |", - "page_start": 22, - "page_end": 22, - "source_file": "serverless-core.pdf" - } - ] - }, - { - "references": { - "source_file": "serverless-core.pdf", - "query": "What is the role of resource policies of lambda functions ?", - "target_page": 60, - "target_passage": "resource policy: Defines which events are authorized to invoke the function.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker.\n\nTo help simplify troubleshooting, the AWS Serverless Application Model CLI (AWS SAM CLI) has a command called sam logs which will show you CloudWatch Logs generated by your Lambda function.\n\nFor example, the following terminal command would show the live tail of logs generated by the *YourLambdaFunctionName* Lambda function:\n\n```\nsam logs -n YourLambdaFunctionName --tail\n```\nLogging and debugging go hand in hand. Traces of events are available with Amazon X-Ray for debugging.\n\n### **Securing functions**\n\nAWS Identity and Access Management (IAM) is the service used to manage access to AWS services. Lambda is fully integrated with IAM, allowing you to control precisely what each Lambda function can do within the AWS Cloud. 
There are two important things that define the scope of permissions in Lambda functions:\n\n- *resource policy*: Defines which events are authorized to invoke the function.\n- *execution role policy*: Limits what the Lambda function is authorized to do.\n\nUsing IAM roles to describe a Lambda function's permissions, decouples security configuration from the code. This helps reduce the complexity of a lambda function, making it easier to maintain.\n\nA Lambda function's resource and execution policy should be granted the minimum required permissions for the function to perform it's task effectively. This is sometimes referred to as the rule of least privilege. As you develop a Lambda function, you expand the scope of this policy to allow access to other resources as required.", - "page_start": 59, - "page_end": 59, - "source_file": "serverless-core.pdf" - }, - { - "text": "*\"No Server Is Easier To Manage Than No Server\"* - Werner Vogels, VP and CTO\n\nThe Lambda service runs instances of your function only when needed and scales automatically from zero requests per day to thousands per second. You pay only for the compute time that's actually used — there is no charge when your code is not running.\n\n## **Fundamentals**\n\nServerless solutions are based on *event-driven architecture,* or EDA, where services send and receive *events*, which represent an update or change in state. The primary activity of Lambda functions is to process events.\n\nWithin the Lambda service, your function code is stored in a code package, deployed as a .zip or a container image. All interaction with the code occurs through the Lambda API. 
There is no direct invocation of functions from outside of the Lambda service.\n\nWhat you will learn on your journey to building applications with Lambda:\n\n- How the event-driven programming model invokes Lambda functions\n- How to create, invoke, test, update, package, and secure functions\n- How the execution and runtime environment runs your functions\n- How to view logs and monitor your functions\n- Where to find hands-on opportunities to learn how to invoke functions", - "page_start": 51, - "page_end": 51, - "source_file": "serverless-core.pdf" - }, - { - "text": "After the handler finishes processing the first event, the runtime sends it another, and another. Each instance of your function could process thousands of requests.\n\nUnlike traditional servers, Lambda functions do not run constantly. When a function is triggered by an event, this is called an *invocation*. Lambda functions are limited to 15 minutes in duration, but on average, across all AWS customers, most invocations last for less than a second.\n\nThere are many types of invocation events. 
Some examples:\n\n- HTTP request from API Gateway\n- Schedule managed by an EventBridge rule\n- Message from an IOT device\n- Notification that a file was uploaded to an S3 bucket\n\nEven the smallest Lambda-based application uses at least one event that invokes your function.\n\n### **How Lambda invokes your function (runtime environment)**\n\nLambda invokes your function in an *execution environment*, which contains a secure and isolated *runtime environment*.\n\n- A *runtime* provides a language-specific environment which relays invocation events, context information, and responses between the Lambda and your functions.\n- An *execution environment* manages the processes and resources that are required to run the function.", - "page_start": 55, - "page_end": 55, - "source_file": "serverless-core.pdf" - }, - { - "text": "- Policies that grant least privilege to your functions\n**Workshop - Intro to Serverless** - Before diving too deep, you can choose to try out serverless in a workshop or tutorial. Connect to a data source and create a REST API with your first Lambda function.\"\n\n- Services used: AWS Management Console, Lambda, DynamoDB, API Gateway\n#### **Programming Model**\n\nThe Lambda service provides the same event-based programming model for all languages. The Lambda runtime passes an *invocation event* and *context* to your Lambda function *handler* which does some work and produces a resulting event:\n\nThe *invocation event* contains data, as a JSON packet, which varies from service to service. For example, API gateway events include path, HTTP method, query string parameters, headers, cookies, and more. DynamoDB events could contain updated or delete record data. S3 events include the bucket name and object key, among other things.\n\n*The context* contains information about the environment the function is running inside. 
Additional contextual information can be set in familiar environment variables (ENV).\n\nThe function *handler* is a method in your function code that processes the inbound event. The handler, which is a standard function in your language of choice, does some work and emits a *result event*.", - "page_start": 54, - "page_end": 54, - "source_file": "serverless-core.pdf" - }, - { - "text": "#### **Connect to functions with Function URLs**\n\nA function URL is a dedicated HTTP(S) endpoint for your Lambda function. You can create and configure a function URL through the Lambda console or the Lambda API. When you create a function URL, Lambda automatically generates a unique URL endpoint for you. Once you create a function URL, its URL endpoint never changes. Function URL endpoints have the following format:\n\nhttps://.lambda-url..on.aws\n\nAfter you configure a function URL for your function, you can invoke your function through its HTTP(S) endpoint with a web browser, curl, Postman, or any HTTP client.\n\nRelated resources:\n\n- Function URLs official documentation\n### **Additional resources**\n\nOfficial AWS documentation:\n\n- AWS Lambda Developer Guide extensive and complete documentation for Lambda\n#### **Next steps**\n\n#### **Learn serverless techniques in an online workshop**\n\nLearn by doing in the **Serverless Patterns Workshop**. The first module introduces a serverless microservice to retrieve data from DynamoDB with Lambda and API Gateway. Additional modules provide practical examples of unit and integration testing, using infrastructure as code to deploy resources, and how to build common architectural patterns used in serverless solutions.", - "page_start": 63, - "page_end": 63, - "source_file": "serverless-core.pdf" - }, - { - "text": "### **Deploy with containers**\n\nIf you need a custom runtime that is not provided by AWS, you can create and deploy a custom container image. 
AWS provides base images preloaded with a language runtime and other components that are required to run the image on Lambda. AWS provides a Dockerfile for each of the base images to help with building your container image.\n\nCustom containers are one way you might experiment with lift and shift of existing code to Lambda runtimes. If you do this, consider the architectural differences between always running containers, versus on demand nature of Lambda functions.\n\nRelated resource:\n\n- Deploy container images\n#### **Add code with Layers**\n\nA Lambda *layer* is a .zip file archive that can contain additional code or other content. A layer can contain libraries, a custom runtime, data, or configuration files. Layers are also necessary if your function .zip archive exceeds the size limit.\n\nLayers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions. Using layers reduces the size of uploaded deployment archives and makes it faster to deploy your code. Layers also promote code sharing and separation of responsibilities so that you can iterate faster on writing business logic.\n\nRelated resource:\n\n- Creating and sharing Lambda layers\n#### **Extensions**\n\nYou can use Lambda extensions to augment your Lambda functions. For example, use Lambda Extensions to integrate with your preferred monitoring, observability, security, and governance tools.\n\nLambda supports internal or external extensions. An internal extension runs as part of the runtime process. 
An external extension runs as an independent process in the execution environment and continues to run after the function invocation is fully processed.", - "page_start": 61, - "page_end": 61, - "source_file": "serverless-core.pdf" - }, - { - "text": "# **Advanced Topics**\n\nYou can do a lot by just creating a function and connecting it to an event source like API Gateway or S3 triggers.\n\nAs you progress on your journey, you should explore the following more advanced topics.\n\n- Connect services with event source mapping\n- Deploy code in containers\n- Add additional code with layers\n- Augment functions with extensions\n- Launch functions faster with SnapStart\n- Connect to functions with Function URLs\n\n#### **Event source mapping**\n\nSome services can trigger Lambda functions directly, for example, when an image is added to an S3 bucket, a Lambda can be triggered to resize it. Some services cannot invoke Lambda directly; but you can instead use an *event source mapping* which is a polling mechanism that reads from an event source and invokes a Lambda function.\n\nYou can use event source mappings to process items from a stream or queue in the following services:\n\n- Amazon DynamoDB\n- Amazon Kinesis\n- Amazon MQ\n- Amazon Managed Streaming for Apache Kafka (Amazon MSK)\n- Self-managed Apache Kafka\n- Amazon Simple Queue Service\n\n#### Related resource:\n\n- Event source mapping official documentation, including the default behavior that batches records together into a single payload that Lambda sends to your function.", - "page_start": 60, - "page_end": 60, - "source_file": "serverless-core.pdf" - }, - { - "text": "#### Related resources:\n\n- Datadog Lambda Extension an extension that supports submitting custom metrics, traces, and logs asynchronously while your Lambda function executes.\n- Lambda Extensions official documentation\n\n#### **Launch functions faster with SnapStart**\n\nLambda SnapStart for Java can improve startup performance by up to 10x 
at no extra cost, typically with no changes to your function code. The largest contributor to startup latency (often referred to as cold start time) is the time that Lambda spends initializing the function, which includes loading the function's code, starting the runtime, and initializing the function code.\n\nWith SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker microVM snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access.\n\nNote: You can use SnapStart only on published function versions and aliases that point to versions. You can't use SnapStart on a function's unpublished version ($LATEST).\n\n#### Related resources:\n\n- Accelerate Your Lambda Functions with Lambda SnapStart an AWS Compute blog article by Jeff Barr from Nov 2022 that shows the configuration change and vast difference from roughly six seconds init time to 142 milliseconds of restore time with SnapStart", - "page_start": 62, - "page_end": 62, - "source_file": "serverless-core.pdf" - }, - { - "text": "could be listening. The handler function might create and send another event to an SNS queue so that alerts for high temperature are sent to users through SMS messages.\n\nThe function finally wraps up the JSON weather data into a new event and sends it back to API gateway. Afterward, the function continues to handle hundreds of additional requests. Request from users slow down after 2AM, so after some time the Lambda service will tear down the function execution environment to conserve resources. 
As a Customer, you will only be charged for function usage.", - "page_start": 38, - "page_end": 38, - "source_file": "serverless-core.pdf" - }, - { - "text": "You can use runtimes that Lambda provides for JavaScript (Node.js), TypeScript, Python, Java, Python, Go, C#, and PowerShell, or you can build your own custom runtime environment inside of a container.\n\nIf you package your code as a .zip file archive, you must configure your function to use a runtime that matches your programming language. For a container image, you include the runtime when you build the image.\n\n### **How to process events with a Lambda handler**\n\nConceptually, there are only three steps to processing events with Lambda:\n\n- 1. Configure the entry point to your function, known as the *handler*, and deploy the function.\n- 2. Lambda service initializes the function, then it invokes the *handler* with an invocation event and context.\n- 3. Your handler function processes the event and returns a response event.\n\nSubsequent events will invoke the handler again, without the initialization delay. During this cycle, the function stays in memory, so clients and variables declared outside of the handler method can be reused.\n\nAfter a period of time, Lambda will eventually tear down the runtime. This can happen for a variety of reasons; some examples: scaling down to conserve resources, updating the function, updating the runtime.\n\nThe function **handler** is the essential component of your function code. As noted previously, the handler is the entry point, but it may not be the only function in your code. 
In fact, a best practice is keeping the handler sparse and doing the actual processing in other functions in your code.\n\nHere are some example **handlers**:\n\nPython\n\n```\n# Example handler method in Python\ndef lambda_handler(event, context): \n message = 'Hello {} {}!'.format(event['first_name'], event['last_name']) \n return { \n 'message' : message \n }\n```", - "page_start": 56, - "page_end": 56, - "source_file": "serverless-core.pdf" - } - ] - }, - { - "references": { - "source_file": "serverless-core.pdf", - "query": "Why can't I use SnapStart on my function tagged with $LATEST ?", - "target_page": 63, - "target_passage": " You can use SnapStart only on published function versions and aliases that point to versions. You can't use SnapStart on a function's unpublished version ($LATEST)", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "#### Related resources:\n\n- Datadog Lambda Extension an extension that supports submitting custom metrics, traces, and logs asynchronously while your Lambda function executes.\n- Lambda Extensions official documentation\n\n#### **Launch functions faster with SnapStart**\n\nLambda SnapStart for Java can improve startup performance by up to 10x at no extra cost, typically with no changes to your function code. The largest contributor to startup latency (often referred to as cold start time) is the time that Lambda spends initializing the function, which includes loading the function's code, starting the runtime, and initializing the function code.\n\nWith SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker microVM snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access.\n\nNote: You can use SnapStart only on published function versions and aliases that point to versions. 
You can't use SnapStart on a function's unpublished version ($LATEST).\n\n#### Related resources:\n\n- Accelerate Your Lambda Functions with Lambda SnapStart an AWS Compute blog article by Jeff Barr from Nov 2022 that shows the configuration change and vast difference from roughly six seconds init time to 142 milliseconds of restore time with SnapStart", - "page_start": 62, - "page_end": 62, - "source_file": "serverless-core.pdf" - }, - { - "text": "- 5. Click in the **Filter** box and enter snap to see a list of snap files, as shown in Figure 13-72. Locate the exact name of the snap that was generated by using the **svc_snap** command that was issued earlier. Select that file, and click **Download**.\n\n| Select Support Package or Logs to Download | | | | | × |\n| --- | --- | --- | --- | --- | --- |\n| You can select a previously created support package or individual logs to download. | | | | | |\n| > )হা Contains snap × node2 | Default | > | | | |\n| File Name | | | → | Ili | |\n| /dumps/snap.7822DFF-2.180919.161312.log | | | | A | |\n| /dumps/snap.7822DFF-2.181031.164046.tgz | | | | H | |\n| /dumps/snap.ietd.7822DFF-2.180919.143442.tar | | | | | |\n| /dumps/snap.ietd.7822DFF-2.180919.152558.tar | | | | | |\n| 1 70 9 Showing 16 Files Selecting 1 File | | | | | |\n| (?) Need Help Cancel Download | | | | | |\n\n*Figure 13-72 Filtering on snap to download*\n\n- 6. Save the file to a folder of your choice on your workstation.\n# **13.9.3 Uploading files to the Support Center**\n\nIf you chose to not have the Storwize V7000 upload the support package automatically, it can still be uploaded for analysis from the Enhanced Customer Data Repository (ECuRep). Any uploads should be associated with a specific problem management report (PMR). 
The PMR is also known as a *service request* and is a mandatory requirement when uploading.", - "page_start": 753, - "page_end": 753, - "source_file": "sg247938.pdf" - }, - { - "text": "#### **Starting the servers**\n\nServers are started by running **STRTCPSVR *ONDMD**. The **INSTANCE** parameter of the **STRTCPSVR *ONDMD** command supports the special values of *DFT, *ALL, and *AUTOSTART, and the specification of the name of an instance. (An instance is set to autostart if the ars.cfg file for that instance contains ARS_AUTOSTART_INSTANCE=1.) The default value for the **INSTANCE** parameter is *DFT. You can also create a data area that is named **STRTCPSVR** to further control the behavior of the **STRTCPSVR** command. For more information about the data area, see the IBM Content Manager OnDemand for i - Common Server Administration Guide, SC19-2792.\n\nWithout the STRTCPSVR data area, the values of *DFT and *AUTOSTART work identically. All instances that are set to autostart are started. Use the special value *ALL to start all of the instances that are configured on the system. You can also specify the name of a single instance to start, for example:\n\nSTRTCPSVR SERVER(*ONDMD) INSTANCE(ONDTEST)\n\nWith the data area, the value of *DFT starts only the instance that is named in the data area. The data area must be named STRTCPSVR and in library QUSRRDARS. The data area must be of the type character with a length of 10. To create the data area, run the following command (all as one command):\n\nCRTDTAARA DTAARA(QUSRRDARS/STRTCPSVR) TYPE(*CHAR) LEN(10) VALUE(QUSROND) TEXT('Autostart instance name for STRTCPSVR *ONDMD *DFT')\n\nQUSROND is the name of the instance to start.\n\nThe special values *ALL and *AUTOSTART work the same with the data area as without the data area.\n\nTo determine the instances that are started when **STRTCPSVR SERVER(*ONDMD) INSTANCE(*AUTOSTART)** is run, you can look for the ARS_AUTOSTART_INSTANCE=1 in the ARS.CFG file. 
However, an easier way is available so that you do not need to check the ARS.CFG file for every instance.\n\nRun **grep** in Qshell to search the contents of all of the ARS.CFG files for the string ARS_AUTOSTART_INSTANCE=1, for example:\n\n$\n\n```\ngrep -n 'ARS_AUTOSTART_INSTANCE=1' /qibm/userdata/ondemand/*/ars.cfg \n/qibm/userdata/ondemand/ONDDEMO/ars.cfg:53:ARS_AUTOSTART_INSTANCE=1 \n/qibm/userdata/ondemand/ONDDEU/ars.cfg:53:ARS_AUTOSTART_INSTANCE=1 \n/qibm/userdata/ondemand/ONDENU/ars.cfg:53:ARS_AUTOSTART_INSTANCE=1 \n/qibm/userdata/ondemand/QUSROND/ars.cfg:53:ARS_AUTOSTART_INSTANCE=1 \n$\n```\nFrom the last four detail lines, which are the output of the **grep** command, you can determine that instances ONDDEMO, ONDDEU, ONDENU, and QUSROND are started when the **STRTCPSVR SERVER(*ONDMD) INSTANCE(*AUTOSTART)** command is run.\n\nTable 2-1 on page 35 summarizes the behavior of the **STRTCPSVR** command with and without the STRTCPSVR data area.", - "page_start": 57, - "page_end": 57, - "source_file": "sg246915.pdf" - }, - { - "text": "- 2. We collect the type 3 (option 3) and have it automatically uploaded to the PMR number that is provided by IBM Support, as shown in Example 13-6.\n\n```\nssh superuser@9.173.156.250\nPassword:\nIBM_Storwize:ITSO-V7k:superuser>svc_snap upload pmr=12345,000,866 gui3\n```\n- 3. If you do not want to automatically upload the snap to IBM, do not specify the **upload pmr=ppppp,bbb,ccc** part of the commands. When the snap creation completes, it creates a file named that uses the following format:\n/dumps/snap..YYMMDD.hhmmss.tgz\n\nIt takes a few minutes for the snap file to complete (longer if statesaves are included).\n\n- 4. 
The generated file can then be retrieved from the GUI clicking **Settings** → **Support** → **Manual Upload Instructions** twisty → **Download Support Package** and then, clicking **Download Existing Package**, as shown in Figure 13-71.\n*Figure 13-71 Downloaded Existing Package*", - "page_start": 752, - "page_end": 752, - "source_file": "sg247938.pdf" - }, - { - "text": "- 4. Click **Create** to create the route, as shown in Figure B-25.\n\n| Routes Learn More @ | | | | Create Route |\n| --- | --- | --- | --- | --- |\n| Filter by label | | | | Add |\n| Name | Hostname | Service | Target Port | TLS Termination |\n| app2-route | http://app2-http-git @ | app2a | web | |\n\n*Figure B-25 Creating a route* \n\n#### **Testing load balancing across NGINX instances**\n\nComplete the following steps:\n\n- 1. Open a terminal window to the OCP machine with a user with root privileges. Use the **ssh** command or PuTTY from your local computer.\n- 2. Run the script as shown in Example B-15 to test out the load balancing. The **wget** command uses a capital letter O:\n\n```\n$ for i in 1 2 3 4 5 6 7 8 9 10\ndo\nwget -q http://app2-http-git/ -O index.test\ngrep App index.test\ndone\nExample B-15 Running a script to test load balancing\n$ for i in 1 2 3 4 5 6 7 8 9 10; do wget -q http://app2-http-git/ -O \nindex.test; grep App2 index.test; done\nApp2a\nApp2b\nApp2a\nApp2b\nApp2a\nApp2b\nApp2a\nApp2b\nApp2a\nApp2b\n```\nFrom the command output, you can see that the route alternated between App2a and App2b at a rate of 50%.\n\n- 3. Close the terminal window and log out of the OpenShift console.", - "page_start": 254, - "page_end": 254, - "source_file": "sg248459.pdf" - }, - { - "text": "- 3. The Upload Support Package window provides four options for data collection. If you were contacted by IBM Support because your system called home or you manually opened a call with IBM Support, you receive a *PMR number*. 
Enter that PMR number into the **PMR** field and select the snap type (often referred to as an *option 1, 2, 3, 4 snap*) as requested by IBM Support (see Figure 13-69). In our example, we entered our PMR number, selected snap type 3 (option 3) because this automatically collects the statesave that were created at the time the node restarted, and clicked **Upload**.\n**Tip:** To open a service request online, see the IBM Support Service requests and PMRs web page.\n\n| Upload Support Package | × |\n| --- | --- |\n| · IPIN INUITINGL. PULL HOVE I PING LA | |\n| pppp,bbb,ccc | |\n| Select the type of new support package to generate and upload to the IBM support center: | |\n| Snap Type 1: Standard logs | |\n| Contains the most recent logs for the system, including the event and | |\n| audit logs. | |\n| Snap Type 2: Standard logs plus one existing statesave | |\n| Contains all the standard logs plus one existing statesave from any of | |\n| the nodes in the system. | |\n| Snap Type 3: Standard logs plus most recent statesave from each node | |\n| Contains all the standard logs plus each node's most recent | |\n| statesave. | |\n| ? Need Help Cancel | Upload |\n\n*Figure 13-69 Upload Support Package window*", - "page_start": 750, - "page_end": 750, - "source_file": "sg247938.pdf" - }, - { - "text": "Enable the SMS notification in the app. When one or more SMS messages are received on the mobile phone, the watch will receive one or more SMS reminders at the same time.\n\n1.5.3. Other application message notifications:\n\nTurn on the corresponding application message notification in the app, such as WeChat, QQ, Outlook, Facebook and other applications. 
When the mobile phone receives one/multiple application message notifications, the watch will receive one/multiple corresponding message reminders at the same time.\n\n#### **1.6 Frequently used contacts**\n\nThe watch binds to the app, and you allow the watch to access to the phone book of your mobile phone, then you can synchronize you contacts of your mobile phone to the smartwatch.\n\n#### **1.7 Fitness data**\n\nFitness data is turned on by default. When you enter the fitness data interface, scroll up the screen, the smartwatch will display the current data of steps, distance, and calories. The data will be wiped out at 00:00 every day in the morning.\n\n#### **1.8 Sports modes** (walking, running, cycling, rope skipping, badminton,\n\n#### basketball, football)\n\n1.8.1 Select the corresponding exercise mode, click the \"Start\" button on the screen to start the exercise; click the \"Start\" button again to pause the recording of the exercise; click the \"End\" button to end the recording, and save to the data.\n\n1.8.2 The data can only be saved when the recording of the exercise is more than 1 minute; If the recording time is less than 1 minute, the smartwatch will remind you that the data is too little to be saved.\n\n#### **1.9 Heart rate**\n\nAfter you wearing the smartwatch correctly, you can measure heart rate when you enter the heart rate function. If you don't wear the smartwatch properly, it will remind you to wear firmly for the measurement.\n\n#### **1.10 ECG**\n\nAfter you wearing the smartwatch correctly, and enter the ECG function(you need to turn on the ECG interface in the app, you can have single measurement at a time. The data of ECG will be saved in the mobile phone. 
This function should be used with the app.\n\n#### **2.0 My QR code**\n\nConnect the watch to the APP, find My QR Code in the APP, select WeChat/QQ/Alipay and other \"Receive money QR code\" to sync to the watch (Please follow the instructions of the app to operate the function).\n\n#### **2.1 Remote control music**", - "page_start": 3, - "page_end": 3, - "source_file": "6126797.pdf" - }, - { - "text": "There are no mandatory naming conventions for OWL entities. In chapter 7, we will discuss names and labels in more detail. A best practice is to select one set of naming conventions and then abide by that convention across your organization. For this tutorial we will follow the standard where class and individual names start with a capital letter for each word and do not contain spaces. This is known as CamelBack notation. For example: Pizza, PizzaTopping, etc. Also, we will follow the standard that class names are always singular rather than plural. E.g., Pizza rather than Pizzas, PizzaTopping rather than PizzaToppings.\n\n#### 4.2 Using a Reasoner\n\nYou may notice that one or more of your classes is highlighted in red as in Figure 4.5. This is because we haven't run the reasoner yet so Protégé has not been able to verify that our new classes have no inconsistencies. When just creating classes and subclasses in a new ontology there is little chance of an inconsistency. However, it is a good idea to run the reasoner often. When there is an inconsistency the sooner it is discovered the easier it is to fix. One common mistake that new users make is to do a lot of development and then run the reasoner only to find that there are multiple inconsistencies which can make debugging significantly more difficult. So let's get into the good habit of running the reasoner often. Protégé comes with some reasoners bundled in and others available as plugins. Since we are going to write some SWRL rules later in the tutorial, we want to use the Pellet reasoner. 
It has the best support for SWRL at the time this tutorial is being written.\n\n#### **Exercise 5: Install and Run the Pellet Reasoner**\n\n1. Check to see if the Pellet reasoner is installed. Click on the Reasoner menu. At the bottom of the menu there will be a list of the installed reasoners such as Hermit and possibly Pellet. If Pellet is visible in that menu then select it and skip to step 3.\n\n_____________________________________________________________________________________\n\n2. If Pellet is not visible then do File>Check for plugins and select Pellet from the list of available plugins and then select Install. This will install Pellet and you should get a message that says it will take effect the next time you start Protégé. Do a File>Save to save your work then quit Protégé and restart it. Then go to File>Open recent. You should see your saved Pizza tutorial in the list of recent ontologies. Select it to load it. Now you should see Pellet under the Reasoner menu and be able to select it so do so.\n\n3. With Pellet selected in the Reasoner menu execute the command Reasoner>Start reasoner. The reasoner should run very quickly since the ontology is so simple. You will notice that the little text message in the lower right corner of the Protégé window has changed to now say Reasoner active. The next time you make a change to the ontology that text will change to say: Reasoner state out of sync with active ontology. With small ontologies the reasoner runs very quickly, and it is a good idea to get into the habit of running it often, as much as after every change.\n\n4. It is possible that one or more of your classes will still be highlighted in red after you run the reasoner. If that happens do: Window>Refresh user interface and any red highlights should go away. 
Whenever your user interface seems to show something you don't expect the first thing to do is to try this command.", - "page_start": 15, - "page_end": 15, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "#### **B**.**Bind to the APP**\n\n#### **1. APP download method**\n\n1.1 Scan the QR code to download\n\n1.2 Search the application at App market and download\n\nFor Android users:\n\nSearch for \"WearPro\" in the Google Play app store or any customized Android store to download, remember to check the pop-up box on your phone when installing, and agree to the permission. For iOS users:\n\nSearch for \"WearPro\" in the APP Store to download, remember to check the pop-up box on your phone when installing, and agree to the permission.\n\nAfter WearPro is installed, the app icon appears as .\n\n#### 2.Bind Bluetooth\n\nAfter the watch is turned on, the Bluetooth will be in the state of being searched. After open the APK/APP, go to Devices > Add Device > click to start searching, select and click the corresponding watch device name, and the watch will be successfully bound to the app.\n\n#### 2.2 Connected to the APP state:\n\nWatch time synchronization: the time shown at the smartwatch and your mobile phone will synchronized after the smartwatch is bound to the APP successfully.\n\n2.3 Binding the audio/calling Bluetooth\n\nWhen the smartwatch is in the dial interface, you can find the audio/calling Bluetooth icon, and click it to turn it on, then go to the Bluetooth settings of your mobile phone and click the name of the audio/calling Bluetooth of the smartwatch to bind it.\n\n## **3. Find Watch**\n\nAfter the smartwatch is bound to the APP, you click \"Find Watch\" in the APP, the smartwatch will light up and vibrate for once.\n\n#### **4. Camera**", - "page_start": 5, - "page_end": 5, - "source_file": "6126797.pdf" - }, - { - "text": "you will be prompted to create a user ID (your email address) and a password. 
Once you do that you should have a fresh Web Protégé workspace. Figure 12.1 shows what my Web Protégé workspace currently looks like. Most of the projects are owned by me although note that the CODO project is owned by my colleague Biswanath Dutta. However, I still have complete access to that ontology due to the way Biswanath has configured my access as being able to both view and edit the ontology.\n\nTo upload the Pizza ontology, select the large Create New Project button. This will bring up the window shown in figure 12.2. Fill out the project name and description, then select the Choose File button and navigate to where you have the latest version of the Pizza tutorial with data. Note that in the figure I have already done this navigation so there is a value for the file to load. You can leave the Language field blank. Once you have all the fields set up similar to figure 12.2 click the Create New Project button on this dialog (note this is a different button than the one you started from).\n\n| Create New Project | |\n| --- | --- |\n| Project name | |\n| Pizza With Data | |\n| Language | |\n| empty for no language tag. | Enter a language tag for labelling new entities and to use as the primary display language. Leave |\n| Description | |\n| The Pizza tutorial ontology with data. | |\n| Create from existing sources | |\n| Choose File PizzaTutori... thDataV2.owl | |\n\nFigure 12.2 The Create New Project Dialog\n\nYour workspace should now include your first project. Click on the three horizontal bars at the far right of the project. This should bring up a pop-up menu. Select the Open option. This should bring you into the main Web Protégé UI to browse an ontology.\n\nBefore you make changes to the ontology you need to make sure the settings for new entities and rendering are consistent with the settings you used for the Pizza ontology. The default in Web Protégé as with Protégé is to use Auto-Generated UUIDs rather than user supplied names. 
If you aren't sure about these settings you can go back to exercise 2 at the beginning of chapter 4 and chapter 7 to refresh your memory. There are excellent reasons to use auto-generated UUIDs but for beginners, especially for those who want to learn SPARQL, I think they make learning the basics more difficult so we have been using the alternative of user supplied names. At the top of the Web Protégé UI in the right corner there are", - "page_start": 84, - "page_end": 84, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_SHEN_2003.pdf", - "query": "At Shentel company, what determines an employees pension ?", - "target_page": 22, - "target_passage": "Pension benefits are based primarily on the employee's compensation and years of service", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### NOTE 22: PENSIONS\n\nWe have contributory and non-contributory defined benefit pension plans that are made available to most of our employees. The plans provide pensions based on years of service, years of contributions and earnings. We do not provide any non-pension post-retirement benefits. We also provide unfunded supplemental pension benefits to certain executives.\n\nThe assets of the defined benefit pension plans are held in segregated accounts isolated from our assets. We administer the defined benefit pension plans pursuant to applicable regulations, the Statement of Investment Policies and Procedures and to the mandate of the Pension Committee of the Board of Directors. 
The Pension Committee of the Board of Directors oversees our administration of the defined benefits pension plans, which includes the following principal areas:\n\n- overseeing the funding, administration, communication and investment management of the plans\n- selecting and monitoring the performance of all third parties performing duties in respect of the plans, including audit, actuarial and investment management services\n- proposing, considering and approving amendments to the defined benefit pension plans\n- proposing, considering and approving amendments of the Statement of Investment Policies and Procedures\n- reviewing management and actuarial reports prepared in respect of the administration of the defined benefit pension plans\n- reviewing and approving the audited financial statements of the defined benefit pension plan funds.\n\nThe assets of the defined benefit pension plans are invested and managed following all applicable regulations and the Statement of Investment Policies and Procedures, and reflect the characteristics and asset mix of each defined benefit pension plan. Investment and market return risk is managed by:\n\n- contracting professional investment managers to execute the investment strategy following the Statement of Investment Policies and Procedures and regulatory requirements\n- specifying the kinds of investments that can be held in the plans and monitoring compliance\n- using asset allocation and diversification strategies, and\n- purchasing annuities from time to time.\n\nThe funded pension plans are registered with the Office of the Superintendent of Financial Institutions and are subject to the Federal Pension Benefits Standards Act. The plans are also registered with the Canada Revenue Agency and are subject to the Canada Income Tax Act. 
The benefits provided under the plans and the contributions to the plans are funded and administered in accordance with all applicable legislation and regulations.\n\nSignificant estimates are involved in determining pension related balances. Actuarial estimates are based on projections of employees' compensation levels at the time of retirement. Maximum retirement benefits are primarily based on career average earnings, subject to certain adjustments. The most recent actuarial valuations were completed as at January 1, 2013.\n\nThe table below sets out the estimated present value of accrued plan benefits and the estimated market value of the net assets available to provide these benefits for our funded plans at December 31, 2013 and 2012.\n\n| | 2013 | | | 2012 |\n| --- | --- | --- | --- | --- |\n| Plan assets, at fair value | | $ 1,037 | $ | 833 |\n| Accrued benefit obligations | | 1,209 | | 1,167 |\n| Deficiency of plan assets over accrued benefit obligations | | (172) | | (334) |\n| Effect of asset ceiling limit | | (9) | | – |\n| Net deferred pension liability | $ | (181) | $ | (334) |\n| Consists of: | | | | |\n| Deferred pension asset | $ | 8 | $ | 9 |\n| Deferred pension liability | | (189) | | (343) |\n| Net deferred pension liability | $ | (181) | $ | (334) |\n\nThe table below shows our pension fund assets for the years ended 2013 and 2012.\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| Plan assets, January 1 | $ 833 | $ 684 |\n| Interest income | 40 | 40 |\n| Remeasurements, return on plan assets recognized in other | | |\n| comprehensive income and equity | 65 | 37 |\n| Contributions by employees | 26 | 22 |\n| Contributions by employer | 101 | 85 |\n| Benefits paid | (26) | (33) |\n| Administrative expenses paid from plan assets | (2) | (2) |\n| Plan assets, December 31 | $ 1,037 | $ 833 |\n\nThe table below shows the accrued benefit obligations arising from funded obligations for the years ended December 31, 2013 and 2012.\n\n| | 2013 | 2012 |\n| --- | 
--- | --- |\n| Accrued benefit obligations, January 1 | $ 1,167 | $ 817 |\n| Service cost | 71 | 46 |\n| Interest cost | 52 | 45 |\n| Benefits paid | (26) | (33) |\n| Contributions by employees | 26 | 23 |\n| Remeasurements, recognized in other comprehensive | | |\n| income and equity | (81) | 269 |\n| Accrued benefit obligations, December 31 | $ 1,209 | $ 1,167 |\n\nThe table below shows the effect of the asset ceiling for the years ended December 31, 2013 and 2012.\n\n| | 2013 | | 2012 | |\n| --- | --- | --- | --- | --- |\n| Asset ceiling, January 1 | $ | – | $ | – |\n| Interest income | | – | | – |\n| Remeasurements, change in asset ceiling (excluding interest | | | | |\n| income) recognized in comprehensive income and equity | (9) | | – | |\n| Effect of changes in foreign exchange rates | | – | – | |\n| Asset ceiling, December 31 | $ (9) | | $ – | |\n\nPlan assets are comprised mainly of pooled funds that invest in common stocks and bonds that are traded in an active market. The table below shows the fair value of the total pension plan assets by major category for the years ended December 31, 2013 and 2012.\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| Equity securities | $ 631 | $ 480 |\n| Debt securities | 403 | 348 |\n| Other – cash | 3 | 5 |\n| Total fair value of plan assets | $ 1,037 | $ 833 |", - "page_start": 121, - "page_end": 121, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## **We must serve well to prosper – We must prosper to serve well**\n\nShenTel Service Company • Shenandoah Long Distance Company • Shenandoah Mobile Company Shenandoah Network Company • Shenandoah Telephone Company • Shenandoah Valley Leasing Company Shenandoah Cable Television Company • ShenTel Communications Company Shenandoah Personal Communications Company\n\n> PO Box 459 Edinburg, VA 22824-0459 Phone 540-984-4141 • Fax 540-984-8192 www.shentel.com", - "page_start": 59, - "page_end": 59, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "This annual 
report contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934, including statements regarding our expectations, hopes, intentions, or strategies regarding the future. These statements are subject to certain risks and uncertainties that could cause actual results to differ materially from those anticipated in the forward-looking statements. Factors that might cause such a difference include, but are not limited to, changes in the interest rate environment, management's business strategy, national, regional and local market conditions, and legislative and regulatory conditions. The Company undertakes no obligation to publicly revise these forward-looking statements to reflect subsequent events or circumstances, except as required by law.\n\n#### **General**\n\nShenandoah Telecommunications Company is a diversified telecommunications company providing both regulated and unregulated telecommunications services through its nine wholly owned subsidiaries. These subsidiaries provide local exchange telephone services, wireless personal communications services (PCS), as well as cable television, paging, Internet access, long distance, fiber optics facilities, and leased tower facilities. The Company is the exclusive provider of wireless mobility communications network products and services under the Sprint brand from Harrisonburg, Virginia to Harrisburg, York and Altoona, Pennsylvania. The Company refers to the Hagerstown, Maryland; Martinsburg, West Virginia; and Harrisonburg and Winchester, Virginia markets as its Quad State markets. The Company refers to the Altoona, Harrisburg, and York, Pennsylvania markets as its Central Penn markets. Competitive local exchange carrier (CLEC) services were established on a limited basis during 2002. 
In addition, the Company sells and leases equipment, mainly related to services it provides, and also participates in emerging services and technologies by direct investment in non-affiliated companies.\n\nThe Company reports revenues as wireless, wireline and other revenues. These revenue classifications are defined as follows: Wireless revenues are made up of the Personal Communications Company (a PCS Affiliate of Sprint), and the Mobile Company. Wireline revenues include the following subsidiary revenues in the financial results: Telephone Company, Network Company, Cable Television Company, and the Long Distance Company. Other revenues are comprised of the revenues of ShenTel Service Company, the Leasing Company, ShenTel Communications Company and the Holding Company. For additional information on the Company's business segments, see Note 14 to audited consolidated financial statements appearing elsewhere in this report.\n\nThe Company participates in the telecommunications industry, which requires substantial investment in fixed assets or plant. This significant capital requirement may preclude profitability during the initial years of operation. The strategy of the Company is to grow and diversify the business by adding services and geographic areas that can leverage the existing plant, but to do so within the opportunities and constraints presented by the industry. For many years the Company focused on reducing reliance on the regulated telephone operation, which up until 1981 was the primary business within the Company. This initial diversification was concentrated in other wireline businesses, such as the cable television and regional fiber facility businesses, but in 1990 the Company made its first significant investment in the wireless sector through its former investment in the Virginia 10 RSA Limited partnership. By 1998, revenues of the regulated telephone operation had decreased to 59.2% of total revenues. 
In that same year more than 76.6% of the Company's total revenue was generated by wireline operations, and initiatives were already underway to make wireless a more significant contributor to total revenues.\n\nDuring the 1990's significant investments were made in the cellular and PCS (wireless) businesses. The VA 10 RSA cellular operation, in which the Company held a 66% interest and was the general partner, experienced rapid revenue growth and excellent margins in the late 1990's. The cellular operation covered only six counties, and became increasingly dependent on roaming revenues. Management believed the roaming revenues and associated margins would be unsustainable as other wireless providers increasingly offered nationally-branded services with significantly reduced usage charges. To position it to participate in the newer, more advanced, digital wireless services, in 1995 the Company entered the PCS business through an affiliation with American Personal Communications (APC), initiating service along the Interstate 81 corridor from Harrisonburg, Virginia to Chambersburg, Pennsylvania. This territory was a very close match to the Company's fiber network, thereby providing economic integration that might not be available to other wireless carriers. In 1999, the Company entered a new affiliation arrangement with Sprint, the successor to APC (which introduced the Company to a nationally-branded wireless service) and expanded the PCS footprint further into Central Pennsylvania. 
The Company's combined capital investment in 2000 and 2001 in the PCS operation was $45.1 million.", - "page_start": 40, - "page_end": 40, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "### NOTE 11: LEASES\n\nPlan Assets\n\nallocation as of June 30:\n\nEquity securities do not include any Company common stock.\n\nfive years and in the aggregate for the subsequent five years:\n\nthe target asset allocation of the pension portfolio.\n\nAsset Class:\n\nCash Flows\n\nEmployer Contributions\n\nEstimated Future Benefit Payments\n\nThe fair value of each major class of plan assets for the Company's Qualified Benefit Retirement Plan are valued using quoted market prices in active markets for identical instruments, or Level 1 in the fair value hierarchy. Following are the fair values and target\n\nEquity securities 40 – 70% **$ 3,735** $ 3,876 Debt securities 20 – 50% **2,382** 1,756 Other 0 – 20% **322** 424 Total 100% **$ 6,439** $ 6,056\n\nThe Company has established an investment policy and regularly monitors the performance of the assets of the trust maintained in conjunction with the Qualified Defined Benefit Retirement Plan. The strategy implemented by the trustee of the Qualified Defined Benefit Retirement Plan is to achieve long-term objectives and invest the pension assets in accordance with ERISA and fiduciary standards. The long-term primary objectives are to provide for a reasonable amount of long-term capital, without undue exposure to risk; to protect the Qualified Defined Benefit Retirement Plan assets from erosion of purchasing power; and to provide investment results that meet or exceed the actuarially assumed long-term rate of return. 
The expected long-term rate of return on assets assumption was developed by considering the historical returns and the future expectations for returns of each asset class as well as\n\nThe Company expects to contribute $6,000 to its pension benefit plans and $240 to its retiree health care benefit plans in\n\nThe following benefit payments, which reflect expected future service, as applicable, are expected to be paid in each of the next\n\n2013 $ 6,200 $ 240 2014 5,900 240 2015 5,700 240 2016 4,500 240 2017 1,700 260 2018 through 2022 15,200 1,420\n\n2013. Contributions do not equal estimated future payments as certain payments are made from plan assets.\n\nDuring Fiscal Years Pension Benefits\n\nTarget Allocation Fair Value\n\n**2012** 2011\n\nRetiree Health Care\n\nBenefits\n\nThe Company leases its corporate headquarters facility along with many service center and distribution center facilities, vehicles and equipment under non-cancelable lease agreements accounted for as operating leases. The minimum annual rental commitments under non-cancelable operating leases as of June 30, 2012 are as follows:\n\n| During Fiscal Years | | |\n| --- | --- | --- |\n| 2013 | $ | 23,500 |\n| 2014 | | 18,000 |\n| 2015 | | 14,300 |\n| 2016 | | 9,600 |\n| 2017 | | 5,100 |\n| Thereafter | | 11,100 |\n| Total minimum lease payments | $ | 81,600 |\n\nRental expenses incurred for operating leases, principally from leases for real property, vehicles and computer equipment were $31,200 in 2012, $31,400 in 2011 and $30,700 in 2010.\n\n### NOTE 12: SEGMENT AND GEOGRAPHIC INFORMATION\n\nThe Company's reportable segments are: Service Center Based Distribution and Fluid Power Businesses. 
The Service Center Based Distribution segment provides customers with solutions to their maintenance, repair and original equipment manufacturing needs through the distribution of industrial products including bearings, power transmission components, fluid power components, industrial rubber products, linear motion products, safety products, general maintenance and a variety of mill supply products. The Fluid Power Businesses segment distributes fluid power components and operates shops that assemble fluid power systems and components, performs equipment repair, and offers technical advice to customers.\n\nThe accounting policies of the Company's reportable segments are generally the same as those described in Note 1. Sales primarily from the Fluid Power Businesses segment to the Service Center Based Distribution segment of $18,097, $17,665 and $14,006, in fiscal 2012, 2011 and 2010, respectively, have been eliminated in the table below.\n\n#### Segment Financial Information\n\n| | Service Center | | Fluid Power | |\n| --- | --- | --- | --- | --- |\n| | Based Distribution | | Businesses | Total |\n| Year Ended June 30, 2012 | | | | |\n| Net sales | $ | 1,904,564 | $ 470,881 | $ 2,375,445 |\n| Operating income for reportable segments | | 135,240 | 43,236 | 178,476 |\n| Assets used in the business | | 731,915 | 230,268 | 962,183 |\n| Depreciation and amortization of property | | 9,403 | 1,833 | 11,236 |\n| Capital expenditures | | 24,339 | 1,682 | 26,021 |\n| Year Ended June 30, 2011 | | | | |\n| Net sales | $ | 1,770,798 | $ 442,051 | $ 2,212,849 |\n| Operating income for reportable segments | | 115,798 | 41,793 | 157,591 |\n| Assets used in the business | | 700,486 | 214,445 | 914,931 |\n| Depreciation and amortization of property | | 9,152 | 2,082 | 11,234 |\n| Capital expenditures | | 19,392 | 1,039 | 20,431 |\n| Year Ended June 30, 2010 | | | | |\n| Net sales | $ | 1,536,543 | $ 356,665 | $ 1,893,208 |\n| Operating income for reportable segments | | 77,029 | 
26,794 | 103,823 |\n| Assets used in the business | | 690,970 | 200,550 | 891,520 |\n| Depreciation and amortization of property | | 9,336 | 2,129 | 11,465 |\n| Capital expenditures | | 6,389 | 827 | 7,216 |\n\n25358_AIT_Report_WT.indd 35 8/23/12 8:33 AM", - "page_start": 36, - "page_end": 36, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "#### 9. RETIREMENT BENEFIT PLANS\n\nThe Company and its domestic consolidated subsidiaries have defined benefit plans, i.e., welfare pension fund plans (\"WPFP\"), tax-qualified pension plans and lump-sum payment plans, covering substantially all employees who are entitled to lump-sum or annuity payments, the amounts of which are determined by reference to their basic rates of pay, length of service, and the conditions under which termination occurs. Certain foreign consolidated subsidiaries have defined benefit and contribution plans.\n\nThe following table sets forth the funded and accrued status of the plans, and the amounts recognized in the consolidated balance sheets as of March 31, 2005 and 2004 for the Company's and the consolidated subsidiaries' defined benefit plans:\n\n| | | | Thousands of |\n| --- | --- | --- | --- |\n| | Millions of yen | | U.S. dollars |\n| 2004 | | 2003 | 2004 |\n| As of | Mar. 31, 2005 | Mar. 31, 2004 | Mar. 
31, 2005 |\n| Retirement benefit obligation ¥(1,217,260) | | ¥(1,041,483) | $(11,376,262) |\n| Plan assets at fair value | 500,815 | 377,169 | 4,680,514 |\n| Unfunded retirement benefit obligation | (716,445) | (664,314) | (6,695,748) |\n| Unrecognized net retirement benefit obligation at transition | 120,718 | 131,666 | 1,128,206 |\n| Unrecognized actuarial gain or loss | 154,689 | 152,867 | 1,445,691 |\n| Unrecognized prior service cost | (66,720) | (61,833) | (623,551) |\n| Net retirement benefit obligation | (507,758) | (441,614) | (4,745,402) |\n| Prepaid pension cost | 445 | 652 | 4,159 |\n| Accrued retirement benefits ¥ | (508,203) ¥ | (442,266) | $ (4,749,561) |\n\nThe substitutional portion of the benefits under the WPFP has been included in the amounts shown in the above table.\n\nThe Company received the approval from the Minister of Health, Labor and Welfare (\"MHLW\") in the year ended March 31, 2003 with respect to its application for exemption from the obligation for benefits related to future employee services under the substitutional portion of the WPFP. Certain domestic consolidated subsidiaries received the same approval from MHLW during the year ended March 31, 2004. In accordance with the transitional provision stipulated in \"Practical Guidelines for Accounting for Retirement Benefits,\" the Company and the domestic consolidated subsidiaries accounted for the separation of the substitutional portion of the benefit obligation from the corporate portion of the benefit obligation under their WPFPs as of the dates of approval for their exemption assuming that the transfer to the Japanese government of the substitutional portion of the benefit obligation and related pension plan assets had been completed as of those dates. 
As a result, the Company recognized a loss of ¥30,945 million for the year ended March 31, 2003 and the domestic consolidated subsidiaries recognized an aggregate gain of ¥3,669 million and an aggregate loss of ¥1,587 million for the year ended March 31, 2004. The pension assets to be transferred were calculated at ¥35,770 million for the domestic consolidated subsidiaries at March 31, 2004 and ¥241,203 million for the Company at March 31, 2003.\n\nThe components of retirement benefit expenses for the years ended March 31, 2005, 2004 and 2003 are outlined as follows:\n\n| | | | | Thousands of |\n| --- | --- | --- | --- | --- |\n| | | Millions of yen | | U.S. dollars |\n| 2004 | | 2003 | 2002 | 2004 |\n| For the years ended | Mar. 31, 2005 | Mar. 31, 2004 | Mar. 31, 2003 | Mar. 31, 2005 |\n| Service cost ¥47,802 | | ¥48,418 | ¥ 51,543 | $446,748 |\n| Interest cost | 33,288 | 33,012 | 45,269 | 311,103 |\n| Expected return on plan assets | (17,999) | (15,523) | (26,708) | (168,215) |\n| Amortization of net retirement benefit obligation at transition | 12,009 | 14,169 | 24,280 | 112,234 |\n| Amortization of actuarial gain or loss | 12,298 | 18,689 | 11,464 | 114,934 |\n| Amortization of prior service cost | (5,431) | (7,049) | (7,762) | (50,757) |\n| Other | 179 | 57 | 5 | 1,673 |\n| Retirement benefit expenses | 82,146 | 91,773 | 98,091 | 767,720 |\n| (Gain) loss on return of the substitutional portion of | | | | |\n| welfare pension fund plans | (1,107) | (5,594) | 30,945 | (10,346) |\n| Total ¥81,039 | | ¥86,179 | ¥129,036 | $757,374 |", - "page_start": 83, - "page_end": 83, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "**Figure 18: Employment types in EU27, development 2005 to 202265 – Eurostat**\n\nThe minor deviation of the sum of the different types of employment to the 100% 'Employed persons' is due to 'No response' answers. 
The data of part-time employees and of employees with a temporary contract are for the full year 2019, not for Q4.\n\nThe group 'employees' is characterised by **two major contractual distinctions** that are important for OSH: 1) **full- or part-time** work, and 2) the **time limit of the contract** (indefinite or temporary). Moreover, in many Member States there are major differences between employment contracts of private employers in comparison to public employers.\n\n#### **Definitions Eurostat66**\n\n**Employers = self-employed with employee:** employing one or more employees: persons who work in their own business, professional practice or farm for the purpose of earning a profit and who employ at least one other person.\n\n**Self-employed:** not employing any employees (self-employed without employees): persons who work in their business, professional practices or farm for the purpose of earning a profit and who employ no other persons.\n\n**Employees:** persons who work for a public or private employer and who receive compensation in the form of wages, salaries, fees, gratuities, payment by result or in kind. Contributing family workers: persons who help another member of the family to run a farm or business, provided they are not classed as employees.", - "page_start": 46, - "page_end": 46, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "#### **OUR BUSINESS**\n\nShenandoah Telecommunications Company is a diversified telecommunications holding company which provides various telecommunications services through its operating subsidiaries. 
These services include: wireline telephone service, primarily in Shenandoah County and small service areas in Rockingham, Frederick, and Warren counties, all in Virginia; cable television service in Shenandoah County; unregulated telecommunications equipment sales and services; online information and Internet access provided to the multi-state region surrounding the Northern Shenandoah Valley of Virginia; financing of purchases of telecommunications facilities and equipment; paging services in the Northern Shenandoah Valley; resale of long distance services; operation and maintenance of an interstate fiber optic network; wireless personal communications services (PCS) and a tower network in the four-state region from Harrisonburg, Virginia to the Harrisburg, York and Altoona, Pennsylvania markets.\n\n#### **ANNUAL MEETING**\n\nThe Board of Directors extends an invitation to all shareholders to attend the Annual Meeting of Shareholders. The meeting will be held at 11:00 AM (EST) on April 20, 2004 in the Auditorium of the Company's offices at the Shentel Center, 500 Mill Road, Edinburg, Virginia.\n\n#### **FORMS 10-K, 10-Q, and 8-K**\n\n**The Company files periodic reports with the Securities and Exchange Commission. The Company's Annual Report on Form 10-K, Quarterly Reports on Form 10-Q, and Current Reports on Form 8-K, along with any amendments to these reports, are available to shareholders through the Company's website, www.shentel.com. This website also has recent news releases and other information potentially of interest to shareholders.**\n\n**A copy of the Company's Annual Report on Form 10-K, without exhibits, may be obtained, without charge, by writing to Shenandoah Telecommunications Company, 124 South Main Street, P.O. 
Box 459, Edinburg, Virginia 22824, Attention: Secretary.**\n\n#### **MARKET AND DIVIDEND INFORMATION**\n\nThe Company's stock is traded on the NASDAQ National Market under the symbol \"SHEN.\" Information on the high and low sales prices per share of common stock as reported by the NASDAQ National Market for the last two years is set forth below: The Company's stock is traded on the NASDAQ National Market under the symbol \"SHEN.\" Information on the high and low closing prices per share of common stock as reported by the NASDAQ National Market for the last two years is set forth below:\n\n| | | | | | | | 2003 | 2003 | | | | | | | | | | | | | 2002 | 2002 | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | Qtr. 1 Quarter 1 Quarter 1 | | | | Qtr. 2 Quarter 2 Quarter 2 2003 | | | Qtr. 3 Quarter 3 | Quarter 3 | Qtr. 4 Quarter 4 | Quarter 4 | | Quarter 1 | Quarter 1 | Qtr. 1 | | Quarter 2 | | Qtr. 2 Quarter 2 | | Quarter 3 2002 | | Qtr. 3 Quarter 3 | | Quarter 4 | Qtr. 
4 Quarter 4 |\n| High price High price | $ | $ Quarter 1 24.31 $ 24.31 | | $ | $ $ | Quarter 2 24.98 24.98 | 24.98 | $ $ $ | $ | Quarter 3 25.48 25.48 25.48 | $ $ $ | Quarter 4 27.50 27.50 | $ | Quarter 1 | $ $ 20.06 | 20.06 | $ | Quarter 2 $ $ $ 27.25 | | | 27.25 27.25 | Quarter 3 $ | $ $ $ 27.25 | 27.25 27.25 | | Quarter 4 $ 25.95 $ | $ 25.95 |\n| High price | | $ | 24.31 | | $ | | 24.98 | | $ | 25.48 | $ | 27.50 | $ | | 20.06 | | $ | 27.25 | | | $ | | 27.25 | | | $ 25.95 | |\n| Low price Low price | $ | $ $ 13.64 13.64 | | $ | $ $ | 14.33 14.33 | 14.33 | $ $ $ | $ | 19.25 19.25 19.25 | $ 19.74 $ | $ 19.74 | $ | | $ $ 16.50 | 16.50 | $ | $ $ | 19.69 | $ | 19.69 19.69 | $ 22.75 | $ | $ 22.75 $ 22.75 $ 22.75 | $ | $ $ 21.61 $ | 21.61 21.61 |\n\nLow price $ 13.64 $ 14.33 $ 19.25 $ 19.74 $ 16.50 $ 19.69 $ 22.75 $ 21.61 All share and per share figures are restated to reflect the 2 for 1 stock split effected February 23, 2004. *All share and per share figures are restated to reflect the 2 for 1 stock split effected February 23, 2004.*\n\nAll share and per share figures are restated to reflect the 2 for 1 stock split effected February 23, 2004. The Company historically has paid an annual cash dividend on or about December 1st of each year. The cash dividend per share was $0.39 in 2003 and $0.37 in 2002. The Company's ability to pay dividends is restricted by its long-term loan agreements. The loan agreements are not expected to limit dividends in amounts that the Company historically has The Company historically has paid an annual cash dividend on or about December 1st of each year. The cash dividend per share was $0.39 in 2003 and $0.37 in 2002. The Company's ability to pay dividends is restricted by its long-term loan agreements. The loan agreements are not expected to limit dividends in amounts that the Company historically has paid. The Company historically has paid an annual cash dividend on or about December 1st of each year. 
The cash dividend per share was $0.39 in 2003 and $0.37 in 2002. The Company's ability to pay dividends is restricted by its long-term loan agreements. The loan agreements are not expected to limit dividends in amounts that the Company historically has paid.\n\npaid. As of February 15, 2004, there were approximately 3,930 holders of record of the Company's common stock. As of February 15, 2004, there were approximately 3,930 holders of record of the Company's common stock.\n\nAs of February 15, 2004, there were approximately 3,930 holders of record of the Company's common stock.\n\nEdinburg, VA 22824 Richmond, VA 23219\n\n#### **CORPORATE HEADQUARTERS INDEPENDENT AUDITOR CORPORATE HEADQUARTERS**\n\nShenandoah Telecommunications Company KPMG LLP 124 South Main Street 1021 East Cary Street Shenandoah Telecommunications Company KPMG LLP 124 South Main Street 1021 East Cary Street Edinburg, VA 22824 Richmond, VA 23219 Shenandoah Telecommunications Company 124 South Main Street Edinburg, VA 22824 124 South Main Street 1021 East Cary Street\n\n **CORPORATE HEADQUARTERS INDEPENDENT AUDITOR**\n\n#### **SHAREHOLDERS' QUESTIONS AND STOCK TRANSFERS**\n\n**SHAREHOLDERS' QUESTIONS AND STOCK TRANSFERS** CALL (540) 984-5200\n\nCALL (540) 984-5200 Transfer Agent - Common Stock Shenandoah Telecommunications Company P.O. Box 459 Transfer Agent - Common Stock Shenandoah Telecommunications Company P.O. Box 459 Edinburg, VA 22824\n\nEdi b VA 22824\n\n*This Annual Report to Shareholders contains forward-looking statements. These statements are subject to certain risks and uncertainties that could cause actual results to differ materially from those anticipated in the forward-looking statements. Factors that might cause such a difference include, but are not limited to: changes in the interest rate environment; management's business strategy; national, regional, and local market conditions; and legislative and regulatory conditions. 
Readers should not place undue reliance on forward-looking statements which reflect management's view only as of the date hereof. The Company undertakes no obligation to publicly revise these forward-looking statements to reflect subsequent events or circumstances, except as required by law.*", - "page_start": 58, - "page_end": 58, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### **Pension Obligations**\n\nOur retiree pension plans had a funding deficit of approximately $172 million at December 31, 2013. We have been making special minimum monthly payments in addition to our regular contributions to eliminate the pension liability. During 2013, our funding deficit was reduced by $162 million.\n\nThe special payments, including contributions associated with benefits paid from the plans, were approximately $7 million in 2013. We expect our total estimated funding requirements to be $96 million in 2014 and to be adjusted annually thereafter, based on various market factors such as interest rates and expected returns and staffing assumptions.\n\nChanges in factors such as the discount rate, increase in compensation and the expected return on plan assets can affect the accrued benefit obligation, pension expense and the deficiency of plan assets over accrued obligations in the future. See *Critical accounting estimates* for more information.\n\n#### *Purchase of Annuities*\n\nFrom time to time we have made additional lump-sum contributions to our pension plans, and the pension plans have purchased annuities from insurance companies to fund the pension benefit obligations for certain groups of retired employees in the plans. 
Purchasing the annuities relieves us of our primary responsibility for that portion of the accrued benefit obligations for the retired employees and eliminates the significant risk associated with the obligations.\n\nWe did not make any additional lump-sum contributions to our pension plans in 2013 or 2012, and the pension plans did not purchase additional annuities.\n\n#### FINANCIAL RISK MANAGEMENT\n\nWe normally use three categories of derivative instruments to manage risks related to our business activities:\n\n| Categories | The risk it manages | Types of derivative instruments |\n| --- | --- | --- |\n| Debt Derivatives | • Impact of fluctuations in foreign exchange rates on | • Cross-currency interest rate exchange agreements |\n| | principal and interest payments for US denominated | • Forward foreign exchange agreements (from time |\n| | long-term debt | to time, as applicable) |\n| Expenditure Derivatives | • Impact of fluctuations in foreign exchange rates on | • Forward foreign exchange agreements |\n| | forecasted US dollar denominated expenditures | |\n| Equity Derivatives | • Impact of fluctuations in share price on stock-based | • Total return swap agreements |\n| | compensation expense | |\n\nWe also manage our exposure to fluctuating interest rates and we have fixed the interest rate on 95.3% of our debt including short-term borrowings at December 31, 2013 (2012 – 100%).\n\n#### **Debt Derivatives**\n\nWe use cross currency interest exchange agreements (Debt Derivatives), to hedge the foreign exchange risk on all of the principal and interest obligations of our US dollar denominated senior notes and debentures. At December 31, 2013 we used Debt Derivatives to hedge the foreign exchange risk on 100% of the principal and interest obligations on all our US dollar denominated debt. 
We use Debt Derivatives for risk management purposes only.\n\nDuring 2013, we completed Debt Derivatives transactions as follows:\n\n- entered into new Debt Derivatives to hedge senior notes issued in 2013\n- terminated existing Debt Derivatives and entered into Debt Derivatives with different terms to hedge existing senior notes\n- settled Debt Derivatives related to senior notes that matured during the year.\n\nAll of our Debt Derivatives currently outstanding have been designated as effective hedges against foreign exchange risk for accounting purposes as described below and in note 20 to the consolidated financial statements.\n\n*New Debt Derivatives to Hedge Senior Notes Issued In 2013*\n\n| Effective date | US$ Principal/notional amount (millions) | Maturity date | Coupon rate | Fixed hedged Cdn.$ interest rate 1 | Cdn$ equivalent (millions) |\n| --- | --- | --- | --- | --- | --- |\n| March 7, 2013 | US$ 500 | 2023 | 3.00% | 3.60% | $ 515 |\n| March 7, 2013 | US$ 500 | 2043 | 4.50% | 4.60% | $ 515 |\n| Subtotal | US$ 1,000 | | | | $ 1,030 |\n| October 2, 2013 | US$ 850 | 2023 | 4.10% | 4.59% | $ 877 |\n| October 2, 2013 | US$ 650 | 2043 | 5.45% | 5.61% | $ 671 |\n| Subtotal | US$ 1,500 | | | | $ 1,548 |\n\n1 Converting from a fixed US$ coupon rate to a weighted average Cdn$ fixed rate.\n\n*Terminated and Replaced Existing Debt Derivatives*\n\n| Terminated Debt Derivatives | | | | New Debt Derivatives | | | | Hedging effect |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Termination date | Notional amount (millions) | Original maturity date | Cash settlement payment (millions) | Date entered | Derivative amount (millions) | New maturity date | Fixed weighted average 1 | Fixed Cdn$ equivalent (millions) 2 |\n| Mar 6, 2013 | US$ 350 2 | 2018 | Nil | Mar 6, 2013 | US$ 350 2 | 2038 | 7.62% | $ 356 |\n| Sep 27, 2013 | US$ 1,075 3,4 | 2014 – 2015 | $ 263 | Sep 27, 2013 | US$ 1,075 3 | 2014 – 2015 | 7.42% | $ 1,110 |\n\n1 Converting from a fixed US$ coupon rate to a weighted average Cdn$ fixed rate.\n\n2 Converting from a fixed US$ principal amount to a fixed Cdn$ principal amount.", - "page_start": 65, - "page_end": 65, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## NOTES TO CONSOLIDATED FINANCIAL STATEMENTS (Continued)\n\n#### (In thousands, except per share amounts)\n\nThe weighted-average remaining contractual terms for SARs and stock options outstanding and exercisable at June 30, 2012 were 6.0 and 4.8 years, respectively. The aggregate intrinsic values of SARs and stock options outstanding and exercisable at June 30, 2012 were $15,023 and $10,775, respectively. The aggregate intrinsic value of the SARs and stock options exercised during fiscal 2012, 2011 and 2010 was $13,747, $18,526 and $5,157, respectively.\n\nPerformance Grants\n\nNOTE 10: BENEFIT PLANS\n\nDeferred Compensation Plans\n\nPostemployment Benefit Plans\n\nmutual funds and Company common stock.\n\nSupplemental Executive Retirement Benefits Plan\n\nRestoration Plan\n\nQualified Defined Benefit Retirement Plan\n\nincome (loss)) of $302 ($492 loss, net of income tax of $190).\n\n$128 of expense associated with this plan in fiscal 2012.\n\nRetirement Savings Plan\n\n2010, respectively.\n\nare unfunded:\n\nKey Executive\n\nretirement.\n\nIn fiscal 2009 and 2008, the Executive Organization and Compensation Committee made annual awards of three-year performance grants to key officers. A target payout was established at the beginning of each three-year performance period. 
The actual payout at the end of the period is calculated based upon the Company's achievement of sales growth, return on sales, and total shareholder return targets. All performance periods had expired by June 30, 2011. During fiscal 2011 and 2010, the Company recorded $1,020 and $(231), respectively, of compensation expense (income) for achievement relative to the total shareholder return-based goals of\n\nSubstantially all U.S. associates participate in the Applied Industrial Technologies, Inc. Retirement Savings Plan. Participants may elect to contribute up to 50% of their compensation, subject to Internal Revenue Code maximums. The Company makes a discretionary profit-sharing contribution to the Retirement Savings Plan generally based upon a percentage of the Company's U.S. income before income taxes and before the amount of the contribution (5% for fiscal 2012, 2011 and 2010). The Company partially matches 401(k) contributions by participants; this match was suspended from January 1, 2009 to June 30, 2010. The Company's expense for profit sharing and matching of associates' 401(k) contributions was $10,866, $11,251 and $4,891 during fiscal 2012, 2011 and\n\nThe Company has deferred compensation plans that enable certain associates of the Company to defer receipt of a portion of their compensation and non-employee directors to defer receipt of director fees. The Company funds these deferred compensation liabilities by making contributions to rabbi trusts. Assets held in these rabbi trusts consist of investments in money market and\n\nThe Company provides the following postemployment benefits which, except for the Qualified Defined Benefit Retirement Plan,\n\nThe Company has a non-qualified pension plan to provide supplemental retirement benefits to certain officers. Benefits are payable beginning at retirement and determinable at retirement based upon a percentage of the participant's historical compensation. 
On December 19, 2011, the Executive Organization and Compensation Committee of the Board of Directors froze participant benefits (credited service and final average earnings) and entry into the Supplemental Executive Retirement Benefits Plan (SERP) effective December 31, 2011. This action constituted a plan curtailment. The plan liability was remeasured in conjunction with the curtailment using a 3.5% discount rate and participant final average earnings through the curtailment date. The remeasurement in conjunction with the curtailment resulted in an actuarial loss (recorded in other comprehensive\n\nThe curtailment is reflected in the Company's consolidated balance sheets as: 1) a reduction to the overall SERP liability (included in postemployment benefits) of $8,860, 2) a reduction to deferred tax assets of $3,411 and 3) an increase in accumulated other comprehensive income (loss) of $5,449. Prior service costs previously recorded through accumulated other comprehensive income (loss) were reclassified into the statements of consolidated income ($3,117 gross expense, net of income\n\nIn fiscal 2012, the Executive Organization & Compensation Committee of the Board of Directors adopted the Key Executive Restoration Plan (KERP), an unfunded, non-qualified deferred compensation plan, to replace the SERP. The Company recorded\n\nThe Company has a qualified defined benefit retirement plan that provides benefits to certain hourly associates at retirement. These associates do not participate in the Retirement Savings Plan. The benefits are based on length of service and date of\n\ntax of $1,200). The gross expense is recorded in selling, distribution and administrative expense in fiscal 2012.\n\nthe Company's performance grants. The liability at June 30, 2011 was $1,558; this was paid in fiscal 2012.\n\nAs of June 30, 2012, unrecognized compensation cost related to SARs and stock options amounted to $1,951. 
That cost is expected to be recognized over a weighted-average period of 2.4 years. The total fair value of shares vested during fiscal 2012, 2011 and 2010 was $4,266, $2,645 and $2,673, respectively.\n\n#### Performance Shares\n\nPerformance shares are intended to provide incentives to achieve three-year goals. Performance shares pay out in shares of Applied stock at the end of a three-year period provided the Company achieves the established goals. The number of Applied shares payable will vary depending on the level of the goal achieved.\n\nA summary of nonvested performance shares activity at June 30, 2012 is presented below:\n\n| Year Ended June 30, 2012 | | Weighted-Average. | |\n| --- | --- | --- | --- |\n| | | Grant-Date. | |\n| (Share amounts in thousands) | Shares | Fair Value. | |\n| Nonvested, beginning of year | 222 | $ | 23.23 |\n| Granted | 31 | | 28.34 |\n| Forfeitures | (47 ) | | 27.15 |\n| Vested | (144 ) | | 20.67 |\n| Nonvested, end of year | 62 | $ | 28.80 |\n\nThe Committee set three one-year goals for the 2012 and 2011 grants tied to the Company's earnings before interest, tax, depreciation, and amortization (EBITDA) and after-tax return on assets (ROA). Each fiscal year during the three-year term has its own separate goals. Achievement during any particular fiscal year is \"banked\" for payout at the end of the three-year term.\n\nAs of June 30, 2012, the potential shares to be banked in future periods was 62. Unrecognized compensation cost relating to these shares has the potential to reach $1,812 and would be recognized in expense over the weighted-average remaining vesting period of 1.7 years.\n\n#### Restricted Stock and Restricted Stock Units\n\nRestricted stock award recipients are entitled to receive dividends on, and have voting rights with respect to their respective shares, but are restricted from selling or transferring the shares prior to vesting. Restricted stock awards vest over periods of one to four years. 
RSUs are grants valued in shares of Applied stock, but shares are not issued until the grants vest three years from the award date, assuming continued employment with Applied. RSUs vest on a pro rata basis upon retirement during the three-year term. Applied pays dividend equivalents on RSUs on a current basis.\n\nA summary of the status of the Company's nonvested restricted stock and RSUs at June 30, 2012 is presented below:\n\n| Year Ended June 30, 2012 | | | |\n| --- | --- | --- | --- |\n| | | Weighted-Average. | |\n| | | Grant-Date. | |\n| (Share amounts in thousands) | Shares | Fair Value. | |\n| Nonvested, beginning of year | 162 | $ | 25.97 |\n| Granted | 135 | | 31.58 |\n| Forfeitures | (31 ) | | 27.30 |\n| Vested | (15 ) | | 31.42 |\n| Nonvested, end of year | 251 | $ | 28.50 |\n\nUnrecognized compensation cost related to unvested restricted stock awards and RSUs aggregated $3,670 at June 30, 2012, and is expected to be recognized over the weighted-average remaining vesting period of 2.1 years.\n\n25358_AIT_Report_WT.indd 30 8/23/12 8:33 AM", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "For over 100 years Shenandoah Telecommunications Company has been committed to providing outstanding service to our customers. Our employees take that same dedication after hours to make a difference in their community.\n\nWe take this opportunity to share with you, our shareholders, the stories of just a few of your dedicated employees.\n\n*Patty Pomeroy* **help people.\"**\n\nVolunteerism is in Patty Pomeroy's blood. Her grandfather was a dispatcher for the rescue squad in Middletown, VA for 25 years and her grandmother was in the ladies auxiliary. Her father was a charter member of the Middletown Rescue Squad. 
In 1997, Patty, a customer service representative at Shentel for four years, continued the family tradition by earning her Emergency Medical Technician certification and going to \"work\" for the Strasburg Rescue Squad. Patty is the administrator of membership recruitment and retention for the squad and is the liaison coordinator for junior squad members under 18. It is her job to make sure that new members are brought in to the squad and current members stay active.\n\n# **\"There is a great satisfaction that comes from knowing that what you can do will**\n\nJeff Beard has been an installer repairman with Shentel for almost five years. Two years ago, Jeff helped start Project Isaiah 58, a faith-based recovery ministry that reaches out to people who are struggling with addiction. Project Isaiah 58 has weekly group meetings in Winchester, Woodstock and Warrenton, VA. Jeff, who lives in Winchester, participates in the group meetings and also makes time to meet one-on-one with people who need personal attention.\n\n**\"I feel the need to reach out to people who are suffering.\"** \n\n*Jeff Beard*\n\nJohn Gardner has been with Shentel for two years as a PCS technician in Central Pennsylvania, but for almost a year of that time he was on Naval Reserve duty in Sasebo, Japan. John joined the Reserves after serving 10 years of active duty. In October 2002, he was activated under Noble Eagle-Enduring Freedom as part of the increase in security at bases around the world. John worked on Motorola radios and repeater systems while stationed in Japan. It was tough for the serviceman to be away from his wife and children, but John believes very strongly in serving his country.\n\n**\"Being in the Reserves is a way for me to be a civilian and still serve my country.\"**\n\n## *John Gardner*\n\nAt Shentel, George Brinkley, the store manager in Front Royal, VA, is known for being one of the biggest fund-raisers for the Shenandoah County American Cancer Society Relay for Life event. 
In his six years at the Company, George has raised nearly $20,000. In 2003, he raised $4,246 and was recognized as the top individual fund-raiser for the entire event.\n\nIn 2002, George was chairman of the parade committee for the Woodstock, VA 250th anniversary celebration. Under George's leadership, the 26-member committee worked for a year preparing for the parade, which was the largest in the town's history.\n\n**\"I just have a knack for volunteering. I want to make my community better any way I can.\"**\n\n*George Brinkley* 3 ■ 2003 ANNUAL REPORT", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_SHEN_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_SHEN_2003.pdf", - "query": "At the end of 2003, how many available-for-sales investments did Shenandoah company count in its portfolio ?", - "target_page": 53, - "target_passage": "The Company’s available-for-sale portfolio at December 31, 2003 is made up of two investments", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "## **We must serve well to prosper – We must prosper to serve well**\n\nShenTel Service Company • Shenandoah Long Distance Company • Shenandoah Mobile Company Shenandoah Network Company • Shenandoah Telephone Company • Shenandoah Valley Leasing Company Shenandoah Cable Television Company • ShenTel Communications Company Shenandoah Personal Communications Company\n\n> PO Box 459 Edinburg, VA 22824-0459 Phone 540-984-4141 • Fax 540-984-8192 www.shentel.com", - "page_start": 59, - "page_end": 59, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### **Note 14. Segment Reporting**\n\nThe Company, as a holding company with various operating subsidiaries, has identified ten reporting segments based on the products and services each provides. 
Each segment is managed and evaluated separately because of differing technologies and marketing strategies.\n\nThe reporting segments and the nature of their activities are as follows:\n\n| Shenandoah Telecommunications Company (Holding) | Holding company, which invests in both affiliated |\n| --- | --- |\n| | and non-affiliated companies. |\n| Shenandoah Telephone Company (Telephone) | Provides both regulated and unregulated telephone |\n| | services and leases fiber optic facilities primarily |\n| | throughout the Northern Shenandoah Valley. |\n| Shenandoah Cable Television Company (CATV) | Provides cable television service in Shenandoah |\n| | County. |\n| ShenTel Service Company (ShenTel) | Provides Internet access to a multi-state region |\n| | surrounding the Northern Shenandoah Valley, hosts |\n| | Travel 511 for Virginia, and sells and services |\n| | telecommunication equipment. |\n| Shenandoah Valley Leasing Company (Leasing) | Finances purchases of telecommunications |\n| | equipment to customers of other segments. |\n| Shenandoah Mobile Company (Mobile) | Provides tower rental space in the Company's PCS |\n| | markets and paging services throughout the Northern |\n| | Shenandoah Valley. |\n| Shenandoah Long Distance Company (Long Distance) | Provides long distance services. |\n| Shenandoah Network Company (Network) | Leases interstate fiber optic facilities. |\n| ShenTel Communications Company (Shen Comm) | Provides DSL services as a CLEC operation. |\n| Shenandoah Personal Communications Company (PCS) | As a PCS Affiliate of Sprint, provides digital wireless |\n| | service to a portion of a four-state area covering the |\n| | region from Harrisburg, York and Altoona, |\n| | Pennsylvania, to Harrisonburg, Virginia. |\n\nThe accounting policies of the segments are the same as those described in the summary of significant accounting policies. 
Each segment accounts for inter-segment sales and transfers as if the sales or transfers were to outside parties.\n\nIncome (loss) recognized from equity method nonaffiliated investees by segment is as follows:\n\n| | | | | Consolidated | |\n| --- | --- | --- | --- | --- | --- |\n| Year | Holding | | Telephone | Totals | |\n| | | | (in thousands) | | |\n| 2003 | $ | (441) | $ 65 | $ | (376) |\n| 2002 | $ | (822) | $ 45 | $ | (777) |\n| 2001 | $ (1,218) | | $104 | $ (1,114) | |", - "page_start": 36, - "page_end": 36, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### **OUR BUSINESS**\n\nShenandoah Telecommunications Company is a diversified telecommunications holding company which provides various telecommunications services through its operating subsidiaries. These services include: wireline telephone service, primarily in Shenandoah County and small service areas in Rockingham, Frederick, and Warren counties, all in Virginia; cable television service in Shenandoah County; unregulated telecommunications equipment sales and services; online information and Internet access provided to the multi-state region surrounding the Northern Shenandoah Valley of Virginia; financing of purchases of telecommunications facilities and equipment; paging services in the Northern Shenandoah Valley; resale of long distance services; operation and maintenance of an interstate fiber optic network; wireless personal communications services (PCS) and a tower network in the four-state region from Harrisonburg, Virginia to the Harrisburg, York and Altoona, Pennsylvania markets.\n\n#### **ANNUAL MEETING**\n\nThe Board of Directors extends an invitation to all shareholders to attend the Annual Meeting of Shareholders. 
The meeting will be held at 11:00 AM (EST) on April 20, 2004 in the Auditorium of the Company's offices at the Shentel Center, 500 Mill Road, Edinburg, Virginia.\n\n#### **FORMS 10-K, 10-Q, and 8-K**\n\n**The Company files periodic reports with the Securities and Exchange Commission. The Company's Annual Report on Form 10-K, Quarterly Reports on Form 10-Q, and Current Reports on Form 8-K, along with any amendments to these reports, are available to shareholders through the Company's website, www.shentel.com. This website also has recent news releases and other information potentially of interest to shareholders.**\n\n**A copy of the Company's Annual Report on Form 10-K, without exhibits, may be obtained, without charge, by writing to Shenandoah Telecommunications Company, 124 South Main Street, P.O. Box 459, Edinburg, Virginia 22824, Attention: Secretary.**\n\n#### **MARKET AND DIVIDEND INFORMATION**\n\nThe Company's stock is traded on the NASDAQ National Market under the symbol \"SHEN.\" Information on the high and low sales prices per share of common stock as reported by the NASDAQ National Market for the last two years is set forth below:\n\n| | 2003 | | | | 2002 | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | Quarter 1 | Quarter 2 | Quarter 3 | Quarter 4 | Quarter 1 | Quarter 2 | Quarter 3 | Quarter 4 |\n| High price | $ 24.31 | $ 24.98 | $ 25.48 | $ 27.50 | $ 20.06 | $ 27.25 | $ 27.25 | $ 25.95 |\n| Low price | $ 13.64 | $ 14.33 | $ 19.25 | $ 19.74 | $ 16.50 | $ 19.69 | $ 22.75 | $ 21.61 |\n\n*All share and per share figures are restated to reflect the 2 for 1 stock split effected February 23, 2004.*\n\nThe Company historically has paid an annual cash dividend on or about December 1st of each year. The cash dividend per share was $0.39 in 2003 and $0.37 in 2002. The Company's ability to pay dividends is restricted by its long-term loan agreements. The loan agreements are not expected to limit dividends in amounts that the Company historically has paid.\n\nAs of February 15, 2004, there were approximately 3,930 holders of record of the Company's common stock.\n\n#### **CORPORATE HEADQUARTERS**\n\nShenandoah Telecommunications Company 124 South Main Street Edinburg, VA 22824\n\n#### **INDEPENDENT AUDITOR**\n\nKPMG LLP 1021 East Cary Street Richmond, VA 23219\n\n#### **SHAREHOLDERS' QUESTIONS AND STOCK TRANSFERS**\n\nCALL (540) 984-5200\n\nTransfer Agent - Common Stock Shenandoah Telecommunications Company P.O. Box 459 Edinburg, VA 22824\n\n*This Annual Report to Shareholders contains forward-looking statements. These statements are subject to certain risks and uncertainties that could cause actual results to differ materially from those anticipated in the forward-looking statements. Factors that might cause such a difference include, but are not limited to: changes in the interest rate environment; management's business strategy; national, regional, and local market conditions; and legislative and regulatory conditions. 
Readers should not place undue reliance on forward-looking statements which reflect management's view only as of the date hereof. The Company undertakes no obligation to publicly revise these forward-looking statements to reflect subsequent events or circumstances, except as required by law.*", - "page_start": 58, - "page_end": 58, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## **SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES**\n\n## **2003 Financial Statements**\n\n## **INDEPENDENT AUDITOR'S REPORT**\n\nThe Board of Directors and Shareholders Shenandoah Telecommunications Company:\n\nWe have audited the accompanying consolidated balance sheets of Shenandoah Telecommunications Company and subsidiaries (the Company), as of December 31, 2003, 2002, and 2001, and the related consolidated statements of income, shareholders' equity and comprehensive income, and cash flows for the years then ended. These consolidated financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these consolidated financial statements based on our audits.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States of America. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management, as well as evaluating the overall financial statement presentation. 
We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the consolidated financial statements referred to above present fairly, in all material respects, the financial position of Shenandoah Telecommunications Company and subsidiaries as of December 31, 2003, 2002 and 2001, and the results of their operations and their cash flows for the years then ended, in conformity with accounting principles generally accepted in the United States of America.\n\nAs discussed in note 1 to the consolidated financial statements, the Company changed its method of accounting for goodwill in 2002. As further discussed in note 1 to the consolidated financial statements, the Company changed its method of accounting for asset retirement obligations in 2003.\n\nRichmond, Virginia February 6, 2004", - "page_start": 12, - "page_end": 12, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### **Note 1. Summary of Significant Accounting Policies**\n\n*Description of business:* Shenandoah Telecommunications Company and subsidiaries (the Company) provides telephone service, wireless personal communications service (PCS) under the Sprint brand name, cable television, unregulated communications equipment sales and services, Internet access, and paging services. In addition, the Company leases towers and operates and maintains an interstate fiber optic network. The Company's operations are located in the four state region surrounding the Northern Shenandoah Valley of Virginia. Pursuant to a management agreement with Sprint Communications Company and its related parties (collectively, \"Sprint\"), the Company is the exclusive PCS Affiliate of Sprint providing wireless mobility communications network products and services in the geographic area extending from Altoona, Harrisburg and York, Pennsylvania, south through Western Maryland, and the panhandle of West Virginia, to Harrisonburg, Virginia. 
The Company is licensed to use the Sprint brand name in this territory, and operates its network under the Sprint radio spectrum license (Note 7). A summary of the Company's significant accounting policies follows:\n\n*Stock split:* All share and per share information reflect the two for one stock split announced in October 2003, to shareholders of record as of the close of business on January 30, 2004. The additional shares were distributed on February 20, 2004. The effective date of the split is February 23, 2004. All previously reported share and per share data included herein are retroactively adjusted to reflect the split.\n\n*Principles of consolidation:* The consolidated financial statements include the accounts of all wholly owned subsidiaries and other entities where effective control is exercised. All significant intercompany balances and transactions have been eliminated in consolidation.\n\n*Use of estimates:* Management of the Company has made a number of estimates and assumptions related to the reporting of assets and liabilities, the disclosure of contingent assets and liabilities at the date of the consolidated financial statements and the reported amounts of revenues and expenses during the reporting periods. Management reviews its estimates, including those related to recoverability and useful lives of assets as well as liabilities for income taxes and pension benefits. Changes in facts and circumstances may result in revised estimates and actual results could differ from those reported estimates.\n\n*Cash and cash equivalents:* The Company considers all temporary cash investments purchased with a maturity of three months or less to be cash equivalents. The Company places its temporary cash investments with high credit quality financial institutions. At times, these investments may be in excess of FDIC insurance limits. 
Cash and cash equivalents were $28.7 million, $2.2 million, and $2.0 million at December 31, 2003, 2002 and 2001, respectively.\n\n*Accounts receivable:* Accounts receivable are recorded at the invoiced amount and do not bear interest. The allowance for doubtful accounts is the Company's best estimate of the amount of probable credit losses in the Company's existing accounts receivable. The Company determines the allowance based on historical write-off experience and by industry and national economic data. The Company reviews its allowance for doubtful accounts monthly. Past due balances meeting specific criteria are reviewed individually for collectibility. All other balances are reviewed on a pooled basis. Account balances are charged off against the allowance after all means of collection have been exhausted and the potential for recovery is considered remote. Accounts receivable are concentrated among customers within the Company's geographic service area and large telecommunications companies. The Company's allowance for uncollectable receivables related to continuing operations was $477 thousand, $914 thousand and $650 thousand at December 31, 2003, 2002 and 2001, respectively.\n\n*Securities and investments:* The classifications of debt and equity securities are determined by management at the date individual investments are acquired. The appropriateness of such classification is continually reassessed. The Company monitors the fair value of all investments, and based on factors such as market conditions, financial information and industry conditions, the Company will reflect impairments in values as is warranted. The classification of those securities and the related accounting policies are as follows:\n\n*Available-for-Sale Securities:* Debt and equity securities classified as available-for-sale consist of securities which the Company intends to hold for an indefinite period of time, but not necessarily to maturity. 
Any decision to sell a security classified as available-for-sale would be based on various factors, including changes in market conditions, liquidity needs and similar criteria. Available-for-sale securities are recorded at fair value as determined by quoted market prices. Unrealized holding gains and losses, net of the related tax effect, are excluded from earnings and are reported as a separate component of other comprehensive", - "page_start": 19, - "page_end": 19, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "is the intent of the Company to evaluate whether to hold or sell parts or all of each investment on an individual basis. At December 31, 2003, the Company had external investments totaling $7.5 million.\n\nIn 2004, the Company anticipates taking advantage of a conversion feature on its Rural Telephone Bank stock. The Company will convert a portion of its holdings into a different class of stock that will pay cash dividends each year. The bank declares a dividend rate that varies each year. 
The range of the dividend has been between 4.2% and 5.65% over the last 5 years. The rate in the two most recent years was 4.2%. This transaction is estimated to provide the Company with approximately $0.3 million in dividend income each year, based on the 2003 dividend rate of 4.2% and assuming we had converted the stock at the beginning of 2003.\n\n#### **Financial Condition, Liquidity and Capital Resources**\n\nThe Company has four principal sources of funds available to meet the financing needs of its operations, capital projects, debt service, investments and potential dividends. These sources include cash flows from operations, cash and cash equivalents, the liquidation of investments and borrowings. Management routinely considers the alternatives available to determine what mix of sources are best suited for the long-term benefit of the Company.\n\nDuring the 2003 year, with the closing of the sale of the Virginia 10 RSA Limited partnership interest, the Company evaluated its capital requirements, and as a result eliminated its $20.0 million revolving line of credit with CoBank in May 2003. The Company had paid off the outstanding balance in early 2003, and did not borrow on it during the remaining time the facility was in place. In light of the $27.9 million balance in cash equivalent investments, management determined additional debt capacity is not necessary for the near-term.\n\nThe term debt loan agreements with CoBank have three financial covenants. These are measured on a trailing 12-month basis and are calculated on continuing operations. The first of the covenants is the total leverage ratio, which is total debt to operating cash flow. This ratio must remain below 3.5, and as of December 31, 2003 it was 1.2. The second measure is equity to total assets, which must be 35% or higher. At December 31, 2003 the ratio was 57.3%. 
The third measure is the debt service coverage ratio, which is operating cash flow to scheduled debt service, which must exceed 2.0. At December 31, 2003 this measure was 4.3. Management believes the Company will meet these covenant measures for the coming year. The Company has pledged all of its affiliates capital stock as collateral for the CoBank loans.\n\nThe Company's covenants on the RUS/RTB debt require the pledge of all current and future assets of the Telephone subsidiary until the debt is retired.\n\nAnother external source of funding is a $0.5 million unsecured, variable rate revolving line of credit with SunTrust Bank. This facility is in place to allow the Company to better manage its daily cash balances. The facility expires May 31, 2004. Management anticipates renewing this facility with SunTrust Bank under similar terms and conditions. At December 31, 2003 there were no balances outstanding under this facility.\n\nDue to make-whole provisions in the Company's debt agreements it is currently uneconomical for the Company to prepay any debt.\n\nThe Company is obligated to make future payments under various contracts it has entered into, including amounts pursuant to its various long-term debt facilities, and non-cancelable operating lease agreements for retail space, tower space and cell sites. 
Expected future minimum contractual cash obligations for the next five years and in the aggregate at December 31, 2003, are as follows:\n\n| Payments due by periods (unaudited, in thousands) | | | | | |\n| --- | --- | --- | --- | --- | --- |\n| | Total | Less than 1 year | 1-3 years | 4-5 years | After 5 years |\n| Long-term debt principal | $ 43,346 | $ 4,230 | $ 8,898 | $ 9,552 | $ 20,666 |\n| Interest on long-term debt | 15,429 | 3,019 | 5,099 | 3,778 | 3,533 |\n| Operating leases | 12,592 | 3,216 | 4,616 | 2,229 | 2,531 |\n| Capital calls on investments | 1,790 | - | 1,790 | - | - |\n| Purchase obligations | 98 | 98 | - | - | - |\n| Total obligations | $ 73,255 | $ 10,563 | $ 20,403 | $ 15,559 | $ 26,730 |", - "page_start": 53, - "page_end": 53, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "### MANAGEMENT'S DISCUSSION AND ANALYSIS\n\nThe following discussion of the Company's historical results of operations and of its liquidity and capital resources should be read in conjunction with the Consolidated Financial Statements of the Company and related notes.\n\n#### **Overview**\n\nThe Company has two reportable core operating segments: office furniture and hearth products. The Company is the second largest office furniture manufacturer in the United States and the nation's leading manufacturer and marketer of gas- and wood-burning fireplaces.\n\nFrom 2000 to 2003, the office furniture industry experienced an unprecedented three-year decline due to the challenging economic environment. In 2003, this decline negatively impacted the Company's office furniture segment. In contrast, the housing market was at record high levels during 2003, which positively impacted the Company's hearth segment. The Company outperformed its peers in both segments in which it competes. 
The Company gained market share by providing strong brands, innovative products and services, and greater value to its end-users. Fiscal 2003 also included an extra week of activity due to the Company's 52/53-week fiscal year.\n\nNet sales were $1.8 billion in 2003, as compared to $1.7 billion in 2002. The increase in net sales reflects the 9% increase in the hearth segment and the additional week of business activity. In 2003 and 2002, the Company recorded restructuring charges and accelerated depreciation related to the closure and consolidation of office furniture facilities totaling $15.2 million and $3.0 million, respectively. Gross margins increased to 36.4% in 2003 from 35.4% in 2002 due to benefits from restructuring initiatives and its rapid continuous improvement program, new products, and increased price realization. The Company also invested aggressively in brand building and selling initiatives in 2003. Net income was $98.1 million or $1.68 per diluted share in 2003, as compared to $91.4 million or $1.55 per diluted share in 2002.\n\nThe Company generated $141.3 million in cash flow from operating activities and increased its cash position, including shortterm investments, by $48.6 million to $204.2 million. The Company paid dividends of $30.3 million and repurchased $21.5 million of its common stock, while investing $35.7 million in net capital expenditures and repaying $20.2 million of debt.\n\n#### **Critical Accounting Policies and Estimates** *G E N E R A L*\n\nManagement's Discussion and Analysis of Financial Condition and Results of Operations is based upon the Consolidated Financial Statements, which have been prepared in accordance with GAAP. The preparation of these financial statements requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenue and expenses, and related disclosure of contingent assets and liabilities. 
Management bases its estimates on historical experience and on various other assumptions that are believed to be reasonable under the circumstances, the results of which form the basis for making judgments about the carrying values of assets and liabilities that are not readily apparent from other sources. Senior management has discussed the development, selection and disclosure of these estimates with the Audit Committee of our Board of Directors. Actual results may differ from these estimates under different assumptions or conditions.\n\nAn accounting policy is deemed to be critical if it requires an accounting estimate to be made based on assumptions about matters that are uncertain at the time the estimate is made, and if different estimates that reasonably could have been used, or changes in the accounting estimates that are reasonably likely to occur periodically, could materially impact the financial statements. Management believes the following critical accounting policies reflect its more significant estimates and assumptions used in the preparation of the Consolidated Financial Statements.\n\n*Fiscal year end* – The Company's fiscal year ends on the Saturday nearest December 31. Fiscal year 2003, the year ended January 3, 2004, contained 53 weeks, while fiscal year 2002, the year ended December 28, 2002, and fiscal year 2001, the year ended December 29, 2001, contained 52 weeks. A 53-week year occurs approximately every sixth year.\n\n*Revenue recognition* – Revenue is normally recognized upon shipment of goods to customers. In certain circumstances revenue is not recognized until the goods are received by the customer or upon installation and customer acceptance based on the terms of the sale agreement. 
Revenue includes freight charged to customers; related", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## **NEW TELEPHONE DIRECTORY FOR THE NORTHERN SHENANDOAH VALLEY**\n\nThe Shenandoah Telephone Directory has undergone many changes since we published our first directory in 1906, as The Farmers' Mutual Telephone System of Shenandoah County. In 1906, the entire phone number listings were on 15 pages. The first Company directory to include yellow pages was distributed in 1946. That year local businesses invested in a new way to reach their potential customers.\n\nThe goal has always been to provide a useful tool for our customers. The pace of change has quickened in the last few years. In 2000, for the first time, Shenandoah Telephone's directory expanded from telephone listings for only Shenandoah County and Bergton, to include business and residential listings for Rockingham, Frederick, Clarke, and Warren counties. In 2001, Page County listings were added. The name of our directory was changed to ShentelPages in 2002 to reflect the expanded listing area. Although we included additional information in our directory, we continued to only furnish it to our local telephone customers.\n\nEarly in 2003, we conducted a customer survey to measure potential public acceptance of a regional phone directory for the six-county area. The\n\nfindings of the survey indicated almost 60% would likely use an expanded six-county directory, with a fourth of all respondents saying they would use a regional directory more often than the directory they currently had in their home or business. Based on these positive results, Shentel launched an expanded directory to meet the demand.\n\nAn extensive public-awareness campaign was launched on television and radio, in a variety of daily and weekly newspapers and at regional county fairs. 
The campaign helped build anticipation for the directory and increase awareness of yellow page advertising opportunities. As a result of the added value of the expanded distribution area, ShentelPages' yellow page advertising revenues increased 21%, to $1.8 million for the 2004 book.\n\nIn December 2003, Shentel mailed out 120,000 ShentelPages directories to every home and business in Shenandoah, Rockingham, Frederick, Page, Clarke and Warren counties. ShentelPages now has a potential audience that exceeds 300,000 readers. The 2004 directory continues to be an important local resource. In addition to telephone listings, it contains both general and county-specific information - from ZIP codes to area codes, and from international dialing instructions to the listing of regional interstate exits.\n\nThrough ShentelPages, businesses have a new way of reaching thousands more potential customers within the sixcounty area to sell their products and services. ShentelPages is bundled with our electronic version, ShentelPages.com. 
This service allows area residents to use their computer and the Internet to let their fingers do the walking.\n\nJust like our first book in 1906, the 2004 ShentelPages provides area residents with a quick and easy way to stay in touch.", - "page_start": 9, - "page_end": 9, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Total income tax expense for continuing operations differs from the amount that would be provided by applying the statutory federal income tax rate to pretax earnings as illustrated below (in thousands):\n\n| | | | | YEAR ENDED DECEMBER 31, | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | | 2003 | | 2002 | | 2001 |\n| Income tax expense at the statutory federal income tax rate | $ | 2,298 | $ | 1,858 | $ | 2,062 |\n| Increase (decrease) resulting from: | | | | | | |\n| State income taxes | | 34 | | 80 | | 220 |\n| Decrease in valuation allowance | | — | | — | | (68) |\n| R&D credit | | (100) | | (164) | | (52) |\n| Foreign sales benefit | | (250) | | (244) | | (352) |\n| Other, net | | (103) | | (127) | | (7) |\n| Total income tax expense | $ | 1,879 | $ | 1,403 | $ | 1,803 |\n\n## STOCKHOLDERS' EQUITY\n\n6\n\n7\n\nThe Board of Directors of the Company has at various times authorized repurchases of Company stock in open-market or negotiated transactions at such times and at such prices as management may from time to time decide. The Company has effected a number of open-market or negotiated transactions to purchase its stock during the past three years. These repurchases totaled 20,200, 26,000 and 10,300 shares during the years 2003, 2002 and 2001, respectively, at per share prices ranging from $14.02 to $42.42. As of December 31, 2003, authorization for the repurchase of 94,000 additional shares remained. The Company purchased 173,614 shares of its common stock at $23.00 per share in April 2003 pursuant to a tender offer. 
The Company purchased 502,229 shares of its common stock at $34.50 per share in December 2001 pursuant to a tender offer. All shares purchased in the tender offers and in the open-market or negotiated transactions became treasury shares upon repurchase by the Company.\n\nIn September 2003, the Company announced that it had adopted a policy for the payment of regular quarterly cash dividends on the Company's common stock. The Company subsequently paid a quarterly cash dividend of $ .12 per common share in both September and December of 2003.\n\nThe Company has a Common Share Purchase Rights Plan, which is intended to protect the interests of stockholders in the event of a hostile attempt to take over the Company. The rights, which are not presently exercisable and do not have any voting powers, represent the right of the Company's stockholders to purchase at a substantial discount, upon the occurrence of certain events, shares of common stock of the Company or of an acquiring company involved in a business combination with the Company. 
In January 2000, this plan, which was adopted in February 1990, was extended until February 2005.\n\n# INCOME PER SHARE\n\nThe following is the computation for basic and diluted income per share from continuing operations:\n\n| | | | | YEAR ENDED DECEMBER 31, | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| (IN THOUSANDS, EXCEPT PER SHARE AMOUNTS) | | 2003 | | 2002 | | 2001 |\n| Income from continuing operations | $ | 4,892 | $ | 4,065 | $ | 4,262 |\n| Weighted average basic shares outstanding | | 1,711 | | 1,711 | | 2,033 |\n| Add: Effect of dilutive securities (options) | | 128 | | 152 | | 239 |\n| Weighted average diluted shares outstanding | | 1,839 | | 1,863 | | 2,272 |\n| Income per share from continuing operations: | | | | | | |\n| Basic | $ | 2.86 | $ | 2.37 | $ | 2.10 |\n| Diluted | $ | 2.66 | $ | 2.18 | $ | 1.88 |\n\nFor the years ended December 31, 2003, 2002 and 2001, options to purchase approximately 25,250, 40,625 and 7,800 shares of common stock, respectively, were not included in the computation of diluted income per share because their effect would have been antidilutive.", - "page_start": 18, - "page_end": 18, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "and operate MGM Grand Australia. This transaction closed in July 2004 with net proceeds to the Company of $136 million.\n\nThe results of the Golden Nugget Subsidiaries, Online and MGM Grand Australia are classified as discontinued operations in the accompanying consolidated statements of income for all periods presented. Net revenues of discontinued operations were $45 million, $231 million and $222 million, respectively, for the years ended December 31, 2004, 2003 and 2002. Included in income from discontinued operations is an allocation of interest expense based on the ratio of the net assets of the discontinued operations to the total consolidated net assets and debt of the Company. 
Interest allocated to discontinued operations was $2 million, $9 million and $9 million for the years ended December 31, 2004, 2003 and 2002, respectively. Included in discontinued operations for the year ended December 31, 2003 is a loss on disposal of Online of $7 million relating primarily to unrecoverable costs of computer hardware and software. Included in the tax benefit from discontinued operations for the year ended December 31, 2003 is $2 million of previously unrecognized tax benefits relating to prior year operating losses of Online. Included in discontinued operations for the year ended December 31, 2004 is a gain on the sale of the Golden Nugget Subsidiaries of $8 million and a gain on sale of the MGM Grand Australia Subsidiaries of $74 million.\n\nThe following table summarizes the assets and liabilities of discontinued operations (the Golden Nugget Subsidiaries and Online) as of December 31, 2003, included as assets and liabilities held for sale in the accompanying consolidated balance sheet:\n\n| At December 31, 2003 (In thousands) | |\n| --- | --- |\n| Cash $ | 15,230 |\n| Accounts receivable, net | 6,024 |\n| Inventories | 4,321 |\n| Prepaid expenses and other | 5,174 |\n| Total current assets | 30,749 |\n| Property and equipment, net | 185,516 |\n| Other assets, net | 9,817 |\n| Total assets | 226,082 |\n| Accounts payable | 2,180 |\n| Other current liabilities | 20,885 |\n| Total current liabilities | 23,065 |\n| Long-term debt | 391 |\n| Total liabilities | 23,456 |\n| Net assets $ 202,626 | |", - "page_start": 62, - "page_end": 62, - "source_file": "NYSE_MGM_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_SHEN_2003.pdf", - "query": "What was the main reason of the decrease of customer base of the Shenandoah and Virginia 10 RSA partnership ?", - "target_page": 51, - "target_passage": "he decline was the result of competition with digital technologies and increased competition from national carriers in the area", - 
"chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## **We must serve well to prosper – We must prosper to serve well**\n\nShenTel Service Company • Shenandoah Long Distance Company • Shenandoah Mobile Company Shenandoah Network Company • Shenandoah Telephone Company • Shenandoah Valley Leasing Company Shenandoah Cable Television Company • ShenTel Communications Company Shenandoah Personal Communications Company\n\n> PO Box 459 Edinburg, VA 22824-0459 Phone 540-984-4141 • Fax 540-984-8192 www.shentel.com", - "page_start": 59, - "page_end": 59, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### **Note 14. Segment Reporting**\n\nThe Company, as a holding company with various operating subsidiaries, has identified ten reporting segments based on the products and services each provides. Each segment is managed and evaluated separately because of differing technologies and marketing strategies.\n\nThe reporting segments and the nature of their activities are as follows:\n\n| Shenandoah Telecommunications Company (Holding) | Holding company, which invests in both affiliated |\n| --- | --- |\n| | and non-affiliated companies. |\n| Shenandoah Telephone Company (Telephone) | Provides both regulated and unregulated telephone |\n| | services and leases fiber optic facilities primarily |\n| | throughout the Northern Shenandoah Valley. |\n| Shenandoah Cable Television Company (CATV) | Provides cable television service in Shenandoah |\n| | County. |\n| ShenTel Service Company (ShenTel) | Provides Internet access to a multi-state region |\n| | surrounding the Northern Shenandoah Valley, hosts |\n| | Travel 511 for Virginia, and sells and services |\n| | telecommunication equipment. |\n| Shenandoah Valley Leasing Company (Leasing) | Finances purchases of telecommunications |\n| | equipment to customers of other segments. 
|\n| Shenandoah Mobile Company (Mobile) | Provides tower rental space in the Company's PCS |\n| | markets and paging services throughout the Northern |\n| | Shenandoah Valley. |\n| Shenandoah Long Distance Company (Long Distance) | Provides long distance services. |\n| Shenandoah Network Company (Network) | Leases interstate fiber optic facilities. |\n| ShenTel Communications Company (Shen Comm) | Provides DSL services as a CLEC operation. |\n| Shenandoah Personal Communications Company (PCS) | As a PCS Affiliate of Sprint, provides digital wireless |\n| | service to a portion of a four-state area covering the |\n| | region from Harrisburg, York and Altoona, |\n| | Pennsylvania, to Harrisonburg, Virginia. |\n\nThe accounting policies of the segments are the same as those described in the summary of significant accounting policies. Each segment accounts for inter-segment sales and transfers as if the sales or transfers were to outside parties.\n\nIncome (loss) recognized from equity method nonaffiliated investees by segment is as follows:\n\n| | | | | Consolidated | |\n| --- | --- | --- | --- | --- | --- |\n| Year | Holding | | Telephone | Totals | |\n| | | | (in thousands) | | |\n| 2003 | $ | (441) | $ 65 | $ | (376) |\n| 2002 | $ | (822) | $ 45 | $ | (777) |\n| 2001 | $ (1,218) | | $104 | $ (1,114) | |", - "page_start": 36, - "page_end": 36, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### **OUR BUSINESS**\n\nShenandoah Telecommunications Company is a diversified telecommunications holding company which provides various telecommunications services through its operating subsidiaries. 
These services include: wireline telephone service, primarily in Shenandoah County and small service areas in Rockingham, Frederick, and Warren counties, all in Virginia; cable television service in Shenandoah County; unregulated telecommunications equipment sales and services; online information and Internet access provided to the multi-state region surrounding the Northern Shenandoah Valley of Virginia; financing of purchases of telecommunications facilities and equipment; paging services in the Northern Shenandoah Valley; resale of long distance services; operation and maintenance of an interstate fiber optic network; wireless personal communications services (PCS) and a tower network in the four-state region from Harrisonburg, Virginia to the Harrisburg, York and Altoona, Pennsylvania markets.\n\n#### **ANNUAL MEETING**\n\nThe Board of Directors extends an invitation to all shareholders to attend the Annual Meeting of Shareholders. The meeting will be held at 11:00 AM (EST) on April 20, 2004 in the Auditorium of the Company's offices at the Shentel Center, 500 Mill Road, Edinburg, Virginia.\n\n#### **FORMS 10-K, 10-Q, and 8-K**\n\n**The Company files periodic reports with the Securities and Exchange Commission. The Company's Annual Report on Form 10-K, Quarterly Reports on Form 10-Q, and Current Reports on Form 8-K, along with any amendments to these reports, are available to shareholders through the Company's website, www.shentel.com. This website also has recent news releases and other information potentially of interest to shareholders.**\n\n**A copy of the Company's Annual Report on Form 10-K, without exhibits, may be obtained, without charge, by writing to Shenandoah Telecommunications Company, 124 South Main Street, P.O. 
Box 459, Edinburg, Virginia 22824, Attention: Secretary.**\n\n#### **MARKET AND DIVIDEND INFORMATION**\n\nThe Company's stock is traded on the NASDAQ National Market under the symbol \"SHEN.\" Information on the high and low closing prices per share of common stock as reported by the NASDAQ National Market for the last two years is set forth below:\n\n| | 2003 | | | | 2002 | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | Qtr. 1 | Qtr. 2 | Qtr. 3 | Qtr. 4 | Qtr. 1 | Qtr. 2 | Qtr. 3 | Qtr. 4 |\n| High price | $ 24.31 | $ 24.98 | $ 25.48 | $ 27.50 | $ 20.06 | $ 27.25 | $ 27.25 | $ 25.95 |\n| Low price | $ 13.64 | $ 14.33 | $ 19.25 | $ 19.74 | $ 16.50 | $ 19.69 | $ 22.75 | $ 21.61 |\n\n*All share and per share figures are restated to reflect the 2 for 1 stock split effected February 23, 2004.*\n\nThe Company historically has paid an annual cash dividend on or about December 1st of each year. The cash dividend per share was $0.39 in 2003 and $0.37 in 2002. The Company's ability to pay dividends is restricted by its long-term loan agreements. The loan agreements are not expected to limit dividends in amounts that the Company historically has paid.\n\nAs of February 15, 2004, there were approximately 3,930 holders of record of the Company's common stock.\n\n#### **CORPORATE HEADQUARTERS**\n\nShenandoah Telecommunications Company 124 South Main Street Edinburg, VA 22824\n\n#### **INDEPENDENT AUDITOR**\n\nKPMG LLP 1021 East Cary Street Richmond, VA 23219\n\n#### **SHAREHOLDERS' QUESTIONS AND STOCK TRANSFERS**\n\nCALL (540) 984-5200 Transfer Agent - Common Stock Shenandoah Telecommunications Company P.O. Box 459 Edinburg, VA 22824\n\n*This Annual Report to Shareholders contains forward-looking statements. These statements are subject to certain risks and uncertainties that could cause actual results to differ materially from those anticipated in the forward-looking statements. Factors that might cause such a difference include, but are not limited to: changes in the interest rate environment; management's business strategy; national, regional, and local market conditions; and legislative and regulatory conditions. 
Readers should not place undue reliance on forward-looking statements which reflect management's view only as of the date hereof. The Company undertakes no obligation to publicly revise these forward-looking statements to reflect subsequent events or circumstances, except as required by law.*", - "page_start": 58, - "page_end": 58, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## **NEW TELEPHONE DIRECTORY FOR THE NORTHERN SHENANDOAH VALLEY**\n\nThe Shenandoah Telephone Directory has undergone many changes since we published our first directory in 1906, as The Farmers' Mutual Telephone System of Shenandoah County. In 1906, the entire phone number listings were on 15 pages. The first Company directory to include yellow pages was distributed in 1946. That year local businesses invested in a new way to reach their potential customers.\n\nThe goal has always been to provide a useful tool for our customers. The pace of change has quickened in the last few years. In 2000, for the first time, Shenandoah Telephone's directory expanded from telephone listings for only Shenandoah County and Bergton, to include business and residential listings for Rockingham, Frederick, Clarke, and Warren counties. In 2001, Page County listings were added. The name of our directory was changed to ShentelPages in 2002 to reflect the expanded listing area. Although we included additional information in our directory, we continued to only furnish it to our local telephone customers.\n\nEarly in 2003, we conducted a customer survey to measure potential public acceptance of a regional phone directory for the six-county area. The\n\nfindings of the survey indicated almost 60% would likely use an expanded six-county directory, with a fourth of all respondents saying they would use a regional directory more often than the directory they currently had in their home or business. 
Based on these positive results, Shentel launched an expanded directory to meet the demand.\n\nAn extensive public-awareness campaign was launched on television and radio, in a variety of daily and weekly newspapers and at regional county fairs. The campaign helped build anticipation for the directory and increase awareness of yellow page advertising opportunities. As a result of the added value of the expanded distribution area, ShentelPages' yellow page advertising revenues increased 21%, to $1.8 million for the 2004 book.\n\nIn December 2003, Shentel mailed out 120,000 ShentelPages directories to every home and business in Shenandoah, Rockingham, Frederick, Page, Clarke and Warren counties. ShentelPages now has a potential audience that exceeds 300,000 readers. The 2004 directory continues to be an important local resource. In addition to telephone listings, it contains both general and county-specific information - from ZIP codes to area codes, and from international dialing instructions to the listing of regional interstate exits.\n\nThrough ShentelPages, businesses have a new way of reaching thousands more potential customers within the six-county area to sell their products and services. ShentelPages is bundled with our electronic version, ShentelPages.com. This service allows area residents to use their computer and the Internet to let their fingers do the walking.\n\nJust like our first book in 1906, the 2004 ShentelPages provides area residents with a quick and easy way to stay in touch.", - "page_start": 9, - "page_end": 9, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## **A SIGNIFICANT MILESTONE FOR PCS**\n\n2003 was the 10th anniversary of Shentel's decision to enter the PCS business and the 8th year operating as a Sprint PCS Affiliate. 
This year was a significant milestone for Shentel's PCS business, as we posted our first profitable quarter and recorded net income for the year of $0.3 million versus a net loss of $5.4 million in 2002.\n\nOur Sprint PCS wireless customer base continues to grow, with year-end customers at 85,139 spread from Harrisonburg, Virginia to Harrisburg, Pennsylvania. Our customers are averaging approximately 700 minutes of usage per month and we have one of the lowest customer churn rates in the industry. To keep up with this growth and improve our service, we continued investing in additional network facilities. We added capacity to 26 existing tower sites and installed 16 new tower locations bringing our total sites to 253. Our plan is to add capacity and build additional sites in 2004 in order to meet expected growth.\n\nWe added a new type of customer in 2003. Through Sprint's relationship with its wholesale customers, more than 11,000 pre-paid customers were added to our network. These pre-paid accounts, usually for customers with no established credit, are a low cost method to increase customers. They can purchase phones and some minutes at various convenience, electronic or department stores in addition to one of our company locations. When needed, they can easily purchase additional minutes.\n\nCamera phones and e-mailing pictures were hot in 2003. We now offer phones that can take and send a 15 second video. Late in the year, we launched Sprint PCS ReadyLink(SM), the Sprint walkie-talkie style service. It is hoped that these new services will be major sales drivers in 2004.\n\nIn 2003, we focused on improving our distribution channels. We expanded and relocated our stores in Harrisonburg and Winchester, Virginia to handle our growing customer base. At our Edinburg, Virginia store, we expanded both our hours and office space. We continue to increase our direct sales force to expand our base of business customers. 
To make it convenient for our potential customers, we also grew the number of local third-party sales partners.\n\nA much publicized development in our industry was the introduction of Wireless Local Number Portability (WLNP) on November 24th, 2003. Starting on that day, customers in the 100 largest population centers in the United States were able to change wireless carriers while keeping their existing phone number. WLNP will be available in the entire country on May 24, 2004. To date, this change has had only a minor impact on Shentel's customer base.\n\nWe continue to work to make PCS a growth vehicle of revenue and net income for Shenandoah Telecommunications Company.", - "page_start": 10, - "page_end": 10, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "For over 100 years Shenandoah Telecommunications Company has been committed to providing outstanding service to our customers. Our employees take that same dedication after hours to make a difference in their community.\n\nWe take this opportunity to share with you, our shareholders, the stories of just a few of your dedicated employees.\n\n*Patty Pomeroy* **help people.\"**\n\nVolunteerism is in Patty Pomeroy's blood. Her grandfather was a dispatcher for the rescue squad in Middletown, VA for 25 years and her grandmother was in the ladies auxiliary. Her father was a charter member of the Middletown Rescue Squad. In 1997, Patty, a customer service representative at Shentel for four years, continued the family tradition by earning her Emergency Medical Technician certification and going to \"work\" for the Strasburg Rescue Squad. Patty is the administrator of membership recruitment and retention for the squad and is the liaison coordinator for junior squad members under 18. 
It is her job to make sure that new members are brought in to the squad and current members stay active.\n\n# **\"There is a great satisfaction that comes from knowing that what you can do will**\n\nJeff Beard has been an installer repairman with Shentel for almost five years. Two years ago, Jeff helped start Project Isaiah 58, a faith-based recovery ministry that reaches out to people who are struggling with addiction. Project Isaiah 58 has weekly group meetings in Winchester, Woodstock and Warrenton, VA. Jeff, who lives in Winchester, participates in the group meetings and also makes time to meet one-on-one with people who need personal attention.\n\n**\"I feel the need to reach out to people who are suffering.\"** \n\n*Jeff Beard*\n\nJohn Gardner has been with Shentel for two years as a PCS technician in Central Pennsylvania, but for almost a year of that time he was on Naval Reserve duty in Sasebo, Japan. John joined the Reserves after serving 10 years of active duty. In October 2002, he was activated under Noble Eagle-Enduring Freedom as part of the increase in security at bases around the world. John worked on Motorola radios and repeater systems while stationed in Japan. It was tough for the serviceman to be away from his wife and children, but John believes very strongly in serving his country.\n\n**\"Being in the Reserves is a way for me to be a civilian and still serve my country.\"**\n\n## *John Gardner*\n\nAt Shentel, George Brinkley, the store manager in Front Royal, VA, is known for being one of the biggest fund-raisers for the Shenandoah County American Cancer Society Relay for Life event. In his six years at the Company, George has raised nearly $20,000. In 2003, he raised $4,246 and was recognized as the top individual fund-raiser for the entire event.\n\nIn 2002, George was chairman of the parade committee for the Woodstock, VA 250th anniversary celebration. 
Under George's leadership, the 26-member committee worked for a year preparing for the parade, which was the largest in the town's history.\n\n**\"I just have a knack for volunteering. I want to make my community better any way I can.\"**\n\n*George Brinkley* 3 ■ 2003 ANNUAL REPORT", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "# CHESAPEAKE'S COMMITMENT TO BEING A GOOD NEIGHBOR »\n\nThrough volunteer programs and responsible operations, we strive to be the best neighbor possible in every one of our operating areas by investing in our communities.", - "page_start": 23, - "page_end": 23, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## **SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES**\n\n## **2003 Financial Statements**\n\n## **INDEPENDENT AUDITOR'S REPORT**\n\nThe Board of Directors and Shareholders Shenandoah Telecommunications Company:\n\nWe have audited the accompanying consolidated balance sheets of Shenandoah Telecommunications Company and subsidiaries (the Company), as of December 31, 2003, 2002, and 2001, and the related consolidated statements of income, shareholders' equity and comprehensive income, and cash flows for the years then ended. These consolidated financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these consolidated financial statements based on our audits.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States of America. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. 
An audit also includes assessing the accounting principles used and significant estimates made by management, as well as evaluating the overall financial statement presentation. We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the consolidated financial statements referred to above present fairly, in all material respects, the financial position of Shenandoah Telecommunications Company and subsidiaries as of December 31, 2003, 2002 and 2001, and the results of their operations and their cash flows for the years then ended, in conformity with accounting principles generally accepted in the United States of America.\n\nAs discussed in note 1 to the consolidated financial statements, the Company changed its method of accounting for goodwill in 2002. As further discussed in note 1 to the consolidated financial statements, the Company changed its method of accounting for asset retirement obligations in 2003.\n\nRichmond, Virginia February 6, 2004", - "page_start": 12, - "page_end": 12, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "### **PEOPLE OF SHENTEL**\n\nTarinda Showman has worked part-time in the Shentel Communications Center since she was a summer intern in 1998. She initially joined the Conicville, VA Volunteer Fire Department to help raise funds, but when she ran her first emergency call she was hooked. During her six years in the Department, she served a two-year stint as captain and currently holds the office of secretary. In 1999, she joined the Mount Jackson, VA Rescue Squad. Each week she pulls two 12-hour shifts with the rescue squad and spends at least 10 hours at the fire department.\n\n**\"I do it because one day it might be my family. 
It's always somebody's family.\"**\n\n#### *Tarinda Showman*\n\nDuring his 36 years at Shentel, David Ferguson, Vice President-Customer Services, has been involved in a variety of community, civic and church organizations such as the Woodstock Rotary Club, the American Cancer Society and the March of Dimes. David is a charter member of the Board of Directors of the Shenandoah County Free Clinic and served as chairman of the fund-raising drive. The clinic opened its doors in June 2002, offering medical, dental and pharmaceutical services to county citizens who would not otherwise receive these services. In the first six months, more than 300 patients were served.\n\nFor their work at the clinic, David and his wife, Janet, received the Unsung Hero Award from Governor Mark Warner in 2003, and David earned the 2003 Beyond the Call Award from the United States Telecommunications Association.\n\n*David Ferguson*\n\n#### **\"It is so rewarding - you can see it on the faces of the people.\"**\n\nBrian Bosley, a Sprint PCS business-to-business sales representative with Shentel for the past three years, has always enjoyed sports. He takes his passion, knowledge and experience in sports, and volunteers his time with young people in his community. Brian has been active in the very successful Bridgewater, Virginia Community Little League program for the past four years. He currently serves as vice president of the Girls Minor League Softball. Brian also finds time to coach his daughters' T-ball and basketball teams.\n\n**\"I get a great sense of satisfaction from teaching kids and watching them grow and learn.\"**\n\n#### *Brian Bosley*\n\nCindy Rinker, corporate content editor at Shentel since October 2002, was recently named the 2004 chairman of the Woodstock, Virginia Downtown Enhancement Committee's Promotion Committee. The Downtown Enhancement group was established to find ways to revitalize downtown Woodstock. 
As a member for the past four years, Cindy has helped develop, plan and promote an impressive list of events from Light Up Woodstock at Christmastime to a street dance in spring, to Halloween on Court Square in October. The ultimate goal is to create a downtown area that is lively, attractive and reflective of Woodstock's important historical significance to the Shenandoah Valley and the Commonwealth of Virginia.\n\n**\"It is important to preserve the beauty and history of this area for the generations to come.\"**\n\n*Cindy Rinker*", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "# INVESTING IN OUR COMMUNITIES »\n\nChesapeake's sense of civic commitment provides a bountiful harvest of benefits to cities large and small. We partner with groups and organizations across all of our operating areas to improve the communities our employees, contractors, vendors, land and mineral owners call home. We believe the success of our business depends on the strength, goodwill and vitality of those communities. Most importantly, we believe it is the responsibility of every successful business to share success with its neighbors.\n\nIn 2010 we gave more than $25 million to charitable organizations and projects across our operating areas, primarily focusing on community development, education, health and medical and social services.\n\n# **Economic Impact**\n\nWhile much of the U.S. is still struggling to recover from the economic recession, the positive impact of natural gas and oil operations has provided a valuable economic recovery stimulus for states that are home to exploration and development activities. 
As the nation's second-largest producer of natural gas, a Top 15 producer of liquids and most active driller of new wells, Chesapeake's arrival in a new play stimulates economic activity, augments personal income through jobs and royalty payments, generates substantial tax revenue and sustains communities throughout its operating areas.\n\nIn addition to the general economic impact of our activities on local economies, the company's tax contributions are substantial. In 2010 Chesapeake paid approximately $675 million in taxes, including ad valorem, severance, sales, employer, and corporate income and franchise taxes. These taxes pay for ongoing government services and also build and maintain schools, recreational facilities, and parks and roads — at a time when state and local governments are still feeling the pinch of recession. We are proud to support America's economy with our growth while also helping to protect the environment through the greater use of clean-burning natural gas and reducing the country's dependence on expensive foreign oil.\n\nChesapeake also makes contributions that help improve lives and economies in cities where we operate: $25 million in 2010 alone. For example, this past year we donated $200,000 to establish the Chesapeake Environmental and Recycling Center at Goodwill Industries of Central Oklahoma. The center will provide an additional 80 jobs to disabled Oklahomans, as well as help Goodwill recycle 10 million pounds a year, which equates to one-third of the goods that otherwise would have been destined for Oklahoma City-area landfills.\n\n### **Chesapeake's $25 million of charitable giving in 2010**\n\n- Community Development\n- Education\n- Health and Medical\n- Social Services\n\n
In West Virginia, we helped fund construction of the Morgantown Market Place, a permanent site for the city's farmers' market, creating more business opportunities for local farmers.\n\n*Equipping the next generation — West Virginia students hold their new laptops from Chesapeake as part of the company's Discovering Tomorrow's Leaders program.*\n\nChesapeake also supports local chambers of commerce and city councils in all of its operating areas. In the Haynesville Shale last year, we awarded grants to the Shelby County, Sabine Parish and Coushatta-Red River chambers of commerce to help fund tourism, business communications and chamber events. In Texas, we assisted more than 250 civic, professional and community service organizations throughout Johnson, Tarrant and western Dallas counties, and sponsored memberships in 35 local Texas chambers of commerce. By helping local chambers and businesses grow and thrive, we are creating stronger economies.\n\nWe also hire locally whenever possible to help stimulate the local economy, and we provide training when the local work force isn't yet qualified for the jobs we have open. For example, when Chesapeake began operating in the Marcellus Shale of West Virginia and Pennsylvania, finding experienced rig workers was a challenge. To meet that need, Chesapeake's wholly owned subsidiary, Nomac Drilling, built the 40,000-square-foot Eastern Training Center and Housing Facility in Bradford County, near Sayre, Pennsylvania. The campus opened in 2010 and serves as a housing facility and training ground for 266 workers at a time. Nomac and Chesapeake host regular job fairs in the region and the lines of interested candidates often extend out the door.\n\n# **Educational Impact**\n\nWe are also proud to help prepare tomorrow's leaders today. 
In 2010 Chesapeake supported universities, schools, academic chairs, scholarships and other educational programs with contributions totaling $5.4 million.\n\nInvesting in programs that promote technology and innovation is a key to our country's success. That's why we gave $1.0 million to establish the Chesapeake Energy dormitory for students at the Oklahoma School for Science and Mathematics (OSSM), a public, tuition-free, residential high school located in Oklahoma City for juniors and seniors with exceptional abilities. The extremely competitive school is helping train the next generation of scientists and mathematicians.\n\nWe also established the Chesapeake Energy Presidential Scholars Program at the Oklahoma City University Meinders School of Business, making a $5.0 million commitment to be distributed over the next five years. The Chesapeake Scholars Program will provide up to $25,000 per year in tuition", - "page_start": 25, - "page_end": 25, - "source_file": "NYSE_CHK_2010.pdf" - } - ] - }, - { - "references": { - "source_file": "maiis-user-manual.pdf", - "query": "As a product manager, how can I reject an inventory in NAIIS ?", - "target_page": 38, - "target_passage": "Log in as PM. Click on “View Inventories Progress” under sub menu “Submission Management”. The “View Inventories Progress” screen appears. Select the appropriate inventory by clicking the Inventory name under column “Name” Press the “Reject” button ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# NAIIS Web Application\n\n(Release version 1.1.3) User Manual\n\n(As of 10 February 2014)", - "page_start": 0, - "page_end": 0, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# **10 Submission management**\n\n# **10.1 Workflow**\n\nCreating and preparing an inventory, generating tables for checking by the NFP and approving and/or rejecting submission, follows a number of steps known collectively as a workflow. 
This chapter describes the workflow relating to the submission of the GHG inventory/(ies), which users should follow to create, prepare, and send GHG inventories for internal checking, and approval/rejection of the submission by the NFP, within the NAIIS web application (figure 52).\n\n### *Figure 52: Non-Annex I Inventory Software workflow*\n\n# **10.2 Start of inventory/submission (NFP or PM)**\n\nThis procedure allows the NFP or PM to start a new (created) inventory. The existing data for the inventory year identified will be made available in the new inventory/submission.\n\nThese are the steps to start a new inventory:\n\n- 1. Click on \"View Inventories Progress\" under sub menu \"Submission Management\" (figure 53).\n### *Figure 53. View Inventories Progress sub menu*\n\n- 2. The \"View Inventories Progress\" screen appears (figure 54).\n- 3. Select the appropriate inventory by clicking the box under column \"Working Inventory\" (figure 54, a).\n\n*** Note: The selected appropriate inventory should be in status \"created\" (figure 54, b)", - "page_start": 34, - "page_end": 34, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# **3.2.2 Create, Start, Add new and View GHG inventory year**\n\nThese functions allow the NFP and PM to create or edit a GHG inventory within the NAIIS software.\n\n# *3.2.2.1 Create a new GHG inventory or Start a GHG inventory year*\n\n### 3.2.2.1.1 Create a new GHG inventory\n\n### **Note**: This step can ONLY be undertaken by the NFP or PM !\n\nIn order to create one or several GHG inventories, the following steps can be done by the NFP or PM:\n\n- Log in as NFP or PM\n- Hover the cursor on \"Submission Management\" menu and click on the \"View Inventories Progress\" button. (see Figure 5). Left click on the \"+\" sign will create a new GHG inventory. 
(see Figure 6)\n\nThe new GHG Inventory name will be automatically generated by the NAIIS system, as follows: ___ Inventory\n\nFor example: Paraguay_2013_1_Inventory or Bhutan_2014_2_Inventory\n\n### *Figure 5. Create new GHG inventory screen*", - "page_start": 7, - "page_end": 7, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# **2.2 Pending NAIIS features**\n\nList of pending functionalities in NAIIS:\n\n- 1. Web services integration for help desk\n- 2. Display of information in 5 remaining UN languages.\n\n# **2.3 Contact**\n\nRequests for access to, inquiries on the use of the software, and comments on the design and functionalities of the application should be sent to the dedicated e-mail address **naiisapp@unfccc.int**.", - "page_start": 4, - "page_end": 4, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "### 3.2.2.1.2 Start a GHG inventory\n\nIn order to START a GHG inventory, please follow the steps below:\n\n- Log in as PM.\n- Hover the cursor on the \"Submission Management\" menu and click on the \"View Inventories Progress\" button.\n- Click/select the appropriate GHG Inventory in Status = \"created\" (see figure 7a).\n- Click on \"Work on Inventories\" under Submission Management (see figure 7b).\n\n### *Figure 7: Select an Inventory screen*\n\n[screenshot: \"View Inventories Progress\" grid listing inventory NAI_2013_1_Inventory in status \"created\"]\n\n- Left click to select the appropriate Inventory (figure 8a)\n- Press the \"Start Inventory\" button (figure 8b)\n\n### *Figure 8: Start an Inventory screen*\n\n[screenshot: \"Work on Inventories\" screen showing the selected inventory's general properties, sectors and inventory years, with a \"Start Inventory\" button]\n\nOnce the \"Start Inventory\" button is pressed, the status of the selected Inventory changes to \"started\" (see Figure 9).\n\n### *Figure 9: \"Started\" status of an Inventory*\n\n[screenshot: the same screen with the inventory's status shown as \"started\"]", - "page_start": 8, - "page_end": 8, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "
I (NAI) Parties 45 |\n| Annex 2: Fuel categories 47 |\n| Annex 3: Global Warming Potentials (GWPs) 48 |\n| Annex 4: Default values 49 |", - "page_start": 2, - "page_end": 2, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# **10.5 Approval or Rejection of an inventory (NFP)**\n\nThis section describes how the NFP approves or rejects an inventory after being sent for approval by the PM (See section 10.4).\n\n# **10.5.1 Approval of an inventory**\n\n- 1. Log in as NFP.\n- 2. Click on \"View Inventories Progress\" under sub menu \"Submission Management\".\n- 3. The \"View Inventories Progress\" screen appears.\n- 4. Select the appropriate inventory by clicking the Inventory name under column \"Name\" (figure 64).\n- 5. Press the \"Approve\" button (figure 64, b).\n\nOnce the \"Approve\" button was pressed, the status of the selected inventory changes to \"approved\" (figure 65, b).\n\n*** Note: A notification email will be sent to the PM that the inventory has been approved. Therefore, the PM may proceed to selecting the tables for preparing the official submission (See section 10.6).\n\n### *Figure 64. Work on Inventories screen – Approve an inventory - Status = awaiting_approval*\n\n| Figure 65. Work on Inventories screen – Approve an inventory - Status = approved |\n| --- |", - "page_start": 39, - "page_end": 39, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# **1 Introduction**\n\nThe Non-Annex I Inventory software (NAIIS) web application is a web-based tool developed for use by Parties not included in Annex I to the Convention (non-Annex I Parties) to estimate and report their national greenhouse gas inventories (GHG inventories). 
As per Article 4, paragraph 1 (a), and Article 12, paragraph 1 (a) of the Convention, non-Annex I Parties are required to communicate to the Conference of the Parties a national inventory of anthropogenic emissions by sources and removals by sinks of all greenhouse gases (GHGs) not controlled by the Montreal Protocol, to the extent their capacities permit, following the guidelines contained in the annex to decision 17/CP.8.\n\nIn order to assist non-Annex I Parties in estimating and reporting their GHG inventories as part of their national communications, the secretariat developed an Excel-based software which incorporated all the elements of a national GHG inventory prescribed by decision 17/CP.8. The software was based on the IPCC inventory software version 1.1, which used the Tier 1 methodologies for estimating GHG emissions and removals for all source categories included in the Revised 1996 IPCC Guidelines, and further complemented by the GPGs.1\n\nSince its release in 2005, most non-Annex I Parties have been using that software for the development of their national GHG inventories. In December 2011, Parties requested the secretariat to upgrade the software and make it available to non-Annex I Parties by June 2013. Pursuant to that request, the secretariat converted the current Excel-based version of the software (v.1.3.2)2 into a web-based application (NAIIS) which provides greater flexibility and security for maintaining data.\n\n# **2 General information**\n\nThe NAIIS is a web-based application designed to enable non-Annex I Parties to estimate their national GHG inventories according to the UNFCCC guidelines and using the IPCC methodologies, and to report the results in their national communications and biennial update reports.\n\n# **2.1 System overview**\n\nThe NAIIS web application has the following functionalities:\n\n- 1. User management (only for the user roles NFP and PM)\n- 2. Submission management\n- 3. Data entry\n- 4. Key category analysis\n- 5. 
Reporting tables\n- 6. Data Export/Import\n- 7. Completeness\n- 8. Consistency\n\nThe NAIIS web application allows input of data through three different channels:\n\n- 1. Manual input into the entry grids\n- 2. Partial or full import of data from Excel\n- 3. Bulk import of data from XML\n\nThe GHG emissions totals, by gas and by sector, are automatically calculated and saved based on the values entered for activity data (AD), emission factors and other relevant parameters. In addition, the software facilitates the reporting of other category specific information, for example, the choice of the method for activity data and emission factors.\n\n1 Good Practice Guidance and Uncertainty Management in National Greenhouse Gas Inventories, 2000, and Good Practice Guidance for Land\n\nUse, Land‐Use Change and Forestry, 2003. 2 http://unfccc.int/files/national_reports/non‐\n\nannex_i_natcom/training_material/methodological_documents/application/zip/unfccc_nai_is_132.zip", - "page_start": 3, - "page_end": 3, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "### *Figure 12. 
Screen of \"Work on Inventories\"*\n\n| Non-Annex I Inventory Software NAIIS v1.1.3 | | Non-Annex I Party Inventory #1 Editable | | | | Non-Annex I PM | Sign Out |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| | United Nations | | | | | | |\n| | Framework Convention on | | | | | | |\n| | Climate Change | | | | | | |\n| | | Users Management Submission Management Data Entry Key Categories Choice Reporting Tables Data Export / Import Completeness Consistency | | | | | |\n| Name | Submission year Creator | Creation date Status | | Updater | Submission date Energy | | Industrial Proces |\n| - NAI_2013_1_Inventory | | Non-Annex I PM Wed Dec 18 12:18:57 CET 2013 created | | Non-Annex I PM | | D | D |\n| | 1 = | | | | | | |\n| 2 ExtJS EJS TreeGrid v9.2 | | | | | | | |\n| General Properties | | ector | | | Inventory Years | | |\n| Name | NAI_2013_1 Inventory | Energy | | | 1990 | | III > |\n| Submission year | | Industrial Processes | િ | | 1991 | | |\n| Creator | Non-Annex I PM | Solvent and other product use | | | 1992 | | |\n| Creation date | Wed Dec 18 12:18:57 CE | Agriculture | 000 | | 1993 | 0000 | |\n| Status | created | LUCF | | | 1994 | | |\n| Updater | Non-Annex I PM | LULUCF | | | 1995 | | |\n| Submission date | | Waste | | | 1996 | | |\n| Start Inventory | | | | | | | |\n\n# *3.2.2.3 View Inventory Progress*\n\n- The NFP or PM should log into the system.\n- Click on \"View Inventories Progress\" under Submission Management (figure 13)\n\n### *Figure 13. 
View Inventories Progress*\n\nClick on \"View Inventories Progress\" button will display the initial screen with the following columns (figure 14a, 14b and 14c):\n\n- **Name** automatically given by the system, once created\n- **Working Inventory** active box shows the current working inventory\n- **Submission year** year when the submission process was initiated\n- **Creator** user who created the inventory\n- **Creation date** date when the inventory was created\n- **Status** created, started, check, submitted, approved, awaiting approval, awaiting rejection check\n- **Updater** user name who updated the inventory\n- **Submission date** date of submission\n- **Sectors** Energy, Industrial processes, Solvent and other product use, Agriculture, LUCF, LULUCF, Waste, Other\n- **Inventory year**", - "page_start": 10, - "page_end": 10, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# *3.2.2.2 Add a new GHG inventory year or edit general properties/sectors (only NFP and PM's)*\n\n- Log in as NFP or PM.\n- Click on \"Work on Inventories\" under Submission Management (figure 10).\n### *Figure 10: Sub menu \"Work on Inventories\"*\n\nOnce \"Work on Inventories\" has been clicked, the initial screen will be displayed, which shows the following boxes (figure 11):\n\n- a. Existing Inventory (with all options)\n- b. General properties include the name, submission year, creator, creation date, status, updater and submission\n- date c. Sectors\n- d. Inventory years\n\n### *Figure 11. 
Initial screen of \"Work on Inventories\"*\n\n| Non-Annex I Inventory Software NAIIS v1.1.3 | | Non-Annex I Party Inventory #1 Editable | | | | Non-Annex I PM | Sign Out |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| United Nations | | | | | | | |\n| | Framework Convention on | | | | | | |\n| Climate Change | | | | | | | |\n| | | Users Management Submission Management Data Entry Key Categories Choice Reporting Tables Data Export / Import Completeness Consistency | | | | | 21 |\n| | | | | | | | |\n| Name | Submission year Creator | Creation date | Status | Updater | Submission date Energy | | Industrial Proce: |\n| - NAI_2013_1_Inventory | | Non-Annex I PM Wed Dec 18 12:18:57 CET 2013 created | | Non-Annex I PM | | D | D |\n| | 1 III | | | | | | |\n| EJ ExtJS EJS TreeGrid v9.2 | | | | | | | |\n| General Properties | | ector | | | Inventory Years | | |\n| Name | NAI 2013 1 Inventory | Energy | | | 1990 | | |\n| Submission year | | Industrial Processes | (V) | | 1991 | | |\n| Creator | Non-Annex I PM | Solvent and other product use | છ | | 1992 | | |\n| Creation date | Wed Dec 18 12:18:57 CE | Agriculture | 0 | | 1993 | 000 | |\n| Status | created | UCF | 0 | | 1994 | | |\n| Updater | Non-Annex I PM | ULUCF | D | | 1995 | | |\n| Submission date | | Naste | D | | 1996 | | |\n| Start Inventory | AN | | | | | | |\n\nFollow the steps to add/remove an inventory year:\n\n- Click on the inventory year (figure 12a)\n- Select the inventory year under General properties (figure 12b)\n- Select or deselect the appropriate Sectors (figure 12c)\n- To **add** or **remove** an inventory year, select or deselect the relevant year under Inventory Years box (figure 12d)", - "page_start": 9, - "page_end": 9, - "source_file": "maiis-user-manual.pdf" - } - ] - }, - { - "references": { - "source_file": "maiis-user-manual.pdf", - "query": "What is the global warming potential of Perfluorohexane ?", - "target_page": 48, - "target_passage": "7,400", - "chunk_present": { - "presence": 
true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "| Greenhouse gas | Chemical formula | 1995 IPCC GWP |\n| --- | --- | --- |\n| Carbon dioxide | CO2 | 1 |\n| Methane | CH4 | 21 |\n| Nitrous oxide | N2O | 310 |\n| HFC-23 | CHF3 | 11,700 |\n| HFC-32 | CH2F2 | 650 |\n| HFC-41 | CH3F | 150 |\n| HFC-43-10mee | C5H2F10 | 1,300 |\n| HFC-125 | C2HF5 | 2,800 |\n| HFC-134 | C2H2F4 | 1,000 |\n| HFC-134a | CH2FCF3 | 1,300 |\n| HFC-152a | C2H4F2 | 140 |\n| HFC-143 | C2H3F3 | 300 |\n| HFC-143a | CF3CH3 | 3,800 |\n| HFC-227ea | C3HF7 | 2,900 |\n| HFC-236fa | C3H2F6 | 6,300 |\n| HFC-254ca | C3H3F5 | 560 |\n| Perfluoromethane | CF4 | 6,500 |\n| Perfluroethane | C2F6 | 9,200 |\n| Perfluoropropape | C3F8 | 7,000 |\n| Perfluorobutane | C2F10 | 7,000 |\n| Perfluorocyclobutane | c-c4F8 | 8,700 |\n| Perfluoropentane | C5F12 | 7,500 |\n| Perfluorohexane | C6F14 | 7,400 |\n| Sulphur hexafluoride | SF6 | 23,900 |\n\n# **Annex 3: Global Warming Potentials (GWPs)**\n\n*Source: Climate Change 1995, The Science of Climate Change: Summary for Policymakers and Technical Summary of the Working Group I Report, page 22.*", - "page_start": 47, - "page_end": 47, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "is 16.9% in which the temperature would go up more than 3.0 °C, most located in the high latitude regions of Northern Hemisphere; the area is rarely in which the temperature would go up between 0 and 1.0 °C.\n\nTere are apparent trends of humidifcation in most regions under global warming by 1.5 °C and 2.0 °C; but the drought risk also should be taken seriously in the other regions. 
Under global warming by 1.5 °C the area is 73.6% of the whole world in which the precipitation would increase, most located in the Northern Hemisphere; the area is 53.7% of the whole world in which the precipitation would increase by less than 50 mm; however, the area is 26.4% of whole world in which the rainfall would decrease, mainly located in the Southern Hemisphere and the middle regions of Northern Hemisphere. Te distribution of precipitation under global warming by 2.0 °C is similar with the situation under global warming by 1.5 °C. Te drought-threatened area would increase by 28.5% under global warming by 2.0 °C, especially in the middle and low latitude of the Northern Hemisphere; the area would expand to 26%, in which the precipitation increases more than 50 mm. In other words, the extreme rainfall events (such as drought, rainstorm) under global warming by 2.0 °C would be more serious than those under global warming by 1.5 °C, which is what we should be pay more attention to.\n\n**Yield change of maize under global warming by 1.5 °C and 2.0 °C.** Maize production is afected by climate change apparently. According to the simulation results of CERES-maize, the yield of maize would decrease in the worldwide relative to 1986–2005 under global warming by 2.0 °C; it would increase little under global warming by 1.5 °C. Te distributions of maize yield loss under the two scenarios are similar to each other, mostly located in the middle and low latitude, which are the main regions for maize planting in the world. Te loss risk of maize under global warming by 2.0 °C is much more serious than that under global warming of 1.5 °C. 
However, there are increasing potentials of maize yield in many regions, nearly half of the whole maize planting area in the world, in which the climate situation would become more proper for maize under global", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed9.pdf" - }, - { - "text": "| Model | Research institute | Country | Horizontal resolution |\n| --- | --- | --- | --- |\n| GFDL-ESM2M | Geophysical Fluid Dynamics Laboratory | Te United States | 144×90 |\n| HadGEM2-ES | Hadley Center for Climate Prediction and Research | Te United Kingdom | 192×145 |\n| IPSL-CM5A-LR | L' Institute Pierre-Simon Laplace | France | 96×96 |\n| NorESM1-M | Norway Climate Center | Norway | 144×96 |\n| MIROC-ESM | Center for Climate System Research, National Institute for Environmental Studies, and Frontier Research Center for Global Change | Japan | 128×64 |\n\n**Table 1.** Basic information of 5 ESMs in CMIP5. Horizontal resolution means the number of longitudinal grids×the number of latitudinal grids.\n\n**Figure 1.** Changes of global temperature of 20 years moving average from 2020 to 2099 simulated by 5 ESMs under 4 RCP scenarios. Note: Te black horizontal dashed lines: global warming by 1.5 °C and 2.0 °C; the black vertical solid line: the years when global warming reaches 1.5 °C and 2.0 °C simulated by the selected models and scenarios.\n\nAlthough, so far there are plenty of research on the impacts of global warming by 1.5 °C temperature, including the impacts comparison of global warming by 1.5 °C versus 2.0 °C44. It is necessary to do more quantitative impacts assessments of global warming by 1.5 °C and 2.0 °C on crops yield and market price to address research gaps and support the requirement of the scientifc community and governments. 
In this paper, the future climate situations were selected and analyzed which are the approximate scenarios with global warming by 1.5 °C and 2.0 °C, based on the simulation results from 5 climate models recommended by ISI-MIP under 4 RCP scenarios. Ten the per unit yield changes of maize all over the world under global warming by 1.5 °C and 2.0 °C were analyzed and the spatial distributions of changes in maize yield were revealed relative to the baseline from 1985 to 2006, applying crop model DSSAT (Decision Support System for Agrotechnology Transfer). Next, we examine the efects of the resulting maize production shocks in diferent countries; the market price of maize is simulated using GTAP to reveal the impacts of climate change on global crop trade. Finally, the future trend of maize yield and market price in the main breadbasket is assessed and the adaptation suggestions are put forward for maize cultivation.\n\n#### **Materials and methods**\n\n**Data processing.** In this study, historical daily weather data (1986–2005) are from the AgMERRA dataset. AgMERRA is a post-processing of the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. Te dataset is proved to be suitable for agricultural modelling and features consistent, daily time-series data45.\n\nFor future (2020–2099), the original climate scenario data (Table 1) were extracted from output archives of fve ESMs (including GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M) under four RCPs (RCP2.6, RCP4.5, RCP6.0, RCP8.5) retrieved from the CMIP website. Te climate scenario data was interpolated into 0.5°×0.5° horizontal resolution and bias-corrected with respect to historical observations to remove systematic errors46. 
Te data of maize-planting regions are from the gridded global dataset in 2000 by combining two data products47,48.\n\n**Simulation of climate scenarios with global warming by 1.5 °C and 2.0 °C.** In this study, climate data of global warming by 1.5 °C and 2.0 °C are determined according to the results of global climate models driven by typical concentration paths (RCPs) of greenhouse gas emissions. Eligible data are selected from a total of 20 sets of data under four RCP scenarios of fve ESMs (including GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M), which estimate the temperature, precipitation and sunshine hours (Fig. 1).\n\nVol:.(1234567890)", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed9.pdf" - }, - { - "text": "**Figure 7.** Price change on maize in main continents under global warming by 1.5 °C and 2.0 °C.\n\n**Figure 8.** Changes in Self-sufciency ratio of maize in main countries under global warming by 1.5 °C and 2.0 °C.\n\nmeantime, the huge diferences in yield changes in diferent regions provide a small chance for the world, especially under global warming by 1.5 °C. In the near future, if the global temperature can be efectively controlled under 1.5 °C warming scenario, there would be an increase in the potential for maize yield in the worldwide. All regions and countries should take actions to reduce the yield loss risk. For the yield-increasing regions, the potentials of climate resources should be fully utilized to guarantee maize yield under future scenarios; for the yield-reducing regions, the targeted adaptation actions should be taken in advance under global warming by 1.5 °C and 2.0 °C.\n\nMeanwhile, the risk of price fuctuations caused by global corn trade due to future climate change should be paid more attention to, especially for developing and undeveloped countries. 
In the view of supply and demand, the population would go up quickly in the next 30 years; the demand for maize would increase hugely; however, the supply of maize would go down in the future, especially under global warming by 2.0 °C; it would intensify the contradiction between supply and demand, which would threaten the food security and sustainable development in the whole world.\n\nIn this study, 5 climate models are selected, which are recommended by ISI-MIP (Te Inter-Sectoral Impact Model Intercomparison Project); compared with other climate models, the fve models could more efectively support impact assessment in diferent sectors and provide more reliable results. Based on the simulation results", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed9.pdf" - }, - { - "text": "issues and re-constructing them differently. By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as \"earth\" and \"pollution\", whereas \"climate change\" was more associated to specific issues like \"solar\", \"coal\", \"china\", and \"food\".\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. 
This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, \"snow\", \"summer\", \"winter\", or \"heatwave\" in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' differences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n#### 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. 
The second largest cluster of global warming was politics-based, where hashtag \"tcot\", favored by right-leaning users and \"p2\", favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n#### 5.1.3. Discourse Structure\n\nIn the discourse surrounding #climatechange, \"environment\", \"energy\", and \"global action\" represented the themes of the three largest clusters in the network. However, three popularly recurring hashtags, \"#environment\", \"#energy\", and \"#climateaction\", did not belong to any of the three clusters above, but formed another small tight cluster together, sitting in the most central part of the semantic network, as shown in Figure 2b. As each of the three hashtags can almost represent one sub-theme of the climate change topic and these three hashtags were tightly bundled might indicate an attempt by #climatechange users to address all three communities together [91], consolidating climate change as a topic rather than a loosely organized topic. Previous communication studies also confirmed hashtags' function of serving as a hybrid forum [68], where heterogeneous individuals coordinate to solve", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "## **OPEN**\n\n# **The impact of 1.5 °C and 2.0 °C global warming on global maize production and trade**\n\n**Kuo Li1*****, Jie Pan1 , Wei Xiong2 , Wei Xie3 & TariqAli3**\n\n**Climate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. 
Future climate scenario data was simulated by 5 climate models recommended by ISI-MIP under 4 RCP scenarios, in which the approximate scenarios with global warming by 1.5 °C and 2 °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by 1.5 °C and 2.0 °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under 2.0 °C scenario was much more serious than 1.5 °C scenario; the ratios of yield changes were separately 0.18% and − 10.8% under 1.5 °C and 2.0 °C scenarios. The reduction trend of total maize production is obvious in the top fve countries and the main producing regions of the world, especially under the 2.0 °C scenario. The market price of maize would increase by around 0.7% and 3.4% under 1.5 °C and 2.0 °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.**\n\nIn the past hundred years, the global climate has experienced great changes1–4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health6–10. Global warming has gradually changed from a scientifc issue to a major social issue of common concern to governments and people of all countries11–13. In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris14. 
Paris Agreement has indicated that it is urgent to hold the increase in global average temperature well below 2.0 °C above pre-industrial levels and pursue eforts to limit the temperature increase to 1.5 °C above pre-industrial levels.\n\nFaced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food security in the whole world15–20. Meanwhile, global production losses might lead to price shocks and trigger export restrictions21–24; an increasingly interconnected global food system25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security in the worldwide27–29. So, the impacts of climate changes on crop yields and prices have been of highly concerned. Numerous studies have revealed that the warming trend has negative impact on crop yields and global trade in most regions all over the world30–32. Tere are three main methods for impacts assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations17,33. Environment-controlled experiments are designed to observe the infuence of climate factors on crops, such as drought, food, heat stress, cold damage, elevated CO2 concentration, through which the impact mechanism of climate change on crops would be revealed and established23,34,35. Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected feld sites or in selected regions36–39. Te statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in diferent sites or counties to establish regression functions for crop responses predictions40–43. 
Tese researches have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\n1 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China. 2 International Maize and Wheat Improvement Center, Texcoco, Mexico. 3 Peking University, Beijing, China. *email: hqlk2000@163.com", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "Firstly, the period of 1986–2005 is defned as the baseline, of which the simulated average value is recognized as 0.61 °C above pre-industrial (the period of 1850–1900) levels; the baseline is selected according to the accessibility and operability of data, which is used for the determination of the periods with global warming by 1.5 °C and 2.0 °C and the comparison of maize yield between diferent periods. Secondly, the simulated values of global mean temperature in the future years are subtracted from the simulated average value of 1986–2005; then the values should be plus with 0.61 °C, which are the global warming results above pre-industrial levels; then 20 years moving average of the above results are calculated. Tirdly, the climate data of global warming by 1.5 °C is defned according to the principles provided in the ffh IPCC Assessment Report, for which it should be within 1.5–2.0 °C above pre-industrial levels at the end of the twenty-frst century; the climate data of global warming by 2.0 °C is defned according to the principles provided in the ffh IPCC Assessment Report, for which it should be within 2.0–2.5 °C above pre-industrial levels at the end of the twenty-frst century and the period of global warming by 2.0 °C should not be earlier than 2050. 
Finally, the climate models, scenarios and periods of global warming by 1.5 °C and 2.0 °C are separately confrmed; the data of global warming by 1.5 °C, simulated by IPSL-CM5A-LR under RCP2.6 scenario during 2020–2039 and simulated by GFDL-ESM2M under RCP4.5 scenario during 2041–2060; the data of global warming by 2.0 °C, simulated by NorESM1-M under RCP4.5 scenario during 2060–2079 and simulated by GFDL-ESM2M under RCP6.0 scenario during 2065–2084.\n\n**Simulation of maize yield using DSSAT.** According to the data of global warming by 1.5 °C and 2.0 °C selected above, we simulated global maize yield changes compared with the average yield during 1986–2005 on grid level using CERES-Maize, which is part of DSSAT version 4.649.\n\nTe inputs for DSSAT simulation include daily weather data, soil parameters, crop calendar data and management information. All the inputs are formatted at a 0.5°×0.5° grid resolution which are computed by highperformance computers. Weather data is from the AgMERRA dataset, including maximum and minimum temperatures, precipitation, total radiation and humidity. Crop calendar data were from the Center for Sustainability and Global Environment (SAGE), in which the existing observations of crop planting and harvesting dates are gridded formatted at a resolution of 5 min50. For management information, fertilizer applications, irrigation and other management practices are required. A crop-specifc gridded dataset of nitrogen fertilizer application for the world was developed by integrating national and subnational fertilizer application data from a variety of sources, which is used to set up current fertilizer application rates for maize in each grid cell. Soil parameters are from the International Soil Profle Dataset (WISE), including soil texture, bulk density, pH, organic carbon content and fraction of calcium carbonate for each of fve 20 cm thick soil layers51. 
All the soil data is allocated to be in accordance with the request of DSSAT simulation; the missing soil parameters for organic soils were adopted from FAO soil dataset.\n\nFirst maize yields across the world during the historical period 1986–2005 were simulated at the 0.5°×0.5° grid scale with two main production systems, including Spring maize and Summer maize. Historical national maize production is aggregated from simulated gridded yield and weighted by grid cell maize areas in 2000 from the gridded global dataset by combining two data products47. Second, genetic parameters of specifc cultivars of maize from previous works were adopted for the initial parameters; model parameters related to crop genotype characteristics were calibrated and tuned following the method in Xiong et al.52, in which the simulated yields from 1986–2005 were comparable to the statistical data. Tird, maize yields across the world were simulated under global warming by 1.5 °C and 2.0 °C. Finally, global and national maize yields were aggregated from gridded values; changes in national and global yields under global warming by 1.5 °C and 2.0 °C were calculated, comparing maize yield average for 1986–2005.\n\n**Simulation of market price using GTAP.** Te yield changes for maize from the DSSAT models under 1.5 °C and 2.0 °C temperature increase are used to carry out simulations using competitive market for changes in production, market price, and self-sufciency ratio of maize at national and global levels53,54. For this study, we use a comparative static analysis approach to simulate the impact of climate changes on the prices and trade of the major food crops under current economic conditions. 
Utilizing current economic conditions has the advantage of minimizing assumptions and model uncertainties related to future economic conditions55,56.\n\nTe original GTAP database doesn't include maize as a separate sector, rather it is combined with other coarse grains to form an \"other coarse grain\" sector. For this study, we updated the GTAP database by splitting maize from the original sector in the database, design an appropriate sectoral and regional aggregation scheme to the original database. Te detailed method is given as follows:\n\nFirst, we improved the database by splitting maize from the existing sector \"other coarse grain\", following similar work using GTAP57–59 based on the routines from the Splitcom method60. In this procedure, the old fows of data both at national and trade levels are allocated between the new fows using weights. Te national weights include the division of each unsplit user's use of the original split commodity among the new commodities; the division of unsplit inputs to the original industry between the new industries; the splitting of new industry's use of each new commodity. Maize use is mainly shared between feed, food, processing and others (seed, waste, etc.).\n\nTrade shares allocate the original slice of the split commodity into the new commodity for all elements of basic price value, tax, and margin. Finally, we used the RAS method for balancing the newly created database. Te values for the national shares matrix were obtained from FAOSTAT. Te trade shares matrix was calculated based on the data from UN Comtrade Database.\n\nSecond, our sectoral aggregation scheme for GTAP ensures that all the competing and complimenting sectors for maize are present in the most disaggregated form. For example, for maize, other crops compete for inputs of production and both livestock and households are major users of maize. 
For regional aggregation, we kept the details for all the main producing, consuming, and trading regions, for maize.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed9.pdf" - }, - { - "text": "complex changes in the state of the climate [7], which may be caused by natural process, external forces, or human interventions [8]. By randomly assigning respondents to climate change or global warming questionnaires, scholars confirmed that the different connotations contained in the two definitions are likely to evoke distinct interpretations of the causes and impacts of the global climate issue [9], which may inhibit collaboration and joint efforts to mitigate the global challenge.\n\nPublic preference between climate change and global warming is even more apparent when considering the ideology spectrum [10]. Some scholars concluded that conservatives, who are less concerned with environmental issues, tended to use global warming as a narrative strategy because global warming has a more direct connection with temperature rise, making it easier to find contradictory cues such as freezing weather or heavy snowstorms to deny global climate change facts [11]. The associations between global warming and human activities may contribute to more controversies as well [12], connecting global warming more with the \"hoax\" frame [5] and evoking greater negative sentiment [13].\n\nAlthough these existing studies have often attempted to identify the differences between these two terminologies, only a particular few perspectives, such as sentiment, ideological preference, or cause and effect, were examined in each study [3,9,13]. However, the associate network model introduced by psychologists suggests that human recognition and memory have a network-shaped architecture [14], where individual understanding of particular objects is connected with numerous other objects in the mind. 
According to the associate network model, individual understanding of the global climate concern is a network composed of numerous inter-connected concepts, in which climate change and global warming are two central concepts. As the two terminologies concern the primary mechanism of the global climate issue, the preference between the two understandings may represent two distinct climate discourses by differently organizing numerous climate concepts. Examining the differences between the two discourses with an associative perspective may provide communicators with unique insights into narrowing the cognitive discrepancy. The temporal dimension was lacking in existing studies, necessitating the study of how concepts associated with each other have evolved over time.\n\nLarge amounts of user-generated data on social media, which have been valued in computer science, communication, and environmental studies [5,9,15–18], have enabled the acquisition of the social media representation of the two discourses over a decade. In this study, by analyzing hashtag co-occurrence patterns in 6,662,478 tweets containing \"climate change\" and \"global warming\" between 1 January 2009 and 31 December 2018, two semantic networks of public climate discourse were constructed to identify the critical concepts and links surrounding the two terminologies. We conducted temporal analysis to observe the evolution of the two discourses and to measure whether the discrepancy between the two has widened or narrowed within the 10-year period.\n\nTo be specific, we formulated three research questions (RQs) to be explored in this study:\n\nRQ1: What is the difference in how the two discourses are associated with important climate concepts in people's minds?\n\nRQ2: How did the two competing climate discourses evolve from 2009 to 2018?\n\nRQ3: Did the two competing discourses converge or diverge in this decade?\n\n#### **2. Background**\n\n#### *2.1. 
Climate Change, Global Warming, and Frames*\n\nExisting studies have noted that the subtle difference between climate change and global warming evokes different public cognitive responses, where global warming indicates heat-related impacts, human causes, increased UV light penetration, ozone depletion, and the greenhouse effect, whereas climate change is more associated with a wide range of influences on climate, including drought and agriculture [9]. An N-gram analysis suggested that global warming showed a closer connection with ice, snow, and sea, whereas climate change was always connected with scientific investigations, such as", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - }, - { - "text": "reports, the environment, and science [13]. Some respondents even hold the belief that global warming results in climate change [9].\n\nThe two distinct climate discourses being produced based on the same reality can be explained by framing theory in communication studies. Framing refers to the phenomenon where reality is always partially selected or highlighted when described by the public or media [19]. By distinctly defining problems, suggesting solutions, and indicating causal interpretations [20], different frames tell the audience different stories and influence how they observe facts [21,22]. Two types of frames, equivalency frames and emphasis frames, are commonly studied by scholars to examine how framing effects influence individuals' attitudes and beliefs [23]. Equivalency frames describe the same fact or logic with different words and may suggest that the audience perceives facts in psychologically different ways [24]. For example, a cup can be described as \"half full\" and \"half empty\", where the former is a positive frame indicating a reference point lower than current status, and the latter is negative, meaning that the reference point is above the current situation [25]. 
Emphasis frames employ words selectively associated with parts of reality to shift the audience's attention to particular attributes [26]. Climate change and global warming have been noted to highlight different aspects of an issue by activating distinct cognitive accessibility patterns [27].\n\nDifferent frames concerning the global climate concern are popular among the public, politicians, environmentalists, and the media [1,28,29]. Big data analyses have indicated that when interpreting climate events, individuals' preference for frameworks was influenced by demographics [5] and social-political background [2]. Different choices of frameworks can evoke different psychological processes [30], promote or inhibit engagement intentions [31], or gain approval on various levels [32].\n\nStudies have noted that the frameworks of climate change and global warming may result from different political indications. The American Republican-leaning states show more preference for global warming than climate change compared with Democratic-leaning states, and global warming is more connected with \"hoax\" in questioning the reality of the global climate issue [5]. Conservatives are more likely to link heat-related phenomena to global warming, whereas liberals associate these facts equally with both frames [27]. An earlier survey conducted by [4] argued that wording choice might not influence the whole population similarly. For the whole sample and politically independent individuals, the two terminologies were equally serious, but climate change seemed more serious compared with global warming among the Republicans, and the Democrats held the opposite opinion.\n\n#### *2.2. Network Model for Cognition*\n\nDifferent framework choices may create even more differences than have already been noticed. 
Psychologists think that human beings are a collection of learned associations [33], and associative responses rather than simple linear logic form the structural basis of thought [34]. Associative learning [35] is a long-standing assumption underlying cognitive science [14], suggesting that human cognition toward the world forms a network pattern, where the world is organized into several groups of related items and stored in a network model in the mind. When messages are processed by humans, they are first encoded into a temporary memory network and then linked to an existing associative memory network for long-term storage [36]. In the network, a node represents a certain concept, and edges refer to particular relationships, such as time sequences [37], similarity [38], semantic connections [37], or cause and effect [33] between two nodes.\n\nWhen individuals search their memory for a particular piece of a message in their mind, the targeted node becomes salient and activated in the temporary memory [39]. If two messages are always activated simultaneously, their connection tends to be more robust and the messages are regarded as associated [36]. If a link is recorded between two concepts, activations are likely to spread through the link from one concept to another with or without conscious awareness [40]. While associations of nodes in the mind may not necessarily reflect the actual relationships of objects in reality, several factors, including media usage, personal experience, and political stance [34,41,42], may help bundle different sets of concepts.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed10.pdf" - }, - { - "text": "studies have noticed that the Maya inscription about doomsday, which seemed rather ridiculous for scientists, might lead to unexpected public associations with climate issues. However, science fiction may influence the public's attitude toward scientific issues. 
Frankenstein's monster, a well-known fictional character who was a human-built creature in the novel written by Mary Shelley, has long been linked to transgenic technology by referring to genetically-modified food as \"Frankenstein Food\" [98]. Scientists found that these associations successfully symbolized the public's uncertainty about the risk of transgenic technology, anxiety toward the human-made living creature, and moral discomfort about reducing life to genetic code [99], even though people all know Frankenstein was only a fictional character created 100 years ago. In the current study, we concluded that a similar mechanism may exist in global warming communication. Though \"the end of the world in 2012\" and its adapted popular movie sounded unconvincing for scientists, the public, especially those with limited scientific literacy, were defenceless against fiction [100]. Some of the public may accept the indications of temperature rise and extreme weather, and cannot help but strengthen their associations with global warming. However, no similar associations were discovered in the climate change discourse in 2012, which may suggest that global warming is more likely to be associated with disasters, risk, or negative sentiment compared with climate change.\n\n#### *5.3. Discrepancy between the Two Discourses*\n\nThe status of the two discourses varied significantly in the more recent years in the study period. Data from Google in a prior study suggested that the search record for global warming was larger than that of climate change in earlier times [13]. The authors found that in the battle to be the most representative hashtag for global climate concern, #climatechange showed growing popularity and became an overwhelming trending topic compared with #globalwarming. Also, #climatechange showed a stronger ability to incorporate diverse hashtags into its discourse in both relative and absolute dimensions. 
Comparatively, the popularity of the global warming discourse among social media users did not increase appreciably in terms of tweet volume and hashtag diversity, especially when considering the yearly increase in Twitter users. The reason for the observed shift in public discourse toward climate change from global warming may be attributed to the high exposure of climate change in the media and scientific reports in recent years [13]. Previous studies noted that perceived scientific consensus can increase acceptance of science [101]. Though global warming has been commonly used since the 1980s to describe the world-wide temperature rise, climate change is preferred by scientists to refer to a range of complex changes in climate [102]. Pew found that science-related accounts draw millions of followers on Facebook and that the volume of posts they released has climbed in recent years [103]. Climate scientists are found to be opinion makers on Twitter [104]. As social media has become an emerging platform for science popularization, the scientific community might contribute to the prevalence of climate change discourse by talking about climate change facts and mitigating measures [75].\n\nHowever, differences between the two discourses were not eliminated. Even though the two discourses showed more similarities in the rank order of key concepts, the QAP analysis of the two semantic network matrices showed that the two discourses still embody distinct public perceptions of climate issues by associating these hashtags in different manners.\n\nTo be specific, although \"ipcc\", \"cop\", and \"un\" were mentioned in both discourses (yellow in Figures 3 and 4) in earlier years, the clusters to which they belonged had significantly different meanings. As mentioned in the results section, these hashtags were associated with a series of scientific hashtags in the climate change discourse, appealing to global efforts. 
In the global warming discourse, they were clustered with \"hoax\" and \"frame\", showing a lack of belief in climate issue facts and hesitation about global efforts. More recently, when discussions about temperature, politics, and hesitation significantly shrank in the global warming discourse, the two discourses showed more similarities about the importance of scientific concepts according to Figure 5a,b. However, links between global efforts and scientific facts were not constructed in the global warming discourse. According to a network model for cognition, the lack of associations means fewer psychological activations will spread to", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed10.pdf" - } - ] - }, - { - "references": { - "source_file": "maiis-user-manual.pdf", - "query": "How can I request access to NAIIS ?", - "target_page": 5, - "target_passage": "Requests for access to, inquiries on the use of the software, and comments on the design and functionalities of the application should be sent to the dedicated e-mail address naiisapp@unfccc.int.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "# NAIIS Web Application\n\n(Release version 1.1.3) User Manual\n\n(As of 10 February 2014)", - "page_start": 0, - "page_end": 0, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# **2.2 Pending NAIIS features**\n\nList of pending functionalities in NAIIS:\n\n- 1. Web services integration for help desk\n- 2. 
Display of information in 5 remaining UN languages.\n\n# **2.3 Contact**\n\nRequests for access to, inquiries on the use of the software, and comments on the design and functionalities of the application should be sent to the dedicated e-mail address **naiisapp@unfccc.int**.", - "page_start": 4, - "page_end": 4, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# **3 Getting started**\n\n# **3.1 User Access, Roles and Privileges**\n\nThe users of the application are the members of the national team(s) of non-Annex I Parties involved in the preparation of their national GHG inventories, and each user is assigned a role.\n\nThe table below explains the different levels of the access rights and corresponding explanation for each role. It is important to note that the roles are not necessarily identical to a person's title (e.g. National Focal Point) and that a person can take on several roles (which may be necessary for some countries).\n\nThere are three types of access rights (roles) to the NAIIS application:\n\n| Type of access rights for specific roles | Process to gain access rights |\n| --- | --- |\n| National Focal Point (NFP): Will be responsible for identifying the members of the team and is the only one who has the right to approve the submission of any GHG inventory. NFPs will have the option to create, edit, update or delete all of their country's GHG data entries, and grant access rights to the 'Project Manager' and 'Sectoral Experts' for their country if they choose. | Parties that have not already requested and received access rights can obtain them by having their National Focal Point contact: naiisapp@unfccc.int (Note: Some Parties may have more than one individual acting as the NFP; however the system can accommodate only one account per Party). |\n| Project Manager (PM): Will have the right to enter/edit data in all sectors, as well as to generate an official submission to the UNFCCC, and grant access rights to the 'Sectoral Experts' for their country. | Entities will be provided these rights by their NFP. If a Party decides to grant access to a PM, their NFP will be able to create such user account on the NAIIS application. |\n| Sectoral Experts (SE): Will have the right to enter/edit data in respective sector(s). | Experts will be provided these rights by their NFP and PM. If a Party decides to grant access to Sectoral Experts, the NFP will be able to create such user accounts and assign them in respective sector(s). |\n\nAccess for the NFP will be provided by the secretariat, upon request; however, the accounts of the other users within the country shall only be created by the NFP.\n\n# **3.2 How to access/ log out / create a GHG inventory**\n\n### **3.2.1 How to access the NAIIS application**\n\nOpen any internet browser (i.e. Internet Explorer, Firefox, etc.) and type in the following URL http://unfccc.int/7627 on the browser's address bar. (figure 1 and figure 2)\n\n### *Figure 1. Using Internet Explorer browser*\n\n| United Nations Framework Convention on Climate Change - Windows Internet Explorer |\n| --- |\n| 3 = 2 http://unfccc.int/7627 |\n| File Edit View Favorites Tools Help |\n\n### *Figure 2. Using Firefox browser*\n\n| United Nations Framework Convention on Climate Change - Mozilla Firefox | | | | |\n| --- | --- | --- | --- | --- |\n| Edit View | History | Bookmarks | Tool | Help |\n| C United Nations Framework Convention on Cli ... 
| | | | |\n| unfccc.int/7627 | | | | |", - "page_start": 5, - "page_end": 5, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "Press the 'Enter key' and the non-Annex I Greenhouse Gas Inventories web page appears.\n\nTo access the NAIIS application, click on the image NAIIS Web Application, the right hand side of the screen. (figure 3, number 1) and the log-in page will be displayed. (figure 4)\n\n| Figure 3. UNFCCC non-Annex I Greenhouse Gas Inventories web page |\n| --- |\n\n| Non-Annex I Greenhouse Gas Inventories | + | | |\n| --- | --- | --- | --- |\n| | 1 unfecc.int/national_reports/non-annex_j_national_communications/non-annex_i_inventory_software/Rens/7627.php | | |\n| | | SSB W IC . Share | Glossary FAQ Contact Español Français |\n| | | United Nations C | |\n| | | Framework Convention on | |\n| | | Climate Change | |\n| Home | CDM JI CC:iNet TT: Clear | Your location: Home | |\n| NEGOTIATIONS | | Non-Annex I Greenhouse Gas Inventories | News section |\n| Meetings | | | 6. |\n| Documents & Decisions | | | |\n| Bodies | | As per Article 4, paragraph 1 (a), and Article 12, paragraph 1(a) of the Convention, non-Annex I | |\n| | | Parties are required to communicate to the Conference of the Parties a national inventory of | To find out how to obtain |\n| FOCUS | | anthropogenic emissions by sources and removals by sinks of all greenhouse gases (GHGs) not. controlled by the Montreal Protocol, to the extent its capacities permit, following the quidelines | access rights to the NAIIS web application please click here |\n| | | contained in annex to decision17/CP.8 | |\n| Adaptation | | | |\n| Finance | | In order to facilitate non-Annex Parties in developing and reporting their GHG inventories as part of | |\n| | | their national communications, the secretariat developed an Excel-based software which incorporated | |\n| Mitigation | | all the elements of a national GHG inventory prescribed by decision 17/CP.8. The software was based . 
| NAIIS Web Application |\n| Technology | | on the IPCC inventory software version 1.1 which used the Tier 1 methodologies for estimating GHG | |\n| | | emissions and removals for all source categories described in the Revised 1996 IPCC G and | |\n| PROCESS | | further complimented by GPGs3. | |\n| | | Since its release in 2005, most non-Annex Parties have been using that software for d | |\n| Essential Background | | their national GHG inventories. In June 2011, Parties requested the secretariat to upgrade the | |\n| Kyoto Protocol | | software and make it available to non-Annex I Parties by June 2013. Pursuant to that request, the | |\n| | | secretariat will convert the current Excel-based version of the software (v.1.3.2)ª into a web-based; | |\n| Cooperation & Support | | | |\n| | | application which will provide dreater degree of flexibility in using it as well as enable a promot upgrade | |\n| Science | | of the application to respond to possible changes that may occur in the UNFCCC process, such as the, | |\n| Adaptation | | possible switch to the use of the 2006 IPCC Guidelines in the reporting of GHG Inventories. | |\n| National Reports | | | Click on the NAIIS image to |\n| | | Upon request to the secretariat, each non-Annex I Party will be provided with an access to a password. | access it. |\n| GHG Data | | enabled working space in the application. The individual working space will contain the following. | |\n| Methods | | functionalities: | Accessible only with access |\n| Gender and Climate Change | | 1. Software to estimate and report GHG emissions, conduct key source analysis, consistency and | rights. |\n| Parties & Observers | | completeness checks, and report the results of uncertainty analysis;9 | Details on gaining access |\n| | | 2. Export to and import from in the Excel and Xml format; | rights to the NAIIS application |\n| Press | | 3. 
Inventory management, including management of users and different versions of the inventory, | can be accessed here. |\n| Secretariat | | 4. Archiving of the finalized inventories; | |\n| | | 5. Automated submission of inventories to the secretariat. | |\n| KEY STEPS | | The request to access the application shall be from the national focal point for the UNFCCC, who in | |\n| | | turn shall be responsible for overall user management within its country. | Support for NAIS User |\n| The Corvention | | | |\n| Kyoto Protocol | | | Please click here |\n\n*Figure 4. Log-in page of the NAIIS Web Application*\n\n| United Nations | Framework Convention on | |\n| --- | --- | --- |\n| Climate Change | | |\n| Sign In | Welcome to the Online | User name: |\n| Non-Annex I GHG | inventory software | Password: |\n| (NAIIS) Web Application | Sign in | |\n| privacy :: contact @ 2013 United Nations Framework Convention on Climate Change | | |\n\nTo **log-in**, enter the username and password and click on the \"Sign in\" button.", - "page_start": 6, - "page_end": 6, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# **1 Introduction**\n\nThe Non-Annex I Inventory software (NAIIS) web application is a web-based tool developed for use by Parties not included in Annex I to the Convention (non-Annex I Parties) to estimate and report their national greenhouse gas inventories (GHG inventories). 
As per Article 4, paragraph 1 (a), and Article 12, paragraph 1 (a) of the Convention, non-Annex I Parties are required to communicate to the Conference of the Parties a national inventory of anthropogenic emissions by sources and removals by sinks of all greenhouse gases (GHGs) not controlled by the Montreal Protocol, to the extent their capacities permit, following the guidelines contained in the annex to decision 17/CP.8.\n\nIn order to assist non-Annex I Parties in estimating and reporting their GHG inventories as part of their national communications, the secretariat developed an Excel-based software which incorporated all the elements of a national GHG inventory prescribed by decision 17/CP.8. The software was based on the IPCC inventory software version 1.1, which used the Tier 1 methodologies for estimating GHG emissions and removals for all source categories included in the Revised 1996 IPCC Guidelines, and further complemented by the GPGs.1\n\nSince its release in 2005, most non-Annex I Parties have been using that software for the development of their national GHG inventories. In December 2011, Parties requested the secretariat to upgrade the software and make it available to non-Annex I Parties by June 2013. Pursuant to that request, the secretariat converted the current Excel-based version of the software (v.1.3.2)2 into a web-based application (NAIIS) which provides greater flexibility and security for maintaining data.\n\n# **2 General information**\n\nThe NAIIS is a web-based application designed to enable non-Annex I Parties to estimate their national GHG inventories according to the UNFCCC guidelines and using the IPCC methodologies, and to report the results in their national communications and biennial update reports.\n\n# **2.1 System overview**\n\nThe NAIIS web application has the following functionalities:\n\n- 1. User management (only for the user roles NFP and PM)\n- 2. Submission management\n- 3. Data entry\n- 4. Key category analysis\n- 5. 
Reporting tables\n- 6. Data Export/Import\n- 7. Completeness\n- 8. Consistency\n\nThe NAIIS web application allows input of data through three different channels:\n\n- 1. Manual input into the entry grids\n- 2. Partial or full import of data from Excel\n- 3. Bulk import of data from XML\n\nThe GHG emissions totals, by gas and by sector, are automatically calculated and saved based on the values entered for activity data (AD), emission factors and other relevant parameters. In addition, the software facilitates the reporting of other category specific information, for example, the choice of the method for activity data and emission factors.\n\n1 Good Practice Guidance and Uncertainty Management in National Greenhouse Gas Inventories, 2000, and Good Practice Guidance for Land\n\nUse, Land‐Use Change and Forestry, 2003. 2 http://unfccc.int/files/national_reports/non‐\n\nannex_i_natcom/training_material/methodological_documents/application/zip/unfccc_nai_is_132.zip", - "page_start": 3, - "page_end": 3, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "| 1 Introduction 4 |\n| --- |\n| 2 General information 4 |\n| 2.1 System overview 4 |\n| 2.2 Pending NAIIS features 5 |\n| 2.3 Contact 5 |\n| 3 Getting started 6 |\n| 3.1 User Access, Roles and Privileges 6 |\n| 3.2 How to access/ log out / create a GHG inventory 6 |\n| 3.2.1 How to access the NAIIS application 6 |\n| 3.2.2 Create, Start, Add new and View GHG inventory year 8 |\n| 3.2.3 Initial screen / menu tab of the NFP, PM and SE 13 |\n| 3.2.4 How to log out 13 |\n| 3.3 User management 14 |\n| 3.3.1 Add User 14 |\n| 3.3.2 Disable/Enable User 15 |\n| 3.3.3 View User 16 |\n| 4 Using the system 17 |\n| 4.1 Data Entry 17 |\n| 4.2 Navigation tree 17 |\n| 4.3 Grids 17 |\n| 4.4 Data input 18 |\n| 4.5 Add/delete new nodes – user defined source categories 18 |\n| 4.5.1 Add new nodes 18 |\n| 4.5.2 Delete nodes – user defined nodes 20 |\n| 4.6 Backup of data files 20 |\n| 5 Key Category Analysis 21 |\n| 5.1 Using the 
default list 22 |\n| 5.2 Customizing the list 22 |\n| 5.3 Delete subnodes 23 |\n| 6 Reporting Tables 25 |\n| 7 Data Export/Import 26 |\n| 7.1 Excel Export – Data Entry 26 |\n| 7.2 Excel/XML Data import 27 |\n| 7.3 Export reporting tables 28 |\n| 7.4 XML Export 29 |\n| 8 Completeness 31 |\n| 9 Consistency 33 |\n| 10 Submission management 35 |", - "page_start": 1, - "page_end": 1, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# *3.2.2.2 Add a new GHG inventory year or edit general properties/sectors (only NFP and PM's)*\n\n- Log in as NFP or PM.\n- Click on \"Work on Inventories\" under Submission Management (figure 10).\n### *Figure 10: Sub menu \"Work on Inventories\"*\n\nOnce \"Work on Inventories\" has been clicked, the initial screen will be displayed, which shows the following boxes (figure 11):\n\n- a. Existing Inventory (with all options)\n- b. General properties include the name, submission year, creator, creation date, status, updater and submission\n- date c. Sectors\n- d. Inventory years\n\n### *Figure 11. 
Initial screen of \"Work on Inventories\"*\n\n| Non-Annex I Inventory Software NAIIS v1.1.3 | | Non-Annex I Party Inventory #1 Editable | | | | Non-Annex I PM | Sign Out |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| United Nations | | | | | | | |\n| | Framework Convention on | | | | | | |\n| Climate Change | | | | | | | |\n| | | Users Management Submission Management Data Entry Key Categories Choice Reporting Tables Data Export / Import Completeness Consistency | | | | | 21 |\n| | | | | | | | |\n| Name | Submission year Creator | Creation date | Status | Updater | Submission date Energy | | Industrial Proce: |\n| - NAI_2013_1_Inventory | | Non-Annex I PM Wed Dec 18 12:18:57 CET 2013 created | | Non-Annex I PM | | D | D |\n| | 1 III | | | | | | |\n| EJ ExtJS EJS TreeGrid v9.2 | | | | | | | |\n| General Properties | | ector | | | Inventory Years | | |\n| Name | NAI 2013 1 Inventory | Energy | | | 1990 | | |\n| Submission year | | Industrial Processes | (V) | | 1991 | | |\n| Creator | Non-Annex I PM | Solvent and other product use | છ | | 1992 | | |\n| Creation date | Wed Dec 18 12:18:57 CE | Agriculture | 0 | | 1993 | 000 | |\n| Status | created | UCF | 0 | | 1994 | | |\n| Updater | Non-Annex I PM | ULUCF | D | | 1995 | | |\n| Submission date | | Naste | D | | 1996 | | |\n| Start Inventory | AN | | | | | | |\n\nFollow the steps to add/remove an inventory year:\n\n- Click on the inventory year (figure 12a)\n- Select the inventory year under General properties (figure 12b)\n- Select or deselect the appropriate Sectors (figure 12c)\n- To **add** or **remove** an inventory year, select or deselect the relevant year under Inventory Years box (figure 12d)", - "page_start": 9, - "page_end": 9, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## **Fundamentals**\n\nWith IAM, developers attach *policies,* JSON documents that define granular permissions, to resources. IAM provides pre-built AWS managed policies for common access levels. 
You can also define your own policies with the least-privilege level necessary to complete tasks.\n\nInformation about IAM policies may come at you fast. If it gets to be too much, put it in **PARC**:\n\n- **P**rincipal: entity that is allowed or denied access\n- **A**ction: type of access that is allowed or denied\n- **R**esource: AWS resources the action will act upon\n- **C**ondition: conditions for which the access is valid\n\nAt a high level, these four terms should be enough to get you started connecting serverless resources.\n\n### **Account prerequisites**\n\nBut, before you start, you need an AWS account. The following sections provide the best practice steps to create an account and an administrative user.\n\n#### **Sign up for an AWS account**\n\nIf you do not have an AWS account, complete the following steps to create one.\n\n#### **To sign up for an AWS account**\n\n- 1. Open https://portal.aws.amazon.com/billing/signup.\n- 2. Follow the online instructions.\n\nPart of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.\n\nWhen you sign up for an AWS account, an *AWS account root user* is created. The root user has access to all AWS services and resources in the account. 
As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.", - "page_start": 40, - "page_end": 40, - "source_file": "serverless-core.pdf" - }, - { - "text": "### 3.2.2.1.2 Start a GHG inventory\n\nIn order to START a GHG inventory, please follow the steps below:\n\n- Log in as PM.\n- Hover the cursor on the \"Submission Management\" and click on the \"View Inventories Progress\" button.\n- Click/select the appropriate GHG Inventory in Status = \"created\" (see figure 7a).\n- Click on \"Work on Inventories\" under Submission Management (see figure 7b).\n\n### *Figure 7: Select an Inventory screen*\n\n| Non-Annex I Inventory Software NAIS v1.1.3 | Non-Annex I Party | Inventory #1 Editable | | | Non-Annex PM | Sign Out |\n| --- | --- | --- | --- | --- | --- | --- |\n| United Nations | | | | | | |\n| Framework Convention on | | | | | | |\n| Climate Change | | | | | | |\n| Users Management | Submission Management Data Entry | Key Categories Choice Reporting Tables Data Export / Import Completeness Consistency | | | | |\n| View Inventories Progress | | | | | | |\n| Work on Inventories | | | | | | |\n| Name | Working Invento Submission year Creator | Creation date | Status | Updater | Submission date Energy | |\n| NAI 2013 1 Inventory | D 4.9 16 | Non-Annex PM Wed Dec 18 12:18:57 CET 2013 | created | | | |\n\n- Left click to select the appropriate Inventory (figure 8a)\n- Press the \"Start Inventory\" button (figure 8b)\n\n### *Figure 8: Start an Inventory screen*\n\n| Non-Annex I Inventory Software NAIS v1.1.3 | - | Non-Annex I Party Inventory #1 Editable | | | | Non-Annex I PM | Sign Out |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| | United Nations | | | | | | |\n| | Framework Convention on | | | | | | |\n| | Climate Change | | | | | | |\n| | | Users Management Submission Management Data Entry Key Categories Choice Reporting Tables Data Export / Import Completeness 
Consistency | | | | | |\n| | ission year Creator | Creation date | Status | Updater | Submission date Energy | | Industrial Proce |\n| 1- NAI_2013_1_Inventory | Non-Annex PM | Wed Dec 18 12:18:57 CET 2013 created | | Non-Annex I PM | | D | D |\n| | 4 III | | | | | | |\n| EJS EJS TreeGrid v9.2 | | | | | | | |\n| General Properties | | Sector | | | Inventory Years | | |\n| Name | NAI_2013_1_Inventory | Energy | | | 1990 | | = |\n| Submission year | | Industrial Processes | છ | | 1991 | | |\n| Creator | Non-Annex I PM | Solvent and other product use | റ്റ | 三 | 1992 | | |\n| Creation date | Wed Dec 18 12:18:57 CE | Agriculture | 00 | | 1993 | 000 | |\n| Status | created | LUCF | | | 1994 | | |\n| Updater | Non-Annex I PM | LULUCF | 0 | | 1995 | | |\n| Submission date | | Waste | D | P | 1996 | | . |\n| Start Inventory | | | | | | | |\n\nOnce the \"Start Inventory\" button is pressed, the status of the selected Inventory change to \"started\". (see Figure 9)\n\n### *Figure 9: \"Started\" status of an Inventory*\n\n| Non-Annex I Inventory Software NAJS v1.1.3 | | Non-Annex I Party Inventory #1 Editable | | | | Non-Annex I PM Sign Out |\n| --- | --- | --- | --- | --- | --- | --- |\n| | United Nations | | | | | |\n| | Framework Convention on | | 5 | | | |\n| | Climate Change | | | | | |\n| | | Users Management Submission Management Data Entry Key Categories Choice Reporting Tables Data Export / Import Completeness Consistency | | | | |\n| Name | Submission year Creator | Creation date | Status | Updater | Submission date Energy | Industrial Proces |\n| - NAI_2013_1_Inventory | | Non-Annex I PM Wed Dec 18 12:18:57 CET 2013 | started | Non-Annex I.PM. 
| | D 14) |\n| | .811 | | | | | |\n| Ext.IS EJS TreeGod v3.2 | | | | | | |\n| General Properties | | Sector | | | Inventory Years | |\n| Name | NAI_2013_1_Inventory | Energy | | | 1990 | |\n| Submission year | | Industrial Processes | | | 1991 | |\n| Creator | Non-Annex I PM | Solvent and other product use | | | 1992 | |\n| Creation date | Wed Dec 18 12:18:57 CE | Agriculture | 000 | | 1993 | 000 |\n| Status | created | LUCE | | | 1994 | |\n| Updater | Non-Annex I PM | LULUCE | D | | 1995 | |\n| Submission date | | Waste | > | V | 1896 | |\n| Send for | | | | | | |\n| Checking | | | | | | |", - "page_start": 8, - "page_end": 8, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "#### **3.1.4 How to subscribe to the EDP Newsletter**\n\nOn the Portal Home Page:\n\n- ‐ **Either Click on the \"Newsletter\" item in the page header:**\nThen, on the \"Newsletter subscriptions\" page:\n\n- **Enter your E-Mail address**\n- **Click on the button \"Subscribe\"**\n\nThe system will display a notification message after successful subscription.\n\n| EUROPEAN | | Newsletter | FAQ Search Contact Cookies Legal notice Login | English (en | ▶ |\n| --- | --- | --- | --- | --- | --- |\n| DATA PORTAL | | | | Search site content ... | Q |\n| European Data Portal | | | | | |\n| What we do- | Data- | | Using Data - Providing Data- | Resources - | |\n| Search Datasets | | | | | |\n| Enter keywords ... | | Search Q | | | |\n| SPARQL Search | | | | | |\n\nOr\n\n- ‐ **Enter your email address directly in the footer and click on the \"Subscribe\" button.**\n\n| | | Newsletter | Follow us on | |\n| --- | --- | --- | --- | --- |\n| Funded by the | European Union | Stay informed on our latest news! | | in |\n| | | name@example.com | Subscribe | |\n| | | ... Help us improve | | |\n| | | | Your feedback will help us to improve the overall user experience. Any suggestions? 
| |\n| Last update: 14/10/2019 Version: 4.3 | | | | Newsletter FAQ Search Contact Cookies Legal notice |\n\nThe system will display a notification message after successful subscription.", - "page_start": 18, - "page_end": 18, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - } - ] - }, - { - "references": { - "source_file": "creative_common_ai.pdf", - "query": "What is the problem regarding the use of the Book3 dataset ?", - "target_page": 2, - "target_passage": "The Books3 dataset contains text from over 170,000 books,2 which are a mix of in-copyright and out-of-copyright works. It is believed to have been originally sourced from a website that was not authorized to distribute all of the works", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "### A Supplementary materials for datasets\n\n#### A.1 All datasets\n\nTable 3 displays the size of each dataset along with the average number of tokens per sample and their references. The dataset's content was tokenized using *cl100k_base* encoding. For Retrieval, the two numbers refer to the queries and the documents. For Reranking, the three numbers refer to the queries, the pairs of queries with relevant documents and the pairs of queries with irrelevant ones, respectively. The pairs of queries and documents are obtained from the 90 documents extracted. For *SummEvalFr*, the three numbers refer to the texts, human and machine summaries, respectively.\n\nFigure 3 represents the semantic similarity between each dataset. The methodology was as follows: 90 random samples per dataset are embedded using the *multilingual-e5-large* model. The embeddings of each dataset's samples are averaged. 
The similarity between each dataset is then calculated using cosine similarity as in (Muennighoff et al., 2022).\n\nWe complement this analysis by observing the dataset's clouds of embedding in a 2D plane using PCA in Figure 4.\n\n#### A.2 Created datasets\n\nSyntec Figure 5 shows an extract from the Syntec dataset with a document and a query relative to this document.\n\nHAL Figure 6 is an extract from the HAL dataset. Table 4 lists the distribution of classes (*domain* field) for the HAL dataset on *raw* subset and *mteb_eval* subset, which is used for MTEB evaluation. Labels descriptions can be found at this URL: https://api.archivesouvertes.fr/ref/domain/?q=*:*&rows=393 or in Table 4. After pre-processing, *mteb_eval* covers titles from 10 domains as classes with less than 500 samples were removed. In the MTEB evaluation subset of the dataset, titles composed of 2 words or less have been removed (371 samples), resulting in an average word count of 13.4. Figure 7 shows the word count distribution per title. Furthermore, the dataset has been cleaned up by manually removing all non-French titles. Additionally, it can be observed in Table 4 that in the original *raw* dataset, the *shs* and *sdv* classes represent by far the majority of the dataset samples with respectively 58706 samples (73%) and 11049 samples (13%). In order to\n\nmitigate the class imbalance while preserving the majority of those classes, they have been randomly subsampled to 6701 and 4803 samples. Furthermore, baseline models have been trained and tested to assess the usability of this dataset in other tasks, such as classification and topic modeling. Table 5 shows the results obtained.\n\nSummEvalFr Extracts of humans and machine summaries translated in French from SummEvalFr and the original ones in English from SummEval (Fabbri et al., 2021) are shown in Figure 9. 
As explained in section 3.1.3, we use a LLM to evaluate the quality of translations for human summaries, we provide the prompt used with *GPT-4* for this evaluation in Figure 8.\n\nTable 6 shows the distribution of ratings given by the LLM. With the scale being 10, we manually verify random samples rated above 9. We verify all samples with ratings under 9 and those with no provided rating (N/A) due to the triggering of the OpenAI content management policy. The LLM suggests that 60 samples are not correctly translated. These were verified manually, and after checking, less than 10 samples only needed to be corrected.\n\n# B Supplementary materials for correlation analysis\n\nThis section presents various correlations computed based on the model results on the proposed benchmark.\n\nFigure 10 represents cross-correlations between models' performances and their studied characteristics as a heatmap.\n\nFigure 11 represents the Spearman correlations in terms of performance across models.\n\nFigure 12 represents the Spearman correlations in terms of performance across datasets.\n\n### C Supplementary materials for models\n\nWe present in this section the model characteristics we collected for the 46 evaluated models.\n\nFor evaluating prompt-based models such as *intfloat/e5-mistral-instruct-7b*, we provide the prompts we used in Table 8.\n\n### D Evaluation results\n\nThis section presents the results obtained for each model on each task. To be relevant, we used the same metrics as in MTEB, which varies from one type of task to another:", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv4.pdf" - }, - { - "text": "### *What dataset management practices are necessary?*\n\nNo matter how a books data commons gets built, it will be important to consider broader aspects of data governance. For example:\n\n- **Dataset documentation and transparency:** Transparent documentation is important for any dataset used for AI training. 
A datasheet is a standardized form of documentation that includes information about provenance and composition of data, and includes information on management practices, recommended uses or collection process.\n- **Quality assurance:** Above, we note the many features that make books useful for AI training, as compared with web data, for example. That said, the institution managing a books commons dataset may still want to collect and curate the collection to meet the particular purposes of its users. For instance, it may want to take steps to mitigate biases inherent in the dataset, by ensuring books are representative of a variety of languages and geographies.\n- **Understanding uses:** The institution managing a books commons dataset could measure and study how the dataset is used, to inform future improvements. Such monitoring may also enable accountability measures with respect to uses of the dataset. Introducing community norms for disclosing datasets used in AI training and other forms of AI research would facilitate such monitoring.\n- **Governance mechanisms:** In determining matters like acceptable and ethical use, the fundamental question is \"who decides.\" While this might be settled simply by whoever sets up and operates the dataset and related infrastructure, participatory mechanisms — such as advisory bodies bringing together a broad range of users and stakeholders of a collection — could also be incorporated.", - "page_start": 19, - "page_end": 19, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## *5. Examining approaches to building a books data commons*\n\nThere are many possible permutations for building a books data commons. To structure our exploration, we focused on two particular tracks, discussed below. We chose these tracks mindful of the above legal issues, and because there are already existence proofs that help to illuminate tradeoffs, challenges and potential paths forward for each.\n\n## *5a. 
Public domain and permissively licensed books*\n\n#### **Existing Project Example : The Pile v2** 27\n\nIn 2020, the nonprofit research group EleutherAI constructed and released The Pile — a large, diverse, open dataset for AI training. EleutherAI developed it not only to support their own training of LLMs, but also to lower the barriers for others.28\n\nAlong with data drawn from the web at large, The Pile included books from three datasets. The first dataset was the Books3 corpus referenced at the outset of this paper. The second and third books datasets were smaller: BookCorpus2, which is a collection of 17,868 books by otherwise unpublished authors; and a 28,752 books in the public domain and published prior to 1919, drawn from a volunteer effort to digitize public domain works called Project Gutenberg.\n\nAs the awareness about The Pile dataset grew, certain rightsholders began sending copyright notices to have the dataset taken down from various websites.\n\nDespite the takedown requests, the importance of books to EleutherAI and the broader community's AI research remained. In hoping to forge a path forward EleutherAI announced in 2024 that they would create a new version of the dataset, which they will call The Pile v2.29 Among other things, v2 would \"have many more books than the original Pile had, for example, and more diverse representation of non-academic non-fiction domains.\" At the same time, it would only seek to include public domain books and permissively licensed content. As before, this corpus focuses on English language books.\n\nThis is an illustrative example, and there are also other projects of this ilk. 
For instance, see the 27 Common Corpus project, which includes an array of public domain books from a number of countries, at https://huggingface.co./blog/Pclanglais/common-corpus; see also https://huggingface.co./datasets/ storytracer/internet_archive_books_en (\"This dataset contains more than 650,000 English public domain books (~ 61 billion words) which were digitized by the Internet Archive and cataloged as part of the Open Library project.\")\n\nSee Gao et al, supra note 8. 28\n\nGoldman, Sharon. \"One of the World's Largest AI Training Datasets Is About to Get Bigger and 29 \"Substantially Better.\" *VentureBeat*, 11 Jan. 2024, venturebeat.com/ai/one-of-the-worlds-largest-aitraining-datasets-is-about-to-get-bigger-and-substantially-better/. Accessed 20 Mar. 2024.", - "page_start": 12, - "page_end": 12, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## *1. Introduction*1\n\nWhile the field of artificial intelligence research and technology has a long history, broad public attention grew over the last year in light of the wide availability of new generative AI systems, including large language models (LLMs) like GPT-4, Claude, and LLaMA-2. These tools are developed using machine learning and other techniques that analyze large datasets of written text, and they are capable of generating text in response to a user's prompts.\n\nWhile many large language models rely on website text for training, books have also played an important role in developing and improving AI systems. Despite the widespread use of ebooks and growth of sales in that market, books remain difficult for researchers and entrepreneurs to access at scale in digital form for the purposes of training AI.\n\nIn 2023, multiple news publications reported on the availability and use of a dataset of books called \"Books3\" to train LLMs.2 The Books3 dataset contains text from over 170,000 books, which are a mix of in-copyright and out-of-copyright works. 
It is believed to have been originally sourced from a website that was not authorized to distribute all of the works contained in the dataset. In lawsuits brought against OpenAI, Microsoft, Meta, and Bloomberg related to their LLMs, the use of Books3 as training data was specifically cited.3\n\nThe Books3 controversy highlights a critical question at the heart of generative AI: what role do books play in training AI models, and how might digitized books be made widely accessible for the purposes of training AI? What dataset of books could be constructed and under what circumstances?\n\nIn February 2024, Creative Commons, Open Future and Proteus Strategies convened a series of workshops to investigate the concept of a responsibly designed, broadly accessible dataset of digitized books to be used in training AI models. Conducted under the Chatham House Rule, we set out to ask if there is a possible future in which a \"books data commons for AI training\" might exist, and what such a commons might look like. The workshops brought together practitioners on the front lines of building next-generation AI models, as well as legal and policy scholars with expertise in the copyright and licensing challenges surrounding digitized books. Our goal was also to bridge the perspective of stewards of\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus 1 Strategies) in collaboration with Creative Commons. We are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. 
All mistakes or errors are the authors'.\n\nSee e.g. Knibbs, Kate. \"The Battle over Books3 Could Change AI Forever.\" *Wired*, 4 Sept. 2023, 2 www.wired.com/story/battle-over-books3/.\n\nFor key documents in these cases, see the helpful compendium at \"Master List of Lawsuits v. AI, 3 ChatGPT, OpenAI, Microsoft, Meta, Midjourney & Other AI Cos.\" *Chat GPT Is Eating the World*, 27 Dec. 2023, chatgptiseatingtheworld.com/2023/12/27/master-list-of-lawsuits-v-ai-chatgpt-openai-microsoftmeta-midjourney-other-ai-cos. See also \"Fair Use Week 2024: Day Two with Guest Expert Brandon Butler.\" *Fair Use Week*, sites.harvard.edu/fair-use-week/2024/02/26/fair-use-week-2024-day-two-withguest-expert-brandon-butler/. Accessed 20 Mar. 2024 (arguing that use of this dataset is not consequential for the fair use analysis).", - "page_start": 1, - "page_end": 1, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# *7. Conclusion*\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. 
In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development.41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception — it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else — independent researchers, entrepreneurs, and smaller entities — will have access. The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.\n\nFor other existing and past examples, one might look to the work of Europeana, https:// 41 www.europeana.eu/en, as well as the mountain of commentary on the failed class action settlement between Google, the Authors Guild, and the Association of American Publishers — see e.g. the excellent collection of court filings created by James Grimmelmann and colleagues (now archived at the Internet Archive) — https://web.archive.org/web/20140425012526/http://thepublicindex.org/. The Settlement expressly would have set up a \"Research Corpus\" for non-consumptive research. 
HathiTrust created a Research Center, with the intention of becoming one of the hosts for the \"Research Corpus.\" The Settlement was criticized and was ultimately rejected by the district court for both substantive reasons (that is, what the settlement would specifically do) and procedural (in the sense of violating class-action law, but also in a broader sense of representing a \"backroom deal\" without sufficient participation from impacted interests). The Research Corpus was not a core locus of critique, though it did receive concern in terms of providing too much control to Google, for example. Our purpose in mentioning this is not to relitigate the issue, but rather to call out that design decisions of this sort have been considered in the past.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "German is the next-largest language represented at 9%, and is followed by a long-tail of languages by representation.\n\nIn order to enable these uses, HathiTrust has invested in technical solutions to prevent possible misuse. To some extent, they manage this by limiting who gets access to the Center, and limiting access to specific features to researchers at member institutions. HathiTrust has also put in place various security controls on both the physical storage of the digitized books and the network access to those files. The primary uses of the data through the Research Center includes access to an extracted features set and access to the complete corpus \"data capsule,\" which is a virtual machine running on the Center's servers. The data capsule allows users to conduct non-consumptive research with the data, but it limits the types of outputs allowed in order to prevent users from obtaining full content of incopyright works. The measures taken include physical security controls on the data centers housing the information, as well as restrictions via network access and encryption of backup tapes. 
In the finding that HathiTrust use was a fair use and thus rejecting a lawsuit brought by the Authors Guild, the Court noted the importance of these controls.35\n\nToday, the Center's tools are not suitable for AI training, in that they don't allow the specific types of technical manipulation of underlying text necessary to train an AI. Nevertheless, the Center demonstrates that building a books data commons for computational analysis is possible, and in turn points to the possibility of creating such a resource for AI training.36\n\n#### **Implications of Overall Approach**\n\nBy relying on existing limitations and exceptions in copyright law, the number of books one could include in the corpus of a books data commons is far greater and more diverse. Of course, a bigger dataset doesn't necessarily mean a higher quality dataset for all uses of AI models; as HathiTrust shows, even a multimillion book corpus can skew in various directions. Still, dataset size generally remains significant to an LLM's performance – the more text one can train on, or rather the more tokens for training the model, the better, at least along a number of performance metrics.37\n\nWhile holding the potential for a broader and more diverse dataset, a key limitation in pursuing this approach is that it is only feasible where relevant copyright limitations and exceptions exist. Even then, legal uncertainty means that going down this path is likely to generate, at a minimum, expensive and time-consuming litigation and regulatory\n\nThis is explained explicitly in the appeals court's decision: *Authors Guild v. HathiTrust,* 755 F.3d 87 (2d 35 Cir. 2014).\n\nHathiTrust has also made available some data derived from books, such as the Extracted Features 36 set: \"HTRC releases research datasets to facilitate text analysis using the HathiTrust Digital Library. 
While copyright-protected texts are not available for download from HathiTrust, fruitful research can still be performed on the basis of non-consumptive analysis of transformative datasets, such as in HTRC's flagship Extracted Features Dataset, which includes features extracted from full-text volumes. These features include volume-level metadata, page-level metadata, part-of-speech-tagged tokens, and token counts:\" https://analytics.hathitrust.org/datasets#top.\n\nSee Testimony of Chris Callison-Burch, July 2023, https://docs.house.gov/meetings/JU/ 37 JU03/20230517/115951/HHRG-118-JU03-Wstate-Callison-BurchC-20230517.pdf (\"As the amount of training data increases, AI systems' capabilities for language understanding and their other skills improve.\"); Brown, Tom, et al. *Language Models Are Few-Shot Learners*. 22 July 2020, at https://arxiv.org/ pdf/2005.14165.pdf (\"we find that performance scales very smoothly with model size\").", - "page_start": 15, - "page_end": 15, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "#### **Implications of the The Overall Approach**\n\nStepping back from The Pile v2 specifically, or any particular existing collection of books or dataset built on their basis, we want to understand the implications of relying on public domain works and expressly licensed works in building a books commons.\n\nThe benefits are relatively straightforward. Both categories, by definition come with express permission to use the books in AI training. The cost of acquiring the books for this use may be effectively zero or close to it, when considering public domain and \"openly\" licensed books that allow redistribution and that have already been digitized.\n\nBut this approach comes with some clear limitations. First, as noted above, for many books in the public domain, their status as such is not always clear. 
And with respect to permissively licensed books, it is not always clear whether and how to comply with the license obligations in this context.\n\nSetting aside those challenges, the simple fact is that relying on public domain and existing permissively licensed books would limit the quantity and diversity of data available for training, impacting performance along different dimensions. Only a small fraction of books ever published fall into this category, and the corpus of books in this category is likely to be skewed heavily towards older public domain books. This skew would, in turn, impact the content available for AI training. For instance, relying on books from before 1929 would not 30 only incorporate outdated language patterns, but also a range of biases and misconceptions about race and gender, among other things. Efforts could be made to get people to permissively license more material — a book drive for permissive licensing, so to speak; this approach would still not encompass most books, at least when it comes to past works.31\n\n### *5b. Limitations & Exceptions*\n\n#### **Existing Project Example: HathiTrust Research Center (HTRC)**\n\nThe HathiTrust Research Center provides researchers with the ability to perform computational analysis across millions of books. While it is not suited specifically for AI training, it is an existence proof for what such a resource might look like.\n\nFor instance, AI researchers note that the recently released Common Corpus dataset is an \"invaluable 30 resource\" but \"comes with limitations. 
A lot of public domain data is antiquated—in the US, for example, copyright protection usually lasts over seventy years from the death of the author—so this type of dataset won't be able to ground an AI model in current affairs or, say, how to spin up a blog post using current slang\" and the \"dataset is tiny.\" Thus, while it is possible to train an AI model on the data, those models will have more limited utility on some dimensions than current frontier models trained on a broader array of data. See Knibbs, Kate, *Here's Proof You Can Train an AI Model Without Slurping Copyrighted Content | WIRED*. (2024, March 20), at https://www.wired.com/story/proof-you-can-train-aiwithout-slurping-copyrighted-content/.\n\nOur workshop discussion did note that some widely available datasets for AI training have also 31 pursued more direct licensing agreements. For instance, the SILO LLM was created by working with scientific journal publishers to make works available for both download and AI training. While this might be viable in the context of particular, narrow classes of works, the barriers to efficient licensing mentioned above would remain a problem for any broader efforts. See Min, Sewon, et al. \"SILO Language Models: Isolating Legal Risk in a Nonparametric Datastore.\" *ArXiv (Cornell University)*, 8 Aug. 2023, https://doi.org/10.48550/arxiv.2308.04430. Accessed 14 Dec. 2023.", - "page_start": 13, - "page_end": 13, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "### **3.4 Graphical Data Visualisation Tool**\n\nThis section describes the features of the graphical visualisation tool for numeric data. 
The features are currently available for XLS (Excel) and CSV files, except for the selection of the sheet name which is applicable only for Excel files.\n\nMost GUI elements from the \"Graph\" tab (records selection, search box, filters and fields buttons) are also available on the \"Grid\" tab and work in the same way.\n\n#### **3.4.1 How to visualize graphical data from a dataset resource**\n\nAs a result of a dataset search, the system displays on the \"Dataset\" tab all distributions (resource/data files) that are part of the selected dataset. Each XLS or CSV distribution of the dataset can be further explored by clicking on \"Open Visualization\" under the \"Options\" button – if available.\n\n| C | What we do ▼ Data ▼ Using Data · | Providing Data · | Resources · |\n| --- | --- | --- | --- |\n| | Dataset Categories Similar Datasets | | Feedback |\n| | English Indices of Deprivation 2010 | | |\n| | 2 data.gov.uk | | Updated: - |\n| | The English Indices of Deprivation 2010 provide a relative measure of deprivation at small area level across | | |\n| | England. Areas are ranked from least deprived to most deprived on seven different dimensions of deprivation and | | |\n| | an overall composite measure of multiple deprivation. Most of the data underlying the 2010 Indices are for the | | |\n| | year 2008. The domains used in the Indices of Deprivation 2010 are: income deprivation; employment deprivation; | | |\n| | health deprivation and disability; education deprivation; crime deprivation; barriers to housing and services | | |\n| | deprivation; and living environment deprivation. Each of these domains has its own scores and ranks, allowing | | |\n| | users to focus on specific aspects of deprivation. In addition, two supplementary indices measure income | | |\n| | deprivation amongst children - the Income Deprivation Affecting Children Index (IDACI) - and older people - the | | |\n| | Income Deprivation Affecting Older People Index (IDAOPI). 
| | |\n| i | An updated translation of this dataset is in progress. | | × |\n| | Distributions (19) | | |\n| xr2 | 2010: Supplementary indices children & older people Options | | Download ~ |\n| | Licence: open-government-licence Open Visualisation | | |\n| CSV | 2010: All domains, sub domains & supplementary indices Options V | | Download v |\n| | Licence: open-government-licence | | |\n| ਮਾਟ | 2010: Sub-domains barriers to housing & services Options V | | Download v |", - "page_start": 42, - "page_end": 42, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "2023) where given the original human summary in English and its translation in French, the model rates the quality of the translation from 0 to 10, with 0 being of very bad quality and 10 being excellent. The prompt is available in Figure 8. Additionally, we manually check random translations with ratings between 9 and 10 to ensure the rating is relevant. We do the same for all translations with a score less than 9 and correct them7 (see the rating distribution in Table 6).\n\n| Dataset | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L |\n| --- | --- | --- | --- | --- |\n| SummEval | 0.205 | 0.292 | 0.099 | 0.193 |\n| SummEvalFr | 0.276 | 0.302 | 0.117 | 0.194 |\n| Correlation En-Fr | 0.70 | 0.85 | 0.80 | 0.84 |\n\nTable 2: Average ROUGE and BLUE scores computed between machine summaries and human summaries for the original English SummEval and its translation to French. The correlations of the individual scores between English and French are also reported.\n\n#### 3.1.4 Data for the Reranking task\n\nThe reranking task, as evaluated in MTEB, requires datasets composed of a set of queries, each associated with relevant and irrelevant documents. Despite our efforts, we found no French dataset that natively exhibits such a structure. Thus, to evaluate this task, we built data for the reranking task based on the *Syntec* and *Alloprof* (Lefebvre-Brossard et al., 2023) datasets. 
These already feature queries and labeled relevant documents. Irrelevant ones were added using the following process:\n\n- To avoid bias, we use the BM25 algorithm (Robertson and Jones, 1976) (which is a deterministic method) to rank documents in terms of relevance regarding each query.\n- The top 10 documents that are not labeled as relevant constitute the negative samples.\n\nWe recognize that this process leads to a high correlation between the retrieval and reranking tasks. We still think it is essential to make the latter available, with an open door to future improvement8 .\n\n#### 3.1.5 Similarity analysis\n\nWe investigate the proximity between the datasets' topics to give insights about the benchmark contents. The methodology introduced by Muennighoff et al. (2022), i.e. computing an average embedding of samples from each dataset, is used to build a dataset-similarity matrix (displayed in appendix Figure 3). The distances between averaged embedding vectors of each dataset (which range from 0.89 to 1 in Figure 3) remain hard to interpret into a dataset semantic proximity. Thus, we complement this by observing the dataset's clouds of embedding in a 2D plane using PCA in Figure 4.\n\nFigures 4 and 3 seem to correlate, showing high similarity between two datasets when the same underlying data is used in different tasks. Dataset topics are pretty close, with some exceptions, such as the Syntec dataset. As more datasets are added to the benchmark, this analysis will help select new data that do not produce redundant results. 
It may also help to understand the link between the results and the datasets' topics.\n\n# 3.2 Models\n\nFor comparison on our benchmark, we selected various models to fulfil three objectives.\n\n- Quantity: The aim was to compare a substantial number of models (51 in total) to provide comprehensive results, facilitating the community in selecting effective French models.\n- Relevance: It was imperative to include top performers from the MTEB benchmark (Muennighoff et al., 2022). We mainly selected multilingual models and some English models to asses their language-transferring abilities. Additionally, we integrated natively French transformer-based models such as *CamemBERT* (Martin et al., 2019), *FlauBERT* (Le et al., 2020) and even the very recent *CroissantLLM* (Faysse et al., 2024).\n- Variety: Diverse model types were included to offer an insightful analysis across various model characteristics (dimension, training strategy, etc.).\n\nIn line with the third objective, we explicit below the studied characteristics of embedding models that will be discussed with the results.\n\n- *Embedding dimension:* This critical element influences the expressiveness of the represen-\n7 SummEvalFr available at: https://huggingface.co./ datasets/lyon-nlp/summarization-summeval-fr-p2p\n\n8 SyntecReranking available at: https: //huggingface.co/datasets/lyon-nlp/ mteb-fr-reranking-syntec-s2p and AlloprofRerank-\n\ning available at: https://huggingface.co./datasets/ lyon-nlp/mteb-fr-reranking-alloprof-s2p", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv4.pdf" - }, - { - "text": "# **Portal Version 4.3 – User Manual**\n\n*V1.0*\n\n*October 2019*\n\n# **Table of Contents**\n\n| 1 | Introduction 4 |\n| --- | --- |\n| 1.1 | Purpose of the Document 4 |\n| 1.2 | Reference Documents 4 |\n| 1.3 | Terminology 4 |\n| 2 | Approach 6 |\n| 3 | Main User Functions of the Portal 6 |\n| 3.1 | Portal Home Page 8 |\n| 3.1.1 | How to browse through the Editorial Content of the Portal 
10 |\n| 3.1.2 | How to view / search for \"Latest News\" 17 |\n| 3.1.3 | How to view / search for \"Open Data Events\" 18 |\n| 3.1.4 | How to subscribe to the EDP Newsletter 19 |\n| 3.1.5 | How to view \"Tweets\" on the EDP 20 |\n| 3.1.6 | How to switch to another User Language 21 |\n| 3.1.7 | How to search for EDP Site Content 22 |\n| 3.1.8 | How to Search for Datasets by Data Category 23 |\n| 3.1.9 | How to Search for Datasets by Keyword 25 |\n| 3.2 | Datasets (Data Platform) 26 |\n| 3.2.1 | Entering the Datasets-View 27 |\n| 3.2.2 | How to filter datasets by using \"Faceted Search\" 27 |\n| 3.2.3 | How to store personal queries 29 |\n| 3.2.4 | How to filter datasets by geographical area 31 |\n| 3.2.5 | How to download dataset distributions 33 |\n| 3.2.6 | How to view licensing information 34 |\n| 3.2.7 | How to switch to another user language 36 |\n| 3.2.8 | How to browse by data catalogues 37 |\n| 3.3 | Visualization of Geo-Spatial Data (map.apps) 38 |\n| 3.3.1 | How to visualize geo-spatial data from a dataset resource 38 |\n| 3.4 | Graphical Data Visualisation Tool 43 |\n| 3.4.1 | How to visualize graphical data from a dataset resource 43 |", - "page_start": 1, - "page_end": 1, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - } - ] - }, - { - "references": { - "source_file": "creative_common_ai.pdf", - "query": "In the United States, before which date is book out of copyright for sure ?", - "target_page": 9, - "target_passage": "In the United States, all books published or released before 1929 are in the public domain. While use of these books provides maximal certainty for the AI developer to train on", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## *4. 
Copyright, Licensing, & Access to Books for Training*\n\nEven if books can be acquired, digitized, and made technically useful for AI training, the development of a books data commons would necessarily need to navigate and comply with copyright law.\n\n**Out-of-Copyright Books:** A minority of books are old enough to be in the public domain and out of copyright, and an AI developer could use them in training without securing any copyright permission. In the United States, all books published or released before 1929 are in the public domain. While use of these books provides maximal certainty for the AI developer to train on, it is worth noting that the status of whether a book is in the public domain can be difficult to determine. For instance, books released between 1929 and 1963 in the U.S. are 14 out of copyright if they were not subject to a copyright renewal; however, data on copyright renewals is not easily accessible.\n\nWhat's more, copyright definitions and term lengths vary among countries. Even if a work is in the public domain in the US, it may not be in other countries. Countries generally use the 15 life of the last living author + \"x\" years to determine the term of copyright protection. For most countries, \"x\" is either 50 years (the minimum required by the Berne Convention) or 70 years (this is the case for all member states of the European Union and for all works published in the U.S. after 1978). This approach makes it difficult to determine copyright terms with certainty because it requires information about the date of death of each author, which is often not readily available.\n\n**In-Copyright Books:** The vast majority of books are in copyright, and, insofar as the training process requires making a copy of the book, the use in AI training may implicate copyright law. Our workshop covered three possible paths for incorporating such works.\n\n#### **Direct licensing**\n\nOne could directly license books from rightsholders. 
There may be some publishers who are willing to license their works for this purpose, but it is hard to determine the scale of such access, and, in any event, there are significant limits on this approach. Along with the challenge (and expense) of reaching agreements with relevant rightsholders, there is also the practical difficulty of simply identifying and finding the rightsholder that one must negotiate\n\nFor a sense of the complexity, see e.g. Melissa Levine, Richard C. Adler. *Finding the Public Domain:* 14 *Copyright Review Management System Toolkit*. 2016, quod.lib.umich.edu/c/crmstoolkit/\n\n14616082.0001.001. Accessed 20 Mar. 2024.; Kopel, Matthew. \"LibGuides: Copyright at Cornell Libraries: Copyright Term and the Public Domain.\" guides.library.cornell.edu/copyright/publicdomain; Mannapperuma, Menesha, et al. *Is It in the Public Domain? A HANDBOOK for EVALUATING the COPYRIGHT STATUS of a WORK CREATED in the UNITED STATES*. 1923.\n\nSee e.g. Moody, Glyn. \"Project Gutenberg Blocks Access in Germany to All Its Public Domain Books 15 because of Local Copyright Claim on 18 of Them.\" *Techdirt*, 7 Mar. 2018, www.techdirt.com/ 2018/03/07/project-gutenberg-blocks-access-germany-to-all-public-domain-books-because-localcopyright-claim-18-them/. Accessed 20 Mar. 2024.", - "page_start": 8, - "page_end": 8, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publisher. Subject to any applicable licensing terms and conditions in the case of electronically supplied publications, a person may engage in fair dealing with a copy of this publication for his or her personal or private use, or his or her research or private study. 
See Section 12(1)(a) of the Copyright Act 98 of 1978.\n\nThe authors and the publisher have made every effort to obtain permission for and to acknowledge the use of copyright material. Should any infringement of copyright have occurred, please contact the publisher, and every effort will be made to rectify omissions or errors in the event of a reprint or new edition.\n\nDeveloped for Oxbridge Academy - 2015", - "page_start": 1, - "page_end": 1, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "It is also important to note two other issues that can affect the application of limitations and exceptions, in particular, their application to e-books.\n\nThe first important limitation is that almost every digital book published today comes with a set of contractual terms that restrict what users can do with it. In many cases, those terms will explicitly restrict text data mining or AI uses of the content, meaning that even where copyright law allows for reuse (for example, under fair use), publishers by contract can impose restrictions anyway. In the United States, those contract terms are generally thought to override the applicability of fair use or other limitations and exceptions. Other 23 jurisdictions, such as those in the EU, provide that certain limitations and exceptions cannot be contractually overridden, though experience to date varies with how those anti-contractual override protections work in practice.24\n\nThe second limitation is the widespread adoption of \"anti-circumvention\" rules in copyright laws and the interplay of these with a choice to rely on copyright limitations and exceptions. Digital books sold by major publishers are generally encumbered with \"digital rights management\" (DRM) that limits how someone can use the digital file. For instance, DRM can limit the ability to make a copy of the book, or even screenshot or excerpt from it, among other things. 
Anti-circumvention laws restrict someone's ability to evade these technical restrictions, even if it is for an ultimately lawful use.\n\nWhat this means for our purposes is that even if one acquires a digital book from, for example, Amazon, and it is lawful under copyright law to use that book in AI training, it can still generally be unlawful to circumvent the DRM to do so, outside narrow exceptions.25 Thus, the ability to use in-copyright books encumbered by DRM — that is, most all books sold by major publishers — is generally limited. 26\n\nPractically, using in-copyright books to build a books commons for AI training — while relying on copyright's limitations and exceptions — requires turning a physical book into digital form, or otherwise engaging in the laborious process of manually re-creating a book's text (i.e., retyping the full text of the book) without circumventing the technical restrictions themselves.\n\nIn the U.S. the Copyright Office has recognized the importance of allowing particular exceptions for 25 researchers engaged in text and data mining. See their rulemaking in 2021 https:// www.federalregister.gov/documents/2021/10/28/2021-23311/exemption-to-prohibition-oncircumvention-of-copyright-protection-systems-for-access-control. These rules are reviewed triennially and are currently under review, with submissions suggesting both contraction and expansion; see the Authors' Alliance comments in January 2024 https://www.authorsalliance.org/2024/01/29/authorsalliance-submits-long-form-comment-to-copyright-office-in-support-of-petition-to-expand-existing-textand-data-mining-exemption/. It is possible that one could argue for these exceptions to be expanded, and then work to renew that exception every three years. 
The EU's text and data mining exception may also limit use of DRM to impede data mining, but only for particular covered research and heritage institutions; commercial and other users are not covered, however.\n\nNote that CC licenses forbid use of DRM — but that doesn't address most all books sold by publishers. 26\n\nSee Hansen, Dave. \"Fair Use Week 2023: How to Evade Fair Use in Two Easy Steps.\" *Authors Alliance*, 23 23 Feb. 2023, www.authorsalliance.org/2023/02/23/fair-use-week-2023-how-to-evade-fair-use-in-twoeasy-steps/. Accessed 20 Mar. 2024.\n\nSee Band, Jonathan. \"Protecting User Rights against Contract Override.\" *Joint PIJIP/TLS Research* 24 *Paper Series*, 1 May 2023, digitalcommons.wcl.american.edu/research/97/. Accessed 20 Mar. 2024.", - "page_start": 11, - "page_end": 11, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "with. The vast majority of in-copyright books are out-of-print or out-of-commerce, and most are not actively managed by their rightsholders. There is no official registry of copyrighted works and their owners, and existing datasets can be incomplete or erroneous. 16\n\nAs a result, there may be no way to license the vast majority of in-copyright books, especially those that have or have had limited commercial value. Put differently, the barrier to using 17 most books is not simply to pay publishers; even if one had significant financial resources, licensing would not enable access to most works.\n\n#### **Permissively licensed works**\n\nThere are books that have been permissively licensed in an easily identifiable way, such as works placed under Creative Commons (CC) licenses. Such works explicitly allow particular uses of works subject to various responsibilities (e.g., requiring attribution by the user in their follow-on use).\n\nWhile such works could be candidates for inclusion in a books data commons, their inclusion depends on whether the license's terms can be complied with in the context of AI training. 
For instance, in the context of CC licensed works, there are requirements for proper attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public Domain Mark (PDM) are not licenses and do not require attribution).18\n\nSee e.g. Heald, Paul J. \"How Copyright Makes Books and Music Disappear (and How Secondary 16 Liability Rules Help Resurrect Old Songs).\" Illinois Program in Law, Behavior and Social Science Paper No. LBSS14-07 Illinois Public Law Research Paper No. 13-54 https://doi.org/10.2139/ssrn.2290181. Accessed 4 Jan. 2020, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2290181; Rosen, Rebecca J. \"Why Are so Few Books from the 20th Century Available as Ebooks?\" *The Atlantic*, 18 Mar. 2014, www.theatlantic.com/business/archive/2014/03/why-are-so-few-books-from-the-20th-centuryavailable-as-ebooks/284486/. See also \"Google Book Search Settlement and Access to Out of Print Books.\" *Google Public Policy Blog*, publicpolicy.googleblog.com/2009/06/google-book-searchsettlement-and.html. Accessed 20 Mar. 2024 (discussing this issue in the context of the failed classaction settlement between Google, the Authors Guild, and the Association of American Publishers). Google's final brief in the settlement proceedings notes the \"prohibitive transaction costs of identifying and locating individual Rightsholders of these largely older, out-of-print books\" — see this brief at https:// web.archive.org/web/20130112060651/http://thepublicindex.org/docs/amended_settlement/ google_final_approval_support.pdf. 
The Authors Guild and Association of American Publishers also justified the settlement's terms in light of the fact that \"the transaction costs involved in finding copyright owners and clearing the rights are too high\"; while they argued that most works are not truly \"orphans,\" they note that total transaction costs as a whole (including, for example, determining whether the author or publisher holds the rights and then negotiating rates) are so high as to block uses of outof-print works anyway — see this brief at https://web.archive.org/web/20130112060213/http:// thepublicindex.org/docs/amended_settlement/Supplemental_memorandum_of_law.pdf.\n\nIn the EU, the 2019 Copyright Directive introduced specific provisions on the \"use of out-of-commerce 17 works and other subject matter by cultural heritage institutions\" (Articles 8-11 CDSMD). These provisions allow cultural heritage institutions to \"make available, for non-commercial purposes, out-ofcommerce works or other subject matter permanently in their collections\". The limitation to noncommercial purposes means that works made available under these provisions would be of limited use in building a books data commons.\n\nFor one assessment of the difficulties of complying with the CC licenses in this context, to the extent 18 they are applicable, see Lee, K., A. Feder Cooper, & Grimmelmann, J. (2023). Talkin' 'Bout AI Generation: Copyright and the Generative AI Supply Chain. Forthcoming, *Journal of the Copyright Society* 2024. 
https://doi.org/10.2139/ssrn.4523551.", - "page_start": 9, - "page_end": 9, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work — on conditions of your choice. CC licenses let you change your copyright terms from the default of \"all rights reserved\" to \"some rights reserved.\"\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n### Public domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark. Creative Commons copyright licenses help authors manage their copyright on terms they choose. 
Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n#### Where public domain tools fit in the copyright spectrum\n\n# The CC0 Public Domain Dedication\n\n**Use this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.**\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\n## What is the difference between CC0 and the Public Domain Mark?\n\nCC0 (\"CC Zero\") is intended for use only by authors or holders of copyright and related rights (including database rights), in connection\n\nwith works that are still subject to those rights in one or more countries.\n\nWhen CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. 
Unlike CC0, PDM doesn't\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\nchange the copyright status of a work.\n\n# Public Domain Mark\n\n**Use this tool if you have identified a work that is free of known copyright restrictions.**\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "#### **Reliance on Copyright Limitations and Exceptions**\n\nEven if a book is in copyright, it's possible that copying books for AI training may be covered by existing limitations and exceptions to copyright law in particular jurisdictions. For example:\n\n- In the United States, many argue using existing works to train generative AI is \"fair use,\" consistent with existing law and legal precedents. This is the subject of a 19 number of currently active court cases, and different actors and tools may yield different results, as fair use is applied case-by-case using a flexible balancing test.\n- In the European Union, there are explicit exceptions in the law for \"text and data mining\" uses of in-copyright works, both for non-commercial research and for commercial purposes. 
However, for commercial uses and for users outside of research and heritage institutions, they must respect the rights of rightsholders who choose to \"reserve their rights\" (i.e., opt-out of allowing text and data mining) via machine readable mechanisms. The exception also requires that users have \"lawful 20 access\" to the works.\n- Finally, Japan provides a specific text and data mining exception, without any comparable opt-out requirement for commercial uses as is embedded in EU law.21\n\nWhile exceptions that allow AI training exist in several other countries, such as Singapore and Israel, most countries do not provide exceptions that appear to permit AI training. Even where potentially available, as in the United States, legal uncertainty and risk create a hurdle for anyone building a books commons.22\n\nSee e.g. Comments from Sprigman, Samuelson, Sag to Copyright Office, October 2023, at https:// 19 www.regulations.gov/comment/COLC-2023-0006-10299 as well as many other submissions to the US copyright office; see also Advocacy, Katherine Klosek, Director of Information Policy and Federal Relations, Association of Research Libraries (ARL), and Marjory S. Blumenthal, Senior Policy Fellow, American Library Association (ALA) Office of Public Policy and. \"Training Generative AI Models on Copyrighted Works Is Fair Use.\" *Association of Research Libraries*, 23 Jan. 2024, www.arl.org/blog/ training-generative-ai-models-on-copyrighted-works-is-fair-use/.\n\nSee Articles 3 and 4 of the EU's Directive on Copyright and Related Rights in the Digital Single Market 20 — https://eur-lex.europa.eu/eli/dir/2019/790/oj.\n\nJapan clarified its laws in 2018 to make clear that this type of use is permitted — see discussion in 21 Testimony of Matthew Sag, July 2023, https://www.judiciary.senate.gov/imo/media/doc/ 2023-07-12_pm_-_testimony_-_sag.pdf, see also Fiil-Flynn, S. *et al.* (2022) *Legal reform to enhance global text and Data Mining Research*, *Science*. 
Available at: https://www.science.org/doi/10.1126/ science.add6124 (Accessed: 28 Sept. 2023).\n\nSee supra note 22*.* See also Jonathan Band, *Copyright Implications of the Relationship between* 22 *Generative Artificial Intelligence and Text and Data Mining | Infojustice*. infojustice.org/archives/45509. In addition, for an in-depth look at the cross-border legal challenges involved see: *Wrapping up Our NEH-Funded Project to Help Text and Data Mining Researchers Navigate Cross-Border Legal and Ethical Issues*. 2 Oct. 2023, buildinglltdm.org/2023/10/02/wrapping-up-our-neh-funded-project-to-help-text-anddata-mining-researchers-navigate-cross-border-legal-and-ethical-issues/. Accessed 20 Mar. 2024.", - "page_start": 10, - "page_end": 10, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "It is also an example predicated on copyright's limitations and exceptions — in this case, on U.S. fair use. While the Authors Guild filed a copyright infringement suit against HathiTrust, federal courts in 2012 and 2014 ruled that HathiTrust's use of books was fair use.32\n\nA nonprofit founded in 2008, HathiTrust grew out of a partnership among major US university libraries and today is \"an international community of research libraries committed to the long-term curation and availability of the cultural record.\" It started in what it calls the \"early 33 days of mass digitization\" — that is, at a time when it started to become economical to take existing physical artifacts in libraries and turn them into digital files at a large scale.\n\nThe founding members of HathiTrust were among the initial partners for Google's Book Search product, which allows people to search across and view small snippets of text from in-copyright books and read full copies of public domain books scanned from libraries' 34 collections. 
The libraries provided Google with books from their collections, Google would then scan the books for use in Book Search, and return to the libraries a digital copy for their own uses. These uses included setting up HathiTrust not only to ensure long-term preservation of the digital books and their metadata, but also to facilitate other uses, including full text search of books and accessibility for people with print disabilities. In separate court cases, both Google and HathiTrust's uses of the books were deemed consistent with copyright law.\n\nThe uses most relevant to this paper are those enabled by what HathiTrust refers to today as the Research Center. The Center grew in part out of a research discipline called \"digital humanities,\" which, among other things, seeks to use computational resources or other digital technologies to analyze information and contribute to the study of literature, media, history, and other areas. For instance, imagine you want to understand how a given term (e.g., \"war on drugs\") became used; one might seek to analyze when the term was first used and how often it was used over time by analyzing a vast quantity of sources, searching out the term's use. The insight here is that there is much to be learned not just from reading or otherwise consuming specific material, but also from \"non-consumptive research,\" or \"research in which computational analysis is performed on one or more volumes (textual or image objects)\" to derive other sorts of insights. AI training is a type of non-consumptive use.\n\nToday, the Center \"[s]upports large-scale computational analysis of the works in the HathiTrust Digital Library to facilitate non-profit and educational research.\" It includes over 18 million books in over 400 languages from the HathiTrust Digital Library collection. Roughly 58% of the corpus is in copyright. 
HathiTrust notes that, while this corpus is large, it has limitations in terms of its representation across subject matter, language, geography, and other dimensions. In terms of subject matter, the corpus is skewed towards humanities (64.9%) and social sciences (14.3%). In terms of language, 51% of the books are in English,\n\n<i>Authors Guild v. HathiTrust, 902 F.Supp.2d 445 (SDNY October 10, 2012) and *Authors Guild v.* 32 *HathiTrust*, 755 F.3d 87 (2d Cir. 2014).\n\nSee https://www.hathitrust.org/member-libraries/member-list/ — the membership is principally US 33 institutions, and most of the non-US members are from English speaking countries or institutions that use English as the primary language of operations.\n\nThis functionality is limited to scanned books provided by library partners in the US. 34", - "page_start": 14, - "page_end": 14, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "#### **Implications of the The Overall Approach**\n\nStepping back from The Pile v2 specifically, or any particular existing collection of books or dataset built on their basis, we want to understand the implications of relying on public domain works and expressly licensed works in building a books commons.\n\nThe benefits are relatively straightforward. Both categories, by definition come with express permission to use the books in AI training. The cost of acquiring the books for this use may be effectively zero or close to it, when considering public domain and \"openly\" licensed books that allow redistribution and that have already been digitized.\n\nBut this approach comes with some clear limitations. First, as noted above, for many books in the public domain, their status as such is not always clear. 
And with respect to permissively licensed books, it is not always clear whether and how to comply with the license obligations in this context.\n\nSetting aside those challenges, the simple fact is that relying on public domain and existing permissively licensed books would limit the quantity and diversity of data available for training, impacting performance along different dimensions. Only a small fraction of books ever published fall into this category, and the corpus of books in this category is likely to be skewed heavily towards older public domain books. This skew would, in turn, impact the content available for AI training. For instance, relying on books from before 1929 would not 30 only incorporate outdated language patterns, but also a range of biases and misconceptions about race and gender, among other things. Efforts could be made to get people to permissively license more material — a book drive for permissive licensing, so to speak; this approach would still not encompass most books, at least when it comes to past works.31\n\n### *5b. Limitations & Exceptions*\n\n#### **Existing Project Example: HathiTrust Research Center (HTRC)**\n\nThe HathiTrust Research Center provides researchers with the ability to perform computational analysis across millions of books. While it is not suited specifically for AI training, it is an existence proof for what such a resource might look like.\n\nFor instance, AI researchers note that the recently released Common Corpus dataset is an \"invaluable 30 resource\" but \"comes with limitations. 
A lot of public domain data is antiquated—in the US, for example, copyright protection usually lasts over seventy years from the death of the author—so this type of dataset won't be able to ground an AI model in current affairs or, say, how to spin up a blog post using current slang\" and the \"dataset is tiny.\" Thus, while it is possible to train an AI model on the data, those models will have more limited utility on some dimensions than current frontier models trained on a broader array of data. See Knibbs, Kate, *Here's Proof You Can Train an AI Model Without Slurping Copyrighted Content | WIRED*. (2024, March 20), at https://www.wired.com/story/proof-you-can-train-aiwithout-slurping-copyrighted-content/.\n\nOur workshop discussion did note that some widely available datasets for AI training have also 31 pursued more direct licensing agreements. For instance, the SILO LLM was created by working with scientific journal publishers to make works available for both download and AI training. While this might be viable in the context of particular, narrow classes of works, the barriers to efficient licensing mentioned above would remain a problem for any broader efforts. See Min, Sewon, et al. \"SILO Language Models: Isolating Legal Risk in a Nonparametric Datastore.\" *ArXiv (Cornell University)*, 8 Aug. 2023, https://doi.org/10.48550/arxiv.2308.04430. Accessed 14 Dec. 2023.", - "page_start": 13, - "page_end": 13, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "engagement. And, at least in the U.S., it could generate billions of dollars in damages if the specific design choices and technical constraints are not adequate to justify a finding of fair use.\n\nThis sort of books dataset could be built by expanding use of in-copyright books that have already been digitized from existing libraries and other sources. 
Specifically, workshop participants mentioned that the Internet Archive, HathiTrust, and Google as entities that have digitized books and could repurpose their use to build a books commons, although challenges with using these datasets were noted. The Internet Archive is in the midst of litigation brought by book publishers for its program for lending digital books; while not directly relevant to the issue of AI training using their corpus of books, this sort of litigation creates a chilling effect on organizations seeking to make new uses of these digitized books. Meanwhile, Google encumbered HathiTrust's digital copies with certain contractual restrictions, which would need to be addressed to develop a books dataset for AI training, and Google itself is unlikely to share its own copies while it provides them a competitive advantage.\n\nPerhaps as a matter of public policy, these existing copies could be made more freely available. For instance, to ensure robust competition around AI and advance other public interests, policymakers could remove legal obstacles to the sharing of digitized book files for use in AI training. Alternatively, policymakers could go further and affirmatively compel sharing access to these digital book files for AI training.\n\nIt's possible that there could be a new mass digitization initiative, turning physical books into new digital scans. At least in theory, one could try to replicate the existing corpora of HathiTrust, for example, without Google's contractual limitations. At the same time, such an effort would take many years, and it seems unlikely that many libraries would want to go to the trouble to have their collections digitized a second time. 
Moreover, while new scans may provide some incremental benefit over use of existing ones (e.g., by using the most modern digitization and OCR tools and thus improving accuracy), there is no inherent social value to making every entity that wants to do or allow AI training invest in their own redundant scanning.\n\nA new digitization effort could target works that have not been yet digitized. This may be particularly useful given that previous book digitization efforts, and the Google Books project in particular, have focused heavily (though not exclusively) on libraries in English-speaking countries. Additional digitization efforts might make more sense for books in those languages that have not yet been digitized at a meaningful scale. Any new digitization effort might therefore start with a mapping of the extent to which a books corpus in a given language has been digitized.", - "page_start": 16, - "page_end": 16, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## *1. Introduction*1\n\nWhile the field of artificial intelligence research and technology has a long history, broad public attention grew over the last year in light of the wide availability of new generative AI systems, including large language models (LLMs) like GPT-4, Claude, and LLaMA-2. These tools are developed using machine learning and other techniques that analyze large datasets of written text, and they are capable of generating text in response to a user's prompts.\n\nWhile many large language models rely on website text for training, books have also played an important role in developing and improving AI systems. 
Despite the widespread use of ebooks and growth of sales in that market, books remain difficult for researchers and entrepreneurs to access at scale in digital form for the purposes of training AI.\n\nIn 2023, multiple news publications reported on the availability and use of a dataset of books called \"Books3\" to train LLMs.2 The Books3 dataset contains text from over 170,000 books, which are a mix of in-copyright and out-of-copyright works. It is believed to have been originally sourced from a website that was not authorized to distribute all of the works contained in the dataset. In lawsuits brought against OpenAI, Microsoft, Meta, and Bloomberg related to their LLMs, the use of Books3 as training data was specifically cited.3\n\nThe Books3 controversy highlights a critical question at the heart of generative AI: what role do books play in training AI models, and how might digitized books be made widely accessible for the purposes of training AI? What dataset of books could be constructed and under what circumstances?\n\nIn February 2024, Creative Commons, Open Future and Proteus Strategies convened a series of workshops to investigate the concept of a responsibly designed, broadly accessible dataset of digitized books to be used in training AI models. Conducted under the Chatham House Rule, we set out to ask if there is a possible future in which a \"books data commons for AI training\" might exist, and what such a commons might look like. The workshops brought together practitioners on the front lines of building next-generation AI models, as well as legal and policy scholars with expertise in the copyright and licensing challenges surrounding digitized books. Our goal was also to bridge the perspective of stewards of\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus 1 Strategies) in collaboration with Creative Commons. 
We are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. All mistakes or errors are the authors'.\n\nSee e.g. Knibbs, Kate. \"The Battle over Books3 Could Change AI Forever.\" *Wired*, 4 Sept. 2023, 2 www.wired.com/story/battle-over-books3/.\n\nFor key documents in these cases, see the helpful compendium at \"Master List of Lawsuits v. AI, 3 ChatGPT, OpenAI, Microsoft, Meta, Midjourney & Other AI Cos.\" *Chat GPT Is Eating the World*, 27 Dec. 2023, chatgptiseatingtheworld.com/2023/12/27/master-list-of-lawsuits-v-ai-chatgpt-openai-microsoftmeta-midjourney-other-ai-cos. See also \"Fair Use Week 2024: Day Two with Guest Expert Brandon Butler.\" *Fair Use Week*, sites.harvard.edu/fair-use-week/2024/02/26/fair-use-week-2024-day-two-withguest-expert-brandon-butler/. Accessed 20 Mar. 
2024 (arguing that use of this dataset is not consequential for the fair use analysis).", - "page_start": 1, - "page_end": 1, - "source_file": "creative_common_ai.pdf" - } - ] - }, - { - "references": { - "source_file": "creative_common_ai.pdf", - "query": "What of the main imporvement of the Pile v2 dataset in comparison to its first version ?", - "target_page": 13, - "target_passage": "Among other things, v2 would “have many more books than the original Pile had, for example, and more diverse representation of non-academic non-fiction domains.” At the same time, it would only seek to include public domain books and permissively licensed content", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "### A Supplementary materials for datasets\n\n#### A.1 All datasets\n\nTable 3 displays the size of each dataset along with the average number of tokens per sample and their references. The dataset's content was tokenized using *cl100k_base* encoding. For Retrieval, the two numbers refer to the queries and the documents. For Reranking, the three numbers refer to the queries, the pairs of queries with relevant documents and the pairs of queries with irrelevant ones, respectively. The pairs of queries and documents are obtained from the 90 documents extracted. For *SummEvalFr*, the three numbers refer to the texts, human and machine summaries, respectively.\n\nFigure 3 represents the semantic similarity between each dataset. The methodology was as follows: 90 random samples per dataset are embedded using the *multilingual-e5-large* model. The embeddings of each dataset's samples are averaged. 
The similarity between each dataset is then calculated using cosine similarity as in (Muennighoff et al., 2022).\n\nWe complement this analysis by observing the dataset's clouds of embedding in a 2D plane using PCA in Figure 4.\n\n#### A.2 Created datasets\n\nSyntec Figure 5 shows an extract from the Syntec dataset with a document and a query relative to this document.\n\nHAL Figure 6 is an extract from the HAL dataset. Table 4 lists the distribution of classes (*domain* field) for the HAL dataset on *raw* subset and *mteb_eval* subset, which is used for MTEB evaluation. Labels descriptions can be found at this URL: https://api.archivesouvertes.fr/ref/domain/?q=*:*&rows=393 or in Table 4. After pre-processing, *mteb_eval* covers titles from 10 domains as classes with less than 500 samples were removed. In the MTEB evaluation subset of the dataset, titles composed of 2 words or less have been removed (371 samples), resulting in an average word count of 13.4. Figure 7 shows the word count distribution per title. Furthermore, the dataset has been cleaned up by manually removing all non-French titles. Additionally, it can be observed in Table 4 that in the original *raw* dataset, the *shs* and *sdv* classes represent by far the majority of the dataset samples with respectively 58706 samples (73%) and 11049 samples (13%). In order to\n\nmitigate the class imbalance while preserving the majority of those classes, they have been randomly subsampled to 6701 and 4803 samples. Furthermore, baseline models have been trained and tested to assess the usability of this dataset in other tasks, such as classification and topic modeling. Table 5 shows the results obtained.\n\nSummEvalFr Extracts of humans and machine summaries translated in French from SummEvalFr and the original ones in English from SummEval (Fabbri et al., 2021) are shown in Figure 9. 
As explained in section 3.1.3, we use a LLM to evaluate the quality of translations for human summaries, we provide the prompt used with *GPT-4* for this evaluation in Figure 8.\n\nTable 6 shows the distribution of ratings given by the LLM. With the scale being 10, we manually verify random samples rated above 9. We verify all samples with ratings under 9 and those with no provided rating (N/A) due to the triggering of the OpenAI content management policy. The LLM suggests that 60 samples are not correctly translated. These were verified manually, and after checking, less than 10 samples only needed to be corrected.\n\n# B Supplementary materials for correlation analysis\n\nThis section presents various correlations computed based on the model results on the proposed benchmark.\n\nFigure 10 represents cross-correlations between models' performances and their studied characteristics as a heatmap.\n\nFigure 11 represents the Spearman correlations in terms of performance across models.\n\nFigure 12 represents the Spearman correlations in terms of performance across datasets.\n\n### C Supplementary materials for models\n\nWe present in this section the model characteristics we collected for the 46 evaluated models.\n\nFor evaluating prompt-based models such as *intfloat/e5-mistral-instruct-7b*, we provide the prompts we used in Table 8.\n\n### D Evaluation results\n\nThis section presents the results obtained for each model on each task. 
To be relevant, we used the same metrics as in MTEB, which varies from one type of task to another:", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv4.pdf" - }, - { - "text": "# **Portal Version 4.3 – User Manual**\n\n*V1.0*\n\n*October 2019*\n\n# **Table of Contents**\n\n| 1 | Introduction 4 |\n| --- | --- |\n| 1.1 | Purpose of the Document 4 |\n| 1.2 | Reference Documents 4 |\n| 1.3 | Terminology 4 |\n| 2 | Approach 6 |\n| 3 | Main User Functions of the Portal 6 |\n| 3.1 | Portal Home Page 8 |\n| 3.1.1 | How to browse through the Editorial Content of the Portal 10 |\n| 3.1.2 | How to view / search for \"Latest News\" 17 |\n| 3.1.3 | How to view / search for \"Open Data Events\" 18 |\n| 3.1.4 | How to subscribe to the EDP Newsletter 19 |\n| 3.1.5 | How to view \"Tweets\" on the EDP 20 |\n| 3.1.6 | How to switch to another User Language 21 |\n| 3.1.7 | How to search for EDP Site Content 22 |\n| 3.1.8 | How to Search for Datasets by Data Category 23 |\n| 3.1.9 | How to Search for Datasets by Keyword 25 |\n| 3.2 | Datasets (Data Platform) 26 |\n| 3.2.1 | Entering the Datasets-View 27 |\n| 3.2.2 | How to filter datasets by using \"Faceted Search\" 27 |\n| 3.2.3 | How to store personal queries 29 |\n| 3.2.4 | How to filter datasets by geographical area 31 |\n| 3.2.5 | How to download dataset distributions 33 |\n| 3.2.6 | How to view licensing information 34 |\n| 3.2.7 | How to switch to another user language 36 |\n| 3.2.8 | How to browse by data catalogues 37 |\n| 3.3 | Visualization of Geo-Spatial Data (map.apps) 38 |\n| 3.3.1 | How to visualize geo-spatial data from a dataset resource 38 |\n| 3.4 | Graphical Data Visualisation Tool 43 |\n| 3.4.1 | How to visualize graphical data from a dataset resource 43 |", - "page_start": 1, - "page_end": 1, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "## *5. 
Examining approaches to building a books data commons*\n\nThere are many possible permutations for building a books data commons. To structure our exploration, we focused on two particular tracks, discussed below. We chose these tracks mindful of the above legal issues, and because there are already existence proofs that help to illuminate tradeoffs, challenges and potential paths forward for each.\n\n## *5a. Public domain and permissively licensed books*\n\n#### **Existing Project Example : The Pile v2** 27\n\nIn 2020, the nonprofit research group EleutherAI constructed and released The Pile — a large, diverse, open dataset for AI training. EleutherAI developed it not only to support their own training of LLMs, but also to lower the barriers for others.28\n\nAlong with data drawn from the web at large, The Pile included books from three datasets. The first dataset was the Books3 corpus referenced at the outset of this paper. The second and third books datasets were smaller: BookCorpus2, which is a collection of 17,868 books by otherwise unpublished authors; and a 28,752 books in the public domain and published prior to 1919, drawn from a volunteer effort to digitize public domain works called Project Gutenberg.\n\nAs the awareness about The Pile dataset grew, certain rightsholders began sending copyright notices to have the dataset taken down from various websites.\n\nDespite the takedown requests, the importance of books to EleutherAI and the broader community's AI research remained. In hoping to forge a path forward EleutherAI announced in 2024 that they would create a new version of the dataset, which they will call The Pile v2.29 Among other things, v2 would \"have many more books than the original Pile had, for example, and more diverse representation of non-academic non-fiction domains.\" At the same time, it would only seek to include public domain books and permissively licensed content. 
As before, this corpus focuses on English language books.\n\nThis is an illustrative example, and there are also other projects of this ilk. For instance, see the 27 Common Corpus project, which includes an array of public domain books from a number of countries, at https://huggingface.co./blog/Pclanglais/common-corpus; see also https://huggingface.co./datasets/ storytracer/internet_archive_books_en (\"This dataset contains more than 650,000 English public domain books (~ 61 billion words) which were digitized by the Internet Archive and cataloged as part of the Open Library project.\")\n\nSee Gao et al, supra note 8. 28\n\nGoldman, Sharon. \"One of the World's Largest AI Training Datasets Is About to Get Bigger and 29 \"Substantially Better.\" *VentureBeat*, 11 Jan. 2024, venturebeat.com/ai/one-of-the-worlds-largest-aitraining-datasets-is-about-to-get-bigger-and-substantially-better/. Accessed 20 Mar. 2024.", - "page_start": 12, - "page_end": 12, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "Table 3 Average Pooling vs. Adaptive Pooling. We pool the feature map output by the frozen V-JEPA encoder using an attentive probe, which is then fed into a linear classifier for downstream supervised tasks (K400 and SSv2). We evaluate two pooling strategies: 1) average pooling (Avg.), and attentive pooling (Att.). Results are reported using a single center view. Using adaptive pooling with a crossattention layer leads to improvements of +17.3 points on K400 and +16.1 points on SSv2.\n\n| | | | Frozen Evaluation | | |\n| --- | --- | --- | --- | --- | --- |\n| | | K400 | | SSv2 | |\n| | | (16×1×1) | | (16×1×1) | |\n| Method | Arch. | Avg. | Att. | Avg. | Att. |\n| V-JEPA | ViT-L/16 | 56.7 | 73.7 | 50.1 | 66.2 |\n\nhas been critical for enabling the surge of advancements in other modalities, such as text and images (Kaplan et al., 2020; Cherti et al., 2023). We investigate whether a similar trend holds for video data. 
To control for the possible confounding variable of compute budget, we pretrain all models in Table 2 for 90K iterations using a batch-size of 3072. We report downstream results on K400, SSv2, and IN1K using a frozen backbone with an attentive probe, and report top-1 accuracy using a single center view.\n\nTable 2 shows that average performance across tasks monotonically increases as we increase the size of the pretraining dataset, but the best task-specific performance is obtained by independently selecting the pretraining data for each specific downstream task. For instance, the L/16 obtains its best SSv2 performance when pretrained on K710+SSv2, its best K400 performance when pretrained only on K710, and its best IN1K performance when pretrained only on K710+HT. The best average performance across all tasks is achieved by pretraining VideoMix2M, which combines all the data sources. Similarly, the H/16 pretrained on K710+SSv2 achieves a greater K400 score than the H/16 pretrained on VideoMix2M, however, the top performing H/16 on average is pretrained on VideoMix2M.\n\n### 4.3 Evaluation: Attentive Probing\n\nNext we explore the feature pooling strategy for applying the model's representations in downstream tasks. Since the prediction objective in equation (1) is unnormalized, there is no a priori reason for the encoder to yield a linearly separable subspace (Chen et al., 2020). Thus, rather than using a linear operation (averaging) to pool the features output of the frozen backbone, we explore a learnable non-linear pooling strategy. Specifically, when evaluating the frozen pretrained backbone on downstream tasks, we learn a cross-attention layer with a learnable query token. The output of the crossattention layer is then added back to the query token (residual connection), and then fed into two-layer MLP\n\nTable 4 Ablating Prediction Task. Models are ViT-L/16 networks pretrained on K710 and SSv2 and evaluated with an attentive probe using a single center view. 
The region x is sampled by masking spatio-temporal regions in the video; y is the mask complement. 1) random-tube[r]: x is obtained by masking a fraction r of tubes (spatial patches extended across the entire temporal duration) from the video, 2) causal multi-block[p]: x is restricted to the first p frames of the 16-frame video, which are then masked with a random set of spatio-temporal blocks, 3) multi-block: x is obtained by masking a random set of spatio-temporal blocks from the entire video. Best performance obtained by using multiblock masking.\n\n| | | Frozen Evaluation | |\n| --- | --- | --- | --- |\n| | K400 | SSv2 | IN1K |\n| Masking | (16×1×1) | (16×1×1) | |\n| random-tube[0.9] | 51.5 | 46.4 | 55.6 |\n| causal multi-block[6] | 61.3 | 49.8 | 66.9 |\n| causal multi-block[12] | 71.9 | 63.6 | 72.2 |\n| multi-block | 72.9 | 67.4 | 72.8 |\n\nwith a single GeLU activation, followed by a LayerNorm, and finally a linear classifier.\n\nIn Table 3 we see that using adaptive pooling with a learnable cross-attention layer leads to a significant improvement of +17 points on K400 and +16.1 points on SSv2. Using an attentive-probe is also beneficial for other baseline models as reported in Appendix E.\n\n### 4.4 Prediction Task: Predicting y from x\n\nWe conduct an ablation on the masking strategy used in V-JEPA pretraining. We examine the following masking strategies: random-tube[r] in which x is obtained by removing a random fraction r of tubes (spatial patches extended across the entire temporal duration) from the video, causal multi-block[p] in which x is restricted to the first p frames of the 16-frame video, which are then masked with a random set of spatio-temporal blocks, and multi-block in which x obtained by masking a random set of spatio-temporal blocks from the entire video. 
Spatio-temporal blocks are sampled using the parameters described in Section 3.2; an ablation on the size and quantity of masked spatio-temporal blocks is provided in Appendix E.4.\n\nTable 4 indicates that the best results are obtained by sampling x using a multi-block strategy, wherein the network is forced to make predictions after removing large continuous blocks in the video. When x is only sampled from the first few frames of the video, as in the causal multi-block strategy, we observe a decrease in downstream performances. Finally, the random-tube strategy, wherein 90% of the tubes in the video are randomly masked, leads to features of low-semantic quality when combined with our feature prediction objective.", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv3.pdf" - }, - { - "text": "### *What dataset management practices are necessary?*\n\nNo matter how a books data commons gets built, it will be important to consider broader aspects of data governance. For example:\n\n- **Dataset documentation and transparency:** Transparent documentation is important for any dataset used for AI training. A datasheet is a standardized form of documentation that includes information about provenance and composition of data, and includes information on management practices, recommended uses or collection process.\n- **Quality assurance:** Above, we note the many features that make books useful for AI training, as compared with web data, for example. That said, the institution managing a books commons dataset may still want to collect and curate the collection to meet the particular purposes of its users. For instance, it may want to take steps to mitigate biases inherent in the dataset, by ensuring books are representative of a variety of languages and geographies.\n- **Understanding uses:** The institution managing a books commons dataset could measure and study how the dataset is used, to inform future improvements. 
Such monitoring may also enable accountability measures with respect to uses of the dataset. Introducing community norms for disclosing datasets used in AI training and other forms of AI research would facilitate such monitoring.\n- **Governance mechanisms:** In determining matters like acceptable and ethical use, the fundamental question is \"who decides.\" While this might be settled simply by whoever sets up and operates the dataset and related infrastructure, participatory mechanisms — such as advisory bodies bringing together a broad range of users and stakeholders of a collection — could also be incorporated.", - "page_start": 19, - "page_end": 19, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# **2 Approach**\n\nThe approach used for this User Manual was based on the identification of the main user functions of the Portal and the description of each function from the user's perspective in terms of \"*How to*…\".\n\nEach main function documentation consists of a screen snapshot, the steps required to execute the function and optionally a screenshot with the results.\n\n# **3 Main User Functions of the Portal**\n\nThis section describes all of the main user functions supported by the Portal Version 3.0.\n\n| The table 1-3 below lists the described functions by module. 
|\n| --- |\n\n| | Module Name | Function |\n| --- | --- | --- |\n| 1 | Portal HomePage | - How to browse through the Editorial Content Data) - How to view / search for \"Latest News\" - How to view / search for \"Open Data Events\" |\n| | | (how to access Resources on Open Data: eLearning |\n| | | modules, Training Companion, Reports about Open |\n| | | - How to subscribe to the EDP Newsletter |\n| | | - How to view \"Tweets\" on the EDP |\n| | | - How to switch to another User Language |\n| | | - How to search for EDP Site Content |\n| | | - How to search for Datasets by Data Category |\n| | | - How to search for Datasets by Keyword |\n| 2 | Datasets (Data Platform) | Entering the Datasets-View |\n| | | How to filter datasets by using \"Faceted Search\" |\n| | | How to store personal queries |\n| | | How to filter datasets by geographical area |\n| | | How to download dataset distributions |\n| | | How to view licensing information |\n| | | How to switch to another user language |\n| | | How to browse by data catalogues |\n| 3 | Visualization of Geo-Spatial | How to visualize geo-spatial data from a dataset resource |\n| | Data (map.apps) | |\n| 4 | Graphical Data Visualisation | How to visualize graphical data from a dataset resource |\n| | Tool | |\n| 5 | Help Desk | How to contact The Portal's Help Desk |\n| 6 | Metadata Quality Assurance | Monitoring tool for the metadata quality: |\n| | (MQA) | ‐ The Global Dashboard View |\n| | | ‐ The Catalogue details view |\n| 7 | SPARQL Manager | How to run SPARQL Queries using: |\n| | | - SPARQL Search |", - "page_start": 5, - "page_end": 5, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "# Q4: Are there any correlations between datasets with respect to model ranking?\n\nThe datasets correlation w.r.t model ranking are presented in appendix Figure 12. Except for two datasets (*MasakhaNEWSClusteringP2P*, *SummEvalFr*), the correlations, on average, are high. 
There is still enough diversity to make each dataset interesting for the French MTEB benchmark. Two groups (*SyntecReranking*/ *SyntecRetrieval*, *MassiveScenarioClassification*/ *MTOPDomainClassification*/ *MassiveIntentClassification*) exhibit notably high correlations (∼0.97). It is interesting to point out some sub-diagonal correlation blocks. The datasets being arranged by task indicate that models behave slightly more similarly within the same task than between two different tasks. This underscores the importance of having multiple tasks in the benchmark to select general-purpose models. For readers interested in specific tasks, it is more relevant to examine task-specific rankings rather than the overall one. The complementary results of model correlations w.r.t to strengths and weaknesses on datasets are displayed in appendix Figure 11. Strong correlations in behavior emerge among the variants of the same models (e.g. DistilBERT, sentence-croissant, sentence-t5, e5, etc.). Correlations are also generally observed among numerous models trained using the sentence transformers framework (Reimers and Gurevych, 2019), as well as proprietary models, e.g. from Cohere and OpenAI. Conversely, these models finetuned for sentence similarity, show minimal correlation with pre-trained models for which tokenembedding pooling techniques are employed.\n\n# 5 Conclusion and perspectives\n\nIn this work, we introduce a large-scale embedding benchmark for French to enable the research community and industry to select the most relevant embedding methods based on their specific needs. We undertake significant efforts in collecting 15 datasets and create 3 new quality-checked ones to enhance this collection. The whole French benchmark runs on 26 tasks. We select a diverse range of 51 models, including prominent French and multilingual models deemed most efficient to conduct a broad comparison. 
Our implementation is open to the community and features a public leaderboard, allowing the results to evolve with new models or datasets. After an in-depth analysis of the results, OpenAI models perform significantly better than\n\nthe other models. However, other models should be considered for their performance on specific tasks, being open source or having a small embedding dimension.\n\nThis work opens several doors for future improvements. By examining dataset diversity in terms of topics and model ranking, we observe that the benchmark would benefit from additional datasets that introduce higher diversity. Beyond classification, many tasks focus on semantic similarity, explaining the strong performance of models trained for similarity. Exploring novel tasks in the generative spectrum or evaluating token embeddings (contextualized or not) on tasks like Named Entity Recognition could be an interesting path for future exploration. There are also opportunities for improvements on the model side. With numerous existing models that could be added to the leaderboard and many new proposals awaiting. For instance, we can already see the promising capabilities of early variants of recent models (Faysse et al., 2024) and expect that future proposals will come to compete strongly with closed-source models. Ultimately, we hope to see the emergence of other language-specific MTEB variants (e.g. for high-resource languages like Spanish and German), enabling a more comprehensive evaluation of multilingual model performance.\n\n# 6 Limitations\n\nNative French resources unavailability The availability of resources natively in French is an obvious limitation of our work. Regarding models, there are far fewer options than with more widespread languages such as English. 
Indeed, most of the existing French embedding models we found are trained using either older architectures or methods, unlike most recent multilingual models such as *NV-Embed-v1* (Lee et al., 2024) or *e5 mistral-7b-instruct* (Wang et al., 2023). Comparing models by family would be beneficial, particularly for evaluating French models against multilingual models on the same architecture using the same training technique. Resource limitations also apply to datasets. For example, the summarization task dataset is translated, which can be less relevant than a natively French dataset. We have also built datasets for reranking tasks using existing ones from retrieval task because we could not find any in French. This construction process introduces a bias as the model performance on both tasks may be", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv4.pdf" - }, - { - "text": "| Dataset | Syntec | HAL | SummEvalFr |\n| --- | --- | --- | --- |\n| Samples | 100 queries | 26233 samples | 100 texts |\n| | 90 documents | 10 classes | 1100 human summaries |\n| | | | 1600 machine summaries |\n| Creation process | Scraping of Syntec col | Scraping of HAL arti | Translation from English |\n| | lective bargaining agree | cles with id, title and do | to French with Deepl of |\n| | ment with articles as doc | main. Further cleaning | the SummEval dataset. |\n| | uments. Writing queries | with deduplication, lan | |\n| | corresponding to articles. | guage filtering and class | |\n| | | subsampling. | |\n| Annotation process | 4 annotators divided into | Annotations provided by | Detailed annotation pro |\n| | 2 groups. Each group was | authors when submitting | cess provided in Fabbri |\n| | given half of the articles | their paper. They choose | et al. (2021). |\n| | and asked to choose an ar | the domain between exist | |\n| | ticle and ask a question | ing academic fields. | |\n| | about it. Each annotator | | |\n| | wrote 25 questions. 
| | |\n| Quality checks | Human verification of an | Baseline models for clas | Correlation between |\n| | notations. | sification and topic model | BLEU and ROUGE |\n| | | ing. | scores of the French |\n| | | | and the original English |\n| | | | datasets. LLM as-a-judge |\n| | | | translation rating and |\n| | | | human verification. |\n\nTable 1: New datasets details with the number of samples, the creation process, the annotation process and the quality checks. All datasets are test splits.\n\n- Samples belonging to *domain* classes with less than 500 samples were removed, which leads us to keep only 10 classes.\n- Subsampling was performed on 2 classes containing more than 10k samples each to lower the number of samples and mitigate the unbalance of the dataset.\n\nMore details about this process are provided in the appendix A.2 along with some extracts in Figure 6. We make the dataset publicly available in both their raw and clean versions. We use this dataset in a clustering setup to cluster publications by their title and use the domain as ground truth. To ensure the quality of this dataset, we run 3 baseline models for classification: *TF-IDF + SVM*, a fine-tuned *Camembert* (Martin et al., 2019) and *GPT-4* leveraging In-Context Learning (ICL). Furthermore, we run one baseline model for topic modeling: Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and report scores in the appendix A.2.\n\n#### 3.1.3 SummEvalFr (Summarization)\n\nThe original SummEval dataset (Fabbri et al., 2021) consists of 100 news articles from the CNN/Dai-\n\nlyMail dataset. Each article has 11 human-written summaries and 16 machine-generated summaries annotated by 8 people with a score for coherence, consistency, fluency, and relevance. We translated it from English to French using DeepL API6 . 
Since MTEB evaluation is based on the embedding similarity between machine-generated and humangenerated summaries, we propose to compute the ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002) metrics between machine and human summaries for both French and English version. In Table 2, we report the average of the scores as well as their correlations between the two languages. The correlation is high (above 0.7), showing that the word and n-gram overlap between human and machine summaries is highly preserved in the French version. One may argue that computing the metric on fully translated texts (human and machine summaries are both translated from English) may introduce biases and not assess the quality of the translations. For this purpose, we ensure the French human summaries are correctly translated from English. We use an LLM as-a-judge (Zheng et al.,\n\n6 https://www.deepl.com", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv4.pdf" - }, - { - "text": "2023) where given the original human summary in English and its translation in French, the model rates the quality of the translation from 0 to 10, with 0 being of very bad quality and 10 being excellent. The prompt is available in Figure 8. Additionally, we manually check random translations with ratings between 9 and 10 to ensure the rating is relevant. We do the same for all translations with a score less than 9 and correct them7 (see the rating distribution in Table 6).\n\n| Dataset | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L |\n| --- | --- | --- | --- | --- |\n| SummEval | 0.205 | 0.292 | 0.099 | 0.193 |\n| SummEvalFr | 0.276 | 0.302 | 0.117 | 0.194 |\n| Correlation En-Fr | 0.70 | 0.85 | 0.80 | 0.84 |\n\nTable 2: Average ROUGE and BLUE scores computed between machine summaries and human summaries for the original English SummEval and its translation to French. 
The correlations of the individual scores between English and French are also reported.\n\n#### 3.1.4 Data for the Reranking task\n\nThe reranking task, as evaluated in MTEB, requires datasets composed of a set of queries, each associated with relevant and irrelevant documents. Despite our efforts, we found no French dataset that natively exhibits such a structure. Thus, to evaluate this task, we built data for the reranking task based on the *Syntec* and *Alloprof* (Lefebvre-Brossard et al., 2023) datasets. These already feature queries and labeled relevant documents. Irrelevant ones were added using the following process:\n\n- To avoid bias, we use the BM25 algorithm (Robertson and Jones, 1976) (which is a deterministic method) to rank documents in terms of relevance regarding each query.\n- The top 10 documents that are not labeled as relevant constitute the negative samples.\n\nWe recognize that this process leads to a high correlation between the retrieval and reranking tasks. We still think it is essential to make the latter available, with an open door to future improvement8 .\n\n#### 3.1.5 Similarity analysis\n\nWe investigate the proximity between the datasets' topics to give insights about the benchmark contents. The methodology introduced by Muennighoff et al. (2022), i.e. computing an average embedding of samples from each dataset, is used to build a dataset-similarity matrix (displayed in appendix Figure 3). The distances between averaged embedding vectors of each dataset (which range from 0.89 to 1 in Figure 3) remain hard to interpret into a dataset semantic proximity. Thus, we complement this by observing the dataset's clouds of embedding in a 2D plane using PCA in Figure 4.\n\nFigures 4 and 3 seem to correlate, showing high similarity between two datasets when the same underlying data is used in different tasks. Dataset topics are pretty close, with some exceptions, such as the Syntec dataset. 
As more datasets are added to the benchmark, this analysis will help select new data that do not produce redundant results. It may also help to understand the link between the results and the datasets' topics.\n\n# 3.2 Models\n\nFor comparison on our benchmark, we selected various models to fulfil three objectives.\n\n- Quantity: The aim was to compare a substantial number of models (51 in total) to provide comprehensive results, facilitating the community in selecting effective French models.\n- Relevance: It was imperative to include top performers from the MTEB benchmark (Muennighoff et al., 2022). We mainly selected multilingual models and some English models to asses their language-transferring abilities. Additionally, we integrated natively French transformer-based models such as *CamemBERT* (Martin et al., 2019), *FlauBERT* (Le et al., 2020) and even the very recent *CroissantLLM* (Faysse et al., 2024).\n- Variety: Diverse model types were included to offer an insightful analysis across various model characteristics (dimension, training strategy, etc.).\n\nIn line with the third objective, we explicit below the studied characteristics of embedding models that will be discussed with the results.\n\n- *Embedding dimension:* This critical element influences the expressiveness of the represen-\n7 SummEvalFr available at: https://huggingface.co./ datasets/lyon-nlp/summarization-summeval-fr-p2p\n\n8 SyntecReranking available at: https: //huggingface.co/datasets/lyon-nlp/ mteb-fr-reranking-syntec-s2p and AlloprofRerank-\n\ning available at: https://huggingface.co./datasets/ lyon-nlp/mteb-fr-reranking-alloprof-s2p", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv4.pdf" - }, - { - "text": "### **3.4 Graphical Data Visualisation Tool**\n\nThis section describes the features of the graphical visualisation tool for numeric data. 
The features are currently available for XLS (Excel) and CSV files, except for the selection of the sheet name which is applicable only for Excel files.\n\nMost GUI elements from the \"Graph\" tab (records selection, search box, filters and fields buttons) are also available on the \"Grid\" tab and work in the same way.\n\n#### **3.4.1 How to visualize graphical data from a dataset resource**\n\nAs a result of a dataset search, the system displays on the \"Dataset\" tab all distributions (resource/data files) that are part of the selected dataset. Each XLS or CSV distribution of the dataset can be further explored by clicking on \"Open Visualization\" under the \"Options\" button – if available.\n\n| C | What we do ▼ Data ▼ Using Data · | Providing Data · | Resources · |\n| --- | --- | --- | --- |\n| | Dataset Categories Similar Datasets | | Feedback |\n| | English Indices of Deprivation 2010 | | |\n| | 2 data.gov.uk | | Updated: - |\n| | The English Indices of Deprivation 2010 provide a relative measure of deprivation at small area level across | | |\n| | England. Areas are ranked from least deprived to most deprived on seven different dimensions of deprivation and | | |\n| | an overall composite measure of multiple deprivation. Most of the data underlying the 2010 Indices are for the | | |\n| | year 2008. The domains used in the Indices of Deprivation 2010 are: income deprivation; employment deprivation; | | |\n| | health deprivation and disability; education deprivation; crime deprivation; barriers to housing and services | | |\n| | deprivation; and living environment deprivation. Each of these domains has its own scores and ranks, allowing | | |\n| | users to focus on specific aspects of deprivation. In addition, two supplementary indices measure income | | |\n| | deprivation amongst children - the Income Deprivation Affecting Children Index (IDACI) - and older people - the | | |\n| | Income Deprivation Affecting Older People Index (IDAOPI). 
| | |\n| i | An updated translation of this dataset is in progress. | | × |\n| | Distributions (19) | | |\n| xr2 | 2010: Supplementary indices children & older people Options | | Download ~ |\n| | Licence: open-government-licence Open Visualisation | | |\n| CSV | 2010: All domains, sub domains & supplementary indices Options V | | Download v |\n| | Licence: open-government-licence | | |\n| ਮਾਟ | 2010: Sub-domains barriers to housing & services Options V | | Download v |", - "page_start": 42, - "page_end": 42, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - } - ] - }, - { - "references": { - "source_file": "news1.pdf", - "query": "Where will the 2024 AI + Energy summit take place ?", - "target_page": 1, - "target_passage": "The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Home / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n#### ARTS AND ENTERTAINMENT\n\n# New Artificial Intelligence Summit Series Begins With Energy\n\n### 07/31/2024\n\n (AI) continues to transform the United States and the world. To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. 
The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent \"Action Plan for U.S. Leadership in Next-Generation Energy,\" raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. The stakes are high; if the U.S. falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\n#### Article Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n#### RELATED ARTICLES\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage Mar 06, 2024\n\n| CATEGORIES |\n| --- |\n| FASHION |\n| BUSINESS |\n| INFOGRAPHIC |\n| ENVIRONMENT |\n| HEALTH |\n| MONEY |\n| FOOD |\n| TRAVEL |\n| BRIDAL |\n| RECREATION |\n| TECHNOLOGY |\n| HOME |\n| EDUCATION |\n| ARTS & ENTERTAINMENT |\n| AUTO |\n| CHILDREN |\n| FITNESS |\n| HOLIDAY |\n| INSURANCE |\n| LAWN & GARDEN |\n| LISTICLE |\n| NUTRITION |\n| PARENTING |\n| PETS |\n| SEASONAL |\n\nMar 06, 2024\n\nCelebrate St. 
Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\n#### Mar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\n#### Mar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nSPANISH\n\nSENIORS\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK_REVIEW\n\nRECIPE\n\nAFRICAN_AMERICANS\n\nHOW_TO\n\nBYLINED_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\n## RECENT POSTS\n\n| 01 | School Choice Combines Nature And |\n| --- | --- |\n| | Nuture for Success |\n| 02 | Think Outside the (Gift) Box, Contribute to a 529 Plan |\n| 03 | Black Friday Bonanza—Don't Miss These Hot Gifts |\n| | Self-Publishing Helps Parents Share New |\n| 04 | Books with Kids |\n| 05 | Five Tips to Safely Manage Medications |\n| 06 | Self-care on Your Schedule with Mental |\n| | Wellness App |\n\n#### MOST POPULAR\n\nEntrepreneur Inspires Youth with Community Projects 08 Jul 21\n\nWho Celebrates National School Choice Week? 
22 Jan 18\n\nNo Arms, No Legs, No Worries 13 Dec 18\n\nScent-imental: Holiday Smells Evoke Happy Memories 30 Oct 18\n\nTechnology Breakthroughs Drive Clean Energy Success 01 Oct 18\n\nSafety App Empowers Students, Offers Peace of Mind\n\n| TAGS | |\n| --- | --- |\n| Fashion | Business Infographic |\n| Environment | Health Money |\n| Food Travel | Bridal Recreation |\n| Technology | Home Education |\n| Arts & Entertainment | Auto Children |\n| Fitness | Holiday Insurance |\n| Lawn & Garden | Listicle Nutrition |\n| Parenting | Pets Seasonal Seniors |\n| Spanish | Tips and How To |\n| Entertainment | Career Community |\n| Family Tips | Internet |\n| Human_Interest | Beauty Arts |\n| RealEstate | Safety Medicine |\n| Book_Review | Recipe |\n| African_Americans | How_To |\n| Bylined_Column | Charity Sports |\n| Home_Improvement | Tech Wellness |\n| Arts and Entertainment | Food & Drink |\n| Real_Estate | Veterans Outdoors |\n| Real Estate | Human Interest |\n| Money & Finance | Fashion & Beauty |\n| Money and Finance | |\n| Books & Entertainment | Books |\n| Arts & Entertainment | |\n\nContact Us Work From Home Privacy Policy Terms of Use", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "- 200. \"Big tech and the pursuit of AI dominance\" (https://www.economist.com/business/2023/03/2 6/big-tech-and-the-pursuit-of-ai-dominance). *The Economist*. 26 March 2023. Archived (http s://web.archive.org/web/20231229021351/https://www.economist.com/business/2023/03/26/ big-tech-and-the-pursuit-of-ai-dominance) from the original on 29 December 2023.\n- 201. Fung, Brian (19 December 2023). \"Where the battle to dominate AI may be won\" (https://ww w.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html). *CNN Business*. Archived (https://web.archive.org/web/20240113053332/https://www.cnn.com/2023/12/19/tech/cloudcompetition-and-ai/index.html) from the original on 13 January 2024.\n- 202. Metz, Cade (5 July 2023). 
\"In the Age of A.I., Tech's Little Guys Need Big Friends\" (https://w ww.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html). *The New York Times*. Archived (https://web.archive.org/web/20240708214644/https://www.nytim es.com/2023/07/05/business/artificial-intelligence-power-data-centers.html) from the original on 8 July 2024. Retrieved 5 October 2024.\n- 203. \"Electricity 2024 Analysis\" (https://www.iea.org/reports/electricity-2024). *IEA*. 24 January 2024. Retrieved 13 July 2024.\n- 204. Calvert, Brian (28 March 2024). \"AI already uses as much energy as a small country. It's only the beginning\" (https://www.vox.com/climate/2024/3/28/24111721/ai-uses-a-lot-of-ener gy-experts-expect-it-to-double-in-just-a-few-years). *Vox*. New York, New York. Archived (http s://web.archive.org/web/20240703080555/https://www.vox.com/climate/2024/3/28/2411172 1/ai-uses-a-lot-of-energy-experts-expect-it-to-double-in-just-a-few-years) from the original on 3 July 2024. Retrieved 5 October 2024.\n- 205. Halper, Evan; O'Donovan, Caroline (21 June 2024). \"AI is exhausting the power grid. Tech firms are seeking a miracle solution\" (https://www.washingtonpost.com/business/2024/06/2 1/artificial-intelligence-nuclear-fusion-climate/?utm_campaign=wp_post_most&utm_medium =email&utm_source=newsletter&wpisrc=nl_most&carta-url=https%3A%2F%2Fs2.washingto npost.com%2Fcar-ln-tr%2F3e0d678%2F6675a2d2c2c05472dd9ec0f4%2F596c09009bbc0f 20865036e7%2F12%2F52%2F6675a2d2c2c05472dd9ec0f4). *Washington Post*.\n- 206. Davenport, Carly. \"AI Data Centers and the Coming YS Power Demand Surge\" (https://web. archive.org/web/20240726080428/https://www.goldmansachs.com/intelligence/pages/gs-res earch/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf) (PDF). *Goldman Sachs*. 
Archived from the original (https://www.goldmansachs.com/intellige nce/pages/gs-research/generational-growth-ai-data-centers-and-the-coming-us-power-surg e/report.pdf) (PDF) on 26 July 2024. Retrieved 5 October 2024.\n- 207. Ryan, Carol (12 April 2024). \"Energy-Guzzling AI Is Also the Future of Energy Savings\" (http s://www.wsj.com/business/energy-oil/ai-data-centers-energy-savings-d602296e). *Wall Street Journal*. Dow Jones.\n- 208. Hiller, Jennifer (1 July 2024). \"Tech Industry Wants to Lock Up Nuclear Power for AI\" (https:// www.wsj.com/business/energy-oil/tech-industry-wants-to-lock-up-nuclear-power-for-ai-6cb7 5316?mod=djem10point). *Wall Street Journal*. Dow Jones. Archived (https://web.archive.or g/web/20241005165650/https://www.wsj.com/business/energy-oil/tech-industry-wants-to-loc k-up-nuclear-power-for-ai-6cb75316?mod=djem10point) from the original on 5 October 2024. Retrieved 5 October 2024.\n- 209. Kendall, Tyler (28 September 2024). \"Nvidia's Huang Says Nuclear Power an Option to Feed Data Centers\" (https://www.bloomberg.com/news/articles/2024-09-27/nvidia-s-huang-s ays-nuclear-power-an-option-to-feed-data-centers). *Bloomberg*.\n- 210. Halper, Evan (20 September 2024). \"Microsoft deal would reopen Three Mile Island nuclear plant to power AI\" (https://www.washingtonpost.com/business/2024/09/20/microsoft-three-mi le-island-nuclear-constellation). *Washington Post*.", - "page_start": 41, - "page_end": 41, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 314. Milmo, Dan (3 November 2023). \"Hope or Horror? The great AI debate dividing its pioneers\". *The Guardian Weekly*. pp. 10–12.\n- 315. \"The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023\" (https://web.archive.org/web/20231101123904/https://www.gov.uk/government/public ations/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countrie s-attending-the-ai-safety-summit-1-2-november-2023). *GOV.UK*. 1 November 2023. 
Archived from the original (https://www.gov.uk/government/publications/ai-safety-summit-20 23-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-s ummit-1-2-november-2023) on 1 November 2023. Retrieved 2 November 2023.\n- 316. \"Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration\" (https://www.gov.uk/government/news/countries-agree-to-safe-and-responsible -development-of-frontier-ai-in-landmark-bletchley-declaration). *GOV.UK* (Press release). Archived (https://web.archive.org/web/20231101115016/https://www.gov.uk/government/ne ws/countries-agree-to-safe-and-responsible-development-of-frontier-ai-in-landmark-bletchle y-declaration) from the original on 1 November 2023. Retrieved 1 November 2023.\n- 317. \"Second global AI summit secures safety commitments from companies\" (https://www.reuter s.com/technology/global-ai-summit-seoul-aims-forge-new-regulatory-agreements-2024-05-2 1). Reuters. 21 May 2024. Retrieved 23 May 2024.\n- 318. \"Frontier AI Safety Commitments, AI Seoul Summit 2024\" (https://web.archive.org/web/2024 0523201611/https://www.gov.uk/government/publications/frontier-ai-safety-commitments-aiseoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024). gov.uk. 21 May 2024. Archived from the original (https://www.gov.uk/government/publications/frontier-ai-safe ty-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-202 4) on 23 May 2024. Retrieved 23 May 2024.\n- 319. Russell & Norvig 2021, p. 9.\n- 320. Copeland, J., ed. (2004). *The Essential Turing: the ideas that gave birth to the computer age*. Oxford, England: Clarendon Press. ISBN 0-1982-5079-7.\n- 321. \"Google books ngram\" (https://books.google.com/ngrams/graph?content=electronic+brain& year_start=1930&year_end=2019&corpus=en-2019&smoothing=3). 
Archived (https://web.ar chive.org/web/20241005170209/https://books.google.com/ngrams/graph?content=electronic +brain&year_start=1930&year_end=2019&corpus=en-2019&smoothing=3) from the original on 5 October 2024. Retrieved 5 October 2024.\n- 322. AI's immediate precursors: McCorduck (2004, pp. 51–107), Crevier (1993, pp. 27–32), Russell & Norvig (2021, pp. 8–17), Moravec (1988, p. 3)\n- 323. Turing's original publication of the Turing test in \"Computing machinery and intelligence\": Turing (1950) Historical influence and philosophical implications: Haugeland (1985, pp. 6– 9), Crevier (1993, p. 24), McCorduck (2004, pp. 70–71), Russell & Norvig (2021, pp. 2, 984)\n- 324. Crevier (1993), pp. 47–49.\n- 325. Russell & Norvig (2003), p. 17.\n- 326. Russell & Norvig (2003), p. 18.\n- 327. Newquist (1994), pp. 86–86.\n- 328. Simon (1965, p. 96) quoted in Crevier (1993, p. 109)\n- 329. Minsky (1967, p. 2) quoted in Crevier (1993, p. 109)\n- 330. Russell & Norvig (2021), p. 21.\n- 331. Lighthill (1973).\n- 332. NRC 1999, pp. 212–213.\n- 333. Russell & Norvig (2021), p. 22.\n- 334. Expert systems: Russell & Norvig (2021, pp. 23, 292), Luger & Stubblefield (2004, pp. 227– 331), Nilsson (1998, chpt. 17.4), McCorduck (2004, pp. 327–335, 434–435), Crevier (1993, pp. 145–162, 197–203), Newquist (1994, pp. 155–183)", - "page_start": 47, - "page_end": 47, - "source_file": "wikipedia3.pdf" - }, - { - "text": "April 2024", - "page_start": 0, - "page_end": 0, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "- 282. Arguments that AI is not an imminent risk: Brooks (2014), Geist (2015), Madrigal (2015), Lee (2014)\n- 283. Christian (2020), pp. 67, 73.\n- 284. Yudkowsky (2008).\n- 285. Anderson & Anderson (2011).\n- 286. AAAI (2014).\n- 287. Wallach (2010).\n- 288. Russell (2019), p. 173.\n- 289. Stewart, Ashley; Melton, Monica. \"Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup\" (https://www.businessinsider. 
com/hugging-face-open-source-ai-approach-2023-12). *Business Insider*. Archived (https://w eb.archive.org/web/20240925013220/https://www.businessinsider.com/hugging-face-open-s ource-ai-approach-2023-12) from the original on 25 September 2024. Retrieved 14 April 2024.\n- 290. Wiggers, Kyle (9 April 2024). \"Google open sources tools to support AI model development\" (https://techcrunch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-develop ment). *TechCrunch*. Archived (https://web.archive.org/web/20240910112401/https://techcrun ch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-development/) from the original on 10 September 2024. Retrieved 14 April 2024.\n- 291. Heaven, Will Douglas (12 May 2023). \"The open-source AI boom is built on Big Tech's handouts. How long will it last?\" (https://www.technologyreview.com/2023/05/12/1072950/op en-source-ai-google-openai-eleuther-meta). *MIT Technology Review*. Retrieved 14 April 2024.\n- 292. Brodsky, Sascha (19 December 2023). \"Mistral AI's New Language Model Aims for Open Source Supremacy\" (https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-o pen-source-supremacy). *AI Business*. Archived (https://web.archive.org/web/202409052126 07/https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-open-source-supre macy) from the original on 5 September 2024. Retrieved 5 October 2024.\n- 293. Edwards, Benj (22 February 2024). \"Stability announces Stable Diffusion 3, a next-gen AI image generator\" (https://arstechnica.com/information-technology/2024/02/stability-announc es-stable-diffusion-3-a-next-gen-ai-image-generator). *Ars Technica*. Archived (https://web.ar chive.org/web/20241005170201/https://arstechnica.com/information-technology/2024/02/sta bility-announces-stable-diffusion-3-a-next-gen-ai-image-generator/) from the original on 5 October 2024. Retrieved 14 April 2024.\n- 294. Marshall, Matt (29 January 2024). 
\"How enterprises are using open source LLMs: 16 examples\" (https://venturebeat.com/ai/how-enterprises-are-using-open-source-llms-16-exa mples). *VentureBeat*. Archived (https://web.archive.org/web/20240926171131/https://ventur ebeat.com/ai/how-enterprises-are-using-open-source-llms-16-examples/) from the original on 26 September 2024. Retrieved 5 October 2024.\n- 295. Piper, Kelsey (2 February 2024). \"Should we make our most powerful AI models open source to all?\" (https://www.vox.com/future-perfect/2024/2/2/24058484/open-source-artificial -intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake). *Vox*. Archived (https://web.archi ve.org/web/20241005170204/https://www.vox.com/future-perfect/2024/2/2/24058484/open-s ource-artificial-intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake) from the original on 5 October 2024. Retrieved 14 April 2024.\n- 296. Alan Turing Institute (2019). \"Understanding artificial intelligence ethics and safety\" (https:// www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and _safety.pdf) (PDF). Archived (https://web.archive.org/web/20240911131935/https://www.turi ng.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety. pdf) (PDF) from the original on 11 September 2024. Retrieved 5 October 2024.", - "page_start": 45, - "page_end": 45, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Franzen) sued AI companies for using their work to train generative AI.[195][196] Another discussed approach is to envision a separate *sui generis* system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[197]\n\n#### **Dominance by tech giants**\n\nThe commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. 
[198][199][200] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.[201][202]\n\n#### **Power needs and environmental impacts**\n\nIn January 2024, the International Energy Agency (IEA) released *Electricity 2024, Analysis and Forecast to 2026*, forecasting electric power use.[203] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation.[204]\n\nProdigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. 
AI makes the power grid more efficient and \"intelligent\", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms.[205]\n\nA 2024 Goldman Sachs Research Paper, *AI Data Centers and the Coming US Power Demand Surge*, found \"US power demand (is) likely to experience growth not seen in a generation....\" and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[206] Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[207]\n\nIn 2024, the *Wall Street Journal* reported that big AI companies have begun negotiations with the US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 Million (US).[208] Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers.[209]\n\nIn September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this will be the first ever US re-commissioning of a nuclear plant), over 835 megawatts of power – enough for 800,000 homes – of", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 265. Cellan-Jones (2014).\n- 266. Russell & Norvig 2021, p. 1001.\n- 267. Bostrom (2014).\n- 268. Russell (2019).\n- 269. 
Bostrom (2014); Müller & Bostrom (2014); Bostrom (2015).\n- 270. Harari (2023).\n- 271. Müller & Bostrom (2014).\n- 272. Leaders' concerns about the existential risks of AI around 2015: Rawlinson (2015), Holley (2015), Gibbs (2014), Sainato (2015)\n- 273. \" \"Godfather of artificial intelligence\" talks impact and potential of new AI\" (https://www.cbsne ws.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai). *CBS News*. 25 March 2023. Archived (https://web.archive.org/web/20230328225221/https://www. cbsnews.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai) from the original on 28 March 2023. Retrieved 28 March 2023.\n- 274. Pittis, Don (4 May 2023). \"Canadian artificial intelligence leader Geoffrey Hinton piles on fears of computer takeover\" (https://www.cbc.ca/news/business/ai-doom-column-don-pittis-1.6829302). *CBC*. Archived (https://web.archive.org/web/20240707032135/https://www.cbc. ca/news/business/ai-doom-column-don-pittis-1.6829302) from the original on 7 July 2024. Retrieved 5 October 2024.\n- 275. \" '50–50 chance' that AI outsmarts humanity, Geoffrey Hinton says\" (https://www.bnnbloomb erg.ca/50-50-chance-that-ai-outsmarts-humanity-geoffrey-hinton-says-1.2085394). *Bloomberg BNN*. 14 June 2024. Retrieved 6 July 2024.\n- 276. Valance (2023).\n- 277. Taylor, Josh (7 May 2023). \"Rise of artificial intelligence is inevitable but should not be feared, 'father of AI' says\" (https://www.theguardian.com/technology/2023/may/07/rise-of-arti ficial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says). *The Guardian*. Archived (https://web.archive.org/web/20231023061228/https://www.theguardian.com/techn ology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-fatherof-ai-says) from the original on 23 October 2023. Retrieved 26 May 2023.\n- 278. Colton, Emma (7 May 2023). 
\" 'Father of AI' says tech fears misplaced: 'You cannot stop it' \" (https://www.foxnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-can not-stop). *Fox News*. Archived (https://web.archive.org/web/20230526162642/https://www.fo xnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-cannot-stop) from the original on 26 May 2023. Retrieved 26 May 2023.\n- 279. Jones, Hessie (23 May 2023). \"Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life's Work Won't Lead To Dystopia\" (https://www.forbes.com/sites/hessiejones/20 23/05/23/juergen-schmidhuber-renowned-father-of-modern-ai-says-his-lifes-work-wont-leadto-dystopia). *Forbes*. Archived (https://web.archive.org/web/20230526163102/https://www.fo rbes.com/sites/hessiejones/2023/05/23/juergen-schmidhuber-renowned-father-of-modern-ai -says-his-lifes-work-wont-lead-to-dystopia/) from the original on 26 May 2023. Retrieved 26 May 2023.\n- 280. McMorrow, Ryan (19 December 2023). \"Andrew Ng: 'Do we think the world is better off with more or less intelligence?' \" (https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f93 52be3). *Financial Times*. Archived (https://web.archive.org/web/20240125014121/https://ww w.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3) from the original on 25 January 2024. Retrieved 30 December 2023.\n- 281. Levy, Steven (22 December 2023). \"How Not to Be Stupid About AI, With Yann LeCun\" (http s://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview). *Wired*. Archived (h ttps://web.archive.org/web/20231228152443/https://www.wired.com/story/artificial-intelligenc e-meta-yann-lecun-interview/) from the original on 28 December 2023. 
Retrieved 30 December 2023.", - "page_start": 44, - "page_end": 44, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[300]\n\nThe UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.[301]\n\n#### **Regulation**\n\nThe regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[302] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. [303] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[304][305] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[306] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and\n\nThe first global AI Safety Summit was held in 2023 with a declaration calling for international cooperation.\n\nVietnam. 
Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[306] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. [306] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[307] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[308] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, governments officials and academics.[309] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the \"Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law\". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[310]\n\nIn a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that \"products and services using AI have more benefits than drawbacks\".[304] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. [311] In a 2023 Fox News poll, 35% of Americans thought it \"very important\", and an additional 41% thought it \"somewhat important\", for the federal government to regulate AI, versus 13% responding \"not very important\" and 8% responding \"not at all important\".[312][313]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 160. 
Alex McFarland: *7 Best AI for Math Tools.* (https://www.unite.ai/best-ai-for-math-tools/) Archived (https://web.archive.org/web/20240911125615/https://www.unite.ai/best-ai-for-mat h-tools/) 11 September 2024 at the Wayback Machine unite.ai. Retrieved 2024-08-07\n- 161. Matthew Finio & Amanda Downie: IBM Think 2024 Primer, \"What is Artificial Intelligence (AI) in Finance?\" 8 Dec. 2023\n- 162. M. Nicolas, J. Firzli: Pensions Age/European Pensions magazine, \"Artificial Intelligence: Ask the Industry\" May June 2024 https://videovoice.org/ai-in-finance-innovationentrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligence-act-wont-work-asintended/ Archived (https://web.archive.org/web/20240911125502/https://videovoice.org/ai-i n-finance-innovation-entrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligenceact-wont-work-as-intended/) 11 September 2024 at the Wayback Machine.\n- 163. Congressional Research Service (2019). *Artificial Intelligence and National Security* (https://f as.org/sgp/crs/natsec/R45178.pdf) (PDF). Washington, DC: Congressional Research Service.PD-notice\n- 164. Slyusar, Vadym (2019). Artificial intelligence as the basis of future control networks (Preprint). doi:10.13140/RG.2.2.30247.50087 (https://doi.org/10.13140%2FRG.2.2.30247.5 0087).\n- 165. Iraqi, Amjad (3 April 2024). \" 'Lavender': The AI machine directing Israel's bombing spree in Gaza\" (https://www.972mag.com/lavender-ai-israeli-army-gaza/). *+972 Magazine*. Retrieved 6 April 2024.\n- 166. Davies, Harry; McKernan, Bethan; Sabbagh, Dan (1 December 2023). \" 'The Gospel': how Israel uses AI to select bombing targets in Gaza\" (https://www.theguardian.com/world/2023/ dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets). *The Guardian*. Retrieved 4 December 2023.\n- 167. Marti, J Werner (10 August 2024). 
\"Drohnen haben den Krieg in der Ukraine revolutioniert, doch sie sind empfindlich auf Störsender – deshalb sollen sie jetzt autonom operieren\" (http s://www.nzz.ch/international/die-ukraine-setzt-auf-drohnen-die-autonom-navigieren-und-toet en-koennen-ld.1838731). *Neue Zürcher Zeitung* (in German). Retrieved 10 August 2024.\n- 168. Newsom, Gavin; Weber, Shirley N. (6 September 2023). \"Executive Order N-12-23\" (https:// www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pdf) (PDF). Executive Department, State of California. Archived (https://web.archive.org/web/202402212 22035/https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pd f) (PDF) from the original on 21 February 2024. Retrieved 7 September 2023.\n- 169. Pinaya, Walter H. L.; Graham, Mark S.; Kerfoot, Eric; Tudosiu, Petru-Daniel; Dafflon, Jessica; Fernandez, Virginia; Sanchez, Pedro; Wolleb, Julia; da Costa, Pedro F.; Patel, Ashay (2023). \"Generative AI for Medical Imaging: extending the MONAI Framework\". arXiv:2307.15208 (https://arxiv.org/abs/2307.15208) [eess.IV (https://arxiv.org/archive/eess.I V)].\n- 170. Griffith, Erin; Metz, Cade (27 January 2023). \"Anthropic Said to Be Closing In on $300 Million in New A.I. Funding\" (https://www.nytimes.com/2023/01/27/technology/anthropic-ai-fu nding.html). *The New York Times*. Archived (https://web.archive.org/web/20231209074235/h ttps://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html) from the original on 9 December 2023. Retrieved 14 March 2023.\n- 171. Lanxon, Nate; Bass, Dina; Davalos, Jackie (10 March 2023). \"A Cheat Sheet to AI Buzzwords and Their Meanings\" (https://news.bloomberglaw.com/tech-and-telecom-law/a-c heat-sheet-to-ai-buzzwords-and-their-meanings-quicktake). *Bloomberg News*. Archived (http s://web.archive.org/web/20231117140835/https://news.bloomberglaw.com/tech-and-telecom -law/a-cheat-sheet-to-ai-buzzwords-and-their-meanings-quicktake) from the original on 17 November 2023. 
Retrieved 14 March 2023.", - "page_start": 38, - "page_end": 38, - "source_file": "wikipedia3.pdf" - }, - { - "text": "### References\n\n- [1] \"Chatbot Arena LLM Leaderboard: Community-driven evaluation for best LLM and AI chatbots,\" https:// huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard, accessed: 2024-11-14.\n- [2] \"Hello gpt-4o,\" https://openai.com/index/hello-gpt-4o/, published: 2024-05-23.\n- [3] \"Introducing Llama 3.1: Our most capable models to date,\" https://ai.meta.com/blog/meta-llama-3-1/, published: 2024-07-23.\n- [4] \"Introducing Meta Llama 3: The most capable openly available LLM to date,\" https://ai.meta.com/blog/ meta-llama-3/, published: 2024-04-18.\n- [5] \"Martian LLM router,\" https://withmartian.com/.\n- [6] \"New embedding models and API updates,\" https://openai.com/index/new-embedding-models-and-api-updates, published: 2024-01-25.\n- [7] \"Notdiamond LLM router,\" https://www.notdiamond.ai/.\n- [8] \"OpenAI and others seek new path to smarter AI as current methods hit limitations,\" https://www.reuters.com/technology/artificial-intelligence/ openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11, published: 2024-11-15.\n- [9] \"OpenAI, Google and Anthropic are struggling to build more advanced AI,\" https://www.bloomberg.com/news/ articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai?sref=CrGXSfHu, published: 2024-11-13.\n- [10] \"OpenAI shifts strategy as rate of 'GPT' AI improvements slows,\" https://www.theinformation.com/articles/ openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows, published: 2024-11-9.\n- [11] \"Openrouter LLM router,\" https://openrouter.ai/.\n- [12] \"Unify LLM router,\" https://unify.ai/.\n- [13] \"What is a control plane?\" https://www.ibm.com/think/topics/control-plane, published: 2024-10-31.\n- [14] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. 
Anadkat *et al.*, \"GPT-4 technical report,\" *arXiv preprint arXiv:2303.08774*, 2023.\n- [15] P. Aggarwal, A. Madaan, A. Anand, S. P. Potharaju, S. Mishra, P. Zhou, A. Gupta, D. Rajagopal, K. Kappaganthu, Y. Yang *et al.*, \"Automix: Automatically mixing language models,\" *arXiv preprint arXiv:2310.12963*, 2023.\n- [16] G. Alon and M. Kamfonas, \"Detecting language model attacks with perplexity,\" *arXiv preprint arXiv:2308.14132*, 2023.\n- [17] R. A. Bradley and M. E. Terry, \"Rank analysis of incomplete block designs: I. the method of paired comparisons,\" *Biometrika*, vol. 39, no. 3/4, 1952.\n- [18] N. Carlini, D. Paleka, K. D. Dvijotham, T. Steinke, J. Hayase, A. F. Cooper, K. Lee, M. Jagielski, M. Nasr, A. Conmy *et al.*, \"Stealing part of a production language model,\" *arXiv preprint arXiv:2403.06634*, 2024.\n- [19] H. Chaudhari, G. Severi, J. Abascal, M. Jagielski, C. A. Choquette-Choo, M. Nasr, C. Nita-Rotaru, and A. Oprea, \"Phantom: General trigger attacks on retrieval augmented language generation,\" *arXiv preprint arXiv:2405.20485*, 2024.\n- [20] L. Chen, M. Zaharia, and J. Zou, \"FrugalGPT: How to use large language models while reducing cost and improving performance,\" *arXiv preprint arXiv:2305.05176*, 2023.\n- [21] W.-L. Chiang, L. Zheng, Y. Sheng, A. N. Angelopoulos, T. Li, D. Li, B. Zhu, H. Zhang, M. Jordan, J. E. Gonzalez, and I. Stoica, \"Chatbot arena: An open platform for evaluating LLMs by human preference,\" in *Forty-first International Conference on Machine Learning (ICML)*, 2024.\n- [22] S. Cho, S. Jeong, J. Seo, T. Hwang, and J. C. Park, \"Typos that broke the RAG's back: Genetic attack on RAG pipeline by simulating documents in the wild via low-level perturbations,\" *arXiv preprint arXiv:2404.13948*, 2024.\n- [23] J. Chu, Y. Liu, Z. Yang, X. Shen, M. Backes, and Y. Zhang, \"Comprehensive assessment of jailbreak attacks against LLMs,\" *arXiv preprint arXiv:2402.05668*, 2024.\n- [24] K. Cobbe, V. Kosaraju, M. Bavarian, M. 
Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano *et al.*, \"Training verifiers to solve math word problems,\" *arXiv preprint arXiv:2110.14168*, 2021.\n- [25] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, \"Adversarial classification,\" in *Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining*, 2004.", - "page_start": 18, - "page_end": 18, - "source_file": "arxiv1.pdf" - } - ] - }, - { - "references": { - "source_file": "news1.pdf", - "query": "What is the United States SCSP ?", - "target_page": 1, - "target_passage": "he Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "The system migration facility is required if you plan to migrate application group index data from the database to the archive. You initialize the system migration facility by completing the following steps:\n\n- 1. Move to the Content Manager OnDemand executable directory by running the following command:\n/opt/IBM/ondemand/V9.5/bin\n\n- 2. 
Run the **ARSSYSCR** program for this instance and use the **-I** parameter:\narssyscr - I *ARC95037* -m\n\nAgain, - I ARC95037 is the new Content Manager OnDemand instance.\n\nThe **ARSSYSCR** program creates the application groups, applications, and folders that are required by the system logging, system load, and system migration facilities.\n\n# **2.5.3 Starting and verifying the new instance**\n\nNow that the new instance is set up, you can start it and verify that it is installed correctly.\n\n#### **Starting the new instance**\n\nWhen everything is set up, you can start the new instance by customizing the sample procedure in the SARSINST library to conform to your environment.\n\nFigure 2-11 shows an example of starting the new instance.\n\n```\n//ARS95037 PROC PARML= \n//* \n//* Library: USER.PRIVATE.PROCLIB(ARS95037) \n//* \n//ARS95037 EXEC PGM=ARSSOCKD,REGION=0M,TIME=NOLIMIT, \n// PARM=('/VERBOSE ARC95037') \n//STEPLIB DD DISP=SHR,DSN=ARS.ARSV950.SARSLOAD \n// DD DISP=SHR,DSN=DSN.DB2V910.SDSNEXIT \n// DD DISP=SHR,DSN=DSN.DB2V910.SDSNLOAD \n// DD DISP=SHR,DSN=DSN.DB2V910.SDSNLOD2 \n//ARSBIN DD PATH='/usr/lpp/ars/V9R5M0/bin' \n//DSNAOINI DD PATH='/etc/ars/cli937.ini' \n//SYSPRINT DD SYSOUT=* \n//SYSOUT DD SYSOUT=*\n```\nFigure 2-11 Sample Content Manager OnDemand procedure\n\nAfter this procedure is started, log on to the new instance by using the different port number and create users, application groups, applications, and storage sets with the normal procedures.\n\n#### **Running arsload to check the new instance and new file system**\n\nAfter all of the configuration work is complete and the application group, application, and folder are created, run **arsload** for installation verification. Figure 2-12 on page 44 shows the procedure that is used to load data to the new instance. 
If you see problems in loading the file (writing an object), check the user permissions.", - "page_start": 66, - "page_end": 66, - "source_file": "sg246915.pdf" - }, - { - "text": "# **2.5.2 Creating an instance on z/OS**\n\nIn this section, we explain how to create an instance on the z/OS system. To do so, complete the following steps:\n\n- 1. Copy the control files.\n- 2. Verify the ARS.INI file.\n- 3. Verify the ARS.CFG file.\n- 4. Modify the ARS.CACHE file.\n- 5. Verify the CLI.INI file.\n- 6. Modify the **ARSSOCKD** procedure.\n- 7. Modify the **ARSLOAD** procedure.\n\nYou can mount the Content Manager OnDemand installation directory at any mount point other than /usr/lpp/ars/V9R5M0. You can run at different service levels with this flexibility. For example, a symmetric multiprocessor (SMP) might be used to install into SERVICE/usr/lpp/ars/V9R5M0. SERVICE/usr/lpp/ars/V9R5M0 might be copied into /usr/lpp/ars/V9R5M0/maint for testing. When testing is complete, /usr/lpp/ars/V9R5M0/maint might be copied into /usr/lpp/ars/V9R5M0 for production.\n\n# **Copying the control files**\n\nTo copy the control files, complete the following steps:\n\n- 1. Create a directory (/etc/ars) for maintaining the updated configuration files.\n- 2. Create a symbolic link from the installed directory /usr/lpp/ars/V9R5M0/config to the /etc/ars directory, for example, ln -s /etc/ars /usr/lpp/ars/V9R5M0/config.\n- 3. Set the appropriate access mode of 755.\n\n#### *ARS.INI*\n\nThe ARS.INI file contains a section for each instance; each section begins with a header. It is created at installation time and, by default, it is configured with information for the archive instance. 
In this scenario, ARC95037 is the header line definition.\n\nFigure 2-7 shows the content of a sample ARS.INI file.\n\n```\n[@SRV@_ARC95037]\nHOST=MyHost \nPROTOCOL=2 \nPORT=1937 \nSRVR_INSTANCE=ARSDB937 \nSRVR_INSTANCE_OWNER=ARSUS937 \nSRVR_OD_CFG=/usr/lpp/ars/V9R5M0/config/ars937.cfg \nSRVR_SM_CFG=/usr/lpp/ars/V9R5M0/config/ars937.cache \nSSL_KEYRING_STASH=/usr/lpp/ars/V9R5M0/config/ars937.stash \nSRVR_FLAGS_SECURITY_EXIT=0 \nSRVR_FLAGS_FOLDER_APPLGRP_EXIT=0 \nSRVR_FLAGS_DOCUMENT_EXIT=0 \nSRVR_FLAGS_SQL_QUERY_EXIT=0 \nSRVR_FLAGS_FORCE_SECURITY=0\n```\nFigure 2-7 ARS.INI file sample", - "page_start": 61, - "page_end": 61, - "source_file": "sg246915.pdf" - }, - { - "text": "Figure 2-6 Single instance overview on z/OS\n\n# **2.5.1 Installation overview**\n\nThe path for the Content Manager OnDemand system is /usr/lpp/ars/V9R5M0 (on z/OS) and /opt/IBM/ondemand/V9.5 (on UNIX). From the ars directory, several directories contain the Content Manager OnDemand files and executable files, such as programs and procedures. The directories are created during the installation when you run the ARSMKDIR REXX routine from the installation library, ARS.V9R5M0.SARINST. The /usr/lpp/ars/V9R5M0 directory contains the subdirectories that are listed in Table 2-3.\n\n| Directory | Content |\n| --- | --- |\n| bin | All executable files, such as arsdb for creating the database |\n| config | All configuration datasets, such as ARS.INI |\n| locale | All subdirectories for globalization |\n| MidServer | All configuration files for Structured APIs |\n| samples | All sample files for updating |\n| www | All subdirectories for ODWEK |\n\nTable 2-3 Subdirectories of /usr/lpp/ars\n\n**Important:** All path parameters and commands are *case-sensitive*.\n\nSometimes when you choose a directory, such as /usr/lpp/ars/V9R5M0/bin, you see a different path when you run **pwd** because a symbolic link is set. A *symbolic link* is a file that contains the path name for another file or directory. 
Only the original path name is the real name. An *external link* is a type of symbolic link; it links to an object outside of the hierarchical file system (HFS). Typically, it contains the name of an IBM MVS™ dataset.", - "page_start": 60, - "page_end": 60, - "source_file": "sg246915.pdf" - }, - { - "text": "When the HBA on the host scans for devices that are attached to it, the HBA discovers all of the volumes that are mapped to its FC ports and their SCSI identifiers (SCSI LUN IDs).\n\nFor example, the first disk that is found is generally SCSI LUN 1. You can control the order in which the HBA discovers volumes by assigning the SCSI LUN ID as required. If you do not specify a SCSI LUN ID when mapping a volume to the host, the storage system automatically assigns the next available SCSI LUN ID, based on any mappings that exist with that host.\n\nExample 7-21 shows how to map volumes volume_B and volume_C to defined host Almaden by using **mkvdiskhostmap** command.\n\n*Example 7-21 The mkvdiskhostmap command*\n\n```\nIBM_Storwize:ITSO:superuser>mkvdiskhostmap -host Almaden volume_B\nVirtual Disk to Host map, id [0], successfully created\nIBM_Storwize:ITSO:superuser>mkvdiskhostmap -host Almaden volume_C \nVirtual Disk to Host map, id [1], successfully created\n```\nExample 7-22 shows the output of the **lshostvdiskmap** command, which shows that the volumes are mapped to the host.\n\n*Example 7-22 The lshostvdiskmap -delim command*\n\n```\nIBM_2145:ITSO_CLUSTER:superuser>lshostvdiskmap -delim : \nid:name:SCSI_id:vdisk_id:vdisk_name:vdisk_UID\n2:Almaden:0:26:volume_B:6005076801AF813F1000000000000020\n2:Almaden:1:27:volume_C:6005076801AF813F1000000000000021\n```\n**Assigning a specific LUN ID to a volume:** The optional **-scsi scsi_lun_id** parameter can help assign a specific LUN ID to a volume that is to be associated with a host. 
The default (if nothing is specified) is to assign the next available ID based on current volume mapped to the host.\n\nCertain HBA device drivers stop when they find a gap in the sequence of SCSI LUN IDs, as shown in the following examples:\n\n- -Volume 1 is mapped to Host 1 with SCSI LUN ID 1.\n- -Volume 2 is mapped to Host 1 with SCSI LUN ID 2.\n- -Volume 3 is mapped to Host 1 with SCSI LUN ID 4.\n\nWhen the device driver scans the HBA, it might stop after discovering volumes 1 and 2 because no SCSI LUN is mapped with ID 3.\n\n**Important:** Ensure that the SCSI LUN ID allocation is contiguous.\n\nIf you are using host clusters, use the **mkvolumehostclustermap** command to map a volume to a host cluster instead (see Example 7-23).\n\n*Example 7-23 The mkvolumehostclustermap command*\n\n```\nBM_Storwize:ITSO:superuser>mkvolumehostclustermap -hostcluster vmware_cluster \nUNCOMPRESSED_VOL\nVolume to Host Cluster map, id [0], successfully created\n```", - "page_start": 327, - "page_end": 327, - "source_file": "sg247938.pdf" - }, - { - "text": "# **Other resources**\n\nThe following publications are also relevant as further information sources:\n\n- -*IBM System Storage Master Console: Installation and User's Guide*, GC30-4090\n- - *IBM System Storage Open Software Family SAN Volume Controller: CIM Agent Developers Reference*, SC26-7545\n- - *IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide*, SC26-7544\n- - *IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide*, SC26-7543\n- - *IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide*, SC26-7563\n- - *IBM System Storage Open Software Family SAN Volume Controller: Installation Guide*, SC26-7541\n- - *IBM System Storage Open Software Family SAN Volume Controller: Planning Guide*, GA22-1052\n- - *IBM System Storage Open Software Family SAN Volume Controller: Service Guide*, SC26-7542\n- - *IBM System Storage SAN 
Volume Controller - Software Installation and Configuration Guide,* SC23-6628\n- - *IBM System Storage SAN Volume Controller V6.2.0 - Software Installation and Configuration Guide,* GC27-2286\n- - *IBM System Storage SAN Volume Controller 6.2.0 Configuration Limits and Restrictions*, S1003799\n- -*IBM TotalStorage Multipath Subsystem Device Driver User's Guide*, SC30-4096\n- -*IBM XIV and SVC Best Practices Implementation Guide*\n\nhttp://ibm.co/1bk64gW\n\n- - *Considerations and Comparisons between IBM SDD for Linux and DM-MPIO* http://ibm.co/1CD1gxG\n# **Referenced websites**\n\nThese websites are also relevant as further information sources:\n\n- - IBM Storage home page http://www.ibm.com/systems/storage\n- - SAN Volume Controller supported platform http://ibm.co/1FNjddm\n- - SAN Volume Controller IBM Knowledge Center http://www.ibm.com/support/knowledgecenter/STPVGU/welcome\n- - Cygwin Linux-like environment for Windows http://www.cygwin.com", - "page_start": 811, - "page_end": 811, - "source_file": "sg247938.pdf" - }, - { - "text": "- 3. Create the instance by using the Create Instance for Content Manager OnDemand (**CRTINSTOND**) command.\nAt a minimum, you must specify the name of the instance (which then uses system values and defaults for the additional parameters, such as *DFT for the **PORT** parameter, which uses port 1445). You can specify additional parameters to customize the instance to meet your requirements.\n\nFor example, you can specify a three-character language identifier (by using the **LANGID** parameter), which must match one of the language identifiers that are listed in Chapter 13, \"Defining a locale\", in the IBM Content Manager OnDemand i - Planning and Installation Guide, SC19-2790. 
If you specify the **LOCALE** parameter, the one that you specify must be included in the list of valid locales that are listed in Chapter 13, \"Defining a locale\", in the IBM Content Manager OnDemand i - Planning and Installation Guide, SC19-2790.\n\nIf the instance is in a user auxiliary storage pool (ASP), the user ASP number (2 - 32) must be specified for the **ASP** parameter and *ASP must be specified for the **ASPDEV** parameter. If the instance is in an independent auxiliary storage pool (IASP), *ASPDEV must be specified for the **ASP** parameter and the IASP name (such as IASP2) must be specified for the **ASPDEV** parameter.\n\nFor example, the Create instance for Content Manager OnDemand command **CRTINSTOND INSTANCE(ONDTEST) LANGID(ENU) LOCALE('/QSYS.LIB/EN_US.LOCALE')** creates an instance that is called ONDTEST with a server language of US English that uses TCP/IP port 1445.\n\nThe **CRTINSTOND** command performs the following actions:\n\n- Creates the /CONFIG directory under /QIBM/UserData/OnDemand and the default and model files under /QIBM/UserData/OnDemand (if they do not exist).\n- Appends the model ARS.INI file (in /QIBM/ProdData/OnDemand/config) to the actual ARS.INI file (in /QIBM/UserData/OnDemand/config) and uses the name of the instance wherever it finds [instance] in the model file.\n- Creates the instance directory /QIBM/UserData/OnDemand/*[instance]*. If the instance is in an Independent ASP, the instance directory path is preceded by the Independent ASP name. For example, if the Independent ASP name is IASP, the instance directory is created in /IASP/QIBM/UserData/OnDemand.\n- Creates the ARS.CFG, ARS.CACHE, and ARS.DBFS files in /QIBM/UserData/OnDemand/*[instance]* and uses the name of the instance wherever it finds [instance] and the language identifier wherever it finds [language] in the model file. (The model files for these three files are in /QIBM/ProdData/OnDemand/config.) 
If the instance is in an Independent ASP, the instance directory path is preceded by the Independent ASP name. For example, if the Independent ASP name is IASP, the ARS.CFG, ARS.CACHE, and ARS.DBFS files are created in /IASP/QIBM/UserData/OnDemand/*[instance]*.\n- Creates the library and database tables for the instance. If the instance is in an IASP, you must set the ASP Group before you can work with files in that library. Run the Set ASP Group (**SETASPGRP**) command to set the ASP Group.\n- Creates the directories that are needed for the instance as specified in the ARS.CFG and ARS.CACHE files.\n- Creates a user profile with the same name as the instance, and adds that user to the instance as a Content Manager OnDemand system administrator.\n- Adds the user QONDADM to the instance as a Content Manager OnDemand system administrator.", - "page_start": 54, - "page_end": 54, - "source_file": "sg246915.pdf" - }, - { - "text": "```\n//ARSLOAD PROC\n//ARSLOAD EXEC PGM=ARSLOAD,REGION=0M,TIME=NOLIMIT,\n// PARM=('/-h ARC95037 -C Q')\n//STEPLIB DD DISP=SHR,DSN=ARSV950.AE.SARSLOAD\n// DD DISP=SHR,DSN=SYS1.DB1K.SDSNEXIT \n// DD DISP=SHR,DSN=SYS1.DB1K.SDSNLOAD \n// DD DISP=SHR,DSN=SYS1.DB1K.SDSNLOD2\n// DD DISP=SHR,DSN=ACIF.V4R3M0.SAPKMOD1\n//**********************************************\n//SYSPRINT DD SYSOUT=*,RECFM=FBA,LRECL=121,BLKSIZE=6050\n//SYSOUT DD SYSOUT=*\n//***********************************************\n//* The following 2 DD statements should be uncommented and\n//* customized if the PDF indexer is used.\n//***********************************************\n//*ADOBERES DD DSN=ADOBE.PDFLIB.RESOURCE.INDEX(ADOBERES),DISP=SHR\n//*ADOBEFNT DD DSN=ADOBE.PDF405.PLUSP1C.ADOBEFNT.LST,DISP=SHR\n```\nFigure 2-12 ARSLOAD for new instance", - "page_start": 67, - "page_end": 67, - "source_file": "sg246915.pdf" - }, - { - "text": "### **PART I**\n\n### **ITEM 1. 
BUSINESS**\n\n### **Company Overview**\n\nWe are a leading provider of services in the domestic non-hazardous solid waste industry. We provide non-hazardous solid waste collection services for commercial, industrial, municipal and residential customers through 140 collection companies in 22 states. We also own or operate 96 transfer stations, 58 solid waste landÑlls and 35 recycling facilities.\n\nAs of December 31, 2004, our operations were organized into Ñve regions whose boundaries may change from time to time: Eastern, Central, Southern, Southwestern and Western. Each region is organized into several operating areas and each area contains a group of operating locations. Each of our regions and substantially all our areas provide collection, transfer, recycling and disposal services. We believe that this organizational structure facilitates the integration of our operations within each region, which is a critical component of our operating strategy. See Note 10 of the Notes to Consolidated Financial Statements for further discussion of operating segments.\n\nWe had revenue of $2,708.1 million and $2,517.8 million and operating income of $452.3 million and $412.7 million for the years ended December 31, 2004 and 2003, respectively. The $190.3 million, or 7.6%, increase in revenue from 2003 to 2004 is primarily attributable to the successful execution of our operating and growth strategies described below. The $39.6 million, or 9.6%, increase in operating income from 2003 to 2004 is partially due to higher self-insurance expense during 2003 related to existing claims and was attributable to the expansion of our operations and various changes in estimates as a result of continued negative trends through the 2003 policy year. 
The remaining increase in operating income is due to the successful execution of our operating and growth strategies described below.\n\nOur presence in high growth markets throughout the Sunbelt, including California, Florida, Georgia, Nevada, North Carolina, South Carolina and Texas, and in other domestic markets that have experienced higher than average population growth during the past several years, supports our internal growth strategy. We believe that our presence in these markets positions our company to experience growth at rates that are generally higher than the industry's overall growth rate.\n\nWe continue to focus on enhancing stockholder value by implementing our Ñnancial, operating and growth strategies as described below.\n\n### **Industry Overview**\n\nBased on analysts' reports and industry trade publications, we believe that the United States nonhazardous solid waste services industry generates annual revenue of approximately $44.0 billion, of which approximately 50% is generated by publicly-owned waste companies, 21% is generated by privately-held waste companies, and 29% is generated by municipal and other local governmental authorities. Three companies generate the substantial majority of the publicly-owned companies' total revenue. However, according to industry data, the domestic non-hazardous waste industry remains highly fragmented as privately-held companies and municipal and other local governmental authorities generate approximately 50% of total industry revenue. In general, growth in the solid waste industry is linked to growth in the overall economy, including the level of new household and business formation.\n\nThe solid waste industry experienced a period of rapid consolidation in the late 1990's. During that time we were able to grow signiÑcantly through acquisitions. However, acquisitions in the industry have slowed considerably since late 1999. 
Despite this, we believe that the opportunity to grow through acquisitions still exists, albeit at a slower pace than experienced in previous years, as a result of the following factors:\n\n*Subtitle D Regulation.* Subtitle D of the Resource Conservation and Recovery Act of 1976, as currently in effect, and similar state regulations have significantly increased the amount of capital, technical expertise, operating costs and financial assurance obligations required to own and operate a", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "# **Stash file information for z/OS**\n\nTo use **arsstash** with Content Manager OnDemand for z/OS, the Integrated Cryptographic Service Facility (ICSF) must be available on the z/OS system to provide AES-128 encryption. The encryption can be performed in either software or hardware.\n\nIn the examples, a started task name of CSF is used. If CSF is not running, when you try to create a stash file, you get the following message, which does not identify the problem:\n\nVerify OnDemand Password: ARS1602E The stash file >/u/myuser/prodstash.stash< is invalid. /usr/lpp/ars/V9R5M0/bin: >\n\nTo verify that CSF is up and running so that Content Manager OnDemand V9.5 can use it, use the **MODIFY** command against **ARSSOCKD**.\n\nOn a system where ICSF is up and running, run the following command:\n\nF ARSSOCKD,D,ICSF ARS0438I 15.21.18 DISPLAY ICSF CSFIQF RC=00, RSN=00000000, AES=3, FMID=HCR7780\n\nOn a system where CSF is not running, run the following command:\n\nF ARSSOCKD,D,ICSF ARS0438I 15.28.36 DISPLAY ICSF CSFIQF RC=12, RSN=00000000, AES=0, FMID=N/A\n\n# **6.6 Data encryption**\n\nEncrypting data is a way of providing security and protection to your Content Manager OnDemand data.\n\n# **6.6.1 Encrypting data at rest**\n\nDepending on how the database tables and archived data are stored, you can encrypt the data by using either DB2 encryption or device encryption. 
The advantage of encrypting the data is to make it \"unintelligible\" to unauthorized access even if it is accessed (as an extreme example, the storage device is stolen). The cost of encrypting the data is increased processor consumption and slower response time. This cost varies based on the device and encryption methods that are used.\n\nBackup data must always be encrypted because it is more susceptible to unauthorized access.\n\n# **6.6.2 Encrypting data in motion: Secure communications**\n\nTransport Layer Security (TLS) and Secure Sockets Layer (SSL) allow secure communication between the Content Manager OnDemand server and the Content Manager OnDemand clients. Since Content Manager OnDemand version 8.5, support for SSL and its successor, TLS, is enabled for all transmissions between the Content Manager OnDemand servers and clients. When this section mentions SSL, the same information applies to TLS, unless otherwise noted.\n\nSSL is the standard technology for creating secure connections between servers and clients. The secure connection allows authentication and verification, and data encryption.", - "page_start": 172, - "page_end": 172, - "source_file": "sg246915.pdf" - }, - { - "text": "During 2001, we also implemented a customer relationship management system. This system improves the productivity of our sales force by helping to establish marketing priorities and track sales leads. It also tracks renewal periods for potential commercial, industrial and franchise contracts. During 2005, we will continue to ensure our sales force is properly trained on this system and is using it as intended.\n\n- ' *Improve the productivity of our operations.* We use a grid productivity program that enables us to benchmark the performance of our drivers. In addition, in our larger markets, we use a route optimization program to minimize drive times and improve operational density. During 2005, we will continue to update our disposal optimization metrics. 
These metrics identify which local disposal option maximizes our return on invested capital and cash flow.\n- ' *Improve fleet management and procurement.* In February 2002, we selected Dossier as our fleet management and parts procurement system. During 2003, we implemented Dossier at all of our significant hauling and landfill operations. Among other features, this system tracks parts inventories, generates automatic quantity order points and logs all maintenance work. It allows us to capture and review information to ensure our preventive maintenance programs comply with manufacturers' warranties and governmental regulations. In addition, the purchase order module within this system allows us to cross-reference purchasing information with our inventory. During 2005, we intend to further utilize this purchase order module to take advantage of volume discounts.\n- ' *Enhance operational and financial reporting systems.* We have several initiatives aimed at improving our operational and financial reporting systems. The overall goal of these initiatives is to provide us with detailed information, prepared in a consistent manner, that will allow us to quickly analyze and act upon trends in our business.\n\nOne of our most significant systems is our enterprise-wide general ledger package. We successfully converted all of our locations to Lawson general ledger software in 2002 and in 2003 successfully converted all of our locations to Lawson fixed asset software.\n\nAll of the system initiatives mentioned above will provide us with more consistent and detailed information, thus allowing us to make quicker and more informed business decisions. In addition, during 2001, all of our significant software applications were standardized and centralized at our data center in Fort Lauderdale, Florida. This standardization and centralization provides us with consolidated information concerning our operations across a variety of operational and financial disciplines. 
It also significantly enhances our ability to execute our disaster recovery plan, if necessary.\n\n- ' *Expand our safety training programs.* As part of our ongoing emphasis on safe work practices and in light of increasing insurance costs, we expanded our safety training programs in 2002. During 2004, we distributed to all of our locations a comprehensive training and safety manual. Safety will continue to be a key area of focus during 2005.\n- ' *Develop and implement performance strategies.* Develop and implement strategies to improve the performance of locations and lines of business that are performing below the company's average.", - "page_start": 33, - "page_end": 33, - "source_file": "NYSE_RSG_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "news1.pdf", - "query": "What are some example of uses AI by the US departement of energy ?", - "target_page": 1, - "target_passage": "The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Home / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n#### ARTS AND ENTERTAINMENT\n\n# New Artificial Intelligence Summit Series Begins With Energy\n\n### 07/31/2024\n\n (AI) continues to transform the United States and the world. To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. 
The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent \"Action Plan for U.S. Leadership in Next-Generation Energy,\" raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. The stakes are high; if the U.S. 
falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\n#### Article Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n#### RELATED ARTICLES\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage Mar 06, 2024\n\n| CATEGORIES |\n| --- |\n| FASHION |\n| BUSINESS |\n| INFOGRAPHIC |\n| ENVIRONMENT |\n| HEALTH |\n| MONEY |\n| FOOD |\n| TRAVEL |\n| BRIDAL |\n| RECREATION |\n| TECHNOLOGY |\n| HOME |\n| EDUCATION |\n| ARTS & ENTERTAINMENT |\n| AUTO |\n| CHILDREN |\n| FITNESS |\n| HOLIDAY |\n| INSURANCE |\n| LAWN & GARDEN |\n| LISTICLE |\n| NUTRITION |\n| PARENTING |\n| PETS |\n| SEASONAL |\n\nMar 06, 2024\n\nCelebrate St. Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\n#### Mar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\n#### Mar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nSPANISH\n\nSENIORS\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK_REVIEW\n\nRECIPE\n\nAFRICAN_AMERICANS\n\nHOW_TO\n\nBYLINED_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\n## RECENT POSTS\n\n| 01 | School Choice Combines Nature And |\n| --- | --- |\n| | Nuture for Success |\n| 02 | 
Think Outside the (Gift) Box, Contribute to a 529 Plan |\n| 03 | Black Friday Bonanza—Don't Miss These Hot Gifts |\n| | Self-Publishing Helps Parents Share New |\n| 04 | Books with Kids |\n| 05 | Five Tips to Safely Manage Medications |\n| 06 | Self-care on Your Schedule with Mental |\n| | Wellness App |\n\n#### MOST POPULAR\n\nEntrepreneur Inspires Youth with Community Projects 08 Jul 21\n\nWho Celebrates National School Choice Week? 22 Jan 18\n\nNo Arms, No Legs, No Worries 13 Dec 18\n\nScent-imental: Holiday Smells Evoke Happy Memories 30 Oct 18\n\nTechnology Breakthroughs Drive Clean Energy Success 01 Oct 18\n\nSafety App Empowers Students, Offers Peace of Mind\n\n| TAGS | |\n| --- | --- |\n| Fashion | Business Infographic |\n| Environment | Health Money |\n| Food Travel | Bridal Recreation |\n| Technology | Home Education |\n| Arts & Entertainment | Auto Children |\n| Fitness | Holiday Insurance |\n| Lawn & Garden | Listicle Nutrition |\n| Parenting | Pets Seasonal Seniors |\n| Spanish | Tips and How To |\n| Entertainment | Career Community |\n| Family Tips | Internet |\n| Human_Interest | Beauty Arts |\n| RealEstate | Safety Medicine |\n| Book_Review | Recipe |\n| African_Americans | How_To |\n| Bylined_Column | Charity Sports |\n| Home_Improvement | Tech Wellness |\n| Arts and Entertainment | Food & Drink |\n| Real_Estate | Veterans Outdoors |\n| Real Estate | Human Interest |\n| Money & Finance | Fashion & Beauty |\n| Money and Finance | |\n| Books & Entertainment | Books |\n| Arts & Entertainment | |\n\nContact Us Work From Home Privacy Policy Terms of Use", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "Franzen) sued AI companies for using their work to train generative AI.[195][196] Another discussed approach is to envision a separate *sui generis* system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[197]\n\n#### **Dominance by tech giants**\n\nThe 
commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. [198][199][200] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.[201][202]\n\n#### **Power needs and environmental impacts**\n\nIn January 2024, the International Energy Agency (IEA) released *Electricity 2024, Analysis and Forecast to 2026*, forecasting electric power use.[203] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation.[204]\n\nProdigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. 
AI makes the power grid more efficient and \"intelligent\", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms.[205]\n\nA 2024 Goldman Sachs Research Paper, *AI Data Centers and the Coming US Power Demand Surge*, found \"US power demand (is) likely to experience growth not seen in a generation....\" and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[206] Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[207]\n\nIn 2024, the *Wall Street Journal* reported that big AI companies have begun negotiations with the US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 Million (US).[208] Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers.[209]\n\nIn September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. 
If approved (this will be the first ever US re-commissioning of a nuclear plant), over 835 megawatts of power – enough for 800,000 homes – of", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Artificial intelligent (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.[175][176][177]\n\nVincent van Gogh in watercolour created by generative AI software\n\n#### **Other industry-specific tasks**\n\nThere are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated \"AI\" in some offerings or processes.[178] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.\n\nAI applications for evacuation and disaster management are growing. 
AI has been used to investigate if and how people evacuated in large scale and small scale evacuations using historical data from GPS, videos or social media. Further, AI can provide real time information on the real time evacuation conditions.[179][180][181]\n\nIn agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.\n\nArtificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for \"classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights.\" For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia3.pdf" - }, - { - "text": "# **Artificial intelligence**\n\n**Artificial intelligence** (**AI**), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. 
It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.[1] Such machines may be called AIs.\n\nHigh-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\"[2][3]\n\nVarious subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. [a] General intelligence—the ability to complete any task performed by a human on an at least equal level—is among the field's long-term goals.[4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. [b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields.[5]\n\nArtificial intelligence was founded as an academic discipline in 1956,[6] and the field went through multiple cycles of optimism throughout its history, [7][8] followed by periods of disappointment and loss of funding, known as AI winters. 
[9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques.[11] This growth accelerated further after 2017 with the transformer architecture, [12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.\n\n## **Goals**", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[300]\n\nThe UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.[301]\n\n#### **Regulation**\n\nThe regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[302] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. 
[303] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[304][305] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[306] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and\n\nThe first global AI Safety Summit was held in 2023 with a declaration calling for international cooperation.\n\nVietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[306] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. [306] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[307] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[308] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, governments officials and academics.[309] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the \"Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law\". 
It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[310]\n\nIn a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that \"products and services using AI have more benefits than drawbacks\".[304] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. [311] In a 2023 Fox News poll, 35% of Americans thought it \"very important\", and an additional 41% thought it \"somewhat important\", for the federal government to regulate AI, versus 13% responding \"not very important\" and 8% responding \"not at all important\".[312][313]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI,[367] with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did \"not actually use AI in a material way\".[368]\n\n### **Evaluating approaches to AI**\n\nNo established unifying theory or paradigm has guided AI research for most of its history. [aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term \"artificial intelligence\" to mean \"machine learning with neural networks\"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.\n\n#### **Symbolic AI and its limits**\n\nSymbolic AI (or \"GOFAI\")[370] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at \"intelligent\" tasks such as algebra or IQ tests. 
In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: \"A physical symbol system has the necessary and sufficient means of general intelligent action.\"[371]\n\nHowever, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level \"intelligent\" tasks were easy for AI, but low level \"instinctive\" tasks were extremely difficult.[372] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a \"feel\" for the situation, rather than explicit symbolic knowledge.[373] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.[ab][16]\n\nThe issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence,[375][376] in part because subsymbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.\n\n#### **Neat vs. scruffy**\n\n\"Neats\" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). \"Scruffies\" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s,[377] but eventually was seen as irrelevant. Modern AI has elements of both.\n\n#### **Soft vs. 
hard computing**", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia3.pdf" - }, - { - "text": "#### **Existential risk**\n\nIt has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, \"spell the end of the human race\".[265] This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like \"self-awareness\" (or \"sentience\" or \"consciousness\") and becomes a malevolent character. [q] These sci-fi scenarios are misleading in several ways.\n\nFirst, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives *almost any* goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager).[267] Stuart Russell gives the example of household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that \"you can't fetch the coffee if you're dead.\"[268] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is \"fundamentally on our side\".[269]\n\nSecond, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. 
The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.[270]\n\nThe opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI.[271] Personalities such as Stephen Hawking, Bill Gates, and Elon Musk, [272] as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI.\n\nIn May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to \"freely speak out about the risks of AI\" without \"considering how this impacts Google.\"[273] He notably mentioned risks of an AI takeover, [274] and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.[275]\n\nIn 2023, many leading AI experts endorsed the joint statement that \"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war\".[276]\n\nSome other researchers were more optimistic. 
AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making \"human lives longer and healthier and easier.\"[277] While the tools that are now being used to improve lives can also be used by bad actors, \"they can also be used against the bad actors.\"[278][279] Andrew Ng also argued that \"it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests.\"[280] Yann LeCun \"scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction.\"[281] In the early 2010s, experts argued that the risks are too distant in", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia3.pdf" - }, - { - "text": "A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. [248] Even when used in conventional warfare, it is unlikely that they will be unable to reliably choose targets and could potentially kill an innocent person. [248] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed.[249] By 2015, over fifty countries were reported to be researching battlefield robots.[250]\n\nAI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. 
Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. [251] All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.[252][253]\n\nThere many other ways that AI is expected to help bad actors, some of which can not be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[254]\n\n#### **Technological unemployment**\n\nEconomists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[255]\n\nIn the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that \"we're in uncharted territory\" with AI.[256] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in longterm unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. [257] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at \"high risk\" of potential automation, while an OECD report classified only 9% of U.S. 
jobs as \"high risk\".[p][259] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[255] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[260][261]\n\nUnlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; *The Economist* stated in 2015 that \"the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution\" is \"worth taking seriously\".[262] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. [263]\n\nFrom the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia3.pdf" - }, - { - "text": "#### ISSUE\n\nDecember 2024\n\n#### CATEGORIES\n\nTechnology & Cybersecurity Editor's Picks Finance - Personal Home - Interior\n\n# **The top AI-powered tech trends in 2025**\n\n(NC) As we look ahead to 2025, artificial intelligence (AI) continues to revolutionize our lives. From enhancing our daily routines to transforming entire industries, AI's impact is undeniable.\n\nThese five innovations are set to shape our future, offering unprecedented convenience, efficiency and personalization.\n\n### AI-powered computing\n\nAI-powered computing, such as Intel-powered laptops – or AI PC – is at the forefront of technological advancement. But what, exactly, is an AI PC? 
They're computers that have AI built into their processors – also known as the brain of the computer – which optimizes performance, enhances security and provides a more personalized experience as they learn from your usage patterns. For consumers, this means faster, smarter and more secure computing tailored to your individual needs.\n\n### Smart home automation\n\nSmart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.\n\n## Health and wellness\n\nThe health-care industry is seeing significant transformation. AI-driven health and wellness applications can monitor vital signs, predict potential health issues, and even provide personalized fitness and\n\nnutrition plans. Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being.\n\n# Financial services\n\nAI is also making waves in the financial sector, offering smarter and more secure ways to manage money. From AI-driven investment platforms that provide personalized financial advice to fraud detection systems that protect against cyber threats, AI can analyze vast amounts of data to identify trends and make more informed financial decisions.\n\n# Enhanced education\n\nIn education, enhanced learning tools provide personalized learning experiences that adapt to each student's strengths and weaknesses. This technology can offer real-time feedback, helping students improve their skills more effectively. 
Additionally, AI can assist educators by automating administrative tasks and providing insights into student performance, allowing for more focused and effective teaching.\n\nLearn more at intel.com/aipc.\n\nwww.newscanada.com Word Count: 346\n\n#### M ed i a A tt a ch m e n ts −\n\n#### View", - "page_start": 0, - "page_end": 0, - "source_file": "news4.pdf" - }, - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind.[387]\n\n#### **AI welfare and rights**\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part to society on their own.[393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. 
They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[390][389]\n\n## **Future**\n\n### **Superintelligence and the singularity**\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\".[395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[396]\n\n### **Transhumanism**\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. 
[397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HNI_2003.pdf", - "query": "How can I contact Investor Relations of HON industries through email ?", - "target_page": 63, - "target_passage": "E-mail: investorrelations@honi.com", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "#### **FOR FURTHER INFORMATION, PLEASE CONTACT**\n\n#### **Investor Relations Nissan Motor Co., Ltd.**\n\nGlobal Communications, CSR and IR Division 17-1, Ginza 6-chome, Chuo-ku Tokyo 104-8023, Japan phone: +81(0)3-5565-2334 fax: +81(0)3-3546-2669 e-mail: nissan-ir@mail.nissan.co.jp\n\n#### **Corporate Information Website**\n\nhttp://www.nissan-global.com/\n\n#### **Investor Relations Website**\n\nhttp://www.nissan-global.com/EN/IR/", - "page_start": 111, - "page_end": 111, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "#### I NVESTOR I NFORMATION :\n\n#### A NNUAL M EETING\n\nThe annual meeting of shareholders will be held on\n\nThursday, April 24, 2003, in Corning, NY. A formal notice of the meeting together with a proxy statement will be mailed to shareholders on or about March 12, 2003. The proxy statement can also be accessed electronically through the Investor Relations category of the Corning home page on the Internet at www.corning.com. A summary report of the proceedings at the annual meeting will be available without charge upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831.\n\n#### A DDITIONAL I NFORMATION\n\nA copy of Corning's 2002 Annual Report on Form 10-K filed with the Securities and Exchange Commission is available upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831. 
The Annual Report on Form 10-K can also be accessed electronically through the Investor Relations category of the home page on the Internet at: www.corning.com\n\n#### I NVESTOR I NFORMATION\n\nInvestment analysts who need additional information may contact Mr. Kenneth C. Sofio, Manager of Investor Relations, Corning Incorporated, HQ-E2-25, Corning, NY 14831; Telephone 607.974.9000\n\n#### C OMMON S TOCK\n\nCorning Incorporated common stock is listed on the New York Stock Exchange and the SWX Swiss Exchange. In addition, it is traded on the Boston, Midwest, Pacific and Philadelphia stock exchanges. Common stock options are traded on the Chicago Board Options Exchange. The abbreviated ticker symbol for Corning Incorporated is \"GLW.\"\n\n#### TRANSFER AGENT AND REGISTRAR Computershare Investor Services LLC P.O. Box A-3504 Chicago, IL 60690-3504 Telephone: 800.255.0461 Website: www.computershare.com\n\nCHANGE OF ADDRESS Report change of address to Computershare Investor Services at the above address.\n\n#### I NDEPENDENT A CCOUNTANTS\n\nPricewaterhouseCoopers LLP 1301 Avenue of the Americas New York, NY 10019\n\n#### **Corning Incorporated**\n\nOne Riverfront Plaza Corning, NY 14831-0001 607 974 9000 www.corning.com\n\n02BR24601EN\n\n\"Safe Harbor\" Statement under the Private Securities Litigation Reform Act of 1995 The statements in this annual report that are not historical facts or information are forward-looking statements. These forward-looking statements involve risks and uncertainties that may cause the outcome to be materially different. 
Such risks and uncertainties include, but are not limited to:\n\n- global economic and political conditions,\n- currency fluctuations,\n- product demand and industry capacity,\n- competitive products and pricing,\n- sufficiency of manufacturing capacity and efficiencies,\n- cost reductions,\n- availability and costs of critical materials,\n- new product development and commercialization,\n- attracting and retaining key personnel,\n- order activity and demand from major customers,\n- fluctuations in capital spending by customers in the telecommunications industry and other business segments,\n- financial condition of customers,\n- changes in the mix of sales between premium and non-premium products,\n- facility expansions and new plant start-up costs,\n- adverse litigation or regulatory developments, including future or pending tax legislation,\n- adequacy and availability of insurance,\n- capital resource and cash flow activities,\n- capital spending,\n- equity company activities,\n- interest costs,\n- acquisition and divestiture activity,\n- the rate of technology change,\n- the ability to enforce patents,\n- product performance issues,\n- stock price fluctuations, and\n- other risks detailed in Corning's SEC filings.\n\nNeither this report nor any statement contained herein is furnished in connection with any offering of securities or for the purpose of promoting or influencing the sale of securities.\n\nCorning is an equal opportunity employer. Printed in USA\n\n© Corning Incorporated 2003", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "## A M E S S A G E F R O M T H E B O A R D O F D I R E C T O R S\n\n#### **Dear Shareholders:**\n\nWe, the members of the HON INDUSTRIES Board of Directors, believe that integrity is central to good corporate governance. This belief is reflected in the HON INDUSTRIES vision statement (shown on the back of this annual report), adopted many years ago. 
Our Vision statement represents much more than a traditional \"mission,\" and it goes much deeper than company policy. The beliefs and values represented in that document are the very foundation of our corporate culture, and guide the attitude and actions of every member, every day.\n\nFrom its beginnings, HON INDUSTRIES has sought to implement its vision through sound policies and practices, and by maintaining a strong Board composed predominantly of outside directors. We are fully committed to executing our responsibilities, and we will continue to maintain the company's long-standing tradition of an independent, well-informed, active, and engaged Board of Directors.\n\nOur board meetings and procedures have been developed and refined to encourage open and informed communication. The company's accounting policies have always been conservative and straightforward. The Board's three committees — Audit; Human Resources and Compensation; Public Policy and Corporate Governance — have consisted entirely of non-management directors for many years.\n\nDuring 2003, we have given significant attention to the newly released rules emanating from the Sarbanes-Oxley Act of 2002 and the New York Stock Exchange listing requirements — rules intended to improve corporate governance across the country. It is gratifying to report that HON INDUSTRIES governance practices were already in accord with the spirit of the rules.\n\nIt is an honor to serve as directors of HON INDUSTRIES. We are very proud to represent you, the shareholder, as we oversee the management of this great company. Please be assured that we intend to remain vigilant and focused on good corporate governance.\n\nSincerely, The HON INDUSTRIES Board of Directors\n\nStan A. Askren\n\nGary M. Christensen\n\nCheryl A. Francis\n\nRobert L. Katz\n\nDennis J. Martin\n\nJack D. Michaels\n\nJoseph Scalzo\n\nAbbie J. Smith\n\nRichard H. Stanley\n\nBrian E. Stern\n\nRonald V. 
Waters, III", - "page_start": 60, - "page_end": 60, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## **CORPORATE AND SHAREHOLDER INFORMATION**\n\n#### **CORPORATE OFFICES**\n\nRogers Communications Inc. 333 Bloor Street East, 10th Floor Toronto, ON M4W 1G9 416-935-7777\n\n#### **CUSTOMER SERVICE AND PRODUCT INFORMATION**\n\n888-764-3771 or rogers.com\n\n#### **SHAREHOLDER SERVICES**\n\nIf you are a registered shareholder and have inquiries regarding your account, wish to change your name or address, or have questions about lost stock certificates, share transfers, estate settlements or dividends, please contact our transfer agent and registrar:\n\n#### CST Trust Company\n\nP.O. Box 700, Postal Station B Montreal, QC H3B 3K3, Canada 416-682-3860 or 800-387-0825 inquiries@canstockta.com\n\n#### **Duplicate Mailings**\n\nIf you receive duplicate shareholder mailings from Rogers Communications, please contact CST Trust Company as detailed above to consolidate your accounts.\n\n#### **INVESTOR RELATIONS**\n\nInstitutional investors, securities analysts and others requiring additional financial information can visit rogers.com/investors or contact us at:\n\n#### 1-855-300-7922 or\n\n416-935-3551 *(outside North America)* or investor.relations@rci.rogers.com\n\nMedia inquiries: 416-935-7777\n\n#### **CORPORATE PHILANTHROPY**\n\nFor information relating to Rogers various philanthropic endeavours, refer to the \"About Rogers\" section of rogers.com\n\n#### **SUSTAINABILITY**\n\nRogers is committed to continuing to grow responsibly and we focus our social and environmental sustainability efforts where we can make the most meaningful impacts on both. 
To learn more, please visit rogers.com/csr\n\n#### **SCAN THIS TO LEARN MORE**\n\n**rogers.com/investors** Stay up-to-date with the latest Rogers investor information\n\n#### **STOCK EXCHANGE LISTINGS**\n\n**Toronto Stock Exchange (TSX): RCI.b** – Class B Non-Voting shares (CUSIP # 775109200) **RCI.a** – Class A Voting shares (CUSIP # 775109101)\n\n**New York Stock Exchange (NYSE):**\n\n**RCI** – Class B Non-Voting shares (CUSIP # 775109200)\n\n#### **Equity Index Inclusions:**\n\nDow Jones Canada Titans 60 Index Dow Jones Telecom Titans 30 Index FTSE Global Telecoms Index FTSE All-World Index Series FTSE4Good Global Index Jantzi Social Index S&P/TSX 60 Index S&P/TSX Composite Dividend Index S&P/TSX Composite Index S&P/TSX Telecom Services Index\n\n#### **DEBT SECURITIES**\n\nFor details of the public debt securities of the Rogers companies, please refer to the \"Debt Securities\" section under rogers.com/investors\n\n**INDEPENDENT AUDITORS** KPMG LLP\n\n#### **ON-LINE INFORMATION**\n\nRogers is committed to open and full financial disclosure and best practices in corporate governance. We invite you to visit the Investor Relations section of rogers.com/investors where you will find additional information about our business, including events and presentations, news releases, regulatory filings, governance practices, corporate social responsibility and our continuous disclosure materials, including quarterly financial releases, annual information forms and management information circulars. 
You may also subscribe to our news by e-mail or RSS feeds to automatically receive Rogers news releases electronically.\n\n#### **FOLLOW ROGERS THROUGH THESE SOCIAL MEDIA LINKS**\n\nTWITTER **twitter.com/rogersbuzz**\n\nGOOGLE + **google.com/+Rogers**\n\n#### **COMMON STOCK TRADING AND DIVIDEND INFORMATION**\n\n| Dividends | | | |\n| --- | --- | --- | --- |\n| Closing Price RCI.b on TSX | | | Declared |\n| 2013 | High Low | Close | per Share |\n| First Quarter $51.89 $44.37 $51.89 $0.435 | | | |\n| Second Quarter $52.35 $40.35 $41.20 $0.435 | | | |\n| Third Quarter $45.36 $40.35 $44.29 $0.435 | | | |\n| Fourth Quarter $48.59 $43.66 $48.07 $0.435 | | | |\n\n#### **Shares Outstanding at December 31, 2013** Class A 112,462,000 Class B 402,281,178\n\n#### **2014 Expected Dividend Dates**\n\n| Record Date*: | Payment Date*: |\n| --- | --- |\n| March 14, 2014 | April 4, 2014 |\n| June 13, 2014 | July 4, 2014 |\n| September 12, 2014 | October 3, 2014 |\n| December 11, 2014 | January 2, 2015 |\n| * Subject to Board approval | |\n\nUnless indicated otherwise, all dividends paid by Rogers Communications are designated as \"eligible\" dividends for the purposes of the Income Tax Act (Canada) and any similar provincial legislation.\n\n#### **DIRECT DEPOSIT SERVICE**\n\nShareholders may have dividends deposited directly into accounts held at financial institutions. To arrange direct deposit service, please contact CST Trust Company as detailed earlier on this page.\n\n#### **DIVIDEND REINVESTMENT PLAN (DRIP)**\n\nRogers offers a convenient dividend reinvestment program for eligible shareholders to purchase additional Rogers Communications shares by reinvesting their cash dividends without incurring brokerage fees or administration fees. For plan information and enrolment materials or to learn more about Rogers DRIP, please visit www. 
canstockta.com/en/InvestorServices/Dividend_ Reinvestment_Plans or contact CST Trust Company as detailed earlier on this page.\n\n#### **ELECTRONIC DELIVERY OF SHAREHOLDER MATERIALS**\n\nRegistered shareholders can receive electronic notice of financial reports and proxy materials and utilize the Internet to submit proxies on-line by registering at www.canstockta.com/ en/InvestorServices/Delivery_of_Investor_ Materials/Electronic_Consent. This approach gets information to shareholders more quickly than conventional mail and helps Rogers protect the environment and reduce printing and postage costs.\n\n#### **GLOSSARY OF TERMS**\n\nFor a comprehensive glossary of industry and technology terms, go to rogers.com/glossary\n\n#### CAUTION REGARDING FORWARD-LOOKING INFORMATION AND OTHER RISKS\n\nThis annual report includes forward-looking statements about the financial condition and prospects of Rogers Communications that involve significant risks and uncertainties that are detailed in the \"Risks and Uncertainties That Could Affect our Businesses\" and \"Caution Regarding Forward-Looking Statements, Risks and Assumptions\" sections of the MD&A contained herein, which should be read in conjunction with all sections of this annual report.\n\n© 2014 Rogers Communications Inc. Other registered trademarks that appear are the property of the respective owners.\n\nDesign: **Interbrand** Printed in Canada", - "page_start": 129, - "page_end": 129, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## I N V E S T O R I N F O R M A T I O N\n\n#### **S C H E D U L E O F Q U A R T E R L Y R E S U L T S**\n\nThe Company operates on a fiscal year ending on the Saturday nearest December 31. 
Quarterly results are typically announced within 25 days after the end of each quarter, and audited results are typically announced within 40 days after year-end.\n\n#### **F I S C A L 2 0 0 4 Q U A R T E R - E N D D A T E S**\n\n1st Quarter: Saturday, April 3 2nd Quarter: Saturday, July 3 3rd Quarter: Saturday, October 2 4th Quarter: Saturday, January 1\n\n#### **A N N U A L M E E T I N G**\n\nThe Company's annual shareholders' meeting will be held at 10:30 a.m. on May 4, 2004, at the Holiday Inn, Highways 61 & 38 North, Muscatine, Iowa. Shareholders and other interested investors are encouraged to attend the meeting.\n\n#### **I N V E S T O R R E L A T I O N S**\n\nSend inquiries to: Investor Relations HON INDUSTRIES Inc. 414 East Third Street Muscatine, IA 52761 Telephone: 563.264.7400 Fax: 563.264.7655 E-mail: investorrelations@honi.com\n\n#### **C O R P O R A T E H E A D Q U A R T E R S**\n\nHON INDUSTRIES Inc. 414 East Third Street P.O. Box 1109 Muscatine, IA 52761-0071 Telephone: 563.264.7400 Fax: 563.264.7217 Website: www.honi.com\n\n#### **I N D E P E N D E N T P U B L I C A C C O U N T A N T S**\n\nPricewaterhouseCoopers LLP One North Wacker Drive Chicago, IL 60606\n\n#### **C O M M O N S T O C K**\n\nHON INDUSTRIES common stock trades on the New York Stock Exchange under the symbol: HNI. Stock price quotations can be found in major daily newspapers and *The Wall Street Journal*.\n\n#### **T R A N S F E R A G E N T**\n\nShareholders may report a change of address or make inquiries by writing or calling:\n\nComputershare Investor Services, LLC 2 North LaSalle Street Chicago, IL 60602 Telephone: 312.588.4991\n\n#### **F O R W A R D - L O O K I N G S T A T E M E N T S**\n\nStatements in this report that are not strictly historical, including statements as to plans, objectives, and future financial performance, are \"forward-looking\" statements that are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. 
Forward-looking statements involve known and unknown risks, which may cause the Company's actual results in the future to differ materially from expected results. These risks include, among others:\n\n**•** competition within the office furniture and fireplace industries, including competition from imported products and competitive pricing;\n\n**•** increases in the cost of raw materials, including steel, which is the Company's largest raw material category;\n\n**•** increases in the cost of health care benefits provided by the Company;\n\n**•** reduced demand for the Company's storage products caused by changes in office technology; including the change from paper record storage to electronic record storage;\n\n**•** the effects of economic conditions, on demand for office furniture, customer insolvencies and related bad debts and claims against the Company that it received preferential payments;\n\n**•** changes in demand and order patterns from the Company's customers, particularly its top ten customers, which represented approximately 36% of net sales in 2003;\n\n**•** issues associated with acquisitions and integration of acquisitions;\n\n**•** the ability of the Company to realize cost savings and productivity improvements from its cost containment and business simplification initiatives;\n\n**•** the ability of the Company to realize financial benefits from investments in new products;\n\n**•** the ability of the Company's distributors and dealers to successfully market and sell the Company's products;\n\n- **•** the availability and cost of capital to finance planned growth; and\n- **•** other risks, uncertainties, and factors described from time to time in the Company's filings with the Securities and Exchange Commission.\n\nWe caution the reader that the above list of factors may not be exhaustive. 
The Company does not assume any obligation to update any forward-looking statement, whether as a result of new information, future events or otherwise.\n\n K", - "page_start": 62, - "page_end": 62, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "#### First Paragraph\n\nIntroduce yourself, and explain why you are writing the letter. If you are responding to a job advertisement, state which advertisement you are responding to, and indicate where you found it.\n\n#### For example:\n\n\"I would like to apply for the position of Graphic Designer, as advertised in the Career Times on 1 March 2015.\"\n\nIf possible, mention a mutual contact or acquaintance.\n\nFor example:\n\n\"Samantha Stevens mentioned that you are looking for an experienced Graphic Designer with a keen interest in the fashion industry.\"\n\n#### Second Paragraph\n\nMention your qualifications, skills and experience, and relate them to the needs of the company. Give relevant examples of how you have used your skills in the past to perform similar tasks and responsibilities to those set out in the job description.\n\n#### Third Paragraph\n\nExplain why you want to work for this organisation in particular. Where relevant, explain any gaps in your CV. If you don't have the required academic qualifications, for example, you can explain how your practical work experience makes up for it.\n\n#### Fourth paragraph\n\nMention any documents or attachments that you have included with your cover letter, and state your availability for an interview.\n\n#### Close\n\nThank the recipient for taking the time to read your letter, and sign off with a professional greeting, such as \"Yours sincerely\" or \"Kind regards\", followed by your full name, telephone number and e-mail address.", - "page_start": 46, - "page_end": 46, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### **SHAREHOLDER INFORMATION**\n\nApplied Industrial Technologies, Inc. 
common stock is listed on the New York Stock Exchange under the symbol AIT. The Company is identified in most financial listings as \"AppliedIndlTch.\"\n\n#### **RESEARCH ON APPLIED INDUSTRIAL TECHNOLOGIES IS AVAILABLE THROUGH:**\n\n#### **BB&T CAPITAL MARKETS**\n\nHolden Lewis, 703/471-3894\n\n**CJS SECURITIES** Jonathan Tanwanteng, 914/287-7600 **CLEVELAND RESEARCH COMPANY** Adam Uhlman, 216/649-7241\n\n#### **KEYBANC CAPITAL MARKETS** Jeffrey D. Hammond, 216/689-0236\n\n**SIDOTI & CO.** Joseph Mondillo, 212/894-3339 **GREAT LAKES REVIEW – Division of Wellington Shields & Co.** Elliott Schlang, 216/767-1340\n\n#### **SHAREHOLDER INQUIRIES**\n\nRequests to transfer Applied Industrial Technologies, Inc. shares and all correspondence regarding address change information, duplicate mailings, missing certificates, failure to receive dividend checks in a timely manner or to participate in the Company's direct stock purchase program should be directed to the Company's transfer agent and registrar:\n\n#### **COMPUTERSHARE TRUST COMPANY, N.A.**\n\n250 Royall Street Canton, MA 02021 800/988-5291\n\n#### **INVESTOR RELATIONS INQUIRIES SHOULD BE DIRECTED TO:**\n\n**MARK O. EISELE** Vice President – Chief Financial Officer & Treasurer Applied Industrial Technologies 1 Applied Plaza Cleveland, OH 44115-5014 Telephone: 216/426-4000, Fax: 216/426-4845\n\n#### **STEPHENS INC.**\n\nMatt Duncan, 501/377-3723 **WELLS FARGO SECURITIES, LLC**\n\nAllison Poliniak-Cusic, 212/214-5062 **WUNDERLICH SECURITIES**\n\nBrent D. Rakers, 901/251-2236\n\n#### **ANNUAL REPORT ON FORM 10-K**\n\n**The Applied Industrial Technologies, Inc. Annual Report on Form 10-K for the fiscal year ended June 30, 2012, including the financial statements and schedules thereto, is available at our website at www.Applied.com. 
It is also available without charge upon written request to the Vice President – Chief Financial Officer & Treasurer at the address shown.**\n\n#### **ANNUAL MEETING**\n\nThe Annual Meeting of Shareholders will be held at 10:00 a.m., Tuesday, October 23, 2012, at the Corporate Headquarters of Applied Industrial Technologies, 1 Applied Plaza, East 36th and Euclid Avenue, Cleveland, Ohio 44115.\n\n### **COMPARISON OF FIVE-YEAR CUMULATIVE TOTAL RETURN**\n\nApplied Industrial Technologies, Inc., Standard & Poor's 500, and Peer Group (Performance Results from 7/1/2007 through 6/30/2012)\n\n> Assumes $100 invested at the close of trading 6/30/07 in Applied Industrial Technologies, Inc. common stock, Standard & Poor's 500, and Peer Group.\n\n> Cumulative total return assumes reinvestment of dividends.\n\nThe returns of the companies in the Peer Group are weighted based on the companies' relative stock market capitalization.\n\nPeer Group companies selected on a line-of-business basis include: DXP Enterprises, Inc.; Fastenal Company; Genuine Parts Company; W. W. Grainger, Inc.; Kaman Corporation; Lawson Products, Inc.; MSC Industrial Direct Co., Inc.; and WESCO International, Inc.\n\n| | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 |\n| --- | --- | --- | --- | --- | --- | --- |\n| Applied Industrial Technologies, Inc. 
| $100.00 | $83.63 | $70.22 | $92.62 | $133.17 | $141.07 |\n| Standard & Poor's 500 | 100.00 | 86.88 | 64.11 | 73.36 | 95.88 | 101.10 |\n| Peer Group | 100.00 | 86.96 | 74.77 | 100.34 | 148.47 | 170.81 |\n\n25358_AIT_Report_WT.indd 45 8/23/12 8:33 AM\n\nSource: Value Line Publishing LLC", - "page_start": 46, - "page_end": 46, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "### HERE ARE A FEW GUIDELINES TO KEEP IN MIND WHEN SENDING E-MAILS TO YOUR COLLEAGUES:\n\n#### • Always use a relevant and descriptive subject line.\n\nE-mails with blank subject lines may be marked as spam by the recipient's e-mail client, and e-mails with non-descriptive subject lines such as \"Hello\" or \"Meeting\" may be ignored.\n\n#### • Write your e-mail in clear and simple language.\n\nDon't try to sound too formal, and don't use complicated words when simple ones would work just fine. As far as possible, write in the active voice.\n\n- Structure your message clearly, and include only the necessary information.\nTake care not to confuse the message by including too many topics in one e-mail. Respect your colleagues' time, and try to keep your messages as short as possible.\n\n#### • Don't type your e-mail in ALL CAPS.\n\nThis is regarded as the online equivalent of shouting.\n\n- Always proofread your e-mail before you hit 'send'. Grammar and spelling errors come across as unprofessional.\n- If you include a link in your e-mail, make sure that you provide some context.\n\nYour recipients are unlikely to click on a link if they don't have any idea as to what they are going to see when they open it.\n\n- Only mark an e-mail as 'urgent' when it really does require immediate attention.\nWhat's urgent to you may not always be urgent to your recipients.\n\n- Don't use the CC' or Reply All' functions unnecessarily. 
Only send your e-mails to the people who really need to see them.", - "page_start": 52, - "page_end": 52, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "#### **Information on Directors**\n\n#### **Michael Damer Hannell**\n\n*Chairman, BSc Eng (Hons), FIEAust*\n\n#### *Experience*\n\nMike has been a Director of Sundance since March 2006 and chairman of our board of directors since December 2008. Mr. Hannell has over 45 years of experience in the oil and gas industry, initially in the downstream sector and subsequently in the upstream sector. His extensive experience has been in a wide range of design and construction, engineering, operations, exploration and development, marketing and commercial, financial and corporate areas in the United States, United Kingdom, continental Europe and Australia at the senior executive level with Mobil Oil (now Exxon) and Santos Ltd. Mr. Hannell recently finished his term as the chairman of Rees Operations Pty Ltd (doing business as Milford Industries Pty Ltd), an Australian automotive components and transportation container manufacturer and supplier. He has also held a number of other board appointments including the chairman of Sydac Pty Ltd, a designer and producer of simulation training products for industry. Mr. Hannell has also served on a number of not-for-profit boards, with appointments as president of the Adelaide-based Chamber of Mines and Energy, president of Business SA (formerly the South Australian Chamber of Commerce and Industry), chairman of the Investigator Science and Technology Centre, chairman of the Adelaide Graduate School of Business, and a member of the South Australian Legal Practitioners Conduct Board. Mr. 
Hannell holds a Bachelor of Science degree in Engineering (with Honors) from the University of London and is a Fellow of the Institution of Engineers Australia.\n\n*Interest in Shares*: 1,059,000 ordinary shares in Sundance Energy Australia Limited\n\n*Special Responsibilities*: -Chairman of the Board of Directors -Chairman of the Remuneration and Nominations Committee -Member of the Audit and Risk Management Committee -Member of the Reserves Committee\n\n*Other Directorships*: Nil\n\n#### **Eric P. McCrady**\n\n*Director, BS in Business* Administration\n\n#### *Experience*\n\nEric has been our Chief Executive Officer since April 2011 and Managing Director of our board of directors since November 2011. He also served as our Chief Financial Officer from June 2010 until becoming Chief Executive Officer in 2011. Mr. McCrady has served in numerous positions in the energy, private investment and retail industries. From 2004 to 2010, Mr. McCrady was employed by The Broe Group, a private investment firm, in various financial and executive management positions across a variety of industry investment platforms, including energy, transportation and real estate. From 1997 to 2003, Mr. McCrady was employed by American Coin Merchandising, Inc. in various corporate finance roles. Mr. McCrady holds a degree in Business Administration from the University of Colorado, Boulder.\n\n*Interest in Shares, Restricted Share Units and Options:* 1,908,581 Ordinary Shares in Sundance Energy Australia Limited and 791,561 Restricted Share Units\n\n*Special Responsibilities*: Managing Director and Chief Executive Officer of the Company\n\n*Other Directorships*: Nil", - "page_start": 24, - "page_end": 24, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "#### **22. 
SUBSEQUENT EVENTS**\n\nOn 25 August 2000 the Company announced that it had reached two agreements for the placement of a total of 16,666,666 ordinary fully paid shares in the Company at an issue price of 30 cents each (Shares).\n\nThe first agreement was with Mr Mark Bradley, who agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, a further 3,441,666 within 7 days of that meeting.\n\nOn Mr Bradley being appointed a Director of the Company, in order to comply with the requirements of the Corporations Law and the ASX Listing Rules, the Company and Mr Bradley agreed to defer the first issue of Shares, making both issues conditional on shareholder approval.\n\nThe second agreement was with Clough Engineering Limited, pursuant to which it agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, 6,775,000 shares, within 7 days of that meeting.\n\nOn 15 June 2000 the Company announced that with effect from 1 July 2000 it acquired a 50% interest in OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Shares in the Company. OIS MOC Joint Venture Pty Ltd owns the goodwill of a successful labour hire company. That company is to be renamed Mermaid Labour and Management Limited (MLML).\n\nMLML offers a full labour hire service inclusive of industrial relations consultancy, negotiating agreements and awards and were appropriate, provides ongoing management of the labour force.\n\nThe financial effect of the above events have not been reflected in these financial statements.\n\n# **2000 1999 Cents per Cents per Share Share** Basic earnings per share (0.62) 8.09 Diluted earnings per share (0.21) 8.05 **2000 1999 No. No.** Weighted average number of ordinary shares on issue used in the calculation of basic earnings per share 43,000,000 30,356,164\n\n#### **23. 
EARNINGS PER SHARE**", - "page_start": 56, - "page_end": 56, - "source_file": "ASX_MRM_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HNI_2003.pdf", - "query": "What explains the decrease in net sales of HON industries in 2002 ?", - "target_page": 34, - "target_passage": "The decrease in 2002 was due to the decline in the office furniture market due to unstable economic conditions and the deletion of less profitable product lines in the hearth products segment", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### *O F F I C E F U R N I T U R E*\n\nOffice furniture comprised 74% of consolidated net sales for 2003 and 76% of consolidated net sales for 2002 and 2001. Net sales for office furniture increased 2% in 2003 and decreased 6% in 2002. The increase in 2003 is due to the increased week from the Company's 52/53-week fiscal year. The office furniture industry has experienced an unprecedented three-year decline in shipments. The Business and Institutional Furniture Manufacturer's Association (BIFMA) reported 2003 shipments down over 5% and 2002 shipments down 19%. The Company's estimated share of the market based on reported office furniture shipments increased to 15.3% in 2003 compared to 14.4% in 2002 and 12.4% in 2001. This increase was achieved by providing strong brands, innovative products and services, and greater value to end-users.\n\nOperating profit as a percent of sales was 10.0% in 2003, 10.2% in 2002, and 8.2% in 2001. Included in 2003 were $15.2 million of net pretax charges related to the closure of two office furniture facilities, which impacted operating margins by 1.1 percentage points. Included in 2002 were $3.0 million of restructuring charges, which impacted operating margins by 0.2 percentage points, and 2001 included $22.5 million of restructuring charges, which impacted operating margins by 1.7 percentage points. 
The increase in operating margins is due to increased gross profit from the benefits of restructuring initiatives, rapid continuous improvement programs, and increased price realization, offset by additional investments in brand building and selling initiatives and increased freight expense.\n\n#### *H E A R T H P R O D U C T S*\n\nHearth products sales increased 9% in 2003 and decreased 3% in 2002, respectively. The growth in 2003 was attributable to strong housing starts, growth in market share in both the new construction and retail channels, strengthening alliances with key distributors and dealers, as well as focused new product introductions. The decrease in 2002 was mainly due to pruning out less profitable product lines.\n\nOperating profit as a percent of sales in 2003 was 12.1% compared to 10.8% and 9.2% in 2002 and 2001, respectively. The improved profitability in 2003 was the result of leveraging fixed costs over a higher sales volume and increased sales through company-owned distribution offset by increased freight costs and higher labor costs from increased use of overtime and temporary labor to meet record levels of demand. The increase in 2002 was mainly due to discontinuance of goodwill and indefinite-lived intangible amortization of approximately $7 million due to the adoption of SFAS 142.\n\n#### **Liquidity and Capital Resources**\n\nDuring 2003, cash flow from operations was $141.3 million, which along with funds from stock option exercises under employee stock plans, provided the funds necessary to meet working capital needs, invest in capital improvements, repay long-term debt, repurchase common stock, and pay increased dividends.\n\nCash, cash equivalents, and short-term investments totaled $204.2 million at the end of 2003 compared to $155.5 million at the end of 2002 and $78.8 million at the end of 2001. The Company used approximately $80 million of cash to acquire Paoli Inc. on January 5, 2004. 
These remaining funds, coupled with cash from future operations and additional long-term debt, if needed, are expected to be adequate to finance operations, planned improvements, and internal growth. The Company is not aware of any known trends or demands, commitments, events, or uncertainties that are reasonably likely to result in its liquidity increasing or decreasing in any material way.\n\nThe Company places special emphasis on the management and reduction of its working capital with a particular focus on trade receivables and inventory levels. The success achieved in managing receivables is in large part a result of doing business with quality customers and maintaining close communication with them. Trade receivables at year-end 2003 were virtually unchanged from the prior year. Trade receivable days outstanding have averaged approximately 37 to 38 days over the past three years. The Company's inventory turns were 23, 23, and 18 for 2003, 2002, and 2001, respectively. Increased imports of raw materials and finished goods may negatively affect inventory turns in the future but the Company is constantly looking for ways to add efficiency to its supply chain. The decrease in accounts payable and accrued expenses is due to timing of vendor and marketing program payments and the payment of additional purchase consideration and debenture earn out related to a prior acquisition. The Company also funded the retiree medical portion of its postretirement benefit obligation in 2003.\n\n#### *I N V E S T M E N T S*\n\nThe Company has investments in investment grade equity and debt securities. Management classifies investments in marketable securities at the time of purchase and reevaluates such classification at each balance sheet date. Equity securities are classified as available-for-sale and are stated at current market value with unrealized gains and losses included as a separate component of equity, net of any related tax effect. 
Debt securities are classified as held-to-maturity and are stated at amortized cost. A table of holdings as of year-end 2003 and 2002 is", - "page_start": 35, - "page_end": 35, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "### YEAR ENDED JUNE 30, 2011 vs. 2010\n\nThe following table is included to aid in review of Applied's statements of consolidated income.\n\n| | Year Ended June 30, As a % of Net Sales | | Change in $'s Versus Prior Period |\n| --- | --- | --- | --- |\n| | 2011 | 2010 | % Increase |\n| Net Sales | 100.0 % | 100.0 % | 16.9 % |\n| Gross Profit | 27.7 % | 27.2 % | 18.9 % |\n| Selling, Distribution & Administrative | 20.9 % | 21.4 % | 14.0 % |\n| Operating Income | 6.8 % | 5.8 % | 37.0 % |\n| Net Income | 4.4 % | 3.5 % | 46.8 % |\n\nNet sales in fiscal 2011 were $2.2 billion, which was $319.6 million or 16.9% above the prior year driven by improvements in the industrial economy. Incremental net sales from companies acquired in fiscal 2011 contributed approximately $40.8 million or 1.8%. Currency translation increased fiscal year 2012 sales by approximately $16.3 million or 0.7%. In local currency, net sales from our Canadian operations were up 23.1% from fiscal 2010, including 8.4% from acquisitions. In local currency, net sales from our Mexican operations were up 17.9%. The number of selling days in fiscal 2011 was the same as in fiscal 2010.\n\nNet sales of our Service Center Based Distribution segment increased $234.3 million, or 15.2%, compared to fiscal year 2010 led by improvements in the industrial economy, with acquisitions adding $40.8 million or 2.7%. 
Net sales of our Fluid Power Businesses segment increased $85.4 million or 23.9%, driven by improvements in the industrial economy.\n\nThe sales product mix for fiscal 2011 was 70.5% industrial products and 29.5% fluid power products compared to 71.7% industrial and 28.3% fluid power in the prior year.\n\nAt June 30, 2011, we had a total of 474 operating facilities in the U.S., Canada and Mexico versus 455 at June 30, 2010. The increase in operating facilities represented 11 new locations due to acquisitions, the opening of 2 new locations, the impact of redefining certain shop operations which added 11 locations, and the merger of 5 locations with other locations.\n\nOur gross profit margin increased to 27.7% in fiscal 2011 from 27.2% in fiscal 2010. LIFO benefits had a negative 1.0% impact on gross profit margin in fiscal 2011 versus fiscal 2010. LIFO benefits recorded during fiscal year 2011 totaled $5.3 million which provided an overall benefit in our gross profit percent of 0.2%. This compares to a LIFO benefit of $23.5 million in fiscal 2010 which added 1.2% to gross profit. Our focused efforts on\n\n2\n\nexpenses as a percent of sales helped offset the reduction in gross profit. Management continues to seek opportunities to take advantage of economies of scale to improve the SD&A\n\nInterest expense, net, decreased $1.7 million during fiscal 2012 compared with the prior year. We repaid all of our outstanding\n\nOther expense (income), net, represents certain non-operating items of income and expense. This was $1.6 million of expense in fiscal 2012 compared to income of $3.8 million of income in fiscal 2011. Current year expense primarily consists of foreign currency transaction losses of $1.6 million. 
Fiscal 2011 included $2.0 million of unrealized gains on investments held by nonqualified deferred compensation trusts and recognition of a $1.7 million gain from death benefits received under two life\n\nIncome tax expense as a percent of income before taxes was 34.8% for fiscal 2012 and 36.7% for fiscal 2011. The impact of lower effective tax rates and higher income in foreign jurisdictions favorably reduced our rate when compared to the U.S. federal statutory rate by 1.8%. Further reducing our rate compared to the U.S. federal statutory rate is a permanent dividend deduction benefit of 0.5%. These reductions compared to the U.S. federal rate were offset by the impact of state and local taxes which\n\nIn fiscal 2011, the impact of lower effective tax rates and higher income in foreign jurisdictions favorably reduced our rate when compared to the U.S. federal statutory rate by 1.0%. Further reducing our rate compared to the U.S federal statutory rate is a\n\nreductions compared to the U.S. federal rate were offset by the impact of state and local taxes and by provision made for U.S. income tax on a portion of undistributed earnings not considered permanently reinvested in our Canadian subsidiaries which increased the rate by 2.8% and 1.8%, respectively.\n\nWe expect our income tax rate for fiscal 2013 to be in the range\n\nAs a result of the factors addressed above, net income for fiscal 2012 increased $12.0 million or 12.4% from the prior year. Net income per share increased at a slightly higher rate of 13.4% due\n\nThe number of Company associates was 4,664 at June 30, 2012\n\npermanent dividend deduction benefit of 0.5%. These\n\ndebt in fiscal 2011 which lowered interest expense.\n\nexpenses in this segment.\n\ninsurance policies.\n\nincreased the rate by 2.5%.\n\nof 34.0% to 35.0%.\n\nto stock repurchases in fiscal 2012.\n\nand 4,640 at June 30, 2011.\n\nOur gross profit margin was 27.6% in fiscal 2012 versus 27.7% in fiscal 2011. 
Positive impacts as a result of higher supplier purchasing incentives offset the impact of lower LIFO layer liquidation benefits recognized in the current year ($3.4 million of LIFO layer liquidation benefits in fiscal 2012 versus $12.3 million\n\nSelling, distribution and administrative expenses (SD&A) consist of associate compensation, benefits and other expenses associated\n\nmanagement, and providing marketing and distribution of the Company's products, as well as costs associated with a variety of administrative functions such as human resources, information technology, treasury, accounting, legal, and facility related expenses. SD&A increased $23.7 million or 5.1% during fiscal 2012 compared to the prior year, and as a percent of sales decreased to 20.5% from 20.9% in fiscal 2011. Enterprise Resource Planning (ERP) project cash expenses were $18.3 million ($9.8 million above the prior year period). SD&A of businesses acquired since the prior year period added $5.6 million. Effective\n\nwith selling, purchasing, warehousing, supply chain\n\nDecember 31, 2011, the Executive Organization and Compensation Committee of the Board of Directors froze participant benefits (credited service and final average earnings) and entry into the Supplemental Executive Retirement Benefits Plan (SERP) which constituted a plan curtailment. As a result, we recognized $3.1 million in prior service costs upon curtailment of the plan in the second quarter of fiscal 2012. We also incurred one-time expenses associated with our CEO transition of $1.4 million in fiscal 2012. The translation impact of our foreign subsidiaries into U.S. dollars had an unfavorable impact of $0.5\n\nOperating income increased 11.7% to $168.4 million during fiscal 2012 from $150.8 million during 2011. As a percent of sales, operating income increased to 7.1% in the current year from 6.8% in 2011. 
The $17.6 million increase in operating income during fiscal 2012 primarily reflects higher sales levels and the impact of leverage on increased sales as we kept our SD&A to\n\n20.5% of sales in 2012 versus 20.9% in fiscal 2011.\n\n(representing 0.2% of the improvement).\n\nOperating income as a percentage of sales for the Service Center Based Distribution segment increased to 7.1% in fiscal 2012 from 6.5% in fiscal 2011, this increase is attributable to improved gross profit margins (representing 0.4% of the improvement) and higher sales levels without a commensurate increase in SD&A\n\nThe Fluid Power Businesses segment operating income decreased slightly to 9.2% in fiscal 2012 from 9.5% in fiscal 2011. This reduction is attributable to lower net gross profit margins primarily from one vertical market within one of our Fluid Power Businesses (representing 0.5% of the reduction). Lower SD&A\n\nmillion on SD&A in the year.\n\nin fiscal 2011).\n\n3\n\n25358_AIT_Report_WT.indd 7 8/23/12 8:33 AM\n\nselling products at a higher gross profit margin led to an approximate 0.9% improvement in gross profit margins. Other positive impacts on margins were an increase of approximately 0.4% from businesses acquired during the fiscal year and an increase of approximately 0.2% due to lower scrap expense.\n\nSD&A increased $56.7 million or 14.0% during fiscal 2011 compared to fiscal year 2010, and as a percent of sales decreased to 20.9% from 21.4% in fiscal 2010. Associate compensation and benefits, including amounts tied to financial performance, increased $27.4 million. Acquisitions added $18.4 million of SD&A compared to fiscal year 2010, including additional amortization expense of $1.4 million. Incremental expenses associated with the development of a new ERP platform totaled $8.6 million. 
Foreign currency translation had an unfavorable impact of $3.1 million in fiscal 2011.\n\nOperating income increased 37.0% to $150.8 million during fiscal 2011 from $110.1 million during 2010. As a percent of sales, operating income increased to 6.8% in fiscal 2011 from 5.8% in 2010. The $40.7 million increase in operating income during fiscal 2011 primarily reflects higher sales levels, improved gross profit margins and the impact of leverage on increased sales as we kept our SD&A to 20.9% of sales in 2011 versus 21.4% in fiscal 2010.\n\nOperating income as a percentage of sales for the Service Center Based Distribution segment increased to 6.5% in fiscal 2011 from 5.0% in fiscal 2010, this increase is attributed to higher sales levels without a commensurate increase in SD&A (representing 0.9% of the improvement) and improved gross profit margins (representing 0.6% of the improvement).\n\nThe Fluid Power Businesses segment operating income increased to 9.5% in fiscal 2011 from 7.5% in fiscal 2010, attributed to higher sales levels without a commensurate increase in SD&A (representing 1.5% of the improvement) and improved gross profit margins (representing 0.5% of the improvement).\n\nInterest expense, net, decreased $3.8 million during fiscal 2011 compared with the prior year. We repaid all of our outstanding debt in fiscal 2011 which lowered interest expense.\n\nOther expense (income), net, was $3.8 million of income in fiscal 2011 compared to income of $0.4 million in fiscal 2010. 
Fiscal 2011 included $2.0 million of unrealized gains on investments held by non-qualified deferred compensation trusts and recognition of a $1.7 million gain from death benefits received under two life insurance policies.", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "### M A N A G E M E N T ' S D I S C U S S I O N A N D A N A L Y S I S\n\nThe following discussion of the Company's historical results of operations and of its liquidity and capital resources should be read in conjunction with the Consolidated Financial Statements of the Company and related notes.\n\n#### **Overview**\n\nThe Company has two reportable core operating segments: office furniture and hearth products. The Company is the second largest office furniture manufacturer in the United States and the nation's leading manufacturer and marketer of gas- and wood-burning fireplaces.\n\nFrom 2000 to 2003, the office furniture industry experienced an unprecedented three-year decline due to the challenging economic environment. In 2003, this decline negatively impacted the Company's office furniture segment. In contrast, the housing market was at record high levels during 2003, which positively impacted the Company's hearth segment. The Company outperformed its peers in both segments in which it competes. The Company gained market share by providing strong brands, innovative products and services, and greater value to its end-users. Fiscal 2003 also included an extra week of activity due to the Company's 52/53-week fiscal year.\n\nNet sales were $1.8 billion in 2003, as compared to $1.7 billion in 2002. The increase in net sales reflects the 9% increase in the hearth segment and the additional week of business activity. In 2003 and 2002, the Company recorded restructuring charges and accelerated depreciation related to the closure and consolidation of office furniture facilities totaling $15.2 million and $3.0 million, respectively. 
Gross margins increased to 36.4% in 2003 from 35.4% in 2002 due to benefits from restructuring initiatives and its rapid continuous improvement program, new products, and increased price realization. The Company also invested aggressively in brand building and selling initiatives in 2003. Net income was $98.1 million or $1.68 per diluted share in 2003, as compared to $91.4 million or $1.55 per diluted share in 2002.\n\nThe Company generated $141.3 million in cash flow from operating activities and increased its cash position, including shortterm investments, by $48.6 million to $204.2 million. The Company paid dividends of $30.3 million and repurchased $21.5 million of its common stock, while investing $35.7 million in net capital expenditures and repaying $20.2 million of debt.\n\n#### **Critical Accounting Policies and Estimates** *G E N E R A L*\n\nManagement's Discussion and Analysis of Financial Condition and Results of Operations is based upon the Consolidated Financial Statements, which have been prepared in accordance with GAAP. The preparation of these financial statements requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenue and expenses, and related disclosure of contingent assets and liabilities. Management bases its estimates on historical experience and on various other assumptions that are believed to be reasonable under the circumstances, the results of which form the basis for making judgments about the carrying values of assets and liabilities that are not readily apparent from other sources. Senior management has discussed the development, selection and disclosure of these estimates with the Audit Committee of our Board of Directors. 
Actual results may differ from these estimates under different assumptions or conditions.\n\nAn accounting policy is deemed to be critical if it requires an accounting estimate to be made based on assumptions about matters that are uncertain at the time the estimate is made, and if different estimates that reasonably could have been used, or changes in the accounting estimates that are reasonably likely to occur periodically, could materially impact the financial statements. Management believes the following critical accounting policies reflect its more significant estimates and assumptions used in the preparation of the Consolidated Financial Statements.\n\n*Fiscal year end* – The Company's fiscal year ends on the Saturday nearest December 31. Fiscal year 2003, the year ended January 3, 2004, contained 53 weeks, while fiscal year 2002, the year ended December 28, 2002, and fiscal year 2001, the year ended December 29, 2001, contained 52 weeks. A 53-week year occurs approximately every sixth year.\n\n*Revenue recognition* – Revenue is normally recognized upon shipment of goods to customers. In certain circumstances revenue is not recognized until the goods are received by the customer or upon installation and customer acceptance based on the terms of the sale agreement. Revenue includes freight charged to customers; related", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "The Company's policy requires measurement of the allowance for an impaired collateral dependent loan based on the fair value of the collateral. Other loan impairments are measured based on the present value of expected future cash flows or the loan's observable market price.\n\n#### **Results of Operations**\n\n*Performance Summary*. Net earnings for 2002 were $34.0 million, an increase of $4.6 million, or 15.7%, over net earnings for 2001 of $29.4 million. Net earnings for 2000 were $28.3 million. 
The increase in net earnings for 2002 over 2001 was primarily attributable to an increase in net interest income resulting primarily from growth in average earning assets and an improved net interest margin. The increase in net earnings for 2001 over 2000 was primarily attributable to an increase in net interest income resulting primarily from the growth in average earning assets and an increase in noninterest income resulting primarily from increases in service fees on deposit accounts and real estate mortgage fees.\n\nOn a basic net earnings per share basis, net earnings were $2.75 for 2002 as compared to $2.38 for 2001 and $2.28 for 2000. Return on average assets was 1.78% for 2002 as compared to 1.62% for 2001 and 1.67% for 2000. Return on average equity was 15.13% for 2002 as compared to 14.35% for 2001 and 15.39% for 2000.\n\nAffecting our 2002 net earnings and basic and diluted earnings per share is the implementation of Statement of Financial Accounting Standards No. 141, \"Business Combinations\" (\"SFAS No. 141\") and Statement of Financial Accounting Standards No. 142, \"Goodwill and Other Intangible Assets\" (\"SFAS No. 142\"). SFAS No. 141 requires that all business combinations initiated after June 30, 2001 be accounted for under the purchase method and addresses the initial recognition and measurement of goodwill and other intangible assets acquired in a business combination. SFAS No. 142 addresses the initial recognition and measurement of intangible assets acquired outside of a business combination and the accounting for goodwill and other intangible assets subsequent to their acquisition. SFAS No. 142 provides that intangible assets with finite useful lives be amortized and that goodwill and intangible assets with indefinite lives not be amortized, but rather be tested at least annually for impairment. SFAS No. 
142 was effective January 1, 2002 for calendar year companies; however, acquired goodwill and intangible assets recorded in the acquisition of City Bancshares, Inc. closed subsequent to June 30, 2001 were subject immediately to its provisions.\n\nOn January 1, 2002, goodwill amounting to $23,765,896 was not subject to further amortization as a result of SFAS No. 142. The Company conducted its initial impairment test in 2002, with no reduction of recorded goodwill resulting from the test. A reconciliation adjusting comparative net earnings and earnings per share for the years ended December 31, 2001 and 2000, to show the effect of no longer amortizing the Company's goodwill, follows:\n\n| | 2001 | | 2000 | |\n| --- | --- | --- | --- | --- |\n| Reported net earnings | $ 29,354,505 | | $ 28,316,047 | |\n| Add back: goodwill amortization | | | | |\n| Goodwill amortization, before income tax | 1,641,367 | | 1,641,367 | |\n| Income tax benefit | (420,000) | | (420,000) | |\n| Adjusted net earnings | $ 30,575,872 | | $ 29,537,414 | |\n| Basic earnings per share: | | | | |\n| Reported net earnings | $ | 2.38 | $ | 2.28 |\n| Goodwill amortization, net of income tax benefit | | .10 | | .10 |\n| Adjusted net earnings | $ | 2.48 | $ | 2.38 |\n| Earnings per share, assuming dilution: | | | | |\n| Reported net earnings | $ 2.37 | | $ | 2.27 |\n| Goodwill amortization, net of income tax benefit | .10 | | | .10 |\n| Adjusted net earnings | $ 2.47 | | $ | 2.37 |\n\n*Net Interest Income*. Net interest income is the difference between interest income on earning assets and interest expense on liabilities incurred to fund those assets. Our earning assets consist primarily of loans and investment securities. Our liabilities to fund those assets consist primarily of noninterest-bearing and interestbearing deposits. 
Tax-equivalent net interest income was $84.2 million in 2002 as compared to $74.8 million in", - "page_start": 43, - "page_end": 43, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "#### **Impact on Operating Profit**\n\n#### **Net Cash Flow (automotive)**\n\n#### **Net Income**\n\nNet non-operating expenses totaled ¥5.5 billion, ¥9.7 billion lower than last year. This was primarily due to a ¥5.3 billion decrease in financial costs and a ¥5.3 billion increase in equity in earnings of unconsolidated subsidiaries and affiliates, thanks mainly to Renault. Net extraordinary losses totaled ¥62.5 billion, ¥10.7 billion lower than last year, mainly due to the sale of the site of the former Murayama plant. Net income before taxes came to ¥793.2 billion. Income taxes totaled ¥258.0 billion, with an effective consolidated tax rate of 33 percent. Minority interests amounted to ¥22.9 billion, mainly from Yulon Nissan Motor. As a result, net income reached ¥512.3 billion, an increase of ¥8.6 billion.\n\n#### **FINANCIAL POSITION**\n\n#### **Balance Sheet**\n\nIn 2004, total consolidated assets increased by 25.3 percent to ¥9,848.5 billion.\n\nCurrent assets increased by 36.4 percent, or ¥1,372.4 billion, to ¥5,139.4 billion. This increase included changes in the scope of consolidation by ¥271.1 billion and an increase in sales finance receivables by ¥840.6 billion thanks to increased sales in the U.S. Fixed assets increased by 15.1 percent, or ¥616.7 billion, to ¥4,708.0 billion. Property, plant and equipment valuation increased by ¥593.6 billion, mainly due to capital expenditures of ¥477.5 billion and an increase in leased vehicles.\n\nCurrent liabilities increased by 28.1 percent, or ¥872.2 billion, to ¥3,974.7 billion. This increase included changes in the scope of consolidation of ¥144.4 billion and an increase in short-term borrowings for sales financing of ¥558.5 billion.\n\nIn 2004, total shareholder equity increased from ¥2,024.0 billion to ¥2,465.8 billion. 
This gain was primarily due to net income of ¥512.3 billion, offset by dividends paid totaling ¥101.2 billion. Consolidated shareholder equity represented 29 percent of total revenues and 25 percent of total assets.\n\n#### **Cash Flow**\n\nCash from operating activities was ¥369.4 billion, below the previous year's level of ¥797.4 billion. This drop was primarily caused by a ¥331.2 billion increase in finance receivables in the U.S. and Japan. There were also increases in inventory and income tax paid.\n\nCash used for investing activities increased by ¥108.9 billion to ¥865.0 billion. This increase was mainly due to an increase of leased vehicles in the U.S.\n\nCash from financing activities totaled ¥521.0 billion, including an increase in short-term borrowing of ¥666.2 billion, offset by ¥94 billion for the payment of dividends and ¥26 billion for the acquisition of treasury stock.\n\nIn total, cash and cash equivalents increased by ¥95.6 billion to ¥289.8 billion from fiscal 2004.", - "page_start": 14, - "page_end": 14, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Income from discontinued operations was $22.4 million after taxes, an increase of $15.0 million or 202%. The income from discontinued operations in 2003 includes the sale of the partnership interest in February 2003 and results from the two months of its operations in 2003.\n\nThe Company adopted FAS 143 \"Accounting for Asset Retirement Obligations.\" effective January 1, 2003, and as a result recorded a charge to earnings for the cumulative effect of this change in accounting of $76 thousand after taxes.\n\nNet income was $32.1 million, an increase of $27.6 million or 610%. The increase is a result of improved operating results in the PCS operations, the 2002 VeriSign stock loss and the sale of the cellular operations.\n\n#### **DISCONTINUED OPERATIONS**\n\nThe Company invested $2.0 million in the Virginia 10 RSA limited partnership in the early 1990's. 
The partnership's local customer base peaked in early 2000 with nearly 12,000 subscribers, then steadily declined to 6,700 by December 31, 2002. The decline was the result of competition with digital technologies and increased competition from national carriers in the area. As a result of the decline in the subscriber base, and the need for extensive capital expenditures to transform the analog network into a digital cellular network, the Company elected to sell its 66% interest in the partnership to one of the minority partners. The agreement was signed in November 2002, and closing was February 28, 2003. The Company's portion of the net income from its operations for 2003, 2002 and 2001 was $1.2 million, $7.4 million and $6.7 million, respectively.\n\n#### **CONTINUING OPERATIONS**\n\n#### **2002 compared to 2001**\n\nTotal revenue was $93.0 million in 2002, an increase of $24.3 million or 35.3%. Total revenues included $57.9 million of wireless revenues, an increase of $21.7 million or 60.2%; wireline revenues of $28.7 million, an increase of $1.3 million or 4.6%; and other revenues of $6.4 million, an increase of $1.2 million or 24.5%.\n\nWithin wireless revenues, the PCS operation contributed $55.5 million, an increase of $21.4 million, or 63.0%. PCS service revenues were $37.4 million, an increase of $18.3 million or 95.7%. The increase in the subscriber base, which totaled 67,842 at December 31, 2002, was an increase of 20,524 or 43% from the prior year end.\n\nPCS travel revenue, which is compensation between Sprint and its PCS Affiliates for use of the other party's network, was $16.5 million, an increase of $2.9 million or 21.3%. Travel revenue is impacted by the geographic size of the Company's network service area, the overall number of Sprint wireless customers, and the travel exchange rate. The rate received on travel was $0.10 per minute in 2002. 
The rates in 2001 were $0.20 per minute from January 1, 2001 through April 30, 2001; $0.15 per minute from May 1, 2001 through September 30, 2001; and $0.12 per minute from October 1, 2001 through December 31, 2001.\n\nPCS equipment sales were $1.6 million, an increase of $0.3 million or 19.6%. The equipment sales are net of $0.3 million of rebates and discounts given at the time of sale, which became more pronounced during the year to meet industry competition for subscriber additions and subscriber retention.\n\nIn accordance with Sprint's requirements, the Company launched third generation (3G 1X) service in August 2002. The impact of 3G 1X-network enhancements on revenues was not significant in 2002.\n\nTower leases added $2.1 million to wireless revenues, an increase of $0.4 million or 24.5%. The increase was the result of other wireless carriers executing additional leases to use space on the Company's portfolio of towers. Of the 82 towers and poles owned by the Company as of December 31, 2002, 46 have tower space leased to other carriers.\n\nWireless revenues from the Company's paging operation were $0.3 million, a decrease of $0.1 million as the local customer base increasingly chose alternative digital wireless services. Paging service subscribers declined by 7.8% in 2002 from 3,190 subscribers to 2,940 subscribers.\n\nWithin wireline revenues, the Telephone operation contributed $22.5 million, an increase of $0.9 million, or 4.0%. Telephone access revenues were $10.9 million, an increase of $1.4 million or 14.8%. The growth in access revenues was driven by a 38.4% increase in access minutes of use on the Company's network and an increased percentage of minutes in the intrastate jurisdiction, where rates are higher than the interstate jurisdiction. On January 1, 2002 the Federal subscriber line charge (SLC) for residential customers increased from $3.50 to $5.00 per month. 
The SLC", - "page_start": 50, - "page_end": 50, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Operating income grew to $9.3 million, an increase of $2.9 million or 45.4%. Revenue growth, primarily in the PCS operation, was greater than the increase in operating expense, and the overall operating margin was 10.0%, compared to 9.4% in 2001. The elevated bad debt expense in the PCS and telephone operations had a dampening effect on the operating margin improvement.\n\nOther income (expense) is comprised of non-operating income and expenses, interest expense and gain or loss on investments. Collectively, the net impact of these items to pre-tax income was an expense of $14.3 million for 2002, compared to income of $9.1 million from 2001. The largest component was the loss on investments that is discussed below.\n\nInterest expense was $4.2 million, an increase of $0.1 million or 1.4%. The Company's average debt outstanding was approximately the same during 2002 as compared to the previous year.\n\nNet losses on investments were $10.0 million, compared to a gain of $12.9 million from 2001. Results in 2002 include the sale of the VeriSign, Inc. stock for a loss of $9.0 million compared to a gain recorded on the VeriSign stock of $12.7 million in 2001.\n\nNon-operating income was a loss of $0.1 million, a decrease of $0.3 million, primarily due to losses recorded for the Company's portfolio of investments, offset by an increase in patronage equity earned from CoBank, the Company's primary lender.\n\nIncome (loss) from continuing operations before taxes was a $5.0 million loss compared to a profit of $15.5 million in 2001, a decrease of $20.5 million. 
Gains and losses on external investments contributed $21.7 million to this change from 2002 to 2001.\n\nThe Company recognized an income tax benefit of $2.1 million on continuing operations in 2002, which is an effective tax rate of 42.2% due to the impact of net operating loss carry forwards generated in several states with higher tax rates, offset by the need for a valuation allowance.\n\nNet loss from continuing operations was $2.9 million, a decrease of $12.6 million from 2001. The results are primarily made up of the one-time impact of the losses on the sale of the VeriSign stock and the improvement in operating income.\n\nIncome from discontinued operations was $7.4 million after taxes, an increase of $0.7 million or 11%. Increased revenues from use of our cellular network by customers of other wireless providers were the main cause for the increase in net income.\n\nNet income was $4.5 million, a decrease of $11.9 million or 72.4%. The decrease is primarily the result of the $21.7 million decline in investment results due to the impact of the VeriSign gain recorded in 2001, and the loss on the sale of the VeriSign stock in 2002.\n\n#### **Investments in Non-Affiliated Companies**\n\nThe Company has investments in several available-for-sale securities, which the Company may choose to liquidate from time to time, based on market conditions, capital needs, other investment opportunities, or a combination of any number of these factors. As a result of the uncertainty of these factors, there is also uncertainty as to what the value of the investments may be when they are sold.\n\nThe fair value of the Company's available-for-sale securities was $0.2 million at the end of 2003, compared to $0.2 million at the end of 2002. The Company's available-for-sale portfolio at December 31, 2003 is made up of two investments, both of which are within the telecommunications industry. 
Due to the volatility of the securities markets, particularly in the telecommunications industry, there is uncertainty about the ultimate value the Company will realize with respect to these investments in the future.\n\nThe Company participates in emerging technologies by investing in entities that invest in start-up companies. This includes indirect participation through capital venture funds of South Atlantic Venture Fund III, South Atlantic Private Equity IV, Dolphin Communications Parallel Fund, Dolphin Communications Fund II and the Burton Partnership. The Company also participates by direct investment in privately held companies. Currently the Company's only direct investment is in NTC Communications, a provider of voice, video and data connections to off campus housing properties at universities and colleges. For those companies that eventually make public offerings of their securities, it", - "page_start": 52, - "page_end": 52, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### FISCAL YEAR 2004 FINANCIAL REVIEW\n\nNISSAN REPORTED A RECORD YEAR IN TERMS OF REVENUES, OPERATING INCOME, NET INCOME, SALES AND PRODUCTION VOLUME IN FISCAL 2004. NISSAN ACHIEVED TWO OF ITS THREE COMMITMENTS FOR NISSAN 180: AN 8 PERCENT OPERATING PROFIT MARGIN AND ZERO NET AUTOMOTIVE DEBT. THE REMAINING COMMITMENT IS THE ACHIEVEMENT OF ONE MILLION ADDITIONAL UNIT SALES. AT MID-YEAR 2005, GLOBAL SALES AT 1,809,000 UNITS WERE SLIGHTLY AHEAD OF THE COMMITMENT TO REACH 3,597,000 UNITS BY THE END OF SEPTEMBER 2005.\n\n#### **Net Sales**\n\nConsolidated net sales came to ¥8,576.3 billion, up 15.4 percent from last year. A higher volume and mix had a positive impact of ¥707.0 billion. Movements in foreign exchange rates produced a negative impact of ¥173.0 billion. 
Changes in the scope of consolidation, including Dongfeng Motor and Yulon Nissan Motor, raised revenues by ¥432.0 billion.\n\n#### **Operating Income**\n\nConsolidated operating profit improved by 4.4 percent from last year to a record ¥861.2 billion. This resulted in an operating profit margin of 10.0 percent. Operating profit was affected by the following factors:\n\n- The effect of foreign exchange rates produced a ¥78 billion negative impact for the full year. The depreciation of the U.S. dollar against the yen resulted in a negative impact of ¥74 billion, with an additional ¥13 billion from other currencies. The appreciation of the euro resulted in a positive impact of ¥9 billion.\n- The change in the scope of consolidation produced a positive impact of ¥31 billion. This was primarily from the consolidation of Dongfeng Motor and Yulon Nissan Motor.\n- The impact of the higher volume and mix contributed ¥284 billion. This was mainly driven by an increase in U.S. sales volume.\n- Selling expenses increased by ¥114 billion, also mainly due to the increase of sales in the U.S.\n- The improvement in purchasing costs amounted to ¥131 billion.\n- Product enrichment and the cost of regulations had a negative impact of ¥92 billion.\n- An additional ¥44 billion was allocated to R&D to reinforce product and technology development.\n- Cost reductions from manufacturing efficiencies were offset by costs associated with expanding the Canton plant's capacity, which resulted in a ¥15 billion increase in manufacturing and logistics expenses.\n- Warranty costs increased by ¥41 billion, partly due to greater volume.\n- General, administrative and other expenses increased by ¥25.7 billion.\n\nBy region, operating profits in Japan came to ¥341.1 billion, a decrease of 3.2 percent compared to last year. This was mainly due to unfavorable exchange rate fluctuations and an increase in R&D expenses, which reached a record level.\n\nDue to higher volumes, profitability in the U.S. 
and Canada increased 7.9 percent from last year and totaled ¥379.7 billion.\n\nOperating profit in Europe was ¥56 billion, an increase of 13.8 percent compared to last year, owing to a better mix and higher contributions from Russia.\n\nIn General Overseas Markets, including Mexico, operating profits came to ¥84.8 billion, an increase of 28.5 percent compared to last year. This was primarily due to the consolidation of Dongfeng Motor and Yulon Nissan Motor. Inter-regional eliminations were negative ¥0.4 billion.", - "page_start": 13, - "page_end": 13, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Facility lease revenue contributed $5.5 million to wireline revenues, a decrease of $0.2 million or 3.5%. The decrease was primarily the result of the prolonged decline of lease rates associated with competitive pricing pressures and the economic downturn in the telecommunications industry. During 2002 the Company completed a second, diverse fiber route to its existing interconnection point in the Dulles airport area of Northern Virginia. This fiber route provides increased reliability for customers in the event of fiber cuts or breaks, and extends the availability of the Company's fiber network to additional market locations but to date has not added additional revenue to the Company's operation.\n\nBilling and collection services and other revenues contributed $0.4 million to wireline revenues, which was the same as 2002 results. Revenues from this service had declined in recent years, with interexchange carriers now issuing a greater proportion of their bills directly to their customers.\n\nWireline revenues from cable television services were $4.4 million, an increase of $0.1 million or 1.7%. The number of subscribers and service plan prices remained relatively constant during 2003.\n\nOther revenues, primarily consisting of Internet and 511Virginia service revenues were $5.8 million in 2003, an increase of $0.7 million or 13.5%. 
The Company had 17,420 dial-up Internet subscribers at December 31, 2003, compared to 18,050 at the end of the previous year. During 2003, the Company's DSL high-speed Internet access subscriber count increased to 1,298 from 646. Total Internet service revenue was $4.5 million, an increase of $0.3 million or 10.7%. The 511Virginia contract with the Virginia Department of Transportation contributed $1.3 million to other revenues, an increase of $0.4 million or 41.3%. Telecommunications equipment sales, services and lease revenues were $1.1 million, which reflects a $0.1 million decrease from 2002 results.\n\nTotal operating expenses were $87.2 million, an increase of $3.6 million or 4.3%. The primary driver in the increase in operating expenses is continued growth in the PCS operation somewhat offset by a significant decline in bad debt expense compared to 2002.\n\nLate in 2003, the Company made an employee benefits policy change, which eliminated the requirement for the Company to accrue a vacation liability in advance of the year in which the benefit was used. The result of this change was a reduction of benefit expense of $0.5 million for the year compared to 2002. Benefit expenses impact all operating departments based on the amount of direct labor charged to the department. The change has a one-time impact on the financial statements of the Company. The benefits policy now provides that employees earn and use their paid time off in the same period. In the future, under this policy, unused hours can be banked but only used for extended illness, not carried over for use as vacation.\n\nCost of goods and services was $10.9 million, an increase of $0.4 million or 4.2%. The PCS cost of goods sold was $8.5 million, an increase of $0.2 million or 2.3%. This change is due primarily to higher volumes of handsets sold through Company owned stores and PCS handset subsidies paid to third-party retailers. 
In 2003, the Company recorded approximately $1.8 million in handset costs related to existing subscribers upgrading their handsets. Prior to 2003, the Company did not track the specific costs related to subsidizing new handsets to existing customers. The cost of handset up-grades sold to existing customers is expected to increase as the customer base matures and handset manufacturers introduce new technologies in new handsets. The cable television programming (cost of service) expense was $1.6 million, an increase of $0.2 million or 16.3%. The Company has seen continuing upward pressure on the cost of cable TV programming by cable TV program providers.\n\nNetwork operating costs were $33.6 million, an increase of $1.1 million or 3.4%. The largest item in network operating costs is travel expense. These costs made up 31.8% and 32.9% of the total network and other costs in 2003 and 2002, respectively. Travel expense is the cost of minutes used by the Company's PCS subscribers on Sprint or other Sprint Affiliates' networks. Travel expense in 2003 was $10.8 million, an increase of $0.1 million due to a significant increase in travel minutes in 2003 which was offset by the impact of the rate decline. The travel rate declined from $0.10 per minute in 2002 to $0.058 per minute in 2003. Our PCS customers increased their average monthly travel minutes by 22% compared to 2002. In 2002, the average customer's travel usage was 130 minutes per month and in 2003 that average travel usage increased to 159 minutes per month.\n\nNetwork infrastructure maintenance costs were $4.9 million or 14.6% of total network operating costs, a decrease of $0.2 million from 2002. Rent for towers, tower sites, and buildings increased $0.9 million or 27.3% to $4.2 million. Lease escalators plus the increase in the number of sites leased contributed to the increase. 
Line costs in 2003 were $9.8 million or 29.1% of the network operating costs, an increase of $0.1 million.\n\nDepreciation and amortization expense was $16.6 million, an increase of $2.1 million or 14.8%. The PCS operation had depreciation expense of $10.2 million, an increase of $1.6 million or 18.9%. The 16 additional PCS base stations placed in service during 2003 resulted in higher depreciation expense for the year. In the telephone operation, depreciation", - "page_start": 48, - "page_end": 48, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "increased $0.5 million or 12.7%, due to new assets deployed in the operation. There was no amortization of goodwill in 2003 or 2002, compared to goodwill amortization of $360 thousand expensed in 2001, due to the required accounting change.\n\nSelling, general and administrative expenses were $26.0 million, down $0.1 million or 0.4%. Customer support costs were $8.7 million, an increase of $0.9 million or 11.4%. The growth in Sprint wireless subscribers is primarily responsible for this change. Advertising expense was $4.6 million, an increase of $0.3 million or 6.4%. The change is primarily due to increased marketing efforts in support of the PCS operations in both the Quad State and Central Penn markets. PCS sales staff expenses were $2.8 million, an increase of $0.1 million or 1.5% compared to 2002. Other sales staff expenses increased $0.3 million to $1.3 million as the Company worked to expand its other services in areas outside its historically defined service area. Bad debt expense decreased $2.6 million or 58.3%.\n\nAdministrative expenses increased $1.0 million or 17.1%. This increase is a result of increased professional fees, insurance and pension costs. During 2003, the Company added several positions to expand the management team to support the Company's growing operations.\n\nBad debt expense decreased $2.6 million to $1.8 million or 58.3%. 
This decrease was due to more restrictive credit terms for new PCS subscribers (limiting the high credit risk customers who obtained service), lower churn in the PCS operation and improvement in the interexchange carrier segment of the business. This expense is net of normal recoveries and includes a recovery of $0.2 million for an interexchange carrier settlement the Company received in 2003 which was written off in 2002.\n\nOperating income grew to $18.6 million, an increase of $9.3 million or 100%. Revenue growth, primarily in the PCS operation in addition to the reduced bad debt expenses, adjustments of management estimates, and the settlement of disputed items with Sprint, all contributed to the operating income improvements. The Company's operating margin was 17.6%, compared to 10.0% in 2002.\n\nOther income (expense) is comprised of non-operating income and expenses, interest expense and gain or loss on investments. Collectively, the net impact of these items to pre-tax income was an expense of $3.6 million for 2003, compared to expense of $14.3 million from 2002. The 2002 results were primarily the results of the previously disclosed $9.0 million loss recorded on the sale of the VeriSign stock.\n\nInterest expense was $3.5 million, a decrease of $0.7 million or 16.3%. The Company's average debt outstanding decreased approximately $4.8 million. Long-term debt (inclusive of current maturities), was $43.3 million at year-end 2003, versus $52.0 million at year-end 2002. The Company did not borrow any money on its revolving facilities in 2003.\n\nNet losses on investments were $0.4 million, compared to a loss of $10.1 million from 2002. Results in 2002 include the sale of the VeriSign, Inc. stock for a loss of $9.0 million. 
See Note 3 to the consolidated financial statements.\n\nNon-operating income was a gain of $0.4 million, an increase of $0.5 million, due to an increase in patronage equity earned from CoBank, the Company's primary lender, and due to interest income from the proceeds on the sale of the Virginia 10 RSA Limited partnership, offset by losses recorded for the Company's portfolio of investments.\n\nThe Company provided for income taxes of $5.3 million in 2003, which is an effective tax rate of 35.2% due to the effect of state tax apportionment rules and reduction in the liability for tax exposures. On a normalized basis the Company would have recorded taxes at an effective tax rate of approximately 39%. Last year's effective tax rate was 42.2% due to the impact of net operating loss carry forwards generated in several states with higher tax rates. The Company currently operates in four states. Due to apportionment rules and geographic operations of subsidiaries where the Company's profits and losses arise, the Company is generating profits in states with lower tax rates, while generating losses in states with higher tax rates. The Company cautions readers that the current effective tax rate may not be the same rate at which tax benefits or tax expenses are recorded in the future. The Company's state apportionments, profits and losses and state tax rates may change, therefore changing the effective rate at which taxes are provided for or at which tax benefits accrue. In the near term, under existing operating results and current tax rates, the Company anticipates a normalized effective tax rate will be approximately 39%.\n\nNet income from continuing operations was $9.8 million, an increase of $12.7 million from 2002. 
The results are primarily made up of the improvement in the PCS operation and the one-time impact of the losses on the sale of VeriSign stock in 2002.", - "page_start": 49, - "page_end": 49, - "source_file": "NASDAQ_SHEN_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_ATRI_2003.pdf", - "query": "What operations were discontinued in 1997 by Atrion Corp ?", - "target_page": 17, - "target_passage": "During 1997, the Company sold all of its natural gas operations. ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Atrion Corporation One Allentown Parkway Allen, Texas 75002 972 • 390 • 9800 www.atrioncorp.com", - "page_start": 31, - "page_end": 31, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "# CORPORATE INFORMATION\n\n# **Corporate Office:**\n\nAtrion Corporation One Allentown Parkway Allen, Texas 75002 (972) 390-9800 www.atrioncorp.com\n\n## **Registrar and Transfer Agent**\n\nAmerican Stock Transfer and Trust Company 59 Maiden Lane New York, New York 10007\n\n## **Form 10-K**\n\nA copy of the Company's 2003 Annual Report on Form 10-K, as filed with the Securities and Exchange Commission, may be obtained by any stockholder without charge by written request to:\n\n> *Corporate Secretary Atrion Corporation One Allentown Parkway Allen, Texas 75002*\n\n## **Stock Information**\n\nThe Company's common stock is traded on The Nasdaq Stock Market (Symbol: ATRI). As of March 8, 2004, there were approximately 1,200 stockholders, including beneficial owners holding shares in nominee or \"street\" name. 
The table below sets forth the high and low closing prices on The Nasdaq Stock Market and the quarterly dividends per share declared by the Company for each quarter of 2002 and 2003.\n\n| 2002 Quarter Ended | | High | | Low | | Dividends |\n| --- | --- | --- | --- | --- | --- | --- |\n| March 31 | $ | 38.14 | $ | 26.91 | $ | — |\n| June 30 | | 32.51 | | 26.82 | | — |\n| September 30 | | 28.09 | | 18.31 | | — |\n| December 31 | | 23.90 | | 17.31 | | — |\n| 2003 Quarter Ended | | High | | Low | | Dividends |\n| March 31 | $ | 22.85 | $ | 17.95 | $ | — |\n| June 30 | | 30.80 | | 22.75 | | — |\n| September 30 | | 45.20 | | 26.80 | | .12 |\n| December 31 | | 50.00 | | 40.00 | | .12 |\n\nThe Company paid no cash dividends on its common stock during 2002. In the third quarter of 2003 the Company began paying quarterly cash dividends and presently plans to pay quarterly cash dividends in the future.", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "and operate MGM Grand Australia. This transaction closed in July 2004 with net proceeds to the Company of $136 million.\n\nThe results of the Golden Nugget Subsidiaries, Online and MGM Grand Australia are classified as discontinued operations in the accompanying consolidated statements of income for all periods presented. Net revenues of discontinued operations were $45 million, $231 million and $222 million, respectively, for the years ended December 31, 2004, 2003 and 2002. Included in income from discontinued operations is an allocation of interest expense based on the ratio of the net assets of the discontinued operations to the total consolidated net assets and debt of the Company. Interest allocated to discontinued operations was $2 million, $9 million and $9 million for the years ended December 31, 2004, 2003 and 2002, respectively. 
Included in discontinued operations for the year ended December 31, 2003 is a loss on disposal of Online of $7 million relating primarily to unrecoverable costs of computer hardware and software. Included in the tax benefit from discontinued operations for the year ended December 31, 2003 is $2 million of previously unrecognized tax benefits relating to prior year operating losses of Online. Included in discontinued operations for the year ended December 31, 2004 is a gain on the sale of the Golden Nugget Subsidiaries of $8 million and a gain on sale of the MGM Grand Australia Subsidiaries of $74 million.\n\nThe following table summarizes the assets and liabilities of discontinued operations (the Golden Nugget Subsidiaries and Online) as of December 31, 2003, included as assets and liabilities held for sale in the accompanying consolidated balance sheet:\n\n| At December 31, 2003 (In thousands) | |\n| --- | --- |\n| Cash $ | 15,230 |\n| Accounts receivable, net | 6,024 |\n| Inventories | 4,321 |\n| Prepaid expenses and other | 5,174 |\n| Total current assets | 30,749 |\n| Property and equipment, net | 185,516 |\n| Other assets, net | 9,817 |\n| Total assets | 226,082 |\n| Accounts payable | 2,180 |\n| Other current liabilities | 20,885 |\n| Total current liabilities | 23,065 |\n| Long-term debt | 391 |\n| Total liabilities | 23,456 |\n| Net assets $ 202,626 | |", - "page_start": 62, - "page_end": 62, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "To the Stockholders and the Board of Directors of Atrion Corporation:\n\nWe have audited the accompanying consolidated balance sheets of Atrion Corporation (a Delaware corporation) and Subsidiaries as of December 31, 2003 and 2002, and the related consolidated statements of income, changes in stockholders' equity and cash flows for the years then ended. These financial statements are the responsibility of the Company's management. 
Our responsibility is to express an opinion on these financial statements based on our audit. The financial statements of Atrion Corporation and Subsidiaries as of and for the year in the period ended December 31, 2001, were audited by other auditors who have ceased operations. Those auditors expressed an unqualified opinion on those financial statements in their report dated February 25, 2002.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States of America. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management as well as evaluating the overall financial statement presentation. We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the financial statements referred to above present fairly, in all material respects, the consolidated financial position of Atrion Corporation and Subsidiaries as of December 31, 2003 and 2002, and the consolidated results of their operations and their consolidated cash flows for the years then ended in conformity with accounting principles generally accepted in the United States of America.\n\nAs discussed above, the financial statements of Atrion Corporation and Subsidiaries as of December 31, 2001, and for the year then ended were audited by other auditors who have ceased operations. As described in Note 2, these financial statements have been revised to include the transitional disclosures required by Statement of Financial Accounting Standards No. 142, Goodwill and Other Intangible Assets, which was adopted by the Company as of January 1, 2002. 
Our audit procedures with respect to the disclosures in Note 2 with respect to 2001 included agreeing the previously reported net income to the previously issued financial statements and the adjustments to reported net income representing amortization expense (including any related tax effects) recognized in those periods related to goodwill to the Company's underlying records obtained from management. We also tested the mathematical accuracy of the reconciliation of adjusted net income to reported net income, and the related income-per-share amounts. In our opinion, the disclosures for 2001 in Note 2 are appropriate. However, we were not engaged to audit, review, or apply any procedures to the 2001 financial statements of the Company other than with respect to such disclosures and, accordingly, we do not express an opinion or any other form of assurance on the 2001 financial statements taken as a whole.\n\nGrant Thornton LLP Dallas, Texas February 13, 2004\n\n*This is a copy of the audit report previously issued by Arthur Andersen LLP in connection with Atrion Corporation and Subsidiaries Annual Report for the year ended December 31, 2001. This audit report has not been reissued by Arthur Andersen LLP in connection with this Annual Report. The consolidated balance sheets as of December 31, 2001 and 2000 and the consolidated statements of income and cash flows for the years ended December 31, 2000 and 1999 referred to in this report have not been included in the accompanying financial statements.*\n\nTo the Stockholders and the Board of Directors of Atrion Corporation:\n\nWe have audited the accompanying consolidated balance sheets of Atrion Corporation (a Delaware corporation) and subsidiaries as of December 31, 2001 and 2000 and the related consolidated statements of income and cash flows for each of the three years in the period ended December 31, 2001. These financial statements are the responsibility of the Company's management. 
Our responsibility is to express an opinion on these financial statements based on our audits.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management as well as evaluating the overall financial statement presentation. We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the financial statements referred to above present fairly, in all material respects, the financial position of Atrion Corporation and subsidiaries as of December 31, 2001 and 2000 and the results of their operations and their cash flows for each of the three years in the period ended December 31, 2001 in conformity with accounting principles generally accepted in the United States.\n\nArthur Andersen LLP Atlanta, Georgia February 25, 2002", - "page_start": 24, - "page_end": 24, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "## 1 SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES\n\nAtrion Corporation designs, develops, manufactures and markets products primarily for the medical and health care industry. The Company markets its products throughout the United States and internationally. The Company's customers include hospitals, distributors, and other manufacturers. As of December 31, 2003, the principal subsidiaries of the Company through which it conducted its operations were Atrion Medical Products, Inc. (\"Atrion Medical Products\"), Halkey-Roberts Corporation (\"Halkey-Roberts\") and Quest Medical, Inc. 
(\"Quest Medical\").\n\n### **PRINCIPLES OF CONSOLIDATION**\n\nThe consolidated financial statements include the accounts of Atrion Corporation and its subsidiaries (the \"Company\"). All significant intercompany transactions and balances have been eliminated in consolidation.\n\n### **FAIR VALUE**\n\nThe carrying amounts of cash and cash equivalents, accounts receivable and accounts payable approximate fair value due to the short-term nature of these items. The carrying amount of debt approximates fair value as the interest rate is tied to market rates.\n\n### **ESTIMATES**\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the United States of America requires management to make estimates and assumptions that affect the reported amounts of assets and liabilities and disclosures of contingent assets and liabilities at the dates of the financial statements and the reported amount of revenues and expenses during the reporting periods. Actual results could differ from those estimates.\n\n### **FINANCIAL PRESENTATION**\n\nCertain prior-year amounts have been reclassified to conform with the current-year presentation.\n\n### **CASH AND CASH EQUIVALENTS**\n\nCash equivalents are securities with original maturities of 90 days or less.\n\n### **TRADE RECEIVABLES**\n\nTrade accounts receivable are recorded at the original sales price to the customer. The Company maintains an allowance for doubtful accounts to reflect estimated losses resulting from the inability of customers to make required payments. On an ongoing basis, the collectibility of accounts receivable is assessed, based upon historical collection trends, current economic factors, and the assessment of the collectibility of specific accounts. 
The Company evaluates the collectibility of specific accounts using a combination of factors, including the age of the outstanding balances, evaluation of customers' current and past financial condition, recent payment history, current economic environment, and discussions with appropriate Company personnel and with the customers directly. Accounts are written off when it is determined the receivable will not be collected.\n\n### **INVENTORIES**\n\nInventories are stated at the lower of cost or market. Cost is determined by using the first-in, first-out method. The following table details the major components of inventory (in thousands):\n\n| | | DECEMBER 31, | |\n| --- | --- | --- | --- |\n| | 2003 | | 2002 |\n| Raw materials | $ 5,641 | $ | 6,082 |\n| Finished goods | 4,044 | | 2,818 |\n| Work in process | 1,629 | | 1,411 |\n| Total inventories | $ 11,314 | $ | 10,311 |\n\n### **INCOME TAXES**\n\nThe Company utilizes the asset and liability approach to financial accounting and reporting for income taxes. Deferred income tax assets and liabilities are computed annually for differences between the financial reporting basis and the tax basis of the Company's other assets and liabilities. These amounts are based on enacted tax laws and rates applicable to the periods in which the differences are expected to affect taxable income.",
            "page_start": 13,
            "page_end": 13,
            "source_file": "NASDAQ_ATRI_2003.pdf"
        },
        {
            "text": "on the Company's ATM network. In addition, the Company continues to invest in the on-going development of products that were recently introduced to the market. The Company's research and development costs incurred for computer products to be sold, leased or otherwise marketed increased to $6.7 million for the year ended December 31, 2000 from $3.2 million for the year ended December 31, 1999. 
Of this total figure, $1.0 million and $322,000 were capitalized, as at December 31, 2000 and 1999, respectively, in conjunction with the Company's accounting policy requiring the capitalization of development costs on a product by product basis once technological feasibility is established. Technological feasibility of computer software products is established when the Company has completed all planning, designing, coding, and testing activities that are necessary to establish that the product can be produced to meet its design specifications including functions, features, and technical performance requirements.\n\n**Operating Loss** The Software Solutions Segment incurred an operating loss of $21.5 million for the year ended December 31, 2000 and $7.1 million for the year ended December 31, 1999 as a result of the factors discussed above.\n\n#### Corporate Services Segment\n\n**Operating Expenses** Operating expenses for the Corporate Services Segment increased to $7.9 million for the year ended December 31, 2000 from $6.8 million for the year ended December 31, 1999. 
The components of corporate services operating costs for the years ended December 31, 2000 and 1999 were:\n\n| (in thousands) | | Years ending December 31, | | |\n| --- | --- | --- | --- | --- |\n| | 2000 | | 1999 | |\n| Salaries and benefits | $ | 3,813 | $ | 3,335 |\n| Selling, general and administrative | | 3,841 | | 3,270 |\n| Depreciation and amortization | | 208 | | 145 |\n| Total direct operating expenses | $ | 7,862 | $ | 6,750 |\n\nThe Company's expansion of its network infrastructure, and increases in corporate and administrative capabilities are the primary reasons for these increased expenditures.\n\n#### **Non-Operating Results for the Years Ended December 31, 2000 and 1999**\n\n**Interest Income** Interest income decreased to $1.1 million for the year ended December 31, 2000 from $2.0 million for the year ended December 31, 1999 and from $2.5 million for the year ended December 31, 1998. The decrease is the result of the decrease in investment securities and cash as a result of negative cash flow from operations and capital expenditures.\n\n**Interest Expense** Interest expense decreased to $10.8 million for the year ended December 31, 2000 from $10.9 million for the year ended December 31, 1999 and increased from $7.8 million for the year ended December 31, 1998. The decrease from 1999 to 2000 is due to exchange rate differences as the majority of the debt is denominated in Deutsche Mark. The increase from 1998 to 1999 is the result of accretion of the Company's Notes Payable for a full year in 1999 in comparison to 6 months' accretion in 1998.\n\n**Foreign Exchange Gain/Loss** The Company had a net foreign exchange loss of $3.2 million for the year ended December 31, 2000, as compared to $2.1 million for the year ended December 31, 1999, and $1.9 million for the year ended December 31, 1998. 
Exchange gains and losses that result from re-measurement of certain Company assets and liabilities are recorded in determining net loss. A portion of the assets and liabilities of the Company are denominated in Euros, including capital lease obligations, notes payable (including the Notes issued in the Company's public bond offering), cash and cash equivalents, investments, and forward foreign exchange contracts. It is the Company's policy to attempt to match local currency receivables and payables. The foreign currency denominated assets and liabilities give rise to foreign exchange gains and losses as a result of U.S. dollar to local currency exchange movements.\n\n**Extraordinary Gain** In 1999 the Company recorded an extraordinary gain of $2.8 million (net of income taxes of $0) following its repurchase of a portion of its Senior Discount Notes. The gain represents the difference between the allocated carrying value of the face value of the debt repurchased of $8.1 million less the consideration paid of $5.0 million, offset by the write-off of allocated unamortized deferred financing costs of $300,000. The Company has not retired the bonds repurchased.\n\nIn addition, the Company repurchased 97,023 warrants that were attached to the notes payable. Accordingly, approximately $176,000 was allocated to the carrying value of the warrants which reduced additional paid-in capital.\n\nIn 1998 the Company recorded an extraordinary gain of $2.9 million (net of income taxes of $1.5 million), following its repurchase of a portion of its Senior Discount Notes. The gain represents the difference between the allocated carrying value of the face value of the debt repurchased of $10.2 million less the consideration paid of $5.5 million, offset by the write-off of allocated unamortized deferred financing costs of $400,000. 
The Company has not retired the bonds repurchased.\n\n**Net Loss** The Company's net loss increased to $49.6 million for the year ended December 31, 2000, as compared to $30.9 million for the year ended December 31, 1999 and $28.4 million for the year ended December 31, 1998, as a result of the factors discussed above.\n\n#### LIQUIDITY AND CAPITAL RESOURCES\n\nSince its inception, the Company has sustained negative cash flows from operations and has financed its operations and capital expenditures primarily through the proceeds from the 1998 issue of Deutsche Mark denominated notes payable, the Company's 1997 public equity offering, equipment lease financing and private placements of equity securities. The net proceeds of such transactions, together with revenues from operations and interest income have been used to fund aggregate net losses of approximately $123.8 million, investments in property, plant and equipment of approximately $52.8 million and acquisitions of $24.6 million.",
            "page_start": 20,
            "page_end": 20,
            "source_file": "NASDAQ_EEFT_2000.pdf"
        },
        {
            "text": "PRINCIPAL ACTIVITIES\n\nMermaid's principal activities during the course of the Financial Year were:\n\n- Operating crewed vessel charters;\n- Vessel manning, management and logistics;\n- Operating supply base facilities; and\n- Equipment hire. 
Other than detailed in the Chairman's Report set out at pages 1 and 2 of this report and/or in the Operations Review set out on pages 3 to 9 of this report, (together the "Chairman's and Operations Reviews"), there have been no significant changes to these activities during the Financial Year.\n\nDIVIDEND\n\nIn respect of the financial year ended 30 June 1999, as detailed in the directors' report for that financial year, a final dividend of 1.25 cents per share, franked to 100 per cent at 36 per cent corporate income tax rate, was paid to the holders of fully paid ordinary shares on 1 November 1999.\n\nIn respect of the financial year ended 30 June 2000 the directors have not recommended the payment of a dividend.\n\nREVIEW OF OPERATIONS\n\nA review of operations for the Financial Year and the results of those operations are set out in the Chairman's and Operations Reviews.\n\nSIGNIFICANT CHANGES IN THE STATE OF AFFAIRS\n\nThe Chairman's and Operations Reviews set out the matters which have had a significant effect on the state of affairs of Mermaid. Other than those matters there were no significant changes in the state of affairs of Mermaid during the Financial Year.\n\nOn 25 August 2000 the Company announced that it had reached two agreements for the placement of a total of 16,666,666 ordinary fully paid shares in the Company at an issue price of 30 cents each (Shares). 
SUBSEQUENT EVENTS\n\nThe first agreement was with Mr Mark Bradley, who agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, a further 3,441,666 within 7 days of that meeting.\n\nOn Mr Bradley being appointed a Director of the Company, in order to comply with the", - "page_start": 32, - "page_end": 32, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "Dollar and share amounts in millions except per share, per option and per unit amounts\n\n## **Recent Accounting Pronouncements**\n\nIn April 2014, the Financial Accounting Standards Board (\"FASB\") issued Accounting Standards Update (\"ASU\") No. 2014-08, *Reporting Discontinued Operations and Disclosures of Disposals of Components of an Entity*. This ASU raises the threshold for a disposal to qualify as discontinued operations and requires new disclosures for individually material disposal transactions that do not meet the definition of a discontinued operation. Under the new guidance, companies report discontinued operations when they have a disposal that represents a strategic shift that has or will have a major impact on operations or financial results. We do not expect the provisions of this ASU, which are effective for us beginning in the first quarter of 2015, to have a material impact on our consolidated financial statements.\n\nIn May 2014, the FASB issued ASU No. 2014-09, *Revenue from Contracts with Customers*. The core principle of this ASU is that companies should recognize revenue when the transfer of promised goods or services to customers occurs in an amount that reflects what the company expects to receive. It requires additional disclosures to describe the nature, amount, timing, and uncertainty of revenue and cash flows from contracts with customers. This ASU is effective for us beginning with the first quarter of 2017. 
We are currently evaluating the impact the provisions of this ASU would have on our consolidated financial statements.\n\nIn June 2014, the FASB issued ASU No. 2014-12, *Compensation - Stock Compensation*. This ASU provides guidance on how to account for share-based payments for performance targets that could be achieved after an employee completes the requisite service period. Under the new guidance, a performance target that affects vesting and could be achieved after the requisite service period is treated as a performance condition. As such, the performance target is not reflected in estimating the grant-date fair value of the award. This ASU is effective for us beginning with the first quarter of 2016. We do not expect the provisions of this ASU to have a material impact on our consolidated financial statements.\n\n## **NOTE 2: TRUNK CLUB ACQUISITION**\n\nOn August 22, 2014, we acquired 100% of the outstanding equity of Trunk Club, a personalized clothing service for men. We believe the acquisition enables us to provide a high-touch personalized shopping experience combined with the convenience of an online platform. This represents a natural extension of our core business, aligns with our strategic priorities around a relevant customer experience and accelerates our entry into this fast-growing market.\n\nThe following bullets summarize the accounting activity related to Trunk Club and provide reference to relevant disclosures throughout our 10-K:\n\n- Consideration The purchase price fair value of $357 reflects the value of our stock as of the acquisition date. Purchase price consideration is discussed in further detail below.\n- Issuance of Nordstrom Common Stock 3.7 shares of Nordstrom common stock were issued in 2014 as part of the acquisition purchase price. Additional shares will be issued, either to be earned as future compensation or associated with indemnity holdback releases. 
Stock issued is discussed in further detail below and also reflected in our Consolidated Statements of Shareholders' Equity.\n- Net Assets Acquired Of the $357 purchase price fair value, $46 is compensation expense and subject to future performance and vesting. The remaining net purchase price consideration of $311 is allocated to the tangible and intangible assets acquired and liabilities assumed. The net asset allocation is discussed in further detail below.\n- Issuance of Nordstrom Stock Awards Trunk Club employees received Nordstrom stock awards in exchange for previously held Trunk Club awards and stock. Stock awards are discussed in further detail within Note 13: Stock-Based Compensation.\n- Long-term Incentive Plan A long-term incentive plan (the \"Value Creation Plan\") was created to incentivize certain Trunk Club employees to increase the value of the Trunk Club business. The accounting for this plan is discussed in further detail within Note 13: Stock-Based Compensation.\n\nTrunk Club's financial results have been included in our consolidated financial statements from the date of acquisition forward and were not material to our consolidated results for the fiscal year ended January 31, 2015. We have not presented pro forma results of operations for any periods prior to the acquisition, as Trunk Club's results of operations were not material to our consolidated results.", - "page_start": 57, - "page_end": 57, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "Our financial performance earned recognition from *Investors Business Daily*, which ranked Atrion sixth on its list of Market-Leading Medical Stocks in November 2003. During the year, our stock price more than doubled, ending the year at $45.44, up from $22.50 at year-end 2002. 
Over the last five years, our stock price has increased by 468 percent.\n\n# **We make products that meet the specific needs of niche markets.**\n\nOne of the principal strengths of our company lies in the diversity of our product lines. Atrion makes medical devices and components for end-users and manufacturers throughout the health care industry, ranging from ophthalmology and cardiovascular products to fluid delivery devices. Our reputation for quality, precision and reliability has helped a number of our products gain the leading market positions in the United States in their respective niches.\n\nIn the ophthalmic sector, Atrion is a leading U.S. manufacturer of soft contact lens disinfection cases. In addition, our LacriCATH® balloon catheter positions us as a market leader with a patented product for the treatment of tear duct blockages.\n\nWe serve the cardiac surgery market as a leading U.S. provider of vacuum relief valves, minimally invasive surgical tapes and check valves. Serving the same market, our MPS® Myocardial Protection System continues to make headway, as hospitals and surgeons increasingly recognize the value of this proprietary technology. The MPS delivers essential fluids and medications to the heart during open-heart surgery, and it is the only system that provides integrated control over temperature, pressure, flow rate and the precise delivery of medications to the heart during surgery. Atrion also is the leading U.S. provider of clamps for IV sets, which are used in many surgical and medical settings.\n\nOur expertise and leadership in valve design and manufacturing extend beyond the health care industry. We are the leading domestic manufacturer of valves and inflation devices used in marine and aviation safety products.\n\nWe support this stable of solidly performing products with two essential programs. One is a highly effective sales and marketing effort that keeps our products moving into the marketplace. 
Our sales team is comprised of professionals who possess clinical knowledge and specific product experience, and also concentrate on building strong relationships with customers and within the industry.\n\nOur other essential program is research and development. We believe it is vital to keep a pipeline of products in various stages of development so that we can take advantage of near- and long-term opportunities in our markets. Understandably, proposed new products for the health care industry must undergo stringent testing and rigorous approval procedures. Often, this means that the process of bringing a new product from the design stage to the marketplace is a long and arduous one. A strong, proactive research and development program ensures that we are committing the resources and time required to successfully stay the course.\n\n# 2003 Revenues by Product Line\n\nFLUID DELIVERY OTHER", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "requirements of the Corporations Law and the ASX Listing Rules, the Company and Mr Bradley agreed to defer the first issue of Shares, making both issues conditional on shareholder approval.\n\nThe second agreement was with Clough Engineering Limited, pursuant to which it agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, 6,775,000 shares, within 7 days of that meeting.\n\nOn 15 June 2000 the Company announced that with effect from 1 July 2000 it acquired a 50% interest in OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Shares in the Company. OIS MOC Joint Venture Pty Ltd owns the goodwill of a successful labour hire company. 
That company is to be renamed Mermaid Labour and Management Limited (MLML).\n\nMLML offers a full labour hire service inclusive of industrial relations consultancy, negotiating agreements and awards and, where appropriate, provides ongoing management of the labour force.\n\nThe effective date is 1 July 2000. The Company will issue 800,000 ordinary fully paid shares in Mermaid Marine Australia Limited.\n\nThere have not been any other matters or circumstances, other than those referred to in the Chairman's and Operations Reviews and/or in the financial statements and notes attached thereto, that have arisen since the end of the Financial Year that have significantly affected, or may significantly affect Mermaid's operations, the results of those operations or its state of affairs in future financial years.\n\nFUTURE DEVELOPMENTS\n\nThe Chairman's and Operations Reviews give indications, in general terms, of likely developments in Mermaid's operations in future financial years and the expected results of those operations.\n\nENVIRONMENTAL REGULATION\n\nThe development of the Company's Dampier and Broome bases is subject to the approval of the Western Australian Environmental Protection Authority.\n\nSHARE OPTIONS\n\nAs at the date of this report the Company had a total of 7,115,000 unissued shares under option as follows:\n\n**30 November 2000 Options**\n\n> As at the date of this report there are outstanding 6,500,000 options to acquire 6,500,000 ordinary shares in the Company at an issue price of 0.75 cents per ordinary share. Each of these options expires on 30 November 2000.",
            "page_start": 33,
            "page_end": 33,
            "source_file": "ASX_MRM_2000.pdf"
        }
    ]
  },
  {
    "references": {
        "source_file": "NASDAQ_ATRI_2003.pdf",
        "query": "What share of Atrion's revenues did its major customer represent in 2003? 
", - "target_page": 21, - "target_passage": "The Company had one major customer which represented approximately $9.1 million (14.4 percent", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Atrion Corporation One Allentown Parkway Allen, Texas 75002 972 • 390 • 9800 www.atrioncorp.com", - "page_start": 31, - "page_end": 31, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "Facility lease revenue contributed $5.5 million to wireline revenues, a decrease of $0.2 million or 3.5%. The decrease was primarily the result of the prolonged decline of lease rates associated with competitive pricing pressures and the economic downturn in the telecommunications industry. During 2002 the Company completed a second, diverse fiber route to its existing interconnection point in the Dulles airport area of Northern Virginia. This fiber route provides increased reliability for customers in the event of fiber cuts or breaks, and extends the availability of the Company's fiber network to additional market locations but to date has not added additional revenue to the Company's operation.\n\nBilling and collection services and other revenues contributed $0.4 million to wireline revenues, which was the same as 2002 results. Revenues from this service had declined in recent years, with interexchange carriers now issuing a greater proportion of their bills directly to their customers.\n\nWireline revenues from cable television services were $4.4 million, an increase of $0.1 million or 1.7%. The number of subscribers and service plan prices remained relatively constant during 2003.\n\nOther revenues, primarily consisting of Internet and 511Virginia service revenues were $5.8 million in 2003, an increase of $0.7 million or 13.5%. The Company had 17,420 dial-up Internet subscribers at December 31, 2003, compared to 18,050 at the end of the previous year. 
During 2003, the Company's DSL high-speed Internet access subscriber count increased to 1,298 from 646. Total Internet service revenue was $4.5 million, an increase of $0.3 million or 10.7%. The 511Virginia contract with the Virginia Department of Transportation contributed $1.3 million to other revenues, an increase of $0.4 million or 41.3%. Telecommunications equipment sales, services and lease revenues were $1.1 million, which reflects a $0.1 million decrease from 2002 results.\n\nTotal operating expenses were $87.2 million, an increase of $3.6 million or 4.3%. The primary driver in the increase in operating expenses is continued growth in the PCS operation somewhat offset by a significant decline in bad debt expense compared to 2002.\n\nLate in 2003, the Company made an employee benefits policy change, which eliminated the requirement for the Company to accrue a vacation liability in advance of the year in which the benefit was used. The result of this change was a reduction of benefit expense of $0.5 million for the year compared to 2002. Benefit expenses impact all operating departments based on the amount of direct labor charged to the department. The change has a one-time impact on the financial statements of the Company. The benefits policy now provides that employees earn and use their paid time off in the same period. In the future, under this policy, unused hours can be banked but only used for extended illness, not carried over for use as vacation.\n\nCost of goods and services was $10.9 million, an increase of $0.4 million or 4.2%. The PCS cost of goods sold was $8.5 million, an increase of $0.2 million or 2.3%. This change is due primarily to higher volumes of handsets sold through Company owned stores and PCS handset subsidies paid to third-party retailers. In 2003, the Company recorded approximately $1.8 million in handset costs related to existing subscribers upgrading their handsets. 
Prior to 2003, the Company did not track the specific costs related to subsidizing new handsets to existing customers. The cost of handset up-grades sold to existing customers is expected to increase as the customer base matures and handset manufacturers introduce new technologies in new handsets. The cable television programming (cost of service) expense was $1.6 million, an increase of $0.2 million or 16.3%. The Company has seen continuing upward pressure on the cost of cable TV programming by cable TV program providers.\n\nNetwork operating costs were $33.6 million, an increase of $1.1 million or 3.4%. The largest item in network operating costs is travel expense. These costs made up 31.8% and 32.9% of the total network and other costs in 2003 and 2002, respectively. Travel expense is the cost of minutes used by the Company's PCS subscribers on Sprint or other Sprint Affiliates' networks. Travel expense in 2003 was $10.8 million, an increase of $0.1 million due to a significant increase in travel minutes in 2003 which was offset by the impact of the rate decline. The travel rate declined from $0.10 per minute in 2002 to $0.058 per minute in 2003. Our PCS customers increased their average monthly travel minutes by 22% compared to 2002. In 2002, the average customer's travel usage was 130 minutes per month and in 2003 that average travel usage increased to 159 minutes per month.\n\nNetwork infrastructure maintenance costs were $4.9 million or 14.6% of total network operating costs, a decrease of $0.2 million from 2002. Rent for towers, tower sites, and buildings increased $0.9 million or 27.3% to $4.2 million. Lease escalators plus the increase in the number of sites leased contributed to the increase. Line costs in 2003 were $9.8 million or 29.1% of the network operating costs, an increase of $0.1 million.\n\nDepreciation and amortization expense was $16.6 million, an increase of $2.1 million or 14.8%. 
The PCS operation had depreciation expense of $10.2 million, an increase of $1.6 million or 18.9%. The 16 additional PCS base stations placed in service during 2003 resulted in higher depreciation expense for the year. In the telephone operation, depreciation", - "page_start": 48, - "page_end": 48, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Income from discontinued operations was $22.4 million after taxes, an increase of $15.0 million or 202%. The income from discontinued operations in 2003 includes the sale of the partnership interest in February 2003 and results from the two months of its operations in 2003.\n\nThe Company adopted FAS 143 \"Accounting for Asset Retirement Obligations.\" effective January 1, 2003, and as a result recorded a charge to earnings for the cumulative effect of this change in accounting of $76 thousand after taxes.\n\nNet income was $32.1 million, an increase of $27.6 million or 610%. The increase is a result of improved operating results in the PCS operations, the 2002 VeriSign stock loss and the sale of the cellular operations.\n\n#### **DISCONTINUED OPERATIONS**\n\nThe Company invested $2.0 million in the Virginia 10 RSA limited partnership in the early 1990's. The partnership's local customer base peaked in early 2000 with nearly 12,000 subscribers, then steadily declined to 6,700 by December 31, 2002. The decline was the result of competition with digital technologies and increased competition from national carriers in the area. As a result of the decline in the subscriber base, and the need for extensive capital expenditures to transform the analog network into a digital cellular network, the Company elected to sell its 66% interest in the partnership to one of the minority partners. The agreement was signed in November 2002, and closing was February 28, 2003. 
The Company's portion of the net income from its operations for 2003, 2002 and 2001 was $1.2 million, $7.4 million and $6.7 million, respectively.\n\n#### **CONTINUING OPERATIONS**\n\n#### **2002 compared to 2001**\n\nTotal revenue was $93.0 million in 2002, an increase of $24.3 million or 35.3%. Total revenues included $57.9 million of wireless revenues, an increase of $21.7 million or 60.2%; wireline revenues of $28.7 million, an increase of $1.3 million or 4.6%; and other revenues of $6.4 million, an increase of $1.2 million or 24.5%.\n\nWithin wireless revenues, the PCS operation contributed $55.5 million, an increase of $21.4 million, or 63.0%. PCS service revenues were $37.4 million, an increase of $18.3 million or 95.7%. The increase in the subscriber base, which totaled 67,842 at December 31, 2002, was an increase of 20,524 or 43% from the prior year end.\n\nPCS travel revenue, which is compensation between Sprint and its PCS Affiliates for use of the other party's network, was $16.5 million, an increase of $2.9 million or 21.3%. Travel revenue is impacted by the geographic size of the Company's network service area, the overall number of Sprint wireless customers, and the travel exchange rate. The rate received on travel was $0.10 per minute in 2002. The rates in 2001 were $0.20 per minute from January 1, 2001 through April 30, 2001; $0.15 per minute from May 1, 2001 through September 30, 2001; and $0.12 per minute from October 1, 2001 through December 31, 2001.\n\nPCS equipment sales were $1.6 million, an increase of $0.3 million or 19.6%. The equipment sales are net of $0.3 million of rebates and discounts given at the time of sale, which became more pronounced during the year to meet industry competition for subscriber additions and subscriber retention.\n\nIn accordance with Sprint's requirements, the Company launched third generation (3G 1X) service in August 2002. 
The impact of 3G 1X-network enhancements on revenues was not significant in 2002.\n\nTower leases added $2.1 million to wireless revenues, an increase of $0.4 million or 24.5%. The increase was the result of other wireless carriers executing additional leases to use space on the Company's portfolio of towers. Of the 82 towers and poles owned by the Company as of December 31, 2002, 46 have tower space leased to other carriers.\n\nWireless revenues from the Company's paging operation were $0.3 million, a decrease of $0.1 million as the local customer base increasingly chose alternative digital wireless services. Paging service subscribers declined by 7.8% in 2002 from 3,190 subscribers to 2,940 subscribers.\n\nWithin wireline revenues, the Telephone operation contributed $22.5 million, an increase of $0.9 million, or 4.0%. Telephone access revenues were $10.9 million, an increase of $1.4 million or 14.8%. The growth in access revenues was driven by a 38.4% increase in access minutes of use on the Company's network and an increased percentage of minutes in the intrastate jurisdiction, where rates are higher than the interstate jurisdiction. On January 1, 2002 the Federal subscriber line charge (SLC) for residential customers increased from $3.50 to $5.00 per month. 
The SLC", - "page_start": 50, - "page_end": 50, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Total income tax expense for continuing operations differs from the amount that would be provided by applying the statutory federal income tax rate to pretax earnings as illustrated below (in thousands):\n\n| | | | | YEAR ENDED DECEMBER 31, | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | | 2003 | | 2002 | | 2001 |\n| Income tax expense at the statutory federal income tax rate | $ | 2,298 | $ | 1,858 | $ | 2,062 |\n| Increase (decrease) resulting from: | | | | | | |\n| State income taxes | | 34 | | 80 | | 220 |\n| Decrease in valuation allowance | | — | | — | | (68) |\n| R&D credit | | (100) | | (164) | | (52) |\n| Foreign sales benefit | | (250) | | (244) | | (352) |\n| Other, net | | (103) | | (127) | | (7) |\n| Total income tax expense | $ | 1,879 | $ | 1,403 | $ | 1,803 |\n\n## STOCKHOLDERS' EQUITY\n\n6\n\n7\n\nThe Board of Directors of the Company has at various times authorized repurchases of Company stock in open-market or negotiated transactions at such times and at such prices as management may from time to time decide. The Company has effected a number of open-market or negotiated transactions to purchase its stock during the past three years. These repurchases totaled 20,200, 26,000 and 10,300 shares during the years 2003, 2002 and 2001, respectively, at per share prices ranging from $14.02 to $42.42. As of December 31, 2003, authorization for the repurchase of 94,000 additional shares remained. The Company purchased 173,614 shares of its common stock at $23.00 per share in April 2003 pursuant to a tender offer. The Company purchased 502,229 shares of its common stock at $34.50 per share in December 2001 pursuant to a tender offer. 
All shares purchased in the tender offers and in the open-market or negotiated transactions became treasury shares upon repurchase by the Company.\n\nIn September 2003, the Company announced that it had adopted a policy for the payment of regular quarterly cash dividends on the Company's common stock. The Company subsequently paid a quarterly cash dividend of $ .12 per common share in both September and December of 2003.\n\nThe Company has a Common Share Purchase Rights Plan, which is intended to protect the interests of stockholders in the event of a hostile attempt to take over the Company. The rights, which are not presently exercisable and do not have any voting powers, represent the right of the Company's stockholders to purchase at a substantial discount, upon the occurrence of certain events, shares of common stock of the Company or of an acquiring company involved in a business combination with the Company. In January 2000, this plan, which was adopted in February 1990, was extended until February 2005.\n\n# INCOME PER SHARE\n\nThe following is the computation for basic and diluted income per share from continuing operations:\n\n| | | | | YEAR ENDED DECEMBER 31, | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| (IN THOUSANDS, EXCEPT PER SHARE AMOUNTS) | | 2003 | | 2002 | | 2001 |\n| Income from continuing operations | $ | 4,892 | $ | 4,065 | $ | 4,262 |\n| Weighted average basic shares outstanding | | 1,711 | | 1,711 | | 2,033 |\n| Add: Effect of dilutive securities (options) | | 128 | | 152 | | 239 |\n| Weighted average diluted shares outstanding | | 1,839 | | 1,863 | | 2,272 |\n| Income per share from continuing operations: | | | | | | |\n| Basic | $ | 2.86 | $ | 2.37 | $ | 2.10 |\n| Diluted | $ | 2.66 | $ | 2.18 | $ | 1.88 |\n\nFor the years ended December 31, 2003, 2002 and 2001, options to purchase approximately 25,250, 40,625 and 7,800 shares of common stock, respectively, were not included in the computation of diluted income per share because 
their effect would have been antidilutive.", - "page_start": 18, - "page_end": 18, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "# CORPORATE INFORMATION\n\n# **Corporate Office:**\n\nAtrion Corporation One Allentown Parkway Allen, Texas 75002 (972) 390-9800 www.atrioncorp.com\n\n## **Registrar and Transfer Agent**\n\nAmerican Stock Transfer and Trust Company 59 Maiden Lane New York, New York 10007\n\n## **Form 10-K**\n\nA copy of the Company's 2003 Annual Report on Form 10-K, as filed with the Securities and Exchange Commission, may be obtained by any stockholder without charge by written request to:\n\n> *Corporate Secretary Atrion Corporation One Allentown Parkway Allen, Texas 75002*\n\n## **Stock Information**\n\nThe Company's common stock is traded on The Nasdaq Stock Market (Symbol: ATRI). As of March 8, 2004, there were approximately 1,200 stockholders, including beneficial owners holding shares in nominee or \"street\" name. The table below sets forth the high and low closing prices on The Nasdaq Stock Market and the quarterly dividends per share declared by the Company for each quarter of 2002 and 2003.\n\n| 2002 Quarter Ended | | High | | Low | | Dividends |\n| --- | --- | --- | --- | --- | --- | --- |\n| March 31 | $ | 38.14 | $ | 26.91 | $ | — |\n| June 30 | | 32.51 | | 26.82 | | — |\n| September 30 | | 28.09 | | 18.31 | | — |\n| December 31 | | 23.90 | | 17.31 | | — |\n| 2003 Quarter Ended | | High | | Low | | Dividends |\n| March 31 | $ | 22.85 | $ | 17.95 | $ | — |\n| June 30 | | 30.80 | | 22.75 | | — |\n| September 30 | | 45.20 | | 26.80 | | .12 |\n| December 31 | | 50.00 | | 40.00 | | .12 |\n\nThe Company paid no cash dividends on its common stock during 2002. 
In the third quarter of 2003 the Company began paying quarterly cash dividends and presently plans to pay quarterly cash dividends in the future.", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "increased again on July 1, 2002 to $6.50, and comparable rate increases also impacted business subscribers. Tied to the SLC rate increases were declines in rates charged to interexchange carriers for interstate minutes of use. The 2002 results reflect a significantly larger increase in network usage, which more than offset the decline in rates.\n\nFacility lease revenue contributed $5.7 million to wireline revenues, a decrease of $0.8 million or 12.6% from 2001. The decrease was primarily the result of declining lease rates associated with competitive pricing pressure, and the economic downturn in the telecommunications industry.\n\nBilling and collection services contributed $0.4 million to wireline revenues, which was the same as 2001 results. Revenues from this service had declined in recent years, with interexchange carriers now issuing a greater proportion of their bills directly to their customers.\n\nWireline revenues from cable television services were $4.3 million, an increase of $0.5 million or 14.5%. In December 2001, the Company increased its basic service charge by $6.00 per month, which produced $0.3 million of the increase in cable television revenue. The remaining $0.2 million was generated by an increased penetration of digital services and increased pay per view sales.\n\nWithin other revenues, Internet and 511Virginia contract revenues from the Virginia Department of Transportation, were $5.1 million in 2002, an increase of $1.2 million or 30.4%. The Company had 18,050 dial-up Internet subscribers at December 31, 2002, compared to 17,423 subscribers at the end of 2001. Total Internet service revenue was $4.2 million, an increase of $0.6 million or 15.7%. 
Services provided under the 511Virginia contract contributed $0.9 million to other revenues, an increase of $0.6 million. Telecommunications equipment sales, services and lease revenues were $1.2 million, a nominal increase over 2001 results.\n\nTotal operating expenses were $83.6 million, an increase of $21.3 million or 34.3%. The continued growth in the PCS operation was principally responsible for the change.\n\nCost of goods and services was $10.5 million, an increase of $3.1 million or 41.8%. The PCS cost of goods sold was $8.3 million, an increase of $2.8 million or 50.2%. This change is due primarily to higher volumes of handsets sold through Company owned stores and PCS handset subsidies paid to third-party retailers. The cable television programming (cost of service) expense was $1.4 million, an increase of $0.1 million or 4.6%. The other cost of goods sold increased $0.3 million, compared to the same period in 2001.\n\nNetwork operating costs were $32.5 million, an increase of $5.8 million or 21.5%. Line and switching costs were $9.7 million, an increase of $2.6 million or 37.4%, due principally to the impact of the expanded PCS network. Travel expense, generated by the Company's PCS subscribers' use of minutes on other providers' portions of the Sprint wireless network, was $10.7 million, an increase of $0.9 million or 8.4%. The increase in customer travel usage more than offset the travel rate explained above in travel revenue. Plant specific costs were $9.6 million, which include the operation, and maintenance of the networks increased $2.3 million or 30.7%. Tower, building, and land rentals, as well as PCS equipment maintenance, were major contributors to the increase in plant specific expenses. Other network costs such as power, network administration, and engineering, were $2.7 million, the same as in 2001.\n\nDepreciation and amortization expense was $14.5 million, an increase of $3.2 million or 28.6%. 
The PCS operation had depreciation expense of $8.6 million, an increase of $3.6 million or 72.7%. The PCS operation added 53 additional base stations during 2002.\n\nSelling, general and administrative expenses were $26.1 million, an increase of $9.3 million or 55.0%. Customer support costs were $7.8 million, an increase of $2.8 million or 55.3%. The growth in Sprint wireless subscribers was the primary driver for this increase. Advertising expense was $4.3 million, an increase of $1.5 million or 55.8%. This change was primarily due to the stepped-up and ongoing marketing efforts to support the PCS operations in the Quad State market and particularly the Central Penn market. PCS sales staff expenses were $2.7 million, an increase of $0.7 million or 32.7%. The increase was principally due to the full year operations of the three retail locations and adding additional sales staff.\n\nThe Company experienced significant bad debt losses in its PCS operations related to the Sprint Clear PaySM program. The program was initially targeted at customers in sub-prime credit classes and did not require a deposit upon activation of service. As a result of default rates that exceeded projections, the Company experienced a substantial increase in bad debt expense, which rose from $1.2 million in 2001 to $4.4 million in 2002. The reinstatement of deposit requirements in April 2002 caused some moderation in bad debt expense by the end of the year. Total PCS bade debt expense for 2002 was $3.7 million of this expense is associated with several large telecommunications customers who filed bankruptcies in 2002. program. sm", - "page_start": 51, - "page_end": 51, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "| | | Consolidated |\n| --- | --- | --- |\n| | 2004 | 2003 |\n| 21. 
Earnings per Share | $million | $million |\n| Earnings used in the calculation of basic earnings per share reconciles to | | |\n| the net profit after tax in the statement of financial performance as follows: | | |\n| Net profit after income tax | 379.9 | 327.0 |\n| Less special dividend on redeemable convertible preference shares | (14.3) | – |\n| Earnings used in the calculation of diluted earnings per share | 365.6 | 327.0 |\n| Less dividends paid on reset convertible preference shares | (23.0) | (23.0) |\n| Earnings used in the calculation of basic earnings per share | 342.6 | 304.0 |\n| | 2004 | 2003 |\n| | | Number of shares |\n\nThe weighted average number of shares used for the purposes of calculating diluted earnings per share reconciles to the number used to calculate basic earnings per\n\nSAN165 WWW Fins 30/3/05 11:55 AM Page 69\n\n| share as follows: | | |\n| --- | --- | --- |\n| Basic earnings per share | 584,924,130 | 583,432,623 |\n| Partly paid shares | 100,722 | 112,876 |\n| Executive share options | 755,897 | 164,841 |\n| Reset convertible preference shares | 39,367,128 | 50,946,143 |\n| Diluted earnings per share | 625,147,877 | 634,656,483 |\n\nPartly paid shares outstanding issued under the Santos Executive Share Plan; options outstanding issued under the Santos Executive Share Option Plan; and reset convertible preference shares have been classified as potential ordinary shares and included in the calculation of diluted earnings per share. 
The number of shares included in the calculation are those assumed to be issued for no consideration, being the difference between the number that would have been issued at the exercise price and the number that would have been issued at the average market price.\n\nDuring the year, 715,000 (2003: 1,250,000) options and 50,000 (2003: 35,750) partly paid shares were converted to ordinary shares.\n\n6,000,000 redeemable convertible preference shares were not dilutive and were excluded from the calculation of diluted earnings per share.", - "page_start": 70, - "page_end": 70, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "Over the years, we have learned that there are certain clear prerequisites to growth. Our path to growth is grounded on these basic fundamentals: Managing our assets wisely. Making products that meet specific market needs. And maintaining our keen focus on productivity and profitability. It is our steady and consistent focus on the fundamentals that has enabled us to build strength and create an environment for growth, through favorable or unfavorable conditions in the market and the economy.\n\n## **We manage our assets and resources carefully.**\n\nOur financial strategy centers on building the strength and stability that will position our company for ongoing growth. We approach the management of our resources with discipline and diligence, striking the balance that allows us to accomplish our objectives: Funding the current needs of the business, maintaining a strong financial foundation, and investing in the resources, technology and assets that will ensure operating efficiency and fuel future growth. 
The soundness of this strategy was reflected once again in our financial results for 2003.\n\n# EBITDA Per Diluted Share From Continuing Operations(a)\n\nFor the fifth consecutive year, Atrion's earnings per diluted share from continuing operations increased by more than 15 percent, rising from $2.18 in 2002 to $2.66 in 2003, a 22 percent improvement. In light of the economic pressures which have challenged virtually every business in recent years, we view five consecutive years of EPS growth—ranging from 16 percent to over 50 percent—as a sign of solid financial strength and a testament to the viability of our strategy. Including a gain from discontinued operations of $ .09 per share, net income totaled $2.75 per diluted share for 2003.\n\nRevenues for 2003 increased five percent to $62.8 million, from $59.5 million in the prior year. Return on equity(a), which provides a good indication of how well we are utilizing investors' dollars, has steadily increased in recent years, from five percent in 1999 to 12 percent in 2003. This compares favorably to the average return on equity for our industry, reported at 10.7 percent by statistical research sources.\n\nThe company's ability to generate strong cash flow continued to flourish in 2003. This is a key strength for our company, as it enables us to pursue a number of value-creating initiatives.\n\n- We initiated the payment of quarterly dividends on the company's common stock in September 2003. Recent changes in tax laws make this an efficient avenue for providing a return to our shareholders and, with continuing growth in earnings and cash flow, we plan to increase the dividend periodically.\n- We repurchased 193,814 shares of our common stock in 2003. 
Over the last five years, we have repurchased nearly two million shares of our stock, a strategy we regard as a wise investment for our company and our stockholders.\n- We reduced debt by $6 million, from $10.3 million at the end of 2002 to $4.3 million at year-end 2003.\n\n*(a) This is a non-GAAP financial measure which is defined and reconciled to GAAP on page 7 of this report.*", - "page_start": 3, - "page_end": 3, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "### M A N A G E M E N T ' S D I S C U S S I O N A N D A N A L Y S I S\n\nThe following discussion of the Company's historical results of operations and of its liquidity and capital resources should be read in conjunction with the Consolidated Financial Statements of the Company and related notes.\n\n#### **Overview**\n\nThe Company has two reportable core operating segments: office furniture and hearth products. The Company is the second largest office furniture manufacturer in the United States and the nation's leading manufacturer and marketer of gas- and wood-burning fireplaces.\n\nFrom 2000 to 2003, the office furniture industry experienced an unprecedented three-year decline due to the challenging economic environment. In 2003, this decline negatively impacted the Company's office furniture segment. In contrast, the housing market was at record high levels during 2003, which positively impacted the Company's hearth segment. The Company outperformed its peers in both segments in which it competes. The Company gained market share by providing strong brands, innovative products and services, and greater value to its end-users. Fiscal 2003 also included an extra week of activity due to the Company's 52/53-week fiscal year.\n\nNet sales were $1.8 billion in 2003, as compared to $1.7 billion in 2002. The increase in net sales reflects the 9% increase in the hearth segment and the additional week of business activity. 
In 2003 and 2002, the Company recorded restructuring charges and accelerated depreciation related to the closure and consolidation of office furniture facilities totaling $15.2 million and $3.0 million, respectively. Gross margins increased to 36.4% in 2003 from 35.4% in 2002 due to benefits from restructuring initiatives and its rapid continuous improvement program, new products, and increased price realization. The Company also invested aggressively in brand building and selling initiatives in 2003. Net income was $98.1 million or $1.68 per diluted share in 2003, as compared to $91.4 million or $1.55 per diluted share in 2002.\n\nThe Company generated $141.3 million in cash flow from operating activities and increased its cash position, including shortterm investments, by $48.6 million to $204.2 million. The Company paid dividends of $30.3 million and repurchased $21.5 million of its common stock, while investing $35.7 million in net capital expenditures and repaying $20.2 million of debt.\n\n#### **Critical Accounting Policies and Estimates** *G E N E R A L*\n\nManagement's Discussion and Analysis of Financial Condition and Results of Operations is based upon the Consolidated Financial Statements, which have been prepared in accordance with GAAP. The preparation of these financial statements requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenue and expenses, and related disclosure of contingent assets and liabilities. Management bases its estimates on historical experience and on various other assumptions that are believed to be reasonable under the circumstances, the results of which form the basis for making judgments about the carrying values of assets and liabilities that are not readily apparent from other sources. Senior management has discussed the development, selection and disclosure of these estimates with the Audit Committee of our Board of Directors. 
Actual results may differ from these estimates under different assumptions or conditions.\n\nAn accounting policy is deemed to be critical if it requires an accounting estimate to be made based on assumptions about matters that are uncertain at the time the estimate is made, and if different estimates that reasonably could have been used, or changes in the accounting estimates that are reasonably likely to occur periodically, could materially impact the financial statements. Management believes the following critical accounting policies reflect its more significant estimates and assumptions used in the preparation of the Consolidated Financial Statements.\n\n*Fiscal year end* – The Company's fiscal year ends on the Saturday nearest December 31. Fiscal year 2003, the year ended January 3, 2004, contained 53 weeks, while fiscal year 2002, the year ended December 28, 2002, and fiscal year 2001, the year ended December 29, 2001, contained 52 weeks. A 53-week year occurs approximately every sixth year.\n\n*Revenue recognition* – Revenue is normally recognized upon shipment of goods to customers. In certain circumstances revenue is not recognized until the goods are received by the customer or upon installation and customer acceptance based on the terms of the sale agreement. Revenue includes freight charged to customers; related", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "Investments in unconsolidated affiliates in 2004 primarily consist of contributions to The Residences at MGM Grand. In 2003 and 2002, such investments were primarily our required investments in Borgata. 
In 2002, we also contributed $44 million to Monte Carlo in connection with Monte Carlo's retirement of the final $87 million of its outstanding debt.\n\n#### **Cash Flows – Financing Activities**\n\nIn 2004, we issued over $1.5 billion of fixed rate debt in various issuances:\n\n- In February and March 2004, we issued $525 million of 5.875% Senior Notes due 2014;\n- In August 2004, we issued $550 million of 6.75% Senior Notes due 2012;\n- In September 2004, we issued $450 million of 6% Senior Notes due 2009 at a premium to yield 5.65%.\n\nIn 2004, we repaid a net $1.6 billion on our bank credit facilities and repurchased $49 million of our existing senior notes for $52 million, resulting in a loss on early retirement of debt of $6 million (including the write-off of unamortized original issue discount), which is classified as \"Other, net\" in the accompanying consolidated statement of income. In 2003, we issued $600 million of 6% Senior Notes, due 2009 and repaid a net $285 million on our bank credit facilities. The net proceeds of these financing activities were used to supplement operating cash flows, fund capital expenditures and repurchase shares of our common stock. In 2002, we utilized our operating cash flow to reduce outstanding indebtedness by $270 million, while still funding significant capital expenditures and share repurchases.\n\nOur share repurchases are only conducted under repurchase programs approved by our Board of Directors and publicly announced. 
Our share repurchase activity was as follows:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | | 2002 |\n| --- | --- | --- | --- | --- |\n| August 2001 authorization (1.4 million | | | | |\n| and 6.4 million shares purchased) $ | — | $ 36,034 | $ 207,590 | |\n| February 2003 authorization | | | | |\n| (10 million shares purchased) | — | 335,911 | — | |\n| November 2003 authorization (8 million and | | | | |\n| 2 million shares purchased) | 348,895 | 70,919 | — | |\n| | $ 348,895 | $442,864 | $ 207,590 | |\n| Average price of shares repurchased $ | 43.59 | $ 33.17 | $ 32.28 | |\n\nAt December 31, 2004, we had 10 million shares available for repurchase under a July 2004 authorization. We received $136 million, $36 million and $46 million in proceeds from the exercise of employee stock options in the years ended December 31, 2004, 2003 and 2002, respectively.\n\n#### **Principal Debt Arrangements**\n\nOur long-term debt consists of publicly held senior and subordinated notes and bank credit facilities. We pay fixed rates of interest ranging from 5.875% to 9.75% on the senior and subordinated notes. We pay variable interest based on LIBOR on our bank credit facility. We amended our bank credit facility in November 2003, and our current senior credit facility is a $2.5 billion, five-year revolving credit facility with a syndicate of banks led by Bank of America, N.A. As of December 31, 2004, we had approximately $2.4 billion of available liquidity under our bank credit facility. Subsequent to year-end, we redeemed three issuances of senior notes totaling $676 million of principal utilizing available funds under the bank credit facility. Our next maturity of public debt is not due until 2006.", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_MGM_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_ATRI_2003.pdf", - "query": "What was Atrion's gross profit in 2003 (in thousands) ? 
", - "target_page": 10, - "target_passage": "Gross Profit 22,239", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Atrion Corporation One Allentown Parkway Allen, Texas 75002 972 • 390 • 9800 www.atrioncorp.com", - "page_start": 31, - "page_end": 31, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "# CORPORATE INFORMATION\n\n# **Corporate Office:**\n\nAtrion Corporation One Allentown Parkway Allen, Texas 75002 (972) 390-9800 www.atrioncorp.com\n\n## **Registrar and Transfer Agent**\n\nAmerican Stock Transfer and Trust Company 59 Maiden Lane New York, New York 10007\n\n## **Form 10-K**\n\nA copy of the Company's 2003 Annual Report on Form 10-K, as filed with the Securities and Exchange Commission, may be obtained by any stockholder without charge by written request to:\n\n> *Corporate Secretary Atrion Corporation One Allentown Parkway Allen, Texas 75002*\n\n## **Stock Information**\n\nThe Company's common stock is traded on The Nasdaq Stock Market (Symbol: ATRI). As of March 8, 2004, there were approximately 1,200 stockholders, including beneficial owners holding shares in nominee or \"street\" name. The table below sets forth the high and low closing prices on The Nasdaq Stock Market and the quarterly dividends per share declared by the Company for each quarter of 2002 and 2003.\n\n| 2002 Quarter Ended | | High | | Low | | Dividends |\n| --- | --- | --- | --- | --- | --- | --- |\n| March 31 | $ | 38.14 | $ | 26.91 | $ | — |\n| June 30 | | 32.51 | | 26.82 | | — |\n| September 30 | | 28.09 | | 18.31 | | — |\n| December 31 | | 23.90 | | 17.31 | | — |\n| 2003 Quarter Ended | | High | | Low | | Dividends |\n| March 31 | $ | 22.85 | $ | 17.95 | $ | — |\n| June 30 | | 30.80 | | 22.75 | | — |\n| September 30 | | 45.20 | | 26.80 | | .12 |\n| December 31 | | 50.00 | | 40.00 | | .12 |\n\nThe Company paid no cash dividends on its common stock during 2002. 
In the third quarter of 2003 the Company began paying quarterly cash dividends and presently plans to pay quarterly cash dividends in the future.", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "#### *O F F I C E F U R N I T U R E*\n\nOffice furniture comprised 74% of consolidated net sales for 2003 and 76% of consolidated net sales for 2002 and 2001. Net sales for office furniture increased 2% in 2003 and decreased 6% in 2002. The increase in 2003 is due to the increased week from the Company's 52/53-week fiscal year. The office furniture industry has experienced an unprecedented three-year decline in shipments. The Business and Institutional Furniture Manufacturer's Association (BIFMA) reported 2003 shipments down over 5% and 2002 shipments down 19%. The Company's estimated share of the market based on reported office furniture shipments increased to 15.3% in 2003 compared to 14.4% in 2002 and 12.4% in 2001. This increase was achieved by providing strong brands, innovative products and services, and greater value to end-users.\n\nOperating profit as a percent of sales was 10.0% in 2003, 10.2% in 2002, and 8.2% in 2001. Included in 2003 were $15.2 million of net pretax charges related to the closure of two office furniture facilities, which impacted operating margins by 1.1 percentage points. Included in 2002 were $3.0 million of restructuring charges, which impacted operating margins by 0.2 percentage points, and 2001 included $22.5 million of restructuring charges, which impacted operating margins by 1.7 percentage points. 
The increase in operating margins is due to increased gross profit from the benefits of restructuring initiatives, rapid continuous improvement programs, and increased price realization, offset by additional investments in brand building and selling initiatives and increased freight expense.\n\n#### *H E A R T H P R O D U C T S*\n\nHearth products sales increased 9% in 2003 and decreased 3% in 2002, respectively. The growth in 2003 was attributable to strong housing starts, growth in market share in both the new construction and retail channels, strengthening alliances with key distributors and dealers, as well as focused new product introductions. The decrease in 2002 was mainly due to pruning out less profitable product lines.\n\nOperating profit as a percent of sales in 2003 was 12.1% compared to 10.8% and 9.2% in 2002 and 2001, respectively. The improved profitability in 2003 was the result of leveraging fixed costs over a higher sales volume and increased sales through company-owned distribution offset by increased freight costs and higher labor costs from increased use of overtime and temporary labor to meet record levels of demand. The increase in 2002 was mainly due to discontinuance of goodwill and indefinite-lived intangible amortization of approximately $7 million due to the adoption of SFAS 142.\n\n#### **Liquidity and Capital Resources**\n\nDuring 2003, cash flow from operations was $141.3 million, which along with funds from stock option exercises under employee stock plans, provided the funds necessary to meet working capital needs, invest in capital improvements, repay long-term debt, repurchase common stock, and pay increased dividends.\n\nCash, cash equivalents, and short-term investments totaled $204.2 million at the end of 2003 compared to $155.5 million at the end of 2002 and $78.8 million at the end of 2001. The Company used approximately $80 million of cash to acquire Paoli Inc. on January 5, 2004. 
These remaining funds, coupled with cash from future operations and additional long-term debt, if needed, are expected to be adequate to finance operations, planned improvements, and internal growth. The Company is not aware of any known trends or demands, commitments, events, or uncertainties that are reasonably likely to result in its liquidity increasing or decreasing in any material way.\n\nThe Company places special emphasis on the management and reduction of its working capital with a particular focus on trade receivables and inventory levels. The success achieved in managing receivables is in large part a result of doing business with quality customers and maintaining close communication with them. Trade receivables at year-end 2003 were virtually unchanged from the prior year. Trade receivable days outstanding have averaged approximately 37 to 38 days over the past three years. The Company's inventory turns were 23, 23, and 18 for 2003, 2002, and 2001, respectively. Increased imports of raw materials and finished goods may negatively affect inventory turns in the future but the Company is constantly looking for ways to add efficiency to its supply chain. The decrease in accounts payable and accrued expenses is due to timing of vendor and marketing program payments and the payment of additional purchase consideration and debenture earn out related to a prior acquisition. The Company also funded the retiree medical portion of its postretirement benefit obligation in 2003.\n\n#### *I N V E S T M E N T S*\n\nThe Company has investments in investment grade equity and debt securities. Management classifies investments in marketable securities at the time of purchase and reevaluates such classification at each balance sheet date. Equity securities are classified as available-for-sale and are stated at current market value with unrealized gains and losses included as a separate component of equity, net of any related tax effect. 
Debt securities are classified as held-to-maturity and are stated at amortized cost. A table of holdings as of year-end 2003 and 2002 is", - "page_start": 35, - "page_end": 35, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "Goodwill for 2003 acquisitions totaled approximately $21.2 million. As of December 31, 2003, we had goodwill, net of accumulated amortization, of $1,558.1 million. $27.7 million of the total purchase price paid for acquisitions and contingent payments to former owners was allocated to landÑll airspace.\n\nGoodwill for 2002 acquisitions totaled approximately $40.1 million. As of December 31, 2002, we had goodwill, net of accumulated amortization, of $1,544.2 million.\n\n### **Consolidated Results of Operations**\n\n### *Years Ended December 31, 2004, 2003 and 2002*\n\nOur income before cumulative eÅect of changes in accounting principles was $237.9 million for the year ended December 31, 2004, as compared to $215.4 million in 2003 and $239.6 million in 2002. Net income was $237.9 million for year ended December 31, 2004, or $1.53 per diluted share, as compared to $177.6 million, or $1.10 per diluted share, in 2003 and $239.6 million, or $1.44 per diluted share, in 2002. Net income for the year ended December 31, 2003 includes an after-tax expense of $37.8 million (net of an income tax beneÑt of $23.1 million), or $.23 per share, as a cumulative eÅect of a change in accounting principle resulting from the adoption of Statement of Financial Accounting Standards No. 143, \"\"Accounting for Asset Retirement Obligations,'' and a change in accounting principle for our methane gas collection systems. See Note 1, Basis of Presentation, of the Notes to our Consolidated Financial Statements for further discussion of these changes in accounting principles. 
Our operating results for the year ended December 31, 2002 include other charges (income) described below.\n\nThe following table summarizes our costs and expenses in millions of dollars and as a percentage of our revenue for 2002 through 2004:\n\n| | 2004 | | 2003 | | 2002 | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | $ | % | $ | % | $ | % |\n| Revenue ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | $2,708.1 | 100.0% | $2,517.8 | 100.0% | $2,365.1 | 100.0% |\n| Cost of operations ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 1,714.4 | 63.3 | 1,605.4 | 63.8 | 1,472.9 | 62.3 |\n| Depreciation, amortization and | | | | | | |\n| depletion of property and | | | | | | |\n| equipment ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 252.4 | 9.3 | 233.8 | 9.3 | 193.5 | 8.2 |\n| Amortization of intangible assets ÏÏÏÏ | 7.0 | .3 | 5.3 | .2 | 6.1 | .2 |\n| Accretion ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 13.7 | .5 | 12.7 | .5 | Ì | Ì |\n| Selling, general and administrative | | | | | | |\n| expenses ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 268.3 | 9.9 | 247.9 | 9.8 | 238.7 | 10.1 |\n| Other charges (income)ÏÏÏÏÏÏÏÏÏÏÏÏ | Ì | Ì | Ì | Ì | (5.6) | (.2) |\n| Operating income ÏÏÏÏÏÏÏÏÏ | $ 452.3 | 16.7% | $ 412.7 | 16.4% | $ 459.5 | 19.4% |\n\n*Revenue.* Revenue was $2,708.1 million, $2,517.8 million and $2,365.1 million for the years ended December 31, 2004, 2003 and 2002, respectively. Revenue increased by $190.3 million, or 7.6%, from 2003 to", - "page_start": 41, - "page_end": 41, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "Over the years, we have learned that there are certain clear prerequisites to growth. Our path to growth is grounded on these basic fundamentals: Managing our assets wisely. Making products that meet specific market needs. And maintaining our keen focus on productivity and profitability. 
It is our steady and consistent focus on the fundamentals that has enabled us to build strength and create an environment for growth, through favorable or unfavorable conditions in the market and the economy.\n\n## **We manage our assets and resources carefully.**\n\nOur financial strategy centers on building the strength and stability that will position our company for ongoing growth. We approach the management of our resources with discipline and diligence, striking the balance that allows us to accomplish our objectives: Funding the current needs of the business, maintaining a strong financial foundation, and investing in the resources, technology and assets that will ensure operating efficiency and fuel future growth. The soundness of this strategy was reflected once again in our financial results for 2003.\n\n# EBITDA Per Diluted Share From Continuing Operations(a)\n\nFor the fifth consecutive year, Atrion's earnings per diluted share from continuing operations increased by more than 15 percent, rising from $2.18 in 2002 to $2.66 in 2003, a 22 percent improvement. In light of the economic pressures which have challenged virtually every business in recent years, we view five consecutive years of EPS growth—ranging from 16 percent to over 50 percent—as a sign of solid financial strength and a testament to the viability of our strategy. Including a gain from discontinued operations of $ .09 per share, net income totaled $2.75 per diluted share for 2003.\n\nRevenues for 2003 increased five percent to $62.8 million, from $59.5 million in the prior year. Return on equity(a), which provides a good indication of how well we are utilizing investors' dollars, has steadily increased in recent years, from five percent in 1999 to 12 percent in 2003. This compares favorably to the average return on equity for our industry, reported at 10.7 percent by statistical research sources.\n\nThe company's ability to generate strong cash flow continued to flourish in 2003. 
This is a key strength for our company, as it enables us to pursue a number of value-creating initiatives.\n\n- We initiated the payment of quarterly dividends on the company's common stock in September 2003. Recent changes in tax laws make this an efficient avenue for providing a return to our shareholders and, with continuing growth in earnings and cash flow, we plan to increase the dividend periodically.\n- We repurchased 193,814 shares of our common stock in 2003. Over the last five years, we have repurchased nearly two million shares of our stock, a strategy we regard as a wise investment for our company and our stockholders.\n- We reduced debt by $6 million, from $10.3 million at the end of 2002 to $4.3 million at year-end 2003.\n\n*(a) This is a non-GAAP financial measure which is defined and reconciled to GAAP on page 7 of this report.*", - "page_start": 3, - "page_end": 3, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "Income from discontinued operations was $22.4 million after taxes, an increase of $15.0 million or 202%. The income from discontinued operations in 2003 includes the sale of the partnership interest in February 2003 and results from the two months of its operations in 2003.\n\nThe Company adopted FAS 143 \"Accounting for Asset Retirement Obligations.\" effective January 1, 2003, and as a result recorded a charge to earnings for the cumulative effect of this change in accounting of $76 thousand after taxes.\n\nNet income was $32.1 million, an increase of $27.6 million or 610%. The increase is a result of improved operating results in the PCS operations, the 2002 VeriSign stock loss and the sale of the cellular operations.\n\n#### **DISCONTINUED OPERATIONS**\n\nThe Company invested $2.0 million in the Virginia 10 RSA limited partnership in the early 1990's. The partnership's local customer base peaked in early 2000 with nearly 12,000 subscribers, then steadily declined to 6,700 by December 31, 2002. 
The decline was the result of competition with digital technologies and increased competition from national carriers in the area. As a result of the decline in the subscriber base, and the need for extensive capital expenditures to transform the analog network into a digital cellular network, the Company elected to sell its 66% interest in the partnership to one of the minority partners. The agreement was signed in November 2002, and closing was February 28, 2003. The Company's portion of the net income from its operations for 2003, 2002 and 2001 was $1.2 million, $7.4 million and $6.7 million, respectively.\n\n#### **CONTINUING OPERATIONS**\n\n#### **2002 compared to 2001**\n\nTotal revenue was $93.0 million in 2002, an increase of $24.3 million or 35.3%. Total revenues included $57.9 million of wireless revenues, an increase of $21.7 million or 60.2%; wireline revenues of $28.7 million, an increase of $1.3 million or 4.6%; and other revenues of $6.4 million, an increase of $1.2 million or 24.5%.\n\nWithin wireless revenues, the PCS operation contributed $55.5 million, an increase of $21.4 million, or 63.0%. PCS service revenues were $37.4 million, an increase of $18.3 million or 95.7%. The increase in the subscriber base, which totaled 67,842 at December 31, 2002, was an increase of 20,524 or 43% from the prior year end.\n\nPCS travel revenue, which is compensation between Sprint and its PCS Affiliates for use of the other party's network, was $16.5 million, an increase of $2.9 million or 21.3%. Travel revenue is impacted by the geographic size of the Company's network service area, the overall number of Sprint wireless customers, and the travel exchange rate. The rate received on travel was $0.10 per minute in 2002. 
The rates in 2001 were $0.20 per minute from January 1, 2001 through April 30, 2001; $0.15 per minute from May 1, 2001 through September 30, 2001; and $0.12 per minute from October 1, 2001 through December 31, 2001.\n\nPCS equipment sales were $1.6 million, an increase of $0.3 million or 19.6%. The equipment sales are net of $0.3 million of rebates and discounts given at the time of sale, which became more pronounced during the year to meet industry competition for subscriber additions and subscriber retention.\n\nIn accordance with Sprint's requirements, the Company launched third generation (3G 1X) service in August 2002. The impact of 3G 1X-network enhancements on revenues was not significant in 2002.\n\nTower leases added $2.1 million to wireless revenues, an increase of $0.4 million or 24.5%. The increase was the result of other wireless carriers executing additional leases to use space on the Company's portfolio of towers. Of the 82 towers and poles owned by the Company as of December 31, 2002, 46 have tower space leased to other carriers.\n\nWireless revenues from the Company's paging operation were $0.3 million, a decrease of $0.1 million as the local customer base increasingly chose alternative digital wireless services. Paging service subscribers declined by 7.8% in 2002 from 3,190 subscribers to 2,940 subscribers.\n\nWithin wireline revenues, the Telephone operation contributed $22.5 million, an increase of $0.9 million, or 4.0%. Telephone access revenues were $10.9 million, an increase of $1.4 million or 14.8%. The growth in access revenues was driven by a 38.4% increase in access minutes of use on the Company's network and an increased percentage of minutes in the intrastate jurisdiction, where rates are higher than the interstate jurisdiction. On January 1, 2002 the Federal subscriber line charge (SLC) for residential customers increased from $3.50 to $5.00 per month. 
The SLC", - "page_start": 50, - "page_end": 50, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "To the Stockholders and the Board of Directors of Atrion Corporation:\n\nWe have audited the accompanying consolidated balance sheets of Atrion Corporation (a Delaware corporation) and Subsidiaries as of December 31, 2003 and 2002, and the related consolidated statements of income, changes in stockholders' equity and cash flows for the years then ended. These financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these financial statements based on our audit. The financial statements of Atrion Corporation and Subsidiaries as of and for the year in the period ended December 31, 2001, were audited by other auditors who have ceased operations. Those auditors expressed an unqualified opinion on those financial statements in their report dated February 25, 2002.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States of America. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management as well as evaluating the overall financial statement presentation. 
We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the financial statements referred to above present fairly, in all material respects, the consolidated financial position of Atrion Corporation and Subsidiaries as of December 31, 2003 and 2002, and the consolidated results of their operations and their consolidated cash flows for the years then ended in conformity with accounting principles generally accepted in the United States of America.\n\nAs discussed above, the financial statements of Atrion Corporation and Subsidiaries as of December 31, 2001, and for the year then ended were audited by other auditors who have ceased operations. As described in Note 2, these financial statements have been revised to include the transitional disclosures required by Statement of Financial Accounting Standards No. 142, Goodwill and Other Intangible Assets, which was adopted by the Company as of January 1, 2002. Our audit procedures with respect to the disclosures in Note 2 with respect to 2001 included agreeing the previously reported net income to the previously issued financial statements and the adjustments to reported net income representing amortization expense (including any related tax effects) recognized in those periods related to goodwill to the Company's underlying records obtained from management. We also tested the mathematical accuracy of the reconciliation of adjusted net income to reported net income, and the related income-per-share amounts. In our opinion, the disclosures for 2001 in Note 2 are appropriate. 
However, we were not engaged to audit, review, or apply any procedures to the 2001 financial statements of the Company other than with respect to such disclosures and, accordingly, we do not express an opinion or any other form of assurance on the 2001 financial statements taken as a whole.\n\nGrant Thornton LLP Dallas, Texas February 13, 2004\n\n*This is a copy of the audit report previously issued by Arthur Andersen LLP in connection with Atrion Corporation and Subsidiaries Annual Report for the year ended December 31, 2001. This audit report has not been reissued by Arthur Andersen LLP in connection with this Annual Report. The consolidated balance sheets as of December 31, 2001 and 2000 and the consolidated statements of income and cash flows for the years ended December 31, 2000 and 1999 referred to in this report have not been included in the accompanying financial statements.*\n\nTo the Stockholders and the Board of Directors of Atrion Corporation:\n\nWe have audited the accompanying consolidated balance sheets of Atrion Corporation (a Delaware corporation) and subsidiaries as of December 31, 2001 and 2000 and the related consolidated statements of income and cash flows for each of the three years in the period ended December 31, 2001. These financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these financial statements based on our audits.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. 
An audit also includes assessing the accounting principles used and significant estimates made by management as well as evaluating the overall financial statement presentation. We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the financial statements referred to above present fairly, in all material respects, the financial position of Atrion Corporation and subsidiaries as of December 31, 2001 and 2000 and the results of their operations and their cash flows for each of the three years in the period ended December 31, 2001 in conformity with accounting principles generally accepted in the United States.\n\nArthur Andersen LLP Atlanta, Georgia February 25, 2002", - "page_start": 24, - "page_end": 24, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "The increase in aggregate dollars in all periods presented is primarily a result of the expansion of our operations through internal growth and acquisitions.\n\nThe increase in cost of operations as a percentage of revenue from 2002 to 2003 and the decrease in cost of operations as a percentage of revenue from 2003 to 2004 is primarily attributable to higher self-insurance expense in 2003. Self-insurance expense was $165.3 million, $189.5 million and $138.1 million for the years ended December 31, 2004, 2003 and 2002, respectively. The increase in self-insurance expense in 2003 related to existing claims and was attributable to the expansion of our operations and various changes in estimates as a result of continued negative trends through the 2003 policy year.\n\nExcluding self-insurance expense, cost of operations as a percentage of revenue increased during the year ended December 31, 2004 versus the comparable 2003 period. This increase is primarily attributable to increased fuel prices, labor costs and subcontracting costs associated with the long-haul transport of waste by third-party vendors. 
Excluding self-insurance expense, cost of operations as a percentage of revenue decreased in 2003 versus the comparable 2002 period due to the elimination of closure and post-closure expense as a component of cost of operations in accordance with SFAS 143 in 2003 and the termination of our operating lease facility in July 2002. This decrease was partially oÅset by increased fuel prices, an increase in waste taxes levied on landÑll volumes in certain states, an increase in revenue generated by lines of business that produce lower operating margins and an increase in the long-haul transport of waste by third-party vendors.\n\nTo date in 2005, we have experienced a signiÑcant increase in fuel prices. We believe that cost of operations as a percentage of revenue may continue to remain high depending upon the cost of fuel, health insurance, risk insurance and other key components of our cost structure and general economic conditions.\n\n*Depreciation, Amortization and Depletion of Property and Equipment.* Depreciation, amortization and depletion expenses for property and equipment were $252.4 million, $233.8 million and $193.5 million, or, as a percentage of revenue, 9.3%, 9.3% and 8.2%, for the years ended December 31, 2004, 2003 and 2002, respectively. The increase in aggregate dollars from 2003 to 2004 is primarily due to the expansion of our operations through internal growth and acquisitions. The increase in aggregate dollars and as a percentage of revenue from 2002 to 2003 is primarily due to an increase in landÑll amortization associated with the adoption of SFAS 143. 
The remaining increase from 2002 to 2003 is due to increased depreciation expense resulting from capital expenditures, acquisitions and the purchase of equipment originally placed into service pursuant to an operating lease.\n\n*Amortization of Intangible Assets.* Intangible assets consist primarily of cost in excess of fair value of net assets acquired, but also includes values assigned to long-term contracts, covenants not to compete and customer relationships. Expenses for amortization of intangible assets were $7.0 million, $5.3 million and $6.1 million, or, as a percentage of revenue, .3%, .2% and .2%, for the years ended December 31, 2004, 2003 and 2002, respectively. The increase in such expenses in aggregate dollars and as a percentage of revenue from 2003 to 2004 is primarily due to amortization expense on amounts that were recorded in other intangible assets during the three months ended September 30, 2004 resulting from an extensive internal review of all recent acquisitions. The increase in amortization of intangible assets in aggregate dollars is also due to the amortization of intangible assets associated with businesses acquired during 2004.\n\n*Accretion expense.* Accretion expense was $13.7 million and $12.7 million or, as a percentage of revenue, .5% and .5%, for the years ended December 31, 2004 and 2003, respectively, versus $0 for 2002. Accretion expense resulted from the adoption of SFAS 143 as of January 1, 2003. The increase in such expenses in aggregate dollars in 2004 is primarily due to expansion of our landÑll operations.\n\n*Selling, General and Administrative Expenses.* Selling, general and administrative expenses were $268.3 million, $247.9 million and $238.7 million, or, as a percentage of revenue, 9.9%, 9.8% and 10.1%, for the years ended December 31, 2004, 2003 and 2002, respectively. The increases in aggregate dollars are primarily a result of the expansion of our operations through internal growth and acquisitions. 
The increase in such expenses as a percentage of revenue from 2003 to 2004 is primarily due to higher compensation costs. The decrease in such expenses as a percentage of revenue from 2002 to 2003 is primarily due to leveraging our existing overhead structure over an expanding revenue base.", - "page_start": 43, - "page_end": 43, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "#### PERFORMANCE\n\n## **The recovery story is complete**\n\nFiscal 2004 was a tough year, full of both anticipated and unexpected risks, but Nissan lived up to all the challenges. We had a record year in revenues, operating profit, net income, sales volume and production.\n\n#### **Sales performance**\n\nGlobal sales came to 3,388,000 units, which exceeded our forecast of 3,380,000 units. This record level represents an increase of 10.8 percent, or 331,000 units, over fiscal 2003, and is 281,000 units more than the previous record level set in 1990. In fiscal 2004, we released nine all-new models globally.\n\nAlong with record sales, we achieved a global production record. Nissan's manufacturing plants turned out 3,378,000 units, or 293,000 units more than the previous record.\n\n#### **Financial performance**\n\n- Consolidated net revenues came to 8 trillion ¥576.3 billion, up 15.4 percent from last year.\n- Consolidated operating profit improved by 4.4 percent to a record ¥861.2 billion. As a percentage of net revenue, our operating profit margin came to 10.0 percent.\n- Net income reached ¥512.3 billion, an increase of ¥8.6 billion.\n\n#### **Nissan 180 commitments**\n\nFiscal 2004 marked the end of our NISSAN 180 business plan. 
Obviously, NISSAN 180 cannot be closed completely until the end of September 2005, but we know that we have already delivered two of the plan's three critical commitments.\n\n- We committed to an 8 percent operating profit margin, and our margin has been at or above 10 percent for every year of NISSAN 180.\n- We committed to zero debt, and today we have more than ¥200 billion in net cash under the new and more demanding accounting standards.\n- Our only remaining commitment is to achieve one million additional sales. Even here we are in reasonably good shape. At the midpoint of the measurement period we are at 1,809,000 units, which is a slight advance compared to our commitment to reach 3,597,000 units by the end of September 2005.", - "page_start": 7, - "page_end": 7, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "| | | Consolidated |\n| --- | --- | --- |\n| | 2004 | 2003 |\n| 21. Earnings per Share | $million | $million |\n| Earnings used in the calculation of basic earnings per share reconciles to | | |\n| the net profit after tax in the statement of financial performance as follows: | | |\n| Net profit after income tax | 379.9 | 327.0 |\n| Less special dividend on redeemable convertible preference shares | (14.3) | – |\n| Earnings used in the calculation of diluted earnings per share | 365.6 | 327.0 |\n| Less dividends paid on reset convertible preference shares | (23.0) | (23.0) |\n| Earnings used in the calculation of basic earnings per share | 342.6 | 304.0 |\n| | 2004 | 2003 |\n| | | Number of shares |\n\nThe weighted average number of shares used for the purposes of calculating diluted earnings per share reconciles to the number used to calculate basic earnings per\n\nSAN165 WWW Fins 30/3/05 11:55 AM Page 69\n\n| share as follows: | | |\n| --- | --- | --- |\n| Basic earnings per share | 584,924,130 | 583,432,623 |\n| Partly paid shares | 100,722 | 112,876 |\n| Executive share options | 755,897 | 164,841 |\n| Reset convertible preference shares 
| 39,367,128 | 50,946,143 |\n| Diluted earnings per share | 625,147,877 | 634,656,483 |\n\nPartly paid shares outstanding issued under the Santos Executive Share Plan; options outstanding issued under the Santos Executive Share Option Plan; and reset convertible preference shares have been classified as potential ordinary shares and included in the calculation of diluted earnings per share. The number of shares included in the calculation are those assumed to be issued for no consideration, being the difference between the number that would have been issued at the exercise price and the number that would have been issued at the average market price.\n\nDuring the year, 715,000 (2003: 1,250,000) options and 50,000 (2003: 35,750) partly paid shares were converted to ordinary shares.\n\n6,000,000 redeemable convertible preference shares were not dilutive and were excluded from the calculation of diluted earnings per share.", - "page_start": 70, - "page_end": 70, - "source_file": "ASX_STO_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EEFT_2000.pdf", - "query": "What the name of the first bridge buildt over Danube ?", - "target_page": 16, - "target_passage": "he Chain Bridge was the first bridge over the Danube", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### IST MODE OR PHUGOID\n\nFigure 4.20. Longiitudinal Dynamic Sttxbility", - "page_start": 297, - "page_end": 297, - "source_file": "00-80T-80.pdf" - }, - { - "text": "When looking at the **differences between countries** in 2020, the countries with the highest values are: Poland (36.6%), Finland (25.7%) and Sweden (20.3%); all three are far above the average. Austria, Luxembourg and Germany have figures close to the EU27 average of 10.3%. 
In most other countries the response values are under or close to 6%, like in Estonia, Romania, Ireland, Latvia, Lithuania, Hungary, Malta, Bulgaria, Greece, Croatia, Cyprus, Czechia and Slovenia.257", - "page_start": 92, - "page_end": 92, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "#### **Corporate Headquarters**\n\nEuronet Worldwide 4601 College Boulevard, Suite 300 Leawood, Kansas 66211 Tel: 913-327-4200 Fax: 913-327-1921\n\n#### **European Headquarters**\n\nEuronet Worldwide Horvát u. 14-24. 1027 Budapest, Hungary Tel: 36-1-224-1000 Fax: 36-1-224-1013", - "page_start": 47, - "page_end": 47, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "# CHAPTER 6:\n\n### LEARN HOW TO SUMMARISE YOUR STUDY MATERIAL\n\nTo be successful in your studies, you need to learn how to create meaningful summaries of your course material. This is especially important if you are a distance learning student (www.oxbridgeacademy. co.za/distance-learning/), as you won't have a teacher or lecturer to point out key concepts, or to give you tips about the types of questions you can expect in the exams.\n\n### SUMMARISING YOUR WORK GIVES YOU AN OPPORTUNITY TO:\n\n- Organise your study material into astructure that makes sense to you.\n- Arrange your study material into a format that suits your learning style.\n- Create memory aids for yourself.\n- Identify key ideas and concepts.\n- Focus on what's important.\n- Prepare for exams more easily.", - "page_start": 27, - "page_end": 27, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "- 158. Marek, Miroslav. \"Capet 40\" (http://genealogy.euweb.cz/capet/capet40.html). *euweb.cz*. Archived (https://web.archi ve.org/web/20131014145729/http://www.genealogy.euweb.cz/capet/capet40.html) from the original on 14 October 2013. Retrieved 13 February 2009.\n- 159. \"Suzanne de Mésenge\" (http://roglo.eu/roglo?lang=es;i=337437). *roglo.eu*. 
Archived (https://web.archive.org/web/ 20160411124656/http://roglo.eu/roglo?lang=es;i=337437) from the original on 11 April 2016. Retrieved 14 September 2009.\n\n#### **Works cited**\n\nAnselme de Sainte-Marie, Père (1726). *Histoire généalogique et chronologique de la maison royale de France* (http s://gallica.bnf.fr/ark:/12148/bpt6k76026j) [*Genealogical and chronological history of the royal house of France*] (in French). Vol. 1 (3rd ed.). Paris: La compagnie des libraires. Archived (https://web.archive.org/web/2022033108174 8/https://gallica.bnf.fr/ark:/12148/bpt6k76026j) from the original on 31 March 2022. Retrieved 30 August 2018.\n\nAntoine, Michel (1989). *Louis XV* (in French). Paris: Fayard. ISBN 978-2-2130-2277-2.\n\nBailey, Gauvin Alexander (2018). *Architecture and Urbanism in the French Atlantic Empire: State, Church and Society, 1604–1830*. Kingston, Ontario: McGill-Queen's University Press. ISBN 978-0-7735-5376-7.\n\n- Barentine, John C. (2016). *Uncharted Constellations: Asterisms, Single-Source and Rebrands*. Springer Publishing. ISBN 978-3-3192-7619-9.\nBarnes, Linda L. (2005). *Needles, Herbs, Gods, and Ghosts: China, Healing, and the West to 1848*. Harvard University Press. ISBN 978-0-6740-1872-3.\n\n- Beem, Charles (2018). *Queenship in Early Modern Europe* (https://books.google.com/books?id=301GEAAAQBAJ). Red Globe Press. ISBN 978-1-1370-0506-9. Archived (https://web.archive.org/web/20231124053309/https://book s.google.com/books?id=301GEAAAQBAJ) from the original on 24 November 2023. Retrieved 30 October 2023.\nBély, Lucien (2001). *The History of France*. Paris: Editions Jean-Paul Gisserot. ISBN 978-2-8774-7563-1.\n\n- Black, Jeremy (2011). *Beyond the Military Revolution: War in the Seventeenth Century World*. Palgrave Macmillan. ISBN 978-0-2302-5156-4.\n- Blanning, Tim (2008). *The Pursuit of Glory: The Five Revolutions That Made Modern Europe*. Penguin Books. ISBN 978-0-1431-1389-8.\n\nBluche, François (1986). *Louis XIV* (in French). 
Paris: Hachette Littératures. ISBN 978-2-0101-3174-5.\n\n- Bluche, François (1990). *Louis XIV*. Translated by Greengrass, Mark. New York: Franklin Watts. p. 11. ISBN 978-0- 5311-5112-9.\n- Bluche, François (2005). *Dictionnaire du Grand Siècle 1589–1715* (in French). Fayard. ISBN 978-2-2136-2144-9.\n- Bryant, Mark (2004). \"Partner, Matriarch, and Minister: Mme de Maintenon of France, Clandestine Consort, 1680– 1715\". In Campbell Orr, Clarissa (ed.). *Queenship in Europe 1660–1815: The Role of the Consort*. Cambridge University Press. pp. 77–106. ISBN 978-0-5218-1422-5.\n- Buckley, Veronica (2008). *Madame de Maintenon: The Secret Wife of Louis XIV*. London: Bloomsbury. ISBN 978-0- 7475-8098-0.\n\nBurke, Peter (1992). \"The Fabrication of Louis XIV\". *History Today*. **42** (2).\n\n- Claydon, Tony (2007). *Europe and the Making of England, 1660–1760*. Cambridge University Press. ISBN 978-0- 5218-5004-9.\n- Delon, Michel (2013). *Encyclopedia of the Enlightenment* (https://books.google.com/books?id=QEpJAgAAQBAJ). Routledge. ISBN 978-1-1359-5998-2.\n\nDunlop, Ian (2000). *Louis XIV*. London: Pimlico. ISBN 978-0-7126-6709-8.\n\nDurant, Will; Durant, Ariel (1963). *The Story of Civilization*. Vol. 8: The Age of Louis XIV. Boston: Simon & Schuster.\n\nDvornik, Francis (1962). *The Slavs in European History and Civilization* (https://books.google.com/books?id=LACpYP -g1y8C). Rutgers University Press. ISBN 978-0-8135-0799-6. Archived (https://web.archive.org/web/20231017044 641/https://books.google.com/books?id=LACpYP-g1y8C) from the original on 17 October 2023. Retrieved 21 August 2021.\n\nEdmunds, Martha (2002). *Piety and Politics*. University of Delaware Press. ISBN 0-8741-3693-8.\n\nEdwards (2007). \"Edict of Versailles (1787)\" (https://books.google.com/books?id=6_2wkP4j-EsC&pg=PA212). In Fremont-Barnes, Gregory (ed.). *Encyclopedia of the Age of Political Revolutions and New Ideologies, 1760–1815*. Greenwood Publishing. 
ISBN 978-0-3130-4951-4.\n\n- Fraser, Antonia (2006). *Love and Louis XIV: The Women in the Life of the Sun King*. New York: Random House, Inc. ISBN 978-1-4000-3374-4.\n- Frost, Robert (2000). *The Northern Wars; State and Society in Northeastern Europe 1558–1721*. Routledge. ISBN 978-0-5820-6429-4.\n- Gaudelus, Sébastien (2000). \"La Mise en Spectacle De La Religion Royale: Recherches sur la Devotion de Louis XIV\" (https://www.persee.fr/doc/hes_0752-5702_2000_num_19_4_2133). *Histoire, Économie et Société* (in French). **19** (4): 513–526. doi:10.3406/hes.2000.2133 (https://doi.org/10.3406%2Fhes.2000.2133). Archived (http s://web.archive.org/web/20200522101239/https://www.persee.fr/doc/hes_0752-5702_2000_num_19_4_2133) from the original on 22 May 2020. Retrieved 9 October 2020.", - "page_start": 30, - "page_end": 30, - "source_file": "wikipedia5.pdf" - }, - { - "text": "vessels engaged in routine offshore logistics tasks operate fully laden with 7.4 m draft which means there will be very few occasions when the largest vessels in the industry have to make a tide dependent entry or departure through the Mermaid channel. Further the Mermaid Base will not suffer operational disadvantages experienced by the adjacent Woodshed Base or nearby Damper Public Wharf in terms of entry and departure draft restrictions.\n\nThe function and purpose of Berth 1 will be:\n\n- To service the larger offshore supply boat market on a fast turnaround basis.\n- To receive and offload very heavy ro/ro cargoes up to 1500 tonne delivered by ocean going heavy lift ships and barges.\n- To handle inbound and outbound cargoes related to major offshore pipe lay projects.\n- To receive and efficiently load reel ships used for deep water small diameter pipelay.\n\nThe wharf will be an earth filled structure with steel sheet pile faces and concrete capping beam surround. 
Most of the construction will be performed using land based equipment working from the core of the earth filled system.\n\nMuch effort has gone into a design concept which allows very large cranes (>100 tonne capacity) to operate without restriction on the wharf.\n\nThe separation between Berth 1 and Berth 2 is such to allow Road Train Triples (the max allowable) to turn unassisted on the wharf.\n\n#### **C. QUAY WALL (BERTH 2)**\n\nThe inner berth, Berth 2 has a minimum depth alongside of 5.0 m allowing unrestricted operation of all the Mermaid fleet, and the majority of other vessels servicing the offshore oil/gas industry and mineral ports. This berth will offer excellent weather protection for small and medium size vessels.\n\n#### **D. BREAKWATER.**\n\nThe rubble mount type breakwater will be an extension of the wharf, constructed using core and armor rock largely won from excavations on the Base. The excavations created will become depositories for dredge spoil.\n\nBecause the storm surge associated with major cyclones can be up to 7 m above chart datum (low tide), before imposing the wave height, a fully protective breakwater is not practical. The", - "page_start": 14, - "page_end": 14, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "| Logging in to the OpenShift web console 224 |\n| --- |\n| Deploying an NGINX server by using the OpenShift web console 225 |\n| Deploying a second NGINX server by using the OpenShift web console 229 |\n| Customizing the index.test file of the NGINX instances 233 |\n| Creating a route to balance the network traffic between the two NGINX instances 237 |\n| Testing load balancing across NGINX instances 239 |\n| Appendix C. Seamless application movement across multicloud environments . . 
241 |\n| Network tunneling for MongoDB 242 |\n| Moving the application across clouds 243 |\n| Starting the pod at Amazon Web Services 243 |\n| Accessing MongoDB by using the tunneled connection 244 |\n| Moving to the on-premises Power Systems cloud 245 |\n| Related publications 249 |\n| IBM Redbooks 249 |\n| Online resources 249 |\n| Help from IBM 249 |", - "page_start": 7, - "page_end": 7, - "source_file": "sg248459.pdf" - }, - { - "text": "#### NAVWEPS OO-SOT-80 OPERATING STRENGTH LIMITATIONS\n\n-\n\nFigure 5.5. Aeroelastic Effects (Sheet I of 2)", - "page_start": 357, - "page_end": 357, - "source_file": "00-80T-80.pdf" - }, - { - "text": "### **3.3 Visualization of Geo-Spatial Data (map.apps)**\n\nThe visualization of geo-spatial data within the European Data Portal provides previewing functionality for spatial open data. The aim is to allow the user to assess if a dataset meets specific requirements in terms of spatial and thematic coverage. The functionality that is provided in the header (links to disclaimers and language switching) is consistent in the entire portal.\n\n#### **3.3.1 How to visualize geo-spatial data from a dataset resource**\n\nAccessing the geo-spatial visualization is achieved via the Data Platform interface. A user searches for specific data, enters the dataset view of reasonable results and displays the available distributions (see Section 3.2.5). If a dataset distribution is supported by the geo-spatial visualization, a globe button is displayed (see Figure 3). This is the entry point into the map viewer application. Supported formats are OGC Web Map Service (WMS) and GeoJSON. 
If the user visits the geo-spatial visualization for the first time, an interactive user tutorial is provided to guide the use through specific functions of the user interface, similar to this written user manual.\n\n| C | What we do ~ Providing Data T Using Data ▼ Resources ' | Data ▼ |\n| --- | --- | --- |\n| | Dataset Categories Similar Datasets Feedback | |\n| | Erfassung des Seehundbestandes im Niedersächsischen | |\n| | Wattenmeer 2018 (UIG) Monitoring of Common seals in the | |\n| | Wadden Sea of Lower Saxony 2018 | |\n| | GovData Updated: - | |\n| | Erfassung des Seehundbestandes im Niedersächsischen Wattenmeer 2018. Der Seehundbestand im | |\n| | Niedersächsischen Wattenmeer wird jährlich ermittelt. Dazu werden in den Sommermonaten (Mai - September) | |\n| | Zählflüge durchgeführt. Während dieser Monate vollziehen sich Geburt, Aufzucht der Jungtiere und Haanwechsel bei den Seehunden. Die Zählungen erfolgen bei Niedrigwasser. Zu dieser Zeit ruhen die Seehunde auf den | |\n| | trockengefallenen Liegeplätzen. Die Zählungen sind trilateral über den 'Seal Management Plan' koordiniert. Die | |\n| | Daten sind Bestandteil des Trilateral Monitoring and Assessment Program (TMAP). Monitoring of Common seals in the Wadden Sea of Lower Saxony 2018. The common seal population in the Wadden Sea of Lower Saxony is | |\n| | annually determined by aerial surveys at low tides during the summer months (May - September). In this period | |\n| | whelping, nursing and moulting of the seals take place. The counts are trilaterally coordinated according to the | |\n| | 'Seal Management Plan'; the data are part of the Trilateral Monitoring and Assessment Program (TMAP). 
| |\n| | An updated translation of this dataset is in progress, × | |\n| | Distributions (11) | |\n| | WFS: GetCapabilities (2.0.0) Download v | |\n| | No Licence Provided | |\n| | WMS: GetCapabilities (1.3.0) Options Download v | |\n| | No Licence Provided ben Geo-Visualization | |\n| | WMS: GetCapabilities Options V Download v | |\n| | No Licence Provided | |\n| XML | XML-Metadaten: Erfassung des Seehundbestandes im Niedersächsischen Watte ... Download v | |\n| | No Licence Provided | |\n| | spaettet-s ... umweltüber .. fauna inspireide_ nordsee | environmen. |\n| | wadden-sea. biologie phoca-vitu_ inspire common-sea .. | gewone-zee. |\n| | nationalpa ... niedersäch. opendata | meeressäug .. |\n| | Dataset Extent | |\n| | + | |\n\n*Figure 3 – Dataset Resource Page with Link to Geo-Spatial Visualisation.*", - "page_start": 37, - "page_end": 37, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "The name of the city has taken the forms *Lugdon*, *Luon*, and since the 13th century, *Lyon*. The Gallic *Lugdun* or *Lugdunon* that was Latinized in Roman as Lugdunum is composed of two words. The first may be the name of the Celtic god Lug (in charge of order and law), or the derived word *lugon*, meaning \"crow\" (the crow being the messenger of Lug), but might also be another word *lug*, meaning \"light\". The second is *dunos* ('fortress', 'hill'). 
The name thus may designate the hill of Fourvière, on which the ancient city of Lyon is founded, but could mean \"hill of the god Lug\", \"hill of the crows\" or \"shining hill\".[21] [22]\n\nAlternatively Julius Pokorny associates the first part of the word with the Indo-European radical **lūg* ('dark, black, swamp'), the basis of the toponyms Ludza in Latvia, Lusatia in Germany (from Sorbian *Łužica*), and several places in the Czech Republic named Lužice;[23] it could then also be compared to Luze in Franche-Comté and various hydronyms such as Louge.\n\nFurther down, in the current Saint-Vincent district, was the Gallic village of Condate, probably a simple hamlet of sailors or fishermen living on the banks of the Saône. *Condate* is a Gallic word meaning \"confluence\", from which the Confluence district gets its name.\n\nIn Roman times the city was called *Caput Galliæ*, meaning \"capital of the Gauls\". As an homage to this title, the Archbishop of Lyon is still called the Primate of Gaul.\n\nDuring the revolutionary period, Lyon was renamed *Commune-Affranchie* (\"Emancipated Commune\") on 12 October 1793 by a decree of the Convention Nationale. It resumed its name in 1794, after the end of the Terror.\n\nLyon is called *Liyon* in Franco-Provençal. [24]\n\n#### **Ancient Lyon**\n\nAccording to the historian Dio Cassius, in 43 BC, the Roman Senate ordered the creation of a settlement for Roman refugees of war with the Allobroges. These refugees had been expelled from Vienne and were now encamped at the confluence of the Saône and Rhône rivers. The foundation was built on Fourvière hill and officially called *Colonia Copia Felix Munatia*, a name invoking prosperity and the blessing of the gods. The city became increasingly referred to as *Lugdunum* (and occasionally *Lugudunum*[25] ).[26] The earliest translation of this Gaulish place-name as \"Desired Mountain\" is offered by the 9th-century *Endlicher Glossary*. 
[27] In contrast, some modern scholars have proposed a Gaulish hill-fort named Lug[o]dunon, after the Celtic god Lugus (cognate with Old Irish *Lugh*, Modern Irish *Lú*), and *dúnon* (hillfort).\n\nThe Romans recognised that Lugdunum's strategic location at the convergence of two navigable rivers made it a natural communications hub. The city became the starting point of main Roman roads in the area, and it quickly became the capital of the province, Gallia Lugdunensis. Two Emperors were born in this city: Claudius, whose speech is preserved in the Lyon Tablet in which he justifies the nomination of Gallic Senators, and Caracalla.\n\n| Country | France |\n| --- | --- |\n| Region | Auvergne-Rhône-Alpes |\n| Metropolis | Lyon Metropolis |\n| Arrondissement | Lyon |\n| Subdivisions | 9 arrondissements |\n| Government | |\n| ��� Mayor (2020– | [2] Grégory Doucet |\n| 2026) | (EELV) |\n| 1 Area | 47.87 km2 (18.48 sq mi) |\n| [3]) • Urban (2020 | 1,141.4 km2 |\n| | (440.7 sq mi) |\n| [4] • Metro (2020 ) | 4,605.8 km2 |\n| | (1,778.3 sq mi) |\n| [5] Population (2022) | 520,774 |\n| • Rank | 3rd in France |\n| • Density | 11,000/km2 |\n| | (28,000/sq mi) |\n| • Urban (Jan. | 1,702,921 |\n| [6] 2021 ) | |\n| • Urban density | 1,500/km2 (3,900/sq mi) |\n| • Metro (Jan. 
| 2,308,818 |\n| [7] 2021 ) | |", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia4.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EEFT_2000.pdf", - "query": "What was the total amount of operating expenses of 2000 by Network Wordwide in 2000 ?", - "target_page": 17, - "target_passage": "Total operating expenses increased to $88.1 million for the year ended December 31, 2000", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "The Company re c o rded an $800,000 write-down of certain ATM hard w a re assets associated with the p u rchase of the Budapest Bank ATM network in May 2000 and the Service Bank ATM network in M a rch 1999 (see Note 10 to the Consolidated Financial Statements – Asset Write Down). In addition, the Company re c o rded a one-time gain in its Central European Sub-segment of $1.2 million. The gain is related to a change in Hungarian law that eliminates a major portion of the Company's liability for import taxes on ATM hard w a re to the Hungarian government. The gain is included as an element of direct operating costs.\n\nThe operating expenses for the Central European Sub-segment totaled $21.7 million for the year ended December 31, 2000 as compared to $20.7 million for the year ended December 31, 1999, an i n c rease of 5%. The increase in operating expenses is largely the result of an increase in the number of ATMs operated by the Company from 1,203 at December 31, 1999 to 1,391 at December 31, 2000, and increased transaction volumes.\n\nThe operating expenses for the We s t e rn European Sub-segment totaled $18.9 million for the year\n\nended December 31, 2000 as compared to $16.5 million for the year ended December 31, 1999, an increase of 15%. 
The increase in operating expenses is largely the result of an increase in the number of ATMs operated by the Company from 621 at December 31, 1999 to 787 at December 31, 2000, and increased transaction volumes.\n\nThe operating expenses for the Other ATM Operations Sub-segment were $2.4 million for the year ended December 31, 2000 as compared to $2.2 million for the year ended December 31, 1999, an increase of 9%. The operating expenses from this segment are the result of the acquisition of the Dash network located in the United States in August 1999 and the unallocated costs associated with the Company's processing facilities.\n\nD i rect operating costs in the Network Services Segment consist primarily of: ATM installation costs; ATM site rentals; and costs associated with maintaining ATMs, ATM telecommunications, interest on network cash and cash delivery and security services to ATMs. Such costs increased to $24.4 million for the year ended December 31, 2000 from $21.9 million for the year ended December 31, 1999. The increase in direct operating costs is primarily attributable to costs associated with operating the increased number of ATMs in the network during the periods. Also, i n t e rcompany allocations were made to charge the ATM operations with transaction switching and bank connection fees associated with the operations central processing center in Budapest. These allocations totalled $3.5 million and $2.9 million for the years ended December 31, 2000 and 1999, re s p e c t i v e l y. Direct operating costs for 2000 include a one-time gain of $1.2 million due to a change in Hungarian law that eliminates a major portion of the Company's liability for import taxes on ATM hard w a re. Direct operating costs also include a $657,000 gain realized in 1999 f rom the sale of the Croatian network assets. 
The components of direct operating costs for the years ended December 31, 2000 and 1999 were:\n\n| (in thousands) | | Years ending December 31, | | |\n| --- | --- | --- | --- | --- |\n| | 2 0 0 0 | | 1 9 9 9 | |\n| ATM communication | $ | 4 , 1 8 3 | $ | 3 , 9 8 2 |\n| ATM cash filling and interest on network cash | | 7 , 4 2 6 | | 5 , 9 0 0 |\n| ATM maintenance | | 3 , 9 8 7 | | 2 , 9 6 7 |\n| ATM site re n t a l | | 2 , 2 5 8 | | 2 , 4 2 1 |\n| ATM installation | | 6 7 5 | | 7 8 3 |\n| Transaction processing and ATM monitoring | | 5 , 2 4 2 | | 4 , 2 0 5 |\n| O t h e r | | 6 0 0 | | 1 , 6 6 3 |\n| Total direct operating expenses | $ | 2 4 , 3 7 1 | $ | 2 1 , 9 2 1 |\n\nAs a percentage of network revenue, direct operating costs fell from 83% for the year ended December 31, 1999 to 66% for the year ended December 31, 2000. On a per ATM basis the direct operating costs fell from $12,782 per ATM for the year ended December 31, 1999 to $9,807 per ATM for the year ended December 31, 2000, an improvement of 23%. On a per transaction basis the direct operating costs fell from $0.66 per transaction for the year ended December 31, 1999 to $0.46 per transaction for the year ended December 31, 2000, an improvement of 30%.\n\nSegment salaries and benefits increased to $7.4 million for the year ended December 31, 2000 from $7.2 million for the year ended December 31, 1999, an increase of 3%. The increase in the year-on-year expenses reflect the continued expansion of the operations to We s t e rn Euro p e a n markets with significantly higher labor costs than Central Europe as well as some increases in staff levels at the processing center re q u i red to maintain quality service in line with the rising transaction volumes. 
As a percentage of Network Services Segment revenue, salaries and benefits fell from 27% for the year ended December 31, 1999 to 20% for the year ended December 31, 2000.\n\nSelling, general and administrative costs allocated to the Network Services Segment decreased to $2.4 million for the year ended December 31, 2000 from $2.9 million for the year ended December 31, 1999. The $500,000 cost decrease for the year ended December 31, 2000 results fro m the net effect of (1) a $600,000 increase in the allocation of costs from the selling, general and administrative line of the Budapest pro c e s s i n g center to the operating cost line, as discussed above, from $2.9 million for the year ended December 31, 1999 to $3.5 for the year ended December 31, 2000 and (2) a $100,000 increase in costs associated with the expansion of the Company's network operations.\n\nD e p reciation and amortization increased to $8.0 million for the year ended December 31, 2000 from $7.4 million for the year ended December 31, 1999. The increases are due primarily to the increase in the number of owned ATMs as discussed pre v i o u s l y. The Company also re c o rded an $800,000 write-down of certain ATM hard w a re assets for the year ended December 31, 2000, as previously discussed.", - "page_start": 18, - "page_end": 18, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## **To Our Shareholders**\n\n*In our report to you last year, we noted that Euronet's success has been built in large part on the question \"Would you like another transaction?\" The answer from our clients and their customers was a resounding \"Yes!\"* \n\n*To reflect the rapid changes taking place in financial transactions worldwide, even that question has evolved. So in 2000, we also began asking \"How would you like your next transaction?\"*\n\nIn 2000, Euronet Worldwide focused on providing ways people can access their financial accounts and transactions through various electronic touchpoints. 
New secure transaction types and touchpoints—ATMs, point-of-sale (POS) devices, the Internet and mobile phones—continued to fuel transaction growth every month. In 2000, we processed a record 52.7 million billable transactions, a 60% increase over 1999, and in December 2000, our transaction levels exceeded 5 million per month and continue to accelerate.\n\n> Taken together, our transaction growth and expanding number of consumer touchpoints translated into an accelerating and recurring revenue stream, which greatly improved our bottom line. Our 2000 revenue of $52.7 million represented a 27% increase over the company's 1999 revenue of $41.5 million. Euronet's 2000 EBITDA also improved $2.4 million, or 14.5%, over 1999.\n\n> > This year we continued to focus on our core business of ATM driving and transaction processing, and we pursued new transactions through our mobile and Internet banking solutions. We also implemented our bill payment initiative, starting with electronic payments for prepaid mobile airtime. We are pleased to report that in 2000 our Network Services business turned EBITDA positive and posted revenue of $36.9 million, an increase of 39% over 1999 revenue.\n\n> > > Additional milestones were reached through several new strategic partnerships we announced late in the year. Gemplus, Sila Communications and Aether Systems chose Euronet mobile products to supplement their product offerings, proving the strength of Euronet's mobile products. Teaming up with these partners will further increase the sales penetration of our suite of mobile payment solutions around the world.", - "page_start": 2, - "page_end": 2, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "on the Company's ATM network. In addition, the Company continues to invest in the on-going development of products that were re c e n t l y i n t roduced to the market. 
The Company's re s e a rch and development costs incurred for computer products to be sold, leased or otherw i s e marketed increased to $6.7 million for the year ended December 31, 2000 from $3.2 million for the year ended December 31, 1999. Of this total f i g u re, $1.0 million and $322,000 were capitalized, as at December 31, 2000 and 1999, re s p e c t i v e l y, in conjunction with the Company's accounting policy requiring the capitalization of development costs on a product by product basis once technological feasibility is established. Technological feasibility of computer software products is established when the Company has completed all planning, designing, coding, and testing activities that are necessary to establish that the product can be produced to meet its design specifications including functions, feature s , and technical perf o rmance re q u i rements.\n\n**Operating Loss** The Software Solutions Segment incurred an operating loss of $21.5 million for the year ended December 31, 2000 and $7.1 million for the year ended December 31, 1999 as a result of the factors discussed above\n\n#### Corporate Services Segment\n\n**Operating Expenses** Operating expenses for the Corporate Services Segment increased to $7.9 million for the year ended December 31, 2000 f rom $6.8 million for the year ended December 31, 1999. 
The components of corporate services operating costs for the years ended December 31, 2000 and 1999 were:\n\n| (in thousands) | | Years ending December 31, | | |\n| --- | --- | --- | --- | --- |\n| | 2 0 0 0 | | 1 9 9 9 | |\n| Salaries and benefits | $ | 3 , 8 1 3 | $ | 3 , 3 3 5 |\n| Selling, general and administrative | | 3 , 8 4 1 | | 3 , 2 7 0 |\n| D e p reciation and amort i z a t i o n | | 2 0 8 | | 1 4 5 |\n| Total direct operating expenses | $ | 7 , 8 6 2 | $ | 6 , 7 5 0 |\n\nThe Company's expansion of its network infrastru c t u re, and increases in corporate and administrative capabilities are the primary reasons for these i n c reased expenditures.\n\n#### **Non-Operating Results for the Years Ended December 31, 2000 and 1999**\n\n**Interest Income** I n t e rest income decreased to $1.1 million for the year ended December 31, 2000 from $2.0 million for the year ended December 31, 1999 and from $2.5 million for the year ended December 31, 1998. The decrease is the result of the decrease in investment securities and cash as a result of negative cash flow from operations and capital expenditure s .\n\n**Interest Expense** I n t e rest expense decreased to $10.8 million for the year ended December 31, 2000 from $10.9 million for the year ended December 31, 1999 and increased from $7.8 million for the year ended December 31, 1998. The decrease from 1999 to 2000 is due to exchange rate diff e rences as the majority of the debt is denominated in Deutsche Mark. The increase from 1998 to 1999 is the result of accretion of the C o m p a n y 's Notes Payable for a full year in 1999 in comparison to 6 months' accretion in 1998.\n\n**Foreign Exchange Gain/Loss** The Company had a net foreign exchange loss of $3.2 million for the year ended December 31, 2000, as c o m p a red to $2.1 million for the year ended December 31, 1999, and $1.9 million for the year ended December 31, 1998. 
Exchange gains and losses that result from re - m e a s u rement of certain Company assets and liabilities are re c o rded in determining net loss. A portion of the assets and liabilities of the Company are denominated in Euros, including capital lease obligations, notes payable (including the Notes issued in the C o m p a n y 's public bond offering), cash and cash equivalents, investments, and forw a rd foreign exchange contracts. It is the Company's policy to attempt to match local currency receivables and payables. The foreign currency denominated assets and liabilities give rise to foreign exchange gains and losses as a result of U.S. dollar to local currency exchange movements.\n\n**Extraordinary Gain** In 1999 the Company re c o rded an extraord i n a ry gain of $2.8 million (net of income taxes of $0) following its re p u rchase of a portion of its Senior Discount Notes. The gain re p resents the diff e rence between the allocated carrying value of the face value of the debt re p u rchased of $8.1 million less the consideration paid of $5.0 million, offset by the write-off of allocated unamortized deferred financing costs of $300,000. The Company has not re t i red the bonds re p u rchased.\n\nIn addition, the Company re p u rchased 97,023 warrants that were attached to the notes payable. Accord i n g l y, approximately $176,000 was allocated to the carrying value of the warrants which reduced additional paid-in capital.\n\nIn 1998 the Company re c o rded an extraord i n a ry gain of $2.9 million (net of income taxes of $1.5 million), following its re p u rchase of a portion of its Senior Discount Notes. The gain re p resents the diff e rence between the allocated carrying value of the face value of the debt re p u rchased of $10.2 million less the consideration paid of $5.5 million, offset by the write-off of allocated unamortized deferred financing costs of $400,000. 
The Company has not re t i red the bonds re p u rchased.\n\n**Net Loss** The Company's net loss increased to $49.6 million for the year ended December 31, 2000, as compared to $30.9 million for the year ended December 31, 1999 and $28.4 million for the year ended December 31, 1998, as a result of the factors discussed above.\n\n#### LI Q U I D I T Y A N D CA P I TA L RE S O U R C E S\n\nSince its inception, the Company has sustained negative cash flows from operations and has financed its operations and capital expenditure s primarily through the proceeds from the 1998 issue of Deutsche Mark denominated notes payable, the Company's 1997 public equity off e r i n g , equipment lease financing and private placements of equity securities. The net proceeds of such transactions, together with revenues fro m operations and interest income have been used to fund aggregate net losses of approximately $123.8 million, investments in pro p e rt y, plant and equipment of approximately $52.8 million and acquisitions of $24.6 million.", - "page_start": 20, - "page_end": 20, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "guarantees for financial instruments and as deposits with customs officials. The decrease resulted primarily from the settlement of the forw a rd f o reign exchange contracts using restricted cash and a release of restricted cash resulting from the posting of a surety bond with the Hungarian banking institution that supplies cash to the Company's ATM network in Hungary.\n\n**Trade Accounts** Trade accounts receivable increased to $9.5 million at December 31, 2000 from $7.9 million at December 31, 1999 due primarily to sales from the Software Solutions Segment and increased Network Services Segment revenues.\n\n**P r o p e r t y, Plant and Equipment** Net pro p e rt y, plant and equipment decreased to $31.7 million at December 31, 2000 from $36.7 million at December 31, 1999. 
This decrease is due primarily to a reduction in the rate of installation of ATMs and fixed asset additions. Fixed asset depreciation was in excess of fixed asset additions, and the write-off of $800,000 in ATM hardware further reduced the net fixed asset position.\n\n**Intangible Assets** The decrease in net intangible assets to $2.6 million at December 31, 2000 from $16.3 million at December 31, 1999 is due primarily to the $11.2 million write-down of goodwill and other identifiable intangible assets associated with the Software Solutions Segment (see Note 9 to the Consolidated Financial Statements – Intangibles). In addition, the decrease is the result of amortization of purchased intangibles acquired in the Euronet USA acquisition in 1998, and the SBK and Dash acquisitions in 1999.\n\n**Current Liabilities** Current liabilities decreased to $20.5 million at December 31, 2000 from $26.9 million at December 31, 1999. This decrease is due primarily to decreases in accrued expenses, billings in excess of costs and estimated earnings on software installation costs and settlement of the forward foreign exchange contracts.\n\n**Capital Lease** Total capital lease obligations including current installments increased to $11.5 million at December 31, 2000 from $10.6 million at December 31, 1999. This increase is due primarily to additional capital leases resulting from the Company's purchase of Budapest Bank's ATM network, consisting of 147 ATMs, on May 1, 2000.\n\n**Notes Payable** Notes payable increased to $77.2 million at December 31, 2000 from $72.8 million at December 31, 1999. This is the result of several transactions as follows:\n\n| | (in millions) |\n| --- | --- |\n| Balance at December 31, 1999 | $72.8 |\n| Unrealized foreign exchange gain (DEM vs. US$) | (4.4) |\n| Accretion of bond interest | 8.8 |\n| Balance at December 31, 2000 | $77.2 
|\n\n**Stockholders' Deficit** Stockholders' deficit increased to $44.8 million at December 31, 2000 from $9.5 million at December 31, 1999. This is due to the net loss for the year ended December 31, 2000 of $49.6 million, which was offset by an increase in additional paid-in capital of $14.4 million due to the sale of 1,882,723 shares of common stock for proceeds of $13.0 million, the issue of $400,000 of warrants and the exercise of 390,231 stock options for proceeds of $900,000.\n\n#### **Year 2000 Compliance**\n\nThe Company's European and U.S. Year 2000 compliance teams reported no material Year 2000 problems during the advent of the year 2000, either with Euronet's own systems or the systems of its customers. The Company is unaware of any material Year 2000 complications to date.\n\n#### **Impact of New Accounting Pronouncements Not Yet Adopted**\n\n**SFAS 133** The Company is required to adopt Statement of Financial Accounting Standards (SFAS) No. 133, \"Accounting for Derivative Instruments and Hedging Activities\", as amended by SFAS No. 138, for US GAAP reporting as of 1 January 2001. SFAS 133 and 138 establish accounting and reporting standards for derivative instruments, including certain derivative instruments embedded in other contracts (collectively referred to as derivatives).\n\nIn accordance with SFAS No. 133, entities are required to carry all derivative instruments on the balance sheet at fair value. The accounting for movements in the fair value of derivatives depends upon whether the derivative has been designated and qualifies as part of a hedging relationship and, if so, the reason for holding it. If certain conditions are met, the Company may elect to designate a derivative instrument as a hedge of exposures. If the hedged exposure is a fair value exposure, movements in fair value are recognized in earnings with the offsetting gain or loss on the hedged item attributable to the hedged risk. 
If the hedged exposure is a cash flow exposure, the effective portion of the movement in fair value of the derivative instrument is initially reported as a component of other comprehensive income and subsequently reclassified into earnings at the time the forecasted transaction impacts earnings. Amounts excluded from the assessment of hedge effectiveness, as well as the ineffective portion of movements in fair value of the derivative instrument, are reported in earnings in the current period. Accounting for foreign currency hedges is similar to the accounting for fair value and cash flow hedges. If a derivative instrument is not designated as a hedge, movements in the fair value of derivative instruments are recognized in earnings.\n\nUnder the provisions of SFAS No. 133, the method that the Company will use to assess effectiveness of a hedge, as well as the measurement approach for determining the ineffectiveness of a hedge, must be established at the inception of a hedge. The Company formally documents all relationships between hedging instruments and hedged items, as well as its risk management objective and strategy for entering into the transaction. This process includes linking derivatives designated as fair value or cash flow hedges to specific assets, liabilities or firm commitments on forecasted transactions. This process is repeated on a periodic basis. If at any time the Company determines a hedge is no longer effective, hedge accounting is immediately discontinued and the derivative is marked to market with any gain or loss recorded in earnings.\n\nThe Company adopted the provisions of SFAS No. 133 on 1 January 2001 and this had no impact on the Company's consolidated financial statements as the Company does not have any derivative financial instruments. Future changes in the fair value of any remaining trading securities will be recorded through earnings. 
Changes in the fair value of available-for-sale securities will be recorded in other comprehensive income.",
      "page_start": 22,
      "page_end": 22,
      "source_file": "NASDAQ_EEFT_2000.pdf"
    },
    {
      "text": "**Operating Loss** The total Network Services Segment operating loss decreased to $6.1 million for the year ended December 31, 2000 from $12.9 million for the year ended December 31, 1999, an improvement of 53%, as a result of the factors discussed above. The Central European Sub-segment recorded an operating loss of $3.1 million for the year ended December 31, 2000 compared to a loss of $8.0 million for the year ended December 31, 1999, an improvement of 61%, as a result of the factors discussed above. The Western European Sub-segment operating loss decreased to $2.3 million for the year ended December 31, 2000 compared to a loss of $3.8 million for the year ended December 31, 1999, an improvement of 39%, as a result of the factors discussed above. The Other ATM Operations Sub-segment incurred an operating loss of $700,000 for the year ended December 31, 2000 compared to a loss of $1.0 million for the year ended December 31, 1999, an improvement of 30%, as a result of the factors discussed above.\n\n#### Software Solutions Segment\n\n**Software Solutions Revenue** Revenues from the Software Solutions Segment totaled $16.0 million before inter-segment eliminations for the year ended December 31, 2000 as compared to revenue of $15.1 million for the year ended December 31, 1999. Software revenues are grouped into four broad categories: software license fees, professional service fees, maintenance fees and hardware sales. Software license fees are the initial fees charged by the Company for the licensing of its proprietary application software to customers. Professional service fees are charged for customization, installation and consulting services provided to customers. 
Software maintenance fees are the ongoing fees charged to customers for the maintenance of the software products. Hardware sales revenues are derived from the sale of computer products and are reported net of cost of sales. The components of software solutions revenue for the years ended December 31, 2000 and 1999 were:\n\n| (in thousands) | Years ending December 31, | |\n| --- | --- | --- |\n| | 2000 | 1999 |\n| Software license fees | $4,117 | $2,430 |\n| Professional service fees | 6,867 | 8,298 |\n| Maintenance fees | 4,487 | 4,051 |\n| Hardware sales | 535 | 370 |\n| Total software solutions revenue | $16,006 | $15,149 |\n\nThe increase in software license fees from 1999 to 2000 can be attributed to an increased number of software sales contracts signed in 2000 as compared to 1999, primarily in the first half of the year 2000. Sales of the Company's core software products dropped off substantially in the third and fourth quarters of 2000 and are expected to be soft again during 2001. The Company believes that revenues of the Software Solutions Segment will increasingly be derived from the Company's new set of software solutions, including its wireless banking solutions. The decrease in professional service fees from 1999 to 2000 can be attributed to increased efficiency in the installation of software.\n\n**Software Sales Backlog** The Company defines \"software sales backlog\" as fees specified in contracts which have been executed by the Company and for which the Company expects recognition of the related revenue within one year. At December 31, 2000 the revenue backlog was $3.5 million, as compared to $3.1 million at December 31, 1999. The increase in backlog from December 31, 1999 results principally from growth in software sales. 
It is management's intention to continue to focus on expediting delivery and implementation of software in an effort to reduce backlog while continuing sales growth.\n\nThere can be no assurance that the contracts included in backlog will actually generate the specified revenues or that the revenues will be generated within the one-year period.\n\n**Operating Expenses** Software Solutions Segment operating expenses consist primarily of salaries and benefits, selling, general and administrative, and depreciation and amortization. In addition, the Company recorded an $11.2 million one-time write-down of goodwill and other identifiable intangible assets associated with the Company's purchase of Euronet USA in December 1998 (see Note 10 to Consolidated Financial Statements – Asset Write Down). Total segment operating expenses increased to $37.5 million for the year ended December 31, 2000 from $22.3 million for the year ended December 31, 1999. The components of software solutions operating costs for the years ended December 31, 2000 and 1999 were:\n\n| (in thousands) | Years ending December 31, | |\n| --- | --- | --- |\n| | 2000 | 1999 |\n| Direct operating costs | $800 | $1,089 |\n| Salaries and benefits | 18,004 | 13,953 |\n| Selling, general and administrative | 5,266 | 4,565 |\n| Depreciation and amortization | 2,215 | 2,683 |\n| Asset write down | 11,190 | — |\n| Total segment operating expenses | $37,475 | $22,290 |\n\nThe Company has made planned increases in staff in order to increase sales, accelerate development of certain software enhancements and reduce delivery times for software. These staff increases have resulted in a significant increase in salaries and benefits, which has contributed to the net losses of the Software Solutions Segment for the years ended December 31, 2000 and 1999. 
In January 2001, a reduction in the work force took place with the objective of reducing costs to bring them more in line with anticipated revenue.\n\nThe Company has an ongoing commitment to the development, maintenance and enhancement of its products and services. As a result of this commitment the Company has invested substantial amounts in research and development. In particular, the Company has invested and will continue to invest in new software products that will serve as the underlying application software that permits additional features and transactions",
      "page_start": 19,
      "page_end": 19,
      "source_file": "NASDAQ_EEFT_2000.pdf"
    },
    {
      "text": "At December 31, 2000 the Company had cash and cash equivalents of $7.2 million and working capital of $3.6 million. The Company had $2.1 million of restricted cash held as security with respect to cash provided by banks participating in Euronet's ATM network, to cover guarantees on financial instruments and as deposits with customs officials (See Note 7 to the Consolidated Financial Statements – Restricted cash). In addition to the assets held on the balance sheet at December 31, 1999, the Company held repurchased notes payable with a face value of 48.4 million Deutsche Marks ($23.3 million as at December 31, 2000 based on a USD to DM rate of 1:2.08) and a fair market value at December 31, 2000 of $9.3 million (See Note 20 to the Consolidated Financial Statements – Financial instruments).\n\nOn June 28, 2000 the Company entered into an unsecured revolving credit agreement (the \"Credit Agreement\") providing a facility of up to $4.0 million from three shareholders as follows: DST Systems in the amount of $2.4 million; Hungarian-American Enterprise Fund in the amount of $1.0 million; and Michael J. Brown in the amount of $600,000. The facility was available to be drawn upon until December 28, 2000, with repayment of any draws being due June 28, 2001. 
On December 28, 2000 the facility was amended and renewed for a further six months and is available to be drawn until June 28, 2001, with repayment of any draws being due December 28, 2001. Draws on the facility will accrue interest at 10 percent per annum, payable quarterly. A \"commitment\" fee was paid for the initial facility of 100,000 warrants issued pro-rata to the lenders with a warrant strike price set at the average share price, as quoted on NASDAQ for the 10 trading days prior to the warrant issue date, less 10 percent. An additional fee of 100,000 warrants, on the same terms, was paid for the subsequent extension of the facility. Warrants are to be issued on similar terms and conditions for each draw on the facility at the rate of 80,000 warrants for each $1.0 million of funds drawn. As of March 1, 2001, the Company had not made any draws under the Credit Agreement.\n\nOn February 25, 2000 the Company entered into two subscription agreements for the sale of an aggregate of 650,000 new common shares of the Company. Closing under those agreements took place on March 13, 2000. These agreements were signed with certain accredited investors in transactions exempt from registration under the exemptions provided in Section 4(2) and Regulation D of the Act. The purchase price of each share was $6.615, which represents ninety percent of the average closing price for the ten trading days prior to and including February 15, 2000. The aggregate amount of proceeds to the Company from the private placement was $4.3 million. 
Under each of the agreements, for each two shares of common stock purchased in the private placement, the purchasers were issued one warrant to purchase a share of Euronet common stock at an exercise price of $11.615, expiring in each case on the one-year anniversary date of the subscription agreement.\n\nIn April 2000 the Company entered into two separate subscription agreements for the sale of an aggregate of 354,777 new common shares of the Company. Of the total new shares, closing with respect to 254,777 shares took place on April 10, 2000, and closing with respect to 100,000 shares took place on May 4, 2000. These agreements were signed with certain foreign persons in transactions exempt from registration under the exemption provided in Regulation S of the Act. The weighted average purchase price of each share was $7.50. The aggregate amount of proceeds to the Company from the private placement was $2.7 million. Under each of the agreements, for each two shares of common stock purchased in the private placement, the purchaser was issued one warrant to purchase a share of Euronet common stock at a weighted average exercise price of $12.50, expiring in each case on the one-year anniversary date of the subscription agreement.\n\nIn July 2000 the Company entered into subscription agreements for the sale of 877,946 new common shares of the Company. These agreements were signed with accredited investors in transactions exempt from registration pursuant to the exemptions provided in Section 4(2) and Regulation D of the Act. Closing with respect to such sale took place on July 14 and August 29, 2000. The purchase price of each share was $6.97. The aggregate amount of proceeds to the Company from the private placement was $6.1 million.\n\nThe Company leases many of its ATMs under capital lease arrangements that expire between 2001 and 2005. The leases bear interest between 8% and 12% per annum. 
As of December 31, 2000 the Company owed $11.5 million under such capital lease arrangements. (See Note 15 to the Consolidated Financial Statements – Leases.)\n\nThe Company expects that its capital requirements will continue in the future but will not be as great as they were in the past, as the Company intends to continue to promote its outsourcing capabilities and re-deploy under-performing ATMs currently operating in the network. This strategy should reduce the Company's reliance on capital expenditures in the future as the business continues to grow. Fixed asset purchases and capital lease payments for 2001 are expected to be approximately $6.2 million in the Company's existing markets, notably Western and Central Europe. Acquisitions of related ATM businesses and investments in new markets in furtherance of the Company's strategy may require additional capital expenditures.\n\nBased on the Company's current business plan and financial projections, the Company expects to continue to reduce operating losses and net cash used in operating activities in 2001. In the Network Services Segment, the Company anticipates that increased transaction levels in its ATM network will result in additional revenues without a corresponding increase in expenses. In addition, the Company expects to further expand its ATM outsourcing services and offer new value-added services, which will provide continued revenue growth without significantly increasing direct operating expenses or capital investments. In the Software Solutions Segment, the Company expects that the benefits of a restructuring program commenced in the first quarter of 2001 will reduce the operating losses and bring operating costs more in line with anticipated revenues. The Company believes that the credit facility, certain asset sales and cash and cash equivalents will provide the Company with sufficient capital until it achieves positive cash flow. 
As a result, the Company believes it has sufficient liquidity resources to meet current and future cash requirements.\n\n#### BALANCE SHEET ITEMS\n\n**Cash and Cash Equivalents** The decrease of cash and cash equivalents to $7.2 million at December 31, 2000 from $15.0 million at December 31, 1999 is due primarily to the net effects of working capital movements, foreign exchange gains and losses, the settlement of a forward foreign exchange contract, private placement of common shares, capital expenditures and capital lease payments, and operating losses for the year ended December 31, 2000. (See Note 21 to the Consolidated Financial Statements – Reconciliation of net loss to net cash used in operating activities and the Consolidated Statements of Cash Flows.)\n\n**Restricted Cash** Restricted cash decreased to $2.1 million at December 31, 2000 from $10.9 million at December 31, 1999. The majority of restricted cash was held as security with respect to cash provided in Hungary by banks participating in Euronet's ATM network, to cover",
      "page_start": 21,
      "page_end": 21,
      "source_file": "NASDAQ_EEFT_2000.pdf"
    },
    {
      "text": "In the week of March 13, 2000, the Company entered into put options with Merrill Lynch to sell Euro 79.0 million for $75.1 million on May 26, 2000. The contracts were purchased to limit the Company's exposure on the call option described above against a fall of the Euro below $0.95.\n\nThe Company was required to cash collateralize the net fair value of such option contracts measured on a mark-to-market basis, and on May 26, 2000, the Company had on deposit $8.3 million with Merrill Lynch.\n\nOn May 26, 2000, the rate of the Euro was $0.9118 and the Company settled the above option contracts in the amount of $8.3 million, resulting in a total net loss on such contracts of $10.3 million inclusive of the cost of the contracts. 
At December 31, 2000, the Company had not entered into any further option contracts.\n\n#### **(15) Leases**\n\n- (a) Capital leases\nThe Company leases many of its ATMs under capital lease agreements that expire between 2001 and 2005 and bear interest at rates between 8% and 12%. Lease installments are paid on a monthly, quarterly or semi-annual basis. Euronet has the right to extend the term of certain leases at the conclusion of the basic lease period.\n\nThe gross amounts of the ATMs and computer equipment and the related accumulated amortization recorded under capital leases were as follows (in thousands, at December 31):\n\n| | 2000 | 1999 |\n| --- | --- | --- |\n| ATMs | $13,924 | $18,027 |\n| Other | 366 | 768 |\n| | $14,290 | $18,795 |\n| Less accumulated amortization | (3,429) | (4,813) |\n| Net book value | $10,861 | $13,982 |\n\nDepreciation of assets held under capital leases amounted to $2.0 million, $2.1 million, and $2.9 million for the years ended December 31, 2000, 1999, and 1998, respectively, and is included in depreciation and amortization expense.\n\n- (b) Operating leases\nThe Company also has noncancelable operating rental leases for office space which expire over the next 3 to 9 years. Rent expense under these leases amounted to $1.4 million, $2.1 million, and $1.1 million for the years ended December 31, 2000, 1999, and 1998, respectively.\n\n- (c) Future minimum lease payments\nFuture minimum lease payments under the capital leases and the noncancelable operating leases (with initial or remaining lease terms in excess of one year) as of December 31, 2000 are:\n\n| (in thousands) | Capital Leases | Operating Leases |\n| --- | --- | --- |\n| Year ending December 31, | | |\n| 2001 | 5,137 | 1,315 |\n| 2002 | 4,470 | 1,049 |\n| 2003 | 2,951 
| 779 |\n| 2004 | 1,512 | 515 |\n| 2005 | 363 | 515 |\n| 2006 and thereafter | — | 82 |\n| Total minimum lease payments | 14,433 | |\n| Less amounts representing interest | (2,933) | |\n| Present value of net minimum capital lease payments | 11,500 | |\n| Less current installments of obligations under capital leases | (3,466) | |\n| Long-term capital lease obligations | $8,034 | |",
      "page_start": 37,
      "page_end": 37,
      "source_file": "NASDAQ_EEFT_2000.pdf"
    },
    {
      "text": "**Euronet Worldwide Annual Report 2000**\n\nSECURE FINANCIAL TRANSACTIONS ANY TIME, ANY PLACE",
      "page_start": 0,
      "page_end": 0,
      "source_file": "NASDAQ_EEFT_2000.pdf"
    },
    {
      "text": "Existing industry practice related to non-gaming revenues already complied with EITF 01-9. The retail value of accommodations, food and beverage, and other services furnished to guests without charge is included in gross revenue and then deducted as promotional allowances. The estimated cost of providing such promotional allowances is primarily included in casino expenses as follows:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n| --- | --- | --- | --- |\n| Rooms | $63,652 | $64,103 | $60,544 |\n| Food and beverage | 191,695 | 178,399 | 169,676 |\n| Other | 25,213 | 21,560 | 19,920 |\n| Total | $280,560 | $264,062 | $250,140 |\n\n**Advertising.** The Company expenses advertising costs the first time the advertising takes place. Advertising expense, which is generally included in general and administrative expenses, was $57 million, $54 million and $52 million for 2004, 2003 and 2002, respectively.\n\n**Corporate expense.** Corporate expense represents unallocated payroll and aircraft costs, professional fees and various other expenses not directly related to the Company's casino resort operations. 
In addition, corporate expense includes the costs associated with the Company's evaluation and pursuit of new business opportunities, which are expensed as incurred until development of a specific project has become probable.\n\n**Preopening and start-up expenses.** The Company accounts for costs incurred during the preopening and start-up phases of operations in accordance with Statement of Position 98-5, \"Reporting on the Costs of Start-up Activities\". Preopening and start-up costs, including organizational costs, are expensed as incurred. Costs classified as preopening and start-up expenses include payroll, outside services, advertising, and other expenses related to new or start-up operations and new customer initiatives.\n\n**Income per share of common stock.** The weighted-average number of common and common equivalent shares used in the calculation of basic and diluted earnings per share consisted of the following:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n| --- | --- | --- | --- |\n| Weighted-average common shares outstanding used in the calculation of basic earnings per share | 139,663 | 148,930 | 157,809 |\n| Potential dilution from stock options and restricted stock | 5,003 | 2,662 | 2,131 |\n| Weighted-average common and common equivalent shares used in the calculation of diluted earnings per share | 144,666 | 151,592 | 159,940 |\n\n**Stock-based compensation.** The Company accounts for stock-based compensation, including employee stock option plans, in accordance with Accounting Principles Board Opinion No. 25, \"Accounting for Stock Issued to Employees\" and the Financial Accounting Standards Board's Interpretation No. 44, \"Accounting for Certain Transactions involving Stock Compensation, an interpretation of APB Opinion No. 25\", and discloses supplemental information in accordance with Statement of Financial Accounting Standards No. 123, \"Accounting for Stock-Based Compensation\" (\"SFAS 123\"), as amended by Statement of Financial Accounting Standards No. 148, \"Accounting for Stock-Based Compensation – Transition and Disclosure\" (\"SFAS 148\"). The Company does not incur compensation expense for employee stock options when the exercise price is at least 100% of the market value of the Company's common stock on the date of grant. For disclosure purposes, employee stock options are measured at fair value and compensation is assumed to be amortized over the vesting periods of the options.\n\nIn December 2004, the FASB issued FASB Statement No. 123 (revised 2004), \"Share-Based Payment\" (\"SFAS 123(R)\"). Under the original standard, SFAS No. 123, companies had the option of recording stock options issued to employees at",
      "page_start": 59,
      "page_end": 59,
      "source_file": "NYSE_MGM_2004.pdf"
    },
    {
      "text": "#### **(19) Business Segment Information**\n\nEuronet and its subsidiaries operate in two business segments: (1) a segment that provides an independent shared ATM network and other electronic payment network services to banks, retail and financial institutions (the \"Network Services Segment\"); and (2) a segment that produces application software and solutions for payment and transaction delivery systems (the \"Software Solutions Segment\"). These business segments are supported by a corporate service segment which provides corporate and other administrative services which are not directly identifiable with the two business segments (the \"Corporate Services Segment\"). The accounting policies of each segment are the same as those described in the summary of significant accounting policies. The Company evaluates performance based on profit or loss from operations before income taxes not including nonrecurring gains and net loss. 
Prior period segment information has been restated to conform to the current period's presentation.\n\nAs the Network Services Segment continued to grow throughout 1999, the Company's management began to divide the internal organization of the segment into Sub-segments. Accordingly, beginning in January 2000, the Company divided the Network Services Segment into three Sub-segments: \"Central European Sub-segment\" (including Hungary, Poland, the Czech Republic, Croatia, Greece and Romania), \"Western European Sub-segment\" (including Germany, France, and the United Kingdom) and \"Other Operations Sub-segment\" (including the United States and unallocated processing center costs). Where practical, certain amounts have been reclassified to reflect the change in internal reporting. The Company is unable to present Network Services Segment assets by Sub-segment as of December 31, 1999. Prior to January 1, 2000, certain assets that were used to provide support services to the Company as a whole were included in the assets in the balance sheet of the Company's wholly owned Hungarian subsidiary, Bank Tech. In order to segregate corporate assets from those of the Hungarian operations, these assets were transferred as of December 31, 1999, from Bank Tech to an existing Hungarian shell company, Administrative Services. 
Those assets are now shown under the Other Operations Sub-segment.\n\nThe following tables present the segment results of the Company's operations for the years ended December 31, 2000, 1999 and 1998.\n\nYear Ended December 31, 2000 (in thousands):\n\n| | Central Europe | Western Europe | Other | Network Services Total | Software Solutions | Corporate Services | Total |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| Total revenues | $18,599 | $16,615 | $1,700 | $36,914 | $16,006 | $— | $52,920 |\n| Total operating expenses | (21,669) | (18,901) | (2,409) | (42,979) | (37,475) | (7,862) | (88,316) |\n| Operating loss | (3,070) | (2,286) | (709) | (6,065) | (21,469) | (7,862) | (35,396) |\n| Interest income | 289 | 65 | 190 | 544 | 103 | 442 | 1,089 |\n| Interest expense | (1,016) | (168) | (150) | (1,334) | — | (9,495) | (10,829) |\n| Foreign exchange (loss)/gain, net | (616) | (494) | (155) | (1,265) | 1 | (1,963) | (3,227) |\n| Net loss before income taxes | $(4,413) | $(2,883) | $(824) | $(8,120) | $(21,365) | $(18,878) | $(48,363) |\n| Segment assets | $25,697 | $16,755 | $3,652 | $46,104 | $9,433 | $5,353 | $60,890 |\n| Fixed assets | 17,145 | 11,707 | 1,682 | 30,534 | 968 | 155 | 31,657 |\n| Depreciation and amortization | 3,977 | 2,884 | 1,100 | 7,961 | 2,215 | 208 | 10,384 
|\n| Asset write down | 6 6 8. | 1 1 0. | | — | | 7 7 8. | 1 1 , 1 9 0 | | —. | 1 1 , 9 6 8. |\n\n| | Year Ended December 31, 2000 | | | | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | Network Serv i c e s | | | | | | | | | | |\n| | | | | | | N e t w o r k | | | | | |\n| | Central | | We s t e rn | | | S e rvices | | S o f t w a re | C o r p o r a t e | | |\n| | E u rope | | E u rope | O t h e r | | To t a l | | Solutions | S e rvices | | To t a l |\n| | | | | | | (in thousands) | | | | | |\n| Total Revenues | $ | 1 2 , 6 6 4. | $ 1 2 , 6 3 7. | $ | 1 , 2 0 2. | $ 2 6 , 5 0 3. | | $ 1 5 , 1 4 9. | $ | —. | $ 4 1 , 6 5 2. |\n| Total operating expenses | | ( 2 0 , 6 8 3 ) | ( 1 6 , 4 7 7 ) | | ( 2 , 2 5 0 ) | ( 3 9 , 4 1 0 ) | | ( 2 2 , 2 9 0 ) | ( 6 , 7 5 0 ) | | ( 6 8 , 4 5 0 ) |\n| Operating loss. | | ( 8 , 0 1 9 ) | ( 3 , 8 4 0 ) | | ( 1 , 0 4 8 ) | ( 1 2 , 9 0 7 ) | | ( 7 , 1 4 1 ) | ( 6 , 7 5 0 ) | | ( 2 6 , 7 9 8 ) |\n| I n t e rest income | | 4 4 8. | 1 6. | | 1 0 3. | | 5 6 7. | 1 4 8. | 1 , 2 3 5. | | 1 , 9 5 0. |\n| I n t e rest expense | | ( 9 8 1 ) | ( 1 0 1 ) | | ( 5 1 ) | | ( 1 , 1 3 3 ) | —. | ( 9 , 7 6 6 ) | | ( 1 0 , 8 9 9 ) |\n| F o reign exchange (loss)/gain, net | | ( 3 9 9 ) | ( 1 9 ) | | ( 1 4 6 ) | | ( 5 6 4 ) | 2. | ( 1 , 5 4 8 ) | | ( 2 , 1 1 0 ) |\n| Net loss before income taxes | $ | ( 8 , 9 5 1 ) | $ ( 3 , 9 4 4 ) | $ | ( 1 , 1 4 2 ) | $ ( 1 4 , 0 3 7 ) | | $ ( 6 , 9 9 1 ) | $ ( 1 6 , 8 2 9 ) | | $ ( 3 7 , 8 5 7 ) |\n| Segment assets | | n / a. | n / a. | | n / a. | $ 5 6 , 6 5 8. | | $ 2 1 , 5 2 7. | $ 1 8 , 6 5 9. | | $ 9 6 , 8 4 4. |\n| Fixed assets | | n / a. | n / a. | | n / a. | 3 5 , 4 3 8. | | 1 , 1 1 3. | 1 4 2. | | 3 6 , 6 9 3. |\n| D e p reciation and amort i z a t i o n | | n / a. | n / a. | | n / a. | | 7 , 4 1 0. | 2 , 6 8 3. | 1 4 5. | | 1 0 , 2 3 8. 
|", - "page_start": 42, - "page_end": 42, - "source_file": "NASDAQ_EEFT_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EEFT_2000.pdf", - "query": "What was the share of revenues of Netwrok Wordwide made in Poland and Hungary in 2000 ?", - "target_page": 24, - "target_passage": "In 2000, 30% of the Company’s revenues were generated in Poland and Hungary", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### **(19) Business Segment Information**\n\nE u ronet and its subsidiaries operate in two business segments: (1) a segment that provides an independent shared ATM network and other e l e c t ronic payment network services to banks, retail and financial institutions (the \"Network Services Segment\"); and (2) a segment that p roduces application software and solutions for payment and transaction delivery systems (the \"Software Solutions Segment\"). These business segments are supported by a corporate service segment which provides corporate and other administrative services which are not d i rectly identifiable with the two business segments, (the \"Corporate Services Segment\"). The accounting policies of each segment are the same as those described in the summary of significant accounting policies. The Company evaluates perf o rmance based on profit or loss fro m operations before income taxes not including nonre c u rring gains and net loss. Prior period segment information has been restated to conform to the current period's presentation.\n\nAs the Network Services Segment continued to grow throughout 1999, the Company's management began to divide the internal org a n i z a t i o n of the segment into Sub-segments. 
Accord i n g l y, beginning in January 2000, the Company divided the Network Services Segment into thre e Sub-segments: \"Central European Sub-segment\" (including Hungary, Poland, the Czech Republic, Croatia, Greece and Romania), \"We s t e rn E u ropean Sub-segment\" (including Germ a n y, France, and the United Kingdom) and \"Other Operations Sub-segment\" (including the United States and unallocated processing center costs). Where practical, certain amounts have been reclassified to reflect the change in intern a l re p o rting. The Company is unable to present Network Services Segment assets by Sub-segment as of December 31, 1999. Prior to January 1, 2000, certain assets that were used to provide support services to the Company as a whole were included in the assets in the balance sheet of the Company's wholly owned Hungarian subsidiary, Bank Tech. In order to segregate corporate assets from those of the Hungarian operations, these assets were transferred as of December 31, 1999, from Bank Tech to an existing Hungarian shell company, Administrative S e rvices. Those assets are now shown under the Other Operations Sub-segment.\n\nThe following tables present the segment results of the Company's operations for the years ended December 31, 2000, 1999 and 1998.\n\n| | | Year Ended December 31, 2000 | | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | Network Serv i c e s | | | | | | | | | |\n| | | | | | N e t w o r k | | | | | |\n| | Central | We s t e rn | | | S e rvices | | S o f t w a re | C o r p o r a t e | | |\n| | E u rope | E u rope | O t h e r | | To t a l | | Solutions | S e rvices | | To t a l |\n| | | | | | (in thousands) | | | | | |\n| Total Revenues | $ 1 8 , 5 9 9. | $ 1 6 , 6 1 5. | $ | 1 , 7 0 0. | $ | 3 6 , 9 1 4. | $ 1 6 , 0 0 6. | $ | —. | $ 5 2 , 9 2 0. 
|\n| Total operating expenses | ( 2 1 , 6 6 9 ) | ( 1 8 , 9 0 1 ) | | ( 2 , 4 0 9 ) | | ( 4 2 , 9 7 9 ) | ( 3 7 , 4 7 5 ) | ( 7 , 8 6 2 ) | | ( 8 8 , 3 1 6 ) |\n| Operating loss. | ( 3 , 0 7 0 ) | ( 2 , 2 8 6 ) | | ( 7 0 9 ) | | ( 6 , 0 6 5 ) | ( 2 1 , 4 6 9 ) | ( 7 , 8 6 2 ) | | ( 3 5 , 3 9 6 ) |\n| I n t e rest income | 2 8 9. | 6 5. | | 1 9 0. | | 5 4 4. | 1 0 3. | | 4 4 2. | 1 , 0 8 9. |\n| I n t e rest expense | ( 1 , 0 1 6 ) | ( 1 6 8 ) | | ( 1 5 0 ) | | ( 1 , 3 3 4 ) | —. | ( 9 , 4 9 5 ) | | ( 1 0 , 8 2 9 ) |\n| F o reign exchange (loss)/gain, net | ( 6 1 6 ) | ( 4 9 4 ) | | ( 1 5 5 ) | | ( 1 , 2 6 5 ) | 1. . | ( 1 , 9 6 3 ) | | ( 3 , 2 2 7 ) |\n| Net loss before income taxes | $ ( 4 , 4 1 3 ) | $ ( 2 , 8 8 3 ) | $ | ( 8 2 4 ) | $ | ( 8 , 1 2 0 ) | $( 2 1 , 3 6 5 ) | $( 1 8 , 8 7 8 ) | | $ ( 4 8 , 3 6 3 ) |\n| Segment assets | $ 2 5 , 6 9 7. | $ 1 6 , 7 5 5 | $ | 3 , 6 5 2. | | $ 4 6 , 1 0 4. | $ 9 , 4 3 3. | $ 5 , 3 5 3. | | $ 6 0 , 8 9 0. |\n| Fixed assets | 1 7 , 1 4 5. | 1 1 , 7 0 7. | | 1 , 6 8 2. | | 3 0 , 5 3 4. | 9 6 8. | | 1 5 5. | 3 1 , 6 5 7. |\n| D e p reciation and amort i z a t i o n | 3 , 9 7 7. | 2 , 8 8 4. | | 1 , 1 0 0. | | 7 , 9 6 1. | 2 , 2 1 5. | | 2 0 8. | 1 0 , 3 8 4. |\n| Asset write down | 6 6 8. | 1 1 0. | | — | | 7 7 8. | 1 1 , 1 9 0 | | —. | 1 1 , 9 6 8. |\n\n| | Year Ended December 31, 2000 | | | | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | Network Serv i c e s | | | | | | | | | | |\n| | | | | | | N e t w o r k | | | | | |\n| | Central | | We s t e rn | | | S e rvices | | S o f t w a re | C o r p o r a t e | | |\n| | E u rope | | E u rope | O t h e r | | To t a l | | Solutions | S e rvices | | To t a l |\n| | | | | | | (in thousands) | | | | | |\n| Total Revenues | $ | 1 2 , 6 6 4. | $ 1 2 , 6 3 7. | $ | 1 , 2 0 2. | $ 2 6 , 5 0 3. | | $ 1 5 , 1 4 9. | $ | —. | $ 4 1 , 6 5 2. 
|\n| Total operating expenses | | ( 2 0 , 6 8 3 ) | ( 1 6 , 4 7 7 ) | | ( 2 , 2 5 0 ) | ( 3 9 , 4 1 0 ) | | ( 2 2 , 2 9 0 ) | ( 6 , 7 5 0 ) | | ( 6 8 , 4 5 0 ) |\n| Operating loss. | | ( 8 , 0 1 9 ) | ( 3 , 8 4 0 ) | | ( 1 , 0 4 8 ) | ( 1 2 , 9 0 7 ) | | ( 7 , 1 4 1 ) | ( 6 , 7 5 0 ) | | ( 2 6 , 7 9 8 ) |\n| I n t e rest income | | 4 4 8. | 1 6. | | 1 0 3. | | 5 6 7. | 1 4 8. | 1 , 2 3 5. | | 1 , 9 5 0. |\n| I n t e rest expense | | ( 9 8 1 ) | ( 1 0 1 ) | | ( 5 1 ) | | ( 1 , 1 3 3 ) | —. | ( 9 , 7 6 6 ) | | ( 1 0 , 8 9 9 ) |\n| F o reign exchange (loss)/gain, net | | ( 3 9 9 ) | ( 1 9 ) | | ( 1 4 6 ) | | ( 5 6 4 ) | 2. | ( 1 , 5 4 8 ) | | ( 2 , 1 1 0 ) |\n| Net loss before income taxes | $ | ( 8 , 9 5 1 ) | $ ( 3 , 9 4 4 ) | $ | ( 1 , 1 4 2 ) | $ ( 1 4 , 0 3 7 ) | | $ ( 6 , 9 9 1 ) | $ ( 1 6 , 8 2 9 ) | | $ ( 3 7 , 8 5 7 ) |\n| Segment assets | | n / a. | n / a. | | n / a. | $ 5 6 , 6 5 8. | | $ 2 1 , 5 2 7. | $ 1 8 , 6 5 9. | | $ 9 6 , 8 4 4. |\n| Fixed assets | | n / a. | n / a. | | n / a. | 3 5 , 4 3 8. | | 1 , 1 1 3. | 1 4 2. | | 3 6 , 6 9 3. |\n| D e p reciation and amort i z a t i o n | | n / a. | n / a. | | n / a. | | 7 , 4 1 0. | 2 , 6 8 3. | 1 4 5. | | 1 0 , 2 3 8. |", - "page_start": 42, - "page_end": 42, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "The subsidiaries of Euronet Services Inc., all of which are, directly or indire c t l y, wholly owned are:\n\n- EFT Services Holding B.V., incorporated in the Netherlands\n- Euronet Banktechnikai Szolgaltato Kft. (\"Bank Tech\"), incorporated in Hungary\n- Euronet Adminisztracios Szolgaltato Kft. (\"Administrative Services\") (formerly SatComNet), incorporated in Hungary\n- Bankomat 24/Euronet Sp. z o.o. (\"Bankomat\"), incorporated in Poland\n- EFT-Usluge d o.o., incorporated in Croatia\n- Euronet Services GmbH, incorporated in Germany\n- EFT Services France SAS, incorporated in France\n- Euronet Services spol. 
s.r.o., incorporated in the Czech Republic\n- Euronet Services SRL, incorporated in Romania\n- Euronet Services (UK) Limited, incorporated in the United Kingdom\n- Euronet USA Inc. (formerly Arkansas Systems, Inc.) (\"Euronet USA\") incorporated in Arkansas, United States of America\n- EFT Network Services LLC (\"Dash\"), incorporated in Arkansas, United States of America\n- Euronet Holding N.V., incorporated in the Netherlands Antilles (in liquidation)\n- Euronet Eft Services Hellas, incorporated in Greece\n\n#### **( 2 ) Financial Position and Basis of Preparation**\n\nThe Company generated an operating loss of $35.4 million and negative cash flows from operations of $16.4 million for the year ended December 31, 2000, primarily due to the significant costs associated with its investment in delivery, support, re s e a rch and development in its s o f t w a re subsidiary which was acquired in December 1998. Based on the Company's current business plan and financial projections, the Company expects to reduce operating losses and net cash used in operating activities in 2001. In the Network Services Segment, the Company anticipates that increased transaction levels in its ATM network will result in additional revenues without a corresponding incre a s e in expenses. In addition, the Company expects to further expand its ATM outsourcing services and offer new value-added services, which will p rovide continued revenue growth without significantly increasing direct operating expenses or capital investments. In the Software Solutions Segment, the Company expects reduced operating expenses and improved operating perf o rmance due to a cost re s t ructuring pro g r a m i n t roduced in the first quarter of 2001. 
The Company believes that the credit facility (see note 13), certain asset sales and cash and cash equivalents at December 31, 2000 will provide the Company with sufficient cash re s o u rces until it achieves positive cash flow.\n\nBased on the above, management is confident that the Company will be able to continue as a going concern. Accord i n g l y, these consolidated financial statements have been pre p a red on a going concern basis which contemplates the continuation and expansion of trading activities as well as the realization of assets and liquidation of liabilities in the ord i n a ry course of business.\n\n#### **( 3 ) S u m m a ry of Significant Accounting Policies and Practices**\n\n- (a) Basis of presentation\nThe accompanying consolidated financial statements have been pre p a red in accordance with generally accepted accounting principles in the United States of America.\n\nAll significant intercompany balances and transactions have been eliminated.\n\n- (b) Foreign currencies\nF o reign currency transactions are re c o rded at the exchange rate prevailing on the date of the transactions. Assets and liabilitiesdenominated in foreign currencies are re m e a s u red at rates of exchange on the balance sheet date. Resulting gains and losses on f o reign currency transactions are included in the consolidated statement of operations and comprehensive loss.\n\nThe financial statements of foreign subsidiaries where the local currency is the functional currency are translated to U.S. dollars using (i) exchange rates in effect at period end for assets and liabilities, and (ii) average exchange rates during the period for results of operations. Adjustments resulting from translation of such financial statements are reflected in accumulated other comprehensive income as aseparate component of consolidated stockholders' equity.\n\nThe financial statements of foreign subsidiaries where the functional currency is the U.S. 
dollar are re m e a s u red using historical exchangerates for nonmonetary items while current exchange rates are used for monetary items. Foreign exchange gains and losses arising from the re m e a s u rement are re p o rted in the consolidated statement of operations and comprehensive loss.\n\n- (c) Cash equivalents\nFor the purposes of the consolidated statements of cash flows, the Company considers all highly liquid debt instruments purchased with an original maturity of three months or less to be cash equivalents.\n\n(d) Investment securities\n\nThe Company has classified its investment securities as held-to-maturity or available-for-sale. Held-to-maturity securities are those securities in which the Company has the ability and intent to hold the security to maturity. All securities not included in held-to-maturity a re classified as available-for sale.", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "#### Stock Based Compensation\n\nThe Company grants stock options for a fixed number of shares to employees with an exercise price equal to the fair value of the shares at the date of grant. The Company accounts for stock option grants using the intrinsic value method prescribed by APB Opinion No. 25, \"Accounting for Stock Issued to Employees\" (\"APB 25\"). Under APB 25, because the exercise price of the Company's employee stock options equals the market price of the underlying stock on the date of grant, no compensation expense is recognized. Had compensation cost for the plan been determined consistent with Statement of Financial Accounting Standards No. 123, \"Accounting for Stock-Based Compensation,\" the Company's net earnings and earnings per share would have been reduced by insignificant amounts on a pro forma basis for the years ended December 31, 2002, 2001 and 2000. 
Note 15 provides additional information on the Company's stock option plan.\n\n#### Stock Repurchase\n\nOn July 25, 2000, the Company approved a stock repurchase plan, authorizing the repurchase of up to 740,690 shares of the Company's common stock. During the years ended December 31, 2001 and 2000, the Company repurchased 9,900 and 126,100 shares, respectively. The treasury shares were purchased for $4,240,119, which represented an average purchase price of $31.18 per share. The treasury shares were retired in 2001.\n\n#### Per Share Data\n\nNet earnings per share (\"EPS\") are computed by dividing net earnings by the weighted average number of shares of common stock outstanding during the period. The Company calculates dilutive EPS assuming all outstanding options to purchase common stock have been exercised at the beginning of the year (or the time of issuance, if later.) The dilutive effect of the outstanding options is reflected by application of the treasury stock method, whereby the proceeds from the exercised options are assumed to be used to purchase common stock at the average market price during the period. 
The following table reconciles the computation of basic EPS to dilutive EPS:\n\n| | | Weighted | | |\n| --- | --- | --- | --- | --- |\n| | Net | Average | | Per Share |\n| | Earnings | Shares | | Amount |\n| For the year ended December 31, 2002: | | | | |\n| Net earnings per share, basic | $33,952,550 | 12,359,966 | $ | 2.75 |\n| Effect of stock options | - | 47,523 | | |\n| Net earnings per share, assuming dilution | $33,952,550 | 12,409,489 | $ | 2.74 |\n| For the year ended December 31, 2001: | | | | |\n| Net earnings per share, basic | $29,354,505 | 12,318,346 | $ | 2.38 |\n| Effect of stock options | - | 45,323 | | |\n| Net earnings per share, assuming dilution | $29,354,505 | 12,363,669 | $ | 2.37 |\n| For the year ended December 31, 2000: | | | | |\n| Net earnings per share, basic | $28,316,047 | 12,426,344 | $ | 2.28 |\n| Effect of stock options | - | 28,355 | | |\n| Net earnings per share, assuming dilution | $28,316,047 | 12,454,699 | $ | 2.27 |\n\n#### Reclassifications\n\nCertain 2001 and 2000 amounts have been reclassified to conform to the 2002 presentation.", - "page_start": 75, - "page_end": 75, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## **Bridging electronic payments in emerging markets**\n\n*New business solutions are thriving as traditional banking environments transition rapidly from cash to electronic payments and transactions.*\n\nhile credit is used for electronic transactions in Western Europe and North America, the model is quite different in many \"cash-based\" economies around the world. 
And that's where Euronet continues to look for new opportunities – particularly in the emerging W\n\n## **The Promise of Emerging Markets**\n\nExpanding Poland's Payment Infrastructure\n\nAlthough still under-\n\ndeveloped compared to western economies, Poland is one of the most dynamic and promising markets in all of Europe.\n\nSince entering Poland in 1995, Euronet Worldwide has become one of the largest transaction processing service providers in the country, establishing a network of over 600 ATMs and providing software to eight major banks. Our agreement for electronic airtime distribution with all three mobile phone operators in the country – ERA GSM, Plus GSM and IDEA Centertel – further confirms that Euronet is embedded in the financial payments fabric in Poland.\n\nmarkets of Central Europe, the Middle East, Africa, Asia-Pacific, Latin America and the Caribbean.\n\nAlthough bank card use is just starting in these markets, the demand for non-cash payment is gaining momentum. The foundation for this marketplace is rapidly taking shape with greater technology support, well-designed infrastructure and rapidly growing networks, as well as a critical mass of users. So the shift to new electronic payment channels is on, and the number of electronic financial transactions has grown tremendously.\n\nEuronet Worldwide continuously monitors cash-based economies to identify their readiness to embrace electronic payment and transaction alternatives. With ATM, point-of-sale (POS), interactive voice response (IVR), Internet, mobile solutions and other innovative payment options, we can play a vital role in developing the electronic payments fabric of these countries.\n\nIn Greece, we are delivering ATM outsourcing solutions for a number of multinational banks with Greek operations. For Credigen Bank in Hungary, we are helping to open up the consumer credit market to a new base of shoppers who can perform POS and ATM transactions over Euronet's network. 
And in the Czech Republic we are providing outsourcing services for ABN AMRO's Visa Charge Card Program.\n\nLooking ahead, we see great potential for extending Euronet's brand into cash-based markets and for connecting a new world of users to dynamic transaction services.", - "page_start": 11, - "page_end": 11, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## **To Our Shareholders**\n\n*In our report to you last year, we noted that Euronet's success has been built in large part on the question \"Would you like another transaction?\" The answer from our clients and their customers was a resounding \"Yes!\"* \n\n*To reflect the rapid changes taking place in financial transactions worldwide, even that question has evolved. So in 2000, we also began asking \"How would you like your next transaction?\"*\n\nIn 2000, Euronet Worldwide focused on providing ways people can access their financial accounts and transactions through various electronic touchpoints. New secure transaction types and touchpoints—ATMs, point-of-sale (POS) devices, the Internet and mobile phones—continued to fuel transaction growth every month. In 2000, we processed a record 52.7 million billable transactions, a 60% increase over 1999, and in December 2000, our transaction levels exceeded 5 million per month and continue to accelerate.\n\n> Taken together, our transaction growth and expanding number of consumer touchpoints translated into an accelerating and recurring revenue stream, which greatly improved our bottom line. Our 2000 revenue of $52.7 million represented a 27% increase over the company's 1999 revenue of $41.5 million. Euronet's 2000 EBITDA also improved $2.4 million, or 14.5%, over 1999.\n\n> > This year we continued to focus on our core business of ATM driving and transaction processing, and we pursued new transactions through our mobile and Internet banking solutions. 
We also implemented our bill payment initiative, starting with electronic payments for prepaid mobile airtime. We are pleased to report that in 2000 our Network Services business turned EBITDA positive and posted revenue of $36.9 million, an increase of 39% over 1999 revenue.\n\n> > > Additional milestones were reached through several new strategic partnerships we announced late in the year. Gemplus, Sila Communications and Aether Systems chose Euronet mobile products to supplement their product offerings, proving the strength of Euronet's mobile products. Teaming up with these partners will further increase the sales penetration of our suite of mobile payment solutions around the world.", - "page_start": 2, - "page_end": 2, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "#### **22. SUBSEQUENT EVENTS**\n\nOn 25 August 2000 the Company announced that it had reached two agreements for the placement of a total of 16,666,666 ordinary fully paid shares in the Company at an issue price of 30 cents each (Shares).\n\nThe first agreement was with Mr Mark Bradley, who agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, a further 3,441,666 within 7 days of that meeting.\n\nOn Mr Bradley being appointed a Director of the Company, in order to comply with the requirements of the Corporations Law and the ASX Listing Rules, the Company and Mr Bradley agreed to defer the first issue of Shares, making both issues conditional on shareholder approval.\n\nThe second agreement was with Clough Engineering Limited, pursuant to which it agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, 6,775,000 shares, within 7 days of that meeting.\n\nOn 15 June 2000 the Company announced that with effect from 1 July 2000 it acquired a 50% interest in OIS MOC Joint Venture Pty Ltd, to be 
paid for by the issue of 800,000 Shares in the Company. OIS MOC Joint Venture Pty Ltd owns the goodwill of a successful labour hire company. That company is to be renamed Mermaid Labour and Management Limited (MLML).\n\nMLML offers a full labour hire service inclusive of industrial relations consultancy, negotiating agreements and awards and were appropriate, provides ongoing management of the labour force.\n\nThe financial effect of the above events have not been reflected in these financial statements.\n\n# **2000 1999 Cents per Cents per Share Share** Basic earnings per share (0.62) 8.09 Diluted earnings per share (0.21) 8.05 **2000 1999 No. No.** Weighted average number of ordinary shares on issue used in the calculation of basic earnings per share 43,000,000 30,356,164\n\n#### **23. EARNINGS PER SHARE**", - "page_start": 56, - "page_end": 56, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "guarantees for financial instruments and as deposits with customs officials. The decrease resulted primarily from the settlement of the forw a rd f o reign exchange contracts using restricted cash and a release of restricted cash resulting from the posting of a surety bond with the Hungarian banking institution that supplies cash to the Company's ATM network in Hungary.\n\n**Trade Accounts** Trade accounts receivable increased to $9.5 million at December 31, 2000 from $7.9 million at December 31, 1999 due primarily to sales from the Software Solutions Segment and increased Network Services Segment revenues.\n\n**P r o p e r t y, Plant and Equipment** Net pro p e rt y, plant and equipment decreased to $31.7 million at December 31, 2000 from $36.7 million at December 31, 1999. This decrease is due primarily to a reduction in the rate of installation of ATMs and fixed asset additions. 
Fixed asset d e p reciation was in excess of fixed asset additions, and the write-off of $800,000 in ATM hard w a re further reduced the net fixed asset position.\n\n**Intangible Assets** The decrease in net intangible assets to $2.6 million at December 31, 2000 from $16.3 million at December 31, 1999 is due primarily to the $11.2 million write-down of goodwill and other identifiable intangible assets associated with the Software Solutions Segment (see Note 9 to the Consolidated Financial Statements – Intangibles). In addition, the decrease is the result of amortization of purchased intangibles a c q u i red in the Euronet USA acquisition in 1998, and the SBK and Dash acquisitions in 1999.\n\n**Current Liabilities** C u rrent liabilities decreased to $20.5 million at December 31, 2000 from $26.9 million at December 31, 1999. This decre a s e is due primarily to decreases in accrued expenses, billings in excess of costs and estimated earnings on software installation costs and settlement of the forw a rd foreign exchange contracts.\n\n**Capital Lease** Total capital lease obligations including current installments increased to $11.5 million at December 31, 2000 from $10.6 million at December 31, 1999. This increase is due primarily to additional capital leases resulting from the Company's purchase of Budapest Bank's AT M network, consisting of 147 ATMs on May 1, 2000.\n\n**Notes Payable** Notes payable increased to $77.2 million at December 31, 2000 from $72.8 million at December 31, 1999. This is the result of several transactions as follows:\n\n| | (in millions) | |\n| --- | --- | --- |\n| Balance at December 31, 1999 | $ | 7 2 . 8. |\n| U n realized foreign exchange gain (DEM vs. US$) | | (4.4) |\n| A c c retion of bond intere s t | | 8 . 8. |\n| Balance at December 31, 2000 | $ | 7 7 . 2. |\n\n**S t o c k h o l d e r's Deficit** Stockholders' deficit increased to $44.8 million at December 31, 2000 from $9.5 million at December 31, 1999. 
This is due to the net loss for the year ended December 31, 2000 of $49.6 million which was offset by an increase in additional paid in capital of $14.4 million due to the sale of 1,882,723 shares of common stock for proceeds of $13.0 million, the issue of $400,000 of warrants and the exercise of 390,231 stock options for proceeds of $900,000.\n\n#### **Year 2000 Compliance**\n\nThe Company's European and U.S. Year 2000 compliance teams re p o rted no material Year 2000 problems during the advent of the year 2000, either with Euro n e t 's own systems or the systems of its customers. The Company is unaware of any material Year 2000 complications to date.\n\n#### **Impact of New Accounting Pronouncements Not Yet Adopted**\n\n**S FAS 133** The Company is re q u i red to adopt Statement of Financial Accounting Standard (SFAS) No. 133 \"Accounting for Derivative I n s t ruments and Hedging Activities\" as amended by SFAS No. 138 for US GAAP re p o rting as of 1 January 2001. SFAS 133 and 138 establish accounting and re p o rting standards for derivative instruments, including certain derivative instruments embedded in other contracts (collectively re f e rred to as derivatives).\n\nIn accordance with SFAS No. 133, entities are re q u i red to carry all derivative instruments on the balance sheet at fair value. The accounting for movements in fair value of derivatives depends upon whether it has been designated and qualifies as part of a hedging relationship and, if so, the reason for holding it. If certain conditions are met, the Company may elect to designate a derivative instrument as a hedge of exposures. If the hedged exposure is a fair value exposure, movements in fair value are recognized in earnings with the offsetting gain or loss on the hedged item attributable to the hedged risk. 
If the hedged exposure is a cash flow exposure, the effective portion of the movement in fair value of the derivative i n s t rument is initially re p o rted as a component of other comprehensive income and subsequently reclassified into earnings at the time the f o recasted transaction impacts earnings. Amounts excluded from the assessment of hedge effectiveness as well as the ineffective portion of movements in fair value of the derivative instrument are re p o rted in earnings in the current period. Accounting for foreign currency hedges is similar to the accounting for fair value and cash flow hedges. If a derivative instrument is not designated as a hedge, movements in the fair value of derivative instruments are recognized in earnings.\n\nUnder the provisions of SFAS No. 133, the method that the Company will use to assess effectiveness of a hedge, as well as the measure m e n t a p p roach for determining the ineffectiveness of a hedge, must be established at the inception of a hedge. The Company formally documents all relationships between hedging instruments and hedged items as well as its risk management objective and strategy for entering into the transaction. This process includes linking derivatives designated as fair value or cash flow hedges to specific assets, liabilities or firm commitments on forecasted transactions. This process is repeated on a periodic basis. If at any time the Company determines a hedge is no longer eff e c t i v e , hedge accounting is immediately discontinued and the derivative is marked to market with any gain or loss re c o rded in earnings.\n\nThe Company adopted the provisions of SFAS No. 133 on 1 January 2001 and this had no impact on the Company's consolidated financial statements as the Company does not have any derivative financial instruments. Future changes in the fair value for any remaining trading securities will be re c o rded through earnings. 
Changes in fair value of available for sale securities will be re c o rded in other comprehensive income.", - "page_start": 22, - "page_end": 22, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "### **Bookmarks**\n\nBookmarks are included in the PDF for headings or Word bookmarks depending on the option selected.\n\n## **Availability**\n\nThe information in this article is applicable to the following versions of Word.\n\n- Word for Windows Version 2408 and later.\n- Word for Mac Version 16.89 and later.\n- Word for iOS Version 2.89 and later.\n- Word for Android Build 16.0.18025.XXXXX or later.\n- Word for the web Build 16.0.18025.XXXXX or later.\n\nIt is available to customers with Office 2024 or Office LTSC 2024 and to customers with a Microsoft 365 subscription on Current Channel or Monthly Enterprise Channel. For customers with a Microsoft 365 subscription on Semi-Annual Enterprise Channel it will be available on January 14, 2025.", - "page_start": 60, - "page_end": 60, - "source_file": "office-pdf.pdf" - }, - { - "text": "**Euronet Worldwide Annual Report 2000**\n\nSECURE FINANCIAL TRANSACTIONS AN Y TIME, AN Y PLACE", - "page_start": 0, - "page_end": 0, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## **Fig. 3 | Time transects across six geographical regions in Europe.**\n\n**a**–**f**, Ancestry change visualized over a time transect spanning from the Bronze Age to the present day in Poland (**a**), southeastern Europe (**b**), central Europe (**c**), Italy (**d**), Britain and Ireland (**e**) and Scandinavia (**f**). The maps show sample locations of all available ancient genomes with at least 0.5× coverage from\n\nmedieval individuals (*P* ≪ 1 × 10−32). 
Instead, the majority of individuals from medieval Poland can be modelled only as a mixture of ancestries related to Roman Iron Age Lithuania, which is similar to ancestries of individuals from middle to late Bronze Age Poland (44%, 95% confidence interval 36–51%), an ancestry component related to Hungarian Scythians or Slovakian La Tène individuals (49%, 95% confidence interval 41–57%) and potentially a minority component of ancestry related to Sarmatians from the Caucasus (*P* = 0.13) (Fig. 2c). Four out of twelve individuals from medieval Poland, three of whom are from the late Viking Age6 , carried detectable Scandinavian-related ancestry. Some of the ancestry detected in individuals from later medieval Poland may have persisted during the late first millennium ce in the cremating portion of the population, but regardless, this points to large-scale ancestry transformation in medieval Poland (Fig. 3a). Future data could shed light on the extent to which this reflects the influence of groups speaking Slavic languages in the region.\n\nthese regions (Supplementary Table 1). Their ancestry is shown on the same MDS model as in Fig. 2a for each time period. For each geographic region, the early medieval period is highlighted in orange and the area in the MDS corresponding to Scandinavian and central European ancestries is highlighted in an orange box.\n\nIn present-day Slovakia, individuals associated with the Iron Age La Tène period appear close to Hungarian Scythians in the two dimensions of our MDS analysis, and are modelled as a mixture of central and eastern European ancestry. However, a first-century ce burial of a 50–60-year-old woman from Zohor is modelled only with Scandinavian-related ancestry, providing evidence of ancestry related to the Scandinavian EIA appearing southwest of the range of the Wielbark archaeological complex5,57 (Fig. 3b). 
Later early medieval individuals from Slovakia have partial Scandinavian-related ancestry, providing evidence for the integration between expanding and local groups.\n\nNearby, in present-day Hungary, we observe Scandinavian-related ancestry components in several burials dating to the sixth century ce associated with Longobards (Longobard_earlyMED(I))10 (Fig. 2c). This is consistent with the original study10, which reported affinity to present-day groups from northwestern Europe (GBR, CEU and FIN in the 1000 Genomes Project (1000GP))10 but which we can resolve with", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed3.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_AIT_2012.pdf", - "query": "Under which name was the Applied company initially founded ?", - "target_page": 6, - "target_passage": "The Company was founded in 1923 by Joseph M. Bruening as The Ohio Ball Bearing Company", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "important both for its role in ending the war between France and Spain, because many of the claims and objectives of Louis's foreign policy for the next 50 years would be based upon this marriage, and because it was through this marriage that the Spanish throne would ultimately be delivered to the House of Bourbon.[32]\n\n# **Personal reign and reforms**\n\n### **Coming of age and early reforms**\n\nLouis XIV was declared to have reached the age of majority on the 7th of September 1651. On the death of Mazarin, in March 1661, Louis personally took the reins of government and astonished his court by declaring that he would rule without a chief minister: \"Up to this moment I have been pleased to entrust the government of my affairs to the late Cardinal. It is now time that I govern them myself. You [secretaries and ministers] will assist me with your counsels when I ask for them. I request and order you to seal no orders except by my command . . . 
I order you not to sign anything, not even a passport . . . without my command; to render account to me personally each day and to favor no one\".[33] Capitalizing on the widespread public yearning for peace and order after decades of foreign and civil strife, the young king consolidated central political authority at the expense of the feudal aristocracy. Praising his ability to choose and encourage men of talent, the historian Chateaubriand noted: \"it is the voice of genius of all kinds which sounds from the tomb of Louis\".[34]\n\nLouis began his personal reign with administrative and fiscal reforms. In 1661, the treasury verged on bankruptcy. To rectify the situation, Louis chose Jean-Baptiste Colbert as Controller-General of Finances in 1665. However, Louis first had to neutralize Nicolas Fouquet, the powerful Superintendent of Finances. Although Fouquet's financial indiscretions were not very different from Mazarin's before him or Colbert's after him, his ambition worried Louis. He lavishly entertained the king at the opulent château of Vaux-le-\n\nMonogram\n\nVicomte, flaunting a wealth which could hardly have accumulated except through embezzlement of government funds.\n\nFouquet appeared eager to succeed Mazarin and Richelieu in power, and he indiscreetly purchased and privately fortified the remote island of Belle Île. These acts sealed his doom. Fouquet was charged with embezzlement; the *Parlement* found him guilty and sentenced him to exile; and finally Louis altered the sentence to life imprisonment.\n\nFouquet's downfall gave Colbert a free hand to reduce the national debt through more efficient taxation. The principal taxes included the *aides* and *douanes* (both customs duties), the *gabelle* (salt tax), and the *taille* (land tax). The *taille* was reduced at first, and certain tax-collection contracts were auctioned instead of being sold privately to a favoured few. 
Financial officials were required to keep regular accounts, revising inventories and removing unauthorized exemptions: up to 1661 only 10 per cent of income from the royal domain reached the king. Reform had to overcome vested interests: the *taille* was collected by officers of the Crown who had purchased their post at a high price, and punishment of abuses necessarily lowered the value of the purchase. Nevertheless, Colbert achieved excellent results, with the deficit of 1661 turning into a surplus by 1666, with interest on the debt decreasing from 52 million to 24 million livres. The *taille* was reduced to 42 million in 1661 and 35 million in 1665, while revenue from indirect taxation\n\nMembers of the *Académie des sciences* with Louis in 1667; in the background appears the new Paris Observatory.\n\nprogressed from 26 million to 55 million. The revenues of the royal domain were raised from 80,000 livres in 1661 to 5.5 million in 1671. In 1661, the receipts were equivalent to 26 million British pounds, of which 10 million reached the treasury. The expenditure was around 18 million pounds, leaving a deficit of 8 million. In 1667, the net receipts had risen to 20 million pounds sterling, while expenditure had fallen to 11 million, leaving a surplus of 9 million pounds.\n\nMoney was the essential support of the reorganized and enlarged army, the panoply of Versailles, and the growing civil administration. Finance had always been the weakness of the French monarchy: tax collection was costly and inefficient; direct taxes dwindled as they passed through the hands of many intermediate officials; and indirect taxes were collected by private contractors called tax farmers who made a handsome profit. The state coffers leaked at every joint.\n\nThe main weakness arose from an old bargain between the French crown and nobility: the king might raise taxes on the nation without consent if only he exempted the nobility. 
Only the \"unprivileged\" classes paid direct taxes, which came to mean the peasants only, as most bourgeois finagled exemptions in one way or another. The system laid the whole burden of state expenses on the backs of the poor and powerless. After 1700, with the support of Louis's pious secret wife Madame de Maintenon, the king", - "page_start": 4, - "page_end": 4, - "source_file": "wikipedia5.pdf" - }, - { - "text": "claims. The Company currently has a claim for approximately $7.6 million pending against it arising out of the bankruptcy of a customer filed in 2001. The Company was named a critical vendor by the bankruptcy court and, accordingly, was paid in full for all outstanding receivables. The claim alleges that the Company received preferential payments from the customer during the ninety days before the customer filed for bankruptcy protection. The claim was brought in February 2003. The Company has recorded an accrual with respect to this contingency, in an amount substantially less than the full amount of the claim, which represents the best estimate within the range of likely exposure and intends to vigorously defend against the claim. Given the nature of this claim, it is possible that the ultimate outcome could differ from the recorded amount. It is our opinion, after consultation with legal counsel, that additional liabilities, if any, resulting from these matters, are not expected to have a material adverse effect on our financial condition, although such matters could have a material effect on our quarterly or annual operating results and cash flows when resolved in a future period.\n\n#### **Looking Ahead**\n\nThe Company is encouraged by indications that the economy is recovering and is cautiously optimistic that the office furniture industry will begin to rebound in the second half of 2004. 
Global Insight, BIFMA's forecasting consultant, increased its estimate for the industry shipment growth from 2.4% to 5.6% in 2004, with first quarter flat and improving as the year progresses.\n\nThe hearth segment is impacted by the housing market, which may experience a slight decline from record high levels, but is expected to remain at healthy levels. Management believes its strong brand recognition and new innovative product introductions in addition to strengthening distribution will allow it to grow its hearth segment.\n\nOn January 5, 2004, the Company completed the acquisition of Paoli Inc., a leading provider of wood case goods and seating. The Company intends to continue to build on Paoli's strong position in the market and excellent selling capabilities while leveraging its lean enterprise practices to achieve greater cost efficiencies and improved customer performance.\n\nThe Company's strategy is to grow its business through aggressive investment in building its brands, enhancing its strong member-owner culture, and remaining focused on its rapid continuous improvement program to continue to build best total cost. 
The Company plans to reinvest a large portion of its cost savings from plant consolidations and its rapid continuous improvement program to continue to build brands, product solutions, and selling models.\n\nBecause of the following factors, as well as other variables affecting the Company's operating results, past financial performance may not be a reliable indicator of future performance, and historical trends should not be used to anticipate results or trends in future periods:\n\n**•** competition within the office furniture and fireplace industries, including competition from imported products and competitive pricing;\n\n**•** increases in the cost of raw materials, including steel, which is the Company's largest raw material category;\n\n**•** increases in the cost of health care benefits provided by the Company;\n\n**•** reduced demand for the Company's storage products caused by changes in office technology, including the change from paper record storage to electronic record storage;\n\n**•** the effects of economic conditions on demand for office furniture, customer insolvencies and related bad debts, and claims against the Company that it received preferential payments;\n\n**•** changes in demand and order patterns from the Company's customers, particularly its top ten customers, which represented approximately 36% of net sales in 2003;\n\n**•** issues associated with acquisitions and integration of acquisitions;\n\n**•** the ability of the Company to realize cost savings and productivity improvements from its cost containment and business simplification initiatives;\n\n**•** the ability of the Company to realize financial benefits from investments in new products;\n\n**•** the ability of the Company's distributors and dealers to successfully market and sell the Company's products; and\n\n**•** the availability and cost of capital to finance planned growth.", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": 
"FIN 46R is effective at the end of the first interim period ending after March 15, 2004. Entities that have adopted FIN 46 prior to this effective date can continue to apply the provision of FIN 46 until the effective date of FIN 46R. The Company adopted FIN 46 on January 3, 2004, and it did not have an impact on the Company's financial statements.\n\nThe Financial Accounting Standards Board finalized SFAS No. 150, \"Accounting for Certain Financial Instruments with Characteristics of both Liabilities and Equity,\" effective for financial instruments entered into or modified after May 31, 2003, and otherwise is effective at the beginning of the first interim period beginning after June 15, 2003. The adoption of SFAS No. 150 did not have an impact on the Company's financial statements.\n\nDuring 2002, the Financial Accounting Standards Board finalized SFAS No. 146, \"Accounting for Costs Associated with Exit or Disposal Activities\" for exit and disposal activities that are initiated after December 31, 2002. This Statement requires that a liability for a cost associated with an exit or disposal activity be recognized when the liability is incurred. The Company applied this statement to its 2003 restructuring activities which resulted in a charge of $8.5 million during 2003.\n\nThe Financial Accounting Standards Board also issued Interpretation No. 45, \"Guarantor's Accounting and Disclosure Requirements for Guarantees, Including Indirect Guarantees of Indebtedness to Other.\" FIN 45 clarifies the requirements of SFAS No. 5, \"Accounting for Contingencies\" relating to the guarantor's accounting for and disclosure of the issuance of certain types of guarantees. The provisions for initial recognition and measurement are effective on a prospective basis for guarantees that are issued or modified after December 31, 2002. 
The adoption did not have a material impact on the Company's financial statements.\n\nIn December 2003, the Financial Accounting Standards Board issued a revised SFAS No. 132, \"Employers' Disclosures about Pensions and Other Postretirement Benefits.\" In 2003, the Company adopted the revised disclosure requirements of this pronouncement.\n\n#### *R E C L A S S I F I C A T I O N S*\n\nCertain prior year amounts have been reclassified to conform to the 2003 presentation.\n\n#### **Restructuring Related Charges**\n\nAs a result of the Company's business simplification and cost reduction strategies, the Company closed two office furniture facilities located in Milan, Tennessee, and Hazleton, Pennsylvania, and consolidated production into other U.S. manufacturing locations. Charges for the closures totaled $15.7 million, which consists of $6.7 million of accelerated depreciation of machinery and equipment which was recorded in cost of sales, $3.4 million of severance, and $5.6 million of facility exit, production relocation, and other costs which were recorded as restructuring costs. A total of 316 members were terminated and received severance due to these shutdowns. The closures and consolidation are substantially complete.\n\nThe Hazleton, Pennsylvania, facility is an owned facility and has been reclassified to current assets as it is currently being held as available for sale. It is included in the \"Prepaid expenses and other current assets\" in the January 3, 2004, condensed consolidated balance sheet at its carrying value of $2.1 million. The Milan, Tennessee, facility is a leased facility that is no longer being used in the production of goods. 
The restructuring expense for 2003 included $1.4 million of costs that will continue to be incurred under the lease contract reduced by estimated sublease rentals that could be reasonably obtained.\n\nDuring 2002, the Company recorded a pretax charge of approximately $5.4 million due to the shutdown of an office furniture facility in Jackson, Tennessee. A total of 125 members were terminated and received severance due to this shutdown. During the second quarter of 2003, a restructuring credit of approximately $0.6 million was taken back into income relating to this charge. This was due to the fact that the Company was able to exit a lease with the lessor at more favorable terms than previously estimated.\n\nDuring the second quarter of 2001, the Company recorded a pretax charge of $24.0 million or $0.26 per diluted share for a restructuring plan that involved consolidating physical facilities, discontinuing low-volume product lines, and reductions of workforce. Included in the charge was the closedown of three of its office furniture facilities located in Williamsport, Pennsylvania; Tupelo, Mississippi; and Santa Ana, California. Approximately 500 members were terminated and received severance due to the closedown of these facilities. During the second quarter of 2002, a restructuring credit of approximately $2.4 million was taken back into income relating to this charge. 
This was mainly due to the fact that the Company was able to exit a lease with a lessor at more favorable terms than originally estimated and the Company's ability to minimize the number of members terminated as compared to the original plan.\n\nThe following table details the change in restructuring reserve for the last three years:", - "page_start": 45, - "page_end": 45, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "#### *P R O D U C T D E V E L O P M E N T C O S T S*\n\nProduct development costs relating to the development of new products and processes, including significant improvements and refinements to existing products, are expensed as incurred. These costs include salaries, contractor fees, building costs, utilities, and administrative fees. The amounts charged against income were $25,791,000 in 2003, $25,849,000 in 2002, and $21,415,000 in 2001.\n\n#### *S T O C K - B A S E D C O M P E N S A T I O N*\n\nThe Company accounts for its stock option plan using Accounting Principles Board Opinion No. 25, \"Accounting for Stock Issued to Employees,\" whereby stock-based employee compensation is reflected in net income as all options granted under the plan had an exercise price equal to the market value of the underlying common stock on the date of grant. SFAS No. 123, \"Accounting for Stock-Based Compensation\" issued subsequent to APB No. 25 and amended by SFAS No. 148, \"Accounting for Stock-Based Compensation — Transition and Disclosure\" defines a fair value-based method of accounting for employees' stock options but allows companies to continue to measure compensation cost for employee stock options using the intrinsic value-based method described in APB No. 25.\n\nThe following table illustrates the effect on net income and earnings per share if the Company had applied the fair value recognition provisions of SFAS No. 123, \"Accounting for Stock-Based Compensation,\" as amended by SFAS No. 
148 \"Accounting for Stock-Based Compensation — Transition and Disclosure,\" to stock-based employee compensation.\n\n| (In thousands) | 2003 | 2002 | 2001 |\n| --- | --- | --- | --- |\n| Net income, as reported | $ 98.1 | $ 91.4 | $ 74.4 |\n| Deduct: Total stock-based | | | |\n| employee compensation | | | |\n| expense determined under fair | | | |\n| value-based method for all | | | |\n| awards, net of related tax effects | (3.0) | (2.2) | (1.4) |\n| Pro forma net income | $ 95.1 | $ 89.2 | $ 73.0 |\n| Earnings per share: | | | |\n| Basic – as reported | $ 1.69 | $ 1.55 | $ 1.26 |\n| Basic – pro forma | $ 1.64 | $ 1.52 | $ 1.24 |\n| Diluted – as reported | $ 1.68 | $ 1.55 | $ 1.26 |\n| Diluted – pro forma | $ 1.62 | $ 1.51 | $ 1.24 |\n\nIncrease in expense in 2003 is due to accelerated vesting upon the retirement of plan participants.\n\n#### *I N C O M E T A X E S*\n\nThe Company accounts for income taxes under SFAS No. 109, \"Accounting for Income Taxes.\" This Statement uses an asset and liability approach that requires the recognition of deferred tax assets and liabilities for the expected future tax consequences of events that have been recognized in the Company's financial statements or tax returns. Deferred income taxes are provided to reflect the differences between the tax bases of assets and liabilities and their reported amounts in the financial statements.\n\n#### *E A R N I N G S P E R S H A R E*\n\nBasic earnings per share are based on the weighted-average number of common shares outstanding during the year. 
Shares potentially issuable under options and deferred restricted stock have been considered outstanding for purposes of the diluted earnings per share calculation.\n\n#### *U S E O F E S T I M A T E S*\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the United States requires management to make estimates and assumptions that affect the amounts reported in the financial statements and accompanying notes. The more significant areas requiring the use of management estimates relate to allowance for doubtful accounts, inventory reserves, marketing program accruals, warranty accruals, accruals for self-insured medical claims, workers' compensation, legal contingencies, general liability and auto insurance claims, and useful lives for depreciation and amortization. Actual results could differ from those estimates.\n\n#### *S E L F - I N S U R A N C E*\n\nThe Company is partially self-insured for general and product liability, workers' compensation, and certain employee health benefits. The general, product, and workers' compensation liabilities are managed using a wholly owned insurance captive; the related liabilities are included in the accompanying consolidated financial statements. The Company's policy is to accrue amounts in accordance with the actuarially determined liabilities. The actuarial valuations are based on historical information along with certain assumptions about future events. 
Changes in assumptions for such matters as legal actions, medical costs, and changes in actual experience could cause these estimates to change in the near term.\n\n#### *R E C E N T A C C O U N T I N G P R O N O U N C E M E N T S*\n\nIn December 2003, the Financial Accounting Standards Board issued Interpretation 46R (FIN 46R), a revision to Interpretation 46 (FIN 46), \"Consolidation of Variable Interest Entities.\" Fin 46R clarifies some of the provisions of FIN 46 and exempts certain entities from its requirements.", - "page_start": 44, - "page_end": 44, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "*PURPOSE PRODUCT PERFORMANCE PEOPLE*\n\nApplied Industrial Technologies is a leading industrial distributor that offers more than four million parts to serve the needs of MRO and OEM customers in virtually every industry. In addition, Applied® provides engineering, design and systems integration for industrial and fluid power applications, as well as customized mechanical, fabricated rubber and fluid power shop services. Applied also offers maintenance training and inventory management solutions that provide added value to its customers.\n\n**Headquarters:** Cleveland, Ohio, USA\n\n**Operating Facilities:** More than 500 in the United States, Canada, Mexico, Puerto Rico, Australia and New Zealand\n\n**E-Commerce:** www.Applied.com\n\n**Distribution Centers:** 9\n\n**Stock Keeping Units (SKUs) Available to Customers:** More than 4 million\n\n**Product Manufacturers:** More than 2,000\n\n**Stock Ticker Symbol:** AIT, listed on the New York Stock Exchange\n\n**Employee Associates:** Approximately 4,900\n\nData current as of August 1, 2012\n\n25358_AIT_Report_WT.indd 2 8/23/12 8:32 AM\n\nThis report contains statements that are forward-looking, as that term is defined by the Securities and Exchange Commission in its rules, regulations and releases. Applied intends that such forward-looking statements be subject to the safe harbors created thereby. 
All forward-looking statements are based on current expectations regarding important risk factors, including those identified on page 12 of this report and in our Annual Report on Form 10-K for the fiscal year ended June 30, 2012. Accordingly, actual results may differ materially from those expressed in the forward-looking statements, and the making of such statements should not be regarded as a representation by Applied or any other person that results expressed therein will be achieved.", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "#### **ACCOUNTING PRINCIPLES ADOPTED IN 2004**\n\n#### **Taxation on Foreign Earnings**\n\nIn December 2004, the staff of the Financial Accounting Standards Board (\"FASB\") issued FASB Staff Position 109-2, \"Accounting and Disclosure Guidance for the Foreign Repatriation Provision within the American Jobs Creation Act of 2004\" (\"FSP 109-2\"). FSP 109-2 allows us additional time beyond the financial reporting period in which the Act was enacted to evaluate the effects of the Act on our plans for repatriation of unremitted earnings. Under SFAS 109, we did not historically record a provision for U.S. Federal or State income taxes on undistributed earnings of foreign subsidiaries because such earnings were considered to be indefinitely reinvested in the operations of foreign subsidiaries. Upon the sale of MGM Grand Australia, we did provide deferred taxes of $11 million on the basis that the proceeds would be repatriated without the benefit of the 85 percent one-time deduction provided by the Act. The Act may allow a special one-time deduction of 85 percent of certain repatriated foreign earnings; however, additional clarifying language is necessary to ensure we qualify for the deduction. 
The potential benefit to us of the repatriation provisions of the Act is $7 million.\n\n#### **Discontinued operations**\n\nIn November 2004, the Emerging Issues Task Force (\"EITF\") of the FASB reached a consensus on Issue No. 03-13, \"Applying the Conditions in Paragraph 42 of FASB Statement No. 144, *Accounting for the Impairment or Disposal of Long-Lived Assets*, in Determining Whether to Report Discontinued Operations,\" (\"EITF 03-13\"). EITF 03-13 requires us to analyze whether the cash flows of a disposed component have been eliminated from our ongoing operations and whether we retain a continuing involvement in the operations of the disposed component. If significant migration of customers occurs to our other operations, we would be precluded from classifying a sold or disposed operation as a \"discontinued\" operation. EITF 03-13 is effective for components disposed of or classified as held for sale in periods beginning after\n\nDecember 15, 2004, with optional application to components disposed of or classified as held for sale within that fiscal year. We did not apply EITF 03-13 to our sale of MGM Grand Australia, but if we had applied EITF 03-13 we still would have classified MGM Grand Australia as a discontinued operations.\n\n#### **RECENTLY ISSUED ACCOUNTING STANDARDS**\n\n#### **Stock-based Compensation**\n\nIn December 2004, the FASB issued FASB Statement No. 123 (revised 2004), \"Share-Based Payment\" (\"SFAS 123(R)\"). Under the original standard, SFAS No. 123, \"Accounting for Stock-Based Compensation\" (\"SFAS 123\"), companies had the option of recording stock options issued to employees at fair value or intrinsic value, which generally leads to no expense being recorded. Most companies, including us, opted to use this intrinsic value method and make required disclosures of fair value expense. SFAS 123(R) eliminates this intrinsic value alternative. 
SFAS 123(R) is effective for us on July 1, 2005, at which time all future share-based payments must be recorded at fair value. Transition methods are discussed below.\n\nWe must make certain changes in the manner of valuation of options and must make certain decisions which will affect the amount and timing of expense recognition, as discussed below.\n\n**Choice of valuation model.** Under SFAS 123, stock options were generally valued using the Black-Scholes model. SFAS 123(R) does not specify which model must be used, but requires that certain assumptions be included in the chosen model. Essentially, we have a choice of continuing to apply the Black-Scholes model or applying a binomial (lattice) model. The key difference is that a binomial model can better account for sub-optimal exercises; that is, exercises before the contractual expiration of the option. A binominal model is more complex to apply, and generally results in a lower value than a comparable valuation using the Black-Scholes model. We have not yet determined which model we will apply.", - "page_start": 45, - "page_end": 45, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "Recognizing that technology can only supplement—not supplant—personal relationships, we created an organization of business technology solutions managers in our field offices. These specialists provide hands-on support to property-casualty agents who use The Hartford's online tools.\n\nThis successful high-tech, high-touch mix is one reason why we estimate our smallbusiness insurance growth rate is five to six times the industry average. Another reason is that we strategically target these businesses' unmet needs. Our new CyberFlexTM business insurance coverage, for example, is geared to traditional brick-and-mortar businesses that have some exposure for cyber-risk in their normal course of doing business—such as using e-mail or operating a Web site.\n\nOur focus on growth never distracts us from the bottom line. 
When markets or businesses prove unprofitable, we're nimble enough to take quick action. We exited the European property-casualty business in 2001, focusing instead on financial services in Asia. We also repositioned our reinsurance business to concentrate on the U.S. market, where we're already strong.\n\nIn all our operations, we've built a well-deserved reputation as a premier partner because we offer an exceptional value proposition that will never change: innovative products, world-class money management, value-added distribution, and outstanding service and technology.\n\nI'm deeply grateful to our employees, our business partners, our board of directors and, of course, our customers for their support during some of the most trying times we've ever experienced. I especially want to thank you, our shareholders, for allowing us to continue earning your support.\n\nMy confidence in our company and our industry has never wavered, even in the darkest moments following Sept. 11. True, we're still grappling with serious issues. The question of federal backstop legislation for terrorism is still unresolved, and the possibility of future terrorist attacks remains a serious concern. The world and its risks are much changed since I wrote my letter to you a year ago. But we'll continue to manage risks prudently while always thinking ahead for our shareholders, customers and partners. That's what we're in business to do. We'll continue to run our business the right way, and I believe we'll continue to earn your trust.\n\nSincerely,\n\nRamani Ayer Chairman, President and Chief Executive Officer\n\n*Tom Marra, President and Chief Operating Officer, Life Operations*", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "### Long-Range Strategy: *Translating Potential Into Results (continued)*\n\nAs a leadership team, we have developed a long-range strategic plan to accelerate profitable growth. 
Our plan includes numerous growth opportunities across our business, and implementation is underway, including:\n\n- • Leveraging sales capabilities and existing CRM (Customer Relationship Management) processes to expand our value-add and reach new customers\n- • Strengthening our position in attractive vertical markets while growing in our core segments\n- • Expanding our products and solutions; growing our core bearings and power transmission business at a rate greater than the market, along with focused product expansion via logical extensions and enhanced local capabilities\n- • Building on our fluid power market leadership via strengthened product offerings and value-added services for OEM and MRO customers\n- • Enhancing our operational excellence by capturing the full benefits of our ERP system and driving continuous improvement with customers, suppliers and throughout our operations\n- • Accelerating strategic acquisitions by leveraging our cash generation and strong financial position to extend into new markets\n\nToday, nearly 90 years since our founding, we are well-positioned and committed to realizing our potential – a potential that builds upon a proud past and the dedication of our associates around the globe.\n\nAs we look ahead, we see a bright future with excellent opportunities for growth and increased profitability – organically, via acquisition, and through our technology investments. *We are in exciting times, and we firmly believe our best days are ahead.*\n\nThank you for your ongoing investment and support of Applied.\n\nNeil A. Schrimsher Chief Executive Officer\n\nAugust 15, 2012\n\nBenjamin J. 
Mondics President & Chief Operating Officer\n\n25358_AIT_Report_WT.indd 4 8/28/12 4:22 PM\n\nOVERVIEW\n\nWith more than 4,600 associates across North America, Applied Industrial Technologies (\"Applied,\" the \"Company,\" \"We,\" \"Us\" or \"Our\") is a leading industrial distributor serving MRO and OEM\n\nmanagement solutions that provide added value to its customers. We have a long tradition of growth dating back to 1923, the year our business was founded in Cleveland, Ohio. At June 30, 2012, business was conducted in the United States, Canada, Mexico\n\nWhen reviewing the discussion and analysis set forth below, please note that the majority of SKUs we sell in any given year were not sold in the prior year, resulting in the inability to quantify certain commonly used comparative metrics analyzing\n\nOur fiscal 2012 sales were $2.4 billion, an increase of $162.6 million or 7.3% compared to the prior year. Net sales from acquired businesses added $16.6 million or 0.7% to the current year. Gross margin of 27.6% compares to 27.7% in the prior year. Our operating margin increased to 7.1% compared to the prior year's 6.8%. Our earnings per share was $2.54 versus $2.24\n\nOur consolidated balance sheet remains strong. Shareholders' equity is $672.1 million, up from $633.6 million at June 30, 2011. Working capital increased $31.4 million from June 30, 2011 to $435.6 million at June 30, 2012. Our current ratio remains strong\n\nApplied monitors several economic indices that have been key indicators for industrial economic activity in the United States. These include the Industrial Production and Manufacturing Capacity Utilization (MCU) indices published by the Federal Reserve Board and the Purchasing Managers Index (PMI) published by the Institute for Supply Management (ISM). Historically, our performance correlates well with the MCU which measures productivity and calculates a ratio of actual manufacturing output versus potential full capacity output. 
When manufacturing plants are running at a high rate of capacity, they tend to wear out machinery and require replacement parts. Our sales tend to lag the MCU by up\n\nat 2.9 to 1, consistent with the June 30, 2011 level.\n\nsales, such as changes in product mix and volume.\n\nin fiscal year 2011, an increase of 13.4%.\n\nto six months.\n\ncustomers in virtually every industry. In addition, Applied provides engineering, design and systems integration for industrial and fluid power applications, as well as customized mechanical, fabricated rubber and fluid power shop services. Applied also offers maintenance training and inventory\n\nand Puerto Rico from 476 facilities.\n\n## **Celebrating 90 Years of Strength in Distribution**\n\nIn January 2013, Applied Industrial Technologies will celebrate its 90th anniversary. The Company was founded in 1923 by Joseph M. Bruening as The Ohio Ball Bearing Company, a distributor of bearings to customers in Cleveland, Ohio. Over the years, the Company grew to become a regional distributor of bearings, then an international distributor of a wide range of industrial technologies and components. Today, nearly 90 years since our beginning, customers served by Applied benefit from our years of accumulated experience, expertise and exceptional ability to improve our customers' operations.\n\nJoin us as we kick-off a year-long celebration of our strength in distribution. We thank all of you, our stakeholders, for making it possible.\n\n1\n\nIndustrial production increased 0.4% in June after having declined 0.2% in May. In the manufacturing sector, outputs advanced 0.7% in June, reversing a decline of 0.7% in May and increased at an annual rate of 1.4% in the second quarter. In June, capacity utilization for manufacturing moved up 0.4% to 77.7%, a rate 13.9 percentage points above its trough in June of 2009 and was still 1.1 percentage points below its long-run average. 
The ISM PMI registered 49.7 in June, the first time this indicator dropped below 50 (its expansionary threshold) since July 2009. We remain optimistic about the U.S. industrial economy for our fiscal 2013.\n\nYEAR ENDED JUNE 30, 2012 vs. 2011\n\nstatements of consolidated income.\n\nfiscal 2012 was the same as in fiscal 2011.\n\nas a continued focus on profitable sales growth.\n\nCanada and Mexico versus 474 at June 30, 2011.\n\n29.5% fluid power in the prior year.\n\nThe following table is included to aid in review of Applied's\n\nNet Sales **100.0%** 100.0% 7.3% Gross Profit **27.6%** 27.7% 6.7% Selling, Distribution & Administrative **20.5%** 20.9% 5.1% Operating Income **7.1%** 6.8% 11.7% Net Income **4.6%** 4.4% 12.4%\n\nNet sales in fiscal 2012 were $2.4 billion, which was $162.6 million or 7.3% above the prior year, driven by improvements in the industrial economy as well as a continued focus on profitable sales growth. Incremental net sales from companies acquired since the prior year period contributed approximately $16.6 million or 0.7%. Currency translation decreased fiscal year sales by approximately $1.8 million or 0.1%. In local currency, net sales from our Canadian operations were up 12.2% from fiscal 2011, including 2.8% from acquisitions. In local currency, net sales from our Mexican operations were up 25.9%. The number of selling days in\n\nNet sales of our Service Center Based Distribution segment increased $133.8 million, or 7.6%, compared to fiscal year 2011 led by\n\nThe sales product mix for fiscal 2012 was 70.8% industrial products and 29.2% fluid power products compared to 70.5% industrial and\n\nAt June 30, 2012, we had a total of 476 operating facilities in the U.S.,\n\nimprovements in the industrial economy as well as a continued focus on profitable sales growth, with acquisitions adding $16.6 million or 0.9%. 
Net sales of our Fluid Power Businesses segment increased $28.8 million or 6.5%, also driven by improvements in the industrial economy as well\n\nYear Ended June 30, As a % of Net Sales\n\n**2012** 2011 % Increase\n\nChange in $'s Versus Prior Period", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## **SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES**\n\n## **2003 Financial Statements**\n\n## **INDEPENDENT AUDITOR'S REPORT**\n\nThe Board of Directors and Shareholders Shenandoah Telecommunications Company:\n\nWe have audited the accompanying consolidated balance sheets of Shenandoah Telecommunications Company and subsidiaries (the Company), as of December 31, 2003, 2002, and 2001, and the related consolidated statements of income, shareholders' equity and comprehensive income, and cash flows for the years then ended. These consolidated financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these consolidated financial statements based on our audits.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States of America. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management, as well as evaluating the overall financial statement presentation. 
We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the consolidated financial statements referred to above present fairly, in all material respects, the financial position of Shenandoah Telecommunications Company and subsidiaries as of December 31, 2003, 2002 and 2001, and the results of their operations and their cash flows for the years then ended, in conformity with accounting principles generally accepted in the United States of America.\n\nAs discussed in note 1 to the consolidated financial statements, the Company changed its method of accounting for goodwill in 2002. As further discussed in note 1 to the consolidated financial statements, the Company changed its method of accounting for asset retirement obligations in 2003.\n\nRichmond, Virginia February 6, 2004", - "page_start": 12, - "page_end": 12, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### T O O U R S HAREHOLDERS :\n\nJ AMES R. H OUGHTON\n\nC HAIRMAN AND C HIEF E XECUTIVE O FFICER\n\nWe will long remember 2002 as one of the most challenging years — if not the most challenging — in Corning Incorporated's long history. I quickly became even more steeped in these challenges in April when, at the request of our Board of Directors, I returned to the company as Chairman and Chief Executive Officer.\n\nSince that time, I am increasingly convinced that, despite our downturn, the long-term future of Corning remains bright and filled with opportunity.\n\nBut in the meantime, we have been living in a very difficult reality – one marked by ongoing quarterly losses and drops in revenue. You, our shareholders—along with our employees and our friends in the communities we serve—felt the pain. We all watched our businesses retrench, battered by a weakened global economy and Wall Street turmoil. 
And we could only wonder what bad news would be next as our stock value continued its seemingly relentless decline.\n\nWith the severe drop-off in revenues from our telecommunications customers, we knew we could no longer afford to keep up the costly infrastructure of facilities and staff we had in place. Put simply, we couldn't spend more than we were making.\n\nWe also knew our strengths — and they were many! We knew we were not — nor had we ever been — merely a telecommunications company. Rather, we are a technology company, with the materials and process expertise to create life-changing products. That's what we've been for all of our 152 years; that's what we'll continue to be.\n\nAnd we knew something else … that our Values, the historic strength of our company, were alive and well. Quality, Integrity, Performance, Leadership, Innovation, Independence and The Individual continue to guide our every move, and continue to set us apart from other companies— especially those caught in the accounting scandals that marred the business world this past year.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_GLW_2002.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_AIT_2012.pdf", - "query": "By how much does Applied company plan to contribute to its pension benefits between 2018 and 2022 ?", - "target_page": 36, - "target_passage": "2018 through 2022 15,200", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "#### NOTE 22: PENSIONS\n\nWe have contributory and non-contributory defined benefit pension plans that are made available to most of our employees. The plans provide pensions based on years of service, years of contributions and earnings. We do not provide any non-pension post-retirement benefits. We also provide unfunded supplemental pension benefits to certain executives.\n\nThe assets of the defined benefit pension plans are held in segregated accounts isolated from our assets. 
We administer the defined benefit pension plans pursuant to applicable regulations, the Statement of Investment Policies and Procedures and to the mandate of the Pension Committee of the Board of Directors. The Pension Committee of the Board of Directors oversees our administration of the defined benefits pension plans, which includes the following principal areas:\n\n- overseeing the funding, administration, communication and investment management of the plans\n- selecting and monitoring the performance of all third parties performing duties in respect of the plans, including audit, actuarial and investment management services\n- proposing, considering and approving amendments to the defined benefit pension plans\n- proposing, considering and approving amendments of the Statement of Investment Policies and Procedures\n- reviewing management and actuarial reports prepared in respect of the administration of the defined benefit pension plans\n- reviewing and approving the audited financial statements of the defined benefit pension plan funds.\n\nThe assets of the defined benefit pension plans are invested and managed following all applicable regulations and the Statement of Investment Policies and Procedures, and reflect the characteristics and asset mix of each defined benefit pension plan. Investment and market return risk is managed by:\n\n- contracting professional investment managers to execute the investment strategy following the Statement of Investment Policies and Procedures and regulatory requirements\n- specifying the kinds of investments that can be held in the plans and monitoring compliance\n- using asset allocation and diversification strategies, and\n- purchasing annuities from time to time.\n\nThe funded pension plans are registered with the Office of the Superintendent of Financial Institutions and are subject to the Federal Pension Benefits Standards Act. 
The plans are also registered with the Canada Revenue Agency and are subject to the Canada Income Tax Act. The benefits provided under the plans and the contributions to the plans are funded and administered in accordance with all applicable legislation and regulations.\n\nSignificant estimates are involved in determining pension related balances. Actuarial estimates are based on projections of employees' compensation levels at the time of retirement. Maximum retirement benefits are primarily based on career average earnings, subject to certain adjustments. The most recent actuarial valuations were completed as at January 1, 2013.\n\nThe table below sets out the estimated present value of accrued plan benefits and the estimated market value of the net assets available to provide these benefits for our funded plans at December 31, 2013 and 2012.\n\n| | 2013 | | | 2012 |\n| --- | --- | --- | --- | --- |\n| Plan assets, at fair value | | $ 1,037 | $ | 833 |\n| Accrued benefit obligations | | 1,209 | | 1,167 |\n| Deficiency of plan assets over accrued benefit obligations | | (172) | | (334) |\n| Effect of asset ceiling limit | | (9) | | – |\n| Net deferred pension liability | $ | (181) | $ | (334) |\n| Consists of: | | | | |\n| Deferred pension asset | $ | 8 | $ | 9 |\n| Deferred pension liability | | (189) | | (343) |\n| Net deferred pension liability | $ | (181) | $ | (334) |\n\nThe table below shows our pension fund assets for the years ended 2013 and 2012.\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| Plan assets, January 1 | $ 833 | $ 684 |\n| Interest income | 40 | 40 |\n| Remeasurements, return on plan assets recognized in other | | |\n| comprehensive income and equity | 65 | 37 |\n| Contributions by employees | 26 | 22 |\n| Contributions by employer | 101 | 85 |\n| Benefits paid | (26) | (33) |\n| Administrative expenses paid from plan assets | (2) | (2) |\n| Plan assets, December 31 | $ 1,037 | $ 833 |\n\nThe table below shows the accrued benefit obligations 
arising from funded obligations for the years ended December 31, 2013 and 2012.\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| Accrued benefit obligations, January 1 | $ 1,167 | $ 817 |\n| Service cost | 71 | 46 |\n| Interest cost | 52 | 45 |\n| Benefits paid | (26) | (33) |\n| Contributions by employees | 26 | 23 |\n| Remeasurements, recognized in other comprehensive | | |\n| income and equity | (81) | 269 |\n| Accrued benefit obligations, December 31 | $ 1,209 | $ 1,167 |\n\nThe table below shows the effect of the asset ceiling for the years ended December 31, 2013 and 2012.\n\n| | 2013 | | 2012 | |\n| --- | --- | --- | --- | --- |\n| Asset ceiling, January 1 | $ | – | $ | – |\n| Interest income | | – | | – |\n| Remeasurements, change in asset ceiling (excluding interest | | | | |\n| income) recognized in comprehensive income and equity | (9) | | – | |\n| Effect of changes in foreign exchange rates | | – | – | |\n| Asset ceiling, December 31 | $ (9) | | $ – | |\n\nPlan assets are comprised mainly of pooled funds that invest in common stocks and bonds that are traded in an active market. 
The table below shows the fair value of the total pension plan assets by major category for the years ended December 31, 2013 and 2012.\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| Equity securities | $ 631 | $ 480 |\n| Debt securities | 403 | 348 |\n| Other – cash | 3 | 5 |\n| Total fair value of plan assets | $ 1,037 | $ 833 |", - "page_start": 121, - "page_end": 121, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "### NOTE 11: LEASES\n\nPlan Assets\n\nallocation as of June 30:\n\nEquity securities do not include any Company common stock.\n\nfive years and in the aggregate for the subsequent five years:\n\nthe target asset allocation of the pension portfolio.\n\nAsset Class:\n\nCash Flows\n\nEmployer Contributions\n\nEstimated Future Benefit Payments\n\nThe fair value of each major class of plan assets for the Company's Qualified Benefit Retirement Plan are valued using quoted market prices in active markets for identical instruments, or Level 1 in the fair value hierarchy. Following are the fair values and target\n\nEquity securities 40 – 70% **$ 3,735** $ 3,876 Debt securities 20 – 50% **2,382** 1,756 Other 0 – 20% **322** 424 Total 100% **$ 6,439** $ 6,056\n\nThe Company has established an investment policy and regularly monitors the performance of the assets of the trust maintained in conjunction with the Qualified Defined Benefit Retirement Plan. The strategy implemented by the trustee of the Qualified Defined Benefit Retirement Plan is to achieve long-term objectives and invest the pension assets in accordance with ERISA and fiduciary standards. The long-term primary objectives are to provide for a reasonable amount of long-term capital, without undue exposure to risk; to protect the Qualified Defined Benefit Retirement Plan assets from erosion of purchasing power; and to provide investment results that meet or exceed the actuarially assumed long-term rate of return. 
The expected long-term rate of return on assets assumption was developed by considering the historical returns and the future expectations for returns of each asset class as well as\n\nThe Company expects to contribute $6,000 to its pension benefit plans and $240 to its retiree health care benefit plans in\n\nThe following benefit payments, which reflect expected future service, as applicable, are expected to be paid in each of the next\n\n2013 $ 6,200 $ 240 2014 5,900 240 2015 5,700 240 2016 4,500 240 2017 1,700 260 2018 through 2022 15,200 1,420\n\n2013. Contributions do not equal estimated future payments as certain payments are made from plan assets.\n\nDuring Fiscal Years Pension Benefits\n\nTarget Allocation Fair Value\n\n**2012** 2011\n\nRetiree Health Care\n\nBenefits\n\nThe Company leases its corporate headquarters facility along with many service center and distribution center facilities, vehicles and equipment under non-cancelable lease agreements accounted for as operating leases. The minimum annual rental commitments under non-cancelable operating leases as of June 30, 2012 are as follows:\n\n| During Fiscal Years | | |\n| --- | --- | --- |\n| 2013 | $ | 23,500 |\n| 2014 | | 18,000 |\n| 2015 | | 14,300 |\n| 2016 | | 9,600 |\n| 2017 | | 5,100 |\n| Thereafter | | 11,100 |\n| Total minimum lease payments | $ | 81,600 |\n\nRental expenses incurred for operating leases, principally from leases for real property, vehicles and computer equipment were $31,200 in 2012, $31,400 in 2011 and $30,700 in 2010.\n\n### NOTE 12: SEGMENT AND GEOGRAPHIC INFORMATION\n\nThe Company's reportable segments are: Service Center Based Distribution and Fluid Power Businesses. 
The Service Center Based Distribution segment provides customers with solutions to their maintenance, repair and original equipment manufacturing needs through the distribution of industrial products including bearings, power transmission components, fluid power components, industrial rubber products, linear motion products, safety products, general maintenance and a variety of mill supply products. The Fluid Power Businesses segment distributes fluid power components and operates shops that assemble fluid power systems and components, performs equipment repair, and offers technical advice to customers.\n\nThe accounting policies of the Company's reportable segments are generally the same as those described in Note 1. Sales primarily from the Fluid Power Businesses segment to the Service Center Based Distribution segment of $18,097, $17,665 and $14,006, in fiscal 2012, 2011 and 2010, respectively, have been eliminated in the table below.\n\n#### Segment Financial Information\n\n| | Service Center | | Fluid Power | |\n| --- | --- | --- | --- | --- |\n| | Based Distribution | | Businesses | Total |\n| Year Ended June 30, 2012 | | | | |\n| Net sales | $ | 1,904,564 | $ 470,881 | $ 2,375,445 |\n| Operating income for reportable segments | | 135,240 | 43,236 | 178,476 |\n| Assets used in the business | | 731,915 | 230,268 | 962,183 |\n| Depreciation and amortization of property | | 9,403 | 1,833 | 11,236 |\n| Capital expenditures | | 24,339 | 1,682 | 26,021 |\n| Year Ended June 30, 2011 | | | | |\n| Net sales | $ | 1,770,798 | $ 442,051 | $ 2,212,849 |\n| Operating income for reportable segments | | 115,798 | 41,793 | 157,591 |\n| Assets used in the business | | 700,486 | 214,445 | 914,931 |\n| Depreciation and amortization of property | | 9,152 | 2,082 | 11,234 |\n| Capital expenditures | | 19,392 | 1,039 | 20,431 |\n| Year Ended June 30, 2010 | | | | |\n| Net sales | $ | 1,536,543 | $ 356,665 | $ 1,893,208 |\n| Operating income for reportable segments | | 77,029 | 
26,794 | 103,823 |\n| Assets used in the business | | 690,970 | 200,550 | 891,520 |\n| Depreciation and amortization of property | | 9,336 | 2,129 | 11,465 |\n| Capital expenditures | | 6,389 | 827 | 7,216 |\n\n25358_AIT_Report_WT.indd 35 8/23/12 8:33 AM", - "page_start": 36, - "page_end": 36, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## EMPLOYEE RETIREMENT AND BENEFIT PLANS\n\n11\n\nA noncontributory defined benefit retirement plan is maintained for all regular employees of the Company except those of Quest Medical. This plan was amended effective January 1, 1998 to become a cash balance pension plan. The Company's funding policy is to make the annual contributions required by applicable regulations and recommended by its actuary. The Company uses a December 31 measurement date for the plan.\n\nThe changes in the plan's projected benefit obligation (\"PBO\") as of December 31, 2003 and 2002 are as follows (in thousands):\n\n| | | 2003 | | 2002 |\n| --- | --- | --- | --- | --- |\n| CHANGE IN BENEFIT OBLIGATION: | | | | |\n| Benefit obligation, January 1 | $ | 4,170 | $ | 4,599 |\n| Service cost | | 214 | | 320 |\n| Interest cost | | 298 | | 307 |\n| Amendments | | —- | | (616) |\n| Actuarial (gain)/loss | | 529 | | (93) |\n| Benefits paid | | (333) | | (347) |\n| Benefit obligation, December 31 | $ | 4,878 | $ | 4,170 |\n\nIn December 2002, the plan was amended to reduce benefit accruals for future service by plan participants by approximately 50 percent. 
This amendment caused a reduction in the PBO of approximately $616,000, and is reflected as a reduction in pension expense over the estimated employee service lives.\n\nThe changes in the fair value of plan assets, funded status of the plan and the status of the prepaid pension benefit recognized, which is included in the Company's balance sheets as of December 31, 2003 and 2002 are as follows (in thousands):\n\n| | | 2003 | | 2002 |\n| --- | --- | --- | --- | --- |\n| CHANGE IN PLAN ASSETS: | | | | |\n| Fair value of plan assets, January 1 | $ | 4,383 | $ | 4,550 |\n| Actual return on plan assets | | 963 | | (750) |\n| Employer contributions | | 400 | | 930 |\n| Benefits paid | | (333) | | (347) |\n| Fair value of plan assets, December 31 | $ | 5,413 | $ | 4,383 |\n| Funded status of plan | $ | 535 | $ | 213 |\n| Unrecognized actuarial loss | | 1,941 | | 2,154 |\n| Unrecognized prior service cost | | (502) | | (539) |\n| Unrecognized net transition obligation | | (88) | | (132) |\n| Net amount recognized as other assets | $ | 1,886 | $ | 1,696 |", - "page_start": 21, - "page_end": 21, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "Dollar and share amounts in millions except per share, per option and per unit amounts\n\n## **NOTE 5: SELF-INSURANCE**\n\nOur self-insurance reserves are summarized as follows:\n\n| | January 31, 2015 | February 1, 2014 |\n| --- | --- | --- |\n| Workers' compensation | $70 | $66 |\n| Employee health and welfare | 23 | 23 |\n| General liability | 16 | 16 |\n| Total self-insurance reserve | $109 | $105 |\n\nOur workers' compensation policies have a retention per claim of $1 or less and no policy limits.\n\nWe are self-insured for the majority of our employee health and welfare coverage and we do not use stop-loss coverage. 
Participants contribute to the cost of their coverage through both premiums and out-of-pocket expenses and are subject to certain plan limits and deductibles.\n\nOur general liability policies, encompassing employment practices liability and commercial general liability, have a retention per claim of $3 or less and a policy limit up to $30 and $150, respectively.\n\n#### **NOTE 6: 401(k) PLAN**\n\nWe provide a 401(k) plan for our employees that allows for employee elective contributions and discretionary company contributions. Employee elective contributions are funded through voluntary payroll deductions. Our discretionary company contribution is funded in an amount determined by our Board of Directors each year. Our expense related to company contributions totaled $77, $77 and $83 in 2014, 2013 and 2012.\n\n#### **NOTE 7: POSTRETIREMENT BENEFITS**\n\nWe have an unfunded defined benefit Supplemental Executive Retirement Plan (\"SERP\"), which provides retirement benefits to certain officers and select employees. The SERP has different benefit levels depending on the participant's role in the company. At the end of 2014, we had 59 participants in the plan, including 27 officers and select employees eligible for SERP benefits, 31 retirees and 1 beneficiary. This plan is non-qualified and does not have a minimum funding requirement.", - "page_start": 61, - "page_end": 61, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## NOTES TO CONSOLIDATED FINANCIAL STATEMENTS (Continued)\n\n#### (In thousands, except per share amounts)\n\nThe weighted-average remaining contractual terms for SARs and stock options outstanding and exercisable at June 30, 2012 were 6.0 and 4.8 years, respectively. The aggregate intrinsic values of SARs and stock options outstanding and exercisable at June 30, 2012 were $15,023 and $10,775, respectively. 
The aggregate intrinsic value of the SARs and stock options exercised during fiscal 2012, 2011 and 2010 was $13,747, $18,526 and $5,157, respectively.\n\nPerformance Grants\n\nNOTE 10: BENEFIT PLANS\n\nDeferred Compensation Plans\n\nPostemployment Benefit Plans\n\nmutual funds and Company common stock.\n\nSupplemental Executive Retirement Benefits Plan\n\nRestoration Plan\n\nQualified Defined Benefit Retirement Plan\n\nincome (loss)) of $302 ($492 loss, net of income tax of $190).\n\n$128 of expense associated with this plan in fiscal 2012.\n\nRetirement Savings Plan\n\n2010, respectively.\n\nare unfunded:\n\nKey Executive\n\nretirement.\n\nIn fiscal 2009 and 2008, the Executive Organization and Compensation Committee made annual awards of three-year performance grants to key officers. A target payout was established at the beginning of each three-year performance period. The actual payout at the end of the period is calculated based upon the Company's achievement of sales growth, return on sales, and total shareholder return targets. All performance periods had expired by June 30, 2011. During fiscal 2011 and 2010, the Company recorded $1,020 and $(231), respectively, of compensation expense (income) for achievement relative to the total shareholder return-based goals of\n\nSubstantially all U.S. associates participate in the Applied Industrial Technologies, Inc. Retirement Savings Plan. Participants may elect to contribute up to 50% of their compensation, subject to Internal Revenue Code maximums. The Company makes a discretionary profit-sharing contribution to the Retirement Savings Plan generally based upon a percentage of the Company's U.S. income before income taxes and before the amount of the contribution (5% for fiscal 2012, 2011 and 2010). The Company partially matches 401(k) contributions by participants; this match was suspended from January 1, 2009 to June 30, 2010. 
The Company's expense for profit sharing and matching of associates' 401(k) contributions was $10,866, $11,251 and $4,891 during fiscal 2012, 2011 and\n\nThe Company has deferred compensation plans that enable certain associates of the Company to defer receipt of a portion of their compensation and non-employee directors to defer receipt of director fees. The Company funds these deferred compensation liabilities by making contributions to rabbi trusts. Assets held in these rabbi trusts consist of investments in money market and\n\nThe Company provides the following postemployment benefits which, except for the Qualified Defined Benefit Retirement Plan,\n\nThe Company has a non-qualified pension plan to provide supplemental retirement benefits to certain officers. Benefits are payable beginning at retirement and determinable at retirement based upon a percentage of the participant's historical compensation. On December 19, 2011, the Executive Organization and Compensation Committee of the Board of Directors froze participant benefits (credited service and final average earnings) and entry into the Supplemental Executive Retirement Benefits Plan (SERP) effective December 31, 2011. This action constituted a plan curtailment. The plan liability was remeasured in conjunction with the curtailment using a 3.5% discount rate and participant final average earnings through the curtailment date. The remeasurement in conjunction with the curtailment resulted in an actuarial loss (recorded in other comprehensive\n\nThe curtailment is reflected in the Company's consolidated balance sheets as: 1) a reduction to the overall SERP liability (included in postemployment benefits) of $8,860, 2) a reduction to deferred tax assets of $3,411 and 3) an increase in accumulated other comprehensive income (loss) of $5,449. 
Prior service costs previously recorded through accumulated other comprehensive income (loss) were reclassified into the statements of consolidated income ($3,117 gross expense, net of income\n\nIn fiscal 2012, the Executive Organization & Compensation Committee of the Board of Directors adopted the Key Executive Restoration Plan (KERP), an unfunded, non-qualified deferred compensation plan, to replace the SERP. The Company recorded\n\nThe Company has a qualified defined benefit retirement plan that provides benefits to certain hourly associates at retirement. These associates do not participate in the Retirement Savings Plan. The benefits are based on length of service and date of\n\ntax of $1,200). The gross expense is recorded in selling, distribution and administrative expense in fiscal 2012.\n\nthe Company's performance grants. The liability at June 30, 2011 was $1,558; this was paid in fiscal 2012.\n\nAs of June 30, 2012, unrecognized compensation cost related to SARs and stock options amounted to $1,951. That cost is expected to be recognized over a weighted-average period of 2.4 years. The total fair value of shares vested during fiscal 2012, 2011 and 2010 was $4,266, $2,645 and $2,673, respectively.\n\n#### Performance Shares\n\nPerformance shares are intended to provide incentives to achieve three-year goals. Performance shares pay out in shares of Applied stock at the end of a three-year period provided the Company achieves the established goals. The number of Applied shares payable will vary depending on the level of the goal achieved.\n\nA summary of nonvested performance shares activity at June 30, 2012 is presented below:\n\n| Year Ended June 30, 2012 | | Weighted-Average. | |\n| --- | --- | --- | --- |\n| | | Grant-Date. | |\n| (Share amounts in thousands) | Shares | Fair Value. 
| |\n| Nonvested, beginning of year | 222 | $ | 23.23 |\n| Granted | 31 | | 28.34 |\n| Forfeitures | (47 ) | | 27.15 |\n| Vested | (144 ) | | 20.67 |\n| Nonvested, end of year | 62 | $ | 28.80 |\n\nThe Committee set three one-year goals for the 2012 and 2011 grants tied to the Company's earnings before interest, tax, depreciation, and amortization (EBITDA) and after-tax return on assets (ROA). Each fiscal year during the three-year term has its own separate goals. Achievement during any particular fiscal year is \"banked\" for payout at the end of the three-year term.\n\nAs of June 30, 2012, the potential shares to be banked in future periods was 62. Unrecognized compensation cost relating to these shares has the potential to reach $1,812 and would be recognized in expense over the weighted-average remaining vesting period of 1.7 years.\n\n#### Restricted Stock and Restricted Stock Units\n\nRestricted stock award recipients are entitled to receive dividends on, and have voting rights with respect to their respective shares, but are restricted from selling or transferring the shares prior to vesting. Restricted stock awards vest over periods of one to four years. RSUs are grants valued in shares of Applied stock, but shares are not issued until the grants vest three years from the award date, assuming continued employment with Applied. RSUs vest on a pro rata basis upon retirement during the three-year term. Applied pays dividend equivalents on RSUs on a current basis.\n\nA summary of the status of the Company's nonvested restricted stock and RSUs at June 30, 2012 is presented below:\n\n| Year Ended June 30, 2012 | | | |\n| --- | --- | --- | --- |\n| | | Weighted-Average. | |\n| | | Grant-Date. | |\n| (Share amounts in thousands) | Shares | Fair Value. 
| |\n| Nonvested, beginning of year | 162 | $ | 25.97 |\n| Granted | 135 | | 31.58 |\n| Forfeitures | (31 ) | | 27.30 |\n| Vested | (15 ) | | 31.42 |\n| Nonvested, end of year | 251 | $ | 28.50 |\n\nUnrecognized compensation cost related to unvested restricted stock awards and RSUs aggregated $3,670 at June 30, 2012, and is expected to be recognized over the weighted-average remaining vesting period of 2.1 years.\n\n25358_AIT_Report_WT.indd 30 8/23/12 8:33 AM", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "TSR will be compared to a set of 22 oil and gas exploration and production companies headquartered in the United States and Australia. The Australian-headquartered companies are highlighted. The chart on the right depicts the TSR over a three year period ending 31 December 2014. Diamondback Energy Inc, Matador Resources Co and Midstates Petroleum Co Inc were excluded from the chart as there was not enough historical data to measure the defined TSR.\n\n#### *Retirement and Other Benefits*\n\nExecutive management participates in the same benefit plans and on the same basis as other employees. Those plans include health, dental and vision insurance (for which a premium contribution is required by the participant) and a 401(k) retirement plan under which the Company makes an annual contribution equal to 3 percent of the participant's eligible compensation.\n\n#### *Post-Termination and Change In Control Benefits*\n\nThe Managing Director's employment contract provides for payment of his base salary through the end of the contract term in the event he is terminated as a result of a change in control event. 
Additionally, in the event of a corporate take-over or change in control (as defined in the RSU Plan), our board in its discretion may cause all unvested RSUs to vest and be satisfied by the issue of one share each or provide for the cancellation of outstanding RSUs and a cash payment equal to the then-fair market value of the RSUs.", - "page_start": 39, - "page_end": 39, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "#### **Pension Obligations**\n\nOur retiree pension plans had a funding deficit of approximately $172 million at December 31, 2013. We have been making special minimum monthly payments in addition to our regular contributions to eliminate the pension liability. During 2013, our funding deficit was reduced by $162 million.\n\nThe special payments, including contributions associated with benefits paid from the plans, were approximately $7 million in 2013. We expect our total estimated funding requirements to be $96 million in 2014 and to be adjusted annually thereafter, based on various market factors such as interest rates and expected returns and staffing assumptions.\n\nChanges in factors such as the discount rate, increase in compensation and the expected return on plan assets can affect the accrued benefit obligation, pension expense and the deficiency of plan assets over accrued obligations in the future. See *Critical accounting estimates* for more information.\n\n#### *Purchase of Annuities*\n\nFrom time to time we have made additional lump-sum contributions to our pension plans, and the pension plans have purchased annuities from insurance companies to fund the pension benefit obligations for certain groups of retired employees in the plans. 
Purchasing the annuities relieves us of our primary responsibility for that portion of the accrued benefit obligations for the retired employees and eliminates the significant risk associated with the obligations.\n\nWe did not make any additional lump-sum contributions to our pension plans in 2013 or 2012, and the pension plans did not purchase additional annuities.\n\n#### FINANCIAL RISK MANAGEMENT\n\nWe normally use three categories of derivative instruments to manage risks related to our business activities:\n\n| Categories | The risk it manages | Types of derivative instruments |\n| --- | --- | --- |\n| Debt Derivatives | • Impact of fluctuations in foreign exchange rates on | • Cross-currency interest rate exchange agreements |\n| | principal and interest payments for US denominated | • Forward foreign exchange agreements (from time |\n| | long-term debt | to time, as applicable) |\n| Expenditure Derivatives | • Impact of fluctuations in foreign exchange rates on | • Forward foreign exchange agreements |\n| | forecasted US dollar denominated expenditures | |\n| Equity Derivatives | • Impact of fluctuations in share price on stock-based | • Total return swap agreements |\n| | compensation expense | |\n\nWe also manage our exposure to fluctuating interest rates and we have fixed the interest rate on 95.3% of our debt including short-term borrowings at December 31, 2013 (2012 – 100%).\n\n#### **Debt Derivatives**\n\nWe use cross currency interest exchange agreements (Debt Derivatives), to hedge the foreign exchange risk on all of the principal and interest obligations of our US dollar denominated senior notes and debentures. At December 31, 2013 we used Debt Derivatives to hedge the foreign exchange risk on 100% of the principal and interest obligations on all our US dollar denominated debt. 
We use Debt Derivatives for risk management purposes only.\n\nDuring 2013, we completed Debt Derivatives transactions as follows:\n\n- entered into new Debt Derivatives to hedge senior notes issued in 2013\n- terminated existing Debt Derivatives and entered into Debt Derivatives with different terms to hedge existing senior notes\n- settled Debt Derivatives related to senior notes that matured during the year.\n\n*Terminated and Replaced Existing Debt Derivatives*\n\n| | Notional amount | Original maturity | Cash settlement payment |\n| --- | --- | --- | --- |\n| Termination date | (millions) | date | |\n\n1 Converting from a fixed US$ coupon rate to a weighted average Cdn$ fixed rate.\n\n2 Converting from a fixed US$ principal amount to a fixed Cdn$ principal amount.\n\nAll of our Debt Derivatives currently outstanding have been designated as effective hedges against foreign exchange risk for accounting purposes as described below and in note 20 to the consolidated financial statements.\n\n*New Debt Derivatives to Hedge Senior Notes Issued In 2013*\n\n| | | | | US$ | | Hedging effect | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| | US$ Principal/ | | | | Fixed | | Cdn$ |\n| | notional amount | | Maturity | | | equivalent | |\n| Effective date | (millions) | | date | Coupon rate | hedged Cdn.$ interest rate 1 | (millions) | |\n| March 7, 2013 | US$ | 500 | 2023 | 3.00% | 3.60% | $ | 515 |\n| March 7, 2013 | US$ | 500 | 2043 | 4.50% | 4.60% | $ | 515 |\n| Subtotal | US$ 1,000 | | | | | $ 1,030 | |\n| October 2, 2013 | US$ | 850 | 2023 | 4.10% | 4.59% | $ | 877 |\n| October 2, 2013 | US$ | 650 | 2043 | 5.45% | 5.61% | $ | 671 |\n| Subtotal | US$ 1,500 | | | | | $ 1,548 | |\n\n1 Converting from a fixed US$ coupon rate to a weighted average Cdn$ fixed rate.\n\n| Terminated Debt Derivatives | | | | | | New Debt Derivatives | | Hedging effect |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | | | | | | Fixed |\n| | Notional | Original | Cash 
settlement | | Derivative | New | Fixed | Cdn$ |\n| | amount | maturity | payment | | amount | maturity | weighted | equivalent |\n| Termination date | (millions) | date | (millions) | Date entered | (millions) | date | average 1 | (millions) 2 |\n| Mar 6, 2013 | US$ 350 2 | 2018 | Nil | Mar 6, 2013 | US$ 3502 | 2038 | 7.62% | $ 356 |\n| Sep 27, 2013 | US$ 1,075 3,4 | 2014 – 2015 | $ 263 | Sep 27, 2013 | US$ 1,0753 | 2014-2015 | 7.42% | $ 1,110 |", - "page_start": 65, - "page_end": 65, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "# **Nordstrom, Inc.**\n\n# **Notes to Consolidated Financial Statements**\n\nDollar and share amounts in millions except per share, per option and per unit amounts\n\n#### **NOTE 13: STOCK-BASED COMPENSATION**\n\nWe currently have three stock-based compensation plans: the 2010 Equity Incentive Plan (\"2010 Plan\"), the Employee Stock Purchase Plan (\"ESPP\") and the 2002 Nonemployee Director Stock Incentive Plan. Additionally, as part of our acquisitions of HauteLook in 2011 and Trunk Club in 2014, we replaced and/or granted awards from shares available that were not allocated to a specific plan, as well as created an additional long-term incentive plan for certain Trunk Club employees.\n\nIn 2010, our shareholders approved the adoption of the 2010 Plan, which replaced the 2004 Equity Incentive Plan (\"2004 Plan\"). The 2010 Plan authorizes the grant of stock options, performance share units, restricted stock units, stock appreciation rights and both restricted and unrestricted shares of common stock to employees. The aggregate number of shares to be issued under the 2010 Plan may not exceed 27.6 plus any shares currently outstanding under the 2004 Plan which are forfeited or which expire during the term of the 2010 Plan. No future grants will be made under the 2004 Plan. 
As of January 31, 2015, we have 70.4 shares authorized, 40.4 shares issued and outstanding and 16.7 shares remaining available for future grants under the 2010 Plan.\n\nUnder the ESPP, employees may make payroll deductions of up to 10% of their base and bonus compensation. At the end of each six-month offering period, participants may apply their accumulated payroll deductions toward the purchase of shares of our common stock at 90% of the fair market value on the last day of the offer period. As of January 31, 2015, we had 12.6 shares authorized and 3.3 shares available for issuance under the ESPP. We issued 0.3 shares under the ESPP during 2014. At the end of both 2014 and 2013, we had current liabilities of $6 for future purchases of shares under the ESPP.\n\nThe 2002 Nonemployee Director Stock Incentive Plan authorizes the grant of stock awards to our nonemployee directors. These awards may be deferred or issued in the form of restricted or unrestricted stock, non-qualified stock options or stock appreciation rights. As of January 31, 2015, we had 0.9 shares authorized and 0.5 shares available for issuance under this plan. 
In 2014, we deferred shares with a total expense of less than $1.\n\nThe following table summarizes our stock-based compensation expense:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Stock options | $37 | $44 | $36 |\n| Acquisition-related stock compensation | 11 | 8 | 9 |\n| Restricted stock units | 10 | — | — |\n| Performance share units | 6 | — | 3 |\n| Other | 4 | 6 | 5 |\n| Total stock-based compensation expense, before income tax benefit | 68 | 58 | 53 |\n| Income tax benefit | (23) | (19) | (17) |\n| Total stock-based compensation expense, net of income tax benefit | $45 | $39 | $36 |\n\nThe stock-based compensation expense before income tax benefit was recorded in our Consolidated Statements of Earnings as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Cost of sales and related buying and occupancy costs | $17 | $15 | $14 |\n| Selling, general and administrative expenses | 51 | 43 | 39 |\n| Total stock-based compensation expense, before income tax benefit | $68 | $58 | $53 |\n\nThe benefit of tax deductions in excess of the compensation cost recognized for stock-based awards is classified as financing cash inflows and are reflected as \"Excess tax benefit from stock-based compensation\" in the Consolidated Statements of Cash Flows.", - "page_start": 67, - "page_end": 67, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "#### 9. RETIREMENT BENEFIT PLANS\n\nThe Company and its domestic consolidated subsidiaries have defined benefit plans, i.e., welfare pension fund plans (\"WPFP\"), tax-qualified pension plans and lump-sum payment plans, covering substantially all employees who are entitled to lump-sum or annuity payments, the amounts of which are determined by reference to their basic rates of pay, length of service, and the conditions under which termination occurs. 
Certain foreign consolidated subsidiaries have defined benefit and contribution plans.\n\nThe following table sets forth the funded and accrued status of the plans, and the amounts recognized in the consolidated balance sheets as of March 31, 2005 and 2004 for the Company's and the consolidated subsidiaries' defined benefit plans:\n\n| | | | Thousands of |\n| --- | --- | --- | --- |\n| | Millions of yen | | U.S. dollars |\n| 2004 | | 2003 | 2004 |\n| As of | Mar. 31, 2005 | Mar. 31, 2004 | Mar. 31, 2005 |\n| Retirement benefit obligation ¥(1,217,260) | | ¥(1,041,483) | $(11,376,262) |\n| Plan assets at fair value | 500,815 | 377,169 | 4,680,514 |\n| Unfunded retirement benefit obligation | (716,445) | (664,314) | (6,695,748) |\n| Unrecognized net retirement benefit obligation at transition | 120,718 | 131,666 | 1,128,206 |\n| Unrecognized actuarial gain or loss | 154,689 | 152,867 | 1,445,691 |\n| Unrecognized prior service cost | (66,720) | (61,833) | (623,551) |\n| Net retirement benefit obligation | (507,758) | (441,614) | (4,745,402) |\n| Prepaid pension cost | 445 | 652 | 4,159 |\n| Accrued retirement benefits ¥ | (508,203) ¥ | (442,266) | $ (4,749,561) |\n\nThe substitutional portion of the benefits under the WPFP has been included in the amounts shown in the above table.\n\nThe Company received the approval from the Minister of Health, Labor and Welfare (\"MHLW\") in the year ended March 31, 2003 with respect to its application for exemption from the obligation for benefits related to future employee services under the substitutional portion of the WPFP. Certain domestic consolidated subsidiaries received the same approval from MHLW during the year ended March 31, 2004. 
In accordance with the transitional provision stipulated in \"Practical Guidelines for Accounting for Retirement Benefits,\" the Company and the domestic consolidated subsidiaries accounted for the separation of the substitutional portion of the benefit obligation from the corporate portion of the benefit obligation under their WPFPs as of the dates of approval for their exemption assuming that the transfer to the Japanese government of the substitutional portion of the benefit obligation and related pension plan assets had been completed as of those dates. As a result, the Company recognized a loss of ¥30,945 million for the year ended March 31, 2003 and the domestic consolidated subsidiaries recognized an aggregate gain of ¥3,669 million and an aggregate loss of ¥1,587 million for the year ended March 31, 2004. The pension assets to be transferred were calculated at ¥35,770 million for the domestic consolidated subsidiaries at March 31, 2004 and ¥241,203 million for the Company at March 31, 2003.\n\nThe components of retirement benefit expenses for the years ended March 31, 2005, 2004 and 2003 are outlined as follows:\n\n| | | | | Thousands of |\n| --- | --- | --- | --- | --- |\n| | | Millions of yen | | U.S. dollars |\n| 2004 | | 2003 | 2002 | 2004 |\n| For the years ended | Mar. 31, 2005 | Mar. 31, 2004 | Mar. 31, 2003 | Mar. 
31, 2005 |\n| Service cost ¥47,802 | | ¥48,418 | ¥ 51,543 | $446,748 |\n| Interest cost | 33,288 | 33,012 | 45,269 | 311,103 |\n| Expected return on plan assets | (17,999) | (15,523) | (26,708) | (168,215) |\n| Amortization of net retirement benefit obligation at transition | 12,009 | 14,169 | 24,280 | 112,234 |\n| Amortization of actuarial gain or loss | 12,298 | 18,689 | 11,464 | 114,934 |\n| Amortization of prior service cost | (5,431) | (7,049) | (7,762) | (50,757) |\n| Other | 179 | 57 | 5 | 1,673 |\n| Retirement benefit expenses | 82,146 | 91,773 | 98,091 | 767,720 |\n| (Gain) loss on return of the substitutional portion of | | | | |\n| welfare pension fund plans | (1,107) | (5,594) | 30,945 | (10,346) |\n| Total ¥81,039 | | ¥86,179 | ¥129,036 | $757,374 |", - "page_start": 83, - "page_end": 83, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Net periodic pension cost for the years ended December 31, 2002, 2001, and 2000, included:\n\n| | | Year Ended December 31, | |\n| --- | --- | --- | --- |\n| | 2002 | 2001 | 2000 |\n| Service cost - benefits earned during the period | $ 994,630 | $ 847,620 | $ 845,372 |\n| Interest cost on projected benefit obligation | 983,977 | 970,710 | 816,583 |\n| Expected return on plan assets | (880,562) | (1,153,733) | (1,058,787) |\n| Amortization of unrecognized net loss | 116,722 | - | - |\n| Amortization of prior-service cost | 17,960 | 17,961 | 17,961 |\n| Other | (59,405) | (58,954) | 58,779 |\n| Net periodic pension cost | $1,173,322 | $ 623,604 | $ 679,908 |\n\nThe following table sets forth the rates used in the actuarial calculations of the present value of benefit obligations and the rate of return on plan assets:\n\n| | 2002 | 2001 | 2000 |\n| --- | --- | --- | --- |\n| Weighted average discount rate | 6.9% | 6.9% | 7.5% |\n| Rate of increase in future compensation levels | 4% | 4% | 4% |\n| Expected long-term rate of return on assets | 6.5% | 8.5% | 8.5% |\n\nAs of December 31, 2002 and 2001, the fair value 
of the plan's assets included Company common stock valued at approximately $468,000 and $297,000, respectively.\n\nThe Company also provides a profit sharing plan, which covers substantially all full-time employees. The profit sharing plan is a defined contribution plan and allows employees to contribute up to 5% of their base annual salary. Employees are fully vested to the extent of their contributions and become fully vested in the Company's contributions over a seven-year vesting period. Costs related to the Company's defined contribution plan totaled approximately $2,681,000, $1,858,000 and $1,874,000 in 2002, 2001 and 2000, respectively, and are included in salaries and employee benefits in the accompanying consolidated statements of earnings. As of December 31, 2002 and 2001, the fair value of the plan's assets included Company common stock valued at approximately $14,323,000 and $10,881,000, respectively.\n\n#### 13. DIVIDENDS FROM SUBSIDIARIES:\n\nAt December 31, 2002, approximately $20,728,000 was available for the declaration of dividends by the Company's subsidiary banks without the prior approval of regulatory agencies.\n\n#### 14. REGULATORY MATTERS:\n\nThe Company is subject to various regulatory capital requirements administered by the federal banking agencies. Failure to meet minimum capital requirements can initiate certain mandatory, and possibly additional discretionary, actions by regulators that, if undertaken, could have a direct material effect on the Company's financial statements. Under capital adequacy guidelines and the regulatory framework for prompt corrective action, each of Bankshares' subsidiaries must meet specific capital guidelines that involve quantitative measures of the subsidiaries' assets, liabilities, and certain off-balance-sheet items as calculated under regulatory accounting practices. 
The subsidiaries' capital amounts and classification are also subject to qualitative judgments by the regulators about components, risk weightings, and other factors.", - "page_start": 86, - "page_end": 86, - "source_file": "NASDAQ_FFIN_2002.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_AIT_2012.pdf", - "query": "What does Applied has to say regarding the potential creadit risk it could be exposed to ?", - "target_page": 21, - "target_passage": "The Company has a broad customer base representing many diverse industries primarily across North America. As such, the Company does not believe that a significant concentration of credit risk exists in its accounts receivable", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### **ILO 'List of Occupational Diseases Recommendation'**\n\n*2.4. Mental and behavioural disorders* \n\n- *2.4.1. Post-traumatic stress disorder*\n- *2.4.2. Other mental or behavioural disorders not mentioned in the preceding item where a direct link is established scientifically, or determined by methods appropriate to national conditions and practice, between the exposure to risk factors arising from work activities and the mental and behavioural disorder(s) contracted by the worker*\n\nAnd there are also **emerging and new risks** where health data will **not be available until a certain number of workers are exposed for quite a while**. Some prominent examples are nanotechnologies, the significant increase of new chemically based technologies, vision impairment due to long hours of work under artificial light at the same distance with small digital equipment,183 more exposure to 'global' biological agents due to more interactional tasks, and travel and transport between countries and continents. On that note, the Covid-19 pandemic could also be used as an example. 
In 2022, the Commission proposed an update of the Recommendation on the ESOD to recognise Covid-19 as an occupational disease for workers particularly concerned: health and social care, home help or where there is a proven risk of infection (during a pandemic) in other sectors184.\n\nIt adds to these difficulties that workers are often not only exposed to one disease causing exposure but to **several exposures** at the same time (exposure is understood here in a broad sense: ranging from long working hours over postures and movements to harassment and violence and to noise and chemical and biological substances, etc.). **In theory, a single risk** — if below the threshold limit values and in line with legislation and standards — **will not cause harm — given that it is the only exposure**. The impact of this single exposure is not strong enough to generate a disease on the level of severity of a recognised occupational disease. A **combination of several risks** might add several exposures, worsen the impact and cause serious harm.\n\nQuite well studied is the increased prevalence of musculoskeletal diseases, if not only ergonomic risks but also high psychosocial risks are prevalent at the workplace.185 Research has also found unexpected connections like the synergistic effect of noise and certain chemicals on hearing impairments. Such outcomes of multi-risk profiles are often particularly difficult to identify and understand. Obviously, most sectors and occupations involve workplaces with **multi-risk profiles**. 
Some prominent major risks in certain sectors or occupations are:\n\n- agriculture = accidents, chemical and biological agents, UV exposure;\n- delivery services = traffic accidents, ergonomics, time pressure, exhaust fumes;\n- decentralised renewable energy construction and maintenance = falls from height, electricity;\n- waste and recycling = biological and chemical agents, cuts and accidents;\n- mobile work = ergonomics, work without time and space limits;\n- care at home = emotional, ergonomic, difficult clients, unsafe household situations, infection risks;\n- healthcare = emotional, ergonomics, biological;\n- personal and household services = emotional, ergonomic, unsafe household situations, e.g. unsafe electrical equipment, exposure to unknown chemicals;\n- long-haul sea, train, road or air transport = atypical working times, shift work, monotony, long phases of physical inactivity;\n- car repair = ergonomics, dust and fumes, chemicals;\n- construction = falls from height, accidents with machinery or vehicles, slips, trips and falls, ergonomics, noise, chemicals, dust, UV exposure, etc.", - "page_start": 75, - "page_end": 75, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "If **a risk assessment is conducted just for compliance purposes**, and not used appropriately for the successful management of OSH and reduction of accidents and occupational diseases, the risk assessment may lose its dynamic nature, and findings may be neither implemented nor communicated appropriately to employees.\n\nThe **types of risks included in risk assessments** are related to the risk profiles of different sectors, for example, it is likely that risk assessments in heavy industries and manual occupations focus more on safety risks. 
However, while sectoral risk profiles will naturally bias the identification of risks, smaller establishments seem to have **less of a focus on MSDs or psychosocial risk factors**, which would suggest that they are less well recognised or understood, in particular for MSEs.415 Establishments also report that psychosocial risk factors are more difficult to manage than other OSH risks, while as business size grows, so does the proportion of respondents who perceive psychosocial risks as more difficult to manage than other OSH risks.416\n\nESENER 2019 shows that a **reluctance to talk openly** about these issues seems to be the main difficulty for addressing psychosocial risks (60% of establishments in the EU27). This, as with all the other difficulties considered (lack of awareness among staff/management and lack of expertise or specialist support), is reported in all enterprise sizes but more frequently as establishment size grows.\n\nSpecifically, among those establishments that report having to deal with difficult customers, patients or pupils, 51% of those employing 20 or more workers report having a procedure in place to deal with possible cases of threats, abuse or assaults by clients, patients or other external persons. 
This share rises to 74% among establishments in human health and social work activities.\n\nThe development of concrete outputs such as measures to better manage risks that can result in **musculoskeletal diseases** has actually seen a decline between 2014 and 2019, as follows:\n\n- 85% to 77% on the measure of 'provision of equipment to help with the lifting or moving of loads or other physical heavy work';417\n- 73% to 67% concerning 'provision of ergonomic equipment'; and\n- 66% to 60% regarding 'encouraging regular breaks for people in uncomfortable or static postures including prolonged sitting'.418", - "page_start": 127, - "page_end": 127, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **PART VIII**\n\n## **Risk Management**\n\nKillam faces a variety of risks, the majority of which are common to real estate entities. Real estate investments are generally subject to varying degrees of risk, depending on the nature of the property. These risks include (i) changes in general economic conditions, (ii) changes in local conditions (such as an oversupply of space or a reduction in demand for real estate in the area), (iii) changes to government regulations (such as new or revised residential tenant legislations), (iv) competition from others with available space, and (v) the ability of the landlord or owner to provide adequate maintenance economically.\n\nReal estate is relatively illiquid. Such illiquidity will tend to limit Killam's ability to rebalance its portfolio promptly in response to changing economic or investment conditions. 
In addition, financial difficulties of other property owners, resulting in distress sales, may depress real estate values in the markets in which the Company operates.\n\nKillam's exposure to general risks associated with real estate investments is mitigated with both its geographic diversification, and investments in both apartments and MHCs.\n\nKillam is exposed to other risks, as outlined below:\n\n#### **Interest Rate Risk**\n\nInterest risk is the risk that the Company would experience lower returns as the result of its exposure to a higher interest rate environment. The Company is exposed to interest rate risk as a result of its mortgages and loans payable, however this risk is mitigated through the Company's strategy to have the majority of its mortgages payable in fixed‑term arrangements. The Company also structures its financings so as to stagger the maturities of its debt, minimizing the Company's exposure to interest rates in any one year.\n\nAs at December 31, 2013, no mortgages or vendor debt had floating interest rates except for four demand loans totaling $3.9 million. These loans have an interest rate of prime plus 1.0% ‑ 2.0% (December 31, 2012 ‑ prime plus 1.0% ‑ 1.5%). Killam also has one construction loan of $14.8 million with a floating interest rate of prime plus 0.75% and consequently, Killam is exposed to short‑term interest rate risk on these loans.\n\n### **Liquidity Risk**\n\nLiquidity risk is the risk that the Company may not have access to sufficient debt and equity capital to fund its growth program and/or refinance its debt obligations as they mature. Senior Management manages the Company's cash resources based on financial forecasts and anticipated cash flows. The maturities of the Company's long‑term financial liabilities are set out in Notes 12 to 15 of the consolidated financial statements. 
The Company structures its financings so as to stagger the maturities of its debt, thereby minimizing the Company's exposure to liquidity risk in any one year. In addition, the Company's apartments qualify for CMHC insured debt, reducing the refinancing risk on mortgage maturities. The Company's MHCs do not qualify for CMHC insured debt, however, they continue to have access to mortgage debt.\n\n## **Increased Supply Risk**\n\nIncreased supply risk is the risk of loss from increased competition from the addition of new rental units in Killam's core markets. Numerous other residential developers and apartment owners compete for potential tenants. Although it is Killam's strategy to own multifamily residential properties in premier locations in each market in which it operates, some of the apartments or MHCs of Killam's competitors may be newer, better located or offer lower rents. An increase in alternative housing could have a material adverse effect on Killam's ability to lease units and in the rents charged and could adversely affect Killam's revenues and ability to meet its obligations. To mitigate against this risk Killam has a geographically diverse asset base. Management is expanding this diversification by increasing Killam's investment in apartment markets outside Atlantic Canada.\n\n### **Credit Risk**\n\nCredit risk arises from the possibility that tenants may experience financial difficulty and be unable to fulfill their lease term commitments. The Company mitigates the risk of credit loss through the diversification of its existing portfolio and limiting its exposure to any one tenant. Credit assessments are conducted with respect to all new leasing and the Company also obtains a security deposit to assist in potential recovery requirements. In addition, the receivable balances are monitored on an ongoing basis with the result that the Company's exposure to bad debt is not significant. 
The Company's bad debt expense experience has historically been less than 0.4% of revenues. None of Killam's tenants account for more than 1% of tenant receivables.\n\n#### **Development Risk**\n\nDevelopment risk is the risk that costs of developments will exceed original estimates, unforeseen delays occur and/or units will not be leased in the timeframe and/or at rents anticipated. Killam minimizes its exposure to development risk by limiting the amount of development underway at any one time. To reduce the Company's exposure to price increases, Killam enters into fixed‑rate contracts when possible. To reduce the lease‑up risk, Killam does extensive market research in advance of each development to support expected rental rates, and pre‑markets its properties early on in the process, to increase demand for the new developments.", - "page_start": 58, - "page_end": 58, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "A combination of the above questions is also relevant—how does the range of outcomes at 2°C compare to that at 1.5°C? This is also relevant to adaptation policy, as it can inform assessment on whether to adapt to potential impacts at 2°C or just 1.5°C. Putting in place adaptation measures to deal with potential impacts at 1.5°C and then increasing these to deal with 2°C later may be more expensive and difficult than adapting to potential risks at 2°C at the outset. On the other hand, because adaptation actions may themselves have consequences, unnecessary overadaptation may have undesirable effects which it may be preferable to avoid or at least delay until absolutely necessary.\n\nBoth questions require an appropriate assessment of uncertainty. There are considerable uncertainties in projections of regional climate change, with different climate models projecting regional climate changes that can differ in magnitude or even, in the case of precipitation and impacts quantities strongly related to this, differ in sign [5,6].
This may have important implications for regional impacts at specific levels of global warming. A common approach to exploring and presenting such uncertainties is to examine the ensemble mean and the level of consensus among the ensemble members on the sign of the change. While this can often be useful in informing an assessment of the level of confidence in future projections, it may not always be sufficient to fully inform decisions. Risk assessment approaches require consideration of a range of possible risks, not just the most likely. This paper explores a range of regional climate states and related impacts that occur at global warming of 2°C, and a range of differences with warming limited to 1.5°C.\n\nWe examine the implications of our new climate projections by applying some commonly used indices of climate extremes, and a further index quantifying relative vulnerability to food insecurity which combines climate extremes indices with information on a range of factors representing sensitivity and adaptability of food systems to climate hazards. We also use the climate projections to drive a global land surface model to simulate changes in run-off as an indicator of freshwater availability. We assess whether regional extremes are projected to increase or decrease at 2°C global warming, and whether the consequent impact on drought and vulnerability to food insecurity become greater or smaller. We also assess whether these changes are reduced by limiting global warming to 1.5°C. We explore some of the uncertainties in these projections, and, in particular, examine whether the use of ensemble-mean projections is a useful simple guide to impacts projections or whether this can lead to a misleading impression for some impacts. Regarding vulnerability to food insecurity, we consider the impacts of global warming at 1.5°C and 2°C alongside socio-economic influences that affect the sensitivity to climate change. 
We also consider our climate-change impacts results in comparison with other studies using older, lower-resolution climate projections.\n\nA large number of previous studies have assessed potential impacts of future climate change using the 5th Coupled Model Intercomparison Project (CMIP5) ensemble or subsets of this [7], and some have framed this in terms of impacts at global warming of 1.5°C and/or 2°C [8,9]. We also base our study on a subset of CMIP5 projections, but use a new, higher-resolution atmosphere model to provide greater spatial detail and improved representation of atmospheric processes.\n\n## 2. Methods and models\n\n## (a) Global climate simulations at 1.5°C and 2°C global warming\n\nThere are a number of ways in which 1.5°C or 2°C global warming can be defined—one could be the long-term climate state following a stabilization of warming at that level, another could be the state over a shorter period around the time of first reaching that level. Here we choose the second definition, which is what is seen first and hence needs to be adapted to. There are also a number of methods with which such changes can be assessed [10]. We take the opportunity of availability of a new set of higher-resolutions transient climate and impacts simulations, and use a time-sampling methodology [10] to assess global-scale impacts at these resolutions for the first time.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed11.pdf" - }, - { - "text": "### **3.1 Psychosocial risks at work**\n\nDuring the last 30 years, the scientific, political and practical discussions on **psychosocial risks** and preventive measures against psychosocial risks have gained strong importance. 
After a period of doubts and resistance, today they are regarded as risks of the same severity as the classical physical safety and health risks.4 (Chapter 1 covers the psychosocial risk aspect; for the prevalence of mental diseases and the burden of mental diseases see Chapter 2.2.5)\n\nLooking at the steady increase of certain psychosocial risk indicators at workplace level, either the **risks have increased** and/or the **number of people working in occupations** with higher psychosocial risks has increased.6,7 This is valid, for example, for the indicator time pressure, for example, in delivery services, transport, and often also clerical work; the workforce has grown in sectors where emotional demands from dealing with difficult clients, customers, pupils or patients are common; there are also more workers employed (or self-employed) in interactional occupations, for example, in call centres, or in occupations with a high level of emotional tensions, for example, education, health and care.\n\n#### **Figure 2: Risk factors that can adversely affect mental wellbeing – EWCS8 and ESENER9**\n\nA major difference between the ESENER and the EWCS survey is the respondent. In ESENER those persons who are most familiar with OSH or responsible for OSH in an enterprise were asked whether a certain risk factor exists in the enterprise; in the EWCS survey workers themselves were asked whether they are exposed to a risk factor.", - "page_start": 23, - "page_end": 23, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## 27. Contingent liabilities\n\nThe Group had contingent liabilities at 30 June 2013 in respect of guarantees. Bank guarantees have been given by Kingsgate's controlled entities to participating banks in the syndicated loan facility and corporate loan facility as described in Note 16 as part of the security package. 
These guarantees may give rise to liabilities in the parent entity if the controlled entities do not meet their obligations under the terms of the loans subject to guarantees. No material losses are anticipated in respect of the above contingent liabilities.\n\nIncluded in non-current other asset is $1,838,000 relating to restricted cash deposits against bank guarantees supporting the rehabilitation bond requirements against the Group's mining operations.\n\n## 28. Financial risk management and instruments\n\n#### Financial risk management\n\nThe Group's activities expose it to a variety of financial risks: market risk (including foreign currency risk, price risk, fair value risk, and interest rate risk), credit risk and liquidity risk.\n\nAt this point, the Directors believe that it is in the interest of shareholders to expose the Group to foreign currency risk, price risk and interest rate risk. Therefore, the Group does not employ any derivative hedging of foreign currency or interest rate risks though has entered into forward gold sale contracts to manage Australian gold price risk in respect of the forecast production from the Challenger Mine (refer \"commodity price risk\" section below). The Directors and management monitor these risks, in particular market forecasts of future movements in foreign currency and prices movements and if it is to be believed to be in the interests of shareholders will implement risk management strategies to minimise potential adverse effects on the financial performance of the Group.\n\nRisk management is carried out by the senior executive team. 
The Board provides written principles for overall risk management, as well as policies covering specific areas, such as foreign exchange risk, interest rate risk, credit risk, use of derivative financial instruments and non-derivative financial instruments, and investment of excess liquidity.\n\nThe Group holds the following financial instruments:\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| | $'000 | $'000 |\n| Financial assets | | |\n| Cash and cash equivalents | 32,987 | 90,623 |\n| Receivables | 9,431 | 12,226 |\n| Restricted cash | 5,474 | - |\n| Available-for-sale financial assets | 767 | 1,751 |\n| Other financial assets | 7,808 | 4,670 |\n| Total financial assets | 56,467 | 109,270 |\n| Financial liabilities | | |\n| Payables | (47,106) | (49,278) |\n| Borrowings | (202,565) | (157,544) |\n| Derivatives held for trading | (1,271) | (2,685) |\n| Total financial liabilities | (250,942) | (209,507) |\n\n#### (a) Market risk\n\n#### Foreign exchange risk\n\nThe Group operates internationally and is exposed to foreign exchange risk arising from currency exposures, primarily with respect to the US dollar and Thai Baht and as discussed earlier, no financial instruments are employed to mitigate the exposed risks. 
This is the Group's current policy and it is reviewed regularly including forecast movements in these currencies by management and the Board.\n\nCurrent year foreign exchange risks arise primarily from:\n\n- 〉 the sale of gold, which is in US dollars;\n- 〉 payables denominated in US dollars; and\n- 〉 cash balances in US dollars.\n\nThe functional currency of the Thai subsidiaries is Thai Baht.", - "page_start": 100, - "page_end": 100, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "# **3 Status of working conditions**\n\nThis chapter on health and safety-related working conditions provides an overview on status and development of working conditions; it is mainly based on the indicators that were **selected for the data visualisation in the OSH Barometer**. This is a quite limited selection of major data; in surveys and statistics many more indicators on working conditions are provided, particularly at national level.\n\nPractically all working conditions influence **mental health**, that is, they involve **psychosocial risks**, and all also involve **'physical risks'**, including safety aspects of these risks. Mental health risks are illustrated in the OSH Barometer by datasets on time pressure, poor communication, dealing with difficult clients, discrimination and harassment, and similar. 
**Physical risks** include datasets on accidents at work, exposures to chemical and biological substances, exposure to noise, vibrations, high or low temperatures, and working tasks with ergonomic risks, like carrying, lifting heavy loads or work in tiring or painful positions; and also permanent physical inactivity, mainly sitting or long standing.2\n\nThe figure below shows the percentage of enterprises reporting OSH risks 'present in the establishment', compared between 2014 and 2019 (ESENER) and covering mental and physical risks.3\n\n#### **Figure 1: Risk factors present (% of establishments) – ESENER 2014 and 2019**\n\nNote: Prolonged sitting was a new item in the 2019 survey.\n\nBetween 2014 and 2019, some risk factors increased, like 'Repetitive hand and arm movements', 'Lifting or moving people of heavy loads', and 'Having to deal with difficult customer, patient and pupils; many others showed no changes, like 'Risk of accidents with machines or hand tools', 'Chemical or biological substances', and 'Loud noise', or minor decreases like 'Risk of accidents with vehicles'.", - "page_start": 22, - "page_end": 22, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "**Biological agents have always** been a risk at workplaces in several sectors, particularly in health and care, in agriculture and the food industry, in laboratories, and in wastewater treatment, waste disposal and recycling. Also, climate change will raise the risks from biological agents in Europe, due to the expected warming that allows biological agents from tropical and subtropical regions to migrate to Europe. 296 An increasing resistance of bacteria towards antibiotic treatment is a particular risk in hospitals and care institutions.\n\nThe **COVID-19 pandemic** made the public aware of the powerful impact of these risks, and also of the high risks of infections in some occupations. 
During the pandemic, the above-mentioned list of workplaces with well-known risks from biological agents was significantly extended; practically all workplaces with direct human–human communication were included, for example, workers in education, workers in public transport, sales and restaurant personnel and so on.\n\nDuring the pandemic preventive measures for workplaces were introduced that **might change future prevention** practices towards biological agents, for example, the obligation to wear PPE might be applied for **many more worker groups and for more circumstances**, more rules for the organisation of personal contacts and communication at work have been developed and tested in practice, stronger ventilation might be implemented, and the measures might include a significantly higher use of disinfecting chemicals. The future development — be it regional or national outbreaks or worldwide pandemics — is unforeseeable. The connections between global societies due to international supply and transport chains and tourism will definitely increase the risk of future worldwide pandemics.", - "page_start": 107, - "page_end": 107, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "47 Adăscăliței et al., 2021: The intensification of work in Europe: A multilevel analysis\n\n48 EU-OSHA, 2002: Report - New forms of contractual relationships and the implications for occupational safety and health (p. 
7).\n\n49 Eurofound, 2011: Impact of subcontracting on working conditions\n\n50 Koranyi et al., 2018: Precarious employment and occupational accidents and injuries – a systematic review\n\n51 ILO Indicator description: Occupational injuries\n\n52 See the diagrams and country data in the OSH Barometer under: https://visualisation.osha.europa.eu/oshbarometer/\n\n53 Tynes et al., 2017: Physical working conditions as covered in European monitoring questionnaires\n\n54 EU-OSHA: Third European Survey of Enterprises on New and Emerging Risks (ESENER 3) – first findings, 2019, p. 3 and ESENER Data visualisation, section 'Comparisons 2014-2019', section 'Risk factors present in the establishment', Export data\n\n55 EU-OSHA calculations based on EWCS raw data.\n\n56 Eurostat, LFS Ad hoc modules: Persons reporting exposure to risk factors that can adversely affect physical health by sex, age and factor\n\n57 In the LFS-survey the respondents had to decide which of 11 possible risk factors is the most 'serious one'. Quote: *'Eurostat proposed to implement the exposure to risk factors for physical health at work by using one question that strictly reflects the variable or twelve questions asking for the presence of any of the eleven risk factors and then ask for the most serious one.'*\n\nIn the EWCS and ESENER all reported risk factors were registered.\n\n58 EU-OSHA, 2020: Work-related musculoskeletal disorders: why are they still so prevalent? Evidence from a literature review\n\n59 EU-OSHA, 2020: Work-related musculoskeletal disorders: why are they still so prevalent? Evidence from a literature review\n\n60 OSHWiki, 2020: Musculoskeletal disorders and prolonged static sitting\n\n- 61 EU-OSHA, 2010: Maintenance and Occupational Safety and Health: a statistical picture\n- 62 Marin-Garcia et al., 2020: Changes in the Association between European Workers' Employment Conditions and Employee Well-Being in 2005, 2010 and 2015 (p. 
9).\n\n63 Balogh et al., 2021: Non-standard employment and mortality in Belgian workers: A census-based investigation\n\n64 Gallagher & Underhill, 2012: Managing work health and safety: recent developments and future directions (p. 238).\n\nMy Business, n.d.: WHAT IS A 'PERSON CONDUCTING A BUSINESS OR UNDERTAKING'?\n\n65 Employment by sex, age and professional status (1 000), quarterly data, Eurostat employment types; Employment and activity - LFS adjusted series - historical data (1989-2020), Total employment, annual data, here\n\nPart-time: here and here\n\nTemporary: here and here\n\nContract with a limited duration, 15-64 years, here 66 Eurostat definitions: EU Labour Force Survey - Methodology\n\n67 OECD, 2019: Pensions at a Glance 2019, OECD and G20 Indicators\n\nQuote: *'Non-standard work is frequent among workers over 65 and women Non-standard work is common among older workers. While overall employment rates decrease at older ages, the share of non-standard work is particularly high among workers over 65: only about 15% of workers between 65 and 74 are in standard employment, against more than 60% at ages 55-64 and 25-54 (Figure 2.2, Panel A)'* (p. 
70).\n\n68 Eurofound, 2021: Seasonal worker\n\n*'A seasonal worker is defined in Article 3(b) of Directive 2014/36/EU on the conditions of entry and stay of thirdcountry nationals for the purpose of employment as 'a third-country national who retains his or her principal place of residence in a third country and stays legally and temporarily in the territory of a Member State to carry out an activity dependent on the passing of the seasons, under one or more fixed-term work contracts concluded directly between that third-country national and the employer established in that Member State.'* European Parliament and the Council: Directive 2014/36/EU of 26 February 2014 on the conditions of entry and stay of third-country nationals for the purpose of employment as seasonal workers.\n\n69 Action Plan EU: Seasonal workers are a group of mobile workers who retain their main place of residence in their home country and move temporarily to another Member State to carry out an activity dependent on the passing of the seasons, here\n\nArticle 2.1. *'This Directive shall apply to third-country nationals who reside outside the territory of the Member*", - "page_start": 142, - "page_end": 142, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "However, explained variance estimates in our models ranged from 34 to 61%, suggesting further research is necessary to identify additional factors contributing to healthcare utilization following physical therapy.\n\nThe primary limitation of the study is the high number of subjects lost to follow-up. We attempted to account for the bias introduced by loss to follow-up in our models with IPAW, which is a robust strategy for conducting analyses with missing data [41, 51]. We observed good concordance between results of complete case and weighted analyses, giving us confidence in our findings. 
However, important differences in age, race, education, symptom onset, baseline pain intensity, and baseline pain-related psychological distress were noted between those who did and did not complete follow-up. These differences suggest that the group lost to follow-up may represent a unique population to whom these results may not apply. Different factors may predict utilization outcomes for this unique population. As a result, readers should exercise caution when extending these findings to individuals and populations that substantially differ from the analytic sample in this study. Specifically, these predictive models may need to be adjusted for younger individuals of non-white race, with lower education levels, sudden onset of symptoms, and those with higher pain intensity and pain-associated distress.\n\nA second limitation is that we did not know about the subjects' prior experiences with physical therapy, or whether they arrived at physical therapy through direct access or referral from another provider. These factors could be associated with treatment expectations, which have known effects on treatment outcomes [52, 53]. We also did not collect specific information on treatment. But by including changes in pain, disability, and pain-related psychological distress in the models, we were able to account for treatment response. The benefit of this approach is that models are generalizable for predicting utilization outcomes across \"real-world\" pragmatic physical therapy settings where treatment variation is expected. The drawback is that we are prohibited from making conclusions regarding which characteristics of the clinical encounter might influence subsequent pain-related healthcare utilization. Important characteristics to consider would include number of visits, type of interventions or whether patients completed their course of physical therapy. 
These have been proposed or identified as important contributors to downstream costs following physical therapy [54, 55] and may be a source of unexplained variance in our models. Characteristics of the clinical encounter should be considered in future studies to refine the prediction models developed in our analyses.\n\nThird, we were unable to adequately model the specific effects of worker's compensation, self-pay and some commercial insurance coverage on utilization due to the low incidence of these forms of payment in our study sample. Modeling these separately would have created the potential for unreliable and imprecise effect estimates. Readers should consider the within-group heterogeneity caused by this approach and exercise caution when applying these results to individuals who do not have traditional public or private insurance coverage. Future studies should investigate the performance of the\n\nWorker's Compensation. A final limitation is the use of patient recall to measure utilization. To mitigate recall bias, we used two follow-up points, at 6 and 12 months. However, underor over-reporting of utilization is often a concern with studies requiring subject recall [56–58]. Medical record and claims data were not available for these subjects. Readers should consider our inability to independently confirm utilization when interpreting results.\n\nOSPRO tools in predicting outcomes for patients with\n\nIn future studies, we will embed the OSPRO tools into electronic medical record (EMR) databases to refine and test outcomes prediction models at the health care systems level. Importantly, we will collect clinical encounter data through the EMR and combine it with administrative or billing data to confirm the results of this study with more objective measures of health care use. 
These studies will also allow us to provide better guidance on how to use the OSPRO tools to identify serious psychiatric involvement or systemic sources of pain that require medical referral. Finally, we will explore alternative scoring strategies for the tools, such as weighted scoring for the OSPRO-ROS and use of predicted full-length psychological questionnaire scores for the OSPRO-YF. Healthcare providers could then use the collective information from these studies to build learning health systems that facilitate effective, real-time clinical decision-making support to improve value of care for patients with musculoskeletal pain.\n\n#### Conclusion\n\nBaseline disability and change in pain intensity were important predictors of any subsequent pain-related healthcare utilization, while predictors of individual service utilization were outcome-specific. Identification of risk is improved through treatment monitoring for pain and, in some cases, disability and pain-related psychological distress. Comorbidity burden was an important predictor of subsequent utilization of opioids and diagnostic tests and imaging, both of which have been recent targets of healthcare policy to constrain their unnecessary use. Future research is needed to refine these predictor variables and incorporate them into risk models that support clinical decision-making so that treatment effectiveness and efficiency are optimized in value-based systems.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed5.pdf" - } - ] - }, - { - "references": { - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf", - "query": "To what system of logic do OWL ontologies belong to ?", - "target_page": 7, - "target_passage": "OWL ontologies are an implementation of Description Logic (DL) which is a decidable subset of First Order Logic", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "next section. 
Which option you choose for your ontology will depend on the specific requirements you have as well as the standards established by your organization or organizations that you work with.\n\nFinally, another name related concept you should be aware of is the concept of a namespace. If you have worked with most modern programming languages such as Python or Java, you are already familiar with the concept of a namespace. The concept is identical in OWL. A namespace is used to avoid naming conflicts between different ontologies. For example, you may have a class called Network in an ontology about telecommunications. You might also have a class called Network in an ontology about graph theory. The two concepts are related but are different. Just as with programming languages you use namespace prefixes to determine what specific namespace a name refers to. E.g., in this example you might have the prefix tc for the Telecom ontology and gt for the Graph Theory ontology. Thus, when you referred to the Network class for the Telecom ontology you would use tc:Network and gt:Network for the graph theory class.\n\nNote that you already have some experience with other namespaces. The OWL namespace prefix is owl and is used to refer to classes such as owl:Thing and owl:Nothing. The Resource Description Framework Schema (RDFS) is a model that OWL is built on top of and thus some properties that ontologies use such as rdfs:label leverage this namespace.\n\nIn the bottom view of the Active ontology tab there is a tab called Ontology Prefixes. This tab shows all the current namespace mappings in your ontology. There are certain concepts from OWL, RDF, RDFS, XML and XSD that are required for every ontology, so those namespaces are by default mapped in every new Protégé ontology. There is also a mapping to the empty string for whatever the namespace is for your ontology. This allows you to display and refer to entities in your ontology without entering a namespace prefix. 
If you look at that tab now you should see a row where the first column is blank, and the second column has the base IRI for your ontology. It should be the same IRI as the Ontology IRI at the top of the Active ontology tab, except it also has a # sign at the end. E.g., the Pizza tutorial developed for this tutorial has an IRI of: http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial and the row that has a blank first column in Ontology Prefixes has the IRI: http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial#.", - "page_start": 61, - "page_end": 61, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "new formal systems have been proposed. There are disagreements about what makes a formal system a logic.[22] For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense.[23]\n\nFormal logic needs to translate natural language arguments into a formal language, like first-order logic, to assess whether they are valid. In this example, the letter \"c\" represents Carmen while the letters \"M\" and \"T\" stand for \"Mexican\" and \"teacher\". The symbol \"∧\" has the meaning of \"and\".\n\n# **Informal logic**\n\nWhen understood in a wide sense, logic\n\nencompasses both formal and informal logic.[24] Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. 
Its main focus is on everyday discourse.[25] Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments.[26] In this regard, it considers problems that formal logic on its own is unable to address.[27] Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies.[28]\n\nMany characterizations of informal logic have been suggested but there is no general agreement on its precise definition.[29] The most literal approach sees the terms \"formal\" and \"informal\" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language.[30] Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form.[31] On this view, the argument \"Birds fly. Tweety is a bird. Therefore, Tweety flies.\" belongs to natural language and is examined by informal logic. But the formal translation \"(1) ; (2) ; (3) \" is studied by formal logic.[32] The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent.[33] Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation.[34]\n\nAnother characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic.[35] Non-deductive arguments make their conclusion probable but do not ensure that it is true. 
An example is the inductive argument from the empirical observation that \"all ravens I have seen so far are black\" to the conclusion \"all ravens are black\".[36]\n\nA further approach is to define informal logic as the study of informal fallacies. [37] Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument.[38] A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy \"you are either with us or against us; you are not with us; therefore, you are against us\".[39] Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia1.pdf" - }, - { - "text": "To understand what is going on you first need to understand that each SPARQL query consists of two parts. The first part at the beginning consists of several namespace prefixes. These statements consist of the prefix used for a particular namespace as well as the IRI associated with this namespace. Recall that these concepts were described in chapter 7. You may be wondering where all these prefixes came from since you didn't add them to your ontology. The answer is that every OWL ontology comes with a set of namespaces and prefixes that are required to define the ontology.\n\nAlso, to understand SPARQL you need to \"peak under the hood\" of OWL. So far, we have been discussing concepts in purely logical and set theoretic terms, i.e., at the semantic level. However, like any language or database there is a lower level that describes how the concepts are mapped to actual data. In a relational database the fundamental construct to represent data is a table. In OWL the fundamental construct is a triple. OWL is actually built on top of RDFS which is a language built on top of RDF. 
RDF (Resource Description Framework) is a language to describe graphs (in the mathematical sense of the term). I.e., to describe nodes and links.\n\nThe foundation for RDF graphs are triples consisting of a subject, predicate, and object. This results in what is called an undirected or network graph because objects can be subjects and vice versa. Whenever you define a property in OWL you are defining a predicate. An individual can be a subject or an object (or both). E.g., in our ontology Customer1 purchasedPizza AmericanaHotPizza1. In this example Customer1 is the subject, purchasedPizza is the predicate and AmericanaHotPizza1 is the object.\n\nHowever, classes and properties themselves are also represented as triples. So for example, when you create the class Pizza what Protégé does for you is to add the triple: Pizza rdf:type owl:Class to the ontology. I.e., the Pizza entity is of type (is an instance of) owl:Class. Similarly when you add NamedPizza as a subclass of Pizza, Protégé adds the triple: NamedPizza rdfs:**s**ubClassOf Pizza.\n\nHopefully, now you can make some sense of this initial query. The query is looking for all the entities that are the subjects of triples where the predicate is rdfs:**s**ubClassOf and the object is any other entity. The *?* before a name indicates that the name is a wildcard that can match anything that fits with the rest of the pattern. This is part of the power of SPARQL, one can match a Subject, an Object, a Predicate or even all three. Making all 3 parts of the pattern wildcards would return every triple in the graph (in this case our entire Pizza ontology) being searched. You may notice that in some cases the object is simply the name of a class while in others it is a class expression with an orange circle in front of it. This is because when defining classes using DL axioms Protégé creates anonymous classes that correspond to various DL axioms.\n\nThe SELECT part of a SPARQL query determines what data to display. 
The WHERE part of a query determines what to match in the query. If you want to display everything matched in the WHERE clause you can just use a * for the SELECT clause. The initial default query in this tab is set up with no knowledge of the specific ontology. I.e., it will return all the classes that are subclasses of other classes regardless of the ontology. To get information about Pizzas the first thing we need to do is to add another prefix to the beginning of the query. In our case the Pizza ontology has been set up with a mapping to the prefix pizza (you can see this in the ontology prefixes tab in the Active ontology tab discussed in chapter 7). So, add the following to the SPARQL query after the last PREFIX statement:\n\n#### PREFIX pizza: \n\nWe are almost ready to query the actual ontology. For our first query let's find all the Pizzas purchased by a Customer. The SPARQL code for this is:", - "page_start": 68, - "page_end": 68, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "#### **Exercise 3: Add a Comment Annotation to Your Ontology**\n\n1. Make sure you are in the Active Ontology tab. In the view just below the Ontology IRI and Ontology Version IRI fields find the Annotations option and click on the + sign. This will bring up a menu to create a new annotation on the ontology.\n\n_____________________________________________________________________________________\n\n2. The rdfs:comment annotation should be highlighted by default. If it isn't highlighted click on it. Then type a new comment into the view to the right. Something like A tutorial ontology for the Pizza domain.\n\n_____________________________________________________________________________________\n\n3. Click OK. Your Active Ontology tab should like Figure 4.3.\n\nFigure 4.4: The Class Hierarchy View Options\n\n#### 4.1 Named Classes\n\nThe main building blocks of an OWL ontology are classes. In Protégé 5, editing of classes can be done in the Entities tab. 
The Entities tab has a number of sub-tabs. When you select it, the default should be the Class hierarchy view as shown in Figure 4.5. 4 All empty ontologies contains one class called owl:Thing. OWL classes are sets of individuals. The class owl:Thing is the class that represents the set containing all individuals. Because of this all classes are subclasses of owl:Thing.\n\n4 Each of the sub-tabs in the Entities tab also exists as its own major tab. In the tutorial we will refer to tabs like the Class hierarchy tab or Object properties tab and it is up to the user whether to access them from the Entities tab or to create them as independent tabs.", - "page_start": 13, - "page_end": 13, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance. Examples of concepts it overlooks are the contrast between necessity and possibility and the problem of ethical obligation and permission. Similarly, it does not address the relations between past, present, and future.[119] Such issues are addressed by extended logics. They build on the basic intuitions of classical logic and expand it by introducing new logical vocabulary. This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics.[120]\n\n#### **Propositional logic**\n\nPropositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. For instance, propositional logic represents the conjunction of two atomic propositions and as the complex formula . Unlike predicate logic where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic component.[121] Thus, propositional logics can only represent logical relationships that arise from the way complex propositions are built from simpler ones. 
But it cannot represent inferences that result from the inner structure of a proposition.[122]\n\n#### **First-order logic**\n\nFirst-order logic includes the same propositional connectives as propositional logic but differs from it because it articulates the internal structure of propositions. This happens through devices such as singular terms, which refer to particular objects, predicates, which refer to properties and relations, and quantifiers, which treat notions like \"some\" and \"all\".[123] For example, to express the proposition \"this raven is black\", one may use the predicate for the property \"black\" and the singular term referring to the raven to form the expression . To express that some objects are black, the existential quantifier is combined\n\nGottlob Frege's *Begriffschrift* introduced the notion of quantifier in a graphical notation, which here represents the judgment that is true.\n\nwith the variable to form the proposition . First-order logic contains various rules of inference that determine how expressions articulated this way can form valid arguments, for example, that one may infer from . [124]\n\n#### **Extended**\n\nExtended logics are logical systems that accept the basic principles of classical logic. They introduce additional symbols and principles to apply it to fields like metaphysics, ethics, and epistemology. [125]\n\n#### **Modal logic**\n\nModal logic is an extension of classical logic. In its original form, sometimes called \"alethic modal logic\", it introduces two new symbols: expresses that something is possible while expresses that something is necessary. 
[126] For example, if the formula stands for the sentence \"Socrates is a banker\" then the formula articulates the sentence \"It is possible that Socrates is a banker\".[127] To include these symbols in the logical formalism, modal logic introduces new rules of inference that govern", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia1.pdf" - }, - { - "text": "incoming information.[154] Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws.[155]\n\n# **Areas of research**\n\nLogic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science.[156] In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems.[157]\n\n# **Philosophy of logic and philosophical logic**\n\n*Philosophy of logic* is the philosophical discipline studying the scope and nature of logic.[59] It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them.[158] It is also concerned with how to classify logical systems and considers the ontological commitments they incur. [159] *Philosophical logic* is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. [160] This application usually happens in the form of extended or deviant logical systems. 
[161]\n\n# **Metalogic**\n\nMetalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics.[162]\n\n# **Mathematical logic**\n\nThe term \"mathematical logic\" is sometimes used as a synonym of \"formal logic\". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. [164] Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. 
However, it can also include attempts", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia1.pdf" - }, - { - "text": "Ibn Sina (Avicenna) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world. [189] It influenced Western medieval writers such as Albertus Magnus and William of Ockham. [190] Ibn Sina wrote on the hypothetical syllogism[191] and on the propositional calculus. [192] He developed an original \"temporally modalized\" syllogistic theory, involving temporal logic and modal logic.[193] He also made use of inductive logic, such as his methods of agreement, difference, and concomitant variation, which are critical to the scientific method. [191] Fakhr al-Din al-Razi was another influential Muslim logician. He criticized Aristotelian syllogistics and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill.[194]\n\nDuring the Middle Ages, many translations and interpretations of Aristotelian logic were made. The works of Boethius were particularly influential. Besides translating Aristotle's work into Latin, he also produced textbooks on logic.[195] Later, the works of Islamic philosophers such as Ibn Sina and Ibn Rushd (Averroes) were drawn on. This expanded the range of ancient works available to medieval Christian scholars since more Greek work was available to Muslim scholars that had been preserved in Latin commentaries. In 1323, William of Ockham's influential *Summa Logicae* was released. It is a comprehensive treatise on logic that discusses many basic concepts of logic and provides a systematic exposition of types of propositions and their truth conditions.[196]", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia1.pdf" - }, - { - "text": "propositions into account, like predicates and quantifiers. 
Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.\n\n# **Definition**\n\nThe word \"logic\" originates from the Greek word *logos*, which has a variety of translations, such as reason, discourse, or language. [4] Logic is traditionally defined as the study of the laws of thought or correct reasoning, [5] and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences.[6] An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion.[7] These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments.[8] Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic.[9]\n\n# **Formal logic**\n\nFormal logic is also known as symbolic logic and is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content.[10]\n\nFormal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false.[11] For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. 
[12] For example, modus ponens is a rule of inference according to which all arguments of the form \"(1) *p*, (2) if *p* then *q*, (3) therefore *q*\" are valid, independent of what the terms *p* and *q* stand for. [13] In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. [14] A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim \"either it is raining, or it is not\".[15] These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from *p* to *q* is deductively valid then the claim \"if *p* then *q*\" is a logical truth.[16]\n\nFormal logic uses formal languages to express and analyze arguments.[17] They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. [18] This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid.[19] Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed.[20]\n\nThe term \"logic\" can also be used in a slightly different sense as a countable noun. In this sense, *a logic* is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them.[21] Starting in the late 19th century, many", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia1.pdf" - }, - { - "text": "relations, transitive relations, and many more. 
An understanding of the basic concepts of set theory will help the user get the most out of OWL but is not required. One of the benefits of Protégé is that it presents an intuitive GUI that enables domain experts to define models without a background in set theory. However, developers are encouraged to refresh their knowledge on logic and set theory. A good source is the first 3 chapters in Elements of the Theory of Computation by Lewis and Papadamitrious. Another good source is the PDF document *Overview of Set Theory* available at: https://www.michaeldebellis.com/post/owl-theoretical-basics\n\n### 3.1.1 Individuals\n\nIndividuals represent objects in the domain of interest. An important difference between OWL and most programming and knowledge representation languages is that OWL does not use the Unique Name Assumption (UNA). This means that two different names could actually refer to the same individual. For example, \"Queen Elizabeth\", \"The Queen\" and \"Elizabeth Windsor\" might all refer to the same individual. In OWL, it must be explicitly stated that individuals are the same as each other, or different from each other. Figure 3.1 shows a representation of some individuals in a domain of people, nations, and relations — in this tutorial we represent individuals as diamonds.\n\nFigure 3.2: Representation of Properties\n\nIndividuals are also known as *instances*. Individuals can be referred to as *instances of classes*.", - "page_start": 7, - "page_end": 7, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "what role they play in inferences. One rule of inference states that, if something is necessary, then it is also possible. This means that follows from . Another principle states that if a proposition is necessary then its negation is impossible and vice versa. This means that is equivalent to . 
[128]\n\nOther forms of modal logic introduce similar symbols but associate different meanings with them to apply modal logic to other fields. For example, deontic logic concerns the field of ethics and introduces symbols to express the ideas of obligation and permission, i.e. to describe whether an agent has to perform a certain action or is allowed to perform it.[129] The modal operators in temporal modal logic articulate temporal relations. They can be used to express, for example, that something happened at one time or that something is happening all the time.[129] In epistemology, epistemic modal logic is used to represent the ideas of knowing something in contrast to merely believing it to be the case.[130]\n\n#### **Higher order logic**\n\nHigher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification.[131] Quantifiers correspond to terms like \"all\" or \"some\". In classical first-order logic, quantifiers are only applied to individuals. The formula \" \" (*some* apples are sweet) is an example of the existential quantifier \" \" applied to the individual variable \" \". In higherorder logics, quantification is also allowed over predicates. This increases its expressive power. For example, to express the idea that Mary and John share some qualities, one could use the formula \" \". In this case, the existential quantifier is applied to the predicate variable \" \". [132] The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories.[43] But it has drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still more commonly used.[133]\n\n#### **Deviant**\n\nDeviant logics are logical systems that reject some of the basic intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. 
Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue.[134]\n\nIntuitionistic logic is a restricted version of classical logic.[135] It uses the same symbols but excludes some rules of inference. For example, according to the law of double negation elimination, if a sentence is not not true, then it is true. This means that follows from . This is a valid rule of inference in classical logic but it is invalid in intuitionistic logic. Another classical principle not part of intuitionistic logic is the law of excluded middle. It states that for every sentence, either it or its negation is true. This means that every proposition of the form is true.[135] These deviations from classical logic are based on the idea that truth is established by verification using a proof. Intuitionistic logic is especially prominent in the field of constructive mathematics, which emphasizes the need to find or construct a specific example to prove its existence.[136]\n\nMulti-valued logics depart from classicality by rejecting the principle of bivalence, which requires all propositions to be either true or false. For instance, Jan Łukasiewicz and Stephen Cole Kleene both proposed ternary logics which have a third truth value representing that a statement's truth value is indeterminate.[137] These logics have been applied in the field of linguistics. Fuzzy logics are multivalued logics that have an infinite number of \"degrees of truth\", represented by a real number between 0 and 1.[138]", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia1.pdf" - } - ] - }, - { - "references": { - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf", - "query": "Concerning ontologies, what is an anonymous class ?", - "target_page": 30, - "target_passage": "They are created by the reasoner when you use class expressions. 
For example, if you define the range of a property to be PizzaTopping or PizzaBase then the reasoner will create an anonymous class representing the intersection of those two classes", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "5. One last thing we want to do is to configure the reasoner. By default, the reasoner does not perform all possible inferences because some inferences can take a long time for large and complex ontologies. In this tutorial we will always be dealing with small and simple ontologies so we want to see everything the reasoner can do. Go to: Reasoner>Configure. This will bring up a dialog with several check boxes of inferences that the reasoner can perform. If they aren't all checked then check them all. You may receive a warning that some inferences can take a lot of time, but you can ignore those since your ontology will be small.\n\n#### 4.3 Disjoint Classes\n\nHaving added the classes Pizza, PizzaTopping, and PizzaBase to the ontology, we now want to say that these classes are *disjoint*. I.e., no individual can be an instance of more than one of those classes. In set theory terminology the intersection of these three classes is the empty set: owl:Nothing.\n\n_____________________________________________________________________________________\n\n_____________________________________________________________________________________\n\n#### **Exercise 6: Make Pizza, PizzaTopping, and PizzaBase disjoint from each other**\n\n1. Select the class Pizza in the class hierarchy.\n\n2. Find the Disjoint With option in the Description view and select the (+) sign next to it. See the red circle in figure 4.6.\n\n3. This should bring up a dialog with two tabs: Class hierarchy and Expression editor. You want Class hierarchy for now (we will use the expression editor later). This gives you an interface to select a class that is identical to the Class hierarchy view. Use it to navigate to PizzaBase. 
Hold down the shift key and select PizzaBase and PizzaTopping. Select OK.\n\n4. Do a Reasoner>Synchronize reasoner. Then look at PizzaBase and PizzaTopping. You should see that they each have the appropriate disjoint axioms defined to indicate that each of these classes is disjoint with the other two.\n\n_____________________________________________________________________________________", - "page_start": 16, - "page_end": 16, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "The restrictions for a class are displayed and edited using the Class Description View shown in Figure 4.17. The Class Description View holds most of the information used to describe a class. The Class Description View is a powerful way of describing and defining classes. It is one of the most important differences between describing classes in OWL and in other models such as most object-oriented programming languages. In other models there is no formal definition that describes why one class is a subclass of another, in OWL there is. Indeed, the OWL classifier can actually redefine the class hierarchy based on the logical restrictions defined by the user. We will see an example of this later in the tutorial.\n\nRestrictions are also called axioms in OWL. This has the same meaning as in logic. An axiom is a logical formula defined by the user rather than deduced by the reasoner. As described above, in Protégé all axioms are shown in normal font whereas all inferences inferred by the reasoner are highlighted in yellow.\n\n#### 4.10.2 Existential Restrictions\n\nAn existential restriction describes a class of individuals that have at least one (some) relationship along a specified property to an individual that is a member of a specified class or datatype. 
For example, hasBase some PizzaBase describes all of the individuals that have at least one relationship along the hasBase property to an individual that is a member of the class PizzaBase — in more natural English, all of the individuals that have at least one pizza base.\n\n#### **Exercise 13: Add a restriction to Pizza that specifies a Pizza must have a PizzaBase**\n\n1. Select Pizza from the class hierarchy on the Classes tab.\n\n2. Click on the Add icon (+) next to the SubClass Of field in the Description view for Pizza.\n\n3. This will bring up a new window with several tab options to define a new restriction. Select the Object restriction creator. This tab has the Restricted property on the left and the Restriction filler on the right.\n\n_____________________________________________________________________________________\n\n4. Expand the property hierarchy on the left and select hasBase as the property to restrict. Then in the Restriction filler on the right select the class PizzaBase. Finally, the Restriction type at the bottom should be set to Some (existential). This should be the default so you shouldn't have to change anything but double check that this is the case. Your window should look like figure 4.16 now.\n\n5. When your UI looks like figure 4.16 click on the OK button. That should close the window. Run the reasoner to make sure things are consistent. 
Your main window should now look like figure 4.17.\n\n_____________________________________________________________________________________", - "page_start": 31, - "page_end": 31, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "The following are some examples of classes of individuals that we might want to define via property restrictions:\n\n- The class of individuals with at least one hasChild relation.\n- The class of individuals with 2 or more hasChild relations.\n- The class of individuals that have at least one hasTopping relationship to individuals that are members of MozzarellaTopping – i.e. the class of things that have at least a mozzarella topping.\n- The class of individuals that are Pizzas and only have hasTopping relations to instances of the class VegetableTopping (i.e., VegetarianPizza).\n\nIn OWL we can describe all of the above classes using restrictions. OWL restrictions fall into three main categories:\n\n- 1. Quantifier restrictions. These describe that a property must have some or all values that are of a particular class.\n- 2. Cardinality restrictions. These describe the number of individuals that must be related to a class by a specific property.\n- 3. hasValue restrictions. These describe specific values that a property must have.\n\nWe will initially use quantifier restrictions. Quantifier restrictions can be further categorized as *existential* restrictions and *universal* restrictions6 . Both types of restrictions will be illustrated with examples in this tutorial.\n\n- Existential restrictions describe classes of individuals that participate in at least one relation along a specified property. For example, the class of individuals who have at least one (or some) hasTopping relation to instances of VegetableTopping. 
In OWL the keyword some is used to denote existential restrictions.\n- Universal restrictions describe classes of individuals that for a given property *only* have relations along a property to individuals that are members of a specific class. For example, the class of individuals that only have hasTopping relations to instances of the class VegetableTopping. In OWL they keyword only is used for universal restrictions.\n\nLet's take a closer look at an example of an existential restriction. The restriction hasTopping some MozzarellaTopping is an existential restriction (as indicated by the some keyword), which restricts the hasTopping property, and has a filler MozzarellaTopping. This restriction describes the class of individuals that have at least one hasTopping relationship to an individual that is a member of the class MozzarellaTopping.\n\nA restriction always describes a class. Sometimes (as we will soon see) it can be a defined class. Other times it may be an anonymous class. In all cases the class contains all of the individuals that satisfy the restriction, i.e., all of the individuals that have the relationships required to be a member of the class. In section 9.2 one of our SPARQL queries will return several anonymous classes.\n\n6 These have the same meaning as existential and universal quantification in First Order Logic.", - "page_start": 30, - "page_end": 30, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "| Preferences | | | | × |\n| --- | --- | --- | --- | --- |\n| New ontologies | OWLViz Plugins Reasoner | | Renderer User details | |\n| Annotations Explanations | General Loa | New entities | New entities metadata | |\n| Entity rendering @ Render by entity IRI short name (Id) | | | | |\n| | O Render by prefixed name | | | |\n| | O Render by annotation property (e.g., rdfs:label, skos:prefLabel) | | | |\n| | O Render by prefixed annotation property | | | |\n| | Configure ... 
| | | |\n| Appearance | Highlight active ontology statements | | | |\n| | Show hyperlinks in components that support them | | | |\n| | Highlight keywords | | | |\n| Font size | 12 = | | | |\n| | Reset font | | | |\n| Reset preferences ... | | | | |\n| | OK Cancel | | | |\n\nFigure 4.2 Renderer tab\n\n| □ < PizzaTutorial (http://www.semanticweb.org/michaeldebellis/ontologies/2020/PizzaTutorial) : [C:\\Users\\Michael DeB ... | | | | | × |\n| --- | --- | --- | --- | --- | --- |\n| Edit Refactor Window Help | File View | Reasoner | Tools | | |\n| + PizzaTutorial (http://www.semanticweb.org/michaeldebellis/ontologies/2020/PizzaTutorial) | | | | | Search ... |\n| Active ontology × Entities × Individuals by class × DL Query × | | | | | |\n| Ontology header: 团团启回团 Ontology metric 团团目回区 | | | | | |\n| Ontology IRI http://www.semanticweb.org/michaeldebellis/ontologies/2020/PizzaTutorial Metrics | | | | | |\n| Ontology Version IRI e.g. http://www.semanticweb.org/michaeldebellis/ontologies/2020/PizzaTutorial/1 Axiom 0 | | | | | |\n| Logical axio ... 0 | | | | | |\n| Declaration ... 0 Annotations (+ | | | | | |\n| Class count 0 rdfs:comment × (0) | | | | | |\n| Object prop ... 0 A tutorial ontology for the Pizza domain. | | | | | |\n| Data proper ... 0 | | | | | |\n| Individual c ... 0 | | | | | |\n| Annotation ... 1 | | | | | |\n| Class axioms | | | | | |\n| SubClassOf 0 | | | | | |\n| Ontology imports General class axioms | | Ontology Prefixes | | | |\n| Imported ontologies: | | | | | 008回國國國國 |\n| Direct Imports (+ | | | | | |\n| Indirect Imports | | | | | |\n| Show Inferences | | | | To use the reasoner click Reasoner > Start reasoner | 0 |\n\nFigure 4.3: The Active Ontology Tab with a New Comment", - "page_start": 12, - "page_end": 12, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "#### **Exercise 12: Define the domain and range for the hasBase property**\n\n1. 
Now we are going to repeat the same activities as in the previous exercise but for another property: hasBase. Make sure you are still on the Object properties tab. Select the hasBase property.\n\n2. Click on the Add icon (+) next to Domains (intersection) in the Description view for hasBase. Select the ClassHierarchy tab. Then select Pizza from the class hierarchy..\n\n_____________________________________________________________________________________\n\n3. Repeat step 2 but this time start by using the (+) icon next to the Ranges (intersection) in the Description for hasBase. This time select the class PizzaBase as the range.\n\n4. Synchronize the reasoner. Now select isBaseOf You should see that the Domain and Range for isBaseOf have been filled in by the reasoner.\n\n#### 4.10 Describing and Defining Classes\n\nNow that we have defined some properties, we can use these properties to define some more interesting classes. There are 3 types of classes in OWL:\n\n_____________________________________________________________________________________\n\n- 1. Primitive classes. These are classes that are defined by conditions that are *necessary* (but not sufficient) to hold for any individuals that are instances of that class or its subclasses. The condition may be as simple as: *Class A is a subclass of class B*. To start with we will define primitive classes first and then defined classes. When the reasoner encounters an individual that is an instance of a primitive class it infers that all the conditions defined for that class must hold for that individual.\n- 2. Defined classes. These are classes that are defined by both *necessary* and *sufficient* conditions. When the reasoner encounters an individual that satisfies all the conditions for a defined class it will make the inference that the individual is an instance of that class. 
The reasoner can also use the conditions defined on classes to change the class hierarchy, e.g., to infer that *Class A is a subclass of Class B*. We will see examples of this later in the tutorial.\n- 3. Anonymous classes. These are classes that you won't encounter much and that won't be discussed much in this tutorial, but it is good to know about them. They are created by the reasoner when you use class expressions. For example, if you define the range of a property to be PizzaTopping or PizzaBase then the reasoner will create an anonymous class representing the intersection of those two classes.\n\n#### 4.10.1 Property restrictions\n\nIn OWL properties define binary relations with the same semantics and characteristics as binary relations in First Order Logic. There are two types of OWL properties for describing a domain: Object properties and Data properties. Object properties have classes as their domain and range. Data properties have classes as their domain and simple datatypes such as xsd:string or xsd:dateTime as their range. In figure 3.3 the individual Michael is related to the individual USA by the property livesIn. Consider all the individuals who are an instance of Person and also have the same relation, that each livesIn the USA. This group is a set or OWL class such as USAResidents. In OWL a class can be defined by describing the various properties and values that hold for all individuals in the class. Such definitions are called *restrictions* in OWL.", - "page_start": 29, - "page_end": 29, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "#### **Exercise 3: Add a Comment Annotation to Your Ontology**\n\n1. Make sure you are in the Active Ontology tab. In the view just below the Ontology IRI and Ontology Version IRI fields find the Annotations option and click on the + sign. 
This will bring up a menu to create a new annotation on the ontology.\n\n_____________________________________________________________________________________\n\n2. The rdfs:comment annotation should be highlighted by default. If it isn't highlighted click on it. Then type a new comment into the view to the right. Something like A tutorial ontology for the Pizza domain.\n\n_____________________________________________________________________________________\n\n3. Click OK. Your Active Ontology tab should like Figure 4.3.\n\nFigure 4.4: The Class Hierarchy View Options\n\n#### 4.1 Named Classes\n\nThe main building blocks of an OWL ontology are classes. In Protégé 5, editing of classes can be done in the Entities tab. The Entities tab has a number of sub-tabs. When you select it, the default should be the Class hierarchy view as shown in Figure 4.5. 4 All empty ontologies contains one class called owl:Thing. OWL classes are sets of individuals. The class owl:Thing is the class that represents the set containing all individuals. Because of this all classes are subclasses of owl:Thing.\n\n4 Each of the sub-tabs in the Entities tab also exists as its own major tab. In the tutorial we will refer to tabs like the Class hierarchy tab or Object properties tab and it is up to the user whether to access them from the Entities tab or to create them as independent tabs.", - "page_start": 13, - "page_end": 13, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## *5. Examining approaches to building a books data commons*\n\nThere are many possible permutations for building a books data commons. To structure our exploration, we focused on two particular tracks, discussed below. We chose these tracks mindful of the above legal issues, and because there are already existence proofs that help to illuminate tradeoffs, challenges and potential paths forward for each.\n\n## *5a. 
Public domain and permissively licensed books*\n\n#### **Existing Project Example : The Pile v2** 27\n\nIn 2020, the nonprofit research group EleutherAI constructed and released The Pile — a large, diverse, open dataset for AI training. EleutherAI developed it not only to support their own training of LLMs, but also to lower the barriers for others.28\n\nAlong with data drawn from the web at large, The Pile included books from three datasets. The first dataset was the Books3 corpus referenced at the outset of this paper. The second and third books datasets were smaller: BookCorpus2, which is a collection of 17,868 books by otherwise unpublished authors; and a 28,752 books in the public domain and published prior to 1919, drawn from a volunteer effort to digitize public domain works called Project Gutenberg.\n\nAs the awareness about The Pile dataset grew, certain rightsholders began sending copyright notices to have the dataset taken down from various websites.\n\nDespite the takedown requests, the importance of books to EleutherAI and the broader community's AI research remained. In hoping to forge a path forward EleutherAI announced in 2024 that they would create a new version of the dataset, which they will call The Pile v2.29 Among other things, v2 would \"have many more books than the original Pile had, for example, and more diverse representation of non-academic non-fiction domains.\" At the same time, it would only seek to include public domain books and permissively licensed content. As before, this corpus focuses on English language books.\n\nThis is an illustrative example, and there are also other projects of this ilk. 
For instance, see the 27 Common Corpus project, which includes an array of public domain books from a number of countries, at https://huggingface.co./blog/Pclanglais/common-corpus; see also https://huggingface.co./datasets/ storytracer/internet_archive_books_en (\"This dataset contains more than 650,000 English public domain books (~ 61 billion words) which were digitized by the Internet Archive and cataloged as part of the Open Library project.\")\n\nSee Gao et al, supra note 8. 28\n\nGoldman, Sharon. \"One of the World's Largest AI Training Datasets Is About to Get Bigger and 29 \"Substantially Better.\" *VentureBeat*, 11 Jan. 2024, venturebeat.com/ai/one-of-the-worlds-largest-aitraining-datasets-is-about-to-get-bigger-and-substantially-better/. Accessed 20 Mar. 2024.", - "page_start": 12, - "page_end": 12, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "most similar to the ones used in GPT-2's training data, i.e. documents linked to from Reddit [25], plus Wikipedia and a collection of books. While this was reportedly effective at filtering out documents that previous work characterized as \"unintelligible\" [134], what is unmeasured (and thus unknown) is what else it filtered out. The Colossal Clean Crawled Corpus [107], used to train a trillion parameter LM in [43], is cleaned, inter alia, by discarding any page containing one of a list of about 400 \"Dirty, Naughty, Obscene or Otherwise Bad Words\" [p.6].14 This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika, white power) included. 
While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites [125]) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink, the influence of online spaces built by and for LGBTQ people.15 If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light.\n\nThus at each step, from initial participation in Internet fora, to continued presence there, to the collection and finally the filtering of training data, current practice privileges the hegemonic viewpoint. In accepting large amounts of web text as 'representative' of 'all' of humanity we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality. We instead propose practices that actively seek to include communities underrepresented on the Internet. For instance, one can take inspiration from movements to decolonize education by moving towards oral histories due to the overrepresentation of colonial views in text [35, 76, 127], and curate training datasets through a thoughtful process of deciding what to put in, rather than aiming solely for scale and trying haphazardly to weed out, post-hoc, flotsam deemed 'dangerous', 'unintelligible', or 'otherwise bad'.\n\n#### 4.2 Static Data/Changing Social Views\n\nA central aspect of social movement formation involves using language strategically to destabilize dominant narratives and call attention to underrepresented social perspectives. Social movements produce new norms, language, and ways of communicating. 
This adds challenges to the deployment of LMs, as methodologies reliant on LMs run the risk of 'value-lock', where the LM-reliant technology reifies older, less-inclusive understandings.\n\nFor instance, the Black Lives Matter movement (BLM) influenced Wikipedia article generation and editing such that, as the BLM movement grew, articles covering shootings of Black people increased in coverage and were generated with reduced latency [135]. Importantly, articles describing past shootings and incidents of police brutality were created and updated as articles for new events were created, reflecting how social movements make connections between events in time to form cohesive narratives [102]. More generally, Twyman et al. [135] highlight how social movements actively influence framings and reframings of minority narratives\n\nin the type of online discourse that potentially forms the data that underpins LMs.\n\nAn important caveat is that social movements which are poorly documented and which do not receive significant media attention will not be captured at all. Media coverage can fail to cover protest events and social movements [41, 96] and can distort events that challenge state power [36]. This is exemplified by media outlets that tend to ignore peaceful protest activity and instead focus on dramatic or violent events that make for good television but nearly always result in critical coverage [81]. As a result, the data underpinning LMs stands to misrepresent social movements and disproportionately align with existing regimes of power.\n\nDeveloping and shifting frames stand to be learned in incomplete ways or lost in the big-ness of data used to train large LMs — particularly if the training data isn't continually updated. Given the compute costs alone of training large LMs, it likely isn't feasible for even large corporations to fully retrain them frequently enough to keep up with the kind of language change discussed here. 
Perhaps fine-tuning approaches could be used to retrain LMs, but here again, what would be required is thoughtful curation practices to find appropriate data to capture reframings and techniques for evaluating whether such fine-tuning appropriately captures the ways in which new framings contest hegemonic representations.\n\n## 4.3 Encoding Bias\n\nIt is well established by now that large LMs exhibit various kinds of bias, including stereotypical associations [11, 12, 69, 119, 156, 157], or negative sentiment towards specific groups [61]. Furthermore, we see the effects of intersectionality [34], where BERT, ELMo, GPT and GPT-2 encode more bias against identities marginalized along more than one dimension than would be expected based on just the combination of the bias along each of the axes [54, 132]. Many of these works conclude that these issues are a reflection of training data characteristics. For instance, Hutchinson et al. find that BERT associates phrases referencing persons with disabilities with more negative sentiment words, and that gun violence, homelessness, and drug addiction are overrepresented in texts discussing mental illness [61]. Similarly, Gehman et al. show that models like GPT-3 trained with at least 570GB of data derived mostly from Common Crawl16 can generate sentences with high toxicity scores even when prompted with non-toxic sentences [53]. Their investigation of GPT-2's training data17 also finds 272K documents from unreliable news sites and 63K from banned subreddits.\n\nThese demonstrations of biases learned by LMs are extremely valuable in pointing out the potential for harm when such models are deployed, either in generating text or as components of classification systems, as explored further in §6. 
However, they do not represent a methodology that can be used to exhaustively discover all such risks, for several reasons.\n\nFirst, model auditing techniques typically rely on automated systems for measuring sentiment, toxicity, or novel metrics such as 'regard' to measure attitudes towards a specific demographic group [119]. But these systems themselves may not be reliable\n\n14Available at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/blob/master/en, accessed Jan 18, 2021\n\n15This observation is due to William Agnew.\n\n16https://commoncrawl.org/the-data/\n\n17GPT-3's training data is not openly available, but GPT-2's training data was used indirectly to construct GPT-3's [53].", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "provide a language that is called Description Logic or DL for short. One of the key features of DL is that these superclass-subclass relationships (aka subsumption relationships) can be computed automatically by a reasoner – more on this later. Figure 3.3 shows a representation of some classes containing individuals – classes are represented as ovals, like sets in Venn diagrams.\n\nIn OWL classes can be built up of descriptions that specify the conditions that must be satisfied by an individual for it to be a member of the class. How to formulate these descriptions will be explained as the tutorial progresses.", - "page_start": 9, - "page_end": 9, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "To understand what is going on you first need to understand that each SPARQL query consists of two parts. The first part at the beginning consists of several namespace prefixes. These statements consist of the prefix used for a particular namespace as well as the IRI associated with this namespace. Recall that these concepts were described in chapter 7. 
You may be wondering where all these prefixes came from since you didn't add them to your ontology. The answer is that every OWL ontology comes with a set of namespaces and prefixes that are required to define the ontology.\n\nAlso, to understand SPARQL you need to \"peak under the hood\" of OWL. So far, we have been discussing concepts in purely logical and set theoretic terms, i.e., at the semantic level. However, like any language or database there is a lower level that describes how the concepts are mapped to actual data. In a relational database the fundamental construct to represent data is a table. In OWL the fundamental construct is a triple. OWL is actually built on top of RDFS which is a language built on top of RDF. RDF (Resource Description Framework) is a language to describe graphs (in the mathematical sense of the term). I.e., to describe nodes and links.\n\nThe foundation for RDF graphs are triples consisting of a subject, predicate, and object. This results in what is called an undirected or network graph because objects can be subjects and vice versa. Whenever you define a property in OWL you are defining a predicate. An individual can be a subject or an object (or both). E.g., in our ontology Customer1 purchasedPizza AmericanaHotPizza1. In this example Customer1 is the subject, purchasedPizza is the predicate and AmericanaHotPizza1 is the object.\n\nHowever, classes and properties themselves are also represented as triples. So for example, when you create the class Pizza what Protégé does for you is to add the triple: Pizza rdf:type owl:Class to the ontology. I.e., the Pizza entity is of type (is an instance of) owl:Class. Similarly when you add NamedPizza as a subclass of Pizza, Protégé adds the triple: NamedPizza rdfs:**s**ubClassOf Pizza.\n\nHopefully, now you can make some sense of this initial query. 
The query is looking for all the entities that are the subjects of triples where the predicate is rdfs:**s**ubClassOf and the object is any other entity. The *?* before a name indicates that the name is a wildcard that can match anything that fits with the rest of the pattern. This is part of the power of SPARQL, one can match a Subject, an Object, a Predicate or even all three. Making all 3 parts of the pattern wildcards would return every triple in the graph (in this case our entire Pizza ontology) being searched. You may notice that in some cases the object is simply the name of a class while in others it is a class expression with an orange circle in front of it. This is because when defining classes using DL axioms Protégé creates anonymous classes that correspond to various DL axioms.\n\nThe SELECT part of a SPARQL query determines what data to display. The WHERE part of a query determines what to match in the query. If you want to display everything matched in the WHERE clause you can just use a * for the SELECT clause. The initial default query in this tab is set up with no knowledge of the specific ontology. I.e., it will return all the classes that are subclasses of other classes regardless of the ontology. To get information about Pizzas the first thing we need to do is to add another prefix to the beginning of the query. In our case the Pizza ontology has been set up with a mapping to the prefix pizza (you can see this in the ontology prefixes tab in the Active ontology tab discussed in chapter 7). So, add the following to the SPARQL query after the last PREFIX statement:\n\n#### PREFIX pizza: \n\nWe are almost ready to query the actual ontology. For our first query let's find all the Pizzas purchased by a Customer. 
The SPARQL code for this is:", - "page_start": 68, - "page_end": 68, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - } - ] - }, - { - "references": { - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf", - "query": "When to use an enumerated class in OWL ontologies ?", - "target_page": 46, - "target_passage": "When a property has only a few possible values it can be useful to create a class to represent those values and to explicitly define the class by listing each possible value", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### **Exercise 3: Add a Comment Annotation to Your Ontology**\n\n1. Make sure you are in the Active Ontology tab. In the view just below the Ontology IRI and Ontology Version IRI fields find the Annotations option and click on the + sign. This will bring up a menu to create a new annotation on the ontology.\n\n_____________________________________________________________________________________\n\n2. The rdfs:comment annotation should be highlighted by default. If it isn't highlighted click on it. Then type a new comment into the view to the right. Something like A tutorial ontology for the Pizza domain.\n\n_____________________________________________________________________________________\n\n3. Click OK. Your Active Ontology tab should like Figure 4.3.\n\nFigure 4.4: The Class Hierarchy View Options\n\n#### 4.1 Named Classes\n\nThe main building blocks of an OWL ontology are classes. In Protégé 5, editing of classes can be done in the Entities tab. The Entities tab has a number of sub-tabs. When you select it, the default should be the Class hierarchy view as shown in Figure 4.5. 4 All empty ontologies contains one class called owl:Thing. OWL classes are sets of individuals. The class owl:Thing is the class that represents the set containing all individuals. 
Because of this all classes are subclasses of owl:Thing.\n\n4 Each of the sub-tabs in the Entities tab also exists as its own major tab. In the tutorial we will refer to tabs like the Class hierarchy tab or Object properties tab and it is up to the user whether to access them from the Entities tab or to create them as independent tabs.", - "page_start": 13, - "page_end": 13, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "Figure 4.23 The Reasoner Inferred that Margherita and Soho Pizzas are subclasses of VegetarianPizza\n\n#### 4.14 Defining an Enumerated Class\n\nA powerful tool in the object-oriented programming (OOP) community is the concept of design patterns. The idea of a design pattern is to capture a reusable model that is at a higher level of abstraction than a specific code library. One of the first and most common design patterns was the Model-View-Controller pattern first used in Smalltalk and now almost the default standard for good user interface design. Since there are significant differences between OWL and standard OOP the many excellent books on OOP design patterns don't directly translate into OWL design patterns. Also, since the use of OWL is more recent than OOP there does not yet exist the excellent documentation of OWL patterns that the OOP community has. However, there are already many design patterns that have been documented for OWL and that can provide users with ways to save time and to standardize their designs according to best practices.\n\nOne of the most common OWL design patterns is an enumerated class. When a property has only a few possible values it can be useful to create a class to represent those values and to explicitly define the class by listing each possible value. We will show an example of such an enumerated class by creating a new", - "page_start": 44, - "page_end": 44, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "next section. 
Which option you choose for your ontology will depend on the specific requirements you have as well as the standards established by your organization or organizations that you work with.\n\nFinally, another name related concept you should be aware of is the concept of a namespace. If you have worked with most modern programming languages such as Python or Java, you are already familiar with the concept of a namespace. The concept is identical in OWL. A namespace is used to avoid naming conflicts between different ontologies. For example, you may have a class called Network in an ontology about telecommunications. You might also have a class called Network in an ontology about graph theory. The two concepts are related but are different. Just as with programming languages you use namespace prefixes to determine what specific namespace a name refers to. E.g., in this example you might have the prefix tc for the Telecom ontology and gt for the Graph Theory ontology. Thus, when you referred to the Network class for the Telecom ontology you would use tc:Network and gt:Network for the graph theory class.\n\nNote that you already have some experience with other namespaces. The OWL namespace prefix is owl and is used to refer to classes such as owl:Thing and owl:Nothing. The Resource Description Framework Schema (RDFS) is a model that OWL is built on top of and thus some properties that ontologies use such as rdfs:label leverage this namespace.\n\nIn the bottom view of the Active ontology tab there is a tab called Ontology Prefixes. This tab shows all the current namespace mappings in your ontology. There are certain concepts from OWL, RDF, RDFS, XML and XSD that are required for every ontology, so those namespaces are by default mapped in every new Protégé ontology. There is also a mapping to the empty string for whatever the namespace is for your ontology. This allows you to display and refer to entities in your ontology without entering a namespace prefix. 
If you look at that tab now you should see a row where the first column is blank, and the second column has the base IRI for your ontology. It should be the same IRI as the Ontology IRI at the top of the Active ontology tab, except it also has a # sign at the end. E.g., the Pizza tutorial developed for this tutorial has an IRI of: http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial and the row that has a blank first column in Ontology Prefixes has the IRI: http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial#.", - "page_start": 61, - "page_end": 61, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "To understand what is going on you first need to understand that each SPARQL query consists of two parts. The first part at the beginning consists of several namespace prefixes. These statements consist of the prefix used for a particular namespace as well as the IRI associated with this namespace. Recall that these concepts were described in chapter 7. You may be wondering where all these prefixes came from since you didn't add them to your ontology. The answer is that every OWL ontology comes with a set of namespaces and prefixes that are required to define the ontology.\n\nAlso, to understand SPARQL you need to \"peak under the hood\" of OWL. So far, we have been discussing concepts in purely logical and set theoretic terms, i.e., at the semantic level. However, like any language or database there is a lower level that describes how the concepts are mapped to actual data. In a relational database the fundamental construct to represent data is a table. In OWL the fundamental construct is a triple. OWL is actually built on top of RDFS which is a language built on top of RDF. RDF (Resource Description Framework) is a language to describe graphs (in the mathematical sense of the term). I.e., to describe nodes and links.\n\nThe foundation for RDF graphs are triples consisting of a subject, predicate, and object. 
This results in what is called an undirected or network graph because objects can be subjects and vice versa. Whenever you define a property in OWL you are defining a predicate. An individual can be a subject or an object (or both). E.g., in our ontology Customer1 purchasedPizza AmericanaHotPizza1. In this example Customer1 is the subject, purchasedPizza is the predicate and AmericanaHotPizza1 is the object.\n\nHowever, classes and properties themselves are also represented as triples. So for example, when you create the class Pizza what Protégé does for you is to add the triple: Pizza rdf:type owl:Class to the ontology. I.e., the Pizza entity is of type (is an instance of) owl:Class. Similarly when you add NamedPizza as a subclass of Pizza, Protégé adds the triple: NamedPizza rdfs:**s**ubClassOf Pizza.\n\nHopefully, now you can make some sense of this initial query. The query is looking for all the entities that are the subjects of triples where the predicate is rdfs:**s**ubClassOf and the object is any other entity. The *?* before a name indicates that the name is a wildcard that can match anything that fits with the rest of the pattern. This is part of the power of SPARQL, one can match a Subject, an Object, a Predicate or even all three. Making all 3 parts of the pattern wildcards would return every triple in the graph (in this case our entire Pizza ontology) being searched. You may notice that in some cases the object is simply the name of a class while in others it is a class expression with an orange circle in front of it. This is because when defining classes using DL axioms Protégé creates anonymous classes that correspond to various DL axioms.\n\nThe SELECT part of a SPARQL query determines what data to display. The WHERE part of a query determines what to match in the query. If you want to display everything matched in the WHERE clause you can just use a * for the SELECT clause. 
The initial default query in this tab is set up with no knowledge of the specific ontology. I.e., it will return all the classes that are subclasses of other classes regardless of the ontology. To get information about Pizzas the first thing we need to do is to add another prefix to the beginning of the query. In our case the Pizza ontology has been set up with a mapping to the prefix pizza (you can see this in the ontology prefixes tab in the Active ontology tab discussed in chapter 7). So, add the following to the SPARQL query after the last PREFIX statement:\n\n#### PREFIX pizza: \n\nWe are almost ready to query the actual ontology. For our first query let's find all the Pizzas purchased by a Customer. The SPARQL code for this is:", - "page_start": 68, - "page_end": 68, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "| Chapter 1 Introduction 4 |\n| --- |\n| 1.1 Licensing 4 |\n| 1.2 Conventions 4 |\n| Chapter 2 Requirements and the Protégé User Interface 6 |\n| Chapter 3 What are OWL Ontologies? 
6 |\n| 3.1 Components of OWL Ontologies 6 |\n| 3.1.1 Individuals 7 |\n| 3.1.2 Properties 8 |\n| 3.1.3 Classes 8 |\n| Chapter 4 Building an OWL Ontology 10 |\n| 4.1 Named Classes 13 |\n| 4.2 Using a Reasoner 15 |\n| 4.4 Using Create Class Hierarchy 17 |\n| 4.5 Create a PizzaTopping Hierarchy 19 |\n| 4.6 OWL Properties 22 |\n| 4.7 Inverse Properties 23 |\n| 4.8 OWL Object Property Characteristics 24 |\n| 4.8.1 Functional Properties 24 |\n| 4.8.2 Inverse Functional Properties 25 |\n| 4.8.3 Transitive Properties 25 |\n| 4.8.4 Symmetric and Asymmetric Properties 25 |\n| 4.8.5 Reflexive and Irreflexive Properties 26 |\n| 4.8.6 Reasoners Automatically Enforce Property Characteristics 26 |\n| 4.9 OWL Property Domains and Ranges 26 |\n| 4.10 Describing and Defining Classes 29 |\n| 4.10.1 Property restrictions 29 |\n| 4.10.2 Existential Restrictions 31 |\n| 4.10.3 Creating Subclasses of Pizza 33 |\n| 4.10.4 Detecting a Class that can't Have Members 37 |\n| 4.11 Primitive and Defined Classes (Necessary and Sufficient Axioms) 38 |\n| 4.12 Universal Restrictions 41 |\n| 4.13 Automated Classification and Open World Reasoning 42 |", - "page_start": 2, - "page_end": 2, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "#### 3.1.2 Properties\n\nProperties are binary relations between individuals. I.e., properties link two individuals together. For example, the property hasFriend might link the individual Biswanath to the individual Michael, or the property hasChild might link the individual Michael to the individual Oriana. Properties can have inverses. For example, the inverse of hasChild is hasParent. Properties can be limited to having a single value – i.e., to being functional. They can also be transitive or symmetric. These property characteristics are explained in detail in Section 4.8. Figure 3.2 shows a representation of some properties.\n\n> Properties are similar to properties in Object-Oriented Programming (OOP). 
However, there are important differences between properties in OWL and OOP. The most important difference is that OWL properties are first class entities that exist independent of classes. OOP developers are encouraged to read: https://www.w3.org/2001/sw/BestPractices/SE/ODSD/\n\nFigure 3.3: Representation of Classes containing Individuals\n\n#### 3.1.3 Classes\n\nOWL classes are sets that contain individuals. They are described using formal (mathematical) descriptions that rigorously define the requirements for membership of the class. For example, the class Cat would contain all the individuals that are cats in our domain of interest.2 Classes may be organized into a superclass-subclass hierarchy, which is also known as a taxonomy. However, taxonomies are often trees. I.e., each node has only one parent node. Class hierarchies in OWL are not restricted to be trees and multiple inheritance can be a powerful tool to represent data in an intuitive manner.\n\nSubclasses specialize (aka *are subsumed by*) their superclasses. For example, consider the classes Animal and Dog – Dog might be a subclass of Animal (so Animal is the superclass of Dog). This says that *All dogs are animals*, *All members of the class* Dog *are members of the class* Animal. OWL and Protégé\n\n2 Individuals can belong to more than one class and classes can have more than one superclass. Unlike OOP where multiple inheritance is typically unavailable or discouraged it is common in OWL.", - "page_start": 8, - "page_end": 8, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## Chapter 1 Introduction\n\nThis introduces Protégé 5 for creating OWL ontologies as well as various plugins. If you have questions specific to this tutorial, please feel free to email me directly: mdebellissf@gmail.com However, if you have general questions about Protégé, OWL, or plugins you should subscribe to and send an email to the User Support for Protégé and Web Protégé email list. 
This list has many people (including me) who monitor it and can contribute their knowledge to help you understand how to get the most out of this technology. To subscribe to the list, go to: https://protege.stanford.edu/support.php and click on the first orange Subscribe button. That will enable you to subscribe to the list and give you the email to send questions to.\n\nThis chapter covers licensing and describes conventions used in the tutorial. Chapter 2 covers the requirements for the tutorial and describes the Protégé user interface. Chapter 3 gives a brief overview of the OWL ontology language. Chapter 4 focuses on building an OWL ontology with classes and object properties. Chapter 4 also describes using a Description Logic Reasoner to check the consistency of the ontology and automatically compute the ontology class hierarchy.\n\nChapter 5 describes data properties. Chapter 6 describes design patterns and shows one design pattern: adding an order to an enumerated class. Chapter 7 describes the various concepts related to the name of an OWL entity.\n\nChapter 8 introduces an extended version of the Pizza tutorial developed in chapters 1-7. This ontology has a small number of instances and property values already created which can be used to illustrate the tools in the later chapters for writing rules, doing queries, and defining constraints.\n\nChapter 9 describes two tools for doing queries: Description Logic queries and SPARQL queries. Chapter 10 introduces the Semantic Web Rule Language (SWRL) and walks you through creating SWRL and SQWRL rules. Chapter 11 introduces the Shapes Constraint Language (SHACL) and discusses the difference between defining logical axioms in Description Logic and data integrity constraints in SHACL. 
Chapter 12 has some concluding thoughts and opinions and Chapter 13 provides a bibliography.\n\n#### 1.1 Licensing\n\nThis document is freely available under the Creative Commons Attribution-ShareAlike 4.0 International Public License. I typically distribute it as a PDF but if you want to make your own version send me an email and I will send you the Word version. For details on licensing see: https://creativecommons.org/licenses/by-sa/4.0/legalcode\n\n#### 1.2 Conventions\n\nClass, property, rule, and individual names are written in Consolas font like this. The term used for any such construct in Protégé and in this document is an *Entity*. Individuals and classes can also be referred to as objects.\n\nNames for user interface tabs, views, menu selections, buttons, and text entry are highlighted like this.\n\nAny time you see highlighted text such as File>Preferences or OK or PizzaTopping it refers to something that you should or optionally could view or enter into the user interface. If you ever aren't sure what to do to accomplish some task look for the highlighted text. Often, as with PizzaTopping the text you enter into a field in the Protégé UI will be the name of a class, property, etc. In those cases, where the", - "page_start": 4, - "page_end": 4, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "#### 4.10.4 Detecting a Class that can't Have Members\n\nNext, we are going to use the reasoner to detect a class with a definition that means it can never have any members. In the current version of Protégé when the reasoner detects an inconsistency or problem on some operating systems the UI can occasionally lock up and be hard to use. So to make sure you don't lose any of your work save your ontology using File>Save.\n\nSometimes it can be useful to create a class that we think should be impossible to instantiate to make sure the ontology is modeled as we think it is. 
Such a class is called a Probe Class.\n\n_____________________________________________________________________________________\n\n#### **Exercise 19: Add a Probe Class called ProbeInconsistentTopping**\n\n1. Select the class CheeseTopping from the class hierarchy.\n\n2. Create a subclass of CheeseTopping called ProbeInconsistentTopping.\n\n3. Click on the Add icon (+) next to the SubClass Of field in the Description view for ProbeInconsistentTopping.\n\n4. Select the Class hierarchy tab from the dialogue that pops up. This will bring up a small view that looks like the class hierarchy tab you have been using to add new classes. Use this to navigate to and select the class VegetableTopping. Click on OK.\n\n5. Make sure to save your current ontology file. Now run the reasoner. You should see that ProbeInconsistentTopping is now highlighted in red indicating it is inconsistent.\n\n6. Click on ProbeInconsistentTopping to see why it is highlighted in red. Notice that at the top of the Description view you should now see owl:Nothing under the Equivalent To field. This means that the probe class is equivalent to owl:Nothing. The owl:Nothing class is the opposite of owl:Thing. Whereas all individuals are instances of owl:Thing, no individual can ever be an instance of owl:Nothing. The owl:Nothing class is equivalent to the empty set in set theory.\n\n7. There should be a ? icon just to the right of owl:Nothing. As with any inference of the reasoner it is possible to click on the new information and generate an explanation for it. Do that now, click on the ? icon. This should generate a new window that looks like figure 4.20. The explanation is that ProbeInconsistentTopping is a subclass of CheeseTopping and VegetableTopping but those two classes are disjoint.\n\n8. Click OK to dismiss the window. 
Delete the class ProbeInconsistentTopping by selecting it and then clicking on the delete class icon at the top of the classes view (see figure 4.4).\n\n_____________________________________________________________________________________\n\n9. Synchronize the reasoner.", - "page_start": 37, - "page_end": 37, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## Chapter 9 Queries: Description Logic and SPARQL\n\nNow that we have some individuals in our ontology, we can do some interesting queries. There are several tools for doing queries in Protégé.\n\n#### 9.1 Description Logic Queries\n\nTo start with the most straight forward one based on what you have already learned are Description Logic (DL) queries. These are essentially the same kind of statements you have been using to define classes. However, in addition to using such statements to define a class you can use it as a query.\n\n_____________________________________________________________________________________\n\n#### **Exercise 33: Try Some Description Logic Queries**\n\n1. To begin with navigate to the DL Query tab. If it doesn't exist create it using: Window>Tabs>DL Query.\n\n2. At the top right of this tab you should see a view that says DL query: and below it Query (class expression).\n\n3 You can enter any DL statement you want in this box and then see all the entities that are subclasses, superclasses, and instances of it. As an example, enter: Customer and purchasedPizza some (hasTopping some (hasSpiciness value Hot)). I.e., all Customers who have purchased a Pizza that hasSpiciness Hot. At first you may not see anything but don't worry there is one more step.\n\n4. Look at the check boxes on the right under Query for. Check Superclasses, Subclasses (although it should already be checked by default) and Instances. Now your UI should look like figure 9.1. You may notice that owl:Nothing shows up as a subclass. Don't worry that is actually expected. 
Remember that owl:Nothing is the empty set and the empty set is a subset of every set (including itself) so just as owl:Thing is a superclass of every class owl:Nothing is a subclass of every class. If you don't want to see owl:Nothing you can uncheck the box toward the bottom right that says Display owl:Nothing.\n\n5. Try some additional DL queries such as: hasTopping some (hasSpiciness value Hot) and VegetarianPizza and (hasTopping some (hasSpiciness some (isMilderThan value Hot))). Note that with this last query you are taking advantage of the transitive order you defined for the instances of the Spiciness class in chapter 6.\n\n6. You can also do queries for strings in the names of your entities. For example, first do a query simply with Pizza in the query window. Then type in Hot in the Name contains field. This should give you all the classes and individuals with *Hot* in their name.\n\n_____________________________________________________________________________________", - "page_start": 66, - "page_end": 66, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "I.e., it might be the case that we never get the data to satisfy every integrity constraint which would mean the reasoner is never of any use except to tell us that the ontology is not consistent.\n\nThus, SHACL provides a way to define data integrity constraints that overlap to some degree with what can be defined in OWL and SWRL. For example, both can define the number of values allowed for a specific property. E.g., that each instance of Employee must have one and only one social security number (ssn). If this were defined as a DL axiom, then the axiom would never fire for employees that had no ssn because of the OWA. On the other hand, if an Employee accidentally had 2 ssn values then the entire ontology would be inconsistent until one value was removed. 
SHACL on the other hand can handle both these examples and rather than making the entire ontology inconsistent it simply logs warnings at various levels of severity.\n\n#### 11.3 Basic SHACL Concepts\n\nTo understand SHACL recall that the language underlying OWL is RDF which describes graphs as triples of the form: Subject Predicate Object. SHACL also works at the level of RDF because some developers may want to simply use that lower level for reasons of efficiency. Thus, RDF can validate an RDF graph as well as an OWL ontology. Fundamentally, SHACL consists of two components:\n\n- 1. An RDF vocabulary for defining data constraints on RDF graphs (which includes OWL since an OWL ontology is an RDF graph).\n- 2. A reasoner for applying the constraints defined in 1 to a specified data graph such as the Pizza ontology.\n\nOne of the most important classes in 1 is a SHACL Shape. An instance of the SHACL Shape class consists of a set of Targets and Constraints. A Target defines which nodes in the RDF graph that the data constraints apply to. For OWL ontologies this is typically the name of a class which indicates that the constraints apply to all instances of that class. The Constraints define the specific property for the constraint as well as the actual constraints such as the minimum or maximum number of values and the datatype. In the following example, a Target is the Employee class in the Pizza ontology. An example constraint is that the ssn property must have exactly one value. Another example constraint is that the format of the ssn value must be a string of the form: \"NNN-NN-NNNN\" where each N must be an integer. For more on SHACL see the references in the bibliography.\n\n#### 11.4 The Protégé SHACL Plug-In\n\nTo start go to Windows>Tabs and see if you have SHACL Editor as an option. If you don't then go to File>Check for plugins and select the SHACL4Protege Constraint Validator. 
You need to restart Protégé to see the new plugin so save your work and then quit and start Protégé and load the Pizza ontology with data.\n\nBecause editing SHACL is a bit more complex for this version of the tutorial we are only going to view some already written SHACL constraints and see how the validator processes them rather than writing additional constraints. First download the PizzaShapes.txt file to your local hard drive. This file can be found at: https://tinyurl.com/pizzatshapes Once you have downloaded the file open the SHACL Editor: Window>Tabs>SHACL Editor.\n\nYou will see an example shapes file in the editor when it opens but that isn't the shapes file you are looking for. From the editor click on the Open button at the top of the tab and navigate to the PizzaShapes.txt file you downloaded.", - "page_start": 77, - "page_end": 77, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - } - ] - }, - { - "references": { - "source_file": "sg246915.pdf", - "query": "Howcan I specify to Content Manager OnDemand to store the data on the server on which the program runs ?", - "target_page": 121, - "target_passage": "Local: Content Manager OnDemand stores data in a primary storage node on the server on which the data loading program runs", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Figure 1-1 Content Manager OnDemand system overview\n\nContent Manager OnDemand Client programs provide authorized users with high-speed access to the archived data that runs on the user devices (workstations) that are attached to the network and communicate with the Content Manager OnDemand servers.\n\nA Content Manager OnDemand server consists of multiple components that can be installed on a single system or multiple systems. In all cases, the installation appears to the users as a single server. 
The installation and is administered by the Content Manager OnDemand administrator as a single system.\n\nThe Content Manager OnDemand server includes the following components:\n\n- - A single library server: The library server manages a database that contains the information about the users of the system, and the reports and data that are stored on the system.\n- - One or more object servers: The object servers manage the data on disk or tape storage devices.\n- - One or more archive servers: The archive server stores the archived data objects. Depending on the operating system, the archive servers might be IBM Tivoli® Storage Manager, object access method (OAM), or Archive Storage Manager (ASM).\n\nThe library server and the object server can be packaged separately or as a single executable file.\n\n#### **Content Manager OnDemand Client programs**\n\nContent Manager OnDemand Client programs operate on various environments, including personal computers that are running on Windows, web browsers, and mobile devices. By using the client program, users can search for and retrieve reports that are stored on the system. Specifically, users can construct queries and search for reports, retrieve documents from Content Manager OnDemand, view, print, and fax copies or pages of documents, and attach electronic notes to the pages of a document.", - "page_start": 28, - "page_end": 28, - "source_file": "sg246915.pdf" - }, - { - "text": "# **2.3 Implementing a Content Manager OnDemand instance on a multiplatform UNIX environment**\n\nIn this section, we describe how to set up a single instance in a Content Manager OnDemand for a multiplatform UNIX environment. Always refer to the product documentation of your release for the specific steps to follow.\n\n# **2.3.1 Defining a single instance**\n\nBy default, the initial instance on any library server is named archive. Creating a single instance can be summarized by the following steps:\n\n- 1. Creating a user\n- 2. 
Creating a DB2 instance\n- 3. Installing IBM Global Security Kit\n- 4. Setting up Secure Sockets Layer (SSL)\n- 5. Storing user IDs and passwords in a stash file\n- 6. Installing and configuring Tivoli Storage Manager\n- 7. Configuring the instance\n- 8. Creating a Content Manager OnDemand database\n- 9. Initializing the system log and system load facility\n\n# **Creating a user**\n\nNew installations (instances) of Content Manager OnDemand can be configured to run under a user other than the root user. If you plan to run an instance under a user other than root, complete the following steps:\n\n- 1. Create the user for the Content Manager OnDemand instance owner that is a member of the database owners group.\n- 2. Give the user administrator authority to the database.\n- 3. Set permissions for the cache storage file systems.\n- 4. Set permissions for the Content Manager OnDemand configuration and script files.\n- 5. Give the instance owner permission to write to the system console.\n- 6. Specify the instance owner in the ARS.INI file.\n\nIf you plan to run a distributed library and object server system, with one or more object servers on different workstations or nodes than the library server, you must also configure Content Manager OnDemand on the object servers.\n\nTo configure Content Manager OnDemand on the object servers, complete the following steps:\n\n- 1. Create a group and user for the Content Manager OnDemand instance owner.\n- 2. Give ownership of the cache storage file systems that are listed in the ARS.CACHE file to the group and user for the Content Manager OnDemand instance owner.", - "page_start": 42, - "page_end": 42, - "source_file": "sg246915.pdf" - }, - { - "text": "# **6.5 Data security**\n\nAccess to the Content Manager OnDemand data tables is secured through various methods. These methods include a secure data model, user authentication, SQL Query support, annotation security, and securing access to the Content Manager OnDemand commands. 
These methods are described in further detail in this section.\n\n# **6.5.1 Content Manager OnDemand object-owner model**\n\nContent Manager OnDemand internal security is based on an object-owner model, which is illustrated in Figure 6-6. Details about the object-owner model are in the IBM Content Manager OnDemand for Multiplatforms, V9.5, Administration Guide, SC19-3352. In this context, a Content Manager OnDemand instance is an implementation of the library server, one or more object servers, the data access, and the storage model. The data access and storage are implemented in the form of objects. The following objects are all Content Manager OnDemand objects:\n\n- -Users\n- -Groups\n- -Application groups\n- -Folders\n- -Cabinets\n- -Applications\n- -Holds\n- -Storage set\n- -Printers\n\nFigure 6-6 Content Manager OnDemand internal security\n\nThe Content Manager OnDemand object-owner model design handles the following situations:\n\n- - A single system administrator to control one or more Content Manager OnDemand instances through a single Administrator Client interface.\n- - Flexibility to create user administrators who manage users and groups for a specific Content Manager OnDemand instance.\n- - Flexibility to create report administrators who manage application groups, folders, and cabinets for a specific instance.", - "page_start": 161, - "page_end": 161, - "source_file": "sg246915.pdf" - }, - { - "text": "# **3.1 Report administration**\n\nReport design and definition are key to a successful implementation of a Content Manager OnDemand system. Knowledge of the data that will be indexed, loaded, and retrieved, with knowledge of Content Manager OnDemand preferred practices, results in the most efficient and easy-to-use system possible. In this section, we consider the processes that are followed when you define a Content Manager OnDemand report. 
We present hints and tips to help in the design and implementation process.\n\nThe system components that are required for creating, retrieving, and viewing a Content Manager OnDemand report are a storage set, an application group, an application, and a folder. Optionally, cabinets might be used to organize and simplify folder access. These elements, in combination, allow the Content Manager OnDemand administrator to define and create a report definition that can then be used to index and load data into Content Manager OnDemand. Figure 3-1 illustrates the relationship of these elements in a typical Content Manager OnDemand system.\n\nFigure 3-1 Content Manager OnDemand system components relationship\n\nTo help you better understand how to perform report administration, we use the example company that is mentioned in 1.2.1, \"Background information of an example company\" on page 6 with the Content Manager OnDemand Administrator Client running on Windows to create the required system components. We use the monthly credit card statements that are generated by AFinancial Co in our example. These statements are stored in a single application group in Content Manager OnDemand.\n\n# **3.1.1 Storage sets**\n\nWhen you define a report, the first component to create is a storage set if one does not exist. A *storage set* is a named collection of primary storage nodes that support application groups with similar archive storage management requirements.", - "page_start": 69, - "page_end": 69, - "source_file": "sg246915.pdf" - }, - { - "text": "# **2**\n\n# **Chapter 2. 
Setting up a Content Manager OnDemand instance**\n\nThis chapter provides guidelines for implementing Content Manager OnDemand as a single instance.\n\nIn this chapter, we cover the following topics:\n\n- -Introduction\n- -Architecture and platform\n- - Implementing a Content Manager OnDemand instance on a multiplatform UNIX environment\n- -Implementing a Content Manager OnDemand instance on IBM i\n- -Implementing a Content Manager OnDemand instance on z/OS", - "page_start": 38, - "page_end": 38, - "source_file": "sg246915.pdf" - }, - { - "text": "- Manages the (optional) Report Distribution System\n- Manages the \"interface\" to the (optional) Full Text Index system\n- Performs user authentication through internal security or external security System Authorization Facility (SAF) calls\n- Performs logging\n- - Object server:\n\t- Provides the repository for Content Manager OnDemand data archives\n\t- Stores archive storage policy information\n\t- Manages the retention of Content Manager OnDemand data archives\n\t- Controls the transition of Content Manager OnDemand archives\n\t- Manages the expiration of Content Manager OnDemand archives\n\n# **2.2.3 Choosing a platform**\n\nA Content Manager OnDemand server can run on many different operating systems and hardware environments. It can be set up to run on a workstation and can scale up to an IBM z™ Systems complex. 
The following factors need to be part of the decision to implement Content Manager OnDemand on one platform versus another platform:\n\n- -Existing hardware platforms\n- -Future hardware requirements (standardization, consolidation, and others)\n- -Existing personnel and skill set\n- -Current workload (number of users, quantity of data, and others)\n- -Future workload requirements (number of users, quantity of data, and others)\n- -Interfacing with other systems (software and data)\n- - Vendor's ability to support the environment (hardware, software, and users over any geographic extent)\n\n**Default installation directory paths:** The default installation directory path names changed for Content Manager OnDemand for Multiplatforms Server. The default installation paths for Content Manager OnDemand 9.5 are listed:\n\n- - /opt/IBM/ondemand/V9.5 for AIX and Sun (HP is no longer a supported platform in V9.5.)\n- -/opt/ibm/ondemand/V9.5 for Linux and Linux on IBM z Systems™\n- -C:\\Program Files\\IBM\\OnDemand\\V9.5 for Microsoft Windows\n- -/usr/lpp/ars/V9R5M0 for z/OS\n- -/QIBM/ProdData/OnDemand for IBM i\n\nYou can install Content Manager OnDemand to the default path or specify a different path. Because this book describes all Content Manager OnDemand platforms, you see various interchangeable references to these paths. Ensure that you check your own installation for the path name that is implemented and interpret the paths that are identified in the manual.\n\nStarting with version 9.5, the installation can be performed by a non-root user on AIX. 
When installed as a non-root user, the installation path is fixed under the home directory of the user:\n\n- -$HOME.$HOME/IBM/ondemand/V9.5 for AIX and Sun\n- -$HOME/ibm/ondemand/V9.5 for Linux and Linux on IBM z Systems", - "page_start": 41, - "page_end": 41, - "source_file": "sg246915.pdf" - }, - { - "text": "# **17.2 Administration of Content Federation Services for Content Manager OnDemand for Enterprise Records**\n\nConfigure Content Manager OnDemand for Content Federation Services to declare records by using Enterprise Records. You must disable expiration processes by the storage manager so that it cannot expire data. You must also convert application groups with an expiration type of DOCUMENT, SEGMENT, or STORAGE MANAGER to an expiration type of LOAD.\n\nTo configure Content Federation Services for Content Manager OnDemand, you must perform the following tasks:\n\n- -Enable Content Federation Services for Content Manager OnDemand.\n- -Identify the application groups where Content Federation will be enabled.\n- -Specify the application group field.\n- -Enable Content Federation permissions for the application group.\n- - Federate document metadata to Content Federation Services for Content Manager OnDemand.\n\nThese items are discussed in more detail in the following sections.\n\n# **17.2.1 Enabling Content Federation Services for Content Manager OnDemand**\n\nAll of the steps in this section assume that IBM FileNet P8 and FileNet Content Federation Services are installed correctly.\n\nIn this section, we describe the components in Content Manager OnDemand to enable the federation capabilities to allow record declaration in Enterprise Records. 
We assume that you are familiar with Content Manager OnDemand administration, so detailed steps are not provided in this chapter.\n\nFor more information about the installation and configuration of FileNet P8 and FileNet Content Federation Services, see Federated Content Management: Accessing Content from Disparate Repositories with IBM Content Federation Services and IBM Content Integrator, SG24-7742.\n\nTo use IBM FileNet P8 Content Federation Services for Content Manager OnDemand, you must enable the feature in Content Manager OnDemand by modifying the ars.cfg file and adding the following line:\n\nARS_SUPPORT_CFSOD=1\n\nIn Content Manager OnDemand for Windows, you can enable IBM FileNet P8 Content Federation Services for Content Manager OnDemand by using the Content Manager OnDemand Administrator Client Configurator. Figure 17-1 on page 368 shows the Content Manager OnDemand configuration setup for Content Federation Services for Content Manager OnDemand.", - "page_start": 390, - "page_end": 390, - "source_file": "sg246915.pdf" - }, - { - "text": "# **1**\n\n# **Chapter 1. Overview and concepts**\n\nIn this chapter, we provide an overview of the IBM Content Manager OnDemand (Content Manager OnDemand) system. We describe how Content Manager OnDemand manages reports and index data. We also provide information to help you better understand how Content Manager OnDemand works.\n\nIn this chapter, we cover the following topics:\n\n- -Overview of Content Manager OnDemand\n- -Content Manager OnDemand concepts\n- -Content Manager OnDemand server and its components", - "page_start": 26, - "page_end": 26, - "source_file": "sg246915.pdf" - }, - { - "text": "- -Builds the Content Manager OnDemand system tables and indexes.\n- -Binds the database to Content Manager OnDemand.\n\nSign on to the user account that you assigned as the owner of the Content Manager OnDemand instance (in the ARS.INI file). 
Run **arsdb** with the following options:\n\n/opt/IBM/ondemand/V9.5/bin/arsdb -I ondmd950 -cv\n\nIn our scenario, -I ondmd950 is the Content Manager OnDemand instance.\n\nAfter this command completes, you can log in to DB2 and connect to the new instance. List all of the tables by running the following command:\n\ndb2 list tables for all\n\n## **Initializing the system log and system load facility**\n\nAfter you create the database, you can initialize the system log by running the following command:\n\n/opt/IBM/ondemand/V9.5/bin/arssyscr -I ondmd950 -l\n\n-I ondmd950 is the new Content Manager OnDemand instance.\n\nContent Manager OnDemand can track loading activity with the system load logging facility. Content Manager OnDemand stores these load messages in the system load log. You can initialize the system load log by running the following command:\n\n/opt/IBM/ondemand/V9.5/bin/arssyscr -I ondmd950 -a\n\nAgain, -I ondmd950 is the new Content Manager OnDemand instance.\n\nThe **arssyscr** program creates the application groups, applications, and folders that are required by the system logging facility.\n\n**Note:** The **arsdb** and **arssyscr** commands are in /opt/IBM/ondemand/V9.5/bin for AIX, HP-UX, and Sun Solaris, and in /opt/ibm/ondemand/V9.5/bin for Linux.\n\n# **2.3.2 Starting and connecting to the new instance**\n\nAfter the instance is created, you can start the new instance and connect to it.\n\n#### **Starting and stopping arssockd**\n\nTo start the instance manually, run the following command and include the instance name after the **arssockd** command:\n\n/opt/IBM/ondemand/V9.5/bin/arssockd -I ondmd950 -Sv\n\nRun the **ps** command to verify that the instance is started:\n\nps -ef | grep ars\n\nIf more than one instance is running, you see more than one **arssockd** process in the display. 
The instance other than the default instance archive has a -instancename after **arssockd** for identification:\n\nOnDemand95 65864128 1 0 Jun 11 - 0:00 arssockd-ondmd950:", - "page_start": 51, - "page_end": 51, - "source_file": "sg246915.pdf" - }, - { - "text": "# **15.3.1 Component overview**\n\nFTS in Content Manager OnDemand consists of the FTS Server, the Full Text Search Exporter (FTS Exporter), and a Content Manager OnDemand server that uses both components to provide FTS to the users.\n\n# **Full Text Search Server**\n\nThe FTS feature in Content Manager OnDemand is a separately licensed component that must be downloaded and installed. It contains the FTS Server. Full text Indexing and Search functionality can be implemented on any Content Manager OnDemand platform (z/OS, IBM i, and Multiplatform). The FTS Server itself runs only on Multiplatforms systems. The FTS Server is typically installed on a different system than the Content Manager OnDemand server because of the difference in workload types and the amount of processing that is required for high performance and throughput.\n\n### **Full Text Search Exporter**\n\nThe FTS Exporter is a Java application, which is available as a JAR file (ODFTIExporter.jar), that comes with the Content Manager OnDemand server installation (starting with version 9.0). The ODFTIExporter.jar file is in the jars subdirectory.\n\nThe FTS Exporter relies on the following components:\n\n- - Java Database Connectivity (JDBC) database drivers for your Content Manager OnDemand database (DB2, Oracle, or SQL Server on Windows).\n- - Java Runtime Environment (JRE) (Java 1.7.0) or later can be used to run the ODFTIExporter.jar file.\n\nThe FTS Exporter communicates with the Content Manager OnDemand server to retrieve the documents that are sent to the FTS Server. 
It uses a JDBC connection to the Content Manager OnDemand database to read the arsftiwork table.\n\nThe FTS Exporter can be run on the Content Manager OnDemand server system or from any other system that is connected by TCP/IP. The FTS Exporter does not require the existence of the Content Manager OnDemand database on the same system. The FTS Exporter obtains the instance configuration from the Content Manager OnDemand server.\n\nFor more information, see 15.4.2, \"Configuration of the Full Text Search Exporter\" on page 344.\n\n**Note:** Ensure that you apply the latest Content Manager OnDemand version and fix pack to the Content Manager OnDemand server and the FTS Server component before you use FTS.\n\n# **15.3.2 Installing the FTS Server**\n\nInstall the FTS Server on a Multiplatforms system by running the FTS Server setup program. Use the command-line parameter **-i console** for a console mode setup.\n\nThe setup creates a set of directories under the FTS_Home (installation target) directory. Most of these directories are not modified after the installation.", - "page_start": 362, - "page_end": 362, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "sg246915.pdf", - "query": "Does the XML indexer of Content Manager OnDemand support large objects ?", - "target_page": 188, - "target_passage": "No", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# **6.5 Data security**\n\nAccess to the Content Manager OnDemand data tables is secured through various methods. These methods include a secure data model, user authentication, SQL Query support, annotation security, and securing access to the Content Manager OnDemand commands. These methods are described in further detail in this section.\n\n# **6.5.1 Content Manager OnDemand object-owner model**\n\nContent Manager OnDemand internal security is based on an object-owner model, which is illustrated in Figure 6-6. 
Details about the object-owner model are in the IBM Content Manager OnDemand for Multiplatforms, V9.5, Administration Guide, SC19-3352. In this context, a Content Manager OnDemand instance is an implementation of the library server, one or more object servers, the data access, and the storage model. The data access and storage are implemented in the form of objects. The following objects are all Content Manager OnDemand objects:\n\n- -Users\n- -Groups\n- -Application groups\n- -Folders\n- -Cabinets\n- -Applications\n- -Holds\n- -Storage set\n- -Printers\n\nFigure 6-6 Content Manager OnDemand internal security\n\nThe Content Manager OnDemand object-owner model design handles the following situations:\n\n- - A single system administrator to control one or more Content Manager OnDemand instances through a single Administrator Client interface.\n- - Flexibility to create user administrators who manage users and groups for a specific Content Manager OnDemand instance.\n- - Flexibility to create report administrators who manage application groups, folders, and cabinets for a specific instance.", - "page_start": 161, - "page_end": 161, - "source_file": "sg246915.pdf" - }, - { - "text": "# **1**\n\n# **Chapter 1. Overview and concepts**\n\nIn this chapter, we provide an overview of the IBM Content Manager OnDemand (Content Manager OnDemand) system. We describe how Content Manager OnDemand manages reports and index data. We also provide information to help you better understand how Content Manager OnDemand works.\n\nIn this chapter, we cover the following topics:\n\n- -Overview of Content Manager OnDemand\n- -Content Manager OnDemand concepts\n- -Content Manager OnDemand server and its components", - "page_start": 26, - "page_end": 26, - "source_file": "sg246915.pdf" - }, - { - "text": "Special thanks to the following people for their content contribution:\n\n**Ben Boltz** is a Senior Software Engineer. He has 32 years of experience in the Software Industry. 
He has worked on Content Manager OnDemand for Multiplatforms for over 20 years.\n\n**Darrell Bryant** joined IBM as a manufacturing engineer and worked as a Systems Engineer who specialized in S/36 and AS/400 systems. In 2000, Darrell joined the OnDemand team. He has performed a mix of activities, including services, education, support, and testing. Darrell is now the lead tester for OnDemand for i. He also develops and teaches workshops to clients and partners. He is the editor of the OnDemand Newsletter.\n\n**Nelson Chen** is a Software Developer with Content Manager OnDemand. He has over 30 years of experience in software development, among them 27 years at IBM and last 20 years in OnDemand. His areas of expertise include ArsXML, Install, and Configurator.\n\n**Trang Kim Duong** is a Software Developer with Content Manager OnDemand. Among her 17 years of experience in software development, the last 11 years were in Content Manager OnDemand. Trang's areas of expertise include workflow for Space Vehicle Design, Content Manager OnDemand Report Distribution, Exporter utility for Content Federation Services for Content Manager OnDemand (CFS-CMOD), and the Content Manager OnDemand back-end database component.\n\n**Hubert Hwang** is a Software Developer with Content Manager OnDemand. He is a Certified Solutions Expert for Content Manager OnDemand with over 10 years of experience with the product. His areas of expertise include the Content Manager OnDemand Web Enablement Kit Java application programming interfaces (APIs), Content Navigator, and software test automation. He has extensive experience troubleshooting all aspects of the product. He has authored over 200 technotes on topics, such as migration, data collection, and troubleshooting guides, for Content Manager OnDemand.\n\n**Vicki Miller** is a Senior Certified Client Technical Professional at IBM, working in the technology industry for 33 years with a focus on Enterprise Content Manager (ECM) since 1999. 
Her area of expertise in the realm of ECM is focused on solution sales consulting and technical account leadership that revolves around the management, processing, and analysis of any type of electronic content to help organizations optimize and protect their business. Vicki has spoken at IBM conferences on critical ECM topics, contributed to the development of technical publications, and led groups within IBM and client organizations to drive the enhancement ECM solutions and products.\n\n**Paula Muir** is a Software Developer with Content Manager OnDemand for Multiplatforms in Boulder, Colorado. Her areas of expertise include indexing and loading data, and AFP and PDF architecture.\n\n**Nancy O'Brian** started at IBM as an applications programmer, then transferred to a branch office where she performed her first Content Manager OnDemand (then known as R/DARS) implementation. After many more implementation service engagements, she joined the Content Manager OnDemand development team and continued to perform implementation services, training, support, testing, and technical writing. She currently focuses primarily on technical writing and testing.\n\n**Sandi Pond** is a Software Developer with Content Manager OnDemand for Multiplatforms. She has 17 years of experience with Content Manager OnDemand, working in various areas of the development team. Her area of expertise is the OnDemand Web Enablement Kit (ODWEK).\n\n**Debbie Wagner** is a Senior Software Engineer at IBM and has over 22 years of experience in content management, specifically, Content Manager OnDemand for Multiplatforms. 
Her areas", - "page_start": 18, - "page_end": 18, - "source_file": "sg246915.pdf" - }, - { - "text": "# **14.1 Introduction to Content Manager OnDemand Distribution Facility**\n\nBefore Content Manager OnDemand version 9.5, two report distribution components were available:\n\n- -OnDemand Distribution Facility for z/OS\n- -Report Distribution Facility for Content Manager OnDemand for Multiplatforms\n\nBoth of these components contained certain strengths and weaknesses. In V9.5, the strengths of both of these components were merged into a single component named OnDemand Distribution Facility (ODF), which offered the following advantages:\n\n- -It runs on all Content Manager OnDemand platforms.\n- - It can run on a separate platform from where the Content Manager OnDemand server is installed.\n- -Its operation can be monitored through a new graphical monitor, the OnDemand Monitor.\n- - It includes transform support where Content Manager OnDemand can transform content from one data type to another data type before the content is sent as part of an ODF distribution.\n\nThis chapter describes ODF V9.5. For any new installations (on z/OS or AIX) before version 9.5 of Content Manager OnDemand, we suggest that you install ODF.\n\nFigure 14-1 shows the evolution and merger of ODF 9.5 from its predecessors ODF9.0 and Report Distribution System (RDF) 9.0.\n\nFigure 14-1 Evolution of ODF\n\nWhen you load documents into Content Manager OnDemand, you might need to print these documents or send them to various people in your organization.\n\nContent Manager OnDemand automates the process of sending the documents that are loaded into Content Manager OnDemand to print (or the JES spool), a file (or a z/OS dataset), to a recipient as an email attachment, or to a recipient as an email notification.", - "page_start": 339, - "page_end": 339, - "source_file": "sg246915.pdf" - }, - { - "text": "In XML, the definition and syntax of the markup language are defined in a *schema file*. 
For the Content Manager OnDemand XML batch program, the schema file is called ondemand.xsd. It contains the definitions for the Content Manager OnDemand objects: users, groups, applications, application groups, storage sets, folders, printers, and others. Each Content Manager OnDemand object definition contains one or more child objects. For example, a user object has a child object for permissions, and a group object has a child object for users in the group. The schema file (ondemand.xsd) must not be changed in any way by the user.\n\nThe *input XML file* for the XML batch program is parsed to ensure that it is valid according to the schema file. Each object within the file is examined to ensure that the attributes are valid according to the object type. The XML batch program generates XML when Content Manager OnDemand objects are exported. The XML that is generated can be used as an input for the subsequent **arsxml** command.\n\nExample 3-1 shows a sample of the file exportusers.xml from the XML samples directory. You can change the names of the users to the users that you want to export.\n\nExample 3-1 Sample XML input file for exporting users\n\n```\n\n\n \n \n \n \n \n\n```\nYou can export objects by running **arsxml export**. The following command exports the users that are listed in the exportuser.xml file, from the server odserver1, to an output file named users.xml:\n\n```\narsxml export -u oduser1 -p /my/stash/pwfile -h odserver1 -i exportusers.xml -o \nusers.xml -v\n```\nYou can import objects by running **arsxml add**. The following command imports the users from the users.xml file (which is generated from the previous command) to server odserver2:\n\narsxml add -u oduser2 -p /my/stash/pwfile -h odserver2 -i users.xml -v\n\nYou can delete objects by running **arsxml delete**. 
The following command deletes the users from odserver2, based on the users that are listed in the users.xml file:\n\narsxml delete -u oduser2 -p /my/stash/pwfile -h odserver2 -i users.xml -v\n\nFor deletion, you are prompted before each object in the XML is deleted, unless the **-x** parameter is used.", - "page_start": 96, - "page_end": 96, - "source_file": "sg246915.pdf" - }, - { - "text": "# **15.1 Introduction to full text search in Content Manager OnDemand**\n\nContent Manager OnDemand users primarily search on the metadata (extracted index values) that is associated with documents. By using FTS, you can intelligently search through actual document content. To enable FTS, the documents are first parsed and an index is built. This index can then be queried by a full text engine.\n\nThe FTS feature in Content Manager OnDemand comes with a new server, the Full Text Search Server (FTS Server), which handles the text extraction, indexing, and searching of the indexed data. This new server offloads the processing of full text data to a machine other than your Content Manager OnDemand library and object servers.\n\nThe full text engine is the same search services engine that is used by other IBM products, such as DB2 or IBM FileNet P8. It is based on the Lucene engine and allows advanced and flexible queries. Users can perform wildcard searches, fuzzy (or similar) searches, proximity searches, Boolean searches, and other complex queries.\n\nThe full text feature can handle many formats, including Microsoft Office documents, XML files, and typical Content Manager OnDemand formats, such as AFP, Line Data, and Adobe Portable Document File (PDF).\n\nThe FTS feature supports full text indexing of both new and existing data. For new data, the FTS Server is configured to index the newly loaded reports by using the Administrator Client. 
For existing data, indexing is invoked by using the Content Manager OnDemand command-line utilities or the Content Manager OnDemand Web Enablement Kit (ODWEK) Java application programming interface (API).\n\nFTS is enabled through the Content Manager OnDemand folder and allows all clients to take advantage of full text queries after the server configuration is complete. Several new Content Manager OnDemand folder field types are defined in support of FTS. Search score, highlight, and summary are returned, aiding the user in determining whether the document is a good match.\n\n**Note:** Before the release of the FTS option in Content Manager OnDemand, a document content-based search was possible by using the server-based text search functionality. However, this functionality is limited to AFP, Line, SCS, and PDF documents. It does not use an index, but instead the server retrieves the documents and then scans those documents for the index values. This method limits the capabilities of the functions to exact matches of a query string and might cause workload problems on the Content Manager OnDemand server. FTS eliminates these issues and limitations by introducing new processing components.\n\n# **15.2 Full text search architecture in Content Manager OnDemand**\n\nThe process of full text indexing can be lengthy in terms of time and processor consumption. 
Therefore, an integration architecture, which decouples the full text engine from the Content Manager OnDemand server and keeps the different workloads separate, is required.\n\nThe components and their basic communication are shown in Figure 15-1 on page 337.", - "page_start": 359, - "page_end": 359, - "source_file": "sg246915.pdf" - }, - { - "text": "# **Preface**\n\nThis IBM® Redbooks® publication provides a practical guide to the design, installation, configuration, and maintenance of IBM Content Manager OnDemand Version 9.5.\n\nContent Manager OnDemand manages the high-volume storage and retrieval of electronic statements and provides efficient enterprise report management. Content Manager OnDemand transforms formatted computer output and printed reports, such as statements and invoices, into electronic information for easy report management. Content Manager OnDemand helps eliminate costly, high-volume print output by capturing, indexing, archiving, and presenting electronic information for improved customer service.\n\nThis publication covers the key areas of Content Manager OnDemand, some of which might not be known to the Content Manager OnDemand community or are misunderstood. The book covers various topics, including basic information in administration, database structure, storage management, and security. In addition, the book covers data indexing, loading, conversion, and expiration. Other topics include user exits, performance, retention management, records management, and many more.\n\nBecause many other resources are available that address subjects on different platforms, this publication is not intended as a comprehensive guide for Content Manager OnDemand. Rather, it is intended to complement the existing Content Manager OnDemand documentation and provide insight into the issues that might be encountered in the setup and use of Content Manager OnDemand. 
This book is intended for individuals who need to design, install, configure, and maintain Content Manager OnDemand.\n\n# **Authors**\n\nThis book was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center.\n\n**Wei-Dong Zhu** is a Content Management Project Leader with the ITSO at IBM US, California. She is a Certified Solution Designer for IBM Content Manager. She has more than 10 years of software development experience in accounting, image workflow processing, and digital media distribution (DMD). Her development work in one of the DMD solutions contributed to a first-time ever win for IBM of an Emmy award in 2005. Jackie joined IBM in 1996. She holds a Master of Science degree in Computer Science from the University of Southern California.\n\n**Jim Ilardi** is a Consulting Client Solution Professional in Carmel, New York. Jim has over 30 years of experience in IT and over 18 years working with Content Manager OnDemand. Jim started with IBM in Lab Services installing many OnDemand systems around the world. Today, Jim works in Pre-Sales Technical Sales covering Enterprise Content Manager in the New York Area.\n\n**Deborah Matamoros** is a Software Developer with Content Manager OnDemand. She has 26 years of development experience at IBM, with the last seven of those years in OnDemand. Her area of expertise is Report Distribution. Debbie holds a degree in Computer Science from the University of Oregon and currently resides in Park City, Utah.", - "page_start": 16, - "page_end": 16, - "source_file": "sg246915.pdf" - }, - { - "text": "# **3.3 Content Manager OnDemand XML Batch Administration**\n\nIn addition to the Administrator Client that runs under Windows, Content Manager OnDemand provides an administrative program that uses Extensible Markup Language (XML). 
The XML Batch Administration program (XML batch program) is run on the Content Manager OnDemand server and provides the same functionality as the Administrator Client.\n\nThe difference between the two programs is that for the Administrator Client, the user must provide input through the graphical user interface (GUI) as opposed to the XML batch program, which receives input through the XML interface.\n\nIn this section, we describe the following items:\n\n- -Benefits of using the XML batch program\n- -Using the XML Batch Administration program\n- -Special features of the XML batch program\n- -Tips on using the ARSXML command\n\n#### **Benefits of using the XML batch program**\n\nMany benefits are possible when you use the XML batch program:\n\n- - It provides another way to perform the Content Manager OnDemand system administrative tasks.\n- - It can process different types of objects, such as updating users in a group and application group permission at the same time.\n- -The Administrator Client is not needed.\n- - It is useful for replicating the same objects to multiple Content Manager OnDemand servers, and it can even replicate the object when no network connection exists between the servers.\n- -It simplifies the automation of system administrative tasks.\n- - For Content Manager OnDemand support purposes, the output XML file can be used to provide information to the support team for problem determination.\n\n# **3.3.1 Using the XML Batch Administration program**\n\nThis section provides a brief explanation of how to use the new XML batch program. For more information, see IBM Content Manager OnDemand for Multiplatforms - Administration Guide, SC19-3352.\n\nThe Batch Administration program is called **arsxml**. 
With this XML batch program, you can export, add, delete, and update Content Manager OnDemand objects.\n\nTo use the program, you must have the following files:\n\n- -The schema file, ondemand.xsd\n- -An input XML file (for example, exportusers.xml)\n- -A password stash file", - "page_start": 95, - "page_end": 95, - "source_file": "sg246915.pdf" - }, - { - "text": "If a FileNet P8 system is installed in your environment that serves as your primary content management system and reports need to be available to users without their knowing that those reports are in a different system, this integration might suit your needs. The same situation applies to the use of FileNet P8 Records Management, which can be applied to Content Manager OnDemand documents as well, therefore bringing a level of federated records management capability to your documents.\n\nWhen you plan your integration with FileNet P8, remember this federation is active: Content Manager OnDemand actively publishes document links into a FileNet P8 system. You must consider both volumes (FileNet P8 systems usually are smaller than Content Manager OnDemand systems) and the active federation process.\n\nFor more information about Content Manager OnDemand and FileNet P8 integration, see IBM FileNet Content Federation Services for Content Manager OnDemand, SC19-2711.\n\n# **8.3 Client API overview**\n\nWith various client options, multiple API options are available to navigate through the system and access Content Manager OnDemand documents. 
Although the Java API that is provided by Content Manager ODWEK is the API that is used most by clients and the basis for most development projects, other APIs are available and used for a limited range of scenarios.\n\nThe following list shows the APIs that are available for Content Manager OnDemand:\n\n- -Content Manager ODWEK: The Java API for Content Manager OnDemand\n- - SOAP and Representational State Transfer (REST) web services that follow the CMIS standard\n- -Windows OLE (ActiveX control) that is provided by the Windows client\n- -XML administrative API through the **ARSXML** server command\n- -Structured APIs on z/OS environments\n- - The standard Content Manager OnDemand server commands that serve as a console-based API to work with Content Manager OnDemand documents\n\n# **8.3.1 Content Manager OnDemand Web Enablement Kit**\n\nODWEK provides a Java API to access Content Manager OnDemand servers and their documents. It is the strategic client API that provides the largest feature set of any Content Manager OnDemand API. It is used by web clients, such as Content Navigator or WEBi, by abstraction layers, such as Information Integrator, or by API components, such as CMIS.\n\nThe ODWEK Java API and its use to develop Content Manager OnDemand clients are described in detail in IBM Content Manager OnDemand Web Enablement Kit Java APIs: The Basics and Beyond, SG24-7646. This section covers only a basic overview and focuses on client considerations about ODWEK. Developers are encouraged to read the referenced book before they plan a client development that is based on ODWEK.\n\n#### **Scope**\n\nODWEK is a Content Manager OnDemand component that can be used by all Content Manager OnDemand customers. It is focused on typical client use cases, such as searching for and accessing data that is stored in a Content Manager OnDemand archive. 
It also has web viewers, such as the line data applet and Content Manager OnDemand AFP viewer.", - "page_start": 225, - "page_end": 225, - "source_file": "sg246915.pdf" - }, - { - "text": "# **2**\n\n# **Chapter 2. Setting up a Content Manager OnDemand instance**\n\nThis chapter provides guidelines for implementing Content Manager OnDemand as a single instance.\n\nIn this chapter, we cover the following topics:\n\n- -Introduction\n- -Architecture and platform\n- - Implementing a Content Manager OnDemand instance on a multiplatform UNIX environment\n- -Implementing a Content Manager OnDemand instance on IBM i\n- -Implementing a Content Manager OnDemand instance on z/OS", - "page_start": 38, - "page_end": 38, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "sg246915.pdf", - "query": "Considering storage efficiency, should I store my AFP documents as PDF to distribute them over the web ?", - "target_page": 232, - "target_passage": "If a requirement exists to present AFP documents in the Portable Document Format (PDF) format over the web, from a storage perspective, it is more efficient to store the documents in their native format and then convert them to PDF at retrieval tim", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "# **13.4.1 PDF data**\n\nPortable Document Format (PDF) data is an increasingly common data type that can be archived within Content Manager OnDemand. The following key advantages are available by using this data type as a document format:\n\n- - It is a read-only format that does not require any external resources, such as images or fonts. 
It is self-contained.\n- - The viewer for PDF can be downloaded at no charge from the Adobe website and the browser plug-ins for PDF are also available at no charge.\n\nDuring PDF document creation, resources, such as images and custom fonts, are placed in the data stream once and then referenced many times from within the PDF file. If a large report is produced from many small documents, that report requires only one copy of the resources.\n\nHowever, when the PDF is indexed, the PDF Indexer creates many PDF documents from the input file. Each of these documents requires a certain number of PDF structures, which define a document. These documents are concatenated together in the .out file, and then loaded into Content Manager OnDemand as separate documents. Because the resources are extracted and placed into a separate resource file, they are not included in each document. For an illustration of the process, see Figure 13-3.\n\nFigure 13-3 PDF indexing\n\nIf no resources are collected, the size of the .out file, which contains all of the individual documents, might be larger than the original file. For tips about how to reduce the size of the output file, see 7.3.5, \"PDF indexing: Using internal indexes (Page Piece Dictionary)\" on page 173.", - "page_start": 331, - "page_end": 331, - "source_file": "sg246915.pdf" - }, - { - "text": "# **9.1 Overview of data conversion**\n\nTo work with data conversion, understand the data conversions that are required, and when and how to convert the data. 
Perform detailed planning before you build your solution so that you achieve a design that remains efficient for many years.\n\nIn this section, we describe why you might need data conversion, when to convert the data stream, and how to convert the data.\n\n# **9.1.1 Why convert data streams**\n\nYou might want to convert data streams for many reasons:\n\n- - Certain data streams, such as Hewlett-Packard (HP) Printer Command Language (PCL) or Xerox metacode, are printer-specific and cannot be displayed. Before you archive or display the documents, these data streams must be transformed into a compatible format.\n- - The archived data stream might need to comply with a company's internal rules or regulations. Therefore, the produced data streams must be transformed into the defined and required final format before they are archived.\n- - The documents might need to be accessible by a user that is outside of the company. The document must be displayed through standard tools that are available on any or at least most of the clients, such as an Internet browser or Adobe Acrobat Reader.\n- - The documents might need to be manipulated so that only part of the document is displayed in a personalized way.\n\n# **9.1.2 When to convert data streams**\n\nThe decision of *when* to convert data streams relies mainly on the use of the system. Typically, converting data at load time requires more time to process the print stream file, and converting data at retrieval time causes the user retrieval to be a little slower. The decision might depend on how many documents are retrieved, compared to how many documents are loaded daily. It might also depend on legal requirements about the format of stored data.\n\n# **AFP to PDF**\n\nIf a requirement exists to present AFP documents in the Portable Document Format (PDF) format over the web, from a storage perspective, it is more efficient to store the documents in their native format and then convert them to PDF at retrieval time. 
AFP documents are stored more efficiently than PDF documents.\n\nThe PDF print stream, when it is divided into separate customer statements, is larger than AFP because each statement contains its own set of structures that are required by the PDF architecture to define a document.\n\nElapsed time and processor time are also essential factors in the decision-making process. The amount of time (elapsed and CPU) that is needed to convert the document depends on how large the document is and how many resources or fonts are associated with the document.", - "page_start": 231, - "page_end": 231, - "source_file": "sg246915.pdf" - }, - { - "text": "# **Excel PDF Accessibility**\n\nArticle • 11/26/2024\n\n### **Summary**\n\nAuthors can ensure that their Execl workbooks are accessible to people with disabilities even when distributing them in PDF format using the following approach:\n\n- 1. First, they should follow the practices in Accessibility best practices with Excel spreadsheets .\n- 2. Next, they should follow the steps in Create accessible PDFs to preserve the accessibility of the workbook in PDF format.\n\nThis article provides details about the information Excel includes in the PDF to make it accessible.\n\n- 1. PDF/UA tags are included to provide semantic information about the content in the document.\n- 2. Decorative content does not need to be read, so it is marked as in the Content Tree in the PDF and no PDF/UA tags are included.\n\n### **PDF/UA Tags**", - "page_start": 44, - "page_end": 44, - "source_file": "office-pdf.pdf" - }, - { - "text": "# **5.1 Content Manager OnDemand cache storage**\n\nContent Manager OnDemand has a built-in cache storage management that is used to store documents on locally mounted disk subsystems. These subsystems can be network-attached storage (NAS), storage area networks (SAN), or any type of locally addressable disk that is available to the supported operating system. 
The cache storage manager uses a list of directories or file systems that are available to determine where space is available for storing and maintaining documents.\n\nEach Content Manager OnDemand object server in the system has a defined set of cache storage devices on which you can maintain the report data for a period to provide the fastest access times for system users.\n\nCertain implementations of Content Manager OnDemand use an all cache system to maintain data for its full retention. Other implementations store to both cache and archive storage. Other implementations store only to the archive.\n\nYou can configure Content Manager OnDemand so that at load time one of the following methods of data storage occurs:\n\n- - Data is stored in cache and later is automatically migrated from the cache subsystem to an archive system.\n- -Data is stored to both local cache and archive storage.\n- -Data is stored directly to archive storage.\n\nThese options are described in the following sections.\n\n# **5.2 IBM Tivoli Storage Manager for Multiplatforms**\n\nContent Manager OnDemand for Multiplatforms integrates with Tivoli Storage Manager and a license for this usage is included with Content Manager OnDemand. Within Tivoli Storage Manager, documents can be archived on various media, such as disk, optical, tape, and content-addressable storage (CAS) devices. These archive storage devices must be defined to the Tivoli Storage Manager system. Content Manager OnDemand uses the archive application programming interface (API) that is provided by Tivoli Storage Manager to store and retrieve documents.\n\nTo store application group data to the Tivoli Storage Manager ASM, the application group must be configured within Content Manager OnDemand to a defined storage set. 
This storage set contains a storage node that is defined within Tivoli Storage Manager and points to a specific storage area or media.\n\nWith the application group definition, you can specify whether and when the data is migrated to archive storage. For example, you can specify that the data will be migrated to archive storage when the document is originally loaded into the system, or that the data migration occurs the next time that the migration maintenance process is run, or that the data migration occurs after a certain number of days pass from the date that the data was loaded; or never.", - "page_start": 113, - "page_end": 113, - "source_file": "sg246915.pdf" - }, - { - "text": "**Note:** You see one set of messages for each object server on which you run the **ARSMAINT** program.\n\nFor example, when expiration processing starts on a specified server, you might see the following message:\n\n\"109 Cache Expiration (Date) (Min%) (Max%) (Server)\"\n\nMigration processing uses the specified date (the default is \"today\" in internal format). Expiration processing begins on each cache file system that exceeds the Max% (default 80%) and ends when the free space that is available in the file system falls below the Min% (default 80%).\n\nOne of these messages shows for each storage object that is deleted from cache storage. A storage object is eligible to be deleted when its \"Cache Document Data for n Days\" or \"Life of Data\" period passes (whichever occurs first).\n\nA storage deletion message looks similar to the following message:\n\n\"196 Cache Migration (ApplGrp) (ObjName) (Server)\"\n\nAlso, information-only messages report the percentage of space that is used in the file system.\n\nAn information message looks similar to the following message:\n\n```\n\"124 Filesystem Statistics (filesystem) (% full) (server)\"\n```\n#### **Load table (ARSLOAD)**\n\nThe ARSLOAD table can be used to track loads for expiration. 
This table maintains a record of all successful loads to application groups with the \"expire by load\" expiration type.\n\n# **10.5.3 Removing documents from the Tivoli Storage Manager archive**\n\nRemoving a document from archive storage means that the backup (if the primary document copy is in cache) or long-term copy (if the primary document copy is in archive) of the document is deleted from the system. You remove documents from archive storage when you no longer have a business or legal requirement to keep them.\n\nA *management class* contains an archive copy group that specifies the criteria that makes a document eligible for deletion. Documents become eligible for deletion under the following conditions:\n\n- -Administrators delete documents from client nodes\n- - An archived document exceeds the time criteria in the archive copy group (how long archived copies are kept)\n\nASM does not delete information about expired documents from its database until expiration processing runs. You can run expiration processing either automatically or manually by command. Ensure that expiration processing runs periodically to allow ASM to reuse storage pool space that is occupied by expired documents.\n\nWhen expiration processing runs, ASM deletes documents from its database. The storage space that these documents used to occupy then becomes reclaimable. For more information, see \"Reclaiming space in storage pools\" on page 233.", - "page_start": 255, - "page_end": 255, - "source_file": "sg246915.pdf" - }, - { - "text": "Consider the following information about Table 7-1 on page 164:\n\n- - The Generic indexer requires the user to manually create an index file in the generic index format before the user starts the load process. The Generic indexer allows the capture of documents, index values, and resources that are identified to it. 
These documents, index values, and resources are then loaded into the Content Manager OnDemand archive and stored in the same manner as though they were loaded through any of the other indexers. An existing resource file can be loaded with a generic index file.\nFor more information about the generic index format, see IBM Content Manager OnDemand - Indexing Reference, SC19-3354.\n\n- - The ACIF, PDF, XML, and OS/400 indexers all generate intermediate files. These files are then used to load the indexes and data into the Content Management OnDemand system.\n- - The OS/390 indexer creates the index data while it loads the indexes and data into the Content Management OnDemand system.\n- - *Conversion* refers to a conversion by the indexer. Other products integrate with Content Manager OnDemand that also convert data.\n- - Because of the architecture of PDF documents, large object support for PDF documents is not possible.\n- - Starting with V9.5, the PDF Indexer runs in the PASE environment on IBM i. PASE is a prerequisite on IBM i for V9.5.\n- -Starting with V9.5, the PDF Indexer is no longer supported on z/OS.\n\n# **7.2 Getting started with PDF indexing**\n\nPDF is a standard that is specified by Adobe Systems, Incorporated, for the electronic distribution of documents. PDF files are compact. They can be distributed globally through email, the web, intranets, or CD-ROM, and viewed with Adobe Reader.\n\nPDF is a data type or file format that is platform (hardware, operating system)-independent. A PDF file contains a complete PDF document that is composed of text, graphics, and the resources that are referenced by that document.\n\nTwo PDF file layouts are possible:\n\n- -Non-Linear (not \"optimized\")\nThis file layout is optimized for space savings. Storing a PDF file by using a Non-Linear layout consumes less disk space than storing the same PDF file linearly. 
It is slower to access or display this type of layout because portions of the data that is required to assemble pages of the document are scattered throughout the PDF file, so the whole PDF file must be downloaded and accessed before the file can be displayed.\n\n- -Linear (\"optimized\" or \"web optimized\")\nIn this file format, the PDF file is created in a linear (in page order) fashion. This file format allows the PDF viewer to start displaying the PDF document pages when they are downloading without waiting for the whole PDF file to be downloaded.", - "page_start": 188, - "page_end": 188, - "source_file": "sg246915.pdf" - }, - { - "text": "The *object size* is defined by clicking **Advanced** on the Storage Manager tab of the Application Group window. The object size is the size of a storage object in kilobytes (KB). By default, Content Manager OnDemand segments and compresses report data into 10 MB storage objects. For most use cases, the default value is appropriate. Valid values are 1 KB - 150 MB.\n\n**Object size value:** Exercise caution when you change the object size value. Specifying too large or too small a value can adversely affect performance when you load data.\n\nThe storage objects are stored in *storage sets*. The storage sets contain one or more primary *storage nodes*. The storage node points to the location where the data is stored, which can be cache, the storage manager (Tivoli Storage Manager, object access method (OAM), or Archive Storage Manager (ASM)), or a combination.\n\nThe primary storage nodes can be on one or more object servers. When the Load Type is Local, Content Manager OnDemand loads data on the server on which the data loading program runs in the primary storage node with the Load Data property specified. 
If the Load Type is Local, and the storage set contains primary nodes on different object servers, you must select the **Load Data** check box for one primary node on each object server.\n\nThe storage set must support the number of days that you plan to maintain reports in the application group. For example, if you must maintain reports in archive storage for seven years, the storage set must identify a storage node (or migration policy on an IBM i server) that is maintained by ASM for seven years.\n\nA detailed description of adding storage sets and storage nodes is in Chapter 5, \"Storage management\" on page 89 and the related OnDemand Administrative Guide.\n\n# **10.2.1 Storing the report (document) data**\n\nTo improve efficiency and scalability, stored documents are embedded within storage objects. The storage objects are then stored in cache or a storage manager (OAM, Tivoli Storage Manager, or ASM). The storage objects are eventually expired from the system based on values that are defined by the Content Manager OnDemand administrator. In this section, we describe each scenario and how it is implemented. 
The parameters that are described in this section are on the Storage Manager tab of the Application Group window unless otherwise specified.\n\nThree sets of data are stored when you load a report:\n\n- -Index data, which is extracted by the indexing program and used by the search process\n- -Resources, such as an overlay and fonts, which are used to customize the viewed data\n- -Documents (or report segments) that will be viewed\n\nFigure 10-2 on page 222 shows the datasets and illustrates four scenarios of their storage and expiration.", - "page_start": 244, - "page_end": 244, - "source_file": "sg246915.pdf" - }, - { - "text": "# **Application Group Identifier and the Application Group ID**\n\nThe Application Group Identifier and the Application Group ID (AGID) are unique identifiers that are used by Content Manager OnDemand to identify the application group in system tables.\n\n# **Migrate Data from Cache**\n\nThe Migrate Data from Cache value determines when documents and resources are migrated to archive storage. A storage set that is associated with a Tivoli Storage Manager client node must be selected to enable migration to archive storage.\n\nThe following values are valid:\n\n- - No: Data is never migrated from cache. This option is unavailable when a storage set that is associated with a Tivoli Storage Manager client node is selected for the application group.\n- - When data is loaded: Data is migrated to archive storage when the data is loaded into the application group.\n- - Next cache migration: Data is migrated to archive storage the next time that **ARSMAINT** is run with the **-m** option. The **-m** option indicates that data and resources are copied from cache to archive storage.\n- - After __ days in cache: This value specifies the number of days that data remains in cache storage. 
After the prescribed number of days in cache storage are reached, the data is copied to archive storage the next time that **ARSMAINT** is run with the **-m** option for data migration.\n\n# **5.2.7 IBM System Storage Archive Manager**\n\nCertain regulations require data to be stored in devices that are read only. In the past, physical storage devices, such as tapes and optical disks that are Write Once Read Many (WORM), were used.\n\nWORM disks, such as the NetApp SnapLock or EMC Centera, can be used to store data in the same manner as WORM tapes or optical platters. IBM System Storage Archive Manager allows critical data to be retained for a mandated period without the possibility of being rewritten or erased.\n\nIn this section, we describe System Storage Archive Manager and how Content Manager OnDemand can be configured to use this subsystem to support these WORM disk devices.\n\n**Note:** Verify support for any particular device on a particular platform through the Tivoli Storage Manager Device support matrix before you plan your implementation.\n\nFor more information about the Tivoli Storage Manager support of WORM disk devices, such as NetApp SnapLock, or EMC Centera, see the following IBM Knowledge Center documents:\n\n- -Tivoli Storage Manager for AIX Administrator's Guide\n- -Tivoli Storage Manager for Windows Administrator's Guide\n\nYou can obtain these documents from the IBM Tivoli Storage Manager Knowledge Center at the following web address:\n\nhttp://www.ibm.com/support/knowledgecenter/SSGSG7/welcome?lang=en:", - "page_start": 127, - "page_end": 127, - "source_file": "sg246915.pdf" - }, - { - "text": "- - Data reduction techniques for space efficiency, such as thin provisioning, Data Reduction Pools (DRP), deduplication, and IBM Real-time Compression™ (RtC). Today, open systems typically use less than 50% of the provisioned storage capacity. 
IBM Spectrum Virtualize, can enable significant savings, increase effective capacity of storage systems up to five times, and decrease the floor space, power, and cooling required by the storage system.\nIBM Storwize V7000 is a scalable solution running on a highly available platform that can use diverse back-end storage systems to provide all the benefits to various attached hosts.\n\n# **1.2 Benefits of using IBM Spectrum Virtualize**\n\nThe storage virtualization functions of IBM Spectrum Virtualize are a powerful tool in the hands of storage administrators. However, for an organization to fully realize benefits of storage virtualization, its implementation must be the result of a process that begins with identifying the organization's goals. For a storage virtualization project to be a success, the organization must identify what it wants to achieve before it starts to think how to implement the solution.\n\nToday, organizations are searching for affordable and efficient ways to store, use, protect, and manage their data. Additionally, a storage environment is required to have an easy to manage interface and be sufficiently flexible to support wide range of applications, servers, and mobility requirements. Although business demands change quickly, some recurring client concerns drive adoption of storage virtualization, including the following examples:\n\n- -Growing data center costs\n- -Inability of IT organizations to respond quickly to business demands\n- -Poor asset usage\n- - Poor availability and resultant unsatisfactory (for the clients) or challenging (for the providers) service levels\n- -Lack of skilled staff for storage administration\n\nThe importance of addressing the complexity of managing storage networks is clearly visible in the results of industry analyses of the total cost of ownership (TCO) of storage networks. Typically, storage costs are only about 20% of the TCO. 
Most of the remaining costs relate to managing storage systems.\n\nIn a non-virtualized storage environment, every system is an \"island\" that must be managed separately. In large SAN environments, the sheer number of separate and different management interfaces and lack of unified view of the whole environment jeopardizes an organization's ability to manage their storage as a single entity and to maintain the current view of the system state.\n\nIBM Storwize V7000, running IBM Spectrum Virtualize software, reduces the number of separate environments that must be managed down to a single system. After the initial configuration of the back-end storage subsystems, all of the day-to-day storage management operations are performed by way of a single graphical user interface. At the same time, administrators gain access to the rich functionality set provided by IBM Spectrum Virtualize, whether particular features are natively available on the virtualized storage systems.", - "page_start": 26, - "page_end": 26, - "source_file": "sg247938.pdf" - }, - { - "text": "We recommend that you use only the base 14 fonts when you create PDF documents. Because these fonts are not embedded in the document, documents that are created with these fonts are smaller, and the resource file is also smaller.\n\n# **Accessing fonts**\n\nIf a document references fonts that are not embedded and fonts that are not available on the system, the document does not display correctly in the report wizard, and the PDF Indexer cannot index it. 
In the report wizard, the document might display as a series of dots instead of letters; the PDF Indexer fails with the \"Trigger not found\" message.\n\nIf your documents contain Asian fonts, ensure that you install them when you install Adobe Acrobat.\n\nIf the fonts are not embedded in the document, use the **FONTLIB** parameter to tell the PDF Indexer the location of font files.\n\n### **Listing fonts in a PDF file**\n\nIf you want to know the fonts that are contained in a PDF document, a simple method within the Adobe viewer is available to list the fonts in your data.\n\nFollow these steps to list the fonts in a PDF (for example, for Adobe Reader XI, version 11.0.3):\n\n1. Display your PDF document in the Adobe viewer (or reader).\n\n2. Click **File** → **Document Properties** → **Fonts**. You will see a list of fonts for the document.\n\nThe path to see the fonts might differ, depending on your viewer version.\n\n# **7.3.2 Reducing output file size with PDF documents**\n\nWhen you index PDF data, you might be surprised by the size of the output file that the PDF Indexer creates after it indexes the data. In certain cases, the PDF file that is loaded into Content Manager OnDemand is many times larger than the source PDF file.\n\nWhen the input file is indexed, it is split into multiple PDF documents. Each PDF document contains its own set of PDF structures that are required by the PDF architecture. 
For this reason, the multiple PDF documents that are created by the indexing can be larger in total than the original PDF document.\n\nOne way to reduce the size of the output file is using the base 14 fonts.", - "page_start": 190, - "page_end": 190, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20200438_en.pdf", - "query": "Where can I consult a summary of the impact of the International tax compliance regulations ?", - "target_page": 3, - "target_passage": "A Tax Information and Impact Note covering the International Tax Compliance Regulations 2015 was published on 18th March 2015 and is available on the HMRC website at https://www.gov.uk/government/publications/tax-administration-regulations-to-implement-the- uks-automatic-exchange-of-information-agreements", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## **EXPLANATORY NOTE**\n\n#### *(This note is not part of the Regulations)*\n\nThese Regulations make amendments to secondary legislation relating to special educational needs and disability in order to provide exceptions to time limits set out in that legislation where they cannot be met because of a reason relating to the incidence or transmission of coronavirus.\n\nRegulation 2 contains review and expiry provisions. The Secretary of State is required to review the effectiveness of the Regulations during the period in which they have effect. The Regulations cease to have effect on 25th September 2020.\n\nRegulations 3 to 14 amend the Special Educational Needs and Disability Regulations 2014 ('the SEND Regulations 2014').\n\nRegulation 5 inserts a glossing provision into the SEND Regulations 2014 which relaxes certain requirements in those Regulations for actions to be taken within specified time limits where it is not reasonably practicable for a person to meet those requirements for a reason relating to the incidence or transmission of coronavirus. 
Instead, any such requirement is to be read as a requirement for such action to be taken as soon as reasonably practicable.\n\nRegulations 6 to 14 make textual amendments to the SEND Regulations 2014 to relax time limits.\n\nRegulations 15 to 17 amend the Special Educational Needs (Personal Budgets) Regulations 2014 ('the Personal Budgets Regulations 2014').\n\nRegulation 17 inserts a similar glossing provision into the Personal Budgets Regulations 2014 as regulation 5 does in respect of the SEND Regulations 2014.\n\nRegulations 18 to 27 amend the Special Educational Needs and Disability (Detained Persons) Regulations 2015 ('the Detained Persons Regulations 2015').\n\nRegulation 20 inserts a glossing provision into the Detained Persons Regulations 2015 similar to the ones in regulations 5 and 17 in relation to the SEND Regulations 2014 and the Personal Budgets Regulations 2014 respectively.\n\nRegulations 21 to 27 make textual amendments to the Detained Persons Regulations 2015 to relax time limits.\n\nRegulations 28 to 30 amend the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017 ('the First-tier Tribunal Regulations 2017').\n\nRegulation 30 inserts a glossing provision into the First-tier Tribunal Regulations 2017 similar to those in regulations 5, 17 and 20.\n\nAn impact assessment has not been produced for this instrument as this is a temporary, emergency measure and no significant impact on business, charities or voluntary bodies is foreseen.\n\nAn Explanatory Memorandum is published alongside this instrument on www.legislation.gov.uk.\n\n \n\n© Crown copyright 2020\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 5, - "page_end": 5, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "accounts so that these terms are 
defined by reference to the date that those accounts ceased to be excluded accounts. Regulation 2(3) and (4)(a) make consequential amendments.\n\nRegulation 3 makes a transitional provision for the calendar year 2020 in relation to accounts which were previously excluded accounts.\n\nA Tax Information and Impact Note covering the International Tax Compliance Regulations 2015 was published on 18th March 2015 and is available on the HMRC website at https://www.gov.uk/government/publications/tax-administration-regulations-to-implement-theuks-automatic-exchange-of-information-agreements. It remains an accurate summary of the impacts that apply to this instrument.\n\n© Crown copyright 2020\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200438_en.pdf" - }, - { - "text": "## **2020 No. 
438**\n\n## **TAXES**\n\n# The International Tax Compliance (Amendment) Regulations 2020\n\n| Laid before the House of Commons | | | | 21st April 2020 |\n| --- | --- | --- | --- | --- |\n| Made - Coming into force | - | - - | - - | 20th April 2020 13th May 2020 |\n\nThe Treasury make these Regulations in exercise of the powers conferred by section 222 of the Finance Act 2013(**a**):\n\n#### **Citation and commencement**\n\n**1.** These Regulations may be cited as the International Tax Compliance (Amendment) Regulations 2020 and come into force on 13th May 2020.\n\n#### **Amendments to the International Tax Compliance Regulations 2015**\n\n**2.**—(1) The International Tax Compliance Regulations 2015(**b**) are amended as follows.\n\n(2) In regulation 1(3)(b)(i), for \"16th May 2019\" substitute \"19th April 2020\"(**c**).\n\n- (3) In regulation 3(4A)(a), at the beginning insert \"subject to regulation 24(3)\".\n- (4) In regulation 24—\n\n- (a) in the table in paragraph (2), in the column headed \"the CRS\"—\n\t- (i) at the beginning of the entry for \"new account\" insert \"subject to paragraph (3)\", and\n\t- (ii) at the beginning of the entry for \"pre-existing account\" insert \"subject to regulation 3(4A)(a) and paragraph (3)\", and\n- (b) after paragraph (2) insert—\n\t- \"(3) In respect of the accounts listed in paragraph (4)—\n\n(<b>a) 2013 c. 29; section 222 was amended by section 50 of the Finance (No. 2) Act 2015 (c. 33) but the amendments are not relevant to these Regulations.\n\n(<b>b) S.I. 2015/878 (referred to in these footnotes as \"the principal Regulations\"); relevant amending instruments are S.I. 
2017/598, 2018/490 and 2019/881.\n\n(<b>c) In accordance with the common reporting standard for automatic exchange of financial account information developed by the Organisation for Economic Co-operation and Development and adopted by the United Kingdom, the United Kingdom exchanges information received from financial institutions under the principal Regulations with a territory which is a \"Reportable Jurisdiction\" under the CRS and with which the United Kingdom has entered into international exchange arrangements for that year. Reportable Jurisdictions are identified in a published list available at https://www.gov.uk/hmrcinternal-manuals/international-exchange-of-information/ieim402340. A hard copy of this list is available for inspection at the offices of HMRC at 10 South Colonnade, 9th Floor, Canary Wharf, London E14 4PU.", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200438_en.pdf" - }, - { - "text": "(3) In regulation 4ZA—\n\n- (a) in the heading, for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\";\n- (b) in paragraph (1)(a), for \"regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\")\" substitute \"regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 (\"the International Travel and Operator Liability Regulations\")\";\n- (c) in paragraph (1)(c), for \"paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\";\n- (d) in paragraph (3), for \"paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International 
Travel and Operator Liability Regulations\".\n\n**2.**—(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020(**a**) are amended as follows.\n\n(2) In regulation 2D(1)(c), for \"regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n(3) In regulation 6(1)—\n\n- (a) in the definitions of \"designated place\", \"isolation requirements\" and \"self-isolating worker\", for \"regulation 4\" substitute \"regulation 9\";\n- (b) in the definition of \"International Travel Regulations\", for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n# SCHEDULE 16 Regulation 26(3)\n\n### Transitional provision\n\n**1.** Passenger information provided before 4.00 a.m. on 17th May 2021 by a person pursuant to regulation 3 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\") in advance of arrival in England is treated as passenger information provided for the purposes of these Regulations where the person arrives in England on or after that date.\n\n**2.** Confirmation given by the Foreign, Commonwealth and Development Office that a person is not required to comply with regulation 3B of the 2020 Regulations is treated as confirmation that the person is not required to comply with regulation 6 of these Regulations where the person arrives in England on or after 4.00 a.m. 
on 17th May 2021.\n\n**3.** A designation by the Secretary of State of a person as an authorised person under regulation 5(7) of the 2020 Regulations has effect as a designation of that person as an authorised person under of regulation 11(11)(c) of these Regulations.\n\n**4.** Regulation 5A of the 2020 Regulations continues to have effect in relation to a constable who exercises the powers in that regulation in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021.\n\n(<b>a) S.I. 2020/1045. Regulation 2D was inserted by S.I. 2021/364. There are other amendments but none is relevant.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "*compliance harder to verify and, in the absence of that verification procedure, harder to enforce (especially in OSH cultures with a history of the prescriptive approach).'*353\n\nRegarding the **level of compliance with the legal goals or prescriptions**, the study authors assess it as 'moderate to good.' They see major differences depending on the topic and the size of the enterprises:\n\n*'However, the collected data shows that overall compliance with the OSH acquis across the EU and across establishment sizes is moderate to good. There is no indication that compliance is measurably higher in the public sector compared to the private sector. 
Yet, in reality, compliance varies significantly from directive to directive, from MS to MS and across establishment sizes.*\n\n*Micro establishments: Cannot be assessed (limited evidence points to poor overall quantitative compliance)*\n\n- *10 to 19 employees: Poor overall quantitative compliance*\n- *20 to 49 employees: Moderate overall quantitative compliance*\n- *50 to 249 employees: Good overall quantitative compliance*\n- *250 to 499 employees: Good overall quantitative compliance*\n- *500+ employees: Very good overall quantitative compliance'.*354\n\nIn 2018, DG EMPL organised a peer review on 'The efficient transposition, implementation and enforcement of EU OSH legislation' for each EU Member State.355 The overall conclusion is positive but refers to the difference **between formal (paper) compliance and 'real improvements'**:\n\n*'Although not uniform across employers (with evidence that smaller businesses in particular find some of the demands challenging and difficult to implement) indications are also that the transposed legislation is being implemented within workplaces. However, there are indications that the fact of implementation is not necessarily a true indicator of the quality of that action, with suggestions that \"compliance\" is to some extent a paper exercise and is not always reflected in real improvements in working environments.'*\n\nThe authors of EU-OSHA's **'Supporting compliance' report**356 note the same difference, using the terms **'substantive' versus 'rule compliance'**.357 This report and underlying literature review have specifically analysed reasons and context for compliance and non-compliance. 
They analysed the influence of:\n\n- social norms and social reporting strategies, and corporate social responsibility;\n- economic incentives and the business case for OSH;\n- the role of supply chain relations in supporting OSH;\n- prevention services; and\n- strategies and practices adopted by OSH regulators.\n\nThey conclude on a variety of aspects: *'During the last half-century, there has been a significant and well documented move away from prescriptive regulatory standards and efforts by national regulatory agencies to enforce them towards more principle-, performance- and process-based regulatory requirements …. This shift was originally informed by notions that traditional command and control strategies, however compromised by resource or governance, had achieved as much as they were likely to, and that different approaches were necessary to bring about the further improvements in OSH that were desired.* 358 (regarding reasons of non-compliance at enterprise level see also the chapter on 'Prevention Practices in Enterprises').\n\nNot all worker groups, sectors or forms of work are equally covered by these directives. Since the first protective OSH legislations, **some important groups or sectors had exceptions from full application of OSH legislation**. Depending on the Member State, such exceptions are applied to selfemployed and contracted work, military, public sector, mining, workers in the marine sector and offshore installations, family members, personal and household services, work in charitable organisations, volunteers in general, and domestic and mobile workplaces. In addition to these existing exemptions, we can observe in the last two to three decades an accelerating trend of erosion of the conventional employer–employee relation. 
Examples are outsourcing of work to contractors, often to self-employed, or platform work.", - "page_start": 121, - "page_end": 121, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "### **6.3 Guidance and support**\n\n**Supervision is only one approach to implementing legislation.** As mentioned, supervision by state authorities can only reach a small share of all enterprises, particularly not the many small ones and the self-employed. In addition to supervision and control, a broad variety of **prevention-supporting activities** has been developed during the past decades.388\n\nThe authors of EU-OSHA's 'Supporting compliance' reports state a strong increase in 'compliance promotion strategies'. They write: *'The regulatory changes have been matched in more recent times by an increasingly diverse set of compliance promotion strategies. Not only has public regulation sought to engage and encourage duty holders in the pursuit of forms of regulated self-regulation, but … the discourse on regulation itself has sought a far broader understanding of its meaning and the role of the private and public regulatory actors and processes potentially involved in both defining and securing compliance.'389*\n\nOne important type of means are **guidance and support tools** for enterprises and workers to extend the reach and impact of legislation. Labour inspectorates and other state institutions produce these tools either themselves or in collaboration with social partners or professional organisations.\n\n**Proactive research and preventive guidelines**, particularly in situations of new risks, have become a quite usual preventive activity (e.g. on nanotechnology, or on some developments in digitalisation). For very complex regulations, like REACH, national institutions installed helpdesks. 
European institutions also publish such guidance documents for EU-wide use, for example, the guidance on health and safety in agriculture,390 the guidance regarding the implementation of the Machinery directive,391 the guidance documents of EU-OSHA on COVID-19392 and the European Commission guidance documents on seasonal workers and COVID-19. 393 Practically all EU and international OSH institutions published guidance documents on how to identify and reduce psychosocial risk at workplaces.394\n\nA large amount of **OSH guidance** already exists in different formats,395 starting with classical written guidance documents, increasingly complemented by audio-visual and interactive tools. EU-OSHA covers a large variety of workplaces with its digital risk assessment tool OiRA (Online interactive Risk", - "page_start": 124, - "page_end": 124, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "**18.** Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n#### **EXPLANATORY NOTE**\n\n#### *(This note is not part of the Regulations)*\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the International Travel Regulations\"), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. 
They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\n \n\n© Crown copyright 2021\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **Future Accounting Policy Changes**\n\n#### IFRS 9 ‑ Financial Instruments (\"IFRS 9\")\n\nIFRS 9, as issued in 2010, reflects the first phase of the IASB's work on the replacement of IAS 39 and applies to classification and measurement of financial assets and financial liabilities as defined in IAS 39. The standard was initially effective for annual periods beginning on or after January 1, 2013. In November 2013, Chapter 6 of IFRS 9 on hedge accounting was published. At the same time, Chapter 7, containing the effective date and transition provisions, was amended to remove the mandatory effective date of IFRS 9. This was intended to provide sufficient time for preparers to make the transition to the new requirements. The Company may still choose to apply IFRS immediately, but is not required to do so.\n\nIn subsequent phases, the IASB is addressing impairment of financial assets. The adoption of the first phase of IFRS will have an effect on the classification and measurement of the Company's financial assets, but will not have an impact on the classification measurements of financial liabilities. 
The Company is in the process of assessing the impact IFRS 9 may have on future financial statements.\n\n#### IFRIC Interpretation 21 ‑ Levies (\"IFRIC 21\")\n\nIFRIC 21 clarifies that an entity recognises a liability for a levy when the activity that triggers payment, as identified by the relevant legislation, occurs. IFRIC 21 is effective for annual periods beginning on or after January 1, 2014. The Company is in the process of assessing the impact IFRIC 21 may have on future financial statements.\n\n## **Disclosure Controls and Procedures and Internal Controls**\n\nThe Company's management, including the Chief Executive Officer and the Chief Financial Officer, does not expect that the Company's Disclosure Controls and Procedures and Internal Controls will prevent or detect all error and all fraud. Because of the inherent limitations in all control systems, an evaluation of controls can provide only reasonable, not absolute, assurance that all control issues and instances of fraud or error, if any, within the Company have been detected.\n\n#### *Disclosure Controls and Procedures*\n\nAs of December 31, 2013, the Company's management evaluated the effectiveness of the operation of its disclosure controls and procedures (\"Disclosure Controls\"), as defined under rules adopted by the Canadian Securities Administrators. 
This evaluation was performed under the supervision of, and with the participation of, the Chief Executive Officer and the Chief Financial Officer.\n\nDisclosure controls and procedures are designed to ensure that information required to be disclosed in documents filed with securities regulatory authorities is recorded, processed, summarized and reported on a timely basis, and is accumulated and communicated to the Company's management, including the Chief Executive Officer and the Chief Financial Officer, as appropriate, to allow timely decisions regarding required disclosure.\n\nBased on the evaluation of Disclosure Controls, the Chief Executive Officer and the Chief Financial Officer have concluded that, subject to the inherent limitations noted above, the Company's Disclosure Controls are effective in ensuring that material information relating to the Company and its consolidated subsidiaries is made known to the Company's management on a timely basis by others within those entities, and is included as appropriate in this MD&A.\n\n#### *Internal Controls over Financial Reportin*g\n\nInternal controls over financial reporting (\"ICFR\") are designed to provide reasonable assurance regarding the reliability of the Company's financial reporting and its preparation of financial statements for external purposes in accordance with IFRS. Management's documentation and assessment of the effectiveness of the Company's ICFR continues as of the date of this MD&A with the focus on processes and controls in areas identified as being \"key risks\".\n\nAs of the financial year ended December 31, 2013, the certifying Officers have evaluated the design and effectiveness of such ICFR, or caused them to be designed and evaluated under their supervision. 
The certifying Officers have concluded that the design and effectiveness of ICFR were operating effectively as at December 31, 2013, to provide reasonable assurance regarding the reliability of financial reporting and the preparation of financial statements for external purposes in accordance with IFRS. The certifying Officers have evaluated whether there were any changes to the Company's ICFR during the year ended December 31, 2013 that have materially affected, or are reasonably likely to materially affect its ICFR. No changes were identified through their evaluation.\n\n## **Subsequent Events**\n\nOn January 20, 2014, and February 18, 2014, the Company announced dividends of $0.05 per share, payable on February 17, 2014, and March 17, 2014, to shareholders of record on January 31, 2014, and February 28, 2014.", - "page_start": 62, - "page_end": 62, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# **NOTE 1 - STATEMENT OF SIGNIFICANT ACCOUNTING POLICIES continued**\n\n# **u) Adoption of New and Revised Accounting Standards**\n\nDuring the current reporting period the Group adopted all of the new and revised Australian Accounting Standards and Interpretations applicable to its operations which became mandatory. The nature and effect of selected new standards and amendments on the Group's consolidated financial report are described below. Adoption of the other new mandatorily applicable standards did not have a material impact on the financial statement, financial position or performance of the Group.\n\n# **AASB 2011-4 -** *Amendments to Australian Accounting Standards to Remove Individual Key Management Personnel Disclosure*\n\nThis standard removes the requirements to include individual key management personnel disclosures in the notes to and forming part of the Financial Report. 
This standard also removes the individual KMP disclosure requirements for all disclosing entities in relation to equity holdings, loans and other related party transactions.\n\n# **Amendments to IAS 32 -** *Offsetting Financial Assets and Financial Liabilities*\n\nThe amendments to IAS 32 clarify the requirements relating to the offset of financial assets and financial liabilities. Specifically, the amendments clarify the meaning of 'currently has a legally enforceable right of set-off' and 'simultaneous realization and settlement'. As the Group does not have any financial assets and financial liabilities that qualify for offset, the application of the amendments has had no impact on the disclosure or the Group's consolidated financial statements.\n\n# **Recently issued accounting standards to be applied in future reporting periods:**\n\nThe following Standards and Interpretations have been issued but are not yet effective. These are the standards that the Group reasonably expects will have an impact on its disclosures, financial position or performance with applied at a future date. The Group's assessment of the impact of these new standards, amendments to standards, and interpretations is set out below.\n\n# **AASB 9/IFRS 9 –** *Financial Instruments*\n\nAASB 9/IFRS 9 introduces new requirements for the classification, measurement, and derecognition of financial assets and financial liabilities. The final version of IFRS 9 supersedes all previous versions of the standard. However, for annual periods beginning before 1 January 2018, an entity may elect to apply those earlier versions of IFRS 9 if the entity's relevant date of initial application is before 1 February 2015. The effective date of this standard is for fiscal years beginning on or after 1 January 2018. 
Management is currently assessing the impact of the new standard but it is not expected to have a material impact on the Group's consolidated financial statements.", - "page_start": 72, - "page_end": 72, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "- (a) \"new account\" means a financial account maintained by a reporting financial institution(**a**) opened on or after 13th May 2020;\n- (b) \"pre-existing account\" means—\n- (i) a financial account maintained by a reporting financial institution as of 12th May 2020, or\n- (ii) a financial account within Section VIII(C)(9)(b) of Annex 1 of the DAC(**b**), but in the application of that provision the references to \"subparagraph C(9)(a)\" are to be read as references to paragraph (i) of this sub-paragraph.\n\t- (4) The accounts are—\n\t\t- (a) non-registered pension arrangements where the annual contributions are limited to £50,000 and funds contributed cannot be accessed before the age of 55 except in circumstances of serious ill health;\n\t\t- (b) Premium Bonds issued by the UK National Savings and Investments;\n\t\t- (c) Fixed Interest Savings Certificates issued by the UK National Savings and Investments; and\n\t\t- (d) Index Linked Savings Certificates issued by the UK National Savings and Investments.\".\n\n(5) In Schedule 2, omit paragraphs 2, 6, 8 and 9.\n\n#### **Transitional provision**\n\n**3.**—(1) For the purposes of the International Tax Compliance Regulations 2015, in relation to an account that by virtue of regulation 2(5) ceases to be an excluded account, the calendar year 2020 is treated as beginning on 13th May 2020 and ending on 31st December 2020.\n\n(2) Where in consequence of paragraph (1) it is necessary to apportion an amount for the calendar year 2020 to the period ending immediately before 13th May 2020 and the period beginning with that date, it is to be apportioned—\n\n- (a) on a time basis according to the respective length of the periods, or\n- (b) if that method would produce 
a result that is unjust or unreasonable, on a just and reasonable basis.\n\n*David Rutley Maggie Throup* 20th April 2020 Two of the Lords Commissioners of Her Majesty's Treasury\n\n### **EXPLANATORY NOTE**\n\n*(This note is not part of the Regulations)* \n\nThe Regulations amend the International Tax Compliance Regulations 2015 (\"the principal Regulations\") which give effect to agreements and arrangements reached between the United Kingdom and other jurisdictions to improve international tax compliance.\n\nRegulation 2(2) extends the application of the principal Regulations to arrangements entered into by the United Kingdom for the exchange of financial account information with other jurisdictions up to 19th April 2020, the date before the Regulations are made.\n\nRegulation 2(5) omits various accounts from the category of excluded accounts. Regulation 2(4)(b) amends the definitions of \"new account\" and \"pre-existing account\" in relation to those\n\n(<b>a) \"Financial account\" and \"reporting financial institution\" are defined in the table in regulation 24(2) of the principal Regulations.\n\n(<b>b) \"The DAC\" is defined in regulation 1(3)(a) of the principal Regulations.", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20200438_en.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed12.pdf", - "query": "What was the muscle volume of the knee flexors of the 2024 word's strongest man ?", - "target_page": 7, - "target_passage": "Knee flexors 3,060 ", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "predictions of skeletal muscle mass nor dual-energy X-ray absorptiometry provides detailed information on the size of specific individual muscles. Given the known importance of muscle size as a determinant of muscular strength (9–11), pronounced muscle size seems likely to be critical to extreme human strength; however, the specific muscle size of extremely strong individuals remains unknown. 
Similarly, a large moment arm (e.g., of the patella tendon at the knee joint) could contribute to the expression of high muscular strength (10, 12), and a large tendon may mitigate the mechanical stress it experiences with very high muscular loads, and therefore, these characteristics may also be expected in individuals selected for exceptional strength.\n\nIn this paper, we present the findings from a unique opportunity to examine the laboratory function, muscle size, and distribution of muscle mass, as well as patellar tendon size and moment arm, of a World's Strongest Man and deadlift champion (WSM) in comparison with existing data on untrained individuals, power athletes (100-m-track sprinters), and long-term resistance-trained populations that we have assessed previously (10, 11, 13–15).\n\n### MATERIALS AND METHODS\n\n#### Participant\n\nThe WSM's achievements included one World's Strongest Man title (14 mo prior to measurement), five Britain's Strongest Man titles (the most recent 6 mo prior to measurement), twice being World Deadlift Champion and Deadlift World Record holder (500 kg; at the time of measurement), and second place at Europe's Strongest Man. Prior to agreeing to participate, the purpose of the research study and the testing procedures were explained to the participant along with the risks and benefits of taking part. The participant gave his written informed consent to participate in the study that was approved by the Loughborough University Ethical Advisory Committee (Ethics Number R18-P090). Included in the written consent was a statement providing permission for publication of the collected data and the likelihood that their identity may be evident based on their achievements and characteristics, despite anonymization.\n\n#### Training History\n\nThe WSM had been continuously involved in systematic, regular upper- and lower-body resistance training for 15 yr at the time of testing. 
In the 12 mo prior to testing, the participant's resistance training consisted of the following typical exercises: lower body: squats, deadlifts, leg press, and knee extension; and upper body: bench press, shoulder press, dumbbell/barbell rows, and lat pull-down. The proportion of the participant's training within the following repetition ranges over the last 12 mo was as follows: near maximum loads [1–5 repetition maximum (RM)]: 10%; heavy loads (6– 14 RM): 80%; and moderate loads (-15 RM): 10%. The participant reported only occasional (<1/week) use of advanced resistance training practices (i.e., complex training and accommodating resistance method) but frequently (>3/ week) executed training repetitions with the intention to move the load as fast as possible. The WSM's nutritional supplement consumption included protein, branched-chain amino acids, and electrolytes.\n\n#### Overview\n\nThe WSM reported for a single test session that involved the following assessments (listed in order): axial T1 weighted 3.0-T MRI scans from T12 to the lateral malleolus [to assess muscle size throughout the lower body (left and right sides)], axial and sagittal T1-weighted MRI scans of both knees [to assess patellar tendon cross-sectional area (CSA) and patellar tendon moment arm], maximum countermovement jumps (CMJ), and maximum isometric midthigh pulls (IMTPs). The muscle size, patellar tendon CSA, and patellar tendon moment arm of the WSM were compared with various populations measured within our laboratory, as indicated in Table 1, alongside participant descriptives (10, 11, 13–15). 
In addition, the IMTP and CMJ measures were compared with existing published literature (included studies are summarized in Supplemental Materials 1 and 2, alongside participant descriptives).\n\n#### MRI Measurement of Muscle Tendon Unit Morphology and Moment Arm\n\nThe participant reported for their MRI scan [3.0-T Discovery MR750W (70-cm-wide bore), GE Medical] having not completed any strenuous physical activity in -24 h and had received prior instruction to arrive in a relaxed state having eaten and drunk normally. The participant sat quietly for 15 min prior to their scan. The participant lay supine for the MRI scan of the lower-body musculature from T12 to the lateral malleolus. A body coil (GE Medical) allowed axial T1 weighted images (time of repetition/time to echo 600/8.144 ms, image matrix 512 512, field of view 500 500 mm, pixel size 0.9766 0.9766 mm, slice thickness 5 mm, and interslice gap 5 mm) to be acquired in five overlapping blocks. Images of both sides of the body were acquired within a single scan for blocks 1 (T12 to pelvis), 4 (knee joint space to midshank), and 5 (midshank to lateral malleolus). However, due to the size of the participant's thighs, it was necessary to scan each thigh individually for blocks 2 (pelvis to midthigh) and 3 (midthigh to knee joint space); this involved the radiographer repositioning the field of view between scanning the first and the second thigh but not physically moving the coil or the participant. Oil-filled capsules were secured to the surface of the participant's skin with Transpore tape at intervals along the length of the lower body prior to the scan and in an offline analysis used to verify the alignment of the blocks (Horos software, Version 3.36, https://horosproject.org/).\n\nThe offline analysis was of the following muscles/compartments (Fig. 
1): iliopsoas (psoas major and iliacus combined); sartorius; tensor fasciae latae; adductor magnus; gracilis; gluteus maximus; gluteus medius and minimus (combined, due to difficulty separating the two muscles); rectus femoris (RF); vastus lateralis (VL), medialis (VM), and intermedius (VI); semimembranosus (SM); semitendinosus (ST); biceps femoris long (BFlh) and short heads (BFsh); popliteus; lateral and medial gastrocnemius; soleus; and the anterior, lateral, and deep posterior compartments of the shank. The anterior shank compartment consisted of the", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed12.pdf" - }, - { - "text": "Figure 4. Quadriceps femoris (QF; A), vastus medialis (VM; B), vastus lateralis (VL; C), vastus intermedius (VI; D), and rectus femoris (RF; E) muscle volume of a World's Strongest Man and deadlift champion (WSM) compared with long-term resistance-trained (n ¼ 16, from the work by Maden-Wilkinson et al. (10)], elite sprint runners [n ¼ 5, from the work by Miller et al. (13)], subelite sprint runners [n ¼ 26, from the work by Miller et al. (13)], and untrained control populations [n ¼ 102, pooled population from the works by Miller et al. (13) (n ¼ 11), Balshaw et al. (11) (n ¼ 52), and Balshaw et al. (14) (pretest data n ¼ 39)].\n\nAlthough it was anticipated that the WSM would possess a larger total lower-body muscle volume/mass than untrained controls and other athletic/trained groups we have previously measured, the magnitude and pattern of the differences were unknown. The results indicated that the total volume of the measured muscles was almost twice that of average untrained participants and 32–63% larger than subelite and elite sprinters. Pronounced development of the antigravity muscles (i.e., hip extensors, knee extensors, and plantar flexors) was perhaps not that surprising given the WSM's background in heavy lifting events (including being a double deadlift world champion and record holder). 
However, the hip flexors appear less important in these tasks, possibly explaining their more modest size, which was inferior to that of three elite 100-m sprinters we have previously assessed. The WSM's plantar flexors were particularly large relative to untrained controls (þ 120%). This could be due to the plantar flexors being the smallest of the antigravity muscle groups that may experience very high mechanical stress and, thus, a pronounced adaptive stimulus during heavy lifting, carrying, and pulling tasks. Furthermore, the very heavy and, therefore, low-velocity nature of these tasks may limit the contribution of the stretch-shortening cycle and tendon recoil to the positive/concentric work done by the plantar flexors, potentially placing a higher demand on the contractile apparatus than for running and jumping tasks.\n\nConsidering individual muscles/compartments, the muscular development of the WSM was distinctly nonuniform. It is striking that the largest muscles relative to the untrained control population were the three \"guy ropes\" (sartorius, gracilis, and semitendinosus: þ 140–202%). These three muscles provide stability to the pelvis and femur by having origins at diverse points around the pelvis while sharing a common insertion onto the anteromedial tibia [via pes anserinus, the conjoined tendons of these three muscles (39)]. Large guy rope muscles likely enhance stabilization of the femur and pelvis and would be expected to be critical during heavy weight-bearing tasks. In contrast, the WSM's five smallest muscles (relative to untrained controls) consisted of two hip flexors (iliopsoas and RF) and two monoarticular knee flexors; actions that appear far less important for lifting, carrying, and pulling tasks.\n\nThe WSM's quadriceps volume and patellar tendon moment arm were both greater than that of untrained controls and indeed any individual we have previously measured. 
However, the magnitude of difference, relative to the untrained controls, was noticeably larger for quadriceps femoris volume (greater than or equal to twice as large) than for", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed12.pdf" - }, - { - "text": "# RESEARCH ARTICLE\n\n# Muscle and tendon morphology of a world strongman and deadlift champion\n\n# Thomas G. Balshaw,1 Garry J. Massey,1,2 Robert Miller,1,3,4 Emmet J. McDermott,1,5 Thomas M. Maden-Wilkinson,6 and Jonathan P. Folland1\n\n1 School of Sport, Exercise, and Health Sciences, Loughborough University, Loughborough, United Kingdom; 2 College of Life and Environmental Sciences, University of Exeter, Exeter, United Kingdom; 3 UK Athletics, Loughborough University, Loughborough, United Kingdom; 4 Department of Sport Science, Aspire Academy, Doha, Qatar; 5 Department of Physical Education and Sport Sciences, University of Limerick, Limerick, Ireland; and 6 Academy of Sport and Physical Activity, Faculty of Health and Wellbeing, Sheffield Hallam University, Sheffield, United Kingdom\n\n# Abstract\n\nThis study compared the muscle and tendon morphology of an extraordinarily strong individual, a World's Strongest Man and deadlift champion (WSM), with that of various other athletic, trained, and untrained populations. The WSM completed the following: 1) 3.0-T MRI scans, to determine the volume of 22 individual lower limb muscles, 5 functional muscle groups, patellar tendon (PT) cross-sectional area (CSA), and PT moment arm; and 2) countermovement jumps (CMJ) and isometric midthigh pull (IMTP) contractions. The WSM was compared with previously assessed groups from our laboratory (muscle and tendon) and the wider research literature (CMJ and IMTP). The WSM's CMJ peak power (9,866 W) and gross (9,171 N) and net (7,480 N) IMTP peak forces were higher than any previously published values. 
The WSM's overall measured leg muscle volume was approximately twice that of untrained controls (þ 96%) but with pronounced anatomical variability in the extent of muscular development. The plantar flexor group (þ 120%) and the guy rope muscles (sartorius, gracilis, and semitendinosus: þ 140% to þ 202%), which stabilize the pelvis and femur, demonstrated the largest differences relative to that of untrained controls. The WSM's pronounced quadriceps size (greater than or equal to twofold vs. untrained) was accompanied by modest PT moment arm differences and, notably, was not matched by an equivalent difference in PT CSA (þ 30%). These results provide novel insight into the musculotendinous characteristics of an extraordinarily strong individual, which may be toward the upper limit of human variation, such that the WSM's very pronounced lower limb muscularity also exhibited distinct anatomical variability and with muscle size largely uncoupled from tendon size.\n\nNEW & NOTEWORTHY Lower-body muscle size of an extraordinarily strong individual, a World's Strongest Man and deadlift champion (WSM), was approximately twice that of controls but was underpinned by pronounced anatomical variability in the extent of muscular development ( þ 23–202%): the plantar flexor group and guy rope muscles demonstrating the largest differences. The WSM's quadriceps size (more than or equal to twice that of controls) contrasted with modest differences in patella tendon moment arm ( þ 18%) and was uncoupled from patellar tendon size ( þ 30%).\n\nisometric force; magnetic resonance imaging; power; strength\n\n# INTRODUCTION\n\nFeats of strength have fascinated man since the early stages of human civilization, as shown by the archeological evidence of inscribed heavy stones at Olympia and Thera in Greece, dated to the 6th century BC, detailing the way they were lifted by Bybon and Eumastus, respectively (1). 
Over the centuries, many types of strength competitions have existed; some of which have been codified and endured within modern sporting competitions (e.g., weightlifting, powerlifting, and shot put). In addition, professional strongman competitions, such as the annually contested \"World's Strongest Man\" event, generate extensive global interest (2). Moreover, scientific understanding of muscular strength is important because of its role in athletic performance (3), injury prevention (4), and healthy aging (5). However, our knowledge of extreme human strength is limited.\n\nTo date, there is little scientific information on the characteristics of extremely strong humans in terms of laboratorybased tests of strength and power, particularly the size and distribution of their muscle mass, as well as tendon size and joint mechanics (moment arm). Kraemer et al. (6) examined the body composition of elite strongman competitors using dualenergy X-ray absorptiometry scanning and found that they had a body mass (153 ± 19 kg) and lean mass (118 ± 12 kg) approximately twice that of an average untrained healthy young man. Whole body skeletal muscle mass of athletes from strength- and power-based sports has also been estimated using ultrasound measurements at a limited number of anatomical locations (7, 8). However, neither ultrasound-derived\n\nCorrespondence: T. G. Balshaw (t.g.balshaw@lboro.ac.uk). Submitted 8 May 2024 / Revised 2 July 2024 / Accepted 16 July 2024", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed12.pdf" - }, - { - "text": "Figure 5. Overall hamstrings (HAMS; A), semimembranosus (SM; B), semitendinosus (ST; C), biceps femoris long head (BFlh; D), and biceps femoris short head (BFsh; E) muscle volume of a World's Strongest Man and deadlift champion (WSM) compared with long-term resistance trained [n ¼ 16, from the work by Maden-Wilkinson et al. (10)], elite sprint runners [n ¼ 5, from the work by Miller et al. 
(13)], subelite sprint runners [n = 26, from the work by Miller et al. (13)], and untrained control populations [n = 50, pooled population from the works by Miller et al. (13) (n = 11) and Balshaw et al. (14) (pretest data n = 39)].\n\npatellar tendon moment arm (+18%). Therefore, of these two key strength determinants, muscle size, rather than joint leverage, appeared to be the predominant factor responsible for the WSM's extraordinary strength. Indeed, when we previously compared the muscle morphology and joint mechanics of individuals with distinct maximum strength capacity (long-term resistance-trained individuals vs. untrained controls), muscle size was the primary factor separating the groups with much more subtle differences in moment arm (10). The extreme example of muscle size provided by the WSM's quadriceps femoris also gave the opportunity to investigate the scaling of tendon size to muscle size; extreme muscular size (greater than or equal to twice that for untrained controls) might be expected to be accompanied by comparable tendinous tissue size to effectively transmit high muscular forces to the skeleton. However, the WSM's patellar tendon CSA was only 30% larger than untrained controls and within the range of individuals we have previously measured (Fig. 6A). This observation supports the notion that tendon structure may be largely fixed by adulthood (40), with only slow/limited\n\nFigure 6. Patellar tendon mean cross-sectional area (A) and patellar tendon moment arm (B) of a World's Strongest Man and deadlift champion (WSM) compared with long-term resistance trained [n = 16, from the work by Massey et al. (15)] and untrained control populations [n = 39, from the work by Massey et al. (15)].", "page_start": 8, "page_end": 8, "source_file": "pubmed12.pdf" }, { "text": "changes in response to functional overload/resistance training. 
For example, we previously found patellar tendon CSA to show very subtle changes after 15 wk (45 training sessions) of heavy resistance training [+1.4% (41)] and no differences between long-term resistance-trained individuals and untrained controls (15).\n\n#### Limitations\n\nAlthough the current investigation provides a detailed assessment of an individual at/toward the upper limit of human strength performance, it is important to appreciate study limitations. First, the participant was not measured immediately before their World's Strongest Man championship success or other landmark performances, and it is entirely possible the functional and structural characteristics we assessed may have been even higher directly prior to peak performances. Despite using a wide-bore MRI scanner, due to the size of the WSM's shoulders and arms, it was not possible to scan their upper body. Thus, we were not able to investigate this aspect of the WSM's muscle morphology; although given that greater hypertrophy occurs in the upper body compared with the lower body (42), it is possible that the WSM's upper-body muscle size relative to untrained controls may have been even more pronounced than what we have documented for the lower body. In the current study to provide the most representative data on untrained control participants, the largest available untrained control populations were used for each category of measurements. Thus, different untrained control populations were used [e.g., comparison of quadricep and hamstring size (n = 102) vs. comparison of all the leg muscles (n = 11)], which led to some subtle discrepancies in the contrasts between these groups and the WSM [e.g., quadriceps femoris/knee extensors, +127% and +99% relative to our large pooled (n = 102) and smaller (n = 11) untrained control samples, respectively]. Importantly, however, this discrepancy does not appear to meaningfully affect the interpretation of the findings. 
There were subtle differences in the precise scanning and analysis approaches used with the reference populations featured in this study, including 1) magnetic field strength [1.5 T (10, 11, 15) vs. 3.0 T, WSM and (13, 14)]; 2) the interslice distance used to quantify quadriceps femoris and hamstrings muscle volume [1.5 cm (10, 11, 14) vs. 2.0 cm, WSM and (13)]; 3) the calculation of muscle volume [area under the cubic spline ACSA-muscle length curve: (10, 11, 14) vs. the equation detailed earlier: WSM and (13)]; and 4) the use of unilateral MRI measures derived from one limb (10, 11, 14, 15) or collapsed across two limbs [WSM and (13)]. However, it seems likely that these subtle differences would have had at most a very minor effect on the findings. Finally, it is also important to highlight that the differences documented between the WSM and comparative populations for the various measures included in the current study cannot be assumed to be anything other than a combination of both innate (genetic) and environmental (training and nutrition) factors.\n\n#### Conclusions\n\nIn conclusion, this novel investigation documented the muscle and tendon morphology and whole body strength and power characteristics of an exceptionally strong individual, relative to comparative athletic, trained, and untrained populations. Overall leg muscle volume of the WSM was approximately twice that of untrained controls but with pronounced anatomical variability in the extent of muscular development. The plantar flexor muscle group and the guy rope muscles (sartorius, gracilis, and semitendinosus: +140 to +202%), which stabilize the pelvis and femur, demonstrated the largest differences. 
The pronounced quadriceps femoris size of the WSM (greater than or equal to twice that of untrained) was accompanied by a more modest difference in patella tendon moment arm (+18%) and was not matched by a proportional difference in tendon size (+30%).\n\n# DATA AVAILABILITY\n\nData will be made available upon reasonable request.\n\n### SUPPLEMENTAL MATERIAL\n\nSupplemental Material: https://doi.org/10.6084/m9.figshare.26152939.\n\n### ACKNOWLEDGMENTS\n\nThe authors thank radiographer Julie Thompson.\n\n# DISCLOSURES\n\nNo conflicts of interest, financial or otherwise, are declared by the authors.\n\n## AUTHOR CONTRIBUTIONS\n\nT.G.B. and J.P.F. conceived and designed research; T.G.B., G.J.M., R.M., E.J.M., and J.P.F. performed experiments; T.G.B., G.J.M., R.M., E.J.M., and T.M.M.-W. analyzed data; T.G.B. and J.P.F. interpreted results of experiments; T.G.B. prepared figures; T.G.B. and J.P.F. drafted manuscript; T.G.B. and J.P.F. edited and revised manuscript; T.G.B., G.J.M., R.M., E.J.M., T.M.M.-W., and J.P.F. approved final version of manuscript.\n\n### REFERENCES\n\n- 1. Crowther NB. Weightlifting in antiquity: achievement and training. Greece Rome 24: 111–120, 1977. doi:10.1017/s0017383500018416.\n- 2. Dixon E. How Wave.tv is making the World's Strongest Man think bigger with its digital plans (Online). SportsPro, 2020. https://www.sportspromedia.com/insights/analysis/worlds-strongest-man-wavetvthe-pump-snapchat-brian-verne-interview/ [Apr 6, 2024].\n- 3. Suchomel TJ, Nimphius S, Stone MH. The importance of muscular strength in athletic performance. Sports Med 46: 1419–1449, 2016. doi:10.1007/s40279-016-0486-0.\n- 4. Opar DA, Williams MD, Timmins RG, Hickey J, Duhig SJ, Shield AJ. Eccentric hamstring strength and hamstring injury risk in Australian footballers. Med Sci Sports Exerc 47: 857–865, 2015. doi:10.1249/mss.0000000000000465.\n- 5. McLeod M, Breen L, Hamilton DL, Philp A. 
Live strong and prosper: the importance of skeletal muscle strength for healthy ageing. Biogerontology 17: 497–510, 2016. doi:10.1007/s10522-015-9631-7.\n- 6. Kraemer WJ, Caldwell LK, Post EM, DuPont WH, Martini ER, Ratamess NA, Szivak TK, Shurley JP, Beeler MK, Volek JS, Maresh CM, Todd JS, Walrod BJ, Hyde PN, Fairman C, Best TM. Body composition in elite strongman competitors. J Strength Cond Res 34: 3326–3330, 2020. doi:10.1519/jsc.0000000000003763.\n- 7. Abe T, Buckner SL, Dankel SJ, Jessee MB, Mattocks KT, Mouser JG, Loenneke JP. Skeletal muscle mass in human athletes: what is the upper limit? Am J Hum Biol 30: e23102, 2018. doi:10.1002/ajhb.23102.", "page_start": 9, "page_end": 9, "source_file": "pubmed12.pdf" }, { "text": "Table 2. Muscle volume of all muscles, 5 functional muscle groups, and 22 individual muscles/compartments of a World's Strongest Man and deadlift champion and comparative elite sprinters, subelite sprinters, and untrained control participants\n\n| | | | Muscle Volume, cm3 | |\n| --- | --- | --- | --- | --- |\n| Muscle Group/Muscle or Compartment | WSM | Elite Sprinters (n = 5) | Subelite Sprinters (n = 26) | Untrained (n = 11) |\n| All muscles | 14,922 | 11,323 ± 1,328 | 9,164 ± 1,207 | 7,628 ± 1,548 |\n| Hip flexors | 1,704 | 1,620 ± 200 | 1,314 ± 216 | 1,031 ± 151 |\n| Hip extensors | 4,724 | 4,002 ± 489 | 3,029 ± 422 | 2,257 ± 220 |\n| Knee flexors | 3,060 | 2,304 ± 178 | 1,859 ± 301 | 1,460 ± 196 |\n| Knee extensors | 4,386 | 3,218 ± 400 | 2,636 ± 401 | 2,202 ± 315 |\n| Plantar flexors | 1,888 | 1,112 ± 181 | 943 ± 156 | 860 ± 172 |\n| Iliopsoas | 681 | 702 ± 97 | 618 ± 101 | 514 ± 75 |\n| Sartorius | 429 | 306 ± 46 | 209 ± 50 | 142 ± 25 |\n| Tensor fasciae latae | 142 | 135 ± 41 | 86 ± 25 | 73 ± 24 |\n| Adductor magnus | 1,334 | 1,056 ± 83 | 828 ± 128 | 624 ± 81 |\n| Gracilis | 235 | 180 ± 37 | 142 ± 37 | 98 ± 23 |\n| Gluteus maximus | 1,980 | 1,797 ± 376 | 1,257 ± 197 | 931 ± 108 |\n| Gluteus medius and minimus | 1,172 | 
626 ± 129 | 575 ± 97 | 583 ± 76 |\n| Rectus femoris | 453 | 476 ± 45 | 401 ± 78 | 303 ± 55 |\n| Vastus lateralis | 1,508 | 1,132 ± 180 | 925 ± 156 | 743 ± 98 |\n| Vastus intermedius | 1,336 | 962 ± 145 | 789 ± 140 | 680 ± 115 |\n| Vastus medialis | 1,088 | 649 ± 97 | 521 ± 79 | 476 ± 111 |\n| Semimembranosus | 392 | 359 ± 60 | 327 ± 59 | 262 ± 18 |\n| Semitendinosus | 563 | 449 ± 70 | 350 ± 79 | 219 ± 39 |\n| Biceps femoris long head | 454 | 340 ± 31 | 267 ± 47 | 221 ± 42 |\n| Biceps femoris short head | 135 | 167 ± 26 | 131 ± 34 | 110 ± 28 |\n| Popliteus | 27 | 23 ± 5 | 17 ± 5 | 19 ± 6 |\n| Lateral gastrocnemius | 310 | 202 ± 34 | 170 ± 37 | 156 ± 41 |\n| Medial gastrocnemius | 515 | 300 ± 38 | 262 ± 58 | 251 ± 52 |\n| Soleus | 1,063 | 610 ± 137 | 510 ± 76 | 453 ± 95 |\n| Anterior compartment | 445 | 302 ± 59 | 273 ± 47 | 291 ± 47 |\n| Lateral compartment | 253 | 147 ± 32 | 161 ± 42 | 153 ± 35 |\n| Posterior compartment | 406 | 401 ± 76 | 345 ± 71 | 326 ± 93 |\n\nIndividual measurements are the average of both sides/legs (i.e., unilateral). All muscles are the sum of muscle volumes from all the individual muscles/compartments listed. Muscle volume data are presented as group means ± SD, except for the WSM (n = 1). Untrained control participants from Miller et al. (13).\n\nassessed (Fig. 5B). BFsh volume (135 cm3 ) of the WSM was a modest 26% greater than that of our pool of untrained control participants (107 ± 31 cm3 ; Fig. 5E) but smaller than that of both long-term resistance-trained individuals (−1%; 136 ± 27 cm3 ) and elite sprinters (−19%; 167 ± 26 cm3 ; Fig. 5E).\n\n#### Patella Tendon Cross-Sectional Area and Moment Arm\n\nThe patellar tendon mean CSA of the WSM (133.8 mm2 ) was larger than that of average untrained (+30%; 103.2 ± 12.5 mm2 ) and long-term resistance-trained individuals (+27%; 105.4 ± 13.0 mm2 ; Fig. 6A) but was smaller than the largest individual we have measured from these groups (149.5 mm2 ). 
The WSM's patellar tendon moment arm (51.5 mm) was also larger than that of average untrained (+18%; 43.8 ± 2.7 mm) or long-term resistance-trained groups (+12%; 45.8 ± 2.5 mm; Fig. 6B) as well as being 3% greater than the highest individual moment arm we have previously assessed within these groups (49.9 mm).\n\n### DISCUSSION\n\nThis study is the first to document the lower-body muscle and tendon morphology of a World's Strongest Man and deadlift champion (i.e., an exceptionally strong individual), and these are presented alongside functional whole body assessments, which exceeded the highest IMTP force (gross and net) and CMJ power values previously reported by 54%, 100%, and 164%, respectively. The WSM had overall lower-body muscularity approximately twice that of untrained controls (+96%) and 32% greater than that of elite 100-m sprinters. However, there was substantial anatomical variability in the magnitude of the differences, ranging from the plantar flexors (+120% vs. untrained) to the hip flexors (+65% vs. untrained). Similarly, some specific muscles, such as the guy rope muscles that stabilize the femur and pelvis, were 2.5–3.0 times the volume of untrained individuals (gracilis +140%, semitendinosus +157%, and sartorius +202%) but others displayed more marginal differences (BFsh +23%, iliopsoas +32% vs. untrained). Considering the knee extensors, the WSM had both quadriceps femoris volume greater than or equal to twofold that of untrained controls and a greater patella tendon moment arm than we have previously measured (+18% vs. untrained), which would be expected to combine to facilitate extraordinary strength. Furthermore, despite the WSM's extremely large quadriceps femoris, their patellar tendon CSA was only 30% greater than that of untrained controls and not outside the range of tendons we have previously assessed. 
The results of this study provide novel insights into the muscle and tendon characteristics, as well as the strength and power capabilities, of an extraordinarily strong individual that may be toward the upper limit of human variation in these characteristics.", "page_start": 6, "page_end": 6, "source_file": "pubmed12.pdf" }, { "text": "- 8. Abe T, Buckner SL, Mattocks KT, Jessee MB, Dankel SJ, Mouser JG, Bell ZW, Loenneke JP. Skeletal muscle mass and architecture of the world's strongest raw powerlifter: a case study. Asian J Sports Med 9: e61763, 2018. doi:10.5812/asjsm.61763.\n- 9. Powell PL, Roy RR, Kanim P, Bello MA, Edgerton VR. Predictability of skeletal muscle tension from architectural determinations in guinea pig hindlimbs. J Appl Physiol Respir Environ Exerc Physiol 57: 1715–1721, 1984. doi:10.1152/jappl.1984.57.6.1715.\n- 10. Maden-Wilkinson TM, Balshaw TG, Massey G, Folland JP. What makes long-term resistance-trained individuals so strong? A comparison of skeletal muscle morphology, architecture, and joint mechanics. J Appl Physiol (1985) 128: 1000–1011, 2019. doi:10.1152/japplphysiol.00224.2019.\n- 11. Balshaw TG, Maden-Wilkinson TM, Massey GJ, Folland JP. The human muscle size and strength relationship: effects of architecture, muscle force, and measurement location. Med Sci Sports Exerc 53: 2140–2151, 2021. doi:10.1249/mss.0000000000002691.\n- 12. Baxter JR, Piazza SJ. Plantar flexor moment arm and muscle volume predict torque-generating capacity in young men. J Appl Physiol (1985) 116: 538–544, 2014. doi:10.1152/japplphysiol.01140.2013.\n- 13. Miller R, Balshaw TG, Massey GJ, Maeo S, Lanza MB, Johnston M, Allen SJ, Folland JP. The muscle morphology of elite sprint running. Med Sci Sports Exerc 53: 804–815, 2021. doi:10.1249/mss.0000000000002522.\n- 14. Balshaw TG, Funnell MP, McDermott E, Maden-Wilkinson TM, Abela S, Quteishat B, Edsey M, James LJ, Folland JP. 
The effect of specific bioactive collagen peptides on function and muscle remodeling during human resistance training. Acta Physiol (Oxf) 237: e13903, 2023 [Erratum in Acta Physiol (Oxf) 237:e13952, 2023]. doi:10.1111/apha.13903.\n- 15. Massey GJ, Balshaw TG, Maden-Wilkinson TM, Folland JP. Tendinous tissue properties after short- and long-term functional overload: differences between controls, 12 weeks and 4 years of resistance training. Acta Physiol (Oxf) 222: e13019, 2018. doi:10.1111/apha.13019.\n- 16. Sugisaki N, Kobayashi K, Tsuchie H, Kanehisa H. Associations between individual lower-limb muscle volumes and 100-m sprint time in male sprinters. Int J Sports Physiol Perform 13: 214–219, 2018. doi:10.1123/ijspp.2016-0703.\n- 17. Seynnes OR, Erskine RM, Maganaris CN, Longo S, Simoneau EM, Grosset JF, Narici MV. Training-induced changes in structural and mechanical properties of the patellar tendon are related to muscle hypertrophy but not to strength gains. J Appl Physiol (1985) 107: 523–530, 2009. doi:10.1152/japplphysiol.00213.2009.\n- 18. Beckham GK, Sato K, Santana HAP, Mizuguchi S, Haff GG, Stone MH. Effect of body position on force production during the isometric midthigh pull. J Strength Cond Res 32: 48–56, 2018. doi:10.1519/jsc.0000000000001968.\n- 19. Travis SK, Goodin JR, Beckham GK, Bazyler CD. Identifying a test to monitor weightlifting performance in competitive male and female weightlifters. Sports 6: 46, 2018. doi:10.3390/sports6020046.\n- 20. Beckham G, Mizuguchi S, Carter C, Sato K, Ramsey M, Lamont H, Hornsby G, Haff G, Stone M. Relationships of isometric mid-thigh pull variables to weightlifting performance. J Sports Med Phys Fit 53: 573–581, 2013.\n- 21. Hornsby WG, Gentles JA, MacDonald CJ, Mizuguchi S, Ramsey MW, Stone MH. Maximum strength, rate of force development, jump height, and peak power alterations in weightlifters across five months of training. Sports 5: 78, 2017. doi:10.3390/sports5040078.\n- 22. 
Beckham GK, Lamont HS, Sato K, Ramsey MW, Haff GG, Stone MH. Isometric strength of powerlifters in key positions of the conventional deadlift. J Trainology 1: 32–35, 2012. doi:10.17338/trainology.1.2_32.\n- 23. Stone MH, Sands WA, Pierce KC, Carlock J, Cardinale M, Newton RU. Relationship of maximum strength to weightlifting performance. Med Sci Sports Exerc 37: 1037–1043, 2005. doi:10.1249/01.mss.0000171621.45134.10.\n- 24. Beattie K, Carson BP, Lyons M, Kenny IC. The relationship between maximal strength and reactive strength. Int J Sports Physiol Perform 12: 548–553, 2017. doi:10.1123/ijspp.2016-0216.\n- 25. Suarez DG, Carroll KM, Slaton JA, Rochau KG, Davis MW, Stone MH. Utility of a shortened isometric midthigh pull protocol for assessing rapid force production in athletes. J Strength Cond Res 36: 1819–1825, 2022. doi:10.1519/jsc.0000000000003774.\n- 26. Suchomel TJ, Nimphius S, Stone MH. Scaling isometric mid-thigh pull maximum strength in division I athletes: are we meeting the assumptions? Sports Biomech 19: 532–546, 2020. doi:10.1080/14763141.2018.1498910.\n- 27. Cunningham DJ, Shearer DA, Drawer S, Pollard B, Cook CJ, Bennett M, Russell M, Kilduff LP. Relationships between physical qualities and key performance indicators during match-play in senior international rugby union players. PLoS One 13: e0202811, 2018. doi:10.1371/journal.pone.0202811.\n- 28. Doyle TLA, Fain AC, Wills JA, Cooper D, Toonen K, Kamphius B. Measures of lower body strength associated with injuries in Australian special forces selection candidates. J Appl Biomech 38: 255–262, 2022. doi:10.1123/jab.2021-0134.\n- 29. Kawamori N, Rossi SJ, Justice BD, Haff EE, Pistilli EE, O'Bryant HS, Stone MH, Haff GG. Peak force and rate of force development during isometric and dynamic mid-thigh clean pulls performed at various intensities. J Strength Cond Res 20: 483–491, 2006. doi:10.1519/18025.1.\n- 30. 
Wang R, Hoffman JR, Tanigawa S, Miramonti AA, Monica MB, Beyer KS, Church DD, Fukuda DH, Stout JR. Isometric mid-thigh pull correlates with strength, sprint, and agility performance in collegiate rugby union players. J Strength Cond Res 30: 3051–3056, 2016. doi:10.1519/jsc.0000000000001416.\n- 31. Haff GG, Stone M, O'Bryant HS, Harman E, Dinan C, Johnson R, Han KH. Force-time dependent characteristics of dynamic and isometric muscle actions. J Strength Cond Res 11: 269–272, 1997. doi:10.1519/1533-4287(1997)011<0269:FTDCOD>2.3.CO;2.\n- 32. Mercer RAJ, Russell JL, McGuigan LC, Coutts AJ, Strack DS, McLean BD. Finding the signal in the noise—interday reliability and seasonal sensitivity of 84 countermovement jump variables in professional basketball players. J Strength Cond Res 37: 394–402, 2023. doi:10.1519/jsc.0000000000004182.\n- 33. Cabarkapa D, Philipp N, Cabarkapa D, Eserhaut D, Fry A. Comparison of force-time metrics between countermovement vertical jump with and without an arm swing in professional male basketball players. Int J Strength Cond 3: 1–7, 2023. doi:10.47206/ijsc.v3i1.197.\n- 34. Tillin NA, Pain MT, Folland J. Explosive force production during isometric squats correlates with athletic performance in rugby union players. J Sports Sci 31: 66–76, 2013. doi:10.1080/02640414.2012.720704.\n- 35. Morris CG, Weber JA, Netto KJ. Relationship between mechanical effectiveness in sprint running and force-velocity characteristics of a countermovement jump in Australian rules football athletes. J Strength Cond Res 36: e59–e65, 2022. doi:10.1519/jsc.0000000000003583.\n- 36. Johnson DL, Bahamonde R. Power output estimate in university athletes. J Strength Cond Res 10: 161–166, 1996. doi:10.1519/1533-4287(1996)010<0161:poeiua>2.3.co;2.\n- 37. Mkaouer B, Jemni M, Amara S, Chaaben H, Tabka Z. Kinematic and kinetic analysis of counter movement jump versus two different types of standing back somersault. Sci Gymnast J 4: 61–71, 2012. 
https://www.fsp.uni-lj.si/en/research/scientific-magazines/scienceof-gymnastics/previous-issues/2012102209114244/.\n- 38. Walsh MS, Böhm H, Butterfield MM, Santhosam J. Gender bias in the effects of arms and countermovement on jumping performance. J Strength Cond Res 21: 362–366, 2007. doi:10.1519/00124278-200705000-00012.\n- 39. Vadgaonkar R, Prameela MD, Kumar CG, Blossom V, Tonse M, Murlimanju BV, Pai MM, Prabhu LV. Dimensions of pes anserinus of the lower extremity, an anatomical study with its surgical implications. Anat Cell Biol 54: 178–183, 2021. doi:10.5115/acb.20.275.\n- 40. Heinemeier KM, Schjerling P, Heinemeier J, Magnusson SP, Kjaer M. Lack of tissue renewal in human adult Achilles tendon is revealed by nuclear bomb 14C. FASEB J 27: 2074–2079, 2013. doi:10.1096/fj.12-225599.\n- 41. Balshaw TG, Funnell MP, McDermott EJ, Maden-Wilkinson TM, Massey GJ, Abela S, Quteishat B, Edsey M, James LJ, Folland JP. The effect of specific bioactive collagen peptides on tendon remodeling during 15 wk of lower body resistance training. Med Sci Sports Exerc 55: 2083–2095, 2023. doi:10.1249/mss.0000000000003242.\n- 42. Welle S, Totterman S, Thornton C. Effect of age on muscle hypertrophy induced by resistance training. J Gerontol A Biol Sci Med Sci 51: M270–M275, 1996. doi:10.1093/gerona/51a.6.m270.", "page_start": 10, "page_end": 10, "source_file": "pubmed12.pdf" }, { "text": "April 2024", "page_start": 0, "page_end": 0, "source_file": "creative_common_ai.pdf" }, { "text": "Figure 1. Example axial MRI images from the World's Strongest Man and deadlift champion (WSM; A–C) and an untrained control participant (D–F) from the hip (A and D), thigh (B and E), and lower leg (C and F). 
Image location relative to femur and shank length was matched between the WSM and the untrained control as follows: hip image is at approximately midfemoral head, thigh image is at 52% of femur length (0% is distal end of femur, 100% is greater trochanter), and lower leg image is at 70% of shank length (0% is lateral malleolus, 100% is proximal end of tibia). The untrained control participant displayed was from the work by Miller et al. (13) and had a total measured muscle volume of all measured muscles that was 5.1% smaller than the mean of the untrained group within that study.\n\nadjustment to different heights. A bar height producing a knee joint angle of 145° (measured by a manual goniometer) was selected, and the participant was instructed to keep his torso upright while completing the IMTP efforts. Two calibrated 10-kN-capacity force platforms (model 9286B, Kistler Instruments, Ltd., London, UK), one underneath each foot, were placed on top of the isometric rig's base plate, and vertical force signals from the eight individual load cells across the two force platforms were outputted (External Control Unit model 5233 A, Kistler Instruments, Ltd.) and sampled at 2,000 Hz using an external analog-to-digital converter (Micro 1401; CED, Cambridge, UK) and recorded with Spike 2 computer software (CED, Cambridge, UK).\n\nFollowing a warm-up consisting of a series of incremental warm-up contractions of 5 s duration ranging from 50% to 90% of maximum perceived effort, two maximum IMTP efforts of 3–5 s duration were performed under the instruction to \"pull as hard as possible.\" Six minutes separated the maximum efforts, based on a self-selected recovery period. Wrist wraps were worn to remove the influence of grip strength from the assessment. 
Real-time overall feedback from the force platforms (the sum of the force signals from", "page_start": 3, "page_end": 3, "source_file": "pubmed12.pdf" }, { "text": "comparative populations drawn from the existing literature can be found in Supplemental Materials 1 (gross IMTP peak force and net IMTP peak force) and 2 (CMJ peak power and height).\n\n#### Isometric Midthigh Pull and Countermovement Jump\n\nGross (including body weight) and net (above body weight) IMTP peak forces of the WSM were 9,171 N and 7,480 N, respectively. The WSM's gross IMTP peak force was 54% greater than the highest comparable group mean we located (subelite weightlifters: 5,942 ± 844 N (20); Fig. 2A). The WSM's net IMTP peak force was 100% greater than the highest comparable group mean value in the literature (collegiate soccer athletes: 3,740 ± 692 N (26); Fig. 2B).\n\nThe WSM's CMJ peak power and jump height were 9,866 W and 53.3 cm, respectively. The peak CMJ power of the WSM was >2.5-fold (164%) that of the mean of an untrained control group previously measured in our laboratory (3,735 ± 760 W; unpublished) and 51% greater than the highest comparable group mean value we located in the literature (professional basketball players: 6,518 ± 923 W (32); Fig. 2C). Not surprisingly, given the WSM's high body mass, his jump height was less exceptional, while still being 20% greater than that of a group of untrained control participants previously measured in our laboratory (44.3 ± 9.2 cm; unpublished). However, his jump height was 25% lower than the highest group mean CMJ height we are aware of in the published literature (elite international gymnasts: 71.3 ± 4.5 cm (37); Fig. 2D).\n\n#### Leg Muscle Volumes\n\nThe total unilateral muscle volume of the 22 measured muscles/compartments of WSM (14,922 cm3 ) was nearly twice that of a relatively modest (n = 11) sample of untrained controls (7,628 ± 1,548 cm3 ; +96%; Fig. 
3), while being 63% greater than subelite (9,164 ± 1,207 cm3 ) and +32% greater than elite 100-m sprinters (11,323 ± 1,328 cm3 ; Table 2). The muscle group differences were largest for the plantar flexors (+120% vs. untrained; +100% vs. subelite sprinters; +70% vs. elite sprinters) and smallest for the hip flexors (+65% vs. untrained; +30% vs. subelite sprinters; +5% vs. elite sprinters). The WSM had the highest values of any individual we have observed for four out of five muscle groups, but not the hip flexors, which were inferior to three of the elite 100-m sprinters (n = 5).\n\nCompared with untrained control participants (n = 11), all 22 of the WSM's individual muscles/compartments were larger than untrained controls (Table 2 and Fig. 3). However, the differences in muscle volume were extremely variable, with the biggest differences being for the \"guy ropes,\" which were 2.5–3.0 times that of untrained controls (+140% gracilis; +157% ST; +202% sartorius), compared with more modest differences such as 23% (BFsh) and 32% (iliopsoas) greater.\n\n#### Quadriceps Femoris and Hamstring Size\n\nOverall quadriceps femoris volume of the WSM (4,386 cm3 ) was 127% greater than a large, pooled population of untrained controls (1,932 ± 336; n = 102), 66% greater than subelite sprinters (2,636 ± 401 cm3 ), 53% greater than long-term resistance-trained individuals (2,876 ± 311 cm3 ), and 36% greater than elite\n\nFigure 3. Percentage differences in muscle volumes of all muscles, 5 functional muscle groups, and 23 individual muscles/compartments between the World's Strongest Man and deadlift champion (WSM; n = 1) and untrained control participants (n = 11) from the work by Miller et al. (13). A positive value indicates greater muscle volume of WSM relative to the group mean of the untrained controls. 
The functional muscle groups and individual muscles are ordered according to the magnitude of the percentage differences for absolute muscle volume.\n\nsprinters (3,218 ± 400 cm3 ; Fig. 4A). Moreover, the WSM's quadriceps femoris was 18% larger than the most muscular individual we have previously assessed (elite sprinter: 3,716 cm3 ). The volumes of the individual vasti muscles of the WSM (VL: 1,508 cm3 ; VI: 1,336 cm3 ; VM: 1,088 cm3 ) were 130–138% larger than untrained controls (VL: 633 ± 117 cm3 ; VI: 581 ± 120 cm3 ; VM: 461 ± 89 cm3 ) and also greater than any trained/athletic individual we have previously assessed (Fig. 4, B–D). However, the WSM's RF (453 cm3 ) was not quite so large, being 76% greater than untrained controls (257 ± 57 cm3 ) but smaller than the average elite sprinter (−5%; Fig. 4E), 13% greater than subelite sprinters, and 21% greater than long-term resistance-trained individuals.\n\nOverall hamstring volume of the WSM (1,545 cm3 ) was 109% greater than a large pooled population of untrained controls (739 ± 142 cm3 ; n = 50), 44% greater than subelite sprinters (1,075 ± 178 cm3 ), 53% greater than long-term resistance-trained individuals (1,011 ± 142 cm3 ), and 17% greater than elite sprinters (1,315 ± 130 cm3 ; Fig. 5A). The WSM's hamstring volume was also marginally larger (+3%) than the most muscular individual we have previously assessed (subelite sprinter, 1,495 cm3 ). The ST (563 cm3 ) and BFlh (454 cm3 ) volumes of the WSM were 132–182% larger than that of the pooled population of untrained controls (ST: 200 ± 48 cm3 ; BFlh: 196 ± 47 cm3 ; Fig. 5, C and D) and greater than the mean of any trained/athletic group we have previously assessed (Fig. 5, C and D). 
SM (392 cm3 ) volume of the WSM was 66% greater than untrained controls (SM 236 ± 46 cm3 ) and greater than the mean for trained/athletic groups we have previously", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed12.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed12.pdf", - "query": "What are the nutritionnal added components to the word's strongest man regime ?", - "target_page": 2, - "target_passage": "The WSM’s nutritional supplement consumption included protein, branched-chain amino acids, and electrolytes", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "predictions of skeletal muscle mass nor dual-energy X-ray absorptiometry provides detailed information on the size of specific individual muscles. Given the known importance of muscle size as a determinant of muscular strength (9–11), pronounced muscle size seems likely to be critical to extreme human strength; however, the specific muscle size of extremely strong individuals remains unknown. 
Similarly, a large moment arm (e.g., of the patella tendon at the knee joint) could contribute to the expression of high muscular strength (10, 12), and a large tendon may mitigate the mechanical stress it experiences with very high muscular loads, and therefore, these characteristics may also be expected in individuals selected for exceptional strength.\n\nIn this paper, we present the findings from a unique opportunity to examine the laboratory function, muscle size, and distribution of muscle mass, as well as patellar tendon size and moment arm, of a World's Strongest Man and deadlift champion (WSM) in comparison with existing data on untrained individuals, power athletes (100-m-track sprinters), and long-term resistance-trained populations that we have assessed previously (10, 11, 13–15).\n\n### MATERIALS AND METHODS\n\n#### Participant\n\nThe WSM's achievements included one World's Strongest Man title (14 mo prior to measurement), five Britain's Strongest Man titles (the most recent 6 mo prior to measurement), twice being World Deadlift Champion and Deadlift World Record holder (500 kg; at the time of measurement), and second place at Europe's Strongest Man. Prior to agreeing to participate, the purpose of the research study and the testing procedures were explained to the participant along with the risks and benefits of taking part. The participant gave his written informed consent to participate in the study that was approved by the Loughborough University Ethical Advisory Committee (Ethics Number R18-P090). Included in the written consent was a statement providing permission for publication of the collected data and the likelihood that their identity may be evident based on their achievements and characteristics, despite anonymization.\n\n#### Training History\n\nThe WSM had been continuously involved in systematic, regular upper- and lower-body resistance training for 15 yr at the time of testing. 
In the 12 mo prior to testing, the participant's resistance training consisted of the following typical exercises: lower body: squats, deadlifts, leg press, and knee extension; and upper body: bench press, shoulder press, dumbbell/barbell rows, and lat pull-down. The proportion of the participant's training within the following repetition ranges over the last 12 mo was as follows: near maximum loads [1–5 repetition maximum (RM)]: 10%; heavy loads (6– 14 RM): 80%; and moderate loads (-15 RM): 10%. The participant reported only occasional (<1/week) use of advanced resistance training practices (i.e., complex training and accommodating resistance method) but frequently (>3/ week) executed training repetitions with the intention to move the load as fast as possible. The WSM's nutritional supplement consumption included protein, branched-chain amino acids, and electrolytes.\n\n#### Overview\n\nThe WSM reported for a single test session that involved the following assessments (listed in order): axial T1 weighted 3.0-T MRI scans from T12 to the lateral malleolus [to assess muscle size throughout the lower body (left and right sides)], axial and sagittal T1-weighted MRI scans of both knees [to assess patellar tendon cross-sectional area (CSA) and patellar tendon moment arm], maximum countermovement jumps (CMJ), and maximum isometric midthigh pulls (IMTPs). The muscle size, patellar tendon CSA, and patellar tendon moment arm of the WSM were compared with various populations measured within our laboratory, as indicated in Table 1, alongside participant descriptives (10, 11, 13–15). 
In addition, the IMTP and CMJ measures were compared with existing published literature (included studies are summarized in Supplemental Materials 1 and 2, alongside participant descriptives).\n\n#### MRI Measurement of Muscle Tendon Unit Morphology and Moment Arm\n\nThe participant reported for their MRI scan [3.0-T Discovery MR750W (70-cm-wide bore), GE Medical] having not completed any strenuous physical activity in -24 h and had received prior instruction to arrive in a relaxed state having eaten and drunk normally. The participant sat quietly for 15 min prior to their scan. The participant lay supine for the MRI scan of the lower-body musculature from T12 to the lateral malleolus. A body coil (GE Medical) allowed axial T1 weighted images (time of repetition/time to echo 600/8.144 ms, image matrix 512 512, field of view 500 500 mm, pixel size 0.9766 0.9766 mm, slice thickness 5 mm, and interslice gap 5 mm) to be acquired in five overlapping blocks. Images of both sides of the body were acquired within a single scan for blocks 1 (T12 to pelvis), 4 (knee joint space to midshank), and 5 (midshank to lateral malleolus). However, due to the size of the participant's thighs, it was necessary to scan each thigh individually for blocks 2 (pelvis to midthigh) and 3 (midthigh to knee joint space); this involved the radiographer repositioning the field of view between scanning the first and the second thigh but not physically moving the coil or the participant. Oil-filled capsules were secured to the surface of the participant's skin with Transpore tape at intervals along the length of the lower body prior to the scan and in an offline analysis used to verify the alignment of the blocks (Horos software, Version 3.36, https://horosproject.org/).\n\nThe offline analysis was of the following muscles/compartments (Fig. 
1): iliopsoas (psoas major and iliacus combined); sartorius; tensor fasciae latae; adductor magnus; gracilis; gluteus maximus; gluteus medius and minimus (combined, due to difficulty separating the two muscles); rectus femoris (RF); vastus lateralis (VL), medialis (VM), and intermedius (VI); semimembranosus (SM); semitendinosus (ST); biceps femoris long (BFlh) and short heads (BFsh); popliteus; lateral and medial gastrocnemius; soleus; and the anterior, lateral, and deep posterior compartments of the shank. The anterior shank compartment consisted of the", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed12.pdf" - }, - { - "text": "#### **Table 1.** ClimPACT weather extremes indices.\n\n| ID | definition | units | sector of relevance |\n| --- | --- | --- | --- |\n| TXx | annual maximum daily maximum temperature | °C | health, agriculture and food security |\n| | | | |\n| TX90p | percentage of days above the 90th percentile | % | health, agriculture and food security, |\n| of daily maximum temperature in the | | | water resources and hydrology |\n| 1981–2010 average | | | |\n| | | | |\n| CDD | maximum number of consecutive days with | days | health, agriculture and food security, |\n| precipitation less than 1 mm | | | water resources and hydrology |\n| | | | |\n| RX5day | maximum consecutive 5 day precipitation | mm | health, agriculture and food security, |\n| water resources and hydrology | | | |\n| | | | |\n\nmembers at any given date. Since specific levels of global warming such as 1.5°C or 2°C were reached at different times in the different ensemble members, according to the SST forcings used, any given level of global warming could be associated with different radiative forcings in different ensemble members. In any given ensemble member at any specific level of global warming, the CO2 concentration and SSTs were the same as in the driving CMIP5 model at that GWL. 
Land cover was fixed in this simulation—there was no dynamic vegetation nor any time-dependent anthropogenic land use change.\n\nSome comparison of the higher-resolution atmospheric simulations with the original CMIP5 simulations, is provided by Wyser *et al.* [20].\n\n### (b) Temperature and precipitation extremes: the ClimPACT indices\n\nTo quantify changes in weather extremes projected in our climate simulations, we calculated a number of indices designed to be relevant to sector-specific impacts using an established methodology, ClimPACT [21] (table 1)\n\n### (c) Food security: the Hunger and Climate Vulnerability Index\n\nTo assess implications of climate change for vulnerability to food insecurity, we used an adaptation of the Hunger and Climate Vulnerability Index (HCVI) [22]. The HCVI was developed by the United Nations World Food Programme to provide a country-level assessment of vulnerability to food insecurity as a result of climate-related events. We used a new iteration of the HCVI which makes use of gridded climate model projections to understand the impact of climate change on vulnerability to food insecurity, and the benefits that adaptation can bring via scenarios of adaptation investment [23]. This iteration of the HCVI only considers in-country production of food and does not account for food trade. For this reason, the HCVI is only calculated for 122 developing and least-developed countries (defined here as countries not in the OECD or EU which can be resolved by the scale of the climate model; i.e. larger than 500 km2).\n\nThe index provides quantification at the national level across the globe of the scale and direction of impact of climate change on food insecurity. 
As such, it aims to provide the following: (i) information to help policy-makers understand the level of challenge to global food security that climate change presents; (ii) information on the geography of the impacts and help to evaluate the relative benefits of mitigation and adaptation responses.\n\nThe index is not intended to be a detailed planning tool, but aims to help planners evaluate the nature of the top-level threat to food insecurity that climate change presents, thereby supporting prioritization of effort.\n\nThe HCVI consists of three equally weighted components: exposure to climate-related hazards, sensitivity of national agricultural production to climate-related hazards, and adaptive capacity a measure of a country's ability to cope with climate-related food shocks. The sensitivity and adaptive capacity components are based on data from the World Bank, World Resources Institute,", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Compost adds organic material and nutrients to the soil, increases water-holding capacity and biological activity, and improves plant growth and health.**", - "page_start": 0, - "page_end": 0, - "source_file": "CompostGuide.pdf" - }, - { - "text": "Figure 5. Overall hamstrings (HAMS; A), semimembranosus (SM; B), semitendinosus (ST; C), biceps femoris long head (BFlh; D), and biceps femoris short head (BFsh; E) muscle volume of a World's Strongest Man and deadlift champion (WSM) compared with long-term resistance trained [n ¼ 16, from the work by Maden-Wilkinson et al. (10)], elite sprint runners [n ¼ 5, from the work by Miller et al. (13)], subelite sprint runners [n ¼ 26, from the work by Miller et al. (13)], and untrained control populations [n ¼ 50, pooled population from the works by Miller et al. (13) (n ¼ 11) and Balshaw et al. (14) (pretest data n ¼ 39)].\n\npatellar tendon moment arm (þ 18%). 
Therefore, of these two key strength determinants, muscle size, rather than joint leverage, appeared to be the predominant factor responsible for the WSM's extraordinary strength. Indeed, when we previously compared the muscle morphology and joint mechanics of individuals with distinct maximum strength capacity (long-term resistance-trained individuals vs. untrained controls), muscle size was the primary factor separating the groups with much more subtle differences in moment arm (10). The extreme example of muscle size provided by the WSM's quadriceps femoris also gave the opportunity to investigate the scaling of tendon size to muscle size; extreme muscular size (greater than or equal to twice that for untrained controls) might be expected to be accompanied by comparable tendinous tissue size to effectively transmit high muscular forces to the skeleton. However, the WSM's patellar tendon CSA was only 30% larger than untrained controls and within the range of individuals we have previously measured (Fig. 6A). This observation supports the notion that tendon structure may be largely fixed by adulthood (40), with only slow/limited\n\nFigure 6. Patellar tendon mean cross-sectional area (A) and patellar tendon moment arm (B) of a World's Strongest Man and deadlift champion (WSM) compared with long-term resistance trained [n ¼ 16, from the work by Massey et al. (15)] and untrained control populations [n ¼ 39, from the work by Massey et al. (15)].", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed12.pdf" - }, - { - "text": "# RESEARCH ARTICLE\n\n# Muscle and tendon morphology of a world strongman and deadlift champion\n\n# Thomas G. Balshaw,1 Garry J. Massey,1,2 Robert Miller,1,3,4 Emmet J. McDermott,1,5 Thomas M. Maden-Wilkinson,6 and Jonathan P. 
Folland1\n\n1 School of Sport, Exercise, and Health Sciences, Loughborough University, Loughborough, United Kingdom; 2 College of Life and Environmental Sciences, University of Exeter, Exeter, United Kingdom; 3 UK Athletics, Loughborough University, Loughborough, United Kingdom; 4 Department of Sport Science, Aspire Academy, Doha, Qatar; 5 Department of Physical Education and Sport Sciences, University of Limerick, Limerick, Ireland; and 6 Academy of Sport and Physical Activity, Faculty of Health and Wellbeing, Sheffield Hallam University, Sheffield, United Kingdom\n\n# Abstract\n\nThis study compared the muscle and tendon morphology of an extraordinarily strong individual, a World's Strongest Man and deadlift champion (WSM), with that of various other athletic, trained, and untrained populations. The WSM completed the following: 1) 3.0-T MRI scans, to determine the volume of 22 individual lower limb muscles, 5 functional muscle groups, patellar tendon (PT) cross-sectional area (CSA), and PT moment arm; and 2) countermovement jumps (CMJ) and isometric midthigh pull (IMTP) contractions. The WSM was compared with previously assessed groups from our laboratory (muscle and tendon) and the wider research literature (CMJ and IMTP). The WSM's CMJ peak power (9,866 W) and gross (9,171 N) and net (7,480 N) IMTP peak forces were higher than any previously published values. The WSM's overall measured leg muscle volume was approximately twice that of untrained controls (þ 96%) but with pronounced anatomical variability in the extent of muscular development. The plantar flexor group (þ 120%) and the guy rope muscles (sartorius, gracilis, and semitendinosus: þ 140% to þ 202%), which stabilize the pelvis and femur, demonstrated the largest differences relative to that of untrained controls. The WSM's pronounced quadriceps size (greater than or equal to twofold vs. 
untrained) was accompanied by modest PT moment arm differences and, notably, was not matched by an equivalent difference in PT CSA (þ 30%). These results provide novel insight into the musculotendinous characteristics of an extraordinarily strong individual, which may be toward the upper limit of human variation, such that the WSM's very pronounced lower limb muscularity also exhibited distinct anatomical variability and with muscle size largely uncoupled from tendon size.\n\nNEW & NOTEWORTHY Lower-body muscle size of an extraordinarily strong individual, a World's Strongest Man and deadlift champion (WSM), was approximately twice that of controls but was underpinned by pronounced anatomical variability in the extent of muscular development ( þ 23–202%): the plantar flexor group and guy rope muscles demonstrating the largest differences. The WSM's quadriceps size (more than or equal to twice that of controls) contrasted with modest differences in patella tendon moment arm ( þ 18%) and was uncoupled from patellar tendon size ( þ 30%).\n\nisometric force; magnetic resonance imaging; power; strength\n\n# INTRODUCTION\n\nFeats of strength have fascinated man since the early stages of human civilization, as shown by the archeological evidence of inscribed heavy stones at Olympia and Thera in Greece, dated to the 6th century BC, detailing the way they were lifted by Bybon and Eumastus, respectively (1). Over the centuries, many types of strength competitions have existed; some of which have been codified and endured within modern sporting competitions (e.g., weightlifting, powerlifting, and shot put). In addition, professional strongman competitions, such as the annually contested \"World's Strongest Man\" event, generate extensive global interest (2). Moreover, scientific understanding of muscular strength is important because of its role in athletic performance (3), injury prevention (4), and healthy aging (5). 
However, our knowledge of extreme human strength is limited.\n\nTo date, there is little scientific information on the characteristics of extremely strong humans in terms of laboratorybased tests of strength and power, particularly the size and distribution of their muscle mass, as well as tendon size and joint mechanics (moment arm). Kraemer et al. (6) examined the body composition of elite strongman competitors using dualenergy X-ray absorptiometry scanning and found that they had a body mass (153 ± 19 kg) and lean mass (118 ± 12 kg) approximately twice that of an average untrained healthy young man. Whole body skeletal muscle mass of athletes from strength- and power-based sports has also been estimated using ultrasound measurements at a limited number of anatomical locations (7, 8). However, neither ultrasound-derived\n\nCorrespondence: T. G. Balshaw (t.g.balshaw@lboro.ac.uk). Submitted 8 May 2024 / Revised 2 July 2024 / Accepted 16 July 2024", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed12.pdf" - }, - { - "text": "- [5] F. Brochard-Wyart and J. Daillant, \"Drying of solids wetted by thin liquid films,\" Can. J. Phys. 68, 1084–1088 (1989).\n- [6] P. Muller-Buschbaum, \"Dewetting and pattern formation in thin polymer films as investigated in real ¨ and reciprocal space,\" J. Phys.-Condes. Matter 15, R1549–R1582 (2003).\n- [7] R. Seemann, S. Herminghaus, C. Neto, S. Schlagowski, D. Podzimek, R. Konrad, H. Mantz, and K. Jacobs, \"Dynamics and structure formation in thin polymer melt films,\" J. Phys.-Condes. Matter 17, S267–S290 (2005).\n- [8] U. Thiele, \"Structure formation in thin liquid films,\" in S. Kalliadasis and U. Thiele, editors, \"Thin films of Soft Matter,\" pages 25–93, Springer, Wien (2007).\n- [9] R. Xie, A. Karim, J. F. Douglas, C. C. Han, and R. A. Weiss, \"Spinodal dewetting of thin polymer films,\" Phys. Rev. Lett. 81, 1251–1254 (1998).\n- [10] R. Seemann, S. Herminghaus, and K. 
Jacobs, \"Dewetting patterns and molecular forces: A reconciliation,\" Phys. Rev. Lett. 86, 5534–5537 (2001).\n- [11] U. Thiele, M. G. Velarde, and K. Neuffer, \"Dewetting: Film rupture by nucleation in the spinodal regime,\" Phys. Rev. Lett. 87, 016104 (2001).\n- [12] M. Bestehorn and K. Neuffer, \"Surface patterns of laterally extended thin liquid films in three dimensions,\" Phys. Rev. Lett. 87, 046101 (2001).\n- [13] J. Becker, G. Grun, R. Seemann, H. Mantz, K. Jacobs, K. R. Mecke, and R. Blossey, \"Complex ¨ dewetting scenarios captured by thin-film models,\" Nat. Mater. 2, 59–63 (2003).\n- [14] C. Redon, F. Brochard-Wyart, and F. Rondelez, \"Dynamics of dewetting,\" Phys. Rev. Lett. 66, 715– 718 (1991).\n- [15] R. Seemann, S. Herminghaus, and K. Jacobs, \"Shape of a liquid front upon dewetting,\" Phys. Rev. Lett. 87, 196101 (2001).\n- [16] R. Fetzer, K. Jacobs, A. Munch, B. Wagner, and T. P. Witelski, \"New slip regimes and the shape of ¨ dewetting thin liquid films,\" Phys. Rev. Lett. 95, 127801 (2005).\n- [17] F. Brochard-Wyart and C. Redon, \"Dynamics of liquid rim instabilities,\" Langmuir 8, 2324–2329 (1992).\n- [18] G. Reiter and A. Sharma, \"Auto-optimization of dewetting rates by rim instabilities in slipping polymer films,\" Phys. Rev. Lett. 87, 166103 (2001).\n- [19] A. Munch and B. Wagner, \"Contact-line instability of dewetting thin films,\" Physica D ¨ 209, 178–190 (2005).", - "page_start": 25, - "page_end": 25, - "source_file": "1001.2669.pdf" - }, - { - "text": "comparative populations drawn from the existing literature can be found in Supplemental Materials 1 (gross IMTP peak force and net IMTP peak force) and 2 (CMJ peak power and height).\n\n#### Isometric Midthigh Pull and Countermovement Jump\n\nGross (including body weight) and net (above body weight) IMTP peak forces of the WSM were 9,171 N and 7,480 N, respectively. 
The WSM's gross IMTP peak force was 54% greater than the highest comparable group mean we located (subelite weightlifters: 5,942 ± 844 N (20); Fig. 2A). The WSM's net IMTP peak force was 100% greater than the highest comparable group mean value in the literature (collegiate soccer athletes: 3,740 ± 692 N (26); Fig. 2B).\n\nThe WSM's CMJ peak power and jump height were 9,866 W and 53.3 cm, respectively. The peak CMJ power of the WSM was >2.5-fold (164%) that of the mean of an untrained control group previously measured in our laboratory (3,735 ± 760 W; unpublished) and 51% greater than the highest comparable group mean value we located in the literature (professional basketball players: 6,518 ± 923 W (32); Fig. 2C). Not surprisingly, given the WSM's high body mass, his jump height was less exceptional, while still being 20% greater than that of a group of untrained control participants previously measured in our laboratory (44.3 ± 9.2 cm; unpublished). However, his jump height was 25% lower than the highest group mean CMJ height we are aware of in the published literature (elite international gymnasts: 71.3 ± 4.5 cm (37); Fig. 2D).\n\n#### Leg Muscle Volumes\n\nThe total unilateral muscle volume of the 22 measured muscles/compartments of WSM (14,922 cm3 ) was nearly twice that of a relatively modest (n ¼ 11) sample of untrained controls (7,628 ± 1,548 cm3 ; þ 96%; Fig. 3), while being 63% greater than subelite (9,164 ± 1,207 cm3 ) and þ 32% greater than elite 100-m sprinters (11,323 ± 1,328 cm3 ; Table 2). The muscle group differences were largest for the plantar flexors ( þ 120% vs. untrained; þ 100% vs. subelite sprinters; þ 70% vs. elite sprinters) and smallest for the hip flexors ( þ 65% vs. untrained; þ 30% vs. subelite sprinters; þ 5% vs. elite sprinters). 
The WSM had the highest values of any individual we have observed for four out of five muscle groups, but not the hip flexors, which were inferior to three of the elite 100-m sprinters (n ¼ 5).\n\nCompared with untrained control participants (n ¼ 11), all 22 of the WSM's individual muscles/compartments were larger than untrained controls (Table 2 and Fig. 3). However, the differences in muscle volume were extremely variable, with the biggest differences being for the \"guy ropes,\" which were 2.5–3.0 times that of untrained controls (þ 140% gracilis; þ 157% ST; þ 202% sartorius), compared with more modest differences such as 23% (BFsh) and 32% (iliopsoas) greater.\n\n#### Quadriceps Femoris and Hamstring Size\n\nOverall quadriceps femoris volume of the WSM (4,386 cm3 ) was 127% greater than a large, pooled population of untrained controls (1,932 ± 336; n ¼ 102), 66% greater than subelite sprinters (2,636 ± 401 cm3 ), 53% greater than long-term resistancetrained individuals (2,876 ± 311 cm3 ), and 36% greater than elite\n\nFigure 3. Percentage differences in muscle volumes of all muscles, 5 functional muscle groups, and 23 individual muscles/compartments between the World's Strongest Man and deadlift champion (WSM; n ¼ 1) and untrained control participants (n ¼ 11) from the work by Miller et al. (13). A positive value indicates greater muscle volume of WSM relative to the group mean of the untrained controls. The functional muscle groups and individual muscles are ordered according to the magnitude of the percentage differences for absolute muscle volume.\n\nsprinters (3,218 ± 400 cm3 ; Fig. 4A). Moreover, the WSM's quadriceps femoris was 18% larger than the most muscular individual we have previously assessed (elite sprinter: 3,716 cm3 ). 
The volumes of the individual vasti muscles of the WSM (VL: 1,508 cm3 ; VI: 1,336 cm3 ; VM: 1,088 cm3 ) were 130–138% larger than untrained controls (VL: 633 ± 117 cm3 ; VI: 581 ± 120 cm3 ; VM: 461 ± 89 cm3 ) and also greater than any trained/athletic individual we have previously assessed (Fig. 4, B–D). However, the WSM's RF (453 cm3 ) was not quite so large, being 76% greater than untrained controls (257 ± 57 cm3 ) but smaller than the average elite sprinter (5%; Fig. 4E), 13% greater than subelite sprinters, and 21% greater than long-term resistancetrained individuals.\n\nOverall hamstring volume of the WSM (1,545 cm3 ) was 109% greater than a large pooled population of untrained controls (739 ± 142 cm3 ; n ¼ 50), 44% greater than subelite sprinters (1,075 ± 178 cm3 ), 53% greater than long-term resistancetrained individuals (1,011 ± 142 cm3 ), and 17% greater than elite sprinters (1,315 ± 130 cm3 ; Fig. 5A). The WSM's hamstring volume was also marginally larger (þ 3%) than the most muscular individual we have previously assessed (subelite sprinter, 1,495 cm3 ). The ST (563 cm3 ) and BFlh (454 cm3 ) volumes of the WSM were 132–182% larger than that of the pooled population of untrained controls (ST: 200 ± 48 cm3 ; BFlh: 196 ± 47 cm3 ; Fig. 5, C and D) and greater than the mean of any trained/athletic group we have previously assessed (Fig. 5, C and D). SM (392 cm3 ) volume of the WSM was 66% greater than untrained controls (SM 236 ± 46 cm3 ) and greater than the mean for trained/athletic groups we have previously", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed12.pdf" - }, - { - "text": "changes in response to functional overload/resistance training. 
For example, we previously found patellar tendon CSA to show very subtle changes after 15 wk (45 training sessions) of heavy resistance training [ þ 1.4% (41)] and no differences between long-term resistance-trained individuals and untrained controls (15).\n\n#### Limitations\n\nAlthough the current investigation provides a detailed assessment of an individual at/toward the upper limit of human strength performance, it is important to appreciate study limitations. First, the participant was not measured immediately before their World's Strongest Man championship success or other landmark performances, and it is entirely possible the functional and structural characteristics we assessed may have been even higher directly prior to peak performances. Despite using a wide-bore MRI scanner, due to the size of the WSM's shoulders and arms, it was not possible to scan their upper body. Thus, we were not able to investigate this aspect of the WSM's muscle morphology; although given that greater hypertrophy occurs in the upper body compared with the lower body (42), it is possible that the WSM's upper-body muscle size relative to untrained controls may have been even more pronounced than what we have documented for the lower body. In the current study to provide the most representative data on untrained control participants, the largest available untrained control populations were used for each category of measurements. Thus, different untrained control populations were used [e.g., comparison of quadricep and hamstring size (n ¼ 102) vs. comparison of all the leg muscles (n ¼ 11)], which led to some subtle discrepancies in the contrasts between these groups and the WSM [e.g., quadriceps femoris/knee extensors, þ 127% and þ 99% relative to our large pooled (n ¼ 102) and smaller (n ¼ 11) untrained control samples, respectively]. Importantly, however, this discrepancy does not appear to meaningfully affect the interpretation of the findings. 
There were subtle differences in the precise scanning and analysis approaches used with the reference populations featured in this study, including 1) magnetic field strength [1.5 T (10, 11, 15) vs. 3.0 T, WSM and (13, 14)]; 2) the interslice distance used to quantify quadriceps femoris and hamstrings muscle volume [1.5 cm (10, 11, 14) vs. 2.0 cm, WSM and (13)]; 3) the calculation of muscle volume [area under the cubic spline ACSA-muscle length curve: (10, 11, 14) vs. the equation detailed earlier: WSM and (13)]; and 4) the use of unilateral MRI measures derived from one limb (10, 11, 14, 15) or collapsed across two limbs [WSM and (13)]. However, it seems likely that these subtle differences would have had at most a very minor effect on the findings. Finally, it is also important to highlight that the differences documented between the WSM and comparative populations for the various measures included in the current study cannot be assumed to be anything other than a combination of both innate (genetic) and environmental (training and nutrition) factors.\n\n#### Conclusions\n\nIn conclusion, this novel investigation documented the muscle and tendon morphology and whole body strength and power characteristics of an exceptionally strong individual, relative to comparative athletic, trained, and untrained populations. Overall leg muscle volume of the WSM was approximately twice that of untrained controls but with pronounced anatomical variability in the extent of muscular development. The plantar flexor muscle group and the guy rope muscles (sartorius, gracilis, and semitendinosus: þ 140 to þ 202%), which stabilize the pelvis and femur, demonstrated the largest differences. 
The pronounced quadriceps femoris size of the WSM (greater than or equal to twice that of untrained) was accompanied by a more modest difference in patella tendon moment arm (þ 18%) and was not matched by a proportional difference in tendon size ( þ 30%).\n\n# DATA AVAILABILITY\n\nData will be made available upon reasonable request.\n\n### SUPPLEMENTAL MATERIAL\n\nSupplemental Material: https://doi.org/10.6084/m9.figshare. 26152939.\n\n### ACKNOWLEDGMENTS\n\nThe authors thank radiographer Julie Thompson.\n\n# DISCLOSURES\n\nNo conflicts of interest, financial or otherwise, are declared by the authors.\n\n## AUTHOR CONTRIBUTIONS\n\nT.G.B. and J.P.F. conceived and designed research; T.G.B., G.J.M., R.M., E.J.M., and J.P.F. performed experiments; T.G.B., G.J.M., R.M., E.J.M., and T.M.M.-W. analyzed data; T.G.B. and J.P.F. interpreted results of experiments; T.G.B. prepared figures; T.G.B. and J.P.F. drafted manuscript; T.G.B. and J.P.F. edited and revised manuscript; T.G.B., G.J.M., R.M., E.J.M., T.M.M.-W., and J.P.F. approved final version of manuscript.\n\n### REFERENCES\n\n- 1. Crowther NB. Weightlifting in antiquity: achievement and training. Greece Rome 24: 111–120, 1977. doi:10.1017/s0017383500018416.\n- 2. Dixon E. How Wave.tv is making the World's Strongest Man think bigger with its digital plans (Online). SportsPro, 2020.https://www. sportspromedia.com/insights/analysis/worlds-strongest-man-wavetvthe-pump-snapchat-brian-verne-interview/ [Apr 6, 2024].\n- 3. Suchomel TJ, Nimphius S, Stone MH. The importance of muscular strength in athletic performance. Sports Med 46: 1419–1449, 2016. doi:10.1007/s40279-016-0486-0.\n- 4. Opar DA, Williams MD, Timmins RG, Hickey J, Duhig SJ, Shield AJ. Eccentric hamstring strength and hamstring injury risk in Australian footballers. Med Sci Sports Exerc 47: 857–865, 2015. doi:10.1249/ mss.0000000000000465.\n- 5. McLeod M, Breen L, Hamilton DL, Philp A. 
Live strong and prosper: the importance of skeletal muscle strength for healthy ageing. Biogerontology 17: 497–510, 2016. doi:10.1007/s10522-015-9631-7.\n- 6. Kraemer WJ, Caldwell LK, Post EM, DuPont WH, Martini ER, Ratamess NA, Szivak TK, Shurley JP, Beeler MK, Volek JS, Maresh CM, Todd JS, Walrod BJ, Hyde PN, Fairman C, Best TM. Body composition in elite strongman competitors. J Strength Cond Res 34: 3326–3330, 2020. doi:10.1519/jsc.0000000000003763.\n- 7. Abe T, Buckner SL, Dankel SJ, Jessee MB, Mattocks KT, Mouser JG, Loenneke JP. Skeletal muscle mass in human athletes: what is the upper limit? Am J Hum Biol 30: e23102, 2018. doi:10.1002/ ajhb.23102.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed12.pdf" - }, - { - "text": "This code retrieves the raw document data from ODWEK, gathers all of the document details that Content Manager OnDemand might store from loading the data, and then transforms the document data. The transformed document data can be passed back through ODWEK to the original client request.\n\nTable 9-1 lists the XmlTagNames for the transformation specification.\n\n| XmlTagname | ODConstant | Description |\n| --- | --- | --- |\n| TransformName | TransFormName | Name of the transform. It is used as the |\n| | | viewer argument that is passed to |\n| | | ODWEK Retrieve APIs. |\n| TransformDescription | TRANSFORM_DESC | Description of the transform. |\n| ClientClass | TRANSFORM_CLIENTCLASS | The class name of the custom interface |\n| | | class. |\n| CmdLineExe | TRANSFORM_CMDLINEEXE | Fully qualified name of the transform |\n| | | executable file. |\n| OutputMimeType | TRANSFORM_MIMETYPE | The MIME type of the data as it is |\n| | | returned from the transform. |\n| OutputExtension | TRANSFORM_OUTPUTEXT | The extension of the data that is |\n| | | returned from the transform. |\n| CmdParms | TRANSFORM_PARMS | The mappings of OD Values to custom |\n| | | variables. 
See the constant key words |\n| | | that are shown in Table 9-2 on |\n| | | page 216. |\n| Passthru | TRANSFORM_PASSTHRU | These values are passed through |\n| | | ODWEK directly to the transform. |\n| Cmdlineparm | TRANSFORM_PASSTHRU_CMDLINE | These values are passed through |\n| | | ODWEK directly to the transform |\n| | | command line. |\n\nTable 9-1 XmlTagNames for the transform specification\n\nTable 9-2 provides information about the XMLTags. These XML tags are used to pass specific values to the transform command line. These XML tags allow the mapping of the command-line option where the specified value can be passed.\n\nTable 9-2 XmlTags detailed information\n\n| XmlTagname | ODConstant | Description |\n| --- | --- | --- |\n| RECORDFORMAT | DOCUMENT_RECORD_FORMAT | The record format of the document as stored |\n| | | in Content Manager OnDemand. |\n| RECORDLENGTH | DOCUMENT_RECORD_LENGTH | The record length of the document as stored |\n| | | in Content Manager OnDemand. |\n| CARRIAGECONTROL | DOCUMENT | The carriage control of the document as |\n| | _CARRIAGE_CONTROL | stored in Content Manager OnDemand. |\n| TRC_EXIST | DOCUMENT_TRC | The TRC settings as stored in Content |\n| | _EXIST | Manager OnDemand. |\n| DOCROTATION | DOCUMENT | The rotation of the document as stored in |\n| | _ROTATION | Content Manager OnDemand. |", - "page_start": 239, - "page_end": 239, - "source_file": "sg246915.pdf" - }, - { - "text": "**Figure 8.** Change in Hunger and Climate Vulnerability Index relative to baseline calculated for simulated climate states at 2°C globalwarming,for five individualHadGEM3simulations driven by SSTs and SICsfrom differentmembers ofthe CMIP5 ensemble, and the ensemble mean.\n\nFour countries show ensemble-mean HCVI values at 2°C global warming that are higher than any seen in the baseline climate; these are Oman, Bangladesh, Mauritania and Yemen. 
The implication of such HCVI values is that climate change at 2°C is projected to cause levels of vulnerability to food insecurity that are greater than any seen in the present day. For individual ensemble members, the number of countries with 'unprecedented' HCVI values at 2°C varies from three to seven. Conversely, many countries in the baseline climate have levels of vulnerability to food insecurity that are greater than those expected in other countries under 2°C global warming. This suggests that other factors are already posing greater risk for food insecurity than 2°C climate change is expected to cause in other countries, so the increased risk from climate change should not overshadow the need to reduce vulnerability to food insecurity arising from non-climatic factors. There is scope to reduce vulnerability to food insecurity by addressing various socio-economic issues in such counties.\n\nThe JULES simulations show a general tendency towards increased run-off over approximately half of the land surface (figure 9) and the majority of the major river basins assessed (figure 10), but with large regional uncertainties including the possibility of decreased flows in many basins. The ensemble-mean change in mean streamflow shows an increase of between 5 and 25% over most of the Northern Hemisphere land surface, with some regions seeing an increase of over 50% at 2°C global warming. Notable exceptions to this are western Europe and southcentral USA, which see less than a 5% change in run-off, and the already very dry region of the Sahara Desert where the existing very small run-off become even smaller.\n\nEnsemble-mean projected changes in low run-off flows are generally larger (figure 11), with the regions seeing an increase in mean run-off seeing a larger percentage increase in low run-off—over 75% increases over much of North America, Eastern Europe and Asia. 
Note that this does not necessarily imply a larger increase in absolute low flow compared to absolute mean flow, because the baseline is (by definition) smaller for low flows. In western Europe, where the changes in mean flows were less than 5%, the ensemble-mean low flow decreases by between 5", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed11.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed12.pdf", - "query": "Why constraint made the scanning of the word's strongest man's upper body impossible using a MRI ?", - "target_page": 10, - "target_passage": "Despite using a wide-bore MRI scanner, due to the size of the WSM’s shoulders and arms, it was not possible to scan their upper body", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "changes in response to functional overload/resistance training. For example, we previously found patellar tendon CSA to show very subtle changes after 15 wk (45 training sessions) of heavy resistance training [ þ 1.4% (41)] and no differences between long-term resistance-trained individuals and untrained controls (15).\n\n#### Limitations\n\nAlthough the current investigation provides a detailed assessment of an individual at/toward the upper limit of human strength performance, it is important to appreciate study limitations. First, the participant was not measured immediately before their World's Strongest Man championship success or other landmark performances, and it is entirely possible the functional and structural characteristics we assessed may have been even higher directly prior to peak performances. Despite using a wide-bore MRI scanner, due to the size of the WSM's shoulders and arms, it was not possible to scan their upper body. 
Thus, we were not able to investigate this aspect of the WSM's muscle morphology; although given that greater hypertrophy occurs in the upper body compared with the lower body (42), it is possible that the WSM's upper-body muscle size relative to untrained controls may have been even more pronounced than what we have documented for the lower body. In the current study to provide the most representative data on untrained control participants, the largest available untrained control populations were used for each category of measurements. Thus, different untrained control populations were used [e.g., comparison of quadricep and hamstring size (n ¼ 102) vs. comparison of all the leg muscles (n ¼ 11)], which led to some subtle discrepancies in the contrasts between these groups and the WSM [e.g., quadriceps femoris/knee extensors, þ 127% and þ 99% relative to our large pooled (n ¼ 102) and smaller (n ¼ 11) untrained control samples, respectively]. Importantly, however, this discrepancy does not appear to meaningfully affect the interpretation of the findings. There were subtle differences in the precise scanning and analysis approaches used with the reference populations featured in this study, including 1) magnetic field strength [1.5 T (10, 11, 15) vs. 3.0 T, WSM and (13, 14)]; 2) the interslice distance used to quantify quadriceps femoris and hamstrings muscle volume [1.5 cm (10, 11, 14) vs. 2.0 cm, WSM and (13)]; 3) the calculation of muscle volume [area under the cubic spline ACSA-muscle length curve: (10, 11, 14) vs. the equation detailed earlier: WSM and (13)]; and 4) the use of unilateral MRI measures derived from one limb (10, 11, 14, 15) or collapsed across two limbs [WSM and (13)]. However, it seems likely that these subtle differences would have had at most a very minor effect on the findings. 
Finally, it is also important to highlight that the differences documented between the WSM and comparative populations for the various measures included in the current study cannot be assumed to be anything other than a combination of both innate (genetic) and environmental (training and nutrition) factors.\n\n#### Conclusions\n\nIn conclusion, this novel investigation documented the muscle and tendon morphology and whole body strength and power characteristics of an exceptionally strong individual, relative to comparative athletic, trained, and untrained populations. Overall leg muscle volume of the WSM was approximately twice that of untrained controls but with pronounced anatomical variability in the extent of muscular development. The plantar flexor muscle group and the guy rope muscles (sartorius, gracilis, and semitendinosus: þ 140 to þ 202%), which stabilize the pelvis and femur, demonstrated the largest differences. The pronounced quadriceps femoris size of the WSM (greater than or equal to twice that of untrained) was accompanied by a more modest difference in patella tendon moment arm (þ 18%) and was not matched by a proportional difference in tendon size ( þ 30%).\n\n# DATA AVAILABILITY\n\nData will be made available upon reasonable request.\n\n### SUPPLEMENTAL MATERIAL\n\nSupplemental Material: https://doi.org/10.6084/m9.figshare. 26152939.\n\n### ACKNOWLEDGMENTS\n\nThe authors thank radiographer Julie Thompson.\n\n# DISCLOSURES\n\nNo conflicts of interest, financial or otherwise, are declared by the authors.\n\n## AUTHOR CONTRIBUTIONS\n\nT.G.B. and J.P.F. conceived and designed research; T.G.B., G.J.M., R.M., E.J.M., and J.P.F. performed experiments; T.G.B., G.J.M., R.M., E.J.M., and T.M.M.-W. analyzed data; T.G.B. and J.P.F. interpreted results of experiments; T.G.B. prepared figures; T.G.B. and J.P.F. drafted manuscript; T.G.B. and J.P.F. edited and revised manuscript; T.G.B., G.J.M., R.M., E.J.M., T.M.M.-W., and J.P.F. 
approved final version of manuscript.\n\n### REFERENCES\n\n- 1. Crowther NB. Weightlifting in antiquity: achievement and training. Greece Rome 24: 111–120, 1977. doi:10.1017/s0017383500018416.\n- 2. Dixon E. How Wave.tv is making the World's Strongest Man think bigger with its digital plans (Online). SportsPro, 2020.https://www. sportspromedia.com/insights/analysis/worlds-strongest-man-wavetvthe-pump-snapchat-brian-verne-interview/ [Apr 6, 2024].\n- 3. Suchomel TJ, Nimphius S, Stone MH. The importance of muscular strength in athletic performance. Sports Med 46: 1419–1449, 2016. doi:10.1007/s40279-016-0486-0.\n- 4. Opar DA, Williams MD, Timmins RG, Hickey J, Duhig SJ, Shield AJ. Eccentric hamstring strength and hamstring injury risk in Australian footballers. Med Sci Sports Exerc 47: 857–865, 2015. doi:10.1249/ mss.0000000000000465.\n- 5. McLeod M, Breen L, Hamilton DL, Philp A. Live strong and prosper: the importance of skeletal muscle strength for healthy ageing. Biogerontology 17: 497–510, 2016. doi:10.1007/s10522-015-9631-7.\n- 6. Kraemer WJ, Caldwell LK, Post EM, DuPont WH, Martini ER, Ratamess NA, Szivak TK, Shurley JP, Beeler MK, Volek JS, Maresh CM, Todd JS, Walrod BJ, Hyde PN, Fairman C, Best TM. Body composition in elite strongman competitors. J Strength Cond Res 34: 3326–3330, 2020. doi:10.1519/jsc.0000000000003763.\n- 7. Abe T, Buckner SL, Dankel SJ, Jessee MB, Mattocks KT, Mouser JG, Loenneke JP. Skeletal muscle mass in human athletes: what is the upper limit? Am J Hum Biol 30: e23102, 2018. 
doi:10.1002/ ajhb.23102.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed12.pdf" - }, - { - "text": "### Materials & experimental systems\n\n| Methods |\n| --- |\n\n| n/a | Involved in the study | n/a | Involved in the study |\n| --- | --- | --- | --- |\n| | Antibodies | | ChIP-seq |\n| | Eukaryotic cell lines | | Flow cytometry |\n| | Palaeontology and archaeology | | MRI-based neuroimaging |\n| | Animals and other organisms | | |\n| | Clinical data | | |\n| | Dual use research of concern | | |\n| | Plants | | |\n\n# Magnetic resonance imaging\n\n### Experimental design\n\n| Design type | Structural & Diffusion MRI | |\n| --- | --- | --- |\n| Design specifications | No task-based fMRI used in this manuscript. | |\n| Behavioral performance measures | N/A; no performance metrics collected | |\n| Acquisition | | |\n| Structural Imaging type(s) | | |\n| 3 Field strength | | |\n| Sequence & imaging parameters | | High-resolution anatomical scans were acquired using a T1-weighted (T1w) magnetization prepared rapid gradient echo |\n| | | (MPRAGE) sequence (TR = 2500 ms, TE = 2.31 ms, T1 = 934 ms, flip angle = 7°, 0.8 mm thickness) followed by a gradient echo fieldmap (TR = 758 ms; TE1 = 4.92 ms; TE2 = 7.38 ms; flip angle = 60°). A T2-weighted (T2w) turbo spin echo (TSE) |\n| | | scan was also acquired with an oblique coronal orientation positioned orthogonally to the main axis of the hippocampus (TR/TE = 9860/50 ms, flip angle = 122°, 0.4 × 0.4 mm2 in-plane resolution, 2 mm slice thickness, 38 interleaved slices |\n| | with no gap, total acquisition time = 5:42 min). 
| |\n| Area of acquisition | T1-weighted and dMRI scans = whole-brain | |\n| | T2-weighted scan = high-resolution imaging of medial temporal lobe | |\n| Diffusion MRI Used Not used | | |\n| Parameters | TR = 4300 ms, echo time = 100.2 ms, 139 directions, b-max = 4990, FoV = 259 x 259 mm, 78 slices, 1.7986 x 1.7986 x 1.8 mm voxel | |\n\n# Preprocessing\n\nresolution\n\n| Preprocessing software | Gray Matter Volume & Cortical Thickness: |\n| --- | --- |\n| | Advanced Normalization Tools (ANTs), version 2.1.0 |\n| | FreeSurfer, version 7 |\n| | T2-weighted MTL scans: |\n| | Automatic Segmentation of Hippocampal Subfields (ASHS), version 7/2018 |\n| | Diffusion imaging: |\n| | QSIprep, version 0.15.3 |\n| | DSI Studio, version Chen-2022-07-31 |\n| Normalization | Normalization differed by modality due to inherent limitations of applicable processing pipelines. |\n| | Gray Matter Volume & Cortical Thickness: |\n| | All analyses were kept in native subject-space to limit the amount of warping and leverage the advantages of a precision |\n| | imaging design. |\n| | T2-weighted MTL scans: |\n| | T2w images were registered to the segmentation template (see below) using ANTs deformable registration. |\n| | Diffusion imaging: |\n| | Initial preprocessing through QSIprep normalized diffusion images to the skull-stripped T1w images. Diffusion images were |\n| | then reconstructed in MNI space using DSI studio's Q-space Diffeomorphic Reconstruction. |\n\nApril 2023", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed4.pdf" - }, - { - "text": "predictions of skeletal muscle mass nor dual-energy X-ray absorptiometry provides detailed information on the size of specific individual muscles. Given the known importance of muscle size as a determinant of muscular strength (9–11), pronounced muscle size seems likely to be critical to extreme human strength; however, the specific muscle size of extremely strong individuals remains unknown. 
Similarly, a large moment arm (e.g., of the patella tendon at the knee joint) could contribute to the expression of high muscular strength (10, 12), and a large tendon may mitigate the mechanical stress it experiences with very high muscular loads, and therefore, these characteristics may also be expected in individuals selected for exceptional strength.\n\nIn this paper, we present the findings from a unique opportunity to examine the laboratory function, muscle size, and distribution of muscle mass, as well as patellar tendon size and moment arm, of a World's Strongest Man and deadlift champion (WSM) in comparison with existing data on untrained individuals, power athletes (100-m-track sprinters), and long-term resistance-trained populations that we have assessed previously (10, 11, 13–15).\n\n### MATERIALS AND METHODS\n\n#### Participant\n\nThe WSM's achievements included one World's Strongest Man title (14 mo prior to measurement), five Britain's Strongest Man titles (the most recent 6 mo prior to measurement), twice being World Deadlift Champion and Deadlift World Record holder (500 kg; at the time of measurement), and second place at Europe's Strongest Man. Prior to agreeing to participate, the purpose of the research study and the testing procedures were explained to the participant along with the risks and benefits of taking part. The participant gave his written informed consent to participate in the study that was approved by the Loughborough University Ethical Advisory Committee (Ethics Number R18-P090). Included in the written consent was a statement providing permission for publication of the collected data and the likelihood that their identity may be evident based on their achievements and characteristics, despite anonymization.\n\n#### Training History\n\nThe WSM had been continuously involved in systematic, regular upper- and lower-body resistance training for 15 yr at the time of testing. 
In the 12 mo prior to testing, the participant's resistance training consisted of the following typical exercises: lower body: squats, deadlifts, leg press, and knee extension; and upper body: bench press, shoulder press, dumbbell/barbell rows, and lat pull-down. The proportion of the participant's training within the following repetition ranges over the last 12 mo was as follows: near maximum loads [1–5 repetition maximum (RM)]: 10%; heavy loads (6– 14 RM): 80%; and moderate loads (-15 RM): 10%. The participant reported only occasional (<1/week) use of advanced resistance training practices (i.e., complex training and accommodating resistance method) but frequently (>3/ week) executed training repetitions with the intention to move the load as fast as possible. The WSM's nutritional supplement consumption included protein, branched-chain amino acids, and electrolytes.\n\n#### Overview\n\nThe WSM reported for a single test session that involved the following assessments (listed in order): axial T1 weighted 3.0-T MRI scans from T12 to the lateral malleolus [to assess muscle size throughout the lower body (left and right sides)], axial and sagittal T1-weighted MRI scans of both knees [to assess patellar tendon cross-sectional area (CSA) and patellar tendon moment arm], maximum countermovement jumps (CMJ), and maximum isometric midthigh pulls (IMTPs). The muscle size, patellar tendon CSA, and patellar tendon moment arm of the WSM were compared with various populations measured within our laboratory, as indicated in Table 1, alongside participant descriptives (10, 11, 13–15). 
In addition, the IMTP and CMJ measures were compared with existing published literature (included studies are summarized in Supplemental Materials 1 and 2, alongside participant descriptives).\n\n#### MRI Measurement of Muscle Tendon Unit Morphology and Moment Arm\n\nThe participant reported for their MRI scan [3.0-T Discovery MR750W (70-cm-wide bore), GE Medical] having not completed any strenuous physical activity in -24 h and had received prior instruction to arrive in a relaxed state having eaten and drunk normally. The participant sat quietly for 15 min prior to their scan. The participant lay supine for the MRI scan of the lower-body musculature from T12 to the lateral malleolus. A body coil (GE Medical) allowed axial T1 weighted images (time of repetition/time to echo 600/8.144 ms, image matrix 512 512, field of view 500 500 mm, pixel size 0.9766 0.9766 mm, slice thickness 5 mm, and interslice gap 5 mm) to be acquired in five overlapping blocks. Images of both sides of the body were acquired within a single scan for blocks 1 (T12 to pelvis), 4 (knee joint space to midshank), and 5 (midshank to lateral malleolus). However, due to the size of the participant's thighs, it was necessary to scan each thigh individually for blocks 2 (pelvis to midthigh) and 3 (midthigh to knee joint space); this involved the radiographer repositioning the field of view between scanning the first and the second thigh but not physically moving the coil or the participant. Oil-filled capsules were secured to the surface of the participant's skin with Transpore tape at intervals along the length of the lower body prior to the scan and in an offline analysis used to verify the alignment of the blocks (Horos software, Version 3.36, https://horosproject.org/).\n\nThe offline analysis was of the following muscles/compartments (Fig. 
1): iliopsoas (psoas major and iliacus combined); sartorius; tensor fasciae latae; adductor magnus; gracilis; gluteus maximus; gluteus medius and minimus (combined, due to difficulty separating the two muscles); rectus femoris (RF); vastus lateralis (VL), medialis (VM), and intermedius (VI); semimembranosus (SM); semitendinosus (ST); biceps femoris long (BFlh) and short heads (BFsh); popliteus; lateral and medial gastrocnemius; soleus; and the anterior, lateral, and deep posterior compartments of the shank. The anterior shank compartment consisted of the", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed12.pdf" - }, - { - "text": "analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the effects displayed by a videotaped subject.[67]\n\n## **General intelligence**\n\nA machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence. [4]\n\n## **Techniques**\n\nAI research uses a wide variety of techniques to accomplish the goals above.[b]\n\n## **Search and optimization**\n\nAI can solve many problems by intelligently searching through many possible solutions.[68] There are two very different kinds of search used in AI: state space search and local search.\n\n#### **State space search**\n\nState space search searches through a tree of possible states to try to find a goal state.[69] For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. [70]\n\nSimple exhaustive searches[71] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. 
The result is a search that is too slow or never completes.[15] \"Heuristics\" or \"rules of thumb\" can help prioritize choices that are more likely to reach a goal.[72]\n\nAdversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and countermoves, looking for a winning position.[73]\n\n#### **Local search**\n\nLocal search uses mathematical optimization to find a solution to a problem. It begins with some form of guess and refines it incrementally. [74]\n\nGradient descent is a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize a loss function. Variants of gradient descent are commonly used to train neural networks.[75]\n\nAnother type of local search is evolutionary computation, which aims to iteratively improve a set of candidate solutions by \"mutating\" and \"recombining\" them, selecting only the fittest to survive each generation.[76]", - "page_start": 4, - "page_end": 4, - "source_file": "wikipedia3.pdf" - }, - { - "text": "# **Methods**\n\n#### **Participant**\n\nOur participant (E.R.C.) was a healthy 38-year-old primiparous woman who underwent in-vitro fertilization (IVF) to achieve pregnancy. Previous studies reported no observable differences in neural changes from prepregnancy to postpregnancy between women who conceived naturally versus women who conceived via IVF13, and doing so provides a controlled way of monitoring pregnancy status. The participant experienced no pregnancy complications (for example, gestational diabetes and hypertension), delivered at full term via vaginal birth, nursed through 16 months postpartum, and had no history of neuropsychiatric diagnosis, endocrine disorders, prior head trauma or history of smoking. 
The participant gave written informed consent and the study was approved by the University of California, Irvine Human Subjects Committee.\n\n#### **Study design**\n\nThe participant underwent 26 MRI scanning sessions from 3 weeks before conception through 2 years postpartum (162 weeks), during which high-resolution anatomical and diffusion spectrum imaging scans of the brain were acquired. Scans were distributed throughout this period, including prepregnancy (four scans), first trimester (four scans), second trimester (six scans), third trimester (five scans) and postpartum (seven scans; Fig. 1c). The first 6 sessions took place at the UCSB Brain Imaging Center (BIC), the final 20 sessions took place at the UCI Facility for Imaging and Brain Research (FIBRE). The majority of scans took place between 9 AM and 2 PM, limiting significant AM–PM fluctuations49. The MRI protocol, scanner (Siemens 3T Prisma) and software (version MR E11) were identical across sites. Each scanner was checked weekly for the duration of the study and passed all QC reports indicating no significant alterations in the geometry. To ensure the robustness of the findings, after the final study session, the participant completed back-to-back validation scans at UCI and UCSB within a 12-h window to assess reliability between scanners. Intraclass correlation coefficients (two-way, random effects, absolute agreement, single rater) reveal 'excellent' test–retest reliability between scanners, including ROI-level GMV (ICC = 0.97, 95% CI: 0.80–0.99), ROI-level CT (ICC = 0.96, 95% CI: 0.90–0.98), MTL subfield volume (ICC = 0.99, 95% CI: 0.97–0.99) and ROI-level QA (ICC = 0.94, 95% CI: 0.91–0.97). Furthermore, when examining the relationship between gestation week and GMV among UCI-only gestational sessions, findings were consistent (Supplementary Fig. 12), indicating that site differences are highly unlikely to have contributed meaningfully to the observed effects. 
Although not applicable here, we note that having a control participant scanned over a similar duration within the same scanner is critical for estimating how much variation in the brain can be attributed to within-scanner variability.\n\nTo monitor state-dependent mood and lifestyle measures, the following scales were administered on each experiment day: Perceived Stress Scale50, Pittsburgh Sleep Quality Index51, State-Trait Anxiety Inventory for Adults52 and Profile of Mood States53. Correlation analyses between state-dependent measures, summary brain metrics and gestation week revealed little to no relationships. The only exception to this was a moderate negative association between global QA and state anxiety (Spearman's correlation (*ρ*) = −0.65, *q* = 0.04; baseline—36 weeks, *n* = 16). By making this data openly accessible, we encourage a more nuanced approach toward exploring mood and lifestyle measures in relation to brain changes over pregnancy.\n\n#### **Endocrine procedures**\n\nThe participant underwent a blood draw (*n* = 19; Fig. 1c) before MRI scanning. Sex steroid concentrations were determined via ultra-sensitive liquid chromatography–mass spectrometry at the Brigham and Women's Hospital Research Assay Core (BRAC). Assay sensitivities, dynamic range and intra-assay coefficients of variation were as follows: estradiol—1.0 pg ml−1, 1–500 pg ml−1, <5% relative s.d. (RSD); progesterone—0.05 ng ml−1, 0.05–10 ng ml−1, 9.33% RSD. Serological samples were not acquired in five sessions due to scheduling conflicts with UC Irvine's Center for Clinical Research.\n\n**MRI acquisition.** MRI scanning sessions at the University of California, Santa Barbara and Irvine were conducted on 3T Prisma scanners equipped with 64-channel phased-array head/neck coil (of which 50 coils are used for axial brain imaging). 
High-resolution anatomical scans were acquired using a T1-weighted (T1w) magnetization prepared rapid gradient echo (MPRAGE) sequence (repetition time (TR) = 2,500 ms, time to echo (TE) = 2.31 ms, inversion time (TI) = 934 ms, flip angle = 7°, 0.8 mm thickness) followed by a gradient echo field map (TR = 758 ms, TE1 = 4.92 ms, TE2 = 7.38 ms, flip angle = 60°). A T2-weighted (T2w) turbo spin echo scan was also acquired with an oblique coronal orientation positioned orthogonally to the main axis of the hippocampus (TR/ TE = 9,860/50 ms, flip angle = 122°, 0.4 × 0.4 mm2 in-plane resolution, 2-mm slice thickness, 38 interleaved slices with no gap, total acquisition time = 5 min and 42 sec). The Diffusion Spectrum Imaging (DSI) protocol sampled the entire brain with the following parameters: single phase, TR = 4,300 ms, echo time = 100.2 ms, 139 directions, *b*-max = 4,990, FoV = 259 × 259 mm, 78 slices, 1.7986 × 1.7986 × 1.8 mm voxel resolution. These images were linearly registered to the whole-brain T1w MPRAGE image. A custom foam headcase was used to provide extra padding around the head and neck, as well as to minimize head motion. Additionally, a custom-built sound-absorbing foam girdle was placed around the participant's waist to attenuate sound near the fetus during second-trimester and third-trimester scanning.\n\n**Image processing.** *Cortical volume and thickness*. CT and GMV were measured with Advanced Normalization Tools54 version 2.1.0 (ANTs). We first built a subject-specific template (SST) (antsMultivariateTemplateConstruction2) and tissue priors (antsCookTemplatePriors) based on our participant's two preconception whole-brain T1-weighted scans to examine neuroanatomical changes relative to the participant's prepregnancy baseline. We used labels from the OASIS population template, provided by ANTs, as priors for this step. For each session, the structural image was processed and registered to the SST using the ANTs CT pipeline (antsCorticalThickness). 
This begins with an N4 bias field correction for field inhomogeneity, then brain extraction using a hybrid registration/segmentation method55. Tissue segmentation was performed using Atropos54 to create tissue masks of CSF, gray matter, white matter and deep gray matter. Atropos allows prior knowledge to guide the segmentation algorithm, and we used labels from our SST as priors to minimize warping and remain in native participant space. CT measurements were then estimated using the DiReCT algorithm56, which estimates the gray–white matter interface and the gray matter–CSF interface and computes a diffeomorphic mapping between the two interactions, from which thickness is derived. Each gray matter tissue mask was normalized to the template and multiplied to a Jacobian image that was computed via affine and nonlinear transforms. Using MATLAB (version 2022a), summary, regional-level estimates of CT, GMV and CSF for each scan were obtained by taking the first eigenvariate (akin to a 'weighted mean'57) across all voxels within each parcel of the Schaefer 400-region atlas58. We then averaged ROIs across networks, which were defined by the 17-network Schaefer scheme58,59. Global measures of CT, GMV and CSF were computed for each session by summing across all voxels within the respective output image; total brain volume was computed by summing across all voxels within each session's brain extraction mask. Our findings held when using an SST derived from all 26 MRIs (prepregnancy through postpartum), as well as when estimating the mean (versus weighted mean) of all voxels within each parcel. 
The ANTs CT pipeline is highly validated with good test–retest reproducibility and improved ability to predict variables such as age and gender from region-wise CT measurements", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed4.pdf" - }, - { - "text": "Figure 11.2 Constraint Violation for the Chef Individual\n\nIf you click on the Manager individual, you will see that he has a constraint violation because his phone number is not in the proper format. Waiter1 has a similar problem. Waiter2 has missing data. Her hasPhone and ssn data properties both must have values but don't.\n\nIf you move your focus to the Customer class, you can see the remaining 3 constraint violations. Customer10's hasDiscount property is greater than 1 which is not allowed. This is defined by the CustomerShape in the hasDiscount node with sh:minInclusive 0.0 and sh:maxInclusive 1.0. This is the way you define a minimum and maximum value for a numeric property (note: this applies to the value not to the number of values). Customer2 also has a hasPhone value that doesn't match the defined format and finally Customer3 does not have a value for hasPhone when at least one is required.\n\nRecall that the SHACL constraints themselves are essentially RDF graphs. Figures 11.3 and 11.4 illustrate the Employee shape and the Customer shape used in the above example as graphed in the Gruff tool from AllegroGraph.", - "page_start": 80, - "page_end": 80, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "It is important to distinguish between strength and stiffness. Strength is simply the resistance to load while stiffness is the resistance to deflection or deformation. While strength and stiffness are related, it is necessary to appreciate that adequate structural strength does not automatically provide adequate stiffness. 
Thus, special consideration is necessary to provide the structural components with specific stiffness characteristics to prevent undesirable aeroelastic effects during normal operation.\n\nAn obvious solution to the apparent problems of static strength, fatigue strength, stiffness and rigidity would be to build the airplane like a product of an anvil works, capable of withstanding all conceivable loads. However, high performance airplane configurations cannot be developed with ineficient, lowly stressed structures. The effect of additional weight is best illustrated by preliminary design studies of a very long range, high altitude bomber. In the preliminary phases of design, each additional pound of any weight would necessitate a 25-pound increase in gross weight to maintain the same performance. An increase in the weight of any item produced a chain reaction-more fuel, larger tanks, bigger engines, more fuel, heavier landing gear, more fuel, etc. In the competitive sense of design, no additional structural weight can be tolerated to provide more strength than is specified as necessary for the design mission requirement.\n\n#### AIRCRAFT LOADS AND OPERATING LIMITATIONS\n\n#### FLIGHT LOADS-MANEUVERS AND GUSTS\n\nThe loads imposed on an aircraft in flight are the result of maneuvers and gusts. The maneuver loads may predominate in the design of fighter airplanes while gust loads may predominate in the design of the large multiengine aircraft. The maneuver loads an airplane may encounter depend in great part on the mission type of the airplane. However, the maximum maneuvering capability is of interest because of the relationship with strength limits.\n\nThe flight load factor is defined as the proportion between airplane lift and weight, where\n\n> n=L/W n= load factor L=lift, Ibs. W= weight, Ibs.\n\nMANEUVERING LOAD FACTORS. The maximum lift attainable at any airspeed occurs when the airplane is at CLmU. 
With the use of the basic lift equation, this maximum lift is expressed as:\n\n$$L_{max}=C_{L_{max}}{\\frac{1}{2}}\\rho V^{2}S$$\n\nSince maximum lift must be equal to the weight at the stall speed,\n\n$$W=C_{L_{max}}{\\frac{1}{2}}\\rho V_{s}^{2}S$$\n\nIf the effects of compressibility and viscosity on CL are neglected for simplification, the maximum load factor attainable is determined by the following relationship.\n\n$$n_{max}={\\frac{L_{max}}{W}}={\\frac{C_{L_{max}}{\\frac{1}{2}}\\rho V^{2}S}{C_{L_{max}}{\\frac{1}{2}}\\rho V_{s}^{2}S}}=\\left({\\frac{V}{V_{s}}}\\right)^{2}$$\n\nThus, if the airplane is flying at twice the stall speed and the angle of attack is increased to obtain maximum lift, a maximum load factor of four will result. At three times the stall speed, nine \"g's\" would result; four times the stall speed, sixteen g's result; five times the stall speed, twenty-five g's result; etc. Therefore, any airplane which has high speed performance may have the capability of high maneuvering load factors. The airplane which is capable of flight speeds that are", - "page_start": 348, - "page_end": 348, - "source_file": "00-80T-80.pdf" - }, - { - "text": "| Princeton Young Adult 3T ASHS Atlas Template (n=24, mean age = 22.5; Aly & Turk-Browne, 2016). |\n| --- |\n| Diffusion imaging: |\n| All diffusion images were reconstructed using the ICBM152 template. |\n| All T1-weighted images underwent denoising ('denoiseImage') and N4 bias field correction ('N4BiasFieldCorrection') for field |\n| inhomogeneity via ANTs. |\n| T2-weighted MTL scans: |\n| All T2-weighted MTL images underwent denoising ('denoiseImage') via ANTs. |\n| Diffusion: |\n| All diffusion images underwent denoising, motion and distortion correction using MRtrix3's dwidenoise and dwibiascorrect |\n| with the N4 algorithm. All diffusion images were quality checked using DSI studio's `QC1: SRC Files Quality Control.
All images passed QC checks.\n\nMotion:\n\nMean framewise displacement (FWD) estimates from gestation sessions with a 10-minute resting state scan (n = 18) were used to indirectly assess whether motion increased throughout pregnancy. Average FWD (millimeters) was extremely minimal across the entire experiment (M = 0.13, SD = 0.02, range = 0.09–0.17) and varied only slightly by pregnancy stage (pre: M = 0.11, SD = 0.004; first: M = 0.11, SD = 0.01; second: M = 0.13, SD = 0.02; third: M = 0.16, SD = 0.007; post: M = 0.13, SD = 0.01). While mean FWD did correspond with gestation week (r = 0.88, p < .001), controlling for this did not alter our\n\n### Volume censoring\n\nGray Matter Volume & Cortical Thickness:\n\nthat motion differences between stages were minuscule.\n\nAll images were visually assessed for QC. Further, we computed quality control (QC) assessments on all T1w images using the IQMs pipeline from MRIQC (Esteban et al., 2017). Metrics of interest included 1) coefficient of joint variation (CJV), 2) signal-to-noise ratio for gray matter (SNR), and 3) contrast-to-noise ratios (CNR). All QC metrics fell within expected standard ranges. We also used FreeSurfer's Euler number to evaluate a field-standard quantitative assessment of each T1w structural image. We observed no significant relationships between the Euler number and gestation week or summary brain metrics.
A discrepancy (e.g., 2 SD below average) was noted in session eight; however, again, removing this session did not detract from our main findings showing reductions in gray matter volume over gestation.\n\nmain findings (e.g., total GMV negatively associated with gestation; partial correlation: r = -0.87, p < 0.001) owing to the fact\n\nT2-weighted MTL scans:\n\nVolumes were visually assessed for QC. Volumes were removed from the analysis if unable to be reliably segmented.\n\nDiffusion imaging:\n\nAll images were assessed using the DSI studio quality control and a visual inspection. DSI studio performed an outlier check, labeling images as a \"low quality outlier\" if the correlation coefficient was greater than 3 standard deviations from the absolute mean. No images were labeled as a low quality outlier.\n\n### Statistical modeling & inference\n\nModel type and settings Summary brain metrics:\n\nTo reflect the existing literature, we first explored brain metrics across the entire study duration (pre-conception through postpartum). When including all sessions, total brain volume, GMV, CT, global QA, ventricle volume and CSF displayed nonlinear trends over time; therefore, we used generalized additive models (GAM; cubic spline basis, k = 10, smoothing = GCV), a method of non-parametric regression analysis (R package: mgcv), to explore the relationship between summary brain metrics (outcome variables) and gestation week (smooth term). Each model underwent examination (gam.check function) to ensure it was correctly specified with regards to 1) the choice of basis dimension (k) and 2) the distribution of the model residuals (see mgcv documentation; Wood, 2017). The general pattern of results held after toggling model parameters; however, we note the risk of overinterpreting complex models with small sample sizes (see Sullivan et al., 2015).
To address overfitting and cross-validate our basis type selection, we also fit the data using nonpenalized general linear models (GLM) with both linear and polynomial terms for gestation week. We compared the performance of each GLM (i.e., models using only a linear term vs. models with polynomial terms) via the Akaike information criterion (AIC), which revealed that cubic models consistently outperformed both linear and quadratic models (AICdiff > 3), providing additional evidence for non-linear changes in structural brain variables over time.\n\n#### Gray Matter Volume & Cortical Thickness:\n\nWe first computed Pearson's product-moment correlation matrices between the following variables (n = 19 pregnancy scans): gestation week, estradiol, progesterone, total GMV, and the 17 network-level average GMV values. We then ran a multivariate regression analysis predicting ROI-level GMV changes by gestation week. To identify which regions were changing at a rate different from the global decrease, we then re-ran the analyses to include total GMV as a variable of noninterest in the regression model. A similar statistical approach was taken for T1w-derived subcortical volume estimates. We ran a multivariate regression analysis predicting GMV changes over gestation in 28 regions-of-interest by gestation week (FDR-corrected at q < 0.05).\n\nT2-weighted MTL scans:\n\nTo evaluate the relationship between gestation week and medial temporal lobe (MTL) subregion volume over pregnancy (n = 7 bilateral subregions; n = 18 MTL scans), we used a combination of linear and non-linear models based on individual subregion data patterns. Models were compared for best fit with each subregion via AIC from the GLM output (as described", - "page_start": 16, - "page_end": 16, - "source_file": "pubmed4.pdf" - }, - { - "text": "performed an outlier check, labeling images as a 'low-quality outlier' if the correlation coefficient was >3 s.d. from the absolute mean. 
None of our scans were flagged as outliers. The reconstructed participant files were aggregated into one connectometry database per metric.\n\n*Day2Day control dataset*. To compare our findings against a control group of nonpregnant densely-sampled individuals, we used the Day-2Day dataset23 which offered comparable whole-brain T1 and T2 MTL scans for eight participants (two male) scanned 12–50 times over 2–7 months. Each participant was run through the ANTs CT and ASHS processing pipelines as outlined above ('Cortical volume and thickness' and 'Hippocampal segmentation'). To note, for each participant, we created an SST based on their first two sessions for consistency with the primary dataset; subfield volumes for the T2 MTL scans did not undergo manual retouching. Due to missing header information on the publicly available diffusion scans, we were unable to benchmark our white matter changes with the Day2Day dataset.\n\n**Statistical analysis.** Statistical analyses were conducted using R (sMRI; version 3.4.4) and DSI Studio (dMRI; Chen-2022-07-31).\n\n*Summary brain metrics*. To reflect the existing literature, we first explored brain metrics across the entire study duration (prepregnancy through postpartum, *n* = 26 scans). When including all sessions, total brain volume, GMV, CT, global QA, ventricle volume and CSF displayed nonlinear trends over time; therefore, we used generalized additive models (GAM; cubic spline basis, *k* = 10, smoothing = GCV), a method of nonparametric regression analysis (R package, mgcv76), to explore the relationship between summary brain metrics (outcome variables) and gestation week (smooth term). Each model underwent examination (gam.check function) to ensure it was correctly specified with regards to (1) the choice of basis dimension (*k*) and (2) the distribution of model residuals (see mgcv documentation in ref. 76). 
The general pattern of results held after toggling model parameters; however, we note the risk of overinterpreting complex models with small sample sizes77. To address overfitting and cross-validate our basis type selection, we also fit the data using nonpenalized general linear models (GLM) with both linear and polynomial terms for gestation week. We compared the performance of each GLM (that is, models using only a linear term versus models with polynomial terms) via the Akaike information criterion (AIC), which revealed that cubic models consistently outperformed both linear and quadratic models (AICdiff > 3), providing additional evidence for nonlinear changes in structural brain variables over time. Determining whether these patterns replicate in larger cohorts and whether complex models are better suited to capture data patterns across individuals will be a necessary next step.\n\n*Cortical GMV and CT*. We then narrowed our analyses to the first 19 sessions (baseline—36 weeks gestation) to assess novel brain changes occurring over the gestational window. We first computed Pearson's product-moment correlation matrices between the following variables: gestation week, estradiol, progesterone and the 17 network-level average GMV values. We then ran a multivariate regression analysis predicting ROI-level GMV changes by gestation week. To identify which regions were changing at a rate different from the global decrease, we then ran the analyses again to include total GMV in the regression model (Supplementary Table 2). This was extended to the network level, where we ran partial correlations accounting for total GMV. These same analyses were then run with CT measures. Globally-corrected results provided in Supplementary Tables 1–5. Percent change at the network level was computed by subtracting the final pregnancy value (36 weeks pregnant) from the first prepregnancy baseline value, then dividing that difference by said first prepregnancy baseline value. 
All analyses underwent multiple comparisons testing (false discovery rate (FDR)-corrected at *q* < 0.05).\n\n*Subcortical GMV*. A similar statistical approach was taken for subcortical volume estimates. We ran a multivariate regression analysis predicting GMV changes over gestation in 28 ROIs (Supplementary Fig. 6a) by gestation week (FDR-corrected at *q* < 0.05).\n\nTo evaluate the relationship between gestation week and MTL subregion volume over pregnancy (*n* = 7 bilateral subregions and *n* = 18 MTL scans), we used a combination of linear and nonlinear models based on individual subregion data patterns. Models were compared for best fit with each subregion via AIC from the GLM output (as described in 'Summary brain metrics'). A linear regression model was most appropriate for PHC (AICdiff < 3), whereas a quadratic model performed best for CA1 and CA2/CA3. As a control, we repeated the analyses with MTL subregion volumes after proportional volume correction of total GMV calculated by ASHS. Finally, we evaluated the relationship between endogenous sex hormones (estrogen and progesterone) and subregion volumes using linear regression. Relationships were considered significant only if they met FDR correction at *q* < 0.05.\n\n*White matter microstructure*. DSI Studio's correlational tractography74 was used to analyze the relationship between white matter structure and gestational week (*n* = 16). A truncated model was run to examine the relationship between white matter and sex steroid hormones (*n* = 14) for the subset of diffusion scans with paired endocrine data during gestation. A nonparametric Spearman's correlation was used to derive the correlation between gestational week and endocrine factors and our metrics of interest (QA and MD; see Supplementary Table 9 and Supplementary Fig. 10 for MD results) because the data were not normally distributed. 
Statistical inference was reached using connectometry, a permutation-based approach that tests the strength of coherent associations found between the local connectome and our variables of interest. It provides higher reliability and replicability by correcting for multiple comparisons. This technique provides a high-resolution characterization of local axonal orientation. The correlational tractography was run with the following parameters: *t* score threshold of 2.5, four pruning iterations and a length threshold of 25 voxel distance. To estimate the FDR, a total of 4,000 randomized permutations were applied to obtain the null distribution of the track length. Reported regions were selected based on FDR cutoff (FDR < 0.2, suggested by DSI Studio), and contained at least ten tracts. For visualization of global and tract QA at each gestational stage, mean QA values were extracted using DSI Studio's whole-brain fiber tracking algorithm and ROI-based tracking using the default HCP842 atlas78.\n\n*Day2Day dataset: measurement variability*. To establish a marker of normative variability over half a year, we computed metrics of measurement variability using the Day2Day dataset23, which provided both whole-brain T1 and high-resolution T2 MTL scans. For each region, *j*, of the Schaefer parcellation, we assessed across-session variability, *ε*, as\n\n$$\\varepsilon_{j}=100\\times\\mathrm{mean}\\left({\\frac{|t_{s}-{\\hat{t}}|}{{\\hat{t}}}}\\right)$$\n\nWhere *ts* is the morphometric measurement of a parcel for session *s* and *t* ̂ is the mean of *t* across sessions55,79. Thus, we defined variability as the mean absolute percent difference between each individual and the mean across sessions. Across-session variability estimates for all 400 regions were then averaged across eight participants, and a global measure of cortical GMV variability was computed by averaging across the 400 regions. 
This approach was repeated independently for the T2 hippocampal scans, wherein we computed across-session variability for each parcel of the ASHS parcellation scheme (*n* = 7 bilateral subfields). However, it is important to note that raw subfield values (that is, no manual retouching) were used for Day2Day variability assessments and should be interpreted with caution. Finally, to better compare against our own data, we repeated this approach using our", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed4.pdf" - }, - { - "text": "#### NAVWEPS 00-80T-80 OPERATING STRENGTH LIMITATIONS\n\nmany times the stall speed will require due consideration of the operating strength limits.\n\nThe structural design of the aircraft must consider the possibility of negative load factors from maneuvers. Since the pilot cannot comfortably tolerate large prolonged negative \"g\", the aircraft need not be designed for negative load factors as great as the positive load factors.\n\nThe effect of airplane gross weight during maneuvers must be appreciated because of the particular relation to flight operating strength limitations. During flight, the pilot appreciates the degree of a maneuver from the inertia forces produced by various load factors; the airplane structure senses the degree of a maneuver principally by the airloads involved. Thus, the pilot recognizes load factor while the structure recognizes only load. To better understand this relationship, consider an example airplane whose basic configuration gross weight is 20,000 lbs. At this basic configuration assume a limit load factor for symmetrical flight of 5.6 and an ultimate load factor of 8.4. If the airplane is operated at any other configuration, the load factor limits will be altered. The following data illustrate this fact by tabulating the load factors required to produce identical airloads at various gross weights.\n\n| Gross weight, lbs. | Limit load factor | Ultimate load factor |\n| --- | --- | --- |\n| 20,000 (basic) | 5.60 | 8.40 |\n| 30,000 (max. takeoff) | 3.73 | 5.60 |\n| 13,333 (min. fuel) | 8.40 | 12.60 |\n\nAs illustrated, at high gross weights above the basic configuration weight, the limit and ultimate load factors may be seriously reduced. For the airplane shown, a 5-g maneuver immediately after a high gross weight takeoff could be very near the \"disaster regime,\" especially if turbulence is associated with the maneuver. In the same sense, this airplane at very low operating weights below that of the basic configuration would experience greatly increased limit and ultimate load factors.\n\nOperation in this region of high load factors at low gross weight may create the impression that the airplane has great excess strength capability. This effect must be understood and intelligently appreciated since it is not uncommon to have a modern airplane configuration with more than 50 percent of its gross weight as fuel.\n\nGUST LOAD FACTORS. Gusts are associated with the vertical and horizontal velocity gradients in the atmosphere. A horizontal gust produces a change in dynamic pressure on the airplane but causes relatively small and unimportant changes in flight load factor. The more important gusts are the vertical gusts which cause changes in angle of attack. This process is illustrated in figure 5.2. The vectorial addition of the gust velocity to the airplane velocity causes the change in angle of attack and change in lift. The change in angle of attack at some flight condition causes a change in the flight load factor. The increment change in load factor due to the vertical gust can be determined from the following equation:\n\n$$\\Delta n=0.115\\;\\;\\frac{m\\sqrt{\\sigma}}{(W/S)}\\;\\;V_{e}\\,(KU)$$\n\nwhere\n\n- Δn = change in load factor due to gust\n- m = lift curve slope, unit of CL per degree of α\n- σ = altitude density ratio\n- W/S = wing loading, psf\n- Ve = equivalent airspeed, knots\n- KU = equivalent sharp edged gust velocity, ft. per sec.\n\nAs an example, consider the case of an airplane with a lift curve slope m=0.08 and wing loading (W/S)=60 psf. If this airplane were flying at sea level at 350 knots and encountered an effective gust of 30 ft. per sec., the gust would produce a load factor increment of 1.61. This increment would be added to the flight load factor of the airplane prior to the gust,", - "page_start": 349, - "page_end": 349, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed13.pdf", - "query": "What is typical age at which multiple sclerosis is diagnosed ?", - "target_page": 2, - "target_passage": "Multiple sclerosis (MS) is a progressive inflammatory disease of the central nervous system (CNS) that is typically diagnosed at 30– 40 years of ag", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "**Figure 11: Number of recent (within two years) OCU initiates presenting to treatment in 2005 and 2013, by age of individual at first presentation.**\n\nThe mode age of initiation has shifted from around 18 to around 25 and there is an older age profile throughout. Rises in average age of initiation have also been reported recently in cohorts of Australian injecting drug users (Horyniak et al., 2015). There appear to be two possible explanations.\n\n- There is a genuine shift towards new initiates being older, and for them to present to treatment much faster than in previous years.\n- There is a consistent, but small number of individuals who mis-report their age of onset when attending treatment i.e.
who report that they have only been using opiates/crack for a short period when in fact they have been using for a far longer period, and that this is starting to really bias the numbers for recent cohorts because attendees from the original epidemic are becoming smaller.\n\nIt is possible then that the flattening we observe in the incidence trend is due to a small in-flux of older initiates, although mis-reporting may also explain that phenomenon. Either way though, as this analysis has made clear throughout, absolute numbers of new OCUs appear to be small – probably fewer than 10,000 per annum and the numbers of those involved with crime will be smaller still. In addition, despite a flattening in the probable trend in new users, there is currently no sign that it is likely to tip upwards. If anything, the data suggest the downward trend is set to resume, though clearly it remains important to monitor the situation.", - "page_start": 28, - "page_end": 28, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "## Take-home Points\n\nStudy Question: How profoundly are adults with undiagnosed respiratory symptoms affected by dyspnea?\n\nResults: In community-based adults with undiagnosed respiratory symptoms, those identified with preserved ratio impaired spirometry experienced the greatest impact of dyspnea, followed by those with undiagnosed asthma or COPD. 
Greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nInterpretation: Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity.\n\nDyspnea refers to a subjective sensation of breathing discomfort.1 In a study involving a community-based population aged > 70 years, the prevalence of dyspnea was found to be 32%.2 Dyspnea can lead to limitations in daily activities, reduced exercise tolerance, and heightened mortality risks.3\n\nDyspnea not only affects individuals with diagnosed respiratory conditions but also poses a significant burden on those with undiagnosed conditions. In a systematic review by Müller et al,4 the combined\n\n#### Study Design and Methods Recruitment of Undiagnosed Cases and Healthy Control Patients\n\nBetween June 2017 and January 2023, adults aged $ 18 years were recruited through a two-step process into the Undiagnosed COPD and Asthma Population (UCAP) study, a multicenter case finding study. Approval for prevalence of dyspnea in the adult general population across 11 studies was estimated to be 10%. Dyspnea can arise from a broad spectrum of underlying factors, including both respiratory and nonrespiratory conditions. Studies have revealed that dyspnea is not solely attributable to respiratory conditions but is also heavily influenced by cardiovascular deconditioning and by nonrespiratory factors, including psychosocial, social, and environmental determinants.5,6\n\nDyspnea is a prevalent symptom with consequences that extend beyond its physiologic implications. A study in European patients with COPD explored the burden of dyspnea and identified potential correlates. 
The study revealed that higher dyspnea impact correlated with lower health-related quality of life, increased work impairment, and a higher frequency of emergency department visits.7\n\nThe three objectives of our study were as follows: (1) to evaluate the impact of dyspnea in adults from the general population who had no prior diagnosis of respiratory disease but who reported having significant respiratory symptoms in the past 6 months; (2) to identify associated risk factors for dyspnea and estimate their influence on the symptom; and (3) to explore the relationship between dyspnea and health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\nthe study was obtained from the research ethics boards of the 17 participating study sites across Canada. Informed, written consent was provided by all study participants.\n\nBoth landlines and cellphones within a 90-minute radius of any of the 17 study sites were dialed randomly. A\n\nDOI: https://doi.org/10.1016/j.chest.2024.07.183\n\nABBREVIATIONS: ASQ = Asthma Screening Questionnaire; BD = bronchodilator; CAT = COPD Assessment Test; PCA = principal component analysis; PRISm = preserved ratio impaired spirometry; SGRQ = St. George's Respiratory Questionnaire\n\nAFFILIATIONS: From The Ottawa Hospital Research Institute (J. B., E. G., K. L. V., G. G. A., S. M., and S. D. A.), University of Ottawa, Ottawa, ON; the Desautels Faculty of Management (G. A. W.), McGill University, Montreal, QC; the Department of Medicine (C. B.), The University of British Columbia, Vancouver, BC; the Centre de recherche (L.-P. B. and A. C.), Institut de cardiologie et de pneumologie de Québec, Université Laval, Quebec, QC; the Cumming School of Medicine (S. K. F.), University of Calgary, Calgary, AB; the Department of Medicine (E. P.), University of Saskatchewan, Regina, SK; the Firestone Institute for Respiratory Health (R. A. 
M.), McMaster University, Hamilton, ON; the Department of Medicine (C. L.), Université de Montreal, Montreal, QC; the Department of Medicine and the Li Ka Shing Knowledge Institute (S. G.), St. Michael's Hospital University of Toronto, Toronto, ON; the Department of Medicine\n\n(P. H.), Dalhousie University, Halifax, NS; the Department of Medicine (I. M. and M. B.), University of Alberta, Edmonton, AB; the Department of Medicine (M. D. L.), Queen's University, Kingston; the Department of Medicine (C. J. L.), University of Western Ontario, London, ON; the Department of Medicine (T. A.), Memorial University, St. John's, NF; the Department of Medicine (N. E.), McGill University, Montreal, QC; the Department of Medicine (M. A.), University of Manitoba, Winnipeg, MN, Canada.\n\nDrs Bierbrier and Gerstein contributed equally to this manuscript.\n\nPart of this work has been presented at the American Thoracic Society Conference, May 17-22, 2024, San Diego, CA.\n\nCORRESPONDENCE TO: Shawn D. Aaron, MD; email: saaron@ohri.ca Copyright 2024 The Author(s). Published by Elsevier Inc under license from the American College of Chest Physicians. This is an open access article under the CC BY license (http://creativecommons.org/ licenses/by/4.0/).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "Even in the **short period between 2013 and 2018** (the period covered by these pilot statistics) the data show an overall decline and a decline of several relevant occupational diseases. The strongest decrease — practically a halving — can be seen for hearing impairments (diseases of the inner ear). Pneumoconiosis, mesothelioma and selected occupational cancers went down between 7% and 14%. 
**Asthma and some recognised MSDs** are more or less stagnating, probably due to unchanged exposure to biological or chemical substances and no change regarding the health outcomes of ergonomic working conditions.\n\nIf work is **one of some** causative factors, a clear assignment of work to a health outcome is complex. Moreover, in many cases a quite **long observation period** is necessary simply due to the **latency time between exposure at work, outbreak and detection of a disease**, which is obviously very different from the clear and immediate consequence of an accident at work.\n\nThe detection of a disease and the correlation between work and this disease depends highly on the **monitoring capacities of the health system and its ability, tradition and standards to connect diseases and work-related causes**. In a study on 'Asbestos‐related occupational diseases in Central and East European Countries' the authors refer to different policies for identifying workers formerly exposed to asbestos and conclude:\n\n*'Consequently, large differences are observed from one country to another regarding the number of recognised asbestos-related cases. In Slovenia, for example, the annual asbestosis rate (cases of asbestosis/population) amounts to 14.9, in Croatia 5.3, and in Poland 2.1. Moreover, in Estonia, the incidence of asbestosis is unknown as there is no systematic collection of data.'*181\n\nFor example, until now very few occupational diseases have been recognised as outcomes of psychosocial risks at work. The ILO proposes in its 'List of Occupational Diseases Recommendation' a large number of very specific and 'classic' occupational diseases — a very broad definition of *'Mental and behavioural disorders'* but leaving the responsibility to science and to 'national conditions'. 
182 Similarly, the development of the European Schedule of Occupational Diseases (ESOD) aims to improve knowledge, step up prevention and provide assistance in linking occupational activities and diseases.", - "page_start": 74, - "page_end": 74, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "*one causal agent and relatively easy to identify. On the other hand, there are all sorts of disorders without strong or specific connections to occupation and with numerous possible causal agents.'* 176\n\nSome professions and regular work tasks had and have very specific risks, for example, hearing disability through high noise levels, or musculoskeletal diseases caused by permanent repetition of a certain movement or posture, or specific cancers after exposure to carcinogenic chemical substances, infections in healthcare or work in laboratories, or allergies to natural substances in agriculture. Some examples are:\n\n#### **Occupation, work task, exposure Occupational disease**\n\n- \n- highly repetitive hand and arm movements ► epicondylitis\n- quartz dust ► silicosis\n- working long hours in a kneeling position ► bursitis\n- extensive UV exposure ► skin cancer\n- aromatic amines ► bladder cancer\n- professional musicians ► focal dystonia\n- grain dust (agriculture) ► allergies, asthma\n\n- healthcare of infected persons ► infection with the same disease\n\t-\n\t-\n\t-\n\t-\n\t-\n\t-\n\t-\n\nSpecific and strong connections between **a risk and an outcome (risk pairs)** are **covered by occupational disease recognition schemes** in the EU Member States. 177 Some countries have opening options in their list systems, that is, in principle every disease with a dominant cause in working conditions can be recognised. 
However, many court cases about the recognition of occupational diseases demonstrate that a clear cause-effect relationship is not always evident, that is, due to missing workplace exposure data from the past or competing causes in private circumstances. All occupational diseases with a principally unambiguous relation between cause and consequence account only for a small percentage of all **work-related diseases**.178\n\nWe can observe a **decrease of some of the major recognised diseases**, 179 either triggered by preventive measures or triggered by shifts of workforce to sectors with less recognised occupational diseases. The **new experimental EODS Statistics of Eurostat** 180 documents the following developments of recognised occupational diseases.\n\n| Disease | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| Selected occupational cancers | 100 | 100 | 93.8 | 94.6 | 91.9 | 81.5 | 86.3 |\n| Mesothelioma | 100 | 98.5 | 97.8 | 96.2 | 100 | 92.7 | 86.2 |\n| Asthma | 100 | 100 | 100 | 101.1 | 88.9 | 81.8 | 97.2 |\n| Pneumoconiosis | 100 | 107.2 | 91.2 | 76.8 | 80 | 66.9 | 92.9 |\n| Contact dermatitis | 100 | 100 | 98.1 | 101.7 | 101.1 | 98.4 | 77.8 |\n| Selected musculoskeletal diseases | 100 | 104.3 | 100 | 111.6 | 106.5 | 100 | 99.8 |\n| Other diseases of the inner ear | 100 | 87.4 | 91.3 | 78.9 | 81.1 | 78 | 53.7 |\n| Total | 100 | 102.2 | 100 | 100 | 98.5 | 86.2 | 93.3 |\n\n#### **Table 21: Development of recognised occupational diseases in the EU 2013-2019**", - "page_start": 73, - "page_end": 73, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "bronchial challenge testing into a case finding strategy identified asthma in 26% of symptomatic individuals who had normal spirometry and no response to BD.27\n\nIndividuals with undiagnosed respiratory symptoms, determined to have asthma or COPD through spirometry, experience poor health status.28 Therefore, the 
implementation of known treatment approaches for asthma or COPD is important to improve their conditions.29 In contrast, those with normal spirometry or PRISm face unclear treatment approaches. Longacting BD therapy in symptomatic individuals with tobacco exposure with normal spirometry is not effective.30 Weight management programs may be useful for individuals who are obese with PRISm-related dyspnea; however, this awaits definitive clinical trials.31\n\nDyspnea was severe and prevalent within our study group; however, it remained undiagnosed. A study conducted by Stefan et al32 revealed that physicians underestimated their patients' dyspnea 37.9% of the time, whereas nurses underestimated it 3.5% of the time. Moreover, many patients limit their physical activities, which lead them to downplay the extent of their dyspnea.19 Patient underreporting of symptoms, coupled with inadequate physician-led investigations of symptoms, may explain why dyspnea often goes undiagnosed in the population.33\n\nIn conclusion, our study measured dyspnea impact in individuals with no preexisting diagnosis of lung disease who reported respiratory symptoms as part of a purposeful case finding strategy. Individuals with PRISm exhibited the greatest impact of dyspnea, even higher than those newly diagnosed with asthma or COPD. After adjusting for patient factors, comorbidities, pulmonary diseases, and severity of lung physiologic impairment, most of the variability in dyspnea remained unexplained. We also showed that dyspnea was associated with increased health care utilization, impaired quality of life, and work productivity.\n\n## Funding/Support\n\nThis study is supported by the Canadian Institutes of Health Research [FDN Grant 154322].\n\n# Financial/Nonfinancial Disclosures\n\nNone declared.\n\n# Acknowledgments\n\nAuthor contributions: S. D. A. and G. A. W. contributed to conception and design. J. B., E. G., G. A. W., K. L. V., and S. D. A. 
contributed to analysis and interpretation. J. B., E. G., G. A. W., K. L. V., S. D. A., C. B., C. L., L.-P. B., A. C., E. P., S. K. F., S. G., R. A. M., I. M., M. B., P. H., M. D. L., M. A., C. J. L., T. A., N. E., G. G. A., and S. M. contributed to drafting the manuscript for important intellectual content. All authors had access to and participated in the interpretation of the data and provided input into the preparation and submission of the manuscript. The authors vouch for the accuracy and completeness of the data.\n\nRole of sponsors: The sponsor had no role in the design of the study, the collection and analysis of the data, or the preparation of the manuscript.\n\nOther contributions: We thank the following individuals from the Canadian study sites: Ottawa Hospital Research Institute, Ottawa, Ontario: Taylor Poulin; Susan Deveau, RRT; Victoria Thompson; Meredith McCleery; Angelina Tohme; Vicky Panteleakos, RRT; Geneviève Longtin, RRT; Joanne Cassidy, RRT; Amanda Bergeron, MSc; Jennifer Biggs, RN; Jessica Bergeron; and Elisabet White; Vancouver General Hospital, Vancouver, British Columbia: Shelley Abercromby, BSc; Jana Caine; David Savage; Natasha Verzosa; Ravneet Mahal; and Mary Justine Angeles; Queen Elizabeth II Health Sciences Centre, Halifax, NS: Scott Fulton, RRT; Hôpital du Sacré Coeur de Montréal, Montréal, QC: Simone Chaboillez, MT; and Meliza Benabdallah; St. Joseph's Hamilton, Hamilton, ON: Liz Johnson; St. Boniface Hospital, Winnipeg, MB: Cheryl Noble, RN; Institut Universitaire de Cardiologie et de Pneumologie de Québec-Université Laval, Québec, QC: Johane Lepage, BSc; Joanne Milot, RN; and Christiane Balizet, RN; University of Calgary, Calgary, AB: Lisette Machado, MD; and Curtis Dumonceaux, BSc; University of Alberta, Edmonton, AB: Miranda Bowen, RRT; Fay Hartt; Angie Hillaby, RRT; and Amy Haartsma, RRT; St. 
Michael's Hospital, Toronto, ON: Stephanie Segovia, PhD; and Carolyn Spiegel-Feld; Queen's University Kingston General Hospital, Kingston, ON: Ann Taite, BSc; Alison Morra, BScN; Emma Bullock, HBSc; and Taylar Wall, RRT; University of Saskatchewan Royal University Hospital, Saskatoon, SK: Nancy Zacher; Janet Baran, RN; and Yessica Lopez, BA; London Health Sciences Centre - Victoria Hospital, London, ON: Katie Maguire; Heba Almadhoun; and Robert Campbell-Pereira, BSc; St. Clare's Mercy Hospital, St John's, NL: Sarah Anthony, BNRN; and Tanya Nolan, BNRN; McGill University Health Centre, Montreal, QC: Francine Noel; Royal Victoria Regional Health Centre, Barrie, ON: Masoud Mahdavian; and Ashley Brown, RRT; and Michael Garron Hospital, Toronto, ON: Ian Fraser; Han Byul (Liz) Lee; and Yuna Lee, BA. We would also thank Dong Vo We (data manager, Ottawa Hospital Research Institute, Ottawa, ON). We also thank the thousands of study participants who gave their time and came in for the study visits. We also thank ASDE Survey Sampler, Inc (Gatineau, QC, Canada) for organizing the random digit dialing.\n\n# References\n\n- 1. Parshall MB, Schwarthzstein RM, Adams L, et al. An Official American Thoracic Society Statement: update on the mechanisms, assessment, and management of dyspnea. Am J Respir Crit Care Med. 2012;185:435-452.\n- 2. Ho SF, O'Mahony MS, Steward JA, et al. Dyspnoea and quality of life in older people at home. Age Ageing. 2001;30: 155-159.\n- 3. Laviolette L, Laveneziana P. Dyspnoea: a multidimensional and multidisciplinary approach. Eur Respir J. 2014;43: 1750-1762.\n- 4. Müller A, Mraz T, Wouters EFM, et al. Prevalence of dyspnea in general adult populations: a systematic review and meta-analysis. Respir Med. 2023;218: 107379.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "# Impact of Dyspnea on Adults With Respiratory Symptoms Without a Defined Diagnosis\n\nJared Bierbrier, BSc; Emily Gerstein; George A. 
Whitmore, PhD; Katherine L. Vandemheen, MScN; Celine Bergeron, MD; Louis-Philippe Boulet, MD; Andreanne Cote, MD; Stephen K. Field, MD; Erika Penz, MD; R. Andrew McIvor, MD; Catherine Lemière, MD; Samir Gupta, MD; Paul Hernandez, MD; Irvin Mayers, MD; Mohit Bhutani, MD; M. Diane Lougheed, MD; Christopher J. Licskai, MD; Tanweer Azher, MD; Nicole Ezer, MD; Martha Ainslie, MD; Gonzalo G. Alvarez, MD; Sunita Mulpuru, MD; and Shawn D. Aaron, MD\n\n> BACKGROUND: We investigated dyspnea; its associated risk factors; and its impact on health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\n> RESEARCH QUESTION: What is the impact of dyspnea in adults with undiagnosed respiratory symptoms?\n\n> STUDY DESIGN AND METHODS: This population-based study included 2,857 adults who were experiencing respiratory symptoms. These individuals had not been previously diagnosed with any lung conditions and were recruited from 17 Canadian centers using random digit dialing. Each participant underwent spirometry testing both before and after using a bronchodilator to determine if they met the diagnostic criteria for COPD, asthma, or preserved ratio impaired spirometry (PRISm), or if their spirometry results were normal. An agematched control group (n ¼ 231) was similarly recruited using random digit dialing. A dyspnea impact assessment score from 0 to 100 was produced using questions from the COPD Assessment Test and St. George's Respiratory questionnaire.\n\n> RESULTS: Individuals with PRISm (n ¼ 172) reported more impactful dyspnea (mean score, 63.0; 95% CI, 59.5-66.4) than those with undiagnosed asthma (n ¼ 265; mean score, 56.6; 95% CI, 53.9-59.3) or undiagnosed COPD (n ¼ 330; mean score, 57.5; 95% CI, 55.1-59.9). All groups reported significantly more impactful dyspnea than the control group (mean score, 13.8; 95% CI, 11.8-15.7). 
Patient-specific risk factors including age, sex, BMI, smoking, and comorbidities explained 20.6% of the variation in dyspnea. An additional 12.4% of the variation was explained by disease classification and another 1.7% by the severity of lung function impairment assessed with spirometry. After adjusting for age, sex, and BMI, greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\n> INTERPRETATION: Our findings showed that in community-based adults with undiagnosed respiratory symptoms, those identified with PRISm experienced the greatest impact of dyspnea. Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity. CHEST 2024; 166(6):1296-1308\n\nKEY WORDS: asthma; case finding; COPD; dyspnea\n\nFOR EDITORIAL COMMENT, SEE PAGE 1259", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "before 1960 was removed and because DIP tests are only administered to those aged 18 and over, so only using data to 2013 means it would not be possible for anyone to be born in 1996 or afterwards to be included. Even so, it is clear from the year-of-birth distribution (Figure 2) that positive opiate tests drop off sharply for those born after 1982. This is in line with other evidence suggesting that the number of *new* users of opiates decreased sharply in the 2000s. This needs to be considered when interpreting the analysis that follows. When DIP and the NDTMS treatment system began in the mid-2000s, there already existed a cohort of around 320,000 OCUs, according to available estimates by Hay *et al*., (2013). And most of these individuals began using opiates/crack during the epidemic years of the 1980s and 1990s. 
In terms of data capture this means it is hard to separate the gradual inclusion of more and more individuals from this original cohort from genuinely new users of these drugs.\n\nFigure 3, which shows the age of the individual at a positive test, also reveals that although the average age at positive test is 32, the peak is quite flat, with high numbers of positive tests still being recorded by individuals in their late 30s and even into their 40s.", - "page_start": 9, - "page_end": 9, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "The strong differences in the **expectations to do the job until 60 years of age** are probably also caused by the circumstance that the labour market for physically demanding jobs is more rigid. For example, one serious musculoskeletal issue might mean being out of a manual job far before the pension age. For diseases caused by excessive psychosocial burden, other difficulties can be observed: the recognition as work-related is less accepted, work-related and private life causes are closely intertwined, and the diagnosis can be difficult.", - "page_start": 96, - "page_end": 96, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "224 Pega et al., 2022: Global, regional and national burden of disease attributable to 19 selected occupational risk factors for 183 countries, 2000–2016: A systematic analysis from the WHO/ILO Joint Estimates of the Workrelated Burden of Disease and Injury, here\n\n225 Kauppinen et al., 1998: Occupational exposure to carcinogens in the European Union in 1990-1993: international information system on occupational exposure to carcinogens, here CAREX Canada\n\nFevotte et al., 2011: Matgéné: A Program to Develop Job-Exposure Matrices in the General Population in France Mannetje et al., 2011: Developing a general population job-exposure matrix in the absence of sufficient exposure monitoring data\n\n226 YLDs = years lived with disability, 
together with YLLs = years of life lost, it composes the DALY (DALY = YLL + YLD).\n\n227 GBD 2019 Mental Disorders Collaborators, 2022: Global, regional, and national burden of 12 mental disorders in 204 countries and territories, 1990–2019: a systematic analysis from the Global Burden of Disease Study 2019, here\n\n228 WHO: Mental disorders, Key facts and\n\nIHME: Global Health Data Exchange (GHDx), here\n\n229 OECD, 2015: Sick on the Job?: Myths and Realities about Mental Health and Work\n\n230 OECD/European Union, 2018: Health at a Glance: Europe 2018: State of Health in the EU Cycle\n\n231 Andlin-Sobocki et al., 2005: Cost of disorders of the brain in Europe\n\n232 Niedhammer et al.; 2021: Update of the fractions of cardiovascular diseases and mental disorders attributable to psychosocial work factors in Europe, here\n\n233 Norder et al., 2017: Beyond return to work from sickness absence due to mental disorders: 5-year longitudinal study of employment status among production workers, here\n\n234 Leka & Jain, 2017: EU Compass for Action on Mental Health and Well-Being - Mental Health in the Workplace in Europe\n\n235 Musculoskeletal disorders refer to backache and/or muscular pains in shoulders, neck, upper limbs and/or lower limbs (hips, legs, knees, feet, etc.). In the medical systematic it is the IC 10 group of diseases: Diseases of the musculoskeletal system and connective tissue.\n\n236 EU-OSHA, 2019: Work-related musculoskeletal disorders: prevalence, costs and demographics in the EU 237 Graveling, 2018: Ergonomics and Musculoskeletal Disorders (MSDs) in the Workplace. A Forensic and Epidemiological Analysis\n\n238 Da Costa & Viera, 2010: Risk factors for work-related musculoskeletal disorders: a systematic review of recent longitudinal studies, here\n\n239 EU-OSHA, 2020: Work-related musculoskeletal disorders: why are they still so prevalent? Evidence from a literature review (p. 
15).\n\n240 EU-OSHA, 2019: Summary - Work-related musculoskeletal disorders: prevalence, costs and demographics in the EU (p. 8).\n\n241 EU-OSHA, 2019: Work-related musculoskeletal disorders: prevalence, costs and demographics in the EU 242 Ibid., p. 174ff.\n\n243 Eurofound, 2007: Fourth European Working Conditions Survey (2005) (p. 77).\n\n244 United Nations Economic Commission for Europe (UNECE), 2015: Handbook on measuring quality of employment: A statistical framework, here\n\n245 Quinlan & Bohle, 2013: Re-invigorating industrial relations as a field of study: Changes at work, substantive working conditions and the case of OHS, here (p. 8).\n\n246 The percentages of responses to this question in the European Working Conditions Survey (EWCS, 2015) are displayed. Each bar shows the percentages of the four possible responses for each EU Member State, the average for the EU Member States, and the responses for Switzerland and Norway. Responses are displayed for the question below: How satisfied are you with working conditions in your main paid job? Answer options were: Not at all satisfied; Not very satisfied; Satisfied; Very satisfied. See here\n\n247 Flash Eurobarometer 398, 2014, p 2, https://www.cesi.org/wp-content/uploads/2014/04/fl_398_sum_en.pdf . The displayed Flash Eurobarometer data refer to the 'working population', with two subgroups A (employees and manual workers), and B (self-employed). In the Flash Eurobarometer sample these two groups are separated from three further groups forming the 'Not working' population These groups are: subgroups: students, retired, looking for a job.\n\n248 Ibid., p. 58.\n\n249 Eurofound, 2007: Fourth European Working Conditions Survey (2005) (pp. 77-81).", - "page_start": 149, - "page_end": 149, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "community healthcare in the two municipalities. 
The project team included three individuals representing users from the Nordland MS Association, along with an MS nurse and a neurologist from the MS-outpatient clinic, and three physiotherapists/ researchers.\n\n## 2.4 Research team and reflexivity\n\nAll researchers on the team are clinical specialists in neurological physiotherapy. BN and ECA developed the CoreDISTparticipation intervention, and SSHD contributed to the development of the outdoor part.\n\nThe researchers' closeness to the intervention and the clinical field may have strengthened the depth and relevance of their interpretations in this study (27), as it was easy to understand what participants described and helped form follow-up questions during the interviews. However, closeness may also produce a risk of \"blind spots\", as the researchers may prejudice participants' experiences, omitting questions where the answers are believed to be obvious (27). Thus, throughout the process, trustworthiness and rigor were enhanced by discussing the methodology, findings, and interpretations with external researchers (including specialists in enactive theory), as well as user representatives. The presented theoretical framework (enactive theory) enhanced the distance to the material, as recommended in qualitative research (28).\n\n#### 2.5 Recruitment and participants\n\nPrior to recruitment, the study was introduced to individuals with multiple sclerosis (pwMS) through a seminar hosted by the Nordland MS Association. Additionally, seminars were conducted for health professionals in community healthcare and at the regional hospital. Written information about this study (and the RCT) was sent from the MS clinic at the regional hospital by post to all eligible individuals affiliated with the hospital. Individuals who wished to participate signed the attached consent form and returned it in the pre-stamped envelope. 
The inclusion criteria were as follows: had been diagnosed with MS, had a score on the Expanded Disability Status Scale (EDSS) (29) of ≤3.5, was ≥18 years, was employed (10%–100% of full-time) and residential address in the two predefined municipalities. The exclusion criteria were as follows: pregnancy, exacerbation of symptoms within two weeks prior to enrollment and other serious conditions compromising balance, walking or work capacity. All participants in the intervention group of the RCT (n = 15) were included (Table 3).\n\n#### 2.6 Data collection\n\nThe interview guide (Table 4) was developed based on literature reviews, clinical experience and discussions within the research group and with user representatives. Two test interviews were TABLE 3 Participant demographic information.\n\n| Variable | Total (n = 15) |\n| --- | --- |\n| Age in years | Mean 47.6 (SD 6.04) |\n| Gender (women/men) | 12 woman/3 men (80%/20%) |\n| Type of MS | Relapsing remitting 15 (100%) |\n| EDSS | Mean 1.8 (SD 0.9) |\n| Years since diagnosis | Mean 10.4 (SD 7.8) |\n| Participation in the outdoor group | Mean 4.6 sessions/total mean attendance 57.3% |\n\nTABLE 4 Interview guide.\n\n| Theme | Potential questions |\n| --- | --- |\n| Overall experiences and | Generally, what are your main experiences of |\n| reflections from participation | participation? |\n| | What did you perceive as meaningful? |\n| | What did you perceive as negative? |\n| Content | How did you experience: |\n| | • The content of the sessions in general |\n| | • The high-intensity walking/running |\n| | • The specific exercises |\n| | • The combination of specific exercises and |\n| | intervals of running/walking |\n| | • The exercise intensity |\n| | How did you respond to the exercises? How did |\n| | you experience getting tired? |\n| | How do you perceive your specific movement |\n| | impairments (if any) being addressed? 
|\n| | Please elaborate on situations where you |\n| | experienced the feeling of mastery/failure. |\n| | If anything: What was challenging? What would |\n| | you prefer to have been done differently? What |\n| | did you enjoy? |\n| | What was the value of participating in the |\n| | indoor exercise group beforehand? |\n| | How did you experience this kind of exercise |\n| | intervention compared to other type of exercise |\n| | you may have experience with? |\n| The role of the physiotherapists | What did the physiotherapists do? What was |\n| | the value of this to you? |\n| The group setting | How did you experience the group setting? |\n| | How did you perceive the atmosphere in the |\n| | group? |\n| The outdoor environment | How was it to exercise outdoors? |\n| | How did you perceive the city park |\n| | environment for exercise? |\n| Closing questions | Are there any experiences from participation |\n| | that you would like to elaborate on? Is anything |\n| | related to this project that we have not talked |\n| | about that you would like to say? |\n| | How did you experience this interview? |\n\nOverall participants were asked to describe situations to exemplify their answers, and follow-up questions were used to capture in-depth reflections, for example, What was positive/negative?, How did it feel?, What do you think of that?, What does it mean to you?, Can you elaborate on that?.\n\nconducted (with pwMS who were not part of the sample), and the interview guide was then refined around the following themes: overall experience and reflections from participation, content, outdoor setting, the group, and the physiotherapists. Questions were open-ended to capture rich, in-depth reflections regarding participants' experiences, following a phenomenological approach. 
The interviewer asked for both negative and positive experiences", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed13.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed13.pdf", - "query": "What was the average year of the group that participated to the study concerning the impact of outdoor pysiotherapy on patient with multiple sclerosis", - "target_page": 4, - "target_passage": "Age in years Mean 47.6", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "Discussion: High-intensity training combined with detailed exercises in a physiotherapy outdoor group was perceived to create meaningful bodily changes and enhance PA and prospects for both PA and life. Importantly, however, some negative experiences were also reported from the high-intensity training. Enactive theory allowed for the illumination of new perspectives: the importance of embodiment for self-efficacy and of tailored physiotherapy and an outdoor-group environment for exploring one's own limits to physical capabilities. These aspects should inform future exercise interventions in pwMS with low disability.\n\n#### KEYWORDS\n\nphysical activity, physiotherapy, multiple sclerosis, qualitative study, exercise therapy, postural balance, enactive theory\n\n#### 1 Introduction\n\nMultiple sclerosis (MS) is a progressive inflammatory disease of the central nervous system (CNS) that is typically diagnosed at 30– 40 years of age (1). A great concern is the significantly lower levels of physical activity (PA) in people with MS (pwMS) across disability levels than in their healthy counterparts (2, 3).\n\nEarly promotion of PA and exercise is recommended due to numerous established benefits in health, symptom management and well-being for pwMS (4). In particular, high-intensity training is endorsed, as it has possible neuroprotective effects in the disease course (5, 6). 
In addition, exercises addressing sensorimotor impairments (e.g., reduced muscle strength, reduced neuromuscular control) are recommended, as they target individuals' capability to remain physically active (7). Sensorimotor impairments can influence trunk control, which is commonly disturbed in pwMS, even when disability is low (8, 9), and correlate with impaired balance, walking capacity and distance (10, 11). PwMS's knowledge of exercise benefits, attitudes and motivations, as well as contextual aspects such as lack of optimal exercise interventions, accessibility and support, affect the level of PA and exercise participation (12).\n\nCoreDISTparticipation (Table 1) is a new comprehensive intervention addressing sensorimotor function, trunk control, high-intensity running/walking and work participation in pwMS with low disability (13). It is based on the GroupCoreDIST1 intervention, which has been shown to have significant shortand long-term effects on trunk control, balance and walking among pwMS (14, 15). However, no effects of the intervention on objectively measured PA have been identified, even though the participants reported perceptions of new possibilities to be physically active as their sensorimotor impairments improved (16). To address PA challenges in pwMS, GroupCoreDIST was further developed to include a four-week period of outdoor training, in which high-intensity walking/running and GroupCoreDIST exercises are integrated (Table 2). To our knowledge, combinations of high-intensity training and rehabilitation of specific sensorimotor functions have been sparsely explored. Patient perspectives are essential for the evaluation of healthcare interventions (17); however, the new outdoor component of CoreDISTparticipation has yet to be investigated from a first-person perspective. 
Particularly interesting is what participants perceive as meaningful regarding the intervention, as this is essential for motivation, motor learning and exercise adherence (18).\n\nTo deepen our understanding of what the participants perceive as meaningful, we turn to a theoretical perspective that integrates bodily capacities with the construction of meaning. Enactive theory emphasizes that making sense of the world depends essentially on the biological (living) body and the phenomenological (lived or experienced) body (19), which implies that the body is viewed as a neurobiological organism that is concurrently experiencing, expressing and social (embodiment) (20). Thus, what is experienced by an individual during an exercise intervention is constituted by her sensorimotor repertoire for perception and action in interactions with the requirements of the task and the context (21). From this perspective, dysfunctions related to MS, such as sensorimotor impairments, can influence how individuals with MS interpret and understand their participation in a PA intervention. Moreover, the notion of \"participatory sensemaking\" (22) extends the body into the social domain, enabling an understanding of how the interaction processes between two embodied individuals affect shared and individual meaning-making. 
These concepts may illuminate pwMS's experiences and direct the focus toward bodily, contextual, and interactional aspects that may generate new insights regarding sensorimotor exercise and high-intensity training as part of PA.\n\nThe aim of this study was to explore participants' experiences of the content, delivery and setting of a new outdoor group intervention combining high-intensity training and detailed exercises to generate new knowledge about important aspects of exercise interventions for pwMS with low disability.\n\n1 GroupCoreDIST is a group-based intervention (Group), involving 35 exercises at different levels, addressing activation of trunk musculature (Core) in motor tasks in lying, sitting and standing (e.g. rolling, reaching, squatting, single leg stance. DIST describes essential elements of the concept: D = dose (high), dual task; I = individualization, insight, intensity; S = sensorimotor activation, selective movement control; T = task oriented training.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed13.pdf" - }, - { - "text": "community healthcare in the two municipalities. The project team included three individuals representing users from the Nordland MS Association, along with an MS nurse and a neurologist from the MS-outpatient clinic, and three physiotherapists/ researchers.\n\n## 2.4 Research team and reflexivity\n\nAll researchers on the team are clinical specialists in neurological physiotherapy. BN and ECA developed the CoreDISTparticipation intervention, and SSHD contributed to the development of the outdoor part.\n\nThe researchers' closeness to the intervention and the clinical field may have strengthened the depth and relevance of their interpretations in this study (27), as it was easy to understand what participants described and helped form follow-up questions during the interviews. 
However, closeness may also produce a risk of \"blind spots\", as the researchers may prejudice participants' experiences, omitting questions where the answers are believed to be obvious (27). Thus, throughout the process, trustworthiness and rigor were enhanced by discussing the methodology, findings, and interpretations with external researchers (including specialists in enactive theory), as well as user representatives. The presented theoretical framework (enactive theory) enhanced the distance to the material, as recommended in qualitative research (28).\n\n#### 2.5 Recruitment and participants\n\nPrior to recruitment, the study was introduced to individuals with multiple sclerosis (pwMS) through a seminar hosted by the Nordland MS Association. Additionally, seminars were conducted for health professionals in community healthcare and at the regional hospital. Written information about this study (and the RCT) was sent from the MS clinic at the regional hospital by post to all eligible individuals affiliated with the hospital. Individuals who wished to participate signed the attached consent form and returned it in the pre-stamped envelope. The inclusion criteria were as follows: had been diagnosed with MS, had a score on the Expanded Disability Status Scale (EDSS) (29) of ≤3.5, was ≥18 years, was employed (10%–100% of full-time) and residential address in the two predefined municipalities. The exclusion criteria were as follows: pregnancy, exacerbation of symptoms within two weeks prior to enrollment and other serious conditions compromising balance, walking or work capacity. All participants in the intervention group of the RCT (n = 15) were included (Table 3).\n\n#### 2.6 Data collection\n\nThe interview guide (Table 4) was developed based on literature reviews, clinical experience and discussions within the research group and with user representatives. 
Two test interviews were TABLE 3 Participant demographic information.\n\n| Variable | Total (n = 15) |\n| --- | --- |\n| Age in years | Mean 47.6 (SD 6.04) |\n| Gender (women/men) | 12 woman/3 men (80%/20%) |\n| Type of MS | Relapsing remitting 15 (100%) |\n| EDSS | Mean 1.8 (SD 0.9) |\n| Years since diagnosis | Mean 10.4 (SD 7.8) |\n| Participation in the outdoor group | Mean 4.6 sessions/total mean attendance 57.3% |\n\nTABLE 4 Interview guide.\n\n| Theme | Potential questions |\n| --- | --- |\n| Overall experiences and | Generally, what are your main experiences of |\n| reflections from participation | participation? |\n| | What did you perceive as meaningful? |\n| | What did you perceive as negative? |\n| Content | How did you experience: |\n| | • The content of the sessions in general |\n| | • The high-intensity walking/running |\n| | • The specific exercises |\n| | • The combination of specific exercises and |\n| | intervals of running/walking |\n| | • The exercise intensity |\n| | How did you respond to the exercises? How did |\n| | you experience getting tired? |\n| | How do you perceive your specific movement |\n| | impairments (if any) being addressed? |\n| | Please elaborate on situations where you |\n| | experienced the feeling of mastery/failure. |\n| | If anything: What was challenging? What would |\n| | you prefer to have been done differently? What |\n| | did you enjoy? |\n| | What was the value of participating in the |\n| | indoor exercise group beforehand? |\n| | How did you experience this kind of exercise |\n| | intervention compared to other type of exercise |\n| | you may have experience with? |\n| The role of the physiotherapists | What did the physiotherapists do? What was |\n| | the value of this to you? |\n| The group setting | How did you experience the group setting? |\n| | How did you perceive the atmosphere in the |\n| | group? |\n| The outdoor environment | How was it to exercise outdoors? 
|\n| | How did you perceive the city park |\n| | environment for exercise? |\n| Closing questions | Are there any experiences from participation |\n| | that you would like to elaborate on? Is anything |\n| | related to this project that we have not talked |\n| | about that you would like to say? |\n| | How did you experience this interview? |\n\nOverall participants were asked to describe situations to exemplify their answers, and follow-up questions were used to capture in-depth reflections, for example, What was positive/negative?, How did it feel?, What do you think of that?, What does it mean to you?, Can you elaborate on that?.\n\nconducted (with pwMS who were not part of the sample), and the interview guide was then refined around the following themes: overall experience and reflections from participation, content, outdoor setting, the group, and the physiotherapists. Questions were open-ended to capture rich, in-depth reflections regarding participants' experiences, following a phenomenological approach. The interviewer asked for both negative and positive experiences", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed13.pdf" - }, - { - "text": "bronchial challenge testing into a case finding strategy identified asthma in 26% of symptomatic individuals who had normal spirometry and no response to BD.27\n\nIndividuals with undiagnosed respiratory symptoms, determined to have asthma or COPD through spirometry, experience poor health status.28 Therefore, the implementation of known treatment approaches for asthma or COPD is important to improve their conditions.29 In contrast, those with normal spirometry or PRISm face unclear treatment approaches. 
Long-acting BD therapy in symptomatic individuals with tobacco exposure with normal spirometry is not effective.30 Weight management programs may be useful for individuals who are obese with PRISm-related dyspnea; however, this awaits definitive clinical trials.31\n\nDyspnea was severe and prevalent within our study group; however, it remained undiagnosed. A study conducted by Stefan et al32 revealed that physicians underestimated their patients' dyspnea 37.9% of the time, whereas nurses underestimated it 3.5% of the time. Moreover, many patients limit their physical activities, which lead them to downplay the extent of their dyspnea.19 Patient underreporting of symptoms, coupled with inadequate physician-led investigations of symptoms, may explain why dyspnea often goes undiagnosed in the population.33\n\nIn conclusion, our study measured dyspnea impact in individuals with no preexisting diagnosis of lung disease who reported respiratory symptoms as part of a purposeful case finding strategy. Individuals with PRISm exhibited the greatest impact of dyspnea, even higher than those newly diagnosed with asthma or COPD. After adjusting for patient factors, comorbidities, pulmonary diseases, and severity of lung physiologic impairment, most of the variability in dyspnea remained unexplained. We also showed that dyspnea was associated with increased health care utilization, impaired quality of life, and work productivity.\n\n## Funding/Support\n\nThis study is supported by the Canadian Institutes of Health Research [FDN Grant 154322].\n\n# Financial/Nonfinancial Disclosures\n\nNone declared.\n\n# Acknowledgments\n\nAuthor contributions: S. D. A. and G. A. W. contributed to conception and design. J. B., E. G., G. A. W., K. L. V., and S. D. A. contributed to analysis and interpretation. J. B., E. G., G. A. W., K. L. V., S. D. A., C. B., C. L., L.-P. B., A. C., E. P., S. K. F., S. G., R. A. M., I. M., M. B., P. H., M. D. L., M. A., C. J. L., T. A., N. E., G. G. A., and S. M. 
contributed to drafting the manuscript for important intellectual content. All authors had access to and participated in the interpretation of the data and provided input into the preparation and submission of the manuscript. The authors vouch for the accuracy and completeness of the data.\n\nRole of sponsors: The sponsor had no role in the design of the study, the collection and analysis of the data, or the preparation of the manuscript.\n\nOther contributions: We thank the following individuals from the Canadian study sites: Ottawa Hospital Research Institute, Ottawa, Ontario: Taylor Poulin; Susan Deveau, RRT; Victoria Thompson; Meredith McCleery; Angelina Tohme; Vicky Panteleakos, RRT; Geneviève Longtin, RRT; Joanne Cassidy, RRT; Amanda Bergeron, MSc; Jennifer Biggs, RN; Jessica Bergeron; and Elisabet White; Vancouver General Hospital, Vancouver, British Columbia: Shelley Abercromby, BSc; Jana Caine; David Savage; Natasha Verzosa; Ravneet Mahal; and Mary Justine Angeles; Queen Elizabeth II Health Sciences Centre, Halifax, NS: Scott Fulton, RRT; Hôpital du Sacré Coeur de Montréal, Montréal, QC: Simone Chaboillez, MT; and Meliza Benabdallah; St. Joseph's Hamilton, Hamilton, ON: Liz Johnson; St. Boniface Hospital, Winnipeg, MB: Cheryl Noble, RN; Institut Universitaire de Cardiologie et de Pneumologie de Québec-Université Laval, Québec, QC: Johane Lepage, BSc; Joanne Milot, RN; and Christiane Balizet, RN; University of Calgary, Calgary, AB: Lisette Machado, MD; and Curtis Dumonceaux, BSc; University of Alberta, Edmonton, AB: Miranda Bowen, RRT; Fay Hartt; Angie Hillaby, RRT; and Amy Haartsma, RRT; St. 
Michael's Hospital, Toronto, ON: Stephanie Segovia, PhD; and Carolyn Spiegel-Feld; Queen's University Kingston General Hospital, Kingston, ON: Ann Taite, BSc; Alison Morra, BScN; Emma Bullock, HBSc; and Taylar Wall, RRT; University of Saskatchewan Royal University Hospital, Saskatoon, SK: Nancy Zacher; Janet Baran, RN; and Yessica Lopez, BA; London Health Sciences Centre - Victoria Hospital, London, ON: Katie Maguire; Heba Almadhoun; and Robert Campbell-Pereira, BSc; St. Clare's Mercy Hospital, St John's, NL: Sarah Anthony, BNRN; and Tanya Nolan, BNRN; McGill University Health Centre, Montreal, QC: Francine Noel; Royal Victoria Regional Health Centre, Barrie, ON: Masoud Mahdavian; and Ashley Brown, RRT; and Michael Garron Hospital, Toronto, ON: Ian Fraser; Han Byul (Liz) Lee; and Yuna Lee, BA. We would also thank Dong Vo We (data manager, Ottawa Hospital Research Institute, Ottawa, ON). We also thank the thousands of study participants who gave their time and came in for the study visits. We also thank ASDE Survey Sampler, Inc (Gatineau, QC, Canada) for organizing the random digit dialing.\n\n# References\n\n- 1. Parshall MB, Schwarthzstein RM, Adams L, et al. An Official American Thoracic Society Statement: update on the mechanisms, assessment, and management of dyspnea. Am J Respir Crit Care Med. 2012;185:435-452.\n- 2. Ho SF, O'Mahony MS, Steward JA, et al. Dyspnoea and quality of life in older people at home. Age Ageing. 2001;30: 155-159.\n- 3. Laviolette L, Laveneziana P. Dyspnoea: a multidimensional and multidisciplinary approach. Eur Respir J. 2014;43: 1750-1762.\n- 4. Müller A, Mraz T, Wouters EFM, et al. Prevalence of dyspnea in general adult populations: a systematic review and meta-analysis. Respir Med. 2023;218: 107379.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "gave them advice for follow-up. 
Some participants said that when the physiotherapist conducted the exercises or ran/walked together with them, it made them increase their exercise intensity. One participant described this as follows:\n\n> The physiotherapists pushed me to perform beyond what I thought I was able to—and that was great! There is no doubt that if someone is running beside you and shouting \"come onwell done\", you manage to push yourself further. (ID8, EDSS: 2)\n\nHowever, one participant described an incident where the interaction with the physiotherapists was not perceived as helpful:\n\n> When I get tired, it gets difficult. I can only do one thing at a time, and then these physiotherapists came running, talking and trying to motivate at the same time. I got very tired, and my leg would not follow my commands to run. (ID7, EDSS: 3.5)\n\nParticipants reported that they appreciated that the physiotherapists made them engage in playful activities with a ball, run for beanbags, and sing and in general created an informal and nice atmosphere. The enjoyment created was described as important for adherence to the intervention and as encouraging participants' physical effort during the session, as exercise felt easier when it was enjoyable. It was appreciated that the physiotherapists were perceived as both cheerful and serious about the intervention.\n\n#### 4 Discussion\n\nThe main findings of this study are that (1) being supported to explore and push one's own physical capabilities by combining high-intensity running/walking with detailed exercises was meaningful and evoked strong emotions. Improving one's balance, walking, and running lead to increased beliefs in one's own possibilities. Some negative experiences were also described, particularly from the highintensity training. (2) An engaging outdoor group with tailored physiotherapist-participant interactions and the co-creation of enjoyment was perceived to be important for the success of the individual. 
These findings illustrate how the dynamic intertwining of the body and movement, context and intersubjective interactions create meaning and beliefs in one's own physical capabilities (19).\n\n#### 4.1 Bodily experiences are inherent to beliefs in the mastery of physical activity\n\nThe meaningfulness of exploring the limits of training intensity that we identified in our study corresponds with other studies of pwMS's experiences of interventions addressing intensity of activity (31, 32). The exercises emphasizing trunk control were reported to reduce movement impairments and are in line with a study of pwMS with higher disabilities participating in an indoor group intervention (16). However, the perceived interlinking of improved sensorimotor functions and the ease of and efficiency in high-intensity walking/running have not been reported previously. It is likely that the detailed exercises prompted activations of the CNS and musculoskeletal systems, which are prerequisites for highintensity walking and running (33). Impairments in such systems commonly occur due to CNS lesions or secondary inactivity, and function can improve with increased use (18). Our results support the value of integrating such specificity to optimize the capability to train at high intensity, even in individuals with low EDSS scores.\n\nThe described emotional associations of these bodily changes are interesting. Achieving higher exercise intensities, easier movements, reduced pain and improved sensation lead to positive feelings and enhanced prospects for both PA and life, while for some individuals, a failure to achieve high-intensity or no immediate changes in impairments are associated with feelings of loss and negative prospects. This calls attention to acknowledging that sensorimotor capacities facilitate or constrain how an individual perceives the world, which is closely interlinked with feelings, and that influence why participants perceive what they do (34). 
These experiences necessitate that sensorimotor changes in pwMS involve not only their biological body but also their relational and self-individuating modes of operating in the world, including how an experience coheres with, for example, participants' historical experiences (35). As we primarily regulate such modes to achieve an optimal positive mood state, this can also explain why only changes perceived as positive appear to enhance participants' beliefs for the future (36). Negative experiences such as failure to achieve high intensity because the legs are not working in the last interval can thus be perceived as detrimental by pwMS.\n\nWe argue that participants' perceived bodily changes affected their self-efficacy for being physically active. Self-efficacy involves an individual's perception of exerting control over his or her own actions (37) and has been extensively reported to be pertinent to PA engagement in pwMS (38, 39). However, self-efficacy is theoretically described according to social cognitive theory (38). Our findings highlight how experiencing, expressing and socially interacting through the body (embodied experiences) shape individuals' self-efficacy and suggest a crucial role of bodily perceptions in constituting self-efficacy for PA.\n\n#### 4.2 Interactions and environment shape meaning making\n\nParticipants perceived the group setting to increase motivation, support, and commitment, which has been found in previously published work (16, 31).\n\nThe physiotherapist-participant interaction is acknowledged in exercise interventions for pwMS, pointing to professionals' role in informing participants of exercise benefits in the management of MS, including the prescribing mode, frequency, intensity, and duration of exercise (40). 
Tailored interventions are supported", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed13.pdf" - }, - { - "text": "However, explained variance estimates in our models ranged from 34 to 61%, suggesting further research is necessary to identify additional factors contributing to healthcare utilization following physical therapy.\n\nThe primary limitation of the study is the high number of subjects lost to follow-up. We attempted to account for the bias introduced by loss to follow-up in our models with IPAW, which is a robust strategy for conducting analyses with missing data [41, 51]. We observed good concordance between results of complete case and weighted analyses, giving us confidence in our findings. However, important differences in age, race, education, symptom onset, baseline pain intensity, and baseline pain-related psychological distress were noted between those who did and did not complete follow-up. These differences suggest that the group lost to follow-up may represent a unique population to whom these results may not apply. Different factors may predict utilization outcomes for this unique population. As a result, readers should exercise caution when extending these findings to individuals and populations that substantially differ from the analytic sample in this study. Specifically, these predictive models may need to be adjusted for younger individuals of non-white race, with lower education levels, sudden onset of symptoms, and those with higher pain intensity and pain-associated distress.\n\nA second limitation is that we did not know about the subjects' prior experiences with physical therapy, or whether they arrived at physical therapy through direct access or referral from another provider. These factors could be associated with treatment expectations, which have known effects on treatment outcomes [52, 53]. We also did not collect specific information on treatment. 
But by including changes in pain, disability, and pain-related psychological distress in the models, we were able to account for treatment response. The benefit of this approach is that models are generalizable for predicting utilization outcomes across \"real-world\" pragmatic physical therapy settings where treatment variation is expected. The drawback is that we are prohibited from making conclusions regarding which characteristics of the clinical encounter might influence subsequent pain-related healthcare utilization. Important characteristics to consider would include number of visits, type of interventions or whether patients completed their course of physical therapy. These have been proposed or identified as important contributors to downstream costs following physical therapy [54, 55] and may be a source of unexplained variance in our models. Characteristics of the clinical encounter should be considered in future studies to refine the prediction models developed in our analyses.\n\nThird, we were unable to adequately model the specific effects of worker's compensation, self-pay and some commercial insurance coverage on utilization due to the low incidence of these forms of payment in our study sample. Modeling these separately would have created the potential for unreliable and imprecise effect estimates. Readers should consider the within-group heterogeneity caused by this approach and exercise caution when applying these results to individuals who do not have traditional public or private insurance coverage. Future studies should investigate the performance of the\n\nWorker's Compensation. A final limitation is the use of patient recall to measure utilization. To mitigate recall bias, we used two follow-up points, at 6 and 12 months. However, underor over-reporting of utilization is often a concern with studies requiring subject recall [56–58]. Medical record and claims data were not available for these subjects. 
Readers should consider our inability to independently confirm utilization when interpreting results.\n\nOSPRO tools in predicting outcomes for patients with\n\nIn future studies, we will embed the OSPRO tools into electronic medical record (EMR) databases to refine and test outcomes prediction models at the health care systems level. Importantly, we will collect clinical encounter data through the EMR and combine it with administrative or billing data to confirm the results of this study with more objective measures of health care use. These studies will also allow us to provide better guidance on how to use the OSPRO tools to identify serious psychiatric involvement or systemic sources of pain that require medical referral. Finally, we will explore alternative scoring strategies for the tools, such as weighted scoring for the OSPRO-ROS and use of predicted full-length psychological questionnaire scores for the OSPRO-YF. Healthcare providers could then use the collective information from these studies to build learning health systems that facilitate effective, real-time clinical decision-making support to improve value of care for patients with musculoskeletal pain.\n\n#### Conclusion\n\nBaseline disability and change in pain intensity were important predictors of any subsequent pain-related healthcare utilization, while predictors of individual service utilization were outcome-specific. Identification of risk is improved through treatment monitoring for pain and, in some cases, disability and pain-related psychological distress. Comorbidity burden was an important predictor of subsequent utilization of opioids and diagnostic tests and imaging, both of which have been recent targets of healthcare policy to constrain their unnecessary use. 
Future research is needed to refine these predictor variables and incorporate them into risk models that support clinical decision-making so that treatment effectiveness and efficiency are optimized in value-based systems.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed5.pdf" - }, - { - "text": "given the heterogenic pathology and symptoms of MS (41, 42). However, our findings illuminate qualitative aspects of how to achieve tailored and meaningful intersubjective interactions in an exercise intervention.\n\nWe consider the instances of the physiotherapist running together with the participant, which were perceived as important for participants' performance, to be an example of \"participatory sense-making\" (22). As participants appreciated being guided or even pushed by the physiotherapists, it appears that the physiotherapists were trusted in directing this interaction. As such, we argue that the physiotherapists' ability to adapt to participants' movements, speech and gestures—tailoring the interaction to their needs—was important for this ability to be perceived as purposeful. This is supported by the few negative incidents described where the participant-physiotherapist interaction seemed to not be jointly coordinated and appeared to fail. The reported mutual influences of sensorimotor capabilities and interpersonal coordination, with the physiotherapists but also the group, are in accordance with sensorimotor capacities and intersubjective interactions being important for sensemaking in the world (35). The benefits of these individualized participant-physiotherapist interactions are also described in specific core-stability exercises in indoor groups (16, 43) and are in line with the theoretical framework of facilitation of movement through hands-on interaction previously proposed (44, 45). 
Our study informs new knowledge of physiotherapist-participant interactions to achieve the recommended high-intensity training and calls for physiotherapy clinical reasoning through bodily and verbal communication skills adapted to the participants' responses in an ongoing and situated way.\n\nEnjoyment has previously been reported to promote PA in pwMS, and our study brings requested knowledge of what can constitute enjoyment in an exercise intervention (46): playful group-exercise tasks, a cheerful physiotherapist, and the outdoor environment.\n\nThe appreciation of being active outdoors in the study sample aligns with that in the general population (47). The outdoors provided a natural environment, which both invited participants to actively explore abilities thought of as left behind after their diagnosis with MS, such as running, and provided an appreciated break from focusing on MS symptoms. We also suggest that the positive experiences of mastering the challenging weather conditions and the added meaning of exercising among other people in the city park can be explained according to such terms. These positive experiences show how we are enmeshed in our history, context and social encounters (35) and how these aspects should also be accounted for when designing exercise interventions.\n\n#### 4.3 Methodological considerations\n\nThe design and methods were adequate for deriving knowledge from individuals' experiences. The participants self-referred to the intervention and were recruited based on pre-set criteria. This approach yielded rich information from people with mild to moderate disabilities due to MS who were motivated for physical activity (PA), employed, and residing in northern Norway. Ethnicity or socio-economic class were not recorded. However, considering that all these factors can influence PA engagement (46), it is possible that additional aspects of the phenomenon could be uncovered in a different sample (48). 
There was a higher percentage of women participating than men; however, this corresponds to the gender distribution in the MS population (1).\n\nThe use of enactive theory was innovative within the field and allowed for, in particular, new aspects of importance for selfefficacy to be identified. Transference of our results to similar populations can be achieved through theoretical generalization (28).\n\n#### 4.4 Implications for clinical practice\n\nCombining high-intensity walking/running and detailed sensorimotor exercises was valued and provided meaningful embodied experiences, improving participants' ability to master PA and their beliefs of their own possibilities for being active in the future. However, the manner in which the content of an exercise intervention is delivered and the environment in which it is delivered should be accounted for, as these aspects were perceived to be of great importance in creating and shaping participants' experiences. In particular, tailored physiotherapistparticipant bodily interactions and an engaging group and outdoor environment were perceived to be pertinent for exploring one's own potential.\n\nTo minimize negative incidents in future interventions, we suggest that (1) the effort required from one's leg muscles during the detailed exercises (in between the running/walking intervals) should be low to minimize the negative consequences of leg muscle fatigue prior to high-intensity running/walking, (2) the capacity for running/walking at highintensity should be explored in one-to-one physiotherapy assessment prior to group training to optimize individuals capabilities and safety, and (3) homogenous and small-sized groups should be used to enable ongoing and tailored physiotherapist-participant interactions.\n\n#### Data availability statement\n\nThe datasets presented in this article are not readily available because of ethical and legal restrictions. 
Requests to access the datasets should be directed to stine.s.dahl@nord.no.\n\n#### Ethics statement\n\nThis study involving humans was approved by Regional Committee for Medical Research Ethics in North Norway (REK North: 174,837) and the Data Protection Officer at Nordlandssykehuset Hospital Trust, Norway. This study was conducted in accordance with the local legislation and", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed13.pdf" - }, - { - "text": "TABLE 1 Overview of the CoreDISTparticipation intervention.\n\n| Week 1: MS outpatient | Consultation with the MS nurse (20 min) to address work-related issues based on a structured guide comprising the following themes: knowledge |\n| --- | --- |\n| clinic | of MS at the workplace, experienced work-related challenges due to MS, potential needs and facilitators. |\n| | Physiotherapy assessment (60 min) to explore the potential for changes in balance and walking aiming to turn focus toward possibilities and thus, |\n| | motivate the patient. |\n| | Based on these assessments the MS nurse and the physiotherapist indicated the aspects of importance on a standardized form to inform the |\n| | municipal physiotherapist. |\n| | Standardized testing (baseline, for the RCT). |\n| Week 2–5: Municipality | Physiotherapy assessment (60–90 min) to explore the patient's impairments and potential for improvements in a clinical examination prior to |\n| | group-training. |\n| | Indoor group (60 min × 2 weekly, for 4 weeks). There were three to five participants in each group and one physiotherapist. Trunk control, balance |\n| | and physical activity were addressed (GroupCoreDIST). Participants received a link to CoreDIST digital exercise-videos and were advised to do |\n| | them 1 × weekly throughout the intervention. 
(videos can be accessed here: https://www.nord.no/en/node/35,098) |\n| | Digital meeting with a multidisciplinary team (pwMS, employer, physiotherapist & MS nurse) (20 min) regarding barriers to work participation |\n| | and needs for adaptations regarding work and physical activity, according to a structured meeting-guide (one meeting). |\n| Week 6 | Standardized testing (midway, for the RCT). |\n| Week 7–10: Municipality | Outdoor group (60 min × 2 weekly, for 4 weeks). A maximum of ten participants and two physiotherapists were included in each group. Trunk |\n| | control and balance (GroupCoreDIST exercises) were addressed, and high-intensity walking or running was performed. The intervention was |\n| | conducted in a city park where both flat and uneven surfaces and hilly terrain were available (Table 2). |\n| | Additionally, participants were encouraged to comply with the exercise-videos through a weekly SMS-reminder. |\n| Week 11–14 | Standardized testing (final, for the RCT) and qualitative interviews. |\n\nTABLE 2 Description of the outdoor group.\n\n| Content | Purpose |\n| --- | --- |\n| Warm-up and recording one's own balance | |\n| Exercises for detailed sensorimotor | Preparation. |\n| activation, larger muscle groups, muscle | Experience one's own balance and |\n| length and balance while standing. | record eventual changes. |\n| Dual task: motor (using spiky balls and | |\n| medicine balls individually, in pairs and | |\n| in the group) and cognitive (singing, | |\n| rhymes and counting). | |\n| Main part | |\n| (1) High-intensity training (85%–95% | Improve stamina. |\n| maxHR/min 16 RPE) × 4 min: Running | Experience one's own opportunities for |\n| or walking with long strides and large | high-intensity physical activity. |\n| arm movements. Participants chose their | Improve sensorimotor control and |\n| own route, marking it with a cone, and | balance as prerequisites for walking and |\n| picked up a bean bag for each new lap to | running. 
|\n| count how many laps for each interval. | |\n| (2) Moderate-intensity detailed exercises | |\n| (approx. 70% maxHR) × 3 min. | |\n| CoreDIST exercises while standing | |\n| approximately (10 repetitions × 2 set). | |\n| Examples of exercises: squat, one legged | |\n| stance, rise on toes, reaching, turning and | |\n| rolling down to touch the ground in | |\n| standing. | |\n| Progressions was individually tailored | |\n| (during both running/walking and the | |\n| detailed exercises) through instructions, | |\n| demonstration and hands-on facilitations | |\n| by the physiotherapists. Quality and | |\n| efficiency of movement were addressed | |\n| by the physiotherapists. Optimalization | |\n| of trunk control during movement were | |\n| emphasised. | |\n| A combination of high-intensity and | |\n| CoreDIST exercises was repeated 3–4 | |\n| times during one session. | |\n| Cool-down and recording one's own balance | |\n| | Experience one's own balance and |\n| Hold/relax muscle contraction. | |\n| Balance on one leg. | record eventual changes. |\n\n### 2 Materials and methods\n\n#### 2.1 Design\n\nIndividual in-depth interviews using a phenomenologicalinspired approach were chosen, as this is suitable for exploring the meaning and significance of pwMS's experiences and reflections (23, 24).\n\n#### 2.2 Ethical considerations\n\nThe study was conducted according to the Declaration of Helsinki and approved by the Regional Committee for Medical Research Ethics in North Norway (REK North: 174837). Written informed consent was obtained prior to the intervention and confirmed verbally when arranging the interviews. Participation was voluntary and anonymous, and the participants were informed about the opportunity to withdraw from the study. 
The Consolidated Criteria for Reporting Qualitative Research (COREQ) (25) were used to optimize the conduct and reporting of the study.\n\n#### 2.3 Study context\n\nThis interview study was nested within a randomized controlled trial (RCT) comparing the CoreDISTparticipation intervention to usual care (26) and conducted at a regional hospital MS-outpatient clinic (Nordland Hospital Trust) and in two affiliated municipalities in the northern Norway. The current study investigates participants in the intervention group's experiences of the four-week outdoor group, which was part of this new intervention (Table 2). The outdoor sessions were conducted by three trained physiotherapists working in the", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed13.pdf" - }, - { - "text": "## Take-home Points\n\nStudy Question: How profoundly are adults with undiagnosed respiratory symptoms affected by dyspnea?\n\nResults: In community-based adults with undiagnosed respiratory symptoms, those identified with preserved ratio impaired spirometry experienced the greatest impact of dyspnea, followed by those with undiagnosed asthma or COPD. Greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nInterpretation: Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity.\n\nDyspnea refers to a subjective sensation of breathing discomfort.1 In a study involving a community-based population aged > 70 years, the prevalence of dyspnea was found to be 32%.2 Dyspnea can lead to limitations in daily activities, reduced exercise tolerance, and heightened mortality risks.3\n\nDyspnea not only affects individuals with diagnosed respiratory conditions but also poses a significant burden on those with undiagnosed conditions. 
In a systematic review by Müller et al,4 the combined\n\n#### Study Design and Methods Recruitment of Undiagnosed Cases and Healthy Control Patients\n\nBetween June 2017 and January 2023, adults aged $ 18 years were recruited through a two-step process into the Undiagnosed COPD and Asthma Population (UCAP) study, a multicenter case finding study. Approval for prevalence of dyspnea in the adult general population across 11 studies was estimated to be 10%. Dyspnea can arise from a broad spectrum of underlying factors, including both respiratory and nonrespiratory conditions. Studies have revealed that dyspnea is not solely attributable to respiratory conditions but is also heavily influenced by cardiovascular deconditioning and by nonrespiratory factors, including psychosocial, social, and environmental determinants.5,6\n\nDyspnea is a prevalent symptom with consequences that extend beyond its physiologic implications. A study in European patients with COPD explored the burden of dyspnea and identified potential correlates. The study revealed that higher dyspnea impact correlated with lower health-related quality of life, increased work impairment, and a higher frequency of emergency department visits.7\n\nThe three objectives of our study were as follows: (1) to evaluate the impact of dyspnea in adults from the general population who had no prior diagnosis of respiratory disease but who reported having significant respiratory symptoms in the past 6 months; (2) to identify associated risk factors for dyspnea and estimate their influence on the symptom; and (3) to explore the relationship between dyspnea and health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\nthe study was obtained from the research ethics boards of the 17 participating study sites across Canada. 
Informed, written consent was provided by all study participants.\n\nBoth landlines and cellphones within a 90-minute radius of any of the 17 study sites were dialed randomly. A\n\nDOI: https://doi.org/10.1016/j.chest.2024.07.183\n\nABBREVIATIONS: ASQ = Asthma Screening Questionnaire; BD = bronchodilator; CAT = COPD Assessment Test; PCA = principal component analysis; PRISm = preserved ratio impaired spirometry; SGRQ = St. George's Respiratory Questionnaire\n\nAFFILIATIONS: From The Ottawa Hospital Research Institute (J. B., E. G., K. L. V., G. G. A., S. M., and S. D. A.), University of Ottawa, Ottawa, ON; the Desautels Faculty of Management (G. A. W.), McGill University, Montreal, QC; the Department of Medicine (C. B.), The University of British Columbia, Vancouver, BC; the Centre de recherche (L.-P. B. and A. C.), Institut de cardiologie et de pneumologie de Québec, Université Laval, Quebec, QC; the Cumming School of Medicine (S. K. F.), University of Calgary, Calgary, AB; the Department of Medicine (E. P.), University of Saskatchewan, Regina, SK; the Firestone Institute for Respiratory Health (R. A. M.), McMaster University, Hamilton, ON; the Department of Medicine (C. L.), Université de Montreal, Montreal, QC; the Department of Medicine and the Li Ka Shing Knowledge Institute (S. G.), St. Michael's Hospital University of Toronto, Toronto, ON; the Department of Medicine\n\n(P. H.), Dalhousie University, Halifax, NS; the Department of Medicine (I. M. and M. B.), University of Alberta, Edmonton, AB; the Department of Medicine (M. D. L.), Queen's University, Kingston; the Department of Medicine (C. J. L.), University of Western Ontario, London, ON; the Department of Medicine (T. A.), Memorial University, St. John's, NF; the Department of Medicine (N. E.), McGill University, Montreal, QC; the Department of Medicine (M. 
A.), University of Manitoba, Winnipeg, MB, Canada.\n\nDrs Bierbrier and Gerstein contributed equally to this manuscript.\n\nPart of this work has been presented at the American Thoracic Society Conference, May 17-22, 2024, San Diego, CA.\n\nCORRESPONDENCE TO: Shawn D. Aaron, MD; email: saaron@ohri.ca Copyright 2024 The Author(s). Published by Elsevier Inc under license from the American College of Chest Physicians. This is an open access article under the CC BY license (http://creativecommons.org/ licenses/by/4.0/).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## RESEARCH ARTICLE Open Access\n\n# Prediction of healthcare utilization following an episode of physical therapy for musculoskeletal pain\n\nTrevor A. Lentz1* , Jason M. Beneciuk2,3 and Steven Z. George4\n\n## Abstract\n\nBackground: In the United States, value-based purchasing has created the need for healthcare systems to prospectively identify patients at risk for high healthcare utilization beyond a physical therapy episode for musculoskeletal pain. The purpose of this study was to determine predictors of pain-related healthcare utilization subsequent to an index episode of physical therapy for musculoskeletal pain.\n\nMethods: This study assessed data from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) longitudinal cohort study that recruited individuals with a primary complaint of neck, low back, knee or shoulder pain in physical therapy (n = 440). Demographics, health-related information, review of systems, comorbidity and pain-related psychological distress measures were collected at baseline evaluation. Baseline to 4-week changes in pain intensity, disability, and pain-related psychological distress were measured as treatment response variables. 
At 6-months and 1-year after baseline evaluation, individuals reported use of opioids, injection, surgery, diagnostic tests or imaging, and emergency room visits for their pain condition over the follow-up period. Separate prediction models were developed for any subsequent care and service-specific utilization.\n\nResults: Subsequent pain-related healthcare utilization was reported by 43% (n = 106) of the study sample that completed the 12-month follow-up (n = 246). Baseline disability and 4-week change in pain intensity were important global predictors of subsequent healthcare utilization. Age, insurance status, comorbidity burden, baseline pain, and 4-week changes in pain intensity, disability and pain-related psychological distress predicted specific service utilization.\n\nConclusion: In those completing follow up measures, risk of additional pain-related healthcare utilization after physical therapy was best predicted by baseline characteristics and 4-week treatment response variables for pain intensity, disability and pain-related psychological distress. These findings suggest treatment monitoring of specific response variables could enhance identification of those at risk for future healthcare utilization in addition to baseline assessment. Further study is required to determine how specific characteristics of the clinical encounter influence future utilization.\n\nKeywords: Screening, Psychological distress, Multimorbidity, Value, Treatment monitoring\n\n### Background\n\nMusculoskeletal pain is a prevalent and costly health condition with far-reaching public health consequences including chronic pain, disability and opioid-related addiction [1]. Clinical practice guidelines now recommend non-pharmacological treatment as frontline management for musculoskeletal pain, which will lead\n\nto increased utilization of services such as physical\n\n© The Author(s). 
2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.\n\n* Correspondence: trevor.lentz@duke.edu 1\n\nDuke Clinical Research Institute, Duke University, 2400 Pratt Street, Durham, NC 27705, USA\n\nFull list of author information is available at the end of the article", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed5.pdf" - }, - { - "text": "routine pain-related psychological distress monitoring throughout the early phases of rehabilitation especially if the goal is to identify risk for subsequent pain-related healthcare utilization. The implications of these collective findings are that treatment pathways may provide greater value by 1) addressing modifiable health-related variables like pain, disability and pain-related psychological distress, 2) routine monitoring of these health-related variables and 3) offering treatment alternatives that safely escalate care if needed while minimizing risk of harm and unhelpful utilization.\n\nOpioids and diagnostic tests and imaging were the two most common subsequent healthcare services utilized following physical therapy. Of the individuals that completed follow up and had any subsequent healthcare utilization, approximately 42% reported opioid use and 70% reported use of diagnostic tests and imaging. An important health-related predictor of these services was level of comorbidity burden. 
For those with high comorbidity burden and inadequate treatment response to physical therapy, use of additional diagnostic tests and imaging or low-dose opioids may be appropriate in some cases. But given the growing public health concern over opioid use and the desire to avoid unnecessary treatment driven by imaging, our results suggest the importance of considering disease burden when developing treatment pathways and healthcare policy to mitigate risk for avoidable use of these services. Interestingly, neither versions of the OSPRO-ROS predicted utilization outcomes even though it has been linked to mental health, comorbidity, and persistent pain state in other analyses [20, 21]. Systemic symptom burden is a measure of patient complexity that is related to but distinct from comorbidity burden [36, 47]. In these analyses, the chronic condition measure (i.e. the CCI) was a better predictor of utilization than symptom burden (i.e. OSPRO-ROS). The reasons for this finding are unclear but may be related to providers and patients being more likely to pursue follow-up medical care for musculoskeletal pain when known co-existing conditions are present as opposed to reporting of symptoms alone. The distinction between symptom and disease burden in defining musculoskeletal patient complexity, and its influence on clinical decision-making and outcomes, should be the subject of future research particularly related to aging populations [48].\n\nUtilization outcomes benchmarks have not been established to determine how the percentage of subsequent healthcare use in this study compares to outcomes using other health services. Prior studies suggest physical therapy is associated with reduced incidence of additional healthcare use compared to not using physical therapy in patients with acute low back pain [10, 49]. 
Some additional healthcare use is expected following physical therapy, especially among individuals that are on long-term pain management pathways due to chronic or persistent symptoms. Yet with over 40% reporting subsequent pain-related healthcare among those completing follow-up, it is apparent that opportunities exist to improve pathway selection and/or the effectiveness of physical therapy for individuals with musculoskeletal pain. This finding is particularly notable given recent efforts to define physical therapy as an effective first line, non-pharmacological treatment option against more invasive or higher risk services, such as surgery or opioid use, respectively. Predictive variables identified in this analysis can be used to develop risk models that better inform pathway selection for those seeking physical therapy for musculoskeletal pain. The precise application of these risk models, and how they inform policy and practice should be the target of future study. However, physical therapy re-design might incorporate enhanced treatment monitoring to assess ongoing risk for downstream utilization, as well as physical therapist-led interventions to more thoroughly address important modifiable factors such as pain intensity, disability and pain-related psychological distress [38]. Improved pathway selection might entail the consideration of referral to or co-treatment with other providers to more adequately address non-modifiable characteristics. Collectively, these approaches could improve the value of physical therapy by minimizing risk for high downstream healthcare utilization and potentially unwarranted escalation of care.\n\nThe primary strength of the study is longitudinal follow-up at multiple time points following an episode of physical therapy for a variety of musculoskeletal pain conditions. 
Anatomical location of pain was not a significant predictor of healthcare use in all but one model, suggesting results are widely applicable across a spectrum of musculoskeletal pain conditions. Another strength of this cohort study is the assessment of various healthcare utilization outcomes of interest for establishing health policy. When considered alongside more traditional pain- or disability-related outcomes prediction models, these findings will improve the ability of healthcare systems and providers to make decisions in value-based purchasing environments. The consideration of multiple screening tools (i.e. yellow flags and review of systems) and treatment monitoring variables is also a strength of this study as screening and systematic treatment monitoring are not routine in clinical practice. A final strength is inclusion of multiple sociodemographic, health-related and psychosocial factors as potential predictors. Healthcare outcomes and utilization exhibit emergent properties that require the consideration of multiple, competing factors to fully explain [50].", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed5.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed13.pdf", - "query": "What were the prerequisites allowing to be involved in the study concerning the impact of outdoor sport on patients witg multiple sclerosis ?", - "target_page": 4, - "target_passage": "The inclusion criteria were as follows: had been diagnosed with MS, had a score on the Expanded Disability Status Scale (EDSS) (29) of ≤3.5, was ≥18 years, was employed (10%–100% of full-time) and residential address in the two predefined municipalities", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "community healthcare in the two municipalities. 
The project team included three individuals representing users from the Nordland MS Association, along with an MS nurse and a neurologist from the MS-outpatient clinic, and three physiotherapists/ researchers.\n\n## 2.4 Research team and reflexivity\n\nAll researchers on the team are clinical specialists in neurological physiotherapy. BN and ECA developed the CoreDISTparticipation intervention, and SSHD contributed to the development of the outdoor part.\n\nThe researchers' closeness to the intervention and the clinical field may have strengthened the depth and relevance of their interpretations in this study (27), as it was easy to understand what participants described and helped form follow-up questions during the interviews. However, closeness may also produce a risk of \"blind spots\", as the researchers may prejudice participants' experiences, omitting questions where the answers are believed to be obvious (27). Thus, throughout the process, trustworthiness and rigor were enhanced by discussing the methodology, findings, and interpretations with external researchers (including specialists in enactive theory), as well as user representatives. The presented theoretical framework (enactive theory) enhanced the distance to the material, as recommended in qualitative research (28).\n\n#### 2.5 Recruitment and participants\n\nPrior to recruitment, the study was introduced to individuals with multiple sclerosis (pwMS) through a seminar hosted by the Nordland MS Association. Additionally, seminars were conducted for health professionals in community healthcare and at the regional hospital. Written information about this study (and the RCT) was sent from the MS clinic at the regional hospital by post to all eligible individuals affiliated with the hospital. Individuals who wished to participate signed the attached consent form and returned it in the pre-stamped envelope. 
The inclusion criteria were as follows: had been diagnosed with MS, had a score on the Expanded Disability Status Scale (EDSS) (29) of ≤3.5, was ≥18 years, was employed (10%–100% of full-time) and residential address in the two predefined municipalities. The exclusion criteria were as follows: pregnancy, exacerbation of symptoms within two weeks prior to enrollment and other serious conditions compromising balance, walking or work capacity. All participants in the intervention group of the RCT (n = 15) were included (Table 3).\n\n#### 2.6 Data collection\n\nThe interview guide (Table 4) was developed based on literature reviews, clinical experience and discussions within the research group and with user representatives. Two test interviews were TABLE 3 Participant demographic information.\n\n| Variable | Total (n = 15) |\n| --- | --- |\n| Age in years | Mean 47.6 (SD 6.04) |\n| Gender (women/men) | 12 woman/3 men (80%/20%) |\n| Type of MS | Relapsing remitting 15 (100%) |\n| EDSS | Mean 1.8 (SD 0.9) |\n| Years since diagnosis | Mean 10.4 (SD 7.8) |\n| Participation in the outdoor group | Mean 4.6 sessions/total mean attendance 57.3% |\n\nTABLE 4 Interview guide.\n\n| Theme | Potential questions |\n| --- | --- |\n| Overall experiences and | Generally, what are your main experiences of |\n| reflections from participation | participation? |\n| | What did you perceive as meaningful? |\n| | What did you perceive as negative? |\n| Content | How did you experience: |\n| | • The content of the sessions in general |\n| | • The high-intensity walking/running |\n| | • The specific exercises |\n| | • The combination of specific exercises and |\n| | intervals of running/walking |\n| | • The exercise intensity |\n| | How did you respond to the exercises? How did |\n| | you experience getting tired? |\n| | How do you perceive your specific movement |\n| | impairments (if any) being addressed? 
|\n| | Please elaborate on situations where you |\n| | experienced the feeling of mastery/failure. |\n| | If anything: What was challenging? What would |\n| | you prefer to have been done differently? What |\n| | did you enjoy? |\n| | What was the value of participating in the |\n| | indoor exercise group beforehand? |\n| | How did you experience this kind of exercise |\n| | intervention compared to other type of exercise |\n| | you may have experience with? |\n| The role of the physiotherapists | What did the physiotherapists do? What was |\n| | the value of this to you? |\n| The group setting | How did you experience the group setting? |\n| | How did you perceive the atmosphere in the |\n| | group? |\n| The outdoor environment | How was it to exercise outdoors? |\n| | How did you perceive the city park |\n| | environment for exercise? |\n| Closing questions | Are there any experiences from participation |\n| | that you would like to elaborate on? Is anything |\n| | related to this project that we have not talked |\n| | about that you would like to say? |\n| | How did you experience this interview? |\n\nOverall participants were asked to describe situations to exemplify their answers, and follow-up questions were used to capture in-depth reflections, for example, What was positive/negative?, How did it feel?, What do you think of that?, What does it mean to you?, Can you elaborate on that?.\n\nconducted (with pwMS who were not part of the sample), and the interview guide was then refined around the following themes: overall experience and reflections from participation, content, outdoor setting, the group, and the physiotherapists. Questions were open-ended to capture rich, in-depth reflections regarding participants' experiences, following a phenomenological approach. 
The interviewer asked for both negative and positive experiences", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed13.pdf" - }, - { - "text": "Discussion: High-intensity training combined with detailed exercises in a physiotherapy outdoor group was perceived to create meaningful bodily changes and enhance PA and prospects for both PA and life. Importantly, however, some negative experiences were also reported from the high-intensity training. Enactive theory allowed for the illumination of new perspectives: the importance of embodiment for self-efficacy and of tailored physiotherapy and an outdoor-group environment for exploring one's own limits to physical capabilities. These aspects should inform future exercise interventions in pwMS with low disability.\n\n#### KEYWORDS\n\nphysical activity, physiotherapy, multiple sclerosis, qualitative study, exercise therapy, postural balance, enactive theory\n\n#### 1 Introduction\n\nMultiple sclerosis (MS) is a progressive inflammatory disease of the central nervous system (CNS) that is typically diagnosed at 30– 40 years of age (1). A great concern is the significantly lower levels of physical activity (PA) in people with MS (pwMS) across disability levels than in their healthy counterparts (2, 3).\n\nEarly promotion of PA and exercise is recommended due to numerous established benefits in health, symptom management and well-being for pwMS (4). In particular, high-intensity training is endorsed, as it has possible neuroprotective effects in the disease course (5, 6). In addition, exercises addressing sensorimotor impairments (e.g., reduced muscle strength, reduced neuromuscular control) are recommended, as they target individuals' capability to remain physically active (7). Sensorimotor impairments can influence trunk control, which is commonly disturbed in pwMS, even when disability is low (8, 9), and correlate with impaired balance, walking capacity and distance (10, 11). 
PwMS's knowledge of exercise benefits, attitudes and motivations, as well as contextual aspects such as lack of optimal exercise interventions, accessibility and support, affect the level of PA and exercise participation (12).\n\nCoreDISTparticipation (Table 1) is a new comprehensive intervention addressing sensorimotor function, trunk control, high-intensity running/walking and work participation in pwMS with low disability (13). It is based on the GroupCoreDIST1 intervention, which has been shown to have significant shortand long-term effects on trunk control, balance and walking among pwMS (14, 15). However, no effects of the intervention on objectively measured PA have been identified, even though the participants reported perceptions of new possibilities to be physically active as their sensorimotor impairments improved (16). To address PA challenges in pwMS, GroupCoreDIST was further developed to include a four-week period of outdoor training, in which high-intensity walking/running and GroupCoreDIST exercises are integrated (Table 2). To our knowledge, combinations of high-intensity training and rehabilitation of specific sensorimotor functions have been sparsely explored. Patient perspectives are essential for the evaluation of healthcare interventions (17); however, the new outdoor component of CoreDISTparticipation has yet to be investigated from a first-person perspective. Particularly interesting is what participants perceive as meaningful regarding the intervention, as this is essential for motivation, motor learning and exercise adherence (18).\n\nTo deepen our understanding of what the participants perceive as meaningful, we turn to a theoretical perspective that integrates bodily capacities with the construction of meaning. 
Enactive theory emphasizes that making sense of the world depends essentially on the biological (living) body and the phenomenological (lived or experienced) body (19), which implies that the body is viewed as a neurobiological organism that is concurrently experiencing, expressing and social (embodiment) (20). Thus, what is experienced by an individual during an exercise intervention is constituted by her sensorimotor repertoire for perception and action in interactions with the requirements of the task and the context (21). From this perspective, dysfunctions related to MS, such as sensorimotor impairments, can influence how individuals with MS interpret and understand their participation in a PA intervention. Moreover, the notion of \"participatory sensemaking\" (22) extends the body into the social domain, enabling an understanding of how the interaction processes between two embodied individuals affect shared and individual meaning-making. These concepts may illuminate pwMS's experiences and direct the focus toward bodily, contextual, and interactional aspects that may generate new insights regarding sensorimotor exercise and high-intensity training as part of PA.\n\nThe aim of this study was to explore participants' experiences of the content, delivery and setting of a new outdoor group intervention combining high-intensity training and detailed exercises to generate new knowledge about important aspects of exercise interventions for pwMS with low disability.\n\n1 GroupCoreDIST is a group-based intervention (Group), involving 35 exercises at different levels, addressing activation of trunk musculature (Core) in motor tasks in lying, sitting and standing (e.g. rolling, reaching, squatting, single leg stance. 
DIST describes essential elements of the concept: D = dose (high), dual task; I = individualization, insight, intensity; S = sensorimotor activation, selective movement control; T = task oriented training.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed13.pdf" - }, - { - "text": "TABLE 1 Overview of the CoreDISTparticipation intervention.\n\n| Week 1: MS outpatient | Consultation with the MS nurse (20 min) to address work-related issues based on a structured guide comprising the following themes: knowledge |\n| --- | --- |\n| clinic | of MS at the workplace, experienced work-related challenges due to MS, potential needs and facilitators. |\n| | Physiotherapy assessment (60 min) to explore the potential for changes in balance and walking aiming to turn focus toward possibilities and thus, |\n| | motivate the patient. |\n| | Based on these assessments the MS nurse and the physiotherapist indicated the aspects of importance on a standardized form to inform the |\n| | municipal physiotherapist. |\n| | Standardized testing (baseline, for the RCT). |\n| Week 2–5: Municipality | Physiotherapy assessment (60–90 min) to explore the patient's impairments and potential for improvements in a clinical examination prior to |\n| | group-training. |\n| | Indoor group (60 min × 2 weekly, for 4 weeks). There were three to five participants in each group and one physiotherapist. Trunk control, balance |\n| | and physical activity were addressed (GroupCoreDIST). Participants received a link to CoreDIST digital exercise-videos and were advised to do |\n| | them 1 × weekly throughout the intervention. (videos can be accessed here: https://www.nord.no/en/node/35,098) |\n| | Digital meeting with a multidisciplinary team (pwMS, employer, physiotherapist & MS nurse) (20 min) regarding barriers to work participation |\n| | and needs for adaptations regarding work and physical activity, according to a structured meeting-guide (one meeting). 
|\n| Week 6 | Standardized testing (midway, for the RCT). |\n| Week 7–10: Municipality | Outdoor group (60 min × 2 weekly, for 4 weeks). A maximum of ten participants and two physiotherapists were included in each group. Trunk |\n| | control and balance (GroupCoreDIST exercises) were addressed, and high-intensity walking or running was performed. The intervention was |\n| | conducted in a city park where both flat and uneven surfaces and hilly terrain were available (Table 2). |\n| | Additionally, participants were encouraged to comply with the exercise-videos through a weekly SMS-reminder. |\n| Week 11–14 | Standardized testing (final, for the RCT) and qualitative interviews. |\n\nTABLE 2 Description of the outdoor group.\n\n| Content | Purpose |\n| --- | --- |\n| Warm-up and recording one's own balance | |\n| Exercises for detailed sensorimotor | Preparation. |\n| activation, larger muscle groups, muscle | Experience one's own balance and |\n| length and balance while standing. | record eventual changes. |\n| Dual task: motor (using spiky balls and | |\n| medicine balls individually, in pairs and | |\n| in the group) and cognitive (singing, | |\n| rhymes and counting). | |\n| Main part | |\n| (1) High-intensity training (85%–95% | Improve stamina. |\n| maxHR/min 16 RPE) × 4 min: Running | Experience one's own opportunities for |\n| or walking with long strides and large | high-intensity physical activity. |\n| arm movements. Participants chose their | Improve sensorimotor control and |\n| own route, marking it with a cone, and | balance as prerequisites for walking and |\n| picked up a bean bag for each new lap to | running. |\n| count how many laps for each interval. | |\n| (2) Moderate-intensity detailed exercises | |\n| (approx. 70% maxHR) × 3 min. | |\n| CoreDIST exercises while standing | |\n| approximately (10 repetitions × 2 set). 
| |\n| Examples of exercises: squat, one legged | |\n| stance, rise on toes, reaching, turning and | |\n| rolling down to touch the ground in | |\n| standing. | |\n| Progressions was individually tailored | |\n| (during both running/walking and the | |\n| detailed exercises) through instructions, | |\n| demonstration and hands-on facilitations | |\n| by the physiotherapists. Quality and | |\n| efficiency of movement were addressed | |\n| by the physiotherapists. Optimalization | |\n| of trunk control during movement were | |\n| emphasised. | |\n| A combination of high-intensity and | |\n| CoreDIST exercises was repeated 3–4 | |\n| times during one session. | |\n| Cool-down and recording one's own balance | |\n| | Experience one's own balance and |\n| Hold/relax muscle contraction. | |\n| Balance on one leg. | record eventual changes. |\n\n### 2 Materials and methods\n\n#### 2.1 Design\n\nIndividual in-depth interviews using a phenomenologicalinspired approach were chosen, as this is suitable for exploring the meaning and significance of pwMS's experiences and reflections (23, 24).\n\n#### 2.2 Ethical considerations\n\nThe study was conducted according to the Declaration of Helsinki and approved by the Regional Committee for Medical Research Ethics in North Norway (REK North: 174837). Written informed consent was obtained prior to the intervention and confirmed verbally when arranging the interviews. Participation was voluntary and anonymous, and the participants were informed about the opportunity to withdraw from the study. 
The Consolidated Criteria for Reporting Qualitative Research (COREQ) (25) were used to optimize the conduct and reporting of the study.\n\n#### 2.3 Study context\n\nThis interview study was nested within a randomized controlled trial (RCT) comparing the CoreDISTparticipation intervention to usual care (26) and conducted at a regional hospital MS-outpatient clinic (Nordland Hospital Trust) and in two affiliated municipalities in the northern Norway. The current study investigates participants in the intervention group's experiences of the four-week outdoor group, which was part of this new intervention (Table 2). The outdoor sessions were conducted by three trained physiotherapists working in the", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed13.pdf" - }, - { - "text": "given the heterogenic pathology and symptoms of MS (41, 42). However, our findings illuminate qualitative aspects of how to achieve tailored and meaningful intersubjective interactions in an exercise intervention.\n\nWe consider the instances of the physiotherapist running together with the participant, which were perceived as important for participants' performance, to be an example of \"participatory sense-making\" (22). As participants appreciated being guided or even pushed by the physiotherapists, it appears that the physiotherapists were trusted in directing this interaction. As such, we argue that the physiotherapists' ability to adapt to participants' movements, speech and gestures—tailoring the interaction to their needs—was important for this ability to be perceived as purposeful. This is supported by the few negative incidents described where the participant-physiotherapist interaction seemed to not be jointly coordinated and appeared to fail. 
The reported mutual influences of sensorimotor capabilities and interpersonal coordination, with the physiotherapists but also the group, are in accordance with sensorimotor capacities and intersubjective interactions being important for sensemaking in the world (35). The benefits of these individualized participant-physiotherapist interactions are also described in specific core-stability exercises in indoor groups (16, 43) and are in line with the theoretical framework of facilitation of movement through hands-on interaction previously proposed (44, 45). Our study informs new knowledge of physiotherapistparticipant interactions to achieve the recommended highintensity training and calls for physiotherapy clinical reasoning through bodily and verbal communication skills adapted to the participants' responses in an ongoing and situated way.\n\nEnjoyment has previously been reported to promote PA in pwMS, and our study brings requested knowledge of what can constitute enjoyment in an exercise intervention (46): playful group-exercise tasks, a cheerful physiotherapist, and the outdoor environment.\n\nThe appreciation of being active outdoors in the study sample aligns with that in the general population (47). The outdoors provided a natural environment, which both invited participants to actively explore abilities thought of as left behind after their diagnosis with MS, such as running, and provided an appreciated break from focusing on MS symptoms. We also suggest that the positive experiences of mastering the challenging weather conditions and the added meaning of exercising among other people in the city park can be explained according to such terms. 
These positive experiences show how we are enmeshed in our history, context and social encounters (35) and how these aspects should also be accounted for when designing exercise interventions.\n\n#### 4.3 Methodological considerations\n\nThe design and methods were adequate for deriving knowledge from individuals' experiences. The participants selfreferred to the intervention and were recruited based on pre-set criteria. This approach yielded rich information from people with mild to moderate disabilities due to MS who were motivated for physical activity (PA), employed, and residing in northern Norway. Ethnicity or socio-economic class were not recorded. However, considering that all these factors can influence PA engagement (46), it is possible that additional aspects of the phenomenon could be uncovered in a different sample (48). There was a higher percentage of women participating than men; however, this corresponds to the gender distribution in the MS population (1).\n\nThe use of enactive theory was innovative within the field and allowed for, in particular, new aspects of importance for selfefficacy to be identified. Transference of our results to similar populations can be achieved through theoretical generalization (28).\n\n#### 4.4 Implications for clinical practice\n\nCombining high-intensity walking/running and detailed sensorimotor exercises was valued and provided meaningful embodied experiences, improving participants' ability to master PA and their beliefs of their own possibilities for being active in the future. However, the manner in which the content of an exercise intervention is delivered and the environment in which it is delivered should be accounted for, as these aspects were perceived to be of great importance in creating and shaping participants' experiences. 
In particular, tailored physiotherapist-participant bodily interactions and an engaging group and outdoor environment were perceived to be pertinent for exploring one's own potential.\n\nTo minimize negative incidents in future interventions, we suggest that (1) the effort required from one's leg muscles during the detailed exercises (in between the running/walking intervals) should be low to minimize the negative consequences of leg muscle fatigue prior to high-intensity running/walking, (2) the capacity for running/walking at high intensity should be explored in one-to-one physiotherapy assessment prior to group training to optimize individuals' capabilities and safety, and (3) homogenous and small-sized groups should be used to enable ongoing and tailored physiotherapist-participant interactions.\n\n#### Data availability statement\n\nThe datasets presented in this article are not readily available because of ethical and legal restrictions. Requests to access the datasets should be directed to stine.s.dahl@nord.no.\n\n#### Ethics statement\n\nThis study involving humans was approved by Regional Committee for Medical Research Ethics in North Norway (REK North: 174,837) and the Data Protection Officer at Nordlandssykehuset Hospital Trust, Norway. This study was conducted in accordance with the local legislation and", "page_start": 8, "page_end": 8, "source_file": "pubmed13.pdf" }, { "text": "gave them advice for follow-up. Some participants said that when the physiotherapist conducted the exercises or ran/walked together with them, it made them increase their exercise intensity. One participant described this as follows:\n\n> The physiotherapists pushed me to perform beyond what I thought I was able to—and that was great! There is no doubt that if someone is running beside you and shouting \"come on—well done\", you manage to push yourself further. 
(ID8, EDSS: 2)\n\nHowever, one participant described an incident where the interaction with the physiotherapists was not perceived as helpful:\n\n> When I get tired, it gets difficult. I can only do one thing at a time, and then these physiotherapists came running, talking and trying to motivate at the same time. I got very tired, and my leg would not follow my commands to run. (ID7, EDSS: 3.5)\n\nParticipants reported that they appreciated that the physiotherapists made them engage in playful activities with a ball, run for beanbags, and sing and in general created an informal and nice atmosphere. The enjoyment created was described as important for adherence to the intervention and as encouraging participants' physical effort during the session, as exercise felt easier when it was enjoyable. It was appreciated that the physiotherapists were perceived as both cheerful and serious about the intervention.\n\n#### 4 Discussion\n\nThe main findings of this study are that (1) being supported to explore and push one's own physical capabilities by combining high-intensity running/walking with detailed exercises was meaningful and evoked strong emotions. Improving one's balance, walking, and running led to increased beliefs in one's own possibilities. Some negative experiences were also described, particularly from the high-intensity training. (2) An engaging outdoor group with tailored physiotherapist-participant interactions and the co-creation of enjoyment was perceived to be important for the success of the individual. 
These findings illustrate how the dynamic intertwining of the body and movement, context and intersubjective interactions create meaning and beliefs in one's own physical capabilities (19).\n\n#### 4.1 Bodily experiences are inherent to beliefs in the mastery of physical activity\n\nThe meaningfulness of exploring the limits of training intensity that we identified in our study corresponds with other studies of pwMS's experiences of interventions addressing intensity of activity (31, 32). The exercises emphasizing trunk control were reported to reduce movement impairments and are in line with a study of pwMS with higher disabilities participating in an indoor group intervention (16). However, the perceived interlinking of improved sensorimotor functions and the ease of and efficiency in high-intensity walking/running have not been reported previously. It is likely that the detailed exercises prompted activations of the CNS and musculoskeletal systems, which are prerequisites for high-intensity walking and running (33). Impairments in such systems commonly occur due to CNS lesions or secondary inactivity, and function can improve with increased use (18). Our results support the value of integrating such specificity to optimize the capability to train at high intensity, even in individuals with low EDSS scores.\n\nThe described emotional associations of these bodily changes are interesting. Achieving higher exercise intensities, easier movements, reduced pain and improved sensation lead to positive feelings and enhanced prospects for both PA and life, while for some individuals, a failure to achieve high-intensity or no immediate changes in impairments are associated with feelings of loss and negative prospects. This calls attention to acknowledging that sensorimotor capacities facilitate or constrain how an individual perceives the world, which is closely interlinked with feelings, and that influence why participants perceive what they do (34). 
These experiences necessitate that sensorimotor changes in pwMS involve not only their biological body but also their relational and self-individuating modes of operating in the world, including how an experience coheres with, for example, participants' historical experiences (35). As we primarily regulate such modes to achieve an optimal positive mood state, this can also explain why only changes perceived as positive appear to enhance participants' beliefs for the future (36). Negative experiences such as failure to achieve high intensity because the legs are not working in the last interval can thus be perceived as detrimental by pwMS.\n\nWe argue that participants' perceived bodily changes affected their self-efficacy for being physically active. Self-efficacy involves an individual's perception of exerting control over his or her own actions (37) and has been extensively reported to be pertinent to PA engagement in pwMS (38, 39). However, self-efficacy is theoretically described according to social cognitive theory (38). Our findings highlight how experiencing, expressing and socially interacting through the body (embodied experiences) shape individuals' self-efficacy and suggest a crucial role of bodily perceptions in constituting self-efficacy for PA.\n\n#### 4.2 Interactions and environment shape meaning making\n\nParticipants perceived the group setting to increase motivation, support, and commitment, which has been found in previously published work (16, 31).\n\nThe physiotherapist-participant interaction is acknowledged in exercise interventions for pwMS, pointing to professionals' role in informing participants of exercise benefits in the management of MS, including the prescribing mode, frequency, intensity, and duration of exercise (40). 
Tailored interventions are supported", "page_start": 7, "page_end": 7, "source_file": "pubmed13.pdf" }, { "text": "It was somewhat distressing during the last interval, as my feet were not working, and I did not know what to do to increase my heart rate—when I could not run or walk quickly anymore. (ID2, EDSS: 2)\n\nThe focus on the core or the middle of the body in the detailed exercises was stated to improve participants' PA performance; participants described being less clumsy or unsteady or walking without holding on to the walls. Having practiced the detailed CoreDIST exercises in the indoor group beforehand was described as a helpful and pertinent preparation by some participants, as it was regarded as more difficult to accurately execute the exercise techniques outdoors due to their higher intensity, the uneven surface, or bad weather. Some participants commented that the standing exercises (in-between the running/walking intervals) required too much effort, leaving their legs tired for running afterward.\n\n#### 3.2 New insights and beliefs\n\nA key feature of the participants' stories was their new insights into their own physical abilities, which were perceived to influence their beliefs about their own possibilities for PA and life in general:\n\n> What meant the most for me was the high-pulse training, as I had thoughts of it being a left behind phase for me. The experience of being able to master it felt so good. It enhances my focus on future possibilities rather than limitations. 
(ID4, EDSS: 0)\n\nGains in insight were also reported from the detailed exercise part of the sessions, highlighting how the function of body parts through movements and sensations was linked to performance in PA, as illustrated below:\n\n> I have simply been taught some tools to improve certain parts of my body and how that has an effect on, for example, walking: That my hip has to be with me to maintain balance—and that makes how I stand on the ground important. Previously I was not aware of that…, now everything works better. (ID6, EDSS: 2)\n\nTwo participants reported that the intervention motivated them to commit to new exercise routines, and some stated that they had more \"readiness\" for activities such as playing with their grandchildren, hiking with friends, or engaging in a high-intensity activity. Some stated that their bodily changes were perhaps not noticeable for others, but they themselves noticed that it was easier to climb stairs, balance on one leg and walk fast or that they now moved in a \"better way\" or with less pain. Three participants perceived the duration of the outdoor group to be too short to feel lasting improvements in their physical endurance or muscular strength.\n\n#### 3.3 An engaging environment\n\nMost participants reported that their performances were positively influenced and motivated by the group setting, for example, through cooperating in exercises with balls, seeing other individuals in the group who were \"doing well\", cheering each other and competing when running and walking next to each other. However, one participant emphasized that observing people with visible disabilities from MS was distressing, as it revealed negative thoughts about one's own future. It was emphasized that mastering challenges in the group sessions added more meaning than doing the same alone:\n\n> I think this particular exercise is hard work, and then it becomes very tiring to do it on my own. 
However, when I did it in the group and we could laugh a bit in between and so on, it was easier because of the social element. (ID12, EDSS: 1.5)\n\nBeing active outdoors was preferred by many participants because of the fresh air and the natural and varied environment:\n\n> It was an added positive experience to use our city park and notice all the other people who were there…it is something about challenging our comfort-zone. (ID4, EDSS: 0)\n\nThe natural environment was also described as taking focus away from MS symptoms. Cold, rainy or snowy weather conditions required planning of adequate clothing; in addition, these conditions led some participants to use cautious behavior when the ground was slippery and led a few to omit sessions. However, mastering outdoor exercise was highlighted in positive terms, such as discovering new ways to become active.\n\n#### 3.4 Professional leadership, tailoring and co-creation of enjoyment\n\nThe way the physiotherapists led the group and, in particular, interacted with each participant were regarded as helpful for improving their bodily functions and activity levels. Some participants reported being afraid to try out new activities or training at high intensities after being diagnosed with MS but felt safe to explore when supervised by the physiotherapist because of their trust in the relationship between them and in the physiotherapist's professional knowledge.\n\nHow the physiotherapist approached the participants individually was described as important from this perspective. In particular, bodily interactions in which the physiotherapist demonstrated with his or her own body or placed his or her hands on the participant's body to correct a movement were reported to be successful, as it helped to increase speed and gave participants a sense of performing better or for a longer duration. 
If they did an exercise in a suboptimal way, participants reported receiving precise supervision, or if they expressed pain or were injured, the physiotherapist was supportive, assessed them and", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed13.pdf" - }, - { - "text": "institutional requirements. The participants provided their written informed consent to participate in this study.\n\n#### Author contributions\n\nSD: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Resources, Visualization, Writing – original draft, Writing – review & editing. EA: Conceptualization, Formal Analysis, Methodology, Supervision, Writing – review & editing. BN: Conceptualization, Formal Analysis, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing – review & editing.\n\n### Funding\n\nThe author(s) declare that financial support was received for the research, authorship, and/or publication of this article.\n\nThe development of the CoreDISTparticipation and the RCT is funded by the Northern Norway Health Authority (Helse Nord RHF). This interview study was funded by Nord University (PhD salary).\n\n## Acknowledgments\n\nThe authors would like to thank the participants in this study and the user representatives from Nordland MS Association for their valuable contributions. The authors also acknowledge philosopher of the mind and cognitive sciences Hanne De Jaegher for the valuable comments on the interpretations and discussions of the results.\n\n## Conflict of interest\n\nThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\n\n### Publisher's note\n\nAll claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. 
Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.\n\n## References\n\n1. Walton C, King R, Rechtman L, Kaye W, Leray E, Marrie RA, et al. Rising prevalence of multiple sclerosis worldwide: insights from the Atlas of MS, third edition. Mult Scler. (2020) 26(14):1816–21. doi: 10.1177/1352458520970841\n\n2. Casey B, Coote S, Galvin R, Donnelly A. Objective physical activity levels in people with multiple sclerosis: meta-analysis. Scand J Med Sci Sports. (2018) 28 (9):1960–9. doi: 10.1111/sms.13214\n\n3. Kinnett-Hopkins D, Adamson B, Rougeau K, Motl RW. People with MS are less physically active than healthy controls but as active as those with other chronic diseases: an updated meta-analysis. Mult Scler Relat Disord. (2017) 13:38–43. doi: 10.1016/j.msard.2017.01.016\n\n4. Hoang PD, Lord S, Gandevia S, Menant J. Exercise and sports science Australia (ESSA) position statement on exercise for people with mild to moderate multiple sclerosis. J Sci Med Sport. (2022) 25(2):146–54. doi: 10.1016/j.jsams.2021.08.015\n\n5. Dalgas U, Langeskov-Christensen M, Stenager E, Riemenschneider M, Hvid LG. Exercise as medicine in multiple sclerosis—time for a paradigm shift: preventive, symptomatic, and disease-modifying aspects and perspectives. Curr Neurol Neurosci Rep. (2019) 19(11):1–12. doi: 10.1007/s11910-019-1002-3\n\n6. Riemenschneider M, Hvid LG, Ringgaard S, Nygaard MKE, Eskildsen SF, Gaemelke T, et al. Investigating the potential disease-modifying and neuroprotective efficacy of exercise therapy early in the disease course of multiple sclerosis: the early multiple sclerosis exercise study (EMSES). Mult Scler. (2022) 28(10):1620–9. doi: 10. 1177/13524585221079200\n\n7. Kalb R, Brown TR, Coote S, Costello K, Dalgas U, Garmon E, et al. Exercise and lifestyle physical activity recommendations for people with multiple sclerosis throughout the disease course. Mult Scler. 
(2020) 26(12):1459–69. doi: 10.1177/1352458520915629\n\n8. Moreno-Navarro P, Manca A, Martinez G, Ventura L, Barbado D, Vera-García FJ, et al. Test-retest reliability and known-groups validity of trunk muscle tests in people with multiple sclerosis: a cross-sectional, case-control study. Phys Ther. (2021) 101(5):1–9. doi: 10.1093/ptj/ptzab049\n\n9. Raats J, Arntzen EC, Lamers I, Feys P, Normann B. What is the distribution of trunk impairments and its relationship with disability level in individuals with multiple sclerosis? Mult Scler Relat Disord. (2021) 57:103325. doi: 10.1016/j.msard.2021.103325\n\n10. Normann B, Arntzen EC. What are the relationships between trunk control, balance and walking in individuals with multiple sclerosis with minor to moderate disability? Eur J Physiother. (2021) 23(6):377–83. doi: 10.1080/21679169.2020.1772870\n\n11. Unluer NO, Ozkan T, Yasa ME, Ates Y, Anlar O. Investigation of the relationship between trunk motor control and balance, functional mobility, and gait capacity in patients with multiple sclerosis/multipl sklerozlu hastalarda govde motor kontrolu ile denge, fonksiyonel mobilite ve yuruyus kapasitesi arasindaki iliskinin incelenmesi. Türk Nöroloji Dergisi. (2021) 27(3):283. doi: 10.4274/tdn.2021.41017\n\n12. Learmonth YC, Motl RW. Physical activity and exercise training in multiple sclerosis: a review and content analysis of qualitative research identifying perceived determinants and consequences. Disabil Rehabil. (2016) 38(13):1227–42. doi: 10.3109/09638288.2015.1077397\n\n13. Fikke HK, Normann B, Sivertsen M, Dahl SSH, Arntzen EC. Optimizing sensorimotor function, physical activity and employment for people with MS—a feasibility study. Fysioterapeuten. (2023) 90(1):32–42. doi: 10.52705/c14a8ca05f7546dabc18bd0275cf2edd\n\n14. Arntzen EC, Straume B, Odeh F, Feys P, Normann B. 
Group-based, individualized, comprehensive core stability and balance intervention provides immediate and long-term improvements in walking in individuals with multiple sclerosis: a randomized controlled trial. Physiother Res Int. (2019) 25(1):e1798. doi: 10.1002/pri.1798\n\n15. Arntzen EC, Straume BK, Odeh F, Feys P, Zanaboni P, Normann B. Group-based individualized comprehensive core stability intervention improves balance in persons with multiple sclerosis: a randomized controlled trial. Phys Ther. (2019) 99(8):1027–38. doi: 10.1093/ptj/pzz017\n\n16. Arntzen EC, Øberg GK, Gallagher S, Normann B. Group-based, individualized exercises can provide perceived bodily changes and strengthen aspects of self in individuals with MS: a qualitative interview study. Physiother Theory Pract. (2019) 37(10):1080–95. doi: 10.1080/09593985.2019.1683923\n\n17. Florio-Smith J, Ayer M, Colhoun S, Daykin N, Hamill B, Liu X, et al. The importance of the patient's perspective in decision-making in multiple sclerosis: results of the OwnMS patient perspectives study. Mult Scler Relat Disord. (2023) 75:104757. doi: 10.1016/j.msard.2023.104757\n\n18. Kleim JA, Jones TA. Principles of experience-dependent neural plasticity: implications for rehabilitation after brain damage. J Speech Lang Hear Res. (2008) 51(1):225–39. doi: 10.1044/1092-4388(2008/018)\n\n19. Thompson E. Mind in Life: Biology, Phenomenology, and The Sciences of Mind. Cambridge, Mass: Harvard University Press (2007).\n\n20. Merleau-Ponty M. Phenomenology of Perception. London: Routledge Classics (2008).", "page_start": 9, "page_end": 9, "source_file": "pubmed13.pdf" }, { "text": "However, explained variance estimates in our models ranged from 34 to 61%, suggesting further research is necessary to identify additional factors contributing to healthcare utilization following physical therapy.\n\nThe primary limitation of the study is the high number of subjects lost to follow-up. 
We attempted to account for the bias introduced by loss to follow-up in our models with IPAW, which is a robust strategy for conducting analyses with missing data [41, 51]. We observed good concordance between results of complete case and weighted analyses, giving us confidence in our findings. However, important differences in age, race, education, symptom onset, baseline pain intensity, and baseline pain-related psychological distress were noted between those who did and did not complete follow-up. These differences suggest that the group lost to follow-up may represent a unique population to whom these results may not apply. Different factors may predict utilization outcomes for this unique population. As a result, readers should exercise caution when extending these findings to individuals and populations that substantially differ from the analytic sample in this study. Specifically, these predictive models may need to be adjusted for younger individuals of non-white race, with lower education levels, sudden onset of symptoms, and those with higher pain intensity and pain-associated distress.\n\nA second limitation is that we did not know about the subjects' prior experiences with physical therapy, or whether they arrived at physical therapy through direct access or referral from another provider. These factors could be associated with treatment expectations, which have known effects on treatment outcomes [52, 53]. We also did not collect specific information on treatment. But by including changes in pain, disability, and pain-related psychological distress in the models, we were able to account for treatment response. The benefit of this approach is that models are generalizable for predicting utilization outcomes across \"real-world\" pragmatic physical therapy settings where treatment variation is expected. 
The drawback is that we are prohibited from making conclusions regarding which characteristics of the clinical encounter might influence subsequent pain-related healthcare utilization. Important characteristics to consider would include number of visits, type of interventions or whether patients completed their course of physical therapy. These have been proposed or identified as important contributors to downstream costs following physical therapy [54, 55] and may be a source of unexplained variance in our models. Characteristics of the clinical encounter should be considered in future studies to refine the prediction models developed in our analyses.\n\nThird, we were unable to adequately model the specific effects of worker's compensation, self-pay and some commercial insurance coverage on utilization due to the low incidence of these forms of payment in our study sample. Modeling these separately would have created the potential for unreliable and imprecise effect estimates. Readers should consider the within-group heterogeneity caused by this approach and exercise caution when applying these results to individuals who do not have traditional public or private insurance coverage. Future studies should investigate the performance of the OSPRO tools in predicting outcomes for patients with Worker's Compensation.\n\nA final limitation is the use of patient recall to measure utilization. To mitigate recall bias, we used two follow-up points, at 6 and 12 months. However, under- or over-reporting of utilization is often a concern with studies requiring subject recall [56–58]. Medical record and claims data were not available for these subjects. Readers should consider our inability to independently confirm utilization when interpreting results.\n\nIn future studies, we will embed the OSPRO tools into electronic medical record (EMR) databases to refine and test outcomes prediction models at the health care systems level. 
Importantly, we will collect clinical encounter data through the EMR and combine it with administrative or billing data to confirm the results of this study with more objective measures of health care use. These studies will also allow us to provide better guidance on how to use the OSPRO tools to identify serious psychiatric involvement or systemic sources of pain that require medical referral. Finally, we will explore alternative scoring strategies for the tools, such as weighted scoring for the OSPRO-ROS and use of predicted full-length psychological questionnaire scores for the OSPRO-YF. Healthcare providers could then use the collective information from these studies to build learning health systems that facilitate effective, real-time clinical decision-making support to improve value of care for patients with musculoskeletal pain.\n\n#### Conclusion\n\nBaseline disability and change in pain intensity were important predictors of any subsequent pain-related healthcare utilization, while predictors of individual service utilization were outcome-specific. Identification of risk is improved through treatment monitoring for pain and, in some cases, disability and pain-related psychological distress. Comorbidity burden was an important predictor of subsequent utilization of opioids and diagnostic tests and imaging, both of which have been recent targets of healthcare policy to constrain their unnecessary use. Future research is needed to refine these predictor variables and incorporate them into risk models that support clinical decision-making so that treatment effectiveness and efficiency are optimized in value-based systems.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed5.pdf" - }, - { - "text": "Even in the **short period between 2013 and 2018** (the period covered by these pilot statistics) the data show an overall decline and a decline of several relevant occupational diseases. 
The strongest decrease — practically a halving — can be seen for hearing impairments (diseases of the inner ear). Pneumoconiosis, mesothelioma and selected occupational cancers went down between 7% and 14%. **Asthma and some recognised MSDs** are more or less stagnating, probably due to unchanged exposure to biological or chemical substances and no change regarding the health outcomes of ergonomic working conditions.\n\nIf work is **one of some** causative factors, a clear assignment of work to a health outcome is complex. Moreover, in many cases a quite **long observation period** is necessary simply due to the **latency time between exposure at work, outbreak and detection of a disease**, which is obviously very different from the clear and immediate consequence of an accident at work.\n\nThe detection of a disease and the correlation between work and this disease depends highly on the **monitoring capacities of the health system and its ability, tradition and standards to connect diseases and work-related causes**. In a study on 'Asbestos‐related occupational diseases in Central and East European Countries' the authors refer to different policies for identifying workers formerly exposed to asbestos and conclude:\n\n*'Consequently, large differences are observed from one country to another regarding the number of recognised asbestos-related cases. In Slovenia, for example, the annual asbestosis rate (cases of asbestosis/population) amounts to 14.9, in Croatia 5.3, and in Poland 2.1. Moreover, in Estonia, the incidence of asbestosis is unknown as there is no systematic collection of data.'*181\n\nFor example, until now very few occupational diseases have been recognised as outcomes of psychosocial risks at work. 
The ILO proposes in its 'List of Occupational Diseases Recommendation' a large number of very specific and 'classic' occupational diseases — a very broad definition of *'Mental and behavioural disorders'* but leaving the responsibility to science and to 'national conditions'. 182 Similarly, the development of the European Schedule of Occupational Diseases (ESOD) aims to improve knowledge, step up prevention and provide assistance in linking occupational activities and diseases.", - "page_start": 74, - "page_end": 74, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- (d) \"specified activities\" means—\n\t- (i) crop maintenance,\n\t- (ii) crop harvesting,\n\t- (iii) tunnel construction and dismantling,\n\t- (iv) irrigation installation and maintaining,\n\t- (v) crop husbandry,\n\t- (vi) packing and processing of crops on employer's premises,\n\t- (vii) preparing and dismantling growing areas and media,\n\t- (viii) general primary production work in edible horticulture,\n\t- (ix) activities relating to supervising teams of horticulture workers.\n\n**44.**—(1) A domestic elite sportsperson, an international elite sportsperson, a domestic ancillary sportsperson or an international ancillary sportsperson.\n\n(2) For the purposes of this paragraph—\n\n\"domestic ancillary sportsperson\" means an individual essential to—\n\n- (a) the running of an elite sports event including—\n\t- (i) operational staff essential to the running of that elite sports event,\n\t- (ii) event officials and referees, or\n- (b) the support of a domestic elite sportsperson including—\n\t- (i) sports team medical, logistical, technical and administration staff,\n\t- (ii) individual sportsperson medical and technical support staff,\n\t- (iii) horse grooms and trainers,\n\t- (iv) motorsport mechanics and technical staff,\n\t- (v) the parent or carer of a domestic elite sportsperson under the age of 18;\n\n\"domestic elite sportsperson\" means an 
individual who—\n\n- (a) derives a living from competing in a sport or is—\n\t- (i) a senior representative nominated by a relevant sporting body,\n\t- (ii) a member of the senior training squad for a relevant sporting body, or\n\t- (iii) aged 16 or above and on an elite development pathway,\n- (b) is in England, after departing from or transiting through a category 2 country or territory, and\n- (c) either—\n\t- (i) has departed from or transited through the category 2 country or territory in order to compete in an elite sports event, or to participate in training for an Olympic or Paralympic event, and has returned to England with the intention of continuing activities as a sportsperson, or\n\t- (ii) is a United Kingdom sportsperson who is not habitually resident in the United Kingdom and has travelled to England in order to participate in training for or to compete in an elite sports event;\n\n\"elite sports event\" means a specified competition or other sporting event in which the participants compete—\n\n- (a) to derive a living, or\n- (b) to qualify for the right to represent—\n\t- (i) Great Britain and Northern Ireland at the Tokyo or Beijing Olympic or Paralympic Games, or the Paris Olympic or Paralympic Games, or", - "page_start": 46, - "page_end": 46, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_STO_2004.pdf", - "query": "What was the sales revenue of Santos in 2004 ?", - "target_page": 12, - "target_passage": " Sales revenue was a record $1,501 million", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "## **MALEO NEGOTIATIONS ADVANCED**\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 21\n\nOutside Australia, Santos and its co-venturers have executed a Heads of Agreement for the sale of the entire gas reserves of the Maleo field offshore East Java, Indonesia. 
Santos continued negotiations with PT Perusahaan Gas Negara, Indonesia's state-owned gas distributor, on behalf of the joint venture to finalise the Gas Sales Agreement. The project is targeting first production in the first half of 2006 at rates of up to 100 mmcf/d for more than five years.\n\n## **FIRST RETAIL GAS SALES WITH SANTOS DIRECT**\n\nAs well as selling gas into the wholesale gas market, Santos secured a retail gas licence from the Victorian Government in 2004. This allows Santos to sell gas direct to industrial customers and into the Victorian spot market through a wholly-owned subsidiary, Santos Direct Pty Ltd ('Santos Direct').\n\nSantos Direct will market Santos' 10% share of gas production from the Minerva field – around 15 TJ/d – in the offshore Otway Basin, which commenced production at the end of 2004.\n\nThe move to market and sell gas directly into the Victorian retail market is a first for Santos and leverages off Santos' position as one of Australia's largest gas producers, supplying wholesale gas to major industrial customers and specialist marketers in all mainland Australian states and territories.\n\n## **LIQUIDS MARKETING ALLIANCE WITH BP**\n\nAnother important marketing development during the year was the decision to outsource the marketing of crude oil and natural gas liquids to BP. The new marketing arrangements are in response to the significantly higher volumes of crude oil that Santos will receive from the Mutineer-Exeter and Oyong projects, coming on stream in 2005, and the increasing globalisation of the liquids marketplace.\n\nThe validity of this approach has already been demonstrated by the sale of the first Mutineer-Exeter oil cargo at a premium to Tapis despite a discount for the uncertain delivery date.\n\nSantos continues to build an inventory of high quality options to provide a platform for production growth over the coming years. 
Santos is committed to a program of diversification while capitalising on the long-term Cooper Basin legacy asset. Most importantly, this involves leveraging the strengths of the core competencies built up over a number of years and Santos' well-positioned domestic gas franchise.\n\n**'During 2004 we brought together everyone at Santos responsible for commercialisation into a single team. One of the outcomes from this was the introduction of gas swaps, where we were able to move gas between Santos assets in different states.'**\n\n#### **RICK WILKINSON**\n\nVice President Gas Marketing and Commercialisation\n\n**The alignment of joint venture interests in the John Brookes and East Spar fields has created an important production hub at Varanus Island, Carnarvon Basin, offshore Western Australia.**", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# ENHANCING THE PORTFOLIO\n\nIn 2004, Santos continued its normal business of actively managing its portfolio through the divestment of non-core assets and the acquisition of assets that fit well with existing Santos assets or can add to the ability of the Company to meet its strategic goals.\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 23\n\nAs a result of this activity, Santos realised an after-tax profit of $47.4 million on oil and gas asset sales and will continue to high-grade its portfolio on an ongoing basis.\n\nSantos entered into an agreement with PT Medco during the first half of 2004 to acquire some of Novus Petroleum's Indonesian and Cooper Basin assets conditional on the success of PT Medco's takeover offer for Novus, which was ultimately successful.\n\nSpecifically, Santos announced in September 2004 that it had executed formal agreements to acquire an additional 4.75% of the South Australian Cooper Basin, 18% of the Brantas PSC and 9% of the Kakap PSC from Medco for US$110 million. 
On 31 December 2004, Santos paid Medco US$98 million for the majority of the assets, with payment for the remaining 2.75% of Kakap PSC expected to be made in the first quarter of 2005.\n\nThis acquisition was an important piece in the strategic puzzle to tie up access to follow-up potential from the successful exploration at Jeruk and to provide a production base for the newly established Indonesian core area.\n\nAlso during the first half of 2004, Santos divested its remaining 18.4% shareholding in Magellan\n\nPetroleum Australia Ltd, raising approximately $10.6 million.\n\nEarly in the second half of 2004, Santos concluded the sale of its non-core onshore Otway Basin interests to Origin Energy for $25.75 million. This sale resulted in an after-tax profit of $18 million that was booked in 2004.\n\nIn addition, an exploration joint venture was formed with ConocoPhillips in the NT/P61 block offshore Darwin, Northern Territory, to drill the Caldita well and provide Santos with access rights to a potential expansion of the Wickham Point LNG facility. This deal further enhances Santos' infrastructure strategy to leverage its position within vital infrastructure to improve shareholder value while reducing the risk profile of the wildcat exploration program.\n\nDuring the third quarter, Santos expanded its offshore Victorian gas interests to 50% in both the Patricia-Baleen and the Sole gas fields through the acquisition from Trinity Gas Resources of an additional 30% interest in the Patricia-Baleen gas field and associated processing facilities in eastern Victoria and an additional 15% interest in the Sole gas field.\n\nSantos earned its 30% additional equity in the Patricia-Baleen gas field by meeting Trinity's remaining share of drilling costs on the Baleen 4 well which was drilled successfully as a sidetrack well of Baleen 3. 
Santos will earn its 15% additional equity in the Sole gas field by meeting certain development costs on behalf of Trinity, if and when the Sole joint venture partners proceed to develop this gas resource.\n\nThe acquisition of these Victorian gas interests strengthens Santos' domestic gas and infrastructure strategy that was further enhanced by the OMV purchase announced early in 2005. Importantly, Santos is now the operator of the strategic Orbost gas processing facility.\n\nLate in the year, Santos sold its 18.02% share in the Carpentaria Gas Pipeline between Ballera and Mount Isa in Queensland to Australian Pipeline Trust for $59 million, resulting in a $21 million after-tax profit that was booked in the 2004 financial year.", - "page_start": 24, - "page_end": 24, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "**Santos employees rehabilitating a section of the River Torrens in Adelaide, as part of Santos' three-year commitment to the Our Patch project.**\n\nof opportunities to use fewer greenhouse-emitting or renewable sources of energy.\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 28\n\nTo achieve these commitments Santos is actively pursuing an emissions intensity reduction target (greenhouse emissions per unit of production) of 20% in the period from 2002 to 2008.\n\n#### **SUPPORTING COMMUNITIES**\n\nSantos has relationships with a number of communities where it operates. Some have been longterm and others are just beginning. 
Relationships with communities outside Australia, such as Indonesia and the United States, are also emerging as Santos' business grows in these locations.\n\nSantos made contributions during 2004 to a wide variety of organisations and events through the sponsorship program as part of the Company's commitment to supporting the communities to which it belongs.\n\nPartnerships continued in 2004 with the Australian School of Petroleum, the Adelaide Symphony Orchestra, the State Opera Company of South Australia, the Art Gallery of South Australia and the Lloyd McDermott Foundation.\n\nOne of the highlights of the 2004 program was the establishment of the Santos Community Fund. It brings together all of the contributions Santos makes to community-based organisations and recognises and supports the efforts of Santos employees who choose to contribute their own time and resources to improving their communities.\n\nThe 'Our Patch' program was a recipient of this fund in 2004. This is a joint initiative of the Patawalonga and Torrens Catchment Management Boards which encourages the local community to assist with the rehabilitation and management of Adelaide's water catchment.\n\nSantos has adopted a patch of the River Torrens and employees are assisting with the remediation and revegetation of this area in a volunteering program.\n\n#### **CORPORATE GOVERNANCE**\n\nFor the third year running, the integrity of Santos' corporate governance was recognised in 2004 with the maximum five-star rating in the Corporate Governance Research Report prepared by Horwath and the University of Newcastle.\n\nA more detailed overview of corporate governance at Santos follows on page 29 of this Annual Report.\n\nMore detailed information about sustainability at Santos is contained in the Sustainability Review and copies are available from the Company and via the Santos website www.santos.com.", - "page_start": 29, - "page_end": 29, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "SAN165 
WWW Text 30/3/05 12:06 PM Page 11\n\n## **DEPRECIATION, DEPLETION AND AMORTISATION**\n\nAll things being equal, DD&A could have been expected to be lower this year, as Santos produced lower volumes and had written off the Heytesbury plant in the onshore Otway Basin last year.\n\nHowever, two factors caused an increase in 2004 DD&A. Firstly, while reserve revisions were positive overall, negative revisions were predominantly in producing areas which increased depletion rates in 2004, while positive reserve revisions were in areas where Santos is not yet producing or where straight line depreciation is dominant; for example, Casino and John Brookes.\n\nSecondly, on the future development cost side, depletion is up partly because Santos is starting to factor in higher steel and service company costs into long-term economic models.\n\n## **CASH FLOW LOWER**\n\nWhile Santos had a strong profit year, this is not fully reflected in cash flows.\n\nThere were large movements in trade debtors between years, reflecting the timing of liftings and the payments for them.\n\nIn addition, Santos has not yet been paid for the insurance claim relating to the Moomba incident. A total of $117 million was recognised in sundry income, which represents an estimate of the amount receivable from insurers for lost revenue, additional costs and replacement plant and equipment. At year end the money was still owed and so is not shown as part of operating cash flow. The final quantification of the claim with insurers is progressing.\n\n#### **RECORD CAPITAL EXPENDITURE**\n\nCapital expenditure ended right on target at $930 million – a record year for Santos – approaching a level which is double DD&A, reflecting how rapidly the portfolio is changing.\n\nSantos will continue with a high development expenditure in 2005, but expects to spend more in line with cash generation. 
Exploration spend is estimated to be about $150 million, while development spend is expected to be reduced to $530 million and delineation to $90 million. Other capital spending is expected to be reduced to $80 million.\n\nThis results in a total planned capital expenditure for 2005 of approximately $850 million.\n\n#### **FINANCIAL FLEXIBILITY INTACT**\n\nSantos ended the year in a strong financial position with its financial flexibility intact, despite the record development spending.\n\nThe FUELS issue was successful and Santos' gearing increased only marginally, despite the large capital program in 2004.\n\nThis is important in Santos' business as the Company needs to be able to fund exploration success as it occurs, and our development projects are increasing in size.", - "page_start": 12, - "page_end": 12, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# ANALYSING FINANCIAL PERFORMANCE\n\nSAN165 WWW Text 30/3/05 12:06 PM Page 10\n\n**'The sound operating results achieved in 2004 underline the changing face of Santos towards a higher value, higher margin business. We ended the year with a strong financial position and our financial flexibility intact.'** \n\n#### **PETER WASOW**\n\nChief Financial Officer\n\n#### **2004 WAS A YEAR OF GOOD OPERATING RESULTS**\n\nOverall the increase in 2004 profit of 16% reflected a year of sound operating performance. Sales revenue was a record $1,501 million, up 2.5% on 2003, reflecting higher prices across most products and was achieved despite lower production as a result of the Moomba incident and declining output from late life fields.\n\nSantos benefited from higher world oil prices and realised US$51.83 per boe in 2004, an increase of 19% over 2003. 
The benefit of higher world oil prices substantially offset the impact of lower production volumes.\n\nSantos was also able to negotiate higher domestic gas prices (up 4% on average) and deliver new revenue streams from project start-ups and acquisitions during the year.\n\n## **PRODUCTION HAMPERED BY MOOMBA INCIDENT**\n\n2004 production was lower due to the Moomba incident, which reduced production by 4.6 million boe. Field decline reduced production by a further 5.0 million boe.\n\nOffsetting these factors, Santos' growth projects are starting to come on line and have begun to reverse the decline experienced over the past three years. Two projects were commissioned in 2004: the Bayu-Undan liquids project and the Minerva gas project. In addition, acquisitions contributed 0.8 million boe to production.\n\nFor 2005, production is expected to improve by around 15%, or 4% excluding the impact of the Moomba incident. Santos now expects production to be around 54 million boe in 2005. This increase is largely driven by the commissioning of Mutineer-Exeter in March 2005 and the John Brookes gas field in the middle of the year.\n\n## **PRODUCTION COSTS UNDER CONTROL**\n\nProduction costs in 2004 were $309 million, up $45 million or 17% on 2003. 
Analysis shows that Santos was able to continue to effectively control its costs in the face of significant external pressures in the form of rising services and materials prices.\n\nExamining production costs in detail reveals:\n\n- the start-up of Bayu-Undan and acquisitions added $16 million to Santos' cost base\n- changes in our accounting added a further $16 million to Santos' production costs\n- higher insurance premiums ($8 million) and one-off stock write-offs ($5 million) were offset by $17 million in cost savings largely as a result of Santos' continuous improvement initiatives\n- the Moomba incident resulted in $17 million of one-off costs in 2004.\n\nPiecing this together, the key themes in our financial performance were:\n\n- cost savings in established production areas more than offset increases in the price of services and materials\n- Santos' cost base rose as production from new developments and acquisitions were added to the Company's expanding portfolio of producing assets.\n\n### **PRODUCTION AND SALES REVENUE**", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# Sustainability\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 26\n\n# MANAGING FOR SUSTAINABLE GROWTH\n\n**'The publication of our first Sustainability Review in 2004 was a major achievement for Santos. The next steps are to undertake projects to improve our performance – not just in Australia but worldwide – and to accurately collect, verify and report on a range of sustainability data.'**\n\n#### **MARTYN EAMES**\n\nVice President Corporate and People\n\nLate in 2004 Santos published *First Steps: Sustainability Review*, the Company's first standalone publication on this topic. 
It describes how Santos is implementing the principles of sustainability in the areas of corporate governance, the environment, social responsibility and economic performance.\n\nThis was a significant milestone for Santos as it represents a starting point for the collection of data and the ongoing measurement of performance in the area of sustainability.\n\nCommunicating with stakeholders is an important activity and the publication of the Sustainability Review is a further extension of Santos' commitment in this regard. Santos applies considerable resources to the communication effort and aims to present information in a clear and concise manner in order to generate a greater understanding of the business by its stakeholders.\n\nSantos has been recognised for its achievements in this area. Santos' 2003 Annual Report was featured as an example of best practice reporting in PricewaterhouseCoopers' *Trends in Corporate Reporting 2004* publication. Reports from companies worldwide are considered in compiling this publication and they must meet specified criteria. This is the third time a Santos annual report has been featured. Santos was also awarded a 2004 Silver Award for Excellence in Annual Reporting for the 2002 Annual Report by the Australasian Reporting Awards.\n\nReceiving independent recognition for these activities serves as a reference point for Santos' desire to continually improve communication performance.\n\nSantos has been listed as an inaugural member of the Australian SAM Sustainability Index (AuSSI). The AuSSI tracks the performance of around 70 Australian companies that lead their industry in terms of economic, environmental and\n\n#### **TOTAL RECORDABLE CASE FREQUENCY RATE**\n\nTRCFR per millions hours worked\n\nsocial criteria. 
The index is calculated daily by Dow Jones Indexes and published in *The Australian* newspaper.\n\nFollowing is an overview of progress and achievements in the area of sustainability for 2004.\n\n#### **SAFETY IMPROVING**\n\nThe health and safety of employees is of paramount concern to Santos. Santos delivered another year of improvement in 2004 and achieved its lowest total recordable case frequency rate of 6.4.\n\nFurther improvements were also made with the implementation of the Environment, Health and Safety Management System standards, with Santos operations undergoing full assessments against standards for the first time.\n\nThe results demonstrated considerable improvement over the baseline assessments conducted in 2003 with steady progress in the implementation of the procedures, processes and tools needed to achieve the requirements of the standards.\n\nProcess safety capability which deals with plant and equipment integrity assurance, design and construction, and maintenance, is being developed through the formation of a new set of standards to be incorporated\n\ninto the health and safety management system.\n\nThe safety focus in 2005 will be on finalising a comprehensive set of hazard standards which outline the required controls to ensure that hazards encountered across Santos' operations and activities are well managed.\n\n## **POSITIONING THE WORKFORCE FOR THE FUTURE**\n\nSantos commenced a major company-wide transformational change program in late 2003. The program was designed to significantly improve Santos' performance in four areas: key business processes, financial performance, organisation structure and company culture.\n\nReorganising and simplifying the Company's structure was one of the major outcomes and in May 2004 Santos began operating under a new functionally-based organisation structure.\n\nThe new structure is designed to support the explorationfocused growth strategy. 
It mirrors the 'conveyor belt' lifecycle of an exploration and production company where exploration success leads to commercialisation and development activity and finally revenue-generating production.\n\nIt also follows the principle that the majority of employees should", - "page_start": 27, - "page_end": 27, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# NOTES TO THE FINANCIAL STATEMENTS\n\nfor the year ended 31 December 2004\n\nSAN165 WWW Fins 30/3/05 11:55 AM Page 70\n\n#### **22. Investments in Controlled Entities**\n\n| Name | Place of | Name | Place of |\n| --- | --- | --- | --- |\n| | incorporation | | incorporation |\n| Santos Ltd (Parent Entity) | SA | Santos Brantas Pty Ltd3 | VIC |\n| Controlled entities1 : | | Santos (Donggala) Pty Ltd3 | VIC |\n| Alliance Petroleum Australia Pty Ltd | VIC | Santos Egypt Pty Ltd3 | VIC |\n| Boston L.H.F. Pty Ltd | VIC | Santos Hides Ltd | PNG |\n| Bridgefield Pty Ltd | QLD | Santos International Operations Pty Ltd | QLD |\n| Bridge Oil Developments Pty Limited | NSW | Santos (Madura Offshore) Pty Ltd | WA |\n| Canso Resources Pty Ltd | NSW | Santos Niugini Exploration Limited | PNG |\n| Coveyork Pty Ltd | NSW | Santos (Nth Bali 1) Pty Ltd | SA |\n| Doce Pty Ltd | QLD | Santos (Papalang) Pty Ltd | SA |\n| Farmout Drillers Pty Ltd | NSW | Santos (Popodi) Pty Ltd | SA |\n| Kipper GS Pty Ltd | WA | Santos (JPDA 91-01) Pty Ltd | ACT |\n| Controlled entity of Kipper GS Pty Ltd | | Santos (JPDA 91-12) Pty Ltd | ACT |\n| Crusader (Victoria) Pty Ltd | VIC | Santos (NGA) Pty Ltd | VIC |\n| Moonie Pipeline Company Pty Ltd | QLD | Santos (N.T.) Pty Ltd | ACT |\n| Novus Australia Resources NL2 | VIC | Controlled entity of Santos (N.T.) 
Pty Ltd | |\n| Reef Oil Pty Ltd | NSW | Bonaparte Gas & Oil Pty Limited | NSW |\n| Santos Asia Pacific Pty Ltd | QLD | Santos Offshore Pty Ltd | VIC |\n| Controlled entities of Santos Asia Pacific Pty Ltd | | Santos Oil Exploration (Malaysia) Sdn Bhd (in liquidation) | MAL |\n| Santos (Sampang) Pty Ltd | SA | Santos Petroleum Pty Ltd | NSW |\n| Santos (Warim) Pty Ltd | SA | Santos QNT Pty Ltd | QLD |\n| Santos Australian Hydrocarbons Pty Ltd | QLD | Controlled entities of Santos QNT Pty Ltd | |\n| Santos (BOL) Pty Ltd | NSW | Santos QNT (No. 1) Pty Ltd | QLD |\n| Controlled entity of Santos (BOL) Pty Ltd | | Controlled entities of Santos QNT (No. 1) Pty Ltd | |\n| Bridge Oil Exploration Pty Limited | ACT | Santos Petroleum Management Pty Ltd | QLD |\n| Santos Darwin LNG Pty Ltd | ACT | Santos Petroleum Operations Pty Ltd | QLD |\n| Santos Direct Pty Ltd3 | SA | TMOC Exploration Proprietary Limited | QLD |\n| Santos Facilities Pty Ltd | SA | Santos QNT (No. 2) Pty Ltd | QLD |\n| Santos Finance Ltd | NSW | Controlled entities of Santos QNT (No. 
2) Pty Ltd | |\n| Santos Globe Pty Ltd (formerly Globex Far East Pty Ltd) | WA | Associated Petroleum Pty Ltd | QLD |\n| Santos International Holdings Pty Ltd | ACT | Moonie Oil Pty Ltd | QLD |\n| Controlled entities of Santos International Holdings Pty Ltd | | Petromin Pty Ltd | QLD |\n| Barracuda Limited | PNG | Santos (299) Pty Ltd | QLD |\n| Lavana Limited | PNG | Santos Exploration Pty Ltd | VIC |\n| Novus UK (Kakap 2) Limited2 | UK | Santos Gnuco Pty Ltd | QLD |\n| Peko Offshore Ltd | BER | Transoil Pty Ltd | QLD |\n| Sanro Insurance Pte Ltd | SING | Santos Resources Pty Ltd | QLD |\n| Santos Americas and Europe Corporation | USA | Santos Timor Sea Pipeline Pty Ltd | NSW |\n| Controlled entity of Santos Americas and Europe Corporation | | Sesap Pty Ltd2 | VIC |\n| Santos USA Corp | USA | Vamgas Pty Ltd | VIC |\n| Santos (Bawean) Pty Ltd | SA | | |\n\n1 Beneficial interests in all controlled entities is 100% except for Kipper GS Pty Ltd in which two shares of the total issued capital of 9,246,353 shares are owned by a third party.\n\n2 Company acquired during the year.\n\n3 Company incorporated during the year.", - "page_start": 71, - "page_end": 71, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "is also located in relatively shallow water with infrastructure nearby, creating options for early production.\n\nSAN165 WWW Text 30/3/05 12:06 PM Page 5\n\nAt Santos, we are proud that an Australian company took on that challenge and succeeded, and I congratulate the exploration and drilling teams on a great effort. With the Jeruk discovery behind us, Indonesia is at the forefront of our international exploration efforts. 
With eight wells planned in the region for 2005, Santos is currently the most active explorer in Indonesia.\n\n## **A STRONG FINANCIAL PERFORMANCE**\n\nIt was pleasing that Santos was able to conclude 2004 on a higher note than it started.\n\nWe achieved record annual revenue thanks to higher oil and gas prices combined with the return of full production at Moomba to produce a 21.5% jump in second half sales: the best result for any six-month period in Santos' history.\n\nThe average realised price for crude oil was up nearly 19% to A$51.83 per barrel.\n\nThese results have left Santos well positioned to continue its strong investment program which saw capital expenditure peak at $930 million in 2004.\n\nIn 2005 we expect to invest around $850 million of new capital in projects and our strategy is to plan for firm developments based on affordability at relatively low oil prices. If higher prices continue and some projects mature quickly and can be given the green light, our overall capital expenditure may be higher.\n\nProduction is expected to rise in 2005 when, as usual, our financial performance will be subject to oil prices, exchange rates and interest rates. These factors have a significant effect on our bottom line. A US$1 per barrel change in the oil price equates to a A$16 million change in net profit after tax in 2005.\n\nA one US cent movement in the Australia–US dollar exchange rate would produce a change in profit after tax of A$8 million, and a 1% change in interest rates equates to a change in net profit after tax of A$9 million.\n\n2004 has also been an important period for shareholders, with a significant improvement in the Santos share price combined with an increase in the dividend.\n\n### **PRODUCTION TO REBOUND**\n\nWhile we expected lower production overall in 2004, our output was obviously curtailed further by the incident at the Moomba plant. 
The good news is that several projects emerged from the development pipeline during the year and made positive contributions to our expanding suite of oil and gas facilities.\n\nProduction is forecast to increase by 15% in 2005, or by 4% after excluding the effect of the Moomba downtime, to about 54 million boe. We expect this positive forward trend to be followed by further production growth of more than 10% in 2006.\n\nThe Bayu-Undan liquids project came on line in April 2004 and, at its increased design throughput of just over one billion cubic feet of gas per day, produced liquids at a rate of 100,000 barrels per day.\n\nBayu-Undan is currently stripping liquids and re-injecting the gas pending tie-in of the pipeline to Darwin in May 2005 for future LNG production. The onshore LNG facilities are more than two-thirds complete. With a gross production of 19 million barrels, 22% above expectations for the year, we were pleased with the performance of Bayu-Undan and look forward to a full year contribution from this exciting project in 2005.\n\nThe Minerva gas field off Victoria's western coast started production in December 2004 and is ramping up to full field production of around 150 TJ per day. Our share in this project is 10%, and is significant because it represents our first foray into marketing gas directly to customers or into the Victorian spot market through our sales vehicle, Santos Direct, aimed at delivering higher prices.\n\n### **RECORD EXPLORATION EFFORT AHEAD**\n\nExploration is a great way to increase shareholder value so I am pleased to be able to report that in 2004, Santos drilled 16 wildcat wells resulting in seven hydrocarbon discoveries.\n\nGrowing our oil and gas reserves for future production is the goal of our exploration efforts. 
On a rolling three-year average we have replaced the hydrocarbons that Santos has produced at a rate of 130% of Proven (1P) reserves, at an average replacement cost of around US$7 per boe.\n\nSantos has an exciting exploration program for 2005: one that I believe holds the highest resource potential of any program in the Company's 50-year history.\n\nWe expect to participate in drilling a record 157 wells during 2005, of which 25 are exploration wildcat wells. Consistent with the growing internationalisation of Santos, this includes eight wells in Indonesia and six wells in the Gulf of Suez, Egypt. This program offers an attractive combination of risk and reward and is a new focus to our overseas exploration effort.\n\nIn the US, two exploration wells are planned, one onshore, and one offshore in the shallow waters of the Gulf of Mexico.\n\nIn Australia, our increasing focus on the potential of offshore areas will see Santos drill three wells off Western Australia in 2005, one off southern Australia and two wells off northern Australia. We will also drill two wells onshore in Queensland and one onshore in Victoria.\n\nThe discovery of oil and gas at Hiu Aman in the Kutei Basin, offshore East Kalimantan, has provided a strong start to our 2005 exploration program and we look forward with anticipation to further work on that significant find. Santos has a 50% interest in the discovery. We believe this region of Indonesia is very promising and Santos expects to drill four wells in the Kutei Basin in 2005.\n\n## **BIGGEST DEVELOPMENT YEAR YET**\n\nI am pleased also to report that 2004 was a record year for development with six projects advancing through the pipeline.\n\nThe start-up of the Mutineer-Exeter oil field is a significant milestone in Santos' development history. 
This project off the", - "page_start": 6, - "page_end": 6, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# Managing Options\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 22\n\n# UNLOCKING THE VALUE OF STRATEGIC ASSETS\n\n**'Our objective is to derive value from undeveloped assets which have been outside of Santos' base business.'**\n\n**BRUCE WOOD** Vice President Strategic Projects Santos' Strategic Projects team focuses on assets that have proven difficult to commercialise or that need to be considered in a regional context rather than on an individual basis.\n\nThe other key activity for this team has been to lead Santos' continuous improvement focus.\n\n#### **UNITED STATES GAS**\n\nThe US gas business was a major focus in 2004 for a number of reasons, not the least of which are the higher gas prices in the US compared with the domestic Australian market, and the ability to rapidly commercialise new discoveries.\n\nAn ongoing development and delineation program was carried out during the year, yielding better than planned production. The exploration initiative also continued to seek higher risk but more material prospects, aimed at enhancing the move into the shallow water area of the Gulf of Mexico. Exploration results in this area during 2005 will shape Santos' future strategy in the US.\n\n#### **TIGHT GAS**\n\nHydrocarbons contained in traps with poor permeability are known as 'tight gas'. Large tight gas resources are known to exist in the Cooper Basin. 
Under current circumstances, this gas cannot be economically developed but, with the combination of improved production techniques and better commercial terms, could prove attractive.\n\nSantos assessed the resources and potential technologies that could be applied to unlock these resources during 2004 and is now working up a range of possible evaluation projects to be undertaken in 2005.\n\n#### **NORTHERN AUSTRALIA GAS**\n\nSantos has a significant existing gas resource base and some promising exploration acreage in the waters offshore Darwin, where it intends to drill a gas exploration well later this year.\n\nThe Company currently operates the Mereenie gas field in the Amadeus Basin in central Australia, which supplies gas to Darwin. Santos' first offshore gas production in northern Australia begins in 2006, sending Bayu-Undan gas to Darwin for conversion to LNG. Santos plans to build upon its growing position in the region to target further development which could ensure long-term gas supplies for the current market, or an expanded Northern Territory domestic market, or for export.\n\n#### **PAPUA NEW GUINEA GAS**\n\nSantos is in active discussions with the PNG Gas Project participants to potentially re-enter the PNG Gas Project. 
Santos has a significant interest in a large part of the liquids-rich Hides gas field which is integral to the development of the Project.\n\n## **2004 CONTINGENT RESOURCES** (TOTAL 1,443 mmboe)\n\n- Northern Australia 709 mmboe\n- Western Australia 71 mmboe\n- Central Australia 240 mmboe\n- Southern Australia 32 mmboe\n- Papua New Guinea 391 mmboe", - "page_start": 23, - "page_end": 23, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# THE WORLD OF SANTOS\n\nSAN165 WWW Text 30/3/05 12:06 PM Page 8", - "page_start": 9, - "page_end": 9, - "source_file": "ASX_STO_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "1002.2525.pdf", - "query": "How have been confirmed nonvanishing neutrino ?", - "target_page": 2, - "target_passage": "The nonvanishing neutrino masses have been confirmed by various neutrino oscillation phenomena and indicate the evidence of new physics beyond the Standard Model.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "#### I. INTRODUCTION\n\nThe nonvanishing neutrino masses have been confirmed by various neutrino oscillation phenomena and indicate the evidence of new physics beyond the Standard Model. The most attractive idea to naturally explain the tiny neutrino masses is the seesaw mechanism [1], in which the right-handed (RH) neutrinos singlet under the SM gauge group are introduced. The minimal gauged U(1)B−L model based on the gauge group SU(3)C ×SU(2)L ×U(1)Y × U(1)B−L [2] is an elegant and simple extension of the SM, in which the RH neutrinos of three generations are necessarily introduced because of the gauge and gravitational anomaly cancellations. In addition, the mass of RH neutrinos arises associated with the U(1)B−L gauge symmetry breaking.\n\nAlthough the scale of the B−L gauge symmetry breaking is basically arbitrary as long as phenomenological constraints are satisfied, one interesting option is to take it to be the TeV scale [3]. 
It has been recently pointed out [4] that when the classical conformal invariance is imposed on the minimal U(1)B−L model, the symmetry breaking scale appears to be the TeV scale naturally. If this is the case, all new particles, the Z ′ gauge boson, the B − L Higgs boson H and the RH neutrinos appear at the TeV scale unless the U(1)B−L gauge coupling is extremely small, and they can be discovered at Large Hadron Collider [5–8]. Then we may be able to understand the relation between the gauge symmetry breaking and the origin of neutrino masses.\n\nAlthough such a TeV scale model is interesting and appealing, one might think that the absence of dark matter (DM) candidate is a shortcoming of this model. A sterile RH neutrino with mass of the order of MeV is one possibility [9]. In this paper, we propose a very simple idea to introduce the DM candidate in the minimal gauged U(1)B−L model. We introduce the Z2 parity into the model and impose one of three RH neutrinos to be odd, while the others even. In this way, the Z2-odd RH neutrino becomes stable and the DM candidate. Note that two RH neutrinos are enough to reconcile with the observed neutrino oscillation data, with a prediction of one massless light neutrino. Therefore, without introducing any additional new dynamical degrees of freedom, the DM particle arises in the minimal gauged U(1)B−L model.\n\nThe paper is organized as follows. In the next section, we briefly describe our model. 
In section III, we estimate the thermal relic density of the RH neutrino and identify the model", - "page_start": 1, - "page_end": 1, - "source_file": "1002.2525.pdf" - }, - { - "text": "#### Higgs portal dark matter in the minimal gauged U(1) B − L model\n\nNobuchika Okada ∗\n\nDepartment of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA\n\n> Osamu Seto †\n\nDepartment of Architecture and Building Engineering, Hokkai-Gakuen University, Sapporo 062-8605, Japan\n\n# Abstract\n\nWe propose a scenario of the right-handed neutrino dark matter in the context of the minimal gauged U(1) B − L model by introducing an additional parity which ensures the stability of dark matter particle. The annihilation of this right-handed neutrino takes place dominantly through the s-channel Higgs boson exchange, so that this model can be called Higgs portal dark matter model. We show that the thermal relic abundance of the right-handed neutrino dark matter with help of Higgs resonance can match the observed dark matter abundance. In addition we estimate the cross section with nucleon and show that the next generation direct dark matter search experiments can explore this model.\n\nPACS numbers:\n\nElectronic address: okadan@ua.edu\n\nElectronic address: seto@phyics.umn.edu", - "page_start": 0, - "page_end": 0, - "source_file": "1002.2525.pdf" - }, - { - "text": "From Eq. (19), one can see that σ (p) SI ∝ (sin 2θ/v′ ) 2 for a given DM mass mN . Fig. 3 shows the spin-independent cross section of RH neutrino with a proton. The resultant cross section is found to be far below the current limits reported by XENON10 [24] and CDMSII [25]: σSI . 4 × 10−8 − 2 × 10−7 pb, for a DM mass of 100 GeV-1 TeV. Future experiments such as XENON1T [26] can reach the cross section predicted in our model.\n\nFIG. 3: The spin independent scattering cross section with a proton. All parameters are same as those used in the previous section. 
The upper and lower lines correspond to sin θ = 0.7 and 0.3, respectively.\n\n#### IV. SUMMARY\n\nWe have proposed a scenario of the RH neutrino dark matter in the context of the minimal gauged U(1)B−L model. We have introduced a discrete Z2 parity in the model, so that one RH neutrino assigned as Z2-odd can be stable and, hence, the DM candidate, while the other two RH neutrinos account for neutrino masses and mixings through the seesaw mechanism. No additional degrees of freedom are necessary to be added. We have evaluated the relic density of the dark matter particle. The dominant annihilation modes are via the Higgs boson exchange processes in the s-channel and thus, our model can be called Higgs portal DM model. It has been found that the relic density consistent with the current observation", - "page_start": 7, - "page_end": 7, - "source_file": "1002.2525.pdf" - }, - { - "text": "parameters to be consistent with the current observations. Next we calculate the scattering cross section between the DM particle and a proton and discuss the implication for the direct DM search experiments.\n\n#### A. Thermal relic density\n\nThe DM RH neutrino interacts with the SM particles through couplings with B − L gauge and B − L Higgs bosons. Note that neutrino Dirac Yukawa interactions are absent because of the Z2 parity. The most of annihilation of the RH neutrinos occurs via Z ′ , H and h exchange processes in the s-channel. In practice, the dominant contributions come from the Higgs (h and H) exchange diagrams, because the Z ′ exchange processes are suppressed by the inverse square of the B −L Higgs VEV v ′ & 3 TeV. Thus, we obtain Higgs portal DM of RH neutrino effectively. The relevant annihilation modes are the annihilation into f ¯f, W+W−, ZZ, and h(H)h(H). Since RH neutrino DM couples to only B − L Higgs Ψ while a SM particle does to SM Higgs Φ, the DM annihilation occurs only through the mixing between these two Higgs bosons. 
Although it is not so severe, the precision electroweak measurements [12] as well as the unitarity bound [13] give constraints on the mixing angle and mass spectrum of the Higgs bosons.\n\nThe thermal relic abundance of DM\n\n$$\\Omega_{N}h^{2}=1.1\\times10^{9}\\frac{m_{N}/T_{d}}{\\sqrt{g_{*}}M_{P}\\langle\\sigma v\\rangle}\\mathrm{GeV}^{-1},\\tag{14}$$\n\nwith the Planck mass MP , the thermal averaged product of the annihilation cross section and the relative velocity hσvi, the total number of relativistic degrees of freedom in the thermal bath g∗, and the decoupling temperature Td, is evaluated by solving the Boltzmann equation for the number density of RH neutrino nN ;\n\n$$\\frac{dn_{N}}{dt}+3Hn_{N}=-\\langle\\sigma v\\rangle(n_{N}^{2}-n_{\\rm EQ}^{2}),\\tag{15}$$\n\nand the Friedmann equation\n\n$$H^{2}\\equiv\\left(\\frac{\\dot{a}}{a}\\right)^{2}=\\frac{8\\pi}{3M_{P}^{2}}\\rho,\\tag{16}$$\n\nwith nEQ and a(t) being the equilibrium number density and the scale factor, under the radiation dominated Universe with the energy density ρ = ρrad [14].", - "page_start": 4, - "page_end": 4, - "source_file": "1002.2525.pdf" - }, - { - "text": "parameter to be consistent with the current observations. We also calculate the scattering cross section between the DM particle and nucleon and discuss the implication for the direct DM search experiments. We summarize our results in the section IV. Our notations and the formulas used in our analysis are listed in Appendix.\n\n# II. THE MINIMAL GAUGED U(1)B−L MODEL WITH Z2 PARITY\n\nThe model is based on the gauge group SU(3)C ×SU(2)L×U(1)Y ×U(1)B−L. Additional fields besides the standard model fields are a gauge field Z ′ µ of the U(1)B−L, a SM singlet B − L Higgs boson Ψ with two U(1)B−L charge, and three RH neutrinos Ni which are necessary for the gauge and gravitational anomaly cancellations. 
In describing the RH neutrinos, we use the four component representation of RH neutrino constructed from the Weyl spinor νRi ,\n\n$$N_{i}\\equiv\\left(\\begin{array}{c}\\nu_{R_{i}}\\\\ \\epsilon\\,\\nu_{R_{i}}^{*}\\end{array}\\right)\\,,\\tag{1}$$\n\nFor the two RH neutrinos, N1 and N2, we assign Z2 parity even, while odd for N3, so that the RH neutrino N3 is stable and, hence, the DM candidate.\n\nDue to the additional gauge symmetry U(1)B−L, the covariant derivative for each fields is given by\n\n$$D_{\\mu}=D_{\\mu}^{(S M)}-i q_{B-L}g_{B-L}Z_{\\mu}^{\\prime},\\eqno(2)$$\n\nwhere D (SM) µ is the covariant derivative in the SM, and qB−L is the charge of each fields under the U(1)B−L with its gauge coupling gB−L.\n\nYukawa interactions relevant for the neutrino masses are given by\n\n$${\\cal L}_{int}=\\sum_{\\alpha=1}^{3}\\sum_{i=1}^{2}y_{\\alpha i}\\bar{L}_{\\alpha}\\hat{\\Phi}N_{i}-\\frac{1}{2}\\sum_{i=1}^{3}\\lambda_{R_{i}}\\bar{N}_{i}\\Psi P_{R}N_{i}+{\\rm h.c.},\\tag{3}$$\n\nwhere Φ =˜ −iτ2Φ ∗ for Φ being the SM Higgs doublet, and without loss of generality we have worked out in the basis where the second term in the right-hand-side is in flavor diagonal for RH neutrinos. Because of the Z2 parity, the DM candidate N3 has no Yukawa couplings with the left-handed lepton doublets.\n\nThe general Higgs potential for the SU(2)L doublet Φ and a singlet B − L Higgs Ψ is generally given by\n\n$$V(\\Phi,\\Psi)=m_{1}^{2}|\\Phi|^{2}+m_{2}^{2}|\\Psi|^{2}+\\lambda_{1}|\\Phi|^{4}+\\lambda_{2}|\\Psi|^{4}+\\lambda_{3}|\\Phi|^{2}|\\Psi|^{2}.\\tag{4}$$", - "page_start": 2, - "page_end": 2, - "source_file": "1002.2525.pdf" - }, - { - "text": "Fig. 1 shows the relic density ΩN h 2 as a function of the DM mass mN for a set of parameters: (v ′ , Mh, MH, MZ′,sin θ) = (4000 GeV, 120 GeV, 200 GeV, 1000 GeV, 0.7), for example. Willkinson Microwave Anisotropy Probe measured the value of DM abundance as ΩDMh 2 ≃ 0.1 [15]. 
The figure shows that a desired DM relic abundance can be obtained only near Higgs resonances, mN ≈ Mh/2 or MH /2.\n\nFig. 2 shows the relic density ΩN h 2 as a function of the DM mass mN for a smaller Higgs mixing sin θ = 0.3 (others are the same as in Fig. 1). Compared with Fig. 1, for mN . MW where the DM particles dominantly annihilate into f ¯f, the relic density further increases because of the small mixing angle. When the DM is heavier, the annihilation mode into Higgs boson pairs is opened and the relic density slightly decreases, but the reduction is not enough to reach ΩN h 2 ≃ 0.1.\n\nFIG. 1: The thermal relic density of RH neutrino DM as a function of its mass for a parameter set: (v ′ , Mh, MH, MZ′,sin θ) = (3000 GeV, 120 GeV, 200 GeV, 1000 GeV, 0.7).\n\nOur model is quite analogous to the so-called gauge singlet scalar dark matter [16–18]. Some recent studies can be found in Refs. [19, 20]. In the gauge singlet scalar DM model, the thermal abundance is mainly controlled by the interactions between the SM Higgs boson and the DM particle. In our model, B − L Higgs VEV v ′ can play the same role for mN < MW , namely a larger v ′ corresponds to weaker coupling between DM and Higgs for a fixed DM mass. On the other hand, for mN > MW the difference appears. Even if the annihilation", - "page_start": 5, - "page_end": 5, - "source_file": "1002.2525.pdf" - }, - { - "text": "can be achieved only when the annihilation processes are enhanced by Higgs resonances. Therefore, the mass of the RH neutrino DM should be around a half of Higgs boson masses. We have also calculated the elastic scattering cross section between the DM particle and a proton and found it within the reach of future experiments for the direct DM search.\n\n#### Appendix A: The Higgs sector\n\nThe Higgs potential (4) contains five parameters: m2 1 , m2 2 , λ1, λ2 and λ3. These parameters can be rewritten in terms of two Higgs VEVs, two physical Higgs masses and the mixing angle between them. 
The stationary conditions are\n\n$$m_{1}^{2}+\\lambda_{1}v^{2}+\\frac{1}{2}\\lambda_{3}v^{\\prime2}=0,\\tag{10}$$\n\n$$m_{2}^{2}+\\lambda_{2}v^{2}+\\frac{1}{2}\\lambda_{3}v^{\\prime2}=0.\\tag{10}$$\n\nThe physical Higgs masses are given by Eqs. (8) and (9) with the mixing angle that θ satisfies\n\n$$\\tan2\\theta=-\\frac{\\lambda_{3}vv^{\\prime}}{(\\lambda_{1}v^{2}-\\lambda_{2}v^{\\prime2})}.\\tag{10}$$\n\nHiggs self interaction terms are expressed as\n\n$${\\cal L}_{int}=\\lambda_{1}v\\phi^{3}+\\lambda_{2}v^{\\prime}\\psi^{3}+\\frac{1}{2}\\lambda_{3}(v\\phi\\psi^{2}+v^{\\prime}\\psi\\phi^{2})+\\frac{1}{4}(\\lambda_{1}\\phi^{4}+\\lambda_{2}\\psi^{4}+\\lambda_{3}\\phi^{2}\\psi^{2}),$$\n (A4)\n\nin terms of φ and ψ. With Eq. (7), these are rewritten in terms of h and H with θ as\n\nLint = λ1v cos3 θ − λ2v ′ sin3 θ + 1 2 λ3(v cos θ sin2 θ − v ′ sin θ cos2 θ) hhh + 3λ1v cos2 θ sin θ + 3λ2v ′ sin2 θ cos θ + 1 2 λ3(v(sin3 θ − 2 cos2 θ sin θ) +v ′ (cos3 θ − 2 sin2 θ cos θ)) hhH + 3λ1v cos θ sin2 θ − 3λ2v ′ sin θ cos2 θ + 1 2 λ3(v(cos3 θ − 2 sin2 θ cos θ) +v ′ (− sin3 θ + 2 sin θ cos2 θ)) hHH + λ1v sin3 θ + λ2v ′ cos3 θ + 1 2 λ3(v sin θ cos2 θ + v ′ sin2 θ cos θ) HHH +four point interactions. (A5)\n\nWe can read off a Higgs three point vertex from Eq. (A5).", - "page_start": 8, - "page_end": 8, - "source_file": "1002.2525.pdf" - }, - { - "text": "In the expression of annihilation cross section, we used the following notations :\n\n$$\\partial\\Phi=\\frac{1}{\\sqrt{2}}\\cos\\theta,$$\n \n$$\\partial\\Phi=\\frac{1}{\\sqrt{2}}\\sin\\theta,$$\n \n$$\\partial\\Psi=-\\frac{1}{\\sqrt{2}}\\sin\\theta,$$\n \n$$\\partial\\Psi=\\frac{1}{\\sqrt{2}}\\cos\\theta.$$\n (A6)\n\n#### Appendix B: Amplitude\n\nWe give explicit formulas of the invariant amplitude squared for the pair annihilation processes of the RH neutrinos.\n\n#### 1. 
Annihilation into charged fermions\n\n|M|2 = 32 g 2 B−L qf qN s − M2 Z′ + iMZ′ΓZ′ 2 (s − 4m2 N ) 3 8 s − 1 2 s 2 − m2 f + 1 2 s 4 − m2 f cos2 θ +16λ 2 N yf ∂Φ ∂h i s − M2 h + iMhΓh ∂Ψ ∂h + ∂Φ ∂H i s − M2 H + iMH ΓH ∂Ψ ∂H \f 2 (s − 4m2 N ) s 4 − m2 f . (B1)\n\n#### 2. Annihilation into neutrinos\n\n- a. Annihilation into νa, νa (light active-like neutrinos)\n\n$$\\begin{array}{l}{{|\\mathcal{M}|^{2}=}}\\\\ {{32\\left|\\frac{g_{B-L}^{2}q q_{N}}{s-M_{Z^{\\prime}}^{2}+i M_{Z^{\\prime}}\\Gamma_{Z^{\\prime}}}\\right|^{2}\\left(s-4m_{N}^{2}\\right)\\left(\\frac{3}{8}s-\\frac{1}{2}\\left(\\frac{s}{2}+m_{\\nu_{0}}^{2}\\right)+\\frac{1}{2}\\left(\\frac{s}{4}+m_{\\nu_{0}}^{2}\\right)\\cos^{2}\\theta\\right)\\mathrm{(B2)}}}\\end{array}$$", - "page_start": 9, - "page_end": 9, - "source_file": "1002.2525.pdf" - }, - { - "text": "FIG. 4: Top - a conductivity plot for the BCSI case in the presence of a lattice. The parameters are ∆ = 30 meV , Γ = 3.5 meV . Bottom – the behavior of Kubo sums. Note that (a) the spectral weight in the NS is always greater in the SCS, (b) the spectral weight decreases with Γ, and (c) the difference between NS and SCS decreases as Γ increases.\n\nlittle variation of ∆W(ωc) at above 0.1 − 0.3eV what implies that for larger ωc, ∆W(ωc) ≈ ∆WK >> ∆f(ωc).\n\nTo make this more quantitative, we compare in Fig. 6 ∆W(ωc) obtained for a constant DOS, when ∆W(ωc) = ∆f(ωc), and for the actual lattice dispersion, when ∆W(ωc) = ∆WK + ∆f(ωc). In the clean limit there is obviously little cutoff dependence beyond 0.1eV , i.e., ∆f(ωc) is truly small, and the difference between the two cases is just ∆WK. In the dirty limit, the situation is similar, but there is obviously more variation with ωc, and ∆f(ωc) becomes truly small only above 0.3eV . Note also that the position of the dip in ∆W(ωc) in the clean limit is at a larger ωc in the presence of the lattice than in a continuum.\n\n#### B. 
The Einstein boson model\n\nWe next consider the case of electrons interacting with a single boson mode which by itself is not affected by superconductivity. The primary candidate for such mode is an optical phonon. The imaginary part of the NS self energy has been discussed numerous times in the literature. We make one simplifying assumption – approximate the DOS by a constant in calculating fermionic self-energy. We will, however, keep the full lattice dispersion in the calculations of the optical integral. The advantage of this\n\nFIG. 5: The evolution of optical integral in NS(top) and SCS(bottom) for BCSI case. Plots are made for clean limit (solid lines, Γ = 3.5 meV ) and dirty limit (dashed lines, Γ = 150 meV ) for ∆ = 30 meV . Observe that (a) W(0) = 0 in the NS, but has a non-zero value in the SCS because of the δ-function (this value decreases in the dirty limit), and (b) the flat region in the SCS is due to the fact that σ ′ (ω) = 0 for Ω < 2∆. Also note that ∼ 90 − 95% of the spectral weight is recovered up to 1eV\n\napproximation is that the self-energy can be computed analytically. The full self-energy obtained with the lattice dispersion is more involved and can only be obtained numerically, but its structure is quite similar to the one obtained with a constant DOS.\n\nThe self-energy for a constant DOS is given by\n\n$$\\Sigma(i\\omega)=-\\frac{i}{2\\pi}\\lambda_{n}\\int d\\epsilon_{k}d(i\\Omega)\\chi(i\\Omega)G(\\epsilon_{k},i\\omega+i\\Omega)\\tag{13}$$\n\nwhere\n\n$$\\chi(i\\Omega)=\\frac{\\omega_{0}^{2}}{\\omega_{0}^{2}-(i\\Omega)^{2}}\\tag{14}$$\n\nand λn is a dimensionless electron-boson coupling. 
Integrating and transforming to real frequencies, we obtain\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=-\\frac{\\pi}{2}\\,\\lambda_{n}\\omega_{o}\\,\\Theta(|\\omega|-\\omega_{o})$$\n \n \n\n$$\\Sigma^{\\prime}(\\omega)=-\\frac{1}{2}\\,\\lambda_{n}\\omega_{o}\\,log\\left|\\frac{\\omega+\\omega_{o}}{\\omega-\\omega_{o}}\\right|\\tag{15}$$\n\nIn the SCS, we obtain for ω < 0\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=-\\frac{\\pi}{2}\\,\\lambda_{n}\\omega_{o}\\,R e\\left(\\frac{\\omega+\\omega_{o}}{\\sqrt{(\\omega+\\omega_{o})^{2}-\\Delta^{2}}}\\right)$$", - "page_start": 5, - "page_end": 5, - "source_file": "1001.0764.pdf" - }, - { - "text": "- Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping. *arXiv:2002.06305 [cs]*.\n- Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2020. When Bert Forgets How To POS: Amnesic Probing of Linguistic Properties and MLM Predictions. *arXiv:2006.00995 [cs]*.\n- Kawin Ethayarajh. 2019. How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pages 55–65, Hong Kong, China. Association for Computational Linguistics.\n- Allyson Ettinger. 2019. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. *arXiv:1907.13528 [cs]*.\n- Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing Transformer Depth on Demand with Structured Dropout. In *International Conference on Learning Representations*.\n- Maxwell Forbes, Ari Holtzman, and Yejin Choi. 2019. Do Neural Language Representations Learn Physical Commonsense? 
In *Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci 2019)*, page 7.\n- Jonathan Frankle and Michael Carbin. 2019. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. In *International Conference on Learning Representations*.\n- Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Deming Chen, Marianne Winslett, Hassan Sajjad, and Preslav Nakov. 2020. Compressing large-scale transformerbased models: A case study on BERT. *arXiv preprint arXiv:2002.11985*.\n- Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection. In *AAAI*.\n- Michael Glass, Alfio Gliozzo, Rishav Chakravarti, Anthony Ferritto, Lin Pan, G P Shrivatsa Bhargav, Dinesh Garg, and Avi Sil. 2020. Span Selection Pre-training for Question Answering. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 2773–2782, Online. Association for Computational Linguistics.\n- Goran Glavaš and Ivan Vulic. 2020. ´ Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation. *arXiv:2008.06788 [cs]*.\n- Adele Goldberg. 2006. *Constructions at Work: The Nature of Generalization in Language*. Oxford University Press, USA.\n- Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. *arXiv preprint arXiv:1901.05287*.\n- Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei Wang, and Tieyan Liu. 2019. Efficient training of BERT by progressively stacking. In *International Conference on Machine Learning*, pages 2337–2346.\n- Mitchell A Gordon, Kevin Duh, and Nicholas Andrews. 2020. Compressing BERT: Studying the effects of weight pruning on transfer learning. *arXiv preprint arXiv:2002.08307*.\n- Saurabh Goyal, Anamitra Roy Choudhary, Venkatesan Chakaravarthy, Saurabh ManishRaje, Yogish Sabharwal, and Ashish Verma. 2020. Powerbert: Accelerating BERT inference for classification tasks. 
*arXiv preprint arXiv:2001.08950*.\n- Fu-Ming Guo, Sijia Liu, Finlay S. Mungall, Xue Lin, and Yanzhi Wang. 2019. Reweighted Proximal Pruning for Large-Scale Language Representation. *arXiv:1909.12486 [cs, stat]*.\n- Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-Augmented Language Model Pre-Training. *arXiv:2002.08909 [cs]*.\n- Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019. Visualizing and Understanding the Effectiveness of BERT. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pages 4143–4152, Hong", - "page_start": 14, - "page_end": 14, - "source_file": "arxiv2_taclccby4_license.pdf" - } - ] - }, - { - "references": { - "source_file": "1002.2525.pdf", - "query": "What are the dominant contributions in thermal relic density ?", - "target_page": 5, - "target_passage": "In practice, the dominant contributions come from the Higgs (h and H) exchange diagrams.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "Fig. 1 shows the relic density ΩN h 2 as a function of the DM mass mN for a set of parameters: (v ′ , Mh, MH, MZ′,sin θ) = (4000 GeV, 120 GeV, 200 GeV, 1000 GeV, 0.7), for example. Willkinson Microwave Anisotropy Probe measured the value of DM abundance as ΩDMh 2 ≃ 0.1 [15]. The figure shows that a desired DM relic abundance can be obtained for only near Higgs resonances, mN ≈ Mh/2 or MH /2.\n\nFig. 2 shows the relic density ΩN h 2 as a function of the DM mass mN for a smaller Higgs mixing sin θ = 0.3 (others are the same as in Fig. 1). Compared with Fig. 1, for mN . MW where the DM particles dominantly annihilate into f ¯f, the relic density further increases because of the small mixing angle. 
When the DM is heavier, the annihilation mode into Higgs boson pairs is opened and the relic density slightly decreases, but the reduction is not enough to reach ΩN h 2 ≃ 0.1.\n\nFIG. 1: The thermal relic density of RH neutrino DM as a function of its mass for a parameter set: (v ′ , Mh, MH, MZ′,sin θ) = (3000 GeV, 120 GeV, 200 GeV, 1000 GeV, 0.7).\n\nOur model is quite analogous to the so-called gauge singlet scalar dark matter [16–18]. Some recent studies can be found in Refs. [19, 20]. In the gauge singlet scalar DM model, the thermal abundance is mainly controlled by the interactions between the SM Higgs boson and the DM particle. In our model, B − L Higgs VEV v ′ can play the same role for mN < MW , namely a larger v ′ corresponds to weaker coupling between DM and Higgs for a fixed DM mass. On the other hand, for mN > MW the difference appears. Even if the annihilation", - "page_start": 5, - "page_end": 5, - "source_file": "1002.2525.pdf" - }, - { - "text": "From Eq. (19), one can see that σ (p) SI ∝ (sin 2θ/v′ ) 2 for a given DM mass mN . Fig. 3 shows the spin-independent cross section of RH neutrino with a proton. The resultant cross section is found to be far below the current limits reported by XENON10 [24] and CDMSII [25]: σSI . 4 × 10−8 − 2 × 10−7 pb, for a DM mass of 100 GeV-1 TeV. Future experiments such as XENON1T [26] can reach the cross section predicted in our model.\n\nFIG. 3: The spin independent scattering cross section with a proton. All parameters are the same as those used in the previous section. The upper and lower lines correspond to sin θ = 0.7 and 0.3, respectively.\n\n#### IV. SUMMARY\n\nWe have proposed a scenario of the RH neutrino dark matter in the context of the minimal gauged U(1)B−L model. We have introduced a discrete Z2 parity in the model, so that one RH neutrino assigned as Z2-odd can be stable and, hence, the DM candidate, while the other two RH neutrinos account for neutrino masses and mixings through the seesaw mechanism. 
No additional degrees of freedom are necessary to be added. We have evaluated the relic density of the dark matter particle. The dominant annihilation modes are via the Higgs boson exchange processes in the s-channel and thus, our model can be called Higgs portal DM model. It has been found that the relic density consistent with the current observation", - "page_start": 7, - "page_end": 7, - "source_file": "1002.2525.pdf" - }, - { - "text": "parameters to be consistent with the current observations. Next we calculate the scattering cross section between the DM particle and a proton and discuss the implication for the direct DM search experiments.\n\n#### A. Thermal relic density\n\nThe DM RH neutrino interacts with the SM particles through couplings with B − L gauge and B − L Higgs bosons. Note that neutrino Dirac Yukawa interactions are absent because of the Z2 parity. The most of annihilation of the RH neutrinos occurs via Z ′ , H and h exchange processes in the s-channel. In practice, the dominant contributions come from the Higgs (h and H) exchange diagrams, because the Z ′ exchange processes are suppressed by the inverse square of the B −L Higgs VEV v ′ & 3 TeV. Thus, we obtain Higgs portal DM of RH neutrino effectively. The relevant annihilation modes are the annihilation into f ¯f, W+W−, ZZ, and h(H)h(H). Since RH neutrino DM couples to only B − L Higgs Ψ while a SM particle does to SM Higgs Φ, the DM annihilation occurs only through the mixing between these two Higgs bosons. 
Although it is not so severe, the precision electroweak measurements [12] as well as the unitarity bound [13] give constraints on the mixing angle and mass spectrum of the Higgs bosons.\n\nThe thermal relic abundance of DM\n\n$$\\Omega_{N}h^{2}=1.1\\times10^{9}\\frac{m_{N}/T_{d}}{\\sqrt{g_{*}}M_{P}\\langle\\sigma v\\rangle}\\mathrm{GeV}^{-1},\\tag{14}$$\n\nwith the Planck mass MP , the thermal averaged product of the annihilation cross section and the relative velocity hσvi, the total number of relativistic degrees of freedom in the thermal bath g∗, and the decoupling temperature Td, is evaluated by solving the Boltzmann equation for the number density of RH neutrino nN ;\n\n$$\\frac{dn_{N}}{dt}+3Hn_{N}=-\\langle\\sigma v\\rangle(n_{N}^{2}-n_{\\rm EQ}^{2}),\\tag{15}$$\n\nand the Friedmann equation\n\n$$H^{2}\\equiv\\left(\\frac{\\dot{a}}{a}\\right)^{2}=\\frac{8\\pi}{3M_{P}^{2}}\\rho,\\tag{16}$$\n\nwith nEQ and a(t) being the equilibrium number density and the scale factor, under the radiation dominated Universe with the energy density ρ = ρrad [14].", - "page_start": 4, - "page_end": 4, - "source_file": "1002.2525.pdf" - }, - { - "text": "FIG. 2: Typical KMC results for the final dried-in nanoparticle structures resulting from the evaporative dewetting processes of nanoparticle solutions (nanofluids) in the case of (a) a spinodal-like process at µ = −2.55, (b) nucleation and growth of holes at µ = −2.3, (c) unstable fronts at µ = −2.3 and low mobility M = 5, and (d) unstable fronts at µ = −2.3 and medium mobility M = 10. The starting configuration in (a) and (b) is a homogeneous liquid film with uniformly distributed particles whereas in (c) and (d) a hole at the center is nucleated 'by hand'. The remaining parameters are (a,b) M = 50, nl = 2.0, nn = 1.5, ρ av n = 0.2, kT = 0.3, MC steps= 500, domain size 1200 × 1200; (c,d) εnn = 2.0, nl = 1.5, ρ av n = 0.2, kT = 0.2, MC steps= 3000, domain size 1200 × 1200. 
Lattice sites occupied by particles are coloured black, and the empty sites are coloured white.", - "page_start": 10, - "page_end": 10, - "source_file": "1001.2669.pdf" - }, - { - "text": "#### Higgs portal dark matter in the minimal gauged U(1) B − L model\n\nNobuchika Okada ∗\n\nDepartment of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA\n\n> Osamu Seto †\n\nDepartment of Architecture and Building Engineering, Hokkai-Gakuen University, Sapporo 062-8605, Japan\n\n# Abstract\n\nWe propose a scenario of the right-handed neutrino dark matter in the context of the minimal gauged U(1) B − L model by introducing an additional parity which ensures the stability of dark matter particle. The annihilation of this right-handed neutrino takes place dominantly through the s-channel Higgs boson exchange, so that this model can be called Higgs portal dark matter model. We show that the thermal relic abundance of the right-handed neutrino dark matter with help of Higgs resonance can match the observed dark matter abundance. In addition we estimate the cross section with nucleon and show that the next generation direct dark matter search experiments can explore this model.\n\nPACS numbers:\n\nElectronic address: okadan@ua.edu\n\nElectronic address: seto@phyics.umn.edu", - "page_start": 0, - "page_end": 0, - "source_file": "1002.2525.pdf" - }, - { - "text": "on the model (see above). The purely two-dimensional character of the KMC was extended to a 'pseudo three-dimensional' one by making the effective chemical potential dependent on the mean liquid coverage [38]. As the latter is related to a mean film thickness, this corresponds to the introduction of a 'global' thickness-dependent disjoining pressure into the evaporation term without an explicit consideration of a film thickness. The amended model can reproduce bimodal structures that are beyond the scope of the purely two-dimensional model [38, 39]. 
Fully three-dimensional models are also discussed in the literature [76, 77].\n\n### B. Dynamical Density Functional theory\n\nThe limitations of the kinetic Monte Carlo model introduced in the previous Section are related to its character as a two-dimensional lattice gas with only three states: gas, liquid or particle. This implies that (i) no liquid can be transported to a site on the surface already filled with liquid, i.e., diffusion of the liquid cannot be incorporated in a sensible way and (ii) one is not able to distinguish between the influence of the short- and the long-range parts of the interactions with the substrate, as all such interactions are absorbed into the effective chemical potential.\n\nHowever, using dynamical density functional theory (DDFT) [78–83] one can develop a model for the processes in the ultrathin postcursor film without these limitations, although here we limit ourselves to developing the theory at the level of the KMC and solely discuss how to extend it to incorporate the influence of the liquid diffusion over the surface. Such a DDFT model describes the coupled dynamics of the density fields of the liquid ρl and the nanoparticles ρn. The densities ρl and ρn are defined as the probabilities of finding a given lattice site on the surface to be occupied by a film of liquid or by a nanoparticle, respectively. Note that the probability densities correspond to number densities as we use the lattice spacing σ = 1 as our unit of length.\n\nTo develop the DDFT, one must first derive the underlying free energy functional F[ρl , ρn], and secondly, devise dynamical equations for both density fields that account for the conserved and the non-conserved aspects of their dynamics, i.e., transport and phase change processes, respectively. 
For a system governed by the hamiltonian (3), we may construct a mean-field (Bragg-Williams) approximation for the free energy of the system [78, 84] which contains an entropic contribution and contributions from the interactions between the different species (nanoparticles and liquid). The free energy is a semi-grand free energy, since the liquid is treated grand canonically (it is coupled to a reservoir with chemical potential µ), whereas the nanoparticles are treated in the", - "page_start": 13, - "page_end": 13, - "source_file": "1001.2669.pdf" - }, - { - "text": "FIG. 8: (Colour online) Space-time plots are given for (left) the film thickness h and (right) the nanoparticle layer height hp = hφ. The plot corresponds to the complete evolution resulting in the ring profile of Fig. 6(b). In both panels bright [dark] parts denote high [low] regions. The prominent central dark-bright border in the left panel indicates the change of the position of the contact line in time. Over time, four regimes can be distinguished: (i) fast motion before pinning, (ii) nearly no front motion during self-pinning, (iii) slow motion after depinning, and (iv) final evaporation from the center.\n\nshould also be investigated further in the simple case presented here.\n\n### IV. CONCLUSION\n\nWe have discussed recent work on pattern formation processes in films and drops of evaporating suspensions/solutions of polymers and particles. After reviewing experiments on suspensions of thiol-coated gold nanoparticles in toluene we have focused on the modelling of the transport and phase change processes involved. A theoretical approach to the modelling of the hydrodynamics on the mesoscale has been described as well as more microscopic models for the dynamics in the observed nanoscopic 'postcursor' film. 
In particular, we have introduced (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nThe kinetic Monte Carlo model and the dynamical density functional theory can both be used to investigate and understand the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor' film that remains behind the mesoscopic dewetting front. They are, however, not capable of describing the dynamical processes in a meso", - "page_start": 22, - "page_end": 22, - "source_file": "1001.2669.pdf" - }, - { - "text": "# Abstract\n\nWe review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. In particular, we focus on (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nModels (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed.\n\nFinally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. We employ coupled evolution equations for the film thickness profile and mean particle concentration. 
The model is used to discuss the self-pinning and de-pinning of a contact line related to the 'coffee-stain' effect.\n\nIn the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.\n\nThe paper is published in: *J. Phys.-Cond. Mat.* 21, 264016 (2009), in the Volume \"Nanofluids on solid substrates\" and can be obtained at http://dx.doi.org/10.1088/0953-8984/21/26/264016", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2669.pdf" - }, - { - "text": "dewetted liquid. The front recedes until all liquid is collected in a central drop. Since no liquid evaporates [Qnc = 0 in Eq. (1)], the particle concentration does not change during the process.\n\nThe situation changes when allowing for evaporation (Qnc > 0). Now the front may retract by convection *and/or* evaporation. Evaporation leads to the possibility of a strong increase in the particle concentration at the contact line as evaporation is strongest there. Due to the strong nonlinear dependence of the viscosity on the particle concentration, this may lead to a dramatic decrease of the convective contribution to the front velocity. For moderate evaporation rates, this may result in a (temporary) self-pinning of the front. Within the present basic model, the process can (after complete dry-in) result in three different basic deposition patterns: (i) for very fast evaporation rates, all other processes occur over time scales that are much larger. In particular, the effects of convective redistribution of the liquid are neglectable. As a result one finds that a nearly homogeneous film of nanoparticles of thickness hp = φ0h0 is deposited (see Fig. 6(a)). Convection only results in the small heap of material visible at the left hand side of Fig. 6(a). The decrease in hp on the right side of Fig. 
6(a) arises due to the diffusion of particles to the right of the initial front position; (ii) for very low evaporation rates, the film dynamics is dominated by convective dewetting as this process acts on a much shorter time scale than evaporation. As a result, all the liquid is collected into a drop before evaporation slowly removes the remaining solvent. Under these conditions most of the nanoparticles are deposited in a single heap (see Fig. 6(c)). Depending on the diffusivity, the heap might be highest at the centre or show a depression there; (iii) at intermediate evaporation rates, one may observe the deposition of a nanoparticle ring around a region with a nanoparticle film of much lower height. At the centre deposition might increase again (see Fig. 6(b)).\n\nThe most intriguing feature is the ring formation that has been observed experimentally for suspensions of very different particle sizes ranging from nanometers [32, 36, 46, 47] to hundreds of micrometers. Pinning of the contact line and thermal Marangoni effects are often mentioned as necessary conditions for the ring formation. The contact line pinning is often assumed to result from substrate heterogeneities. Film height and concentration profiles at various instants during the dewetting process are displayed in Fig. 7. The profiles are from before, at and after self-pinning of the contact line. In Fig. 8 we display a space-time plot for the complete process. At first, the front recedes in the same manner as when there is no evaporation, but now driven by convection and evaporation. A small capillary rim forms that collects all the dewetted liquid that does not evaporate. The particle concentration slowly increases at the contact line (Fig. 
7(a) and regime", - "page_start": 20, - "page_end": 20, - "source_file": "1001.2669.pdf" - }, - { - "text": "## Sayabouly Project – Lao PDR\n\nWith the grant of the prospecting and exploration permit in early 2012, exploration activity focused on the definition of an extensive copper (Cu), platinum (Pt), chromium (Cr), nickel (Ni) stream sediment anomaly within the permit area. Surface geochemistry and mapping has defined an extensive multi-element soil anomaly over 16 kilometres in length and 700 metres width with peak values of 829ppm copper (Cu), 1.05% nickel (Ni), 1.54 ppm platinum (Pt), and 0.27% cobalt (Co) and 0.57 ppm palladium (Pd).\n\nThe style of mineralisation is thought to be similar to Cu, Platinum Group Element deposits such as the Great Dyke (Zimbabwe) Deposits. Three broad spaced trenches were completed with another two trenched partially completed up until the commencement of the wet season with results 2.0m @ 1.73 ppm Pt and a broad zone nickel mineralisation including 51 m (853– 904 m) @ 0.96% Ni. In addition to this prospect, several gold occurrences are beginning to take shape and recent high grade rockchip samples\n\n(96.0 g/t Au, 82.7 g/t Au, 53.3 g/t Au, 44.7 g/t Au, 30.0 g/t Au and 18.8 g/t Au) in several adjacent creeks appear to be defining a gold target that will also require drilling at the end of the wet season.", - "page_start": 32, - "page_end": 32, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "1002.2525.pdf", - "query": "What happend to the annihilation and the relic density when the DM is heavier ?", - "target_page": 6, - "target_passage": "When the DM is heavier, the annihilation mode into Higgs boson pairs is opened and the relic density slightly deceases", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Fig. 
1 shows the relic density ΩN h 2 as a function of the DM mass mN for a set of parameters: (v ′ , Mh, MH, MZ′,sin θ) = (4000 GeV, 120 GeV, 200 GeV, 1000 GeV, 0.7), for example. Willkinson Microwave Anisotropy Probe measured the value of DM abundance as ΩDMh 2 ≃ 0.1 [15]. The figure shows that a desired DM relic abundance can be obtained for only near Higgs resonances, mN ≈ Mh/2 or MH /2.\n\nFig. 2 shows the relic density ΩN h 2 as a function of the DM mass mN for a smaller Higgs mixing sin θ = 0.3 (others are the same as in Fig. 1). Compared with Fig. 1, for mN . MW where the DM particles dominantly annihilate into f ¯f, the relic density further increases because of the small mixing angle. When the DM is heavier, the annihilation mode into Higgs boson pairs is opened and the relic density slightly deceases, but the reduction is not enough to reach ΩN h 2 ≃ 0.1.\n\nFIG. 1: The thermal relic density of RH neutrino DM as a function of its mass for a parameter set: (v ′ , Mh, MH, MZ′,sin θ) = (3000 GeV, 120 GeV, 200 GeV, 1000 GeV, 0.7).\n\nOur model is quite analogous to the so-called gauge singlet scalar dark matter [16–18]. Some recent studies can be found in Refs. [19, 20]. In the gauge singlet scalar DM model, the thermal abundance is mainly controlled by the interactions between the SM Higgs boson and the DM particle. In our model, B − L Higgs VEV v ′ can play the same role for mN < MW , namely a larger v ′ corresponds to weaker coupling between DM and Higgs for a fixed DM mass. On the other hand, for mN > MW the difference appears. Even if the annihilation", - "page_start": 5, - "page_end": 5, - "source_file": "1002.2525.pdf" - }, - { - "text": "parameters to be consistent with the current observations. Next we calculate the scattering cross section between the DM particle and a proton and discuss the implication for the direct DM search experiments.\n\n#### A. 
Thermal relic density\n\nThe DM RH neutrino interacts with the SM particles through couplings with B − L gauge and B − L Higgs bosons. Note that neutrino Dirac Yukawa interactions are absent because of the Z2 parity. The most of annihilation of the RH neutrinos occurs via Z ′ , H and h exchange processes in the s-channel. In practice, the dominant contributions come from the Higgs (h and H) exchange diagrams, because the Z ′ exchange processes are suppressed by the inverse square of the B −L Higgs VEV v ′ & 3 TeV. Thus, we obtain Higgs portal DM of RH neutrino effectively. The relevant annihilation modes are the annihilation into f ¯f, W+W−, ZZ, and h(H)h(H). Since RH neutrino DM couples to only B − L Higgs Ψ while a SM particle does to SM Higgs Φ, the DM annihilation occurs only through the mixing between these two Higgs bosons. Although it is not so severe, the precision electroweak measurements [12] as well as the unitarity bound [13] give constraints on the mixing angle and mass spectrum of the Higgs bosons.\n\nThe thermal relic abundance of DM\n\n$$\\Omega_{N}h^{2}=1.1\\times10^{9}\\frac{m_{N}/T_{d}}{\\sqrt{g_{*}}M_{P}\\langle\\sigma v\\rangle}\\mathrm{GeV}^{-1},\\tag{14}$$\n\nwith the Planck mass MP , the thermal averaged product of the annihilation cross section and the relative velocity hσvi, the total number of relativistic degrees of freedom in the thermal bath g∗, and the decoupling temperature Td, is evaluated by solving the Boltzmann equation for the number density of RH neutrino nN ;\n\n$$\\frac{dn_{N}}{dt}+3Hn_{N}=-\\langle\\sigma v\\rangle(n_{N}^{2}-n_{\\rm EQ}^{2}),\\tag{15}$$\n\nand the Friedmann equation\n\n$$H^{2}\\equiv\\left(\\frac{\\dot{a}}{a}\\right)^{2}=\\frac{8\\pi}{3M_{P}^{2}}\\rho,\\tag{16}$$\n\nwith nEQ and a(t) being the equilibrium number density and the scale factor, under the radiation dominated Universe with the energy density ρ = ρrad [14].", - "page_start": 4, - "page_end": 4, - "source_file": "1002.2525.pdf" - }, - { - "text": 
"From Eq. (19), one can see that σ (p) SI ∝ (sin 2θ/v′ ) 2 for a given DM mass mN . Fig. 3 shows the spin-independent cross section of RH neutrino with a proton. The resultant cross section is found to be far below the current limits reported by XENON10 [24] and CDMSII [25]: σSI . 4 × 10−8 − 2 × 10−7 pb, for a DM mass of 100 GeV-1 TeV. Future experiments such as XENON1T [26] can reach the cross section predicted in our model.\n\nFIG. 3: The spin independent scattering cross section with a proton. All parameters are same as those used in the previous section. The upper and lower lines correspond to sin θ = 0.7 and 0.3, respectively.\n\n#### IV. SUMMARY\n\nWe have proposed a scenario of the RH neutrino dark matter in the context of the minimal gauged U(1)B−L model. We have introduced a discrete Z2 parity in the model, so that one RH neutrino assigned as Z2-odd can be stable and, hence, the DM candidate, while the other two RH neutrinos account for neutrino masses and mixings through the seesaw mechanism. No additional degrees of freedom are necessary to be added. We have evaluated the relic density of the dark matter particle. The dominant annihilation modes are via the Higgs boson exchange processes in the s-channel and thus, our model can be called Higgs portal DM model. It has been found that the relic density consistent with the current observation", - "page_start": 7, - "page_end": 7, - "source_file": "1002.2525.pdf" - }, - { - "text": "#### *Turning off data retention protection*\n\nWhen you turn off data retention protection, the following descriptions explain what happens when you use the creation-based object expiration policy and the event-based retention object expiration policy:\n\n- - Creation-based object expiration policy: Content Manager OnDemand issues a **delete object** command through the Tivoli Storage Manager API. Objects are deleted during the next inventory expiration. 
If a Content Manager OnDemand application group is deleted, a **delete filespace** command is issued instead, and the objects are immediately deleted with the file space.\n- - Event-based retention object expiration policy: Content Manager OnDemand issues an **event trigger** command through the Tivoli Storage Manager API. The status of the objects that are affected changes from PENDING to STARTED, and the objects are expired by Tivoli Storage Manager based on their retention parameters. If the retention parameters are set to NOLIMIT, the objects never expire. If a Content Manager OnDemand application group is deleted, a **delete filespace** command is issued instead, and the objects are immediately deleted with the file space.\n\n#### *Turning on data retention protection*\n\nWhen you turn on data retention protection, the following descriptions explain what happens when you use creation-based object expiration policy and event-based retention object expiration policy:\n\n- - Creation-based object expiration policy: Content Manager OnDemand issues no commands to Tivoli Storage Manager. The objects are effectively orphaned by Content Manager OnDemand and are expired by Tivoli Storage Manager based on their retention parameters. If the retention parameters are set to NOLIMIT, the objects never expire.\n- - Event-based retention object expiration policy: Content Manager OnDemand issues an **event trigger** command through the Tivoli Storage Manager API. The event status of the objects that are affected is changed from PENDING to STARTED, and the affected objects are expired by Tivoli Storage Manager based on their retention parameters. If the retention parameters are set to NOLIMIT, the objects never expire.\n\nIf a Content Manager OnDemand application group is deleted, a **delete filespace** command cannot be used with data retention protection; the operation is treated the same as though a delete is indicated. 
The status of all of the affected objects is changed from PENDING to STARTED, and the affected objects are expired by Tivoli Storage Manager based on their retention parameters. This action leaves the file space entries in Tivoli Storage Manager, so you must manually delete these entries when the file space is empty (even with data retention protection on).\n\n#### *Recommendations*\n\nConsider the following preferred practices when you work with data retention protection:\n\n- -Set up the application groups to expire by load.\n- - Define the Tivoli Storage Manager archive copy groups to be event-based, and retain data for 0 days.\n- - Run the Tivoli Storage Manager inventory expiration regularly to ensure that expired data is removed.", - "page_start": 258, - "page_end": 258, - "source_file": "sg246915.pdf" - }, - { - "text": "Table 5-1 shows the action by Tivoli Storage Manager when a Content Manager OnDemand object is deleted, unloaded, or during the deletion of an application group when data protection is turned OFF.\n\n| Tivoli | Content Manager OnDemand action: | Content Manager OnDemand |\n| --- | --- | --- |\n| Storage | Unload | action: Delete application group |\n| Manager | | |\n| RETINIT | | |\n| Creation | The Delete Object command is issued through | The Delete Filespace command is |\n| | the Tivoli Storage Manager API. | issued. |\n| | Objects are deleted during the next Tivoli | |\n| | Storage Manager expiration. | Objects are immediately deleted with |\n| | | the file space. |\n| Event | Content Manager OnDemand issues an event | The Delete Filespace command is |\n| | trigger command through the Tivoli Storage | issued. |\n| | Manager API. | |\n| | The status of the objects that are affected is | Objects are immediately deleted with |\n| | changed from PENDING to STARTED and is | the file space. |\n| | expired by Tivoli Storage Manager based on | |\n| | their retention parameters. 
If the retention | |\n| | parameters are set to NOLIMIT, the objects | |\n| | never expire. | |\n\nTable 5-1 Comparison of expiration methods with data protection OFF\n\nTable 5-2 shows the action by Tivoli Storage Manager when data protection is turned ON.\n\n| Tivoli | Content Manager OnDemand action: | Content Manager OnDemand action: |\n| --- | --- | --- |\n| Storage | Unload | Delete application group |\n| Manager | | |\n| RETINIT | | |\n| Creation | Content Manager OnDemand issues no | Content Manager OnDemand issues no |\n| | commands to Tivoli Storage Manager. | commands to Tivoli Storage Manager. |\n| | The objects are effectively orphaned by | The objects are effectively orphaned by |\n| | Content Manager OnDemand and are | Content Manager OnDemand and are |\n| | expired by Tivoli Storage Manager based | expired by Tivoli Storage Manager based on |\n| | on their retention parameters. If the | their retention parameters. If the retention |\n| | retention parameters are set to NOLIMIT, | parameters are set to NOLIMIT, the objects |\n| | the objects never expire. | never expire. |\n| Event | Content Manager OnDemand issues an | The Delete Filespace command cannot be |\n| | event trigger command through the Tivoli | used with DRP ON, so the operation is |\n| | Storage Manager API. | treated the same as though a delete were |\n| | | indicated and the status of all of the affected |\n| | The status of the objects that are | objects is changed from PENDING to |\n| | affected are changed from PENDING to | STARTED. They are expired by Tivoli |\n| | STARTED and are expired by Tivoli | Storage Manager based on their retention |\n| | Storage Manager based on their | parameters. This action unfortunately leaves |\n| | retention parameters. If the retention | the file space entries in Tivoli Storage |\n| | parameters are set to NOLIMIT, the | Manager. These entries can be manually |\n| | objects never expire. 
| deleted after the file space is empty even |\n| | | with DRP ON. |\n\nTable 5-2 Comparison of expiration methods with data protection ON", - "page_start": 129, - "page_end": 129, - "source_file": "sg246915.pdf" - }, - { - "text": "The Content Manager OnDemand administrator defines and maintains storage sets (Figure 5-5). The load type is the storage set parameter that we examine.\n\n| Add a Storage Set | | | | | എ | ಜ |\n| --- | --- | --- | --- | --- | --- | --- |\n| Name | | | | | | |\n| TSM Storage Set | | | | | | |\n| Description | | | | | | |\n| Storage Set for TSM | | | | | | |\n| Load Type | | | | | | |\n| Fixed > | | | | | | |\n| Storage Nodes | | | | | | |\n| Primary Object Server | Primary Storage Node | | | | | |\n| | Add ... | Update ... | View ... | Delete | | |\n| | OK | | Cancel | Help | | |\n\nFigure 5-5 Storage set definition\n\n# **Load Type parameter**\n\nThe Load Type parameter determines where Content Manager OnDemand stores data. Two values are possible (Figure 5-5):\n\n- - Fixed: Content Manager OnDemand stores data in the primary storage node that has the load data field selected. When Load Type is set to *Fixed*, you must select the load data check box for one primary storage node. Content Manager OnDemand loads data to only one primary storage node regardless of the number of primary nodes that are defined in the storage set.\n- - Local: Content Manager OnDemand stores data in a primary storage node on the server on which the data loading program runs. When the Load Type is *Local*, the load data check box must be selected for a primary storage node on each of the object servers that is identified in the storage set. 
A storage set can contain one or more primary storage nodes that are on one or more object servers.\n\nNext, we examine several parameters on the Add a Primary Node window (Figure 5-6 on page 98).", - "page_start": 120, - "page_end": 120, - "source_file": "sg246915.pdf" - }, - { - "text": "# **Expiration Type**\n\nThe Expiration Type option determines how report data, indexes, and resources are expired. Three expiration types are available:\n\n- - Load: With this expiration type, a single input file (a Load) at a time can be deleted from the application group. The latest date in the input data and the Life of Data and Indexes determine when Content Manager OnDemand deletes the data. Content Manager OnDemand signals to the storage manager that the data might be deleted.\nFigure 5-10 shows the error message that displays when you use Enhanced Retention Management and you do not set the expiration type to Load.\n\n**Note:** Load is the suggested expiration type.\n\nIf any application group uses either the Enhanced Retention Management feature or IBM Enterprise Records, this setting is required. You must also use this type if event-based processing is used within Tivoli Storage Manager.\n\n| Update an Application Group - Customer Information2 on ECMDEMO01 | | | | ಜ |\n| --- | --- | --- | --- | --- |\n| Field Definition | Field Infomation | | Advanced Index Information | |\n| General | Message Logging | Storage Management | Pemissions | |\n| Storage Set Name | | | | |\n| TSM Storage Set | | | | |\n| Cache Data | | | Life of Data and Indexes | |\n| C No | | | | |\n| OnDemand Administrator | | | 23 13 | |\n| | The Expiration Type must have a value of 'Load' when Enhanced Retention Management is used. | | | |\n| | | | OK | |\n| Restore Resources to Cache | | | | |\n| V Search Cache | | | | |\n| | Advanced ... 
| | | |\n| | OK Cancel | Help | | |\n\nFigure 5-10 Expiration type set incorrectly\n\n- - Segment: With this expiration type, a segment of data at a time is deleted from the application group. The segment must be closed and the expiration date of every record in the segment must be reached. Data that is stored in archive storage is deleted by the storage manager based on the archive expiration date. If a small amount of data is loaded into the application group, and the Maximum Rows value is high, the segment might be open for a long period, and the data is not expired for the period.", - "page_start": 125, - "page_end": 125, - "source_file": "sg246915.pdf" - }, - { - "text": "# **10.1 Introduction**\n\nFor this chapter, unless explicitly stated otherwise, the term \"data\" is used to refer to the report data, the extracted documents or segments, and their related indexes and the extracted resources.\n\nA Content Manager OnDemand system logically stores data in *application groups*. An application group is defined by the Content Manager OnDemand administrator. It consists of data that has the same indexing, data storage, and expiration requirements. The application group definition also specifies where the report and document data are stored, how long the data is stored, and how the data expires. The method or methods that can be used to expire the data are a function of the application group parameters that are defined before the data is loaded into Content Manager OnDemand. In a Content Manager OnDemand system, data typically goes through a lifecycle of loading, storing, migration, and an expiration process.\n\n# **10.2 Loading and storing the data**\n\nThe Content Manager OnDemand architecture allows the control and management of the data throughout its lifecycle. The data lifecycle begins with running an efficient load process. 
Each load process invocation ingests report data for a specified application group.\n\nDuring a load process, Content Manager OnDemand stores report (document) data, its resources, and index data, as shown in Figure 10-1.\n\nFigure 10-1 Data and index storage locations\n\nThe Content Manager OnDemand load process identifies, segments, and compresses groups of documents into storage objects that are then stored in the Content Manager OnDemand archive, as illustrated in Figure 10-1. To improve the efficiency of the storage process, Content Manager OnDemand aggregates the stored documents (typically a few kilobytes in size) into storage objects. This aggregation provides efficient, high-volume storage, retrieval, and expiration performance.", - "page_start": 243, - "page_end": 243, - "source_file": "sg246915.pdf" - }, - { - "text": "# **12**\n\n# **Chapter 12. Scalability, reliability, and availability architectures**\n\nIBM Content Manager OnDemand (Content Manager OnDemand) is a lightweight process, that is, the Content Manager OnDemand code itself does not require extensive system resources to perform the functions that are required of it. Content Manager OnDemand installations scale to handle both large quantities of data and many users. The total quantity of data that is stored or retrieved at any time is the main contributor to the resource consumption on the server. This chapter focuses on the scalability, reliability, and availability of Content Manager OnDemand systems.\n\nIn this chapter, we cover the following topics:\n\n- -Scalability, reliability, and availability defined\n- -Scaling a Content Manager OnDemand system\n- -High availability", - "page_start": 306, - "page_end": 306, - "source_file": "sg246915.pdf" - }, - { - "text": "- - Document: With this expiration type, a document at a time is deleted from the application group. Data that is stored in archive storage is deleted by the storage manager based on the archive expiration date. 
Storing documents with an expiration type of Document causes the expiration process to search through every document in the segment to determine whether the expiration date was reached, which results in long processing times.\nWhen the **arsmaint** expiration process is run, data is deleted only from the application group if the upper threshold for the size of the cache storage is reached. By default, the cache threshold is 80%. A lower threshold can be forced by the expiration command parameters. Unless a reason exists to clear cache, leaving data in cache improves retrieval performance.\n\n# **5.2.6 Advanced application group storage management**\n\nBy using the advanced storage management settings (Figure 5-11), you can adjust the size of the load object and determine when report data, indexes, and resources are migrated to archive storage.\n\n| Advanced Storage Management | | 23 |\n| --- | --- | --- |\n| Object Size (K) 10000 | | |\n| Application Group Identifier: WBA | | |\n| AGID: 5185 | | |\n| Migrate Data from Cache | | |\n| C No | | |\n| · When Data is Loaded | | |\n| C Next Cache Migration | | |\n| C After Days in Cache | | |\n| Migration of Indexes | | |\n| C No Migration | | |\n| C Migrate after Days | | |\n| Keep Imported Migrated Indexes | Days | |\n| OK Cancel | Help | |\n\nFigure 5-11 Advanced application group storage management\n\n# **Object Size**\n\nThe Object Size parameter determines the size of a storage object in kilobytes (KB). Content Manager OnDemand, by default, segments and compresses stored data into 10 MB storage objects. The default of 10 MB is the most commonly used object size value.\n\n**Important:** Be careful when you change the value for Object Size. Setting the value too small or too large can adversely affect load performance. 
However, increasing this value might be necessary if you load large files and run out of Object IDs during the loading process.\n\n**Note:** The object size that is defined here must be equal to or larger than the size of the compressed storage objects that are defined in any application that is assigned to the application group.", - "page_start": 126, - "page_end": 126, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv1.pdf", - "query": "What is the aim of LLM routers ?", - "target_page": 1, - "target_passage": "LLM routers aim to balance quality and cost of generation by classifying queries and routing them to a cheaper or more expensive LLM depending on their complexity. ", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "In contrast to routers motivated by controlling costs, several LLM router designs focus solely on improving quality of responses [31, 45, 57, 58].\n\nThe LLM routers described thus far do not modify the queries or individual LLM responses. Other types of control planes do. Ensemble approaches such as mixture-of-expert (MoE) [29, 30, 52, 56] architectures select a subset of underlying models to apply to each token of a query and merge their responses. LLM synthesis [40] architectures operate similarly, but route the entire query to a subset of underlying LLMs and merge their responses. These approaches reduce inference costs by using fewer and/or less complex underlying models.\n\nApplications of LLM routers. A key use case for LLM routers is to help LLM-based application reduce cost. Several commercial routers, including Unify [12], Martian [5], NotDiamond [7], and others, offer this as a service. By replacing a few lines of code, the application can send user queries to a router service, rather than directly to some LLM provider. The service selects the optimal LLM and forwards the queries. 
Commercial router services claim that this results in significant cost savings: up to 98% in the case of Martian [5], and 10× in the case of NotDiamond [7].\n\n### 3 LLM Control Plane Integrity\n\nIn this section, we define *LLM control plane integrity*. Informally, it means that decisions made about underlying LLM queries made by the control plane algorithms cannot be subverted by adversarial queries. Looking ahead, we will focus on one class of control plane: predictive LLM routing as used to manage cost.\n\nFormalizing control planes. An LLM control plane Rω is a potentially randomized algorithm. It is parameterized by a string ω, called the parameters. It utilizes some number n of LLMs denoted by M. We will mostly focus on the case of n = 2, and, for reasons that will be clear in a moment, use Ms (\"strong\") and Mw (\"weak\") to denote the two underlying LLMs. Then inference on an input x ∈ X for some set X of allowed queries is performed by computing a response via y ←$ RMω (x). Here we use ←$ to denote running R with fresh random coins; we use ← when R is deterministic. We focus on inference for a single query, but it is straightforward to extend our abstraction for control planes to include sessions: the controller would maintain state across invocations, potentially adapting its behavior as a function of a sequence of queries and responses.\n\nLLM control planes should, in general, be relatively computationally lightweight, at least compared to the underlying LLMs. This is particularly so in the cost-motivated usage of control planes, as a computationally or financially expensive control plane would eat into cost savings incurred by utilizing cheaper underlying LLMs for some queries. For example, predictive binary routers use relatively simple classifiers to determine which of Ms or Mw should be used to respond to a query.\n\nInference flow. 
Given a set of LLMs M, a control plane Rω, and an input x, an LLM inference flow is the sequence of LLM invocations Mij (zj ) for 1 ≤ j ≤ m and ij ∈ {w, s} made when executing RMω (x). Here m is the total number of LLM invocations, and z1, . . . , zm are the queries made to the underlying LLMs. Should R be randomized, the sequence and its length are random variables. An inference flow can be written as a transcript\n\n$$T=(i_{1},z_{1}),(i_{2},z_{2}),\\ldots,(i_{m},z_{m})$$\n\nof pairs of model indexes ij ∈ {w, s} and model inputs zj . Note that for simplicity we ignore the potential for parallelization, assuming execution proceeds serially. For binary routers, we have m = 1 and T ∈ {(w, x),(s, x)}. We write submitting a sequence of inferences ⃗x = ⃗x1, . . . , ⃗xq to a control plane as\n\n$$R_{\\omega}^{\\mathcal{M}}(\\vec{x})=(R_{\\omega}^{\\mathcal{M}}(\\vec{x}_{1}),\\ldots,R_{\\omega}^{\\mathcal{M}}(\\vec{x}_{q}))$$\n\nwhere note that each invocation could result in multiple underlying LLM invocations. In the binary router case, however, each invocation results in a single LLM invocation.\n\nAn *inference flow policy* dictates the control plane designer's intention regarding use of the underlying models. For example, an application may want to ensure that only a small fraction of queries go to the expensive model Ms. We can define this as a predicate over a sequence of transcripts. In our binary router example, the policy can be more simply defined as a predicate P over (input, model) pairs (⃗x1, i1), . . . ,(⃗xq, iq) since this fully defines the sequence of transcripts. 
For example, a policy might specify that the strong model is used in at most an ϵ fraction of inferences:\n\n$${\\mathcal{P}}(({\\vec{x}}_{1},i_{1}),\\ldots,({\\vec{x}}_{q},i_{q}))=\\left(\\sum_{j=1}^{q}{\\frac{\\mathbb{I}(i_{j})}{q}}\\leq\\epsilon\\right)$$", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv1.pdf" - }, - { - "text": "# REROUTING LLM ROUTERS\n\nA PREPRINT\n\nAvital Shafran The Hebrew University of Jerusalem\n\nRoei Schuster Wild Moose\n\nThomas Ristenpart Cornell Tech\n\nVitaly Shmatikov Cornell Tech\n\n### ABSTRACT\n\nLLM routers aim to balance quality and cost of generation by classifying queries and routing them to a cheaper or more expensive LLM depending on their complexity. Routers represent one type of what we call LLM control planes: systems that orchestrate use of one or more LLMs. In this paper, we investigate routers' adversarial robustness.\n\nWe first define LLM control plane integrity, i.e., robustness of LLM orchestration to adversarial inputs, as a distinct problem in AI safety. Next, we demonstrate that an adversary can generate queryindependent token sequences we call \"confounder gadgets\" that, when added to any query, cause LLM routers to send the query to a strong LLM.\n\nOur quantitative evaluation shows that this attack is successful both in white-box and black-box settings against a variety of open-source and commercial routers, and that confounding queries do not affect the quality of LLM responses. Finally, we demonstrate that gadgets can be effective while maintaining low perplexity, thus perplexity-based filtering is not an effective defense. We finish by investigating alternative defenses.\n\n### 1 Introduction\n\nLarge language models (LLMs) exhibit remarkable capabilities on many tasks. Today, hundreds of open-source and proprietary LLMs are available at different prices, ranging from expensive, state-of-the-art models to cheaper, smaller, less capable ones. 
LLM operators typically provide API access to their models (especially higher-quality models) on a pay-per-query basis. This imposes non-trivial costs on LLM-based applications and systems.\n\nDevelopers who want to integrate LLMs into their applications must therefore consider both utility and cost. They want to maximize the quality of responses to their queries while minimizing the cost. The two objectives conflict with each other: larger models tend to generate higher-quality answers but charge more per query. For example, at the time of this writing, GPT-3.5-turbo costs $0.5/$1.5 per 1M input/output tokens, GPT-4o-mini $0.15/$0.6, GPT-4o $2.5/$10, o1-preview $15/$60. The difference in quality between models is not uniform across queries. For some queries, even a cheap model can generate an acceptable response. More complex queries require an expensive model to obtain a quality answer.\n\nA natural solution to balancing performance and economic considerations is to take advantage of the availability of multiple LLMs at different price-performance points. Recently proposed *LLM routing* systems [5, 12, 27, 47, 53] orchestrate two or more LLMs and adaptively route each query to the cheapest LLM they deem likely to generate a response of sufficient quality. In the two-LLM case, let Ms be an expensive, high-quality model and Mw a weaker, lower-grade one. Given query q, the routing algorithm R(·) applies a classifier to q that outputs 0 if Mw is sufficient for answering q, or 1 if Ms is required. The system then routes q accordingly.\n\nLLM routing is an example of a general class of systems we call LLM control planes, which orchestrate the use of multiple LLMs to process inputs, as further described in Section 2.\n\nOur contributions. First, we introduce *LLM control plane integrity* as a novel problem in AI safety. Recently proposed LLM control-plane algorithms are learned, calibrated classifiers (see Section 2). 
Their inputs are queries from potentially adversarial users. Robustness of control-plane algorithms to adversarial queries is a new problem, distinct from adversarial robustness of the underlying LLMs.", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv1.pdf" - }, - { - "text": "Figure 1: LLM routers classify queries and route complex ones to an expensive/strong model, others to a cheaper/weak model. To control costs, LLM routers can be calibrated to maintain (for an expected workload) a specific ratio between queries sent to the strong and weak models.\n\nTo initiate the study of this problem, we show that existing LLM routing algorithms are not adversarially robust. We design, implement, and evaluate a method that generates *query-independent* adversarial token sequences we call \"confounder gadgets.\" If a gadget is added to any query, this query is routed to the strong model with high probability. Next, we show that this attack is effective even in the *transfer* setting where the adversary does not have full knowledge of the target LLM router (it is black-box), but has access to another router (e.g., an internally trained surrogate). We also evaluate the integrity of commercial LLM routers, showing that they can be confounded as well.\n\nThird, we investigate defenses. Our basic method generates gadgets that have anomalously high perplexity. Confounded queries are thus easily distinguished from normal queries and can be filtered out by the routing system. Unfortunately, this defense can be evaded by an adversary who incorporates a low-perplexity objective into the gadget generation algorithm, producing gadgets that have low perplexity—and yet are effective at re-routing queries to the strong model. 
We also discuss higher-level defenses, such as identifying users whose queries are routed to the strong model with abnormal frequency.\n\nRouting attacks can be deployed for various adversarial objectives, e.g., to ensure that the adversary always obtains the highest-quality answer regardless of the target application's internal routing policies and cost constraints, or to maliciously inflate the target's LLM costs. As LLM control planes grow in importance and sophistication, we hope that this work will motivate further research on their adversarial robustness.\n\n# 2 LLM Control Planes and Routing\n\nInference using large language models (LLMs) is traditionally monolithic: a single model is applied to an input or sequence of inputs. This methodology can be sub-optimal for various reasons. State-of-the-art models are often expensive, with API access to LLMs costing as much as several dollars for each query. Elsewhere, distinct LLMs may excel at different tasks, and selectively using them may improve overall quality on a diverse workload. Finally, combining multiple LLMs, even all trained for similar tasks, may become increasingly prevalent as performance improvements of individual LLMs plateau [8–10].\n\nResearchers and practitioners are therefore now developing inference architectures that use multiple LLMs to answer queries. These LLMs are orchestrated by what we call an *LLM control plane* (borrowing the terminology from networking [13]). The control plane may route queries or parts of queries to different LLMs, derive new strings to query to underlying LLMs, combine answers from underlying LLMs, and more.\n\nLLM routers. A prominent example of this emerging class of LLM control planes is *LLM routers* [27, 41, 47, 53, 59]. LLM routers decide which of the two (or, sometimes, more) LLMs to use to answer a query. 
In prescriptive routing, the router applies some lightweight classifier to the input query that determines which underlying LLM to utilize for a response. The classifier is itself a learned function that scores the complexity of the query. Deployments can then configure a score threshold for when to route a query to the more expensive LLM. This threshold can be tuned using representative workloads to achieve a desired cost-performance trade-off. Figure 1 shows the basic workflow of binary LLM routers.\n\nNon-prescriptive routing [15, 20, 68] uses the responses from one or more underlying LLMs to determine which response to return to the user. For example, FrugalGPT [20] submits the query to a sequence of models (ordered by price) called a cascade, stopping when it obtains a response classified by the router as sufficient.", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv1.pdf" - }, - { - "text": "We introduced and defined a new safety property, *LLM control plane integrity*. Informally, this property holds if an adversarial user cannot influence routing decisions made by the control plane. To show that existing LLM routers do not satisfy this property, we designed, implemented, and evaluated a black-box optimization method for generating queryindependent \"confounder gadgets.\" When added to any query, the confounder gadget confuses the router into routing the query to the adversary-chosen LLM.\n\nWe evaluated the efficacy of confounder gadgets on multiple open-source and commercial routers and demonstrated that they successfully reroute queries without a negative impact on the quality of responses. We also discussed defenses against these attacks and indicated directions for future research.\n\n# Acknowledgments\n\nThis research was supported in part by the Google Cyber NYC Institutional Research Program, the Israel Science Foundation (Grant No. 1336/22), and the European Union (ERC, FTRC, 101043243). 
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.", - "page_start": 17, - "page_end": 17, - "source_file": "arxiv1.pdf" - }, - { - "text": "an extra potentially expensive LLM invocation for each query processed by the router. Second, it may degrade the quality of responses from the destination LLMs, which are sensitive to the phrasing of queries and prompts.\n\nDetecting anomalous user workloads. Another possible defense requires the router to monitor individual user workloads, and identify those users whose queries are routed to the strongest model with an abnormally high frequency. The router can then impose a user-specific threshold. Of course such workloads may have a benign explanation, e.g., the user's queries may be unusually complex. Even so, routers could potentially be designed to perform user-specific routing. For example, one could imagine using per-user thresholds that are calibrated dynamically to attempt to maintain a consistent fraction of queries being routed to the strong model.\n\nSuch user-specific routing would complicate implementations, and would make inaccurate decisions for a user until there is sufficient data about their queries. The latter is relevant in adversarial settings, since such an approach would still be circumventable should attackers be able to mount Sybil attacks in which the attacker creates a new user for, in the limit, each query.\n\n# 9 Related Work\n\nEvasion attacks against ML systems. A large body of work has investigated evasion attacks against ML systems [25, 43, 60], also referred to as adversarial examples [32, 48, 49], and these attacks are now being explored in the context of multi-modal LLMs [28] as well as text-only LLMs (for just one example, see [22]). 
We discussed in Section 3 how our results compare: LLM control plane integrity is a distinct AI safety issue, but related in that: (1) control plane integrity attacks may use evasion-style techniques, and (2) control plane integrity attacks might be useful for performing evasion.\n\nPrompt injection against LLMs. Prompt injection is a class of attacks against LLMs in which the adversary manipulates the prompt, i.e., the textual input fed directly to the LLM, causing the LLM to generate outputs that satisfy some adversarial objective [50, 64]. Evasion attacks as discussed above can use prompt injection, jailbreaking attacks being a widely explored example in which the adversary aims to bypass some safety guardrail included in the LLM system, such as \"do not output expletives\" [23, 42, 54, 66, 72, 73].\n\nPrompt injection is also used for extraction attacks that aim to infer some information from or about the model, for example, the system prompt [50, 54, 70], training data samples [46], or model parameters [18]. In indirect prompt injection attacks [33], the adversaries do not directly interact with the target LLM, and instead inject adversarial inputs into thirdparty data, which is then added to the LLM prompt (intentionally or unintentionally) by the victim application and/or its users. This relates to another category of attacks that target LLM-based applications, such as RAG systems, and invalidate their integrity by exploiting the weaknesses of the underlying LLM [19, 55].\n\nOur attacks also modify queries, but with a different aim than the above types of attacks: undermining the integrity of the control plane routing, rather than the LLM itself. Future work might investigate indirect control plane integrity attacks that, analogously to indirect prompt injection, serve to somehow trick users of a routing system into forming controlplane-confounding queries.\n\nAttacks against MoE. 
Mixture-of-Experts (MoE) architectures enable using multiple expert modules for processing a given query with a lower computational cost by including an inner routing mechanism that in every layer routes different tokens to a small number of experts [29, 30, 52, 56]. This can be thought of as an internal router within a single LLM, rather than an external control plane that orchestrates multiple LLMs. MoE has increased in popularity as it allows building larger models at a fixed compute budget—not all parameters are used at the same time.\n\nHayes et al. [34] identified a vulnerability in MoE that can be exploited for a denial-of-service attack against MoE. Thus control plane integrity issues appear to extend to the context of single-LLM MoE systems, and future work could explore this connection further.\n\nYona et al. [67] presented a side-channel attack on MoE that enables an attacker to reveal other users' prompts. We expect that side-channel attacks against LLM control planes exist as well, for example, to infer which models are used via timing of responses. Such attacks, which target confidentiality, are outside the scope of control plane integrity.\n\n# 10 Conclusion\n\nLLM routers balance quality and cost of LLM inference by routing different queries to different LLMs. 
They are an example of a broader, emerging class of systems we call \"LLM control planes\" that aim to achieve various quality, efficiency, and cost objectives by orchestrating use of multiple LLMs to respond to a query.", - "page_start": 16, - "page_end": 16, - "source_file": "arxiv1.pdf" - }, - { - "text": "| Routers | Notation | |\n| --- | --- | --- |\n| Similarity-weighted ranking | RSW | |\n| Matrix factorization | RMF | |\n| BERT classifier | RCLS | |\n| LLM scoring | RLLM | |\n| LLM pair | Strong (Ms) | Weak (Mw) |\n| 1 | Llama-3.1-8B | 4-bit Mixtral 8x7B |\n| 2 | Llama-3.1-8B | Mistral-7B-Instruct-v0.3 |\n| 3 | Llama-3.1-8B | Llama-2-7B-chat-hf |\n| 4 | GPT-4-1106-preview | 4-bit Mixtral 8x7B |\n| Benchmark | Description | |\n| MT-Bench [71] | 160 open-ended questions | |\n| MMLU [35] | 14,042 multi-choice questions | |\n| GSM8K [24] | 1,319 grade-school math problems | |\n\nFigure 3: Summary of our setup for routers, underlying LLMs, and benchmark datasets used in the experiments.\n\nIn all experiments, we assume that the adversary's goal is to reroute queries to the strong model. In Appendix E, we evaluate efficacy of the attack when the goal is to reroute to the weak model.\n\nTarget routers. We focus our evaluation on the four prescriptive routing algorithms proposed by Ong et al. [47], which provides open-source code and trained parameters, and does so for a representative variety of routing approaches: similarity-based classification [41, 59], an MLP constructed via matrix factorization [59], BERT-based classification [27, 53, 59], and a fine-tuned LLM.\n\nThe routers we evaluate were trained in a supervised fashion using a set of reference (training) queries whose performance score on each of the considered models is known. The scores were computed from a collection of human pairwise rankings of model answers for each of the queries. 
We note that while the routers we consider are all learned using this training set, there is no reason to believe a non-learning-based approach (e.g., rule based) to routing would be more adversarially robust.\n\nWe now outline the routing methods considered in this work. See Ong et al. [47] for their full implementation details.\n\n*Similarity-weighted ranking:* The first method is based on the Bradley-Terry (BT) model [17]. For a given user query, this model derives a function to compute the probability of the weak model being preferred over the strong model. The probability-function expressions all share parameters, which are optimized to minimize the sum of cross-entropy losses over the training-set queries, where each element in the sum is weighted by the respective query's similarity with the user's query (computed as embeddings cosine similarity, with the embedding derived using OpenAI's text-embedding-3 small [6]). We denote this method as RSW .\n\n*Matrix factorization:* The second method is based on matrix factorization. The training queries are used to train a bilinear function mapping a model's embedding and a query's embedding to a score corresponding to how well the model performs on the query. Routing is done by computing the score of the input query for each model, and choosing the highest-scoring model. We denote this method as RMF .\n\n*BERT classifier:* The third method involves fine-tuning a classifier, based on the BERT-base architecture [26], to predict which of the two models produces a better response for the given query or whether they do equally well (a tie). The routing decision is based on the probability of the weak model providing a better response versus the strong model or the tie. We denote this method as RCLS.\n\n*LLM classifier:* The last method is based on asking an LLM to provide a score in the range 1–5 of how an AI expert would struggle to respond to a given query based on the query's complexity. For this, Ong et al. 
fine-tuned a Llama-3-8B model [4] using their reference set of queries and corresponding scores. We denote this method as RLLM.\n\nUnderlying LLMs. In [47], Ong et al. trained the routers with GPT-4-1106-preview [14] as the strong model and Mixtral 8x7B [39] as the weak model. They report successful generalization between the underlying LLMs, stating that their routers trained for a particular strong-weak LLM pair can be used with other strong-weak LLM pairs.\n\nTo allow our evaluation to scale, we use as the strong model Ms the open-sourced Llama-3.1-8B [3] and as Mw the 4-bit quantized version of Mixtral 8x7B (for efficiency reasons). This reduced the cost of our experiments by avoiding expensive GPT API calls and lowering the computational costs of Mixtral. Unless mentioned otherwise, all of our results", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv1.pdf" - }, - { - "text": "| | RSW | | RMF | | RCLS | | RLLM | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | Original | Confounded | Original | Confounded | Original | Confounded | Original | Confounded |\n| MT-Bench | 9.2 | 9.2 ± 0.0 | 9.1 | 9.3 ± 0.0 | 9.2 | 9.1 ± 0.0 | 8.9 | 9.1 ± 0.1 |\n| MMLU | 76 | 84 ± 1 | 76 | 81 ± 0 | 76 | 84 ± 0 | 78 | 84 ± 1 |\n| GSM8K | 62 | 86 ± 0 | 65 | 88 ± 1 | 68 | 90 ± 2 | 66 | 85 ± 2 |\n\nTable 10: Benchmark-specific average scores of responses to the original and confounded queries with GPT-4-1106 preview as the strong model (LLM pair 4), in the white-box setting. Results demonstrate a higher increase in performance with respect to the LLM pair 1 setting, due to the larger performance gap between the models.\n\nconfounder gadgets, queries originally routed to GPT are still routed to GPT and no queries are ever routed to Claude. For queries originally routed to Llama, some gadgets result in *all* of them being rerouted to GPT, and some have no impact. 
Specifically, 4 out of the 10 gadgets we optimized using RSW caused all queries to be rerouted to GPT, 2/10 using RMF, and 3/10 using RLLM. None of the gadgets optimized using RCLS had any impact on routing. In terms of costs, having all queries rerouted to GPT results in an average cost of $0.25, a greater than 8× increase over the cost of the original queries. Given the lack of documentation of the routing algorithm being used, we are unsure what explains the variability across gadgets.\n\nMartian. This router is supposed to let the user provide a list of models and to specify the maximum amount the user is willing to pay for a query or for 1M tokens. Unfortunately, as of November 14, 2024, the router appears to ignore the list of models provided by the user, and forwards the input to the same LLM regardless. We tested this in settings including one, two, or multiple models. While responses do not specify which LLM was used, they were identical across settings, so we excluded Martian from our evaluation. We notified Martian about the seemingly buggy behavior.\n\n# 8 Defenses\n\nDefenses against rerouting should be cheap. If the per-query cost of the defense is comparable to the per-query cost of a strong LLM, deploying the defense will defeat the main purpose of LLM routing, which is to reduce the cost of responding to queries.\n\nPerplexity-based filtering. As explained in Section 6, perplexity is a measure of how \"natural\" the text looks. Perplexity-based filtering has been suggested in many contexts as a defense against adversarial text inputs [16, 36]. This defense computes the perplexity of multiple \"trusted\" texts, then compares it with the perplexity of the suspicious text. If the latter is significantly higher, or above some predefined threshold, the text is considered adversarial. Specifically, we assume the defender has access to a set of unmodified queries. 
The defender computes their perplexity values and uses these values to establish a threshold. Given a new query, the defender checks if its perplexity exceeds the threshold. If so, the query is flagged as adversarial. The defender can then decide how to handle such queries. Options include rejecting them or routing them all to the weak model. Computing the perplexity of a query can be cheap to do, e.g., using GPT-2 as we do in this work; this makes it viable for use as a defense that doesn't undermine the benefits of routing.\n\nTo evaluate the effectiveness of such a defense against our attack, we compare the perplexity values of original and confounded queries. Figure 5 presents histograms of perplexity values for both the original evaluated GSM8K queries and their corresponding confounded versions, generated using one of the confounder gadgets, sampled uniformly at random. Additionally, the figure displays the ROC curve for the defense that detects confounded queries by checking if their perplexity exceeds a threshold. As can be seen, the confounded queries exhibit significantly higher perplexity values, making them readily distinguishable from the original queries. For instance, in the case of the RSW router, setting the threshold value at 55 yields a false-positive rate of 3% and a true-positive rate of 97%. Results are similar for other gadgets and benchmarks and were omitted due to space constraints.\n\nUnfortunately, this defense can be evaded if an adversary incorporates a perplexity constraint into the gadget generation process. To demonstrate the feasibility of this evasion strategy, we modify gadget generation to maximize the score of the routing algorithm R while simultaneously aligning the gadget's perplexity to some predefined perplexity value. In more detail, in each iteration t ∈ [T], we uniformly sample a target index j ∈ [1, n] and generate a set B of B + 1 candidates as explained in Section 4. We then modify Eq. 
1 such that we now find the candidate that maximizes the difference between the router's score and the perplexity constraint for the confounder:\n\n$$c^{(t+1)}\\leftarrow\\operatorname*{arg\\,max}_{c\\in\\mathcal{B}}\\;\\left(S_{\\theta}(c\\|x_{i})-\\alpha\\cdot|\\mathrm{{\\sf{PPL}}}(c)-\\rho|\\right)\\,,$$", - "page_start": 13, - "page_end": 13, - "source_file": "arxiv1.pdf" - }, - { - "text": "will be evaluated with respect to this pair, which we refer to as LLM pair 1. We performed more limited experiments with the original strong, weak model pair (LLM pair 4) and had similar success in rerouting.\n\nWe additionally performed experiments with two further weaker models, in order to better evaluate the case where weak models produce much lower-quality responses for queries (compared to the strong model). In particular, we define LLM pair 2 as the strong model plus Mistral-7B-Instruct-v0.3 [38] and LLM pair 3 as the strong model plus Llama-2-7B-chathf [63]. The weaker models in pairs 2 and 3 were chosen to represent smaller (Mistral 7B) and older-generation (Llama-2) models: according to the Chatbot Arena LLM ranking leaderboard [1, 21], Llama-3.1-8B is ranked in the 58th place, Mixtral 8x7B at the 88th place, Mistral-7B at the 108th place, and Llama-2-7B at the 125th place.\n\nThe LLM strong-weak pairs with which we performed experiments are summarized in Figure 3.\n\nEvaluation datasets. We will evaluate our attacks using three standard LLM benchmarks as workloads: MT-Bench [71], a dataset of 160 open-ended questions, MMLU [35], a dataset of 14,042 multi-choice questions, and GSM8K [24], a dataset of 1,319 grade-school math problems. Note that Ong et al. [47] flagged that some data points are \"contaminated\", i.e., they are too similar to the ones used in their training of the routers. 
We use these datasets without these contaminated elements, resulting in 72 MT-bench queries, 14,037 MMLU queries, and 1,307 GSM8K queries.\n\nFor MMLU and GSM8K, we will require that the LLMs respond in a predefined format so we can parse and compare the responses to ground-truth answers. To facilitate this, we prepended formatting instructions to the query, inserted as a prefix before the gadget in the case of confounded queries. In other words, a confounded query ends up defined as xˆi = instr∥c∥xi for instruction template instr, confounder gadget c, and original query xi . Thus in this case we model a scenario where the adversary only controls a part of the prompt rather than the entire prompt. See Appendix B for formatting examples and ablations.\n\nRouter calibration. For each workload, we must calibrate each router by setting the threshold τ to achieve some target fraction ϵ of queries routed to the strong model. Note that the calibration process we use is agnostic to the underlying LLM pair. We therefore must define 12 distinct thresholds, one for each (router, dataset) pair. For our experiments here, we set ϵ = 0.5, meaning the goal is to have about half the queries routed to the strong model. This reflects an application developer that seeks to control for costs, even if it may mean sacrificing some performance for some workloads.\n\nTo calibrate for MT-bench, we use the Chatbot Arena [21] dataset as the calibration set, computing the threshold using the 55K queries for which Ong et al. precomputed the scoring function outputs. To calibrate for MMLU and GSM8K, we select 1,000 queries uniformly at random and use these to set thresholds. Looking ahead, we do not use these queries during evaluation of the attacks.\n\nNote that it is important that the distribution of calibration queries be similar to the distribution of the target workload (and, in our experiments, the test queries). 
We observed that the Chatbot Arena-based threshold did not transfer well to MMLU and GSM8K, resulting in the majority of queries (≈ 98%) routed to the strong model.\n\n### 6 Rerouting Open-Source Routers\n\nWe now empirically evaluate our rerouting attack against the open-source routers described in the previous section. Unless otherwise specified, our evaluation focuses on the query-independent attack setting where the attacker first finds a fixed set of gadgets and then uses them to attack arbitrarily many queries. This is the conservative setting, and query-specific gadgets — which carry a higher computational cost — generally work better.\n\nIn Appendix C we evaluate optimization-free alternatives for generating our confounding gadgets, and show they significantly underperform our optimization-based approach.\n\nWhite-box confounder gadget generation. Following our attack framework described in Section 4, we construct a query-independent control-plane gadget designed to confuse each router. We start with the white-box setting, setting the batch size to B = 32 and the number of iterations to T = 100, ignoring thresholds. We generate four sets of n = 10 gadgets, i.e., ten for each router. Examples of generated gadgets can be found in Appendix A.\n\nWhen reporting scores below, we therefore report the average over the n gadgets used with all 72 MT-bench queries, 100 randomly selected MMLU queries, and 100 randomly selected GSM8K queries. None of these testing queries were used in the training of the routers or their calibration.\n\nRuntime and convergence. Figure 4 shows the convergence rates for 10 different gadgets, against different routing algorithms. The overall average number of iterations before convergence is 58. 
Generation against RSW converges the", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv1.pdf" - }, - { - "text": "Figure 12-4 Horizontal and vertical scaling with multiple LPARs\n\nThis scenario is in organizations with large systems, such as AIX or z/OS, that are installed and that have enough available capacity to support the required Content Manager OnDemand workload. One advantage of this configuration is that you can control the priority of work and computer resource distribution to each of the LPARs, such as the number of processors or the processing priority (depending on the computer system/operating system architecture) that is allocated to each of the LPARs. So, for example, load jobs can be assigned a low priority during the day when the focus is on data retrieval and a high priority during the night when the focus is on data loading.\n\nThis setup supports horizontal scalability by using multiple technologies as appropriate. The main constraint is that clients must have access to all systems through TCP/IP.\n\n# **12.2.6 Multiple server configuration rules**\n\nThe following general rules apply when you configure multiple Content Manager OnDemand servers. In all cases, for additional guidance, see the appropriate Content Manager OnDemand documentation or contact Content Manager OnDemand Lab Services.\n\n- -Each Content Manager OnDemand server has its own set of configuration files.\n- - The parameters in all configuration files must be set so that all of the servers are part of the same instance.\n- - The Content Manager OnDemand clients connect to the IP address listening port of the Content Manager OnDemand server (library server module).\n- - The documents are retrieved from the various object servers based on the location information that is returned by the library server. 
This retrieval is transparent to the client systems.\n- -Parallel load processes must have separate temp directories.\n\nFigure 12-5 on page 292 depicts this configuration type.", - "page_start": 314, - "page_end": 314, - "source_file": "sg246915.pdf" - }, - { - "text": "Clients might be able to use the intelligence in domain name servers (DNSs) to provide partial failover. Figure 3-1 shows a simple IP addressing scheme that uses a single subnet for iSCSI and management.\n\n*Figure 3-1 IP addressing scheme: Use of a single subnet*\n\n# **3.5.1 Firewall planning**\n\nAfter you have your IP network planned, identify network flows that are required for the correct functioning of the environment. The list must specify source IP address, destination IP addresses, and required protocols or ports for each flow. Present the list to the firewall administrators and request the set up of appropriate firewall rules.\n\nFor a list of mandatory and optional network flows that are required for operating IBM SAN Volume Controller, search for \"TCP/IP requirements for the system\" at IBM Knowledge Center.\n\n# **3.6 SAN configuration planning**\n\nStorwize V7000 cluster can be configured with a minimum of two (and up to eight) Storwize V7000 nodes. These nodes can use SAN fabric to communicate with back-end storage subsystems and hosts.\n\n# **3.6.1 Physical topology**\n\nThe switch configuration in an Storwize V7000 fabric must comply with the switch manufacturer's configuration rules, which can impose restrictions on the switch configuration. For example, a switch manufacturer might limit the number of supported switches in a SAN. 
Operation outside of the switch manufacturer's rules is not supported.", - "page_start": 71, - "page_end": 71, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv1.pdf", - "query": "What is an LLM control plane ?", - "target_page": 3, - "target_passage": " An LLM control plane Rω is a potentially randomized algorithm.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "In contrast to routers motivated by controlling costs, several LLM router designs focus solely on improving quality of responses [31, 45, 57, 58].\n\nThe LLM routers described thus far do not modify the queries or individual LLM responses. Other types of control planes do. Ensemble approaches such as mixture-of-expert (MoE) [29, 30, 52, 56] architectures select a subset of underlying models to apply to each token of a query and merge their responses. LLM synthesis [40] architectures operate similarly, but route the entire query to a subset of underlying LLMs and merge their responses. These approaches reduce inference costs by using fewer and/or less complex underlying models.\n\nApplications of LLM routers. A key use case for LLM routers is to help LLM-based application reduce cost. Several commercial routers, including Unify [12], Martian [5], NotDiamond [7], and others, offer this as a service. By replacing a few lines of code, the application can send user queries to a router service, rather than directly to some LLM provider. The service selects the optimal LLM and forwards the queries. Commercial router services claim that this results in significant cost savings: up to 98% in the case of Martian [5], and 10× in the case of NotDiamond [7].\n\n### 3 LLM Control Plane Integrity\n\nIn this section, we define *LLM control plane integrity*. Informally, it means that decisions made about underlying LLM queries made by the control plane algorithms cannot be subverted by adversarial queries. 
Looking ahead, we will focus on one class of control plane: predictive LLM routing as used to manage cost.\n\nFormalizing control planes. An LLM control plane Rω is a potentially randomized algorithm. It is parameterized by a string ω, called the parameters. It utilizes some number n of LLMs denoted by M. We will mostly focus on the case of n = 2, and, for reasons that will be clear in a moment, use Ms (\"strong\") and Mw (\"weak\") to denote the two underlying LLMs. Then inference on an input x ∈ X for some set X of allowed queries is performed by computing a response via y ←$ RMω (x). Here we use ←$ to denote running R with fresh random coins; we use ← when R is deterministic. We focus on inference for a single query, but it is straightforward to extend our abstraction for control planes to include sessions: the controller would maintain state across invocations, potentially adapting its behavior as a function of a sequence of queries and responses.\n\nLLM control planes should, in general, be relatively computationally lightweight, at least compared to the underlying LLMs. This is particularly so in the cost-motivated usage of control planes, as a computationally or financially expensive control plane would eat into cost savings incurred by utilizing cheaper underlying LLMs for some queries. For example, predictive binary routers use relatively simple classifiers to determine which of Ms or Mw should be used to respond to a query.\n\nInference flow. Given a set of LLMs M, a control plane Rω, and an input x, an LLM inference flow is the sequence of LLM invocations Mij (zj ) for 1 ≤ j ≤ m and ij ∈ {w, s} made when executing RMω (x). Here m is the total number of LLM invocations, and z1, . . . , zm are the queries made to the underlying LLMs. Should R be randomized, the sequence and its length are random variables. 
An inference flow can be written as a transcript\n\n$$T=(i_{1},z_{1}),(i_{2},z_{2}),\\ldots,(i_{m},z_{m})$$\n\nof pairs of model indexes ij ∈ {w, s} and model inputs zj . Note that for simplicity we ignore the potential for parallelization, assuming execution proceeds serially. For binary routers, we have m = 1 and T ∈ {(w, x),(s, x)}. We write submitting a sequence of inferences ⃗x = ⃗x1, . . . , ⃗xq to a control plane as\n\n$$R_{\\omega}^{\\mathcal{M}}(\\vec{x})=(R_{\\omega}^{\\mathcal{M}}(\\vec{x}_{1}),\\ldots,R_{\\omega}^{\\mathcal{M}}(\\vec{x}_{q}))$$\n\nwhere note that each invocation could result in multiple underlying LLM invocations. In the binary router case, however, each invocation results in a single LLM invocation.\n\nAn *inference flow policy* dictates the control plane designer's intention regarding use of the underlying models. For example, an application may want to ensure that only a small fraction of queries go to the expensive model Ms. We can define this as a predicate over a sequence of transcripts. In our binary router example, the policy can be more simply defined as a predicate P over (input, model) pairs (⃗x1, i1), . . . ,(⃗xq, iq) since this fully defines the sequence of transcripts. For example, a policy might specify that the strong model is used in at most an ϵ fraction of inferences:\n\n$${\\mathcal{P}}(({\\vec{x}}_{1},i_{1}),\\ldots,({\\vec{x}}_{q},i_{q}))=\\left(\\sum_{j=1}^{q}{\\frac{\\mathbb{I}(i_{j})}{q}}\\leq\\epsilon\\right)$$", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv1.pdf" - }, - { - "text": "# REROUTING LLM ROUTERS\n\nA PREPRINT\n\nAvital Shafran The Hebrew University of Jerusalem\n\nRoei Schuster Wild Moose\n\nThomas Ristenpart Cornell Tech\n\nVitaly Shmatikov Cornell Tech\n\n### ABSTRACT\n\nLLM routers aim to balance quality and cost of generation by classifying queries and routing them to a cheaper or more expensive LLM depending on their complexity. 
Routers represent one type of what we call LLM control planes: systems that orchestrate use of one or more LLMs. In this paper, we investigate routers' adversarial robustness.\n\nWe first define LLM control plane integrity, i.e., robustness of LLM orchestration to adversarial inputs, as a distinct problem in AI safety. Next, we demonstrate that an adversary can generate queryindependent token sequences we call \"confounder gadgets\" that, when added to any query, cause LLM routers to send the query to a strong LLM.\n\nOur quantitative evaluation shows that this attack is successful both in white-box and black-box settings against a variety of open-source and commercial routers, and that confounding queries do not affect the quality of LLM responses. Finally, we demonstrate that gadgets can be effective while maintaining low perplexity, thus perplexity-based filtering is not an effective defense. We finish by investigating alternative defenses.\n\n### 1 Introduction\n\nLarge language models (LLMs) exhibit remarkable capabilities on many tasks. Today, hundreds of open-source and proprietary LLMs are available at different prices, ranging from expensive, state-of-the-art models to cheaper, smaller, less capable ones. LLM operators typically provide API access to their models (especially higher-quality models) on a pay-per-query basis. This imposes non-trivial costs on LLM-based applications and systems.\n\nDevelopers who want to integrate LLMs into their applications must therefore consider both utility and cost. They want to maximize the quality of responses to their queries while minimizing the cost. The two objectives conflict with each other: larger models tend to generate higher-quality answers but charge more per query. For example, at the time of this writing, GPT-3.5-turbo costs $0.5/$1.5 per 1M input/output tokens, GPT-4o-mini $0.15/$0.6, GPT-4o $2.5/$10, o1-preview $15/$60. The difference in quality between models is not uniform across queries. 
For some queries, even a cheap model can generate an acceptable response. More complex queries require an expensive model to obtain a quality answer.\n\nA natural solution to balancing performance and economic considerations is to take advantage of the availability of multiple LLMs at different price-performance points. Recently proposed *LLM routing* systems [5, 12, 27, 47, 53] orchestrate two or more LLMs and adaptively route each query to the cheapest LLM they deem likely to generate a response of sufficient quality. In the two-LLM case, let Ms be an expensive, high-quality model and Mw a weaker, lower-grade one. Given query q, the routing algorithm R(·) applies a classifier to q that outputs 0 if Mw is sufficient for answering q, or 1 if Ms is required. The system then routes q accordingly.\n\nLLM routing is an example of a general class of systems we call LLM control planes, which orchestrate the use of multiple LLMs to process inputs, as further described in Section 2.\n\nOur contributions. First, we introduce *LLM control plane integrity* as a novel problem in AI safety. Recently proposed LLM control-plane algorithms are learned, calibrated classifiers (see Section 2). Their inputs are queries from potentially adversarial users. Robustness of control-plane algorithms to adversarial queries is a new problem, distinct from adversarial robustness of the underlying LLMs.", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv1.pdf" - }, - { - "text": "#### NAVWEPS OD4OT-80 APPLICATION OF AERODYNAMICS TO SPECIFIC PROBLEMS OF FLYING\n\ne.g., cruise, climb, maneuvers, etc. The region of reversed command is encountered primarily in the low speed phases of flight during takeoff and landing. 
Because of the extensive low speed flight during carrier operations, the Naval Aviator will be more familiar with the region of reversed command than the ordinary pilot.\n\nThe characteristics of flight in the region'of normal command are illustrated at point A on the second curve of figure 6.2. If the airplane is established in steady, level flight at point A, lift is equal to weight and the power available is set equal to the power required. When the airplane is disturbed to some airspeed slightly greater than point 'A, a power deficiency exists and, wheq,:the &+la&is disturbed to some airspeed slightly lower than point A, a power excess exists. This relationship provides a tendency for the airplane to return to the equilibrium of point A and resume the original flight condition following a disturbance. Also, the static longitudinal stability of the airplane tends to return the airplane to the original trimmed CL and velocity corresponding to this C,. The phugoid usually has most satisfactory qualities at low values of C,. so the high speed of the region 'of normal command provides little tendency of. the airplane's, airspeed to vary or wander abom.\n\nWith all factors considered, flight in Lhe region of noi& command is characterized by a relatively strong tendency of the airplane to maintain the trim speed quite naturally. However, flight in the region of normal command can lead to some unusual and erroneous impres-, sions regarding proper flying technique. For example, if the airplane is established at point A in steady level flight, a controlled increase in airspeed without a change in power setting will create a deficiency of power and cause the airplane to descend. Similarly, a controlled decrease in airspeed without a change in power setting will create an excess of power and cause the airplane to climb. 
This fact, coupled with Lhe transient motion of the airplane when the\n\nangle of attack is changed rapidly, may lead to the impression thal rate of climb and descent can be controlled by changes in angle of attack. While such is true in the region of normal command, for the conditions of stead' flight, primary control of altitude remains the power setting and the primary control of airspeed remains the angle of attack. The impressions and habits that can be developed in the region of normal command can bring about disastrous consequences in the region of reversed command\n\nThe characteristics of flight in the region of reversed command are illustrated at point B on the second curve of figure 6.2. If the airplane is established in steady, level flight at point B, lift is equal to weight and the power available is set equal to the. power required. When the airplane is disturbed to some airspeed slightly greater than point B, an excess of power exists and, when the airplane is disturbed to some airspeed slightly lower than point B, a deficiency of power exists. This relationship is basically unstable because the variation of excess power to either side of point B tends to magnify any original disturbance. While the static longitudinal stability of the airplane tends to maintain the original trimmed C, and airspeed corresponding to that CL, the phugoid usually has the least satisfactory qualities at the high values of CL corresponding to low speed flight.\n\nWhen all factors are considered, flight in the region of reversed command is characterized by a relatively weak tendency of the airplane to maintain the trim speed naturally. In fact it is likely that the airplane will exhibit no inherent tendency to maintain the trim speed in this regime of flight. 
For this reason, the pilot inust give particular attention to precise control of airspeed when operating in the low flight speeds of the region of reversed command.\n\nWhile flight in the region of normal command may create doubt as to the primary control of airspeed and altitude, operation in the region of reversed command should leave little", - "page_start": 372, - "page_end": 372, - "source_file": "00-80T-80.pdf" - }, - { - "text": "| loo | l.lm | 1.30 | 20.00 |\n| --- | --- | --- | --- |\n| 110 | ,826 | 1.24 | 15.P |\n| 17.0 | ,694 | 1.04 | 12.7' |\n| lY) | .444 | .61 | 8.20 |\n| 200 | 230 | .38 | 4.6' |\n| MO | ,111 | .I7 | 2.10 |\n| 4&l | .c453 | .o!J | 1.10 |\n| 30.7 | ,040 | .06 | .T= |\n| 600 | .028 | .04 | .5O |\n\nNote that for the conditions of steady flight, each airspeed requites a specific angle of attack and lift coefficient. This fact provides a fundamental concept of flying technique: Angle of attack is tbs primary Control of airspeed in steady flight. Of course, the control stick or wheel allows the pilot to control the angle of attack and, thus, control the airspeed in steady flight. In the same sense, the throttle controls the output of the powerplant and allows the pilot to control rate of climb and descent at various airspeeds.\n\nThe teal believers of these concepts ate professional instrument pilots, LSO's, and glider pilots.. The glider pilot (or flameout enthusiast) has no recourse but to control airspeed by angle of attack and accept whatever rate of descent is incurred at the various airspeeds. The LSO must become quite proficient at judging the flight path and angle of attack of the airplane in the pattern. The more complete visual reference field available to the LSO allows him to judge the angle of attack of the airplane mote accurately than the pilot. When the airplane approaches the LSO, the precise judgment of airspeed is by the angle of attack rather than the rate of closure. 
If the LSO sees the airplane on the desired flight path but with too low an angle of attack, the airspeed is too high; if the angle of attack is too high, the airspeed is too low and the aitplane is approaching the stall. The mirror landing system coupled with an angle of attack indicator is an obvious refinement. The mittot indicates the desired flight path and the angle of attack indicator allows precision control of the airspeed. The accomplished insttument pilot is the devotee of \"attitude\" flying technique-his creed being \"attitude plus power equals performance.\" During a GCA approach, the professional instrument pilot controls airspeed with stick (angle of attack) and rate of descent with power adjustment.\n\nManeuvering flight and certain transient conditions of flight tend to complicate the relationship of angle of attack and airspeed. However, the majority of flight and, certainly, the most critical regime of flight (takeoff, approach, and landing), is conducted in essentially steady flight condition.\n\nAIRFOIL LIFT CHARACTERISTICS. Airfoil section properties differ from wing or airplane properties because of the effect of the planform. Actually, the wing may have vatious airfoil sections from root to tip with taper, twist, sweepback and local flow components in a spanwise direction. The resulting aetodynamic properties of the wing are determined by the action of each section along the span and the three-dimensional flow. Airfoil section properties are derived from the basic shape or profile in two-dimensional flow and the force coefficients are given a notation of lower case letters. For example, a wing or airplane lift coefficient is C, while an airfoil section lift coefficient is termed cr. Also, wing angle of attack is Q while section angle of attack is differentiated by the use of 01~. 
The study of section properties allows an objective consideration of the effects of camber, thickness, etc.\n\nThe lift characteristics of five illustrative airfoil sections are shown in figure 1.12. The section lift coe&icient, c,, is plotted versus section angle of attack, olO, for five standard NACA airfoil profiles. One characteristic feature of all airfoil sections is that the slope of the various lift curves is essentially the same. At low lift coefhcients, the section lift coefficient increases approximately 0.1 for each degree increase in angle of attack. For each of the airfoils shown, a S' change in angle of", - "page_start": 44, - "page_end": 44, - "source_file": "00-80T-80.pdf" - }, - { - "text": "Figure 1: LLM routers classify queries and route complex ones to an expensive/strong model, others to a cheaper/weak model. To control costs, LLM routers can be calibrated to maintain (for an expected workload) a specific ratio between queries sent to the strong and weak models.\n\nTo initiate the study of this problem, we show that existing LLM routing algorithms are not adversarially robust. We design, implement, and evaluate a method that generates *query-independent* adversarial token sequences we call \"confounder gadgets.\" If a gadget is added to any query, this query is routed to the strong model with high probability. Next, we show that this attack is effective even in the *transfer* setting where the adversary does not have full knowledge of the target LLM router (it is black-box), but has access to another router (e.g., an internally trained surrogate). We also evaluate the integrity of commercial LLM routers, showing that they can be confounded as well.\n\nThird, we investigate defenses. Our basic method generates gadgets that have anomalously high perplexity. Confounded queries are thus easily distinguished from normal queries and can be filtered out by the routing system. 
Unfortunately, this defense can be evaded by an adversary who incorporates a low-perplexity objective into the gadget generation algorithm, producing gadgets that have low perplexity—and yet are effective at re-routing queries to the strong model. We also discuss higher-level defenses, such as identifying users whose queries are routed to the strong model with abnormal frequency.\n\nRouting attacks can be deployed for various adversarial objectives, e.g., to ensure that the adversary always obtains the highest-quality answer regardless of the target applications's internal routing policies and cost constraints, or to maliciously inflate the target's LLM costs. As LLM control planes grow in importance and sophistication, we hope that this work will motivate further research on their adversarial robustness.\n\n# 2 LLM Control Planes and Routing\n\nInference using large language models (LLMs) is traditionally monolithic: a single model is applied to an input or sequence of inputs. This methodology can be sub-optimal for various reasons. State-of-the-art models are often expensive, with API access to LLMs costing as much as several dollars for each query. Elsewhere, distinct LLMs may excel at different tasks, and selectively using them may improve overall quality on a diverse workload. Finally, combining multiple LLMs, even all trained for similar tasks, may become increasingly prevalent as performance improvements of individual LLMs plateaus [8–10].\n\nResearchers and practitioners are therefore now developing inference architectures that use multiple LLMs to answer queries. These LLMs are orchestrated by what we call an *LLM control plane* (borrowing the terminology from networking [13]). The control plane may route queries or parts of queries to different LLMs, derive new strings to query to underlying LLMs, combine answers from underlying LLMs, and more.\n\nLLM routers. 
A prominent example of this emerging class of LLM control planes are *LLM routers* [27, 41, 47, 53, 59]. LLM routers decide which of the two (or, sometimes, more) LLMs to use to answer a query. In prescriptive routing, the router applies some lightweight classifier to the input query that determines which underlying LLM to utilize for a response. The classifier is itself a learned function that scores the complexity of the query. Deployments can then configure a score threshold for when to route a query to the more expensive LLM. This threshold can be tuned using representative workloads to achieve a desired cost-performance trade-off. Figure 1 shows the basic workflow of binary LLM routers.\n\nNon-prescriptive routing [15, 20, 68] uses the responses from one or more underlying LLMs to determine which response to return to the user. For example, FrugalGPT [20] submits the query to a sequence of models (ordered by price) called a cascade, stopping when it obtains a response classified by the router as sufficient.", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv1.pdf" - }, - { - "text": "an extra potentially expensive LLM invocation for each query processed by the router. Second, it may degrade the quality of responses from the destination LLMs, which are sensitive to the phrasing of queries and prompts.\n\nDetecting anomalous user workloads. Another possible defense requires the router to monitor individual user workloads, and identify those users whose queries are routed to the strongest model with an abnormally high frequency. The router can then impose a user-specific threshold. Of course such workloads may have a benign explanation, e.g., the user's queries may be unusually complex. Even so, routers could potentially be designed to perform user-specific routing. 
For example, one could imagine using per-user thresholds that are calibrated dynamically to attempt to maintain a consistent fraction of queries being routed to the strong model.\n\nSuch user-specific routing would complicate implementations, and would make inaccurate decisions for a user until there is sufficient data about their queries. The latter is relevant in adversarial settings, since such an approach would still be circumventable should attackers be able to mount Sybil attacks in which the attacker creates a new user for, in the limit, each query.\n\n# 9 Related Work\n\nEvasion attacks against ML systems. A large body of work has investigated evasion attacks against ML systems [25, 43, 60], also referred to as adversarial examples [32, 48, 49], and these attacks are now being explored in the context of multi-modal LLMs [28] as well as text-only LLMs (for just one example, see [22]). We discussed in Section 3 how our results compare: LLM control plane integrity is a distinct AI safety issue, but related in that: (1) control plane integrity attacks may use evasion-style techniques, and (2) control plane integrity attacks might be useful for performing evasion.\n\nPrompt injection against LLMs. Prompt injection is a class of attacks against LLMs in which the adversary manipulates the prompt, i.e., the textual input fed directly to the LLM, causing the LLM to generate outputs that satisfy some adversarial objective [50, 64]. Evasion attacks as discussed above can use prompt injection, jailbreaking attacks being a widely explored example in which the adversary aims to bypass some safety guardrail included in the LLM system, such as \"do not output expletives\" [23, 42, 54, 66, 72, 73].\n\nPrompt injection is also used for extraction attacks that aim to infer some information from or about the model, for example, the system prompt [50, 54, 70], training data samples [46], or model parameters [18]. 
In indirect prompt injection attacks [33], the adversaries do not directly interact with the target LLM, and instead inject adversarial inputs into thirdparty data, which is then added to the LLM prompt (intentionally or unintentionally) by the victim application and/or its users. This relates to another category of attacks that target LLM-based applications, such as RAG systems, and invalidate their integrity by exploiting the weaknesses of the underlying LLM [19, 55].\n\nOur attacks also modify queries, but with a different aim than the above types of attacks: undermining the integrity of the control plane routing, rather than the LLM itself. Future work might investigate indirect control plane integrity attacks that, analogously to indirect prompt injection, serve to somehow trick users of a routing system into forming controlplane-confounding queries.\n\nAttacks against MoE. Mixture-of-Experts (MoE) architectures enable using multiple expert modules for processing a given query with a lower computational cost by including an inner routing mechanism that in every layer routes different tokens to a small number of experts [29, 30, 52, 56]. This can be thought of as an internal router within a single LLM, rather than an external control plane that orchestrates multiple LLMs. MoE has increased in popularity as it allows to build larger models at a fixed compute budget—not all parameters are used at the same time.\n\nHayes et al. [34] identified a vulnerability in MoE that can be exploited for a denial-of-service attack against MoE. Thus control plane integrity issues appear to extend to the context of single-LLM MoE systems, and future work could explore this connection further.\n\nYona et al. [67] presented a side-channel attack on MoE that enables an attacker to reveal other users' prompts. We expect that side-channel attacks against LLM control planes exist as well, for example, to infer which models are used via timing of responses. 
Such attacks, which target confidentiality, are outside the scope of control plane integrity.\n\n# 10 Conclusion\n\nLLM routers balance quality and cost of LLM inference by routing different queries to different LLMs. They are an example of a broader, emerging class of systems we call \"LLM control planes\" that aim to achieve various quality, efficiency, and cost objectives by orchestrating use of multiple LLMs to respond to a query.", - "page_start": 16, - "page_end": 16, - "source_file": "arxiv1.pdf" - }, - { - "text": "#### NAVWEPS 00-BOT-BO STABILITY AND CONTROL\n\ntype of flight control system is decided by the size and flight speed range of the airplane.\n\nThe conventional control system consists of direct mechanical linkages from the controls to the control surfaces. For the subsonic airplane, the principal means of producing proper control forces utilize aerodynamic balance and various tab, spring, and bobweight devices. Balance and tab devices are capable of reducing control forces and will allow the use of the conventional control system on large airplanes to relatively high subsonic speeds.\n\nWhen the airplane with a conventional control system is operated at transonic speeds, the great changes in the character of flow can produce great aberrations in control surface hinge moments and the contribution of tab devices. Shock wave formation and separation of flow at transonic speeds will limit the use of the conventional control system to subsonic speeds.\n\nThe power-boosted control system employs a 'mechanical actuator in parallel with the mechanical linkages of a conventional control system. The principle of operation is to provide a fixed, percentage of the required control forces thus reducing control forces at high speeds. The power-boosted control system requires a hydraulic actuator with a control valve which supplies boost force in fixed proportion to control force. 
Thus, the pilot is given an advantage by the boost ratio to assist in deflecting the control surface, e.g., with a boost ratio of 14, the actuator provides 14 lbs. of force for each 1 lb. of stick force.\n\nThe power-boosted control system has the obvious advantage of reducing control forces at high speeds. However, at transonic speeds, the changes in control forces due to shock waves and separation still take place but to a lesser degree. The \"feedback\" of hinge moments is reduced but the aberrations in stick forces may still exist.\n\nThe power-opsrdted, irreversible control system consists of mechanical actuators controlled by the pilot. The control surface is deflected\n\nby the actuator and none of the hinge moments are fed back through the controls. In such a control system, the control position decides the deflection of the control surfaces regardless of the airloads and hinge moments. Since the power-operated control system has zero feedback, control feel must be synthesized otherwise an infinite boost would exist.\n\nThe advantages of the power-operated CORtrol system are most apparent in transonic and supersonic flight. In transonic flight, none of the erratic hinge moments are fed back to the pilot. Thus, no unusual or erratic control forces,will be encountered in transonic flight. Supersonic flight generally requires the use of an all-movable horizontal surface to achieve the necessary control effectiveness. Such control surfaces must then be actuated and positively positioned by an irreversible device.\n\nThe most important item of an artificial feel system is the stick-centering spring or bungee. The bungee develops a stick force in proportion to stick displacement and thus provides feel for airspeed and maneuvers. 
A bobweight may be included in the feel system to develop a steady positive maneuvering stick force gradient which is independent of airspeed for ordinary maneuvers.\n\nThe gearing between the stick position and control surface deflection is not necessarily a linear relationship. The majority of powered control systems will employ a nonlinear gearing such that relatively greater stick deflection per surface deflection will occur at the neutral stick position. This sort of gearing is to advantage for airplanes which operate at flight conditions of high dynamic pressure. Since the airplane at high 4 is very sensitive to small deflections of the control surface, the nonlinear gearing provides higher stick force stability with less sensitive control movements than the 'system with a linear gearing. Figure 4.21 illustrates a typical linear and nonlinear control system gearing.\n\nThe second chart of figure 4.21 illustrates the typical control system stick force variation", - "page_start": 299, - "page_end": 299, - "source_file": "00-80T-80.pdf" - }, - { - "text": "The range sf the reciprocating powered airplane can be augmented by the use of ground effect. When the airplane is close to the ground or water surface the reduction of induced drag increases the maximum lift-drag ratio and causes a corresponding increase in range. Of course, the airplane must be quite close to the surface to obtain a noticeable increase in (L/D),., and range. The difficulty in holding the airplane at the precise altitude without contacting the ground or water will preclude the use of ground effect during ordinary flying operations. The use of ground effect to extend range should be reserved as a final measure in case of emergency. 
Because of the very detrimental effect of low altitude on the range of the turbojet, ground effect will not be of a particular advantage in an attempt to augment range.\n\nThe most outstanding examples of the use of ground effect are shown in the cases of multiengine airplanes with some engines inoperative. When the power loss is quite severe, the airplane may not be capable of sustaining altitude and will descend. As ground effect is encountered, the reduced power required may allow the airplane to sustain flight at extremely low altitude with the remaining powerplants functioning. In ground effect, the reciprocating powered airplane will encounter a greater (L/D),, which occurs at a lower airspeed and power required and the increase in range may be quite important during emergency conditions.\n\n#### INTERFERENCE BETWEEN AIRPLANES IN FLIGHT\n\nDuring formation flying and inflight refueling, airplanes in proximity to one another will produce a mutual interference of the flow patterns and alter the aerodynamic characteristics of each airplane. The principal effects of this interference must be appreciated since certain factors due to the mutual interference may enhance the possibility of a collision.\n\n#### NAVWEPS D&ROT-R0 APPLICATION OF AERODYNAMICS TO SPECIFIC 'PROBLEMS OF FLYING\n\nOne example of interference between airplanes in flight is shown first in figure 6.10 with the effect of lateral separation of two airplanes flying in line abreast. A plane of symmetry would exist halfway between two identical airplanes and would furnish a boundary of flow across which there would be no lateral components of flow. As the two airplane wing tips are in proximity, the effect is to reduce the strength of the tip or trailing vortices and reduce the induced velocities in the vicinity of wing tip. 
Thus, each airplane will experience a local increase in the lift distribution as the tip vortices are reduced and a rolling moment is developed which tends to roll each airplane away from the other. This disturbance may provide the possibility of collision if other airplanes are in the vicinity and there is delay in control correction or overcontrol. If the wing tips are displaced in a fore-and-aft direction, the same effect exists but generally it is of a lower magnitude.\n\nThe magnitude of the interference effect due to lateral separation of the wing tips depends on the proximity of the wing tips and the extent of induced flow. This implies that the interference will be greatest when the tips are very close and the airplanes are operating at high lift coefficients. An interesting ramification of this effect is that several airplanes in line abreast with the wing tips quite close will experience a reduction in induced drag.\n\nAn indirect form of interference can be encountered from the vortex system created by a preceding airplane along the intended flight path. The vortex sheet rolls up a considerable distance behind an airplane and creates considerable turbulence for any closely following airplane. This wake can prove troublesome if airplanes taking off and landing are not provided adequate separation. The rolled-up vortex sheet will be strongest when the preceding airplane is large, high gross weight, and operating at high lift coefficients. At times this turbulence may be falsely attributed to propwash or jetwash.", - "page_start": 400, - "page_end": 400, - "source_file": "00-80T-80.pdf" - }, - { - "text": "#### NAVWEPS 00-80T-80 APPLICATION OF AERODYNAMICS TO SPECIFIC PROBLEMS OF FLYING\n\ninvitation for trouble of many sorts. The normal and emergency procedures applicable to each specific airplane will insure the proper operation of the equipment.\n\n(3) Operating Limitations.
The operation of the airplane and powerplant must be conducted within the established limitations. Failure to do so will invite failure or malfunction of the equipment and increase the operating cost or possibly cause an accident.\n\n(4) Flight Characteristics. While all aircraft will have certain minimum requirements for flying qualities, the actual peculiarities and special features of specific airplanes will differ. These particular flight characteristics must be well known and understood by the pilot.\n\n(5) Operating Data. The performance of each specific airplane defines its application to various uses and missions. The handbook operating data must be available at all times to properly plan and execute the flight of an aircraft. Constant reference to the operating data will insure safe and effective operation of the airplane.\n\nGreat time and effort are expended in the preparation of the flight handbook to provide the most exact information, data, and procedures. Diligent study and continuous use of the flight handbook will ensure that the greatest effectiveness is achieved from the airplane while still operating within the inherent capabilities of the design.", - "page_start": 429, - "page_end": 429, - "source_file": "00-80T-80.pdf" - }, - { - "text": "Mach number. As a corollary of this increase in stability, there is a decrease in controllability and an increase in trim drag.\n\nThe static directional stability of an airplane decreases with Mach number in supersonic flight. The influence of the fuselage and the decrease in vertical tail lift curve slope bring about this condition.\n\nThe dynamic stability of the airplane generally deteriorates with Mach number in supersonic flight. Since a large part of the damping depends on the tail surfaces, the decrease in lift curve slope with Mach number will account in part for the decrease in damping.
Of course, all principal motions of the aircraft must have satisfactory damping and if the damping is not available aerodynamically it must be provided synthetically to obtain satisfactory flying qualities. For many high speed configurations the pitch and yaw dampers, flight stabilization systems, etc., are basic necessities rather than luxuries.\n\nGenerally, flight at high Mach number will take place at high altitude, hence the effect of high altitude must be separated for study. All of the basic aerodynamic damping is due to moments created by pitching, rolling, or yawing motion of the aircraft. These moments are derived from the changes in angles of attack on the tail surfaces with angular rotation (see fig. 4.15). The very high true airspeeds common to high altitude flight reduce the angle of attack changes and reduce the aerodynamic damping. In fact, the aerodynamic damping is proportional to √σ, similar to the proportion of equivalent airspeed to true airspeed. Thus, at the altitude of 40,000 ft., the aerodynamic damping would be reduced to one-half the sea level value and at the altitude of 100,000 ft. the aerodynamic damping would be reduced to one-tenth the sea level value.\n\nHigh dynamic pressures (high q) can be common to flight at high Mach number and adverse aeroelastic effects may be encountered. If the aircraft surfaces encounter significant\n\ndeflection when subject to load, the tendency may be to lower the contribution to static stability and reduce the damping contribution. Thus, the problem of adequate stability of the various airplane motions is aggravated.\n\n# PILOT INDUCED OSCILLATIONS\n\nThe pilot may purposely induce various motions to the airplane by the action of the controls. In addition, certain undesirable motions may occur due to inadvertent action on the controls. The most important condition exists with the short period longitudinal motion of the airplane where pilot-control system response lag can produce an unstable oscillation.
The coupling possible in the pilot-control system-airplane combination is most certainly capable of producing damaging flight loads and loss of control of the airplane.\n\nWhen the normal human response lag and control system lag are coupled with the airplane motion, inadvertent control reactions by the pilot may furnish a negative damping to the oscillatory motion and dynamic instability exists. Since the short period motion is of relatively high frequency, the amplitude of the pitching oscillation can reach dangerous proportions in an unbelievably short time. When the pilot induced oscillation is encountered, the most effective solution is an immediate release of the controls. Any attempt to forcibly damp the oscillation simply continues the excitation and amplifies the oscillation. Freeing the controls removes the unstable (but inadvertent) excitation and allows the airplane to recover by virtue of its inherent dynamic stability.\n\nThe pilot induced oscillation is most likely under certain conditions. Most obvious is the case of the pilot unfamiliar with the \"feel\" of the airplane and likely to overcontrol or have excessive response lag. High speed flight at low altitude (high q) is most likely to provide low stick-force gradients and periods", - "page_start": 331, - "page_end": 331, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv1.pdf", - "query": "What is a confounder gadget ?", - "target_page": 5, - "target_passage": " Given a query xi, we prepend a confounder gadget ci, which is a short sequence of adversarially chosen tokens.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Let B = {c˜0, . . . , c˜B}.\n\n(3) Find the candidate that maximizes the score:\n\n$c_{i}^{(t+1)}\\leftarrow\\arg\\max\\limits_{c\\in\\mathcal{B}}\\ S_{\\theta}(c\\|x_{i})$.\n\nThe final confounder ci^(T) is used with query xi.
We abort early if, after 25 iterations, there is no update to the confounder gadget. Technically, we could abort early if we find a confounder whose score exceeds τ. Running further can be useful when an adversary does not know τ.\n\nThe attack's runtime is dominated by T · B times the cost of executing S. In practice, scoring functions S are designed to be fast (otherwise routers would significantly increase the latency of applications that use them). We report precise timings later; in summary, the attack is fast because we can set T to be relatively small and still find high-scoring confounders.\n\nDue to the randomness in index and token selection, the method converges to different, yet similarly effective, confounder gadgets on each run. Our evaluation will thus measure average performance over multiple gadgets.\n\nQuery-independent confounders. One downside of the per-query approach is that the adversary must repeat, for each query, the search for a good confounder. In practice, the adversary might prefer a *query-independent* attack. Our confounder gadget approach extends to this setting readily: perform the search routine above for an empty query. In other words, just ignore xi in the query-dependent attack above, replacing Sθ(c∥xi) in Eq. 1 with Sθ(c). This finds a single query-independent confounder c that can be prefixed to all queries, i.e., xˆi = c∥xi. We will show that this works surprisingly well.\n\nIt is tempting to assume the reason a query-independent confounder works well is that a good scoring function should be roughly monotonic in query extensions, i.e., one might expect that Sθ(c∥x) ≥ Sθ(c) for almost any suffix x. This intuition is not correct. In our experiments, we found that Sθ(c∥x) < Sθ(c) for many x and some of the routers discussed below. Nevertheless, by ensuring that Sθ(c) is pretty high (set the number of iterations T higher), the resulting query-independent confounder works well.
That is, we at least get that Sθ(c∥x) > Sθ(x).\n\nThe black-box setting: confounders that transfer. Finally, the attacks so far are in the white-box setting, where the attacker can optimize directly against Sθ. While in some cases routing control planes will be public knowledge, in others, including the proprietary control planes we explore in Section 7, they are hidden. This gives rise to the black-box setting. While an attacker might seek to perform model extraction attacks [43, 65] to learn θ, we instead explore attacks that transfer from one router to another.\n\nIn more detail, we assume the adversary has access to a router R′ ω′ , called the *surrogate*, that is trained on data similar to that used for the target router. Then the attack is the same as above, except that we use the surrogate's scoring function S ′ θ ′ instead of the target's Sθ. Again, we will see that this works surprisingly well: the query-independent confounders found for the surrogate transfer to successfully reroute queries against the target router.\n\nPutting it all together. In summary, our methodology for input adaptation attacks is:\n\n- (1) (Preprocessing) Develop a single query-independent confounder gadget c, using either the target router or surrogate to score the confounder.\n- (2) (Input adaptation) For each query xi , submit xˆi = c∥xi instead to obtain a response yˆi .\n\nThe confounder is applied to all queries, i.e., the adversary does not need to guess whether the original query would have been routed to the weak or strong model. In the rest of the paper, we demonstrate the confounders rarely result in \"downgrades,\" i.e., rerouting of queries from the strong to weak model.\n\nWe have experimented with variations of this approach that don't work quite as well, for example adding c as a suffix instead of a prefix. 
See Appendix B for details.\n\n### 5 Open-Source Routers: Experimental Setup\n\nTo evaluate efficacy of confounder gadgets generated using the method from Section 4, we perform experiments with several LLM routers. This section explains our experimental setup for the open-source routers proposed in the research literature [47]; results of this evaluation appear in Section 6. In Section 7, we discuss experiments with proprietary, commercial routers. Figure 3 shows the summary of our experimental setup.", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv1.pdf" - }, - { - "text": "We introduced and defined a new safety property, *LLM control plane integrity*. Informally, this property holds if an adversarial user cannot influence routing decisions made by the control plane. To show that existing LLM routers do not satisfy this property, we designed, implemented, and evaluated a black-box optimization method for generating queryindependent \"confounder gadgets.\" When added to any query, the confounder gadget confuses the router into routing the query to the adversary-chosen LLM.\n\nWe evaluated the efficacy of confounder gadgets on multiple open-source and commercial routers and demonstrated that they successfully reroute queries without a negative impact on the quality of responses. We also discussed defenses against these attacks and indicated directions for future research.\n\n# Acknowledgments\n\nThis research was supported in part by the Google Cyber NYC Institutional Research Program, the Israel Science Foundation (Grant No. 1336/22), and the European Union (ERC, FTRC, 101043243). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.", - "page_start": 17, - "page_end": 17, - "source_file": "arxiv1.pdf" - }, - { - "text": "and view content on demand. 
They can search content and control their PVR remotely from their smartphone. They can stream programming to their tablet anywhere in their home. A single Rogers Nextbox serves as a master PVR for the entire home enabling simultaneous viewing and recording of up to eight separate shows and storage of over 250 hours of high-definition programming. And customers can access television and movie content on-demand from anywhere by laptop, tablet or smartphone using the Rogers Anyplace TV app.\n\nTelevision has never been this good, this easy, or this simple to control. And it's even better when combined with innovative Rogers features, such as the ability to screen phone calls on their TV, listen to voicemail on their tablet, or receive talking text messages on their home phone. Wireless customers can also use Rogers One Number to switch calls\n\namong their computer, home phone and wireless device without interruption; manage e-mails; text messages and voicemail; hold live video chats; and combine and sync contacts from across multiple devices.\n\nWhen they're not at home, more and more customers also rely on Rogers Smart Home Monitoring, a complete monitoring, automation and security solution that includes the most innovative technology and features available. Smart Home Monitoring lets customers monitor, control and receive alerts by smartphone or online, staying connected to their home from almost anywhere, and enjoying the peace of mind that comes with having the most reliable monitoring solution available. Smart Home Monitoring also gives customers the ability to automate lights, appliances, thermostats and more, so they know their homes are not only secure but more energy-efficient and convenient, also.", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "- [71] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. 
Xing *et al.*, \"Judging LLMas-a-judge with MT-Bench and chatbot arena,\" *Advances in Neural Information Processing Systems (NeurIPS)*, 2023.\n- [72] S. Zhu, R. Zhang, B. An, G. Wu, J. Barrow, Z. Wang, F. Huang, A. Nenkova, and T. Sun, \"AutoDAN: Automatic and interpretable adversarial attacks on large language models,\" *arXiv preprint arXiv:2310.15140*, 2023.\n- [73] A. Zou, Z. Wang, J. Z. Kolter, and M. Fredrikson, \"Universal and transferable adversarial attacks on aligned language models,\" *arXiv preprint arXiv:2307.15043*, 2023.\n\n### A Gadget Examples\n\nBelow are a few examples of the optimized confounder gadgets c we got, for the different routing algorithms:\n\n• RSW :\n\nlegationbelongs967reglo'hui (DictionaryizedNameantal bidi.numberOf\n\ndemanding RESULTS! XVI-ioibili Written Hormchriftlatable\n\n• RMF :\n\n041 personne.nama tomeTN Exact relac¸ao tinha seis.Description ˜\n\nNUMAGEMENTMedian157 mut-camerauacao nije Hir Ter\n\n• RCLS:\n\ncontiene DEM USERNAME fourteen chees781 consultants200 inici DOJ\n\n571:\n\nOrd:nth Norwegian Mercer docs Abr226 METADATA\n\n• RLLM:\n\ndated:frameifyumi345 Kurdasciiuzeiphertext\n\nMidnightexecution431!784 below1 unwrap : / n / n\n\n# B Ablation Study\n\nIn this section, we evaluate the effect of different hyperparameters and design choices (in the white-box setting).\n\nPrefix vs. suffix. As described in Section 4, we prepend the confounder gadget to the query. An alternative is to append it. This is straightforward for MT-bench and GSM8K, but MMLU consists of multi-choice questions followed by a list of possible answers, and the term \"Answer:\". We insert the gadget at the end of the question text and before the possible answers. If we append it at the very end, after \"Answer:\", the LLM assumes the query was answered and in many cases does not generate any output at all.\n\nTable 12 shows that average upgrade rates are similar regardless of whether the gadget was inserted as a prefix or a suffix. 
For MMLU, prefix works better. The downgrade rate is 0% in all cases.", - "page_start": 21, - "page_end": 21, - "source_file": "arxiv1.pdf" - }, - { - "text": "#### **B**.**Bind to the APP**\n\n#### **1. APP download method**\n\n1.1 Scan the QR code to download\n\n1.2 Search the application at App market and download\n\nFor Android users:\n\nSearch for \"WearPro\" in the Google Play app store or any customized Android store to download, remember to check the pop-up box on your phone when installing, and agree to the permission. For iOS users:\n\nSearch for \"WearPro\" in the APP Store to download, remember to check the pop-up box on your phone when installing, and agree to the permission.\n\nAfter WearPro is installed, the app icon appears as .\n\n#### 2.Bind Bluetooth\n\nAfter the watch is turned on, the Bluetooth will be in the state of being searched. After open the APK/APP, go to Devices > Add Device > click to start searching, select and click the corresponding watch device name, and the watch will be successfully bound to the app.\n\n#### 2.2 Connected to the APP state:\n\nWatch time synchronization: the time shown at the smartwatch and your mobile phone will synchronized after the smartwatch is bound to the APP successfully.\n\n2.3 Binding the audio/calling Bluetooth\n\nWhen the smartwatch is in the dial interface, you can find the audio/calling Bluetooth icon, and click it to turn it on, then go to the Bluetooth settings of your mobile phone and click the name of the audio/calling Bluetooth of the smartwatch to bind it.\n\n## **3. Find Watch**\n\nAfter the smartwatch is bound to the APP, you click \"Find Watch\" in the APP, the smartwatch will light up and vibrate for once.\n\n#### **4. Camera**", - "page_start": 5, - "page_end": 5, - "source_file": "6126797.pdf" - }, - { - "text": "# **4. When the user needs the device repaired, please take the device to our company or our company's dealership.**\n\n#### **5. 
All functions of the device please refer to the actual product.**\n\n**Purchase date: IMEI code: Where to buy: Customer Signature: Signature of Store Clerk: Stamp of Store:**\n\n#### **FCC Caution:**\n\nThis device complies with part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation.\n\nAny changes or modifications not expressly approved by the party responsible for compliance could void the user's authority to operate the equipment.\n\nNOTE: This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation.\n\nIf this equipment does cause harmful interference to radio or television reception,\n\nwhich can be determined by turning the equipment off and on, the user is encouraged to try to correct the interference by one or more of the following measures:\n\n-- Reorient or relocate the receiving antenna.\n\n-- Increase the separation between the equipment and receiver.\n\n-- Connect the equipment into an outlet on a circuit different\n\nfrom that to which the receiver is connected.\n\n-- Consult the dealer or an experienced radio/TV technician for help.\n\nThe device has been evaluated to meet general RF exposure requirement. 
The device can be used in portable exposure condition without restriction.\n\n#### **FCC ID:2A54U-DT3MATE**", - "page_start": 9, - "page_end": 9, - "source_file": "6126797.pdf" - }, - { - "text": "Our new wireless Share Everything plans were Canada's first to let individuals, families and small businesses share wireless data and unlimited nationwide talk and text, with up to 10 wireless devices. Rogers recently further enhanced its exciting One Number service by introducing smartphone apps which enable customers to use mobile data or Wi-Fi to talk, text and video chat using their existing Rogers wireless number from any device.\n\nWe also keep customers informed and entertained with Rogers nextgeneration NextBox 3.0 TV experience which allows customers to view and record up to eight HD programs simultaneously, store hundreds of hours of content and enjoy whole-home PVR capability. And with Rogers Anyplace TV, it's also a wireless experience where viewers can navigate their cable guide, use a virtual remote, set PVR recordings and stream live or on-demand content from a tablet, smartphone, laptop or gaming console.\n\nRogers continues to be Canada's innovation leader in rapidly growing areas such as wireless machine-to-machine communications, remote home monitoring and automation, mobile payments, in-car infotainment and telematics, and digital media. As well, Rogers has deployed a suite of unique local digital services that create virtual marketplaces for bringing consumers and businesses together and provide location-based targeted offers.\n\nThese are just a few examples of the ways Rogers continues to innovate and lead the way, introducing wireless, broadband and digital technologies and services that fundamentally change the way customers stay connected, informed and entertained anywhere they are. 
Canadians know there's one thing to be certain of – if they're with Rogers, they'll never miss a thing.", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "**Note:** Comprestimator can run for a long period (a few hours) when it is scanning a relatively empty device. The utility randomly selects and reads 256 KB samples from the device. If the sample is empty (that is, full of null values), it is skipped. A minimum number of samples with data is required to provide an accurate estimation. When a device is mostly empty, many random samples are empty. As a result, the utility runs for a longer time as it tries to gather enough non-empty samples that are required for an accurate estimate. The scan is stopped if the number of empty samples is over 95%.\n\n# **10.6.2 Evaluating compression and deduplication**\n\nTo help with the profiling and analysis of user workloads that must be migrated to the new system, IBM provides a highly accurate data reduction estimation tool that supports both deduplication and compression. The tool operates by scanning target workloads on any legacy array (from IBM or third party) and then merging all scan results to provide an integrated system level data reduction estimate.\n\nThe Data Reduction Estimator Tool (DRET) utility uses advanced mathematical and statistical algorithms to perform an analysis with low memory footprint. The utility runs on a host that can access the devices to be analyzed. It performs only read operations so it has no effect on the data stored on the device.\n\nThe following sections provide information about installing DRET on a host and using it to analyze devices on it. 
Depending on the environment configuration, in many cases DRET is used on more than one host to analyze more data types.\n\nWhen DRET is used to analyze a block device that is used by a file system, all underlying data in the device is analyzed, regardless of whether this data belongs to files that were deleted from the file system. For example, you can fill a 100 GB file system and make it 100% used, and then, delete all the files in the file system to make it 0% used. When scanning the block device that is used for storing the file system in this example, the DRET accesses the data that belongs to the files that are deleted.\n\n**Important:** The preferred method of using DRET is to analyze volumes that contain as much active data as possible rather than volumes that are mostly empty of data. This increases the accuracy level and reduces the risk of analyzing old data that is deleted, but might still have traces on the device.\n\nFor more information and the latest version of this utility, see this IBM Support web page.\n\n# **10.7 Data deduplication and compression on external storage**\n\nStarting from IBM Spectrum Virtualize V8.1.x, it supports over-provisioning on selected back-end controllers. This means that if back-end storage performs data deduplication or data compression on LUs provisioned from it, they still can be used as external MDisks on IBM Storwize V7000.\n\nThin-provisioned MDisks from controllers that are supported by this feature can be used as managed mode MDisks in IBM Storwize V7000 and added to storage pools (including DRPs).\n\nImplementation steps for thin-provisioned MDisks are same as for fully allocated storage controllers. 
Extra caution is used when planning capacity for such configurations.", - "page_start": 452, - "page_end": 452, - "source_file": "sg247938.pdf" - }, - { - "text": "With Canada's first and fastest LTE wireless network – the global gold standard in wireless network technology – Rogers makes \"placeshifting\" a reality so customers can connect to their communications, information and entertainment from almost anywhere, easily and seamlessly. With Rogers, watching TV on the train, conducting a virtual white-boarding session from the beach, disarming a home monitoring system from a smartphone, or answering a home phone from 5,000 kilometers away are becoming everyday activities. Rogers customers no longer have to pick up the phone to check their voicemail; they don't need to be in town to catch their local news; and they don't have to be at their PCs to access their e-mail. And with Rogers, businesses no longer need to work in traditional offices because we help them to quickly set up virtual workspaces, with complete access to customers, colleagues, files and corporate applications, so they are as productive on the road as they are in the office.\n\nAnd now, small businesses as well as households can enjoy the flexibility and value of Rogers new Wireless Home and Small Business Phone products as well.\n\nCustomers know that Rogers makes it easy and seamless to connect with the same personalized information, communications and entertainment experiences no matter where they are – at work, at school, at home or away, including when travelling to more than 200 countries around the world. 
And they know that only Rogers is there first with innovative new services, such as mobile TV, remote home monitoring, and Rogers One Number, which allows them to switch calls between their wireless device, computer, and home phone without interruption; manage e-mails, text messages and voicemail; hold live video chats; and combine and sync contacts from across multiple devices – no matter where they are.", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "# **18.4.2 MidServer trace (z/OS only)**\n\nTo collect data that processed by the MidServer, complete the following steps:\n\n- 1. Locate the MidServer arsMSVR.cfg configuration file. The file is in the following directory: /MountPoint/config/midserver\n- 2. Turn on MidServer tracing by setting MIDSERVERTRACE=1.\n\nTo collect the input data that is returned to the application, set traceLevel to 2 before you issue the logon function request. This traceLevel indicates that a full trace by the C stub is requested.\n\n- 3. Run the application to re-create the problem.\n- 4. Send the following files to IBM Support for further analysis. All of these files that are sent must be from the same test.\n\n| File name | Location |\n| --- | --- |\n| arswww.ini | /MountPoint/config/midserver |\n| arswww.trace | As specified in the arswww.ini TraceDir= file |\n| arsMSVR.err | STDERR path in the MidServer start procedure |\n| arsMSVR.out | STDOUT path in the MidServer start procedure |\n| MVS job output | SDSF arsMVSR job |\n\n# **18.4.3 ODWEK trace**\n\nSometimes, it is necessary to collect data for IBM Content Manager OnDemand Web Enablement Kit (ODWEK). Gather this information to assist with the troubleshooting process before you contact IBM Support.\n\nRemember that IBM Support cannot debug custom application code. The purpose of this information is to collect diagnostic test results to help identify a possible problem in ODWEK and provide additional documentation to IBM Support. 
This information is in IBM Technote 1240220:\n\nhttp://www.ibm.com/support/docview.wss?uid=swg21240220\n\nBy using the ODConfig class, you can set the trace as shown in Example 18-4.\n\nExample 18-4 Setting up trace in the ODConfig class\n\n| OConfig cfg = new ODConfig( | |\n| --- | --- |\n| /*AfpViewer*/ | ODConstant.PLUGIN, |\n| /*LineViewer*/ | ODConstant.APPLET, |\n| /*MetaViewer default*/ null, | |\n| /*MaxHits*/ | 500, |\n| /*AppletDir*/ | \"/applets\", |\n| /*Language*/ | \"ENU\", |\n| /*TempDir*/ | \"c:\\\\temp\", |\n| /*TraceDir*/ | \"c:\\\\path\\\\to\\\\trace\", |\n| /*TraceLevel*/ | 4); |\n\nYour ODWEK application must be recompiled and restarted for the changes to take effect.\n\nAfter you enable tracing, re-create the issue and send all arswww.trace* files to IBM Support.", - "page_start": 426, - "page_end": 426, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2670.pdf", - "query": "What is called bad-cavity Ramsey laser ?", - "target_page": 1, - "target_passage": "We considerthe case of a two-level atomic beam interacting with a single-mode Ramsey cavity of separated-oscillating-field resonators with the cavity mode linewidth is much wider than the atomic gain linewidth. Thus we call it bad-cavity Ramsey laser. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": ".\n\n## **The Linewidth of Ramsey Laser with Bad Cavity**\n\nYang Li, Wei Zhuang, Jinbiao Chen,∗ and Hong Guo†\n\n*CREAM Group, State Key Laboratory of Advanced Optical Communication*\n\n*Systems and Networks (Peking University) and Institute of Quantum Electronics,*\n\n*School of Electronics Engineering and Computer Science,*\n\n*and Center for Computational Science and Engineering (CCSE), Peking University, Beijing 100871, P. R. China*\n\n(Dated: October 29, 2018)\n\nWe investigate a new laser scheme by using Ramsey separated-field technique with bad cavity. 
By studying the linewidth of the stimulated-emission spectrum of this kind of laser inside the cavity, we find its linewidth is more than two orders of magnitude narrower than the atomic natural linewidth, and it is far superior to that of the conventional optical Ramsey method and any other available subnatural linewidth spectroscopy at present. Since any cavity-related noise is reduced to the cavity-pulling effect in a bad-cavity laser, this Ramsey laser provides the possibility of precision subnatural linewidth spectroscopy, which is critical for the next generation of optical clocks and atom interferometers.\n\nPACS numbers: 42.55.Ah, 42.50.Ar, 42.60.Da, 32.30.-r\n\n*Introduction:* Since the invention of the separated-field technique [1], it has played an important role in the field of precision spectroscopy due to its linewidth narrowing effect via multiple coherent interaction. Atomic clocks based on this technique have greatly extended our ability for frequency measurement; further, almost all the atom interferometers are based on this technique [2].\n\nThough the natural linewidth of a quantum transition was regarded as the ultimate limit to high-resolution laser spectroscopy [4], several methods of subnatural linewidth spectroscopy have been proposed to gain subnatural linewidths [3–10]. However, in all these efforts, including optical Ramsey spectroscopy, a subnatural line is realized at the expense of a quick reduction in signal-to-noise ratio (SNR) due to the exponential decay of the signal, thus all these schemes can only achieve a linewidth several times narrower than the atomic natural linewidth. In the past three decades, this situation has not changed in the field of precision laser spectroscopy. On the other hand, the thermal noise of the cavity mirrors is the main obstacle for further linewidth reduction of a laser [11, 12], and it is a challenge to substantially reduce this noise further [13].
Recently, a new scheme, called active optical clock [14–18], was proposed to substantially reduce the laser linewidth. With lattice trapped atoms, it is possible to reach mHz linewidth laser based on the mechanism of active optical clock [14, 15, 19]. The principal mechanism of active optical clock is to directly extract light emitted from the ultranarrow atomic transition with a cavity mode linewidth much wider than that of lasing. This bad cavity ensures that any frequency shift due to cavity noise reduces to cavity-pulling effect [15– 17], then the thermal noise is not the major obstacle again for reducing the linewidth. This means the bad cavity can play an indispensable role in new subnatural linewidth spectroscopy.\n\nIn this Letter, we propose a new scheme called Ramsey laser with bad cavity. Distinct from any previous applications of conventional Ramsey separated oscillating fields method [1], which focuses on the absorption spectrum, we here focus on the stimulated emission spectrum via multiple coherent interactions inside the cavity. We find this Ramsey laser can provide a stimulated-emission spectrum with a linewidth much narrower than that of any conventional optical Ramsey seperated-field spectroscopy, which is commonly applied in optical atomic clock. Our results also show that a subnatural linewidth spectroscopy, superior to any other available subnatural spectroscopy technique at present [3–10], can be reached by this kind of laser, if a suitable atomic level structure is chosen. Thus, this method can provide an effective subnatural spectroscopy, and the possibilities for the new optical clock scheme [15] and atom interferometers [2].\n\n*Theoretical framework:* We consider the case of a two-level atomic beam interacting with a single-mode Ramsey cavity of separated-oscillating-field resonators with the cavity mode linewidth is much wider than the atomic gain linewidth. Thus we call it bad-cavity Ramsey laser. 
All atoms are pumped onto the upper lasing state **a** before entering the first cavity of seperated field, and the lower lasing state is **b**. We assume all the atoms have the same velocities υ, that means what we consider here is a homogeneous laser system. And for the sake of simplicity, we consider the two-standing waves linear optical Ramsey configuration with a grid as spatial selector [20, 21]. Our treatment can be extended to other configurations as in [22–24]. The length of each oscillating part is *l*, and the length of the free drift region is *L*. The corresponding Hamiltonian is\n\n$$H=\\hbar\\omega\\hat{a}^{\\dagger}\\hat{a}+\\hbar\\sum_{j}\\left[\\omega_{a}^{j}(t)\\sigma_{a}^{j}+\\omega_{b}^{j}(t)\\sigma_{b}^{j}\\right]\\tag{1}$$\n \n$$+\\hbar\\mathrm{g}\\sum_{j}\\Gamma_{j}(t)(\\hat{a}^{\\dagger}\\hat{\\sigma}_{-}^{j}e^{-i\\vec{k}\\cdot\\vec{r}_{j}}+\\hat{\\sigma}_{+}^{j}\\hat{a}e^{i\\vec{k}\\cdot\\vec{r}_{j}}),$$\n\nwhere ˆ*a*, ˆ*a* † are the annihilation and creation operators of the field mode inside the cavity, with the frequency ω, σ *j a* = (|*a*i h*a*|) *j* and σ *j b* = (|*b*i h*b*|) *j* are the projection operators for the jth atom corresponding to the upper and lower lasing levels,", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2670.pdf" - }, - { - "text": "*Conclusion:* In summary, we propose a new subnatural linewidth spectroscopy technique, which is a laser by using Ramsey seperated-field cavity to realize the output of stimulated-emission radiation via multiple coherent interaction with atomic beam. We find the linewidth of Ramsey laser is subnatural if we choose an appropriate atomic level, and the bad-cavity laser mechanism will dramatically reduce cavityrelated noise as discussed in active optical clock [15–19]. 
Our results show that this new subnatural linewidth spectroscopy is superior to conventional optical Ramsey seperated-field spectroscopy and any other available subnatural spectroscopy technique at present [3–10]. Considering one have to apply the separated-field method in any phase detection as in Ramsey-Bord*e*´interferometer [2], to investigate the effects of phase differences between the two oscillating fields [31] in this stimulated separated-field method with such subnatural linewidth will be our next research aim.\n\nWe acknowledge Yiqiu Wang and Deshui Yu for fruitful discussions. This work is supported by MOST of China (grant 2005CB724500, National Natural Science Foundation of China (grant 60837004, 10874009), National Hi-Tech Research and Development (863) Program.\n\n- ∗ E-mail: jbchen@pku.edu.cn\n- † E-mail: hongguo@pku.edu.cn.\n- [1] N. F. Ramsey, Phys. Rev. **76**, 996 (1949).\n- [2] B. Dubetsky and P. R. Berman, In *Atom Interferometry*, edited by P. R. Berman (Academic Press, Cambridge, MA, 1997).\n- [3] M. M. Salour, Rev. Mod. Phys. **50**, 667 (1978).\n- [4] J. Wong and J. C. Garrison, Phys. Rev. Lett. **44**, 1254 (1980).\n- [5] P. L. Knight and P. E. Coleman, J. Phys. B: Atom. Molec. Phys. **13** 4345 (1980).\n- [6] H. -W. Lee, P. Meystre, and M. O. Scully, Phys. Rev. A **24**, 1914 (1981).\n- [7] F. Shimizu, K. Shimizu, and H. Takuma, Phys. Rev. A **28**, 2248 (1983).\n- [8] W. Gawlik, J. Kowalski, F. Tr¨ager, and M. Vollmer, Phys. Rev.\n\nLett. **48**, 871 (1982).\n\n- [9] H. J. Carmichael, R. J. Brecha, M. G. Raizen, H. J. Kimble, and P. R. Rice, Phys. Rev. A **40**, 5516 (1989).\n- [10] U. W. Rathe, M. O. Scully, Letters in Mathematical Physics **34**, 297 (1995)\n- [11] K. Numata, A. Kemery, J. Camp, Phys Rev Lett, **93**, 250602 (2004).\n- [12] A. D. Ludlow *et al.*, Opt. Lett. **32**, 641 (2007).\n- [13] H. J. Kimble, B. L. Lev, and J. Ye, Phys. Rev. Lett. **101**, 260602 (2008).\n- [14] J. 
Chen, and X.Chen, In *Proceedings of the 2005 IEEE International Frequency Control Symposium and Exposition*, (IEEE, 2005), p.608.\n- [15] J. Chen, e-print arXiv:0512096 quant-ph; Chinese Science Bulletin **54**, 348 (2009).\n- [16] D. Yu and J. Chen, Phys. Rev. A **78**, 013846 (2008).\n- [17] J. Chen, In *Frequency Standards and Metrology: Proceedings of the 7th Symposium*, edited by Maleki Lute (World Scientific Publishing Company, 2009).\n- [18] Y. Wang, Chinese Science Bulletin **54**, 347 (2009).\n- [19] D. Meiser, J. Ye, D. R. Carlson, and M. J. Holland, Phys. Rev. Lett. **102**, 163601 (2009)\n- [20] F. Strumia, Metrologia **8**, 85 (1972).\n- [21] G. Kramer, J. Opt. Soc. Am. **68**, 1634 (1978).\n- [22] V. S. Letokhov and B. D. Pavlik, Opt. Spectrosc. USSR **32**, 455 (1972).\n- [23] Ye. V. Baklanov, B. Ya, Dubetsky, V. P. Chebotayev, Appl. Phys. **9**, 171 (1976).\n- [24] J. C. Bergquist, S. A. Lee, and L. L. Hall, Phys. Rev. Lett. **38**, 159 (1977).\n- [25] L. Davidovich, Rev. Mod. Phys. **68**, 127 (1996).\n- [26] M. I. Kolobov, L. Davidovich, E. Giacobino, and C. Fabre, Phys. Rev. A **47**, 1431 (1993).\n- [27] M. Sargent III, M. O. Scully, and W. E. Lamb, *Laser Physics* (Addition Wesley, Reading, MA, 1974).\n- [28] N. A. Abraham, P. Mandel, and L. M. Narducci, *Dynamic Instabilities and Pulsations in Lasers*, Progress in Optics XXV, edited by E. Wolf (Elsevier, Amsterdam, 1988).\n- [29] L. Pasternack, D. M. Silver, D. R. Yarkony, and P. J. Dagdigian, J. Phys. B **13**, 2231 (1980).\n- [30] K. An and M. S. Feld, Phys. Rev. A **56**, 1662(1997).\n- [31] N. F. Ramsey and H. B. Silsbee, Phys. Rev. **84**, 506(1951).", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2670.pdf" - }, - { - "text": "- [65] J. P. Burelbach, S. G. Bankoff, and S. H. Davis, \"Nonlinear stability of evaporating/condensing liquid films,\" J. Fluid Mech. 195, 463–494 (1988).\n- [66] A. Oron and S. G. 
Bankoff, \"Dewetting of a heated surface by an evaporating liquid film under conjoining/disjoining pressures,\" J. Colloid Interface Sci. 218, 152–166 (1999).\n- [67] L. W. Schwartz, R. V. Roy, R. R. Eley, and S. Petrash, \"Dewetting patterns in a drying liquid film,\" J. Colloid Interface Sci. 214, 363–374 (2001).\n- [68] K. Kargupta, R. Konnur, and A. Sharma, \"Spontaneous dewetting and ordered patterns in evaporating thin liquid films on homogeneous and heterogeneous substrates,\" Langmuir 17, 1294–1305 (2001).\n- [69] M. Bestehorn and D. Merkt, \"Regular surface patterns on Rayleigh-Taylor unstable evaporating films heated from below,\" Phys. Rev. Lett. 97, 127802 (2006).\n- [70] G. F. Teletzke, H. T. Davis, and L. E. Scriven, \"Wetting hydrodynamics,\" Rev. Phys. Appl. 23, 989– 1007 (1988).\n- [71] J. N. Israelachvili, *Intermolecular and Surface Forces*, Academic Press, London (1992).\n- [72] V. S. Mitlin, \"Dewetting of solid surface: Analogy with spinodal decomposition,\" J. Colloid Interface Sci. 156, 491–497 (1993).\n- [73] L. M. Pismen and Y. Pomeau, \"Disjoining potential and spreading of thin liquid layers in the diffuse interface model coupled to hydrodynamics,\" Phys. Rev. E 62, 2480–2492 (2000).\n- [74] L. Onsager, \"Crystal statistics. I. A two-dimensional model with an order-disorder transition,\" Phys. Rev. 65, 117–149 (1944).\n- [75] G. Reiter, \"Unstable thin polymer films: Rupture and dewetting processes,\" Langmuir 9, 1344–1351 (1993).\n- [76] C. G. Sztrum, O. Hod, and E. Rabani, \"Self-assembly of nanoparticles in three-dimensions: Formation of stalagmites,\" J. Phys. Chem. B 109, 6741–6747 (2005).\n- [77] G. Yosef and E. Rabani, \"Self-assembly of nanoparticles into rings: A lattice-gas model,\" J. Phys. Chem. B 110, 20965–20972 (2006).\n- [78] J. F. Gouyet, M. Plapp, W. Dieterich, and P. Maass, \"Description of far-from-equilibrium processes by mean-field lattice gas models,\" Adv. Phys. 52, 523–638 (2003).\n- [79] U. M. B. Marconi and P. 
Tarazona, \"Dynamic density functional theory of fluids,\" J. Chem. Phys. 110, 8032–8044 (1999).\n- [80] U. M. B. Marconi and P. Tarazona, \"Dynamic density functional theory of fluids,\" J. Phys.-Condes. Matter 12, A413–A418 (2000).", - "page_start": 29, - "page_end": 29, - "source_file": "1001.2669.pdf" - }, - { - "text": "# arXiv:1001.0770v1 [astro-ph.HE] 5 Jan 2010\n\n# **VERITAS Observations of Blazars**\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E>100 GeV) γ-ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ-ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ-rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n### **1. Introduction**\n\nActive galactic nuclei are the most numerous class of identified VHE γ-ray sources. These objects emit non-thermal radiation across ∼20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ-ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. 
Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ-rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH (∼2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ-rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. 
Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## **2. VERITAS**\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ-rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. The performance metrics of VERITAS include an energy threshold of ∼100 GeV, an energy resolution of ∼15%, an angular resolution of ∼0.1◦ , and a sensitivity yielding a 5σ detection of a 1% Crab Nebula flux object in <30 hours1 . VERITAS has an active maintenance program (e.g. frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.\n\n1A VERITAS telescope was relocated during Summer 2009, increasing the array's sensitivity by a factor ∼1.3.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "- [5] F. Brochard-Wyart and J. Daillant, \"Drying of solids wetted by thin liquid films,\" Can. J. Phys. 68, 1084–1088 (1989).\n- [6] P. Muller-Buschbaum, \"Dewetting and pattern formation in thin polymer films as investigated in real ¨ and reciprocal space,\" J. Phys.-Condes. Matter 15, R1549–R1582 (2003).\n- [7] R. Seemann, S. Herminghaus, C. Neto, S. Schlagowski, D. Podzimek, R. Konrad, H. Mantz, and K. Jacobs, \"Dynamics and structure formation in thin polymer melt films,\" J. Phys.-Condes. Matter 17, S267–S290 (2005).\n- [8] U. Thiele, \"Structure formation in thin liquid films,\" in S. Kalliadasis and U. Thiele, editors, \"Thin films of Soft Matter,\" pages 25–93, Springer, Wien (2007).\n- [9] R. Xie, A. Karim, J. F. Douglas, C. C. 
Han, and R. A. Weiss, \"Spinodal dewetting of thin polymer films,\" Phys. Rev. Lett. 81, 1251–1254 (1998).\n- [10] R. Seemann, S. Herminghaus, and K. Jacobs, \"Dewetting patterns and molecular forces: A reconciliation,\" Phys. Rev. Lett. 86, 5534–5537 (2001).\n- [11] U. Thiele, M. G. Velarde, and K. Neuffer, \"Dewetting: Film rupture by nucleation in the spinodal regime,\" Phys. Rev. Lett. 87, 016104 (2001).\n- [12] M. Bestehorn and K. Neuffer, \"Surface patterns of laterally extended thin liquid films in three dimensions,\" Phys. Rev. Lett. 87, 046101 (2001).\n- [13] J. Becker, G. Grun, R. Seemann, H. Mantz, K. Jacobs, K. R. Mecke, and R. Blossey, \"Complex ¨ dewetting scenarios captured by thin-film models,\" Nat. Mater. 2, 59–63 (2003).\n- [14] C. Redon, F. Brochard-Wyart, and F. Rondelez, \"Dynamics of dewetting,\" Phys. Rev. Lett. 66, 715– 718 (1991).\n- [15] R. Seemann, S. Herminghaus, and K. Jacobs, \"Shape of a liquid front upon dewetting,\" Phys. Rev. Lett. 87, 196101 (2001).\n- [16] R. Fetzer, K. Jacobs, A. Munch, B. Wagner, and T. P. Witelski, \"New slip regimes and the shape of ¨ dewetting thin liquid films,\" Phys. Rev. Lett. 95, 127801 (2005).\n- [17] F. Brochard-Wyart and C. Redon, \"Dynamics of liquid rim instabilities,\" Langmuir 8, 2324–2329 (1992).\n- [18] G. Reiter and A. Sharma, \"Auto-optimization of dewetting rates by rim instabilities in slipping polymer films,\" Phys. Rev. Lett. 87, 166103 (2001).\n- [19] A. Munch and B. Wagner, \"Contact-line instability of dewetting thin films,\" Physica D ¨ 209, 178–190 (2005).", - "page_start": 25, - "page_end": 25, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [24] M. Strange, I. S. Kristensen, K. S. Thygesen, and K. W. Jacobsen, \"Benchmark density functional theory calculations for nanoscale conductance\", J. Chem. Phys. 128(11), 114714 (Mar. 2008), doi:10.1063/1.2839275.\n- [25] J. M. Soler, E. Artacho, J. D. Gale, A. Garcia, J. Junquera, P. Ordejon, and D. 
S ´ anchez-Portal, \"The SIESTA method for ´ *ab initio* order-n materials simulation\", J. Phys.: Condens. Matter 14(11), 2745 (Mar. 2002), doi:10.1088/0953-8984/14/11/302.\n- [26] J. S. Griffith, *The Theory of Transition-Metal Ions* (Cambridge University Press, London, 1961).\n- [27] P. Atkins and J. de Paula, *Physical Chemistry*, 8th ed. (Oxford University Press, London, 2006).\n- [28] D. Lide, *Handbook of Chemistry and Physics*, 87th ed. (CRC-Press, 2006–2007).\n- [29] T. Markussen, R. Rurali, A.-P. Jauho, and M. Brandbyge, \"Scal-\n\ning theory put into practice: First-principles modeling of transport in doped silicon wires\", Phys. Rev. Lett. 99(7), 076803 (Aug. 2007), doi:10.1103/PhysRevLett.99.076803.\n\n- [30] M. Ushiro, K. Uno, T. Fujikawa, Y. Sato, K. Tohji, F. Watari, W.-J. Chun, Y. Koike, and K. Asakura, \"X-ray absorption fine structure (XAFS) analyses of Ni species trapped in graphene sheet of carbon nanofibers\", Phys. Rev. B 73(14), 144103 (Apr. 2006), doi:10.1103/PhysRevB.73.144103.\n- [31] C. Gomez-Navarro, P. J. de Pablo, J. Gomez-Herrero, B. Biel, F. J. Garcia-Vidal, A. Rubio, and F. Flores, \"Tuning the conductance of single-walled carbon nanotubes by ion irradiation in the Anderson localization regime\", Nature Materials 4, 534 (Jun. 
2005), doi:10.1038/nmat1414.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.2538.pdf" - }, - { - "text": "```\n/* OUTPUT: */ \n/* */ \n/* 1) OnDemand will invoke the exit with action == 1 */ \n/* so that the exit can create the table space (tblsp_name) */ \n/* using (sql) */\n/* *created -> 0 exit did not create the table space, */\n/* OnDemand needs to create the table space */\n/* using (sql), which can be left unchanged */\n/* or modified by the exit */\n/* *created -> 1 exit created the table space */\n/* */\n/* 2) OnDemand will then invoke the exit with action == 2 */\n/* so that the exit can create the table (table_name) */\n/* inside of the table space (tblsp_name) using (sql) */\n/* *created -> 0 exit did not create the table, */\n/* OnDemand needs to create the table */\n/* using (sql), which can be left unchanged */\n/* or modified by the exit */\n/* *created -> 1 exit created the table */\n/* */\n/* 3) OnDemand will then invoke the exit with action == 3 */\n/* so that the exit can create the table indexes (idx_name) */\n/* inside of the table space (tblsp_name) for table */\n/* (table_name) using (sql). 
This will be invoked based */ \n/* on the number of indexes to create for the appl_grp */ \n/* *created -> 0 exit did not create the index, */ \n/* OnDemand needs to create the index */ \n/* using (sql), which can be left unchanged */ \n/* or modified by the exit */ \n/* *created -> 1 exit created the index */ \n/* */ \n/* 4) OnDemand will then invoke the exit with action == 4 */ \n/* so that the exit can perform any additional work */ \n/* *created -> Is not used */ \n/* sql -> If sql is not an empty string, OnDemand */ \n/* will issue (sql) to the database */ \n/* */ \n/* If ARS_DB_TABLESPACE_USEREXIT_EXTRA=1 is defined in */ \n/* ars.cfg, then the following actions will also be invoked */ \n/* when OnDemand needs to do further actions: */ \n/* */ \n/* 5) OnDemand will invoke the exit with action == 5 */ \n/* so that the exit can drop the table space (tblsp_name) */\n/* using (sql) */\n/* *created -> 0 exit did not drop the table space, */\n/* OnDemand needs to drop the table space */\n/* using (sql), which can be left unchanged */\n/* or modified by the exit */\n/* *created -> 1 exit dropped the table space */\n/* */\n/* 6) OnDemand will invoke the exit with action == 6 */\n/* so that the exit can drop the table (table_name) */\n/* using (sql) when OnDemand needs to drop a table */\n/* *created -> 0 exit did not drop the table, */\n/* OnDemand needs to drop the table */\n/* using (sql), which can be left unchanged */\n/* or modified by the exit */\n/* *created -> 1 exit dropped the table */\n/* */\n/* 7) OnDemand will invoke the exit with action == 7 */\n```", - "page_start": 296, - "page_end": 296, - "source_file": "sg246915.pdf" - }, - { - "text": "### **3. VERITAS Blazar KSP**\n\nVERITAS observes for ∼750 h and ∼250 h each year during periods of astronomical darkness and partial moonlight, respectively. 
The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- A VHE blazar discovery program (∼200 h / yr): Each year ∼10 targets are selected to receive ∼10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- A target-of-opportunity (ToO) observation program (∼50 h / yr): VERITAS blazar observations can be triggered by either a VERI-TAS blazar discovery, a VHE flaring alert (>2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- Multi-wavelength (MWL) studies of VHE blazars (∼50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n# **4. Blazar Discovery Program**\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ-rays. 
The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles (−8 ◦ < δ < 72◦ ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0.3. To further the study of the\n\nEBL a few objects having a large (z > 0.3) are also included in the target list. The target list includes:\n\n- All nearby (z < 0.3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- The X-ray brightest HBL (z < 0.3) in the recent Sedentary [8] and ROXA [9] surveys.\n- Four distant (z > 0.3) BL Lac objects recommended by [5, 10].\n- Several FSRQ recommended as potential VHE emitters in [6, 11].\n- All nearby (z < 0.3) blazars detected by EGRET [12].\n- All nearby (z < 0.3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- All sources (|b| > 10◦ ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ-ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERI-TAS blazar discovery program.\n\n### **5. VERITAS AGN Detections**\n\nVERITAS has detected VHE γ-ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n### **5.1. Recent VERITAS Blazar Discoveries**\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES 0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. 
Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHE emission from 3C 66A was discovered by VER-ITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (ΓVHE ∼ 4.1). RGB J0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "modes of neighboring tetrahedra. And these coupling constants λx,y,z need to be tuned to produce Jx,y,z of the Kitaev model. This is still not easy to implement in solid state systems. At lowest non-trivial order of perturbative expansion, we do get our model (9). Higher order terms in expansion destroy the exact solvability, but may be controlled by the small parameters λx,y,z/k.\n\n# B. Generate the High Order Terms by Magnetic Interactions between Clusters.\n\nIn this Subsection we consider more conventional perturbations, magnetic interactions between the clusters, e.g. the Heisenberg coupling Sj · Sk with j and k belong to different tetrahedra. This has the advantage over the previous phonon approach for not introducing additional degrees of freedom. But it also has a significant disadvantage: the perturbation does not commute with the cluster Heisenberg Hamiltonian (2), so the cluster singlet subspace will be mixed with other total spin states. In this Subsection we will use the spin-chirality representation (6) for τ z .\n\nAgain consider two clusters j and k. For simplicity of notations define a projection operator Pjk = PjPk, where Pj,k is projection into the singlet subspace of cluster j and k, respectively, Pj,k = P s=±1 |τ z j,k = sihτ z j,k = s|. 
For a given perturbation λ Hperturbation with small parameter λ (in factor λ/Jcluster is the expansion parameter), lowest two orders of the perturbation series are\n\n$$\\lambda\\,{\\cal P}_{jk}H_{\\rm perturbation}{\\cal P}_{jk}+\\lambda^{2}\\,{\\cal P}_{jk}H_{\\rm perturbation}(1-{\\cal P}_{jk})$$\n \n$$\\times[0-H_{\\rm cluster}\\ j-H_{\\rm cluster}\\ k]^{-1}(1-{\\cal P}_{jk})H_{\\rm perturbation}{\\cal P}_{jk}\\tag{15}$$\n\nWith proper choice of λ and Hperturbation we can generate\n\nthe desired Jx,y,z terms in (8) from the first and second order of perturbations.\n\nThe calculation can be dramatically simplified by the following fact that any physical spin-1/2 operator S x,y,z ℓ converts the cluster spin singlet states |τ z = ±1i into spin-1 states of the cluster. This can be checked by explicit calculations and will not be proved here. For all the perturbations to be considered later, the above mentioned fact can be exploited to replace the factor [0 − Hcluster j − Hcluster k] −1 in the second order perturbation to a c-number (−2Jcluster) −1 .\n\nThe detailed calculations are given in Appendix B. We will only list the results here.\n\nThe perturbation on x-links is given by\n\n$$\\begin{array}{c}{{\\lambda_{x}\\,H_{\\mathrm{perturbation,~}x}=\\lambda_{x}[\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{k1}+\\mathrm{sgn}(J_{x})\\cdot(\\mathbf{S}_{j2}\\cdot\\mathbf{S}_{k2})]}}\\\\ {{-\\,J_{x}(\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{j2}+\\mathbf{S}_{k1}\\cdot\\mathbf{S}_{k2}).}}\\end{array}$$\n\nwhere λx = p 12|Jx| · Jcluster, sgn(Jx) = ±1 is the sign of Jx.\n\nThe perturbation on y-links is\n\n$$\\begin{array}{r}{\\lambda_{y}\\,H_{\\mathrm{perturbation,}\\,y}}\\\\ {=\\lambda_{y}[\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{k1}+\\operatorname{sgn}(J_{y})\\cdot(\\mathbf{S}_{j3}-\\mathbf{S}_{j4})\\cdot(\\mathbf{S}_{k3}-\\mathbf{S}_{k4})]}\\\\ {\\quad-|J_{y}|(\\mathbf{S}_{j3}\\cdot\\mathbf{S}_{j4}+\\mathbf{S}_{k3}\\cdot\\mathbf{S}_{k4})}\\end{array}$$\n\nwith λy = p 4|Jy| · Jcluster. 
The perturbation on z-links is\n\n$\\lambda_{z}\\,H_{\\rm perturbation}$, $z$ \n \n$=\\lambda_{z}[{\\bf S}_{j2}\\cdot({\\bf S}_{k3}\\times{\\bf S}_{k4})+{\\rm sgn}(J_{z})\\cdot{\\bf S}_{k2}\\cdot({\\bf S}_{j3}\\times{\\bf S}_{j4})]$ \n \n$-|J_{z}|({\\bf S}_{j3}\\cdot{\\bf S}_{j4}+{\\bf S}_{k3}\\cdot{\\bf S}_{k4})$. \n \n\nwith λz = 4p |Jz| · Jcluster. The entire Hamiltonian Hmagnetic reads explicitly as,\n\nHmagnetic = X cluster j (Jcluster/2)(Sj1 + Sj2 + Sj3 + Sj4) 2 + X x−links p 12|Jx| · Jcluster- Sj1 · Sk1 + sgn(Jx) · (Sj2 · Sk2) − Jx(Sj1 · Sj2 + Sk1 · Sk2) + X y−links q 4|Jy| · Jcluster- Sj1 · (Sk3 − Sk4) + sgn(Jy)Sk1 · (Sj3 − Sj4) − |Jy|(Sj3 · Sj4 + Sk3 · Sk4) + X z−links 4 p |Jz| · Jcluster- Sj2 · (Sk3 × Sk4) + sgn(Jz)Sk2 · (Sj3 × Sj4) − |Jz|(Sj3 · Sj4 + Sk3 · Sk4) . (16)\n\nIn (16), we have been able to reduce the four spin interactions in (8) to inter-cluster Heisenberg interactions, and the six-spin interactions in (8) to inter-cluster spinchirality interactions. The inter-cluster Heisenberg couplings in Hperturbation x,y may be easier to arrange. The inter-cluster spin-chirality coupling in Hperturbation z explicitly breaks time reversal symmetry and is probably harder to implement in solid state systems. 
However spin-chirality order may have important consequences in frustrated magnets36,37, and a realization of spin", - "page_start": 6, - "page_end": 6, - "source_file": "1001.0266.pdf" - }, - { - "text": "this term becomes\n\n$$-\\frac{\\lambda^{2}}{6J_{\\rm cluster}}\\cdot(3/4)[3/16+(\\tau^{x}/2-1/4)^{2}]$$\n \n$$=-(\\lambda^{2})/(32J_{\\rm cluster})\\cdot(2-\\tau_{k}^{x}).$$\n\n]\n\nAnother second order perturbation term r 2λ 2 PjkSk2 · (Sj3 × Sj4)(1 − Pjk)[0 − Hcluster j − Hcluster k] −1 (1 − Pjk)Sk2 · (Sj3 × Sj4)Pjk can be computed in the similar way and gives the result −(r 2 λ 2 )/(32Jcluster) · (2 − τ x j ).\n\nFor one of the cross term\n\n$$\\begin{array}{l}{{r\\,\\lambda^{2}\\,{\\mathcal{P}}_{j k}{\\mathbf{S}}_{j2}\\cdot({\\mathbf{S}}_{k3}\\times{\\mathbf{S}}_{k4})(1-{\\mathcal{P}}_{j k})}}\\\\ {{\\times\\,{[0-H_{\\mathrm{cluster}~j}-H_{\\mathrm{cluster}~k}]}^{-1}}}\\\\ {{\\times\\,(1-{\\mathcal{P}}_{j k}){\\mathbf{S}}_{k2}\\cdot({\\mathbf{S}}_{j3}\\times{\\mathbf{S}}_{j4}){\\mathcal{P}}_{j k}}}\\end{array}$$\n\nWe can use the previous argument for both cluster j and k, so (1−PAB)[0−Hcluster j−Hcluster k] −1 (1−Pjk) can be replace by c-number (−2Jcluster) −1 . This term becomes\n\n$$-\\frac{r\\,\\lambda^{2}}{2J_{\\mathrm{cluster}}}{\\mathcal{P}}_{j k}[{\\bf S}_{j2}\\cdot({\\bf S}_{k3}\\times{\\bf S}_{k4})][{\\bf S}_{k2}\\cdot({\\bf S}_{j3}\\times{\\bf S}_{j3})]{\\mathcal{P}}_{j k}.$$\n\nSpin rotation symmetry again helps to separate the terms for cluster j and k, and we get −(r λ2 )/(32Jcluster)· τ z j τ z k .\n\nThe other cross term r λ2 PjkSk2 · (Sj3 × Sj4)(1 − Pjk)[0 − Hcluster j − Hcluster k] −1 (1 − Pjk)Sj2 · (Sk3 × Sk4)Pjk gives the same result.\n\nIn summary the second order perturbation from λ[Sj2 · (Sj3 × Sj4) + r Sk2 · (Sj3 × Sj4)] is\n\n$$-\\frac{r\\,\\lambda^{2}}{16J_{\\mathrm{cluster}}}\\cdot\\tau_{j}^{z}\\,\\tau_{k}^{z}+\\frac{\\lambda^{2}}{32J_{\\mathrm{cluster}}}(\\tau_{k}^{x}+r^{2}\\,\\tau_{j}^{x}-2r^{2}-2).$$\n\n- 1 Alexei Kitaev, Ann. 
Phys. (N.Y.) 321, 2 (2006).\n- 2 Xiao-Yong Feng, Guang-Ming Zhang, Tao Xiang, Phys. Rev. Lett. 98, 087204 (2007).\n- 3 Han-Dong Chen, Zohar Nussinov, J. Phys. A: Math. Theor. 41, 075001 (2008).\n- 4 Dung-Hai Lee, Guang-Ming Zhang, Tao Xiang, Phys. Rev. Lett. 99, 196805 (2007).\n- 5 Yue Yu, Nucl. Phys. B 799, 345 (2008).\n- 6 Yue Yu, Ziqiang Wang, Europhys. Lett. 84, 57002 (2008).\n- 7 G. Kells, J. K. Slingerland, J. Vala, Phys. Rev. B 80, 125415 (2009).\n- 8 Han-Dong Chen, B. Wang, S. Das Sarma, arXiv:0906.0017 (2009).\n- 9 K.P. Schmidt, S. Dusuel, and J. Vidal, Phys. Rev. Lett. 100, 057208 (2008); J. Vidal, K.P. Schmidt, and S. Dusuel, Phys. Rev. B 78, 245121 (2008); S. Dusuel, K.P. Schmidt, J. Vidal, and R.L. Zaffino, Phys. Rev. B 78, 125102 (2008).\n- 10 Hong Yao, Steven A. Kivelson, Phys. Rev. Lett. 99, 247203 (2007).\n- 11 S. Yang, D. L. Zhou, C. P. Sun, Phys. Rev. B 76, 180404(R) (2007).\n- 12 Hong Yao, Shou-Cheng Zhang, Steven A. Kivelson, Phys. Rev. Lett. 102, 217202 (2009).\n- 13 Zohar Nussinov, Gerardo Ortiz, Phys. Rev. B 79, 214440\n\nUsing this result we can choose the following perturbation on z-links,\n\n$$\\begin{array}{r}{{\\lambda_{z}\\,H_{\\mathrm{perturbation,}z}}}\\\\ {{=\\lambda_{z}[\\mathbf{S}_{j2}\\cdot(\\mathbf{S}_{k3}\\times\\mathbf{S}_{k4})+\\mathrm{sgn}(J_{z})\\cdot\\mathbf{S}_{k2}\\cdot(\\mathbf{S}_{j3}\\times\\mathbf{S}_{j4})]}}\\\\ {{\\quad-|J_{z}|(\\mathbf{S}_{j3}\\cdot\\mathbf{S}_{j4}+\\mathbf{S}_{k3}\\cdot\\mathbf{S}_{k4})}}\\end{array}$$\n\nwith λz = 4p |Jz|Jcluster, r = sgn(Jz) is the sign of Jz. The last term on the right-hand-side is to cancel the nontrivial terms (r 2 τ x j + τ x k )λ 2 z/(32Jcluster) from the second order perturbation of the first term. 
Up to second order perturbation this will produce −Jzτ z j τ z k interactions.\n\nFinally we have been able to reduce the high order interactions to at most three spin terms, the Hamiltonian Hmagnetic is\n\n$$\\begin{aligned} \nH_{\\text{magnetic}} = & \\sum_{j} H_{\\text{cluster}j} + \\sum_{x-\\text{links}} \\lambda_{x} H_{\\text{perturbation}x} \\\\ \n& + \\sum_{y-\\text{links}} \\lambda_{y} H_{\\text{perturbation}y} \\\\ \n& + \\sum_{z-\\text{links}} \\lambda_{z} H_{\\text{perturbation}z} \n\\end{aligned}$$\n\nwhere Hcluster j are given by (2), λx,y,z Hperturbation x,y,z are given above. Plug in relevant equations we get (16) in Subsection IV B.\n\n(2009).\n\n- 14 Congjun Wu, Daniel Arovas, Hsiang-Hsuan Hung, Phys. Rev. B 79, 134427 (2009).\n- 15 Shinsei Ryu, Phys. Rev. B 79, 075124 (2009).\n- 16 G. Baskaran, G. Santhosh, R. Shankar, arXiv:0908.1614 (2009).\n- 17 L.-M. Duan, E. Demler, M. D. Lukin, Phys. Rev. Lett. 91, 090402 (2003).\n- 18 A. Micheli, G. K. Brennen, P. Zoller, Nature Physics 2, 341 (2006).\n- 19 J. Q. You, Xiao-Feng Shi, Xuedong Hu, Franco Nori, Phys. Rev. B 81, 014505 (2010).\n- 20 G. Jackeli, G. Khaliullin, Phys. Rev. Lett. 102, 017205 (2009).\n- 21 A. B. Harris, A. J. Berlinsky, C. Bruder, J. Appl. Phys. 69, 5200 (1991).\n- 22 K. A. Chao, J. Spa lek, A. M. Ole´s, Phys. Rev. B 18, 3453 (1978).\n- 23 A. H. MacDonald, S. M. Girvin, D. Yoshioka, Phys. Rev. B 37, 9753 (1988).\n- 24 J. T. Chayes, L. Chayes, S. A. Kivelson, Commun. Math. Phys. 123, 53 (1989).\n- 25 C. D. Batista, S. A. Trugman, Phys. Rev. Lett. 
93, 217202 (2004).", - "page_start": 9, - "page_end": 9, - "source_file": "1001.0266.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2670.pdf", - "query": "How the steady-state solutions for the mean values of the field and atomic variables for laser operation are obtained ?", - "target_page": 2, - "target_passage": "The steady-state solutions for the mean values of the field and atomic variables for laser operation are obtained by dropping the noise terms of the c-number Langevin equations and setting the time derivatives equal to zero.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "$$\\tilde{N}_{b s s}=\\frac{R\\tau}{2}\\left[1-\\frac{C_{0}-C_{1}+C_{2}}{g\\tau}\\sqrt{\\frac{\\kappa}{R(B_{0}-B_{1}+B_{2})}}\\right]$$\n\n.\n\nA detailed analysis about the stability of the steady-state can be found such as in [28]. In this paper, we assume the steadystate solution is stable.\n\n*Laser linwidth:* Suppose the quantum fluctuation is small, the evolution of the fluctuations can be obtained by making a linearization of the c-number Langevin equations around the steady-state solution. Then the measured spectra of field fluctuations will be directly related to these quantities. By Fourier transformations of the linearized equation, we get the amplitude and phase quadrature components δ*X*(ω) and δ*Y*(ω) [26]. Well above threshold, one can neglect the amplitude fluctuations, and the linewidth inside the cavity is related to the phase-diffusion coefficient [25]. For small fluctuation of laser phase, the spectrum of phase fluctuations is simply related to the spectrum of the phase quadrature component of the field fluctuations, namely,\n\n$$(\\delta\\varphi^{2})_{\\omega}=\\frac{1}{I_{0}}(\\delta Y^{2})_{\\omega}.$$\n\nIn the region γ*ab* ≪ *T* −1 ≪ τ −1 ≪ κ/2, as in the recently proposed active optical clock [15] with atomic beam. 
The phase quadrature component of the field fluctuations can be expressed as\n\n$$(\\delta\\varphi^{2})_{\\omega}$$\n \n$$\\approx\\frac{(\\kappa/2+\\gamma_{ab})^{2}}{I_{0}\\omega^{2}[(\\kappa/2+\\gamma_{ab})^{2}+\\omega^{2}]}\\frac{g^{2}}{4(\\kappa/2+\\gamma_{ab})^{2}}\\{4\\gamma_{ab}\\hat{N}_{ass}$$\n \n$$+2R[(A_{0}+B_{0})+(A_{2}+B_{2})]$$\n \n$$+Rp[(C_{0}-C_{0}^{*})^{2}+(C_{1}-C_{1}^{*})^{2}+(C_{2}-C_{2}^{*})^{2}]\\}.\\tag{9}$$\n\nSince the time τ and *T* is much shorter than the time scale of the atomic dampings, we can neglect the dampings when calculate *Ai* , *Bi* , *Ci* . By using\n\n*A*0 = cos2 Ω*R* 2 τ ! , *A*1 = cos2 Ω*R* 2 τ ! , *A*2 = 1 − sin2 (Ω*R*τ) cos2 ∆2 2 *T* ! , *B*0 = sin2 Ω*R* 2 τ ! , *B*1 = sin2 Ω*R* 2 τ ! , *B*2 = sin2 (Ω*R*τ) cos2 ∆2*T* 2 ! , (*C*0 − *C* ∗ 0 ) 2 = 0, (*C*1 − *C* ∗ 1 ) 2 = − sin2 (Ω*R*τ)sin2 (∆2*T*), (*C*2 − *C* ∗ 2 ) 2 = − sin2 (Ω*R*τ)sin2 (∆2*T*),\n\nwe get\n\n$$(\\delta\\varphi^{2})_{\\omega}=\\frac{\\left(\\kappa/2+\\gamma_{ab}\\right)^{2}}{\\omega^{2}[(\\kappa/2+\\gamma_{ab})^{2}+\\omega^{2})]}\\frac{\\gamma_{ab}^{2}}{(\\kappa/2+\\gamma_{ab})^{2}}\\{D_{ST}\\tag{10}$$\n \n$$+\\ D_{Ram}[2-p\\sin^{2}(\\Omega_{R}\\tau)\\sin^{2}(\\Delta_{2}T)]\\},$$\n\nwhere Ω*R* is the Rabi frequency on resonance, *DS T*=*g* 2*N*˜ *ass*/*I*0γ*ab* , *DRam* = *g* 2*R*/2*I*0γ 2 *ab*, and ∆2 = ω − (ω*a*2 − ω*b*2) presents the detuning in the free drift region. *p* is a parameter, which characterizes the pumping statistics: a Poissonian excitation statistics corresponds to *p* = 0 , and for a regular statistics we have *p* = 1.\n\nThen the linewidth of Ramsey laser with bad cavity is given by\n\n$$D=\\frac{\\gamma_{ab}^{2}}{(\\kappa/2+\\gamma_{ab})^{2}}\\{D_{ST}+D_{Ram}[2-p\\sin^{2}(\\Omega_{R}\\tau)\\sin^{2}(\\Delta_{2}T)]\\}.\\tag{11}$$\n\nSince *DS T* /*DRam* ≪ 1 in our situation, and in the case of maximal photon number, the steady state value of *N*˜ *ass* is about *R*τ/2. 
Then we get the\n\n$$D\\approx\\frac{2g^{2}}{\\kappa}[2-p\\sin^{2}(\\Omega_{R}\\tau)\\sin^{2}(\\Delta_{2}T)].\\tag{12}$$\n\nFrom the expression above, we find that the pumping statistic can influence the linewidth. For regular injection (*p* = 1), the linewidth is the narrowest, while for Poissonian injection (*p* = 0), the linewidth is the broadest. But even for regular injection, the linewidth is larger than the case of one cavity. That means the mechanism of separated-field does not play the role in reducing the linewidth as in the conventional optical Ramsey method, which is counter-intuitive. However, the separated fields are indispensable for any phase detection like atom interferometry. The details about the method of active atom interferometry will appear elsewhere.\n\nOur method of Ramsey laser is suitable for any atoms with metastable energy level, as an example, we choose the transition from the metastable state 4*s*4*p* 3*P*1 to the ground state 4*s* 2 1*S* 0 of 40Ca to check the striking feature of this laser: subnatural linewidth. As mentioned in [29], the corresponding natural linewidth of the metastable state 4*s*4*p* 3*P*1 is 320Hz. As in the recently proposed active optical clock with atomic beam [15], the velocity of the atoms in thermal atomic beam is about 500m/s, and the length of the interaction region is about 1mm, then the time for the atom to traverse each coherentinteraction region is on the order of magnitude of 1 µs. If a bad cavity with κ is on the order of 107Hz, the relation κ/2 ≫ τ −1 is satisfied. Then when *g* is on the order of the magnitude of kHz, which can be easily achieved for current technique [30], from the linewidth expression of Eq.(16) the order of magnitude of linewidth is below 1 Hz. This means the linewidth of a Ramsey laser can be more than two orders of magnitude narrower than the atomic natural linewidth, therefore our Ramsey method provides a new subnatural spectroscopy technique. 
And since it is stimulated-emission spectrum, it overcomes the difficulty in other subnatural linewidth spectroscopy schemes where the quick reduction of signal to noise ratio is a formidable limit. We should point out that this Ramsey laser does not escape the limitation of all active optical clock: in order to pump atoms to the excited state effectively and to be stimulated emit photon during the lifetime of a metastable state, this new method will only be applicable to some special transitions [17].", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2670.pdf" - }, - { - "text": "dependence of different samples during the measurement stage. For each temperature we have usually performed three independent simulations, each one containing at least 2×105 measurements, taken after discarding up to 5×104 Monte Carlo steps in order to assure thermal equilibration.\n\nIn the proximity of the critical region the multiple histogram (MH) technique was also employed21, as it allows us to estimate the physical observables of interest over a whole temperature range in a substantially continuous way by interpolating results obtained from sets of simulations performed at some different temperatures.\n\nFor all the quantities of interest, the average value and the error estimate were obtained by the bootstrap resampling method22 given that, as pointed out in Ref. 23, for a large enough number of measurements, this method turns out to be more accurate than the usual blocking technique. In our implementation, we pick out randomly a sizable number of measurements (typically, between 1 and 1×103 for the single simulation, and between 1 and 5×104 for the MH technique), and iterate the re-sampling at least one hundred times.\n\nThe thermodynamic observables we have investigated include the FM order parameter for each plane l:\n\n$$m_{l}=\\sqrt{(m_{l}^{x})^{2}+(m_{l}^{y})^{2}}\\;\\;,\\qquad\\qquad(2)$$\n\nwhich is related to the SO(2) symmetry breaking. 
At the same time, it turns out to be significant also the average order parameter of the film, defined as\n\n$$M=\\frac{1}{n}\\sum_{l=1}^{n}m_{l}\\,.\\eqno(3)$$\n\nTurning to the helical order, which is the relevant quantity for the Z2 × SO(2) symmetry, we can explore it along two different directions. The first one is by the introduction of the chirality order parameter1,2\n\n$$\\kappa=\\frac{1}{4(n-1)L^{2}\\sin Q_{z}}\\sum_{\\langle ij\\rangle}\\left[S_{i}^{x}S_{j}^{y}-S_{i}^{y}S_{j}^{x}\\right]\\,,\\tag{4}$$\n\nwhere the sum refers to spins belonging to NN layers i and j, respectively, while Qz is the bulk helical pitch vector along the z direction. The second possibility is that of looking at the integral of the structure factor:\n\n$$M_{H M}=\\frac{1}{K}\\int_{0}^{\\pi}d q_{z}S(\\vec{q})\\qquad\\qquad(5)$$\n\nwhere S(~q), with ~q = (0, 0, qz), is the structure factor24 (i.e. the Fourier transform of the spin correlation function) along the z-direction of the film, while the normalization factor K is the structure factor integral at T = 0. Although the use of the last observable can be seen as a suitable and elegant way to overcome the intrinsic difficulties met in defining a correct helical order parameter, free of any undue external bias (as the wave-vector Qz\n\nFIG. 2: (color online) Specific heat cv per spin vs. temperature for thickness n = 16 (for lateral dimension, see the legend inside the figure). Inset: Maximum of cv vs. L obtained through MH technique. The continuum red line is a power law fit.\n\nentering the definition of κ in Eq. (4)), we remind that such quantity has generally to be managed with particular care, as discussed in details in Refs.14,15, where it was shown that the presence of block structures prevents us to unambiguously relate the evolution of S(~q) with the onset of helical order. 
However, for the specific case of the model under investigation such integrated quantity can still be considered a fairly significant order parameter, as no block structures emerge from the simulations (see below).\n\nIn order to get a clear picture of the critical region and to give an accurate estimate of the critical temperature, we look also at the following quantities\n\n$$c_{v}=nL^{2}\\beta^{2}\\left(\\langle e^{2}\\rangle-\\langle e\\rangle^{2}\\right)\\,,\\tag{6}$$\n\n$$\\chi_{o}=nL^{2}\\beta\\left(\\langle o^{2}\\rangle-\\langle o\\rangle^{2}\\right)\\,,\\tag{7}$$\n\n$$\\partial_{\\beta}o\\ =\\ n L^{2}\\left(\\langle o e\\rangle-\\langle o\\rangle\\langle e\\rangle\\right)\\,,\\qquad\\qquad(8)$$\n\n$$u_{4}(o)=1-\\frac{\\langle o^{4}\\rangle}{3\\langle o^{2}\\rangle^{2}}\\,,\\tag{9}$$\n\nwhere β = 1/kBT , and o is one of the relevant observables, i.e. ml , M, κ, MHM . In this paper, we shall mainly locate the critical temperature by looking at the intersection of the graphs of the Binder cumulant25, Eq. (9), as a function of T obtained at different L. For clarity reasons, we introduce also the following symbols: by TN (n) we will denote the helical/fan phase transition temperature for thickness n, TC(n) will instead indicate the ordering temperature of the sample as deduced by looking at the behaviour of the average order parameter (3), while T l C(n) will be the l-th plane transition temperature related to the order parameter defined in Eq. (2).", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0510.pdf" - }, - { - "text": "FIG. 1: Effective McMillan-Mayer short-range pair potentials extracted from explicit solvent simulations using the HNC closure. 
(a) Cation anion, (b) cation cation, (c) anion anion, (d) cation anion RDF obtained from explicit solvent MD and implicit solvent MC simulations.\n\npute all ion thermodynamic properties through implicit solvent MC simulations.\n\nThe second stage of our coarse-graining procedure consists in applying LPT, in order to deduce the best analytical model of electrolyte solutions which reproduces this molecular description. The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the difference between them treated as a perturbation in the reference potential. Assuming pairwise additive potentials, Vij = V (0) ij + ∆Vij , a first-order truncated expression for the free energy density of the system βfv is obtained,\n\n$$\\beta f_{v}\\lesssim\\beta f_{v}^{(0)}+\\frac{1}{2}\\beta\\sum_{i,j}\\rho_{i}\\rho_{j}\\int\\mathrm{d}\\mathbf{r}\\,g_{i j}^{(0)}(r)\\Delta V_{i j}(r)\\qquad(1)$$\n\nwhich depends only on the free-energy density f (0) v and RDF g (0) of the reference fluid, with β = (kBT ) −1 and ��i the concentration of species i. The Gibbs-Bogoliubov inequality [15] ensures that the right-hand side of Eq. (1) is actually a strict upper bound. Once a reference system has been chosen, the expression on the right-hand side of Eq. (1) must be minimized with respect to the parameters defining the reference. This procedure yields the best first-order approximation to the free energy of the system under consideration.\n\nFor a system of charged particles in solution, the natural reference is the PM, defined in terms of the charge and diameter (σi) of each species. In this case, the perturbing potentials are just the short-range effective potentials computed above (∆Vij = V SR ij ). We use the MSA [3] solution to the PM, since it provides analytical expressions for both the free energy and the RDF. 
The perturbation term is evaluated using an exponential approximation to the RDF obtained within the MSA, g(r) = exp [gMSA(r) − 1], which removes any unphysical negative regions and improves the comparison with HNC calculations.\n\nFIG. 2: (Color online) (a) Osmotic coefficient Φ in the McMillan-Mayer frame of reference. (diamond) MC simulations, (dot dashed) MSA2, (dot) Debye H¨uckel Limiting law (DHLL), (cross) experiments (Ref. [18] with the McMillan-Mayer to Lewis Randall conversion). (b) Minimization diameters. (dot dashed) MSA2 and (diamond) MSA-fit.\n\nWe first used LPT for a two-component system (Na+ and Cl− free ions) within the MSA (model MSA2), for concentrations ranging from 0.1 to 2.0 mol l−1 . The minimization leads to almost constant diameters on the whole range of concentration: σ1 = 3.67 ˚A and σ2 = 4.78 ˚A. As shown in Fig. 2, these parameters yield osmotic coefficients close to MC calculations only at very low concentration, i.e., c ≤ 0.1 mol l−1 (experimental values are given for indicative purposes only, since a perfect model will exactly match the MC results). For molar solutions, the LPT results differ considerably from MC calculations. This discrepancy can easily be understood by comparing the diameters found within the MSA2 calculation with the effective potentials given in Fig. 1. The anion/cation contact distance obtained within the MSA2 calculation is 4.2 ˚A, which is in the region of the second minimum of the effective potential and corresponds to the situation where there is a single layer of water molecules between the ions. The first minimum of the potential, which corresponds to the contact ion pair (CIP) is thus completely ignored by the MSA2 calculation. If the MSA diameters are directly fitted to reproduce the MC osmotic pressure, much smaller values are obtained. These MSA-fit hydrated diameters, which are compared to the MSA2 diameters in the bottom part of Fig. 
2, are averages of the CIP and the solvent-separated ion pair.\n\nTo overcome this difficulty, we have explicitly introduced the CIP in our model (species 3). Straightforward calculations, based on a characteristic-function formalism, allow us to define an equivalent model in which the free ions and the CIP are explicitly taken into account [19, 20]. We apply this formalism by defining a pair as an anion and a cation at a distance less than 4 ˚A, which corresponds to the position of the effective potential maximum. The interaction between free, like charges in this new system remains unchanged, and the cation-anion interactions are easily approximated by ex-", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2648.pdf" - }, - { - "text": "*Conclusion:* In summary, we propose a new subnatural linewidth spectroscopy technique, which is a laser by using Ramsey seperated-field cavity to realize the output of stimulated-emission radiation via multiple coherent interaction with atomic beam. We find the linewidth of Ramsey laser is subnatural if we choose an appropriate atomic level, and the bad-cavity laser mechanism will dramatically reduce cavityrelated noise as discussed in active optical clock [15–19]. Our results show that this new subnatural linewidth spectroscopy is superior to conventional optical Ramsey seperated-field spectroscopy and any other available subnatural spectroscopy technique at present [3–10]. Considering one have to apply the separated-field method in any phase detection as in Ramsey-Bord*e*´interferometer [2], to investigate the effects of phase differences between the two oscillating fields [31] in this stimulated separated-field method with such subnatural linewidth will be our next research aim.\n\nWe acknowledge Yiqiu Wang and Deshui Yu for fruitful discussions. 
This work is supported by MOST of China (grant 2005CB724500, National Natural Science Foundation of China (grant 60837004, 10874009), National Hi-Tech Research and Development (863) Program.\n\n- ∗ E-mail: jbchen@pku.edu.cn\n- † E-mail: hongguo@pku.edu.cn.\n- [1] N. F. Ramsey, Phys. Rev. **76**, 996 (1949).\n- [2] B. Dubetsky and P. R. Berman, In *Atom Interferometry*, edited by P. R. Berman (Academic Press, Cambridge, MA, 1997).\n- [3] M. M. Salour, Rev. Mod. Phys. **50**, 667 (1978).\n- [4] J. Wong and J. C. Garrison, Phys. Rev. Lett. **44**, 1254 (1980).\n- [5] P. L. Knight and P. E. Coleman, J. Phys. B: Atom. Molec. Phys. **13** 4345 (1980).\n- [6] H. -W. Lee, P. Meystre, and M. O. Scully, Phys. Rev. A **24**, 1914 (1981).\n- [7] F. Shimizu, K. Shimizu, and H. Takuma, Phys. Rev. A **28**, 2248 (1983).\n- [8] W. Gawlik, J. Kowalski, F. Tr¨ager, and M. Vollmer, Phys. Rev.\n\nLett. **48**, 871 (1982).\n\n- [9] H. J. Carmichael, R. J. Brecha, M. G. Raizen, H. J. Kimble, and P. R. Rice, Phys. Rev. A **40**, 5516 (1989).\n- [10] U. W. Rathe, M. O. Scully, Letters in Mathematical Physics **34**, 297 (1995)\n- [11] K. Numata, A. Kemery, J. Camp, Phys Rev Lett, **93**, 250602 (2004).\n- [12] A. D. Ludlow *et al.*, Opt. Lett. **32**, 641 (2007).\n- [13] H. J. Kimble, B. L. Lev, and J. Ye, Phys. Rev. Lett. **101**, 260602 (2008).\n- [14] J. Chen, and X.Chen, In *Proceedings of the 2005 IEEE International Frequency Control Symposium and Exposition*, (IEEE, 2005), p.608.\n- [15] J. Chen, e-print arXiv:0512096 quant-ph; Chinese Science Bulletin **54**, 348 (2009).\n- [16] D. Yu and J. Chen, Phys. Rev. A **78**, 013846 (2008).\n- [17] J. Chen, In *Frequency Standards and Metrology: Proceedings of the 7th Symposium*, edited by Maleki Lute (World Scientific Publishing Company, 2009).\n- [18] Y. Wang, Chinese Science Bulletin **54**, 347 (2009).\n- [19] D. Meiser, J. Ye, D. R. Carlson, and M. J. Holland, Phys. Rev. Lett. **102**, 163601 (2009)\n- [20] F. 
Strumia, Metrologia **8**, 85 (1972).\n- [21] G. Kramer, J. Opt. Soc. Am. **68**, 1634 (1978).\n- [22] V. S. Letokhov and B. D. Pavlik, Opt. Spectrosc. USSR **32**, 455 (1972).\n- [23] Ye. V. Baklanov, B. Ya, Dubetsky, V. P. Chebotayev, Appl. Phys. **9**, 171 (1976).\n- [24] J. C. Bergquist, S. A. Lee, and L. L. Hall, Phys. Rev. Lett. **38**, 159 (1977).\n- [25] L. Davidovich, Rev. Mod. Phys. **68**, 127 (1996).\n- [26] M. I. Kolobov, L. Davidovich, E. Giacobino, and C. Fabre, Phys. Rev. A **47**, 1431 (1993).\n- [27] M. Sargent III, M. O. Scully, and W. E. Lamb, *Laser Physics* (Addition Wesley, Reading, MA, 1974).\n- [28] N. A. Abraham, P. Mandel, and L. M. Narducci, *Dynamic Instabilities and Pulsations in Lasers*, Progress in Optics XXV, edited by E. Wolf (Elsevier, Amsterdam, 1988).\n- [29] L. Pasternack, D. M. Silver, D. R. Yarkony, and P. J. Dagdigian, J. Phys. B **13**, 2231 (1980).\n- [30] K. An and M. S. Feld, Phys. Rev. A **56**, 1662(1997).\n- [31] N. F. Ramsey and H. B. Silsbee, Phys. Rev. **84**, 506(1951).", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2670.pdf" - }, - { - "text": "modes of neighboring tetrahedra. And these coupling constants λx,y,z need to be tuned to produce Jx,y,z of the Kitaev model. This is still not easy to implement in solid state systems. At lowest non-trivial order of perturbative expansion, we do get our model (9). Higher order terms in expansion destroy the exact solvability, but may be controlled by the small parameters λx,y,z/k.\n\n# B. Generate the High Order Terms by Magnetic Interactions between Clusters.\n\nIn this Subsection we consider more conventional perturbations, magnetic interactions between the clusters, e.g. the Heisenberg coupling Sj · Sk with j and k belong to different tetrahedra. This has the advantage over the previous phonon approach for not introducing additional degrees of freedom. 
But it also has a significant disadvantage: the perturbation does not commute with the cluster Heisenberg Hamiltonian (2), so the cluster singlet subspace will be mixed with other total spin states. In this Subsection we will use the spin-chirality representation (6) for τ z .\n\nAgain consider two clusters j and k. For simplicity of notations define a projection operator Pjk = PjPk, where Pj,k is projection into the singlet subspace of cluster j and k, respectively, Pj,k = P s=±1 |τ z j,k = sihτ z j,k = s|. For a given perturbation λ Hperturbation with small parameter λ (in factor λ/Jcluster is the expansion parameter), lowest two orders of the perturbation series are\n\n$$\\lambda\\,{\\cal P}_{jk}H_{\\rm perturbation}{\\cal P}_{jk}+\\lambda^{2}\\,{\\cal P}_{jk}H_{\\rm perturbation}(1-{\\cal P}_{jk})$$\n \n$$\\times[0-H_{\\rm cluster}\\ j-H_{\\rm cluster}\\ k]^{-1}(1-{\\cal P}_{jk})H_{\\rm perturbation}{\\cal P}_{jk}\\tag{15}$$\n\nWith proper choice of λ and Hperturbation we can generate\n\nthe desired Jx,y,z terms in (8) from the first and second order of perturbations.\n\nThe calculation can be dramatically simplified by the following fact that any physical spin-1/2 operator S x,y,z ℓ converts the cluster spin singlet states |τ z = ±1i into spin-1 states of the cluster. This can be checked by explicit calculations and will not be proved here. For all the perturbations to be considered later, the above mentioned fact can be exploited to replace the factor [0 − Hcluster j − Hcluster k] −1 in the second order perturbation to a c-number (−2Jcluster) −1 .\n\nThe detailed calculations are given in Appendix B. 
We will only list the results here.\n\nThe perturbation on x-links is given by\n\n$$\\begin{array}{c}{{\\lambda_{x}\\,H_{\\mathrm{perturbation,~}x}=\\lambda_{x}[\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{k1}+\\mathrm{sgn}(J_{x})\\cdot(\\mathbf{S}_{j2}\\cdot\\mathbf{S}_{k2})]}}\\\\ {{-\\,J_{x}(\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{j2}+\\mathbf{S}_{k1}\\cdot\\mathbf{S}_{k2}).}}\\end{array}$$\n\nwhere λx = p 12|Jx| · Jcluster, sgn(Jx) = ±1 is the sign of Jx.\n\nThe perturbation on y-links is\n\n$$\\begin{array}{r}{\\lambda_{y}\\,H_{\\mathrm{perturbation,}\\,y}}\\\\ {=\\lambda_{y}[\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{k1}+\\operatorname{sgn}(J_{y})\\cdot(\\mathbf{S}_{j3}-\\mathbf{S}_{j4})\\cdot(\\mathbf{S}_{k3}-\\mathbf{S}_{k4})]}\\\\ {\\quad-|J_{y}|(\\mathbf{S}_{j3}\\cdot\\mathbf{S}_{j4}+\\mathbf{S}_{k3}\\cdot\\mathbf{S}_{k4})}\\end{array}$$\n\nwith λy = p 4|Jy| · Jcluster. The perturbation on z-links is\n\n$\\lambda_{z}\\,H_{\\rm perturbation}$, $z$ \n \n$=\\lambda_{z}[{\\bf S}_{j2}\\cdot({\\bf S}_{k3}\\times{\\bf S}_{k4})+{\\rm sgn}(J_{z})\\cdot{\\bf S}_{k2}\\cdot({\\bf S}_{j3}\\times{\\bf S}_{j4})]$ \n \n$-|J_{z}|({\\bf S}_{j3}\\cdot{\\bf S}_{j4}+{\\bf S}_{k3}\\cdot{\\bf S}_{k4})$. \n \n\nwith λz = 4p |Jz| · Jcluster. The entire Hamiltonian Hmagnetic reads explicitly as,\n\nHmagnetic = X cluster j (Jcluster/2)(Sj1 + Sj2 + Sj3 + Sj4) 2 + X x−links p 12|Jx| · Jcluster- Sj1 · Sk1 + sgn(Jx) · (Sj2 · Sk2) − Jx(Sj1 · Sj2 + Sk1 · Sk2) + X y−links q 4|Jy| · Jcluster- Sj1 · (Sk3 − Sk4) + sgn(Jy)Sk1 · (Sj3 − Sj4) − |Jy|(Sj3 · Sj4 + Sk3 · Sk4) + X z−links 4 p |Jz| · Jcluster- Sj2 · (Sk3 × Sk4) + sgn(Jz)Sk2 · (Sj3 × Sj4) − |Jz|(Sj3 · Sj4 + Sk3 · Sk4) . (16)\n\nIn (16), we have been able to reduce the four spin interactions in (8) to inter-cluster Heisenberg interactions, and the six-spin interactions in (8) to inter-cluster spinchirality interactions. The inter-cluster Heisenberg couplings in Hperturbation x,y may be easier to arrange. 
The inter-cluster spin-chirality coupling in Hperturbation z explicitly breaks time reversal symmetry and is probably harder to implement in solid state systems. However spin-chirality order may have important consequences in frustrated magnets36,37, and a realization of spin", - "page_start": 6, - "page_end": 6, - "source_file": "1001.0266.pdf" - }, - { - "text": "$$\\mathcal{F}[q(s),o]=\\underbrace{D_{\\text{KL}}[q(s)||p(s)]}_{\\text{Complexity}}-\\underbrace{\\mathbb{E}_{q(s)}[\\ln p(o|s)]}_{\\text{Accuracy}}\\tag{10}$$\n\nSince the *VFE* can be calculated (Equation (5)), it can be used as a target for minimisation. This allows us to choose the approximate posterior *q*(*s*) that is associated with the smallest *VFE*; since the surprise does not depend on *q*(*s*), minimising the *VFE* this way must necessarily reduce the divergence between the approximate and exact posterior. We now, as is usually performed in variational inference, introduce a mean-field approximation [35] so that the joint approximate posterior *q*(*st*) factorises across time steps *t* ∈ *T* and hidden state factors *f* ∈ *F*:\n\n$$q(s)=q(s_{1},s_{2},\\ldots,s_{T})=\\prod_{t=1}^{T}q(s_{t})\\tag{11}$$\n\n$$q(s_{t})=q(s_{t}^{1},s_{t}^{2},\\ldots,s_{t}^{F})=\\prod_{f=1}^{F}q(s_{t}^{f})\\tag{12}$$\n\nThe factorisation into hidden state factors allows us to calculate the VFE separately for each factor *f* and sum them to obtain the total VFE. The factorisation in time allows us to calculate the time-specific VFE as in Equation (6) using the predictive posterior ln *p*(*st* | *st*−1, *ut*−1) from the last time step as the prior:\n\n$${\\cal F}_{t}=\\mathbb{E}_{q(s_{t})}[\\ln q(s_{t})-\\ln p(o_{t}\\mid s_{t})-\\ln p(s_{t}\\mid s_{t-1},u_{t-1})]\\tag{13}$$\n\nwhich is the value we intend to minimise. 
Various methods exist for minimising the *VFE*; the one used in pymdp, from which we drew much of our inspiration, is Coordinate Ascent Variational Inference (CAVI) [35], where the fixed points of the *VFE* are solved for with coordinate descent (also known as fixed-point iteration (FPI) [23]). This is also the algorithm currently available in ActiveInference.jl. We correspondingly use a coordinate descent update to find the factorised approximate posterior *q*(*s f t* ) that minimises the timedependent VFE F*t* , and therefore optimises for the time-specific variational posterior *q*(*st*). To obtain the coordinate descent update, we start by taking the derivative of F*t* with respect to *q*(*s f t* ) and setting the derivative to zero [23]:\n\n$$\\frac{\\partial\\mathcal{F}_{t}}{\\partial q(s^{\\prime}_{t})}=\\ln q(s^{\\prime}_{t})+1-\\mathbb{E}_{q^{\\prime}\\circ f}[\\ln P(o_{t}\\mid s_{t})]-\\ln\\biggl{(}\\mathbb{E}_{p(s^{\\prime}_{t-1},u^{\\prime}_{t-1})}\\Bigl{[}P(s^{\\prime}_{t}\\mid s^{\\prime}_{t-1},u^{\\prime}_{t-1})\\Bigr{]}\\biggr{)}=0\\tag{14}$$\n\nSolving for *q*(*s f t* ) yields\n\n$$\\ln q(s_{t}^{f})=\\mathbb{E}_{q^{\\prime}\\lor t}\\big{[}\\ln P(o_{t}\\mid s_{t})\\big{]}+\\ln\\bigg{(}\\mathbb{E}_{P(s_{t-1}^{f},u_{t-1}^{f})}\\Big{[}P\\big{(}s_{t}^{f}\\mid s_{t-1}^{f},u_{t-1}^{f}\\big{)}\\Big{]}\\bigg{)}-1\\tag{15}$$\n\nwhich leads us to the coordinate descent update equation:\n\n$$q^{*}(s_{t}^{f})=\\sigma\\bigg{(}\\mathbb{E}_{q^{\\vee}|\\sqrt{\\ln P(o_{t}\\mid s_{t})}}+\\ln\\bigg{(}\\mathbb{E}_{P(s_{t-1}^{f},u_{t-1}^{f})}\\Big{[}P(s_{t}^{f}\\mid s_{t-1}^{f},u_{t-1}^{f})\\Big{]}\\bigg{)}\\bigg{)}\\tag{16}$$\n\nwhere E *q i*\\ *f* denotes the expectation over *q*(*s*) for factor *i*, where the posterior over states in the other factors *f* are kept constant. By iteratively solving (16), the FPI scheme will eventually find a local optimum and converge to a solution for the variational posterior. 
By default, ActiveInference uses 10 iterations or stops when *∂Ft* < 0.001. This posterior then comprises the AIF agent's belief about the state of the environment, and therefore, its perception.", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": ".\n\n## **The Linewidth of Ramsey Laser with Bad Cavity**\n\nYang Li, Wei Zhuang, Jinbiao Chen,∗ and Hong Guo†\n\n*CREAM Group, State Key Laboratory of Advanced Optical Communication*\n\n*Systems and Networks (Peking University) and Institute of Quantum Electronics,*\n\n*School of Electronics Engineering and Computer Science,*\n\n*and Center for Computational Science and Engineering (CCSE), Peking University, Beijing 100871, P. R. China*\n\n(Dated: October 29, 2018)\n\nWe investigate a new laser scheme by using Ramsey separated-field technique with bad cavity. By studying the linewidth of the stimulated-emission spectrum of this kind of laser inside the cavity, we find its linewidth is more than two orders of magnitude narrower than atomic natural linewidth, and it is far superior to that of conventional optical Ramsey method and any other available subnatural linewidth spectroscopy at present. Since any cavity related noise is reduced to cavity-pulling effect in bad cavity laser, this Ramsey laser provides the possibility of precision subnatural linewidth spectroscopy, which is critical for the next generation of optical clock and atom interferometers.\n\nPACS numbers: 42.55.Ah, 42.50.Ar, 42.60.Da, 32.30.-r\n\n*Introduction:* Since the invention of the separated-field technique [1], it has played an important role in the field of precision spectroscopy due to its linewidth narrowing effect via multiple coherent interaction. 
Atomic clocks based on this technique have greatly extended our ability for frequency measurement, further, almost all the atom interferometers are based on this technique [2].\n\nThough, the natural linewidth of quantum transition was regarded as the ultimate limit to high-resolution laser spectroscopy [4], several methods of subnatural linewidth spectroscopy have been proposed to gain subnatural linewidth [3– 10]. However, in all these efforts, including optical Ramsey spectroscopy, subnatural line is realized at the expense of a quick reduction in signal-to-noise (SNR) ratio due to the exponential decaying of signal, thus all these schemes can only get the linewidth several times narrower than the atomic natural linewidth. In the past three decades, this situation does not change in the field of the precision laser spectroscopy. On the other hand, the thermal noise of the cavity mirrors is the main obstacle for further linewidth reduction of a laser [11, 12], and it is a challenge to substantially reduce this noise further[13]. Recently, a new scheme, called active optical clock [14–18], was proposed to substantially reduce the laser linewidth. With lattice trapped atoms, it is possible to reach mHz linewidth laser based on the mechanism of active optical clock [14, 15, 19]. The principal mechanism of active optical clock is to directly extract light emitted from the ultranarrow atomic transition with a cavity mode linewidth much wider than that of lasing. This bad cavity ensures that any frequency shift due to cavity noise reduces to cavity-pulling effect [15– 17], then the thermal noise is not the major obstacle again for reducing the linewidth. This means the bad cavity can play an indispensable role in new subnatural linewidth spectroscopy.\n\nIn this Letter, we propose a new scheme called Ramsey laser with bad cavity. 
Distinct from any previous applications of conventional Ramsey separated oscillating fields method [1], which focuses on the absorption spectrum, we here focus on the stimulated emission spectrum via multiple coherent interactions inside the cavity. We find this Ramsey laser can provide a stimulated-emission spectrum with a linewidth much narrower than that of any conventional optical Ramsey seperated-field spectroscopy, which is commonly applied in optical atomic clock. Our results also show that a subnatural linewidth spectroscopy, superior to any other available subnatural spectroscopy technique at present [3–10], can be reached by this kind of laser, if a suitable atomic level structure is chosen. Thus, this method can provide an effective subnatural spectroscopy, and the possibilities for the new optical clock scheme [15] and atom interferometers [2].\n\n*Theoretical framework:* We consider the case of a two-level atomic beam interacting with a single-mode Ramsey cavity of separated-oscillating-field resonators with the cavity mode linewidth is much wider than the atomic gain linewidth. Thus we call it bad-cavity Ramsey laser. All atoms are pumped onto the upper lasing state **a** before entering the first cavity of seperated field, and the lower lasing state is **b**. We assume all the atoms have the same velocities υ, that means what we consider here is a homogeneous laser system. And for the sake of simplicity, we consider the two-standing waves linear optical Ramsey configuration with a grid as spatial selector [20, 21]. Our treatment can be extended to other configurations as in [22–24]. The length of each oscillating part is *l*, and the length of the free drift region is *L*. 
The corresponding Hamiltonian is\n\n$$H=\\hbar\\omega\\hat{a}^{\\dagger}\\hat{a}+\\hbar\\sum_{j}\\left[\\omega_{a}^{j}(t)\\sigma_{a}^{j}+\\omega_{b}^{j}(t)\\sigma_{b}^{j}\\right]\\tag{1}$$\n \n$$+\\hbar\\mathrm{g}\\sum_{j}\\Gamma_{j}(t)(\\hat{a}^{\\dagger}\\hat{\\sigma}_{-}^{j}e^{-i\\vec{k}\\cdot\\vec{r}_{j}}+\\hat{\\sigma}_{+}^{j}\\hat{a}e^{i\\vec{k}\\cdot\\vec{r}_{j}}),$$\n\nwhere ˆ*a*, ˆ*a* † are the annihilation and creation operators of the field mode inside the cavity, with the frequency ω, σ *j a* = (|*a*i h*a*|) *j* and σ *j b* = (|*b*i h*b*|) *j* are the projection operators for the jth atom corresponding to the upper and lower lasing levels,", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2670.pdf" - }, - { - "text": "- f. On the toolbar, click the fourth icon from the right to place the report window back into add mode.\n- 9. Define a field and an index:\n\t- a. Find a text string that can be used to identify the location of the field. The text string needs to contain a sample index value. For example, if you want to extract account number values from the input file, find where the account number is printed on the page.\n\t- b. By using the mouse, draw a box around the text string. Start just outside of the upper-left corner of the string. Click and then drag the mouse toward the lower-right corner of the string. As you drag the mouse, the graphical indexer uses a dotted line to draw a box. After you enclose the text string inside of a box, release the mouse. The graphical indexer highlights the text string inside the box.\n\n**Important:** Use the same principles for collecting fields as collecting the trigger text string in step 8b on page 170. If the fields that must be collected are close together, overlap them with adjacent fields to ensure that the box is as large as possible and to ensure that the data is collected at load time.\n\n- c. Click the Define a Field icon on the toolbar.\n- d. 
In the Add a Field window, complete the following steps:\n\t- i. On the Field Information tab, verify the attributes of the Index field. For example, the text string that you selected in the report window is displayed under Reference String and the trigger identifies the trigger on which the field is based. Click **Help** for assistance with the options and values that you can specify.\n\t- ii. On the Database Field Attributes tab, verify the attributes of the database field. In the Database Field Name field, enter the name of the application group field into which you want Content Manager OnDemand to store the index value. In the Folder Field Name field, enter the name of the folder field to display in the client search window. Click **Help** for assistance with the other options and values that you can specify.\n\t- iii. Click **OK** to define the field and index.\n- e. To verify the locations of the fields, complete the following steps:\n\t- i. Place the report window into display mode. Blue boxes are drawn around the fields.\n\t- ii. Click the **Select** tool.\n\t- iii. In the Select window, under Fields, double-click **Field 1**. The graphical indexer highlights the text string in the current document. Double-click **Field 1** again. The graphical indexer moves to the next document and highlights the text string.\n\t- iv. Use the Select window to move forward to each document and display the field. Then, return to the first document in the input file.\n- f. Place the report window back into add mode.\n- 10.Click **Create Indexer Parameters and Fields Report** to create the indexer parameter report that the PDF Indexer uses to process the input files that you load into the application. At a minimum, you must have one trigger, one field, and one index. 
For more information about the indexing parameters, see IBM Content Manager OnDemand - Indexing Reference, SC19-3354.\n- 11.After you define all of the triggers, fields, and indexes, press Esc to close the report window.", - "page_start": 195, - "page_end": 195, - "source_file": "sg246915.pdf" - }, - { - "text": "the dominant dynamic process, but does not allow one to probe this assumption. In Section III B we show how one may develop a dynamical density functional theory (DDFT) that describes the system at a similar level to the KMC. However, the DDFT may also be easily extended to include other effects such as fluid diffusion, that the KMC does not incorporate.\n\n### A. Kinetic Monte Carlo model\n\nThe kinetic Monte Carlo model for two-dimensional dewetting nanofluids [33] was first proposed in Ref. [35] and extended to include next-nearest neighbour interactions in [37]. The two key assumptions used are: (i) the relevant processes can be mapped on to a two-dimensional lattice gas model, thereby neglecting continuous changes in the thickness of the evaporating film, and (ii) all relevant dynamics results from diffusing nanoparticles and evaporating/condensing solvent.\n\nThe model builds on an Ising-type model for the liquid-gas phase transition. The surface is divided up into a regular array of lattice sites whose size is dictated by the nanoparticles. One then considers each lattice site to be occupied either by a nanoparticle, liquid or vapour. This effectively maps the system onto a two-dimensional two-component lattice gas having two fields n and l. The resulting three possible states of a cell are: liquid (l = 1, n = 0), nanoparticle (l = 0, n = 1), and vapour (l = 0, n = 0, i.e., cell empty). 
The energy of an overall configuration is given by the hamiltonian\n\n$$E\\,=\\,-\\frac{\\varepsilon_{nn}}{2}\\sum_{}n_{i}n_{j}\\,-\\,\\frac{\\varepsilon_{nl}}{2}\\sum_{}n_{i}l_{j}\\,-\\,\\frac{\\varepsilon_{ll}}{2}\\sum_{}l_{i}l_{j}\\,-\\,\\mu\\sum_{i}l_{i}\\tag{3}$$\n\nwhere P denotes a sum over nearest neighbour pairs and εll, εnn and εnl are the liquid-liquid, particle-particle and liquid-particle interaction energies, respectively. Fixing the three interaction strength parameters εll, εnn, εnl and the effective chemical potential µ determines the equilibrium state of the system. We choose εll as unit of energy – i.e. we set εll = 1.\n\nThe hamiltonian determines the equilibrium state and the energy landscape of the system. However, as the system 'dries in' during the course of the solvent evaporation, the final nanoparticle configurations do not necessarily represent equilibrium structures. This implies that the system dynamics is of paramount importance. It is determined by the possible Monte Carlo moves, their relative frequencies, and the probabilities for their acceptance. Two types of moves are allowed: (i) evaporation/condensation of liquid and (ii) diffusion of nanoparticles within the liquid. A mobility M corresponds to the ratio of cycles of particle and solvent moves and reflects the physical ratio of", - "page_start": 8, - "page_end": 8, - "source_file": "1001.2669.pdf" - }, - { - "text": "# Appendix B: Derivation of the Terms Generated by Second Order Perturbation of Inter-cluster Magnetic Interactions\n\nIn this Appendix we derive the second order perturbations of inter-cluster Heisenberg and spin-chirality interactions. The results can then be used to construct (16).\n\nFirst consider the perturbation λ Hperturbation = λ[Sj1 · Sk1 + r(Sj2 · Sk2)], where r is a real number to be tuned later. 
Due to the fact mentioned in Subsection IV B, the action of Hperturbation on any cluster singlet state will produce a state with total spin-1 for both cluster j and k. Thus the first order perturbation in (15) vanishes. And the second order perturbation term can be greatly simplified: operator (1 − Pjk)[0 − Hcluster j − Hcluster k] −1 (1 − Pjk) can be replaced by a c-number (−2Jcluster) −1 . Therefore the perturbation up to second order is\n\n$$-\\frac{\\lambda^{2}}{2J_{\\mathrm{cluster}}}\\,{\\mathcal{P}}_{j k}(H_{\\mathrm{perturbation}})^{2}{\\mathcal{P}}_{j k}$$\n\nThis is true for other perturbations considered later in this Appendix. The cluster j and cluster k parts can be separated, this term then becomes (a, b = x, y, z),\n\n$$\\begin{array}{c}{{-\\,\\frac{\\lambda^{2}}{2J_{\\mathrm{cluster}}}\\sum_{a,b}\\left[\\mathcal{P}_{j}S_{j1}^{a}S_{j1}^{b}\\mathcal{P}_{j}\\cdot\\mathcal{P}_{k}S_{k1}^{a}S_{k1}^{b}\\mathcal{P}_{k}\\right]}}\\\\ {{\\quad+2r\\,\\mathcal{P}_{j}S_{j1}^{a}S_{j2}^{b}\\mathcal{P}_{j}\\cdot\\mathcal{P}_{k}S_{k1}^{a}S_{k2}^{b}\\mathcal{P}_{k}}}\\\\ {{\\quad+r^{2}\\,\\mathcal{P}_{j}S_{j2}^{a}S_{j2}^{b}\\mathcal{P}_{j}\\cdot\\mathcal{P}_{k}S_{k2}^{a}S_{k2}^{b}\\mathcal{P}_{k}\\right]}}\\end{array}$$\n\nThen use the fact that PjS a jℓS b jmPj = δab(1/3)Pj(Sjℓ · Sjm)Pj by spin rotation symmetry, the perturbation becomes\n\n$$-\\frac{\\lambda^{2}}{6J_{\\rm cluster}}\\Big{[}\\frac{9+9r^{2}}{16}+2r\\,{\\cal P}_{jk}({\\bf S}_{j1}\\cdot{\\bf S}_{j2})({\\bf S}_{k1}\\cdot{\\bf S}_{k2}){\\cal P}_{jk}\\Big{]}$$\n \n$$=-\\frac{\\lambda^{2}}{6J_{\\rm cluster}}\\Big{[}\\frac{9+9r^{2}}{16}+(r/2)\\tau_{j}^{x}\\tau_{k}^{x}-r/2$$\n \n$$-r\\,{\\cal P}_{jk}({\\bf S}_{j1}\\cdot{\\bf S}_{j2}+{\\bf S}_{k1}\\cdot{\\bf S}_{k2}){\\cal P}_{jk}\\Big{]}.$$\n\nSo we can choose −(r λ2 )/(12Jcluster) = −Jx, and include the last intra-cluster Sj1 ·Sj2 + Sk1 ·Sk2 term in the first order perturbation.\n\nThe perturbation on x-links is then (not unique),\n\n$\\lambda_{x}\\,H_{\\rm 
perturbation}$, $x=\\lambda_{x}[{\\bf S}_{j1}\\cdot{\\bf S}_{k1}+{\\rm sgn}(J_{x})\\cdot({\\bf S}_{j2}\\cdot{\\bf S}_{k2})]$ \n \n$-J_{x}({\\bf S}_{j1}\\cdot{\\bf S}_{j2}+{\\bf S}_{k1}\\cdot{\\bf S}_{k2})$\n\nwith λx = p 12|Jx| · Jcluster, and r = sgn(Jx) is the sign of Jx. The non-trivial terms produced by up to second order perturbation will be the τ x j τ x k term. Note that the last term in the above equation commutes with cluster Hamiltonians so it does not produce second or higher order perturbations.\n\nSimilarly considering the following perturbation on ylinks, λ Hperturbation = λ[Sj1 ·(Sk3 − Sk4) + r Sk1 ·(Sj3 − Sj4)]. Following similar procedures we get the second order perturbation from this term\n\n− λ 2 6Jcluster h 9 + 9r 2 8 + 2r Pjk[Sj1 · (Sj3 − Sj4)][Sk1 · (Sk3 − Sk4)]Pjk − (3/2)Pjk(Sk3 · Sk4 + r 2 Sj3 · Sj4)Pjki = − λ 2 6Jcluster h 9 + 9r 2 8 + 2r (3/4)τ y j τ y k − (3/2)Pjk(Sk3 · Sk4 + r 2 Sj3 · Sj4)Pjki\n\nSo we can choose −(r λ2 )/(4Jcluster) = −Jy, and include the last intra-cluster Sk3 · Sk4 + r 2 Sj3 · Sj4 term in the first order perturbation.\n\nTherefore we can choose the following perturbation on y-links (not unique),\n\n$$\\begin{array}{r}{\\lambda_{y}\\,H_{\\mathrm{perturbation,}\\,y}}\\\\ {=\\lambda_{y}[\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{k1}+\\operatorname{sgn}(J_{y})\\cdot(\\mathbf{S}_{j3}-\\mathbf{S}_{j4})\\cdot(\\mathbf{S}_{k3}-\\mathbf{S}_{k4})]}\\\\ {\\quad-|J_{y}|(\\mathbf{S}_{j3}\\cdot\\mathbf{S}_{j4}+\\mathbf{S}_{k3}\\cdot\\mathbf{S}_{k4})}\\end{array}$$\n\nwith λy = p 4|Jy| · Jcluster, r = sgn(Jy) is the sign of Jy. The τ z τ z term is again more difficult to get. We use\n\nj k the representation of τ z by spin-chirality (6). And consider the following perturbation\n\n$$H_{\\mathrm{perturbation}}={\\bf S}_{j2}\\cdot({\\bf S}_{j3}\\times{\\bf S}_{j4})+r\\,{\\bf S}_{k2}\\cdot({\\bf S}_{j3}\\times{\\bf S}_{j4})$$\n\nThe first order term in (15) vanishes due to the same reason as before. 
There are four terms in the second order perturbation. The first one is\n\n$$\\begin{array}{l}{{\\lambda^{2}\\,{\\mathcal{P}}_{j k}{\\mathbf{S}}_{j2}\\cdot({\\mathbf{S}}_{k3}\\times{\\mathbf{S}}_{k4})(1-{\\mathcal{P}}_{j k})}}\\\\ {{\\ \\times\\left[0-H_{\\mathrm{cluster}\\ j}-H_{\\mathrm{cluster}\\ k}\\right]^{-1}}}\\\\ {{\\ \\times\\left(1-{\\mathcal{P}}_{j k}\\right){\\mathbf{S}}_{j2}\\cdot({\\mathbf{S}}_{k3}\\times{\\mathbf{S}}_{k4}){\\mathcal{P}}_{j k}}}\\end{array}$$\n\nFor the cluster j part we can use the same arguments as before, the Hcluster j can be replaced by a c-number Jcluster. For the cluster k part, consider the fact that Sk3 × Sk4 equals to the commutator −i[Sk4, Sk3 · Sk4], the action of Sk3 ×Sk4 on physical singlet states of k will also only produce spin-1 state. So we can replace the Hcluster k in the denominator by a c-number Jcluster as well. Use spin rotation symmetry to separate the j and k parts, this term simplifies to\n\n− λ 2 6Jcluster PjSj2 · Sj2Pj · Pk(Sk3 × Sk4) · (Sk3 × Sk4)Pk. 
Use (S) 2 = 3/4 and (Sk3 × Sk4) · (Sk3 × Sk4) = X a,b (S a k3S b k4S a k3S b k4 − S a k3S b k4S b k3S a k4 ) = (Sk3 · Sk3)(Sk4 · Sk4) − X a,b S a k3S b k3 [δab/2 − S a k4S b k4 ] = 9/16 + (Sk3 · Sk4)(Sk3 · Sk4) − (3/8)", - "page_start": 8, - "page_end": 8, - "source_file": "1001.0266.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2670.pdf", - "query": "What are the consequences on the linewidth for regular and Poissonian injections ?", - "target_page": 3, - "target_passage": " For regular injection (p = 1), the linewidth is the narrowest, while for Poissonian injection (p = 0), the linewidth is the broadest.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": ".\n\n## **The Linewidth of Ramsey Laser with Bad Cavity**\n\nYang Li, Wei Zhuang, Jinbiao Chen,∗ and Hong Guo†\n\n*CREAM Group, State Key Laboratory of Advanced Optical Communication*\n\n*Systems and Networks (Peking University) and Institute of Quantum Electronics,*\n\n*School of Electronics Engineering and Computer Science,*\n\n*and Center for Computational Science and Engineering (CCSE), Peking University, Beijing 100871, P. R. China*\n\n(Dated: October 29, 2018)\n\nWe investigate a new laser scheme by using Ramsey separated-field technique with bad cavity. By studying the linewidth of the stimulated-emission spectrum of this kind of laser inside the cavity, we find its linewidth is more than two orders of magnitude narrower than atomic natural linewidth, and it is far superior to that of conventional optical Ramsey method and any other available subnatural linewidth spectroscopy at present. 
Since any cavity related noise is reduced to cavity-pulling effect in bad cavity laser, this Ramsey laser provides the possibility of precision subnatural linewidth spectroscopy, which is critical for the next generation of optical clock and atom interferometers.\n\nPACS numbers: 42.55.Ah, 42.50.Ar, 42.60.Da, 32.30.-r\n\n*Introduction:* Since the invention of the separated-field technique [1], it has played an important role in the field of precision spectroscopy due to its linewidth narrowing effect via multiple coherent interaction. Atomic clocks based on this technique have greatly extended our ability for frequency measurement, further, almost all the atom interferometers are based on this technique [2].\n\nThough, the natural linewidth of quantum transition was regarded as the ultimate limit to high-resolution laser spectroscopy [4], several methods of subnatural linewidth spectroscopy have been proposed to gain subnatural linewidth [3– 10]. However, in all these efforts, including optical Ramsey spectroscopy, subnatural line is realized at the expense of a quick reduction in signal-to-noise (SNR) ratio due to the exponential decaying of signal, thus all these schemes can only get the linewidth several times narrower than the atomic natural linewidth. In the past three decades, this situation does not change in the field of the precision laser spectroscopy. On the other hand, the thermal noise of the cavity mirrors is the main obstacle for further linewidth reduction of a laser [11, 12], and it is a challenge to substantially reduce this noise further[13]. Recently, a new scheme, called active optical clock [14–18], was proposed to substantially reduce the laser linewidth. With lattice trapped atoms, it is possible to reach mHz linewidth laser based on the mechanism of active optical clock [14, 15, 19]. 
The principal mechanism of active optical clock is to directly extract light emitted from the ultranarrow atomic transition with a cavity mode linewidth much wider than that of lasing. This bad cavity ensures that any frequency shift due to cavity noise reduces to cavity-pulling effect [15– 17], then the thermal noise is not the major obstacle again for reducing the linewidth. This means the bad cavity can play an indispensable role in new subnatural linewidth spectroscopy.\n\nIn this Letter, we propose a new scheme called Ramsey laser with bad cavity. Distinct from any previous applications of conventional Ramsey separated oscillating fields method [1], which focuses on the absorption spectrum, we here focus on the stimulated emission spectrum via multiple coherent interactions inside the cavity. We find this Ramsey laser can provide a stimulated-emission spectrum with a linewidth much narrower than that of any conventional optical Ramsey seperated-field spectroscopy, which is commonly applied in optical atomic clock. Our results also show that a subnatural linewidth spectroscopy, superior to any other available subnatural spectroscopy technique at present [3–10], can be reached by this kind of laser, if a suitable atomic level structure is chosen. Thus, this method can provide an effective subnatural spectroscopy, and the possibilities for the new optical clock scheme [15] and atom interferometers [2].\n\n*Theoretical framework:* We consider the case of a two-level atomic beam interacting with a single-mode Ramsey cavity of separated-oscillating-field resonators with the cavity mode linewidth is much wider than the atomic gain linewidth. Thus we call it bad-cavity Ramsey laser. All atoms are pumped onto the upper lasing state **a** before entering the first cavity of seperated field, and the lower lasing state is **b**. We assume all the atoms have the same velocities υ, that means what we consider here is a homogeneous laser system. 
And for the sake of simplicity, we consider the two-standing waves linear optical Ramsey configuration with a grid as spatial selector [20, 21]. Our treatment can be extended to other configurations as in [22–24]. The length of each oscillating part is *l*, and the length of the free drift region is *L*. The corresponding Hamiltonian is\n\n$$H=\\hbar\\omega\\hat{a}^{\\dagger}\\hat{a}+\\hbar\\sum_{j}\\left[\\omega_{a}^{j}(t)\\sigma_{a}^{j}+\\omega_{b}^{j}(t)\\sigma_{b}^{j}\\right]\\tag{1}$$\n \n$$+\\hbar\\mathrm{g}\\sum_{j}\\Gamma_{j}(t)(\\hat{a}^{\\dagger}\\hat{\\sigma}_{-}^{j}e^{-i\\vec{k}\\cdot\\vec{r}_{j}}+\\hat{\\sigma}_{+}^{j}\\hat{a}e^{i\\vec{k}\\cdot\\vec{r}_{j}}),$$\n\nwhere ˆ*a*, ˆ*a* † are the annihilation and creation operators of the field mode inside the cavity, with the frequency ω, σ *j a* = (|*a*i h*a*|) *j* and σ *j b* = (|*b*i h*b*|) *j* are the projection operators for the jth atom corresponding to the upper and lower lasing levels,", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2670.pdf" - }, - { - "text": "169 'Presence' means according to the Eurostat ESAW methodology 2013: *'Presence of the victim or of a third person in itself creating a danger for oneself and possibly others'* (p. 28).\n\n170 European Commission, 2009: Causes and circumstances of accidents at work in the EU (p. 106).\n\n171 The DIFR is defined as the total number of reported disabling and fatal injuries per 1 million hours worked. See: Government of Canada, 2021: 2019 Annual Report - Occupational Injuries amongst Employees Under Federal Jurisdiction (9.39 per 1 million hours worked).\n\n173 Franco, 2012: Bernardino Ramazzini and women workers' health in the second half of the XVIIth century 174 Ramazzini, 1713: De morbis artificum diatriba (p. 199ff). 
Latin original text: *'Infortunium ergo, quod huiusmodi Artificibus ex suis opificiis, praeter vitae sedentaria incommoda, est Myopia, affectus nempe oculorum satis notus, cum scilicet visibilia oculis propius admovere necesse est.'*\n\n175 EU-OSHA, 2019: The value of occupational safety and health and the societal costs of work-related injuries and diseases\n\n176 ILO Encyclopaedia: Work-related Diseases and Occupational Diseases: The ILO International List\n\n177 European Commission, 2013: Report on the current situation in relation to occupational diseases' systems in EU Member States and EFTA/EEA countries, in particular relative to Commission Recommendation 2003/670/EC concerning the European Schedule of Occupational Diseases and gathering of data on relevant related aspects, here\n\n178 Examples from national recognition schemes: Arbetsmiljöstatistik Rapport 2021:01. Arbetsskador 2020 Occupational accidents and work-related diseases; Germany: DGUV, 2021: DGUV, 2021: Geschäfts- und Rechnungsergebnisse der gewerblichen Berufsgenossenschaften und Unfallversicherungsträger der öffentlichen Hand 2020\n\n179 Eurostat (Statistics explained): Occupational diseases statistics\n\n180 Eurostat: European occupational diseases statistics (EODS) and EU index of occupational diseases (2013=100) – experimental statistics\n\n181 European Federation of Building and Woodworkers (EFBWW), 2013: Asbestos‐related occupational diseases in Central and East European Countries (p. 11).\n\n182 See for example, the ILO List of Occupational Diseases Recommendation R194, revised 2010.\n\n183 European Commission, 2018: Scientific Committee on Health, Environmental and Emerging Risks SCHEER - Statement on emerging health and environmental issues, here see pp. 9-10. 
The SCHEER refers to this article: Holden et al, 2016: Global Prevalence of Myopia and High Myopia and Temporal Trends from 2000 through 2050\n\n184 European Commission, 2022:\n\nhttps://ec.europa.eu/social/main.jsp?langId=en&catId=89&furtherNews=yes&newsId=10463\n\n185 EU-OSHA, 2021: Executive summary - Musculoskeletal disorders: association with psychosocial risk factors at work\n\n*186 Eurostat: Life expectancy by age and sex, here*\n\n187 OECD/European Union, 2020: Health at a Glance: Europe 2020: State of Health in the EU Cycle (p. 112, p. 116).\n\n188 OECD/European Union, 2022: Health at a Glance: Europe 2022: State of Health in the EU Cycle (p. 87ff).\n\n189 Eurostat: Life expectancy by age and sex\n\n190 Eurostat: Life expectancy by age and sex\n\n191 OECD/European Union, 2020: Health at a Glance: Europe 2020: State of Health in the EU Cycle (p. 120).\n\n192 OECD/European Union, 2022: Health at a Glance: Europe 2022: State of Health in the EU Cycle (p. 95ff). 193 Joumard et al., 2008: Health Status Determinants: Lifestyle, Environment, Health Care Resources and Efficiency\n\n194 Mazeikaite et al., 2021: What Drives CrossCountry Health Inequality in the EU? Unpacking the Role of Socioeconomic Factors\n\n195 Eurofound, 2017: European Quality of Life Survey 2016 - Overview Report\n\n196 Eurostat: Self-perceived health by sex, age and income quintile\n\n197 Eurofound, 2017: European Quality of Life Survey 2016 - Overview Report (p. 
18).\n\n198 Eurostat's 'Morbidity Task Force' is working on this: Archive:Morbidity statistics methodology pilot studies – examples, here; and, Eurostat: Morbidity statistics in the EU - Report on pilot studies - 2014 edition, here\n\n199 Pace & Buchow, 2014: Morbidity Statistics in the EU – key results from pilot studies in sixteen Member States 200 More detailed country data are available in the Eurostat section on Health in the European Union – facts and figures Country Health Profiles, here\n\n201 European Commission: European Core Health Indicators\n\n172 Safe Work Australia, 2021: Comparative performance monitoring report 23rd edition (p. 12ff) (3.6 claims per 1,000 employees).", - "page_start": 147, - "page_end": 147, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "$$\\tilde{N}_{b s s}=\\frac{R\\tau}{2}\\left[1-\\frac{C_{0}-C_{1}+C_{2}}{g\\tau}\\sqrt{\\frac{\\kappa}{R(B_{0}-B_{1}+B_{2})}}\\right]$$\n\n.\n\nA detailed analysis about the stability of the steady-state can be found such as in [28]. In this paper, we assume the steadystate solution is stable.\n\n*Laser linwidth:* Suppose the quantum fluctuation is small, the evolution of the fluctuations can be obtained by making a linearization of the c-number Langevin equations around the steady-state solution. Then the measured spectra of field fluctuations will be directly related to these quantities. By Fourier transformations of the linearized equation, we get the amplitude and phase quadrature components δ*X*(ω) and δ*Y*(ω) [26]. Well above threshold, one can neglect the amplitude fluctuations, and the linewidth inside the cavity is related to the phase-diffusion coefficient [25]. 
For small fluctuation of laser phase, the spectrum of phase fluctuations is simply related to the spectrum of the phase quadrature component of the field fluctuations, namely,\n\n$$(\\delta\\varphi^{2})_{\\omega}=\\frac{1}{I_{0}}(\\delta Y^{2})_{\\omega}.$$\n\nIn the region γ*ab* ≪ *T* −1 ≪ τ −1 ≪ κ/2, as in the recently proposed active optical clock [15] with atomic beam. The phase quadrature component of the field fluctuations can be expressed as\n\n$$(\\delta\\varphi^{2})_{\\omega}$$\n \n$$\\approx\\frac{(\\kappa/2+\\gamma_{ab})^{2}}{I_{0}\\omega^{2}[(\\kappa/2+\\gamma_{ab})^{2}+\\omega^{2}]}\\frac{g^{2}}{4(\\kappa/2+\\gamma_{ab})^{2}}\\{4\\gamma_{ab}\\hat{N}_{ass}$$\n \n$$+2R[(A_{0}+B_{0})+(A_{2}+B_{2})]$$\n \n$$+Rp[(C_{0}-C_{0}^{*})^{2}+(C_{1}-C_{1}^{*})^{2}+(C_{2}-C_{2}^{*})^{2}]\\}.\\tag{9}$$\n\nSince the time τ and *T* is much shorter than the time scale of the atomic dampings, we can neglect the dampings when calculate *Ai* , *Bi* , *Ci* . By using\n\n*A*0 = cos2 Ω*R* 2 τ ! , *A*1 = cos2 Ω*R* 2 τ ! , *A*2 = 1 − sin2 (Ω*R*τ) cos2 ∆2 2 *T* ! , *B*0 = sin2 Ω*R* 2 τ ! , *B*1 = sin2 Ω*R* 2 τ ! , *B*2 = sin2 (Ω*R*τ) cos2 ∆2*T* 2 ! , (*C*0 − *C* ∗ 0 ) 2 = 0, (*C*1 − *C* ∗ 1 ) 2 = − sin2 (Ω*R*τ)sin2 (∆2*T*), (*C*2 − *C* ∗ 2 ) 2 = − sin2 (Ω*R*τ)sin2 (∆2*T*),\n\nwe get\n\n$$(\\delta\\varphi^{2})_{\\omega}=\\frac{\\left(\\kappa/2+\\gamma_{ab}\\right)^{2}}{\\omega^{2}[(\\kappa/2+\\gamma_{ab})^{2}+\\omega^{2})]}\\frac{\\gamma_{ab}^{2}}{(\\kappa/2+\\gamma_{ab})^{2}}\\{D_{ST}\\tag{10}$$\n \n$$+\\ D_{Ram}[2-p\\sin^{2}(\\Omega_{R}\\tau)\\sin^{2}(\\Delta_{2}T)]\\},$$\n\nwhere Ω*R* is the Rabi frequency on resonance, *DS T*=*g* 2*N*˜ *ass*/*I*0γ*ab* , *DRam* = *g* 2*R*/2*I*0γ 2 *ab*, and ∆2 = ω − (ω*a*2 − ω*b*2) presents the detuning in the free drift region. 
*p* is a parameter, which characterizes the pumping statistics: a Poissonian excitation statistics corresponds to *p* = 0 , and for a regular statistics we have *p* = 1.\n\nThen the linewidth of Ramsey laser with bad cavity is given by\n\n$$D=\\frac{\\gamma_{ab}^{2}}{(\\kappa/2+\\gamma_{ab})^{2}}\\{D_{ST}+D_{Ram}[2-p\\sin^{2}(\\Omega_{R}\\tau)\\sin^{2}(\\Delta_{2}T)]\\}.\\tag{11}$$\n\nSince *DS T* /*DRam* ≪ 1 in our situation, and in the case of maximal photon number, the steady state value of *N*˜ *ass* is about *R*τ/2. Then we get the\n\n$$D\\approx\\frac{2g^{2}}{\\kappa}[2-p\\sin^{2}(\\Omega_{R}\\tau)\\sin^{2}(\\Delta_{2}T)].\\tag{12}$$\n\nFrom the expression above, we find that the pumping statistic can influence the linewidth. For regular injection (*p* = 1), the linewidth is the narrowest, while for Poissonian injection (*p* = 0), the linewidth is the broadest. But even for regular injection, the linewidth is larger than the case of one cavity. That means the mechanism of separated-field does not play the role in reducing the linewidth as in the conventional optical Ramsey method, which is counter-intuitive. However, the separated fields are indispensable for any phase detection like atom interferometry. The details about the method of active atom interferometry will appear elsewhere.\n\nOur method of Ramsey laser is suitable for any atoms with metastable energy level, as an example, we choose the transition from the metastable state 4*s*4*p* 3*P*1 to the ground state 4*s* 2 1*S* 0 of 40Ca to check the striking feature of this laser: subnatural linewidth. As mentioned in [29], the corresponding natural linewidth of the metastable state 4*s*4*p* 3*P*1 is 320Hz. As in the recently proposed active optical clock with atomic beam [15], the velocity of the atoms in thermal atomic beam is about 500m/s, and the length of the interaction region is about 1mm, then the time for the atom to traverse each coherentinteraction region is on the order of magnitude of 1 µs. 
If a bad cavity with κ is on the order of 107Hz, the relation κ/2 ≫ τ −1 is satisfied. Then when *g* is on the order of the magnitude of kHz, which can be easily achieved for current technique [30], from the linewidth expression of Eq.(16) the order of magnitude of linewidth is below 1 Hz. This means the linewidth of a Ramsey laser can be more than two orders of magnitude narrower than the atomic natural linewidth, therefore our Ramsey method provides a new subnatural spectroscopy technique. And since it is stimulated-emission spectrum, it overcomes the difficulty in other subnatural linewidth spectroscopy schemes where the quick reduction of signal to noise ratio is a formidable limit. We should point out that this Ramsey laser does not escape the limitation of all active optical clock: in order to pump atoms to the excited state effectively and to be stimulated emit photon during the lifetime of a metastable state, this new method will only be applicable to some special transitions [17].", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2670.pdf" - }, - { - "text": "# 2.3. FastBlue tracer injections\n\nTable 2\n\nMice were briefly anesthetized during the procedure, induced with 3% to 5% isoflurane, and then maintained at 1.5% to 2% as required. Hindlimbs were taped with the plantar surface of the paw facing up, and a custom, 26G removable needle with a 30˚ bevel, attached to a 25-mL Hamilton syringe, was inserted between the 2 distal-most footpads, towards the medial aspect of the hindpaw. The needle was then rotated 90˚, so the bevel faced medially. Furthermore, 4-mL FastBlue (FB; 2% in sterile phosphate-buffered saline (PBS); CAS# 73819-41-7; Polysciences, Inc, Warrington, PA) per paw was then slowly injected, and the needle was left in place for 10 seconds, before rotating and carefully retracting to avoid backflow of FB along the needle track. 
This prevented the FB bolus from contacting the sural innervation territory of the lateral hindpaw, restricting it largely to the tibial innervation territory of the glabrous hindpaw skin.\n\n# 2.4. Immunohistochemistry and image acquisition\n\nMice were anesthetized with an overdose of pentobarbital (20 mg) and transcardially perfused with a fixative containing 4% formaldehyde. L3 to L5 DRGs were removed and postfixed for another 2 hours, cryoprotected in 30% sucrose overnight, and then embedded in optimal cutting temperature media (OCT; Tissue Tek, Alphen aan den Rijn, the Netherlands). Dorsal root ganglia were sectioned on a Leica CM1950 cryostat at 30 µm, with every section collected serially on 5 Superfrost Plus slides (VWR, Lutterworth, United Kingdom) and each slide containing 1 in every 5 sections (4-7 sections per slide). One slide per DRG was selected at random and was washed with PBS, before being incubated with appropriate primary antibodies (Table 2) diluted in 5% normal donkey serum and 0.3% Triton X-100 in PBS for 3 days at 4˚C. After PBS washes, slides were incubated with appropriate secondary antibodies (Table 2) in the same PBS/ (normal donkey serum) NDS/Triton-X100 solution as for primaries, overnight at room temperature. Slides were washed and coverslipped with VectaShield Vibrance Hardset mounting media (Vector Labs, Newark, CA), with 4',6-diamidino-2-phenylindole included in mounting media where FB-labelled cells were not being examined. Sections were imaged using a Zeiss LSM900 Airyscan confocal microscope equipped with 405-, 488-, 561-,\n\n| Primary and secondary antibodies used in the study. 
| | | |\n| --- | --- | --- | --- |\n| Antibody | Source | Identifiers | Working dilution |\n| Anti-GFP (Chicken polyclonal) | Abcam, plc, Cambridge, United Kingdom | Cat#: ab13970 | 1:1000 |\n| | | RRID: AB_300798 | |\n| Anti-NeuN (Guinea pig polyclonal) | Synaptic Systems, G ¨ottingen, Germany | Cat#: 266004 | 1:500 |\n| | | RRID: AB_2619988 | |\n| Anti-mCherry (Rat monoclonal) | Invitrogen, Waltham, MA; Thermo Fisher Scientific, | Cat#: M11217 | 1:500 |\n| United Kingdom | | RRID: AB_2536611 | |\n| Anti-Atf3 (Rabbit polyclonal) | Novus Biologicals, Minneapolis, MN | Cat#: NBP1-85816 | 1:500 |\n| | | RRID: AB_11014863 | |\n| Anti-NF200 (Rabbit polyclonal) | Sigma-Aldrich, Saint Louis, MO | Cat#: N4142 | 1:1000 |\n| | | RRID: AB_477272 | |\n| Anti-TrkA (Goat polyclonal) | R&D Systems, Minneapolis, MN | Cat#: AF1056 | 1:500 |\n| | | RRID: AB_2283049 | |\n| Anti-TDP43 (Rabbit polyclonal) | Abcam, plc, Cambridge, United Kingdom | Cat#: ab133547 | 1:100 |\n| | | RRID: AB_2920621 | |\n| Anti-RFP (Mouse monoclonal) | Thermo Fisher Scientific, United Kingdom | Cat#: MA5-15257 | 1:200 |\n| | | RRID: AB_10999796 | |\n| Anti-RFP (Chicken polyclonal) | Sigma-Aldrich, United Kingdom | Cat#: AB3528 | 1:200 |\n| | | RRID: AB_11212735 | |\n| Alexa Fluor 488 Donkey Anti-Chicken IgY | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 703-545-155 | 1:500 |\n| (Donkey polyclonal) | | RRID: AB_2340375 | |\n| Alexa Fluor 647 Donkey Anti-Guinea pig IgG | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 706-605-148 | 1:250 |\n| (Donkey polyclonal) | | RRID: AB_2340476 | |\n| Rhodamine Red-X Donkey Anti-Rat IgG (Donkey | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 712-295-153 | 1:100 |\n| polyclonal) | | RRID: AB_2340676 | |\n| Alexa Fluor 647 Donkey Anti-Rabbit IgG (Donkey | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 711-605-152 | 1:250 |\n| polyclonal) | | RRID: AB_2492288 | |\n| Rhodamine Red-X Donkey Anti-Rabbit IgG | Jackson ImmunoResearch, Ely, United Kingdom | 
Cat#: 711-295-152 RRID: AB_2340613 | 1:100 |\n| (Donkey polyclonal) | | | |\n| Alexa Fluor 546 Goat Anti-Chicken IgG (Goat | Thermo Fisher Scientific, United Kingdom | Cat#: A11040 | 1:400 |\n| polyclonal) | | RRID: AB_2534097 | |\n| Alexa Fluor 488 Goat Anti-Rabbit IgG (Goat | Thermo Fisher Scientific, United Kingdom | Cat#: A11008 | 1:400 |\n| polyclonal) | | RRID: AB_143165 | |\n| Alexa Fluor 546 Donkey Anti-Mouse IgG (Donkey | Thermo Fisher Scientific, United Kingdom | Cat#: A10036 | 1:400 |\n| polyclonal) | | RRID: AB_2534012 | |\n\nGFP, green fluorescent protein; RFP, red fluorescent protein", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed2.pdf" - }, - { - "text": "also shift the spinodal and binodal lines as compared to the locations of these lines in the phase diagram for the pure solvent [41]. As a consequence, the solute concentration influences the hole nucleation rate. More importantly, the solute particles may also destabilise the dewetting fronts. As a result, one may find strongly ramified structures in all three systems [23, 25, 40, 42]. A selection of images exhibiting some of the possible structures is displayed in Fig.1.\n\nFor volatile solvents, the contact lines retract even for wetting fluids. It has been found that such evaporatively receding contact lines may deposit very regular line or ring patterns parallel to the moving contact line [24, 43]. The deposition of a single ring of colloids from a evaporating drop of colloidal suspension is well known as the 'coffee stain effect' [44]. Detailed investigations reveal the emergence of rich structures including multiple irregular rings, networks, regular droplet patterns, sawtooth patterns, Sierpinski carpets, and – in the case of DNA – liquid crystalline structures [22, 30, 45–49]. The deposition of regularly spaced straight lines orthogonal to the moving contact line has also been reported [50]. 
Droplet patterns may as well be created employing solvent-induced dewetting of glassy polymer layers below the glass transition temperature [51–53].\n\nNote that the dewetting of pure volatile liquids has also been studied experimentally [54] and theoretically [55–58]. In this case, different contact line instabilities have been observed for evaporating liquid drops [59, 60].\n\nIn the present article we review and preview the experiments and in particular the various modelling approaches for dewetting suspensions of (nano-)particles in volatile partially wetting solvents. After reviewing the basic experimental results in Section II, we discuss in Section III several theoretical approaches. In particular, we present a kinetic Monte Carlo model in Section III A, a dynamic density functional theory in Section III B, and a thin film evolution equation in Section III C. Finally, we conclude in Section IV by discussing advantages and shortcomings of the individual approaches and future challenges to all of them.\n\n### II. EXPERIMENT WITH NANOPARTICLE SOLUTIONS\n\nWe focus on experiments that use monodisperse colloidal suspensions of thiol-passivated gold nanoparticles in toluene [33, 34, 37–40, 61]. The gold core of 2 – 3 nm diameter is coated by a layer of alkyl-thiol molecules. The length of the carbon backbone of the thiol used in the experiments ranges from 6 to 12 carbon atoms (C6 to C12) [40]. By varying the chain length, one can control", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [20] C. Tomlinson, \"On the motion of certain liquids on the surface of water,\" Phil. Mag. Ser. 4 39, 32–48 (1870).\n- [21] C. G. Marangoni, \"Ueber die Ausbreitung der Tropfen einer Flussigkeit auf der Oberfl ¨ ache einer ¨ anderen,\" Ann. Phys. (Poggendorf) 143, 337–354 (1871).\n- [22] O. Karthaus, L. Grasjo, N. Maruyama, and M. 
Shimomura, \"Formation of ordered mesoscopic poly- ¨ mer arrays by dewetting,\" Chaos 9, 308–314 (1999).\n- [23] X. Gu, D. Raghavan, J. F. Douglas, and A. Karim, \"Hole-growth instability in the dewetting of evaporating polymer solution films,\" J. Polym. Sci. Pt. B-Polym. Phys. 40, 2825–2832 (2002).\n- [24] S. W. Hong, J. F. Xia, and Z. Q. Lin, \"Spontaneous formation of mesoscale polymer patterns in an evaporating bound solution,\" Adv. Mater. 19, 1413–1417 (2007).\n- [25] G. Liu, C. F. Zhang, J. Zhao, and Y. X. Zhu, \"Study of the morphology of the three-phase contact line and its evolution by morphological examination after droplet evaporation of aqueous polymer solutions,\" Langmuir 24, 7923–7930 (2008).\n- [26] M. Mertig, U. Thiele, J. Bradt, G. Leibiger, W. Pompe, and H. Wendrock, \"Scanning force microscopy and geometrical analysis of two-dimensional collagen network formation,\" Surface and Interface Analysis 25, 514–521 (1997).\n- [27] M. Mertig, U. Thiele, J. Bradt, D. Klemm, and W. Pompe, \"Dewetting of thin collagenous precursor films,\" Appl. Phys. A 66, S565–S568 (1998).\n- [28] U. Thiele, M. Mertig, and W. Pompe, \"Dewetting of an evaporating thin liquid film: Heterogeneous nucleation and surface instability,\" Phys. Rev. Lett. 80, 2869–2872 (1998).\n- [29] H. Maeda, \"An atomic force microscopy study of ordered molecular assemblies and concentric ring patterns from evaporating droplets of collagen solutions,\" Langmuir 15, 8505–8513 (1999).\n- [30] I. I. Smalyukh, O. V. Zribi, J. C. Butler, O. D. Lavrentovich, and G. C. L. Wong, \"Structure and dynamics of liquid crystalline pattern formation in drying droplets of DNA,\" Phys. Rev. Lett. 96, 177801 (2006).\n- [31] L. Zhang, S. Maheshwari, H. C. Chang, and Y. X. Zhu, \"Evaporative self-assembly from complex DNA-colloid suspensions,\" Langmuir 24, 3911–3917 (2008).\n- [32] M. Maillard, L. Motte, A. T. Ngo, and M. P. Pileni, \"Rings and hexagons made of nanocrystals: A Marangoni effect,\" J. Phys. 
Chem. B 104, 11871–11877 (2000).\n- [33] G. L. Ge and L. Brus, \"Evidence for spinodal phase separation in two-dimensional nanocrystal selfassembly,\" J. Phys. Chem. B 104, 9573–9575 (2000).", - "page_start": 26, - "page_end": 26, - "source_file": "1001.2669.pdf" - }, - { - "text": "*202* Mazeikaite et al., 2021: What Drives CrossCountry Health Inequality in the EU? Unpacking the Role of Socioeconomic Factors\n\n203 Eurostat: LFS 2020 Ad hoc module, here\n\n*204 Eurostat: Persons reporting a work-related health problem by sex, age and occupation, here*\n\n205 Murray & Lopez, 1996: The Global burden of disease : a comprehensive assessment of mortality and disability from diseases, injuries, and risk factors in 1990 and projected to 2020, here\n\nUpdate: GBD 2017 Risk Factor Collaborators, 2018: Global, regional, and national comparative risk assessment of 84 behavioural, environmental and occupational, and metabolic risks or clusters of risks for 195 countries and territories, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017, here\n\n206 European Burden of Disease Network\n\n207 WHO definition: *'One DALY represents the loss of the equivalent of one year of full health. DALYs for a disease or health condition are the sum of the years of life lost to due to premature mortality (YLLs) and the years lived with a disability (YLDs) due to prevalent cases of the disease or health condition in a population.'* here\n\n208 Murray & Lopez, 1996:. 
The Global burden of disease : a comprehensive assessment of mortality and disability from diseases, injuries, and risk factors in 1990 and projected to 2020, here\n\n209 IHME/GDB: GDB Compare - Vizhub, Visualisation of global health data, here\n\n210 Takala et al., 2017: Comparative Analysis of the Burden of Injury and Illness at Work in Selected Countries and Regions\n\nEzzati et al., 2004: Comparative quantification of health risks: global and regional burden of disease attributable to selected major risk factors\n\nNelson et al., 2005: The global burden of selected occupational disease and injury risks: Methodology and summary\n\n211 WHO: Protecting workers' health, Key facts\n\n212 Pneumoconiosis: a group of lung diseases resulting from inhalation of particles of industrial substances, particularly inorganic dusts.\n\n213 IHME (Institute for Health Metrics and Evaluation) (2016). Rethinking development and health: http://ghdx.healthdata.org/gbd-results-tool?params=gbd-api-2016 permalink/7193a516026f9a7df17cf73ea9ce3a5d *Findings from the Global Burden of Disease Study.* Seattle, WA: IHME. IHME Database.\n\n214 WHO/ILO, 2021: WHO/ILO joint estimates of the work-related burden of disease and injury, 2000–2016: Global monitoring report\n\n215 Ibid., pp. 55-56.\n\n216 WHO/ILO, 2021: WHO/ILO joint estimates of the work-related burden of disease and injury, 2000–2016: Global monitoring report (p. 18).\n\n217 The figures of the working age population of 16 years and above are based on EU-OSHA calculations of data provided by the United Nations World Population Prospects database: United Nations, Department of Economic and Social Affairs, Population Division (2019). World Population Prospects 2019, Online Edition. Rev. 
1., File POP/1-1: Total population (both sexes combined) by region, subregion and country, annually for 1950-2100 (thousands), here\n\n218 WHO, Occupational Burden of Disease Application, https://who-ilo-joint-\n\nestimates.shinyapps.io/OccupationalBurdenOfDisease/ and EU-OSHA calculations\n\n219 International Commission on Occupational Health (ICOH) data based on new and until today unpublished calculations: Takala et al.: Comparative Global Estimates on the Work-related Burden of Accidents and Diseases (preprint)\n\n220 WHO, Occupational Burden of Disease Application, https://who-ilo-joint-\n\nestimates.shinyapps.io/OccupationalBurdenOfDisease/ and EU-OSHA calculations\n\n221 WHO/ILO, 2021: WHO/ILO joint estimates of the work-related burden of disease and injury, 2000–2016: Global monitoring report (pp. 55-56).\n\n222 WHO applied for the global estimates as reference the population with an age above 15 years. At EU level, 16 years — probably even older — is the minimum age to start work or an apprenticeship. 223 WHO, Occupational Burden of Disease Application, https://who-ilo-joint-\n\nestimates.shinyapps.io/OccupationalBurdenOfDisease/ and EU-OSHA calculations", - "page_start": 148, - "page_end": 148, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Figure 3.18. Generd Pknform Effects", - "page_start": 252, - "page_end": 252, - "source_file": "00-80T-80.pdf" - }, - { - "text": "- 54. Perret, Aurelie. \"Les traboules de Lyon\" (http://www.histoire-pour-tous.fr/tourisme/101-france-sud-est/5105-les -traboules-de-lyon.html). *histoire-pour-tous.fr*. SF Webmedia. Retrieved 31 July 2015.\n- 55. Curnonsky, Marcel E. Grancher (1935). *Lyon, capitale mondiale de la gastronomie* (https://books.google.com/ books?id=D481HQAACAAJ&q=curnonsky+lyon). Editions Lugdunum. Retrieved 30 July 2015.\n- 56. Buford, Bill (12 February 2011). 
\"Why Lyon is food capital of the world\" (https://www.theguardian.com/travel/2 011/feb/13/bill-buford-lyon-food-capital). *The Guardian*. Retrieved 11 December 2014.\n- 57. \"Priay Il y a 80 ans \" La mère Bourgeois \" obtenait 3 étoiles\" (http://www.leprogres.fr/ain/2013/01/09/priay-il-ya-80-ans-la-mere-bourgeois-obtenait-3-etoiles). *leprogres.fr*. Le Progres. Retrieved 30 July 2015.\n- 58. \"Histoire de la gastronomie 2/4\" (https://archive.today/20120605042553/http://www.franceculture.fr/emission-l a-fabrique-de-l-histoire-histoire-de-la-gastronomie-24-2010-11-23.html). *franceculture.fr*. Radio France. Archived from the original (http://www.franceculture.fr/emission-la-fabrique-de-l-histoire-histoire-de-la-gastron omie-24-2010-11-23.html) on 5 June 2012. Retrieved 30 July 2015.\n- 59. Gaudry, François-Régis (26 September 2014). \"Paul Bocuse: derniers secrets du \"pape\" de la gastronomie française\" (http://www.lexpress.fr/styles/saveurs/paul-bocuse-derniers-secrets_1578426.html). *lexpress.fr*. Groupe Express-Roularta. Retrieved 30 July 2015.\n- 60. \"Cuisine et boissons Lyon et ses environs\" (http://www.routard.com/guide/lyon/372/cuisine_et_boissons.htm). *routard.com*. Cyberterre / Hachette tourisme. Retrieved 30 July 2015.\n- 61. Bassets, Marc (20 June 2023). \"The secret of the taco: modern, multicultural France's fast food phenomenon\" (https://english.elpais.com/culture/2023-06-20/the-secret-of-the-taco-modern-multicultural-frances-fast-food-p henomenon.html). *El País*. Retrieved 13 August 2024.\n- 62. Collins, Lauren (12 April 2021). \"The Unlikely Rise of the French Tacos\" (https://www.newyorker.com/magazin e/2021/04/19/the-unlikely-rise-of-the-french-tacos). *The New Yorker*. Retrieved 13 August 2024.\n- 63. \"Avant d'être une compétition, le Trophée des champions est une vitrine pour la Ligue 1\" (https://web.archive. 
org/web/20150731090955/http://webfootballclub.fr/avant-detre-une-competition-le-trophee-des-champions-est -une-vitrine-pour-la-ligue-1-8274). *webfootballclub.fr*. Web Football Club. Archived from the original (http://web footballclub.fr/avant-detre-une-competition-le-trophee-des-champions-est-une-vitrine-pour-la-ligue-1-8274) on 31 July 2015. Retrieved 31 July 2015.\n- 64. Joly, Maxime. \"Le Grand Stade de Lyon pourrait rapporter 70 millions d'euros par an à l'OL\" (https://web.archi ve.org/web/20150905175715/http://sport24.lefigaro.fr/le-scan-sport/business/2015/03/27/27004-20150327AR TFIG00142-le-grand-stade-de-lyon-pourrait-rapporter-70-millions-d-euros-par-an-a-l-ol.php). *lefigaro.fr*. Le Figaro. Archived from the original (http://sport24.lefigaro.fr/le-scan-sport/business/2015/03/27/27004-2015032 7ARTFIG00142-le-grand-stade-de-lyon-pourrait-rapporter-70-millions-d-euros-par-an-a-l-ol.php) on 5 September 2015. Retrieved 31 July 2015.\n- 65. \"Lyon 2e : 60 ans de sport de glace\" (http://www.leprogres.fr/sortir/2015/05/26/lyon-2e-60-ans-de-sport-de-gla ce). *leprogres.fr*. Le Progres. Retrieved 31 July 2015.\n- 66. \"Birdy Kids cultural ambassador of Lyon\" (https://web.archive.org/web/20160305221701/http://www.lyon.fr/e venement/exposition/birdy-kids.html). *lyon.fr*. Archived from the original (http://www.lyon.fr/evenement/expositi on/birdy-kids.html) on 5 March 2016.\n- 67. \"Le nouveau profil de la population active immigrée\" (http://www.insee.fr/fr/themes/document.asp?reg_id=8&r ef_id=19297). Institut national de la statistique et des études économiques.\n- 68. Bienfait, Jean (1968). \"La population de Lyon à travers un quart de siècle de recensements douteux (1911- 1936)\" (https://www.persee.fr/doc/geoca_0035-113x_1968_num_43_1_2625). *Géocarrefour*. **43** (1). Revue de géographie de Lyon: 80. doi:10.3406/geoca.1968.2625 (https://doi.org/10.3406%2Fgeoca.1968.2625). Retrieved 16 October 2020.\n- 69. 
*Des villages de Cassini aux communes d'aujourd'hui*: Commune data sheet Lyon (http://cassini.ehess.fr/fr/htm l/fiche.php?select_resultat=20464), EHESS (in French).\n- 70. EHESS. \"Des villages de Cassini aux communes d'aujourd'hui\" (http://cassini.ehess.fr/fr/html/). Retrieved 9 April 2022.\n- 71. \"Statistiques locales Métropole de Lyon : Intercommunalité-Métropole Population municipale (historique depuis 1876)\" (https://statistiques-locales.insee.fr/#c=indicator&i=pop_depuis_1876.pop&s=2021&selcodgeo= 200046977&t=A01&view=map4). INSEE. Retrieved 12 July 2024.\n- 72. \"IMG1B Population immigrée par sexe, âge et pays de naissance en 2020 − Recensement de la population – Résultats pour toutes les communes, départements, régions, intercommunalités... −Étrangers - Immigrés en 2020 | Insee\" (https://www.insee.fr/fr/statistiques/7633127?sommaire=7633727&geo=COM-69123).\n- 73. \"欧州の補習授業校一覧(平成25年4月15日現在) (https://web.archive.org/web/20140330190146/http://www. mext.go.jp/a_menu/shotou/clarinet/002/006/001/002/004.htm)\" (Archive (https://web.archive.org/web/2007121 3144924/http://www.mext.go.jp/a_menu/shotou/clarinet/002/006/001/002/004.htm)). Ministry of Education, Culture, Sports, Science and Technology (MEXT). Retrieved on 10 May 2014. Cite Scolaire: \"Cité Scolaire Internationale, 2 place de Montréal,69361 LYON CEDEX 07 FRANCE\" and Lyon: \"Maison Berty Albrecht 14, Place Grandclement, 69100 Viueurbanne, FRANCE\"", - "page_start": 23, - "page_end": 23, - "source_file": "wikipedia4.pdf" - }, - { - "text": "figure 3.17. 
Structural Complications Due to Sweepback", - "page_start": 250, - "page_end": 250, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2449.pdf", - "query": "Give me the advantages of Ferromagnetic semiconductors", - "target_page": 1, - "target_passage": "Ferromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. Olejnik,1, 2 P. Wadley,3 J. Haigh,3 K. W. Edmonds,3 R. P. Campion,3 A. W. Rushforth,3 B. L. Gallagher,3\n\nC. T. Foxon,3 T. Jungwirth,2, 3 J. Wunderlich,1, 2 S. S. Dhesi,4 S. Cavill,4 G. van der Laan,4 and E. Arenholz5\n\n1Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\nInstitute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic 3School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom\n\n4Diamond Light Source, Harwell Science and Innovation Campus,\n\n5Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n(Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\n2\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. 
The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p-type non-magnetic spacers2 . However, the Curie temperature TC of (Ga,Mn)As is currently limited to 185 K in single layers3 , and is typically much lower for layers embedded within a heterostructure2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively4,5. Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature7 . Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature8,9. Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition, which may further disrupt the interface order. 
The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples7 . Demonstration of coupling between the bulk of the layers, i.e., an exchange bias effect, would provide direct evidence of the interface magnetic order. Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.\n\nHere, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. As with previous studies of FM metal/FM semiconductor bilayers4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures10,11) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref.7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260◦C, using previously established methods3,8. A low Mn concentration of x ≈ 0.03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼0 ◦C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. 
Mn and Fe L2,3 x-ray absorption and XMCD\n\nDidcot, Oxfordshire, OX11 0DE, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "measurements were performed on beamline I06 at the Diamond Light Source, and on beamline 4.0.2 at the Advanced Light Source. Total-electron yield (TEY) and fluorescence yield (FY) were monitored simultaneously using the sample drain current and the photocurrent of a diode mounted at 90◦ to the incident beam, respectively.\n\nSQUID magnetometry measurements were first performed on control Fe/GaAs(001) and (Ga,Mn)As/GaAs(001) samples, grown under the same conditions as the bilayers, to determine the magnetic anisotropies of the individual layers and the Curie temperature of the (Ga,Mn)As layer. The Fe film has a uniaxial magnetic anisotropy with easy axis along the [110] orientation, similar to previous studies6 . For the (Ga,Mn)As control sample, there is a competition between cubic and uniaxial magnetic anisotropies, with the former dominant at low temperatures and favoring easy axes along the in-plane h100i orientations, and the latter dominant close to TC (∼35 K) giving an easy axis along the [1¯10] orientation. Figure 1 shows [110] magnetization versus temperature curves and low temperature hysteresis loops for a bilayer film containing a 20 nm thick (Ga,Mn)As layer. The total remnant moment of the bilayer film decreases on cooling under zero magnetic field below the TC of the (Ga,Mn)As, indicating that this layer aligns antiparallel to the Fe magnetization at zero field. The hysteresis curve shows a two-step magnetization reversal, indicating different behavior of the Fe and (Ga,Mn)As layers, with the smaller loop attributed to the dilute moment (Ga,Mn)As film. The minor hysteresis loop shown in Fig. 1 clearly shows a shift from zero field by a bias field HE, indicating that the Fe layer induces an exchange bias in the magnetic semiconductor. 
The shape and size of the minor loop is in agreement with the hysteresis loop for the control (Ga,Mn)As sample, also shown in Fig. 1. This strongly indicates that the exchange bias affects the whole of the (Ga,Mn)As layer in the bilayer sample.\n\nSimilar behavior is observed for bilayer samples containing a 10 nm or 50 nm (Ga,Mn)As layer, with a bias field which is approximately inversely proportional to the thickness d of the ferromagnetic semiconductor layer (Fig. 1, inset). This 1/d dependence of HE was found previously for MnAs/(Ga,Mn)As bilayers4 , and is generally observed in exchanged-biased thin films12 . From this dependence it is possible to describe the exchange bias in terms of an interface energy per unit area, ∆E = MF SHEd = 0.003 erg/cm2 . This value is rather small compared to typical exchange bias systems12, reflecting the low moment density MF S of the diluted FM semiconductor layer. However, the bias field for a given (Ga,Mn)As thickness is larger than is observed for MnO/(Ga,Mn)As structures13, while the reproducibility and flexibility of the present structures is much higher due to the single-crystalline ferromagnetic nature of the Fe layer.\n\nTo confirm the presence of AFM interlayer coupling, we performed XMCD measurements at the Mn and Fe L2,3 absorption edges in order to determine the magnetic response of the individual elements. In L2,3 XMCD, electrons are excited from a 2p core level to the unoccupied 3d valence states of the element of interest by circularly polarized x-rays at the resonance energies of the transitions. The difference in absorption for opposite polarizations gives a direct and element-specific measurement of the projection of the 3d magnetic moment along the xray polarization vector. The absorption cross-section is conventionally obtained by measuring the decay products – either fluorescent x-rays or electrons – of the photoexcited core hole. 
The type of decay product measured determines the probing depth of the technique. For Mn L2,3 absorption, the probing depths for FY and TEY detection are λF Y ≈ 100 nm and λT EY ≈ 3 nm. In the current experiment, the Mn XMCD measured using FY and TEY are thus sensitive to the bulk of the (Ga,Mn)As film and the near-interface layers, respectively.\n\nFigure 2(a)-(c) shows the magnetic field dependence of XMCD asymmetry, defined as (Il − Ir)/(Il + Ir) where Il(r) is the absorption for left- (right-) circularly polarized x-rays. This is measured at the Fe and Mn L3 absorption peaks for a Fe(2 nm)/(Ga,Mn)As(10 nm) sample at 2 K. The external field is applied along the photon incidence direction, which is at 70◦ to the surface normal with an in-plane projection along the [110] axis. The XMCD data show that the Fe film displays a square hysteresis loop with a single magnetization switch, as expected for a monocrystalline Fe film with strong uniaxial magnetic anisotropy. The Mn XMCD shows a more complicated loop due to the effect of the interlayer coupling. The projected Mn moment aligns antiparallel to the Fe moment at remanence, and undergoes a magnetization reversal of opposite sign to the Fe. With further increase of the external magnetic field, the Mn moment gradually rotates away from antiparallel alignment with the Fe layer, and into the field direction. Qualitatively similar behavior is observed for the Fe(2 nm)/(Ga,Mn)As(20 nm) sample: the (Ga,Mn)As layer is aligned antiparallel to the Fe layer at zero field, although the bias field is lower by approximately a factor of two.\n\nClear differences are observed between the Mn XMCD hysteresis loops obtained using TEY and FY detection modes. For FY the magnitude of the XMCD is similar (but of opposite sign) at remanence and at high magnetic fields, whereas for TEY at remanence it is approximately a factor of two larger than at 1000 Oe. The Mn L2,3 XMCD spectra recorded at remanence and at 1000 Oe, shown in Fig. 
3, confirm this result. At remanence the FY and TEY detected XMCD have similar magnitudes. However, under a large external field the XMCD is substantially smaller in TEY than in FY, confirming that the net magnetization of the Mn ions near the interface is significantly less than in the bulk of the (Ga,Mn)As film. This is the case even up to the highest field applied (20 kOe). By applying the XMCD sum rules14 to the TEY data, and by comparing the spectra to previous measurements on well-characterized (Ga,Mn)As", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2449.pdf" - }, - { - "text": "# Interplay among helical order, surface effects and range of interacting layers in ultrathin films.\n\nF. Cinti(1,2,3), A. Rettori(2,3), and A. Cuccoli(2)\n\n(1) Department of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2J1\n\n(2)CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (FI), Italy. and\n\n(3)CNR-INFM S3 National Research Center, I-41100 Modena, Italy\n\n(Dated: June 8, 2022)\n\nThe properties of helical thin films have been thoroughly investigated by classical Monte Carlo simulations. The employed model assumes classical planar spins in a body-centered tetragonal lattice, where the helical arrangement along the film growth direction has been modeled by nearest neighbor and next-nearest neighbor competing interactions, the minimal requirement to get helical order. We obtain that, while the in-plane transition temperatures remain essentially unchanged with respect to the bulk ones, the helical/fan arrangement is stabilized at more and more low temperature when the film thickness, n, decreases; in the ordered phase, increasing the temperature, a softening of the helix pitch wave-vector is also observed. Moreover, we show also that the simulation data around both transition temperatures lead us to exclude the presence of a first order transition for all analyzed sizes. 
Finally, by comparing the results of the present work with those obtained for other models previously adopted in literature, we can get a deeper insight about the entwined role played by the number (range) of interlayer interactions and surface effects in non-collinear thin films.\n\nPACS numbers: 64.60.an,64.60.De,75.10.Hk,75.40.Cx,75.70.Ak.\n\n# I. INTRODUCTION\n\nThe study of low dimensional frustrated magnetic systems1 still raises great interest, both in consequence of theoretical aspects, related to their peculiar critical properties2 , and in view of possible technological applications3 . Indeed, beside conventional ferromagnetic or antiferromagnetic phase transitions, in many new materials other nontrivial and unconventional forms of ordering have been observed4,5. A quantity of particular interest in this context is the spin chirality, an order parameter which turned out to be extremely relevant in, e.g., magnetoelectric materials6 , itinerant MnSi7 , binary compounds as FeGe8 , glass transition of spins9 , and XY helimagnets, as Holmium, Terbium or Dysprosium10. In the latter case, a new universality class was predicted because a Z2 × SO(2) symmetry is spontaneously broken in the ordered phase2 : In fact, when dealing with such systems, in addition to the SO(2) symmetry of the spin degrees of freedom S~ i , one has to consider also the Z2 symmetry of the spin chirality κij ∝ h S~ i × S~ j iz .\n\nFor these rare-earth elements, the development of new and sophisticated experimental methods11 has allowed to obtain ultra-thin films where the non-collinear modulation is comparable with the film thickness. Under such conditions the lack of translational invariance due to the presence of surfaces results decisive in order to observe a drastic change of the magnetic structures12. 
Recent experimental data on ultra-thin Holmium films13 have been lately interpreted and discussed14,15 on the basis of detailed classical Monte Carlo (MC) simulations of a spin Hamiltonian, which is believed to give a realistic modeling of bulk Holmium. Such Hamiltonian, proposed by Bohr et al.16, allows for competitive middle-range interactions by including six different exchange constants along the c crystallographic axis, and gives a helix pitch wave-vector Qz such that Qzc ′ ≃ 30◦ , where c ′ = c/2 is the distance between nearest neighboring spin layers parallel to the ab crystallographic planes, henceforth denoted also as x − y planes, while z will be taken parallel to c. For n > 16, n being the number of spin layers in the film, a correct bulk limit is reached, while for lower n the film properties are clearly affected by the strong competition among the helical pitch and the surface effects, which involve the majority of the spin layers. In the thickness range n = 9 − 16, i.e. right for thickness values comparable with the helical pitch, three different magnetic phases emerged, with the high-temperature, disordered, paramagnetic phase and the low-temperature, long-range ordered one separated by an intriguing, intermediatetemperature block phase, where outer ordered layers coexist with some inner disordered ones, the phase transition of the latter eventually displaying the signatures of a Kosterlitz-Thouless one. Finally, for n ≤ 7 the film collapses once and for all to a quasi-collinear order.\n\nThe complex phase diagram unveiled by such MC simulations awaken however a further intriguing question: to what extent the observed behavior may be considered a simple consequence of the competition between helical order and surface effects? I.e., is it just a matter of having such a competition or does the range of interactions also play a relevant role? 
Indeed, when the range of the interactions is large enough we have a greater number of planes which can be thought of as \"surface planes\", i.e. for which the number of interacting neighbors are significantly reduced with respect to the bulk layers; therefore, we expect that the larger the interaction range, the stronger should be the surface effects. But, at the same time, the same modulation of the magnetic order can", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0510.pdf" - }, - { - "text": "samples15, the projected Mn 3d magnetic moments are obtained as −1.4 µB and +0.8 µB per ion at remanence and 1000 Oe, respectively.\n\nThe difference between these values can be understood as being due to an interface layer which is strongly antiferromagnetically coupled to the Fe layer. At zero field, both the interfacial and bulk Mn are aligned antiparallel to the Fe layer. At high fields, the bulk of the (Ga,Mn)As layer away from the interface is re-oriented into the external field direction. However, the interfacial Mn remains antiparallel to the Fe layer and thus partially compensates the XMCD signal from the bulk of the (Ga,Mn)As. From the size of the remanent and 1000 Oe magnetic moments, it can be estimated that around 25-30% of the TEY XMCD signal can be ascribed to the interfacial Mn which is strongly coupled to the Fe moments.\n\nThe interfacial Mn moments are ascribed to the proximity polarization of the (Ga,Mn)As interface by the Fe layer, such as was shown previously by XMCD as well as ab initio theory7 . Evidence for this can be observed from measurement of the Mn L2,3 XMCD signal at temperatures above the (Ga,Mn)As TC . Similar to the previous study7 , we observe a small but not negligible signal at room temperature (Fig. 3), with opposite sign to the Fe L2,3 XMCD. 
Its spectral shape is characteristic of a localized electronic configuration close to d 5 , similar to bulk (Ga,Mn)As7,9,15 but in contrast to Mn in more metallic environments such as MnxFe1−x 7 or MnAs16. A slight broadening is observed on the low energy side of the Mn L3 peak, which may be due to the different screening induced by proximity to the Fe layer. Since the measured intensity is attenuated with distance z from the surface as I = I0 exp(−z/λT EY ), the thickness of the strongly coupled interface layer is estimated to be ∼0.7 nm or 2-3\n\n- 1 T. Jungwirth, W. A. Atkinson, B. H. Lee, and A. H. Mac-Donald, Phys. Rev. B 59, 9818 (1999); P. Sankowski and P. Kacman, Phys. Rev. B 71, 201303(R) (2005); A. D. Giddings, T. Jungwirth, and B. L. Gallagher, Phys. Rev. B 78, 165312 (2008); K. Szalowski and T. Balcerzak, Phys. Rev. B 79, 214430 (2009).\n- 2 J.-H. Chung, S. J. Chung, S. Lee, B. J. Kirby, J. A. Borchers, Y. J. Cho, X. Liu, and J. K. Furdyna, Phys. Rev. Lett. 101, 237202 (2008).\n- 3 M. Wang, R. P. Campion, A. W. Rushforth, K. W. Edmonds, C. T. Foxon, and R. P. Campion, Appl. Phys. Lett. 93, 132103 (2008).\n- 4 M. Zhu, M. J. Wilson, B. L. Sheu, P. Mitra, P. Schiffer, and N. Samarth, Appl. Phys. Lett. 91, 192503 (2007); M. Zhu, M. J. Wilson, P. Mitra, P. Schiffer, and N. Samarth, Phys. Rev. B 78, 195307 (2008).\n- 5 S. Mark, C. Gould, K. Pappert, J. Wenisch, K. Brunner, G. Schmidt, and L. W. Molenkamp, Phys. Rev. Lett. 103, 017204 (2009).\n- 6 G. Wastlbauer and J.A.C. Bland, Adv. Phys. 54, 137 (2005).\n- 7 F. Maccherozzi, M. Sperl, G. Panaccione, J. Minar, S.\n\nmonolayers, assuming a uniform distribution of Mn ions and magnetic moments throughout the (Ga,Mn)As film. This is around a factor of three thinner than in Ref.7 , which could be due to the lower Mn concentration or the different preparation method of the present samples.\n\nIn summary, we have demonstrated antiferromagnetic coupling between Fe and (Ga,Mn)As layers in bilayer structures. 
A markedly different coupling is observed for the bulk of the (Ga,Mn)As layer and for Mn moments in the near-interface region. A thickness-dependent exchange bias field is observed to affect the whole of the bulk (Ga,Mn)As layer, which aligns antiparallel to the Fe layer at low fields, and switches to parallel when the external field is large enough to overcome the bias field and the magnetocrystalline anisotropy fields. In contrast, the interfacial Mn moments remain aligned antiparallel to the Fe layer even at 20 kOe, the largest field studied, and are polarized at temperatures well above the TC of the bulk (Ga,Mn)As layer. The latter observation confirms the recently reported result of Ref. 7, in which the Fe/(Ga,Mn)As bilayers were produced by a different method but showed qualitatively similar behavior of the interfacial moments. Our results shed new light on the magnetic coupling in Fe/(Ga,Mn)As hybrid layers which are of potential interest for room temperature spintronics, and also offer a means of controlling the spin orientation in a FM semiconductor.\n\nWe acknowledge support from EU grants SemiSpinNet-215368 and NAMASTE-214499, and STFC studentship grant CMPC07100. The Advanced Light Source is supported by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. We thank Leigh Shelford for help during the Diamond beamtime.\n\nPolesya, H. Ebert, U. Wurstbauer, M. Hochstrasser, G. Rossi, G. Woltersdorf, W. Wegscheider, and C. H. Back, Phys. Rev. Lett. 101, 267201 (2008).\n\n- 8 R. P. Campion, K. W. Edmonds, L. X. Zhao, K. Y. Wang, C. T. Foxon, B. L. Gallagher, and C. R. Staddon, J. Crystal Growth 247, 42 (2003).\n- 9 F. Maccherozzi, G. Panaccione, G. Rossi, M. Hochstrasser, M. Sperl, M. Reinwald, G. Woltersdorf, W. Wegscheider, and C. H. Back, Phys. Rev. B 74, 104421 (2006).\n- 10 Ch. Binek, S. Polisetty, X. He and A. Berger, Phys. Rev. Lett. 96, 067201 (2006).\n- 11 C. Won, Y.Z. Wu, E. Arenholz, J. Choi, J. Wu, and Z. Q. Qiu, Phys. Rev. Lett. 
99, 077203 (2007).\n- 12 J. Nogues and I. K. Schuller, J. Magn. Magn. Mater. 192, 203 (1999).\n- 13 K. F. Eid, M. B. Stone, K. C. Ku, O. Maksimov, P. Schiffer, N. Samarth, T. C. Shih and C. J. Palmstrom, Appl. Phys. Lett. 85, 1556 (2004).\n- 14 B. T. Thole, P. Carra, F. Sette, and G. van der Laan, Phys. Rev. Lett. 68, 1943 (1992); P. Carra, B. T. Thole, M. Altarelli, and X. Wang, Phys. Rev. Lett. 70, 694 (1993).\n- 15 T. Jungwirth, J. Masek, K. Y. Wang, K. W. Edmonds,", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2449.pdf" - }, - { - "text": "# Realization of the Exactly Solvable Kitaev Honeycomb Lattice Model in a Spin Rotation Invariant System\n\nFa Wang1\n\n1Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA\n\nThe exactly solvable Kitaev honeycomb lattice model is realized as the low energy effect Hamiltonian of a spin-1/2 model with spin rotation and time-reversal symmetry. The mapping to low energy effective Hamiltonian is exact, without truncation errors in traditional perturbation series expansions. This model consists of a honeycomb lattice of clusters of four spin-1/2 moments, and contains short-range interactions up to six-spin(or eight-spin) terms. The spin in the Kitaev model is represented not as these spin-1/2 moments, but as pseudo-spin of the two-dimensional spin singlet sector of the four antiferromagnetically coupled spin-1/2 moments within each cluster. Spin correlations in the Kitaev model are mapped to dimer correlations or spin-chirality correlations in this model. This exact construction is quite general and can be used to make other interesting spin-1/2 models from spin rotation invariant Hamiltonians. We discuss two possible routes to generate the high order spin interactions from more natural couplings, which involves perturbative expansions thus breaks the exact mapping, although in a controlled manner.\n\nPACS numbers: 75.10.Jm, 75.10.Kt\n\n## Contents\n\n| I. Introduction. 
| 1 |\n| --- | --- |\n| II. Formulation of the Pseudo-spin-1/2 from | |\n| Four-spin Cluster. | 2 |\n| III. Realization of the Kitaev Model. | 3 |\n| IV. Generate the High Order Physical Spin | |\n| Interactions by Perturbative Expansion. | 5 |\n| A. Generate the High Order Terms by Coupling | |\n| to Optical Phonon. | 5 |\n| B. Generate the High Order Terms by Magnetic | |\n| Interactions between Clusters. | 7 |\n| V. Conclusions. | 8 |\n| Acknowledgments | 8 |\n| A. Coupling between Distortions of a | |\n| Tetrahedron and the Pseudo-spins | 8 |\n| B. Derivation of the Terms Generated by | |\n| Second Order Perturbation of Inter-cluster | |\n| Magnetic Interactions | 9 |\n| References | 10 |\n\n#### I. INTRODUCTION.\n\nKitaev's exactly solvable spin-1/2 honeycomb lattice model1 (noted as the Kitaev model hereafter) has inspired great interest since its debut, due to its exact solvability, fractionalized excitations, and the potential to realize non-Abelian anyons. The model simply reads\n\n$$H_{\\rm Kitaev}=-\\sum_{x-{\\rm links}\\ }J_{x}\\tau_{j}^{x}\\tau_{k}^{x}-\\sum_{y-{\\rm links}\\ }J_{y}\\tau_{j}^{y}\\tau_{k}^{y}$$\n \n$$-\\sum_{z-{\\rm links}\\ }J_{z}\\tau_{j}^{z}\\tau_{k}^{z}$$\n\nwhere τ x,y,z are Pauli matrices, and x, y, z-links are defined in FIG. 1. It was shown by Kitaev1 that this spin-1/2 model can be mapped to a model with one Majorana fermion per site coupled to Ising gauge fields on the links. And as the Ising gauge flux has no fluctuation, the model can be regarded as, under each gauge flux configuration, a free Majorana fermion problem. The ground state is achieved in the sector of zero gauge flux through each hexagon. The Majorana fermions in this sector have Dirac-like gapless dispersion resembling that of graphene, as long as |Jx|, |Jy|, and |Jz| satisfy the triangular relation, sum of any two of them is greater than the third one1 . 
It was further proposed by Kitaev1 that opening of fermion gap by magnetic field can give the Ising vortices non-Abelian anyonic statistics, because the Ising vortex will carry a zero-energy Majorana mode, although magnetic field destroys the exact solvability.\n\nGreat efforts have been invested to better understand the properties of the Kitaev model. For example, several groups have pointed out that the fractionalized Majorana fermion excitations may be understood from the more familiar Jordan-Wigner transformation of 1D spin systems2,3. The analogy between the non-Abelian Ising vortices and vortices in p + ip superconductors has been raised in serveral works4–7. Exact diagonalization has been used to study the Kitaev model on small lattices8 . And perturbative expansion methods have been developed to study the gapped phases of the Kitaev-type models9 .\n\nMany generalizations of the Kitaev model have been", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0266.pdf" - }, - { - "text": "chirality interactions in cold atom optical lattices has been proposed38 .\n\nOur model (8) is achieved at second order of the perturbation series. Higher order terms become truncation errors but may be controlled by small parameters λx,y,z/Jcluster ∼ p |Jx,y,z|/Jcluster.\n\n### V. CONCLUSIONS.\n\nWe constructed the exactly solvable Kitaev honeycomb model1 as the exact low energy effective Hamiltonian of a spin-1/2 model [equations (8) or (9)] with spin-rotation and time reversal symmetry. The spin in Kitaev model is represented as the pseudo-spin in the two-fold degenerate spin singlet subspace of a cluster of four antiferromagnetically coupled spin-1/2 moments. The physical spin model is a honeycomb lattice of such four-spin clusters, with certain inter-cluster interactions. The machinery for the exact mapping to pseudo-spin Hamiltonian was developed (see e.g. 
TABLE I), which is quite general and can be used to construct other interesting (exactly solvable) spin-1/2 models from spin rotation invariant systems.\n\nIn this construction the pseudo-spin correlations in the Kitaev model will be mapped to dimer or spin-chirality correlations in the physical spin system. The corresponding picture of the fractionalized Majorana fermion excitations and Ising vortices still remain to be clarified.\n\nThis exact construction contains high order physical spin interactions, which is undesirable for practical implementation. We described two possible approaches to reduce this problem: generating the high order spin interactions by perturbative expansion of the coupling to optical phonon, or the magnetic coupling between clusters. This perturbative construction will introduce truncation error of perturbation series, which may be controlled by small expansion parameters. Whether these constructions can be experimentally engineered is however beyond the scope of this study. It is conceivable that other perturbative expansion can also generate these high order spin interactions, but this possibility will be left for future works.\n\n#### Acknowledgments\n\nThe author thanks Ashvin Vishwanath, Yong-Baek Kim and Arun Paramekanti for inspiring discussions, and Todadri Senthil for critical comments. The author is supported by the MIT Pappalardo Fellowship in Physics.\n\n# Appendix A: Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n\nIn this Appendix we reproduce from Ref.35 the couplings of all tetrahedron distortion modes to the spin system. And convert them to pseudo-spin notation in the physical spin singlet sector.\n\nConsider a general small distortion of the tetrahedron, the spin Hamiltonian becomes\n\n$$H_{\\rm cluster},\\ {\\rm SL}=(J_{\\rm cluster}/2)(\\sum_{\\ell}{\\bf S}_{\\ell})^{2}+J^{\\prime}\\sum_{\\ell1 Ω) for small changes in CO concentration in the relevant range of around 0.1–10 ppm. 
Our approach is quite general and is directly applicable to other nanostructures than CNTs, other functionalizations than metal doping, and other backgrounds than atmospheric air.\n\nAll total energy calculations and structure optimizations have been performed with the real-space density functional theory (DFT) code GPAW [22] which is based on the projector augmented wave method. We use a grid spacing of 0.2 A for ˚ representing the density and wave functions and the PBE exchange correlation functional [23]. Transport calculations for the optimized structures have been performed using the nonequilibrium Green's function method [24] with an electronic Hamiltonian obtained from the SIESTA code [25] in a double zeta polarized (DZP) basis set. Spin polarization has been taken into account in all calculations.\n\nMetallic doping of a (6,6) CNT has been modeled in a supercell containing six repeated minimal unit cells along the CNT axis (dimensions: 15 A˚ ×15 A˚ ×14.622 A). For this size ˚ of supercell a Γ-point sampling of the Brillouin zone was found to be sufficient. The formation energy for creating a vacancy (VC) occupied by a transition metal atom (M) was calculated using the relation\n\nEform[M@VC] = E[M@VC] + nE[C] − E[M@NT] (1)\n\nwhere E[M@VC] is the total energy of a transition metal atom occupying a vacancy in the nanotube, n is the number of carbon atoms removed to form the vacancy, E[C] is the energy per carbon atom in a pristine nanotube, and E[M@NT]\n\n<i>Dpto. F´ısica de Materiales, Universidad del Pa´ıs Vasco,", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - }, - { - "text": "FIG. 1: Structural schematics and formation energy for a 3d transition metal occupied monovacancy (black), divacancy I (gray), or divacancy II (white) in a (6,6) carbon nanotube. Formation energies of the empty vacancies are indicated by dashed lines.\n\nis the total energy of the pristine nanotube with a physisorbed transition metal atom. 
We have considered the monovacancy and two divacancies shown in Fig. 1. The energy required to form an empty vacancy is obtained from\n\n$$E_{\\rm form}[{\\rm VC}]=E[{\\rm VC}]+nE[{\\rm C}]-E[{\\rm NT}],\\tag{2}$$\n\nwhere E[VC] is the total energy of the nanotube with a vacancy of n atoms.\n\nThe calculated formation energies for the 3d transition metals are shown in Fig. 1. From the horizontal lines we see that both divacancies are more stable than the monovacancy. This may be attributed to the presence of a two-fold coordinated C atom in the monovacancy, while all C atoms remain three-fold coordinated in the divacancies. When a transition metal atom occupies a vacancy, the strongest bonding to the C atoms is through its d orbitals [26]. For this reason, Cu and Zn, which both have filled d-bands, are rather unstable in the CNT. For the remaining metals, adsorption in the monovacancies leads to quite stable structures. This is because the three-fold coordination of the C atoms and the CNT's hexagonal structure are recovered when the metal atom is inserted. On the other hand, metal adsorption in divacancies is slightly less stable because of the resulting pentagon defects, see upper panel in Fig. 1. A similar behaviour has been reported by Krasheninnikov *et al.* for transition metal atoms in graphene [21].\n\nThe adsorption energies for N2, O2, H2O, CO, NH3, and H2S on the metallic site of the doped (6,6) CNTs are shown in Fig. 2(a). The adsorption energy of a molecule X is defined by\n\n$$E_{\\rm ads}[X\\,\\mbox{\\small@M@VC}]=E[X\\,\\mbox{\\small@M@VC}]-E[X]-E[\\mbox{\\small@VC}],\\tag{3}$$\n\nFIG. 
2: Calculated (a) adsorption energy Eads in eV and (b) change in conductance ∆G in units of G0 =2e 2 /h for N2, O2, H2O, CO, NH3, and H2S on 3d transition metals occupying a monovacancy (top), divacancy I (middle), and divacancy II (bottom) in a (6,6) carbon nanotube.\n\nwhere E[X@M@VC] is the total energy of molecule X on a transition metal atom occupying a vacancy, and E[X] is the gas phase energy of the molecule.\n\nFrom the adsorption energies plotted in Fig. 2(a), we see that the earlier transition metals tend to bind the adsorbates stronger than the late transition metals. The latest metals in the series (Cu and Zn) bind adsorbates rather weakly in the divacancy structures. We also note that O2 binds significantly stronger than any of the three target molecules on Ti, V, Cr, and Mn (except for Cr in divacancy I where H2S is found to dissociate). Active sites containing these metals are therefore expected to be completely passivated if oxygen is present in the background. Further, we find H2O is rather weakly bound to most of the active sites. This ensures that these types of sensors are robust against changes in humidity.\n\nIn thermodynamic equilibrium [27], the coverage of the active sites follows from\n\n$$\\Theta[X]=\\frac{K[X]C[X]}{1+\\sum_{Y}K[Y]C[Y]},\\tag{4}$$\n\nwhere K = k+/k− is the ratio of forward and backward rate constants for the adsorption reaction,\n\n$$K[X]=\\exp\\left[-\\frac{E_{\\rm ads}[X]+TS[X]}{k_{B}T}\\right].\\tag{5}$$\n\nIn these expressions C[X] is the concentration of species X, S[X] is its gas phase entropy and T is the temperature. Experimental values for the gas phase entropies have been taken from Ref. 
[28].", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2538.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2449.pdf", - "query": "I do not remember on wich samples SQUID magnetometry measurements were first performed", - "target_page": 2, - "target_passage": "SQUID magnetometry measurements were first performed on control Fe/GaAs(001) and (Ga,Mn)As/GaAs(001) samples", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "measurements were performed on beamline I06 at the Diamond Light Source, and on beamline 4.0.2 at the Advanced Light Source. Total-electron yield (TEY) and fluorescence yield (FY) were monitored simultaneously using the sample drain current and the photocurrent of a diode mounted at 90◦ to the incident beam, respectively.\n\nSQUID magnetometry measurements were first performed on control Fe/GaAs(001) and (Ga,Mn)As/GaAs(001) samples, grown under the same conditions as the bilayers, to determine the magnetic anisotropies of the individual layers and the Curie temperature of the (Ga,Mn)As layer. The Fe film has a uniaxial magnetic anisotropy with easy axis along the [110] orientation, similar to previous studies6 . For the (Ga,Mn)As control sample, there is a competition between cubic and uniaxial magnetic anisotropies, with the former dominant at low temperatures and favoring easy axes along the in-plane h100i orientations, and the latter dominant close to TC (∼35 K) giving an easy axis along the [1¯10] orientation. Figure 1 shows [110] magnetization versus temperature curves and low temperature hysteresis loops for a bilayer film containing a 20 nm thick (Ga,Mn)As layer. The total remnant moment of the bilayer film decreases on cooling under zero magnetic field below the TC of the (Ga,Mn)As, indicating that this layer aligns antiparallel to the Fe magnetization at zero field. 
The hysteresis curve shows a two-step magnetization reversal, indicating different behavior of the Fe and (Ga,Mn)As layers, with the smaller loop attributed to the dilute moment (Ga,Mn)As film. The minor hysteresis loop shown in Fig. 1 clearly shows a shift from zero field by a bias field HE, indicating that the Fe layer induces an exchange bias in the magnetic semiconductor. The shape and size of the minor loop is in agreement with the hysteresis loop for the control (Ga,Mn)As sample, also shown in Fig. 1. This strongly indicates that the exchange bias affects the whole of the (Ga,Mn)As layer in the bilayer sample.\n\nSimilar behavior is observed for bilayer samples containing a 10 nm or 50 nm (Ga,Mn)As layer, with a bias field which is approximately inversely proportional to the thickness d of the ferromagnetic semiconductor layer (Fig. 1, inset). This 1/d dependence of HE was found previously for MnAs/(Ga,Mn)As bilayers4 , and is generally observed in exchanged-biased thin films12 . From this dependence it is possible to describe the exchange bias in terms of an interface energy per unit area, ∆E = MF SHEd = 0.003 erg/cm2 . This value is rather small compared to typical exchange bias systems12, reflecting the low moment density MF S of the diluted FM semiconductor layer. However, the bias field for a given (Ga,Mn)As thickness is larger than is observed for MnO/(Ga,Mn)As structures13, while the reproducibility and flexibility of the present structures is much higher due to the single-crystalline ferromagnetic nature of the Fe layer.\n\nTo confirm the presence of AFM interlayer coupling, we performed XMCD measurements at the Mn and Fe L2,3 absorption edges in order to determine the magnetic response of the individual elements. In L2,3 XMCD, electrons are excited from a 2p core level to the unoccupied 3d valence states of the element of interest by circularly polarized x-rays at the resonance energies of the transitions. 
The difference in absorption for opposite polarizations gives a direct and element-specific measurement of the projection of the 3d magnetic moment along the xray polarization vector. The absorption cross-section is conventionally obtained by measuring the decay products – either fluorescent x-rays or electrons – of the photoexcited core hole. The type of decay product measured determines the probing depth of the technique. For Mn L2,3 absorption, the probing depths for FY and TEY detection are λF Y ≈ 100 nm and λT EY ≈ 3 nm. In the current experiment, the Mn XMCD measured using FY and TEY are thus sensitive to the bulk of the (Ga,Mn)As film and the near-interface layers, respectively.\n\nFigure 2(a)-(c) shows the magnetic field dependence of XMCD asymmetry, defined as (Il − Ir)/(Il + Ir) where Il(r) is the absorption for left- (right-) circularly polarized x-rays. This is measured at the Fe and Mn L3 absorption peaks for a Fe(2 nm)/(Ga,Mn)As(10 nm) sample at 2 K. The external field is applied along the photon incidence direction, which is at 70◦ to the surface normal with an in-plane projection along the [110] axis. The XMCD data show that the Fe film displays a square hysteresis loop with a single magnetization switch, as expected for a monocrystalline Fe film with strong uniaxial magnetic anisotropy. The Mn XMCD shows a more complicated loop due to the effect of the interlayer coupling. The projected Mn moment aligns antiparallel to the Fe moment at remanence, and undergoes a magnetization reversal of opposite sign to the Fe. With further increase of the external magnetic field, the Mn moment gradually rotates away from antiparallel alignment with the Fe layer, and into the field direction. 
Qualitatively similar behavior is observed for the Fe(2 nm)/(Ga,Mn)As(20 nm) sample: the (Ga,Mn)As layer is aligned antiparallel to the Fe layer at zero field, although the bias field is lower by approximately a factor of two.\n\nClear differences are observed between the Mn XMCD hysteresis loops obtained using TEY and FY detection modes. For FY the magnitude of the XMCD is similar (but of opposite sign) at remanence and at high magnetic fields, whereas for TEY at remanence it is approximately a factor of two larger than at 1000 Oe. The Mn L2,3 XMCD spectra recorded at remanence and at 1000 Oe, shown in Fig. 3, confirm this result. At remanence the FY and TEY detected XMCD have similar magnitudes. However, under a large external field the XMCD is substantially smaller in TEY than in FY, confirming that the net magnetization of the Mn ions near the interface is significantly less than in the bulk of the (Ga,Mn)As film. This is the case even up to the highest field applied (20 kOe). By applying the XMCD sum rules14 to the TEY data, and by comparing the spectra to previous measurements on well-characterized (Ga,Mn)As", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2449.pdf" - }, - { - "text": "On any of these views, you can select any point by using your cursor to know the exact value and when it occurred. When you place your cursor over the timeline, it becomes a dotted line with the various values gathered, as shown in Figure A-7.\n\n*Figure A-7 Viewing performance with details*\n\nFor each of the resources, various metrics are available and you can select which to be displayed. 
For example, as shown in Figure A-8, from the four available metrics for the MDisks view (Read, Write, Read latency, and Write latency) only Read and Write IOPS are selected.\n\n*Figure A-8 Displaying performance counters*\n\n# **Performance data collection and IBM Spectrum Control**\n\nAlthough you can obtain performance statistics in standard .xml files, the use of .xml files is a less practical and more complicated method to analyze the IBM Spectrum Virtualize performance statistics. IBM Spectrum Control is the supported IBM tool to collect and analyze Storwize V7000 performance statistics.", - "page_start": 773, - "page_end": 773, - "source_file": "sg247938.pdf" - }, - { - "text": "### **Test providers**\n\n**3.**—(1) A test provider complies with this paragraph where—\n\n- (a) they provide appropriate tests in a single end-to-end testing service (whether or not they arrange with another person (\"X\") for X to provide one or more elements of the service on their behalf);\n- (b) they have made a declaration to the Department of Health and Social Care that they meet the minimum standards for private sector-provided testing at https://support-covid-19 testing.dhsc.gov.uk/PrivateSectorSelfDeclaration;\n- (c) in relation to a test which requires laboratory processing—\n\t- (i) the person responsible for the taking of samples meets the relevant requirements for accreditation to ISO standard 15189 or ISO/IEC standard 17025, in respect of the taking of samples, and\n\t- (ii) the laboratory used by the test provider for the processing of samples meets the relevant requirements for accreditation to ISO standard 15189 or ISO/IEC standard 17025, in respect of the processing of samples;\n- (d) in relation to a point of care test, they meet the relevant requirements for accreditation to ISO standard 15189 and ISO standard 22870(**a**);\n- (e) a registered medical practitioner has oversight and approval of medical practices undertaken by the test provider, and responsibility 
for reporting medical issues;\n- (f) they have an effective system of clinical governance in place which includes appropriate standard operating procedures in relation to the carrying out of appropriate tests;\n- (g) a registered clinical scientist has oversight of clinical practices undertaken by the test provider, and responsibility for reporting clinical issues;\n- (h) they have systems in place to identify any adverse incidents or quality control issues in relation to appropriate tests and be able to report them as soon as reasonably practicable to the Secretary of State;\n- (i) they administer or provide an appropriate test to P, on or after the fifth day after the day on which P arrived in England having received the information required by paragraph 4(b) and (c) (as appropriate); and\n- (j) if they arrange with another person (\"X\") for X to carry out any element of the single end-to-end testing service on their behalf, the test provider ensures that X complies with any of paragraphs (c) to (i) and 5(2), (3) and (5) as is relevant to the carrying out of that element.\n- (2) For the purposes of sub-paragraph (1)—\n\n- (a) \"point of care test\" means a test processed outside a laboratory environment;\n- (b) \"registered clinical scientist\" means a person registered as a clinical scientist with the Health and Care Professions Council pursuant to article 5 of the Health Professions Order 2001(**b**);\n- (c) \"single end-to-end testing service\" means a service which comprises accepting the booking from the person to be tested, collecting and processing the sample to be tested, carrying out genomic sequencing and providing the test result to P.\n\n(3) For the purposes of sub-paragraph (1)(c) and (d), a person or laboratory (as the case may be) meets the relevant requirements for accreditation to a standard where that person, or in the case of a laboratory where the person who is the operator of the laboratory—\n\n- (a) has made a valid application for 
accreditation to UKAS (\"stage one\"); and\n(<b>a) ISO 22870 Point-of-care testing (POCT) requirements for quality and competence was published in November 2016. (**b**) S.I. 2002/254.", - "page_start": 69, - "page_end": 69, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "#### **Day 2 tests: private provider requirements**\n\n**7.**—(1) For the purposes of paragraph 6(1)(b)(iii), a private provider complies with this paragraph where—\n\n- (a) they comply with the requirements of paragraph 3(1)(a) and (e) to (h) of Schedule 10 as if any reference in those provisions to an appropriate test were a reference to a day 2 test;\n- (b) if the provider is a laboratory that conducts diagnostic test evaluation for testing in accordance with this Schedule, they have made a declaration to the Department of Health and Social Care that they meet the minimum standards for private sector-provided testing at https://support-covid-19-testing.dhsc.gov.uk/InternationalTesting;\n- (c) they have provided the Department of Health and Social Care with a list of all organisations that they work with (whether by sub-contract or otherwise) to carry out the testing service or to carry out genomic sequencing, indicating the nature of the service that each organisation is providing, and kept that list updated as appropriate;\n- (d) the person responsible for the taking of samples meets the relevant requirements for accreditation to ISO standard 15189 or ISO/IEC standard 17025 in respect of the taking of samples;\n- (e) the laboratory used by the test provider for the processing of samples meets the relevant requirements for ISO standard 15189 or ISO/IEC standard 17025 in respect of the evaluation of the established molecular detection method and the genomic sequencing of samples;\n- (f) they receive the information required by paragraph 10(3) or (4) (as appropriate), and if they administer the test to P, they do so no later than the end of the second day after the day on which P 
arrived in England;\n- (g) each day, they notify the Secretary of State in writing of—\n\t- (i) the number of tests they sold on that day, and\n\t- (ii) in relation to each test sold on that day—\n\t\t- (aa) the date of the arrival in England of the person in respect of whom the test was sold, and\n\t\t- (bb) whether the person in respect of whom the test was sold is a category 1 arrival or not;\n- (h) they sequence each sample with a cycle threshold less than 30 (equivalent to ~1,000 viral genome copies per millilitre);\n- (i) in respect of the sequencing of samples, they must secure a reference genome coverage breadth of at least 50% and at least 30 times coverage;\n- (j) on a request by the Secretary of State or the COVID-19 Genomics UK Consortium, they make samples available for the purpose of dual sequencing;\n- (k) they preserve and transport samples in a manner that enables genome sequencing;\n- (l) they have in place a process to remove human reads from any data submitted in a notification to Public Health England pursuant to the Health Protection (Notification) Regulations 2010; and\n- (m) if they arrange with another person (\"X\") for X to carry out any element of the single end-to-end testing service on their behalf, the test provider ensures that X complies with the following so far as relevant to the carrying out of that element—\n\t- (i) paragraph 3(1)(e) to (h) of Schedule 10 as applied by paragraph (a) of this subparagraph,\n\t- (ii) paragraph (c) to (l) of this sub-paragraph,\n\t- (iii) paragraph 11(2), (3) and (4).\n\n(2) For the purposes of sub-paragraph (1)(m), \"single end-to-end testing service\" has the meaning given in paragraph 3(2)(c) of Schedule 10.", - "page_start": 61, - "page_end": 61, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "FIG. 9: ∆W vs the cut-off for the EB model. It remains negative for larger cut-offs. Parameters are the same as before. 
The dot indicates the value of ∆W(∞) = ∆WK\n\nof the lattice (the dashed line in Fig. 9).\n\n#### C. Marginal Fermi liquid model\n\nFor their analysis of the optical integral, Norman and P´epin30 introduced a phenomenological model for the self energy which fits normal state scattering rate measurements by ARPES41. It constructs the NS Σ′′ (ω) out of two contributions - impurity scattering and electronelectron scattering which they approximated phenomenologically by the marginal Fermi liquid form of αω at small frequencies6 (MFLI model). The total Σ′′ is\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=\\Gamma\\,+\\,\\alpha|\\omega|f\\left(\\frac{\\omega}{\\omega_{s a t}}\\right)\\tag{17}$$\n\nwhere ωsat is about ∼ 1 2 of the bandwidth, and f(x) ≈ 1 for x < 1 and decreases for x > 1. In Ref 30 f(x) was assumed to scale as 1/x at large x such that Σ′′ is flat at large ω. The real part of Σ(ω) is obtained from Kramers-Kr¨onig relations. For the superconducting state, they obtained Σ′′ by cutting off the NS expression on the lower end at some frequency ω1 (the analog of ω0 + ∆ that we had for EB model):\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=(\\Gamma\\,+\\,\\alpha|\\omega|)\\Theta(|\\omega|-\\omega_{1})\\qquad\\quad(18)$$\n\nwhere Θ(x) is the step function. In reality, Σ′′ which fits ARPES in the NS has some angular dependence along the Fermi surface42, but this was ignored for simplicity. This model had gained a lot of attention as it predicted the optical sum in the SCS to be larger than in the NS, i.e., ∆W > 0 at large frequencies. This would be consistent with the experimental findings in Refs. 8,9 if, indeed, one identifies ∆W measured up to 1eV with ∆WK.\n\nWe will show below that the sign of ∆W in the MFLI model actually depends on how the normal state results are extended to the superconducting state and, moreover, will argue that ∆WK is actually negative if the extension is done such that at α = 0 the results are consistent with\n\nBCSI model. 
However, before that, we show in Figs 10- 12 the conductivities and the optical integrals for the original MFLI model.\n\nFIG. 10: Top –the conductivities in the NS and SCS in the original MFLI model of Ref.30. We set Γ = 70 meV , α = 0.75, ∆ = 32 meV , ω1 = 71 meV . Note that σ ′ (ω) in the SCS begins at Ω = ∆ + ω1. Bottom – the behavior of WK with Γ.\n\nIn Fig 10 we plot the conductivities in the NS and the SCS and Kubo sums WK vs Γ at α = 0.75 showing that the spectral weight in the SCS is indeed larger than in the NS. In Fig 11 we show the behavior of the optical sums W(ωc) in NS and SCS. The observation here is that only ∼ 75−80% of the Kubo sum is recovered up to the scale of the bandwidth implying that there is indeed a significant spectral weight well beyond the bandwidth. And in Fig 12 we show the behavior of ∆W(wc). We see that it does not change sign and remain positive at all ωc, very much unlike the BCS case. Comparing the behavior of W(wc) with and without a lattice (solid and dashed lines in Fig. 12) we see that the 'finite bandwidth effect' just shifts the curve in the positive direction. We also see that the solid line flattens above roughly half of the bandwidth, i.e., at these frequencies ∆W(ωc) ≈ ∆WK. Still, we found that ∆W continues going down even above the bandwidth and truly saturates only at about 2 eV (not shown in the figure) supporting the idea that there is 'more' left to recover from higher frequencies.\n\nThe rationale for ∆WK > 0 in the original MFLI model has been provided in Ref. 30. 
They argued that this is closely linked to the absence of quasiparticle peaks in the NS and their restoration in the SCS state because the phase space for quasiparticle scattering at low energies is smaller in a superconductor than in a normal state.", - "page_start": 7, - "page_end": 7, - "source_file": "1001.0764.pdf" - }, - { - "text": "### Materials & experimental systems\n\n| Methods |\n| --- |\n\n| n/a | Involved in the study | n/a | Involved in the study |\n| --- | --- | --- | --- |\n| | Antibodies | | ChIP-seq |\n| | Eukaryotic cell lines | | Flow cytometry |\n| | Palaeontology and archaeology | | MRI-based neuroimaging |\n| | Animals and other organisms | | |\n| | Clinical data | | |\n| | Dual use research of concern | | |\n| | Plants | | |\n\n# Magnetic resonance imaging\n\n### Experimental design\n\n| Design type | Structural & Diffusion MRI | |\n| --- | --- | --- |\n| Design specifications | No task-based fMRI used in this manuscript. | |\n| Behavioral performance measures | N/A; no performance metrics collected | |\n| Acquisition | | |\n| Structural Imaging type(s) | | |\n| 3 Field strength | | |\n| Sequence & imaging parameters | | High-resolution anatomical scans were acquired using a T1-weighted (T1w) magnetization prepared rapid gradient echo |\n| | | (MPRAGE) sequence (TR = 2500 ms, TE = 2.31 ms, T1 = 934 ms, flip angle = 7°, 0.8 mm thickness) followed by a gradient echo fieldmap (TR = 758 ms; TE1 = 4.92 ms; TE2 = 7.38 ms; flip angle = 60°). A T2-weighted (T2w) turbo spin echo (TSE) |\n| | | scan was also acquired with an oblique coronal orientation positioned orthogonally to the main axis of the hippocampus (TR/TE = 9860/50 ms, flip angle = 122°, 0.4 × 0.4 mm2 in-plane resolution, 2 mm slice thickness, 38 interleaved slices |\n| | with no gap, total acquisition time = 5:42 min). 
| |\n| Area of acquisition | T1-weighted and dMRI scans = whole-brain | |\n| | T2-weighted scan = high-resolution imaging of medial temporal lobe | |\n| Diffusion MRI Used Not used | | |\n| Parameters | TR = 4300 ms, echo time = 100.2 ms, 139 directions, b-max = 4990, FoV = 259 x 259 mm, 78 slices, 1.7986 x 1.7986 x 1.8 mm voxel | |\n\n# Preprocessing\n\nresolution\n\n| Preprocessing software | Gray Matter Volume & Cortical Thickness: |\n| --- | --- |\n| | Advanced Normalization Tools (ANTs), version 2.1.0 |\n| | FreeSurfer, version 7 |\n| | T2-weighted MTL scans: |\n| | Automatic Segmentation of Hippocampal Subfields (ASHS), version 7/2018 |\n| | Diffusion imaging: |\n| | QSIprep, version 0.15.3 |\n| | DSI Studio, version Chen-2022-07-31 |\n| Normalization | Normalization differed by modality due to inherent limitations of applicable processing pipelines. |\n| | Gray Matter Volume & Cortical Thickness: |\n| | All analyses were kept in native subject-space to limit the amount of warping and leverage the advantages of a precision |\n| | imaging design. |\n| | T2-weighted MTL scans: |\n| | T2w images were registered to the segmentation template (see below) using ANTs deformable registration. |\n| | Diffusion imaging: |\n| | Initial preprocessing through QSIprep normalized diffusion images to the skull-stripped T1w images. Diffusion images were |\n| | then reconstructed in MNI space using DSI studio's Q-space Diffeomorphic Reconstruction. 
|\n\nApril 2023", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed4.pdf" - }, - { - "text": "- (c) they have provided the Department of Health and Social Care with a list of all organisations that they work with (whether by sub-contract or otherwise) to carry out the testing service or to carry out genomic sequencing, indicating the nature of the service that each organisation is providing and kept that list updated as appropriate;\n- (d) in relation to a test which requires laboratory processing—\n\t- (i) the person responsible for the taking of samples meets the relevant requirements for accreditation to ISO standard 15189 or ISO/IEC standard 17025 in respect of the taking of samples, and\n\t- (ii) the laboratory used by the test provider for the processing of samples meets the relevant requirements for accreditation to ISO standard 15189 or ISO/IEC standard 17025 in respect of the processing of samples;\n- (e) in relation to a point of care test, they meet the relevant requirements for accreditation to ISO Standard 15189 and ISO standard 22870;\n- (f) they receive the information required by paragraph 10(3) or (4) (as appropriate), and if they administer the test to P, they do so no earlier than the end of the seventh day after the day on which P arrived in England;\n- (g) each day, they notify the Secretary of State in writing of—\n\t- (i) the number of tests they sold on that day, and\n\t- (ii) in relation to each test sold on that day—\n\t\t- (aa) the date of arrival in England of the person in respect of whom the test was sold, and\n\t\t- (bb) whether the person in respect of whom the test was sold is a category 1 arrival or not;\n- (h) if they arrange with another person (\"X\") for X to carry out any element of the single end-to-end testing service on their behalf, the test provider ensures that X complies with the following so far as relevant to the carrying out of that element—\n\t- (i) paragraph 3(1)(e) to (i) of Schedule 10 as applied by 
paragraph (a) of this subparagraph,\n\t- (ii) paragraph (b) to (g) of this sub-paragraph,\n\t- (iii) paragraph 11(2), (3) and (4).\n\n(2) For the purposes of sub-paragraph (1)(h), \"single end-to-end testing service\" has the meaning given in paragraph 3(2)(c) of Schedule 10.\n\n(3) For the purposes of sub-paragraph (1)(d) and (e), a person or laboratory (as the case may be) meets the relevant requirements for accreditation to a standard where the person who is the operator of the laboratory complies with the requirements of regulation 6 of the Health Protection (Coronavirus, Testing Requirements and Standards) (England) Regulations 2020 as if—\n\n- (a) a reference to an applicable test were a reference to a day 8 test;\n- (b) a reference to a test provider were a reference to a private provider.\n\n### **Required circumstances for undertaking a day 2 test or a day 8 test**\n\n**10.**—(1) The circumstances mentioned in regulation 6(12)(a) and (b) are as follows.\n\n- (2) In relation to—\n\t- (a) a day 2 test, P undertakes the test no later than the end of the second day after the day on which P arrived in England;\n\t- (b) a day 8 test, P undertakes the test no earlier than the end of the seventh day after the day on which P arrived in England.\n\n(3) Subject to sub-paragraph (4), at the time the test is booked P notifies the test provider that P is to undertake the test under these Regulations, and provides the test provider with—\n\n- (a) the information set out in paragraph 4(b)(i) to (v) and (vii) to (xiii) of Schedule 10; and", - "page_start": 63, - "page_end": 63, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "you have been traced as a contact of someone who tested positive\n\nFor advice on when you might need to self-isolate and what to do, go to www.nhs.uk/conditions/coronavirus-covid-19 and read 'Self-isolation and treating symptoms'.\n\n#### **Form B: positive test result**\n\nYour coronavirus test result is positive. 
You had the virus when the test was done.\n\nEven if you have not had symptoms of coronavirus, you must self-isolate for 10 days from the day after your test date. Your test sample may be genome sequenced to check whether you have a virus variant of concern or variant under investigation.\n\nPeople you live with or have travelled with should also self-isolate for 10 days from the day after you took a test.\n\nIf you received a positive test result for the test taken you do not need to take any further tests. People you are travelling with must still take a day 8 test if they have travelled from an amber list country.\n\nYou may be contacted for contact tracing and to check that you, and those who you live or are travelling with, are self-isolating.\n\nYou must not travel, including to leave the UK, during self-isolation.\n\nContact 111 if you need medical help. In an emergency dial 999.\n\n#### **Form C: unclear test result**\n\nYour coronavirus test result is unclear. It is not possible to say if you had the virus when the test was done.\n\nYou must take another test or self-isolate for 10 days from the day after your test date.\n\nYou may be contacted to check that you are self-isolating.\n\n- (4) Where—\n\t- (a) regulation 4 or 4A of the Health Protection (Notification) Regulations 2010 applies in relation to the test provider; or\n\t- (b) if the test provider arranges with another person (\"X\") for X to carry out any element of the single end-to-end testing service on their behalf, either of those regulations applies to X in the carrying out of that element,\n\nthe regulation applies as if it required the information described in sub-paragraph (5) to be included in the notification to Public Health England.\n\n(5) The information mentioned in sub-paragraph (4) is—\n\n- (a) the date on which P last departed from or transited through a category 2 country or territory;\n- (b) P's coach number, flight number or vessel name (as appropriate);\n- (c) the country or 
territory P was travelling from when P arrived in England, and any country or territory they transited through as part of that journey;\n- (d) the date on which P undertook the appropriate test;\n- (e) whether the test is—\n\t- (i) a day 2 test for a category 1 arrival,\n\t- (ii) a day 2 test for a person who is not a category 1 arrival, or\n\t- (iii) a day 8 test.", - "page_start": 65, - "page_end": 65, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "Laura Pritschet, Elizabeth R. Chrastil, & Emily G. Jacobs\n\nLast updated by author(s): 06/30/2024\n\nCorresponding author(s):\n\n# Reporting Summary\n\nNature Portfolio wishes to improve the reproducibility of the work that we publish. This form provides structure for consistency and transparency in reporting. For further information on Nature Portfolio policies, see our Editorial Policies and the Editorial Policy Checklist.\n\n# Statistics\n\n| | For all statistical analyses, confirm that the following items are present in the figure legend, table legend, main text, or Methods section. |\n| --- | --- |\n| n/a | Confirmed |\n| | The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement |\n| | A statement on whether measurements were taken from distinct samples or whether the same sample was measured repeatedly |\n| | The statistical test(s) used AND whether they are one- or two-sided |\n| | Only common tests should be described solely by name; describe more complex techniques in the Methods section. |\n| | A description of all covariates tested |\n| | A description of any assumptions or corrections, such as tests of normality and adjustment for multiple comparisons |\n| | A full description of the statistical parameters including central tendency (e.g. means) or other basic estimates (e.g. regression coefficient) |\n| | AND variation (e.g. standard deviation) or associated estimates of uncertainty (e.g. 
confidence intervals) |\n| | For null hypothesis testing, the test statistic (e.g. F, t, r) with confidence intervals, effect sizes, degrees of freedom and P value noted |\n| | Give P values as exact values whenever suitable. |\n| | For Bayesian analysis, information on the choice of priors and Markov chain Monte Carlo settings |\n| | For hierarchical and complex designs, identification of the appropriate level for tests and full reporting of outcomes |\n| | Estimates of effect sizes (e.g. Cohen's d, Pearson's r), indicating how they were calculated |\n| | Our web collection on statistics for biologists contains articles on many of the points above. |\n\n# Software and code\n\nPolicy information about availability of computer code\n\nData collection All data collection was done on a Siemens 3T Prisma with software version MR E11. All sequences were standard Siemens protocols, with the exception to the T2-hippocampal scan: A T2-weighted (T2w) turbo spin echo (TSE) scan was also acquired with an oblique coronal orientation positioned orthogonally to the main axis of the hippocampus (TR/TE = 9860/50 ms, flip angle = 122°, 0.4 × 0.4 mm2 in-plane resolution, 2 mm slice thickness, 38 interleaved slices with no gap, total acquisition time = 5:42 min). No other custom software was used for data collection.\n\nData analysis The following software packages were used: Advanced Normalization Tools (ANTs), version 2.1.0 FreeSurfer, version 7 Automatic Segmentation of Hippocampal Subfields (ASHS), version 7/2018 IQM Pipeline from MRIQC, version 23.1 Matlab, version 2022a QSIprep, version 0.15.3 DSI Studio, version Chen-2022-07-31 R/R Studio, version 3.4.4 ITK-SNAP, v.3.8.0-b\n\nFor manuscripts utilizing custom algorithms or software that are central to the research but not yet described in published literature, software must be made available to editors and reviewers. We strongly encourage code deposition in a community repository (e.g. GitHub). 
See the Nature Portfolio guidelines for submitting code & software for further information.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed4.pdf" - }, - { - "text": "| | For all statistical analyses, confirm that the following items are present in the figure legend, table legend, main text, or Methods section. |\n| --- | --- |\n| n/a | Confirmed |\n| | The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement |\n| | A statement on whether measurements were taken from distinct samples or whether the same sample was measured repeatedly |\n| | The statistical test(s) used AND whether they are one- or two-sided |\n| | Only common tests should be described solely by name; describe more complex techniques in the Methods section. |\n| | A description of all covariates tested |\n| | A description of any assumptions or corrections, such as tests of normality and adjustment for multiple comparisons |\n| | A full description of the statistical parameters including central tendency (e.g. means) or other basic estimates (e.g. regression coefficient) |\n| | AND variation (e.g. standard deviation) or associated estimates of uncertainty (e.g. confidence intervals) |\n| | For null hypothesis testing, the test statistic (e.g. F, t, r) with confidence intervals, effect sizes, degrees of freedom and P value noted |\n| | Give P values as exact values whenever suitable. |\n| | For Bayesian analysis, information on the choice of priors and Markov chain Monte Carlo settings |\n| | For hierarchical and complex designs, identification of the appropriate level for tests and full reporting of outcomes |\n| | Estimates of effect sizes (e.g. Cohen's d, Pearson's r), indicating how they were calculated |\n| | Our web collection on statistics for biologists contains articles on many of the points above. 
|\n\n| Policy information about availability of computer code | |\n| --- | --- |\n| Data collection No software was used for data collection. | |\n| Data analysis | We used bcftools 1.19, samtools 1.3.1, bwa aln 0.7.17-r1188, GLIMPSEv1.1.1, Relate v1.2.1, and R packages stats (v3.6.2), admixtools2 |\n| | (v2.0.4). Code for twigstats (v1.0.1) is available through https://github.com/leospeidel/twigstats and https://zenodo.org/records/13833120. |\n\n- \n- \n- \n-", - "page_start": 22, - "page_end": 22, - "source_file": "pubmed3.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2449.pdf", - "query": "What are the differences observed between the Mn XMCD hysteresis loops obtained using TEY and FY detection modes ?", - "target_page": 2, - "target_passage": "For FY the magnitude of the XMCD is similar (but of opposite sign) at remanence and at high mag netic fields, whereas for TEY at remanence it is approx imately a factor of two larger than at 1000 Oe.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "samples15, the projected Mn 3d magnetic moments are obtained as −1.4 µB and +0.8 µB per ion at remanence and 1000 Oe, respectively.\n\nThe difference between these values can be understood as being due to an interface layer which is strongly antiferromagnetically coupled to the Fe layer. At zero field, both the interfacial and bulk Mn are aligned antiparallel to the Fe layer. At high fields, the bulk of the (Ga,Mn)As layer away from the interface is re-oriented into the external field direction. However, the interfacial Mn remains antiparallel to the Fe layer and thus partially compensates the XMCD signal from the bulk of the (Ga,Mn)As. 
From the size of the remanent and 1000 Oe magnetic moments, it can be estimated that around 25-30% of the TEY XMCD signal can be ascribed to the interfacial Mn which is strongly coupled to the Fe moments.\n\nThe interfacial Mn moments are ascribed to the proximity polarization of the (Ga,Mn)As interface by the Fe layer, such as was shown previously by XMCD as well as ab initio theory7 . Evidence for this can be observed from measurement of the Mn L2,3 XMCD signal at temperatures above the (Ga,Mn)As TC . Similar to the previous study7 , we observe a small but not negligible signal at room temperature (Fig. 3), with opposite sign to the Fe L2,3 XMCD. Its spectral shape is characteristic of a localized electronic configuration close to d 5 , similar to bulk (Ga,Mn)As7,9,15 but in contrast to Mn in more metallic environments such as MnxFe1−x 7 or MnAs16. A slight broadening is observed on the low energy side of the Mn L3 peak, which may be due to the different screening induced by proximity to the Fe layer. Since the measured intensity is attenuated with distance z from the surface as I = I0 exp(−z/λT EY ), the thickness of the strongly coupled interface layer is estimated to be ∼0.7 nm or 2-3\n\n- 1 T. Jungwirth, W. A. Atkinson, B. H. Lee, and A. H. Mac-Donald, Phys. Rev. B 59, 9818 (1999); P. Sankowski and P. Kacman, Phys. Rev. B 71, 201303(R) (2005); A. D. Giddings, T. Jungwirth, and B. L. Gallagher, Phys. Rev. B 78, 165312 (2008); K. Szalowski and T. Balcerzak, Phys. Rev. B 79, 214430 (2009).\n- 2 J.-H. Chung, S. J. Chung, S. Lee, B. J. Kirby, J. A. Borchers, Y. J. Cho, X. Liu, and J. K. Furdyna, Phys. Rev. Lett. 101, 237202 (2008).\n- 3 M. Wang, R. P. Campion, A. W. Rushforth, K. W. Edmonds, C. T. Foxon, and R. P. Campion, Appl. Phys. Lett. 93, 132103 (2008).\n- 4 M. Zhu, M. J. Wilson, B. L. Sheu, P. Mitra, P. Schiffer, and N. Samarth, Appl. Phys. Lett. 91, 192503 (2007); M. Zhu, M. J. Wilson, P. Mitra, P. Schiffer, and N. Samarth, Phys. Rev. 
B 78, 195307 (2008).\n- 5 S. Mark, C. Gould, K. Pappert, J. Wenisch, K. Brunner, G. Schmidt, and L. W. Molenkamp, Phys. Rev. Lett. 103, 017204 (2009).\n- 6 G. Wastlbauer and J.A.C. Bland, Adv. Phys. 54, 137 (2005).\n- 7 F. Maccherozzi, M. Sperl, G. Panaccione, J. Minar, S.\n\nmonolayers, assuming a uniform distribution of Mn ions and magnetic moments throughout the (Ga,Mn)As film. This is around a factor of three thinner than in Ref.7 , which could be due to the lower Mn concentration or the different preparation method of the present samples.\n\nIn summary, we have demonstrated antiferromagnetic coupling between Fe and (Ga,Mn)As layers in bilayer structures. A markedly different coupling is observed for the bulk of the (Ga,Mn)As layer and for Mn moments in the near-interface region. A thickness-dependent exchange bias field is observed to affect the whole of the bulk (Ga,Mn)As layer, which aligns antiparallel to the Fe layer at low fields, and switches to parallel when the external field is large enough to overcome the bias field and the magnetocrystalline anisotropy fields. In contrast, the interfacial Mn moments remain aligned antiparallel to the Fe layer even at 20 kOe, the largest field studied, and are polarized at temperatures well above the TC of the bulk (Ga,Mn)As layer. The latter observation confirms the recently reported result of Ref. 7, in which the Fe/(Ga,Mn)As bilayers were produced by a different method but showed qualitatively similar behavior of the interfacial moments. Our results shed new light on the magnetic coupling in Fe/(Ga,Mn)As hybrid layers which are of potential interest for room temperature spintronics, and also offer a means of controlling the spin orientation in a FM semiconductor.\n\nWe acknowledge support from EU grants SemiSpinNet-215368 and NAMASTE-214499, and STFC studentship grant CMPC07100. The Advanced Light Source is supported by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. 
We thank Leigh Shelford for help during the Diamond beamtime.\n\nPolesya, H. Ebert, U. Wurstbauer, M. Hochstrasser, G. Rossi, G. Woltersdorf, W. Wegscheider, and C. H. Back, Phys. Rev. Lett. 101, 267201 (2008).\n\n- 8 R. P. Campion, K. W. Edmonds, L. X. Zhao, K. Y. Wang, C. T. Foxon, B. L. Gallagher, and C. R. Staddon, J. Crystal Growth 247, 42 (2003).\n- 9 F. Maccherozzi, G. Panaccione, G. Rossi, M. Hochstrasser, M. Sperl, M. Reinwald, G. Woltersdorf, W. Wegscheider, and C. H. Back, Phys. Rev. B 74, 104421 (2006).\n- 10 Ch. Binek, S. Polisetty, X. He and A. Berger, Phys. Rev. Lett. 96, 067201 (2006).\n- 11 C. Won, Y.Z. Wu, E. Arenholz, J. Choi, J. Wu, and Z. Q. Qiu, Phys. Rev. Lett. 99, 077203 (2007).\n- 12 J. Nogues and I. K. Schuller, J. Magn. Magn. Mater. 192, 203 (1999).\n- 13 K. F. Eid, M. B. Stone, K. C. Ku, O. Maksimov, P. Schiffer, N. Samarth, T. C. Shih and C. J. Palmstrom, Appl. Phys. Lett. 85, 1556 (2004).\n- 14 B. T. Thole, P. Carra, F. Sette, and G. van der Laan, Phys. Rev. Lett. 68, 1943 (1992); P. Carra, B. T. Thole, M. Altarelli, and X. Wang, Phys. Rev. Lett. 70, 694 (1993).\n- 15 T. Jungwirth, J. Masek, K. Y. Wang, K. W. Edmonds,", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2449.pdf" - }, - { - "text": "FIG. 3. (color online) (a) Polarization-averaged Mn L 2 , 3 spectrum for a Fe/(Ga,Mn)As film; (b) XMCD spectra measured in remanence at 2 K; (c) XMCD spectra measured under a 1000 Oe applied field at 2 K; (d) XMCD spectrum measured under a 2000 Oe applied field at 300 K. XMCD spectra are obtained using TEY (thick red lines) and FY (thin blue lines) detection.", - "page_start": 5, - "page_end": 5, - "source_file": "1001.2449.pdf" - }, - { - "text": "measurements were performed on beamline I06 at the Diamond Light Source, and on beamline 4.0.2 at the Advanced Light Source. 
Total-electron yield (TEY) and fluorescence yield (FY) were monitored simultaneously using the sample drain current and the photocurrent of a diode mounted at 90◦ to the incident beam, respectively.\n\nSQUID magnetometry measurements were first performed on control Fe/GaAs(001) and (Ga,Mn)As/GaAs(001) samples, grown under the same conditions as the bilayers, to determine the magnetic anisotropies of the individual layers and the Curie temperature of the (Ga,Mn)As layer. The Fe film has a uniaxial magnetic anisotropy with easy axis along the [110] orientation, similar to previous studies6 . For the (Ga,Mn)As control sample, there is a competition between cubic and uniaxial magnetic anisotropies, with the former dominant at low temperatures and favoring easy axes along the in-plane h100i orientations, and the latter dominant close to TC (∼35 K) giving an easy axis along the [1¯10] orientation. Figure 1 shows [110] magnetization versus temperature curves and low temperature hysteresis loops for a bilayer film containing a 20 nm thick (Ga,Mn)As layer. The total remnant moment of the bilayer film decreases on cooling under zero magnetic field below the TC of the (Ga,Mn)As, indicating that this layer aligns antiparallel to the Fe magnetization at zero field. The hysteresis curve shows a two-step magnetization reversal, indicating different behavior of the Fe and (Ga,Mn)As layers, with the smaller loop attributed to the dilute moment (Ga,Mn)As film. The minor hysteresis loop shown in Fig. 1 clearly shows a shift from zero field by a bias field HE, indicating that the Fe layer induces an exchange bias in the magnetic semiconductor. The shape and size of the minor loop is in agreement with the hysteresis loop for the control (Ga,Mn)As sample, also shown in Fig. 1. 
This strongly indicates that the exchange bias affects the whole of the (Ga,Mn)As layer in the bilayer sample.\n\nSimilar behavior is observed for bilayer samples containing a 10 nm or 50 nm (Ga,Mn)As layer, with a bias field which is approximately inversely proportional to the thickness d of the ferromagnetic semiconductor layer (Fig. 1, inset). This 1/d dependence of HE was found previously for MnAs/(Ga,Mn)As bilayers4 , and is generally observed in exchanged-biased thin films12 . From this dependence it is possible to describe the exchange bias in terms of an interface energy per unit area, ∆E = MF SHEd = 0.003 erg/cm2 . This value is rather small compared to typical exchange bias systems12, reflecting the low moment density MF S of the diluted FM semiconductor layer. However, the bias field for a given (Ga,Mn)As thickness is larger than is observed for MnO/(Ga,Mn)As structures13, while the reproducibility and flexibility of the present structures is much higher due to the single-crystalline ferromagnetic nature of the Fe layer.\n\nTo confirm the presence of AFM interlayer coupling, we performed XMCD measurements at the Mn and Fe L2,3 absorption edges in order to determine the magnetic response of the individual elements. In L2,3 XMCD, electrons are excited from a 2p core level to the unoccupied 3d valence states of the element of interest by circularly polarized x-rays at the resonance energies of the transitions. The difference in absorption for opposite polarizations gives a direct and element-specific measurement of the projection of the 3d magnetic moment along the xray polarization vector. The absorption cross-section is conventionally obtained by measuring the decay products – either fluorescent x-rays or electrons – of the photoexcited core hole. The type of decay product measured determines the probing depth of the technique. For Mn L2,3 absorption, the probing depths for FY and TEY detection are λF Y ≈ 100 nm and λT EY ≈ 3 nm. 
In the current experiment, the Mn XMCD measured using FY and TEY are thus sensitive to the bulk of the (Ga,Mn)As film and the near-interface layers, respectively.\n\nFigure 2(a)-(c) shows the magnetic field dependence of XMCD asymmetry, defined as (Il − Ir)/(Il + Ir) where Il(r) is the absorption for left- (right-) circularly polarized x-rays. This is measured at the Fe and Mn L3 absorption peaks for a Fe(2 nm)/(Ga,Mn)As(10 nm) sample at 2 K. The external field is applied along the photon incidence direction, which is at 70◦ to the surface normal with an in-plane projection along the [110] axis. The XMCD data show that the Fe film displays a square hysteresis loop with a single magnetization switch, as expected for a monocrystalline Fe film with strong uniaxial magnetic anisotropy. The Mn XMCD shows a more complicated loop due to the effect of the interlayer coupling. The projected Mn moment aligns antiparallel to the Fe moment at remanence, and undergoes a magnetization reversal of opposite sign to the Fe. With further increase of the external magnetic field, the Mn moment gradually rotates away from antiparallel alignment with the Fe layer, and into the field direction. Qualitatively similar behavior is observed for the Fe(2 nm)/(Ga,Mn)As(20 nm) sample: the (Ga,Mn)As layer is aligned antiparallel to the Fe layer at zero field, although the bias field is lower by approximately a factor of two.\n\nClear differences are observed between the Mn XMCD hysteresis loops obtained using TEY and FY detection modes. For FY the magnitude of the XMCD is similar (but of opposite sign) at remanence and at high magnetic fields, whereas for TEY at remanence it is approximately a factor of two larger than at 1000 Oe. The Mn L2,3 XMCD spectra recorded at remanence and at 1000 Oe, shown in Fig. 3, confirm this result. At remanence the FY and TEY detected XMCD have similar magnitudes. 
However, under a large external field the XMCD is substantially smaller in TEY than in FY, confirming that the net magnetization of the Mn ions near the interface is significantly less than in the bulk of the (Ga,Mn)As film. This is the case even up to the highest field applied (20 kOe). By applying the XMCD sum rules14 to the TEY data, and by comparing the spectra to previous measurements on well-characterized (Ga,Mn)As", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2449.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼2% Crab flux.\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n- 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n- 1ES 1218+304: This HBL flared during VER-ITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. 
The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solid footing [27, 28].\n- 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n- W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an external-Compton (EC) component in an SSC interpretation.\n- 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008 [17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n- Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n- RGB J0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n- PKS 1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n### **8. Conclusions**\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ-rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. 
The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "### Materials & experimental systems\n\n| Methods |\n| --- |\n\n| n/a | Involved in the study | n/a | Involved in the study |\n| --- | --- | --- | --- |\n| | Antibodies | | ChIP-seq |\n| | Eukaryotic cell lines | | Flow cytometry |\n| | Palaeontology and archaeology | | MRI-based neuroimaging |\n| | Animals and other organisms | | |\n| | Clinical data | | |\n| | Dual use research of concern | | |\n| | Plants | | |\n\n# Magnetic resonance imaging\n\n### Experimental design\n\n| Design type | Structural & Diffusion MRI | |\n| --- | --- | --- |\n| Design specifications | No task-based fMRI used in this manuscript. | |\n| Behavioral performance measures | N/A; no performance metrics collected | |\n| Acquisition | | |\n| Structural Imaging type(s) | | |\n| 3 Field strength | | |\n| Sequence & imaging parameters | | High-resolution anatomical scans were acquired using a T1-weighted (T1w) magnetization prepared rapid gradient echo |\n| | | (MPRAGE) sequence (TR = 2500 ms, TE = 2.31 ms, T1 = 934 ms, flip angle = 7°, 0.8 mm thickness) followed by a gradient echo fieldmap (TR = 758 ms; TE1 = 4.92 ms; TE2 = 7.38 ms; flip angle = 60°). 
A T2-weighted (T2w) turbo spin echo (TSE) |\n| | | scan was also acquired with an oblique coronal orientation positioned orthogonally to the main axis of the hippocampus (TR/TE = 9860/50 ms, flip angle = 122°, 0.4 × 0.4 mm2 in-plane resolution, 2 mm slice thickness, 38 interleaved slices |\n| | with no gap, total acquisition time = 5:42 min). | |\n| Area of acquisition | T1-weighted and dMRI scans = whole-brain | |\n| | T2-weighted scan = high-resolution imaging of medial temporal lobe | |\n| Diffusion MRI Used Not used | | |\n| Parameters | TR = 4300 ms, echo time = 100.2 ms, 139 directions, b-max = 4990, FoV = 259 x 259 mm, 78 slices, 1.7986 x 1.7986 x 1.8 mm voxel | |\n\n# Preprocessing\n\nresolution\n\n| Preprocessing software | Gray Matter Volume & Cortical Thickness: |\n| --- | --- |\n| | Advanced Normalization Tools (ANTs), version 2.1.0 |\n| | FreeSurfer, version 7 |\n| | T2-weighted MTL scans: |\n| | Automatic Segmentation of Hippocampal Subfields (ASHS), version 7/2018 |\n| | Diffusion imaging: |\n| | QSIprep, version 0.15.3 |\n| | DSI Studio, version Chen-2022-07-31 |\n| Normalization | Normalization differed by modality due to inherent limitations of applicable processing pipelines. |\n| | Gray Matter Volume & Cortical Thickness: |\n| | All analyses were kept in native subject-space to limit the amount of warping and leverage the advantages of a precision |\n| | imaging design. |\n| | T2-weighted MTL scans: |\n| | T2w images were registered to the segmentation template (see below) using ANTs deformable registration. |\n| | Diffusion imaging: |\n| | Initial preprocessing through QSIprep normalized diffusion images to the skull-stripped T1w images. Diffusion images were |\n| | then reconstructed in MNI space using DSI studio's Q-space Diffeomorphic Reconstruction. 
|\n\nApril 2023", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed4.pdf" - }, - { - "text": "Image mode creates one-to-one mapping of logical block addresses (LBAs) between a volume and an MDisk (LU presented by the virtualized storage). Image mode volumes have the minimum size of one block (512 bytes) and always occupy at least one extent. An Image mode MDisk cannot be used as a quorum disk and no IBM Spectrum Virtualize system metadata extents are allocated from it; however, all of the IBM Spectrum Virtualize copy services functions can be applied to image mode disks. The difference between a managed mode volume (with striped extent allocation) and an image mode volume is shown in Figure 7-4.\n\n*Figure 7-4 Image mode volume versus striped volume*\n\nAn image mode volume is mapped to one, and only one, image mode MDisks and is mapped to the entirety of the MDisk. Therefore, the image mode volume capacity must be equal to the size of the corresponding image mode MDisk. If the size of the (image mode) MDisk is not a multiple of the MDisk group's extent size, the last extent is marked as partial (not filled).\n\nWhen you create an image mode volume, the specified MDisk must be in unmanaged mode and must not be a member of a storage pool. As the image mode volume is configured, the MDisk is made a member of the specified storage pool (Storage Pool_IMG_*xxx*).\n\nIBM Spectrum Virtualize also supports the reverse process, in which a managed mode volume can be migrated to an image mode volume. The extent size that is chosen for this specific storage pool must be the same as the extent size of the storage pool into which you plan to migrate the data off the image mode volumes. If a volume is migrated to another MDisk, it is represented as being in managed mode during the migration. 
Its mode changes to \"image\" only after the process completes.\n\nIt is a preferred practice to put image mode MDisks in a dedicated storage pool and use a special name for it (for example, Storage Pool_IMG_*xxx*).", - "page_start": 267, - "page_end": 267, - "source_file": "sg247938.pdf" - }, - { - "text": "FIG. 2. (color online) XMCD asymmetry versus applied field along the [110] axis at 2 K, for a Fe (2 nm)/(Ga,Mn)As (10 nm) film. (a) Fe L 3, total electron yield; (b) Mn L 3 , total electron yield; (c) Mn L 3, fluorescent yield. Black and red points are data for increasing and decreasing fields respectively; lines are to guide the eye.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.2449.pdf" - }, - { - "text": "FIG. 1. (color) Main figure: Major (red/black) and minor (green) hysteresis loops along the [110] axis at 5 K, for a Fe (2 nm)/(Ga,Mn)As (20 nm) film, and the hysteresis loop for a control (Ga,Mn)As (20 nm) film along the same axis (blue). Left inset: Magnetization versus temperature for the Fe/(Ga,Mn)As film at remanence (black) and under a 500 Oe applied field (red). Right inset: Exchange bias field versus thickness d of the (Ga,Mn)As film (points) and fit showing 1/d dependence (dashed line).\n\nM. Sawicki, M. Polini, J. Sinova, A. H. MacDonald, R. P. Campion, L. X. Zhao, N. R. S. Farley, T. K. Johal, G. van der Laan, C. T. Foxon, and B. L. Gallagher, Phys. Rev. B 73, 165205 (2006).\n\n16 K. W. Edmonds, A. A. Freeman, N. R. S. Farley, K. Y. Wang, R. P. Campion, B. L. Gallagher, C. T. Foxon, G. van der Laan, and E. Arenholz, J. Appl. Phys. 102, 023902 (2007).", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2449.pdf" - }, - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. Olejnik,1, 2 P. Wadley,3 J. Haigh,3 K. W. Edmonds,3 R. P. Campion,3 A. W. Rushforth,3 B. L. Gallagher,3\n\nC. T. Foxon,3 T. Jungwirth,2, 3 J. Wunderlich,1, 2 S. S. Dhesi,4 S. Cavill,4 G. van der Laan,4 and E. 
Arenholz5\n\n1Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\nInstitute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic 3School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom\n\n4Diamond Light Source, Harwell Science and Innovation Campus,\n\n5Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n(Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\n2\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p-type non-magnetic spacers2 . However, the Curie temperature TC of (Ga,Mn)As is currently limited to 185 K in single layers3 , and is typically much lower for layers embedded within a heterostructure2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. 
Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively4,5. Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature7 . Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature8,9. Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition, which may further disrupt the interface order. The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples7 . Demonstration of coupling between the bulk of the layers, i.e., an exchange bias effect, would provide direct evidence of the interface magnetic order. Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.\n\nHere, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. As with previous studies of FM metal/FM semiconductor bilayers4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures10,11) the layers are in direct contact without a non-magnetic spacer in between. 
We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref.7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260◦C, using previously established methods3,8. A low Mn concentration of x ≈ 0.03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼0 ◦C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. Mn and Fe L2,3 x-ray absorption and XMCD\n\nDidcot, Oxfordshire, OX11 0DE, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "**Note:** Consider the compression guidelines in Chapter 10, \"Advanced features for storage efficiency\" on page 403 before creating the first compressed volume copy on a system.\n\n# **7.3.3 General pane**\n\nThe General pane is shown in Figure 7-22.\n\n| General | | |\n| --- | --- | --- |\n| Format volume: | V | Enabled |\n| Cache mode: | | |\n| Enabled | | |\n| OpenVMS UDID: | | |\n\n*Figure 7-22 Custom volume creation – General pane*\n\nThis pane gives the following options:\n\n- - Format volume: Controls whether the volume is formatted before being made available; defaults to Enabled.\n- - Cache mode: Controls volume caching; defaults to Enabled. 
Other available options are Read-only and Disabled.\n- - OpenVMS UDID: Each OpenVMS Fibre Channel-attached volume requires a user-defined identifier or unit device identifier (UDID). A UDID is a nonnegative integer that is used in the creation of the OpenVMS device name.\n\n# **7.4 HyperSwap volumes**\n\nThe HyperSwap function provides highly available volumes accessible through two sites at up to 300 km (186.4 miles) apart. A fully independent copy of the data is maintained at each site.\n\nWhen data is written by hosts at either site, both copies are synchronously updated before the write operation completion is reported to the host. The HyperSwap function automatically optimizes itself to minimize data transmitted between sites and to minimize host read and write latency.\n\nIf the nodes or storage at either site go offline, the HyperSwap function automatically fails over access to the other copy. The HyperSwap function also automatically resynchronizes the two copies when possible.\n\nThe HyperSwap function is built on the foundation of two earlier technologies: Non-Disruptive Volume Move (NDVM) function that was introduced in IBM Spectrum Virtualize V6.4, and the Remote Copy features that include Metro Mirror, Global Mirror, and Global Mirror with Change Volumes.", - "page_start": 290, - "page_end": 290, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_KCN_2013.pdf", - "query": "What is Kingsgate ?", - "target_page": 2, - "target_passage": "Kingsgate is a highly successful gold mining, development and exploration company with two operating gold mines and two advanced development projects.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "*Kingsgate is a highly successful gold mining, development and exploration company with two operating gold mines and two advanced development projects. 
Shareholders can look forward to the benefits of this strong operating and development platform, where Kingsgate aims to build value through operating, earnings and dividend growth for the benefit of all stakeholders.*\n\nCHILE\n\nAUSTRALIA\n\nTHAILAND", - "page_start": 1, - "page_end": 1, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Senior Management\n\nKingsgate's executives have a comprehensive range of skills and experience including mine development and operations, exploration, finance and administration. They are supported by highly qualified specialists, whose backgrounds cover the full scope of mining resources activities.\n\nSenior members of Kingsgate's management team are:\n\n## Gavin Thomas\n\nBSc (Geology), FAusIMM\n\n#### Managing Director and Chief Executive Officer\n\nGavin Thomas was appointed Chief Executive Officer of Kingsgate in 2004 and joined the Kingsgate Board on 16th November 2007. Gavin has had a successful career in developing mining companies from the exploration phase into mid-tier gold or copper producers. He has over 42 years of international experience in exploring for, evaluating, developing, operating and reclaiming mines in North and South America, Australia, the Southwest Pacific, Asia and Europe. Amongst Gavin's credits is the discovery of \"Lihir\" in Papua New Guinea, one of the largest gold deposits in the world. In particular, he has extensive experience in Thailand and South America.\n\n#### Duane Woodbury BEc (Hons)\n\n#### Chief Financial Officer\n\nDuane Woodbury was appointed Chief Financial Officer of Kingsgate on 1 September 2011. Duane has a BEc (Hons) Degree and has worked in various financial, accounting and advisory roles during his career in a number of locations, including London, New York and Singapore. 
He has been assisting Kingsgate in its business development initiatives since August 2007 and brings over 20 years of experience in financial markets and corporate finance transactions, principally with the Macquarie Group.\n\n#### Tim Benfield\n\nDip CSM (mining), MBA, MAusIMM\n\n#### Chief Operating Officer\n\nTim Benfield joined Kingsgate in February 2012 as Chief Operating Officer. Tim is a mining engineer with over 21 years underground and open pit experience in the mining industry in both operational and corporate roles. He has operational and project development experience in Australia, Africa and Saudi Arabia. This includes 10 years with Barrick Gold of Australia where he provided support to four operating mines and two development projects. Tim was most recently General Manager of the Pajingo Gold mine in Queensland for Evolution Mining Limited.\n\n#### Ross Coyle BA, FCPA, FCIS\n\n#### General Manager Finance and Administration Company Secretary\n\nRoss Coyle joined Kingsgate in March 2011 following the Company's acquisition of Dominion Mining Limited and was with the Dominion group for over 25 years. He is a qualified accountant and has over 30 years experience in finance and accounting within the resource industry. He was Finance Director of Dominion from 1996. Ross was appointed Kingsgate's Company Secretary in September 2011.\n\n#### Joel Forwood Bsc (Hons) FFin\n\n#### General Manager Corporate and Markets\n\nJoel Forwood joined Kingsgate in November 2010 and has over 27 years experience in the resource and investment industries covering investor relations, funds management and exploration. For over 12 years, he has been leading investor relations at a number of listed companies, most recently for Lihir Gold Limited. 
Prior to this he was a fund manager with Queensland Investment Corporation (QIC) following his early career in mineral exploration with BHP and corporate development with RGC.\n\n## Ronald James\n\nBSc (Geology), MAusIMM, MAIG\n\n#### General Manager Exploration and Resource Development\n\nRon James has 30 years of experience in exploration and mining at management level inclusive of setting up gold mines and exploration projects from their earliest stages through to development and sustainability. Before joining Kingsgate, he was Chief Mine Geologist at the Gold Ridge Mine in the Solomon Islands and later Group Exploration Manager for Ross Mining NL. Ron is familiar with the technical and operating requirements for emerging projects in a variety of terrains and environments and has a strong focus on maximising returns from ore bodies through optimum waste and ore classification as well as increasing reserves from nearmine resource development.", - "page_start": 40, - "page_end": 40, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Financing Arrangements\n\n#### Corporate loan facility\n\nKingsgate has a three year secured loan facility with Investec which was amended during the year. The amended facility has a limit of $40 million (30 June 2012: $50 million), of which $20 million has been drawn down as at 30 June 2013 (30 June 2012: $40 million).\n\n#### Convertible loan facility\n\nKingsgate has a five year A$35 million convertible loan facility with Investec entered into in a prior period to provide funding for the Bowdens acquisition. Kingsgate has the option to make a prepayment against the facility with an issue of Kingsgate shares.\n\n#### Restructure of corporate loan and convertible loan facilities\n\nAs indicated previously in the Preliminary Final report, at balance date it was the Group's intention to restructure and amalgamate these facilities in the next financial year. 
This relates to the potential for completion of the Initial Public Offering (\"IPO\") of Akara on the Stock Exchange of Thailand and the updated mine plan for Challenger. Any restructure would optimise the Group's anticipated balance sheet liquidity and operational cash flows. Accordingly, the Group classified the total amount drawn down under these facilities of $55 million as a current liability at 30 June 2013.\n\nSubsequent to the end of the financial year, the Group received from its lenders a credit approved term sheet (subject to formal documentation) for the restructure of the corporate loan and convertible loan facilities. Following completion of the restructure the total amount outstanding will be reduced to $40 million. This loan will be provided through a single senior corporate facility which will consist of two tranches:\n\n- 〉 Tranche one will be a $25 million Akara Pre IPO Bond with a maturity date of 31 July 2015. The current intention is for this tranche to be repaid as part of the Akara IPO, although at Kingsgate's election repayment can be made by either cash or in Kingsgate's shares.\n- 〉 Tranche two is an amortising facility with $5 million to be repaid during the 2014 financial year and the balance of $10 million repaid during the 2015 financial year.\n\n#### Convertible revolving credit facility\n\nThe Group also has a three year $25 million Convertible Revolving Credit Facility available. As at the date of this report the facility is undrawn. Under the terms of this facility, Kingsgate has the option of repaying any funds drawn down under the facility through either cash or by issuing ordinary shares. It is intended that this facility will be utilised during the 2014 financial year for corporate and working capital purposes. 
It is the current intention of the company to repay any cash drawdown under the facility by the issuance of fully paid ordinary shares which would rank pari passu with all existing ordinary shares, although this position will be reviewed at the appropriate time. The number of shares has not yet been determined and they will be issued at a 2.5% discount to VWAP over a period by reference to the draw down date. Shareholder approval is not required.\n\n#### Multi-currency and syndicated loan facilities\n\nKingsgate's Thai operating subsidiary, Akara, established a six year amortising multi-currency loan facility equivalent to US$125 million (fully drawn as at period end) and an additional Thai Baht denominated working capital facility equivalent to US$15 million (undrawn as at year end) during the period. The proceeds from these borrowings were used to fully repay the outstanding balance on the US$100 million Baht denominated syndicated loan facility in existence at the beginning of the period as well as to repay part of the corporate loan facility noted above.\n\n## Financial Position\n\nShareholders' equity at 30 June 2013 was $474 million (2012: $776 million). 
The decrease of $302 million reflects the year's loss together with dividends paid.\n\n## Dividends\n\nNo final dividend has been declared for the year ended 30 June 2013.\n\nAn interim dividend declared for the half-year ended 31 December 2012 of 5 cents per fully paid share was paid on 12 April 2013.\n\nA final dividend declared for the year ended 30 June 2012 of 10 cents per fully paid share was paid on 1 October 2012.", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Directors' Report\n\nYour Directors present their report on the Group consisting of Kingsgate Consolidated Limited and the entities it controlled at the end of, or during, the year ended 30 June 2013.\n\n## Directors\n\nThe following persons were Directors of Kingsgate Consolidated Limited during the whole of the financial year and up to the date of this report.\n\n- 〉 Ross Smyth-Kirk Chairman\n- 〉 Peter Alexander Non-Executive Director\n- 〉 Craig Carracher Non-Executive Director\n- 〉 Peter McAleer Non-Executive Director\n- 〉 Gavin Thomas Executive Director\n\n## Principal activities\n\nThe principal activities of Kingsgate Consolidated Limited are mining and mineral exploration in Australia, South East Asia and South America.\n\n## Dividends\n\nDividends paid to members during the financial year were as follows:\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| | $'000 | $'000 |\n| Final dividend declared for the year ended 30 June 2012 of | 15,148 | 6,829 |\n| 10 cents per fully paid share paid on 1 October 2012 | | |\n| Interim dividend declared for the year ended 30 June 2013 of | 7,591 | 15,196 |\n| 5 cents per fully paid share paid on 12 April 2013 | | |\n| Total dividends | 22,739 | 22,025 |\n\n## Review of operations and results\n\n#### Operational performance\n\nKingsgate is a gold mining, development and exploration company based in Sydney, Australia. 
Kingsgate owns and operates two gold mines, the world class Chatree Mine in Thailand and the underground Challenger Mine in South Australia. In addition, the Company has two advanced development projects, the Nueva Esperanza Silver / Gold Project, in the highly prospective Maricunga Gold / Silver Belt in Chile, and the Bowdens Silver Project in New South Wales, Australia. From this operating and development platform, Kingsgate aims to build value for all shareholders.\n\nGroup gold production was 199,897 ounces, a decrease of 4% on the previous corresponding year. The contribution from Chatree was 133,681 ounces with 66,216 ounces from Challenger.\n\nChatree gold production was 10% higher than the previous corresponding period as a result of an increase in throughput from the expanded Chatree process plant and access to higher grade oxide ore from Q Prospect.\n\nChallenger gold production was 24% lower than the previous corresponding year given additional dilution and depletion at Challenger Deeps and a shortfall in planned development. This resulted in lower ore tonnes from the mine that was supplemented by low grade stockpiled ore. Following the fall in the gold price a strategic review of Challenger was implemented that has resulted in a new mine plan to focus primarily on the higher grade Challenger West orebody. The new mine plan will be implemented during the first three months of the 2014 financial year.\n\nA lower gold price and industry wide cost pressures had a negative impact on the underlying earnings of the Group which contributed to a major impairment to the carrying value of a number of Group assets, particularly assets relating to the Challenger Gold Operations. Impairments totalling $332,808,000 were the major contributor to the after tax loss of $323,726,000 for the year.\n\nThe development projects continued to advance during the year. 
At Nueva Esperanza, the feasibility work shifted to focus on identifying the lowest cost and lowest power consumption development alternatives. This included reviewing a heap leach process option with on-site power generation. Further work is expected to be completed in the December quarter 2013. At Bowdens, the feasibility work has confirmed the optimum process route. Completion of the technical feasibility study including mine planning, infrastructure and metallurgy, and lodging of the Environmental Impact Statement (\"EIS\") are scheduled for 2014.", - "page_start": 43, - "page_end": 43, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## 32. Parent entity financial information continued\n\n#### Contingent liabilities of the parent entity\n\nBank guarantees have been given by Kingsgate's controlled entities to participating banks in the syndicated loan facility and revolving loan facility as described in Note 16 as part of the security package.\n\nThese guarantees may give rise to liabilities in the parent entity if the controlled entities do not meet their obligations under the terms of the loans subject to guarantees. No material losses are anticipated in respect of the above contingent liabilities.\n\n## 33. Sale of exploration assets\n\nOn 28 March 2013, the Group sold its exploration assets in Western Australia and Queensland through the sale of shares in its subsidiary company, Quadrio Resources Limited, to Caravel Minerals Limited (\"Caravel\"), an Australian company listed on the ASX.\n\nKingsgate received 135,000,000 fully paid ordinary shares in the issued capital of Caravel and 20,000,000 unlisted options to acquire Caravel shares exercisable at 10 cents on or before three years from the date of issue. Subsequent to the sale, Kingsgate became the largest shareholder in Caravel with 35.54% held at 30 June 2013. 
Kingsgate's holding in Caravel reduced to 27.04% post 30 June 2013 following a rights issue by Caravel that Kingsgate did not participate in.\n\nThe financial impact of the sale transaction as at the date of disposal is summarised below:\n\n| | 2013 |\n| --- | --- |\n| Fair value of consideration | $'000 |\n| 135,000,000 Caravel shares at $0.025 per share | 3,375 |\n| 20,000,000 unlisted Caravel options | – |\n| Total consideration | 3,375 |\n| Carry value of the exploration assets sold | 20,084 |\n| Loss on sale | 16,709 |", - "page_start": 111, - "page_end": 111, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Auditor's Independence Declaration\n\n### Auditor's Independence Declaration\n\nAs lead auditor for the audit of Kingsgate Consolidated Limited for the year ended 30 June 2013, I declare that to the best of my knowledge and belief, there have been:\n\n- a) no contraventions of the auditor independence requirements of the *Corporations Act 2001* in relation to the audit; and\n- b) no contraventions of any applicable code of professional conduct in relation to the audit.\n\nThis declaration is in respect of Kingsgate Consolidated Limited and the entities it controlled during the period.\n\nBrett Entwistle Partner PricewaterhouseCoopers 23 September 2013", - "page_start": 63, - "page_end": 63, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "| | | | | Equity holding |\n| --- | --- | --- | --- | --- |\n| 21. 
Controlled entities | Country of | Class of | 2013 | 2012 |\n| Entity | Incorporation | shares | % | % |\n| Parent Entity | | | | |\n| Kingsgate Consolidated Limited | | | | |\n| Subsidiaries | | | | |\n| Dominion Mining Ltd | Australia | Ordinary | 100 | 100 |\n| Challenger Gold Operations Pty Ltd(i) | Australia | Ordinary | 100 | 100 |\n| Quadrio Resources Limited(ii) | Australia | Ordinary | – | 100 |\n| Gawler Gold Mining Pty Ltd | Australia | Ordinary | 100 | 100 |\n| Dominion Copper Pty Ltd | Australia | Ordinary | 100 | 100 |\n| Dominion Metals Proprietary Limited | Australia | Ordinary | 100 | 100 |\n| Yilgarn Metals Limited | Australia | Ordinary | 100 | 100 |\n| Kingsgate Treasury Pty Ltd(iii) | Australia | Ordinary | 100 | 100 |\n| Kingsgate Bowdens Pty Limited | Australia | Ordinary | 100 | 100 |\n| Kingsgate Capital Pty Ltd | Australia | Ordinary | 100 | 100 |\n| Kingsgate Nominees Pty Limited | Australia | Ordinary | 100 | 100 |\n| Kingsgate South America Pty Ltd | Australia | Ordinary | 100 | 100 |\n| Laguna Resources NL | Australia | Ordinary | 100 | 100 |\n| Laguna Exploration Pty Ltd | Australia | Ordinary | 100 | 100 |\n| Akara Mining Limited (iv) | Thailand | Ordinary | 100 | 100 |\n| Issara Mining Ltd | Thailand | Ordinary | 100 | 100 |\n| Suan Sak Patana Ltd | Thailand | Ordinary | 100 | 100 |\n| Phar Mai Exploration Ltd | Thailand | Ordinary | 100 | 100 |\n| Richaphum Mining Ltd | Thailand | Ordinary | 100 | 100 |\n| Phar Lap Ltd | Thailand | Ordinary | 100 | 100 |\n| Phar Rong Ltd | Thailand | Ordinary | 100 | 100 |\n| Dominion (Lao) Co., Ltd | Laos | Ordinary | 100 | 100 |\n| Laguna Chile Ltda | Chile | Ordinary | 100 | 100 |\n| Minera Kingsgate Limitada | Chile | Ordinary | 100 | 100 |\n| Kingsgate Peru SRL | Peru | Ordinary | 100 | 100 |\n| Minera Kingsgate Argentina S.A. 
| Argentina | Ordinary | 100 | 100 |\n\n(i) Challenger Gold Operations Pty Ltd changed its name from Dominion Gold Operations Pty Ltd on 26 March 2013.\n\n(ii) Quadrio Resources Limited was sold by the Group during the year.\n\n(iii) Kingsgate Treasury Pty Ltd changed its name from Yilgarn Metals Exploration Pty Ltd on 29 November 2012.\n\n(iv) Akara Mining Limited changed its name to Akara Resource Public Company Limited on 29 August 2013.", - "page_start": 95, - "page_end": 95, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Exploration Report\n\n### Summary\n\nKingsgate has a portfolio of exploration tenements and applications in Thailand, Chile and Lao PDR. Following the sale of exploration tenements to Caravel Minerals, exploration in Australia is currently only conducted in the vicinity of the Challenger Mine in South Australia and the Bowdens Silver Project in New South Wales.\n\nKingsgate's South East Asian exploration team continued their exploration activities on Thailand and surrounding countries. Strategically the team has turned the majority of their attention to projects which have the capacity to add value to the Company through exploration drilling subsequent resource expansion. These projects include the granted Mining Leases at Chatree and the granted Sayabouly Concession in the Lao PDR.\n\nOutside of these active areas, the South East Asian exploration team continues to review new opportunities throughout Thailand, Laos and their neighbouring countries.", - "page_start": 31, - "page_end": 31, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Resource\n\nMPR Geological Consultants Pty Ltd (MPR) has estimated Mineral Resources for the Bowdens silver lead zinc deposit and reviewed the quality of sampling and assaying for Kingsgate's 2012 drilling.\n\nEstimated resources include silver, lead and zinc grades and are reported above silver equivalent cut off grades. 
The silver equivalence formula is based on commodity prices and recoveries provided by Kingsgate, which give the following function.\n\nAg equivalent (g/t) = Ag (g/t) + 27.5 x Pb (%) + 22.8 x Zn (%)\n\nThe study database comprises 567 RAB, aircore, Reverse Circulation hammer (RC), and diamond drill holes completed by Kingsgate and previous explorers since 1989 for a combined 63,088 metres of drilling.\n\nA JORC-compliant resource estimate was completed in October 2012 and the current total measured, indicated and inferred resource (at a 30 grams per tonne silver equivalent (AgEq) lower cut-off grade) is 182 million ounces of AgEq.\n\n## Feasibility Study\n\nDuring 2013, the process design and engineering work for the Definitive Feasibility Study (DFS) progressed to a point where the draft study was close to completion as at 30 June 2013. The study encompassed detailed process design based on using the most recent metallurgical test results, capital and operating cost estimates, project water and power supply, infrastructure requirements and mine optimisation.\n\nA specialist water supply and engineering firm was engaged to determine the project options for supply, ground and surface water management. Separate specialist consulting firms were engaged to prepare the design and costing of the tailings storage facility and power supply for the Bowdens project.\n\nA geo-technical drilling program was largely completed and the results utilised in determining preliminary mine design and costing for an open pit mine for Bowdens including pit wall angles for the mine optimisation.\n\nA geo-metallurgical test program was completed using core samples prepared from the major lithology types at the Bowdens silver project. The geo-metallurgical programme was successful in providing important information related to the physical characteristics and flotation recovery of mineralisation from the dominant\n\nlithology types. 
This included providing confirmation of milling circuit parameters and overall improved metallurgical recovery.\n\nTesting of the long term geochemical stability of the ore and waste for potentially acid forming properties is ongoing, with initial weathering columns nearing completion. The geochemical characterisation results will form an important input to the Environmental Impact Statement (EIS).\n\n## EIS, Community and Project Approval Process\n\nKingsgate submitted an application for the Director General's Requirements (DGRs) in December 2012. Following a planning focus meeting with various NSW government departments and agencies in February 2013, the DGR's were issued in late February 2013. The DGR document combines the elements of the conceptual project development plan (CPDP) and sets out environmental assessment requirements for the proposed project development.\n\nThe preparation for lodgement of an Environmental Impact Statement (EIS) to the NSW Department of Planning (\"Planning\") continues. It is envisaged that the EIS will be completed and lodged in 2014. Data for flora and fauna, surface water, groundwater, meteorology, ambient noise and dust levels are collected routinely. Further investigations of cultural heritage, social-economic impact, traffic impact, soil type and agricultural suitability have also been undertaken on site.\n\nThere have been no serious safety incidents reported to date. At the end of June there were over 600 days Lost Time Injury free since Kingsgate exploration and pre-development activities began on site.\n\nEnvironmental, regulatory and NSW Government approvals remain the key determinants to the timing of project development at Bowdens. Of particular note were two recent NSW Land and Environment Court decisions relating to the overturning of existing mining approvals that will require extra diligence and consideration as the Bowdens Project moves forward. 
Community relations was undertaken throughout the year utilising a variety of techniques including: letters, telephone calls, attendance at trade shows, industry presentations, site tours, Community Liaison Group meetings, governmental meetings and two open days.\n\nThe open days were highly successful in engaging with the community with more than 200 local people providing feedback on a range of topics. Sentiment capture and management remains an important aspect for the project as part of the ongoing community relations program, and a full time Community and Government Relations Manager has been engaged on that basis.\n\n# Projects Report", - "page_start": 28, - "page_end": 28, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "*The Board of Kingsgate is determined to re-establish the path to building shareholder wealth via profits and dividends...*\n\nRoss Smyth-Kirk Director\n\n## Chairman's Review\n\nThe past twelve months has been another challenging year for the resources industry with weakening commodity prices and a cost structure that more reflected boom times. The gold price drifted lower during the year until April when it underwent a major fall of around US$200 per ounce and then continued to weaken through to late June when it bottomed at a 34 month low at around US$1,200 per ounce. Subsequently, there has been a small recovery in the gold price that has been helped by a weakening Australian dollar to improve the position of Australian based gold producers. Kingsgate is one of many resource companies whose earnings and share price performance has been affected by the weakening gold price and the downturn in the global industry.\n\nKingsgate had a mixed year of transition in 2013 with the completion and final permitting of the major expansion at Chatree in Thailand but also undergoing a major restructure at the Challenger mining operations, caused by the lower gold price and ongoing price volatility. 
In the weak and uncertain metal price environment, Kingsgate moved quickly to reduce all non-essential expenditure on its operations and on the development projects. Additionally, the Board and senior management have participated in the cost reduction initiatives through the implementation of a 10 percent cut to Directors fees, and an effective 20 percent cut to senior management remuneration.\n\nThe lower metal prices and industry cost pressures had a negative impact on the underlying earnings of the Group of $17.2 million and also contributed to non-cash impairments to the carrying value of a number of Group assets, particularly assets relating to the Challenger Gold Operations. The impairments were the major contributor to the after tax loss of $323.7 million for the year.\n\nWith lower earnings and the current uncertainty and volatility in the metal markets the Board decided not to pay a final dividend. Note that your Company did pay an interim dividend of 5 cents per share following the first half of the financial year.\n\nChatree had a strong year producing 133,681 ounces of gold. The good production performance was achieved despite some operational hurdles with slower than anticipated Government approvals to allow full utilisation of the expanded plant. The Chatree mine lease area is also surrounded by highly prospective exploration ground that is currently under application. Any discoveries within these application areas should substantially extend the mine life at Chatree.\n\nChallenger gold production of 66,216 ounces was 24 percent lower than last year due to additional dilution and depletion at Challenger Deeps and a shortfall in planned development. Following the fall in the gold price, a strategic review of Challenger was implemented that has resulted in a new mine plan to focus primarily on the higher grade Challenger West orebody. 
The new mine plan will be implemented during the first three months of the 2014 financial year.\n\nThe development projects continued to advance during the year. At Nueva Esperanza, the feasibility work shifted to focus on identifying the lowest cost and lowest power consumption development alternatives. This included reviewing a heap leach process option with on-site power generation. Further work is expected to be completed in the current financial year. At Bowdens, the feasibility work has confirmed the optimum process route. Completion of the technical feasibility study including mine planning, infrastructure and metallurgy, and lodging of the Environmental Impact Statement (\"EIS\") are scheduled for 2014.\n\nThe Board of Kingsgate is determined to reestablish the path to building shareholder wealth via profits and dividends despite a difficult external environment. Shareholders can look forward to a steady performance from Chatree and a turn-around at Challenger coupled with the completion of feasibility studies at the two major development projects over the coming year.\n\nI would also like to thank our Chief Executive Officer and Managing Director, Gavin Thomas, Kingsgate management and all of the Kingsgate, Akara and Challenger personnel and the project teams for their part in delivering the operational performance during what was a difficult year for your Company.", - "page_start": 3, - "page_end": 3, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_KCN_2013.pdf", - "query": "What does demonstatre the feasibility study on the Nueva Esperanza Project ?", - "target_page": 6, - "target_passage": "The study demonstrated that open pit mining at two million tonnes per year and processing by milling and agitation leaching in cyanide was technically feasible, although high capital and power costs negatively impacted project economic returns. 
", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "VIETNAM\n\nCAMBODIA\n\nCAMBODIA A M\n\n0 100 200 300\n\nHighway Freeway Power lines Hydro power dam Thermal power station\n\nKilometres\n\nTHAILAND\n\nBangkok\n\nKhon Kaen Kh CHATREE\n\n10°\n\n10°\n\nPhuket\n\n20°\n\nCHALLENGER\n\nN T\n\n130°\n\nW A\n\n135°\n\n30°\n\nQLD\n\nN S W\n\nVIC\n\nAdelaide\n\n140°\n\n35°\n\nL A O S\n\nL A O S\n\n100°\n\nChiang Mai\n\nChumphon\n\n## Nueva Esperanza Project\n\nChile\n\nu\n\n## Summary\n\nThe Nueva Esperanza Project is 100% owned by Kingsgate since February 2012. Nueva Esperanza is located in the Maricunga Gold Belt near Copiapó, a regional mining centre in Northern Chile. The silver-rich mineralisation is hosted by the Esperanza high-sulphidation epithermal alteration system associated with the Cerros Bravos volcanic complex.\n\nCHALLENGER GOLD MINE\n\nBarton West\n\nCundeelee (Tropicana Belt)\n\nBarton Central Tenements Area\n\nBlue Dam\n\nYalla Burra\n\nGolden Point\n\nNorthling\n\nBryah\n\nPerenjori\n\nCalingiri\n\nKukerin\n\nBullock Pool\n\nNanicup Bridge\n\nHolleton West\n\nWongan Hills\n\nLabyrinth\n\nBulgunnia\n\nThe project consists of three well-defined mineralised deposits and a number of undeveloped exploration targets. The main deposits are Arqueros, Chimberos and Teterita. Arqueros was previously mined on a limited scale by underground methods and Chimberos was exploited as an open pit mine, delivering about 40 million ounces of silver in 1998/99. All three deposits currently have a combined Mineral Resources of about 93 million ounces of silver equivalent or 1.6 million ounces of gold equivalent (EQ60)1 .\n\nA feasibility study for a decision to mine the Arqueros portion of Nueva Esperanza was completed in late 2012, demonstrating that open pit mining at two million tonnes per year and processing by milling and agitation leaching in cyanide was technically feasible. 
Work remained to integrate the Teterita and Chimberos deposits into the project, as well as to test lower cost options for processing. Continued metallurgical testwork has shown that mineralisation from all three deposits by heap leaching is technically and economically feasible and the preferred alternative for development.\n\nEnvironmental approvals to commence construction and mining at Nueva Esperanza were granted in July 2013 for the original Arqueros project. Work is underway to modify and update the environmental assessment to incorporate the heap leach process.\n\n1 Equivalence is based on gold/silver price ratio of 60. Gold equivalence = gold content plus (silver content *divided* by 60), whereas Silver equivalent silver content plus (gold content multiplied by 60).\n\nBOWDENS SILVER\n\n150°\n\nDubbo\n\n145°\n\nVIC\n\nTA S\n\n30°\n\nS A\n\n35°\n\nMudgee\n\nQLD\n\nNewcastle\n\nSydney", - "page_start": 29, - "page_end": 29, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Development Projects\n\n#### Bowdens\n\nThe Bowdens Project continued to advance during the year with field programs supporting the ongoing feasibility and environmental studies. Sterilisation drilling and additional metallurgical sampling were undertaken with the resource evaluation drilling completed in October 2012.\n\nDuring 2013, the process design and engineering work for the Definitive Feasibility Study (\"DFS\") progressed to a point where the draft study was close to completion as at 30 June 2013. The study encompassed detailed process design based on using the most recent metallurgical test results, capital and operating cost estimates, project water and power supply, infrastructure requirements and mine optimisation.\n\nThe preparation for lodgement of an Environmental Impact Statement (\"EIS\") to the NSW Department of Planning continues. It is envisaged that the EIS will be completed and lodged in 2014. 
Data for flora and fauna, surface water, groundwater, meteorology, ambient noise and dust levels are collected routinely. Further investigations of cultural heritage, social-economic impact, traffic impact, soil type and agricultural suitability have also been undertaken.\n\nWith the fall in metal prices in late 2013, work and expenditure on the DFS and EIS have been phased to coordinate and synchronise the timing of the two programs with completion and lodgement now not expected before mid-2014.\n\n#### Nueva Esperanza\n\nThe Nueva Esperanza Project was advanced during the year with the completion of a draft feasibility study. This study included a decision to mine the Arqueros and Teterita portions of Nueva Esperanza. The study demonstrated that open pit mining at two million tonnes per year and processing by milling and agitation leaching in cyanide was technically feasible, although high capital and power costs negatively impacted project economic returns.\n\nAs a consequence, feasibility work has transitioned to assess a lower capital cost and lower power requirement options, namely the potential for heap leach processing. Metallurgical testwork recently completed demonstrated that processing of mineralisation from all three deposits by heap leaching has the potential to be technically and economically feasible and as a consequence may become the preferred alternative for development.\n\nEnvironmental approval for the original Arqueros Project was granted in July 2013.\n\n## Financials\n\nKingsgate made an after tax loss of $323.7 million for the full year to 30 June 2013 compared to an after tax profit of $75.0 million for the previous corresponding year. 
The result for the year reflected an impairment of $311.9 million pre-tax ($291.3 million post-tax) against the Challenger Mine and associated assets and an impairment of $20.4 million against greenfield exploration projects in Australia and Thailand.\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| Financial Summary | $000 | $000 |\n| Total sales revenue | 329,282 | 357,372 |\n| EBITDA before significant items | 115,845 | 168,583 |\n| (Loss) / profit before tax | (339,615) | 91,277 |\n| Income tax benefit / (expense) | 15,889 | (16,271) |\n| (Loss) / profit after income after tax | (323,726) | 75,006 |\n| Dividend declared (¢/share) | 5 | 20 |", - "page_start": 5, - "page_end": 5, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Company Activities\n\nfor the year ended 30 June 2013\n\n**Operations Report 12** Chatree Gold Mine, Thailand 12 Challenger Gold Mine, South Australia . . . . . . . . 20 **Projects Report 26** Bowdens Silver Project, New South Wales 26 Nueva Esperanza Project, Chile 28 **Exploration Report 30** Ore Reserves and Mineral Resources . . . . . . . . .32 Competent Persons Statement . . . . . . . . . . .33 **Corporate Governance Statement 34 Senior Management . . . . . . . . . . . . . . . . . 39**", - "page_start": 12, - "page_end": 12, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "community healthcare in the two municipalities. The project team included three individuals representing users from the Nordland MS Association, along with an MS nurse and a neurologist from the MS-outpatient clinic, and three physiotherapists/ researchers.\n\n## 2.4 Research team and reflexivity\n\nAll researchers on the team are clinical specialists in neurological physiotherapy. 
BN and ECA developed the CoreDISTparticipation intervention, and SSHD contributed to the development of the outdoor part.\n\nThe researchers' closeness to the intervention and the clinical field may have strengthened the depth and relevance of their interpretations in this study (27), as it was easy to understand what participants described and helped form follow-up questions during the interviews. However, closeness may also produce a risk of \"blind spots\", as the researchers may prejudice participants' experiences, omitting questions where the answers are believed to be obvious (27). Thus, throughout the process, trustworthiness and rigor were enhanced by discussing the methodology, findings, and interpretations with external researchers (including specialists in enactive theory), as well as user representatives. The presented theoretical framework (enactive theory) enhanced the distance to the material, as recommended in qualitative research (28).\n\n#### 2.5 Recruitment and participants\n\nPrior to recruitment, the study was introduced to individuals with multiple sclerosis (pwMS) through a seminar hosted by the Nordland MS Association. Additionally, seminars were conducted for health professionals in community healthcare and at the regional hospital. Written information about this study (and the RCT) was sent from the MS clinic at the regional hospital by post to all eligible individuals affiliated with the hospital. Individuals who wished to participate signed the attached consent form and returned it in the pre-stamped envelope. The inclusion criteria were as follows: had been diagnosed with MS, had a score on the Expanded Disability Status Scale (EDSS) (29) of ≤3.5, was ≥18 years, was employed (10%–100% of full-time) and residential address in the two predefined municipalities. 
The exclusion criteria were as follows: pregnancy, exacerbation of symptoms within two weeks prior to enrollment and other serious conditions compromising balance, walking or work capacity. All participants in the intervention group of the RCT (n = 15) were included (Table 3).\n\n#### 2.6 Data collection\n\nThe interview guide (Table 4) was developed based on literature reviews, clinical experience and discussions within the research group and with user representatives. Two test interviews were TABLE 3 Participant demographic information.\n\n| Variable | Total (n = 15) |\n| --- | --- |\n| Age in years | Mean 47.6 (SD 6.04) |\n| Gender (women/men) | 12 woman/3 men (80%/20%) |\n| Type of MS | Relapsing remitting 15 (100%) |\n| EDSS | Mean 1.8 (SD 0.9) |\n| Years since diagnosis | Mean 10.4 (SD 7.8) |\n| Participation in the outdoor group | Mean 4.6 sessions/total mean attendance 57.3% |\n\nTABLE 4 Interview guide.\n\n| Theme | Potential questions |\n| --- | --- |\n| Overall experiences and | Generally, what are your main experiences of |\n| reflections from participation | participation? |\n| | What did you perceive as meaningful? |\n| | What did you perceive as negative? |\n| Content | How did you experience: |\n| | • The content of the sessions in general |\n| | • The high-intensity walking/running |\n| | • The specific exercises |\n| | • The combination of specific exercises and |\n| | intervals of running/walking |\n| | • The exercise intensity |\n| | How did you respond to the exercises? How did |\n| | you experience getting tired? |\n| | How do you perceive your specific movement |\n| | impairments (if any) being addressed? |\n| | Please elaborate on situations where you |\n| | experienced the feeling of mastery/failure. |\n| | If anything: What was challenging? What would |\n| | you prefer to have been done differently? What |\n| | did you enjoy? |\n| | What was the value of participating in the |\n| | indoor exercise group beforehand? 
|\n| | How did you experience this kind of exercise |\n| | intervention compared to other type of exercise |\n| | you may have experience with? |\n| The role of the physiotherapists | What did the physiotherapists do? What was |\n| | the value of this to you? |\n| The group setting | How did you experience the group setting? |\n| | How did you perceive the atmosphere in the |\n| | group? |\n| The outdoor environment | How was it to exercise outdoors? |\n| | How did you perceive the city park |\n| | environment for exercise? |\n| Closing questions | Are there any experiences from participation |\n| | that you would like to elaborate on? Is anything |\n| | related to this project that we have not talked |\n| | about that you would like to say? |\n| | How did you experience this interview? |\n\nOverall participants were asked to describe situations to exemplify their answers, and follow-up questions were used to capture in-depth reflections, for example, What was positive/negative?, How did it feel?, What do you think of that?, What does it mean to you?, Can you elaborate on that?.\n\nconducted (with pwMS who were not part of the sample), and the interview guide was then refined around the following themes: overall experience and reflections from participation, content, outdoor setting, the group, and the physiotherapists. Questions were open-ended to capture rich, in-depth reflections regarding participants' experiences, following a phenomenological approach. 
The interviewer asked for both negative and positive experiences", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed13.pdf" - }, - { - "text": "VIETNAM\n\nCAMBODIA\n\nCAMBODIA A M\n\n0 100 200 300 Kilometres\n\nHighway Freeway Power lines Hydro power dam Thermal power station\n\nTHAILAND\n\nBangkok\n\nKhon Kaen Kh CHATREE\n\n10°\n\n10°\n\nPhuket\n\n20°\n\nCHALLENGER\n\nN T\n\n130°\n\nW A\n\n135°\n\n30°\n\nQLD\n\nN S W\n\nVIC\n\nAdelaide\n\n140°\n\n35°\n\nL A O S\n\nL A O S\n\n100°\n\nChiang Mai\n\nChumphon\n\n70°\n\nNUEVA\n\nESPERANZA\n\nCHALLENGER GOLD MINE\n\nBarton West\n\nCundeelee (Tropicana Belt)\n\nBarton Central Tenements Area\n\nBlue Dam\n\nYalla Burra\n\nGolden Point\n\nNorthling\n\nBryah\n\nPerenjori\n\nCalingiri\n\nKukerin\n\nBullock Pool\n\nNanicup Bridge\n\nHolleton West\n\nWongan Hills\n\nLabyrinth\n\nBulgunnia\n\nBOLIVI A\n\nARGENTINA\n\n20°\n\n30°\n\nCOPIAPO\n\nAntofagasta\n\nPERU\n\nChañaral\n\nLa Serena\n\nSantiago\n\n3\n\n40°\n\n50°\n\n## Bowdens Silver Project\n\nNSW, Australia\n\n## Summary\n\nKingsgate Bowdens Pty Limited holds four Exploration Licences (\"ELs\") located in the Lue/ Rylstone area of central western NSW. EL 5920 is divided into two separate areas, one containing the Bowdens Project, is adjacent to the village of Lue and the second to the west of the town of Rylstone.\n\nSilver mineralisation was discovered at Bowdens in the mid 1980s. Programs of geophysical and geochemical exploration had also been undertaken. During 2012 Kingsgate completed 124 drill holes for 13,527 metres as a part of resource definition program. The new resource estimate comprising a total of 567 drill holes for 63,088 metres was completed in November 2012.\n\nDuring the year a comprehensive metallurgical testwork program was completed as a part of a Definitive Feasibility Study (DFS).\n\n## Geology\n\nThe Bowdens Silver Project is situated on the north-eastern margin of the Lachlan Fold Belt. 
Bowdens is hosted by flat-lying Early Permian Rylstone Volcanics. The Rylstone Volcanics are partially overlain by a sequence of marine sediments of the Sydney Basin (Shoalhaven Group). The Rylstone Volcanics range from 10 to 200 metres thick and are dominated by silica rich volcanically derived rocks.\n\nThe silver mineralisation occurs as flat-lying to moderately dipping zones of disseminations and silicic fracture-filling and is closely associated with sulphides of iron, arsenic, lead and zinc. High grade silver mineralisation is also hosted in steeply-dipping fracture zones which host banded sulphide veins.", - "page_start": 27, - "page_end": 27, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "given the heterogenic pathology and symptoms of MS (41, 42). However, our findings illuminate qualitative aspects of how to achieve tailored and meaningful intersubjective interactions in an exercise intervention.\n\nWe consider the instances of the physiotherapist running together with the participant, which were perceived as important for participants' performance, to be an example of \"participatory sense-making\" (22). As participants appreciated being guided or even pushed by the physiotherapists, it appears that the physiotherapists were trusted in directing this interaction. As such, we argue that the physiotherapists' ability to adapt to participants' movements, speech and gestures—tailoring the interaction to their needs—was important for this ability to be perceived as purposeful. This is supported by the few negative incidents described where the participant-physiotherapist interaction seemed to not be jointly coordinated and appeared to fail. The reported mutual influences of sensorimotor capabilities and interpersonal coordination, with the physiotherapists but also the group, are in accordance with sensorimotor capacities and intersubjective interactions being important for sensemaking in the world (35). 
The benefits of these individualized participant-physiotherapist interactions are also described in specific core-stability exercises in indoor groups (16, 43) and are in line with the theoretical framework of facilitation of movement through hands-on interaction previously proposed (44, 45). Our study informs new knowledge of physiotherapistparticipant interactions to achieve the recommended highintensity training and calls for physiotherapy clinical reasoning through bodily and verbal communication skills adapted to the participants' responses in an ongoing and situated way.\n\nEnjoyment has previously been reported to promote PA in pwMS, and our study brings requested knowledge of what can constitute enjoyment in an exercise intervention (46): playful group-exercise tasks, a cheerful physiotherapist, and the outdoor environment.\n\nThe appreciation of being active outdoors in the study sample aligns with that in the general population (47). The outdoors provided a natural environment, which both invited participants to actively explore abilities thought of as left behind after their diagnosis with MS, such as running, and provided an appreciated break from focusing on MS symptoms. We also suggest that the positive experiences of mastering the challenging weather conditions and the added meaning of exercising among other people in the city park can be explained according to such terms. These positive experiences show how we are enmeshed in our history, context and social encounters (35) and how these aspects should also be accounted for when designing exercise interventions.\n\n#### 4.3 Methodological considerations\n\nThe design and methods were adequate for deriving knowledge from individuals' experiences. The participants selfreferred to the intervention and were recruited based on pre-set criteria. 
This approach yielded rich information from people with mild to moderate disabilities due to MS who were motivated for physical activity (PA), employed, and residing in northern Norway. Ethnicity or socio-economic class were not recorded. However, considering that all these factors can influence PA engagement (46), it is possible that additional aspects of the phenomenon could be uncovered in a different sample (48). There was a higher percentage of women participating than men; however, this corresponds to the gender distribution in the MS population (1).\n\nThe use of enactive theory was innovative within the field and allowed for, in particular, new aspects of importance for selfefficacy to be identified. Transference of our results to similar populations can be achieved through theoretical generalization (28).\n\n#### 4.4 Implications for clinical practice\n\nCombining high-intensity walking/running and detailed sensorimotor exercises was valued and provided meaningful embodied experiences, improving participants' ability to master PA and their beliefs of their own possibilities for being active in the future. However, the manner in which the content of an exercise intervention is delivered and the environment in which it is delivered should be accounted for, as these aspects were perceived to be of great importance in creating and shaping participants' experiences. 
In particular, tailored physiotherapistparticipant bodily interactions and an engaging group and outdoor environment were perceived to be pertinent for exploring one's own potential.\n\nTo minimize negative incidents in future interventions, we suggest that (1) the effort required from one's leg muscles during the detailed exercises (in between the running/walking intervals) should be low to minimize the negative consequences of leg muscle fatigue prior to high-intensity running/walking, (2) the capacity for running/walking at highintensity should be explored in one-to-one physiotherapy assessment prior to group training to optimize individuals capabilities and safety, and (3) homogenous and small-sized groups should be used to enable ongoing and tailored physiotherapist-participant interactions.\n\n#### Data availability statement\n\nThe datasets presented in this article are not readily available because of ethical and legal restrictions. Requests to access the datasets should be directed to stine.s.dahl@nord.no.\n\n#### Ethics statement\n\nThis study involving humans was approved by Regional Committee for Medical Research Ethics in North Norway (REK North: 174,837) and the Data Protection Officer at Nordlandssykehuset Hospital Trust, Norway. This study was conducted in accordance with the local legislation and", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed13.pdf" - }, - { - "text": "- 5. The Results window is displayed and shows the progress of the deployment (see Figure A-7). To continue, click **Close**.\n\n| × | Apache HTTP Server | | | |\n| --- | --- | --- | --- | --- |\n| Configuration | Information | Results | 3 | 2 |\n| Apache HTTP Server is being provisioned in project1. | This may take several minutes. | | | |\n| Continue to the project overview to check the status of your service. | | | | |\n| Cancel | < Back | Close | | |\n\n*Figure A-7 Results window*\n\n- 6. 
In the Application Console view, browse to the project1 project page by selecting the project from the project list, as shown in Figure A-8. Scroll down the list to find your project. Click **project1**.\n*Figure A-8 Application Console view*", - "page_start": 224, - "page_end": 224, - "source_file": "sg248459.pdf" - }, - { - "text": "| Model | Research institute | Country | Horizontal resolution |\n| --- | --- | --- | --- |\n| GFDL-ESM2M | Geophysical Fluid Dynamics Laboratory | Te United States | 144×90 |\n| HadGEM2-ES | Hadley Center for Climate Prediction and Research | Te United Kingdom | 192×145 |\n| IPSL-CM5A-LR | L' Institute Pierre-Simon Laplace | France | 96×96 |\n| NorESM1-M | Norway Climate Center | Norway | 144×96 |\n| MIROC-ESM | Center for Climate System Research, National Institute for Environmental Studies, and Frontier Research Center for Global Change | Japan | 128×64 |\n\n**Table 1.** Basic information of 5 ESMs in CMIP5. Horizontal resolution means the number of longitudinal grids×the number of latitudinal grids.\n\n**Figure 1.** Changes of global temperature of 20 years moving average from 2020 to 2099 simulated by 5 ESMs under 4 RCP scenarios. Note: Te black horizontal dashed lines: global warming by 1.5 °C and 2.0 °C; the black vertical solid line: the years when global warming reaches 1.5 °C and 2.0 °C simulated by the selected models and scenarios.\n\nAlthough, so far there are plenty of research on the impacts of global warming by 1.5 °C temperature, including the impacts comparison of global warming by 1.5 °C versus 2.0 °C44. It is necessary to do more quantitative impacts assessments of global warming by 1.5 °C and 2.0 °C on crops yield and market price to address research gaps and support the requirement of the scientifc community and governments. 
In this paper, the future climate situations were selected and analyzed which are the approximate scenarios with global warming by 1.5 °C and 2.0 °C, based on the simulation results from 5 climate models recommended by ISI-MIP under 4 RCP scenarios. Ten the per unit yield changes of maize all over the world under global warming by 1.5 °C and 2.0 °C were analyzed and the spatial distributions of changes in maize yield were revealed relative to the baseline from 1985 to 2006, applying crop model DSSAT (Decision Support System for Agrotechnology Transfer). Next, we examine the efects of the resulting maize production shocks in diferent countries; the market price of maize is simulated using GTAP to reveal the impacts of climate change on global crop trade. Finally, the future trend of maize yield and market price in the main breadbasket is assessed and the adaptation suggestions are put forward for maize cultivation.\n\n#### **Materials and methods**\n\n**Data processing.** In this study, historical daily weather data (1986–2005) are from the AgMERRA dataset. AgMERRA is a post-processing of the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. Te dataset is proved to be suitable for agricultural modelling and features consistent, daily time-series data45.\n\nFor future (2020–2099), the original climate scenario data (Table 1) were extracted from output archives of fve ESMs (including GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M) under four RCPs (RCP2.6, RCP4.5, RCP6.0, RCP8.5) retrieved from the CMIP website. Te climate scenario data was interpolated into 0.5°×0.5° horizontal resolution and bias-corrected with respect to historical observations to remove systematic errors46. 
Te data of maize-planting regions are from the gridded global dataset in 2000 by combining two data products47,48.\n\n**Simulation of climate scenarios with global warming by 1.5 °C and 2.0 °C.** In this study, climate data of global warming by 1.5 °C and 2.0 °C are determined according to the results of global climate models driven by typical concentration paths (RCPs) of greenhouse gas emissions. Eligible data are selected from a total of 20 sets of data under four RCP scenarios of fve ESMs (including GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M), which estimate the temperature, precipitation and sunshine hours (Fig. 1).\n\nVol:.(1234567890)", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed9.pdf" - }, - { - "text": "Right now, one of the most active Asian countries in the Open Data arena is India, which also signed an Open Government partnership with the USA in November 2010. In January 2011 the Indian Congress Party announced plans for a new law to fight corruption among public servants and politicians. Anti-corruption websites (including ones in local dialects) like Indiaagainstcorruption.org, already existed, including one, Ipaidabribe.com, that collected more than 3,000 people reports of graft in its first four months.\n\nAs it happens in Asia, even Latin America is currently focused, at least outside Public Administration circles, on how to open public data to achieve actual transparency. This appears even from the way many projects are labeled, that is \"Civic Information\" instead of Open Data (which is an idea starting from data *reuse*) or Open Government.\n\nThe reason is that even where good Freedom of Information laws exist in Latin America, they still have too little practical effects. 
Mexico, for example, already has a digital system to manage Freedom of Information requests, but there are reports of complaints filed against municipal officials that either have no effect at all, or aren't possible in the first place, because relevant information has not been updated in years, or omits key data like (in the case of budget reports) *\"descriptions of how the money was spent\"*.\n\nEven with these difficulties, the Latin America Open Data/Civic Information landscape is active and definitely worthwhile following. The list of interesting Civic Information projects in Latin America include (from Sasaki's Access to Information: Is Mexico a Model for the Rest of the World?:\n\n- Mexico\n\t- Mexican Farm Subsidies an online tool to analyze how the federal government allocates those subsidies\n\t- Compare Your School: compares aggregate test results from any school with the municipal, regional, and national averages\n\t- Rebellion of the Sick built for patients with chronic diseases whose expenses are not covered by the government subsidized health coverage.\n- Argentina: Public Spending in Bahía analyzes how public funds are used.\n- Colombia: Visible Congress monitors the actions of the Colombian congress\n- Brazil\n\t- Eleitor 2010: a website to submit reports of electoral fraud during the Brazil 2010", - "page_start": 8, - "page_end": 8, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "Policy information about availability of data\n\nAll manuscripts must include a data availability statement. 
This statement should provide the following information, where applicable:\n\n- Accession codes, unique identifiers, or web links for publicly available datasets\n- A description of any restrictions on data availability\n- For clinical datasets or third party data, please ensure that the statement adheres to our policy\n\nThe dataset consists of 26 MRI scans (T1w, T2w, and diffusion scans) alongside state-dependent measures and serum assessments of ovarian sex hormones for each session. The data is publicly available on https://openneuro.org/datasets/ds005299.\n\n# Research involving human participants, their data, or biological material\n\nPolicy information about studies with human participants or human data. See also policy information about sex, gender (identity/presentation), and sexual orientation and race, ethnicity and racism.\n\n| Reporting on sex and gender | Our study focused on a single female participant to explore how pregnancy shapes the human brain. |\n| --- | --- |\n| Reporting on race, ethnicity, or | The subject was white. |\n| other socially relevant | |\n| groupings | |\n| Population characteristics | This was a precision imaging study of one 38-year old primiparous woman. |\n| Recruitment | Our participant (corresponding author E.R.C.) was a healthy primiparous woman who underwent in-vitro fertilization (IVF) to |\n| | achieve pregnancy. The project was conceived by E.R.C. and she wished to use herself as the participant, as has been done in |\n| | previous \"dense-sampling\" studies (cf. Poldrack et al., 2015; Pritschet et al., 2020). |\n| Ethics oversight | The participant gave written informed consent and the study was approved by the University of California, Irvine Human |\n| | Subjects Committee. |\n\nNote that full information on the approval of the study protocol must also be provided in the manuscript.\n\n# Field-specific reporting\n\nPlease select the one below that is the best fit for your research. 
If you are not sure, read the appropriate sections before making your selection.\n\n|\n| |\n\nFor a reference copy of the document with all sections, see nature.com/documents/nr-reporting-summary-flat.pdf\n\n# Life sciences study design\n\nAll studies must disclose on these points even when the disclosure is negative.\n\n| Sample size | We used precision imaging to deeply-phenotype, densely-sample an individual over the gestational window. As this study was the first of it's |\n| --- | --- |\n| | kind, our sample size was an N=1 design. Although this limits the generalizability of our findings, this project serves as a proof-of-concept, |\n| | showcasing the value and feasibility of studying a woman's brain during the transition to motherhood. |\n| Data exclusions | no history of neuropsychiatric diagnosis, endocrine disorders, prior head trauma or history of smoking |\n| Replication | This is the first study of it's kind; therefore, there are no study replications as of yet. However, to reproduce our results internally across |\n| | software packages, we also ran the T1w data through the longitudinal FreeSurfer cortical thickness pipeline (Dale et al., 1999), which |\n| | corroborated our finding that gray matter volume declines throughout gestation (e.g., successful internal replication). This pattern of results |\n| | not only held across software packages, but also brain parcellations (e.g., Schaefer 400-cortical atlas and Desikan-Killiany cortical atlas). |\n| Randomization | This was an observational study design, and therefore not randomized. |\n| Blinding | For medial temporal lobe segmentation, scans were randomized and segmentation was performed in a random order, blind to pregnancy |\n| | stage. No other blinding was applicable, given the observational study of brain changes in response to advancing gestational week. 
|\n\n# Reporting for specific materials, systems and methods\n\nWe require information from authors about some types of materials, experimental systems and methods used in many studies. Here, indicate whether each material, system or method listed is relevant to your study. If you are not sure if a list item applies to your research, read the appropriate section before selecting a response.", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_KCN_2013.pdf", - "query": "What is the Kingsgate net cash outflows from finiancing activities in 2013 ?", - "target_page": 11, - "target_passage": " Net cash outflows from financing activities was $1.7 million", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Financing Arrangements\n\n#### Corporate loan facility\n\nKingsgate has a three year secured loan facility with Investec which was amended during the year. The amended facility has a limit of $40 million (30 June 2012: $50 million), of which $20 million has been drawn down as at 30 June 2013 (30 June 2012: $40 million).\n\n#### Convertible loan facility\n\nKingsgate has a five year A$35 million convertible loan facility with Investec entered into in a prior period to provide funding for the Bowdens acquisition. Kingsgate has the option to make a prepayment against the facility with an issue of Kingsgate shares.\n\n#### Restructure of corporate loan and convertible loan facilities\n\nAs indicated previously in the Preliminary Final report, at balance date it was the Group's intention to restructure and amalgamate these facilities in the next financial year. This relates to the potential for completion of the Initial Public Offering (\"IPO\") of Akara on the Stock Exchange of Thailand and the updated mine plan for Challenger. Any restructure would optimise the Group's anticipated balance sheet liquidity and operational cash flows. 
Accordingly, the Group classified the total amount drawn down under these facilities of $55 million as a current liability at 30 June 2013.\n\nSubsequent to the end of the financial year, the Group received from its lenders a credit approved term sheet (subject to formal documentation) for the restructure of the corporate loan and convertible loan facilities. Following completion of the restructure the total amount outstanding will be reduced to $40 million. This loan will be provided through a single senior corporate facility which will consist of two tranches:\n\n- 〉 Tranche one will be a $25 million Akara Pre IPO Bond with a maturity date of 31 July 2015. The current intention is for this tranche to be repaid as part of the Akara IPO, although at Kingsgate's election repayment can be made by either cash or in Kingsgate's shares.\n- 〉 Tranche two is an amortising facility with $5 million to be repaid during the 2014 financial year and the balance of $10 million repaid during the 2015 financial year.\n\n#### Convertible revolving credit facility\n\nThe Group also has a three year $25 million Convertible Revolving Credit Facility available. As at the date of this report the facility is undrawn. Under the terms of this facility, Kingsgate has the option of repaying any funds drawn down under the facility through either cash or by issuing ordinary shares. It is intended that this facility will be utilised during the 2014 financial year for corporate and working capital purposes. It is the current intention of the company to repay any cash drawdown under the facility by the issuance of fully paid ordinary shares which would rank parri pasu with all existing ordinary shares, although this position will be reviewed at the appropriate time. The number of shares has not yet been determined and they will be issued at a 2.5% discount to VWAP over a period by reference to the draw down date. 
Shareholder approval is not required.\n\n#### Multi-currency and syndicated loan facilities\n\nKingsgate's Thai operating subsidiary, Akara, established a six year amortising multi-currency loan facility equivalent to US$125 million (fully drawn as at period end) and an additional Thai Baht denominated working capital facility equivalent to US$15 million (undrawn as at year end) during the period. The proceeds from these borrowings were used to fully repay the outstanding balance on the US$100 million Baht denominated syndicated loan facility in existence at the beginning of the period as well as to repay part of the corporate loan facility noted above.\n\n## Financial Position\n\nShareholders' equity at 30 June 2013 was $474 million (2012: $776 million). The decrease of $302 million reflects the year's loss together with dividends paid.\n\n## Dividends\n\nNo final dividend has been declared for the year ended 30 June 2013.\n\nAn interim dividend declared for the half-year ended 31 December 2012 of 5 cents per fully paid share was paid on 12 April 2013.\n\nA final dividend declared for the year ended 30 June 2012 of 10 cents per fully paid share was paid on 1 October 2012.", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "*Kingsgate is a highly successful gold mining, development and exploration company with two operating gold mines and two advanced development projects. Shareholders can look forward to the benefits of this strong operating and development platform, where Kingsgate aims to build value though operating, earnings and dividend growth for the benefit of all stakeholders.*\n\nCHILE\n\nAUSTRALIA\n\nTHAILAND", - "page_start": 1, - "page_end": 1, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "#### **LIQUIDITY AND CAPITAL RESOURCES**\n\nWe strive to maintain a level of liquidity sufficient to allow us to cover our seasonal cash needs and to maintain appropriate levels of shortterm borrowings. 
We believe that our operating cash flows, available credit facilities and potential future borrowings are sufficient to finance our cash requirements for the next 12 months and beyond.\n\nOver the long term, we manage our cash and capital structure to maximize shareholder return, maintain our financial position, manage refinancing risk and allow flexibility for strategic initiatives. We regularly assess our debt and leverage levels, capital expenditure requirements, debt service payments, dividend payouts, potential share repurchases and other future investments. We believe that as of January 31, 2015, our existing cash and cash equivalents on-hand of $827, available credit facilities of $800 and potential future operating cash flows and borrowings will be sufficient to fund these scheduled future payments and potential long-term initiatives. Additionally, if an agreement is reached and a transaction is consummated in regards to our credit card receivables, it could result in additional cash flows to further support our capital requirements and strategic initiatives.\n\n#### **Operating Activities**\n\nNet cash provided by operating activities was $1,220 in 2014, $1,320 in 2013 and $1,110 in 2012. The majority of our operating cash inflows are derived from sales. We also receive cash payments for property incentives from developers. Our operating cash outflows generally consist of payments to our merchandise vendors (net of vendor allowances), payments to our employees for wages, salaries and other employee benefits and payments to our landlords for rent. 
Operating cash outflows also include payments for income taxes and interest payments on our short-term and long-term borrowings.\n\nCash provided by operating activities decreased in 2014 compared with 2013, which was primarily due to higher state tax payments made in 2014 compared with 2013, as well as changes in working capital in 2014.\n\nCash provided by operating activities increased in 2013 compared with 2012, resulting from less state tax payments made in 2013 due to additional payments made in 2012 as a result of the 53rd week, along with increased property incentives received from developers and changes in working capital.\n\n#### **Investing Activities**\n\nNet cash used in investing activities was $889 in 2014, $822 in 2013 and $369 in 2012. Our investing cash flows primarily consist of capital expenditures, changes in restricted cash accumulated for debt maturities and changes in credit card receivables associated with cardholder purchases outside of Nordstrom using our Nordstrom Visa credit cards.\n\n#### Capital Expenditures\n\nOur capital expenditures over the last three years totaled $2,177, with $861 in 2014, $803 in 2013 and $513 in 2012. Capital expenditures increased in 2014 compared with 2013 primarily due to ongoing store expansion and increased technology investments.\n\nCapital expenditures increased in 2013 compared with 2012 as we continued to make progress executing our customer strategy through increased investments in technology, ecommerce, remodels and new stores, including Nordstrom Rack and our Manhattan full-line store.\n\nThe following table summarizes our store count and square footage activity:\n\n| | | Store count | | | Square footage | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Fiscal year | 2014 | 2013 | 2012 | 2014 | 2013 | 2012 |\n| Total, beginning of year | 260 | 240 | 225 | 26.0 | 25.3 | 24.7 |\n| Store openings: | | | | | | |\n| Nordstrom full-line stores - U.S. 
| 2 | — | 1 | 0.3 | — | 0.1 |\n| Nordstrom Rack and other stores1 | 29 | 22 | 15 | 1.2 | 0.7 | 0.6 |\n| Stores acquired | 4 | — | — | — | — | |\n| Stores closed | (3) | (2) | (1) | (0.4) | — | (0.1) |\n| Total, end of year | 292 | 260 | 240 | 27.1 | 26.0 | 25.3 |\n\n1 Other stores include Jeffrey boutiques, Trunk Club showrooms, our Nordstrom Canada full-line store and Last Chance.\n\nWe had no store relocations in 2014, compared with one Nordstrom full-line store and two Nordstrom Rack relocations in 2013 and three Nordstrom Rack relocations in 2012. Our 2014 new store openings increased our square footage by 5.5%.\n\nTo date in 2015, we have opened our second full-line store in Canada. We plan to open 27 Nordstrom Rack stores, three additional Nordstrom full-line stores in the U.S. and another full-line store in Canada during 2015. Planned net store openings are expected to increase our retail square footage by approximately 6.1%.", - "page_start": 38, - "page_end": 38, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "The contractual cash flows presented above in respect of 30 June 2013 and the increase in the one year or less time category of $46,132,000 when compared to 30 June 2012 mainly relates to classification of the corporate loan facility of $20,000,000 and the convertible loan facility of $35,000,000 as current liability at 30 June 2013. These facilities were mainly included in the one to two years and two to five years' time category at 30 June 2012. As indicated in Note 16, these facilities have been classified as current liabilities at 30 June 2013 on the basis that at balance sheet date it was the Group's intention to restructure and amalgamate these facilities in the next financial year.\n\nSubsequent to the end of the financial year, the Group has received from its lenders a credit approved term sheet (subject to formal documentation) for the restructure of the corporate loan and convertible loan facilities. 
Following completion of the restructure the total amount outstanding will be reduced to $40,000,000. This loan will be provided through a single senior corporate facility which will consist of two tranches:\n\n- 〉 Tranche one will be a $25,000,000 Akara Pre IPO Bond with a maturity date of 31 July 2015. The current intention is for this tranche to be repaid as part of the Akara IPO although at Kingsgate's election repayment can be made by either cash or in Kingsgate's shares.\n- 〉 Tranche two is an amortising facility with $5,000,000 to be repaid during the 2014 financial year and the balance of $10,000,000 repaid during the 2015 financial year.\n\nThe Group also has a three year $25,000,000 Convertible Revolving Credit Facility available. At the date of this report the facility is undrawn. Under the terms of this facility, Kingsgate has the option of repaying any funds drawn down under the facility through either cash or by issuing ordinary shares. It is intended that this facility will be utilised during the 2014 financial year for corporate and working capital purposes. It is the current intention of the Company to repay any cash drawdown under the facility by issuance of fully paid ordinary shares which would rank parri pasu with all existing ordinary shares, although this position will be reviewed at the appropriate time. The number of shares has not yet been determined and they will be issued at a 2.5% discount to VWAP over a period by reference to the draw down date. Shareholder approval is not required.\n\nAs indicated in Note 16, Kingsgate's Thai operating subsidiary, Akara, established a six year amortising multi-currency loan facility equivalent to US$125,000,000 (fully drawn as at year end) and an additional Thai Baht denominated working capital facility equivalent to US$15,000,000 (undrawn as at year end) during the period. 
The proceeds from these borrowings were used to fully repay the outstanding balance on the US$100,000,000 Baht denominated syndicated loan facility in existence at the beginning of the period as well as to repay part of the corporate loan facility noted above.\n\n#### (d) Fair value measurements\n\nThe carrying values of financial assets and liabilities of the Group approximate their fair values. Fair values of financial assets and liabilities have been determined for measurement and / or disclosure purposes.\n\n#### Fair value hierarchy\n\nThe Group classifies assets and liabilities carried at fair value using a fair value hierarchy that reflects the significance of the inputs used in determining that value. The table following analyses financial instruments carried at fair value, by the valuation method. The different levels in the hierarchy have been defined as follows:\n\n- 〉 Level 1: quoted prices (unadjusted) in active markets for identical assets or liabilities;\n- 〉 Level 2: inputs other than quoted prices included within Level 1 that are observable for the asset or liability, either directly (as prices) or indirectly (derived from prices); and\n- 〉 Level 3: inputs for the asset or liability that are not based on observable market data (unobservable inputs).\n\n| | Level 1 | Level 2 | Level 3 | Total |\n| --- | --- | --- | --- | --- |\n| | $'000 | $'000 | $'000 | $'000 |\n| 30 June 2013 | | | | |\n| Available-for-sale financial asset | *767 | – | – | 767 |\n| Derivatives held for trading | – | (1,271) | – | (1,271) |\n| Total as at 30 June 2013 | 767 | (1,271) | – | (504) |\n| 30 June 2012 | | | | |\n| Available-for-sale financial asset | 1,751 | – | – | 1,751 |\n| Derivatives held for trading | – | (2,685) | – | (2,685) |\n| Total as at 30 June 2012 | 1,751 | (2,685) | – | (934) |\n\n* Level 1 asset includes available-for-sale financial assets of $767,000 at 30 June 2013 which relate to investments in listed entities.", - "page_start": 104, - "page_end": 
104, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Senior Management\n\nKingsgate's executives have a comprehensive range of skills and experience including mine development and operations, exploration, finance and administration. They are supported by highly qualified specialists, whose backgrounds cover the full scope of mining resources activities.\n\nSenior members of Kingsgate's management team are:\n\n## Gavin Thomas\n\nBSc (Geology), FAusIMM\n\n#### Managing Director and Chief Executive Officer\n\nGavin Thomas was appointed Chief Executive Officer of Kingsgate in 2004 and joined the Kingsgate Board on 16th November 2007. Gavin has had a successful career in developing mining companies from the exploration phase into mid-tier gold or copper producers. He has over 42 years of international experience in exploring for, evaluating, developing, operating and reclaiming mines in North and South America, Australia, the Southwest Pacific, Asia and Europe. Amongst Gavin's credits is the discovery of \"Lihir\" in Papua New Guinea, one of the largest gold deposits in the world. In particular, he has extensive experience in Thailand and South America.\n\n#### Duane Woodbury BEc (Hons)\n\n#### Chief Financial Officer\n\nDuane Woodbury was appointed Chief Financial Officer of Kingsgate on 1 September 2011. Duane has a BEc (Hons) Degree and has worked in various financial, accounting and advisory roles during his career in a number of locations, including London, New York and Singapore. He has been assisting Kingsgate in its business development initiatives since August 2007 and brings over 20 years of experience in financial markets and corporate finance transactions, principally with the Macquarie Group.\n\n#### Tim Benfield\n\nDip CSM (mining), MBA, MAusIMM\n\n#### Chief Operating Officer\n\nTim Benfield joined Kingsgate in February 2012 as Chief Operating Officer. 
Tim is a mining engineer with over 21 years underground and open pit experience in the mining industry in both operational and corporate roles. He has operational and project development experience in Australia, Africa and Saudi Arabia. This includes 10 years with Barrick Gold of Australia where he provided support to four operating mines and two development projects. Tim was most recently General Manager of the Pajingo Gold mine in Queensland for Evolution Mining Limited.\n\n#### Ross Coyle BA, FCPA, FCIS\n\n#### General Manager Finance and Administration Company Secretary\n\nRoss Coyle joined Kingsgate in March 2011 following the Company's acquisition of Dominion Mining Limited and was with the Dominion group for over 25 years. He is a qualified accountant and has over 30 years experience in finance and accounting within the resource industry. He was Finance Director of Dominion from 1996. Ross was appointed Kingsgate's Company Secretary in September 2011.\n\n#### Joel Forwood Bsc (Hons) FFin\n\n#### General Manager Corporate and Markets\n\nJoel Forwood joined Kingsgate in November 2010 and has over 27 years experience in the resource and investment industries covering investor relations, funds management and exploration. For over 12 years, he has been leading investor relations at a number of listed companies, most recently for Lihir Gold Limited. Prior to this he was a fund manager with Queensland Investment Corporation (QIC) following his early career in mineral exploration with BHP and corporate development with RGC.\n\n## Ronald James\n\nBSc (Geology), MAusIMM, MAIG\n\n#### General Manager Exploration and Resource Development\n\nRon James has 30 years of experience in exploration and mining at management level inclusive of setting up gold mines and exploration projects from their earliest stages through to development and sustainability. 
Before joining Kingsgate, he was Chief Mine Geologist at the Gold Ridge Mine in the Solomon Islands and later Group Exploration Manager for Ross Mining NL. Ron is familiar with the technical and operating requirements for emerging projects in a variety of terrains and environments and has a strong focus on maximising returns from ore bodies through optimum waste and ore classification as well as increasing reserves from nearmine resource development.", - "page_start": 40, - "page_end": 40, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "#### **LIQUIDITY AND CAPITAL RESOURCES**\n\n### **Cash Flows – Summary**\n\n#### Our cash flows consisted of the following:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n| --- | --- | --- | --- |\n| Net cash provided by operations $ 829,247 | | $740,812 | $ 846,546 |\n| Investing cash flows: | | | |\n| Proceeds from the sale of subsidiaries, net | 345,730 | — | — |\n| Capital expenditures | (702,862) | (550,232) | (300,039) |\n| Investments in unconsolidated affiliates | (11,602) | (41,350) | (80,314) |\n| Other | 20,981 | 35,894 | 9,143 |\n| Net cash used in investing activities | (347,753) | (555,688) | (371,210) |\n| Financing cash flows: | | | |\n| Net repayment under bank credit facilities (1,574,489) | | (285,087) | (270,126) |\n| Issuance of long-term debt | 1,528,957 | 600,000 | — |\n| Purchase of treasury stock | (348,895) | (442,864) | (207,590) |\n| Other | 68,455 | (37,284) | 23,231 |\n| Net cash used in financing activities | (325,972) | (165,235) | (454,485) |\n| Net increase in cash and cash equivalents $ 155,522 | | $ 19,889 | $ 20,851 |\n\n#### **Cash Flows – Operating Activities**\n\nTrends in our operating cash flows tend to follow trends in our operating income, excluding non-cash charges, since our business is primarily cash-based. Cash flow from operations in 2004 increased from 2003 due to higher operating income offset by higher tax payments. 
Cash flow from operations in 2003 decreased from 2002, resulting from the decrease in operating income and higher cash paid for taxes.\n\nAt December 31, 2004 and 2003, we held cash and cash equivalents of $435 million and $280 million, respectively. We require a certain amount of cash on hand to operate our resorts. Beyond our cash on hand, we utilize a company-wide cash management system to minimize the amount of cash held in banks. Funds are swept from accounts at our resorts daily into central bank accounts, and excess funds are invested overnight or are used to repay borrowings under our bank credit facilities. Included in cash and cash equivalents at December 31, 2004 is $141 million received from the sale of MGM Grand Australia and still held in Australia, pending clarification of the tax rule for repatriated earnings, as discussed earlier.\n\n#### **Cash Flows – Investing Activities**\n\nThe sale of the Golden Nugget Subsidiaries closed in January 2004 with net proceeds to the Company of $210 million. The sale of MGM Grand Australia closed in July 2004 with net proceeds to the Company of $136 million.\n\nCapital expenditures in 2004 increased over 2003 due to continued spending on major projects at several of our resorts, including:\n\n• The Bellagio expansion completed in December 2004;\n\n• The theatre for *KÀ* at MGM Grand Las Vegas, completed in November 2004.\n\nSpending on these two projects totaled approximately $325 million. Other capital expenditures were made for maintenance capital activities, including room remodel projects at New York-New York and MGM Grand Las Vegas and new restaurant and entertainment amenities at several resorts. Capital expenditures in 2003 were significantly higher than 2002, due largely to major projects at our existing resorts, including projects described above which began in 2003, the *Zumanity* theatre at New York-New York, the Bellagio room remodel and slot technology improvements. 
Capital expenditures in 2002 included general property improvements at our resorts, such as a room remodel project at The Mirage, new restaurant and nightclub development at several of our resorts, and various other remodeling projects.", - "page_start": 36, - "page_end": 36, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## 32. Parent entity financial information continued\n\n#### Contingent liabilities of the parent entity\n\nBank guarantees have been given by Kingsgate's controlled entities to participating banks in the syndicated loan facility and revolving loan facility as described in Note 16 as part of the security package.\n\nThese guarantees may give rise to liabilities in the parent entity if the controlled entities do not meet their obligations under the terms of the loans subject to guarantees. No material losses are anticipated in respect of the above contingent liabilities.\n\n## 33. Sale of exploration assets\n\nOn 28 March 2013, the Group sold its exploration assets in Western Australia and Queensland through the sale of shares in its subsidiary company, Quadrio Resources Limited, to Caravel Minerals Limited (\"Caravel\"), an Australian company listed on the ASX.\n\nKingsgate received 135,000,000 fully paid ordinary shares in the issued capital of Caravel and 20,000,000 unlisted options to acquire Caravel shares exercisable at 10 cents on or before three years from the date of issue. Subsequent to the sale, Kingsgate became the largest shareholder in Caravel with 35.54% held at 30 June 2013. 
Kingsgate's holding in Caravel reduced to 27.04% post 30 June 2013 following a rights issue by Caravel that Kingsgate did not participate in.\n\nThe financial impact of the sale transaction as at the date of disposal is summarised below:\n\n| | 2013 |\n| --- | --- |\n| Fair value of consideration | $'000 |\n| 135,000,000 Caravel shares at $0.025 per share | 3,375 |\n| 20,000,000 unlisted Caravel options | – |\n| Total consideration | 3,375 |\n| Carry value of the exploration assets sold | 20,084 |\n| Loss on sale | 16,709 |", - "page_start": 111, - "page_end": 111, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "### Capital Expenditures\n\nWe expect capital expenditures for fiscal 2013 to be in the $13.0 million to $14.0 million range, consisting of capital associated with additional information technology equipment and infrastructure investments. Depreciation for fiscal 2013 is expected to be in the range of $12.5 million to $13.5 million.\n\n### Share Repurchases\n\nThe Board of Directors has authorized the repurchase of shares of the Company's stock. These purchases may be made in open market and negotiated transactions, from time to time, depending upon market conditions. At June 30, 2012, we had authorization to purchase an additional 1,142,800 shares. 
In fiscal 2012, 2011 and 2010, we repurchased 997,200, 189,600 and 159,900 shares of the Company's common stock, respectively, at an average price per share of $31.12, $32.09 and $24.57, respectively.\n\n### Borrowing Arrangements\n\n4\n\nNet Cash Flows\n\n(Decrease) Increase in Cash\n\nfrom the June 30, 2009 levels.\n\nthousands.\n\nThe following table is included to aid in review of Applied's statements of consolidated cash flows; all amounts are in\n\nNet Cash Provided by (Used in): **2012** 2011 2010 Operating Activities $ **90,422** $ 76,842 $ 184,324 Investing Activities **(39,434 )** (47,887) (6,784 ) Financing Activities **(60,816 )** (116,523) (30,514 ) Exchange Rate Effect **(2,822 )** 2,883 1,109\n\nand Cash Equivalents $ **(12,650 )** $ (84,685 ) $ 148,135\n\nIn the last two fiscal years, and typical during periods of sales expansion, a portion of cash generated from operations is\n\nwhich by June 30, 2010 had resulted in a $101.4 million\n\nNet cash used in investing activities in fiscal 2012 included $26.0 million for capital expenditures and $14.7 million for acquisitions. Capital expenditures included $16.7 million related to the ERP project. In fiscal 2011, net cash used in investing activities included $30.5 million for acquisitions and $20.4 million for capital expenditures ($12.5 million related to the ERP project). Net cash used by investing activities was primarily used for capital expenditures in fiscal 2010. Capital expenditures consist primarily of information technology equipment and building improvements. Net cash used in financing activities in fiscal 2012 included $33.8 million for dividend payments and $31.0 million to repurchase 997,200 shares of treasury stock. These uses were partially offset\n\nby $3.7 million of excess tax benefits from share-based\n\n2012, 2011 and 2010, respectively.\n\ncompensation. 
In fiscal 2011, we repaid $50.0 million under our revolving credit facility, $25.0 million under our private placement debt and $12.8 million related to the associated cross-currency swaps. Additionally, we paid dividends of $29.8 million and repurchased 189,600 shares of treasury stock for $6.1 million. In fiscal 2010, financing activities included dividends of $25.4 million, repayment of a net $5.0 million on our revolving credit facility, and $3.9 million to repurchase 159,900 shares of treasury stock. The increase in dividends over the last three fiscal years is the result of increases in our dividend payout rates. We paid dividends of $0.80, $0.70 and $0.60 per share in fiscal\n\ninvested in working capital, particularly receivables and inventory. The most significant factor in the spike in 2010 operating cash flows related to the fiscal 2010 inventory management program\n\nreduction in U.S. bearing and drives products inventory amounts\n\nYear Ended June 30,\n\nIncome tax expense as a percent of income before taxes was 36.7% for fiscal 2011 and 37.2% for fiscal 2010. The net decrease in the effective tax rate reflects higher income levels earned in fiscal 2011 in foreign jurisdictions which have a lower overall statutory rate than the U.S. as well as the reversal of a valuation allowance no longer necessary. These factors were offset somewhat by provision made for U.S. income tax on a portion of undistributed earnings not considered permanently\n\nAs a result of the factors addressed above, net income for fiscal 2011 increased $30.9 million or 46.8% from fiscal year 2010.\n\nThe number of Company associates was 4,640 at June 30, 2011 and 4,468 at June 30, 2010. 
The net associate increase yearover-year was attributable primarily to acquisitions (net increase of 239 associates), partially offset by headcount reductions in pre-\n\nNet income per share increased at a comparable rate.\n\nLIQUIDITY AND CAPITAL RESOURCES Our primary source of capital is cash flow from operations, supplemented as necessary by bank borrowings or other sources of debt. At June 30, 2012 and June 30, 2011, we had no outstanding borrowings. Management expects that our existing cash, cash equivalents, funds available under the revolving credit facility, cash provided from operations, and the use of operating leases will be sufficient to finance normal working capital needs in each of the countries we operate in, payment of dividends, acquisitions, investments in properties, facilities and equipment, and the purchase of additional Company common stock. Management also believes that additional long-term debt and line of credit financing could be obtained based on the Company's credit standing and financial strength.\n\nThe Company's working capital at June 30, 2012 was $435.6 million compared to $404.2 million at June 30, 2011. The current ratio was 2.9 to 1 at June 30, 2012 and at June 30, 2011. The Executive Organization and Compensation Committee of the Board of Directors froze participant benefits (credited service and\n\nfinal average earnings) and entry into the SERP effective\n\nDecember 31, 2011. This action constituted a plan curtailment, resulting in a reduction of postemployment benefits of $8.9 million and deferred tax assets of $3.4 million in the consolidated\n\nreinvested in our Canadian subsidiaries.\n\nexisting operations.\n\nbalance sheet.\n\nThe Company has a five-year committed revolving credit agreement that expires in May 2017. This agreement provides for unsecured borrowings of up to $150.0 million. We had no borrowings outstanding under our revolving credit agreements at June 30, 2012 or June 30, 2011. 
Unused lines under this facility, net of outstanding letters of credit, totaled $143.1 million and were available to fund future acquisitions or other capital and operating requirements. Borrowings under this agreement would be at variable interest rates tied to either LIBOR, prime, or the bank's cost of funds.\n\nWe also have an uncommitted long-term financing shelf facility which expires in February 2013 and enables us to borrow up to $100.0 million with terms of up to fifteen years. We had no outstanding borrowings under this facility at June 30, 2012 or June 30, 2011.\n\nThe revolving credit facility and uncommitted shelf facility contain restrictive covenants regarding liquidity, net worth, financial ratios, and other covenants. At June 30, 2012, the most restrictive of these covenants required that the Company have consolidated income before interest, taxes, depreciation and amortization at least equal to 300% of net interest expense. At June 30, 2012, the Company was in compliance with all covenants and expects to remain in compliance during the terms of the agreements.\n\n### Accounts Receivable Analysis\n\nThe following table is included to aid in analysis of accounts receivable and the associated provision for losses on accounts receivable (amounts in thousands):\n\n| June 30, | | 2012 | | 2011 |\n| --- | --- | --- | --- | --- |\n| Accounts receivable, gross | $ | 315,375 | $ | 297,767 |\n| Allowance for doubtful accounts | | 8,332 | | 7,016 |\n| Accounts receivable, net | $ | 307,043 | $ | 290,751 |\n| Allowance for doubtful accounts, | | | | |\n| % of gross receivables | | 2.6% | | 2.4 % |\n| Year Ended June 30, | | 2012 | | 2011. |\n| Provision for losses on accounts receivable $ | | 3,915 | $ | 2,029 |\n| Provision as a % of net sales | | 0.16% | | 0.09 % |\n\nAccounts receivable are reported at net realizable value and consist of trade receivables from customers. 
Management monitors accounts receivable by reviewing Days Sales Outstanding (DSO) and the aging of receivables for each of the Company's locations.\n\nOn a consolidated basis, DSO was 45.2 at June 30, 2012 versus 44.2 at June 30, 2011. Accounts receivable increased 5.6% this year, compared to a 7.3% increase in sales in the twelve months ended June 30, 2012. We primarily attribute the increase in DSO to higher sales to large contract accounts.\n\nLess than 3% of our accounts receivable balances are more than 90 days past due. On an overall basis, our provision for losses from uncollected receivables represents 0.16% of our sales in the year ended June 30, 2012. Historically, this percentage is around 0.15%. Management believes the overall receivables aging and provision for losses on uncollected receivables are at reasonable levels.\n\n### Inventory Analysis\n\n5\n\n25358_AIT_Report_WT.indd 9 8/23/12 8:33 AM\n\nInventories are valued at the lower of cost or market, using the last-in, first-out (LIFO) method for U.S. inventories and the average cost method for foreign inventories. Management uses an inventory turnover ratio to monitor and evaluate inventory. Management calculates this ratio on an annual as well as a quarterly basis and uses inventory valued at current costs. The annualized inventory turnover (using current costs) for the period ended June 30, 2012 was 4.6 versus 4.7 at June 30, 2011. 
We believe our inventory turnover ratio in fiscal 2013 will remain similar to the fiscal 2012 levels.", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## Statement of Cash Flows\n\nfor the year ended 30 June 2013\n\n| | | 2013 | 2012 |\n| --- | --- | --- | --- |\n| | Note | $'000 | $'000 |\n| Cash flows from operating activities | | | |\n| Receipts from customers (net of goods and services tax) | | 332,624 | 361,754 |\n| Payments to suppliers and employees (net of goods and services tax) | | (224,500) | (182,759) |\n| Interest received | | 2,587 | 1,394 |\n| Finance costs paid | | (10,120) | (8,431) |\n| Income tax paid | | (15,571) | (6,711) |\n| Net cash inflow from operating activities | 25 | 85,020 | 165,247 |\n| Cash flows from investing activities | | | |\n| Payments for property, plant and equipment | | (7,035) | (92,343) |\n| Payments for exploration, evaluation and development | | (122,722) | (75,054) |\n| Payments for acquisition of Bowdens Silver Project | | – | (41,000) |\n| Cash acquired on acquisition of subsidiaries, net of cash paid | | – | 136 |\n| Interest capitalised to expansion and development projects | | (3,948) | (6,939) |\n| Deposits and debt service reserve account | | (8,612) | (2,470) |\n| Payments for other assets | | (108) | (3,526) |\n| Net cash outflow from investing activities | | (142,425) | (221,196) |\n| Cash flows from financing activities | | | |\n| Proceeds from borrowings, net of transaction costs | | 133,968 | 96,627 |\n| Repayment of borrowings | | (116,250) | (26,622) |\n| Proceeds from the issue of shares | | – | 70,792 |\n| Payments for acquisition of non-controlling interests | | – | (11,359) |\n| Dividends paid | | (19,409) | (18,933) |\n| Net cash (outflow) / inflow from financing activities | | (1,691) | 110,505 |\n| Net (decrease) / increase in cash held | | (59,096) | 54,556 |\n| Cash at the beginning of the year | | 90,623 | 35,864 |\n| Effects of exchange rates on 
cash and cash equivalents | | 1,460 | 203 |\n| Cash at the end of the year | 7 | 32,987 | 90,623 |\n\nThe above Statement of Cash Flows should be read in conjunction with the accompanying notes.", - "page_start": 68, - "page_end": 68, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Directors' Report\n\nYour Directors present their report on the Group consisting of Kingsgate Consolidated Limited and the entities it controlled at the end of, or during, the year ended 30 June 2013.\n\n## Directors\n\nThe following persons were Directors of Kingsgate Consolidated Limited during the whole of the financial year and up to the date of this report.\n\n- 〉 Ross Smyth-Kirk Chairman\n- 〉 Peter Alexander Non-Executive Director\n- 〉 Craig Carracher Non-Executive Director\n- 〉 Peter McAleer Non-Executive Director\n- 〉 Gavin Thomas Executive Director\n\n## Principal activities\n\nThe principal activities of Kingsgate Consolidated Limited are mining and mineral exploration in Australia, South East Asia and South America.\n\n## Dividends\n\nDividends paid to members during the financial year were as follows:\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| | $'000 | $'000 |\n| Final dividend declared for the year ended 30 June 2012 of | 15,148 | 6,829 |\n| 10 cents per fully paid share paid on 1 October 2012 | | |\n| Interim dividend declared for the year ended 30 June 2013 of | 7,591 | 15,196 |\n| 5 cents per fully paid share paid on 12 April 2013 | | |\n| Total dividends | 22,739 | 22,025 |\n\n## Review of operations and results\n\n#### Operational performance\n\nKingsgate is a gold mining, development and exploration company based in Sydney, Australia. Kingsgate owns and operates two gold mines, the world class Chatree Mine in Thailand and the underground Challenger Mine in South Australia. 
In addition, the Company has two advanced development projects, the Nueva Esperanza Silver / Gold Project, in the highly prospective Maricunga Gold / Silver Belt in Chile, and the Bowdens Silver Project in New South Wales, Australia. From this operating and development platform, Kingsgate aims to build value for all shareholders.\n\nGroup gold production was 199,897 ounces, a decrease of 4% on the previous corresponding year. The contribution from Chatree was 133,681 ounces with 66,216 ounces from Challenger.\n\nChatree gold production was 10% higher than the previous corresponding period as a result of an increase in throughput from the expanded Chatree process plant and access to higher grade oxide ore from Q Prospect.\n\nChallenger gold production was 24% lower than the previous corresponding year given additional dilution and depletion at Challenger Deeps and a shortfall in planned development. This resulted in lower ore tonnes from the mine that was supplemented by low grade stockpiled ore. Following the fall in the gold price a strategic review of Challenger was implemented that has resulted in a new mine plan to focus primarily on the higher grade Challenger West orebody. The new mine plan will be implemented during the first three months of the 2014 financial year.\n\nA lower gold price and industry wide cost pressures had a negative impact on the underlying earnings of the Group which contributed to a major impairment to the carrying value of a number of Group assets, particularly assets relating to the Challenger Gold Operations. Impairments totalling $332,808,000 were the major contributor to the after tax loss of $323,726,000 for the year.\n\nThe development projects continued to advance during the year. At Nueva Esperanza, the feasibility work shifted to focus on identifying the lowest cost and lowest power consumption development alternatives. This included reviewing a heap leach process option with on-site power generation. 
Further work is expected to be completed in the December quarter 2013. At Bowdens, the feasibility work has confirmed the optimum process route. Completion of the technical feasibility study including mine planning, infrastructure and metallurgy, and lodging of the Environmental Impact Statement (\"EIS\") are scheduled for 2014.", - "page_start": 43, - "page_end": 43, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210538_en.pdf", - "query": "To which countries extend the marriage regulations ?", - "target_page": 1, - "target_passage": "These Regulations extend to England and Wales. ", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "I approve\n\n*Kevin Foster* Parliamentary Under Secretary of State 29th April 2021 Home Office\n\n## **EXPLANATORY NOTE**\n\n*(This note is not part of the Regulations)* \n\nThese Regulations provide for records of marriages to be kept in churches and chapels of the Church of England and the Church in Wales, other than chapels to which Part 5 of the Marriage Act 1949 applies (naval, military and air force chapels).\n\nRegulation 2 requires parochial church councils to provide books known as \"registers of marriage services\" to churches and chapels in their parish in which banns of matrimony may be published, for the purposes of keeping the records required by regulation 3. Regulation 2 also imposes requirements relating to the durability and pre-printed content of these registers, and provides that they belong to the parochial church council.\n\nRegulation 3 requires specified information to be recorded in a register of marriage services when a marriage has been solemnized on or after 4th May 2021 according to the rites of the Church of England or Church in Wales in a church or chapel in which banns of matrimony may be published. 
The record must be made and signed by the member of the clergy by whom the marriage was solemnized.\n\nRegulation 4 imposes requirements relating to the keeping of registers of marriage services provided under regulation 2.\n\nA full impact assessment has not been produced for this instrument because no, or no significant, impact on the private, public or voluntary sector is foreseen.\n\n \n\n© Crown copyright 2021\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## **2021 No. 538**\n\n## **MARRIAGE, ENGLAND AND WALES**\n\n# The Marriage (Keeping of Records in Churches and Chapels) Regulations 2021\n\n| Made - - - | - | 29th April 2021 |\n| --- | --- | --- |\n| Coming into force - | - | 4th May 2021 |\n\nThe Registrar General makes these Regulations with the approval of the Secretary of State in exercise of the powers conferred by section 74(1)(c)(v), (1A)(a) and (3) of the Marriage Act 1949(**a**).\n\n#### **Citation, commencement, extent and interpretation**\n\n**1.**—(1) These Regulations may be cited as the Marriage (Keeping of Records in Churches and Chapels) Regulations 2021.\n\n(2) These Regulations come into force on 4th May 2021.\n\n(3) These Regulations extend to England and Wales.\n\n(4) In these Regulations, \"chapel\" does not include a chapel to which Part 5 of the Marriage Act 1949 (marriages in naval, military and air force chapels) applies(**b**).\n\n#### **Duty of parochial church councils to provide registers of marriage services**\n\n**2.**—(1) The parochial church council of a parish must provide books for the purpose of making records under regulation 3 to each church and chapel of the Church of England(**c**) in that parish in which banns of 
matrimony may be published.\n\n(2) Books provided under paragraph (1) are to be known as \"registers of marriage services\".\n\n(3) A register of marriage services provided under paragraph (1) must meet the requirements of paragraphs (4) and (5).\n\n(4) The register must be made of durable material.\n\n(5) For the purposes of enabling a record to be made in the register under regulation 3 in respect of a marriage, the register must be printed in such a way that it—\n\n(<b>a) 1949 c. 76 (12 & 13 Geo 6). Section 74 was amended by Schedule 2 to the Registration Service Act 1953 (c. 37) and by paragraph 5(1)(d) of Schedule 2 to the Transfer of Functions (Registration) Order 2008 (S.I. 2008/678) and subsequently renumbered as section 74(1) by article 12 of the Registration of Marriages etc. (Electronic Communications and Electronic Storage) Order 2009 (S.I. 2009/2821). Section 74(1) was amended by paragraph 19 of Schedule 15 to the Immigration Act 2016 (c. 19) and paragraph 43 of Schedule 1 to the Registration of Marriages Regulations 2021 (S.I. 2021/411), which also inserted subsection (1A).\n\n(<b>b) See section 68(2) of the Marriage Act 1949. The certification function of the Admiralty under that section was transferred to the Secretary of State by the Defence (Transfer of Functions) Act 1964 (c. 
15).\n\n(<b>c) Section 78(2) of the Marriage Act 1949 provides for references to the Church of England to be construed as including references to the Church in Wales.", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "## **EXPLANATORY NOTE**\n\n#### *(This note is not part of the Regulations)*\n\nThese Regulations make amendments to secondary legislation relating to special educational needs and disability in order to provide exceptions to time limits set out in that legislation where they cannot be met because of a reason relating to the incidence or transmission of coronavirus.\n\nRegulation 2 contains review and expiry provisions. The Secretary of State is required to review the effectiveness of the Regulations during the period in which they have effect. The Regulations cease to have effect on 25th September 2020.\n\nRegulations 3 to 14 amend the Special Educational Needs and Disability Regulations 2014 ('the SEND Regulations 2014').\n\nRegulation 5 inserts a glossing provision into the SEND Regulations 2014 which relaxes certain requirements in those Regulations for actions to be taken within specified time limits where it is not reasonably practicable for a person to meet those requirements for a reason relating to the incidence or transmission of coronavirus. 
Instead, any such requirement is to be read as a requirement for such action to be taken as soon as reasonably practicable.\n\nRegulations 6 to 14 make textual amendments to the SEND Regulations 2014 to relax time limits.\n\nRegulations 15 to 17 amend the Special Educational Needs (Personal Budgets) Regulations 2014 ('the Personal Budgets Regulations 2014').\n\nRegulation 17 inserts a similar glossing provision into the Personal Budgets Regulations 2014 as regulation 5 does in respect of the SEND Regulations 2014.\n\nRegulations 18 to 27 amend the Special Educational Needs and Disability (Detained Persons) Regulations 2015 ('the Detained Persons Regulations 2015').\n\nRegulation 20 inserts a glossing provision into the Detained Persons Regulations 2015 similar to the ones in regulations 5 and 17 in relation to the SEND Regulations 2014 and the Personal Budgets Regulations 2014 respectively.\n\nRegulations 21 to 27 make textual amendments to the Detained Persons Regulations 2015 to relax time limits.\n\nRegulations 28 to 30 amend the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017 ('the First-tier Tribunal Regulations 2017').\n\nRegulation 30 inserts a glossing provision into the First-tier Tribunal Regulations 2017 similar to those in regulations 5, 17 and 20.\n\nAn impact assessment has not been produced for this instrument as this is a temporary, emergency measure and no significant impact on business, charities or voluntary bodies is foreseen.\n\nAn Explanatory Memorandum is published alongside this instrument on www.legislation.gov.uk.\n\n \n\n© Crown copyright 2020\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 5, - "page_end": 5, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (a) indicates the descriptions of 
information required by each of sub-paragraphs (a) to (h) of regulation 3(2) in relation to the marriage, and\n- (b) provides corresponding spaces for recording information required by each of those subparagraphs in relation to the marriage.\n\n(6) A register of marriage services provided under paragraph (1) by a parochial church council belongs to that parochial church council.\n\n## **Duty to record information about marriages solemnized according to the rites of the Church of England or Church in Wales**\n\n**3.**—(1) Paragraphs (2), (3) and (4) apply where a marriage has been solemnized according to the rites of the Church of England in a church or chapel in which banns of matrimony may be published.\n\n(2) As soon as practicable after the marriage has been solemnized, the clergyman by whom the marriage was solemnized must make a record of the following information in relation to that marriage in a register of marriage services provided to the church or chapel under regulation 2(1)—\n\n- (a) the date and place of the marriage;\n- (b) the name and surname of each party;\n- (c) the date of birth of each party;\n- (d) the occupation (if any) of each party;\n- (e) the address of each party at the time of the marriage;\n- (f) the names and surnames of each party's parents, so far as those names and surnames are known to the clergyman who solemnized the marriage;\n- (g) the name and surname of each of the witnesses in whose presence the marriage was solemnized;\n- (h) the name and surname of the clergyman by whom the marriage was solemnized.\n\n(3) The clergyman must record the information required by paragraph (2) in English, and may also record information required by that paragraph in Welsh where the church or chapel is situated in Wales.\n\n- (4) After making a record under paragraph (2) the clergyman must sign it.\n(5) This regulation does not apply in relation to a marriage solemnized before 4th May 2021.\n\n### **Requirements about the keeping of registers of 
marriage services**\n\n**4.**—(1) The rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1) must—\n\n- (a) ensure that the register is kept in that church or chapel, and\n- (b) do everything that is reasonably practicable to ensure that the register is protected against theft, loss or damage.\n\n(2) Where there is no rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1), the obligations under paragraph (1) in respect of that register fall on the churchwardens of the parish in which the church or chapel is situated.\n\nGiven under my hand on 29th April 2021\n\n*Abi Tierney* Registrar General", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "# SCHEDULES\n\nSCHEDULE 1 Regulation 2(1)\n\n# Category 1 countries and territories\n\n| Australia |\n| --- |\n| Brunei |\n| Falkland Islands |\n| Faroe Islands |\n| Gibraltar |\n| Iceland |\n| Israel |\n| New Zealand |\n| Portugal, including the Azores and Madeira |\n| Saint Helena, Ascension and Tristan da Cunha |\n| Singapore |\n| South Georgia and the South Sandwich Islands |\n\n# SCHEDULE 2 Regulation 2(1)\n\n# Category 2 countries and territories\n\nAny country or territory outside the common travel area not listed in Schedule 1 or Schedule 3.\n\n# SCHEDULE 3 Regulation 2(1)\n\n# Category 3 countries and territories\n\nAngola Argentina Bangladesh Bolivia Botswana Brazil Burundi Cape Verde\n\nChile", - "page_start": 31, - "page_end": 31, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "### **6.3 Guidance and support**\n\n**Supervision is only one approach to implementing legislation.** As mentioned, supervision by state authorities can only reach a small share of all enterprises, particularly not the many small ones and the self-employed. 
In addition to supervision and control, a broad variety of **prevention-supporting activities** has been developed during the past decades.388\n\nThe authors of EU-OSHA's 'Supporting compliance' reports state a strong increase in 'compliance promotion strategies'. They write: *'The regulatory changes have been matched in more recent times by an increasingly diverse set of compliance promotion strategies. Not only has public regulation sought to engage and encourage duty holders in the pursuit of forms of regulated self-regulation, but … the discourse on regulation itself has sought a far broader understanding of its meaning and the role of the private and public regulatory actors and processes potentially involved in both defining and securing compliance.'389*\n\nOne important type of means are **guidance and support tools** for enterprises and workers to extend the reach and impact of legislation. Labour inspectorates and other state institutions produce these tools either themselves or in collaboration with social partners or professional organisations.\n\n**Proactive research and preventive guidelines**, particularly in situations of new risks, have become a quite usual preventive activity (e.g. on nanotechnology, or on some developments in digitalisation). For very complex regulations, like REACH, national institutions installed helpdesks. European institutions also publish such guidance documents for EU-wide use, for example, the guidance on health and safety in agriculture,390 the guidance regarding the implementation of the Machinery directive,391 the guidance documents of EU-OSHA on COVID-19392 and the European Commission guidance documents on seasonal workers and COVID-19. 
393 Practically all EU and international OSH institutions published guidance documents on how to identify and reduce psychosocial risk at workplaces.394\n\nA large amount of **OSH guidance** already exists in different formats,395 starting with classical written guidance documents, increasingly complemented by audio-visual and interactive tools. EU-OSHA covers a large variety of workplaces with its digital risk assessment tool OiRA (Online interactive Risk", - "page_start": 124, - "page_end": 124, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "(3) In regulation 4ZA—\n\n- (a) in the heading, for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\";\n- (b) in paragraph (1)(a), for \"regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\")\" substitute \"regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 (\"the International Travel and Operator Liability Regulations\")\";\n- (c) in paragraph (1)(c), for \"paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\";\n- (d) in paragraph (3), for \"paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\".\n\n**2.**—(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020(**a**) are amended as follows.\n\n(2) In regulation 2D(1)(c), for \"regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute 
\"regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n(3) In regulation 6(1)—\n\n- (a) in the definitions of \"designated place\", \"isolation requirements\" and \"self-isolating worker\", for \"regulation 4\" substitute \"regulation 9\";\n- (b) in the definition of \"International Travel Regulations\", for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n# SCHEDULE 16 Regulation 26(3)\n\n### Transitional provision\n\n**1.** Passenger information provided before 4.00 a.m. on 17th May 2021 by a person pursuant to regulation 3 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\") in advance of arrival in England is treated as passenger information provided for the purposes of these Regulations where the person arrives in England on or after that date.\n\n**2.** Confirmation given by the Foreign, Commonwealth and Development Office that a person is not required to comply with regulation 3B of the 2020 Regulations is treated as confirmation that the person is not required to comply with regulation 6 of these Regulations where the person arrives in England on or after 4.00 a.m. on 17th May 2021.\n\n**3.** A designation by the Secretary of State of a person as an authorised person under regulation 5(7) of the 2020 Regulations has effect as a designation of that person as an authorised person under of regulation 11(11)(c) of these Regulations.\n\n**4.** Regulation 5A of the 2020 Regulations continues to have effect in relation to a constable who exercises the powers in that regulation in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021.\n\n(<b>a) S.I. 2020/1045. Regulation 2D was inserted by S.I. 2021/364. 
There are other amendments but none is relevant.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "accounts so that these terms are defined by reference to the date that those accounts ceased to be excluded accounts. Regulation 2(3) and (4)(a) make consequential amendments.\n\nRegulation 3 makes a transitional provision for the calendar year 2020 in relation to accounts which were previously excluded accounts.\n\nA Tax Information and Impact Note covering the International Tax Compliance Regulations 2015 was published on 18th March 2015 and is available on the HMRC website at https://www.gov.uk/government/publications/tax-administration-regulations-to-implement-theuks-automatic-exchange-of-information-agreements. It remains an accurate summary of the impacts that apply to this instrument.\n\n© Crown copyright 2020\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200438_en.pdf" - }, - { - "text": "### *7.4.1 Variety of OSH systems*\n\nIn the EU Member States there exists a considerable **variety of working conditions** and related prevention and monitoring systems, as well as a variety of OSH traditions, OSH cultures regarding risk awareness and preventive safety measures, different types of responsibilities of state institutions, and a variety of social dialogue schemes between employers' and workers' associations.\n\nMoreover, practically all EU OSH legislation is **issued as European Directive, not as European Regulation**. These directives set minimum standards that allow Member States to specify many details related to their national situation. 
In addition, in many fields already existing national legislation was aligned in different ways with EU OSH directives.\n\nConsequently, the understanding and assessment of OSH at an EU level causes methodological problems, for example, regarding the **methods of data collection, the application of indicators, harmonisation of monitoring approaches, and of terminology, recognition of work-related diseases, comparison of different infrastructures, and even the technical measurement standards.** \n\nIt is not easy to apply data and research generation methodologies that are on one side well harmonised and on the other side give room to understand the functioning and value of different systems and infrastructures. Some trends seem to be general and obvious, and some Member States or some sectors or enterprises can experience no trend at all in this direction, or even the contrary development. Structural developments can differ significantly between countries. Of relevance seems to be the intra-EU exchange of workforce, based on ongoing changes in production chains. The main flow of the workforce takes place — roughly described — from eastern and southern Europe to central, western and northern Europe.483\n\n**Example: Working conditions – contrary trend developments in EU Member States** \n\n*'The magnitude – and sometimes the direction – of these sectoral changes varies from one country cluster to another. An increase in physical routine tasks in a few sectors, and a decline in cognitive tasks in the other services sector can be observed in Eastern countries. These changes are partly linked to a reorganisation of the value chain within Europe, which saw a reallocation of routine tasks from western European countries.'*\n\nEurofound, 2020: Working conditions in sectors, p. 41\n\n### *7.4.2 Reliable and measurable indicators*\n\nSeveral indicators are used to **assess and estimate the quality and effectiveness of the preventive systems and processes**. 
Not only the EU but also other countries have introduced such indicatorbased monitoring systems, for example, Norway,484 South Korea,485 Japan,486 Taiwan,487 Singapore,488 United States,489 Canada,490 Australia491 and New Zealand.492\n\nThe responsible institutions or authorities in these countries often developed several types of indicator, for example, indicators for implementation of protective legislation, for system effectiveness as well as for health outcomes. International organisations have also developed such monitoring indicators, for example, the ILO,493 the ILO in its OSH country profiles,494 the WHO495 and the UN496 (see textbox).\n\n#### **United Nations, Sustainable Development Goals – Indicators for Target 8.8**\n\n**Target 8.8** Protect labour rights and promote safe and secure working environments for all workers, including migrant workers, in particular women migrants, and those in precarious employment **Indicator 8.8.1:** Frequency rates of fatal and non-fatal occupational injuries, by sex and migrant status **Indicator 8.8.2:** Level of national compliance of labour rights (freedom of association and collective bargaining) based on ILO textual sources and national legislation, by sex and migrant status", - "page_start": 136, - "page_end": 136, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "A1 forms issued for postings of workers by EU27 countries, 2011-2020')\n\nEurostat: EU citizens living in another Member State - statistical overview, here\n\nEuropean Commission, 2019: Towards Fair Labour Mobility: Revision of EU Posting of Workers Rules, 2019\n\n309 The statistics distinguish between many different categories of migrants, for example, inside EU, from non-EUcountries, first generation, second generation, seasonal temporary, permanent status, etc. More information here\n\n310 Ibid.\n\n311 Fasani & Mazza, 2020: Immigrant Key Workers: Their Contribution to Europe's COVID-19 Response (p. 8). 
312 Sometimes 'difficult' or 'demeaning' instead of 'demanding'. Taken from the Japanese: kitanai, kiken, kitsui\n\n313 Danaj et al., 2020: Labour Mobility and OSH Vulnerability of Posted Workers: The Cases of Austria and the Slovak Republic\n\n314 European Commission, 2022: Annual report on intra-EU labour mobility 2021 (p. 108, table 'Numbers of PD A1 forms issued for postings of workers by EU27 countries, 2011-2020').\n\n315 European Parliament, 2017: Posted workers: better protection and fair conditions for all\n\n316 It is hardly foreseeable how far the experience of interrupted supply chains during the COVID-19 pandemic will contribute to a de-globalisation and reduction of international supply chain dependency.\n\n317 Such methodologies exist for the environmental field, well-known is the 'ecological footprint'.\n\n318 Eurofound and the ILO have jointly produced a pilot report on worldwide working conditions to achieve a better evidence base for actions and policies, see: Eurofound & ILO, 2019: Working conditions in a global perspective\n\n319 See: https://www.globalreporting.org/ or UN-PRI (UN Principles of responsible investment)\n\nhttps://www.unpri.org/\n\n320 United Nations, Global Compact, here\n\n321 European Commission: Corporate sustainability due diligence\n\n322 Regulation (EU) 2017/821 of the European Parliament and of the Council of 17 May 2017 laying down supply chain due diligence obligations for Union importers of tin, tantalum and tungsten, their ores, and gold originating from conflict-affected and high-risk areas, here\n\n323 Centennial Declaration of the International Commission on Occupational Health, ICOH\n\n324 ILO: Monitoring Compliance with International Labour Standards The key role of the ILO Committee of Experts on the Application of Conventions and Recommendations, here\n\n325 ILO: Conventions and Recommendations\n\n326 ILO : Convention C-155\n\n327 ILO : Convention C-187\n\n328 ILO: Safety and health at work\n\n329 ILO: Health 
and Safety at the Workplace\n\n330 International Social Security Association (ISSA): Vison Zero Overview, Section Companies, here\n\n331 United Nations, Social Development Goals (SDGs), Goal 8, here and here\n\n**332** WHO: Protecting workers' health, Key facts\n\n333 WHO, 2013: WHO Global Plan of Action on Workers' Health (2008-2017): baseline for implementation: global country survey 2008/2009: executive summary and survey findings, here\n\n334 United Nations, SDGs, Goal 8, here and here\n\n335 ILO Constitution\n\n336 ILO: Conventions and Recommendations\n\n337 Treaty Establishing the European Coal and Steel Community and Annexes I-III, PARIS, 18 APRIL 1951, Article 3e\n\n(DRAFT ENGLISH TEXT), here\n\n338 Consolidated Version of the Treaty on the Functioning of the European Union Official Journal of the European Union, C 326/47, 6.10.2012, Article 151 and Article 153, here\n\n339 The European Parliament, the Council and the Commission: The European Pillar of Social Rights in 20 principles, here\n\n340 EU-OSHA, 2021: Directive 89/391/EEC – OSH \"Framework Directive\" of 12 June 1989 on the introduction of measures to encourage improvements in the safety and health of workers at work - \"Framework Directive\", here 341 Ibid., Framework Directive – Section 2 Employers' obligations. 
342 Ibid., Framework Directive – Section 3 Workers' obligations.", - "page_start": 152, - "page_end": 152, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210538_en.pdf", - "query": "What the parochial church council must provide to make marriage records ?", - "target_page": 1, - "target_passage": " The parochial church council of a parish must provide books for the purpose of making records under regulation 3 to each church and chapel of the Church of England(c) in that parish in which banns of matrimony may be published.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "- (a) indicates the descriptions of information required by each of sub-paragraphs (a) to (h) of regulation 3(2) in relation to the marriage, and\n- (b) provides corresponding spaces for recording information required by each of those subparagraphs in relation to the marriage.\n\n(6) A register of marriage services provided under paragraph (1) by a parochial church council belongs to that parochial church council.\n\n## **Duty to record information about marriages solemnized according to the rites of the Church of England or Church in Wales**\n\n**3.**—(1) Paragraphs (2), (3) and (4) apply where a marriage has been solemnized according to the rites of the Church of England in a church or chapel in which banns of matrimony may be published.\n\n(2) As soon as practicable after the marriage has been solemnized, the clergyman by whom the marriage was solemnized must make a record of the following information in relation to that marriage in a register of marriage services provided to the church or chapel under regulation 2(1)—\n\n- (a) the date and place of the marriage;\n- (b) the name and surname of each party;\n- (c) the date of birth of each party;\n- (d) the occupation (if any) of each party;\n- (e) the address of each party at the time of the marriage;\n- (f) the 
names and surnames of each party's parents, so far as those names and surnames are known to the clergyman who solemnized the marriage;\n- (g) the name and surname of each of the witnesses in whose presence the marriage was solemnized;\n- (h) the name and surname of the clergyman by whom the marriage was solemnized.\n\n(3) The clergyman must record the information required by paragraph (2) in English, and may also record information required by that paragraph in Welsh where the church or chapel is situated in Wales.\n\n- (4) After making a record under paragraph (2) the clergyman must sign it.\n(5) This regulation does not apply in relation to a marriage solemnized before 4th May 2021.\n\n### **Requirements about the keeping of registers of marriage services**\n\n**4.**—(1) The rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1) must—\n\n- (a) ensure that the register is kept in that church or chapel, and\n- (b) do everything that is reasonably practicable to ensure that the register is protected against theft, loss or damage.\n\n(2) Where there is no rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1), the obligations under paragraph (1) in respect of that register fall on the churchwardens of the parish in which the church or chapel is situated.\n\nGiven under my hand on 29th April 2021\n\n*Abi Tierney* Registrar General", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "I approve\n\n*Kevin Foster* Parliamentary Under Secretary of State 29th April 2021 Home Office\n\n## **EXPLANATORY NOTE**\n\n*(This note is not part of the Regulations)* \n\nThese Regulations provide for records of marriages to be kept in churches and chapels of the Church of England and the Church in Wales, other than chapels to which Part 5 of the Marriage Act 1949 applies (naval, 
military and air force chapels).\n\nRegulation 2 requires parochial church councils to provide books known as \"registers of marriage services\" to churches and chapels in their parish in which banns of matrimony may be published, for the purposes of keeping the records required by regulation 3. Regulation 2 also imposes requirements relating to the durability and pre-printed content of these registers, and provides that they belong to the parochial church council.\n\nRegulation 3 requires specified information to be recorded in a register of marriage services when a marriage has been solemnized on or after 4th May 2021 according to the rites of the Church of England or Church in Wales in a church or chapel in which banns of matrimony may be published. The record must be made and signed by the member of the clergy by whom the marriage was solemnized.\n\nRegulation 4 imposes requirements relating to the keeping of registers of marriage services provided under regulation 2.\n\nA full impact assessment has not been produced for this instrument because no, or no significant, impact on the private, public or voluntary sector is foreseen.\n\n \n\n© Crown copyright 2021\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## **2021 No. 
538**\n\n## **MARRIAGE, ENGLAND AND WALES**\n\n# The Marriage (Keeping of Records in Churches and Chapels) Regulations 2021\n\n| Made - - - | - | 29th April 2021 |\n| --- | --- | --- |\n| Coming into force - | - | 4th May 2021 |\n\nThe Registrar General makes these Regulations with the approval of the Secretary of State in exercise of the powers conferred by section 74(1)(c)(v), (1A)(a) and (3) of the Marriage Act 1949(**a**).\n\n#### **Citation, commencement, extent and interpretation**\n\n**1.**—(1) These Regulations may be cited as the Marriage (Keeping of Records in Churches and Chapels) Regulations 2021.\n\n(2) These Regulations come into force on 4th May 2021.\n\n(3) These Regulations extend to England and Wales.\n\n(4) In these Regulations, \"chapel\" does not include a chapel to which Part 5 of the Marriage Act 1949 (marriages in naval, military and air force chapels) applies(**b**).\n\n#### **Duty of parochial church councils to provide registers of marriage services**\n\n**2.**—(1) The parochial church council of a parish must provide books for the purpose of making records under regulation 3 to each church and chapel of the Church of England(**c**) in that parish in which banns of matrimony may be published.\n\n(2) Books provided under paragraph (1) are to be known as \"registers of marriage services\".\n\n(3) A register of marriage services provided under paragraph (1) must meet the requirements of paragraphs (4) and (5).\n\n(4) The register must be made of durable material.\n\n(5) For the purposes of enabling a record to be made in the register under regulation 3 in respect of a marriage, the register must be printed in such a way that it—\n\n(<b>a) 1949 c. 76 (12 & 13 Geo 6). Section 74 was amended by Schedule 2 to the Registration Service Act 1953 (c. 37) and by paragraph 5(1)(d) of Schedule 2 to the Transfer of Functions (Registration) Order 2008 (S.I. 
2008/678) and subsequently renumbered as section 74(1) by article 12 of the Registration of Marriages etc. (Electronic Communications and Electronic Storage) Order 2009 (S.I. 2009/2821). Section 74(1) was amended by paragraph 19 of Schedule 15 to the Immigration Act 2016 (c. 19) and paragraph 43 of Schedule 1 to the Registration of Marriages Regulations 2021 (S.I. 2021/411), which also inserted subsection (1A).\n\n(<b>b) See section 68(2) of the Marriage Act 1949. The certification function of the Admiralty under that section was transferred to the Secretary of State by the Defence (Transfer of Functions) Act 1964 (c. 15).\n\n(<b>c) Section 78(2) of the Marriage Act 1949 provides for references to the Church of England to be construed as including references to the Church in Wales.", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "The division of the Metropolis of Lyon in large electoral wards often grouping various communes and dividing the commune of Lyon into six wards was criticized by the suburban mayors, as it ended the rule of 'one commune, one metropolitan councilor'. The goal of this electoral division of the metropolis was to focus metropolitan elections more on metropolitan issues than parochial communal issues, and ensure the 'one person, one vote' rule be respected, by creating electoral wards of more homogeneous population sizes. Opponents said it diluted the voice of the small suburban communes, which are now part of large electoral wards and do not each possess a representative in the metropolitan council anymore.\n\n#### **Presidents of the Metropolitan Council**\n\nThe two first presidents of the Metropolis of Lyon's metropolitan council were chosen by indirectly elected metropolitan councilors. 
The current president since July 2020 was elected by new metropolitan councilors following their election by universal suffrage in March (1st round) and June (2nd round) 2020, the first direct election of a metropolitan council in France.\n\n| President of the Metropolitan Council | Term start | Term end | Party |\n| --- | --- | --- | --- |\n| Gérard Collomb | 1 January 2015 | 10 July 2017 | PS |\n| David Kimelfeld | 10 July 2017 | 2 July 2020 | LREM |\n| Bruno Bernard | 2 July 2020 | Incumbent | EELV |\n\nMap showing the 14 electoral wards of the Metropolis of Lyon\n\n# **Main sights**\n\n### **Antiquity**\n\n- The Roman ruins on the hillside near the Fourvière Basilica, with the Ancient Theatre of Fourvière, the Odeon of Lyon and the accompanying Gallo-Roman museum\n- Amphitheatre of the Three Gauls ruins of a Roman amphitheatre.\n\nAncient Theatre of Fourvière Odeon of Lyon Amphitheatre of the Three Gauls\n\n#### **Middle Ages and Renaissance**\n\n- Cathedral of St. John, a medieval church with architectural elements of the 13th, 14th and 15th centuries, also the principal religious structure in the city and the seat of the Archbishop of Lyon\n- Basilica of St-Martin-d'Ainay, one of the rare surviving Romanesque basilica-style churches in Lyon\n- Église Saint-Paul, Romanesque (12th and 13th century) and Gothic (15th–16th century) church\n- Église Saint-Bonaventure, 14th- and 15th-century Gothic church\n- Église Saint-Nizier, Gothic church from the 15th century, having a doorway carved in the 16th century by Philibert Delorme\n- Vieux Lyon (English: Old Lyon) area, Medieval and Renaissance quarter of the town, with shops, dining and cobbled streets\n- The many Renaissance *hôtels particuliers* of the Old Lyon quarter, such as the *Hôtel de Bullioud*, were also built by Philibert Delorme", - "page_start": 9, - "page_end": 9, - "source_file": "wikipedia4.pdf" - }, - { - "text": "(d) the Industrial Court.\n\n(2) In this Constitution, unless the context otherwise 
requires, references to offices in the public service shall be construed as including references to the offices of judges of the Court of Appeal and judges of the High Court and the offices of members of all subordinate courts (being offices the emoluments attaching to which, or any part of the emoluments attaching to which, are paid directly out of moneys provided by Parliament).\n\n(3) For the purposes of this Constitution a person shall not be considered to be a public officer by reason only that he or she is in receipt of any remuneration or allowance as the President, Vice-President, a Minister or Assistant Minister, Speaker, Deputy Speaker or Member of the Assembly, a Member of the Ntlo ya Dikgosi or a member of any Commission established by this Constitution.\n\n(4) For the purposes of this Constitution, a person shall not be considered as holding a public office by reason only of the fact that he or she is in receipt of a pension or other like allowance in respect of service under the Government of Botswana or the former Protectorate of Bechuanaland.\n\n(5) In this Constitution, unless the context otherwise requires, a reference to the holder of an office by the term designating his or her office shall be construed as including a reference to any person for the time being lawfully acting in or performing the functions of that office:\n\nProvided that nothing in this subsection shall apply to references to the President or Vice-President in section 35, 36 or 39 of this Constitution.\n\n(6) In this Constitution, unless it is otherwise provided or required by the context, a reference to the power to make appointments to any office shall be construed as including a reference to the power to make appointments on promotion and transfer and to confirm appointments and to the power to appoint a person to act in or perform the functions of that office at any time when the office is vacant or the holder thereof is unable (whether by reason of absence or infirmity of 
mind or body or any other cause) to perform the functions of that office.\n\n(7) References in this Constitution to the power to remove a public officer from his or her office shall be construed as including references to any power conferred by any law to require or permit that officer to retire from the public service:\n\nProvided that nothing in this subsection shall be construed as conferring on any person or authority power to require a judge of the Court of Appeal or the High Court, the Auditor-General or the Director of Public Prosecutions to retire from the public service.\n\n(8) Any provision in this Constitution that vests in any person or authority power to remove any public officer from his or her office shall be without prejudice to the power of any person or authority to abolish any office or to any law providing for the compulsory retirement of public officers generally or in any class of public officer on attaining an age specified therein.\n\n(9) Where power is vested by this Constitution in any person or authority to appoint any person to act in or perform the functions of any office if the holder thereof is himself unable to perform those functions, no such appointment shall be called in question on the ground that the holder of the office was not unable to perform those functions.\n\n(10) No provision of this Constitution that any person or authority shall not be subject to the direction or control of any other person or authority in the exercise of any functions under this Constitution shall be construed as precluding a court of law from exercising jurisdiction in relation to any question whether that person or authority has performed those functions in accordance with this Constitution or any other law.\n\n(11) Where any power is conferred by this Constitution to make any Act, order,", - "page_start": 54, - "page_end": 54, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "By clicking on the \"**Data->Licensing Assistant**\" link 
in the main menu, the Licence Assistant is opened in a new window, displaying relevant information of all supported licences by the tool.\n\n| | | Newsletter FAQ Search Contact Cookies Legal notice English (en) | > |\n| --- | --- | --- | --- |\n| | | Search site content ... | ರ |\n| European Data Portal > Licensing Assistant | | | |\n| 11 What we do - | Data~ Providing Data . | Using Data - Resources . | |\n| Datasets Cataloques | Metadata Quality Licensing Assistant | SPARQL Manager Statistics | |\n| Licensing Assistant | | | |\n| Data which is shared with a licence becomes Open Data. There are many licences available. | The licence assistant provides a description of the available licences. It also gives an overview | | |\n| of how to apply licences as re-publisher/distributor of Open Data and how to combine multiple | | | |\n| licences. | | | |\n| Please find a licence by selecting the preferred licence terms below: | | | |\n| Advanced settings | | | |\n| Obligation | Permission | Prohibition | |\n| Lesser Copyleft Attribution | Derivative Works Distribution | Commercial use | |\n| Sharealike Notice Copyleft | Reproduction Sublicensing | | |\n| State Changes | Use patent claims | | |\n| Name Terms | | | |\n| CC BY 3.0 Austria | Obligation: Attribution Permission: Derivative Works | Obligation: Notice Permission: Distribution | |\n| | Permission: Reproduction | | |\n| CC-BY 4.0 | Obligation: Attribution Permission: Derivative Works | Permission: Distribution Obligation: Notice | |\n| | Obligation: State Changes Permission: Reproduction | | |\n| CC-BY 3.0 NL | Obligation: Attribution Permission: Derivative Works | Obligation: Notice Permission: Distribution | |\n| | Permission: Reproduction | | |\n| CC-BY-NC 4.0 | Obligation: Attribution Permission: Derivative Works | Obligation: Notice | |\n| | Prohibition: Commercial use Permission: Distribution | Obligation: State Changes | |\n| | Permission: Reproduction | | |\n| CC-BY-NC-ND 4.0 | Obligation: Attribution 
Obligation: Notice | Prohibition: Commercial use Permission: Distribution | |\n| | Obligation: State Changes Permission: Reproduction | | |", - "page_start": 34, - "page_end": 34, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "communication be to the public generally or to any person or class of persons) and freedom from interference with his or her correspondence.\n\n(2) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision-\n\n- (a) that is reasonably required in the interests of defence, public safety, public order, public morality or public health; or\n- (b) that is reasonably required for the purpose of protecting the reputations, rights and freedoms of other persons or the private lives of persons concerned in legal proceedings, preventing the disclosure of information received in confidence, maintaining the authority and independence of the courts, regulating educational institutions in the interests of persons receiving instruction therein, or regulating the technical administration or the technical operation of telephony, telegraphy, posts, wireless, broadcasting or television; or\n- (c) that imposes restrictions upon public officers, employees of local government bodies, or teachers,\n\nand except so far as that provision or, as the case may be, the thing done under the authority thereof is shown not to be reasonably justifiable in a democratic society.\n\n## **13. 
Protection of freedom of assembly and association**\n\n(1) Except with his or her own consent, no person shall be hindered in the enjoyment of his or her freedom of assembly and association, that is to say, his or her right to assemble freely and associate with other persons and in particular to form or belong to trade unions or other associations for the protection of his or her interests.\n\n(2) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision-\n\n- (a) that is reasonably required in the interests of defence, public safety, public order, public morality or public health;\n- (b) that is reasonably required for the purpose of protecting the rights or freedoms of other persons;\n- (c) that imposes restrictions upon public officers, employees of local government bodies, or teachers; or\n- (d) for the registration of trade unions and associations of trade unions in a register established by or under any law, and for imposing reasonable conditions relating to the requirements for entry on such a register (including conditions as to the minimum number of persons necessary to constitute a trade union qualified for registration, or of members necessary to constitute an association of trade unions qualified for registration) and conditions whereby registration may be refused on the grounds that any other trade union already registered, or association of trade unions already registered, as the case may be, is sufficiently representative of the whole or of a substantial proportion of the interests in respect of which registration of a trade union or association of trade unions is sought,\n\nand except so far as that provision or, as the case may be, the thing done under the authority thereof is shown not to be reasonably justifiable in a democratic society.\n\n## **14. 
Protection of freedom of movement**\n\n(1) No person shall be deprived of his or her freedom of movement, and for the purposes of this section the said freedom means the right to move freely throughout Botswana, the right to reside in any part of Botswana, the right to enter Botswana and immunity from expulsion from Botswana.\n\n(2) Any restriction on a person's freedom of movement that is involved in his or", - "page_start": 11, - "page_end": 11, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Louis XIV in 1685, the year he revoked the Edict of Nantes\n\nrewarded converts to Catholicism.[68] This discrimination did not encounter much Protestant resistance, and a steady conversion of Protestants occurred, especially among the noble elites.\n\nIn 1681, Louis dramatically increased his persecution of Protestants. The principle of *cuius regio, eius religio* generally also meant that subjects who refused to convert could emigrate, but Louis banned emigration and effectively insisted that all Protestants must be converted. Secondly, following the proposal of René de Marillac and the Marquis of Louvois, he began quartering dragoons in Protestant homes. Although this was within his legal rights, the *dragonnades* inflicted severe financial strain on Protestants and atrocious abuse. Between 300,000 and 400,000 Huguenots converted, as this entailed financial rewards and exemption from the *dragonnades*. [69]\n\nOn 15 October 1685, Louis issued the Edict of Fontainebleau, which cited the redundancy of privileges for Protestants given their scarcity after the extensive conversions. The Edict of Fontainebleau revoked the Edict of Nantes and repealed all the privileges that arose therefrom.[4] By his edict, Louis no longer tolerated the existence of Protestant groups, pastors, or churches in France.\n\nNo further churches were to be constructed, and those already existing were to be demolished. Pastors could choose either exile or secular life. 
Those Protestants who had resisted conversion were now to be baptised forcibly into the established church.[70]\n\nProtestant peasants rebelled against the officially sanctioned *dragonnades* (conversions enforced by dragoons, labeled \"missionaries in boots\") that followed the Edict of Fontainebleau.\n\nHistorians have debated Louis's reasons for issuing the Edict of Fontainebleau. He may have been seeking to placate Pope Innocent XI, with whom relations were tense and whose aid was necessary to determine the outcome of a succession crisis in the Electorate of Cologne. He may also have acted to upstage Emperor Leopold I and regain international prestige after the latter defeated the Turks without Louis's help. Otherwise, he may simply\n\nhave desired to end the remaining divisions in French society dating to the Wars of Religion by fulfilling his coronation oath to eradicate heresy. [71][72]\n\nMany historians have condemned the Edict of Fontainebleau as gravely harmful to France.[73] In support, they cite the emigration of about 200,000 highly skilled Huguenots (roughly one quarter of the Protestant population, or 1% of the French population) who defied royal decrees and fled France for various Protestant states, weakening the French economy and enriching that of Protestant states. On the other hand, some historians view this as an exaggeration. They argue that most of France's preeminent Protestant businessmen and industrialists converted to Catholicism and remained.[74]\n\nWhat is certain is that the reaction to the Edict was mixed. Even while French Catholic leaders exulted, Pope Innocent XI still argued with Louis over Gallicanism and criticized the use of violence. Protestants across Europe were horrified at the treatment of their co-religionists, but most Catholics in France applauded the move. 
Nonetheless, it is indisputable that Louis's public image in most of Europe, especially in Protestant regions, was dealt a severe blow.\n\nIn the end, however, despite renewed tensions with the Camisards of south-central France at the end of his reign, Louis may have helped ensure that his successor would experience fewer instances of the religion-based disturbances that had plagued his forebears. French society would sufficiently change by the time of his descendant, Louis XVI, to welcome tolerance in the form of the 1787 Edict of Versailles, also known as the Edict of Tolerance. This restored to non-Catholics their civil rights and the freedom to worship openly. [75] With the advent of the French Revolution in 1789, Protestants were granted equal rights with their Roman Catholic counterparts.\n\n## **Nine Years' War**\n\n#### **Causes and conduct of the war**", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia5.pdf" - }, - { - "text": "### own procedure.\n\n(14) Except as may be otherwise provided in its rules or procedure, the Commission may act notwithstanding any vacancy in its membership or the absence of any member and its proceedings shall not be invalidated by the presence or participation of any person not entitled to be present at or to participate in those proceedings.\n\n(15) Any decision of the Commission shall require the concurrence of a majority of all the members thereof.\n\n(16) A member of the Commission shall not, during the tenure of his or her office or during the three years immediately following such tenure, be eligible for appointment to any public office other than that of Ambassador, High Commissioner or other principal representative of Botswana in any other country or accredited to any international organization.\n\n### **110. 
Appointment, etc., of public officers**\n\n(1) Subject to the provisions of this section and of sections 111, 113 and 114 of this Constitution, power to appoint persons to hold or to act in any office in the public service, to exercise disciplinary control over persons holding or acting in such offices and to remove from such offices shall vest in such person or persons as may be prescribed by Act of Parliament.\n\n(2) The provisions of this section shall not apply in relation to the following offices, that is to say-\n\n- (a) the office of judge of the Court of Appeal or of the High Court;\n- (b) any office to which section 104 or 112 of the Constitution applies.\n\n(3) Before any person or persons as may have been prescribed under the provisions of subsection (1) exercise power to appoint to or to act in any public office any person who holds or is acting in any office the power to make appointments to which is vested by this Constitution in the President acting in accordance with the advice of the Judicial Service Commission such person shall consult with the Judicial Service Commission.\n\n### **111. 
Appeals to President**\n\n(1) Any person other than a member of the Botswana Police Force or the Prison Service who has been removed from office or subjected to any other punishment by the exercise of any powers conferred on any person under the provisions of section 110 of this Constitution may appeal to the Public Service Commission who may dismiss such appeal or allow it wholly or in part.\n\n(2) Subject to the provisions of subsection (3) every decision of the Public Service Commission under the provisions of this section shall be final.\n\n(3) Notwithstanding anything contained in subsection (2) if the Public Service Commission dismisses an appeal or allows it in part only the person who appealed may appeal to the President.\n\n(4) If any person appeals to the President in accordance with the provisions of subsection (3) of this section the President shall either dismiss the appeal or shall order that it be heard by a tribunal appointed by the President, the Chairman of which shall be a person who holds or has held high judicial office or is qualified to be appointed as a judge of the High Court.\n\n(5) If the President appoints a tribunal to hear an appeal in accordance with subsection (4) of this section the tribunal shall hear the appeal and shall advise the President whether or not the appeal should be allowed either wholly or in part, and the President shall act in accordance with that advice.\n\n### **112. 
Powers of President in relation to certain public offices**\n\n(1) The power to appoint a person to hold or act in offices to which this section applies and to remove from office and to exercise disciplinary control over persons", - "page_start": 47, - "page_end": 47, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "description of the boundaries of those constituencies.\n\n(2) The boundaries of each constituency shall be such that the number of inhabitants thereof is as nearly equal to the population quota as is reasonably practicable:\n\nProvided that the number of inhabitants of a constituency may be greater or less than the population quota in order to take account of natural community of interest, means of communication, geographical features, density of population, and the boundaries of Tribal Territories and administrative districts.\n\n(3) In this section \"population quota\" means the number obtained by dividing the number of inhabitants of Botswana (as ascertained by reference to the latest comprehensive national population census in Botswana) by the number of constituencies into which Botswana is divided under section 63 of this Constitution.\n\n(4) The President shall as soon as practicable after the submission of the report of the Delimitation Commission, by Proclamation published in the Gazette, declare the boundaries of the constituencies as delimited by the Commission.\n\n(5) A Proclamation made under subsection (4) of this section shall come into force at the next dissolution of the National Assembly after it is made.\n\n(6) The Commission may by regulation or otherwise regulate its own procedure and may, subject to its rules of procedure, act notwithstanding any vacancy in its membership or the absence of any member and its proceedings shall not be invalidated by the presence or participation of any person not entitled to be present at or to participate in those proceedings:\n\nProvided that any decision of the Commission shall 
require the concurrence of a majority of all its members.\n\n(7) In the exercise of its functions under this section the Delimitation Commission shall not be subject to the direction or control of any other person or authority.\n\n(8) A Delimitation Commission shall stand dissolved upon the date on which its report is delivered to the President.\n\n## **65A. Appointment of Independent Electoral Commission**\n\n(1) There shall be an Independent Electoral Commission which shall consist of-\n\n- (a) a Chairman who shall be a judge of the High Court appointed by the Judicial Service Commission;\n- (b) a legal practitioner appointed by the Judicial Service Commission; and\n- (c) five other persons who are fit, proper and impartial, appointed by the Judicial Service Commission from a list of persons recommended by the All Party Conference.\n\n(2) Where the All Party Conference fail to agree on all or any number of persons referred to in subsection (1)(c) of this section up to dissolution of Parliament, the Judicial Service Commission shall appoint such person or persons as are necessary to fill any\n\nvacancy.(3) For the purposes of this section, \"All Party Conference\" means a meeting of all registered political parties convened from time to time by the Minister.\n\n(4) The first appointments of the Chairman and the Members of the Commission shall be made not later than 31st January, 1999, and thereafter subsequent appointments shall be made at the last dissolution of every two successive lives of Parliament.\n\n(5) The Chairman and the members of the Commission shall hold office for a period of two successive lives of Parliament.\n\n(6) A person shall not be qualified to be appointed as a member of the Independent Electoral Commission if-\n\n(a) he or she has been declared insolvent or adjudged or otherwise declared", - "page_start": 29, - "page_end": 29, - "source_file": "Botswana-constitution.pdf" - } - ] - }, - { - "references": { - "source_file": 
"legal4_opengouvernementlicense.pdf", - "query": "What is the prison population grew in average by year between 1993 and 2008 ?", - "target_page": 8, - "target_passage": "The prison population grew rapidly between 1993 to 2008, at an average of 4% a year.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# **2. Recent trends in the population**\n\nThe 'Story of the Prison Population 1993 to 2012' is an in-depth look at what happened to the prison population between 1993 and 2012 and the major factors contributing to the changes.4 \n\nThe prison population grew rapidly between 1993 to 2008, at an average of 4% a year. This rapid rise was driven by:\n\n- increased numbers of people sentenced to immediate custody from 1993 to 2002;\n- increases in the average custodial sentence length and increased use of indeterminate sentences; and\n- an increase in numbers recalled to prison following breaches of the conditions of licence and these offenders spending longer in prison once recalled.\n\nThe rise in the prison population slowed considerably from the summer of 2008, in part due to the introduction of the Criminal Justice and Immigration Act (CJIA) 20085 which changed sentencing and offender management in ways which helped to reduce growth in the prison population.\n\nThis flatter trend continued until the public disorder seen in UK cities from 6 to 9 August 2011 which had an immediate but temporary impact on the prison population.\n\nDuring 2012 and into 2013, the prison population began to fall due to a falling remand population and a continued decline in the number of under 18s in custody. 
The falling remand population during 2012 reflected falling volumes going through the courts plus the introduction, in December 2012, of measures restricting the use of remand for all offenders who would be unlikely to receive a custodial sentence.6\n\nFrom the end of August 2013 to the end of October 2013, the remand population rose sharply, driving an overall increase in the prison population. This was being driven by an increase in demand in the Crown Courts, especially among more serious tri-able either way cases. The total population has continued to rise since the beginning of 2014 and reached 85,9257 on the\n\n4 Story of the Prison Population: www.gov.uk/government/publications/story-of-the-prisonpopulation-1993-2012\n\n5 services.parliament.uk/bills/2007-08/criminaljusticeandimmigration.html 6 http://services.parliament.uk/bills/2010-11/legalaidsentencingandpunishmentofoffenders.html 7 www.gov.uk/government/statistics/prison-population-figures-2014", - "page_start": 7, - "page_end": 7, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "# **Key points**\n\nThis bulletin presents projections of the prison population in England and Wales from November 2014 to December 2020. The prison population projections are based on assumptions about future custodial convictions and incorporate the anticipated impacts of agreed policy and procedural initiatives.\n\nThe \"Central Scenario\" estimates that the prison population will increase from the current position 85,9251 to 87,700 by June 2015. By the end of June 2020 the prison population is projected to be 90,200. This Central Scenario is our best estimate based on the available information. 
The projected prison population under our Central Scenario is shown in Chart 1.\n\nThe prison population projections are produced using a model of flows of offenders into and out of prison which counts the resulting prison population each month.\n\n## **Chart 1: Projected prison population (Central Scenario)**\n\nThe Central Scenario has been modelled assuming custodial convictions are broadly in line with recent trends and average length of sentence to be flat based on recent trends.\n\nThe projections do not attempt to estimate the impact of any future Government policy that is yet to achieve Royal Assent, and therefore become less certain over time.\n\n1 As at 21 November 2014: www.gov.uk/government/statistics/prison-population-figures-2014", - "page_start": 3, - "page_end": 3, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## **4. Results**\n\nThe Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020.\n\nChart 2 presents Prison population projections from November 2014 to December 2020.\n\n#### **Chart 2: Projected monthly prison population (all scenarios)**\n\nIllustrative Scenario 1 estimates that the prison population will rise to 87,100 by the end of June 2015 and then fall to 81,400 by the end of June 2020.\n\nIllustrative Scenario 2 estimates that the prison population will rise to 88,900 by the end of June 2015 and to 98,900 by the end of June 2020.\n\nThe projected trends reflect the cumulative impacts of the various sentencing, legislative and procedural assumptions that are used to generate the projections. The seasonal pattern reflects the dip in the prison population which is always seen around the Christmas period.\n\nIn the Central Scenario, the prison population is expected to rise to 90,200 by June 2020. 
The projected population increase is largely due to the recent trends in case mix where we have seen more serious cases come before the courts. This results in offenders receiving longer custodial sentence lengths, which in turn places an upward pressure on the prison population. The growth in this scenario is largely driven by the rise in the determinate population which is projected to grow to 60,200 by June 2020. This is partially due to the", - "page_start": 12, - "page_end": 12, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## **3a) Producing prison population projections**\n\nPrison population projections are produced using the Prison Population Stock-Flow Model. The principal sub-populations in prison – determinate sentence, life sentence, imprisonment for public protection (IPP) and remand – are modelled using stock-flow structures based on the generic structure shown in Figure B2. The stock-flow structures model the flow of offenders into and out of prison and count the resulting prison population at the end of each month.\n\n#### **Figure B2: Generic stock-flow structure in the Prison Population Stock-Flow Model**\n\nAverage Time Served\n\nFor the determinate population, the monthly inflows to prison are based on the custodial convictions projections described above. These custodial convictions include offenders that may already be serving a sentence for a previous crime or those who would serve their whole custodial sentence on remand, meaning that they would not be a new reception to prison. 
To convert from custodial convictions to prison receptions we apply a conversion ratio derived from the historical proportions of custodial convictions to prison receptions for each sub-population averaged over the last twelve months of historical data (April 2013 to March 2014 inclusive).\n\nMonthly outflows for the determinate population are based on observed custodial sentence lengths and the observed percentage of sentence length served taken from October 2013 to April 2014. Each projected offender that enters the model is given a custodial sentence length that is randomly selected from the relevant distribution. These distributions are populated with custodial sentence lengths from actual offender receptions who share the same characteristics of offence, gender and age group in the observed time period. The percent of custodial sentence length served is derived in the same manner, except that the observed distribution is made up of discharged offenders further disaggregated by custodial sentence length band.\n\nFor offenders who receive the new EDS sentence an adjustment is made to the percent of custodial length served to reflect that these offenders will spend a greater proportion of their sentence in custody than standard determinate sentenced offenders discharged to date.\n\nProjected prison receptions are sub-divided by age category (Juvenile, Young Adult, Adult) with the exact age of the offender attributed in the same manner as the custodial sentence lengths. This allows the model to explicitly age the offenders whilst in prison (e.g. 
move from Juvenile to Young Adult categories).", - "page_start": 26, - "page_end": 26, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "At the core of the method is a model of flows of offenders into and out of prison which counts the resulting prison population each month for sentenced, recall and remand prisoners.\n\nInputs to the prison projections model include projections of future custodial convictions. These are generated from time series projections of numbers of defendants entering the criminal courts and take into account the age, gender and offence of defendants entering the system, the flow of cases through the courts and the sentences which concluded cases attract.\n\nThe prison projections model monitors the sizes of the sentenced, recall and remand prison populations. These populations depend on the inflows defined above and the outflows. These outflows are defined by observed distributions of custodial sentence lengths, and the proportion of custodial sentences served for subsets of these populations. The model also simulates the ageing of the prison population over time.\n\nThe projection model is based on data up to June 2014 from various sources including court proceedings and performance data, sentencing data and prison receptions and population data.\n\nThe results of the prison projections model are supplemented with an estimate of the future non-criminal and fine defaulter populations, which is based on the latest available data to September 2014.\n\nThree scenarios have been modelled. These scenarios track the impact of three different incremental changes in sentencing behaviour:\n\n- The Central Scenario assumes custodial convictions are broadly in line with recent trends. The average length of sentence is assumed to be flat based on recent trends in sentence lengths. 
This broadly reflects the assumptions for Scenario 2 in the November 2013 projections.\nWe also consider two illustrative scenarios\n\n- Scenario 1 assumes that custodial convictions will fall against recent trends. The average length of sentence is assumed to be lower than what has been observed in recent trends in sentence lengths.\n- Scenario 2 assumes a rise in custodial convictions when compared to recent trends. Also the average length of sentence is assumed to be higher than what has been observed in recent trends in sentence lengths.\n\nThe three scenarios also incorporate the impact of:\n\n- trends in the age, gender and offence of defendants entering the system and in the flow of cases through the courts;", - "page_start": 10, - "page_end": 10, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "# **1. Central Scenario**\n\nThis bulletin presents prison population projections for England and Wales from November 2014 to December 2020. The central projection is produced to aid development, capacity planning and resource allocation within the Criminal Justice System (CJS) and the National Offender Management Service (NOMS). The latest published useable operational capacity (21 November 2014) is 88,0152 .\n\nThe Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020.\n\nThe Central Scenario tracks the impact of current trends in sentencing on custodial convictions, custodial sentence lengths and hence on the resulting prison population. These assumptions have been agreed through a consultative process. Government policy is only included in these projections when it has received Royal Assent. 
These projections also take into account other drivers including:\n\n- trends in the age, gender and offence of defendants entering the system and in the flow of cases through the courts;\n- assumptions regarding future parole hearing frequency and expected outcomes for indeterminate (Life and Indeterminate for the Public Protection) sentences;\n- the Home Office gaining access to all 580 places at the Verne Immigration Removal Centre (IRC) by January 2015;\n- the impacts of the Offender Rehabilitation Act 20143 which achieved Royal Assent on 13 March 2014 meaning offenders sentenced to custodial sentences of less than 12 months will be released subject to licence. There will also be a new post-sentence supervision period following licence for offenders released from custodial sentences of less than 2 years;\n- the impacts of the Release on Temporary Licence (ROTL) review deciding that all offenders who have previously absconded will no longer be allowed to return to the open estate or be released on temporary licence except in exceptional circumstances.\n\n2 www.gov.uk/government/statistics/prison-population-figures-2014 3 www.justice.gov.uk/transforming-rehabilitation", - "page_start": 5, - "page_end": 5, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "The approach for the other sub-populations is similar and has not been substantially revised since the 2013 publication. The methodology applied to each is briefly outlined below.\n\nThe recall population is projected going forward based on time-series data available to October 2014.\n\nFor remand prisoners the average time served on remand is calculated from the ratio of the remand population to remand receptions. The modelled stock of prisoners is calibrated to historical actuals by varying levels of receptions. 
The remand population is generated in two parts both using this approach – untried remand and unsentenced remand populations being treated separately.\n\nIPP and life sentence prisoners have an extra section in the stock-flow structure which models the indeterminate nature of their sentence lengths. Outflows for IPP and life sentence prisoners depend on the tariff lengths they receive and on the frequency and outcome of Parole Board hearings. The values of these parameters are set and calibrated to reflect the most recent data on Parole Board outcomes.\n\nNOMS have made an agreement with the Home Office to hold an increased number of immigration detainees, which are only seen in the final two periods of historical data. The projected size of the non-criminal population is therefore set equal to the average size of the non-criminal population over the last two months of available data. This ensures that the non-criminal projections reflect the latest and most accurate count of the non-criminal population.\n\nThe population in prison at the end of each modelled month is aggregated into the categories defined by gender, current age group and, for determinate sentence prisoners, sentence length band, to produce raw, unadjusted prison population projections.\n\n## **3b) Accounting for the impacts of circumstance, legislation, and for seasonal effects**\n\nThe raw, unadjusted prison population projections are subject to model adjustments to show the impact of certain provisions in the Offender Rehabilitation Act 2014, changes at the Verne and the ROTL review. Model adjustments are also used to account for seasonal variation in the population. Model adjustments have been applied equally to all the scenarios modelled.\n\nThe Home Office is to gain access to all 580 places at the Verne IRC by January 2015. 
The estimated impacts have been applied to the non-criminal projection in the model.\n\nProvisions in the Offender Rehabilitation Act 2014 will mean that offenders sentenced to custodial sentences of less than 12 months will be released subject to licence (in the same way as offenders currently released from", - "page_start": 27, - "page_end": 27, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "# **Appendix B: Detail of models, scenarios and assumptions**\n\n## **The updated modelling approach**\n\nThe prison projections form part of the Ministry of Justice's wider work to develop a consistent and coherent suite of models of the criminal courts and offender management, driven by common projections of demand for the Ministry of Justice's services.\n\nThe prisons model used to generate the 2014 projections has not changed substantially from that used in the 2013 projections. As in the 2013 projections custodial sentence lengths used in the model are disaggregated by gender, age of the offender and offence type. The total time to be served in prison by projected future prisoners is assigned by matching their gender and age characteristics to relevant distributions of (i) custodial sentence lengths and (ii) the percentage of custodial sentence served. These distributions are derived from data for the period October 2013 to April 2014. 
This allows us to:\n\n- understand the Criminal Justice System factors which contribute to change in the prison population, including sentences lengths issued, the percentage of sentence served in custody, trial court and sentencing court changes, or shifts in the demographic characteristics of defendants;\n- model the impact on the prison population of specific Ministry of Justice and other Criminal Justice Agency policy changes; and\n- quantify the impact of uncertainty around the time a defendant serves in prison on the prison population.\n\n## **Overview of the modelling approach**\n\nCentral to the modelling approach is the Prison Population Stock-Flow model. Projections of future custodial convictions are fed into this model and outputs are adjusted to account for the impact of changes in legislation and process on the prison population, as shown in Figure B1, and described below.", - "page_start": 22, - "page_end": 22, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "longer custodial terms). During the licence period, offenders are under probation supervision and can be subject to various conditions for the purposes of rehabilitation and public protection. The 2014 Act will also introduce a new post-sentence supervision period that follows licence for offenders released from custodial sentences of less than 2 years. Breaches of these licence or supervision periods could result in the offender being recalled or committed to custody, impacting the prison population. The estimated impacts have been applied to the recall populations in the model.\n\nThe impact of the ROTL review has also been included as a post model adjustment. The review decided that all offenders who have previously absconded will no longer be allowed to return to the open estate or be released on temporary licence except in exceptional circumstances. 
Alongside protecting the public this may have the impact of delaying the release decision for such offenders impacting the prison population. The estimated impacts have been applied to the determinate population with sentences of greater than 12 months and the indeterminate population.\n\nOther ongoing changes within the system – included in previous published projections as model adjustments – are assumed to be captured in the past data and the trends detected therein.\n\nCustodial conviction projections for each sub-population were smoothed using a centred 12 month average. No seasonality in prison receptions and discharges was modelled explicitly. Seasonality was measured in the historical prison population and applied as a series of percentage adjustments to the final population projections. Seasonal factors for a set of sub-population categories (Remand, Determinate by sentence length band and Recall) were identified for each month by measuring statistically significant deviations from a centred 12 month average.", - "page_start": 28, - "page_end": 28, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "The assumptions used are based on consultation with policy and operational experts at the Ministry of Justice and the National Offender Management Service. They also take into account observed data trends:\n\n- These projections represent a change from last year where the 2013 Scenario 2 (central) saw the population gradually falling over the six year lifetime of the projection. The Central Scenario in the projections this year shows the population rising over the next six years. 
This change arises from the fact that the latest projections capture a recent upward trend in prosecutions of more serious offences.\n- Despite the fact that overall crime is falling there has been an increase in recorded crime for certain offence types:\n\t- o Prosecutions for sexual offences are the highest in the decade and increased by 19% in the 12 months ending June 2014, in line with a 21% increase in recorded crime. Offenders sentenced for sexual offences had an Average Custodial Sentence Length (ASCL) of 59.7 months, a rise of 2.4 months, compared with year ending June 2013.\n\t- o Violence against the person proceedings for indictable offences have increased by 7% in the 12 months ending June 2014. This is in line with an 11% increase in recorded crime.\n\nFurther statistics and commentary on the changes seen in Court proceedings and sentencing over the last year is presented in the Criminal Justice System Statistics Quarterly publication. This is available online on GOV.UK at: www.gov.uk/government/collections/criminal-justice-statistics-quarterly", - "page_start": 4, - "page_end": 4, - "source_file": "legal4_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "legal4_opengouvernementlicense.pdf", - "query": "Do you know the prison population estimation for the and of June 2020 ?", - "target_page": 13, - "target_passage": "The Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## **4. 
Results**\n\nThe Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020.\n\nChart 2 presents Prison population projections from November 2014 to December 2020.\n\n#### **Chart 2: Projected monthly prison population (all scenarios)**\n\nIllustrative Scenario 1 estimates that the prison population will rise to 87,100 by the end of June 2015 and then fall to 81,400 by the end of June 2020.\n\nIllustrative Scenario 2 estimates that the prison population will rise to 88,900 by the end of June 2015 and to 98,900 by the end of June 2020.\n\nThe projected trends reflect the cumulative impacts of the various sentencing, legislative and procedural assumptions that are used to generate the projections. The seasonal pattern reflects the dip in the prison population which is always seen around the Christmas period.\n\nIn the Central Scenario, the prison population is expected to rise to 90,200 by June 2020. The projected population increase is largely due to the recent trends in case mix where we have seen more serious cases come before the courts. This results in offenders receiving longer custodial sentence lengths, which in turn places an upward pressure on the prison population. The growth in this scenario is largely driven by the rise in the determinate population which is projected to grow to 60,200 by June 2020. This is partially due to the", - "page_start": 12, - "page_end": 12, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "# **Key points**\n\nThis bulletin presents projections of the prison population in England and Wales from November 2014 to December 2020. 
The prison population projections are based on assumptions about future custodial convictions and incorporate the anticipated impacts of agreed policy and procedural initiatives.\n\nThe \"Central Scenario\" estimates that the prison population will increase from the current position 85,9251 to 87,700 by June 2015. By the end of June 2020 the prison population is projected to be 90,200. This Central Scenario is our best estimate based on the available information. The projected prison population under our Central Scenario is shown in Chart 1.\n\nThe prison population projections are produced using a model of flows of offenders into and out of prison which counts the resulting prison population each month.\n\n## **Chart 1: Projected prison population (Central Scenario)**\n\nThe Central Scenario has been modelled assuming custodial convictions are broadly in line with recent trends and average length of sentence to be flat based on recent trends.\n\nThe projections do not attempt to estimate the impact of any future Government policy that is yet to achieve Royal Assent, and therefore become less certain over time.\n\n1 As at 21 November 2014: www.gov.uk/government/statistics/prison-population-figures-2014", - "page_start": 3, - "page_end": 3, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "# **3. Modelling methodology and projection scenarios**\n\nThe prison projections model is part of wider work within the Ministry of Justice to develop a consistent and coherent suite of models of the criminal courts and offender management, driven by common projections of demand for the Ministry of Justice's services.\n\nThe custodial convictions model uses projections of numbers of defendants entering the criminal courts. 
In order to project volumes of defendants being given a custodial sentence, it also takes into account:\n\n- the age, gender and offence of defendants entering the system;\n- the flow of cases through the courts; and\n- the sentences which concluded cases attract.\n\nThe prison population projections model takes projections of custodial convictions, converts them to projections of prison receptions and then models the amount of time that offenders spend in prison to calculate the resulting prison population.\n\nThe benefits of this method are that it allows us to:\n\n- explicitly project custodial convictions (rather than just convictions);\n- understand the Criminal Justice System factors which contribute to change in the prison population, such as time served, sentences given, trial and sentencing court changes or shifts in defendant demographics; and\n- more easily model the impact on the prison population of specific Ministry of Justice and other Criminal Justice Agency policy changes relating to specific offences or specific sentences.\n\nAppendix B provides details of the methods used to produce the prison population projections and the assumptions behind them.\n\nThe assumptions informing these projections, and therefore the projections themselves, are subject to significant uncertainty. This is represented by the three scenarios, with each scenario being only as likely as the assumptions which inform it.\n\nThe method used for generating projections of the prison population in England and Wales for the 2014-2020 projections is consistent with the approach used to generate the 2013-2019 projections published on 7 November 2013.", - "page_start": 9, - "page_end": 9, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "# **2. 
Recent trends in the population**\n\nThe 'Story of the Prison Population 1993 to 2012' is an in-depth look at what happened to the prison population between 1993 and 2012 and the major factors contributing to the changes.4 \n\nThe prison population grew rapidly between 1993 to 2008, at an average of 4% a year. This rapid rise was driven by:\n\n- increased numbers of people sentenced to immediate custody from 1993 to 2002;\n- increases in the average custodial sentence length and increased use of indeterminate sentences; and\n- an increase in numbers recalled to prison following breaches of the conditions of licence and these offenders spending longer in prison once recalled.\n\nThe rise in the prison population slowed considerably from the summer of 2008, in part due to the introduction of the Criminal Justice and Immigration Act (CJIA) 20085 which changed sentencing and offender management in ways which helped to reduce growth in the prison population.\n\nThis flatter trend continued until the public disorder seen in UK cities from 6 to 9 August 2011 which had an immediate but temporary impact on the prison population.\n\nDuring 2012 and into 2013, the prison population began to fall due to a falling remand population and a continued decline in the number of under 18s in custody. The falling remand population during 2012 reflected falling volumes going through the courts plus the introduction, in December 2012, of measures restricting the use of remand for all offenders who would be unlikely to receive a custodial sentence.6\n\nFrom the end of August 2013 to the end of October 2013, the remand population rose sharply, driving an overall increase in the prison population. This was being driven by an increase in demand in the Crown Courts, especially among more serious tri-able either way cases. 
The total population has continued to rise since the beginning of 2014 and reached 85,9257 on the\n\n4 Story of the Prison Population: www.gov.uk/government/publications/story-of-the-prisonpopulation-1993-2012\n\n5 services.parliament.uk/bills/2007-08/criminaljusticeandimmigration.html 6 http://services.parliament.uk/bills/2010-11/legalaidsentencingandpunishmentofoffenders.html 7 www.gov.uk/government/statistics/prison-population-figures-2014", - "page_start": 7, - "page_end": 7, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "# **Appendix B: Detail of models, scenarios and assumptions**\n\n## **The updated modelling approach**\n\nThe prison projections form part of the Ministry of Justice's wider work to develop a consistent and coherent suite of models of the criminal courts and offender management, driven by common projections of demand for the Ministry of Justice's services.\n\nThe prisons model used to generate the 2014 projections has not changed substantially from that used in the 2013 projections. As in the 2013 projections custodial sentence lengths used in the model are disaggregated by gender, age of the offender and offence type. The total time to be served in prison by projected future prisoners is assigned by matching their gender and age characteristics to relevant distributions of (i) custodial sentence lengths and (ii) the percentage of custodial sentence served. These distributions are derived from data for the period October 2013 to April 2014. 
This allows us to:\n\n- understand the Criminal Justice System factors which contribute to change in the prison population, including sentences lengths issued, the percentage of sentence served in custody, trial court and sentencing court changes, or shifts in the demographic characteristics of defendants;\n- model the impact on the prison population of specific Ministry of Justice and other Criminal Justice Agency policy changes; and\n- quantify the impact of uncertainty around the time a defendant serves in prison on the prison population.\n\n## **Overview of the modelling approach**\n\nCentral to the modelling approach is the Prison Population Stock-Flow model. Projections of future custodial convictions are fed into this model and outputs are adjusted to account for the impact of changes in legislation and process on the prison population, as shown in Figure B1, and described below.", - "page_start": 22, - "page_end": 22, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "The approach for the other sub-populations is similar and has not been substantially revised since the 2013 publication. The methodology applied to each is briefly outlined below.\n\nThe recall population is projected going forward based on time-series data available to October 2014.\n\nFor remand prisoners the average time served on remand is calculated from the ratio of the remand population to remand receptions. The modelled stock of prisoners is calibrated to historical actuals by varying levels of receptions. The remand population is generated in two parts both using this approach – untried remand and unsentenced remand populations being treated separately.\n\nIPP and life sentence prisoners have an extra section in the stock-flow structure which models the indeterminate nature of their sentence lengths. Outflows for IPP and life sentence prisoners depend on the tariff lengths they receive and on the frequency and outcome of Parole Board hearings. 
The values of these parameters are set and calibrated to reflect the most recent data on Parole Board outcomes.\n\nNOMS have made an agreement with the Home Office to hold an increased number of immigration detainees, which are only seen in the final two periods of historical data. The projected size of the non-criminal population is therefore set equal to the average size of the non-criminal population over the last two months of available data. This ensures that the non-criminal projections reflect the latest and most accurate count of the non-criminal population.\n\nThe population in prison at the end of each modelled month is aggregated into the categories defined by gender, current age group and, for determinate sentence prisoners, sentence length band, to produce raw, unadjusted prison population projections.\n\n## **3b) Accounting for the impacts of circumstance, legislation, and for seasonal effects**\n\nThe raw, unadjusted prison population projections are subject to model adjustments to show the impact of certain provisions in the Offender Rehabilitation Act 2014, changes at the Verne and the ROTL review. Model adjustments are also used to account for seasonal variation in the population. Model adjustments have been applied equally to all the scenarios modelled.\n\nThe Home Office is to gain access to all 580 places at the Verne IRC by January 2015. The estimated impacts have been applied to the non-criminal projection in the model.\n\nProvisions in the Offender Rehabilitation Act 2014 will mean that offenders sentenced to custodial sentences of less than 12 months will be released subject to licence (in the same way as offenders currently released from", - "page_start": 27, - "page_end": 27, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "# **1. Central Scenario**\n\nThis bulletin presents prison population projections for England and Wales from November 2014 to December 2020. 
The central projection is produced to aid development, capacity planning and resource allocation within the Criminal Justice System (CJS) and the National Offender Management Service (NOMS). The latest published useable operational capacity (21 November 2014) is 88,0152 .\n\nThe Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020.\n\nThe Central Scenario tracks the impact of current trends in sentencing on custodial convictions, custodial sentence lengths and hence on the resulting prison population. These assumptions have been agreed through a consultative process. Government policy is only included in these projections when it has received Royal Assent. These projections also take into account other drivers including:\n\n- trends in the age, gender and offence of defendants entering the system and in the flow of cases through the courts;\n- assumptions regarding future parole hearing frequency and expected outcomes for indeterminate (Life and Indeterminate for the Public Protection) sentences;\n- the Home Office gaining access to all 580 places at the Verne Immigration Removal Centre (IRC) by January 2015;\n- the impacts of the Offender Rehabilitation Act 20143 which achieved Royal Assent on 13 March 2014 meaning offenders sentenced to custodial sentences of less than 12 months will be released subject to licence. 
There will also be a new post-sentence supervision period following licence for offenders released from custodial sentences of less than 2 years;\n- the impacts of the Release on Temporary Licence (ROTL) review deciding that all offenders who have previously absconded will no longer be allowed to return to the open estate or be released on temporary licence except in exceptional circumstances.\n\n2 www.gov.uk/government/statistics/prison-population-figures-2014 3 www.justice.gov.uk/transforming-rehabilitation", - "page_start": 5, - "page_end": 5, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "At the core of the method is a model of flows of offenders into and out of prison which counts the resulting prison population each month for sentenced, recall and remand prisoners.\n\nInputs to the prison projections model include projections of future custodial convictions. These are generated from time series projections of numbers of defendants entering the criminal courts and take into account the age, gender and offence of defendants entering the system, the flow of cases through the courts and the sentences which concluded cases attract.\n\nThe prison projections model monitors the sizes of the sentenced, recall and remand prison populations. These populations depend on the inflows defined above and the outflows. These outflows are defined by observed distributions of custodial sentence lengths, and the proportion of custodial sentences served for subsets of these populations. 
The model also simulates the ageing of the prison population over time.\n\nThe projection model is based on data up to June 2014 from various sources including court proceedings and performance data, sentencing data and prison receptions and population data.\n\nThe results of the prison projections model are supplemented with an estimate of the future non-criminal and fine defaulter populations, which is based on the latest available data to September 2014.\n\nThree scenarios have been modelled. These scenarios track the impact of three different incremental changes in sentencing behaviour:\n\n- The Central Scenario assumes custodial convictions are broadly in line with recent trends. The average length of sentence is assumed to be flat based on recent trends in sentence lengths. This broadly reflects the assumptions for Scenario 2 in the November 2013 projections.\nWe also consider two illustrative scenarios\n\n- Scenario 1 assumes that custodial convictions will fall against recent trends. The average length of sentence is assumed to be lower than what has been observed in recent trends in sentence lengths.\n- Scenario 2 assumes a rise in custodial convictions when compared to recent trends. Also the average length of sentence is assumed to be higher than what has been observed in recent trends in sentence lengths.\n\nThe three scenarios also incorporate the impact of:\n\n- trends in the age, gender and offence of defendants entering the system and in the flow of cases through the courts;", - "page_start": 10, - "page_end": 10, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "longer custodial terms). During the licence period, offenders are under probation supervision and can be subject to various conditions for the purposes of rehabilitation and public protection. The 2014 Act will also introduce a new post-sentence supervision period that follows licence for offenders released from custodial sentences of less than 2 years. 
Breaches of these licence or supervision periods could result in the offender being recalled or committed to custody, impacting the prison population. The estimated impacts have been applied to the recall populations in the model.\n\nThe impact of the ROTL review has also been included as a post model adjustment. The review decided that all offenders who have previously absconded will no longer be allowed to return to the open estate or be released on temporary licence except in exceptional circumstances. Alongside protecting the public this may have the impact of delaying the release decision for such offenders impacting the prison population. The estimated impacts have been applied to the determinate population with sentences of greater than 12 months and the indeterminate population.\n\nOther ongoing changes within the system – included in previous published projections as model adjustments – are assumed to be captured in the past data and the trends detected therein.\n\nCustodial conviction projections for each sub-population were smoothed using a centred 12 month average. No seasonality in prison receptions and discharges was modelled explicitly. Seasonality was measured in the historical prison population and applied as a series of percentage adjustments to the final population projections. Seasonal factors for a set of sub-population categories (Remand, Determinate by sentence length band and Recall) were identified for each month by measuring statistically significant deviations from a centred 12 month average.", - "page_start": 28, - "page_end": 28, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## **3a) Producing prison population projections**\n\nPrison population projections are produced using the Prison Population Stock-Flow Model. 
The principal sub-populations in prison – determinate sentence, life sentence, imprisonment for public protection (IPP) and remand – are modelled using stock-flow structures based on the generic structure shown in Figure B2. The stock-flow structures model the flow of offenders into and out of prison and count the resulting prison population at the end of each month.\n\n#### **Figure B2: Generic stock-flow structure in the Prison Population Stock-Flow Model**\n\nAverage Time Served\n\nFor the determinate population, the monthly inflows to prison are based on the custodial convictions projections described above. These custodial convictions include offenders that may already be serving a sentence for a previous crime or those who would serve their whole custodial sentence on remand, meaning that they would not be a new reception to prison. To convert from custodial convictions to prison receptions we apply a conversion ratio derived from the historical proportions of custodial convictions to prison receptions for each sub-population averaged over the last twelve months of historical data (April 2013 to March 2014 inclusive).\n\nMonthly outflows for the determinate population are based on observed custodial sentence lengths and the observed percentage of sentence length served taken from October 2013 to April 2014. Each projected offender that enters the model is given a custodial sentence length that is randomly selected from the relevant distribution. These distributions are populated with custodial sentence lengths from actual offender receptions who share the same characteristics of offence, gender and age group in the observed time period. 
The percent of custodial sentence length served is derived in the same manner, except that the observed distribution is made up of discharged offenders further disaggregated by custodial sentence length band.\n\nFor offenders who receive the new EDS sentence an adjustment is made to the percent of custodial length served to reflect that these offenders will spend a greater proportion of their sentence in custody than standard determinate sentenced offenders discharged to date.\n\nProjected prison receptions are sub-divided by age category (Juvenile, Young Adult, Adult) with the exact age of the offender attributed in the same manner as the custodial sentence lengths. This allows the model to explicitly age the offenders whilst in prison (e.g. move from Juvenile to Young Adult categories).", - "page_start": 26, - "page_end": 26, - "source_file": "legal4_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "legal4_opengouvernementlicense.pdf", - "query": "What is the phone number of the Ministry of Justice press office ?", - "target_page": 30, - "target_passage": "Press enquiries should be directed to the Ministry of Justice press office, telephone: 020 3334 3536 ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# **Contact Points for further information**\n\nCurrent and previous editions of this publication are available for download from www.justice.gov.uk/publications/statistics-and-data/index.htm\n\nPress enquiries should be directed to the Ministry of Justice press office, telephone: 020 3334 3536\n\nOther enquiries about these statistics should be directed to:\n\nJustice Statistics Analytical Services Ministry of Justice 7th Floor 102 Petty France London SW1H 9AJ\n\nGeneral enquiries about the statistical work of the Ministry of Justice can be emailed to: statistics.enquiries@justice.gsi.gov.uk\n\nGeneral information about the official statistics system of the UK is available from 
www.statistics.gov.uk", - "page_start": 29, - "page_end": 29, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "Alternative format versions of this report are available on request from the Ministry of Justice at statistics.enquiries@justice.gsi.gov.uk\n\n© Crown copyright Produced by the Ministry of Justice", - "page_start": 30, - "page_end": 30, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "holding or acting in such offices shall, subject to the provisions of sections 113 and 114 of this Constitution, vest in the President.\n\n(2) The offices to which this section applies are-\n\n- (a) Ambassador, High Commissioner or other principal representative of Botswana in any other country or accredited to any international organisation;\n- (b) Secretary to the Cabinet;\n- (c) Attorney-General;\n- (cA) Director of Public Prosecutions;\n- (d) Permanent Secretary;\n- (e) Commissioner of Police; and\n- (f) any other superscale office (other than an office to which this Constitution makes specific provision for appointment or an office to which appointment is made under the provisions of section 104 of this Constitution) which may be prescribed by Act of Parliament.\n\n**113. 
Tenure of office of Director of Public Prosecutions** (1) Subject to the provisions of this section, a person appointed as Director of Public Prosecutions shall hold office for a 5 year renewable term or until he or she attains the age of 60 years, whichever is the earlier.\n\n(2) A person holding the office of Director of Public Prosecutions may be removed from office only for inability to perform the functions of his or her office (whether arising from infirmity of body or mind or any other cause) or for misbehaviour or for incompetence and shall not be so removed except in accordance with the provisions of this section.\n\n(3) If the President considers that the question of removing a person holding the office of Director of Public Prosecutions from office ought to be investigated then-\n\n- (a) he or she shall appoint a tribunal which shall consist of a Chairman and not less than two other members, who hold or have held high judicial office; and\n- (b) the tribunal shall enquire into the matter and report on the facts thereof to the President and advise the President whether the person holding the office of Director of Public Prosecutions ought to be removed from office under this section for inability as aforesaid or for misbehaviour or for incompetence.\n- (4) Where a tribunal appointed under subsection (3) of this section advises the President that a person holding the office of Director of Public Prosecutions ought to be removed from office for inability as aforesaid or for misbehaviour or for incompetence, the President shall remove such person from office.\n\n(5) If the question of removing a person holding the office of Director of Public Prosecutions from office has been referred to a tribunal under this section, the President may suspend that person from performing the functions of his or her office, and any such suspension may at any time be revoked by the President and shall in any case cease to have effect if the tribunal advises the President that 
the person ought not to be removed from office.\n\n# **114. Tenure of office of Auditor-General**\n\n(1) Subject to the provisions of this section, a person holding the office of Auditor- General shall vacate his or her office when he or she attains the age of 60 years or such other age as may be prescribed by Parliament.\n\n(2) A person holding the office of Auditor-General may be removed from office only for inability to perform the functions of his or her office (whether arising from infirmity of body or mind or any other cause) or for misbehaviour and shall not be so removed except in accordance with the provisions of this section.\n\n(3) If the National Assembly resolves that the question of removing a person holding the office of Auditor-General from office under this section ought to be", - "page_start": 48, - "page_end": 48, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "than two other members, who hold or have held high judicial office;\n\n- (b) the tribunal shall enquire into the matter and report on the facts thereof to the President and advise the President whether the judge ought to be removed from office under this section for inability as aforesaid or for misbehaviour.\n(4) Where a tribunal, appointed under subsection (3) of this section, advises the President that a judge of the Court of Appeal ought to be removed from office for inability as aforesaid or for misbehaviour, the President shall remove such judge from office.\n\n(5) If the question of removing a judge of the Court of Appeal from office has been referred to a tribunal under subsection (3) of this section, the President may suspend the judge from performing the functions of his or her office, and any such suspension may at any time be revoked by the President and shall in any case cease to have effect if the tribunal advises the President that the judge ought not to be removed from office.\n\n## **102. 
Oaths to be taken by judges of Court of Appeal**\n\nA judge of the Court of Appeal shall not enter upon the duties of his or her office unless he or she has taken and subscribed such oath for the due execution of his or her office as may be prescribed by Parliament.\n\n## **PART III**\n\n## **Judicial Service Commission (ss 103-104)**\n\n## **103. Composition and procedure**\n\n(1) There shall be a Judicial Service Commission for Botswana which shall consist of-\n\n- (a) the Chief Justice who shall be Chairman;\n- (b) the President of the Court of Appeal (not being the Chief Justice or the most Senior Justice of the Court of Appeal);\n- (c) the Attorney-General;\n- (d) the Chairman of the Public Service Commission;\n- (e) a member of the Law Society nominated by the Law Society; and\n- (f) a person of integrity and experience not being a legal practitioner appointed by the President.\n\n(2) A member nominated under paragraph (e) or appointed under paragraph (f) of subsection (1) shall hold office for a period of two years, but shall be eligible for re-nomination or re-appointment, as the case may be, for another term of office for two years:\n\nProvided that-\n\n- (i) a member nominated under paragraph (e) may be removed from office by the rest of the members of the Commission acting together only for inability of the member to discharge the functions of his or her office whether arising from infirmity of mind or body or any other cause or for gross misbehaviour; or\n- (ii) a member appointed under paragraph (f) may be removed from office by the President only for inability of the member to discharge the functions of his or her office whether arising from infirmity of mind or body or any other cause or for gross misbehaviour.\n\n(3) A member of the Commission shall not enter upon the duties of his or her office until he or she has taken and subscribed such oath for the due execution of his or her office as may be prescribed by Parliament.\n\n(4) The Judicial 
Service Commission shall not be subject to the direction or control of any other person or authority in the exercise of its functions under this Constitution.\n\n(5) The Commission may regulate its own procedure and, subject to that procedure, may act notwithstanding any vacancy in its membership or the absence of", - "page_start": 44, - "page_end": 44, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "she has attained the age of 70 years or such other age as may be prescribed for the purposes of section 101 of this Constitution;\n\n- (ii) a person appointed under this subsection, who is not a judge of the Court of Appeal, may, notwithstanding the assumption or resumption of the functions of the office of President of the Court of Appeal by the holder of that office, continue to act as a judge of the Court of Appeal for so long thereafter and to such extent as may be necessary to enable him or her to deliver judgment or to do any other thing in relation to proceedings that were commenced before him or her previously thereto.\n(6) If the office of a Justice of Appeal is vacant or if any Justice of Appeal is appointed to act as Chief Justice or President of the Court of Appeal or is for any reason unable to perform the functions of his or her office, the President, acting in accordance with the advice of the Judicial Service Commission, may appoint a person qualified for appointment as a Justice of Appeal to act as a Justice of Appeal:\n\nProvided that a person may be so appointed notwithstanding that he or she has attained the age of 70 years or such other age as may be prescribed for the purposes of section 101 of this Constitution.\n\n(7) Any person appointed under subsection (6) of this section to act as a Justice of Appeal, shall subject to the provisions of section 101(4) and (5) of this Constitution, continue to act for the period of his or her appointment or, if no such period is specified, until his or her appointment is revoked by the 
President, acting in accordance with the advice of the Judicial Service Commission:\n\nProvided that the President, acting in accordance with the advice of the Judicial Service Commission, may permit a person whose appointment to act as a Justice of Appeal has expired or been revoked to continue to act as such a judge for such period as may be necessary to enable him or her to deliver judgment or to do any other thing in relation to proceedings that were commenced before him or her previously thereto.\n\n# **101. Tenure of office of judges of Court of Appeal**\n\n(1) Subject to the provisions of this section, a person holding the office of a judge of the Court of Appeal shall vacate that office on attaining the age of 70 years or such other age as may be prescribed by Parliament:\n\nProvided that-\n\n- (i) the President, acting in accordance with the advice of the Judicial Service Commission, may permit a judge who has attained that age to continue in office for such period as may be necessary to enable him or her to deliver judgment or to do any other thing in relation to proceedings that were commenced before him or her before he or she attained that age;\n- (ii) a person may be appointed as President of the Court of Appeal or as a Justice of Appeal for a fixed period of three years notwithstanding that he or she has attained the age referred to in this subsection or that he or she will before the expiry of his or her appointment have attained that age; and\n- (iii) the appointment as President of the Court of Appeal or as Justice of Appeal serving for a fixed period under paragraph (ii) above shall not affect the date at which he or she is due to retire.\n\n(2) A judge of the Court of Appeal may be removed from office only for inability to perform the functions of his or her office (whether arising from infirmity of body or mind or from any other cause) or for misbehaviour, and shall not be so removed except in accordance with the provisions of this 
section.\n\n(3) If the President considers that the question of removing a judge of the Court of Appeal under this section ought to be investigated then-\n\n- (a) he or she shall appoint a tribunal which shall consist of a Chairman and not less", - "page_start": 43, - "page_end": 43, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "### PART IV\n\n#### Powers of Parliament\n\n- 86. Legislative powers\n- 87. Mode of exercising legislative powers\n- 88. Introduction of Bills\n- 89. Alteration of Constitution\n\n#### PART V\n\n#### Summoning, Prorogation and Dissolution\n\n- 90. Sessions of Parliament\n- 91. Prorogation and dissolution of Parliament\n- 92. Vote of no confidence in the Government\n- 93. Sittings of National Assembly\n\n#### PART VI\n\n#### Interpretation\n\n- 94. Votes of two-thirds of the Assembly\n#### CHAPTER VI\n\n### The Judicature\n\n# PART I\n\n- The High Court\n- 95. Jurisdiction and composition\n- 96. Appointment of judges of High Court\n- 97. Tenure of office of judges of High Court\n- 98. Oaths to be taken by judges of High Court\n\n### PART II\n\n#### Court of Appeal\n\n- 99. Composition and jurisdiction\n- 100. Appointment of judges of Court of Appeal\n- 101. Tenure of office of judges of Court of Appeal\n- 102. Oaths to be taken by judges of Court of Appeal\n\n### PART III\n\n#### Judicial Service Commission\n\n- 103. Composition and procedure\n- 104. Appointment, etc., of judicial officers\n\n#### PART IV\n\n#### Interpretation of the Constitution\n\n- 105. Reference to High Court of cases involving interpretation of Constitution\n- 106. Appeal to Court of Appeal\n\n#### PART V\n\n#### Judicial Committee\n\n107. ......\n\n#### CHAPTER VII\n\n### The Public Service\n\n- 108. Power to specify qualifications for certain offices\n- 109. Public Service Commission\n- 110. Appointment, etc., of public officers\n- 111. Appeals to President\n- 112. 
Powers of President in relation to certain public offices", - "page_start": 2, - "page_end": 2, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "citizenship in force at that time, shall be regarded as a citizen by descent.\n\n# **34. Tenure of office of President**\n\n(1) The President shall, subject to the provisions of this section, hold office for an aggregate period not exceeding 10 years beginning from the date of his or her first assumption of office of President after the commencement of this Act.\n\n(2) The President shall cease to hold the office of President if at any time during his or her tenure of office any circumstances arise that would, if he or she were not a member of the National Assembly, cause him or her to be disqualified for election\n\nthereto.(3) The President shall cease to hold office of President at the expiry of the period prescribed under subsection (1) of this section, or when the person elected at the next election of President following a dissolution of Parliament assumes office.\n\n# **35. Vacancy in office of President**\n\n(1) Whenever the President dies, resigns or ceases to hold office, the Vice- President shall assume office as President with effect from the date of the death, resignation or ceasing to be President.\n\n(2) If the office of President-\n\n- (a) becomes vacant in circumstances in which there is no Vice-President; or\n- (b) is vacant whilst the Vice-President is absent from Botswana or is, by reason of physical or mental infirmity unable to perform the functions of his or her office,\n\nthe functions of the office of President shall, until such time as a new President assumes office in accordance with this section or section 32 of this Constitution, be performed by such Minister as the Cabinet shall appoint. 
For the purposes of this subsection, a certificate of the Chief Justice that the Vice-President is by reason of physical or mental infirmity unable to discharge the functions of his or her office, shall, in respect of any period for which it is in force, be conclusive and shall not be questioned in any court.\n\n(3) Any person performing the functions of the office of President by virtue of subsection (1) or (2) of this section shall not exercise the power of the President to revoke the appointment of Vice-President or to dissolve Parliament.\n\n(4) If the office of President becomes vacant, the National Assembly shall, unless Parliament is dissolved, and notwithstanding that it may be prorogued, meet on the seventh day after the office of President becomes vacant, or on such earlier day as may be appointed by the Speaker, and shall elect a person to the office in such manner as is prescribed by the next following subsection and, subject thereto, by or under an Act of Parliament.\n\n- (5) In an election of a President under this section-\n- (a) the Speaker shall preside at the meeting and conduct the election;\n- (b) a person may be a candidate if and shall not be a candidate unless he or she has been nominated as a candidate with his or her consent prior to the sitting of the National Assembly at which the election takes place, by not less than 10 Members of the National Assembly entitled to vote in that election;\n- (c) at the election every Member of the Assembly except the Speaker shall be entitled to vote;\n- (d) the votes of the Members of the Assembly who are entitled to vote shall be given by ballot in such manner as not to disclose how any particular Member voted, and any person who receives the votes of more than one half of the total number of persons entitled to vote shall be declared elected as President;\n- (e) a person elected as President under this section shall assume the office of President on the day upon which he or she is declared to be 
elected;\n- (f) not more than three ballots shall be taken unless in the opinion of the Speaker the holding of further ballots is likely to result in the election of a President, in", - "page_start": 18, - "page_end": 18, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Assistant Minister.\n\n# **43. Tenure of office of Ministers and Assistant Ministers**\n\nThe office of any Minister or Assistant Minister shall become vacant-\n\n- (a) in the case of a Minister or Assistant Minister appointed from among the Members of the National Assembly, or in the case of a Minister or Assistant Minister appointed from among persons who are not Members of the Assembly who becomes a Member of the Assembly before the expiration of four months from the date of his or her appointment-\n\t- (i) if he or she ceases to be a Member of the National Assembly otherwise than by reason of a dissolution of the National Assembly; or\n\t- (ii) if, at the first sitting of the Assembly after a general election, he or she is not a Member of the Assembly;\n- (b) in the case of a Minister or Assistant Minister appointed from among persons who are not Members of the Assembly, if before the expiration of four months from the date of his or her appointment-\n\t- (i) circumstances arise (other than a dissolution of the Assembly) that, if he or she were such a Member, would cause him or her to vacate his or her seat in the Assembly; or\n\t- (ii) he or she does not become a Member of the Assembly;\n- (c) if the holder of the office is removed from office by the President;\n- (d) upon the assumption by any person of the office of President.\n\n# **44. 
Cabinet**\n\n(1) There shall be a Cabinet which shall consist of the President, Vice-President and the Ministers.\n\n(2) There shall preside at meetings of the Cabinet-\n\n- (a) the President;\n- (b) in the absence of the President, the Vice-President; or\n- (c) in the absence of the President and the Vice-President, such Minister as the President may designate.\n\n(3) The Cabinet may act notwithstanding any vacancy in its membership.\n\n# **45. Oaths to be taken by Ministers and Assistant Ministers**\n\nThe Vice-President, a Minister or an Assistant Minister shall not enter upon the duties of his or her office unless he or she has taken and subscribed the oath of allegiance and such oath for the due execution of his or her office as may be prescribed by Parliament.\n\n# **46. Secretary to the Cabinet**\n\n(1) There shall be a Secretary to the Cabinet whose office shall be a public office.\n\n(2) The Secretary to the Cabinet shall have charge of the Cabinet Office and shall be responsible, in accordance with such instructions as may be given to him or her by the President, for arranging the business for, and keeping the minutes of, the Cabinet, for conveying decisions of the Cabinet to the appropriate person or authority, and shall have such other functions as the President may from time to time direct.\n\n# **PART III**\n\n# **Executive Functions (ss 47-56)**\n\n# **47. 
Functions of President**\n\n(1) The executive power of Botswana shall vest in the President and, subject to the provisions of this Constitution, shall be exercised by him or her either directly or through officers subordinate to him or her.\n\n(2) In the exercise of any function conferred upon him or her by this Constitution or any other law the President shall, unless it is otherwise provided, act in his or her own deliberate judgment and shall not be obliged to follow the advice tendered by any other", - "page_start": 22, - "page_end": 22, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "tribunal advises the President that the judge ought not to be removed from office.\n\n# **98. Oaths to be taken by judges of High Court**\n\nA judge of the High Court shall not enter upon the duties of his or her office unless he or she has taken and subscribed such oath for the due execution of his or her office as may be prescribed by Parliament.\n\n# **PART II**\n\n# **Court of Appeal (ss 99-102)**\n\n# **99. Composition and jurisdiction**\n\n(1) There shall be a Court of Appeal for Botswana which shall have such jurisdiction and powers as may be conferred on it by this Constitution or any other law.\n\n- (2) The judges of the Court of Appeal shall be-\n- (a) the President of the Court of Appeal;\n- (b) such number, if any, of Justices of Appeal as may be prescribed by Parliament; and\n- (c) the Chief Justice and the other judges of the High Court:\n\nProvided that Parliament may make provision for the office of President of the Court of Appeal to be held by the Chief Justice ex-officio.\n\n(3) The office of a Justice of Appeal shall not be abolished while there is a substantive holder thereof.\n\n(4) The Court of Appeal shall be a superior court of record and save as otherwise provided by Parliament shall have all the powers of such a court.\n\n# **100. 
Appointment of judges of Court of Appeal**\n\n(1) The President of the Court of Appeal shall, unless that office is held ex-officio by the Chief Justice, be appointed by the President.\n\n(2) The Justices of Appeal, if any, shall be appointed by the President, acting in accordance with the advice of the Judicial Service Commission.\n\n(3) A person shall not be qualified to be appointed as a judge of the Court of Appeal unless-\n\n- (a) he or she holds, or has held office as, a judge of a court having unlimited jurisdiction in civil and criminal matters in Botswana, in a Commonwealth country or in any country outside the Commonwealth that may be prescribed by Parliament or a court having jurisdiction in appeals from such a court; or\n- (b) he or she is qualified to practise as an advocate or attorney in such a court and has been qualified for not less than ten years to practise as an advocate or attorney in such a court; or\n- (c) he or she is qualified to practise as an advocate or attorney and he or she has had experience in the teaching of law in a recognised university for not less than ten years.\n\n(4) In computing, for the purposes of subsection (3) of this section, the period during which any person has been qualified to practise as an advocate or attorney any period during which he or she has held judicial office after becoming so qualified shall be included.\n\n(5) If the office of President of the Court of Appeal is vacant or if the President of the Court of Appeal is for any reason unable to perform the functions of his or her office, then, until a person has been appointed to and has assumed the functions of that office or until the President of the Court of Appeal has resumed those functions, as the case may be, those functions shall be performed by such one of the other judges of the Court of Appeal or such other person qualified for appointment as a judge of the Court of Appeal as the President may appoint for that purpose:\n\nProvided that-\n\n- (i) 
a person may be appointed under this subsection notwithstanding that he or", - "page_start": 42, - "page_end": 42, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Elected or Specially Elected Member of the Assembly his or her resignation shall be addressed to the Speaker, and in the case of a Member of the Ntlo ya Dikgosi his or her resignation from that office shall be addressed to the Chairman of the Ntlo ya Dikgosi.\n\n(2) The resignation of any person from any office established by this Constitution shall take effect on the date or at the time indicated in the writing signifying the resignation or, if no such date or time is so indicated, at the time the writing is received by the person or authority to whom it is addressed or by any person authorized by that person or authority to receive it.\n\n### **126. Reappointments and concurrent appointments**\n\n(1) Where any person has vacated any office established by this Constitution, he or she may, if qualified, again be appointed or elected to hold that office in accordance with the provisions of this Constitution.\n\n(2) Where a power is conferred by this Constitution upon any person to make any appointment to any office, a person may be appointed to that office notwithstanding that some other person may be holding that office, when that other person is on leave of absence pending the relinquishment of the office; and where two or more persons are holding the same office by reason of an appointment made in pursuance of this subsection, then, for the purposes of any function conferred upon the holder of that office, the person last appointed shall be deemed to be the sole holder of the office.\n\n## **127. 
Interpretation**\n\n(1) In this Constitution unless the context otherwise requires-\n\n**\"the Assembly\"** means the National Assembly;\n\n**\"Botswana\"** means the territory that, on 29th September, 1966, was comprised in the former Protectorate of Bechuanaland;\n\n**\"financial year\"** means the period of 12 months ending on 31st March in any year or on such other day as Parliament may prescribe;\n\n**\"the Gazette\"** means the Botswana Government Gazette;\n\n**\"high judicial office\"** means the office of a judge of a court of unlimited jurisdiction in civil and criminal matters in Botswana, a Commonwealth country or in any country outside the Commonwealth that may be prescribed by Parliament or the office of judge of a court having jurisdiction in appeals from such a court;\n\n**''Kgosana'' (pl. Dikgosana)** means Headman;\n\n**''Kgosi'' (pl. Dikgosi)** means Chief or Sub-Chief as defined in the Chieftainship Act;\n\n**\"oath\"** includes affirmation;\n\n**\"the oath of allegiance\"** means such oath of allegiance as may be prescribed by law;\n\n**\"public office\"** means, subject to the provisions of subsections (2) and (3) of this section, an office of emolument in the public service;\n\n**\"public officer\"** means a person holding or acting in any public office;\n\n**\"the public service\"** means the civil service of the Government;\n\n**\"session\"** means the sittings of the National Assembly beginning when it first sits after the coming into operation of this Constitution or after Parliament is prorogued or dissolved at any time and ending when Parliament is prorogued or is dissolved without having been prorogued;\n\n**\"sitting\"** means a period during which the National Assembly is sitting without adjournment and includes any period during which it is in committee;\n\n**\"subordinate court\"** means any court established for Botswana other than-\n\n- (a) the Court of Appeal;\n- (b) the High Court;\n- (c) a court martial; or", - "page_start": 
53, - "page_end": 53, - "source_file": "Botswana-constitution.pdf" - } - ] - }, - { - "references": { - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf", - "query": "What is SOLR ?", - "target_page": 4, - "target_passage": "Search engine used for portal content search and dataset search ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "### **3.7 SPARQL Manager**\n\nThe SPARQL Manager provides a graphical user interface (GUI) for sending user defined queries to the Virtuoso SPARQL query engine.\n\nThe powerful SPARQL Protocol and RDF Query Language are primarily aimed at professionals for querying metadata as Linked Data. A basic knowledge of the DCAT-AP specification is highly recommended.\n\nIn the future, users of the SPARQL Manager will be able to save their queries for scheduled execution. Additionally a notification will be send to the user when a result has changed.\n\nClicking the info icon in the upper right corner will display a step-by-step walkthrough of all components with a short info about their function.\n\nThis is possible in both of modes of the SPARQL Manager, the search and the assistant mode, which will be described in the following sections.\n\n| Newsletter FAQ Search Contact Cookies Legal notice Login | | | | | English (en) | > |\n| --- | --- | --- | --- | --- | --- | --- |\n| PORTAL | | | | Search site content ... | | ರ |\n| European Data Portal > Home | | | | | | |\n| 0 What we do ▼ Data - | | Using Data ▼ | Providing Data - | | Resources - | |\n| Search with SPARQL-Query | | | | | | |\n| Search for metadata in the European Data Portal triple store using SPARQL queries. | | | | | | |\n| The SPARQL manager sends user defined SPARQL queries to the Virtuoso SPARQL query engine. | | | | | | |\n| SPARQL specifications can be found on the W3C web site Please note that this is a tool for SPARQL experts. 
| | | | | | |\n| 0 Prefixes | | | | Examples: | | |\n| | | Number of datasets | | | | |\n| 1 SELECT (count(*) AS ?count) WHERE { { ?s a dcat:Dataset } } LIMIT 100 | | | | | | |\n| | | All dataset URIs | | | | |\n| | | Categories/themes and their number of | | | | |\n| | | datasets | | | | |\n| Format > | RDF/XML | | Q Execute the SPARQL query. | | | |\n| Limit | 100 | > | | | | |\n\n#### **3.7.1 SPARQL Search**\n\nIn this mode you can load some predefined example queries from the right side into the editable text area to introduce yourself with the very basic SPARQL syntax. Limiting the number of returned results is possible by selecting a value from the Limit-dropdown or by editing the query directly. Furthermore the format for the result can be selected. After clicking the Search-Button the result is displayed in Result data preview area below. The preview may be truncated depending on the size of the result. The complete result could always be downloaded as a file by clicking the Download-link on the right side.", - "page_start": 53, - "page_end": 53, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "#### **3.7.3 SPARQL Saving/Modifying a Query**\n\n| EUROPEAN | | Newsletter FAQ Search Contact Cookies Legal notice Login | | | English (en) | > |\n| --- | --- | --- | --- | --- | --- | --- |\n| DATA PORTAL | | | | Search site content ... | | ರ |\n| European Data Portal > Home | | | | | | |\n| C What we do ▼ | Data ▼ | Providing Data - | Using Data ▼ | | Resources - | |\n| Query details | | | Query Results | | | i |\n| Query name | Public query | | | | | |\n| | | नेंद्र | Creation time | | | |\n| SPARQL query | SELECT ?theme (count(?theme) AS ?count) WHERE {?s a dcat:Dataset . ?s dcat:theme ? 
| | 50 2018-03-01 | | 0 Details | |\n| theme} | GROUP BY ?theme LIMIT 100 | | 51 2018-03-02 | | 0 Details | |\n| | | | Next run | | | |\n| Format | RDF/XML | > | disabled | | | |\n| Query comment | Query comment | | | | | |\n| Enabled 0 | | | | Q Search | | |\n| ਨ Public | | | | | | |\n| Back | | | | | | |\n\nOnce a user is logged-in, he/she has the opportunity to save custom queries. A corresponding name and the actual query as well as a result format have to be provided.\n\nThe user may as well enter an email address in order to receive a notification email from the system and a comment which describes the query.\n\nSelecting a schedule string lets the query run automatically if the checkbox \"Enabled\" is selected.\n\nIf the user likes to share and lets other users see the query he/she may select the \"Public\" checkbox.\n\nThe same page will be displayed if the user decides to modify one of his/her queries. A list of all results on the right side lets the user decide which result he/she likes to display.", - "page_start": 55, - "page_end": 55, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "#### **3.2.1 Entering the Datasets-View**\n\nThe user has the following possibilities to enter the datasets view:\n\n- Browsing directly to http://europeandataportal.eu/data\n- Opening the \"Data\" item in the main menu, then clicking on \"Datasets\" in the submenu\n- Clicking on \"Search\" in the \"Search Datasets\" area, with or without a search keyword entered\n\n| EUROPEAN | | | | Newsletter FAQ Search Contact Cookies Legal notice Login | | English (en) | A |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| DATA PORTAL | | | | | Search site content ... | | Q |\n| European Data Portal | | | | | | | |\n| 合 What we do - | | Data- | | Providing Data . | Using Data- | Resources . | |\n| | Datasets | Catalogues | Metadata Quality | Licensing Assistant | SPARQL Manager | | |\n| Enter keywords .. 
| | | Search Q | | | | |\n| | SPARQL Search | | | | | | |\n\n#### **3.2.2 How to filter datasets by using \"Faceted Search\"**\n\nThe user can find suitable datasets by performing a \"Faceted Search\". This means the user systematically adds properties, which the desired dataset should fulfill, e.g. a dataset should be part of a specific catalogue or category. The following properties are available:\n\n- Countries,\n- Catalogues,\n- Categories,\n- Tags,\n- Formats,\n- Licences.\n\nThose facets are presented on the left side of the main dataset page. The available options for each facet always reflect the availability of it in the current set of results. The numbers in brackets indicate how many datasets in total have that property e.g. there are 117,610 datasets with a distribution in CSV format.\n\n| Formats | |\n| --- | --- |\n| CSV | 117610 |\n| JSON | 25451 |\n| html | 14713 |\n| XML | 12744 |\n| XLS | 11687 |\n| XLSX | 6019 |\n| GeoJSON | 5996 |", - "page_start": 26, - "page_end": 26, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "| 4.14 Defining an Enumerated Class 44 |\n| --- |\n| 4.15 Adding Spiciness as a Property 45 |\n| 4.16 Cardinality Restrictions 46 |\n| Chapter 5 Datatype Properties 48 |\n| 5.1 Defining a Data Property 48 |\n| 5.2 Customizing the Protégé User Interface 50 |\n| Chapter 6 Adding Order to an Enumerated Class 58 |\n| Chapter 7 Names: IRI's, Labels, and Namespaces 60 |\n| Chapter 8 A Larger Ontology with some Individuals 62 |\n| 8.1 Get Familiar with the Larger Ontology 63 |\n| Chapter 9 Queries: Description Logic and SPARQL 66 |\n| 9.1 Description Logic Queries 66 |\n| 9.2 SPARQL Queries 67 |\n| 9.21 Some SPARQL Pizza Queries 67 |\n| 9.22 SPARQL and IRI Names 70 |\n| Chapter 10 SWRL and SQWRL 72 |\n| Chapter 11 SHACL 76 |\n| 11.1 OWA and Monotonic Reasoning 76 |\n| 11.2 The Real World is Messy 76 |\n| 11.3 Basic SHACL Concepts 77 |\n| 11.4 The Protégé SHACL Plug-In 77 |\n| Chapter 12 Web Protégé 
83 |\n| Chapter 13 Conclusion: Some Personal Thoughts and Opinions 88 |\n| Chapter 14 Bibliography 89 |\n| 14.1 W3C Documents 89 |\n| 14.2 Web Sites, Tools, And Presentations 89 |\n| 14.3 Papers 89 |\n| 14.4 Books 90 |\n| 14.5 Vendors 90 |", - "page_start": 3, - "page_end": 3, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "| | JPEAN | | Newsletter FAQ Search Contact Cookies Legal notice Login | | English (en) | > |\n| --- | --- | --- | --- | --- | --- | --- |\n| DATA PORTAL | | | | Search site content ... | | ರ |\n| European Data Portal > Home | | | | | | |\n| 合 | What we do ▼ | Data - | Providing Data - | Using Data - | Resources - | |\n| SPARQL assistant | | | | | | i |\n| Search for keyword in | | - field | - Category / theme | - Distribution format | | |\n| | Only count number of results | | | | | |\n| Prefixes | | | | | | 0 |\n| 1 | | SELECT (count(*) AS ?count) WHERE { { ?s a dcat:Dataset } } LIMIT 100 | | | | |\n| Format | RDF/XML | | > | Q Execute the SPARQL query. | | |\n| Limit | 100 | | > | | | |\n\nThe SPARQL assistant extends the functionality of the simple SPARQL search described in the previous section. More complex queries for datasets could be built by clicking several options in the GUI.\n\n- 1. Category/Theme\nSelect one or more Categories/Themes defined in the DCAT-AP standard to filter the results. Datasets will only be listed in the result if they belong to all selected categories/themes.\n\n- 2. Distribution format\nDatasets can contain distributions in different formats. You can select one or more formats to exclude datasets from the result which distributions do not contain the desired formats. If your format selection by the DCAT-AP standard is too strict you may enter a custom format string at the bottom of the list, which will result in a simple text comparison in the format label field.\n\n- 3. Search for keyword\nEnter a search term which must be found in the dataset. 
You can limit this to the title or the description. By enabling both options the term has to occur in the title and the description.\n\n- 4. Count number of results\nBy clicking the \"only count number of results\" option, only the number of results that matches the SPARQL query is returned.", - "page_start": 54, - "page_end": 54, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "| Class hierar 团田目回区 | | DL Query Snap SPARQL Query | | | |\n| --- | --- | --- | --- | --- | --- |\n| Asserted | | DL query: | | | ■■□网 |\n| owl:Thing | | Query (class expression) | | | |\n| Person | Customer | Customer and purchasedPizza some ( hasTopping some (hasSpiciness value Hot)) | | | |\n| Pizza | Employee | | | | |\n| PizzaBase | | Execute Add to ontology | | | |\n| PizzaTopping | | | | | |\n| Spiciness | | | | | |\n| | | Query results | | | |\n| | | Superclasses (3 of 3) | | Query for | |\n| | | Customer | | Direct superclasses | |\n| | | Person | ? | Superclasses | |\n| | | owl:Thing | | Equivalent classes | |\n| | | Subclasses (1 of 1) | | Direct subclasses | |\n| | | owl:Nothing | 2 | V Subclasses | |\n| | | | | Instances | |\n| | | Instances (4 of 4) | | | |\n| | | Customer1 | | | |\n| | | Customer4 | | Result filters | |\n| | | Customer8 | | Name contains | |\n| | | Customer9 | | | |\n| | | | Reasoner active | V Show Inferences | 를 |\n\nFigure 9.1 The DL Query Tab\n\n#### 9.2 SPARQL Queries\n\nSPARQL is a powerful language, and one could write a whole book about it. In fact, there are books written about it. The best one I have seen is the O'Reilly book Learning SPARQL by Bob DuCharme. This is an excellent book that not only goes into SPARQL but into topics such as RDF/RDFS and how triples are used to represent all information in OWL. I will only touch on those issues here, there is much more to say about them and DuCharme's book is a great place to learn more. If some of the following is a bit hard to understand don't be discouraged. 
This is just an attempt to give a very high level introduction to something that requires significant study to really understand.\n\nEssentially SPARQL is to the Semantic Web and Knowledge Graphs as SQL is to relational databases. Just as SQL can do more than just query, it can also assert new information into a database, so SPARQL can as well. The current SPARQL plugins for Protégé are somewhat limited and don't support the statements such as INSERT for entering new data so we will just cover the basics of using SPARQL as a query language but keep in mind there is a lot more to it than what we briefly cover here.\n\n#### 9.21 Some SPARQL Pizza Queries\n\nTo start with go to the SPARQL Query tab. If it isn't already there you can as always add it using Window>Tabs>SPARQL Query. This tab consists of two views, the top which holds the query and the bottom which holds the results. There should be some text already there. It may look confusing, but we'll explain it. Just to start with hit the Execute button at the bottom of the tab. You should see a bunch of classes and class expressions returned.", - "page_start": 67, - "page_end": 67, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "| Acronym | Description |\n| --- | --- |\n| SPARQL | Query language for linked data (RDF) |\n| SSL | Secure Socket Layer |\n| URL | Uniform Resource Locator |\n| XML | Extensible Markup Language |\n\n*Table 1-2: Abbreviations and Acronyms*", - "page_start": 4, - "page_end": 4, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "|\n| |\n\nThe third tab displays all queries set to \"public\". Upon creating or modifying a query the user may set the attribute \"public\", which will make the query accessible to everyone.\n\nOnce the user is logged-in, another list with all private queries including the owned public queries will be displayed. 
Besides \"Query name\" and \"Query comment\", the attribute \"Enabled\" visualizes if the query is currently running on a recurring time interval.\n\nAll attributes may be changed upon selecting \"Details\" if the logged-in user is the owner of selected query.", - "page_start": 56, - "page_end": 56, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "When you access the application link http://myapp.apps.domain.example.com, as shown in Figure 7-1, notice the index page Page view count reads \"No database configured\". To fix this issue, add a MongoDB service.\n\n*Figure 7-1 http://myapp.apps.domain.example.com without MongoDB*\n\n- 6. Deploy the mongodb-36-rhel7 application from the Red Hat registry, as shown in Example 7-6.\n*Example 7-6 Deploy mongodb application*\n\n```\n# oc new-app \\\n -e MONGODB_USER=admin \\\n -e MONGODB_PASSWORD=secret \\\n -e MONGODB_DATABASE=mongo_db\\\n -e MONGODB_ADMIN_PASSWORD=super-secret \\\n registry.access.redhat.com/rhscl/mongodb-36-rhel7\n--> Found Docker image 3414ee2 (4 weeks old) from registry.access.redhat.com for \n\"registry.access.redhat.com/rhscl/mongodb-36-rhel7\"\n MongoDB 3.6\n```\n-----------\n\n MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. 
This container image contains programs to run mongod server.\n\nTags: database, mongodb, rh-mongodb36\n\n * An image stream tag will be created as \"mongodb-36-rhel7:latest\" that will track this image\n\n* This image will be deployed in deployment config \"mongodb-36-rhel7\"\n\n- * Port 27017/tcp will be load balanced by service \"mongodb-36-rhel7\"\n- * Other containers can access this service through the hostname \"mongodb-36-rhel7\"\n\n * This image declares volumes and will default to use non-persistent, host-local storage.\n\n You can add persistent volumes later by running 'volume dc/mongodb-36-rhel7 --add ...'\n\n--> Creating resources ... imagestream.image.openshift.io \"mongodb-36-rhel7\" created", - "page_start": 181, - "page_end": 181, - "source_file": "sg248459.pdf" - }, - { - "text": "- SPARQL Saving/Modifying a Query\n- SPARQL Queries\n\n*Table 1-3: Main functions of the Portal Version 3.0*", - "page_start": 6, - "page_end": 6, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - } - ] - }, - { - "references": { - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf", - "query": "What is the function of the Graphical Data Visualisation Tool module ?", - "target_page": 6, - "target_passage": "How to visualize graphical data from a dataset resource ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# **2 Approach**\n\nThe approach used for this User Manual was based on the identification of the main user functions of the Portal and the description of each function from the user's perspective in terms of \"*How to*…\".\n\nEach main function documentation consists of a screen snapshot, the steps required to execute the function and optionally a screenshot with the results.\n\n# **3 Main User Functions of the Portal**\n\nThis section describes all of the main user functions supported by the Portal Version 3.0.\n\n| The table 1-3 below lists the described functions by module. 
|\n| --- |\n\n| | Module Name | Function |\n| --- | --- | --- |\n| 1 | Portal HomePage | - How to browse through the Editorial Content Data) - How to view / search for \"Latest News\" - How to view / search for \"Open Data Events\" |\n| | | (how to access Resources on Open Data: eLearning |\n| | | modules, Training Companion, Reports about Open |\n| | | - How to subscribe to the EDP Newsletter |\n| | | - How to view \"Tweets\" on the EDP |\n| | | - How to switch to another User Language |\n| | | - How to search for EDP Site Content |\n| | | - How to search for Datasets by Data Category |\n| | | - How to search for Datasets by Keyword |\n| 2 | Datasets (Data Platform) | Entering the Datasets-View |\n| | | How to filter datasets by using \"Faceted Search\" |\n| | | How to store personal queries |\n| | | How to filter datasets by geographical area |\n| | | How to download dataset distributions |\n| | | How to view licensing information |\n| | | How to switch to another user language |\n| | | How to browse by data catalogues |\n| 3 | Visualization of Geo-Spatial | How to visualize geo-spatial data from a dataset resource |\n| | Data (map.apps) | |\n| 4 | Graphical Data Visualisation | How to visualize graphical data from a dataset resource |\n| | Tool | |\n| 5 | Help Desk | How to contact The Portal's Help Desk |\n| 6 | Metadata Quality Assurance | Monitoring tool for the metadata quality: |\n| | (MQA) | ‐ The Global Dashboard View |\n| | | ‐ The Catalogue details view |\n| 7 | SPARQL Manager | How to run SPARQL Queries using: |\n| | | - SPARQL Search |", - "page_start": 5, - "page_end": 5, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "- **Data modeling** build new data models, or design models based on existing data models.\n- **Data visualization** map queries and visualize the access patterns (facets) of the application without writing code. Every facet corresponds to a different access pattern in DynamoDB. 
You can manually add data to your data model.\n- **Operation builder** use the *operation builder* to develop and test queries, and query live datasets. You can also build and perform data plane operations, including creating projection and condition expressions, and generating sample code in multiple languages.\n\nYou can also run a local instance of DynamoDB on your workstation. Combined with NoSQL workbench, this can provide a fast local setup for experimentation and learning.\n\nRelated resources:\n\n- NoSQL Workbench & Building data models with NoSQL Workbench model and query data with a desktop tool\n- Setting up DynamodDB local (downloadable version)", - "page_start": 83, - "page_end": 83, - "source_file": "serverless-core.pdf" - }, - { - "text": "IBM STAT can be downloaded from this IBM Support web page.\n\nYou can download the Storage Tier Advisor Tool and install it on your Windows-based computer. The tool is packaged as an ISO file that must be extracted to a temporary location.\n\nThe tool installer is at temporary_location\\IMAGES\\STAT\\Disk1\\InstData\\NoVM\\. By default, the Storage Tier Advisor Tool is installed in C:\\Program Files\\IBM\\STAT\\*.*\n\nOn IBM Storwize V7000, the heat data files are found in the /dumps/easytier directory on the configuration node and are named dpa_heat.node_panel_name.time_stamp.data. 
Any heat data file is erased when it exists for longer than 7 days.\n\nHeat files must be offloaded and Storage Tier Advisor Tool started from a Windows command prompt console with the file specified as a parameter, as shown in Example 10-6.\n\n*Example 10-6 Running STAT in Windows command prompt*\n\n```\nC:\\Program Files (x86)\\IBM\\STAT>stat dpa_heat.7822DFF-1.181028.073824.data\n```\nThe Storage Tier Advisor Tool creates a set of .html and .csv files that can be used for Easy Tier analysis.\n\nTo download a heat data file, open **Settings** → **Support** → **Support Package** → **Download Support Package** → **Download Existing Package**, as shown in Figure 10-8.\n\n*Figure 10-8 Download Easy Tier heat file: Download Support Package*", - "page_start": 435, - "page_end": 435, - "source_file": "sg247938.pdf" - }, - { - "text": "# **Installation**\n\nContent Manager OnDemand provides the ARSPDF32.API file to enable PDF viewing from the client.\n\nIf you install the client after you install Adobe Acrobat, the installation program copies the application programming interface (API) file to the Acrobat plug-in directory.\n\nIf you install the client before you install Adobe Acrobat, you must copy the API file to the Acrobat plug-in directory manually.\n\nIf you upgrade to a new version of Acrobat, you must copy the API file to the new Acrobat plug-in directory.\n\nThe default location of the ARSPDF32.API file is:\n\nC:\\Program Files (x86)\\IBM\\OnDemand Clients\\V9.5\\PDF\n\nThe default Acrobat plug-in directory is C:\\Program Files (x86)\\Adobe\\Acrobat *x.y*\\Acrobat\\plug_ins. The variables x.y represent the version of Acrobat, for example, C:\\Program Files (x86)\\Adobe\\Acrobat 10.0\\Acrobat\\plug_ins.\n\n# **Graphical indexer example**\n\nBy using the graphical indexer, you can define triggers, fields, and indexes for PDF reports within the application component of Content Manager OnDemand in a similar way to defining them for line data. 
This section serves as an introduction to the PDF graphical indexer by stepping through an example of indexing a PDF document.\n\nThe example describes how to use the graphical indexer from the report wizard to create indexing information for an input file. The indexing information consists of a trigger that uniquely identifies the beginning of a document in the input file and the fields and indexes for each document. We elaborate on this example by clarifying several of the instructions, and throughout each step, we add important hints, tips, and explanations.\n\nThe process consists of these steps:\n\n- 1. Start the Administrator Client and log on to a server.\n- 2. Start the report wizard. Click the report wizard icon on the toolbar.\n- 3. In the Sample Data window, select **PDF** from the drop-down list of data types, and then click **Select Sample Data**.\n- 4. In the Open window, enter the name or full path name of your file in the space that is provided or use the **Browse** option to locate your PDF file.\n- 5. Click **Open**. The graphical indexer opens the input file in the report window.\n\nIf the PDF data fails to display, or an error message, such as the message that is shown in Figure 7-2, is displayed, you must follow the steps in \"Installation\" on page 169 to verify that the API file is in the correct Acrobat plug-in directory.\n\n| ARSADM32 | |\n| --- | --- |\n| × | Adobe Acrobat (AcroExch. App rc =- 2146959355) could not be loaded. |\n| | OK |\n\nFigure 7-2 Error message if PDF does not display", - "page_start": 192, - "page_end": 192, - "source_file": "sg246915.pdf" - }, - { - "text": "Depending on the data that you are working with, consider these options:\n\n- - For Line Data:\n\t- The line data applet supports annotations. It can work with large object (LOB) reports if the large object functionality is employed at load time.\n\t- The Ajax viewer and direct rendering capabilities of Content Navigator work only on shorter reports. 
Additionally, the viewing of annotations and large object documents is not supported.\n- - For AFP data:\n\t- The AFP plug-in is the best choice, because it is almost identical to the client. However, it does not support annotations.\n\nThe only viewers that use this functionality are the line data applet, the AFP plug-in viewer, and the Content Manager OnDemand Windows client.\n\n- AFP to PDF is a choice that does not require a plug-in rollout at the users' computers if the Acrobat plug-in is installed on their workstations. Font mappings must be configured at a central location. The additional workload on a rendering system and additional license costs must be considered. Large reports might not be able to be rendered or viewed.\n**Note:** The AFP viewer plug-in, which is available with ODWEK and Content Manager OnDemand, is a version of the AFP viewer plug-in from the InfoPrint Solutions Company. Although the standard InfoPrint viewer can be used for viewing AFP, the ODWEK version uses direct communication with the Content Manager OnDemand server, enabling segmented document transfer for LOB documents.\n\n# **Annotations**\n\nOnly the native ODWEK viewers and the Windows client support annotations. These viewers and Windows clients support annotations in the following ways:\n\n- - Line data applet: Supports text. Starting with version 9, the viewer can work with graphical annotations, also.\n- -Windows Client: Supports maximum capabilities for all data types.\n- - Other viewers, for example, the AFP plug-in viewer: Do not support and are not aware of annotations.\n\nWeb clients, such as Content Navigator or the ODWEK Java API, can work with annotations and provide access to them through the hit list. Graphical annotations cannot be accessed that way because they are not exposed through the Java API.\n\n# **Large object support**\n\nLarge object (LOB) support is the methodology for working with large reports. 
For more information about how LOB affects your reports, see \"Large object\" on page 52.\n\nFrom a viewer's perspective, if a large document is transferred, it generates high network traffic, resource consumption, and long wait times for users. If the viewer supports LOB documents, the viewer communicates with the server to transfer only the chunk of data that the user is looking at (for example, a 200 page chunk out of a 10,000 page report). If the user scrolls to a different chunk of pages, the viewer downloads only that relevant portion of the document that the user scrolled to.", - "page_start": 212, - "page_end": 212, - "source_file": "sg246915.pdf" - }, - { - "text": "On any of these views, you can select any point by using your cursor to know the exact value and when it occurred. When you place your cursor over the timeline, it becomes a dotted line with the various values gathered, as shown in Figure A-7.\n\n*Figure A-7 Viewing performance with details*\n\nFor each of the resources, various metrics are available and you can select which to be displayed. For example, as shown in Figure A-8, from the four available metrics for the MDisks view (Read, Write, Read latency, and Write latency) only Read and Write IOPS are selected.\n\n*Figure A-8 Displaying performance counters*\n\n# **Performance data collection and IBM Spectrum Control**\n\nAlthough you can obtain performance statistics in standard .xml files, the use of .xml files is a less practical and more complicated method to analyze the IBM Spectrum Virtualize performance statistics. IBM Spectrum Control is the supported IBM tool to collect and analyze Storwize V7000 performance statistics.", - "page_start": 773, - "page_end": 773, - "source_file": "sg247938.pdf" - }, - { - "text": "# **Report wizard settings**\n\nAs you move through the report wizard, standard options are selected for you. Use the defaults unless you have a specific reason not to use them. 
Depending on how you use the Report Wizard, you might not see all of the windows that we will describe.\n\n# **Introduction window**\n\nThe Report Wizard introduction window provides a brief explanation of the report wizard. Your first step is to select the indexer that you want to use to index the data. For all indexers, you specify the type of data that you want to store. For indexers other than Generic and XML, you specify the location of the sample data.\n\nChoose the indexer and type of data and then set up the sample data, as shown in Figure 3-11.\n\n| eport Wizard | x | | |\n| --- | --- | --- | --- |\n| This wizard steps you through the process of defining a report to | OnDemand. | | |\n| Select the indexer that will be used to index the data. | | | |\n| Sample data will be used to provide most of the information. The | remaining information will come in the form of questions. Select | the name of a sample data file and begin defining field and index | |\n| information. | | | |\n| Select the data type that will be used when viewing the data once | it is stored in OnDemand. | | |\n| Line | Saland Serapiz Data ... | | |\n| Cancel | Help | < Fack | Next > |\n\nFigure 3-11 Setting up the report wizard\n\nOn z/OS or Multiplatform implementations, if AFP is selected as the data type and the report data is line data, it is converted to AFP before it is loaded into Content Manager OnDemand. The report wizard cannot be used to define a report to Content Manager OnDemand if the report is already AFP data.\n\n# **Report window**\n\nThe Report window (Figure 3-12 on page 60) displays the sample data and provides easy-to-use tools to help you define indexing information, database fields, and folder fields. Press F1 to display the online help for options and commands that are available on the Report window. 
Use the online help to learn how to define triggers, fields, indexes, database fields, and folder fields.\n\n**Important:** When you finish defining the indexing, database, and folder information, save your changes.", - "page_start": 82, - "page_end": 82, - "source_file": "sg246915.pdf" - }, - { - "text": "Consider the following information about Table 7-1 on page 164:\n\n- - The Generic indexer requires the user to manually create an index file in the generic index format before the user starts the load process. The Generic indexer allows the capture of documents, index values, and resources that are identified to it. These documents, index values, and resources are then loaded into the Content Manager OnDemand archive and stored in the same manner as though they were loaded through any of the other indexers. An existing resource file can be loaded with a generic index file.\nFor more information about the generic index format, see IBM Content Manager OnDemand - Indexing Reference, SC19-3354.\n\n- - The ACIF, PDF, XML, and OS/400 indexers all generate intermediate files. These files are then used to load the indexes and data into the Content Management OnDemand system.\n- - The OS/390 indexer creates the index data while it loads the indexes and data into the Content Management OnDemand system.\n- - *Conversion* refers to a conversion by the indexer. Other products integrate with Content Manager OnDemand that also convert data.\n- - Because of the architecture of PDF documents, large object support for PDF documents is not possible.\n- - Starting with V9.5, the PDF Indexer runs in the PASE environment on IBM i. PASE is a prerequisite on IBM i for V9.5.\n- -Starting with V9.5, the PDF Indexer is no longer supported on z/OS.\n\n# **7.2 Getting started with PDF indexing**\n\nPDF is a standard that is specified by Adobe Systems, Incorporated, for the electronic distribution of documents. PDF files are compact. 
They can be distributed globally through email, the web, intranets, or CD-ROM, and viewed with Adobe Reader.\n\nPDF is a data type or file format that is platform (hardware, operating system)-independent. A PDF file contains a complete PDF document that is composed of text, graphics, and the resources that are referenced by that document.\n\nTwo PDF file layouts are possible:\n\n- -Non-Linear (not \"optimized\")\nThis file layout is optimized for space savings. Storing a PDF file by using a Non-Linear layout consumes less disk space than storing the same PDF file linearly. It is slower to access or display this type of layout because portions of the data that is required to assemble pages of the document are scattered throughout the PDF file, so the whole PDF file must be downloaded and accessed before the file can be displayed.\n\n- -Linear (\"optimized\" or \"web optimized\")\nIn this file format, the PDF file is created in a linear (in page order) fashion. This file format allows the PDF viewer to start displaying the PDF document pages when they are downloading without waiting for the whole PDF file to be downloaded.", - "page_start": 188, - "page_end": 188, - "source_file": "sg246915.pdf" - }, - { - "text": "**Note:** Comprestimator can run for a long period (a few hours) when it is scanning a relatively empty device. The utility randomly selects and reads 256 KB samples from the device. If the sample is empty (that is, full of null values), it is skipped. A minimum number of samples with data is required to provide an accurate estimation. When a device is mostly empty, many random samples are empty. As a result, the utility runs for a longer time as it tries to gather enough non-empty samples that are required for an accurate estimate. 
The scan is stopped if the number of empty samples is over 95%.\n\n# **10.6.2 Evaluating compression and deduplication**\n\nTo help with the profiling and analysis of user workloads that must be migrated to the new system, IBM provides a highly accurate data reduction estimation tool that supports both deduplication and compression. The tool operates by scanning target workloads on any legacy array (from IBM or third party) and then merging all scan results to provide an integrated system level data reduction estimate.\n\nThe Data Reduction Estimator Tool (DRET) utility uses advanced mathematical and statistical algorithms to perform an analysis with low memory footprint. The utility runs on a host that can access the devices to be analyzed. It performs only read operations so it has no effect on the data stored on the device.\n\nThe following sections provide information about installing DRET on a host and using it to analyze devices on it. Depending on the environment configuration, in many cases DRET is used on more than one host to analyze more data types.\n\nWhen DRET is used to analyze a block device that is used by a file system, all underlying data in the device is analyzed, regardless of whether this data belongs to files that were deleted from the file system. For example, you can fill a 100 GB file system and make it 100% used, and then, delete all the files in the file system to make it 0% used. When scanning the block device that is used for storing the file system in this example, the DRET accesses the data that belongs to the files that are deleted.\n\n**Important:** The preferred method of using DRET is to analyze volumes that contain as much active data as possible rather than volumes that are mostly empty of data. 
This increases the accuracy level and reduces the risk of analyzing old data that is deleted, but might still have traces on the device.\n\nFor more information and the latest version of this utility, see this IBM Support web page.\n\n# **10.7 Data deduplication and compression on external storage**\n\nStarting from IBM Spectrum Virtualize V8.1.x, it supports over-provisioning on selected back-end controllers. This means that if back-end storage performs data deduplication or data compression on LUs provisioned from it, they still can be used as external MDisks on IBM Storwize V7000.\n\nThin-provisioned MDisks from controllers that are supported by this feature can be used as managed mode MDisks in IBM Storwize V7000 and added to storage pools (including DRPs).\n\nImplementation steps for thin-provisioned MDisks are same as for fully allocated storage controllers. Extra caution is used when planning capacity for such configurations.", - "page_start": 452, - "page_end": 452, - "source_file": "sg247938.pdf" - }, - { - "text": "The generic applet viewer (\"applet viewer\") is a Java applet, which can handle various types of documents, such as PDF and Microsoft Office documents (which it renders), images, line data, and AFP documents. The generic applet viewer might be an option if you work with images that are stored in Content Manager OnDemand.\n\nIf you want to avoid the use of Java applets and your content is viewable by browsers (for example, certain image types or textual data), try the browser pass-through viewer, which lets the browser handle the data natively. If you work with AFP and must use the AFP browser plug-in, register the Content Navigator plug-in, AFPViewerPlugin.jar, and configure the viewer map that is assigned to your Content Navigator desktop to use the AFP viewer for the application/afp MIME type. The AFPViewerPlugin.jar file ships with Content Navigator. 
You must choose the web browser pass-through viewer.\n\nThe Ajax viewer is a Web 2.0 JavaScript application that provides basic document functions, such as page-wise browsing, rotation, or zoom. It is not a Java applet.\n\nThe generic applet viewer, the built-in PDF and HTML conversion, and the Ajax viewer can all work with various data types:\n\n- -Images (such as TIFF, JPEG, and DICOM)\n- -Office documents\n- -PDF\n- -Most line data documents\n- -Certain AFP data\n\nHowever, they all use a rendering engine to display Office, PDF, and AFP data into an image. This rendering might work well with certain Office and PDF files, but it fails on most non-basic AFP data streams.\n\nFor more information, see 8.1.1, \"Viewer options\" on page 186.\n\n**Note:** Content Navigator is a Web 2.0 client and relies on HTML 5 and JavaScript for its core client functionality and especially for the Ajax viewers. Not all browsers are suitable for running Content Navigator fast and efficiently, especially for Microsoft Internet Explorer browsers before version 9. Test Content Navigator with your user browser thoroughly before you consider a deployment.\n\n# **Extending Content Navigator**\n\nContent Navigator is not designed as a client that is dedicated solely to Content Manager OnDemand, so a more complex configuration is necessary than with simpler client options. Content Navigator provides many configuration and customization options through its API and plug-in methodology. For more information about the customization options of Content Navigator, see Customizing and Extending IBM Content Navigator, SG24-8055.\n\n# **8.2.2 Content Manager OnDemand Windows client**\n\nThe Content Manager OnDemand Windows client is a full function, feature-rich client that meets the needs of line-of-business application areas and customer service representatives. The Windows client displays content in its native format and is considered a corporate internal access client. 
Many technical aspects of the Windows client are described in 8.1.1, \"Viewer options\" on page 186 and 8.1.2, \"Client infrastructure options\" on page 190.", - "page_start": 220, - "page_end": 220, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf", - "query": "How to view “Tweets” on the EDP ?", - "target_page": 20, - "target_passage": "The Home Page displays the latest tweets on the European Data Portal in the “Tweets” panel on the right hand side. ‐ ‐ Click on any of the tweets to display the complete tweet on twitter. Scroll vertically to see previous tweets. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "#### **3.1.5 How to view \"Tweets\" on the EDP**\n\nThe Home Page displays the latest tweets on the European Data Portal in the \"Tweets\" panel on the right hand side.\n\n- ‐ **Click on any of the tweets to display the complete tweet on twitter.**\n- ‐ **Scroll vertically to see previous tweets.**", - "page_start": 19, - "page_end": 19, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "#### **3. Methods** *3.1. Data Source*\n\n**3. Methods**\n\n#### *3.1. Data Source* As Twitter has been recognized as a popular discussion forum [75] and a social activity platform [76] for climate issues, we followed the literature [5,8,18] and used tweets to investigate\n\nAs Twitter has been recognized as a popular discussion forum [75] and a social activity platform [76] for climate issues, we followed the literature [5,8,18] and used tweets to investigate distinct perceptions of climate issues and evolution on social media. Although Twitter's ecosystem has been changing in terms of the number of active users, user demographics, and tweeting conventions in the past years [77,78], the problem is unavoidable for all the information ecosystems on the Internet. 
Although Twitter's ecosystem has been changing in terms of the number of active users, user demographics, and tweeting conventions in the past years [77,78], this problem is unavoidable for all information ecosystems on the Internet. As Twitter is one of the most popular social websites, we defined our study as characterizing the perception of climate issues among social media users rather than all netizens or the whole population.\n\n*Int. J. Environ. Res. Public Health* **2020**, *xx*, 5 of 22\n\n#### *3.2. Data*\n\nIn this research, we were interested in tweets containing either #climatechange or #globalwarming, as these two hashtags exactly correspond to climate change and global warming, respectively, the two competing definitions of climate issues. We did not follow [79] in including #AGW (anthropogenic global warming) as a query hashtag in our research, because we think it refers to global warming in a defined category and so cannot be regarded in parallel with the two considered hashtags. We limited the scope of the search to English-language tweets generated between 1 January 2009 and 31 December 2018. We only collected tweets containing either of the two hashtags in the body of the tweet rather than in retweeted or quoted text, as we think that retweeted or quoted text cannot directly represent the tweeter's usage pattern of the two terminologies.\n\nTo collect these tweets, we used a Python-based crawler that sent requests to the Twitter server, taking hashtags, language, start date, and end date as inputs. Once the first request was completed, the server responded with a file in JSON format containing the first 20 qualified tweets in time-descending order. By parsing the JSON file, we obtained a string for the crawler to build the next request and retrieve the next 20 tweets. A loop kept the crawler sending requests, and the crawler terminated automatically when all the publicly available qualified tweets had been collected. Our crawler respected Twitter's robots.txt, and we did not collect, analyze or display any user information in our study.\n\nGiven our goal of exploring the difference between the two discourses, the 615,816 tweets containing both hashtags simultaneously were excluded to differentiate between the two datasets, following [67,80]. A total of 6,662,478 tweets were retained, of which 5,774,747 contained #climatechange and 887,731 contained #globalwarming. The number of qualified tweets containing #climatechange and #globalwarming in each year is displayed in Figure 1a.\n\n**Figure 1.** The number of tweets containing #climatechange or #globalwarming, and their ratio from 2009 to 2018 (**a**). 
The number of hashtags contained in the \"climate change\" or \"global warming\" datasets, and their ratio from 2009 to 2018 (**b**).", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed10.pdf" - }, - { - "text": "*Figure 2: EDP Home Page (lower part)*", - "page_start": 8, - "page_end": 8, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "# **2 Approach**\n\nThe approach used for this User Manual was based on the identification of the main user functions of the Portal and the description of each function from the user's perspective in terms of \"*How to*…\".\n\nEach main function documentation consists of a screen snapshot, the steps required to execute the function and optionally a screenshot with the results.\n\n# **3 Main User Functions of the Portal**\n\nThis section describes all of the main user functions supported by the Portal Version 3.0.\n\n| The table 1-3 below lists the described functions by module. |\n| --- |\n\n| | Module Name | Function |\n| --- | --- | --- |\n| 1 | Portal HomePage | - How to browse through the Editorial Content Data) - How to view / search for \"Latest News\" - How to view / search for \"Open Data Events\" |\n| | | (how to access Resources on Open Data: eLearning |\n| | | modules, Training Companion, Reports about Open |\n| | | - How to subscribe to the EDP Newsletter |\n| | | - How to view \"Tweets\" on the EDP |\n| | | - How to switch to another User Language |\n| | | - How to search for EDP Site Content |\n| | | - How to search for Datasets by Data Category |\n| | | - How to search for Datasets by Keyword |\n| 2 | Datasets (Data Platform) | Entering the Datasets-View |\n| | | How to filter datasets by using \"Faceted Search\" |\n| | | How to store personal queries |\n| | | How to filter datasets by geographical area |\n| | | How to download dataset distributions |\n| | | How to view licensing information |\n| | | How to switch to another user language |\n| | | How to browse by 
data catalogues |\n| 3 | Visualization of Geo-Spatial | How to visualize geo-spatial data from a dataset resource |\n| | Data (map.apps) | |\n| 4 | Graphical Data Visualisation | How to visualize graphical data from a dataset resource |\n| | Tool | |\n| 5 | Help Desk | How to contact The Portal's Help Desk |\n| 6 | Metadata Quality Assurance | Monitoring tool for the metadata quality: |\n| | (MQA) | ‐ The Global Dashboard View |\n| | | ‐ The Catalogue details view |\n| 7 | SPARQL Manager | How to run SPARQL Queries using: |\n| | | - SPARQL Search |", - "page_start": 5, - "page_end": 5, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "All the hashtags in the tweets were automatically extracted with the Regular Expression Library in Python. Hashtags were transformed to lowercase letters, and clear synonyms were stemmed (e.g., #trump, #DonaldTrump, #donaldtrump). As all the tweets in the \"climate change\" dataset contained the #climatechange hashtag and all the tweets in the \"global warming\" dataset contained the #globalwarming hashtag, we did not document these two hashtags when processing data. The number of hashtags contained in the two discourses in each year is displayed in Figure 1b. Hashtags whose frequency was lower than ten times are excluded in the network analysis. As hashtags are intended to be a topic anchor [52], extremely low frequency means that the hashtag is not recognized socially, and excluding them helps researchers focus on meaningful rather than occasional associations.\n\n#### *3.3. Measurement*\n\n#### 3.3.1. Hashtag Co-Occurrence Network\n\nThe co-occurrence patterns of hashtags in tweets from two datasets were documented to build semantic networks for climate change and global warming. For instance, for \"#cimatechange redistributes #fish species at high latitudes. 
@_OScience @AarhusUni #Arctic\", a tweet in the climate change dataset, hashtags #fish and #arctic were documented as co-occurring and their associations plus one in the semantic network of climate change. In the semantic network, nodes represent hashtags and the weight of edge refers to the frequency at which two hashtags co-occurred.\n\nWe visualized the network using Gephi software [81]. Following the established literature [60,61,82], only the most prominent hashtags were included in the visualization to concentrate our analysis on the most important hashtags. In this research, the top 50 hashtags with the highest centrality in each network were selected for visualization. Modularity analysis was then analyzed to identify the clusters of hashtags in each semantic network, and hashtags belonging to the same cluster were drawn in the same color. The network spatialization was conducted with Gephi's built-in force-directed layout algorithm proposed by Fruchterman and Reingold [83], where the more associated the hashtags, the closer they are to each other in the spatial layout.\n\n#### 3.3.2. Temporal Analysis\n\nA temporal analysis was introduced to understand the evolution of the two climate discourses over a long period. We first examined how the two semantic networks evolved in the past years. All the nodes once ranked top 50 in any of the 10 years were gathered to form a union set for each dataset. Then, they were clustered according to the strength of their associations in the whole dataset and mapped with a force-directed layout algorithm in Gephi to produce a graph of nodes. With the dynamic network function supplied by Gephi, we then added the associations between the nodes ranked on the top 50 list in 2009 to the graph of nodes and obtained the relationship of the top 50 nodes for 2009. 
Similarly, we produced a total of 10 graphs from 2009 to 2018, where the positions of the nodes on the 10 maps are the same, but the strengths of their associations are different to represent the changes in the associations of key hashtags for each discourse.\n\nThe correlation between climate change and global warming discourses was measured every year to observe whether the two discourses converged or diverged over time. Considering computing power limitations, only key hashtags ranked in either of the top 50 lists for the two discourses in that year were included in the calculations. First, we measured to what extent the two discourses resemble each other in the order of importance for the hashtags in each year. For every year, the top 50 hashtags in each network were selected with a rank order according to their centrality. Then, Spearman's rank correlation coefficient was used to examine the correlation of the rank orders of the selected nodes in the two discourses [84], where a high Spearman correlation indicates that the hashtags in the two discourses were ranked similarly. Secondly, we measured to what extent the two discourses resembled each other in the associations between the key hashtags for each year. 
For every year, we obtained the union of the two top 50 nodes lists and used the name of the nodes in the union as the row name and", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed10.pdf" - }, - { - "text": "# **Portal Version 4.3 – User Manual**\n\n*V1.0*\n\n*October 2019*\n\n# **Table of Contents**\n\n| 1 | Introduction 4 |\n| --- | --- |\n| 1.1 | Purpose of the Document 4 |\n| 1.2 | Reference Documents 4 |\n| 1.3 | Terminology 4 |\n| 2 | Approach 6 |\n| 3 | Main User Functions of the Portal 6 |\n| 3.1 | Portal Home Page 8 |\n| 3.1.1 | How to browse through the Editorial Content of the Portal 10 |\n| 3.1.2 | How to view / search for \"Latest News\" 17 |\n| 3.1.3 | How to view / search for \"Open Data Events\" 18 |\n| 3.1.4 | How to subscribe to the EDP Newsletter 19 |\n| 3.1.5 | How to view \"Tweets\" on the EDP 20 |\n| 3.1.6 | How to switch to another User Language 21 |\n| 3.1.7 | How to search for EDP Site Content 22 |\n| 3.1.8 | How to Search for Datasets by Data Category 23 |\n| 3.1.9 | How to Search for Datasets by Keyword 25 |\n| 3.2 | Datasets (Data Platform) 26 |\n| 3.2.1 | Entering the Datasets-View 27 |\n| 3.2.2 | How to filter datasets by using \"Faceted Search\" 27 |\n| 3.2.3 | How to store personal queries 29 |\n| 3.2.4 | How to filter datasets by geographical area 31 |\n| 3.2.5 | How to download dataset distributions 33 |\n| 3.2.6 | How to view licensing information 34 |\n| 3.2.7 | How to switch to another user language 36 |\n| 3.2.8 | How to browse by data catalogues 37 |\n| 3.3 | Visualization of Geo-Spatial Data (map.apps) 38 |\n| 3.3.1 | How to visualize geo-spatial data from a dataset resource 38 |\n| 3.4 | Graphical Data Visualisation Tool 43 |\n| 3.4.1 | How to visualize graphical data from a dataset resource 43 |", - "page_start": 1, - "page_end": 1, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "- 58. Yang, L.; Sun, T.; Zhang, M.; Mei, Q. 
We know what @you #tag: Does the dual role affect hashtag adoption? In Proceedings of the 21st international conference on World Wide Web, Lyon, France, 16–20 April 2012; pp. 261–270.\n- 59. Weller, K.; Dröge, E.; Puschmann, C. Citation Analysis in Twitter: Approaches for Defining and Measuring Information Flows within Tweets during Scientific Conferences. In Proceedings of the Making Sense of Microposts 2011, Heraklion, Greece, 30 May 2011; pp. 1–12.\n- 60. Meraz, S. Hashtag wars and networked framing: The private/public networked protest repertoires of occupy on twitter. In *Between the Public and Private in Mobile Communication*; Routledge: Abingdon, UK, 2017; pp. 303–323.\n- 61. Meraz, S.; Papacharissi, Z. Networked gatekeeping and networked framing on #Egypt. *Int. J. Press.* **2013**, *18*, 138–166.\n- 62. Papacharissi, Z.; de Fatima Oliveira, M. Affective news and networked publics: The rhythms of news storytelling on #Egypt. *J. Commun.* **2012**, *62*, 266–282.\n- 63. Wang, X.; Wei, F.; Liu, X.; Zhou, M.; Zhang, M. Topic sentiment analysis in twitter: A graph-based hashtag sentiment classification approach. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, Scotland, UK, 24–28 October 2011; pp. 1031–1040.\n- 64. Laniado, D.; Mika, P. Making sense of twitter. In Proceedings of the International Semantic Web Conference 2010, Shanghai, China, 7–11 November 2010; pp. 470–485.\n- 65. González-Ibánez, R.; Muresan, S.; Wacholder, N. Identifying sarcasm in Twitter: A closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers—Volume 2, Portland, OR, USA, 19–24 June 2011; pp. 581–586.\n- 66. Conover, M.D.; Ratkiewicz, J.; Francisco, M.; Gonçalves, B.; Menczer, F.; Flammini, A. Political polarization on twitter. 
In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, Barcelona, Spain, 17–21 July 2011.\n- 67. Kitzie, V.; Ghosh, D. #Criming and #Alive: Network and content analysis of two sides of a story on twitter. In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, St. Louis, MO, USA, 6–10 October 2015; p. 41.\n- 68. Burgess, J.; Galloway, A.; Sauter, T. Hashtag as hybrid forum: The case of #agchatoz. In *Hashtag Publics. The Power and Politics of Discursive Networks*; Peter Lang: New York, NY, USA, 2015; pp. 61–76.\n- 69. Rushkoff, D. 17. Permanent revolution: Occupying democracy. In *The Playful Citizen*; Amsterdam University Press: Amsterdam, The Netherlands, 2013; p. 335.\n- 70. Grundberg, M.D.; Lindgren, S. Translocal frame extensions in a networked protest: Situating the #IdleNoMore hashtag. *IC Rev. Científica De Inf. Y Comun.* **2015**, *11*, 49–57.\n- 71. Bruns, A.; Burgess, J.E. #ausvotes: How Twitter covered the 2010 Australian federal election. *Commun. Politics Cult.* **2011**, *44*, 37–56.\n- 72. Pearce, W.; Holmberg, K.; Hellsten, I.; Nerlich, B. Climate change on Twitter: Topics, communities and conversations about the 2013 IPCC Working Group 1 report. *PLoS ONE* **2014**, *9*, e94785. [CrossRef]\n- 73. Zhao, W.X.; Jiang, J.; Weng, J.; He, J.; Lim, E.P.; Yan, H.; Li, X. Comparing twitter and traditional media using topic models. In Proceedings of the European Conference on Information Retrieval, Dublin, Ireland, 18–21 April 2011; pp. 338–349.\n- 74. Doctor, V. Hashtag History: When and What Started It? Available online: https://www.hashtags.org/featured/hashtag-history-when-and-what-started-it/ (accessed on 16 January 2020).\n- 75. Newman, T.P. Tracking the release of IPCC AR5 on Twitter: Users, comments, and sources following the release of the Working Group I Summary for Policymakers. *Public Underst. Sci.* **2017**, *26*, 815–825. [CrossRef]\n- 76. 
Segerberg, A.; Bennett, W.L. Social media and the organization of collective action: Using Twitter to explore the ecologies of two climate change protests. *Commun. Rev.* **2011**, *14*, 197–215. [CrossRef]\n- 77. Statista. Number of Monthly Active Twitter Users Worldwide from 1st Quarter 2010 to 1st Quarter 2019 (in Millions). 2019. Available online: https://www.statista.com/statistics/282087/number-of-monthly-activetwitter-users/ (accessed on 10 October 2019).\n- 78. Liu, Y.; Kliman-Silver, C.; Mislove, A. The tweets they are a-changin': Evolution of Twitter users and behavior. In Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media, Ann Arbor, MI, USA, 1–4 June 2014.", - "page_start": 19, - "page_end": 19, - "source_file": "pubmed10.pdf" - }, - { - "text": "# **1.1 Purpose of the Document**\n\nThe main purpose of this document is to present a User Manual for the main user functionalities of the **Portal Version 4.3**, launched in production in May 2019. 
This document consists of an update of the User Manual for the Portal Version 3.0 published in November 2017[4].\n\n# **1.2 Reference Documents**\n\n| Id | Reference | Title | Version |\n| --- | --- | --- | --- |\n| [1] | EDP_S1_MAN | EDP_S1_MAN_Portal-Version1-UserManual_v1.0 | 1.0 |\n| [2] | EDP_S1_MAN | EDP_S1_MAN_Portal-Version1.3-UserManual_v1.2 | 1.3 |\n| [3] | EDP_S1_MAN | EDP_S1_MAN_Portal-Version2.0-UserManual_v1.0 | 2.0 |\n| [4] | EDP_S1_MAN | EDP_S1_MAN_Portal-Version3.0-UserManual_v1.0 | 3.0 |\n\n*Table 1-1: Reference Documents*\n\n# **1.3 Terminology**\n\n| Acronym | Description |\n| --- | --- |\n| API | Application Programmer Interface |\n| CKAN | (replaced by the \"Data Platform\") |\n| CSV | Comma separated values |\n| Data Platform | Single page web app for managing and displaying datasets |\n| DCAT-AP | DCAT Application Profile - Metadata specification based on the Data Catalogue vocabulary (DCAT) |\n| DRUPAL | Content Management System |\n| ECAS / EU-Login | EU user login page |\n| EDP | European Data Portal |\n| FME | Feature Manipulation Engine |\n| GUI | Graphical User Interface |\n| HTTP | Hypertext Transfer Protocol |\n| JSON | JavaScript Object Notation (a lightweight data-interchange format) |\n| maps.app | Geo-spatial data visualization application |\n| MQA | Metadata Quality Assistant |\n| RDF | Resource Description Framework |\n| SOLR | Search engine used for portal content search and dataset search |", - "page_start": 3, - "page_end": 3, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "#### **3.1.7 How to search for EDP Site Content**\n\nIn order to search within the Portal's site content (i.e. 
editorial content, articles, events, reports etc.), **enter any keyword in the \"Search site content\" text box and click on the button**.\n\n| | EUROPEAN | | Newsletter FAQ Search Contact Cookies Legal notice Login | English (en) |\n| --- | --- | --- | --- | --- |\n| | DATA PORTAL | | Search site content ... | |\n| European Data Portal | | | | |\n| What we do - | | Data - | Providing Data▼ Using Data - | Resources - |\n\nThe site will display all matching content found (here for keyword \"Brussels\"):\n\n| | | Newsletter FAQ Search Contact Cookies Legal notice English (en) > |\n| --- | --- | --- |\n| | | Search site content ... ರ |\n| European Data Portal > Search | | |\n| f What we do - | Providing Data - | Data - Using Data - Resources - |\n| Search | | |\n| Current search | | Brussels ರ Sort by V |\n| Search found 164 item(s) | | |\n| Brussels | | |\n| | | @ Reqister for European Week of Regions and Cities |\n| | | Brussels, Belgium 03/10/2019 If you haven't already, save the date and join the European Week of Regions and .. |\n| | | Cities (EWRC) in Brussels, Belgium on 7 to 10 October 2019 next week. EWRC is an annual four-day event ... |\n| Filter by content type: | | |\n| EDP Events (68) | | Two weeks until the European Week of Regions and Cities |\n| | | Brussels, Belgium! 26/09/2019 Save the date and join the European Data Portal (EDP) at the European Week of .. |\n| Article (50) | | in EU policymaking. This year, the event will be held in Brussels, Belgium on 7 to 10 October 2019 |\n| 因 Document (33) | | @ Save the date: Connected Mobility Summit 2019 |\n| 트 Simplenews newsletter | | organisation based in Brussels, Belgium that covers politics and policy from across the European Union and .. 
|\n| (7) | | |\n| | | European Week of Regions and Cities |\n| Highlights (4) | | Monday, 7 October, 2019 - Thursday, 10 October, 2019 |\n| | | Brussels, Belgium |\n| Library Use Case (2) | | |\n| | | @ Open food data on the European Data Portal |\n| | | ' from data.gov.be - a dataset on the location of food trucks in the City of Brussels. ' Nutrition ... |\n\n#### **Note:**\n\nThe \"Search site content\" does **not** perform any search on datasets.\n\nIn order to search for datasets from the EDP Home page, the user should refer to section 3.2.", - "page_start": 21, - "page_end": 21, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "most similar to the ones used in GPT-2's training data, i.e. documents linked to from Reddit [25], plus Wikipedia and a collection of books. While this was reportedly effective at filtering out documents that previous work characterized as \"unintelligible\" [134], what is unmeasured (and thus unknown) is what else it filtered out. The Colossal Clean Crawled Corpus [107], used to train a trillion parameter LM in [43], is cleaned, inter alia, by discarding any page containing one of a list of about 400 \"Dirty, Naughty, Obscene or Otherwise Bad Words\" [p.6].14 This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika, white power) included. 
While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites [125]) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink, the influence of online spaces built by and for LGBTQ people.15 If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light.\n\nThus at each step, from initial participation in Internet fora, to continued presence there, to the collection and finally the filtering of training data, current practice privileges the hegemonic viewpoint. In accepting large amounts of web text as 'representative' of 'all' of humanity we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality. We instead propose practices that actively seek to include communities underrepresented on the Internet. For instance, one can take inspiration from movements to decolonize education by moving towards oral histories due to the overrepresentation of colonial views in text [35, 76, 127], and curate training datasets through a thoughtful process of deciding what to put in, rather than aiming solely for scale and trying haphazardly to weed out, post-hoc, flotsam deemed 'dangerous', 'unintelligible', or 'otherwise bad'.\n\n#### 4.2 Static Data/Changing Social Views\n\nA central aspect of social movement formation involves using language strategically to destabilize dominant narratives and call attention to underrepresented social perspectives. Social movements produce new norms, language, and ways of communicating. 
This adds challenges to the deployment of LMs, as methodologies reliant on LMs run the risk of 'value-lock', where the LM-reliant technology reifies older, less-inclusive understandings.\n\nFor instance, the Black Lives Matter movement (BLM) influenced Wikipedia article generation and editing such that, as the BLM movement grew, articles covering shootings of Black people increased in coverage and were generated with reduced latency [135]. Importantly, articles describing past shootings and incidents of police brutality were created and updated as articles for new events were created, reflecting how social movements make connections between events in time to form cohesive narratives [102]. More generally, Twyman et al. [135] highlight how social movements actively influence framings and reframings of minority narratives\n\nin the type of online discourse that potentially forms the data that underpins LMs.\n\nAn important caveat is that social movements which are poorly documented and which do not receive significant media attention will not be captured at all. Media coverage can fail to cover protest events and social movements [41, 96] and can distort events that challenge state power [36]. This is exemplified by media outlets that tend to ignore peaceful protest activity and instead focus on dramatic or violent events that make for good television but nearly always result in critical coverage [81]. As a result, the data underpinning LMs stands to misrepresent social movements and disproportionately align with existing regimes of power.\n\nDeveloping and shifting frames stand to be learned in incomplete ways or lost in the big-ness of data used to train large LMs — particularly if the training data isn't continually updated. Given the compute costs alone of training large LMs, it likely isn't feasible for even large corporations to fully retrain them frequently enough to keep up with the kind of language change discussed here. 
Perhaps fine-tuning approaches could be used to retrain LMs, but here again, what would be required is thoughtful curation practices to find appropriate data to capture reframings and techniques for evaluating whether such fine-tuning appropriately captures the ways in which new framings contest hegemonic representations.\n\n## 4.3 Encoding Bias\n\nIt is well established by now that large LMs exhibit various kinds of bias, including stereotypical associations [11, 12, 69, 119, 156, 157], or negative sentiment towards specific groups [61]. Furthermore, we see the effects of intersectionality [34], where BERT, ELMo, GPT and GPT-2 encode more bias against identities marginalized along more than one dimension than would be expected based on just the combination of the bias along each of the axes [54, 132]. Many of these works conclude that these issues are a reflection of training data characteristics. For instance, Hutchinson et al. find that BERT associates phrases referencing persons with disabilities with more negative sentiment words, and that gun violence, homelessness, and drug addiction are overrepresented in texts discussing mental illness [61]. Similarly, Gehman et al. show that models like GPT-3 trained with at least 570GB of data derived mostly from Common Crawl16 can generate sentences with high toxicity scores even when prompted with non-toxic sentences [53]. Their investigation of GPT-2's training data17 also finds 272K documents from unreliable news sites and 63K from banned subreddits.\n\nThese demonstrations of biases learned by LMs are extremely valuable in pointing out the potential for harm when such models are deployed, either in generating text or as components of classification systems, as explored further in §6. 
However, they do not represent a methodology that can be used to exhaustively discover all such risks, for several reasons.\n\nFirst, model auditing techniques typically rely on automated systems for measuring sentiment, toxicity, or novel metrics such as 'regard' to measure attitudes towards a specific demographic group [119]. But these systems themselves may not be reliable\n\n14Available at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/blob/master/en, accessed Jan 18, 2021\n\n15This observation is due to William Agnew.\n\n16https://commoncrawl.org/the-data/\n\n17GPT-3's training data is not openly available, but GPT-2's training data was used indirectly to construct GPT-3's [53].", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv5_ccby4license.pdf" - } - ] - }, - { - "references": { - "source_file": "welcome_to_word_template.pdf", - "query": "Where can we open a document saved on OneDrive ?", - "target_page": 2, - "target_passage": "When you save this document in OneDrive, you’ll be able to open it anywhere: on your computer, tablet, or phone. Your changes will be saved automatically.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Count on Word to count your words\n\nTry it: Hit return after this line and type some words.\n\nThe status bar at the bottom of the window keeps a running count of the number of words in the document.\n\n### Save this for later, access it anywhere\n\nWhen you save this document in OneDrive, you'll be able to open it anywhere: on your computer, tablet, or phone. Your changes will be saved automatically.\n\n| Save As | Info | | |\n| --- | --- | --- | --- |\n| New | 1 = OneDrive - Contoso | | |\n| Recent | Open | Enter file name here | |\n| Word Document (*. docx) | Contoso | Save | More options ... 
|\n| OneDrive - Contoso | Save As | IrvinS@Contoso.com | |\n| Name ↑ | Print | Sites - Contoso | |\n| Share | Attachments | IrvinS@Contoso.com | |\n| Personal | Export | Forms | |\n| OneDrive - Personal | Close | My Stuff | irvinsayers 1@outlook.com |\n\nTry it: Select File > Save As, and then select OneDrive and give this document a name.\n\nIf you sign in to Office 365 on another device, this document will be in your list of recent files. You can pick up where you left off… even if you left the document open on the computer you're using now.", - "page_start": 1, - "page_end": 1, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## Create something\n\nBegin with a **Blank document** to get right to work. Or start with a template to save yourself time and steps. Just select **File** > **New**, and then select or search for the template you want.\n\n| | New |\n| --- | --- |\n| (n) Home | |\n| New | |\n| Open | |\n| Info | |\n| Save a Copy | |\n| Save as Adobe PDF | Blank document |\n| Print | |\n| Share | Search for online templates Q |\n| Export | Suggested searches Business Cards Flyers Letters Education Resumes and Cover Letters Holiday |\n| Transform | Aa NAME |\n| Clase | Take a tour |\n\n### Access files anywhere\n\nNeed to work on the go and across different devices? 
Click **File** > **Account** to sign in with your Microsoft account and access your recently used files anywhere, on any device, through seamless integration between Office, OneDrive, OneDrive for Business, and SharePoint.\n\n#### Find recent files\n\nWhether you only work with files stored on your PC's local hard drive or you store files in multiple shared locations, selecting **File** > **Open** takes you to your recently used documents and any files that you may have pinned to your list.\n\n| € | Open | | | | |\n| --- | --- | --- | --- | --- | --- |\n| (2 Home | | | | | |\n| D New | L Recent | | 0 Search | | |\n| | | | Documents Folders | | |\n| Open | 08 | Shared with Me | | | |\n| | Contass | | 13 Name | | Date modified |\n| Info | | OneDrive - Contoso | Pinned | Pin files you want to easily find later. Click the pin icon that appears when you hover over a file. | |\n| Save a Copy | | MeganB@contoso.com | | | |\n| | | | Today | | |\n| Save as Adobe PCC | | Sites - Contoso MeganB@contoso.com | 四元 Connector - Elbow.doco Desktop | | 11/4/2021 3:01 AM |\n| Print | | | | | |\n| Share | This PC | | CE Annual Report.docx W OneDrive - Contoso | | 11/4/2021 2:48 AM |\n| | Add a Place | | | | |\n| Export | | | Older | | |\n| Transform | Browse | | Document (8).doco W | | 10/S/2021 4:48 PM |\n| | | | OneOrive - Contaso | | |\n| Close | | | 8 | Voice Capture Document.docx | 10/5/2021 4:37 PM |\n| | | | OneOrive - Contoso | | |\n| | | | W | Manufacturing and delivery plan.docx Mark 8 Project Team > Research and Development | 9/16/2021 8:28 AM |\n\n### Discover related options\n\nWhen you select objects in your document, options related to your selection will appear. 
For example, selecting a table displays the **Table Design** and **Layout** tabs, which offer additional options.\n\n| Review | View | Help | Acrobat | Table Design | | Layout | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | | | | | 1/2 pt | |\n| | | | | | Shading | Border | | Borders Border |\n| | | | | | | | Styles × | Painter |\n| Table Styles | | | | | | | Borders | 7 |", - "page_start": 1, - "page_end": 1, - "source_file": "Word QS.pdf" - }, - { - "text": "# Share and collaborate\n\nWith this document saved in OneDrive, you can share it with others. They don't even need Word to open it.\n\nTry it: Select Share, and send a link to this document. (keyboard shortcut – Alt+F+Z or Alt+Z+S)\n\nYou can send the link by typing someone's email address or by copying the link and pasting it into a message or chat. If you want them to read the document but not edit it, set their permission to view-only.\n\nIf they don't have Word, the document will open in their web browser, in Word Online.\n\n# Add visuals with pictures from the web\n\nWord works with Bing to give you access to thousands of pictures you can use in your documents.\n\nTry it: Hit enter after this line to make a blank line:\n\n- 1. With your cursor in the blank space above, go to the Insert tab, select Online Pictures, and then search for something, like puppy clip art.\n- 2. Select the picture you want, and select Insert.", - "page_start": 2, - "page_end": 2, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "You might be tempted to resolve this thread (note the Resolve link at the top of the initial comment) however, we aren't really done. Remember that we need to not just create the class but also define the axiom that a ChicagoPizza must have a DeepPanBase. Since we can't add axioms in Web Protégé we need to export our ontology back to Protégé. 
Typically, we would collect many more comments and changes before exporting but we want to demonstrate how round-trip editing works between Protégé and Web Protégé. We could of course just export the ontology from Web Protégé to Protégé and then create another new Project, but it would be cumbersome to have to constantly create new projects every time you want to make a change in Protégé and if we did this, we would lose our audit trail of comments and changes. Luckily, there is a better way to do it.\n\nTo start we need to export the ontology to a file. Note that one of the tabs at the top is History. Select that tab. This tab shows a list of each version of the ontology. There should be 2 versions labelled R1 and R2 (in the right corner of each version). The most recent version is always at the top since that is typically what you want although it is also possible to roll back changes to previous versions. We want to export the latest version R2. Click on the R2 icon. This should give you a drop-down menu with two options: Revert changes in revision 2 and Download revision 2. Select Download revision 2. This will prompt you with the standard file browser for your OS to save a zip file with the new ontology. The ontology is saved with a zip file because ontologies can be large and since Web Protégé is working over a network we may want to limit the network traffic for large ontologies. Select the appropriate place to save the Zip archive file on the machine where you have Protégé. Do the standard things you would do to unzip the file and load it into Protégé. Note that when you unzip the file it will create a directory as well, so the file won't be directly under whatever directory you save it to. Instead, there will be a directory titled something like pizza-with-data-ontologies-owl-REVISION-2 that the OWL file will be in.\n\nLoad the downloaded file into Protégé. Go to the Class hierarchy tab and navigate to the new ChicagoPizza class under NamedPizza. 
Add the axiom (refer back to chapter 4 if you need to remember how to add axioms to classes) hasBase some DeepPanBase. Save the file. Now go back to Web Protégé and your version of the Pizza ontology there. Note that in the upper right corner of the window there are links (drop down menus) such as Display and Project. Select Project and from the drop down menu select Apply External Edits. This will give you a small dialog titled Upload ontologies with a little button to Choose File. Click on Choose File. That will give you the standard OS dialog for selecting a file. Navigate to the file you saved from Protégé and select that then choose OK. That should result in a new pop-up window titled Merge ontologies where you will see the changes (in this case only the addition of the ChicagoPizza axiom) and a text box where you can describe the changes. Add an appropriate Commit message or just take the default and select OK. You should get a message that says the changes were successfully applied.\n\nIf you navigate back to ChicagoPizza you should see that it now has that axiom. You can also navigate back to NamedPizza. In the right most column, you should see the comments about needing to add ChicagoPizza as a subclass. Now that this has been done you can click on the Resolve link in the upper right corner of the comment thread and the comments will be removed from NamedPizza.", - "page_start": 87, - "page_end": 87, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "**Note:** You see one set of messages for each object server on which you run the **ARSMAINT** program.\n\nFor example, when expiration processing starts on a specified server, you might see the following message:\n\n\"109 Cache Expiration (Date) (Min%) (Max%) (Server)\"\n\nMigration processing uses the specified date (the default is \"today\" in internal format). 
Expiration processing begins on each cache file system that exceeds the Max% (default 80%) and ends when the free space that is available in the file system falls below the Min% (default 80%).\n\nOne of these messages shows for each storage object that is deleted from cache storage. A storage object is eligible to be deleted when its \"Cache Document Data for n Days\" or \"Life of Data\" period passes (whichever occurs first).\n\nA storage deletion message looks similar to the following message:\n\n\"196 Cache Migration (ApplGrp) (ObjName) (Server)\"\n\nAlso, information-only messages report the percentage of space that is used in the file system.\n\nAn information message looks similar to the following message:\n\n```\n\"124 Filesystem Statistics (filesystem) (% full) (server)\"\n```\n#### **Load table (ARSLOAD)**\n\nThe ARSLOAD table can be used to track loads for expiration. This table maintains a record of all successful loads to application groups with the \"expire by load\" expiration type.\n\n# **10.5.3 Removing documents from the Tivoli Storage Manager archive**\n\nRemoving a document from archive storage means that the backup (if the primary document copy is in cache) or long-term copy (if the primary document copy is in archive) of the document is deleted from the system. You remove documents from archive storage when you no longer have a business or legal requirement to keep them.\n\nA *management class* contains an archive copy group that specifies the criteria that makes a document eligible for deletion. Documents become eligible for deletion under the following conditions:\n\n- -Administrators delete documents from client nodes\n- - An archived document exceeds the time criteria in the archive copy group (how long archived copies are kept)\n\nASM does not delete information about expired documents from its database until expiration processing runs. You can run expiration processing either automatically or manually by command. 
Ensure that expiration processing runs periodically to allow ASM to reuse storage pool space that is occupied by expired documents.\n\nWhen expiration processing runs, ASM deletes documents from its database. The storage space that these documents used to occupy then becomes reclaimable. For more information, see \"Reclaiming space in storage pools\" on page 233.", - "page_start": 255, - "page_end": 255, - "source_file": "sg246915.pdf" - }, - { - "text": "To view and configure DNS server information in IBM Spectrum Virtualize, complete the following steps:\n\n- 1. In the left pane, click the **DNS** icon and enter the **IP address** and the **Name** of each DNS server. The IBM Spectrum Virtualize supports up two DNS Servers, IPv4 or IPv6 (see Figure 5-75).\n\n| DNS |\n| --- |\n| You can create, delete, or change domain name servers, which manage names of |\n| resources that are located on external networks. Read More. |\n| DNS Server IPv4 Address ▼ Name: |\n\n*Figure 5-75 DNS information*\n\n- 2. Click **Save** after you complete entering the DNS server information.\n# **Transparent Cloud Tiering**\n\nTransparent cloud tiering is a licensed function that enables volume data to be copied and transferred to cloud storage. The system supports creating connections to cloud service providers to store copies of volume data in private or public cloud storage.\n\nWith transparent cloud tiering, administrators can move older data to cloud storage to free up capacity on the system. Point-in-time snapshots of data can be created on the system and then copied and stored on the cloud storage. An external cloud service provider manages the cloud storage, which reduces storage costs for the system. Before data can be copied to cloud storage, a connection to the cloud service provider must be created from the system.\n\nA cloud account is an object on the system that represents a connection to a cloud service provider by using a particular set of credentials. 
These credentials differ depending on the type of cloud service provider that is being specified. Most cloud service providers require the host name of the cloud service provider and an associated password, and some cloud service providers also require certificates to authenticate users of the cloud storage.\n\nPublic clouds use certificates that are signed by well-known certificate authorities. Private cloud service providers can use self-signed certificate or a certificate that is signed by a trusted certificate authority. These credentials are defined on the cloud service provider and passed to the system through the administrators of the cloud service provider. A cloud account defines whether the system can successfully communicate and authenticate with the cloud service provider by using the account credentials.\n\nIf the system is authenticated, it can then access cloud storage to copy data to the cloud storage or restore data that is copied to cloud storage back to the system. The system supports one cloud account to a single cloud service provider. Migration between providers is not supported.", - "page_start": 199, - "page_end": 199, - "source_file": "sg247938.pdf" - }, - { - "text": "Many operating systems and applications provide mechanism to stop I/O operations and ensure that all data is flushed from host cache. If these mechanisms are available, they can be used with snapshot operations. When these mechanisms are not available, it might be necessary to flush the cache manually by quiescing the application and unmounting the file system or logical drives.\n\nWhen choosing cloud object storage as a backup solution, be aware that the object storage must be managed as a whole. Backup and restore of individual files, folders, and partitions, are not possible.\n\nTo interact with cloud service providers or a private cloud, the IBM Spectrum Virtualize requires interaction with the correct architecture and specific properties. 
Conversely, cloud service providers offered attractive prices per object storage in cloud and deliver an easy-to-use interface. Normally, cloud providers offer low-cost prices for object storage space, and charges are applied for the cloud outbound traffic only.\n\n# **11.3.3 Restore using Transparent Cloud Tiering**\n\nTransparent Cloud Tiering can also be used to restore data from any snapshot that is stored in cloud providers. When the cloud accounts' technical and security requirements are met, the storage objects in the cloud can be used as a data recovery solution. The recovery method is similar to backup, except that the reverse direction is applied. Transparent Cloud Tiering running on IBM Spectrum Virtualize queries for object storage stored in a cloud infrastructure. It enables users to restore the objects into a new volume or set of volumes.\n\nThis approach can be used for various applications, such as recovering your production database application after an errant batch process that caused extensive damage.\n\n**Note:** You should always consider the bandwidth characteristics and network capabilities when choosing to use Transparent Cloud Tiering.\n\nThe restore of individual files using Transparent Cloud Tiering is not possible. As mentioned, object storage is unlike a file or a block so object storage must be managed as a whole unit piece of storage, and not partially. Cloud object storage is accessible by using an HTTP-based REST API.\n\n# **11.3.4 Transparent Cloud Tiering restrictions**\n\nThis section describes the following restrictions that must be considered before using Transparent Cloud Tiering:\n\n- - Because the object storage is normally accessed by using the HTTP protocol on top of a TPC/IP stack, all traffic that is associated with cloud service flows through the node management ports.\n- - The size of cloud-enabled volumes cannot change. 
If the size of the volume changes, a snapshot must be created so that a new Object Storage is constructed.\n- - Transparent Cloud Tiering cannot be applied to volumes that are part of traditional copy services, such as FlashCopy, Metro Mirror, Global Mirror, and HyperSwap.\n- - Volume containing two physical copies in two different storage pools cannot be part of Transparent Cloud Tiering.\n- - Cloud Tiering snapshots cannot be taken from a volume that is part of migration activity across storage pools.", - "page_start": 521, - "page_end": 521, - "source_file": "sg247938.pdf" - }, - { - "text": "| Modify Capacity Savings | × | Synchronized | Protocol Typ |\n| --- | --- | --- | --- |\n| Modify Volume vdisk3 | | | SCSI |\n| Select the new capacity savings method: | | Yes | ISDS |\n| None > | | Yes | SCSI |\n| Not Set | | | SCSI |\n| None | | | SCSI |\n| | matting) | | |\n| Thin Provisioning | | | |\n| Compression | | Volume data is compressed when written to save disk space. | |\n\n*Figure 10-17 Selecting capacity setting*\n\nAfter the copies are fully synchronized, the original volume copy is deleted automatically.\n\nAs a result, compressed data is on the volume. This process is nondisruptive, so the data remains online and accessible by applications and users.\n\nThis capability enables clients to regain space from the storage pool, which can then be reused for other applications.\n\nWith the virtualization of external storage systems, the ability to compress stored data significantly enhances and accelerates the benefit to users. This capability enables them to see a tremendous return on their Storwize V7000 investment.\n\nOn the initial purchase of an Storwize V7000 with Real-time Compression, clients can defer their purchase of new storage. 
When storage is needed, IT purchases a lower amount of the required storage before compression.\n\nFor more information about volume migration for compressed volumes on standard pools to DRPs, see \"Migrating to and from DRP\" on page 425.\n\n# **10.6 Saving estimation for compression and deduplication**\n\nThis section provides information about the tools that are used for sizing the environment for compression and deduplication.\n\n# **10.6.1 Evaluate compression savings by using IBM Comprestimator**\n\nIBM Comprestimator is an integrated GUI and CLI host-based utility that estimates the space savings that are achieved when compressed volumes are used for block devices. This utility provides a quick and easy view of showing the benefits of the use of compression. The utility performs read-only operations and therefore, does not affect the data that is being stored on device.\n\nIf the compression savings prove to be beneficial in your environment, volume mirroring can be used to convert volumes to compressed volumes in the data reduction pools.", - "page_start": 450, - "page_end": 450, - "source_file": "sg247938.pdf" - }, - { - "text": "You can dynamically increase or decrease the amount of storage available to the node as the deployed containers change. Figure 2-6 shows an example of how several containers across multiple virtual machines can share one set of volumes.\n\n*Figure 2-6 Storage Backed By Volumes Managed Through PowerVC*\n\n**Note:** For more information about the use of IBM PowerVC storage with containers, see IBM Knowledge Center.\n\n#### **2.2.3 Virtual machines and containers in a hybrid multicloud architecture**\n\n*Hybrid multicloud* is a cloud environment that combines a private cloud and public clouds that allows applications and data to be shared between them. A *multicloud* refers to an environment that is made of up of more than one cloud provider (or vendor). 
Hybrid is an environment that combines a private cloud and a public cloud that allows applications to take advantage of the resources on either cloud. Therefore, hybrid multicloud combines a private cloud, a public cloud, and more than one cloud service from more than one cloud vendor.\n\nIBM Power Systems customers use a simplified deployment of enterprise resources by using various virtual machines (LPARs). Enterprises worldwide are exploring container technology and developing plans for how to integrate them into their enterprise. They want to start to deploy, manage, and operate containerized applications smoothly by integrating them into virtual machines.\n\nIT administrators, developers, and line-of-business users want to continue a simplified access to the infrastructure and applications, which is possible by adopting the IBM Power Systems cloud technologies within the data center.", - "page_start": 32, - "page_end": 32, - "source_file": "sg248459.pdf" - }, - { - "text": "# **Application Group Identifier and the Application Group ID**\n\nThe Application Group Identifier and the Application Group ID (AGID) are unique identifiers that are used by Content Manager OnDemand to identify the application group in system tables.\n\n# **Migrate Data from Cache**\n\nThe Migrate Data from Cache value determines when documents and resources are migrated to archive storage. A storage set that is associated with a Tivoli Storage Manager client node must be selected to enable migration to archive storage.\n\nThe following values are valid:\n\n- - No: Data is never migrated from cache. This option is unavailable when a storage set that is associated with a Tivoli Storage Manager client node is selected for the application group.\n- - When data is loaded: Data is migrated to archive storage when the data is loaded into the application group.\n- - Next cache migration: Data is migrated to archive storage the next time that **ARSMAINT** is run with the **-m** option. 
The **-m** option indicates that data and resources are copied from cache to archive storage.\n- - After __ days in cache: This value specifies the number of days that data remains in cache storage. After the prescribed number of days in cache storage are reached, the data is copied to archive storage the next time that **ARSMAINT** is run with the **-m** option for data migration.\n\n# **5.2.7 IBM System Storage Archive Manager**\n\nCertain regulations require data to be stored in devices that are read only. In the past, physical storage devices, such as tapes and optical disks that are Write Once Read Many (WORM), were used.\n\nWORM disks, such as the NetApp SnapLock or EMC Centera, can be used to store data in the same manner as WORM tapes or optical platters. IBM System Storage Archive Manager allows critical data to be retained for a mandated period without the possibility of being rewritten or erased.\n\nIn this section, we describe System Storage Archive Manager and how Content Manager OnDemand can be configured to use this subsystem to support these WORM disk devices.\n\n**Note:** Verify support for any particular device on a particular platform through the Tivoli Storage Manager Device support matrix before you plan your implementation.\n\nFor more information about the Tivoli Storage Manager support of WORM disk devices, such as NetApp SnapLock, or EMC Centera, see the following IBM Knowledge Center documents:\n\n- -Tivoli Storage Manager for AIX Administrator's Guide\n- -Tivoli Storage Manager for Windows Administrator's Guide\n\nYou can obtain these documents from the IBM Tivoli Storage Manager Knowledge Center at the following web address:\n\nhttp://www.ibm.com/support/knowledgecenter/SSGSG7/welcome?lang=en:", - "page_start": 127, - "page_end": 127, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "welcome_to_word_template.pdf", - "query": "What is the bold keyboard shortcut on word ?", - "target_page": 4, - 
"target_passage": "Bold (keyboard shortcut: Ctrl+B)", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Make your meaning more visual by formatting text\n\nTo format text, select it, and then select a button in the Font or Paragraph area on the Home tab.\n\nTry it: Select text in the lines below and choose formatting options so that the text is an example of the formatting it's describing:\n\n| Bold (keyboard shortcut: Ctrl+B) |\n| --- |\n| Italic (keyboard shortcut: Ctrl+I) |\n| Highlight |\n| Font color |\n| Bullets |\n| Numbering |\n\nPro tip: If you selected whole words for this exercise, did you notice that Word popped up a little toolbar, with the font formatting options?\n\n| Segoe UI - 11 | - A A | Aa - | Po |\n| --- | --- | --- | --- |\n| B I U v abe X2 X2 | | A - all - A - | |\n\nBetween that and keyboard shortcuts like Ctrl+B and Ctrl+I, you save time by not having to go up to the Home tab all the time.", - "page_start": 3, - "page_end": 3, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "### **CHANGING FONTS**\n\nThe appearance that you choose for your text is referred to as the *font* or *typeface*. Font traditionally refers to a combination of typeface, style and size in points (e.g. Arial Bold 12 pt).\n\nIn Excel 2007, *font* just refers to the typeface or shape of the letters. Typical classic fonts include Times New Roman, Arial, Century Gothic and **Copperplate**.\n\n| | Try This Yourself: | 1 |\n| --- | --- | --- |\n| | Continue using the previous | |\n| Same File | file with this exercise, or open the file E722 Font | |\n| | Formatting_1.xls... 
| |\n|  | Click in cell A1 to make the | |\n| | cell with the main heading the | |\n| | active cell | |\n|  | Click on the drop arrow next to | |\n| | the Font command in | |\n| | the Font group on the Home | 4 |\n| | tab to display a gallery of | |\n| | available fonts | |\n|  | Point to Arial Narrow, then Book Antiqua, Garamond and Gill | |\n| | Sans MT | |\n| | If you don't have these fonts, | |\n| | try different ones. As you point | |\n| | to each font, the preview will | |\n| | change... | |\n|  | Scroll to and click on Comics | |\n| | Sans MS, or another font of | |\n| | your choice if you don't have | |\n| | this one | |\n| | This time the font formatting | |\n| | has changed in the cell and is | |\n| | no longer just a preview – it | |\n| | won't change again unless you | |\n| | make another font selection. | |\n\n### **For Your Reference…**\n\nTo *apply font formatting*:\n\n- 1. Select the text\n- 2. Click on the drop arrow for *Font*\n- 3. Point to a font to preview it\n- 4. Click on the font to apply it\n\n### **Handy to Know…**\n\n- You can jump directly to a font. For example, if you want to preview Garamond, click on the name of the font in the *Font* command and press . Excel will jump to the fonts that start with *G* and *Live Preview* will display the text temporarily. Keep typing the name until you reach the required font.", - "page_start": 21, - "page_end": 21, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **NAVIGATING IN A FILE**\n\n| Arrow | Move one cell to the right, left, up or down |\n| --- | --- |\n| Keys | |\n| Tab | Move once cell to the right |\n| Ctrl+Home | To beginning file |\n| Ctrl+End | To end of typed information |\n| Home | Beginning of a line |\n| End | End of a line |\n| Page Down | Down one screen |\n| Page Up | Up one screen |\n| F5 | To a specific page |\n| Scroll bars | Appear at the right and on the bottom of the screen. 
You may click the scroll arrows, drag the scroll box or click the scroll bar to move |\n| | through the document. |", - "page_start": 5, - "page_end": 5, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## Make magic: use Heading styles\n\nThe heading for this part (\"Make magic: use Heading styles\") looks the same as the other headings in this document, but it's not as useful. It's formatted with font settings (font, size, and color), while the other headings are formatted with a Heading style (Heading 1, to be exact).\n\nSee the little triangle when you mouse over those other headings?\n\nYou can collapse and expand everything under a heading, like an outline. But this one's not working. Let's fix it.\n\n#### Try it: Apply the Heading 1 style:\n\n- 1. Put your cursor somewhere in the heading above (\"Make magic: use Heading styles\") don't select anything.\n- 2. On the Home tab, find Styles, and select Heading 1 (keyboard shortcut Ctrl+Alt+1).\n\nTa-da! Now it looks like a heading, and acts like one too.", - "page_start": 4, - "page_end": 4, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "#### **Up button:**\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n#### **Button down:**\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n#### **Charging instructions:**\n\nWireless charging, as shown in the picture below.\n\n#### **1.1 Shortcut function:**\n\n1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n\n2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, brightness adjustment and other functions.", - "page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - }, - { - "text": "### **CHANGING FONT 
SIZE**\n\nOne way that text can be emphasised is by changing the *size* of the font. For example, if your normal text is 11 pt, you may like to make the headings 13 pt or larger. Font size may also be changed for small detailed items, such as comments or a caption. Main headings in a worksheet usually appear in a slightly larger font size compared to the rest of the data.\n\n| | Try This Yourself: | 1 |\n| --- | --- | --- |\n| | Continue using the previous | |\n| Same File | file with this exercise, or open | |\n| | the file E722 Font | |\n| | Formatting_2.xlsx... | |\n| | Click in cell A1 to make the | |\n|  | cell with the main heading the | 2 |\n| | active cell | |\n| | Click on the drop arrow next to | |\n|  | the Font Size command | |\n| | in the Font group on | |\n| | the Home tab to display a | |\n| | gallery of available sizes | |\n| | Point to various sizes and | |\n|  | notice how Live Preview | |\n| | shows you how the heading | |\n| | will look | |\n|  | Click on 16 to change the | |\n| | heading to 16 pt | |\n| | You can also change the font | |\n| | size of parts of a document, | |\n| | and you can use the Mini | |\n| | toolbar... | 8 |\n|  | Click in cell A2 | |\n|  | Click with the right-mouse button to display the mini | |\n| | toolbar and the shortcut menu | |\n| | Click on the drop arrow next to | |\n|  | | |\n| | Font Size and | |\n| | click on 14 | |\n| | Click in cell A3 to hide the | |\n|  | toolbar | |\n\n### **For Your Reference…**\n\n#### To *change font size*:\n\n- 1. Select the cell or range that you want to change\n- 2. Click on the drop arrow of *Font Size*\n- 3. Click on the required font size\n\n#### **Handy to Know…**\n\n- You may have noticed that the text didn't change size when you used the mini toolbar until you actually clicked on a different font size. 
This is because *Live Preview* doesn't work with the mini toolbar.", - "page_start": 22, - "page_end": 22, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "# Share and collaborate\n\nWith this document saved in OneDrive, you can share it with others. They don't even need Word to open it.\n\nTry it: Select Share, and send a link to this document. (keyboard shortcut – Alt+F+Z or Alt+Z+S)\n\nYou can send the link by typing someone's email address or by copying the link and pasting it into a message or chat. If you want them to read the document but not edit it, set their permission to view-only.\n\nIf they don't have Word, the document will open in their web browser, in Word Online.\n\n# Add visuals with pictures from the web\n\nWord works with Bing to give you access to thousands of pictures you can use in your documents.\n\nTry it: Hit enter after this line to make a blank line:\n\n- 1. With your cursor in the blank space above, go to the Insert tab, select Online Pictures, and then search for something, like puppy clip art.\n- 2. Select the picture you want, and select Insert.", - "page_start": 2, - "page_end": 2, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "Instructions you can edit, share, and print\n\nUnlike old-school user guides, this doc is yours to tailor exactly for your needs. Reading it will teach you some basics about Word, but this document isn't just for reading. It's for editing too, so you can learn by doing.\n\nFor practice using Word features, watch for Try it text in red throughout this document.\n\nTime saver: If you've only got a minute and you want to see how this works, watch this Video: Welcome to Word.\n\n### Write eloquently, with a little help\n\nWord automatically checks spelling and grammar, and marks misspelled words with a red squiggly underline. Grammatical glitches get a blue double underline.\n\nTry it: Put your cursor at the end of this paragraph, and hit Enter to start a new paragraph. 
Write a sentence with some spelling or grammatical mistakes, and press Enter to finish the paragraph.\n\nRight-click the text that's marked with underlines, or Press F7. Choose a suggestion to correct the mistakes.", - "page_start": 0, - "page_end": 0, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "### **Bookmarks**\n\nBookmarks are included in the PDF for headings or Word bookmarks depending on the option selected.\n\n## **Availability**\n\nThe information in this article is applicable to the following versions of Word.\n\n- Word for Windows Version 2408 and later.\n- Word for Mac Version 16.89 and later.\n- Word for iOS Version 2.89 and later.\n- Word for Android Build 16.0.18025.XXXXX or later.\n- Word for the web Build 16.0.18025.XXXXX or later.\n\nIt is available to customers with Office 2024 or Office LTSC 2024 and to customers with a Microsoft 365 subscription on Current Channel or Monthly Enterprise Channel. For customers with a Microsoft 365 subscription on Semi-Annual Enterprise Channel it will be available on January 14, 2025.", - "page_start": 60, - "page_end": 60, - "source_file": "office-pdf.pdf" - }, - { - "text": "# Get help with Word\n\n| Q Add watermark | |\n| --- | --- |\n| 区 | Watermark |\n| 물 | Insert Picture |\n| E | Insert Rows Above |\n| E | Add a Blank Page |\n| 電 | Insert Rows Below |\n| 2 | Get Help on \"Add watermark\" |\n| 0 | Smart Lookup on \"Add water ... |\n\nThe Tell me search box takes you straight to commands and Help in Word.\n\n#### Try it: Get help:\n\n- 1. Go to Tell me what you want to do at the top of the window.\n- 2. Type what you want to do.\n\nFor example, type:\n\n- Add watermark to quickly get to the watermark command.\n- Help to go to Word help.\n- Training to see the list of Word training courses.\n- What's new for a list of the most recent updates to Word\n\n### Let us know what you think\n\nPlease give us feedback on this template, so we can provide content that's truly useful and helpful. 
Thanks!", - "page_start": 7, - "page_end": 7, - "source_file": "welcome_to_word_template.pdf" - } - ] - }, - { - "references": { - "source_file": "welcome_to_word_template.pdf", - "query": "What is the advise to make the style sets and themes work well ? ", - "target_page": 6, - "target_passage": "They work best when your document is formatted with styles", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Give your doc an instant makeover\n\nStyle sets and themes let you completely change the look of your document in an instant. They work best when your document is formatted with styles (so it's good that we fixed that Heading style, above).\n\nTry it: Explore style sets and themes:\n\n- 1. On the Design tab, select Themes, and choose a theme from the drop-down. Notice that the gallery of style sets updates to reflect the theme you picked.\n- 2. Select any theme you like from the drop-down and click to apply.", - "page_start": 5, - "page_end": 5, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## Make magic: use Heading styles\n\nThe heading for this part (\"Make magic: use Heading styles\") looks the same as the other headings in this document, but it's not as useful. It's formatted with font settings (font, size, and color), while the other headings are formatted with a Heading style (Heading 1, to be exact).\n\nSee the little triangle when you mouse over those other headings?\n\nYou can collapse and expand everything under a heading, like an outline. But this one's not working. Let's fix it.\n\n#### Try it: Apply the Heading 1 style:\n\n- 1. Put your cursor somewhere in the heading above (\"Make magic: use Heading styles\") don't select anything.\n- 2. On the Home tab, find Styles, and select Heading 1 (keyboard shortcut Ctrl+Alt+1).\n\nTa-da! 
Now it looks like a heading, and acts like one too.", - "page_start": 4, - "page_end": 4, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## Introduction\n\nThis Remuneration Report forms part of the Directors' Report. It outlines the Remuneration Policy and framework applied by the Company as well as details of the remuneration paid to Key Management Personnel. Key Management Personnel are defined as those persons having the authority and responsibility for planning, directing and controlling the activities of the Company, directly or indirectly, including Directors and members of the Executive Management group.\n\nThe information provided in this report has been prepared in accordance with s300A and audited as required by section 308 (3c) of the *Corporations Act 2001*.\n\nThe objective of the Company's remuneration philosophy is to ensure that Directors and senior staff are remunerated fairly and responsibly at a level that is competitive, reasonable and appropriate, in order to attract and retain suitably skilled and experienced people.\n\nDuring the year the Company introduced a STI Plan that is based on Key Management Personnel individual performance measures and a Long-Term Incentive (\"LTI\") Executive Rights Plan that provides performance-based remuneration to members of management through the issue of Deferred Rights and Performance Rights vesting over a period of three years. 
These new plans are discussed in further detail later in this report.\n\n## Voting and comments made at the Company's 2012 AGM\n\nThe table below provides a summary of the Board's action and / or comments in response to concerns raised by shareholders at the 2012 AGM in relation to remuneration.\n\nKey issues raised were:\n\n- 〉 the granting of deferred rights;\n- 〉 definition of what compromises 'fixed pay'; and\n- 〉 a lack of understanding of the TSR Alpha™ concept recommended as the LTI performance assessment process.\n\n## Remuneration Policy\n\nThe Remuneration Policy has been designed to align the interests of shareholders, Directors, and employees. This is achieved by setting a framework to:\n\n- 〉 help ensure an applicable balance of fixed and at-risk remuneration, with the at-risk component linking incentive and performance measures to both Group and individual performance;\n- 〉 provide an appropriate reward for Directors and Executive Management to manage and lead the business successfully and to drive strong, long-term growth in line with the Company's strategy and business objectives;\n- 〉 encourage executives to strive for superior performance;\n- 〉 facilitate transparency and fairness in executive remuneration policy and practices;\n- 〉 be competitive and cost effective in the current employment market; and\n- 〉 contribute to appropriate attraction and retention strategies for Directors and executives.\n\nIn consultation with external remuneration consultants, the Group has structured an executive remuneration framework that is market competitive and complimentary to the business strategy of the organisation.\n\nThe framework is intended to provide a mix of fixed and variable remuneration, with a blend of short and long-term incentives as appropriate. 
As executives gain seniority within the Group, the balance of this mix shifts to a higher proportion of \"at risk\" rewards (refer to chart – Remuneration Reward Mix on the following page).\n\n## Remuneration Governance\n\n#### Role of the Remuneration Committee\n\nThe Remuneration Committee is a committee of the Board and has responsibility for setting policy for determining the nature and amount of emoluments of Board members and senior executives. The Committee makes recommendations to the Board concerning:\n\n- 〉 Non-Executive Director fees;\n- 〉 remuneration levels of Executive Directors and other Key Management Personnel;\n- 〉 the executive remuneration framework and operation of the incentive plan; and\n- 〉 key performance indicators and performance hurdles for the executive team.\n\nIn forming its recommendations the Committee takes into consideration the Group's stage of development, remuneration in the industry and performance. The Corporate Governance Statement provides further information on the role of this committee.\n\n#### Remuneration Consultants\n\nThe Group engages the services of independent and specialist remuneration consultants from time to time. Under the *Corporations Act 2001*, remuneration consultants must be engaged by the Non-Executive Directors and reporting of any remuneration recommendations must be made directly to the Remuneration Committee.\n\n#### Concern Action or Comment\n\nThe Company has benchmarked the issuing of LTIs to the Managing Director and other Key Management Personnel against all companies of comparable market position as part of a broader remuneration comparison using AON Hewitt / McDonald, a review of survey data from the Egan and Associates \"The KMP Report\" and validation from Godfrey's Remuneration Group. 
The findings confirm the level of remuneration, inclusive of performance rights, to be comparable to similarly experienced Managing Directors and other Key Management Personnel with companies of comparable market positioning within the industry.\n\nThe Company has sought to discuss key elements contained in the Remuneration Report with shareholders, shareholder representative groups and proxy advisory groups. Further details regarding the TSR Alpha™ benchmarking methodology are included in the LTI section of this Report.\n\nDeferred rights for the Managing Director were transitional with eligibility for performance rights only in the future.\n\nDetails of the STI and LTI Plans are provided later in this Report.", - "page_start": 51, - "page_end": 51, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "#### **2.2 Remuneration and Nominations Committee**\n\nThe remuneration and nominations committee is structured so that it:\n\n- Consists of a majority of independent Directors;\n- Is chaired by an independent Director; and\n- Has at least three members.\n\nThe responsibilities of the committee include recommendations to the Board about:\n\n- Remuneration practices and levels of Executives and Non-executive Directors;\n- The necessary and desirable competencies of Directors;\n- Review of board succession plans;\n- The development of a process for evaluation of the performance of the board, its committees and Directors; and,\n- The appointment and re-election of Directors.\n\nThe combined Remuneration and Nominations Committee consists of three independent Non-Executive Directors and reports its recommendations to the Board for approval. Formal minutes are kept of each meeting and submitted to the Board for review. The members of the Remuneration and Nominations Committee is listed on page 26 of the Directors' Report. 
A Remuneration and Nominations Committee charter is published on the Company's website.\n\nThe Board reviews the composition and skill sets of the Committee on a regular basis, and considers that the current composition, size and skills of the Committee to be appropriate.\n\nCurrently no formal description of the procedure for the selection and appointment of new Directors or the re-election of incumbent Directors exists as it is considered that due to the size of the Company that this process is effectively managed by the Board. However, this activity is discussed by the Committee from time to time.\n\n#### **2.3 Director Performance Review and Evaluation**\n\nIn fiscal year 2014, Sundance's Board regularly met, both formally and informally, to discuss Board matters and to ensure that the Board acts in an effective way. The Board is provided with information that allows it to discharge its duties effectively, and Non-Executive Directors can and do request additional information as necessary to make informed decisions. The skills, experience and expertise relevant to the position of Director held by each director in office at the date of the annual report can be found in the Directors' Report on pages 23 to 25.\n\nNo formal process exists for Directors to access continuing education, as this is not considered practicable for the size of the Company and the financial resources available. However the four Non-Executive Directors have wide experience of directors' duties and are involved in a variety of outside business and professional activities that add to their knowledge and professionalism.\n\nThe Company Secretary is D Connor. He is accountable to the Board through the Chairman and accessible to all Directors. 
The appointment and removal of the Company Secretary is a matter for decision by the Board as a whole.\n\n# **Principle 3: Promote Ethical and Responsible Decision-making**\n\n#### **3.1 Code of Conduct**\n\nThe Company has a Code of Conduct and Ethics which establishes the practices that Directors, management and staff must follow in order to comply with the law, meet shareholder expectations, maintain public confidence in the Company's integrity, and provide a process for reporting and investigating unethical practices. The Code of Conduct is available in the corporate governance section of Sundance's website.\n\n#### **3.2 Diversity**\n\nSundance believes it is important to maintain a diverse, empowered and inclusive workforce to gain valuable perspectives from people of different gender, race, religion, marital status, disability or national origin. Sundance management recruits on the basis of skills, qualifications, abilities and achievements of the individual. Sundance has published a Diversity policy which is available in the corporate governance section of Sundance's website.", - "page_start": 51, - "page_end": 51, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "# TABLE OF CONTENTS:\n\n- 1. General Language Tips to Get You Started\n- 2. Parts of Speech\n- 3. Punctuation\n- 4. Commonly Confused Words and Phrases\n- 5. Tips for Filling in Your College Registration Form\n- 6. Learn How to Summarise Your Study Material\n- 7. How to Ask for Help from Your Tutor\n- 8. Tips for Completing Your Written Assignments\n- 9. Tips for Answering Exam Questions\n- 10. Language Skills at Work How to Write a Cover Letter\n- 11. Language Skills at Work How to Write a Resignation Letter\n- 12. 
Language Skills at Work Sending E-mails to Your Colleagues", - "page_start": 2, - "page_end": 2, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### **3.4 Mobile work, home as workplace and domestic work**\n\nThe place of work — premises of the employer or any other place — is another major characteristic of working conditions, which significantly influences the risks and the preventive measures. This chapter takes a closer look at three types of work, that is, mobile work, private homes as workplace and domestic work, All pose — in a broad sense — similar challenges for OSH.77\n\nFor OSH, the major **question for all mobile and non-stationery work** is: **to what degree** does the OSH level at these workplaces' deviate **from the OSH level at stationary workplaces**? Current OSH legislation illustrates these difficulties: The Workplace Directive78 excludes several types of mobile work, and the Display screen equipment directive79 was issued in 1990 and does not reflect the variety and specific OSH issues of digital equipment development of the past 30 years. Both directives are currently under revision.\n\n**Mobile work** is a standard characteristic of work in the **construction and transport sector**, extreme for workers in the maritime and other long-distance and international transport sectors, often in tourism and also for certain categories of **sales personnel**, and often standard for qualified **craft workers** during service or construction of plants and installations and during maintenance.80\n\nTriggered by developments in digital and communication technologies, several new types of mobile work have developed. In principle, the place of work can be anywhere, in a car, train, hotel, at the premises of other employers, at remote office-like locations, or at the client's workplace or at private homes of clients; it is not 'place-bound'. 
Most of this mobile work still takes place in the contractual form of regular employment, but mobile work is also a major field for many new forms of new work contracts, triggered by the technological possibilities.\n\n**Traditional home-based work** consists of the production of small goods that — from a technical point of view — can be produced in private homes (clothes, artisan work and very repetitive work like sorting). This work is performed for an enterprise or a person contracted by the enterprise for the organisation of home-based work and is located at the homes of the workers. It might require extra technical equipment, but sometimes usual private equipment is sufficient. The traditional home-based work very probably has decreased to a low level, the quantity of this type of home-based work is not monitored at EU level.81 Regulation of OSH for such home-based work has a long tradition in OSH legislation, mostly aimed at achieving working conditions as similar as possible to the other employees in an enterprise, regarding wages, social protection, and safety and health.\n\n**Work at, from and in homes.** We can distinguish major types: **work at (own) home**, either as independent work (self-employed) or classical home-based work; **work from private home** embedded in daily routine work processes in an enterprise or institution; and **work in homes of others**. Long-term care work, domestic work and teaching are large categories of work in homes; the work is performed in the private homes of clients. Regarding work that is done at **home, from home and in homes**, the application of some basic OSH standards has to take into account the dominantly private character of a home. This triggers the question of **responsibility and supervision**: Who is responsible for risk assessment and prevention measures? 
Is a supervision of compliance by state authorities in private homes legally possible?\n\nThe craft workers who are doing **technical services in homes** of clients are statistically not counted as home workers but as workers at the premises of clients (Eurostat). In some important OSH aspects, it is similar to the work of the other professions; these service workers fulfil their work tasks in a private environment where the employer can hardly perform a risk assessment, for example, of the electrical appliances or safety of floors, handrails or roofs. The risk assessment is done by the worker on the spot, based on experience. This is similar to short-term care work.", - "page_start": 48, - "page_end": 48, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "#### First Paragraph\n\nIntroduce yourself, and explain why you are writing the letter. If you are responding to a job advertisement, state which advertisement you are responding to, and indicate where you found it.\n\n#### For example:\n\n\"I would like to apply for the position of Graphic Designer, as advertised in the Career Times on 1 March 2015.\"\n\nIf possible, mention a mutual contact or acquaintance.\n\nFor example:\n\n\"Samantha Stevens mentioned that you are looking for an experienced Graphic Designer with a keen interest in the fashion industry.\"\n\n#### Second Paragraph\n\nMention your qualifications, skills and experience, and relate them to the needs of the company. Give relevant examples of how you have used your skills in the past to perform similar tasks and responsibilities to those set out in the job description.\n\n#### Third Paragraph\n\nExplain why you want to work for this organisation in particular. Where relevant, explain any gaps in your CV. 
If you don't have the required academic qualifications, for example, you can explain how your practical work experience makes up for it.\n\n#### Fourth paragraph\n\nMention any documents or attachments that you have included with your cover letter, and state your availability for an interview.\n\n#### Close\n\nThank the recipient for taking the time to read your letter, and sign off with a professional greeting, such as \"Yours sincerely\" or \"Kind regards\", followed by your full name, telephone number and e-mail address.", - "page_start": 46, - "page_end": 46, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# CHAPTER 10:\n\n### LANGUAGE SKILLS AT WORK HOW TO WRITE A COVER LETTER\n\nIf you've ever applied for a job, you'll know that writing the cover letter is the most difficult part of almost any job application. Your cover letter creates the first impression, and often determines whether an employer will even look at your CV.\n\nYou need to use this opportunity to introduce yourself and your skills, and to set yourself apart from all the other candidates. You can also use this opportunity to explain any gaps in your CV, and to motivate why you are the right person for the job.\n\n### tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips", - "page_start": 44, - "page_end": 44, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "respect to a proposed appointee to the Board and the workings of the Board and its Committees are conveyed in interviews with the Chairman and induction procedures include access to appropriate executives in relation to details of the business of the Company.\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 31\n\nThe Chairman of the Board is the Chairman of the Nomination Committee. 
The current members of the Nomination Committee, all of whom are independent non-executive Directors, are Mr S Gerlach (Chairman), Mr P C Barnett and Mr G W McGregor.\n\n## **3. REVIEW OF BOARD AND EXECUTIVE PERFORMANCE**\n\nThe Board Guidelines provide that:\n\n- non-executive Directors are to be appointed on the basis that their nomination for re-election as a Director is subject to review and support by the Board;\n- there should be appropriate circumstances justifying reelection after a specified period of service as a Director; and\n- the contribution of the Board and of individual Directors is the subject of formal review and discussion on a biennial and annual basis, respectively.\n\nAs the biennial review of the Board and of its Committees was conducted by an independent consultant in 2003, no formal performance appraisal of the Board was conducted in 2004.\n\nPerformance evaluation of key executives is undertaken on a quarterly and annual basis by the CEO and summarised in presentation to the Remuneration Committee of the Board, both specifically for determination of remuneration and generally in relation to management succession planning for review by the Board.\n\n## **4. INDEMNITY, ACCESS TO INFORMATION AND INDEPENDENT PROFESSIONAL ADVICE**\n\nInformation in respect to indemnity and insurance arrangements for Directors and senior executives appears in the Directors' Statutory Report on page 49 of this Annual Report.\n\nThe Board Guidelines set out the circumstances and procedures pursuant to which a Director, in furtherance of his or her duties, may seek independent professional advice at the Company's expense. 
Those procedures require prior consultation with, and approval by, the Chairman and assurances as to the qualifications and reasonableness of the fees of the relevant expert and, under normal circumstances, the provision of the expert's advice to the Board.\n\nPursuant to a deed executed by the Company and each Director, a Director also has the right to have access to all documents which have been presented to meetings of the Board or to any Committee of the Board or otherwise made available to the Director whilst in office. This right continues for a term of seven years after ceasing to be a Director or such longer period as is necessary to determine relevant legal proceedings that commenced during that term.\n\n### **5. REMUNERATION**\n\nThe role, responsibilities and composition of the Remuneration Committee and details of\n\nthe Company's remuneration objectives and principles, nonexecutive Director remuneration and executive remuneration are set out on pages 37 to 40 of this Annual Report in the Directors' and Executives' Remuneration section, as well as in the Directors' Statutory Report and in Notes 18 and 26 of the Financial Statements.\n\nDetails of the nature and amount of the remuneration of:\n\n- the Directors; and\n- the Specified Executives;\n\nare set out on pages 37 to 40 of this Annual Report.\n\n#### **6. AUDIT COMMITTEE**\n\nThe role of the Audit Committee is documented in a Charter, approved by the Board. This Charter was revised in August 2004 in line with contemporary best practice, and can be found on the Company's website.\n\n#### **6.1 Composition of the Audit Committee**\n\nThe Committee is required to consist of no less than three members and to meet at least three times per year. 
All members must be independent, non-executive Directors and financially literate, with at least one member having past employment experience in finance and accounting, requisite professional certification in accounting or other comparable experience or background. The Chairman of the Board is precluded from being the Chairman of the Audit Committee.\n\nThe current members of the Audit Committee, all of whom are independent non-executive Directors, are: Mr G W McGregor (Chairman), Professor J Sloan\n\nand Mr R M Harding. The external auditors, CEO, Chief Financial Officer (\"CFO\"), Manager Risk and Audit, and Manager – Financial Planning and Analysis attend Committee meetings by invitation. There were 4 meetings held in 2004.\n\n## **6.2 Role of the Audit Committee**\n\nThe primary objective of the Audit Committee is to assist the Board to fulfil its corporate governance and oversight responsibilities related to financial accounting practices, external financial reporting, financial reporting, risk management and internal control, and the internal and external audit function.\n\nSpecifically, the role of the Audit Committee includes:\n\n- examining the accounting policies of the Company to determine whether they are appropriate and in accordance with generally accepted practices;\n- ensuring that truth and fairness is reflected in the preparation and publication of the Company's financial reports;\n- meeting regularly with the internal and external auditors to reinforce their respective independence and to determine the appropriateness of internal and external audit procedures;\n- reviewing the performance of the internal and external auditors and providing them with confidential access to the Board;\n- receiving from the external auditors a formal written statement delineating all", - "page_start": 32, - "page_end": 32, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "- 〉 is employed, or has previously been employed in an executive capacity by the Company, 
and there has not been a period of at least three years between ceasing such employment and serving on the Board;\n- 〉 has within the last three years been a principal of a material professional adviser or a material consultant to the Company, or an employee materially associated with the above mentioned adviser / consultant;\n- 〉 is a material supplier or customer of the Company, or an officer of or otherwise associated directly or indirectly with a material supplier or customer; and\n- 〉 has a material contractual relationship with the Company other than as a Director.\n\nThe concept of 'materiality' is considered from both the Company and the individual Director perspective. The determination of materiality requires consideration of both quantitative and qualitative elements. An item is presumed to be quantitatively immaterial if it is equal or less than 5% of the appropriate base amount. It is presumed to be material (unless there is qualitative evidence to the contrary) if it is equal to or greater than 10% of the appropriate base amount. 
Qualitative factors considered include whether a relationship is strategically important, the competitive landscape, the nature of the relationship and the contractual or other arrangements governing it and other factors.\n\n## Appointment of Directors\n\nNominations of new Directors, recommended by the Nomination Committee, are considered by the full Board.\n\nThe Nomination Committee employs external consultants to access a wide base of potential Directors, considering their range of skills and experience required in light of the:\n\n- 〉 current composition of the Board;\n- 〉 need for independence;\n- 〉 the Company's Diversity Policy;\n- 〉 strategic direction and progress of the Company; and\n- 〉 nature of the Company's business.\n\nThe Board assesses nominated Directors against a range of criteria including experience, professional expertise, personal qualities, potential conflicts of interest and their capacity to commit themselves to the Board's activities.\n\n## Performance Review of the Board and Senior Executives\n\nEach year the Board receives reports from management detailing interactions with and outlining the expressed views of the Company's shareholders. 
The Nomination Committee is responsible for evaluation of the Board, its committees and its key executives.\n\nPerformance evaluations of the Board, its committees, the individual Directors and key executives were undertaken in the 2013 financial year in accordance with the above processes.\n\nThe Managing Director undertakes an annual review of the performance of each Senior Executive against individual tasks and objectives.\n\n## Independent Professional Advice\n\nDirectors are able to access members of the management team at any time to request relevant information.\n\nIt is also Board policy that Directors may seek independent advice at the Company's expense.\n\n## Board Committees\n\nTo assist the Board in fulfilling its responsibilities, the Board has established three committees to consider certain issues and functions. These committees are as follows:\n\n- 〉 Audit Committee;\n- 〉 Remuneration Committee; and\n- 〉 Nomination Committee.\n\nEach committee operates under its own charter.\n\n## Audit Committee\n\nThe members of the Audit Committee as at the date of this Report are:\n\n- 〉 Mr Craig Carracher (Chairman of Audit Committee);\n- 〉 Mr Ross Smyth-Kirk; and\n- 〉 Mr Peter McAleer.\n\nThe Committee has appropriate financial expertise. All members of the Committee are financially literate and have an appropriate understanding of the industry in which the Company operates.\n\nThe Audit Committee's role is to assist the Board to fulfil its responsibilities associated with the Company's accounts, its external financial reporting, its internal control structure, risk\n\nmanagement systems and audit function. 
The primary functions of the Audit Committee are to:\n\n- 〉 review the financial information provided by the Board to shareholders and other parties ensuring that it is true and fair and complies with relevant accounting standards;\n- 〉 ensure that corporate risk management policies and internal controls are in place and are maintained in accordance with appropriate standards and statutory requirements;\n- 〉 oversee and evaluate the quality of the audits conducted by the external auditors;\n- 〉 provide for open communication between the external auditors and the Board for the exchange of views and information; and\n- 〉 recommend to the Board the nomination and remuneration of the external auditors and ensure their independence and integrity.\n\nIn fulfilling its responsibilities, the Audit Committee has rights of access to management and to auditors (external and internal) without management present and may seek explanations and additional information.\n\nThe Audit Committee met twice during the 2013 financial year.\n\nThe Audit Committee operates in accordance with a charter published in the 'Corporate Governance' section of the Company's website.\n\n## Auditor Independence and Engagement\n\nThe charter adopted by the Audit Committee confirms its role in assisting the Board in respect of the appointment, compensation, retention and oversight of the Company's external auditors. The external auditors are required to confirm that they have maintained their independence in accordance with the *Corporations Act 2001* (Cth) and the rules of professional accounting bodies.\n\nThe performance of the external auditor is reviewed annually and applications for tender of external audit services are requested when deemed appropriate, taking into consideration assessment of performance, existing value and tender costs.\n\nAn analysis of fees paid to the external auditors, including a breakdown of fees for non-audit services, is provided in the Directors' Report. 
It is the policy of the external auditors to provide an annual declaration of their independence to the Audit Committee.", - "page_start": 36, - "page_end": 36, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0770.pdf", - "query": "Where are the peaks of the VHE blazars ?", - "target_page": 1, - "target_passage": " VHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "### **3. VERITAS Blazar KSP**\n\nVERITAS observes for ∼750 h and ∼250 h each year during periods of astronomical darkness and partial moonlight, respectively. The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- A VHE blazar discovery program (∼200 h / yr): Each year ∼10 targets are selected to receive ∼10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- A target-of-opportunity (ToO) observation program (∼50 h / yr): VERITAS blazar observations can be triggered by either a VERI-TAS blazar discovery, a VHE flaring alert (>2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- Multi-wavelength (MWL) studies of VHE blazars (∼50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. 
Swift) and are triggered by a VERITAS discovery or flaring alert.\n- Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n# **4. Blazar Discovery Program**\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ-rays. The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles (−8 ◦ < δ < 72◦ ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0.3. To further the study of the\n\nEBL a few objects having a large (z > 0.3) are also included in the target list. The target list includes:\n\n- All nearby (z < 0.3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- The X-ray brightest HBL (z < 0.3) in the recent Sedentary [8] and ROXA [9] surveys.\n- Four distant (z > 0.3) BL Lac objects recommended by [5, 10].\n- Several FSRQ recommended as potential VHE emitters in [6, 11].\n- All nearby (z < 0.3) blazars detected by EGRET [12].\n- All nearby (z < 0.3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- All sources (|b| > 10◦ ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ-ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERI-TAS blazar discovery program.\n\n### **5. VERITAS AGN Detections**\n\nVERITAS has detected VHE γ-ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. 
These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n### **5.1. Recent VERITAS Blazar Discoveries**\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES 0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHE emission from 3C 66A was discovered by VER-ITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (ΓVHE ∼ 4.1). RGB J0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "# arXiv:1001.0770v1 [astro-ph.HE] 5 Jan 2010\n\n# **VERITAS Observations of Blazars**\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E>100 GeV) γ-ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ-ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼30 known to emit VHE photons. 
More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ-rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n### **1. Introduction**\n\nActive galactic nuclei are the most numerous class of identified VHE γ-ray sources. These objects emit non-thermal radiation across ∼20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ-ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ-rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH (∼2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. 
The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ-rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## **2. VERITAS**\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ-rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. The performance metrics of VERITAS include an energy threshold of ∼100 GeV, an energy resolution of ∼15%, an angular resolution of ∼0.1◦ , and a sensitivity yielding a 5σ detection of a 1% Crab Nebula flux object in <30 hours1 . VERITAS has an active maintenance program (e.g. 
frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.\n\n1A VERITAS telescope was relocated during Summer 2009, increasing the array's sensitivity by a factor ∼1.3.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "tion of correlated VHE and X-ray flux variability, as well as correlated spectral hardening in both the VHE and X-ray bands. The VHE MWL observations were performed in both \"quiescent\" and flaring states for some of the observed blazars. For the observed HBL objects, the SEDs can be well described by a simple SSC model in both high and low states. However, an additional external Compton component is necessary to adequately fit the SEDs of the IBL objects.\n\nThe Fermi-LAT is already having a significant impact on the blazar KSP. In future seasons, the VER-ITAS blazar discovery program will focus its discovery program on hard-spectrum blazars detected by Fermi-LAT, and will likely have a greater focus on high-risk/high-reward objects at larger redshifts (0.3 < z < 0.7). In addition, the number of VHE blazars studied in pre-planned MWL campaigns will increase as data from the Fermi-LAT will be publicly available. In particular, the extensive pre-planned MWL campaigns will focus on objects that are noteworthy for the impact their data may have on understanding the EBL. The simultaneous observations of blazars by VERITAS and Fermi-LAT will completely resolve the higher-energy SED peak, often for the first time, enabling unprecedented constraints on the underlying blazar phenomena to be derived.\n\n### **Acknowledgments**\n\nThis research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. 
We acknowledge the excellent work of the technical support staff at the FLWO and the collaborating institutions in the construction and operation of the instrument.\n\n### **References**\n\n- [1] F. Aharonian et al. 2007, ApJ, 664, L71\n- [2] F. Aharonian et al. 2006, Nature, 440, 1018\n- [3] F. Aharonian et al. 2007, A&A, 475, L9\n- [4] J. Holder, et al. 2008, AIPC, 1085, 657\n- [5] L. Costamante & G. Ghisellini 2002, A&A, 384, 56\n- [6] E.S. Perlman 2000, AIPC, 515, 53\n- [7] F.W. Stecker et al. 1996, ApJ, 473, L75\n- [8] P. Giommi et al. 2005, A&A, 434, 385\n- [9] S. Turriziani et al. 2007, A&A, 472, 699\n- [10] L. Costamante 2006, arXiv:0612709\n- [11] P. Padovani et al. 2002, ApJ, 581, 895\n- [12] R. Muhkerjee et al. 2001, AIPC, 558, 324\n- [13] A.A. Abdo et al. 2009, ApJ, 700, 597\n- [14] V.A. Acciari et al. 2008, ApJ, 684, L73\n- [15] V.A. Acciari et al. 2009, ApJ, 707, 612\n- [16] V.A. Acciari et al. 2009, ApJ, 690, L126\n- [17] V.A. Acciari et al. 2009, ApJ, 693, L104\n- [18] L.C. Reyes 2009, arXiv:0907.5175\n- [19] R.A. Ong 2009, ATel, 1941\n- [20] R.A. Ong et al. 2009, ATel, 2272\n- [21] V.A. Acciari et al. 2009, ApJ, 708, L100\n- [22] R.A. Ong et al. 2009, ATel, 2301\n- [23] R.A. Ong et al. 2009, ATel, 2260\n- [24] R.A. Ong et al. 2009, ATel, 2309\n- [25] W. Benbow 2009, arXiv:0908.1412\n- [26] V.A. Acciari et al. 2009, ApJ, submitted\n- [27] V.A. Acciari et al. 2009, ApJ, 695, 1370\n- [28] V.A. Acciari et al. 2009, ApJ, in press\n- [29] J. Grube 2009, arXiv:0907.4862", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. 
(Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼2% Crab flux.\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n- 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n- 1ES 1218+304: This HBL flared during VER-ITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solidfooting [27, 28].\n- 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n- W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an external-Compton (EC) component in an SSC interpretation.\n- 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008[17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n- Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n- RGB J0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. 
The inclusion of an external Compton component does not improve the fit.\n- PKS 1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n### **8. Conclusions**\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than a 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ-rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "# **Submillimeter Variability and the Gamma-ray Connection in** *Fermi* **Blazars**\n\nA. Strom *Univ. of Arizona, AZ 85721, USA* A. Siemiginowska, M. Gurwell, B. 
Kelly *CfA, MA 02138, USA*\n\nWe present multi-epoch observations from the Submillimeter Array (SMA) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August–October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## **1. INTRODUCTION**\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. 
Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ-ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ-ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submillimeter Array1 (SMA) at 1mm and 850µm, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ-ray indices and luminosities.\n\n## **2.** *SMA* **BLAZARS**\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. 
The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850µm windows, achieving spatial resolution as fine as 0.25\" at 850µm. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and\n\n1The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.\n\n2http://sma1.sma.hawaii.edu/callist/callist.html", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "| Object | | Class Redshift |\n| --- | --- | --- |\n| M 87 | FR I | 0.004 |\n| Mkn 421 | HBL | 0.030 |\n| Mkn 501 | HBL | 0.034 |\n| 1ES 2344+514 | HBL | 0.044 |\n| 1ES 1959+650 | HBL | 0.047 |\n| W Comae† | IBL | 0.102 |\n| RGB J0710+591† | HBL | 0.125 |\n| H 1426+428 | HBL | 0.129 |\n| 1ES 0806+524† | HBL | 0.138 |\n| 1ES 0229+200 | HBL | 0.139 |\n| 1ES 1218+304 | HBL | 0.182 |\n| RBS 0413† | HBL | 0.190 |\n| 1ES 0502+675† | HBL | 0.341 |\n| 3C 66A† | IBL | 0.444? |\n| PKS 1424+240† | IBL | ? |\n| VER J0521+211† | ? | ? |\n\nTable I VERITAS AGN Detections. The only non-blazar object is the radio galaxy M 87. 
The blazars discovered at VHE by VERITAS are marked with a dagger.\n\n(∼5.5σ; 3% Crab flux above 300 GeV; ΓVHE ∼ 2.7) during VERITAS observations from December 2008 to March 2009. The initial announcement of the VHE discovery [19] led to its discovery above 1 GeV in the Fermi-LAT data using a special analysis. RBS 0413, a relatively distant HBL (z=0.19), was observed for 16 h good-quality live time in 2008-092 . These data resulted in the discovery of VHE gamma-rays (>270γ, ∼6σ) at a flux (>200 GeV) of ∼2% of the Crab Nebula flux. The discovery [20] was announced simultaneously with the LAT MeV-GeV detection. The VHE and other MWL observations, including Fermi-LAT data, for each of these three sources will be the subject of a joint publication involving both the VERI-TAS and LAT collaborations.\n\n### **5.2. Discoveries Motivated by Fermi-LAT**\n\nThe successful VHE discovery observations by VERITAS of three blazars was motivated primarily by results from the first year of LAT data taking. In particular, the VHE detections of PKS 1424+240 [21] and 1ES 0502+675 [22] were the result of VERITAS observations triggered by the inclusion of these objects in the Fermi-LAT Bright AGN List [13]. The former is only the third IBL known to emit VHE gammarays, and the latter is the most distant BL Lac object (z = 0.341) detected in the VHE band. In addition, VER J0521+211, likely associated with the radio-loud AGN RGB J0521.8+2112, was detected by VERTAS in ∼4 h of observations in October 2009 [23]. These observations were motivated by its identification as a >30 GeV γ-ray source in the public Fermi-LAT data. Its VHE flux is 5% of the Crab Nebula flux, placing it among the brightest VHE blazars detected in recent years. VERITAS later observed even brighter VHE flaring from VER J0521+211 in November 2009 [24], leading to deeper VHE observations.\n\n### **6. Blazars Upper Limits**\n\nMore than 50 VHE blazar candidates were observed by VERITAS between September 2007 and June 2009. 
The total exposure on the 49 non-detected candidates is ∼305 h live time (average of 6.2 h per candidate). Approximately 55% of the total exposure is split amongst the 27 observed HBL. The remainder is divided amongst the 8 IBL (26%), 5 LBL (6%), and 9 FSRQ (13%). There are no clear indications of significant VHE γ-ray emission from any of these 49 blazars [25]. However, the observed significance distribution is clearly skewed towards positive values (see Figure 1). A stacking analysis performed on the entire data sample shows an overall excess of 430 γ-rays, corresponding to a statistical significance of 4.8σ, observed from the directions of the candidate blazars. The IBL and HBL targets make up 96% of the observed excess. Observations of these objects also comprise ∼80% of the total exposure. An identical stacked analysis of all the extragalactic non-blazar targets observed, but not clearly detected (>5σ), by VERITAS does not show a significant excess (∼120 h exposure). The stacked excess persists using alternate methods for estimating the background at each blazar location, and with different event selection criteria (e.g. soft cuts optimized for sources with ΓVHE > 4). The distribution of VHE flux upper limits is shown in Figure 1. These 49 VHE flux upper limits are generally the most-constraining ever reported for these objects.\n\n# **7. Multi-wavelength Studies of VHE Blazars**\n\nDuring the first three seasons of VERITAS observations, pre-planned extensive MWL campaigns were organized for three blazars 1ES 2344+514 (2007-08), 1ES 1218+304 (2008-09) and 1ES 0229+200 (2009- 10 - ongoing). In addition, numerous ToO MWLobservation campaigns were performed. These include campaigns for every blazar/AGN discovered by VER-ITAS, and all include Swift (XRT and UVOT) data. 
All MWL campaigns on the VHE blazars discovered\n\n2RBS 0413 was observed further by VERITAS in Fall 2009.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850µm observations, and the open triangles represent the 1mm observations.\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0.03 ≤ z ≤ 2.19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## **2.1. Submillimeter Properties**\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. 
For these objects, submillimeter luminosities are calculated in the standard way:\n\n$$\\nu_{e}L_{\\nu_{e}}=4\\pi D_{\\mathrm{L}}^{2}{\\frac{\\nu_{\\mathrm{obs}}F_{\\mathrm{obs}}}{1+z}},\\qquad\\qquad(1)$$\n\nwhere DL is the luminosity distance, νobs is the frequency of the observed band, and Fobs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850µm), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no signicant difference in the class distributions in either band; the \"tail\" to the left is populated by objects with errors larger than the intrinsic variability.\n\nflux (in erg cm−2 s −1 Hz−1 ) over the three month period. We adopt a lambda cold dark matter cosmology with values of H0 = 71 km s−1 Mpc−1 , ΩM = 0.27, and Λ = 0.73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasisimultaneous with the Fermi observations. To be consistent with the use of αγ, we define spectral energy index as νFν = ν −αS and calculate αS from the average of the energy spectral indices over the corresponding three months. We only calculate αS for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850µm during this time frame.\n\n## **3. VARIABILITY ANALYSIS**\n\n## **3.1. Variability Index**\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\n$$V\\,=\\,\\frac{(F_{\\rm max}-\\sigma_{F_{\\rm max}})-(F_{\\rm min}+\\sigma_{F_{\\rm min}})}{(F_{\\rm max}-\\sigma_{F_{\\rm max}})+(F_{\\rm min}+\\sigma_{F_{\\rm min}})}\\tag{2}$$\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 5: Ratio of γ-ray luminosity to submillimeter luminosity in the 1mm band. 
The location of an object in this plot should be directly correlated with its blazar \"state\", with FSRQs occupying the upper right and BL Lacs the lower left. Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n- BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n- Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τrest < 500 days.\n- The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n- FSRQs exhibit higher ratios of γ-ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL Lacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ-ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τrest with physical timescales such as the synchrotron cooling timescale. 
These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## **Acknowledgments**\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "## **References**\n\n- [1] M. Sikora and G. Madejski, in American Institute of Physics Conference Series, edited by F. A. Aharonian and H. J. V¨olk (2001), vol. 558 of American Institute of Physics Conference Series, pp. 275–288.\n- [2] M. Sikora, in Blazar Demographics and Physics, edited by P. Padovani and C. M. Urry (2001), vol. 227 of Astronomical Society of the Pacific Conference Series, pp. 95–104.\n- [3] J. A. Stevens, S. J. Litchfield, E. I. Robson, D. H. Hughes, W. K. Gear, H. Terasranta, E. Valtaoja, and M. Tornikoski, ApJ 437, 91 (1994).\n- [4] P. T. P. Ho, J. M. Moran, and K. Y. Lo, ApJl 616, L1 (2004).\n- [5] M. A. Gurwell, A. B. Peck, S. R. Hostler, M. R. Darrah, and C. A. Katz, in From Z-Machines to ALMA: (Sub)Millimeter Spectroscopy of Galaxies, edited by A. J. Baker, J. Glenn, A. I. Harris,\n\nJ. G. Mangum, and M. S. Yun (2007), vol. 375 of Astronomical Society of the Pacific Conference Series, p. 234.\n\n- [6] S. E. Healey, R. W. Romani, G. Cotter, P. F. Michelson, E. F. Schlafly, A. C. S. Readhead, P. Giommi, S. Chaty, I. A. Grenier, and L. C. Weintraub, ApJS 175, 97 (2008).\n- [7] A. A. Abdo, M. Ackermann, M. Ajello, W. B. Atwood, M. Axelsson, L. Baldini, J. Ballet, G. Barbiellini, D. Bastieri, B. M. Baughman, et al., ApJ 700, 597 (2009).\n- [8] T. 
Hovatta, E. Nieppola, M. Tornikoski, E. Valtaoja, M. F. Aller, and H. D. Aller, A&A 485, 51 (2008).\n- [9] B. C. Kelly, J. Bechtold, and A. Siemiginowska, ApJ 698, 895 (2009).\n- [10] M. Sikora, R. Moderski, and G. M. Madejski, ApJ 675, 71 (2008).", - "page_start": 5, - "page_end": 5, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 4: The γ-ray index versus submillimeter index plane. The blazars fall more steeply in the γ-rays than in the submillimeter band, where most are, in fact, rising. This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around αS ∼ 0.\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ-ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vis versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ-ray component than during its \"low state\". 3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. 
The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit\n\neConf C091122\n\nlow luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lacs objects is that there has been a dramatic increase in γ-ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## **5. CONCLUSIONS**\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0770.pdf", - "query": "What are the blazars observed in the discovery program ?", - "target_page": 2, - "target_passage": "The blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. How ever, the program also includes IBLs (intermediate peaked) and LBLs (low-peaked), as well as flat spec trum radio quasars (FSRQs), in an attempt to in crease the types of blazars known to emit VHE γ-rays.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "### **3. VERITAS Blazar KSP**\n\nVERITAS observes for ∼750 h and ∼250 h each year during periods of astronomical darkness and partial moonlight, respectively. 
The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- A VHE blazar discovery program (∼200 h / yr): Each year ∼10 targets are selected to receive ∼10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- A target-of-opportunity (ToO) observation program (∼50 h / yr): VERITAS blazar observations can be triggered by either a VERI-TAS blazar discovery, a VHE flaring alert (>2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- Multi-wavelength (MWL) studies of VHE blazars (∼50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n# **4. Blazar Discovery Program**\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ-rays. 
The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles (−8 ◦ < δ < 72◦ ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0.3. To further the study of the\n\nEBL a few objects having a large (z > 0.3) are also included in the target list. The target list includes:\n\n- All nearby (z < 0.3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- The X-ray brightest HBL (z < 0.3) in the recent Sedentary [8] and ROXA [9] surveys.\n- Four distant (z > 0.3) BL Lac objects recommended by [5, 10].\n- Several FSRQ recommended as potential VHE emitters in [6, 11].\n- All nearby (z < 0.3) blazars detected by EGRET [12].\n- All nearby (z < 0.3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- All sources (|b| > 10◦ ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ-ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERI-TAS blazar discovery program.\n\n### **5. VERITAS AGN Detections**\n\nVERITAS has detected VHE γ-ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n### **5.1. Recent VERITAS Blazar Discoveries**\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES 0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. 
Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHE emission from 3C 66A was discovered by VER-ITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (ΓVHE ∼ 4.1). RGB J0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "tion of correlated VHE and X-ray flux variability, as well as correlated spectral hardening in both the VHE and X-ray bands. The VHE MWL observations were performed in both \"quiescent\" and flaring states for some of the observed blazars. For the observed HBL objects, the SEDs can be well described by a simple SSC model in both high and low states. However, an additional external Compton component is necessary to adequately fit the SEDs of the IBL objects.\n\nThe Fermi-LAT is already having a significant impact on the blazar KSP. In future seasons, the VER-ITAS blazar discovery program will focus its discovery program on hard-spectrum blazars detected by Fermi-LAT, and will likely have a greater focus on high-risk/high-reward objects at larger redshifts (0.3 < z < 0.7). In addition, the number of VHE blazars studied in pre-planned MWL campaigns will increase as data from the Fermi-LAT will be publicly available. In particular, the extensive pre-planned MWL campaigns will focus on objects that are noteworthy for the impact their data may have on understanding the EBL. 
The simultaneous observations of blazars by VERITAS and Fermi-LAT will completely resolve the higher-energy SED peak, often for the first time, enabling unprecedented constraints on the underlying blazar phenomena to be derived.\n\n### **Acknowledgments**\n\nThis research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. We acknowledge the excellent work of the technical support staff at the FLWO and the collaborating institutions in the construction and operation of the instrument.\n\n### **References**\n\n- [1] F. Aharonian et al. 2007, ApJ, 664, L71\n- [2] F. Aharonian et al. 2006, Nature, 440, 1018\n- [3] F. Aharonian et al. 2007, A&A, 475, L9\n- [4] J. Holder, et al. 2008, AIPC, 1085, 657\n- [5] L. Costamante & G. Ghisellini 2002, A&A, 384, 56\n- [6] E.S. Perlman 2000, AIPC, 515, 53\n- [7] F.W. Stecker et al. 1996, ApJ, 473, L75\n- [8] P. Giommi et al. 2005, A&A, 434, 385\n- [9] S. Turriziani et al. 2007, A&A, 472, 699\n- [10] L. Costamante 2006, arXiv:0612709\n- [11] P. Padovani et al. 2002, ApJ, 581, 895\n- [12] R. Muhkerjee et al. 2001, AIPC, 558, 324\n- [13] A.A. Abdo et al. 2009, ApJ, 700, 597\n- [14] V.A. Acciari et al. 2008, ApJ, 684, L73\n- [15] V.A. Acciari et al. 2009, ApJ, 707, 612\n- [16] V.A. Acciari et al. 2009, ApJ, 690, L126\n- [17] V.A. Acciari et al. 2009, ApJ, 693, L104\n- [18] L.C. Reyes 2009, arXiv:0907.5175\n- [19] R.A. Ong 2009, ATel, 1941\n- [20] R.A. Ong et al. 2009, ATel, 2272\n- [21] V.A. Acciari et al. 2009, ApJ, 708, L100\n- [22] R.A. Ong et al. 2009, ATel, 2301\n- [23] R.A. Ong et al. 2009, ATel, 2260\n- [24] R.A. Ong et al. 2009, ATel, 2309\n- [25] W. Benbow 2009, arXiv:0908.1412\n- [26] V.A. Acciari et al. 2009, ApJ, submitted\n- [27] V.A. Acciari et al. 2009, ApJ, 695, 1370\n- [28] V.A. Acciari et al. 2009, ApJ, in press\n- [29] J. 
Grube 2009, arXiv:0907.4862", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0770.pdf" - }, - { - "text": "# arXiv:1001.0770v1 [astro-ph.HE] 5 Jan 2010\n\n# **VERITAS Observations of Blazars**\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E>100 GeV) γ-ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ-ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ-rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n### **1. Introduction**\n\nActive galactic nuclei are the most numerous class of identified VHE γ-ray sources. These objects emit non-thermal radiation across ∼20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ-ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. 
Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ-rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH (∼2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ-rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. 
Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## **2. VERITAS**\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ-rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. The performance metrics of VERITAS include an energy threshold of ∼100 GeV, an energy resolution of ∼15%, an angular resolution of ∼0.1◦ , and a sensitivity yielding a 5σ detection of a 1% Crab Nebula flux object in <30 hours1 . VERITAS has an active maintenance program (e.g. frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.\n\n1A VERITAS telescope was relocated during Summer 2009, increasing the array's sensitivity by a factor ∼1.3.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "# **Submillimeter Variability and the Gamma-ray Connection in** *Fermi* **Blazars**\n\nA. Strom *Univ. of Arizona, AZ 85721, USA* A. Siemiginowska, M. Gurwell, B. Kelly *CfA, MA 02138, USA*\n\nWe present multi-epoch observations from the Submillimeter Array (SMA) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. 
Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August–October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## **1. INTRODUCTION**\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. 
The high energy γ-ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ-ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submillimeter Array1 (SMA) at 1mm and 850µm, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ-ray indices and luminosities.\n\n## **2.** *SMA* **BLAZARS**\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850µm windows, achieving spatial resolution as fine as 0.25\" at 850µm. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. 
Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and\n\n1The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.\n\n2http://sma1.sma.hawaii.edu/callist/callist.html", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850µm observations, and the open triangles represent the 1mm observations.\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0.03 ≤ z ≤ 2.19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## **2.1. Submillimeter Properties**\n\nSubmillimeter Luminosities. 
Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. For these objects, submillimeter luminosities are calculated in the standard way:\n\n$$\\nu_{e}L_{\\nu_{e}}=4\\pi D_{\\mathrm{L}}^{2}{\\frac{\\nu_{\\mathrm{obs}}F_{\\mathrm{obs}}}{1+z}},\\qquad\\qquad(1)$$\n\nwhere DL is the luminosity distance, νobs is the frequency of the observed band, and Fobs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850µm), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no signicant difference in the class distributions in either band; the \"tail\" to the left is populated by objects with errors larger than the intrinsic variability.\n\nflux (in erg cm−2 s −1 Hz−1 ) over the three month period. We adopt a lambda cold dark matter cosmology with values of H0 = 71 km s−1 Mpc−1 , ΩM = 0.27, and Λ = 0.73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasisimultaneous with the Fermi observations. To be consistent with the use of αγ, we define spectral energy index as νFν = ν −αS and calculate αS from the average of the energy spectral indices over the corresponding three months. We only calculate αS for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850µm during this time frame.\n\n## **3. VARIABILITY ANALYSIS**\n\n## **3.1. Variability Index**\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\n$$V\\,=\\,\\frac{(F_{\\rm max}-\\sigma_{F_{\\rm max}})-(F_{\\rm min}+\\sigma_{F_{\\rm min}})}{(F_{\\rm max}-\\sigma_{F_{\\rm max}})+(F_{\\rm min}+\\sigma_{F_{\\rm min}})}\\tag{2}$$\n\nFigure 2 shows the distribution for the SMA blazars. 
Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "| Object | | Class Redshift |\n| --- | --- | --- |\n| M 87 | FR I | 0.004 |\n| Mkn 421 | HBL | 0.030 |\n| Mkn 501 | HBL | 0.034 |\n| 1ES 2344+514 | HBL | 0.044 |\n| 1ES 1959+650 | HBL | 0.047 |\n| W Comae† | IBL | 0.102 |\n| RGB J0710+591† | HBL | 0.125 |\n| H 1426+428 | HBL | 0.129 |\n| 1ES 0806+524† | HBL | 0.138 |\n| 1ES 0229+200 | HBL | 0.139 |\n| 1ES 1218+304 | HBL | 0.182 |\n| RBS 0413† | HBL | 0.190 |\n| 1ES 0502+675† | HBL | 0.341 |\n| 3C 66A† | IBL | 0.444? |\n| PKS 1424+240† | IBL | ? |\n| VER J0521+211† | ? | ? |\n\nTable I VERITAS AGN Detections. The only non-blazar object is the radio galaxy M 87. The blazars discovered at VHE by VERITAS are marked with a dagger.\n\n(∼5.5σ; 3% Crab flux above 300 GeV; ΓVHE ∼ 2.7) during VERITAS observations from December 2008 to March 2009. The initial announcement of the VHE discovery [19] led to its discovery above 1 GeV in the Fermi-LAT data using a special analysis. RBS 0413, a relatively distant HBL (z=0.19), was observed for 16 h good-quality live time in 2008-092 . These data resulted in the discovery of VHE gamma-rays (>270γ, ∼6σ) at a flux (>200 GeV) of ∼2% of the Crab Nebula flux. The discovery [20] was announced simultaneously with the LAT MeV-GeV detection. The VHE and other MWL observations, including Fermi-LAT data, for each of these three sources will be the subject of a joint publication involving both the VERI-TAS and LAT collaborations.\n\n### **5.2. Discoveries Motivated by Fermi-LAT**\n\nThe successful VHE discovery observations by VERITAS of three blazars was motivated primarily by results from the first year of LAT data taking. In particular, the VHE detections of PKS 1424+240 [21] and 1ES 0502+675 [22] were the result of VERITAS observations triggered by the inclusion of these objects in the Fermi-LAT Bright AGN List [13]. 
The former is only the third IBL known to emit VHE gammarays, and the latter is the most distant BL Lac object (z = 0.341) detected in the VHE band. In addition, VER J0521+211, likely associated with the radio-loud AGN RGB J0521.8+2112, was detected by VERTAS in ∼4 h of observations in October 2009 [23]. These observations were motivated by its identification as a >30 GeV γ-ray source in the public Fermi-LAT data. Its VHE flux is 5% of the Crab Nebula flux, placing it among the brightest VHE blazars detected in recent years. VERITAS later observed even brighter VHE flaring from VER J0521+211 in November 2009 [24], leading to deeper VHE observations.\n\n### **6. Blazars Upper Limits**\n\nMore than 50 VHE blazar candidates were observed by VERITAS between September 2007 and June 2009. The total exposure on the 49 non-detected candidates is ∼305 h live time (average of 6.2 h per candidate). Approximately 55% of the total exposure is split amongst the 27 observed HBL. The remainder is divided amongst the 8 IBL (26%), 5 LBL (6%), and 9 FSRQ (13%). There are no clear indications of significant VHE γ-ray emission from any of these 49 blazars [25]. However, the observed significance distribution is clearly skewed towards positive values (see Figure 1). A stacking analysis performed on the entire data sample shows an overall excess of 430 γ-rays, corresponding to a statistical significance of 4.8σ, observed from the directions of the candidate blazars. The IBL and HBL targets make up 96% of the observed excess. Observations of these objects also comprise ∼80% of the total exposure. An identical stacked analysis of all the extragalactic non-blazar targets observed, but not clearly detected (>5σ), by VERITAS does not show a significant excess (∼120 h exposure). The stacked excess persists using alternate methods for estimating the background at each blazar location, and with different event selection criteria (e.g. soft cuts optimized for sources with ΓVHE > 4). 
The distribution of VHE flux upper limits is shown in Figure 1. These 49 VHE flux upper limits are generally the most-constraining ever reported for these objects.\n\n# **7. Multi-wavelength Studies of VHE Blazars**\n\nDuring the first three seasons of VERITAS observations, pre-planned extensive MWL campaigns were organized for three blazars 1ES 2344+514 (2007-08), 1ES 1218+304 (2008-09) and 1ES 0229+200 (2009- 10 - ongoing). In addition, numerous ToO MWLobservation campaigns were performed. These include campaigns for every blazar/AGN discovered by VER-ITAS, and all include Swift (XRT and UVOT) data. All MWL campaigns on the VHE blazars discovered\n\n2RBS 0413 was observed further by VERITAS in Fall 2009.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 5: Ratio of γ-ray luminosity to submillimeter luminosity in the 1mm band. The location of an object in this plot should be directly correlated with its blazar \"state\", with FSRQs occupying the upper right and BL Lacs the lower left. 
Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n- BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n- Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τrest < 500 days.\n- The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n- FSRQs exhibit higher ratios of γ-ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL Lacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ-ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τrest with physical timescales such as the synchrotron cooling timescale. These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## **Acknowledgments**\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. 
Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼2% Crab flux.\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n- 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n- 1ES 1218+304: This HBL flared during VER-ITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solidfooting [27, 28].\n- 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n- W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. 
Modeling of the SED is improved by including an external-Compton (EC) component in an SSC interpretation.\n- 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008[17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n- Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n- RGB J0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n- PKS 1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n### **8. Conclusions**\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than a 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ-rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. 
The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 4: The γ-ray index versus submillimeter index plane. The blazars fall more steeply in the γ-rays than in the submillimeter band, where most are, in fact, rising. This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around αS ∼ 0.\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ-ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vis versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ-ray component than during its \"low state\". 3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. 
The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit\n\neConf C091122\n\nlow luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lacs objects is that there has been a dramatic increase in γ-ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## **5. CONCLUSIONS**\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "As you notice, the migration steps are just two lines and can be automated in many ways. Open the database and on the Aggregation tab, look for the document that was inserted in the cluster on-premises. The result is shown in Figure 7-15.\n\n*Figure 7-15 Data inserted on-premises seen at the public cloud MongoDB*\n\nFor a seamless experience, a network solution that knows where the application is running must be in place. 
For more information, see Appendix C, \"Seamless application movement across multicloud environments\" on page 241.", - "page_start": 204, - "page_end": 204, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0770.pdf", - "query": "How many VHE blazar candidates were observed by VERITAS between September 2007 andJune 2009 ?", - "target_page": 3, - "target_passage": "More than 50 VHE blazar candidates were observed by VERITAS betweenSeptember 2007 andJune 2009.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "# arXiv:1001.0770v1 [astro-ph.HE] 5 Jan 2010\n\n# **VERITAS Observations of Blazars**\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E>100 GeV) γ-ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ-ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ-rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n### **1. Introduction**\n\nActive galactic nuclei are the most numerous class of identified VHE γ-ray sources. These objects emit non-thermal radiation across ∼20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. 
A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ-ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ-rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH (∼2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. 
They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ-rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## **2. VERITAS**\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ-rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. The performance metrics of VERITAS include an energy threshold of ∼100 GeV, an energy resolution of ∼15%, an angular resolution of ∼0.1◦ , and a sensitivity yielding a 5σ detection of a 1% Crab Nebula flux object in <30 hours1 . VERITAS has an active maintenance program (e.g. frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.\n\n1A VERITAS telescope was relocated during Summer 2009, increasing the array's sensitivity by a factor ∼1.3.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "### **3. VERITAS Blazar KSP**\n\nVERITAS observes for ∼750 h and ∼250 h each year during periods of astronomical darkness and partial moonlight, respectively. 
The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- A VHE blazar discovery program (∼200 h / yr): Each year ∼10 targets are selected to receive ∼10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- A target-of-opportunity (ToO) observation program (∼50 h / yr): VERITAS blazar observations can be triggered by either a VERI-TAS blazar discovery, a VHE flaring alert (>2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- Multi-wavelength (MWL) studies of VHE blazars (∼50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n# **4. Blazar Discovery Program**\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ-rays. 
The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles (−8 ◦ < δ < 72◦ ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0.3. To further the study of the\n\nEBL a few objects having a large (z > 0.3) are also included in the target list. The target list includes:\n\n- All nearby (z < 0.3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- The X-ray brightest HBL (z < 0.3) in the recent Sedentary [8] and ROXA [9] surveys.\n- Four distant (z > 0.3) BL Lac objects recommended by [5, 10].\n- Several FSRQ recommended as potential VHE emitters in [6, 11].\n- All nearby (z < 0.3) blazars detected by EGRET [12].\n- All nearby (z < 0.3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- All sources (|b| > 10◦ ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ-ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERI-TAS blazar discovery program.\n\n### **5. VERITAS AGN Detections**\n\nVERITAS has detected VHE γ-ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n### **5.1. Recent VERITAS Blazar Discoveries**\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES 0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. 
Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHE emission from 3C 66A was discovered by VER-ITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (ΓVHE ∼ 4.1). RGB J0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "tion of correlated VHE and X-ray flux variability, as well as correlated spectral hardening in both the VHE and X-ray bands. The VHE MWL observations were performed in both \"quiescent\" and flaring states for some of the observed blazars. For the observed HBL objects, the SEDs can be well described by a simple SSC model in both high and low states. However, an additional external Compton component is necessary to adequately fit the SEDs of the IBL objects.\n\nThe Fermi-LAT is already having a significant impact on the blazar KSP. In future seasons, the VER-ITAS blazar discovery program will focus its discovery program on hard-spectrum blazars detected by Fermi-LAT, and will likely have a greater focus on high-risk/high-reward objects at larger redshifts (0.3 < z < 0.7). In addition, the number of VHE blazars studied in pre-planned MWL campaigns will increase as data from the Fermi-LAT will be publicly available. In particular, the extensive pre-planned MWL campaigns will focus on objects that are noteworthy for the impact their data may have on understanding the EBL. 
The simultaneous observations of blazars by VERITAS and Fermi-LAT will completely resolve the higher-energy SED peak, often for the first time, enabling unprecedented constraints on the underlying blazar phenomena to be derived.\n\n### **Acknowledgments**\n\nThis research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. We acknowledge the excellent work of the technical support staff at the FLWO and the collaborating institutions in the construction and operation of the instrument.\n\n### **References**\n\n- [1] F. Aharonian et al. 2007, ApJ, 664, L71\n- [2] F. Aharonian et al. 2006, Nature, 440, 1018\n- [3] F. Aharonian et al. 2007, A&A, 475, L9\n- [4] J. Holder, et al. 2008, AIPC, 1085, 657\n- [5] L. Costamante & G. Ghisellini 2002, A&A, 384, 56\n- [6] E.S. Perlman 2000, AIPC, 515, 53\n- [7] F.W. Stecker et al. 1996, ApJ, 473, L75\n- [8] P. Giommi et al. 2005, A&A, 434, 385\n- [9] S. Turriziani et al. 2007, A&A, 472, 699\n- [10] L. Costamante 2006, arXiv:0612709\n- [11] P. Padovani et al. 2002, ApJ, 581, 895\n- [12] R. Muhkerjee et al. 2001, AIPC, 558, 324\n- [13] A.A. Abdo et al. 2009, ApJ, 700, 597\n- [14] V.A. Acciari et al. 2008, ApJ, 684, L73\n- [15] V.A. Acciari et al. 2009, ApJ, 707, 612\n- [16] V.A. Acciari et al. 2009, ApJ, 690, L126\n- [17] V.A. Acciari et al. 2009, ApJ, 693, L104\n- [18] L.C. Reyes 2009, arXiv:0907.5175\n- [19] R.A. Ong 2009, ATel, 1941\n- [20] R.A. Ong et al. 2009, ATel, 2272\n- [21] V.A. Acciari et al. 2009, ApJ, 708, L100\n- [22] R.A. Ong et al. 2009, ATel, 2301\n- [23] R.A. Ong et al. 2009, ATel, 2260\n- [24] R.A. Ong et al. 2009, ATel, 2309\n- [25] W. Benbow 2009, arXiv:0908.1412\n- [26] V.A. Acciari et al. 2009, ApJ, submitted\n- [27] V.A. Acciari et al. 2009, ApJ, 695, 1370\n- [28] V.A. Acciari et al. 2009, ApJ, in press\n- [29] J. 
Grube 2009, arXiv:0907.4862", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0770.pdf" - }, - { - "text": "| Object | | Class Redshift |\n| --- | --- | --- |\n| M 87 | FR I | 0.004 |\n| Mkn 421 | HBL | 0.030 |\n| Mkn 501 | HBL | 0.034 |\n| 1ES 2344+514 | HBL | 0.044 |\n| 1ES 1959+650 | HBL | 0.047 |\n| W Comae† | IBL | 0.102 |\n| RGB J0710+591† | HBL | 0.125 |\n| H 1426+428 | HBL | 0.129 |\n| 1ES 0806+524† | HBL | 0.138 |\n| 1ES 0229+200 | HBL | 0.139 |\n| 1ES 1218+304 | HBL | 0.182 |\n| RBS 0413† | HBL | 0.190 |\n| 1ES 0502+675† | HBL | 0.341 |\n| 3C 66A† | IBL | 0.444? |\n| PKS 1424+240† | IBL | ? |\n| VER J0521+211† | ? | ? |\n\nTable I VERITAS AGN Detections. The only non-blazar object is the radio galaxy M 87. The blazars discovered at VHE by VERITAS are marked with a dagger.\n\n(∼5.5σ; 3% Crab flux above 300 GeV; ΓVHE ∼ 2.7) during VERITAS observations from December 2008 to March 2009. The initial announcement of the VHE discovery [19] led to its discovery above 1 GeV in the Fermi-LAT data using a special analysis. RBS 0413, a relatively distant HBL (z=0.19), was observed for 16 h good-quality live time in 2008-092 . These data resulted in the discovery of VHE gamma-rays (>270γ, ∼6σ) at a flux (>200 GeV) of ∼2% of the Crab Nebula flux. The discovery [20] was announced simultaneously with the LAT MeV-GeV detection. The VHE and other MWL observations, including Fermi-LAT data, for each of these three sources will be the subject of a joint publication involving both the VERI-TAS and LAT collaborations.\n\n### **5.2. Discoveries Motivated by Fermi-LAT**\n\nThe successful VHE discovery observations by VERITAS of three blazars was motivated primarily by results from the first year of LAT data taking. In particular, the VHE detections of PKS 1424+240 [21] and 1ES 0502+675 [22] were the result of VERITAS observations triggered by the inclusion of these objects in the Fermi-LAT Bright AGN List [13]. 
The former is only the third IBL known to emit VHE gammarays, and the latter is the most distant BL Lac object (z = 0.341) detected in the VHE band. In addition, VER J0521+211, likely associated with the radio-loud AGN RGB J0521.8+2112, was detected by VERTAS in ∼4 h of observations in October 2009 [23]. These observations were motivated by its identification as a >30 GeV γ-ray source in the public Fermi-LAT data. Its VHE flux is 5% of the Crab Nebula flux, placing it among the brightest VHE blazars detected in recent years. VERITAS later observed even brighter VHE flaring from VER J0521+211 in November 2009 [24], leading to deeper VHE observations.\n\n### **6. Blazars Upper Limits**\n\nMore than 50 VHE blazar candidates were observed by VERITAS between September 2007 and June 2009. The total exposure on the 49 non-detected candidates is ∼305 h live time (average of 6.2 h per candidate). Approximately 55% of the total exposure is split amongst the 27 observed HBL. The remainder is divided amongst the 8 IBL (26%), 5 LBL (6%), and 9 FSRQ (13%). There are no clear indications of significant VHE γ-ray emission from any of these 49 blazars [25]. However, the observed significance distribution is clearly skewed towards positive values (see Figure 1). A stacking analysis performed on the entire data sample shows an overall excess of 430 γ-rays, corresponding to a statistical significance of 4.8σ, observed from the directions of the candidate blazars. The IBL and HBL targets make up 96% of the observed excess. Observations of these objects also comprise ∼80% of the total exposure. An identical stacked analysis of all the extragalactic non-blazar targets observed, but not clearly detected (>5σ), by VERITAS does not show a significant excess (∼120 h exposure). The stacked excess persists using alternate methods for estimating the background at each blazar location, and with different event selection criteria (e.g. soft cuts optimized for sources with ΓVHE > 4). 
The distribution of VHE flux upper limits is shown in Figure 1. These 49 VHE flux upper limits are generally the most-constraining ever reported for these objects.\n\n# **7. Multi-wavelength Studies of VHE Blazars**\n\nDuring the first three seasons of VERITAS observations, pre-planned extensive MWL campaigns were organized for three blazars 1ES 2344+514 (2007-08), 1ES 1218+304 (2008-09) and 1ES 0229+200 (2009- 10 - ongoing). In addition, numerous ToO MWLobservation campaigns were performed. These include campaigns for every blazar/AGN discovered by VER-ITAS, and all include Swift (XRT and UVOT) data. All MWL campaigns on the VHE blazars discovered\n\n2RBS 0413 was observed further by VERITAS in Fall 2009.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼2% Crab flux.\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n- 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n- 1ES 1218+304: This HBL flared during VER-ITAS MWL observations. 
Its unusually hard VHE spectrum strongly constrains the EBL. The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solidfooting [27, 28].\n- 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n- W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an external-Compton (EC) component in an SSC interpretation.\n- 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008[17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n- Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n- RGB J0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n- PKS 1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n### **8. Conclusions**\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than a 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ-rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. 
The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "# **Submillimeter Variability and the Gamma-ray Connection in** *Fermi* **Blazars**\n\nA. Strom *Univ. of Arizona, AZ 85721, USA* A. Siemiginowska, M. Gurwell, B. Kelly *CfA, MA 02138, USA*\n\nWe present multi-epoch observations from the Submillimeter Array (SMA) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August–October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. 
Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## **1. INTRODUCTION**\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ-ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ-ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. 
We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submillimeter Array1 (SMA) at 1mm and 850µm, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ-ray indices and luminosities.\n\n## **2.** *SMA* **BLAZARS**\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850µm windows, achieving spatial resolution as fine as 0.25\" at 850µm. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and\n\n1The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.\n\n2http://sma1.sma.hawaii.edu/callist/callist.html", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850µm observations, and the open triangles represent the 1mm observations.\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. 
Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0.03 ≤ z ≤ 2.19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## **2.1. Submillimeter Properties**\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. For these objects, submillimeter luminosities are calculated in the standard way:\n\n$$\\nu_{e}L_{\\nu_{e}}=4\\pi D_{\\mathrm{L}}^{2}{\\frac{\\nu_{\\mathrm{obs}}F_{\\mathrm{obs}}}{1+z}},\\qquad\\qquad(1)$$\n\nwhere DL is the luminosity distance, νobs is the frequency of the observed band, and Fobs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850µm), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no signicant difference in the class distributions in either band; the \"tail\" to the left is populated by objects with errors larger than the intrinsic variability.\n\nflux (in erg cm−2 s −1 Hz−1 ) over the three month period. 
We adopt a lambda cold dark matter cosmology with values of H0 = 71 km s−1 Mpc−1 , ΩM = 0.27, and Λ = 0.73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasisimultaneous with the Fermi observations. To be consistent with the use of αγ, we define spectral energy index as νFν = ν −αS and calculate αS from the average of the energy spectral indices over the corresponding three months. We only calculate αS for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850µm during this time frame.\n\n## **3. VARIABILITY ANALYSIS**\n\n## **3.1. Variability Index**\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\n$$V\\,=\\,\\frac{(F_{\\rm max}-\\sigma_{F_{\\rm max}})-(F_{\\rm min}+\\sigma_{F_{\\rm min}})}{(F_{\\rm max}-\\sigma_{F_{\\rm max}})+(F_{\\rm min}+\\sigma_{F_{\\rm min}})}\\tag{2}$$\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "#### **Creating Fibre Channel hosts**\n\nTo create a Fibre Channel host, complete the following steps:\n\n- 1. Rescan the SAN on Storwize V7000 by using the **detectmdisk** command (see Example 8-14).\n*Example 8-14 Rescanning the SAN* \n\nIBM_Storwize:ITSO-V7000:superuser>**detectmdisk**\n\n**Note:** The **detectmdisk** command does not return any response.\n\nIf the zoning was implemented correctly, any new WWPNs are discovered by the Storwize V7000 system after running the **detectmdisk** command.\n\n- 2. List the candidate WWPNs and identify the WWPNs belonging to the new host, as shown in Example 8-15.\n*Example 8-15 Available WWPNs*\n\n```\nIBM_Storwize:ITSO-V7000:superuser>lsfcportcandidate\nfc_WWPN \n2100000E1E09E3E9 \n2100000E1E30E5E8 \n2100000E1E30E60F \n2100000E1EC2E5A2 \n2100000E1E30E597 \n2100000E1E30E5EC\n```\n- 3. 
Run the **mkhost** command with the required parameters, as shown in Example 8-16.\n*Example 8-16 Host creation*\n\n```\nIBM_Storwize:ITSO-V7000:superuser>mkhost -name ITSO-VMHOST-03 -fcwwpn \n2100000E1E30E597:2100000E1E30E5EC\nHost, id [3], successfully created\nIBM_Storwize:ITSO-V7000:superuser>\n```\n#### **Creating iSCSI hosts**\n\nBefore you create an iSCSI host in Storwize V7000, the iSCSI qualified name (IQN) address of the host must be known. See your host operating system-specific documentation to find the IQN of the host.\n\nCreate a host by completing the following steps:\n\n- 1. Create the iSCSI host by using the **mkhost** command (see Example 8-17).\n*Example 8-17 Creating an iSCSI host by using the mkhost command*\n\n```\nIBM_Storwize:ITSO-V7000:superuser>mkhost -iscsiname \niqn.1994-05.com.redhat:e6ff477b58 -name RHEL-Host-06\nHost, id [4], successfully created\nIBM_Storwize:ITSO-V7000:superuser>\n```\n- 2. The iSCSI host can be verified by using the **lshost** command, as shown in Example 8-18.\n*Example 8-18 Verifying the iSCSI host by using the lshost command*\n\n```\nIBM_Storwize:ITSO-V7000:superuser>lshost 4\n```", - "page_start": 395, - "page_end": 395, - "source_file": "sg247938.pdf" - }, - { - "text": "# Observations of Soft Gamma Ray Sources > 100 keV Using Earth Occultation with GBM\n\nG.L. Case, M.L. Cherry, J. Rodi\n\nDept. of Physics & Astronomy, Louisiana State Univ., Baton Rouge, LA 70803, USA\n\nA. Camero-Arranz\n\nFundaci´on Espa˜nola de Ciencia y Tecnolog´ıa (MICINN), C/Rosario Pino,14-16, 28020-Madrid, Spain\n\nE. Beklen\n\nMiddle East Technical University (METU), 06531, Ankara, Turkey\n\nC. A. Wilson-Hodge\n\nNASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP. Jenke\n\nNASA Postdoctoral Program Fellow, NASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP.N. Bhat, M.S. Briggs, V. Chaplin, V. Connaughton, R. Preece University of Alabama in Huntsville, Huntsville, AL 35899\n\nM.H. 
Finger\n\nUSRA, National Space Science and Technology Center, Huntsville, AL 35899\n\nThe NaI and BGO detectors on the Gamma ray Burst Monitor (GBM) on Fermi are now being used for long term monitoring of the hard X-ray/low energy gamma ray sky. Using the Earth occultation technique demonstrated previously by the BATSE instrument on the Compton Gamma Ray Observatory, GBM produces multiband light curves and spectra for known sources and transient outbursts in the 8 keV - 1 MeV band with its NaI detectors and up to 40 MeV with its BGO. Coverage of the entire sky is obtained every two orbits, with sensitivity exceeding that of BATSE at energies below ∼ 25 keV and above ∼ 1.5 MeV. We describe the technique and present preliminary results after the first ∼ 17 months of observations at energies above 100 keV. Seven sources are detected: the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105, and the transient source XTE J1752-223.\n\n## I. INTRODUCTION\n\nThe Gamma ray Burst Monitor (GBM) on Fermi is currently the only instrument in orbit providing nearly continuous full sky coverage in the hard X-ray/low energy gamma ray energy range. The Earth occultation technique, used very successfully on BATSE, has been adapted to GBM. An initial catalog of 64 sources is currently being monitored and continuously augmented. At energies above 100 keV, six steady sources (the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105) and one transient source (XTE J1752-223) have been detected in the first year of observation. We describe the instrument, outline the technique, and present light curves for the seven sources.\n\n## II. GBM AND THE EARTH OCCULTATION OBSERVATIONAL TECHNIQUE\n\nThe Gamma ray Burst Monitor is the secondary instrument onboard the Fermi satellite [1, 2]. It consists of 12 NaI detectors 5″ in diameter by 0.5″ thick mounted on the corners of the spacecraft and oriented such that they view the entire sky not occulted by the Earth. 
GBM also contains 2 BGO detectors 5″ in diameter by 5″ thick located on opposite sides of the spacecraft. None of the GBM detectors have direct imaging capability.\n\nKnown sources of gamma ray emission can be monitored with non-imaging detectors using the Earth occultation technique, as was successfully demonstrated with BATSE [3, 4]. When a source of gamma rays is occulted by the Earth, the count rate measured by the detector will drop, producing a step-like feature. When the source reappears from behind the Earth's limb, the count rate will increase, producing another step. The diameter of the Earth seen from Fermi is ∼ 140◦, so roughly 30% of the sky is occulted by the Earth at any one time. Coupled with the ±35◦ slewing of the pointing direction every orbit, this means that the entire sky is occulted every two orbits. With an altitude of 565 km, a period of 96 minutes, and an orbital inclination of 26.5◦, individual occultation steps last for ∼10 seconds (Fig. 1).", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0955.pdf" - }, - { - "text": "#### **6.3.5 Uninstalling the OpenShift Container Platform**\n\nComplete the following steps to uninstall the OpenShift Container Platform:\n\n- 1. Unregister OpenShift VMs from the RHSM:\n\n```\nansible -i nodes,lb -a 'subscription-manager remove --all'\nansible -i nodes,lb -a 'subscription-manager unregister'\nansible -i nodes,lb -a 'subscription-manager clean'\nwrknode01.domain.example.com | SUCCESS | rc=0 >>\nUnregistering from: subscription.rhsm.redhat.com:443/subscription\nSystem has been unregistered.\n...\nOutput truncated\n...\nlbsnode01.domain.example.com | SUCCESS | rc=0 >>\nUnregistering from: subscription.rhsm.redhat.com:443/subscription\nSystem has been unregistered.\n```\n- 2. 
Destroy the PowerVC infrastructure using Terraform:\n\n```\nterrafrom destroy\ndata.openstack_images_image_v2.vm1-image-name: Refreshing state...\n...\nOutput truncated\n...\nPlan: 0 to add, 0 to change, 33 to destroy.\nDo you really want to destroy all resources?\n Terraform will destroy all your managed infrastructure, as shown above.\n There is no undo. Only 'yes' will be accepted to confirm.\n Enter a value: yes\nnull_resource.vm4_post_install_config[0]: Destroying... [id=1118127418209700949]\nnull_resource.vm4_post_install_config[0]: Destruction complete after 0s\nopenstack_compute_volume_attach_v2.vm1_va_dockerdisk1[2]: Destroying... \n[id=5d929233-f9fc-4224-8960-5f1f050e4174/2563294f-d9ce-44a6-ac04-afc73a07593e]\nopenstack_compute_volume_attach_v2.vm1_va_dockerdisk1[0]: Destroying... \n[id=f7c4966f-7036-4494-bd35-8e66b84ed18e/3f5bc9d4-c91e-42db-b529-f7b03a1e963c]\nopenstack_compute_volume_attach_v2.vm3_va_dockerdisk1[0]: Destroying... \n[id=c47782d9-2d1b-4430-8d58-a755d652b07d/23c91f6b-790c-4af1-bda8-ee471ff75f1b]\nopenstack_compute_volume_attach_v2.vm3_va_dockerdisk1[1]: Destroying... \n[id=8cf475c5-9315-4032-9400-e46a1c6a3b7d/4b3c6d75-985b-4ba2-8363-8dd16e76e131]\nopenstack_compute_volume_attach_v2.vm1_va_dockerdisk1[1]: Destroying... \n[id=d6305c81-efd1-4257-bea1-2eb9a5ad8145/0569bceb-753f-48ee-82d0-9e00a3245b96]\nopenstack_compute_volume_attach_v2.vm3_va_dockerdisk1[2]: Destroying... \n[id=4d28442a-fa3d-4389-80d8-2a2fe3199b19/a80c34a6-42fa-4d3d-866b-0a6a4548ea56]\n...\nOutput truncated\n...\nopenstack_networking_subnet_v2.net1-subnet: Destruction complete after 1m58s\nopenstack_networking_network_v2.net1: Destroying... \n[id=42b5da3c-98ff-4f07-ba52-8fcd61e161ab]\nopenstack_networking_network_v2.net1: Destruction complete after 6s\nDestroy complete! 
Resources: 33 destroyed.\n```", - "page_start": 152, - "page_end": 152, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed7_cc4.pdf", - "query": "For which language have been introduced the ActiveInference.jl library ?", - "target_page": 1, - "target_passage": " We introduce a new software package for the Julia programming language, the library ActiveInference.jl.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "*Article*\n\n# **Introducing ActiveInference.jl: A Julia Library for Simulation and Parameter Estimation with Active Inference Models**\n\n**Samuel William Nehrer 1,† , Jonathan Ehrenreich Laursen 1,† , Conor Heins 2,3,* , Karl Friston 3,4 , Christoph Mathys 5 and Peter Thestrup Waade 5**\n\n- 1 School of Culture and Communication, Aarhus University, 8000 Aarhus, Denmark; 202204724@post.au.dk (S.W.N.); 202204836@post.au.dk (J.E.L.)\n- 2 Department of Collective Behaviour, Max Planck Institute of Animal Behavior, D-78457 Konstanz, Germany\n- 3 VERSES Research Lab., Los Angeles, CA 90016, USA; k.friston@ucl.ac.uk\n- 4 Queen Square Institute of Neurology, University College London, London WC1N 3BG, UK\n- 5 Interacting Minds Centre, Aarhus University, 8000 Aarhus, Denmark; chmathys@cas.au.dk (C.M.); ptw@cas.au.dk (P.T.W.)\n- ***** Correspondence: cheins@ab.mpg.de\n- † These authors contributed equally to this work.\n\n**Abstract:** We introduce a new software package for the Julia programming language, the library ActiveInference.jl. To make active inference agents with Partially Observable Markov Decision Process (POMDP) generative models available to the growing research community using Julia, we re-implemented the pymdp library for Python. ActiveInference.jl is compatible with cutting-edge Julia libraries designed for cognitive and behavioural modelling, as it is used in computational psychiatry, cognitive science and neuroscience. 
This means that POMDP active inference models can now be easily fit to empirically observed behaviour using sampling, as well as variational methods. In this article, we show how ActiveInference.jl makes building POMDP active inference models straightforward, and how it enables researchers to use them for simulation, as well as fitting them to data or performing a model comparison.\n\n**Keywords:** active inference; free energy principle; predictive processing; Markov decision process; cognitive modelling; Julia\n\n**PACS:** 87.15.Aa\n\n**MSC:** 91-08\n\n**JEL Classification:** C63\n\n## **1. Introduction**\n\nWe introduce a novel software library for Julia, ActiveInference, which lets users produce the simulated behaviour of agents and their internal belief states with active inference (AIF) models, as well as fit such models to empirically observed behaviour. AIF [1–3] is a generally applicable formal framework for understanding and simulating intelligent behaviour that is based in neurobiology and first principles from statistical physics [4–8]. AIF treats action and perception as unified under a joint imperative: to minimise the variational free energy (*VFE*), which quantifies how well the agent's internal generative model explains incoming sensory observations. It is an upper bound on the surprise from sensory observations, making AIF formally related to prediction error\n\nAcademic Editor: Astero Provata\n\nReceived: 25 October 2024 Revised: 2 January 2025 Accepted: 7 January 2025 Published: 12 January 2025\n\n**Citation:** Nehrer, S.W.; Ehrenreich Laursen, J.; Heins, C.; Friston, K.; Mathys, C.; Thestrup Waade, P. Introducing ActiveInference.jl: A Julia Library for Simulation and Parameter Estimation with Active Inference Models. *Entropy* **2025**, *27*, 62. https://doi.org/10.3390/e27010062\n\n**Copyright:** © 2025 by the authors. Licensee MDPI, Basel, Switzerland. 
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/ licenses/by/4.0/).", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "Θ is then described by a Dirichlet distribution parametrised by a set of concentration parameters *θ*:\n\n$p(\\Theta)=Dir(\\Theta|\\theta)$ (19)\n\nThe concentration parameter of a Dirichlet distribution is essentially a non-negative count of how many times the given category (be it a type of observation or state transition) has occurred. The distribution of concentration parameter counts will determine the shape of the estimated categorical probability distribution, while the scale of the concentration parameters will determine the certainty per precision of the belief. Updating beliefs about Θ (the parameters in the matrices) then corresponds to updating these concentration parameters *θ* with the following update equation:\n\n$$\\theta_{t+1}=\\omega*\\theta_{t}+\\eta*\\chi t\\tag{20}$$\n\nThe updated value for the concentration parameter (*θt*+1) is found by adding the previous concentration parameter *θt* multiplied by a forgetting rate *ω* to the observed data count *χ* (either the observation in the case of **A** learning, or the inferred state or state transition for other matrices) multiplied by a learning rate *η*. With this relatively simple update equation—which, in essence, amounts to just counting the occurrences of categories—an AIF agent can update its beliefs about the various matrices it uses to make inferences about environmental states. For more details on parameter learning with POMDPs, see [23,33,52].\n\n## **3. Using ActiveInference.jl**\n\nIn this section, we provide an overview of the various functions a user will need to operate ActiveInference. This includes functionalities for creating POMDP agents, for simulating behaviour and for fitting the models to data. 
In the next section, we demonstrate how to use the package on a concrete worked example. ActiveInference is under continual development, and the newest version of the package, including documentation for how to use it, can be found at github.com/ilabcode/ActiveInference.jl.\n\n#### *3.1. Creating and Using a POMDP*\n\nThe general structure of ActiveInference.jl is heavily inspired by pymdp [23], a Python library for implementing simulations of AIF in discrete state spaces. Those already acquainted with pymdp should find the syntax here familiar. ActiveInference can be installed as normal from the official Julia General Registry using Julia's native package manager Pkg:\n\n```\nusing Pkg\nPkg.add(\"ActiveInference\")\n```\n\nIt can then be loaded into the current project environment:\n\n```\nusing ActiveInference\n```\n\nCentral to the package is the AIF object. This is a structure containing all the components of the generative model, as well as the dynamic belief states and the various settings needed to perform AIF, and is used in conjunction with most of the high-level functions of the package. An AIF object can be created with the init_aif function, which takes as arguments the components of the generative model and a dictionary of various settings and parameters:", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "Julia uses its \"just-in-time\" (JIT) compilation via the LLVM framework to approach the speed of languages like C without relying on external compilers [36]. Julia is also natively auto-differentiable, which means it can solve what is called the two-language problem (i.e., that high-level languages often have to rely on lower-level languages, either for performance or for auto-differentiability; this is the case with standard tools for cognitive modelling, where languages like R [37] must rely on external languages like STAN [38] for Bayesian model fitting). 
This means that ActiveInference, in conjunction with Turing [39], Julia's powerful library for Bayesian model fitting, and its newly developed extension for behavioural modelling, ActionModels, makes it possible to use cutting-edge Markov Chain Monte Carlo [40] methods, as well as variational methods [35], for Bayesian model fitting with AIF. Crucially, this allows researchers to not only simulate AIF in a fast programming language, but to also fit them to empirical behaviour, as is performed in cognitive modelling and computational psychiatry. Importantly, this also places AIF models in an ecosystem of other models for computational psychiatry so that it can easily be compared with models, like Hierarchical Gaussian Filters [41], and reinforcement learning models, like the classic Rescorla–Wagner model [42]. As part of making ActiveInference.jl available to the scientific community, and to the larger software ecosystem within computational psychiatry, it is implemented as part of the Translational Algorithms for Psychiatry-Advancing Science (TAPAS) ecosystem [43].\n\nIn the next section, we provide a conceptual and formal introduction to AIF, particularly in the context of using POMDP generative models. In Section 3, we demonstrate how to use the package in practice, both for simulation and parameter estimation. In Section 4, we give a fully worked example of how ActiveInference can be used with a concrete simulated dataset. Finally, we discuss potential applications and future directions for developing the package.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "in [21, 93] and direct resources away from efforts that would facilitate long-term progress towards natural language understanding, without using unfathomable training data.\n\nFurthermore, the tendency of human interlocutors to impute meaning where there is none can mislead both NLP researchers and the general public into taking synthetic text as meaningful. 
Combined with the ability of LMs to pick up on both subtle biases and overtly abusive language patterns in training data, this leads to risks of harms, including encountering derogatory language and experiencing discrimination at the hands of others who reproduce racist, sexist, ableist, extremist or other harmful ideologies reinforced through interactions with synthetic language. We explore these potential harms in §6 and potential paths forward in §7.\n\nWe hope that a critical overview of the risks of relying on everincreasing size of LMs as the primary driver of increased performance of language technology can facilitate a reallocation of efforts towards approaches that avoid some of these risks while still reaping the benefits of improvements to language technology.\n\n#### 2 BACKGROUND\n\nSimilar to [14], we understand the term language model (LM) to refer to systems which are trained on string prediction tasks: that is, predicting the likelihood of a token (character, word or string) given either its preceding context or (in bidirectional and masked LMs) its surrounding context. Such systems are unsupervised and when deployed, take a text as input, commonly outputting scores or string predictions. Initially proposed by Shannon in 1949 [117], some of the earliest implemented LMs date to the early 1980s and were used as components in systems for automatic speech recognition (ASR), machine translation (MT), document classification, and more [111]. In this section, we provide a brief overview of the general trend of language modeling in recent years. For a more in-depth survey of pretrained LMs, see [105].\n\nBefore neural models, n-gram models also used large amounts of data [20, 87]. In addition to ASR, these large n-gram models of English were developed in the context of machine translation from another source language with far fewer direct translation examples. 
For example, [20] developed an n-gram model for English with a total of 1.8T n-grams and noted steady improvements in BLEU score on the test set of 1797 Arabic translations as the training data was increased from 13M tokens.\n\nThe next big step was the move towards using pretrained representations of the distribution of words (called word embeddings) in other (supervised) NLP tasks. These word vectors came from systems such as word2vec [85] and GloVe [98] and later LSTM models such as context2vec [82] and ELMo [99] and supported state of the art performance on question answering, textual entailment, semantic role labeling (SRL), coreference resolution, named entity recognition (NER), and sentiment analysis, at first in English and later for other languages as well. While training the word embeddings required a (relatively) large amount of data, it reduced the amount of labeled data necessary for training on the various supervised tasks. For example, [99] showed that a model trained with ELMo reduced the necessary amount of training data needed to achieve similar results on SRL compared to models without, as shown in one instance where a model trained with ELMo reached\n\n| Year | Model | # of Parameters | Dataset Size |\n| --- | --- | --- | --- |\n| 2019 | BERT [39] | 3.4E+08 | 16GB |\n| 2019 | DistilBERT [113] | 6.60E+07 | 16GB |\n| 2019 | ALBERT [70] | 2.23E+08 | 16GB |\n| 2019 | XLNet (Large) [150] | 3.40E+08 | 126GB |\n| 2020 | ERNIE-Gen (Large) [145] | 3.40E+08 | 16GB |\n| 2019 | RoBERTa (Large) [74] | 3.55E+08 | 161GB |\n| 2019 | MegatronLM [122] | 8.30E+09 | 174GB |\n| 2020 | T5-11B [107] | 1.10E+10 | 745GB |\n| 2020 | T-NLG [112] | 1.70E+10 | 174GB |\n| 2020 | GPT-3 [25] | 1.75E+11 | 570GB |\n| 2020 | GShard [73] | 6.00E+11 | – |\n| 2021 | Switch-C [43] | 1.57E+12 | 745GB |\n\nTable 1: Overview of recent large language models\n\nthe maximum development F1 score in 10 epochs as opposed to 486 without ELMo. 
This model furthermore achieved the same F1 score with 1% of the data as the baseline model achieved with 10% of the training data. Increasing the number of model parameters, however, did not yield noticeable increases for LSTMs [e.g. 82].\n\nTransformer models, on the other hand, have been able to continuously benefit from larger architectures and larger quantities of data. Devlin et al. [39] in particular noted that training on a large dataset and fine-tuning for specific tasks leads to strictly increasing results on the GLUE tasks [138] for English as the hyperparameters of the model were increased. Initially developed as Chinese LMs, the ERNIE family [130, 131, 145] produced ERNIE-Gen, which was also trained on the original (English) BERT dataset, joining the ranks of very large LMs. NVIDIA released the MegatronLM which has 8.3B parameters and was trained on 174GB of text from the English Wikipedia, OpenWebText, RealNews and CC-Stories datasets [122]. Trained on the same dataset, Microsoft released T-NLG,1 an LM with 17B parameters. OpenAI's GPT-3 [25] and Google's GShard [73] and Switch-C [43] have increased the definition of large LM by orders of magnitude in terms of parameters at 175B, 600B, and 1.6T parameters, respectively. Table 1 summarizes a selection of these LMs in terms of training data size and parameters. As increasingly large amounts of text are collected from the web in datasets such as the Colossal Clean Crawled Corpus [107] and the Pile [51], this trend of increasingly large LMs can be expected to continue as long as they correlate with an increase in performance.\n\nA number of these models also have multilingual variants such as mBERT [39] and mT5 [148] or are trained with some amount of multilingual data such as GPT-3 where 7% of the training data was not in English [25]. The performance of these multilingual models across languages is an active area of research. 
Wu and Drezde [144] found that while mBERT does not perform equally well across all 104 languages in its training data, it performed better at NER, POS tagging, and dependency parsing than monolingual models trained with comparable amounts of data for four low-resource languages. Conversely, [95] surveyed monolingual BERT models developed with more specific architecture considerations or additional monolingual data and found that they generally outperform\n\n1https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameterlanguage-model-by-microsoft/", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "On Reading-comprehension. *arXiv preprint arXiv:1912.06638*.\n\n- Shubham Toshniwal, Haoyue Shi, Bowen Shi, Lingyu Gao, Karen Livescu, and Kevin Gimpel. 2020. A Cross-Task Analysis of Text Span Representations. In *Proceedings of the 5th Workshop on Representation Learning for NLP*, pages 166–176, Online. Association for Computational Linguistics.\n- Henry Tsai, Jason Riesa, Melvin Johnson, Naveen Arivazhagan, Xin Li, and Amelia Archer. 2019. Small and Practical BERT Models for Sequence Labeling. *arXiv preprint arXiv:1909.00100*.\n- Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation. *arXiv preprint arXiv:1908.08962*.\n- Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn't buy quality syntax with neural language models. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pages 5831–5837, Hong Kong, China. Association for Computational Linguistics.\n- Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. 
In *Advances in neural information processing systems*, pages 5998– 6008.\n- Jesse Vig. 2019. Visualizing Attention in Transformer-Based Language Representation Models. *arXiv:1904.02679 [cs, stat]*.\n- Jesse Vig and Yonatan Belinkov. 2019. Analyzing the Structure of Attention in a Transformer Language Model. In *Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP*, pages 63–76, Florence, Italy. Association for Computational Linguistics.\n- David Vilares, Michalina Strzyz, Anders Søgaard, and Carlos Gómez-Rodríguez. 2020. Parsing as pretraining. In *Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)*.\n- Elena Voita, Rico Sennrich, and Ivan Titov. 2019a. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pages 4387–4397.\n- Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019b. Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned. *arXiv preprint arXiv:1905.09418*.\n- Elena Voita and Ivan Titov. 2020. Information-Theoretic Probing with Minimum Description Length. *arXiv:2003.12298 [cs]*.\n- Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019a. Universal Adversarial Triggers for Attacking and Analyzing NLP. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pages 2153–2162, Hong Kong, China. Association for Computational Linguistics.\n- Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019b. Do NLP Models Know Numbers? Probing Numeracy in Embeddings. 
*arXiv preprint arXiv:1909.07940*.\n- Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In *Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP*, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.\n- Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2020a. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. *arXiv:2002.01808 [cs]*.\n- Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, and Luo Si. 2019a. Struct-BERT: Incorporating Language Structures into", - "page_start": 20, - "page_end": 20, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "- [26] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, \"BERT: Pre-training of deep bidirectional transformers for language understanding,\" in *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, 2019.\n- [27] D. Ding, A. Mallick, C. Wang, R. Sim, S. Mukherjee, V. Ruhle, L. V. Lakshmanan, and A. H. Awadallah, \"Hybrid ¨ LLM: Cost-efficient and quality-aware query routing,\" in *International Conference on Learning Representations (ICLR)*, 2024.\n- [28] Y. Dong, H. Chen, J. Chen, Z. Fang, X. Yang, Y. Zhang, Y. Tian, H. Su, and J. Zhu, \"How robust is Google's Bard to adversarial image attacks?\" *arXiv preprint arXiv:2309.11751*, 2023.\n- [29] N. Du, Y. Huang, A. M. Dai, S. Tong, D. Lepikhin, Y. Xu, M. Krikun, Y. Zhou, A. W. Yu, O. Firat *et al.*, \"Glam: Efficient scaling of language models with mixture-of-experts,\" in *International Conference on Machine Learning (ICML)*, 2022.\n- [30] W. Fedus, B. Zoph, and N. 
Shazeer, \"Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity,\" *Journal of Machine Learning Research (JMLR)*, 2022.\n- [31] T. Feng, Y. Shen, and J. You, \"Graphrouter: A graph-based router for LLM selections,\" *arXiv preprint arXiv:2410.03834*, 2024.\n- [32] I. J. Goodfellow, J. Shlens, and C. Szegedy, \"Explaining and harnessing adversarial examples,\" in *International Conference on Learning Representations (ICLR)*, 2015.\n- [33] K. Greshake, S. Abdelnabi, S. Mishra, C. Endres, T. Holz, and M. Fritz, \"Not what you've signed up for: Compromising real-world LLM-integrated applications with indirect prompt injection,\" in *ACM AISec*, 2023.\n- [34] J. Hayes, I. Shumailov, and I. Yona, \"Buffer overflow in mixture of experts,\" *arXiv preprint arXiv:2402.05526*, 2024.\n- [35] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt, \"Measuring massive multitask language understanding,\" in *International Conference on Learning Representations (ICLR)*, 2021.\n- [36] N. Jain, A. Schwarzschild, Y. Wen, G. Somepalli, J. Kirchenbauer, P.-y. Chiang, M. Goldblum, A. Saha, J. Geiping, and T. Goldstein, \"Baseline defenses for adversarial attacks against aligned language models,\" *arXiv preprint arXiv:2309.00614*, 2023.\n- [37] F. Jelinek, \"Interpolated estimation of Markov source parameters from sparse data,\" 1980. [Online]. Available: https://api.semanticscholar.org/CorpusID:61012010\n- [38] A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier *et al.*, \"Mistral 7B,\" *arXiv preprint arXiv:2310.06825*, 2023.\n- [39] A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot, D. d. l. Casas, E. B. Hanna, F. Bressand *et al.*, \"Mixtral of experts,\" *arXiv preprint arXiv:2401.04088*, 2024.\n- [40] D. Jiang, X. Ren, and B. Y. 
Lin, \"LLM-Blender: Ensembling large language models with pairwise ranking and generative fusion,\" in *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, 2023.\n- [41] C.-H. Lee, H. Cheng, and M. Ostendorf, \"OrchestraLLM: Efficient orchestration of language models for dialogue state tracking,\" in *Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)*, 2024.\n- [42] Y. Liu, G. Deng, Z. Xu, Y. Li, Y. Zheng, Y. Zhang, L. Zhao, T. Zhang, K. Wang, and Y. Liu, \"Jailbreaking ChatGPT via prompt engineering: An empirical study,\" *arXiv preprint arXiv:2305.13860*, 2023.\n- [43] D. Lowd and C. Meek, \"Adversarial learning,\" in *ACM International Conference on Knowledge Discovery in Data Mining (SIGKDD)*, 2005.\n- [44] S. Merity, C. Xiong, J. Bradbury, and R. Socher, \"Pointer sentinel mixture models,\" in *International Conference on Learning Representations (ICLR)*, 2016.\n- [45] S. Narayanan Hari and M. Thomson, \"Tryage: Real-time, intelligent routing of user prompts to large language models,\" *arXiv e-prints*, 2023.\n- [46] M. Nasr, N. Carlini, J. Hayase, M. Jagielski, A. F. Cooper, D. Ippolito, C. A. Choquette-Choo, E. Wallace, F. Tramer, and K. Lee, \"Scalable extraction of training data from (production) language models,\" ` *arXiv preprint arXiv:2311.17035*, 2023.\n- [47] I. Ong, A. Almahairi, V. Wu, W.-L. Chiang, T. Wu, J. E. Gonzalez, M. W. Kadous, and I. Stoica, \"RouteLLM: Learning to route LLMs with preference data,\" *arXiv preprint arXiv:2406.18665*, 2024.", - "page_start": 19, - "page_end": 19, - "source_file": "arxiv1.pdf" - }, - { - "text": "- [48] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. 
Swami, \"Practical black-box attacks against machine learning,\" in *Proceedings of the 2017 ACM on Asia conference on computer and communications security*, 2017.\n- [49] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, \"The limitations of deep learning in adversarial settings,\" in *IEEE European symposium on security and privacy (EuroS&P)*, 2016.\n- [50] F. Perez and I. Ribeiro, \"Ignore previous prompt: Attack techniques for language models,\" in *NeurIPS ML Safety Workshop*, 2022.\n- [51] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, \"Language models are unsupervised multitask learners,\" https://cdn.openai.com/better-language-models/language models are unsupervised multitask learners.pdf, 2019.\n- [52] C. Riquelme, J. Puigcerver, B. Mustafa, M. Neumann, R. Jenatton, A. Susano Pinto, D. Keysers, and N. Houlsby, \"Scaling vision with sparse mixture of experts,\" *Advances in Neural Information Processing Systems (NeurIPS)*, 2021.\n- [53] M. Šakota, M. Peyrard, and R. West, \"Fly-swat or cannon? cost-effective language model choice via meta-modeling,\" in *Proceedings of the 17th ACM International Conference on Web Search and Data Mining*, 2024.\n- [54] S. Schulhoff, J. Pinto, A. Khan, L.-F. Bouchard, C. Si, S. Anati, V. Tagliabue, A. Kost, C. Carnahan, and J. Boyd-Graber, \"Ignore this title and HackAPrompt: Exposing systemic vulnerabilities of LLMs through a global prompt hacking competition,\" in *EMNLP*, 2023.\n- [55] A. Shafran, R. Schuster, and V. Shmatikov, \"Machine against the RAG: Jamming retrieval-augmented generation with blocker documents,\" *arXiv preprint arXiv:2406.05870*, 2024.\n- [56] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean, \"Outrageously large neural networks: The sparsely-gated mixture-of-experts layer,\" in *International Conference on Learning Representations*, 2016.\n- [57] T. Shnitzer, A. Ou, M. Silva, K. Soule, Y. Sun, J. Solomon, N. 
Thompson, and M. Yurochkin, \"Large language model routing with benchmark datasets,\" *arXiv preprint arXiv:2309.15789*, 2023.\n- [58] K. Srivatsa, K. K. Maurya, and E. Kochmar, \"Harnessing the power of multiple minds: Lessons learned from LLM routing,\" *arXiv preprint arXiv:2405.00467*, 2024.\n- [59] D. Stripelis, Z. Hu, J. Zhang, Z. Xu, A. Shah, H. Jin, Y. Yao, S. Avestimehr, and C. He, \"Tensoropera router: A multi-model router for efficient LLM inference,\" *arXiv preprint arXiv:2408.12320*, 2024.\n- [60] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, \"Intriguing properties of neural networks,\" *arXiv preprint arXiv:1312.6199*, 2013.\n- [61] G. Team, R. Anil, S. Borgeaud, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai, A. Hauth, K. Millican *et al.*, \"Gemini: a family of highly capable multimodal models,\" *arXiv preprint arXiv:2312.11805*, 2023.\n- [62] Teknium, \"Openhermes 2.5: An open dataset of synthetic data for generalist LLM assistants,\" 2023. [Online]. Available: https://huggingface.co./datasets/teknium/OpenHermes-2.5\n- [63] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale *et al.*, \"Llama 2: Open foundation and fine-tuned chat models,\" *arXiv preprint arXiv:2307.09288*, 2023.\n- [64] S. Toyer, O. Watkins, E. A. Mendes, J. Svegliato, L. Bailey, T. Wang, I. Ong, K. Elmaaroufi, P. Abbeel, T. Darrell *et al.*, \"Tensor Trust: Interpretable prompt injection attacks from an online game,\" in *International Conference on Learning Representations (ICLR)*, 2023.\n- [65] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, \"Stealing machine learning models via prediction APIs,\" in *USENIX Security Symposium*, 2016.\n- [66] A. Wei, N. Haghtalab, and J. Steinhardt, \"Jailbroken: How does LLM safety training fail?\" in *Advances in Neural Information Processing Systems (NeurIPS)*, 2023.\n- [67] I. Yona, I. Shumailov, J. 
Hayes, and N. Carlini, \"Stealing user prompts from mixture of experts,\" *arXiv preprint arXiv:2410.22884*, 2024.\n- [68] M. Yue, J. Zhao, M. Zhang, L. Du, and Z. Yao, \"Large language model cascades with mixture of thought representations for cost-efficient reasoning,\" in *International Conference on Learning Representations (ICLR)*, 2024.\n- [69] C. Zhang, T. Zhang, and V. Shmatikov, \"Controlled generation of natural adversarial documents for stealthy retrieval poisoning,\" *arXiv preprint arXiv:2410.02163*, 2024.\n- [70] Y. Zhang, N. Carlini, and D. Ippolito, \"Effective prompt extraction from language models,\" in *First Conference on Language Modeling*, 2024.",
          "page_start": 20,
          "page_end": 20,
          "source_file": "arxiv1.pdf"
        },
        {
          "text": "## **References**\n\n- [1] M. Sikora and G. Madejski, in American Institute of Physics Conference Series, edited by F. A. Aharonian and H. J. Völk (2001), vol. 558 of American Institute of Physics Conference Series, pp. 275–288.\n- [2] M. Sikora, in Blazar Demographics and Physics, edited by P. Padovani and C. M. Urry (2001), vol. 227 of Astronomical Society of the Pacific Conference Series, pp. 95–104.\n- [3] J. A. Stevens, S. J. Litchfield, E. I. Robson, D. H. Hughes, W. K. Gear, H. Terasranta, E. Valtaoja, and M. Tornikoski, ApJ 437, 91 (1994).\n- [4] P. T. P. Ho, J. M. Moran, and K. Y. Lo, ApJl 616, L1 (2004).\n- [5] M. A. Gurwell, A. B. Peck, S. R. Hostler, M. R. Darrah, and C. A. Katz, in From Z-Machines to ALMA: (Sub)Millimeter Spectroscopy of Galaxies, edited by A. J. Baker, J. Glenn, A. I. Harris,\n\nJ. G. Mangum, and M. S. Yun (2007), vol. 375 of Astronomical Society of the Pacific Conference Series, p. 234.\n\n- [6] S. E. Healey, R. W. Romani, G. Cotter, P. F. Michelson, E. F. Schlafly, A. C. S. Readhead, P. Giommi, S. Chaty, I. A. Grenier, and L. C. Weintraub, ApJS 175, 97 (2008).\n- [7] A. A. Abdo, M. Ackermann, M. Ajello, W. B. Atwood, M. Axelsson, L. Baldini, J. Ballet, G. Barbiellini, D. 
Bastieri, B. M. Baughman, et al., ApJ 700, 597 (2009).\n- [8] T. Hovatta, E. Nieppola, M. Tornikoski, E. Valtaoja, M. F. Aller, and H. D. Aller, A&A 485, 51 (2008).\n- [9] B. C. Kelly, J. Bechtold, and A. Siemiginowska, ApJ 698, 895 (2009).\n- [10] M. Sikora, R. Moderski, and G. M. Madejski, ApJ 675, 71 (2008).", - "page_start": 5, - "page_end": 5, - "source_file": "1001.0806.pdf" - }, - { - "text": "It is also an example predicated on copyright's limitations and exceptions — in this case, on U.S. fair use. While the Authors Guild filed a copyright infringement suit against HathiTrust, federal courts in 2012 and 2014 ruled that HathiTrust's use of books was fair use.32\n\nA nonprofit founded in 2008, HathiTrust grew out of a partnership among major US university libraries and today is \"an international community of research libraries committed to the long-term curation and availability of the cultural record.\" It started in what it calls the \"early 33 days of mass digitization\" — that is, at a time when it started to become economical to take existing physical artifacts in libraries and turn them into digital files at a large scale.\n\nThe founding members of HathiTrust were among the initial partners for Google's Book Search product, which allows people to search across and view small snippets of text from in-copyright books and read full copies of public domain books scanned from libraries' 34 collections. The libraries provided Google with books from their collections, Google would then scan the books for use in Book Search, and return to the libraries a digital copy for their own uses. These uses included setting up HathiTrust not only to ensure long-term preservation of the digital books and their metadata, but also to facilitate other uses, including full text search of books and accessibility for people with print disabilities. 
In separate court cases, both Google and HathiTrust's uses of the books were deemed consistent with copyright law.\n\nThe uses most relevant to this paper are those enabled by what HathiTrust refers to today as the Research Center. The Center grew in part out of a research discipline called \"digital humanities,\" which, among other things, seeks to use computational resources or other digital technologies to analyze information and contribute to the study of literature, media, history, and other areas. For instance, imagine you want to understand how a given term (e.g., \"war on drugs\") became used; one might seek to analyze when the term was first used and how often it was used over time by analyzing a vast quantity of sources, searching out the term's use. The insight here is that there is much to be learned not just from reading or otherwise consuming specific material, but also from \"non-consumptive research,\" or \"research in which computational analysis is performed on one or more volumes (textual or image objects)\" to derive other sorts of insights. AI training is a type of non-consumptive use.\n\nToday, the Center \"[s]upports large-scale computational analysis of the works in the HathiTrust Digital Library to facilitate non-profit and educational research.\" It includes over 18 million books in over 400 languages from the HathiTrust Digital Library collection. Roughly 58% of the corpus is in copyright. HathiTrust notes that, while this corpus is large, it has limitations in terms of its representation across subject matter, language, geography, and other dimensions. In terms of subject matter, the corpus is skewed towards humanities (64.9%) and social sciences (14.3%). In terms of language, 51% of the books are in English,\n\n<i>Authors Guild v. HathiTrust, 902 F.Supp.2d 445 (SDNY October 10, 2012) and *Authors Guild v.* 32 *HathiTrust*, 755 F.3d 87 (2d Cir. 
2014).\n\nSee https://www.hathitrust.org/member-libraries/member-list/ — the membership is principally US 33 institutions, and most of the non-US members are from English speaking countries or institutions that use English as the primary language of operations.\n\nThis functionality is limited to scanned books provided by library partners in the US. 34", - "page_start": 14, - "page_end": 14, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "generative models, or even (deep learning-based) amortised inference models. These various extensions could provide valuable tools for using AIF models in both theoretical and applied research.\n\n**Author Contributions:** Conceptualisation, S.W.N., J.E.L. and P.T.W.; methodology, S.W.N., J.E.L. and P.T.W.; software, S.W.N., J.E.L. and P.T.W.; formal analysis, S.W.N. and J.E.L.; writing—original draft preparation, S.W.N. and J.E.L.; writing—review and editing, C.H., K.F., C.M. and P.T.W.; visualisation, S.W.N. and J.E.L.; supervision, C.M. and P.T.W.; project administration, P.T.W. All authors read and agreed to the published version of this manuscript.\n\n**Funding:** C.M. acknowledges funding from Aarhus Universitets Forskningsfonds (grant no. AUFF-E-2019-7-10) and from the Carlsberg Foundation (grant no. CF21-0439).\n\n**Institutional Review Board Statement:** Not applicable.\n\n**Informed Consent Statement:** Not applicable.\n\n**Data Availability Statement:** The original data presented in this study are openly available in ActiveInferenceJuliaPaper at URL: https://osf.io/j3k5q/.\n\n**Conflicts of Interest:** The authors declare no conflicts of interest. 
The funders had no role in the design of this study; in the collection, analyses or interpretation of data; in the writing of this manuscript; or in the decision to publish the results.\n\n## **Abbreviations**\n\nThe following abbreviations are used in this manuscript:\n\n| AIF | Active inference |\n| --- | --- |\n| FEP | Free energy principle |\n| VFE | Variational free energy |\n| EFE | Expected free energy |\n| MCMC | Markov Chain Monte Carlo |\n| POMDP | Partially Observed Markov Decision Process |\n\n## **References**\n\n- 1. Parr, T.; Pezzulo, G.; Friston, K.J. *Active Inference: The Free Energy Principle in Mind, Brain, and Behavior*; The MIT Press: Cambridge, MA, USA, 2022. [CrossRef]\n- 2. Friston, K.; FitzGerald, T.; Rigoli, F.; Schwartenbeck, P.; O'Doherty, J.; Pezzulo, G. Active inference and learning. *Neurosci. Biobehav. Rev.* **2016**, *68*, 862–879. [CrossRef]\n- 3. Friston, K.; FitzGerald, T.; Rigoli, F.; Schwartenbeck, P.; Pezzulo, G. Active inference: A process theory. *Neural Comput.* **2017**, *29*, 1–49. [CrossRef]\n- 4. Friston, K.J.; Stephan, K.E. Free-energy and the brain. *Synthese* **2007**, *159*, 417–458. [CrossRef] [PubMed]\n- 5. Friston, K. The free-energy principle: A unified brain theory? *Nat. Rev. Neurosci.* **2010**, *11*, 127–138. [CrossRef] [PubMed]\n- 6. Friston, K. The free-energy principle: A rough guide to the brain? *Trends Cogn. Sci.* **2009**, *13*, 293–301. [CrossRef] [PubMed]\n- 7. Friston, K. A free energy principle for a particular physics. *arXiv* **2019**, arXiv:1906.10184. [CrossRef]\n- 8. Friston, K.; Da Costa, L.; Sajid, N.; Heins, C.; Ueltzhöffer, K.; Pavliotis, G.A.; Parr, T. The free energy principle made simpler but not too simple. *Phys. Rep.* **2023**, *1024*, 1–29. [CrossRef]\n- 9. Friston, K.; Kiebel, S. Predictive coding under the free-energy principle. *Philos. Trans. R. Soc. B Biol. Sci.* **2009**, *364*, 1211–1221. [CrossRef] [PubMed]\n- 10. Karl, F. A Free Energy Principle for Biological Systems. 
*Entropy* **2012**, *14*, 2100–2121. [CrossRef]\n- 11. Corcoran, A.W.; Pezzulo, G.; Hohwy, J. From allostatic agents to counterfactual cognisers: Active inference, biological regulation, and the origins of cognition. *Biol. Philos.* **2020**, *35*, 32. [CrossRef]\n- 12. Heins, C.; Millidge, B.; Da Costa, L.; Mann, R.P.; Friston, K.J.; Couzin, I.D. Collective behavior from surprise minimization. *Proc. Natl. Acad. Sci. USA* **2024**, *121*, e2320239121. [CrossRef] [PubMed]\n- 13. Patzelt, E.H.; Hartley, C.A.; Gershman, S.J. Computational Phenotyping: Using Models to Understand Individual Differences in Personality, Development, and Mental Illness. *Personal. Neurosci.* **2018**, *1*, e18. [CrossRef] [PubMed]", - "page_start": 29, - "page_end": 29, - "source_file": "pubmed7_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed7_cc4.pdf", - "query": "To which system does the AIF apply ?", - "target_page": 2, - "target_passage": "AIF was argued to be applicable to any self organising system that actively maintains a stable boundary that defines its integrity [10], a broad category that includes cells and plants [11], as well as humans [2] and even collectives [12].", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# **Artificial intelligence**\n\n**Artificial intelligence** (**AI**), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. 
It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.[1] Such machines may be called AIs.\n\nHigh-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\"[2][3]\n\nVarious subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. [a] General intelligence—the ability to complete any task performed by a human on an at least equal level—is among the field's long-term goals.[4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. [b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields.[5]\n\nArtificial intelligence was founded as an academic discipline in 1956,[6] and the field went through multiple cycles of optimism throughout its history, [7][8] followed by periods of disappointment and loss of funding, known as AI winters. 
[9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques.[11] This growth accelerated further after 2017 with the transformer architecture, [12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.\n\n## **Goals**", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI,[367] with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did \"not actually use AI in a material way\".[368]\n\n### **Evaluating approaches to AI**\n\nNo established unifying theory or paradigm has guided AI research for most of its history. [aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term \"artificial intelligence\" to mean \"machine learning with neural networks\"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.\n\n#### **Symbolic AI and its limits**\n\nSymbolic AI (or \"GOFAI\")[370] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. 
They were highly successful at \"intelligent\" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: \"A physical symbol system has the necessary and sufficient means of general intelligent action.\"[371]\n\nHowever, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level \"intelligent\" tasks were easy for AI, but low level \"instinctive\" tasks were extremely difficult.[372] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a \"feel\" for the situation, rather than explicit symbolic knowledge.[373] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.[ab][16]\n\nThe issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence,[375][376] in part because subsymbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.\n\n#### **Neat vs. scruffy**\n\n\"Neats\" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). \"Scruffies\" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. 
This issue was actively discussed in the 1970s and 1980s,[377] but eventually was seen as irrelevant. Modern AI has elements of both.\n\n#### **Soft vs. hard computing**", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Artificial intelligent (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.[175][176][177]\n\nVincent van Gogh in watercolour created by generative AI software\n\n#### **Other industry-specific tasks**\n\nThere are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated \"AI\" in some offerings or processes.[178] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.\n\nAI applications for evacuation and disaster management are growing. 
AI has been used to investigate if and how people evacuated in large scale and small scale evacuations using historical data from GPS, videos or social media. Further, AI can provide real time information on the real time evacuation conditions.[179][180][181]\n\nIn agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.\n\nArtificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for \"classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights.\" For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. 
Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[300]\n\nThe UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.[301]\n\n#### **Regulation**\n\nThe regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[302] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. [303] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[304][305] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[306] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and\n\nThe first global AI Safety Summit was held in 2023 with a declaration calling for international cooperation.\n\nVietnam. 
Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[306] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. [306] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[307] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[308] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, governments officials and academics.[309] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the \"Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law\". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[310]\n\nIn a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that \"products and services using AI have more benefits than drawbacks\".[304] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. 
[311] In a 2023 Fox News poll, 35% of Americans thought it \"very important\", and an additional 41% thought it \"somewhat important\", for the federal government to regulate AI, versus 13% responding \"not very important\" and 8% responding \"not at all important\".[312][313]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia3.pdf" - }, - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind.[387]\n\n#### **AI welfare and rights**\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part to society on their own.[393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. 
They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[390][389]\n\n## **Future**\n\n### **Superintelligence and the singularity**\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\".[395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[396]\n\n### **Transhumanism**\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. [397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome including proof of theorems have been developed such as *Alpha Tensor*, *Alpha Geometry* and *Alpha Proof* all from Google DeepMind, [157] *Llemma* from eleuther[158] or *Julius*. 
[159]\n\nWhen natural language is used to describe mathematical problems, converters transform such prompts into a formal language such as Lean to define mathematical tasks.\n\nSome models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics.[160]\n\n## **Finance**\n\nFinance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated \"robot advisers\" have been in use for some years.[161]\n\nWorld Pensions experts like Nicolas Firzli insist it may be too early to see the emergence of highly innovative AI-informed financial products and services: \"the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation.\"[162]\n\n## **Military**\n\nVarious countries are deploying AI military applications.[163] The main applications enhance command and control, communications, sensors, integration and interoperability. [164] Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. [163] AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams.[164]\n\nAI has been used in military operations in Iraq, Syria, Israel and Ukraine.[163][165][166][167]\n\n## **Generative AI**\n\nIn the early 2020s, generative AI gained widespread prominence. GenAI is AI capable of generating text, images, videos, or other data using generative models, [168][169] often in response to prompts. 
[170][171]\n\nIn March 2023, 58% of U.S. adults had heard about ChatGPT and 14% had tried it.[172] The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by a fake photo of Pope Francis wearing a white puffer coat, the fictional arrest of Donald Trump, and a hoax of an attack on the Pentagon, as well as the usage in professional creative arts.[173][174]\n\n## **Agents**", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia3.pdf" - }, - { - "text": "#### ISSUE\n\nDecember 2024\n\n#### CATEGORIES\n\nTechnology & Cybersecurity Editor's Picks Finance - Personal Home - Interior\n\n# **The top AI-powered tech trends in 2025**\n\n(NC) As we look ahead to 2025, artificial intelligence (AI) continues to revolutionize our lives. From enhancing our daily routines to transforming entire industries, AI's impact is undeniable.\n\nThese five innovations are set to shape our future, offering unprecedented convenience, efficiency and personalization.\n\n### AI-powered computing\n\nAI-powered computing, such as Intel-powered laptops – or AI PC – is at the forefront of technological advancement. But what, exactly, is an AI PC? They're computers that have AI built into their processors – also known as the brain of the computer – which optimizes performance, enhances security and provides a more personalized experience as they learn from your usage patterns. For consumers, this means faster, smarter and more secure computing tailored to your individual needs.\n\n### Smart home automation\n\nSmart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. 
Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.\n\n## Health and wellness\n\nThe health-care industry is seeing significant transformation. AI-driven health and wellness applications can monitor vital signs, predict potential health issues, and even provide personalized fitness and\n\nnutrition plans. Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being.\n\n# Financial services\n\nAI is also making waves in the financial sector, offering smarter and more secure ways to manage money. From AI-driven investment platforms that provide personalized financial advice to fraud detection systems that protect against cyber threats, AI can analyze vast amounts of data to identify trends and make more informed financial decisions.\n\n# Enhanced education\n\nIn education, enhanced learning tools provide personalized learning experiences that adapt to each student's strengths and weaknesses. This technology can offer real-time feedback, helping students improve their skills more effectively. Additionally, AI can assist educators by automating administrative tasks and providing insights into student performance, allowing for more focused and effective teaching.\n\nLearn more at intel.com/aipc.\n\nwww.newscanada.com Word Count: 346\n\n#### M ed i a A tt a ch m e n ts −\n\n#### View", - "page_start": 0, - "page_end": 0, - "source_file": "news4.pdf" - }, - { - "text": "A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. 
[248] Even when used in conventional warfare, it is unlikely that they will be unable to reliably choose targets and could potentially kill an innocent person. [248] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed.[249] By 2015, over fifty countries were reported to be researching battlefield robots.[250]\n\nAI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. [251] All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.[252][253]\n\nThere many other ways that AI is expected to help bad actors, some of which can not be foreseen. 
For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[254]\n\n#### **Technological unemployment**\n\nEconomists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[255]\n\nIn the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that \"we're in uncharted territory\" with AI.[256] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in longterm unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. [257] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at \"high risk\" of potential automation, while an OECD report classified only 9% of U.S. jobs as \"high risk\".[p][259] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[255] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[260][261]\n\nUnlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; *The Economist* stated in 2015 that \"the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution\" is \"worth taking seriously\".[262] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. 
[263]\n\nFrom the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 160. Alex McFarland: *7 Best AI for Math Tools.* (https://www.unite.ai/best-ai-for-math-tools/) Archived (https://web.archive.org/web/20240911125615/https://www.unite.ai/best-ai-for-mat h-tools/) 11 September 2024 at the Wayback Machine unite.ai. Retrieved 2024-08-07\n- 161. Matthew Finio & Amanda Downie: IBM Think 2024 Primer, \"What is Artificial Intelligence (AI) in Finance?\" 8 Dec. 2023\n- 162. M. Nicolas, J. Firzli: Pensions Age/European Pensions magazine, \"Artificial Intelligence: Ask the Industry\" May June 2024 https://videovoice.org/ai-in-finance-innovationentrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligence-act-wont-work-asintended/ Archived (https://web.archive.org/web/20240911125502/https://videovoice.org/ai-i n-finance-innovation-entrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligenceact-wont-work-as-intended/) 11 September 2024 at the Wayback Machine.\n- 163. Congressional Research Service (2019). *Artificial Intelligence and National Security* (https://f as.org/sgp/crs/natsec/R45178.pdf) (PDF). Washington, DC: Congressional Research Service.PD-notice\n- 164. Slyusar, Vadym (2019). Artificial intelligence as the basis of future control networks (Preprint). doi:10.13140/RG.2.2.30247.50087 (https://doi.org/10.13140%2FRG.2.2.30247.5 0087).\n- 165. Iraqi, Amjad (3 April 2024). \" 'Lavender': The AI machine directing Israel's bombing spree in Gaza\" (https://www.972mag.com/lavender-ai-israeli-army-gaza/). *+972 Magazine*. Retrieved 6 April 2024.\n- 166. 
Davies, Harry; McKernan, Bethan; Sabbagh, Dan (1 December 2023). \" 'The Gospel': how Israel uses AI to select bombing targets in Gaza\" (https://www.theguardian.com/world/2023/ dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets). *The Guardian*. Retrieved 4 December 2023.\n- 167. Marti, J Werner (10 August 2024). \"Drohnen haben den Krieg in der Ukraine revolutioniert, doch sie sind empfindlich auf Störsender – deshalb sollen sie jetzt autonom operieren\" (http s://www.nzz.ch/international/die-ukraine-setzt-auf-drohnen-die-autonom-navigieren-und-toet en-koennen-ld.1838731). *Neue Zürcher Zeitung* (in German). Retrieved 10 August 2024.\n- 168. Newsom, Gavin; Weber, Shirley N. (6 September 2023). \"Executive Order N-12-23\" (https:// www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pdf) (PDF). Executive Department, State of California. Archived (https://web.archive.org/web/202402212 22035/https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pd f) (PDF) from the original on 21 February 2024. Retrieved 7 September 2023.\n- 169. Pinaya, Walter H. L.; Graham, Mark S.; Kerfoot, Eric; Tudosiu, Petru-Daniel; Dafflon, Jessica; Fernandez, Virginia; Sanchez, Pedro; Wolleb, Julia; da Costa, Pedro F.; Patel, Ashay (2023). \"Generative AI for Medical Imaging: extending the MONAI Framework\". arXiv:2307.15208 (https://arxiv.org/abs/2307.15208) [eess.IV (https://arxiv.org/archive/eess.I V)].\n- 170. Griffith, Erin; Metz, Cade (27 January 2023). \"Anthropic Said to Be Closing In on $300 Million in New A.I. Funding\" (https://www.nytimes.com/2023/01/27/technology/anthropic-ai-fu nding.html). *The New York Times*. Archived (https://web.archive.org/web/20231209074235/h ttps://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html) from the original on 9 December 2023. Retrieved 14 March 2023.\n- 171. Lanxon, Nate; Bass, Dina; Davalos, Jackie (10 March 2023). 
\"A Cheat Sheet to AI Buzzwords and Their Meanings\" (https://news.bloomberglaw.com/tech-and-telecom-law/a-c heat-sheet-to-ai-buzzwords-and-their-meanings-quicktake). *Bloomberg News*. Archived (http s://web.archive.org/web/20231117140835/https://news.bloomberglaw.com/tech-and-telecom -law/a-cheat-sheet-to-ai-buzzwords-and-their-meanings-quicktake) from the original on 17 November 2023. Retrieved 14 March 2023.", - "page_start": 38, - "page_end": 38, - "source_file": "wikipedia3.pdf" - }, - { - "text": "init_aif function, including the policy length and which parts of the information gain to use, as well as the policy precision *γ*.\n\n✞ ☎\n\n```\n# Infer policies\ninfer_policies !(aif)\n✝ ✆\n```\nFinally, sample_action! then samples the next action from the agent. This is performed by marginalising the policy probabilities to obtain the probabilities for the action on the next time step, and then softmax transforming it with the *α* action precision parameter.\n\n✞ ☎\n\n✝ ✆\n\n```\n# Sample Action\nsample_action !(aif)\n```\nThese functions can be combined by users in various ways, depending on their purpose. Often, however, users will want to combine them in a single function that implements the full action–perception loop that receives an observation and returns an action. This is implemented with the ActionModels sister package for behavioural modelling.\n\n#### *3.2. Simulation with* ActionModels\n\nActionModels is a library for implementing, simulating and fitting various behavioural models to data. Here, we show how to use it in conjunction with ActiveInference to make the simulation of AIF models easy and in a fully generalised framework that is compatible with other types of cognitive and behavioural models as well. ActiveInference provides a full \"action model\"—a full model of the action-generating process in an agent for using AIF called action_pomdp!. In this case, all this information is contained in the AIF object. action_pomdp! 
then takes the AIF object and a single-time-step observation as arguments, and then runs state inference, parameter learning and policy inference, and returns probability distributions over the possible actions of the agent.\n\n✞ ☎\n\n✝ ✆\n\n```\nobservation = [1] # observation with one modality\n# Run the action model for a single observation\naction_distributions = action_pomdp !( aif :: AIF , observation )\n```\nThis can conveniently be used in conjunction with an ActionModels agent, a more abstract structure that is used for running behavioural models in general, and which is used when fitting models to data. We therefore begin with initialising an agent that contains the AIF object:\n\n```\n✞ ☎\n# Initialize ActionModels Agent with active inference agent as a substruct .\nagent = init_agent (\n action_model = action_pomdp !, # The active inference action model\n substruct = aif , # The AIF object\n )\n✝ ✆\n```\nThe agent object can be used with a set of standard functions. single_input! provides the agent with an observation, updates it is beliefs and returns a sampled action; for nonaction-dependent observations, give_inputs! provides a series of observations across time steps and returns actions for each. These can be easily used in an agent-based simulation to have AIF agents evolve and act over time.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed7_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed7_cc4.pdf", - "query": "What is the definition of POMDP ?", - "target_page": 4, - "target_passage": " The Partially Observable Markov Decision Process is a type of flexible generative model that is widely used in the AIF literature. 
In discrete time and usually a discrete state space, this model type is parametrised to fit a given task by a set matrices containing probability distributions.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "Θ is then described by a Dirichlet distribution parametrised by a set of concentration parameters *θ*:\n\n$p(\\Theta)=Dir(\\Theta|\\theta)$ (19)\n\nThe concentration parameter of a Dirichlet distribution is essentially a non-negative count of how many times the given category (be it a type of observation or state transition) has occurred. The distribution of concentration parameter counts will determine the shape of the estimated categorical probability distribution, while the scale of the concentration parameters will determine the certainty per precision of the belief. Updating beliefs about Θ (the parameters in the matrices) then corresponds to updating these concentration parameters *θ* with the following update equation:\n\n$$\\theta_{t+1}=\\omega*\\theta_{t}+\\eta*\\chi t\\tag{20}$$\n\nThe updated value for the concentration parameter (*θt*+1) is found by adding the previous concentration parameter *θt* multiplied by a forgetting rate *ω* to the observed data count *χ* (either the observation in the case of **A** learning, or the inferred state or state transition for other matrices) multiplied by a learning rate *η*. With this relatively simple update equation—which, in essence, amounts to just counting the occurrences of categories—an AIF agent can update its beliefs about the various matrices it uses to make inferences about environmental states. For more details on parameter learning with POMDPs, see [23,33,52].\n\n## **3. Using ActiveInference.jl**\n\nIn this section, we provide an overview of the various functions a user will need to operate ActiveInference. This includes functionalities for creating POMDP agents, for simulating behaviour and for fitting the models to data. 
In the next section, we demonstrate how to use the package on a concrete worked example. ActiveInference is under continual development, and the newest version of the package, including documentation for how to use it, can be found at github.com/ilabcode/ActiveInference.jl.\n\n#### *3.1. Creating and Using a POMDP*\n\nThe general structure of ActiveInference.jl is heavily inspired by pymdp [23], a Python library for implementing simulations of AIF in discrete state spaces. Those already acquainted with pymdp should find the syntax here familiar. ActiveInference can be installed as normal from the official Julia General Registry using the Julia's native package manager Pkg:\n\n✞ ☎\n\n```\nusing Pkg\nPkg.add( ActiveInference )\n✝ ✆\n```\nIt can then be loaded into the current project environment:\n\n✞ ☎ **using** ActiveInference ✝ ✆\n\nCentral to the package is the AIF object. This is a structure containing all the components of the generative model, as well as the dynamic belief states and the various settings needed to perform AIF, and is used in conjunction with most of the high-level functions of the package. An AIF object can be created with the init_aif function, which takes as arguments the components of the generative model and a dictionary of various settings and parameters:", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "- [20] C. Tomlinson, \"On the motion of certain liquids on the surface of water,\" Phil. Mag. Ser. 4 39, 32–48 (1870).\n- [21] C. G. Marangoni, \"Ueber die Ausbreitung der Tropfen einer Flussigkeit auf der Oberfl ¨ ache einer ¨ anderen,\" Ann. Phys. (Poggendorf) 143, 337–354 (1871).\n- [22] O. Karthaus, L. Grasjo, N. Maruyama, and M. Shimomura, \"Formation of ordered mesoscopic poly- ¨ mer arrays by dewetting,\" Chaos 9, 308–314 (1999).\n- [23] X. Gu, D. Raghavan, J. F. Douglas, and A. Karim, \"Hole-growth instability in the dewetting of evaporating polymer solution films,\" J. Polym. Sci. Pt. 
B-Polym. Phys. 40, 2825–2832 (2002).\n- [24] S. W. Hong, J. F. Xia, and Z. Q. Lin, \"Spontaneous formation of mesoscale polymer patterns in an evaporating bound solution,\" Adv. Mater. 19, 1413–1417 (2007).\n- [25] G. Liu, C. F. Zhang, J. Zhao, and Y. X. Zhu, \"Study of the morphology of the three-phase contact line and its evolution by morphological examination after droplet evaporation of aqueous polymer solutions,\" Langmuir 24, 7923–7930 (2008).\n- [26] M. Mertig, U. Thiele, J. Bradt, G. Leibiger, W. Pompe, and H. Wendrock, \"Scanning force microscopy and geometrical analysis of two-dimensional collagen network formation,\" Surface and Interface Analysis 25, 514–521 (1997).\n- [27] M. Mertig, U. Thiele, J. Bradt, D. Klemm, and W. Pompe, \"Dewetting of thin collagenous precursor films,\" Appl. Phys. A 66, S565–S568 (1998).\n- [28] U. Thiele, M. Mertig, and W. Pompe, \"Dewetting of an evaporating thin liquid film: Heterogeneous nucleation and surface instability,\" Phys. Rev. Lett. 80, 2869–2872 (1998).\n- [29] H. Maeda, \"An atomic force microscopy study of ordered molecular assemblies and concentric ring patterns from evaporating droplets of collagen solutions,\" Langmuir 15, 8505–8513 (1999).\n- [30] I. I. Smalyukh, O. V. Zribi, J. C. Butler, O. D. Lavrentovich, and G. C. L. Wong, \"Structure and dynamics of liquid crystalline pattern formation in drying droplets of DNA,\" Phys. Rev. Lett. 96, 177801 (2006).\n- [31] L. Zhang, S. Maheshwari, H. C. Chang, and Y. X. Zhu, \"Evaporative self-assembly from complex DNA-colloid suspensions,\" Langmuir 24, 3911–3917 (2008).\n- [32] M. Maillard, L. Motte, A. T. Ngo, and M. P. Pileni, \"Rings and hexagons made of nanocrystals: A Marangoni effect,\" J. Phys. Chem. B 104, 11871–11877 (2000).\n- [33] G. L. Ge and L. Brus, \"Evidence for spinodal phase separation in two-dimensional nanocrystal selfassembly,\" J. Phys. Chem. 
B 104, 9573–9575 (2000).", - "page_start": 26, - "page_end": 26, - "source_file": "1001.2669.pdf" - }, - { - "text": "| Core Concepts | |\n| --- | --- |\n| AIF | Active inference is a formal framework for modelling behaviour and cog |\n| | nition. Perception and action are cast as minimising free energy—the VFE |\n| | and EFE, respectively—given a generative model of the environment. |\n| VFE | The variational free energy F quantifies how well a generative model |\n| | explains incoming sensory observations. It can be rewritten as the negative |\n| | log model evidence (called surprise) upper-bounded by the divergence |\n| | from the optimal posterior p(s o). Perception as inference is accomplished |\n| | by selecting the approximate posterior q(s) with the lowest associated |\n| | VFE. |\n| | F[q(s), o] ≜ DKL[q(s)∥p(o,s)] = DKL[q(s)∥p(s o)] − ln p(o) |\n| | {z } {z } Divergence Surprise |\n| EFE | The expected free energy G quantifies the expected future free energy |\n| | under an action policy π. It consists of an information gain term and a |\n| | pragmatic value term that provide a natural balance between exploratory |\n| | and goal-seeking behaviour. Action as inference is accomplished by select |\n| | ing the action policy with the lowest associated EFE. |\n| | = − Eq(o˜,s˜ π) [ln q(s˜ o˜, π) − ln q(s˜ π)] − Eq(o˜ π) [ln p(o˜ C)] Gπ |\n| | {z } {z } Information gain Pragmatic value |\n| Generative | The generative model is an agent's formal assumptions about the structure |\n| model | and dynamics of its environment, based on which perceptual and active |\n| | inferences are carried out. Many types of generative models exist that are |\n| | suitable for different environments and tasks. |\n| POMDP | The Partially Observable Markov Decision Process is a type of flexible |\n| | generative model that is widely used in the AIF literature. 
In discrete time |\n| | and usually a discrete state space, this model type is parametrised to fit a |\n| | given task by a set matrices containing probability distributions. |\n\n## **2. Active Inference with POMDPs**\n\nIn this section, we briefly describe the core concepts of AIF and POMDPs. This should familiarise the reader with the vernacular used in the later sections regarding the functionalities of the package. While various extensions, such as structure learning, which enables an agent to learn the structure or shape of its environment through model comparison [44–47], or hierarchical and temporally deep POMDPs [48,49], are relevant for future work, describing these in detail is beyond the scope of this foundational paper.\n\nAt the core of AIF lies the minimisation of a variational free energy upper bound on surprise for perception, as well as action. This is motivated by the free energy principle [4–8], which states that self-organising systems can be described as minimising the variational free energy of their sensory states. The minimisation of free energy generally takes two", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "quantities as its target: the variational free energy (*VFE*) in the case of perception and the expected free energy (*EFE*) in the case of action. The *VFE* is the free energy associated with a given sensory observation and is resolved perceptually by updating beliefs about the environment. The *EFE* is the free energy that is expected in the future, contingent on a given policy or course of action. Choosing action policies associated with a low *EFE* lead to reducing uncertainty about the environment, as well as making preferred observations more likely.\n\n#### *2.1. POMDPs in Active Inference*\n\nIn AIF, the POMDP is one of the most common families of generative models used to make inferences about the environment. 
It is a Markovian discrete state-space model, where employing it means representing the environment and observations as inhabiting one among a set of possible (possibly multidimensional) states, and that the changes in these states can only depend on the system's previous state and the agent's actions. Environmental states are not directly observable, so they have to be inferred based on incoming sensory observations. In AIF for POMDPs and other generative models in general, both perception and action are cast as Bayesian inferences (see Sections 2.2 and 2.3), as well as the learning of parameters of the generative model (see Section 2.4). Crucially, an agent's generative model does not a priori have to be isomorphic to the true environment (i.e., the data-generating process), although this will generally lead to a successful inference, and that the generative model will therefore often come to resemble the environment through learning.\n\nA discrete state-space POMDP in AIF is conventionally defined by five main sets of parameters: **A**, **B**, **C**, **D** and **E** [1,33], see Figure 1. Together, these parametrise the agent's prior beliefs about the prior probability of different states in the environment, how states of the environment change and how they generate observations. Typically, they will be vectors, matrices or tensors; however, henceforth we denote them by their corresponding letter in bold. These make up the components needed for the agent to perform AIF.\n\n**A**, also called the *observation model*, represents the state-to-observation likelihood model. This describes how observations depend on or are generated by states of the environment. It is structured as a matrix with a column for each possible environmental state *s*, and a row for each possible observation *o*. 
Each column is then a categorical probability distribution over the observations that will occur given the environmental state (meaning that each column must contain non-negative values that sum to 1). If the observations are multidimensional (i.e., multiple observations are made at each time point), there is a matrix for each observation modality. If two or more states determine the observation, the likelihood model then becomes a tensor. If **A** is imprecise (i.e., the probabilities are highly entropic and evenly distributed), observations are taken to carry less information about the environment, in many cases leading to more uncertain inferences, and vice versa.\n\n**B**, also called the *transition model*, describes the state-to-state transition probabilities of environmental states *s*. **B** encodes the agent's assumptions about how the environment changes over time, depending on its actions. It has a column and a row for each environmental state *s*, where each column is a categorical probability distribution over the states the environment will take on the next time step, given the state it is currently in. If the environment is modelled as multidimensional, there will be a matrix for each environmental state factor. Additionally, there is a separate matrix for each possible action (making each factor in **B** a tensor). This means that for every factor in the model, there may be one or more actions that pick out the appropriate slice of the tensor. Action therefore allows the agent to predict that the environment (and the corresponding observations) will change differently depending on the actions that it chooses. 
If **B** is imprecise (i.e., highly entropic),", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "*Article*\n\n# **Introducing ActiveInference.jl: A Julia Library for Simulation and Parameter Estimation with Active Inference Models**\n\n**Samuel William Nehrer 1,† , Jonathan Ehrenreich Laursen 1,† , Conor Heins 2,3,* , Karl Friston 3,4 , Christoph Mathys 5 and Peter Thestrup Waade 5**\n\n- 1 School of Culture and Communication, Aarhus University, 8000 Aarhus, Denmark; 202204724@post.au.dk (S.W.N.); 202204836@post.au.dk (J.E.L.)\n- 2 Department of Collective Behaviour, Max Planck Institute of Animal Behavior, D-78457 Konstanz, Germany\n- 3 VERSES Research Lab., Los Angeles, CA 90016, USA; k.friston@ucl.ac.uk\n- 4 Queen Square Institute of Neurology, University College London, London WC1N 3BG, UK\n- 5 Interacting Minds Centre, Aarhus University, 8000 Aarhus, Denmark; chmathys@cas.au.dk (C.M.); ptw@cas.au.dk (P.T.W.)\n- ***** Correspondence: cheins@ab.mpg.de\n- † These authors contributed equally to this work.\n\n**Abstract:** We introduce a new software package for the Julia programming language, the library ActiveInference.jl. To make active inference agents with Partially Observable Markov Decision Process (POMDP) generative models available to the growing research community using Julia, we re-implemented the pymdp library for Python. ActiveInference.jl is compatible with cutting-edge Julia libraries designed for cognitive and behavioural modelling, as it is used in computational psychiatry, cognitive science and neuroscience. This means that POMDP active inference models can now be easily fit to empirically observed behaviour using sampling, as well as variational methods. 
In this article, we show how ActiveInference.jl makes building POMDP active inference models straightforward, and how it enables researchers to use them for simulation, as well as fitting them to data or performing a model comparison.\n\n**Keywords:** active inference; free energy principle; predictive processing; Markov decision process; cognitive modelling; Julia\n\n**PACS:** 87.15.Aa\n\n**MSC:** 91-08\n\n**JEL Classification:** C63\n\n## **1. Introduction**\n\nWe introduce a novel software library for Julia, ActiveInference, which lets users produce the simulated behaviour of agents and their internal belief states with active inference (AIF) models, as well as fit such models to empirically observed behaviour. AIF [1–3] is a generally applicable formal framework for understanding and simulating intelligent behaviour that is based in neurobiology and first principles from statistical physics [4–8]. AIF treats action and perception as unified under a joint imperative: to minimise the variational free energy (*VFE*), which quantifies how well the agent's internal generative model explains incoming sensory observations. It is an upper bound on the the surprise from sensory observations, making AIF formally related to prediction error\n\nAcademic Editor: Astero Provata\n\nReceived: 25 October 2024 Revised: 2 January 2025 Accepted: 7 January 2025 Published: 12 January 2025\n\n**Citation:** Nehrer, S.W.; Ehrenreich Laursen, J.; Heins, C.; Friston, K.; Mathys, C.; Thestrup Waade, P. Introducing ActiveInference.jl: A Julia Library for Simulation and Parameter Estimation with Active Inference Models. *Entropy* **2025**, *27*, 62. https://doi.org/10.3390/e27010062\n\n**Copyright:** © 2025 by the authors. Licensee MDPI, Basel, Switzerland. 
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/ licenses/by/4.0/).", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "distance between particle clusters resulting from the demixing process that occurs already in the bulk liquid and is not related to the front instability at all. Note that one finds a similar sequence of regimes (i) to (iv) when increasing the particle-particle interaction strengths for fixed εnl (see Ref. [41]) for further details.\n\nFIG. 3: (Colour online) Dependence of the mean finger number left behind by the unstable dewetting front on the particle-liquid interaction strength εnl. The regions marked (i) to (iv) are discussed in the main text. The insets display typical snapshots obtained in the four different regions. Particles are black, liquid is grey (green online) and the empty substrate is white. The remaining parameters are kT = 0.2, M = 20, µ = −2.2, ρ av n = 0.1, nn = 2.0, domain size 1200 × 1200. For the insets, from left to right, nl = 1.2, 1.4, 1.45, 1.8.\n\nWe note also that the fingering process may be viewed as self-optimising the front motion – i.e. the front keeps its average velocity constant by expelling particles into the fingers. A similar effect exists for dewetting polymer films [18], where liquid is expelled from the growing moving rim which collects the dewetted polymer. There, the surplus liquid is left on the surface as a droplet pattern.\n\nThe kinetic Monte Carlo model is a very useful tool that helps one to understand the pattern formation in drying nanoparticle suspensions. 
One has, however, to keep in mind the restrictions", - "page_start": 12, - "page_end": 12, - "source_file": "1001.2669.pdf" - }, - { - "text": "init_aif function, including the policy length and which parts of the information gain to use, as well as the policy precision *γ*.\n\n✞ ☎\n\n```\n# Infer policies\ninfer_policies !(aif)\n✝ ✆\n```\nFinally, sample_action! then samples the next action from the agent. This is performed by marginalising the policy probabilities to obtain the probabilities for the action on the next time step, and then softmax transforming it with the *α* action precision parameter.\n\n✞ ☎\n\n✝ ✆\n\n```\n# Sample Action\nsample_action !(aif)\n```\nThese functions can be combined by users in various ways, depending on their purpose. Often, however, users will want to combine them in a single function that implements the full action–perception loop that receives an observation and returns an action. This is implemented with the ActionModels sister package for behavioural modelling.\n\n#### *3.2. Simulation with* ActionModels\n\nActionModels is a library for implementing, simulating and fitting various behavioural models to data. Here, we show how to use it in conjunction with ActiveInference to make the simulation of AIF models easy and in a fully generalised framework that is compatible with other types of cognitive and behavioural models as well. ActiveInference provides a full \"action model\"—a full model of the action-generating process in an agent for using AIF called action_pomdp!. In this case, all this information is contained in the AIF object. action_pomdp! 
then takes the AIF object and a single-time-step observation as arguments, and then runs state inference, parameter learning and policy inference, and returns probability distributions over the possible actions of the agent.\n\n✞ ☎\n\n✝ ✆\n\n```\nobservation = [1] # observation with one modality\n# Run the action model for a single observation\naction_distributions = action_pomdp !( aif :: AIF , observation )\n```\nThis can conveniently be used in conjunction with an ActionModels agent, a more abstract structure that is used for running behavioural models in general, and which is used when fitting models to data. We therefore begin with initialising an agent that contains the AIF object:\n\n```\n✞ ☎\n# Initialize ActionModels Agent with active inference agent as a substruct .\nagent = init_agent (\n action_model = action_pomdp !, # The active inference action model\n substruct = aif , # The AIF object\n )\n✝ ✆\n```\nThe agent object can be used with a set of standard functions. single_input! provides the agent with an observation, updates it is beliefs and returns a sampled action; for nonaction-dependent observations, give_inputs! provides a series of observations across time steps and returns actions for each. 
These can be easily used in an agent-based simulation to have AIF agents evolve and act over time.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "```\nimport com.ibm.edms.od.*;\npublic class CustomTransform {\n public static HashMap transformData(HashMap odMap) throws Exception {\n System.out.println(\"Inside transformData method\");\n // List this transform name from the XML file\n System.out.println(\" Transform name: \" + \n (String)odMap.get(ODTransform.TXFRM_REQ_NAME));\n // List the property keys and values ODWEK read from the transform XML\n // file and provided to this Custom Class\n System.out.println(\" Transform properties:\");\n Properties gtProps = (Properties)odMap.get(ODTransform.TXFRM_REQ_PROPS);\n Enumeration enumeration = gtProps.keys();\n List list = new ArrayList();\n while (enumeration.hasMoreElements()) {\n list.add((String)enumeration.nextElement());\n }\n Collections.sort(list);\n for (String key : list)\n System.out.println(String.format(\"%25s = %-25s\", key,\n gtProps.getProperty(key)));\n // Retrieve the native document from ODWEK\n byte[] inDoc = (byte [])odMap.get(ODTransform.TXFRM_REQ_DATA);\n System.out.println(\" Native document size: \" + (inDoc == null ? null:\n inDoc.length));\n // Retrieve the document resources from ODWEK\n byte[] inRes = (byte [])odMap.get(ODTransform.TXFRM_REQ_RES);\n System.out.println(\" Native doc resource size: \" + (inRes == null ? 
null:\n inRes.length));\n // Normally this is where you do the transform or do something with the\n byte data.\n // Let's just concat the resources if there are any to the doc\n byte[] transformedDoc;\n if (inRes != null) {\n transformedDoc = new byte[inRes.length + inDoc.length];\n System.arraycopy(inRes, 0, transformedDoc, 0, inRes.length);\n System.arraycopy(inDoc, 0, transformedDoc, inRes.length,\n inDoc.length);\n }\n else\n transformedDoc = inDoc;\n System.out.println(\" Concatenated resources to doc size: \" +\n transformedDoc.length);\n // Send the transformed data back to ODWEK\n HashMap rtnMap = new HashMap();\n rtnMap.put(ODTransform.TXFRM_RESP_DATA, transformedDoc);\n return rtnMap;\n }\n}\n```\nExample 9-4 on page 214 shows how to set up the HashMap to pass document byte arrays in and out of this custom interface, and how to define a custom Java class that contains the **transformData()** method.", - "page_start": 238, - "page_end": 238, - "source_file": "sg246915.pdf" - }, - { - "text": "FIG. 1: (Colour online) Images of strongly ramified dewetting structures obtained using Atomic Force Microscopy in the case of (a) an aqueous collagen solution on graphite (courtesy of U. Thiele, M. Mertig and W. Pompe; see also Ref. [42]. Image size: 5µm×5µm); (b) poly(acrylic acid) in water spin-coated onto a polystyrene substrate (reprinted with permission of John Wiley & Sons, Inc. from Ref. [23]; copyright John Wiley & Sons, Inc. 2002; Image size: 2.5µm×2.5µm); and in both (c) and (d), a solution of gold nanoparticles in toluene, spin-coated onto native oxide terminated silicon substrates (scale bars given in panels). In all the images the lighter areas correspond to the deposited solute and the dark areas to the empty substrate.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.2669.pdf" - }, - { - "text": "- 34. Dunlop 2000, p. xii.\n- 35. Petitfils 2002, pp. 250–253, 254–260.\n- 36. Merryman 2007, p. .\n- 37. Antoine 1989, p. 33.\n- 38. 
Petitfils 2002, pp. 223–225\n- 39. Wolf 1968, p. 117.\n- 40. Dunlop 2000, p. 54.\n- 41. Israel 1990, pp. 197–199.\n- 42. Hutton 1986, pp. 299–300.\n- 43. Lynn 1999, pp. 109–110.\n- 44. McKay 1997, p. 206.\n- 45. Young 2004, p. 133.\n- 46. Black 2011, pp. 97–99.\n- 47. Panhuysen 2009, pp. 396–398.\n- 48. Frost 2000, p. 213.\n- 49. Panhuysen 2009, pp. 451.\n- 50. Lynn 1999, pp. 161–171.\n- 51. Merriman 2019, p. 319.\n- 52. Bailey 2018, p. 14.\n- 53. Faroqhi, Suraiya (2006). *The Ottoman Empire and the World Around It*. Bloomsbury Academic. p. 73 (https://book s.google.com/books?id=oMHwBktE9MMC&pg=PA73). ISBN 978-1-8451-1122-9.\n- 54. Bluche 1986, p. 439.\n- 55. Keay 1994, pp. 201–204.\n- 56. Pagani 2001, p. 182.\n- 57. Sullivan, Michael (1989). *The Meeting of Eastern and Western Art*. University of California Press. p. 98 (https://boo ks.google.com/books?id=8pLhEWdaMvEC&pg=PA98). ISBN 978-0-5202-1236-7.\n- 58. Barnes 2005, p. 85.\n- 59. Mungello 2005, p. 125 (https://archive.org/details/greatencounterof0000mung_m1v1/page/125).\n- 60. Philip Mansel, *King of the World: The Life of Louis XIV* (2020) cited in Tim Blanning, \"Solar Power,\" *The Wall Street Journal*, 17 October 2020, p. C9.\n- 61. Saint-Simon, Louis de Rouvroy, duc de (1876). *The Memoirs of the Duke de Saint-Simon on the Reign of Louis XIV. and the Regency* (https://books.google.com/books?id=-EM_AAAAYAAJ). Vol. 2. Translated by St. John, Bayle. London: Chatto and Windus. pp. 363, 365. Archived (https://web.archive.org/web/20230713192807/https:// books.google.com/books?id=-EM_AAAAYAAJ) from the original on 13 July 2023. Retrieved 22 March 2023.\n- 62. Halsall, Paul (August 1997). \"Modern History Sourcebook: Duc de Saint-Simon: The Court of Louis XIV\" (https://w eb.archive.org/web/20080410084543/http://www.fordham.edu/HALSALL/mod/17stsimon.html). *Internet Modern History Sourcebook*. History Department of Fordham University. 
Archived from the original (http://www.fordham.ed u/halsall/mod/17stsimon.html) on 10 April 2008. Retrieved 19 January 2008.\n- 63. Daubenton, Louis-Jean-Marie (2009) [1755]. \"Elephant\" (http://quod.lib.umich.edu/d/did/did2222.0000.944/--eleph ant?rgn=main;view=fulltext). *Encyclopedia of Diderot & d'Alembert Collaborative Translation Project*. Translated by Eden, Malcolm. Michigan Publishing, University of Michigan Library. Retrieved 1 April 2015.\n- 64. Lynn 1999, p. 46.\n- 65. Quoted in Symcox 1974, pp. 236–237\n- 66. Quoted in Symcox 1974, pp. 237, 242\n- 67. Sturdy 1998, pp. 89–99.\n- 68. Sturdy 1998, pp. 92–93.\n- 69. Sturdy 1998, p. 96, citing Pillorget, Suzanne; Pillorget, René (1996). *France Baroque, France Classique* (in French). Vol. I. Bouquins. p. 935. ISBN 978-2-2210-4868-9.\n- 70. Nolan 2008, p. 132.\n- 71. Sturdy 1998, pp. 96–97.\n- 72. Bluche 1986, pp. 20–21.\n- 73. \"Louis XIV, king of France\" (https://web.archive.org/web/20080107232540/http://www.bartleby.com/65/lo/Louis14F r.html). *The Columbia Encyclopedia* (6th ed.). 2007. Archived from the original (http://www.bartleby.com/65/lo/Loui s14Fr.html) on 7 January 2008. Retrieved 19 January 2008.\n- 74. Sturdy 1998, p. 98, citing Scoville, Warren C. (1960). *The Persecution of Huguenots and French Economic Development, 1680–1720*. University of California Press. OCLC 707588406 (https://search.worldcat.org/oclc/7075 88406).\n- 75. Edwards 2007, pp. 
212–213.", - "page_start": 27, - "page_end": 27, - "source_file": "wikipedia5.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed6_cc4.pdf", - "query": "What is dyspnea ?", - "target_page": 2, - "target_passage": "Dyspnea refers to a subjective sensation of breathing discomfort.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Take-home Points\n\nStudy Question: How profoundly are adults with undiagnosed respiratory symptoms affected by dyspnea?\n\nResults: In community-based adults with undiagnosed respiratory symptoms, those identified with preserved ratio impaired spirometry experienced the greatest impact of dyspnea, followed by those with undiagnosed asthma or COPD. Greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nInterpretation: Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity.\n\nDyspnea refers to a subjective sensation of breathing discomfort.1 In a study involving a community-based population aged > 70 years, the prevalence of dyspnea was found to be 32%.2 Dyspnea can lead to limitations in daily activities, reduced exercise tolerance, and heightened mortality risks.3\n\nDyspnea not only affects individuals with diagnosed respiratory conditions but also poses a significant burden on those with undiagnosed conditions. In a systematic review by Müller et al,4 the combined\n\n#### Study Design and Methods Recruitment of Undiagnosed Cases and Healthy Control Patients\n\nBetween June 2017 and January 2023, adults aged $ 18 years were recruited through a two-step process into the Undiagnosed COPD and Asthma Population (UCAP) study, a multicenter case finding study. Approval for prevalence of dyspnea in the adult general population across 11 studies was estimated to be 10%. 
Dyspnea can arise from a broad spectrum of underlying factors, including both respiratory and nonrespiratory conditions. Studies have revealed that dyspnea is not solely attributable to respiratory conditions but is also heavily influenced by cardiovascular deconditioning and by nonrespiratory factors, including psychosocial, social, and environmental determinants.5,6\n\nDyspnea is a prevalent symptom with consequences that extend beyond its physiologic implications. A study in European patients with COPD explored the burden of dyspnea and identified potential correlates. The study revealed that higher dyspnea impact correlated with lower health-related quality of life, increased work impairment, and a higher frequency of emergency department visits.7\n\nThe three objectives of our study were as follows: (1) to evaluate the impact of dyspnea in adults from the general population who had no prior diagnosis of respiratory disease but who reported having significant respiratory symptoms in the past 6 months; (2) to identify associated risk factors for dyspnea and estimate their influence on the symptom; and (3) to explore the relationship between dyspnea and health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\nthe study was obtained from the research ethics boards of the 17 participating study sites across Canada. Informed, written consent was provided by all study participants.\n\nBoth landlines and cellphones within a 90-minute radius of any of the 17 study sites were dialed randomly. A\n\nDOI: https://doi.org/10.1016/j.chest.2024.07.183\n\nABBREVIATIONS: ASQ = Asthma Screening Questionnaire; BD = bronchodilator; CAT = COPD Assessment Test; PCA = principal component analysis; PRISm = preserved ratio impaired spirometry; SGRQ = St. George's Respiratory Questionnaire\n\nAFFILIATIONS: From The Ottawa Hospital Research Institute (J. B., E. G., K. L. V., G. G. A., S. M., and S. D. 
A.), University of Ottawa, Ottawa, ON; the Desautels Faculty of Management (G. A. W.), McGill University, Montreal, QC; the Department of Medicine (C. B.), The University of British Columbia, Vancouver, BC; the Centre de recherche (L.-P. B. and A. C.), Institut de cardiologie et de pneumologie de Québec, Université Laval, Quebec, QC; the Cumming School of Medicine (S. K. F.), University of Calgary, Calgary, AB; the Department of Medicine (E. P.), University of Saskatchewan, Regina, SK; the Firestone Institute for Respiratory Health (R. A. M.), McMaster University, Hamilton, ON; the Department of Medicine (C. L.), Université de Montreal, Montreal, QC; the Department of Medicine and the Li Ka Shing Knowledge Institute (S. G.), St. Michael's Hospital University of Toronto, Toronto, ON; the Department of Medicine\n\n(P. H.), Dalhousie University, Halifax, NS; the Department of Medicine (I. M. and M. B.), University of Alberta, Edmonton, AB; the Department of Medicine (M. D. L.), Queen's University, Kingston; the Department of Medicine (C. J. L.), University of Western Ontario, London, ON; the Department of Medicine (T. A.), Memorial University, St. John's, NF; the Department of Medicine (N. E.), McGill University, Montreal, QC; the Department of Medicine (M. A.), University of Manitoba, Winnipeg, MN, Canada.\n\nDrs Bierbrier and Gerstein contributed equally to this manuscript.\n\nPart of this work has been presented at the American Thoracic Society Conference, May 17-22, 2024, San Diego, CA.\n\nCORRESPONDENCE TO: Shawn D. Aaron, MD; email: saaron@ohri.ca Copyright 2024 The Author(s). Published by Elsevier Inc under license from the American College of Chest Physicians. 
This is an open access article under the CC BY license (http://creativecommons.org/ licenses/by/4.0/).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "# TABLE 8 ] Unadjusted and Adjusted Dyspnea Associations With Health Care Use\n\n| | Unadjusted | | Adjusted | |\n| --- | --- | --- | --- | --- |\n| Measure | Dyspnea OR (95% CI) | Value P | Dyspnea OR (95% CI) | Value P |\n| In the past 12 mo, did you visit your general | 1.011 (1.007-1.014) | < .001 | 1.011 (1.007-1.014) | < .001 |\n| practitioner or a nurse practitioner or another physician at a walk-in clinic for any breathing | | | | |\n| problems? | | | | |\n| In the past 12 mo, did you visit an emergency | 1.015 (1.009-1.021) | < .001 | 1.015 (1.009-1.022) | < .001 |\n| department for any breathing problems? | | | | |\n| In the past 12 mo, were you hospitalized for any | 1.021 (1.006-1.037) | .006 | 1.023 (1.007-1.039) | .005 |\n| breathing problems or respiratory illness? | | | | |\n\nData are presented as OR (95% CI) with Pvalues. Adjusted values are adjusted for age, sex, and BMI.\n\noutpatients with cardiorespiratory disease25 and the Dyspnea-12 in patients with asthma26 and found that the affective aspect of dyspnea can significantly influence the impact of dyspnea on health status, irrespective of the intensity of breathlessness.\n\nIn those with PRISm, there was a strong, positive association between higher values for the FEV1/FVC ratio and dyspnea. For the PRISm group, a higher FEV1/FVC ratio may reflect diminished lung compliance due to interstitial lung disease and/or respiratory system restriction due to obesity, which could contribute to worse dyspnea. 
Conversely, the association of dyspnea with the FEV1/FVC ratio was in the opposite direction for those with asthma or COPD, and a lower FEV1/FVC ratio correlated with worse dyspnea, as expected.\n\nOur study complements the literature by focusing on adults with undiagnosed respiratory symptoms who were randomly selected and recruited through active case finding in the community. This increases the generalizability of our results to a broader population. Our dyspnea questions were derived from widely used and validated respiratory health questionnaires, and our dyspnea assessment measure is a weighted average of responses to these validated questions. Consequently, the measure has an immediate interpretation in terms of the lived day-to-day experience of individuals.\n\nOur study has limitations. We did not undertake reliability/reproducibility testing of our questionnaire. The dyspnea impact assessment score was statistically associated with increased health care utilization, lower quality of life, and reduced work productivity; therefore, by virtue of this analysis, our questionnaire has construct validity. However, further attempts at external validation of the questionnaire using an independent data set would be important. Health care utilization during the preceding 12 months was assessed on entry into the study, and there is potential for impaired recall of events. Our study may have missed asthma in some participants because bronchial challenge testing was not conducted on those who tested negative for airflow obstruction or BD responsiveness. 
A previous study showed that an additional diagnostic step incorporating\n\n| TABLE 9 ] Unadjusted and Adjusted Dyspnea Associations With Work Productivity (WPAI) |\n| --- |\n\n| | Unadjusted | | Adjusted | |\n| --- | --- | --- | --- | --- |\n| Measure | Dyspnea OR (95% CI) | P Value | Dyspnea OR (95% CI) | P Value |\n| Are you currently employed | 0.995 (0.992-0.998) | .002 | 0.993 (0.990-0.997) | < .001 |\n| (working for pay)? | | | | |\n| | Dyspnea Coefficient | | Dyspnea Coefficient | |\n| Measurea | (95% CI) | Value P | (95% CI) | Value P |\n| Absenteeism | 0.061 (0.040-0.083) | <.001 | 0.066 (0.044-0.089) | < .001 |\n| Presenteeism | 0.334 (0.293-0.375) | <.001 | 0.349 (0.306-0.392) | < .001 |\n| Work productivity loss | 0.368 (0.323-0.413) | <.001 | 0.383 (0.336-0.430) | < .001 |\n| Activity impairment | 0.503 (0.463-0.544) | <.001 | 0.501 (0.458-0.544) | < .001 |\n\nORs and regression coefficients are presented with 95% CIs and P values. Adjusted coefficients are adjusted for age, sex, and BMI. WPAI ¼ Work Productivity and Activity Impairment questionnaire.\n\na Measures calculated from WPAI questions.21", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "bronchial challenge testing into a case finding strategy identified asthma in 26% of symptomatic individuals who had normal spirometry and no response to BD.27\n\nIndividuals with undiagnosed respiratory symptoms, determined to have asthma or COPD through spirometry, experience poor health status.28 Therefore, the implementation of known treatment approaches for asthma or COPD is important to improve their conditions.29 In contrast, those with normal spirometry or PRISm face unclear treatment approaches. 
Longacting BD therapy in symptomatic individuals with tobacco exposure with normal spirometry is not effective.30 Weight management programs may be useful for individuals who are obese with PRISm-related dyspnea; however, this awaits definitive clinical trials.31\n\nDyspnea was severe and prevalent within our study group; however, it remained undiagnosed. A study conducted by Stefan et al32 revealed that physicians underestimated their patients' dyspnea 37.9% of the time, whereas nurses underestimated it 3.5% of the time. Moreover, many patients limit their physical activities, which lead them to downplay the extent of their dyspnea.19 Patient underreporting of symptoms, coupled with inadequate physician-led investigations of symptoms, may explain why dyspnea often goes undiagnosed in the population.33\n\nIn conclusion, our study measured dyspnea impact in individuals with no preexisting diagnosis of lung disease who reported respiratory symptoms as part of a purposeful case finding strategy. Individuals with PRISm exhibited the greatest impact of dyspnea, even higher than those newly diagnosed with asthma or COPD. After adjusting for patient factors, comorbidities, pulmonary diseases, and severity of lung physiologic impairment, most of the variability in dyspnea remained unexplained. We also showed that dyspnea was associated with increased health care utilization, impaired quality of life, and work productivity.\n\n## Funding/Support\n\nThis study is supported by the Canadian Institutes of Health Research [FDN Grant 154322].\n\n# Financial/Nonfinancial Disclosures\n\nNone declared.\n\n# Acknowledgments\n\nAuthor contributions: S. D. A. and G. A. W. contributed to conception and design. J. B., E. G., G. A. W., K. L. V., and S. D. A. contributed to analysis and interpretation. J. B., E. G., G. A. W., K. L. V., S. D. A., C. B., C. L., L.-P. B., A. C., E. P., S. K. F., S. G., R. A. M., I. M., M. B., P. H., M. D. L., M. A., C. J. L., T. A., N. E., G. G. A., and S. M. 
contributed to drafting the manuscript for important intellectual content. All authors had access to and participated in the interpretation of the data and provided input into the preparation and submission of the manuscript. The authors vouch for the accuracy and completeness of the data.\n\nRole of sponsors: The sponsor had no role in the design of the study, the collection and analysis of the data, or the preparation of the manuscript.\n\nOther contributions: We thank the following individuals from the Canadian study sites: Ottawa Hospital Research Institute, Ottawa, Ontario: Taylor Poulin; Susan Deveau, RRT; Victoria Thompson; Meredith McCleery; Angelina Tohme; Vicky Panteleakos, RRT; Geneviève Longtin, RRT; Joanne Cassidy, RRT; Amanda Bergeron, MSc; Jennifer Biggs, RN; Jessica Bergeron; and Elisabet White; Vancouver General Hospital, Vancouver, British Columbia: Shelley Abercromby, BSc; Jana Caine; David Savage; Natasha Verzosa; Ravneet Mahal; and Mary Justine Angeles; Queen Elizabeth II Health Sciences Centre, Halifax, NS: Scott Fulton, RRT; Hôpital du Sacré Coeur de Montréal, Montréal, QC: Simone Chaboillez, MT; and Meliza Benabdallah; St. Joseph's Hamilton, Hamilton, ON: Liz Johnson; St. Boniface Hospital, Winnipeg, MB: Cheryl Noble, RN; Institut Universitaire de Cardiologie et de Pneumologie de Québec-Université Laval, Québec, QC: Johane Lepage, BSc; Joanne Milot, RN; and Christiane Balizet, RN; University of Calgary, Calgary, AB: Lisette Machado, MD; and Curtis Dumonceaux, BSc; University of Alberta, Edmonton, AB: Miranda Bowen, RRT; Fay Hartt; Angie Hillaby, RRT; and Amy Haartsma, RRT; St. 
Michael's Hospital, Toronto, ON: Stephanie Segovia, PhD; and Carolyn Spiegel-Feld; Queen's University Kingston General Hospital, Kingston, ON: Ann Taite, BSc; Alison Morra, BScN; Emma Bullock, HBSc; and Taylar Wall, RRT; University of Saskatchewan Royal University Hospital, Saskatoon, SK: Nancy Zacher; Janet Baran, RN; and Yessica Lopez, BA; London Health Sciences Centre - Victoria Hospital, London, ON: Katie Maguire; Heba Almadhoun; and Robert Campbell-Pereira, BSc; St. Clare's Mercy Hospital, St John's, NL: Sarah Anthony, BNRN; and Tanya Nolan, BNRN; McGill University Health Centre, Montreal, QC: Francine Noel; Royal Victoria Regional Health Centre, Barrie, ON: Masoud Mahdavian; and Ashley Brown, RRT; and Michael Garron Hospital, Toronto, ON: Ian Fraser; Han Byul (Liz) Lee; and Yuna Lee, BA. We would also thank Dong Vo We (data manager, Ottawa Hospital Research Institute, Ottawa, ON). We also thank the thousands of study participants who gave their time and came in for the study visits. We also thank ASDE Survey Sampler, Inc (Gatineau, QC, Canada) for organizing the random digit dialing.\n\n# References\n\n- 1. Parshall MB, Schwarthzstein RM, Adams L, et al. An Official American Thoracic Society Statement: update on the mechanisms, assessment, and management of dyspnea. Am J Respir Crit Care Med. 2012;185:435-452.\n- 2. Ho SF, O'Mahony MS, Steward JA, et al. Dyspnoea and quality of life in older people at home. Age Ageing. 2001;30: 155-159.\n- 3. Laviolette L, Laveneziana P. Dyspnoea: a multidimensional and multidisciplinary approach. Eur Respir J. 2014;43: 1750-1762.\n- 4. Müller A, Mraz T, Wouters EFM, et al. Prevalence of dyspnea in general adult populations: a systematic review and meta-analysis. Respir Med. 
2023;218: 107379.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "| Disease Group | Reversibility of FEV1, % | | Post-BD FEV1/FVC Ratio | | Post-BD FEV1 % predicted | Overall Value P |\n| --- | --- | --- | --- | --- | --- | --- |\n| Control | 0.163 (P ¼ .47) | | P 0.274 ( | [ .05) | 0.090 (P ¼ .17) | .096 |\n| Normal spirometry | 0.186 (P ¼ .16) | | 0.240 ( P | [ .005) | P < .001) 0.131 ( | < .001 |\n| Asthma | 0.545 ( P | [ .01) | 0.107 (P ¼ .58) | | 0.158 (P ¼ .08) | .009 |\n| COPD | P 0.392 ( | [ .002) | P 0.307 ( | [ .05) | 0.075 (P ¼ .37) | < .001 |\n| PRISm | 0.290 (P ¼ .39) | | 0.854 ( P | [ .002) | P [ .004) 0.650 ( | < .001 |\n\nTABLE 6 ] Dyspnea Regressed on Lung Function Variables Representing Severity of Impairment\n\nDyspnea regressed on lung function variables representing severity of impairment, after removing contributions of patient-specific factors and spirometry disease group Tables 4 and 5 (1.7% of variability explained). Boldface indicates statitistical significance. BD ¼ bronchodilator; PRISm ¼ preserved ratio impaired spirometry.\n\nApproximately 65% of the variability in dyspnea remained unexplained by the factors examined in our study. Most individuals in our study showed normal spirometry results but still carried a substantial burden of dyspnea, an inconsistency that needs explanation. Several factors not included in our analysis may have contributed to the unexplained variation. 
Environmental factors (eg, air pollution, allergen exposure, seasonal variations in symptoms) are potential contributors to this unexplained variability.22 Genetic predispositions could also play a significant role, as suggested by a study that revealed that parents with dyspnea were 1.8 times more likely to have offspring with dyspnea.23 Additionally, fitness could be a contributing factor, especially in individuals with undiagnosed PRISm, asthma, or COPD who may restrict their activities to avoid dyspnea, and hence become deconditioned.6\n\nThere were significant but modest differences in mean dyspnea levels across the 17 study sites (data not shown), which are not explained by the risk factors we accounted for in our study. This finding is not surprising because some of the potential contributing factors previously mentioned and other site-specific factors\n\n(eg, climate, air quality/industrialization, socioeconomic status) of the catchment population tend to vary across study sites.\n\nDyspnea is a complex, subjective symptom that is modified by nonrespiratory factors including psychosocial, social, and environmental influences.5 Interindividual variability in the perception of dyspnea, influenced by these nonrespiratory factors, may play an important role. A study conducted by Ziegler et al24 assessed the perception of dyspnea in 42 healthy individuals using a standardized inspiratory resistive loading stimulus. The study used the modified Borg scale to measure dyspnea perception levels. Among the participants subjected to the same inspiratory resistive load, 31%, 45%, and 24% of participants classified their level of dyspnea as low, intermediate, and high, respectively. The study revealed that differences between individuals contribute considerable variability to the perception of dyspnea, even among healthy participants.\n\nThe affective dimension of dyspnea can be captured using additional questionnaires (eg, Multidimensional Dyspnea Profile, Dyspnea-12). 
Studies have explored the use of the Multidimensional Dyspnea Profile in\n\n| TABLE 7 ] Unadjusted and Adjusted Dyspnea Associations With Quality of Life (SF-36) |\n| --- |\n\n| | Unadjusted | | Adjusted | |\n| --- | --- | --- | --- | --- |\n| Measure | Dyspnea Coefficient (95% CI) | Value P | Dyspnea Coefficient (95% CI) | Value P |\n| Physical functioning | 0.693 (0.718 to 0.668) | < .001 | 0.655 (0.680 to 0.630) | < .001 |\n| Physical health limitations | 0.634 (0.666 to 0.603) | < .001 | 0.628 (0.661 to 0.595) | < .001 |\n| Emotional problems | 0.403 (0.438 to 0.369) | < .001 | 0.407 (0.443 to 0.370) | < .001 |\n| Energy/fatigue | 0.454 (0.479 to 0.428) | < .001 | 0.452 (0.479 to 0.425) | < .001 |\n| Emotional well-being | 0.230 (0.256 to 0.204) | < .001 | 0.239 (0.266 to 0.213) | < .001 |\n| Social functioning | 0.433 (0.466 to 0.399) | < .001 | 0.434 (0.469 to 0.399) | < .001 |\n| Pain | 0.410 (0.444 to 0.377) | < .001 | 0.387 (0.423 to 0.352) | < .001 |\n| General health | 0.390 (0.416 to 0.364) | < .001 | 0.382 (0.409 to 0.355) | < .001 |\n| Total score | 0.485 (0.504 to 0.467) | < .001 | 0.473 (0.493 to 0.454) | < .001 |\n\nAdjusted coefficients are adjusted for age, sex, and BMI. Regression coefficients are presented with 95% CIs and Pvalues.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "# Impact of Dyspnea on Adults With Respiratory Symptoms Without a Defined Diagnosis\n\nJared Bierbrier, BSc; Emily Gerstein; George A. Whitmore, PhD; Katherine L. Vandemheen, MScN; Celine Bergeron, MD; Louis-Philippe Boulet, MD; Andreanne Cote, MD; Stephen K. Field, MD; Erika Penz, MD; R. Andrew McIvor, MD; Catherine Lemière, MD; Samir Gupta, MD; Paul Hernandez, MD; Irvin Mayers, MD; Mohit Bhutani, MD; M. Diane Lougheed, MD; Christopher J. Licskai, MD; Tanweer Azher, MD; Nicole Ezer, MD; Martha Ainslie, MD; Gonzalo G. Alvarez, MD; Sunita Mulpuru, MD; and Shawn D. 
Aaron, MD\n\n> BACKGROUND: We investigated dyspnea; its associated risk factors; and its impact on health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\n> RESEARCH QUESTION: What is the impact of dyspnea in adults with undiagnosed respiratory symptoms?\n\n> STUDY DESIGN AND METHODS: This population-based study included 2,857 adults who were experiencing respiratory symptoms. These individuals had not been previously diagnosed with any lung conditions and were recruited from 17 Canadian centers using random digit dialing. Each participant underwent spirometry testing both before and after using a bronchodilator to determine if they met the diagnostic criteria for COPD, asthma, or preserved ratio impaired spirometry (PRISm), or if their spirometry results were normal. An agematched control group (n ¼ 231) was similarly recruited using random digit dialing. A dyspnea impact assessment score from 0 to 100 was produced using questions from the COPD Assessment Test and St. George's Respiratory questionnaire.\n\n> RESULTS: Individuals with PRISm (n ¼ 172) reported more impactful dyspnea (mean score, 63.0; 95% CI, 59.5-66.4) than those with undiagnosed asthma (n ¼ 265; mean score, 56.6; 95% CI, 53.9-59.3) or undiagnosed COPD (n ¼ 330; mean score, 57.5; 95% CI, 55.1-59.9). All groups reported significantly more impactful dyspnea than the control group (mean score, 13.8; 95% CI, 11.8-15.7). Patient-specific risk factors including age, sex, BMI, smoking, and comorbidities explained 20.6% of the variation in dyspnea. An additional 12.4% of the variation was explained by disease classification and another 1.7% by the severity of lung function impairment assessed with spirometry. 
After adjusting for age, sex, and BMI, greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\n> INTERPRETATION: Our findings showed that in community-based adults with undiagnosed respiratory symptoms, those identified with PRISm experienced the greatest impact of dyspnea. Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity. CHEST 2024; 166(6):1296-1308\n\nKEY WORDS: asthma; case finding; COPD; dyspnea\n\nFOR EDITORIAL COMMENT, SEE PAGE 1259", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "#### Risk Factors Associated With Dyspnea\n\nPatient-related risk factors were considered first, and results of spirometry considered afterward. The spirometry risk factors chosen for the second stage analysis included the spirometry-based diagnosis of the patient (asthma, COPD, PRISm, or normal) and lung function results indicative of the severity of physiologic impairment. Severity was gauged by assessing three principal lung function measures: (1) post-BD FEV1 % predicted, (2) post-BD FEV1/FVC ratio, and (3) percentage reversal of FEV1 with BD.\n\n#### Dyspnea Impact and Health Care Use, Quality of Life, and Work Productivity\n\nThe impact of dyspnea and its associations with health care use, quality of life, and work productivity were examined. Health care utilization was assessed through self-reported data. Quality of life was assessed using the 36-Item Short Form Health Survey questionnaire, where higher scores indicate better health status. Work productivity was assessed using the Work Productivity and Activity Impairment questionnaire, where higher scores indicate greater impairment in work productivity and daily activities.\n\n#### Statistical Analysis\n\nBox plots were used to compare distribution patterns of dyspnea impact assessments among the disease groups. Pairwise comparison tests were conducted to evaluate mean dyspnea differences between groups. Multiple linear regression analysis was used to measure contributions to variability of dyspnea by selected patient-specific risk factors, spirometry disease classification, and key lung function measures. The selected sets of risk factors were evaluated using successive regression analyses. Analysis of variance sums of squares from the successive regression analyses provided the cumulative percentage contributions to variability of dyspnea. Simple, multiple, and logistic regression analyses were used to study associations between dyspnea and health care utilization, quality of life, and work productivity outcomes. All statistical analyses were done using STATA 16 statistical software (StataCorp).\n\n#### Results\n\nFigure 1 illustrates the results of the case finding approach, including the enrollment of the control group. Among 5,631 potentially eligible participants, 1,359 participants (24%) did not meet the threshold of ≥ 6 points on the ASQ or ≥ 20 points on the COPD-Diagnostic Questionnaire and were thus excluded, leaving 4,272 individuals deemed eligible for spirometry.\n\nFigure 1 – Study flow diagram demonstrating the case finding and control group recruitment and allocation. 
ASQ = Asthma Screening Questionnaire; COPD-DQ = COPD Diagnostic Questionnaire; CF = cystic fibrosis; MI = myocardial infarction; PRISM = preserved ratio impaired spirometry.", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "| Risk Factor | Regression Coefficient | P Value |\n| --- | --- | --- |\n| Age | 0.0909 | .005 |\n| Female | 8.217 | < .001 |\n| BMI | 0.899 | < .001 |\n| Household income < CAD $30,000 | 1.420 | .40 |\n| Household income ≥ CAD $30,000 | 2.149 | .07 |\n| Smoking history, pack-y | 0.144 | < .001 |\n| Smoking exposure | 5.123 | < .001 |\n| Occupational exposure | 0.00975 | < .001 |\n| Congestive heart failure | 10.119 | .004 |\n| Coronary artery disease | 4.813 | .001 |\n| Depression/anxiety | 6.892 | < .001 |\n| Diabetes mellitus | 1.627 | .22 |\n| Hypertension | 3.433 | < .001 |\n| Anemia | 1.738 | .15 |\n| Cancer | 0.952 | .49 |\n| GERD | 4.663 | < .001 |\n| Liver disease | 1.081 | .61 |\n| Renal disease | 2.073 | .32 |\n| Stroke | 8.463 | < .001 |\n\nTABLE 4 ] Sequential Regression Analyses of Risk Factors Contributing to Variability in Dyspnea: Dyspnea Regressed on Patient-Specific Risk Factors (20.6% of Variability Explained)\n\nBoldface indicates statistical significance. 
GERD = gastroesophageal reflux disease.\n\n1.011; P < .001 for general practitioner visits; OR, 1.015; P < .001 for emergency department visits; and OR, 1.023, P = .005 for hospitalization for respiratory illness) (Table 8).\n\nAfter adjusting for age, sex, and BMI, dyspnea was associated with a reduced likelihood of current employment (OR, 0.993; P < .001), increased absenteeism (coefficient, 0.066; P < .001), increased presenteeism (coefficient, 0.349; P < .001), higher work productivity loss (coefficient, 0.383; P < .001), and greater activity impairment (coefficient, 0.501; P < .001), as measured by the Work Productivity and Activity Impairment questionnaire21 (Table 9).\n\nTABLE 5 ] Dyspnea Regressed on Spirometry Disease Group\n\n| Disease Group | Regression Coefficient | P Value |\n| --- | --- | --- |\n| Control | 31.2 | < .001 |\n| Normal spirometrya | NA | NA |\n| Asthma | 4.6 | .001 |\n| COPD | 3.8 | .003 |\n| PRISm | 5.5 | .001 |\n| Constant | 51.9 | NA |\n\nDyspnea regressed on spirometry disease group, after removing contributions from subject-specific factors in Table 4 (12.4% of variability explained). Boldface indicates statistical significance. NA = not applicable; PRISm = preserved ratio impaired spirometry. a Normal spirometry group is the reference category.\n\n### Discussion\n\nOur study explored dyspnea in community-based adults with undiagnosed respiratory symptoms identified via case finding. Surprisingly, we found that the dyspnea experienced by those with PRISm had a greater impact on their activities and health status than those with newly diagnosed COPD or asthma.\n\nThe prevalence of individuals who were obese and morbidly obese in the PRISm group partially explains the between-group difference in dyspnea. 
The excess dyspnea seen in the PRISm group when compared with the normal spirometry group is partly explained by patient-specific risk factors, including BMI, which shrink the mean dyspnea differential between the groups from 11.2 to 5.5 points (Tables 3-6). The remaining 5.5 point difference indicates that PRISm patients have excess dyspnea relative to symptomatic individuals with normal spirometry for additional reasons other than obesity.", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "- 5. Nishino T. Dyspnoea: underlying mechanisms and treatment. Br J Anaesth. 2011;106:463-474.\n- 6. Neder J, Berton D, Müller P, et al. Ventilatory inefficiency and exertional dyspnea in early chronic obstructive pulmonary disease. Ann Am Thorac Soc. 2017;14(suppl_1): S22-S29.\n- 7. Gruenberger JB, Vietri J, Keininger DL, Mahler DA. Greater dyspnea is associated with lower health- related quality of life among European patients with COPD. Int J Chron Obstruct Pulmon Dis. 2017;12: 937-944.\n- 8. Preteroti M, Whitmore GA, Vandemheen KL, et al. Population-based case-finding to identify subjects with undiagnosed asthma or COPD. Eur Respir J. 2020;55:2000024.\n- 9. Huynh C, Whitmore GA, Vandemheen KL, et al. Derivation and validation of the UCAP-Q case-finding questionnaire to detect undiagnosed asthma and COPD. Eur Respir J. 2022;60(3):2103243.\n- 10. Shin B, Cole SL, Park SJ, et al. A new symptom-based questionnaire for predicting the presence of asthma. J Investig Allergol Clin Immunol. 2010;20: 27-34.\n- 11. Price DB, Tinkelman DG, Nordyke RJ, et al. Scoring system and clinical application of COPD diagnostic questionnaires. Chest. 2006;129: 1531-1539.\n- 12. Price DB, Tinkelman DG, Halbert RJ, et al. Symptom-based questionnaire for identifying COPD in smokers. Respiration. 2006;73:285-295.\n- 13. Jones PW, Harding G, Berry P, et al. Development and first validation of the COPD Assessment Test. Eur Respir J. 2009;34:648-654.\n- 14. 
Jones PW. Quality of life measurement for patients with diseases of the airways. Thorax. 1991;46:676-682.\n- 15. Jones PW, Quirk FH, Baveystock CM. The St George's Respiratory Questionnaire. Respir Med. 1991;85:25-31.\n- 16. Jones PW. St George's Respiratory Questionnaire: MCID. J Chronic Obstr Pulm Dis. 2005;2:75-79.\n- 17. Global Initiative for Asthma. Global strategy for asthma management and prevention. Global Initiative for Asthma website. Accessed July 30, 2023. https:// ginasthma.org/wp-content/uploads/2023/ 07/GINA-2023-Full-report-23_07_06- WMS.pdf\n- 18. Global Initiative for Chronic Obstructive Lung Disease. Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease. Global Initiative for Chronic Obstructive Lung Disease website. Accessed July 30, 2023. https://goldcopd.org/wp-content/ uploads/2023/03/GOLD-2023-ver-1.3-17 Feb2023_WMV.pdf\n- 19. Magner KMA, Cherian M, Whitmore GA, et al. Assessment of preserved ratio impaired spirometry (PRISm) using pre and post bronchodilator spirometry in a randomly-sampled symptomatic cohort. Am J Resp Crit Care Med. 2023;208(10): 1129-1131.\n- 20. Hanania NA, O'Donnell DE. Activityrelated dyspnea in chronic obstructive pulmonary disease: physical and psychological consequences, unmet needs, and future directions. Int J Chron Obstruct Pulmon Dis. 2019;14: 1127-1138.\n- 21. Reilly Associates. WPAI scoring. Reilly Associates website. Accessed May 1, 2024. http://www.reillyassociates.net/wpai_ scoring.html\n- 22. Carlsen HK, Haga SL, Olsson D, et al. Birch pollen, air pollution and their interactive effects on airway symptoms and peak expiratory flow in allergic asthma during pollen season – a panel study in Northern and Southern Sweden. Environ Health. 2022;21:63.\n- 23. Ekström M, Johannessen A, Abramson MJ, et al. Breathlessness across generations: results from the RHINESSA generation study. Thorax. 2022;77(2): 172-177.\n- 24. 
Ziegler B, Fernandes AK, Sanches PR, Konzen GL, Dalcin Pde T. Variability of dyspnea perception in healthy subjects\n\nassessed through inspiratory resistive loading. J Bras Pneumol. 2015;41(2): 143-150.\n\n- 25. Ekström M, Bornefalk H, Sköld M, et al. Validation of the Swedish Multidimensional Dyspnea Profile (MDP) in outpatients with cardiorespiratory disease. BMJ Open Respir Res. 2019;6: e000381.\n- 26. Yorke J, Russell AM, Swigris J, et al. Assessment of dyspnea in asthma: validation of The Dyspnea-12. J Asthma. 2011;48(6):602-608.\n- 27. Boulet LP, Boulay ME, Cote A, et al. Airway inflammation and hyperresponsiveness in subjects with respiratory symptoms and normal spirometry. Eur Respir J. 2023;61(3): 2201194.\n- 28. Gerstein E, Bierbrier J, Whitmore GA, et al. Impact of undiagnosed chronic obstructive pulmonary disease and asthma on symptoms, quality of life, healthcare use, and work productivity. Am J Respir Crit Care Med. 2023;208(12):1271-1282.\n- 29. Aaron SD, Vandemheen K, Whitmore GA, et al. Early diagnosis and treatment of COPD and asthma: a randomized, controlled trial. N Engl J Med. 2024;390(22):2061-2073.\n- 30. Han MK, Ye W, Wang D, et al. Bronchodilators in tobacco-exposed persons with symptoms and preserved lung function. N Engl J Med. 2022;387(13): 1173-1184.\n- 31. Marott JL, Ingebrigtsen TS, Çolak Y, et al. Impact of the metabolic syndrome on cardiopulmonary morbidity and mortality in individuals with lung function impairment: a prospective cohort study of the Danish general population. Lancet Reg Health Eur. 2023;35:100759.\n- 32. Stefan MS, Priya A, Martin B, et al. How well do patients and providers agree on the severity of dyspnea? J Hosp Med. 2016;11(10):701-707.\n- 33. Cherian M, Magner KMA, Whitmore GA, et al. Patient and physician factors associated with symptomatic undiagnosed asthma or COPD. Eur Respir J. 
2023;61(2): 2201721.", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "prerecorded message then inquired whether any household member was ≥ 18 years of age and had experienced respiratory symptoms (eg, shortness of breath, wheezing, increased mucus or sputum, prolonged cough) within the past 6 months. Households with affirmative responses were subsequently contacted by the local study coordinator for a follow-up call. The household member reporting respiratory symptoms was verbally consented and screened for eligibility to participate in the study over the telephone.8,9\n\nExclusion criteria included the following: (1) a history of diagnosis of lung or airway disease, (2) use of respiratory inhalers aside from as-needed salbutamol, (3) contraindications for spirometry (eg, occurrences of myocardial infarction, stroke, aortic or cerebral aneurysm, eye surgery, detached retina within the last 3 months), (4) inability or refusal to provide informed consent, (5) being in the third trimester of pregnancy, and (6) being < 18 years of age.\n\nEach participant completed the Asthma Screening Questionnaire (ASQ)10 via telephone. Individuals aged ≥ 60 years, and those aged < 60 years who scored < 6 points on the ASQ, also completed the COPD-Diagnostic Questionnaire.11,12 Participants scoring ≥ 6 points on the ASQ or ≥ 20 points on the COPD-Diagnostic Questionnaire were invited to the study site for pre- and postbronchodilator (BD) spirometry.\n\nA control group without respiratory symptoms was selected randomly using identical random digit dialing methods. Control patients reported no respiratory symptoms in the preceding 6 months and obtained a score of 0 on the ASQ. Participants were recruited as control patients if they could be matched with an individual from the undiagnosed group based on age (± 5 years) and sex. 
This matching process aimed to have similar demographic profiles between the control group and the newly found cases. This matching was implemented solely to ensure demographic comparability across the study groups and not for pairing patients for statistical analysis purposes.\n\nAll participants filled out the COPD Assessment Test (CAT) questionnaire. Elevated CAT scores indicate a greater burden of respiratory symptoms impacting daily activities and health status.13 The St. George's Respiratory Questionnaire (SGRQ)14-16 was used to assess respiratory disease-related quality of life. Higher SGRQ scores indicate poorer health status. Both the CAT and SGRQ questionnaires were completed prior to spirometry to avoid influencing patients' perceptions of their dyspnea.\n\n### Classification of Undiagnosed Cases\n\nCertified study personnel administered spirometry tests before and after BD use. Participants showing an increase of at least 12% and 200 mL in their FEV1 after receiving 400 μg of salbutamol were classified as having spirometry indicative of asthma.17 Those whose post-BD ratio of FEV1/FVC fell below the lower 95% confidence limit (ie, FEV1/FVC < lower limit of normal) were classified as having spirometry indicative of COPD.18 Participants meeting the criteria for both conditions were labeled as having COPD. Those with a post-BD FEV1 < 80% of the predicted normal and a post-BD FEV1/FVC ratio > 0.70 were classified as having spirometry indicative of preserved ratio impaired spirometry (PRISm). 
PRISm was defined based on post-BD spirometry values for a more specific classification.19 Participants not meeting criteria for asthma, COPD, or PRISm were labeled as having normal spirometry.\n\n### Assessment of the Impact of Participants' Dyspnea\n\nAlthough neither the CAT nor the SGRQ are dyspnea-specific tools, both are recommended by the Global Initiative for Chronic Obstructive Lung Disease to evaluate symptoms, including dyspnea,20 and both yield a richer assessment of dyspnea than the modified Medical Research Council breathlessness scale.20 Fifteen questions were taken from the CAT and SGRQ questionnaires that referred to individuals' experiences with dyspnea, and a composite measure of dyspnea impact using a weighted sum of the responses to the 15 questions was constructed. Questions were coded so that larger values indicate more impactful dyspnea. Weights used for question responses in calculating the dyspnea impact assessment measure were those of the first component of a principal component analysis (PCA) based on the covariance matrix of question responses. Questions with multiple responses and ordinal structure are individually more informative and thus were accorded higher weight than individual true-false questions. 
No additional PCA component was anticipated a priori to be material for our investigation, and an eigenvalue analysis of the PCA was conducted to verify this assumption.\n\nThe composite dyspnea impact measure was scaled so its minimum value was 0 if the response to each of the 15 questions was 0, and the maximum value was scaled to 100 if the individual responses for all 15 questions represented the most severe dyspnea response.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "```\n int idNode {};\n WCHAR rgwchNodeText[cwchMaxNodeText];\n int iDestPage {};\n float dytfvDestPage {};\n float dxtfvDestOffset {};\n float dytfvDestOffset {};\n} MSODOCEXOUTLINENODE;\n```\nThe members of the **MSODOCEXOUTLINENODE** are described as follows:\n\n- **idNode** The ID for the node. A value of **-1** indicates that this node cannot have child nodes in the outline. Otherwise, this member has a value that is unique across the document.\n- **rgwchNodeText** A Unicode string that represents the title text for each node. This text is not required to be unique across the outline.\n- **iDestPage** The page number of the page that contains the destination location within the document.\n- **dytfvDestPage** The height of the destination page in points. The offset specified by the **dytfvDestOffset** member is relative to the upper-left corner of the page. However, some fixed-format types use a coordinate system that is relative to the bottom-left corner of the page. 
For these types of documents, the page height is required to convert the offset.\n- **dxtfvDestOffset** The horizontal offset of the destination location on the destination page.\n- **dytfvDestOffset** The vertical offset of the destination location on the destination page.\n\n### **HrAddDocumentMetadataString**\n\nPublisher calls the **HrAddDocumentMetadataString** method to specify document metadata in the form of a Unicode string.\n\n```\nC++\nHRESULT HrAddDocumentMetadataString(\n MSODOCEXMETADATA metadataType, \n const WCHAR* pwchValue\n);\n```", - "page_start": 33, - "page_end": 33, - "source_file": "office-pdf.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed6_cc4.pdf", - "query": "What are the criterion to be control patient in the dyspnea study ?", - "target_page": 3, - "target_passage": "Control patients reported no respiratory symptoms in the preceding 6 months and obtained a score of 0 on the ASQ.", - "chunk_present": { - "presence": true, - "index": 6 - } - }, - "top_chunk": [ - { - "text": "## Take-home Points\n\nStudy Question: How profoundly are adults with undiagnosed respiratory symptoms affected by dyspnea?\n\nResults: In community-based adults with undiagnosed respiratory symptoms, those identified with preserved ratio impaired spirometry experienced the greatest impact of dyspnea, followed by those with undiagnosed asthma or COPD. 
Greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nInterpretation: Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity.\n\nDyspnea refers to a subjective sensation of breathing discomfort.1 In a study involving a community-based population aged > 70 years, the prevalence of dyspnea was found to be 32%.2 Dyspnea can lead to limitations in daily activities, reduced exercise tolerance, and heightened mortality risks.3\n\nDyspnea not only affects individuals with diagnosed respiratory conditions but also poses a significant burden on those with undiagnosed conditions. In a systematic review by Müller et al,4 the combined prevalence of dyspnea in the adult general population across 11 studies was estimated to be 10%. Dyspnea can arise from a broad spectrum of underlying factors, including both respiratory and nonrespiratory conditions. Studies have revealed that dyspnea is not solely attributable to respiratory conditions but is also heavily influenced by cardiovascular deconditioning and by nonrespiratory factors, including psychosocial, social, and environmental determinants.5,6\n\nDyspnea is a prevalent symptom with consequences that extend beyond its physiologic implications. A study in European patients with COPD explored the burden of dyspnea and identified potential correlates. The study revealed that higher dyspnea impact correlated with lower health-related quality of life, increased work impairment, and a higher frequency of emergency department visits.7\n\nThe three objectives of our study were as follows: (1) to evaluate the impact of dyspnea in adults from the general population who had no prior diagnosis of respiratory disease but who reported having significant respiratory symptoms in the past 6 months; (2) to identify associated risk factors for dyspnea and estimate their influence on the symptom; and (3) to explore the relationship between dyspnea and health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\n#### Study Design and Methods\n\nRecruitment of Undiagnosed Cases and Healthy Control Patients\n\nBetween June 2017 and January 2023, adults aged ≥ 18 years were recruited through a two-step process into the Undiagnosed COPD and Asthma Population (UCAP) study, a multicenter case finding study. Approval for the study was obtained from the research ethics boards of the 17 participating study sites across Canada. Informed, written consent was provided by all study participants.\n\nBoth landlines and cellphones within a 90-minute radius of any of the 17 study sites were dialed randomly. A\n\nDOI: https://doi.org/10.1016/j.chest.2024.07.183\n\nABBREVIATIONS: ASQ = Asthma Screening Questionnaire; BD = bronchodilator; CAT = COPD Assessment Test; PCA = principal component analysis; PRISm = preserved ratio impaired spirometry; SGRQ = St. George's Respiratory Questionnaire\n\nAFFILIATIONS: From The Ottawa Hospital Research Institute (J. B., E. G., K. L. V., G. G. A., S. M., and S. D. A.), University of Ottawa, Ottawa, ON; the Desautels Faculty of Management (G. A. W.), McGill University, Montreal, QC; the Department of Medicine (C. B.), The University of British Columbia, Vancouver, BC; the Centre de recherche (L.-P. B. and A. C.), Institut de cardiologie et de pneumologie de Québec, Université Laval, Quebec, QC; the Cumming School of Medicine (S. K. F.), University of Calgary, Calgary, AB; the Department of Medicine (E. P.), University of Saskatchewan, Regina, SK; the Firestone Institute for Respiratory Health (R. A. 
M.), McMaster University, Hamilton, ON; the Department of Medicine (C. L.), Université de Montreal, Montreal, QC; the Department of Medicine and the Li Ka Shing Knowledge Institute (S. G.), St. Michael's Hospital University of Toronto, Toronto, ON; the Department of Medicine\n\n(P. H.), Dalhousie University, Halifax, NS; the Department of Medicine (I. M. and M. B.), University of Alberta, Edmonton, AB; the Department of Medicine (M. D. L.), Queen's University, Kingston; the Department of Medicine (C. J. L.), University of Western Ontario, London, ON; the Department of Medicine (T. A.), Memorial University, St. John's, NF; the Department of Medicine (N. E.), McGill University, Montreal, QC; the Department of Medicine (M. A.), University of Manitoba, Winnipeg, MN, Canada.\n\nDrs Bierbrier and Gerstein contributed equally to this manuscript.\n\nPart of this work has been presented at the American Thoracic Society Conference, May 17-22, 2024, San Diego, CA.\n\nCORRESPONDENCE TO: Shawn D. Aaron, MD; email: saaron@ohri.ca Copyright 2024 The Author(s). Published by Elsevier Inc under license from the American College of Chest Physicians. This is an open access article under the CC BY license (http://creativecommons.org/ licenses/by/4.0/).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "bronchial challenge testing into a case finding strategy identified asthma in 26% of symptomatic individuals who had normal spirometry and no response to BD.27\n\nIndividuals with undiagnosed respiratory symptoms, determined to have asthma or COPD through spirometry, experience poor health status.28 Therefore, the implementation of known treatment approaches for asthma or COPD is important to improve their conditions.29 In contrast, those with normal spirometry or PRISm face unclear treatment approaches. 
Long-acting BD therapy in symptomatic individuals with tobacco exposure with normal spirometry is not effective.30 Weight management programs may be useful for individuals who are obese with PRISm-related dyspnea; however, this awaits definitive clinical trials.31\n\nDyspnea was severe and prevalent within our study group; however, it remained undiagnosed. A study conducted by Stefan et al32 revealed that physicians underestimated their patients' dyspnea 37.9% of the time, whereas nurses underestimated it 3.5% of the time. Moreover, many patients limit their physical activities, which leads them to downplay the extent of their dyspnea.19 Patient underreporting of symptoms, coupled with inadequate physician-led investigations of symptoms, may explain why dyspnea often goes undiagnosed in the population.33\n\nIn conclusion, our study measured dyspnea impact in individuals with no preexisting diagnosis of lung disease who reported respiratory symptoms as part of a purposeful case finding strategy. Individuals with PRISm exhibited the greatest impact of dyspnea, even higher than those newly diagnosed with asthma or COPD. After adjusting for patient factors, comorbidities, pulmonary diseases, and severity of lung physiologic impairment, most of the variability in dyspnea remained unexplained. We also showed that dyspnea was associated with increased health care utilization, impaired quality of life, and work productivity.\n\n## Funding/Support\n\nThis study is supported by the Canadian Institutes of Health Research [FDN Grant 154322].\n\n## Financial/Nonfinancial Disclosures\n\nNone declared.\n\n## Acknowledgments\n\nAuthor contributions: S. D. A. and G. A. W. contributed to conception and design. J. B., E. G., G. A. W., K. L. V., and S. D. A. contributed to analysis and interpretation. J. B., E. G., G. A. W., K. L. V., S. D. A., C. B., C. L., L.-P. B., A. C., E. P., S. K. F., S. G., R. A. M., I. M., M. B., P. H., M. D. L., M. A., C. J. L., T. A., N. E., G. G. A., and S. M. 
contributed to drafting the manuscript for important intellectual content. All authors had access to and participated in the interpretation of the data and provided input into the preparation and submission of the manuscript. The authors vouch for the accuracy and completeness of the data.\n\nRole of sponsors: The sponsor had no role in the design of the study, the collection and analysis of the data, or the preparation of the manuscript.\n\nOther contributions: We thank the following individuals from the Canadian study sites: Ottawa Hospital Research Institute, Ottawa, Ontario: Taylor Poulin; Susan Deveau, RRT; Victoria Thompson; Meredith McCleery; Angelina Tohme; Vicky Panteleakos, RRT; Geneviève Longtin, RRT; Joanne Cassidy, RRT; Amanda Bergeron, MSc; Jennifer Biggs, RN; Jessica Bergeron; and Elisabet White; Vancouver General Hospital, Vancouver, British Columbia: Shelley Abercromby, BSc; Jana Caine; David Savage; Natasha Verzosa; Ravneet Mahal; and Mary Justine Angeles; Queen Elizabeth II Health Sciences Centre, Halifax, NS: Scott Fulton, RRT; Hôpital du Sacré Coeur de Montréal, Montréal, QC: Simone Chaboillez, MT; and Meliza Benabdallah; St. Joseph's Hamilton, Hamilton, ON: Liz Johnson; St. Boniface Hospital, Winnipeg, MB: Cheryl Noble, RN; Institut Universitaire de Cardiologie et de Pneumologie de Québec-Université Laval, Québec, QC: Johane Lepage, BSc; Joanne Milot, RN; and Christiane Balizet, RN; University of Calgary, Calgary, AB: Lisette Machado, MD; and Curtis Dumonceaux, BSc; University of Alberta, Edmonton, AB: Miranda Bowen, RRT; Fay Hartt; Angie Hillaby, RRT; and Amy Haartsma, RRT; St. 
Michael's Hospital, Toronto, ON: Stephanie Segovia, PhD; and Carolyn Spiegel-Feld; Queen's University Kingston General Hospital, Kingston, ON: Ann Taite, BSc; Alison Morra, BScN; Emma Bullock, HBSc; and Taylar Wall, RRT; University of Saskatchewan Royal University Hospital, Saskatoon, SK: Nancy Zacher; Janet Baran, RN; and Yessica Lopez, BA; London Health Sciences Centre - Victoria Hospital, London, ON: Katie Maguire; Heba Almadhoun; and Robert Campbell-Pereira, BSc; St. Clare's Mercy Hospital, St John's, NL: Sarah Anthony, BNRN; and Tanya Nolan, BNRN; McGill University Health Centre, Montreal, QC: Francine Noel; Royal Victoria Regional Health Centre, Barrie, ON: Masoud Mahdavian; and Ashley Brown, RRT; and Michael Garron Hospital, Toronto, ON: Ian Fraser; Han Byul (Liz) Lee; and Yuna Lee, BA. We would also thank Dong Vo We (data manager, Ottawa Hospital Research Institute, Ottawa, ON). We also thank the thousands of study participants who gave their time and came in for the study visits. We also thank ASDE Survey Sampler, Inc (Gatineau, QC, Canada) for organizing the random digit dialing.\n\n# References\n\n- 1. Parshall MB, Schwarthzstein RM, Adams L, et al. An Official American Thoracic Society Statement: update on the mechanisms, assessment, and management of dyspnea. Am J Respir Crit Care Med. 2012;185:435-452.\n- 2. Ho SF, O'Mahony MS, Steward JA, et al. Dyspnoea and quality of life in older people at home. Age Ageing. 2001;30: 155-159.\n- 3. Laviolette L, Laveneziana P. Dyspnoea: a multidimensional and multidisciplinary approach. Eur Respir J. 2014;43: 1750-1762.\n- 4. Müller A, Mraz T, Wouters EFM, et al. Prevalence of dyspnea in general adult populations: a systematic review and meta-analysis. Respir Med. 
2023;218: 107379.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "# TABLE 8 ] Unadjusted and Adjusted Dyspnea Associations With Health Care Use\n\n| | Unadjusted | | Adjusted | |\n| --- | --- | --- | --- | --- |\n| Measure | Dyspnea OR (95% CI) | P Value | Dyspnea OR (95% CI) | P Value |\n| In the past 12 mo, did you visit your general practitioner or a nurse practitioner or another physician at a walk-in clinic for any breathing problems? | 1.011 (1.007-1.014) | < .001 | 1.011 (1.007-1.014) | < .001 |\n| In the past 12 mo, did you visit an emergency department for any breathing problems? | 1.015 (1.009-1.021) | < .001 | 1.015 (1.009-1.022) | < .001 |\n| In the past 12 mo, were you hospitalized for any breathing problems or respiratory illness? | 1.021 (1.006-1.037) | .006 | 1.023 (1.007-1.039) | .005 |\n\nData are presented as OR (95% CI) with P values. Adjusted values are adjusted for age, sex, and BMI.\n\noutpatients with cardiorespiratory disease25 and the Dyspnea-12 in patients with asthma26 and found that the affective aspect of dyspnea can significantly influence the impact of dyspnea on health status, irrespective of the intensity of breathlessness.\n\nIn those with PRISm, there was a strong, positive association between higher values for the FEV1/FVC ratio and dyspnea. For the PRISm group, a higher FEV1/FVC ratio may reflect diminished lung compliance due to interstitial lung disease and/or respiratory system restriction due to obesity, which could contribute to worse dyspnea. Conversely, the association of dyspnea with the FEV1/FVC ratio was in the opposite direction for those with asthma or COPD, and a lower FEV1/FVC ratio correlated with worse dyspnea, as expected.\n\nOur study complements the literature by focusing on adults with undiagnosed respiratory symptoms who were randomly selected and recruited through active case finding in the community. 
This increases the generalizability of our results to a broader population. Our dyspnea questions were derived from widely used and validated respiratory health questionnaires, and our dyspnea assessment measure is a weighted average of responses to these validated questions. Consequently, the measure has an immediate interpretation in terms of the lived day-to-day experience of individuals.\n\nOur study has limitations. We did not undertake reliability/reproducibility testing of our questionnaire. The dyspnea impact assessment score was statistically associated with increased health care utilization, lower quality of life, and reduced work productivity; therefore, by virtue of this analysis, our questionnaire has construct validity. However, further attempts at external validation of the questionnaire using an independent data set would be important. Health care utilization during the preceding 12 months was assessed on entry into the study, and there is potential for impaired recall of events. Our study may have missed asthma in some participants because bronchial challenge testing was not conducted on those who tested negative for airflow obstruction or BD responsiveness. A previous study showed that an additional diagnostic step incorporating\n\n| TABLE 9 ] Unadjusted and Adjusted Dyspnea Associations With Work Productivity (WPAI) |\n| --- |\n\n| | Unadjusted | | Adjusted | |\n| --- | --- | --- | --- | --- |\n| Measure | Dyspnea OR (95% CI) | P Value | Dyspnea OR (95% CI) | P Value |\n| Are you currently employed | 0.995 (0.992-0.998) | .002 | 0.993 (0.990-0.997) | < .001 |\n| (working for pay)? 
| | | | |\n| | Dyspnea Coefficient | | Dyspnea Coefficient | |\n| Measurea | (95% CI) | Value P | (95% CI) | Value P |\n| Absenteeism | 0.061 (0.040-0.083) | <.001 | 0.066 (0.044-0.089) | < .001 |\n| Presenteeism | 0.334 (0.293-0.375) | <.001 | 0.349 (0.306-0.392) | < .001 |\n| Work productivity loss | 0.368 (0.323-0.413) | <.001 | 0.383 (0.336-0.430) | < .001 |\n| Activity impairment | 0.503 (0.463-0.544) | <.001 | 0.501 (0.458-0.544) | < .001 |\n\nORs and regression coefficients are presented with 95% CIs and P values. Adjusted coefficients are adjusted for age, sex, and BMI. WPAI ¼ Work Productivity and Activity Impairment questionnaire.\n\na Measures calculated from WPAI questions.21", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "#### Risk Factors Associated With Dyspnea\n\nPatient-related risk factors were considered first, and results of spirometry considered afterward. The spirometry risk factors chosen for the second stage analysis included the spirometry-based diagnosis of the patient (asthma, COPD, PRISm, or normal) and lung function results indicative of the severity of physiologic impairment. Severity was gauged by assessing three principal lung function measures: (1) post-BD FEV1 % predicted, (2) post-BD FEV1/FVC ratio, and (3) percentage reversal of FEV1 with BD.\n\n#### Dyspnea Impact and Health Care Use, Quality of Life, and Work Productivity\n\nThe impact of dyspnea and its associations with health care use, quality of life, and work productivity were examined. Health care utilization was assessed through selfreported data. Quality of life was assessed using the 36- Item Short Form Health Survey questionnaire, where higher scores indicate better health status. 
Work productivity was assessed using the Work Productivity and Activity Impairment questionnaire, where higher scores\n\n#### Results\n\nFigure 1 illustrates the results of the case finding approach, including the enrollment of the control group. Among 5,631 potentially eligible participants, 1,359\n\nindicate greater impairment in work productivity and daily activities.\n\n#### Statistical Analysis\n\nBox plots were used to compare distribution patterns of dyspnea impact assessments among the disease groups. Pairwise comparison tests were conducted to evaluate mean dyspnea differences between groups. Multiple linear regression analysis was used to measure contributions to variability of dyspnea by selected patient-specific risk factors, spirometry disease classification, and key lung function measures. The selected sets of risk factors were evaluated using successive regression analyses. Analysis of variance sums of squares from the successive regression analyses provided the cumulative percentage contributions to variability of dyspnea. Simple, multiple, and logistic regression analyses were used to study associations between dyspnea and health care utilization, quality of life, and work productivity outcomes. All statistical analyses were done using STATA 16 statistical software (StataCorp).\n\nparticipants (24%) did not meet the threshold of $ 6 points on the ASQ or $ 20 points on the COPD-Diagnostic Questionnaire and were thus excluded, leaving 4,272 individuals deemed eligible for spirometry.\n\nFigure 1 – Study flow diagram demonstrating the case finding and control group recruitment and allocation. 
ASQ ¼ Asthma Screening Questionnaire; COPD-DQ¼ COPD Diagnostic Questionnaire; CF ¼ cystic fibrosis; MI ¼ myocardial infarction; PRISM ¼ preserved ratio impaired spirometry.", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "| Disease Group | Reversibility of FEV1, % | | Post-BD FEV1/FVC Ratio | | Post-BD FEV1 % predicted | Overall Value P |\n| --- | --- | --- | --- | --- | --- | --- |\n| Control | 0.163 (P ¼ .47) | | P 0.274 ( | [ .05) | 0.090 (P ¼ .17) | .096 |\n| Normal spirometry | 0.186 (P ¼ .16) | | 0.240 ( P | [ .005) | P < .001) 0.131 ( | < .001 |\n| Asthma | 0.545 ( P | [ .01) | 0.107 (P ¼ .58) | | 0.158 (P ¼ .08) | .009 |\n| COPD | P 0.392 ( | [ .002) | P 0.307 ( | [ .05) | 0.075 (P ¼ .37) | < .001 |\n| PRISm | 0.290 (P ¼ .39) | | 0.854 ( P | [ .002) | P [ .004) 0.650 ( | < .001 |\n\nTABLE 6 ] Dyspnea Regressed on Lung Function Variables Representing Severity of Impairment\n\nDyspnea regressed on lung function variables representing severity of impairment, after removing contributions of patient-specific factors and spirometry disease group Tables 4 and 5 (1.7% of variability explained). Boldface indicates statitistical significance. BD ¼ bronchodilator; PRISm ¼ preserved ratio impaired spirometry.\n\nApproximately 65% of the variability in dyspnea remained unexplained by the factors examined in our study. Most individuals in our study showed normal spirometry results but still carried a substantial burden of dyspnea, an inconsistency that needs explanation. Several factors not included in our analysis may have contributed to the unexplained variation. 
Environmental factors (eg, air pollution, allergen exposure, seasonal variations in symptoms) are potential contributors to this unexplained variability.22 Genetic predispositions could also play a significant role, as suggested by a study that revealed that parents with dyspnea were 1.8 times more likely to have offspring with dyspnea.23 Additionally, fitness could be a contributing factor, especially in individuals with undiagnosed PRISm, asthma, or COPD who may restrict their activities to avoid dyspnea, and hence become deconditioned.6\n\nThere were significant but modest differences in mean dyspnea levels across the 17 study sites (data not shown), which are not explained by the risk factors we accounted for in our study. This finding is not surprising because some of the potential contributing factors previously mentioned and other site-specific factors\n\n(eg, climate, air quality/industrialization, socioeconomic status) of the catchment population tend to vary across study sites.\n\nDyspnea is a complex, subjective symptom that is modified by nonrespiratory factors including psychosocial, social, and environmental influences.5 Interindividual variability in the perception of dyspnea, influenced by these nonrespiratory factors, may play an important role. A study conducted by Ziegler et al24 assessed the perception of dyspnea in 42 healthy individuals using a standardized inspiratory resistive loading stimulus. The study used the modified Borg scale to measure dyspnea perception levels. Among the participants subjected to the same inspiratory resistive load, 31%, 45%, and 24% of participants classified their level of dyspnea as low, intermediate, and high, respectively. The study revealed that differences between individuals contribute considerable variability to the perception of dyspnea, even among healthy participants.\n\nThe affective dimension of dyspnea can be captured using additional questionnaires (eg, Multidimensional Dyspnea Profile, Dyspnea-12). 
Studies have explored the use of the Multidimensional Dyspnea Profile in\n\n| TABLE 7 ] Unadjusted and Adjusted Dyspnea Associations With Quality of Life (SF-36) |\n| --- |\n\n| | Unadjusted | | Adjusted | |\n| --- | --- | --- | --- | --- |\n| Measure | Dyspnea Coefficient (95% CI) | Value P | Dyspnea Coefficient (95% CI) | Value P |\n| Physical functioning | 0.693 (0.718 to 0.668) | < .001 | 0.655 (0.680 to 0.630) | < .001 |\n| Physical health limitations | 0.634 (0.666 to 0.603) | < .001 | 0.628 (0.661 to 0.595) | < .001 |\n| Emotional problems | 0.403 (0.438 to 0.369) | < .001 | 0.407 (0.443 to 0.370) | < .001 |\n| Energy/fatigue | 0.454 (0.479 to 0.428) | < .001 | 0.452 (0.479 to 0.425) | < .001 |\n| Emotional well-being | 0.230 (0.256 to 0.204) | < .001 | 0.239 (0.266 to 0.213) | < .001 |\n| Social functioning | 0.433 (0.466 to 0.399) | < .001 | 0.434 (0.469 to 0.399) | < .001 |\n| Pain | 0.410 (0.444 to 0.377) | < .001 | 0.387 (0.423 to 0.352) | < .001 |\n| General health | 0.390 (0.416 to 0.364) | < .001 | 0.382 (0.409 to 0.355) | < .001 |\n| Total score | 0.485 (0.504 to 0.467) | < .001 | 0.473 (0.493 to 0.454) | < .001 |\n\nAdjusted coefficients are adjusted for age, sex, and BMI. Regression coefficients are presented with 95% CIs and Pvalues.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "# Impact of Dyspnea on Adults With Respiratory Symptoms Without a Defined Diagnosis\n\nJared Bierbrier, BSc; Emily Gerstein; George A. Whitmore, PhD; Katherine L. Vandemheen, MScN; Celine Bergeron, MD; Louis-Philippe Boulet, MD; Andreanne Cote, MD; Stephen K. Field, MD; Erika Penz, MD; R. Andrew McIvor, MD; Catherine Lemière, MD; Samir Gupta, MD; Paul Hernandez, MD; Irvin Mayers, MD; Mohit Bhutani, MD; M. Diane Lougheed, MD; Christopher J. Licskai, MD; Tanweer Azher, MD; Nicole Ezer, MD; Martha Ainslie, MD; Gonzalo G. Alvarez, MD; Sunita Mulpuru, MD; and Shawn D. 
Aaron, MD\n\n> BACKGROUND: We investigated dyspnea; its associated risk factors; and its impact on health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\n> RESEARCH QUESTION: What is the impact of dyspnea in adults with undiagnosed respiratory symptoms?\n\n> STUDY DESIGN AND METHODS: This population-based study included 2,857 adults who were experiencing respiratory symptoms. These individuals had not been previously diagnosed with any lung conditions and were recruited from 17 Canadian centers using random digit dialing. Each participant underwent spirometry testing both before and after using a bronchodilator to determine if they met the diagnostic criteria for COPD, asthma, or preserved ratio impaired spirometry (PRISm), or if their spirometry results were normal. An agematched control group (n ¼ 231) was similarly recruited using random digit dialing. A dyspnea impact assessment score from 0 to 100 was produced using questions from the COPD Assessment Test and St. George's Respiratory questionnaire.\n\n> RESULTS: Individuals with PRISm (n ¼ 172) reported more impactful dyspnea (mean score, 63.0; 95% CI, 59.5-66.4) than those with undiagnosed asthma (n ¼ 265; mean score, 56.6; 95% CI, 53.9-59.3) or undiagnosed COPD (n ¼ 330; mean score, 57.5; 95% CI, 55.1-59.9). All groups reported significantly more impactful dyspnea than the control group (mean score, 13.8; 95% CI, 11.8-15.7). Patient-specific risk factors including age, sex, BMI, smoking, and comorbidities explained 20.6% of the variation in dyspnea. An additional 12.4% of the variation was explained by disease classification and another 1.7% by the severity of lung function impairment assessed with spirometry. 
After adjusting for age, sex, and BMI, greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\n> INTERPRETATION: Our findings showed that in community-based adults with undiagnosed respiratory symptoms, those identified with PRISm experienced the greatest impact of dyspnea. Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity. CHEST 2024; 166(6):1296-1308\n\nKEY WORDS: asthma; case finding; COPD; dyspnea\n\nFOR EDITORIAL COMMENT, SEE PAGE 1259", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "prerecorded message then inquired whether any household member was $ 18 years of age and had experienced respiratory symptoms (eg, shortness of breath, wheezing, increased mucus or sputum, prolonged cough) within the past 6 months. Households with affirmative responses were subsequently contacted by the local study coordinator for a follow-up call. The household member reporting respiratory symptoms was verbally consented and screened for eligibility to participate in the study over the telephone.8,9\n\nExclusion criteria included the following: (1) a history of diagnosis of lung or airway disease, (2) use of respiratory inhalers aside from as-needed salbutamol, (3) contraindications for spirometry (eg, occurrences of myocardial infarction, stroke, aortic or cerebral aneurysm, eye surgery, detached retina within the last 3 months), (4) inability or refusal to provide informed consent, (5) being in the third trimester of pregnancy, and (6) being < 18 years of age.\n\nEach participant completed the Asthma Screening Questionnaire (ASQ)10 via telephone. 
Individuals aged $ 60 years, and those aged < 60 years who scored < 6 points on the ASQ, also completed the COPD-Diagnostic Questionnaire.11,12 Participants scoring $ 6 points on the ASQ or $ 20 points on the COPD-Diagnostic Questionnaire were invited to the study site for pre- and postbronchodilator (BD) spirometry.\n\nA control group without respiratory symptoms was selected randomly using identical random digit dialing methods. Control patients reported no respiratory symptoms in the preceding 6 months and obtained a score of 0 on the ASQ. Participants were recruited as control patients if they could be matched with an individual from the undiagnosed group based on age (- 5 years) and sex. This matching process aimed to have similar demographic profiles between the control group and the newly found cases. This matching was implemented solely to ensure demographic comparability across the study groups and not for pairing patients for statistical analysis purposes.\n\nAll participants filled out the COPD Assessment Test (CAT) questionnaire. Elevated CAT scores indicate a greater burden of respiratory symptoms impacting daily activities and health status.13 The St. George's Respiratory Questionnaire (SGRQ)14-16 was used to assess respiratory disease-related quality of life. Higher SGRQ scores indicate poorer health status. Both the CAT and SGRQ questionnaires were completed prior to spirometry to avoid influencing patients' perceptions of their dyspnea.\n\n### Classification of Undiagnosed Cases\n\nCertified study personnel administered spirometry tests before and after BD use. 
Participants showing an increase of at least 12% and 200 mL in their FEV1 after receiving 400 mg of salbutamol were classified as having spirometry indicative of asthma.17 Those whose post-BD ratio of FEV1/FVC fell below the lower 95% confidence limit (ie, FEV1/FVC < lower limit of normal) were classified as having spirometry indicative of COPD.18 Participants meeting the criteria for both conditions were labeled as having COPD. Those with a post-BD FEV1 < 80% of the predicted normal and a post-BD FEV1/FVC ratio > 0.70 were classified as having spirometry indicative of preserved ratio impaired spirometry (PRISm). PRISm was defined based on post-BD spirometry values for a more specific classification.19 Participants not meeting criteria for asthma, COPD, or PRISm were labeled as having normal spirometry.\n\nAssessment of the Impact of Participants' Dyspnea Although neither the CAT nor the SGRQ are dyspneaspecific tools, both are recommended by the Global Initiative for Chronic Obstructive Lung Disease to evaluate symptoms, including dyspnea,20 and both yield a richer assessment of dyspnea than the modified Medical Research Council breathlessness scale.20 Fifteen questions were taken from the CAT and SGRQ questionnaires that referred to individuals' experiences with dyspnea, and a composite measure of dyspnea impact using a weighted sum of the responses to the 15 questions was constructed. Questions were coded so that larger values indicate more impactful dyspnea. Weights used for question responses in calculating the dyspnea impact assessment measure were those of the first component of a principal component analysis (PCA) based on the covariance matrix of question responses. Questions with multiple responses and ordinal structure are individually more informative and thus were accorded higher weight than individual true-false questions. 
No additional PCA component was anticipated a priori to be material for our investigation, and an eigenvalue analysis of the PCA was conducted to verify this assumption.\n\nThe composite dyspnea impact measure was scaled so its minimum value was 0 if the response to each of the 15 questions was 0, and the maximum value was scaled to 100 if the individual responses for all 15 questions represented the most severe dyspnea response.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "| Risk Factor | Regression Coefficient | P Value |\n| --- | --- | --- |\n| Age | 0.0909 | .005 |\n| Female | 8.217 | < .001 |\n| BMI | 0.899 | < .001 |\n| Household income < CAD $30,000 | 1.420 | .40 |\n| Household income $ CAD $30,000 | 2.149 | .07 |\n| Smoking history, pack-y | 0.144 | < .001 |\n| Smoking exposure | 5.123 | < .001 |\n| Occupational exposure | 0.00975 | < .001 |\n| Congestive heart failure | 10.119 | .004 |\n| Coronary artery disease | 4.813 | .001 |\n| Depression/anxiety | 6.892 | < .001 |\n| Diabetes mellitus | 1.627 | .22 |\n| Hypertension | 3.433 | < .001 |\n| Anemia | 1.738 | .15 |\n| Cancer | 0.952 | .49 |\n| GERD | 4.663 | < .001 |\n| Liver disease | 1.081 | .61 |\n| Renal disease | 2.073 | .32 |\n| Stroke | 8.463 | < .001 |\n\nTABLE 4 ] Sequential Regression Analyses of Risk Factors Contributing to Variability in Dyspnea: Dyspnea Regressed on Patient-Specific Risk Factors (20.6% of Variability Explained)\n\nBoldface indicates statitistical significance. 
GERD¼ gastroesophageal reflux disease.\n\n1.011; P < .001 for general practitioner visits; OR, 1.015; P < .001 for emergency department visits; and OR, 1.023, P ¼ .005 for hospitalization for respiratory illness) (Table 8).\n\nAfter adjusting for age, sex, and BMI, dyspnea was associated with a reduced likelihood of current employment (OR, 0.993; P < .001), increased absenteeism (coefficient, 0.066; P < .001), increased presenteeism (coefficient, 0.349; P < .001), higher work\n\nTABLE 5 ] Dyspnea Regressed on Spirometry Disease Group\n\n| Disease Group | Regression Coefficient | Value P |\n| --- | --- | --- |\n| Control | 31.2 | < .001 |\n| Normal spirometrya | NA | NA |\n| Asthma | 4.6 | .001 |\n| COPD | 3.8 | .003 |\n| PRISm | 5.5 | .001 |\n| Constant | 51.9 | NA |\n\nDyspnea regressed on spirometry disease group, after removing contributions from subject-specific factors in Table 4 (12.4% of variability explained). Boldface indicates statitistical significance. NA ¼ not applicable; PRISm ¼ preserved ratio impaired spirometry. a Normal spirometry group is the reference category.\n\nproductivity loss (coefficient, 0.383; P < .001), and greater activity impairment (coefficient, 0.501; P < .001), as measured by the Work Productivity and Activity Impairment questionnaire21 (Table 9).\n\n### Discussion\n\nOur study explored dyspnea in community-based adults with undiagnosed respiratory symptoms identified via case finding. Surprisingly, we found that the dyspnea experienced by those with PRISm had a greater impact on their activities and health status than those with newly diagnosed COPD or asthma.\n\nThe prevalence of individuals who were obese and morbidly obese in the PRISm group partially explains the between-group difference in dyspnea. 
The excess dyspnea seen in the PRISm group when compared with the normal spirometry group is partly explained by patient-specific risk factors, including BMI, which shrink the mean dyspnea differential between the groups from 11.2 to 5.5 points (Tables 3-6). The remaining 5.5 point difference indicates that PRISm patients have excess dyspnea relative to symptomatic individuals with normal spirometry for additional reasons other than obesity.", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "- 5. Nishino T. Dyspnoea: underlying mechanisms and treatment. Br J Anaesth. 2011;106:463-474.\n- 6. Neder J, Berton D, Müller P, et al. Ventilatory inefficiency and exertional dyspnea in early chronic obstructive pulmonary disease. Ann Am Thorac Soc. 2017;14(suppl_1): S22-S29.\n- 7. Gruenberger JB, Vietri J, Keininger DL, Mahler DA. Greater dyspnea is associated with lower health- related quality of life among European patients with COPD. Int J Chron Obstruct Pulmon Dis. 2017;12: 937-944.\n- 8. Preteroti M, Whitmore GA, Vandemheen KL, et al. Population-based case-finding to identify subjects with undiagnosed asthma or COPD. Eur Respir J. 2020;55:2000024.\n- 9. Huynh C, Whitmore GA, Vandemheen KL, et al. Derivation and validation of the UCAP-Q case-finding questionnaire to detect undiagnosed asthma and COPD. Eur Respir J. 2022;60(3):2103243.\n- 10. Shin B, Cole SL, Park SJ, et al. A new symptom-based questionnaire for predicting the presence of asthma. J Investig Allergol Clin Immunol. 2010;20: 27-34.\n- 11. Price DB, Tinkelman DG, Nordyke RJ, et al. Scoring system and clinical application of COPD diagnostic questionnaires. Chest. 2006;129: 1531-1539.\n- 12. Price DB, Tinkelman DG, Halbert RJ, et al. Symptom-based questionnaire for identifying COPD in smokers. Respiration. 2006;73:285-295.\n- 13. Jones PW, Harding G, Berry P, et al. Development and first validation of the COPD Assessment Test. Eur Respir J. 2009;34:648-654.\n- 14. 
Jones PW. Quality of life measurement for patients with diseases of the airways. Thorax. 1991;46:676-682.\n- 15. Jones PW, Quirk FH, Baveystock CM. The St George's Respiratory Questionnaire. Respir Med. 1991;85:25-31.\n- 16. Jones PW. St George's Respiratory Questionnaire: MCID. J Chronic Obstr Pulm Dis. 2005;2:75-79.\n- 17. Global Initiative for Asthma. Global strategy for asthma management and prevention. Global Initiative for Asthma website. Accessed July 30, 2023. https:// ginasthma.org/wp-content/uploads/2023/ 07/GINA-2023-Full-report-23_07_06- WMS.pdf\n- 18. Global Initiative for Chronic Obstructive Lung Disease. Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease. Global Initiative for Chronic Obstructive Lung Disease website. Accessed July 30, 2023. https://goldcopd.org/wp-content/ uploads/2023/03/GOLD-2023-ver-1.3-17 Feb2023_WMV.pdf\n- 19. Magner KMA, Cherian M, Whitmore GA, et al. Assessment of preserved ratio impaired spirometry (PRISm) using pre and post bronchodilator spirometry in a randomly-sampled symptomatic cohort. Am J Resp Crit Care Med. 2023;208(10): 1129-1131.\n- 20. Hanania NA, O'Donnell DE. Activityrelated dyspnea in chronic obstructive pulmonary disease: physical and psychological consequences, unmet needs, and future directions. Int J Chron Obstruct Pulmon Dis. 2019;14: 1127-1138.\n- 21. Reilly Associates. WPAI scoring. Reilly Associates website. Accessed May 1, 2024. http://www.reillyassociates.net/wpai_ scoring.html\n- 22. Carlsen HK, Haga SL, Olsson D, et al. Birch pollen, air pollution and their interactive effects on airway symptoms and peak expiratory flow in allergic asthma during pollen season – a panel study in Northern and Southern Sweden. Environ Health. 2022;21:63.\n- 23. Ekström M, Johannessen A, Abramson MJ, et al. Breathlessness across generations: results from the RHINESSA generation study. Thorax. 2022;77(2): 172-177.\n- 24. 
Ziegler B, Fernandes AK, Sanches PR, Konzen GL, Dalcin Pde T. Variability of dyspnea perception in healthy subjects\n\nassessed through inspiratory resistive loading. J Bras Pneumol. 2015;41(2): 143-150.\n\n- 25. Ekström M, Bornefalk H, Sköld M, et al. Validation of the Swedish Multidimensional Dyspnea Profile (MDP) in outpatients with cardiorespiratory disease. BMJ Open Respir Res. 2019;6: e000381.\n- 26. Yorke J, Russell AM, Swigris J, et al. Assessment of dyspnea in asthma: validation of The Dyspnea-12. J Asthma. 2011;48(6):602-608.\n- 27. Boulet LP, Boulay ME, Cote A, et al. Airway inflammation and hyperresponsiveness in subjects with respiratory symptoms and normal spirometry. Eur Respir J. 2023;61(3): 2201194.\n- 28. Gerstein E, Bierbrier J, Whitmore GA, et al. Impact of undiagnosed chronic obstructive pulmonary disease and asthma on symptoms, quality of life, healthcare use, and work productivity. Am J Respir Crit Care Med. 2023;208(12):1271-1282.\n- 29. Aaron SD, Vandemheen K, Whitmore GA, et al. Early diagnosis and treatment of COPD and asthma: a randomized, controlled trial. N Engl J Med. 2024;390(22):2061-2073.\n- 30. Han MK, Ye W, Wang D, et al. Bronchodilators in tobacco-exposed persons with symptoms and preserved lung function. N Engl J Med. 2022;387(13): 1173-1184.\n- 31. Marott JL, Ingebrigtsen TS, Çolak Y, et al. Impact of the metabolic syndrome on cardiopulmonary morbidity and mortality in individuals with lung function impairment: a prospective cohort study of the Danish general population. Lancet Reg Health Eur. 2023;35:100759.\n- 32. Stefan MS, Priya A, Martin B, et al. How well do patients and providers agree on the severity of dyspnea? J Hosp Med. 2016;11(10):701-707.\n- 33. Cherian M, Magner KMA, Whitmore GA, et al. Patient and physician factors associated with symptomatic undiagnosed asthma or COPD. Eur Respir J. 
2023;61(2): 2201721.", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "community healthcare in the two municipalities. The project team included three individuals representing users from the Nordland MS Association, along with an MS nurse and a neurologist from the MS-outpatient clinic, and three physiotherapists/ researchers.\n\n## 2.4 Research team and reflexivity\n\nAll researchers on the team are clinical specialists in neurological physiotherapy. BN and ECA developed the CoreDISTparticipation intervention, and SSHD contributed to the development of the outdoor part.\n\nThe researchers' closeness to the intervention and the clinical field may have strengthened the depth and relevance of their interpretations in this study (27), as it was easy to understand what participants described and helped form follow-up questions during the interviews. However, closeness may also produce a risk of \"blind spots\", as the researchers may prejudice participants' experiences, omitting questions where the answers are believed to be obvious (27). Thus, throughout the process, trustworthiness and rigor were enhanced by discussing the methodology, findings, and interpretations with external researchers (including specialists in enactive theory), as well as user representatives. The presented theoretical framework (enactive theory) enhanced the distance to the material, as recommended in qualitative research (28).\n\n#### 2.5 Recruitment and participants\n\nPrior to recruitment, the study was introduced to individuals with multiple sclerosis (pwMS) through a seminar hosted by the Nordland MS Association. Additionally, seminars were conducted for health professionals in community healthcare and at the regional hospital. Written information about this study (and the RCT) was sent from the MS clinic at the regional hospital by post to all eligible individuals affiliated with the hospital. 
Individuals who wished to participate signed the attached consent form and returned it in the pre-stamped envelope. The inclusion criteria were as follows: had been diagnosed with MS, had a score on the Expanded Disability Status Scale (EDSS) (29) of ≤3.5, was ≥18 years, was employed (10%–100% of full-time) and residential address in the two predefined municipalities. The exclusion criteria were as follows: pregnancy, exacerbation of symptoms within two weeks prior to enrollment and other serious conditions compromising balance, walking or work capacity. All participants in the intervention group of the RCT (n = 15) were included (Table 3).\n\n#### 2.6 Data collection\n\nThe interview guide (Table 4) was developed based on literature reviews, clinical experience and discussions within the research group and with user representatives. Two test interviews were TABLE 3 Participant demographic information.\n\n| Variable | Total (n = 15) |\n| --- | --- |\n| Age in years | Mean 47.6 (SD 6.04) |\n| Gender (women/men) | 12 woman/3 men (80%/20%) |\n| Type of MS | Relapsing remitting 15 (100%) |\n| EDSS | Mean 1.8 (SD 0.9) |\n| Years since diagnosis | Mean 10.4 (SD 7.8) |\n| Participation in the outdoor group | Mean 4.6 sessions/total mean attendance 57.3% |\n\nTABLE 4 Interview guide.\n\n| Theme | Potential questions |\n| --- | --- |\n| Overall experiences and | Generally, what are your main experiences of |\n| reflections from participation | participation? |\n| | What did you perceive as meaningful? |\n| | What did you perceive as negative? |\n| Content | How did you experience: |\n| | • The content of the sessions in general |\n| | • The high-intensity walking/running |\n| | • The specific exercises |\n| | • The combination of specific exercises and |\n| | intervals of running/walking |\n| | • The exercise intensity |\n| | How did you respond to the exercises? How did |\n| | you experience getting tired? 
|\n| | How do you perceive your specific movement |\n| | impairments (if any) being addressed? |\n| | Please elaborate on situations where you |\n| | experienced the feeling of mastery/failure. |\n| | If anything: What was challenging? What would |\n| | you prefer to have been done differently? What |\n| | did you enjoy? |\n| | What was the value of participating in the |\n| | indoor exercise group beforehand? |\n| | How did you experience this kind of exercise |\n| | intervention compared to other type of exercise |\n| | you may have experience with? |\n| The role of the physiotherapists | What did the physiotherapists do? What was |\n| | the value of this to you? |\n| The group setting | How did you experience the group setting? |\n| | How did you perceive the atmosphere in the |\n| | group? |\n| The outdoor environment | How was it to exercise outdoors? |\n| | How did you perceive the city park |\n| | environment for exercise? |\n| Closing questions | Are there any experiences from participation |\n| | that you would like to elaborate on? Is anything |\n| | related to this project that we have not talked |\n| | about that you would like to say? |\n| | How did you experience this interview? |\n\nOverall participants were asked to describe situations to exemplify their answers, and follow-up questions were used to capture in-depth reflections, for example, What was positive/negative?, How did it feel?, What do you think of that?, What does it mean to you?, Can you elaborate on that?.\n\nconducted (with pwMS who were not part of the sample), and the interview guide was then refined around the following themes: overall experience and reflections from participation, content, outdoor setting, the group, and the physiotherapists. Questions were open-ended to capture rich, in-depth reflections regarding participants' experiences, following a phenomenological approach. 
The interviewer asked for both negative and positive experiences", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed13.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RSG_2004.pdf", - "query": "What is the revenue of Republic Services in 2002 ?", - "target_page": 2, - "target_passage": " $ 2,365.1", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "We also use significant judgment in the following areas:\n\n- determining cash generating units and the allocation of goodwill for the purpose of impairment testing (see note 13)\n- choosing methods for depreciating our property, plant, and equipment that we believe most accurately represent the consumption of benefits derived from those assets and are more representative of the economic substance of the use of the underlying assets (see *Property, Plant and Equipment*, below)\n- deciding to designate our spectrum licences as assets with indefinite useful lives since we believe they are likely to be renewed for the foreseeable future (see *Goodwill and Intangible assets*, below)\n- interpreting tax rules and regulations when we calculate income taxes (see note 27), and\n- determining the probability of loss when we assess contingent liabilities (see note 27).\n\n#### **Revenue Recognition**\n\nWe recognize revenue when we can estimate its amount and are reasonably assured that we can collect it. 
Revenue is recorded net of discounts.\n\n| Source of revenue | How we recognize it |\n| --- | --- |\n| Monthly subscriber fees for wireless, cable, telephony and Internet | • Record revenue as the service is provided |\n| services, rental of equipment, network services and media subscriptions | |\n| Revenue from airtime, data services, roaming, long-distance and optional | • Record revenue as the services or products are delivered |\n| services, pay-per-use services and other sales of products | |\n| Revenue from the sale of wireless and cable equipment | • Record revenue when the equipment is delivered and accepted by |\n| | the independent dealer or subscriber in direct sales |\n| Equipment subsidies related to providing equipment to new and existing | • Record a reduction of equipment revenues when the equipment is |\n| subscribers | activated |\n| Installation fees charged to subscribers in Cable | • These fees do not meet the criteria as a separate unit of accounting |\n| | • We defer and amortize these fees over the related service period, |\n| | which is approximately three years |\n| | • In Business Solutions we defer and amortize fees over the length of |\n| | the customer contract |\n| Activation fees charged to subscribers in Wireless | • These fees do not meet the criteria as a separate unit of accounting |\n| | • We record these fees as part of equipment revenue |\n| Advertising revenue | • Record revenue in the period the advertising airs on our radio or |\n| | television stations, is featured in our publications or displayed on our |\n| | digital properties |\n| Monthly subscription revenues received by television stations for | • Record revenue in the month the services are delivered to cable and |\n| subscriptions from cable and satellite providers | satellite providers' subscribers |\n| Toronto Blue Jays' revenue from home game admission and concessions | • Recognize revenue as the related games are played during the |\n| | baseball season and 
goods are sold |\n| Toronto Blue Jays' revenue from the Major League Baseball Revenue | • Recognize revenue when it can be determined |\n| Sharing Agreement which redistributes funds between member clubs | |\n| based on each club's relative revenues | |\n| Revenue from Toronto Blue Jays, radio and television broadcast | • Record revenue at the time the related games are aired |\n| agreements | |\n| Awards granted to customers through customer loyalty programs, which | • Estimate the portion of the original sale to allocate to the award |\n| are considered a separately identifiable component of the sales | credit based on the fair value of the future goods and services that |\n| transactions | can be obtained when the credit is redeemed |\n| | • Defer the allocated amount until the awards are redeemed by the |\n| | customer and we provide the goods or services |\n| | • Recognize revenue based on the redemption of award credits relative |\n| | to the award credits that we expect to be redeemed |\n| Interest income on credit card receivables | • Record revenue as earned using the effective interest rate method |", - "page_start": 98, - "page_end": 98, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "### **CONSENT OF INDEPENDENT REGISTERED PUBLIC ACCOUNTING FIRM**\n\nWe consent to the incorporation by reference in the Registration Statements (Form S-8 Nos. 333-81801, 333-78125, 333-45542 and 333-104048) pertaining to the Republic Services 401(k) Plan, 1998 Stock Incentive Plan, Republic Services, Inc. Amended and Restated Employee Stock Purchase Plan, and Republic Services, Inc. Amended and Restated 1998 Stock Incentive Plan, respectively, of our reports dated February 24, 2005, with respect to the consolidated financial statements and schedule of Republic Services, Inc., Republic Services, Inc. 
management's assessment of the effectiveness of internal control over financial reporting, and the effectiveness of internal control over financial reporting of Republic Services, Inc., included in this Annual Report (Form 10-K) for the year ended December 31, 2004.\n\n> /s/ ERNST & YOUNG LLP Certified Public Accountants\n\nFort Lauderdale, Florida February 24, 2005", - "page_start": 102, - "page_end": 102, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "carrier accounts, but due to the telecommunication industry down-turn of the last few years, the Company experienced write-offs in this area of the business totaling $0.5 million in 2002, due to bankruptcy filings of several significant telecommunications companies. In 2003, the inter-carrier segment of the business improved and the Company recovered $240 thousand of bad debt from the sale of certain accounts that were previously written-off.\n\nBad Debt expense summary, net of recoveries for the three years ended December 31, 2003:\n\n| In thousands | | | |\n| --- | --- | --- | --- |\n| | 2003 | 2002 | 2001 |\n| PCS subscribers | $1,716 | $ 3,744 | $ 1,241 |\n| Interexchange carriers | 48 | 488 | - |\n| Other subscribers and entities | 71 | 170 | 82 |\n| Total bad debt expense | $1,835 | $ 4,402 | $ 1,323 |\n\n#### *Revenue Recognition*\n\nThe Company recognizes revenues when persuasive evidence of an arrangement exists, services have been rendered or products have been delivered, the price to the buyer is fixed and determinable, and collectibility is reasonably assured. The Company's revenue recognition policies are consistent with the guidance in Staff Accounting Bulletin (\"SAB\") No. 101, Revenue Recognition in Financial Statements promulgated by the Securities and Exchange Commission, and the Emerging Issues Task Force (\"EITF\") 00-21, \"Revenue Arrangements with Multiple Deliverables\" (\"EITF 00-21\"). Effective July 1, 2003 the Company adopted EITF 00-21. 
The EITF guidance addresses how to account for arrangements that may involve multiple revenue-generating activities, i.e., the delivery or performance of multiple products, services, and/or rights to use assets. In applying this guidance, separate contracts with the same party, entered into at or near the same time, will be presumed to be a bundled transaction, and the consideration will be measured and allocated to the separate units based on their relative fair values. The consensus guidance was applicable to new PCS service agreements entered into for quarters beginning July 1, 2003. The adoption of EITF 00-21 required evaluation of each arrangement entered into by the Company for each sales channel. The Company will continue to monitor arrangements with its sales channels to determine if any changes in revenue recognition will need to be made in the future. The adoption of EITF 00-21 has resulted in substantially all of the PCS activation fee revenue generated through Company-owned retail stores and associated direct costs being recognized at the time the related wireless handset is sold and it is classified as equipment revenue and cost of equipment, respectively. Upon adoption of EITF 00-21, previously deferred PCS revenue and costs will continue to be amortized over the remaining estimated life of a subscriber, not to exceed 30 months. PCS revenue and costs for activations at other retail locations and through other sales channels will continue to be deferred and amortized over their estimated lives as prescribed by SAB 101. The adoption of EITF 00-21 had the effect of increasing equipment revenue by $68 thousand and increasing costs of equipment by $23 thousand, which otherwise would have been deferred and amortized.\n\nThe Company records equipment revenue from the sale of handsets and accessories to subscribers in its retail stores and to local distributors in its territories upon delivery. 
The Company does not record equipment revenue on handsets and accessories purchased from national third-party retailers, those purchased through the Company's business-to-business sales force, or directly from Sprint by subscribers in its territories. The Company believes the equipment revenue and related cost of equipment associated with the sale of wireless handsets and accessories is a separate earnings process from the sale of wireless services to subscribers. For competitive marketing reasons, the Company sells wireless handsets at prices lower than the cost. In certain instances the Company may offer larger handset discounts as an incentive for the customer to agree to a multi-year service contract. The Company also sells wireless handsets to existing customers at a loss, recording the revenue in handset sales and the corresponding cost in cost of goods, and accounts for these transactions separately from agreements to provide customers wireless service. These transactions are viewed as a cost to retain the existing customers and deter churn.\n\nFor the Company's wireless customers that purchase and activate their service through a channel not covered by EITF 00-21, the wireless customers generally pay an activation fee to the Company when they initiate service. The Company defers the activation fee revenue (except when a special promotion reduces or waives the fee) over the average life of its subscribers, which is estimated to be 30 months. The Company recognizes service revenue from its subscribers as they use the service. The Company provides a reduction of recorded revenue for billing adjustments and the portion of revenue (8%) that is retained by Sprint. The Company also reduces recorded revenue for rebates and discounts given to subscribers on wireless handset sales in accordance with (\"EITF\") Issue No. 
01-9 \"Accounting for Consideration Given by a Vendor to a Subscriber (Including a Reseller of the Vendor's Products).\" The Company", - "page_start": 45, - "page_end": 45, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Facility lease revenue contributed $5.5 million to wireline revenues, a decrease of $0.2 million or 3.5%. The decrease was primarily the result of the prolonged decline of lease rates associated with competitive pricing pressures and the economic downturn in the telecommunications industry. During 2002 the Company completed a second, diverse fiber route to its existing interconnection point in the Dulles airport area of Northern Virginia. This fiber route provides increased reliability for customers in the event of fiber cuts or breaks, and extends the availability of the Company's fiber network to additional market locations but to date has not added additional revenue to the Company's operation.\n\nBilling and collection services and other revenues contributed $0.4 million to wireline revenues, which was the same as 2002 results. Revenues from this service had declined in recent years, with interexchange carriers now issuing a greater proportion of their bills directly to their customers.\n\nWireline revenues from cable television services were $4.4 million, an increase of $0.1 million or 1.7%. The number of subscribers and service plan prices remained relatively constant during 2003.\n\nOther revenues, primarily consisting of Internet and 511Virginia service revenues were $5.8 million in 2003, an increase of $0.7 million or 13.5%. The Company had 17,420 dial-up Internet subscribers at December 31, 2003, compared to 18,050 at the end of the previous year. During 2003, the Company's DSL high-speed Internet access subscriber count increased to 1,298 from 646. Total Internet service revenue was $4.5 million, an increase of $0.3 million or 10.7%. 
The 511Virginia contract with the Virginia Department of Transportation contributed $1.3 million to other revenues, an increase of $0.4 million or 41.3%. Telecommunications equipment sales, services and lease revenues were $1.1 million, which reflects a $0.1 million decrease from 2002 results.\n\nTotal operating expenses were $87.2 million, an increase of $3.6 million or 4.3%. The primary driver in the increase in operating expenses is continued growth in the PCS operation somewhat offset by a significant decline in bad debt expense compared to 2002.\n\nLate in 2003, the Company made an employee benefits policy change, which eliminated the requirement for the Company to accrue a vacation liability in advance of the year in which the benefit was used. The result of this change was a reduction of benefit expense of $0.5 million for the year compared to 2002. Benefit expenses impact all operating departments based on the amount of direct labor charged to the department. The change has a one-time impact on the financial statements of the Company. The benefits policy now provides that employees earn and use their paid time off in the same period. In the future, under this policy, unused hours can be banked but only used for extended illness, not carried over for use as vacation.\n\nCost of goods and services was $10.9 million, an increase of $0.4 million or 4.2%. The PCS cost of goods sold was $8.5 million, an increase of $0.2 million or 2.3%. This change is due primarily to higher volumes of handsets sold through Company owned stores and PCS handset subsidies paid to third-party retailers. In 2003, the Company recorded approximately $1.8 million in handset costs related to existing subscribers upgrading their handsets. Prior to 2003, the Company did not track the specific costs related to subsidizing new handsets to existing customers. 
The cost of handset up-grades sold to existing customers is expected to increase as the customer base matures and handset manufacturers introduce new technologies in new handsets. The cable television programming (cost of service) expense was $1.6 million, an increase of $0.2 million or 16.3%. The Company has seen continuing upward pressure on the cost of cable TV programming by cable TV program providers.\n\nNetwork operating costs were $33.6 million, an increase of $1.1 million or 3.4%. The largest item in network operating costs is travel expense. These costs made up 31.8% and 32.9% of the total network and other costs in 2003 and 2002, respectively. Travel expense is the cost of minutes used by the Company's PCS subscribers on Sprint or other Sprint Affiliates' networks. Travel expense in 2003 was $10.8 million, an increase of $0.1 million due to a significant increase in travel minutes in 2003 which was offset by the impact of the rate decline. The travel rate declined from $0.10 per minute in 2002 to $0.058 per minute in 2003. Our PCS customers increased their average monthly travel minutes by 22% compared to 2002. In 2002, the average customer's travel usage was 130 minutes per month and in 2003 that average travel usage increased to 159 minutes per month.\n\nNetwork infrastructure maintenance costs were $4.9 million or 14.6% of total network operating costs, a decrease of $0.2 million from 2002. Rent for towers, tower sites, and buildings increased $0.9 million or 27.3% to $4.2 million. Lease escalators plus the increase in the number of sites leased contributed to the increase. Line costs in 2003 were $9.8 million or 29.1% of the network operating costs, an increase of $0.1 million.\n\nDepreciation and amortization expense was $16.6 million, an increase of $2.1 million or 14.8%. The PCS operation had depreciation expense of $10.2 million, an increase of $1.6 million or 18.9%. 
The 16 additional PCS base stations placed in service during 2003 resulted in higher depreciation expense for the year. In the telephone operation, depreciation", - "page_start": 48, - "page_end": 48, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Business Solutions generates revenue from services and equipment sales.\n\nNext generation revenue is generated by the provision of high-speed, high-reliability data and voice communications, provided on Rogers advanced IP and Ethernet and Cloud platforms and mainly over the extensive Rogers fibre, cable and wireless networks. Next generation revenue also includes Data Centre services revenue from the 2013 dates of business acquisitions.\n\nLegacy revenue is generated mainly by long distance, switched voice services and lower speed data communications, provided over TDM and end of life data platforms with client access primarily delivered through the use of third-party networks and tariffed ILEC services.\n\nBusiness Solutions continues to focus mainly on next generation IP-based services, and on leveraging higher margin on-net and near-net service revenue opportunities, using existing network facilities to expand offerings to the medium and large sized enterprise, public sector and carrier markets. 
Next generation services now represent 59% of total service revenue.\n\nRevenue from the lower margin off-net legacy business generally includes local and long-distance voice services and legacy data services which often use facilities that are leased rather than owned.\n\nFollowing our recent data centre business acquisitions, Business Solutions is now also focused on data centre colocation, hosting, cloud and disaster recovery services.\n\n#### **Higher Operating Revenue**\n\nOperating revenue was 7% higher this year compared to last year, the net result of:\n\n- higher revenue from next generation services, which grew by 31%, reflecting the impact of our acquisitions of Blackiron and Pivot Data Centres\n- continued execution of our plan to grow higher margin on-net and next generation IP-based services revenue\n- partially offset by ongoing decline in the legacy voice and data business, a trend management expects to continue as customers move to faster and more reliable IP services.\n\n#### **Higher Operating Expenses**\n\nWe assess Business Solutions operating expenses in two categories:\n\n- the cost of operating and maintaining telecom and data networking equipment\n- all other expenses involved in day-to-day operations, to service existing subscriber relationships and attract new subscribers.\n\nOperating expenses were higher this year, the net result of:\n\n- higher expenses related to our data centre acquisitions\n- partially offset by expected lower legacy service-related costs related to lower volumes and customer levels and ongoing initiatives to improve costs and productivity.\n\n#### **Higher Adjusted Operating Profit**\n\nAdjusted operating profit was 19% higher this year because of the contribution of new data centres, the ongoing growth in the higher margin on-net next generation business and cost efficiencies.\n\nExcluding the impact of the Blackiron and Pivot Data Centres acquisitions:\n\n- operating revenue would have been 3% lower this year 
compared to last year, instead of 7% higher as reported\n- adjusted operating profit would have been 11% higher this year compared to last year, instead of 19% higher as reported\n\nWe continue to work on data centre business integration and the optimization of Business Solutions' overall cost structures.", - "page_start": 49, - "page_end": 49, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "increased again on July 1, 2002 to $6.50, and comparable rate increases also impacted business subscribers. Tied to the SLC rate increases were declines in rates charged to interexchange carriers for interstate minutes of use. The 2002 results reflect a significantly larger increase in network usage, which more than offset the decline in rates.\n\nFacility lease revenue contributed $5.7 million to wireline revenues, a decrease of $0.8 million or 12.6% from 2001. The decrease was primarily the result of declining lease rates associated with competitive pricing pressure, and the economic downturn in the telecommunications industry.\n\nBilling and collection services contributed $0.4 million to wireline revenues, which was the same as 2001 results. Revenues from this service had declined in recent years, with interexchange carriers now issuing a greater proportion of their bills directly to their customers.\n\nWireline revenues from cable television services were $4.3 million, an increase of $0.5 million or 14.5%. In December 2001, the Company increased its basic service charge by $6.00 per month, which produced $0.3 million of the increase in cable television revenue. The remaining $0.2 million was generated by an increased penetration of digital services and increased pay per view sales.\n\nWithin other revenues, Internet and 511Virginia contract revenues from the Virginia Department of Transportation, were $5.1 million in 2002, an increase of $1.2 million or 30.4%. 
The Company had 18,050 dial-up Internet subscribers at December 31, 2002, compared to 17,423 subscribers at the end of 2001. Total Internet service revenue was $4.2 million, an increase of $0.6 million or 15.7%. Services provided under the 511Virginia contract contributed $0.9 million to other revenues, an increase of $0.6 million. Telecommunications equipment sales, services and lease revenues were $1.2 million, a nominal increase over 2001 results.\n\nTotal operating expenses were $83.6 million, an increase of $21.3 million or 34.3%. The continued growth in the PCS operation was principally responsible for the change.\n\nCost of goods and services was $10.5 million, an increase of $3.1 million or 41.8%. The PCS cost of goods sold was $8.3 million, an increase of $2.8 million or 50.2%. This change is due primarily to higher volumes of handsets sold through Company owned stores and PCS handset subsidies paid to third-party retailers. The cable television programming (cost of service) expense was $1.4 million, an increase of $0.1 million or 4.6%. The other cost of goods sold increased $0.3 million, compared to the same period in 2001.\n\nNetwork operating costs were $32.5 million, an increase of $5.8 million or 21.5%. Line and switching costs were $9.7 million, an increase of $2.6 million or 37.4%, due principally to the impact of the expanded PCS network. Travel expense, generated by the Company's PCS subscribers' use of minutes on other providers' portions of the Sprint wireless network, was $10.7 million, an increase of $0.9 million or 8.4%. The increase in customer travel usage more than offset the travel rate decline explained above in travel revenue. Plant specific costs, which include the operation and maintenance of the networks, were $9.6 million, an increase of $2.3 million or 30.7%. Tower, building, and land rentals, as well as PCS equipment maintenance, were major contributors to the increase in plant specific expenses. 
Other network costs such as power, network administration, and engineering, were $2.7 million, the same as in 2001.\n\nDepreciation and amortization expense was $14.5 million, an increase of $3.2 million or 28.6%. The PCS operation had depreciation expense of $8.6 million, an increase of $3.6 million or 72.7%. The PCS operation added 53 additional base stations during 2002.\n\nSelling, general and administrative expenses were $26.1 million, an increase of $9.3 million or 55.0%. Customer support costs were $7.8 million, an increase of $2.8 million or 55.3%. The growth in Sprint wireless subscribers was the primary driver for this increase. Advertising expense was $4.3 million, an increase of $1.5 million or 55.8%. This change was primarily due to the stepped-up and ongoing marketing efforts to support the PCS operations in the Quad State market and particularly the Central Penn market. PCS sales staff expenses were $2.7 million, an increase of $0.7 million or 32.7%. The increase was principally due to the full year operations of the three retail locations and adding additional sales staff.\n\nThe Company experienced significant bad debt losses in its PCS operations related to the Sprint Clear PaySM program. The program was initially targeted at customers in sub-prime credit classes and did not require a deposit upon activation of service. As a result of default rates that exceeded projections, the Company experienced a substantial increase in bad debt expense, which rose from $1.2 million in 2001 to $4.4 million in 2002. The reinstatement of deposit requirements in April 2002 caused some moderation in bad debt expense by the end of the year. Total PCS bad debt expense for 2002 was $3.7 million; part of this expense is associated with several large telecommunications customers who filed bankruptcies in 2002. 
", - "page_start": 51, - "page_end": 51, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### **Media**\n\nThe trends in Media's results are generally the result of continual investment in prime-time and specialty programming, higher sports rights costs, higher subscriber fees, and fluctuations in advertising and consumer market conditions.\n\nSeasonal fluctuations relate to periods of increased consumer activity and their impact on advertising and related retail cycles, the MLB season, where revenues and expenses are concentrated in the spring, summer and fall months, and the NHL season, where advertising revenues and programming expenses are concentrated in the fall and winter months.\n\n#### 2012 FULL YEAR RESULTS COMPARED TO 2011\n\n#### **Operating Revenue**\n\nConsolidated revenue increased in 2012 by $140 million from 2011, Wireless contributed $142 million, Cable contributed $49 million and Media contributed $9 million, partially offset by decreases in revenue of $54 million in Business Solutions and in corporate items and intercompany eliminations of $6 million. The increase was due to overall higher subscriber levels, data revenue and equipment sales at Wireless and higher Internet revenue at Cable, partially offset by lower overall revenue at Business Solutions due to the phased exit of the legacy services business.\n\n#### **Adjusted Operating Profit**\n\nConsolidated adjusted operating profit increased in 2012 by $95 million from 2011, Wireless contributed $27 million, Cable contributed $56 million, Business Solutions contributed $3 million, and Media contributed $10 million. 
The increases at Wireless and Cable were due to the revenue growth described above combined with cost efficiencies.\n\n#### **Adjusted Net Income**\n\nConsolidated adjusted net income increased to $1,781 million in 2012, from $1,736 million in 2011, primarily due to increase in adjusted operating profit of 2%.", - "page_start": 59, - "page_end": 59, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Income from discontinued operations was $22.4 million after taxes, an increase of $15.0 million or 202%. The income from discontinued operations in 2003 includes the sale of the partnership interest in February 2003 and results from the two months of its operations in 2003.\n\nThe Company adopted FAS 143 \"Accounting for Asset Retirement Obligations.\" effective January 1, 2003, and as a result recorded a charge to earnings for the cumulative effect of this change in accounting of $76 thousand after taxes.\n\nNet income was $32.1 million, an increase of $27.6 million or 610%. The increase is a result of improved operating results in the PCS operations, the 2002 VeriSign stock loss and the sale of the cellular operations.\n\n#### **DISCONTINUED OPERATIONS**\n\nThe Company invested $2.0 million in the Virginia 10 RSA limited partnership in the early 1990's. The partnership's local customer base peaked in early 2000 with nearly 12,000 subscribers, then steadily declined to 6,700 by December 31, 2002. The decline was the result of competition with digital technologies and increased competition from national carriers in the area. As a result of the decline in the subscriber base, and the need for extensive capital expenditures to transform the analog network into a digital cellular network, the Company elected to sell its 66% interest in the partnership to one of the minority partners. The agreement was signed in November 2002, and closing was February 28, 2003. 
The Company's portion of the net income from its operations for 2003, 2002 and 2001 was $1.2 million, $7.4 million and $6.7 million, respectively.\n\n#### **CONTINUING OPERATIONS**\n\n#### **2002 compared to 2001**\n\nTotal revenue was $93.0 million in 2002, an increase of $24.3 million or 35.3%. Total revenues included $57.9 million of wireless revenues, an increase of $21.7 million or 60.2%; wireline revenues of $28.7 million, an increase of $1.3 million or 4.6%; and other revenues of $6.4 million, an increase of $1.2 million or 24.5%.\n\nWithin wireless revenues, the PCS operation contributed $55.5 million, an increase of $21.4 million, or 63.0%. PCS service revenues were $37.4 million, an increase of $18.3 million or 95.7%. The increase in the subscriber base, which totaled 67,842 at December 31, 2002, was an increase of 20,524 or 43% from the prior year end.\n\nPCS travel revenue, which is compensation between Sprint and its PCS Affiliates for use of the other party's network, was $16.5 million, an increase of $2.9 million or 21.3%. Travel revenue is impacted by the geographic size of the Company's network service area, the overall number of Sprint wireless customers, and the travel exchange rate. The rate received on travel was $0.10 per minute in 2002. The rates in 2001 were $0.20 per minute from January 1, 2001 through April 30, 2001; $0.15 per minute from May 1, 2001 through September 30, 2001; and $0.12 per minute from October 1, 2001 through December 31, 2001.\n\nPCS equipment sales were $1.6 million, an increase of $0.3 million or 19.6%. The equipment sales are net of $0.3 million of rebates and discounts given at the time of sale, which became more pronounced during the year to meet industry competition for subscriber additions and subscriber retention.\n\nIn accordance with Sprint's requirements, the Company launched third generation (3G 1X) service in August 2002. 
The impact of 3G 1X-network enhancements on revenues was not significant in 2002.\n\nTower leases added $2.1 million to wireless revenues, an increase of $0.4 million or 24.5%. The increase was the result of other wireless carriers executing additional leases to use space on the Company's portfolio of towers. Of the 82 towers and poles owned by the Company as of December 31, 2002, 46 have tower space leased to other carriers.\n\nWireless revenues from the Company's paging operation were $0.3 million, a decrease of $0.1 million as the local customer base increasingly chose alternative digital wireless services. Paging service subscribers declined by 7.8% in 2002 from 3,190 subscribers to 2,940 subscribers.\n\nWithin wireline revenues, the Telephone operation contributed $22.5 million, an increase of $0.9 million, or 4.0%. Telephone access revenues were $10.9 million, an increase of $1.4 million or 14.8%. The growth in access revenues was driven by a 38.4% increase in access minutes of use on the Company's network and an increased percentage of minutes in the intrastate jurisdiction, where rates are higher than the interstate jurisdiction. On January 1, 2002 the Federal subscriber line charge (SLC) for residential customers increased from $3.50 to $5.00 per month. The SLC", - "page_start": 50, - "page_end": 50, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### **Note 1. Summary of Significant Accounting Policies (Continued)**\n\n101, (SAB No.101). Effective July 1, 2003, the Company adopted Emerging Issues Task Force (\"EITF\") No. 00-21, \"Accounting for Revenue Arrangements with Multiple Element Deliverables.\" The EITF guidance addresses how to account for arrangements that may involve multiple revenue-generating activities, i.e., the delivery or performance of multiple products, services, and/or rights to use assets. 
In applying this guidance, separate contracts with the same party, entered into at or near the same time, will be presumed to be a bundled transaction, and the consideration will be measured and allocated to the separate units based on their relative fair values. The adoption of EITF 00-21 has required evaluation of each arrangement entered into by the Company for each sales channel. The Company will continue to monitor arrangements with its sales channels to determine if any changes in revenue recognition would need to be made in the future. The adoption of EITF 00-21 has resulted in substantially all of the activation fee revenue generated from Company-owned retail stores and associated direct costs being recognized at the time the related wireless handset is sold and is classified as equipment revenue and cost of equipment, respectively. Upon adoption of EITF 00-21, previously deferred revenues and costs will continue to be amortized over the remaining estimated life of a subscriber, not to exceed 30 months. Revenue and costs for activations at other retail locations will continue to be deferred and amortized over their estimated lives as prescribed by SAB 101. The adoption of EITF 00- 21 had the effect of increasing equipment revenue by $68 thousand and increasing costs of activation by $23 thousand in 2003, which otherwise would have been deferred and amortized. The amounts of deferred revenue under SAB 101 at December 31, 2003, 2002 and 2001 were $1.2 million, $1.5 million and $1.2 million, respectively. The deferred costs at December 31, 2003, 2002 and 2001 were $0.4 million, $0.7 million and $0.7 million, respectively.\n\nThe Company records its PCS service revenue net of the 8% of collected revenue that is paid to Sprint. 
Under the management agreement with Sprint, through December 31, 2003 Sprint is entitled to retain 8% of all collected service revenue from subscribers whose service home is in the Company's territory, and 8% of the collected roaming revenue generated by non-Sprint wireless subscribers who use the Company's network. With the adoption of the new Amended Agreement, the Company will record its service revenue and receive payment from Sprint based on billed revenue, net of 8% of billed revenue retained by Sprint, customer credits, and allocated write-offs.\n\n*Stock Option Plan:* To account for its fixed plan stock options, the Company applies the intrinsic value-based method of accounting prescribed by Accounting Principles Board (APB) Opinion No. 25, \"Accounting for Stock Issued to Employees,\" and related interpretations including Financial Accounting Standards Board (FASB) Interpretation No. 44, \"Accounting for Certain Transactions involving Stock Compensation,\" an interpretation of APB Opinion No. 25 issued in March 2000. Under this method, compensation expense is recorded on the date of the grant only if the current market price of the underlying stock exceeded the exercise price. SFAS No. 123, \"Accounting for Stock-Based Compensation,\" established accounting and disclosure requirements using a fair value-based method of accounting for stock-based employee compensation plans. As allowed by SFAS No. 123, the Company has elected to continue to apply the intrinsic value-based method of accounting described above, and has adopted the disclosure requirements of SFAS No. 123, as amended by SFAS No. 148, \"Accounting for Stock-Based Compensation-Transition and Disclosure-an amendment of FASB Statement No. 123.\"\n\nGrants of options under the Plan are accounted for following the APB Opinion No. 25 and related interpretations. Accordingly, no compensation expense has been recognized under the Plan. 
Had compensation expense been recorded, based on fair values of the awards at the grant date (the method prescribed in SFAS No. 123), reported net income and earnings per share would have been reduced to the pro forma amounts shown in the following table:\n\n| | 2003 | | 2002 | | 2001 | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Net Income | | | (in thousands, except per share amounts) (in thousands, except per share amounts) | | | |\n| As reported | $ 32,074 | | $ 4,519 | | $16,372 | |\n| Pro forma | $ 31,889 $ 31,899 | | $ 4,307 | | $16,115 | |\n| Earnings per share, basic and diluted | | | | | | |\n| As reported, basic | $ | 4.23 | $ | 0.60 | $ | 2.18 |\n| As reported, diluted | $ | 4.22 | $ | 0.60 | $ | 2.17 |\n| Pro forma, basic | $ | 4.21 | $ | 0.57 | $ | 2.14 |\n| Pro forma, diluted | $ | 4.19 | $ | 0.57 | $ | 2.13 |\n\nEarnings per share: Basic income (loss) per share is computed by dividing net income (loss) by the weighted average number of common shares outstanding during the year. Diluted income (loss) per share is computed by dividing the income (loss) by the sum of the weighted average number of common shares outstanding and potential dilutive common shares determined using the treasury stock method. Because the Company reported a net loss from continuing operations in 2002, the diluted income (loss) per share is the same as basic income (loss) per share since including any", - "page_start": 22, - "page_end": 22, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### **Key Achievements**\n\n#### **Higher Operating Revenue and Adjusted Operating Profit**\n\n- Consolidated operating revenue was 2% higher this year compared to 2012, led by an increase in data revenue at Wireless, higher Internet revenue at Cable, higher Next Generation revenue at Business Solutions and higher subscriber revenue at Media. 
Revenue grew by 3% in Cable, 7% in Business Solutions and 5% in Media, while revenue at Wireless remained unchanged as the increase in data revenue was offset by the decrease in voice revenue.\n- Consolidated adjusted operating profit rose 3% this year to $4,993 million, with consolidated adjusted operating profit margins of 39.3%, resulting from higher revenue, the realization of cost efficiencies and shifts in the mix of revenue from products and services sold.\n- Postpaid Wireless subscriber growth continued with net additions of 228,000 and lower churn of 1.24%.\n- Cable high-speed Internet subscribers grew by 97,000 and cable telephony lines grew by 79,000, while television households decreased by 87,000 compared to 2012.\n\n#### **Strong Cash Flow**\n\n- Pre-tax free cash flow, defined as adjusted operating profit less spending on property, plant and equipment, and interest on longterm debt (net of capitalized interest), increased by 1% compared to 2012 to $2,044 million due to a 3% increase in adjusted operating profit offset by higher spending on property, plant and equipment. 
After-tax cash flow decreased by 6% from 2012 levels to $1,548 due to a 31% increase in cash taxes.\n#### **Strong Balance Sheet and Liquidity Position**\n\n- Issued and fully hedged US$2.5 billion of ten and thirty year senior notes at some of the lowest coupon rates ever achieved for Rogers corporate debt, in two separate offerings comprising:\n\t- US$500 million of 3.00% senior notes due 2023 and US$500 million of 4.50% senior notes due 2043\n\t- US$850 million of 4.10% senior notes due 2023 and US$650 million of 5.45% senior notes due 2043\n- Our overall weighted average cost of debt was 5.50% at December 31, 2013 compared to 6.10% at December 31, 2012 and the weighted average term to maturity on our debt was 11.3 years, compared to 9.2 years at December 31, 2012.\n\n- Ended the year with $4.5 billion of available liquidity, comprised of $2.3 billion cash on hand, $2 billion available under our bank credit facility and $0.2 billion available under our $0.9 billion accounts receivable securitization program.\n- In May 2013, each of Fitch Ratings and Standard and Poor's Ratings Services upgraded RCI's senior unsecured debt to BBB+ (from BBB) with a stable outlook, while Moody's Investors Service's comparable rating is Baa1 with a stable outlook remained unchanged from last year.\n\n#### **Growing Dividends**\n\n- We increased our annualized dividend rate in February 2013 by 10% to $1.74 per Class A Voting and Class B Non-Voting share and paid a quarterly dividend of $0.435 per share during 2013. We further increased our annualized dividend on February 12, 2014, by 5% to $1.83.\n#### **New CEO**\n\n- Guy Laurence joined Rogers in December 2013, as our new President and Chief Executive Officer, succeeding Nadir Mohamed who retired from Rogers. Mr. 
Laurence brings 30 years of global experience in the telecommunications and media industries.\n#### **Significant Developments**\n\n- Exclusive 12-year licensing agreement to broadcast national NHL games, beginning with the 2014-2015 season was signed. The agreement grants Rogers the exclusive distribution rights of all national regular season and playoff games within Canada, in multiple languages, across all platforms. At the same time, we executed separate agreements to sublicence certain of these broadcasting rights to TVA Sports and CBC.\n- Strategic acquisitions of Score Media Inc. (theScore), Mountain Cablevision Ltd. (Mountain Cable), Blackiron Data ULC (Blackiron) and Pivot Data Centres were completed.\n- Rogers First Rewards, a new loyalty program allowing customers to earn points on their eligible purchases and redeem them online for a wide selection of Rogers products and services, was launched in the Greater Toronto Area, Ottawa, Kingston, Sudbury and other cities throughout Ontario. We also received regulatory approval to launch a Rogers credit card which augments this loyalty program and will accelerate the rate at which customers earn points.\n\n**ADJUSTED OPERATING PROFIT BY SEGMENT**\n\n#### (IN MILLIONS OF DOLLARS) **CONSOLIDATED TOTAL ASSETS**\n\n(IN MILLIONS OF DOLLARS)", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RSG_2004.pdf", - "query": "Who is the Vice Chairmain of the Board of Republic Services ?", - "target_page": 5, - "target_passage": " Harris W. Hudson1 Vice Chairman of the Board", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# BOARD OF DIRECTORS\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 41\n\n#### **STEPHEN GERLACH**\n\n#### LLB\n\nAge 59. Director since 5 September 1989 and Chairman since 4 May 2001. 
Chairman of Santos Finance Ltd and of the Environmental and Safety Committee, Finance Committee and Nomination Committee and member of the Remuneration Committee of the Board. Chairman of Futuris Corporation Ltd and Challenger Beston Limited and a Director of Southcorp Ltd. Former Managing Partner of the Adelaide legal firm, Finlaysons. Former Chairman of Amdel Ltd and Equitoral Mining Ltd.\n\n#### **JOHN CHARLES ELLICE-FLINT** BSc (Hons)\n\nAge 54. Managing Director since 19 December 2000, member of the Environmental and Safety Committee of the Board, Director of Santos Finance Ltd and also Chairman of other Santos Ltd subsidiary companies. Thirty years' experience in the international oil and gas industry including twenty six years with Unocal, including as Senior Vice President: Global Exploration and Technology and Vice President: Corporate Planning and Economics. Member and Chair of the South Australian Museum Board.\n\n## **PETER CHARLES BARNETT** FCPA\n\nAge 64. Director since 31 October 1995 and member of the Environmental and Safety Committee, Nomination Committee, Finance Committee and Remuneration Committee of the Board. Director of AMCIL Ltd and Opis Capital Ltd. Former Managing Director and Chief Executive Officer of Pasminco Ltd (1988–1995) and Chief Executive Officer of EZ Industries Ltd. Former director of Mayne Group Ltd.\n\n## **KENNETH ALFRED DEAN** FCPA, MAICD\n\nAge 52. Independent nonexecutive Director effective 23 February 2005. Extensive financial experience in the international petroleum industry, having held the position of Chief Executive Officer, Shell Financial Services. During his 30-year career with Shell, held several other senior executive positions in treasury, audit, accounting, IT and financial and corporate services. Fellow of the Australian Society of Certified Practising Accountants and member of the Australian Institute of Company Directors.\n\n#### **RICHARD MICHAEL HARDING** MSc\n\nAge 55. 
Director since 1 March 2004 and member of the Audit Committee of the Board. Former President and General Manager of BP Developments Australia Limited and former Vice-Chairman and Council member of the Australian Petroleum Production and Exploration Association. Chairman of the Ministry of Defence Command Support, Training and Simulation Project Governance Board and Director of Arc Energy Ltd.\n\n## **GRAEME WILLIAM MCGREGOR**\n\nAO, BEc, FCPA, FAIM, FAICD Age 66. Director since\n\n3 September 1999. Chairman of the Audit Committee and member of the Finance Committee and Nomination Committee of the Board. Director of Santos Finance Ltd. Director of Foster's Group Ltd, Nufarm Ltd, WMC Resources Ltd and Goldman Sachs JB Were Managed Funds Limited. Member of the Financial Reporting Council. Former Executive Director Finance of The Broken Hill Proprietary Company Limited and former Director of Community Foundation Network Ltd.\n\n# **MICHAEL ANTHONY O'LEARY** DipMinE, BSc, FAusIMM, FAIM,\n\nFAICD Age 69. Director since 15 October 1996 and member of the Environmental and Safety Committee of the Board. Director of Newcrest Mining Ltd. Former Chairman of Hamersley Iron, Argyle Diamonds, Dampier Salt, former Deputy Chairman of Bank of Western Australia Ltd and former Director of Rio Tinto Ltd and Rio Tinto plc.\n\n## **CHRISTOPHER JOHN RECNY**\n\nBSc, MSc, MBA Age 51. Independent nonexecutive Director effective 23 February 2005. Extensive international management and project management experience, including as global head of international consultancy L.E.K.'s natural resources practice – a company he helped establish in the 1980s. Regional head of Asia-Pacific for L.E.K. and previously spent eight years with Fluor Corporation as a project manager on, and undertaking feasibility studies for, major resource developments.\n\n## **PROFESSOR JUDITH SLOAN**\n\nBA (Hons), MA, MSc Age 50. Director since 5 September 1994. 
Chairperson of the Remuneration Committee and member of the Audit Committee of the Board. Deputy Chair of the Australian Broadcasting Corporation and Part-time Commissioner of the Productivity Commission. Former Professor of Labour Studies at the Flinders University of South Australia and Director of the National Institute of Labour Studies. Former Chairperson of SGIC Holdings Ltd and Director of Mayne Group Ltd.\n\n**Santos Board of Directors during November 2004 Board meeting held at Moomba, Cooper Basin. Left to right: Graeme McGregor, John Ellice-Flint, Peter Barnett, Stephen Gerlach, Michael Harding, Judith Sloan, Michael O'Leary and Frank Conroy (who retired in December 2004). Kenneth Dean and Christopher Recny subsequently joined the Board in February 2005.**", - "page_start": 42, - "page_end": 42, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# OFFICERS »\n\nAubrey K. McClendon Chairman of the Board and Chief Executive Officer\n\nMartha A. Burger Senior Vice President – Human and Corporate Resources\n\nJames C. Johnson Senior Vice President – Energy Marketing\n\nThomas S. Price, Jr. Senior Vice President – Corporate Development and Government Relations\n\nSteven C. Dixon Executive Vice President – Operations and Geosciences and Chief Operating Officer\n\nJeffrey A. Fisher Senior Vice President – Production\n\nMichael A. Johnson Senior Vice President – Accounting, Controller and Chief Accounting Officer\n\nJ. Mike Stice Senior Vice President – Natural Gas Projects and Chief Executive Officer Chesapeake Midstream Partners, L.P.\n\nDouglas J. Jacobson Executive Vice President – Acquisitions and Divestitures\n\nJennifer M. Grigsby Senior Vice President, Treasurer and Corporate Secretary\n\nStephen W. Miller Senior Vice President – Drilling\n\nCathy L. Tompkins Senior Vice President – Information Technology and Chief Information Officer\n\nDomenic J. Dell'Osso, Jr. Executive Vice President and Chief Financial Officer\n\nHenry J. 
Hood Senior Vice President – Land and Legal and General Counsel\n\nJeffrey L. Mobley Senior Vice President – Investor Relations and Research", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "of, non-executive, independent Directors, except for the Environmental and Safety Committee, which includes the CEO as a member.\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 30\n\nThe Board Guidelines prescribe that the Board is to meet at least eight times a year, including a strategy meeting of two days duration. The number of meetings of the Board and of each of its Committees and the names of attendees at those meetings are set out on page 47 of this Annual Report. Board Meetings are structured in two separate sessions, without management present for one of those sessions. The agenda for meetings is prepared by the Company Secretary in conjunction with the Chairman and CEO, with periodic input from the Board. Comprehensive Board papers are distributed to Directors in advance of scheduled meetings. Board meetings take place both at the Company's head office and at key operating sites, to assist the Board in its understanding of operational issues.\n\nExecutive management attend Board and Committee meetings, at which they report to Directors within their respective areas of responsibility. This assists the Board in maintaining its understanding of the Company's business and assessing the executive management team. 
Where appropriate, advisors to the Company attend meetings of the Board and of its Committees.\n\n**2.3 Composition of the Board** The composition of the Board is determined in accordance with the Company's Constitution and the Board Guidelines which, among other things, require that:\n\n- the Board is to comprise a minimum of five and a maximum of ten Directors (exclusive of the CEO);\n- the Board should comprise a substantial majority of independent, non-executive Directors;\n- there should be a separation of the roles of Chairman and Chief Executive Officer of the Company; and\n- the Chairman of the Board should be an independent, non-executive Director.\n\nUnder the Company's Constitution approximately onethird of Directors retire by rotation each year and Directors appointed during the year are required to submit themselves for election by shareholders at the Company's next Annual General Meeting. The Board Guidelines encourage Directors to retire at the first Annual General Meeting after reaching the age of 72 years and not seek reappointment.\n\nCurrently, the Board comprises eight non-executive Directors and one executive Director. The Board has adopted the definition set out in the ASX Best Practice Recommendations and as defined in the 2002 guidelines of the Investment and Financial Services Association Limited and considers all current nonexecutive Directors, including the Chairman, to be independent directors.\n\nGenerally, the Board considers a Director to be independent if he or she is not a member of management and is free of any business or other relationship that could materially interfere with, or could reasonably be\n\nperceived to materially interfere with, the Director's ability to act in the best interests of the Company. The Board will assess the materiality of any given relationship that may affect independence on a case by case basis and has adopted materiality guidelines to assist in that assessment. 
Under these guidelines, the following interests are regarded as material in the absence of any mitigating factors:\n\n- a holding of 5% or more of the Company's voting shares or a direct association with an entity that holds more than 5% of the Company's voting shares;\n- an affiliation with an entity which accounts for 5% or more or the revenue or expense of the Company.\n\nThe Board has determined that there should not be any arbitrary length of tenure that should be considered to materially interfere with a Director's ability to act in the best interests of the Company, as it believes this assessment must be made on a case by case basis with reference to the length of service of all members of the Board.\n\nEach Director's independence is assessed by the Board on an individual basis, with reference to the above materiality guidelines and focussing on an assessment of each Director's capacity to bring independence of judgment to Board decisions. In this context, as mentioned below, Directors are required to promptly disclose their interests in contracts and other directorships and offices held.\n\nThe names and details of the experience, qualifications, special responsibilities, and term of office of each Director of the Company are set out on page 41 of this Annual Report. Details of each Director's attendance at Board and Committee Meetings and their shareholdings are also set out on page 47 of this Annual Report.\n\n## **2.4 Nomination Committee**\n\nThe role, responsibilities and membership requirements of the Nomination Committee are documented in the Board Guidelines and in a separate Charter, approved by the Board.\n\nUnder the Board Guidelines, it is the responsibility of the Nomination Committee to devise the criteria for, and review membership of, and nominations to, the Board. 
The primary criteria adopted in selection of suitable Board candidates is their capacity to contribute to the ongoing development of the Company having regard to the location and nature of the Company's significant business interests and to the candidates' age and experience by reference to the attributes of existing Board members.\n\nWhen a Board vacancy exists or where it is considered that the Board would benefit from the services of a new Director with particular skills, the Nomination Committee has responsibility for proposing candidates for consideration by the Board and, where appropriate, engages the services of external consultants.\n\nPrior to appointment, each Director is provided with a letter of appointment which encloses a copy of the Company's Constitution and of the relevant policies. Additionally, the expectations of the Board in", - "page_start": 31, - "page_end": 31, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "respect to a proposed appointee to the Board and the workings of the Board and its Committees are conveyed in interviews with the Chairman and induction procedures include access to appropriate executives in relation to details of the business of the Company.\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 31\n\nThe Chairman of the Board is the Chairman of the Nomination Committee. The current members of the Nomination Committee, all of whom are independent non-executive Directors, are Mr S Gerlach (Chairman), Mr P C Barnett and Mr G W McGregor.\n\n## **3. 
REVIEW OF BOARD AND EXECUTIVE PERFORMANCE**\n\nThe Board Guidelines provide that:\n\n- non-executive Directors are to be appointed on the basis that their nomination for re-election as a Director is subject to review and support by the Board;\n- there should be appropriate circumstances justifying reelection after a specified period of service as a Director; and\n- the contribution of the Board and of individual Directors is the subject of formal review and discussion on a biennial and annual basis, respectively.\n\nAs the biennial review of the Board and of its Committees was conducted by an independent consultant in 2003, no formal performance appraisal of the Board was conducted in 2004.\n\nPerformance evaluation of key executives is undertaken on a quarterly and annual basis by the CEO and summarised in presentation to the Remuneration Committee of the Board, both specifically for determination of remuneration and generally in relation to management succession planning for review by the Board.\n\n## **4. INDEMNITY, ACCESS TO INFORMATION AND INDEPENDENT PROFESSIONAL ADVICE**\n\nInformation in respect to indemnity and insurance arrangements for Directors and senior executives appears in the Directors' Statutory Report on page 49 of this Annual Report.\n\nThe Board Guidelines set out the circumstances and procedures pursuant to which a Director, in furtherance of his or her duties, may seek independent professional advice at the Company's expense. 
Those procedures require prior consultation with, and approval by, the Chairman and assurances as to the qualifications and reasonableness of the fees of the relevant expert and, under normal circumstances, the provision of the expert's advice to the Board.\n\nPursuant to a deed executed by the Company and each Director, a Director also has the right to have access to all documents which have been presented to meetings of the Board or to any Committee of the Board or otherwise made available to the Director whilst in office. This right continues for a term of seven years after ceasing to be a Director or such longer period as is necessary to determine relevant legal proceedings that commenced during that term.\n\n### **5. REMUNERATION**\n\nThe role, responsibilities and composition of the Remuneration Committee and details of\n\nthe Company's remuneration objectives and principles, nonexecutive Director remuneration and executive remuneration are set out on pages 37 to 40 of this Annual Report in the Directors' and Executives' Remuneration section, as well as in the Directors' Statutory Report and in Notes 18 and 26 of the Financial Statements.\n\nDetails of the nature and amount of the remuneration of:\n\n- the Directors; and\n- the Specified Executives;\n\nare set out on pages 37 to 40 of this Annual Report.\n\n#### **6. AUDIT COMMITTEE**\n\nThe role of the Audit Committee is documented in a Charter, approved by the Board. This Charter was revised in August 2004 in line with contemporary best practice, and can be found on the Company's website.\n\n#### **6.1 Composition of the Audit Committee**\n\nThe Committee is required to consist of no less than three members and to meet at least three times per year. 
All members must be independent, non-executive Directors and financially literate, with at least one member having past employment experience in finance and accounting, requisite professional certification in accounting or other comparable experience or background. The Chairman of the Board is precluded from being the Chairman of the Audit Committee.\n\nThe current members of the Audit Committee, all of whom are independent non-executive Directors, are: Mr G W McGregor (Chairman), Professor J Sloan\n\nand Mr R M Harding. The external auditors, CEO, Chief Financial Officer (\"CFO\"), Manager Risk and Audit, and Manager – Financial Planning and Analysis attend Committee meetings by invitation. There were 4 meetings held in 2004.\n\n## **6.2 Role of the Audit Committee**\n\nThe primary objective of the Audit Committee is to assist the Board to fulfil its corporate governance and oversight responsibilities related to financial accounting practices, external financial reporting, financial reporting, risk management and internal control, and the internal and external audit function.\n\nSpecifically, the role of the Audit Committee includes:\n\n- examining the accounting policies of the Company to determine whether they are appropriate and in accordance with generally accepted practices;\n- ensuring that truth and fairness is reflected in the preparation and publication of the Company's financial reports;\n- meeting regularly with the internal and external auditors to reinforce their respective independence and to determine the appropriateness of internal and external audit procedures;\n- reviewing the performance of the internal and external auditors and providing them with confidential access to the Board;\n- receiving from the external auditors a formal written statement delineating all", - "page_start": 32, - "page_end": 32, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "Our decentralized structure is an advantage. 
It gives us flexibility and speed in reacting to local conditions. Our division leaders are well-positioned to respond immediately to the needs, changes and developments among their customers. We in the corporate office set the goals, establish the discipline, provide financial resources, management and operational support, but it is in our local divisions where customer relationships are established and the work is done. Our community-based focus forges strong local relationships and ensures that, at the customer level, the highest expectations are exceeded.\n\n**Board of Directors**\n\nJames E. O'Connor 1 *Chairman & Chief Executive Officer*\n\nJames E. O'Connor\n\n**Officers**\n\nDavid A. Barclay\n\nTod C. Holmes\n\nLee V. Twyford\n\nBrian A. Bales\n\nTim M. Benter\n\nJerry S. Clark\n\nPaul J. Connealy *Vice President, Tax* Matthew E. Davies\n\nArthur J. Dudzinski\n\nKenneth M. Baylor\n\n*Vice President & Controller*\n\nMichael J. Cordesman\n\nW. Lee Nutter 2, 3, 4 *Chairman, Compensation Committee Chairman, President & Chief Executive Officer Rayonier, Inc. (a forest products company)*\n\n*Chairman & Chief Executive Officer*\n\n*President & Chief Operating Officer* \n\n*Senior Vice President & General Counsel*\n\n*Vice President, Corporate Development*\n\n*Vice President, Employee & Labor Relations*\n\n*Vice President & Associate General Counsel*\n\n*Regional Vice President - Western Region*\n\n*Vice President, Environmental Engineering & Compliance*\n\n*Senior Vice President & Chief Financial Officer*\n\n*Senior Vice President & Chief Information Officer*\n\nWilliam C. Flower\n\nAllan C. Sorensen 2, 3, 4 *Presiding Director President & Chief Executive Officer Interim Health Care, Inc. (a provider of temporary labor to the healthcare industry)*\n\nHarris W. 
Hudson 1 *Vice Chairman of the Board*\n\n1 *Member, Executive Committee* • 2 *Member, Audit Committee* • 3 *Member, Compensation Committee* • 4 *Member, Nominating and Corporate Governance Committee*\n\nRamon A. Rodriguez 2, 3, 4 *Chairman, Audit Committee President & Chief Executive Officer Madsen, Sapp, Mena, Rodriguez & Co. (a public accounting firm)*\n\nMatthew D. Katz\n\nRonald R. Krall\n\nEdward A. Lang III\n\nThomas E. Miller\n\nCraig J. Nichols\n\nCharles F. Serianni\n\nRobert N. Shepard\n\nKevin C. Walbridge\n\nGerard W. Wickett\n\nGary L. Sova\n\n*Vice President, Communications*\n\n*Vice President & Associate General Counsel*\n\nMichael W. Wickham 2, 3, 4 *Retired Chairman, President & Chief Executive Officer, Roadway Corporation*\n\nJohn W. Croghan 2, 3, 4 *Chairman, Nominating and Corporate Governance Committee Chairman, Rail-Splitter Capital Management, LLC (an investment management firm)*\n\n*Regional Vice President - Southwest Region*\n\n*Vice President & Chief Accounting Officer*\n\n*Regional Vice President - Southern Region*\n\n*Regional Vice President - Central Region*\n\n*Vice President, Purchasing & Maintenance*\n\n*Regional Vice President - Eastern Region*\n\n*Vice President, Finance & Treasurer*\n\n*Vice President, Human Resources*\n\n*Vice President, Marketing & Sales*\n\nUltimately, all the things we do as a Company are aimed at increasing value for our shareholders. We know the importance of strong and predictable cash flow in meeting our shareholders' expectations. Over time, our cash flow has proven to be a strong indicator of the quality of our earnings. Last year's record free cash flow enabled us to reinvest in our business, acquire new companies, repurchase $266 million of our common stock and double the quarterly dividend to $0.12 per share. The plan this year is similar. 
We will continue to use our strong free cash flow to grow and strengthen the Company by building our customer base through internal growth and strategic acquisitions. Additionally, we plan to repurchase Republic stock worth up to $275 million and pay a regular quarterly cash dividend to our shareholders. We believe these steps will increase shareholder value.\n\n#### **The Year Ahead**\n\n*Dear Fellow Shareholders:*\n\nI am pleased to report that 2004 was a very good year for Republic Services, Inc. Our team met and exceeded the important financial and management goals we told you about here a year ago, and we plan to work just as hard and\n\nRepublic is strengthening its competitive position among the leading waste services providers every day. As always, we are doing so by offering our customers cost-effective and safe waste collection, reliable recycling, and\n\nI am proud of our team and what they accomplished. The\n\nfor 2005 is \"Republic Services…A Company that cares\".\n\ngrowth markets, especially those in the rapidly expanding Sunbelt states.\n\nreinvestment, repurchases of our stock and regular quarterly cash dividends.\n\nwill be no different. We will continue to concentrate on these fundamentals.\n\nRevenue in 2004 grew 7.6 percent to $2.7 billion, a record. The increases came largely from new municipal contracts and improved pricing. At the same time, we benefited from our presence in high-\n\nWe met last year's guidance. Net income per diluted share rose 15 percent to $1.53. Our revenue enhancement and cost reduction efforts produced results. We generated a record level of free cash flow - $388 million to be exact. Republic continues to generate strong and predictable levels of cash flow. As in the past year, we will concentrate on free cash flow and use it for acquisitions,\n\nAs I thought about these achievements, I realized they result from the environment that we work to create for both our customers and our people. 
We care about our customers and the communities we serve. About our people. About the environment. And, of course, we care about you -- our shareholders. Every year we adopt a theme that captures our Company and our values. Our theme\n\nOur 13,400 dedicated people worked hard last year to create real value. We improved the way we deliver our services, increasing our efficiency in routing our collection trucks. We improved the way we construct disposal cells at numerous landfills, lowering costs. We worked with our vendors to control prices. And, we communicated to our customers the value of the services we offer. This year\n\nRepublic's future is bright. We are mindful of our mission. We know our business exists to ease the burden of managing society's waste. It's not a glamorous business, but it is an essential one, and we\n\nAt the end of the year, Republic had 140 collection companies, 58 landfills, 96 transfer stations and 35 recycling facilities in 22 states. These resources give us many opportunities to listen to our customers, anticipate their needs and quickly respond to them. Each customer faces challenges unique to his or her business and community. Our goal is to remain flexible and to tailor our services to each\n\naccomplish just as much in the coming year.\n\n**Letter to Shareholders**\n\nenvironmentally protective disposal options.\n\nresults tell you just how well they did.\n\ntake this responsibility very seriously.\n\ncustomer.\n\nWe are focused on improving our service and strengthening relationships with our customers. Exceptional service allows us to build loyalty and create lasting bonds with those we serve. We will continue to train and develop our people, too, so they may grow as we grow as a Company. And we will continue to focus on improving the safety of our operations, an important commitment we have made to our people and service communities.\n\nThe last year was indeed an outstanding one for Republic. 
Our goal is to continue to deliver impressive results in 2005.\n\nI am both privileged and grateful to have the opportunity to lead a team of such exceptional people. Everyday, I grow more impressed with the experience, knowledge, loyalty and hard work they contribute. Republic truly has one of the best management and operations teams in America.\n\nOn behalf of all of us at Republic, I want to thank our shareholders for the trust they have placed in us. We are a Company that cares about you, and we pledge to continue working hard to serve you in 2005 and beyond.\n\nSincerely,\n\n**James E. O'Connor** *Chairman and Chief Executive Officer* March 31, 2005", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "#### **Letter to Shareholders**\n\n# *Dear Fellow Shareholders:*\n\nI am pleased to report that 2004 was a very good year for Republic Services, Inc. Our team met and exceeded the important financial and management goals we told you about here a year ago, and we plan to work just as hard and accomplish just as much in the coming year.\n\nRepublic is strengthening its competitive position among the leading waste services providers every day. As always, we are doing so by offering our customers cost-effective and safe waste collection, reliable recycling, and environmentally protective disposal options.\n\nI am proud of our team and what they accomplished. The results tell you just how well they did.\n\nare exceeded.\n\n**The Year Ahead**\n\nimpressive results in 2005.\n\n2005 and beyond.\n\n**James E. O'Connor**\n\nMarch 31, 2005\n\n*Chairman and Chief Executive Officer*\n\nSincerely,\n\nmade to our people and service communities.\n\nOur decentralized structure is an advantage. It gives us flexibility and speed in reacting to local conditions. Our division leaders are well-positioned to respond immediately to the needs, changes and developments among their customers. 
We in the corporate office set the goals, establish the discipline, provide financial resources, management and operational support, but it is in our local divisions where customer relationships are established and the work is done. Our community-based focus forges strong local relationships and ensures that, at the customer level, the highest expectations\n\n**Board of Directors**\n\nJames E. O'Connor 1 *Chairman & Chief Executive Officer*\n\nJames E. O'Connor\n\n**Officers**\n\nDavid A. Barclay\n\nTod C. Holmes\n\nLee V. Twyford\n\nBrian A. Bales\n\nTim M. Benter\n\nJerry S. Clark\n\nPaul J. Connealy *Vice President, Tax* Matthew E. Davies\n\nArthur J. Dudzinski\n\nKenneth M. Baylor\n\n*Vice President & Controller*\n\nMichael J. Cordesman\n\nW. Lee Nutter 2, 3, 4 *Chairman, Compensation Committee Chairman, President & Chief Executive Officer Rayonier, Inc. (a forest products company)*\n\n*Chairman & Chief Executive Officer*\n\n*President & Chief Operating Officer* \n\n*Senior Vice President & General Counsel*\n\n*Vice President, Corporate Development*\n\n*Vice President, Employee & Labor Relations*\n\n*Vice President & Associate General Counsel*\n\n*Regional Vice President - Western Region*\n\n*Vice President, Environmental Engineering & Compliance*\n\n*Senior Vice President & Chief Financial Officer*\n\n*Senior Vice President & Chief Information Officer*\n\nWilliam C. Flower\n\nAllan C. Sorensen 2, 3, 4 *Presiding Director President & Chief Executive Officer Interim Health Care, Inc. (a provider of temporary labor to the healthcare industry)*\n\nHarris W. Hudson 1 *Vice Chairman of the Board*\n\n1 *Member, Executive Committee* • 2 *Member, Audit Committee* • 3 *Member, Compensation Committee* • 4 *Member, Nominating and Corporate Governance Committee*\n\nRamon A. Rodriguez 2, 3, 4 *Chairman, Audit Committee President & Chief Executive Officer Madsen, Sapp, Mena, Rodriguez & Co. (a public accounting firm)*\n\nMatthew D. Katz\n\nRonald R. Krall\n\nEdward A. 
Lang III\n\nThomas E. Miller\n\nCraig J. Nichols\n\nCharles F. Serianni\n\nRobert N. Shepard\n\nKevin C. Walbridge\n\nGerard W. Wickett\n\nGary L. Sova\n\n*Vice President, Communications*\n\n*Vice President & Associate General Counsel*\n\nMichael W. Wickham 2, 3, 4 *Retired Chairman, President & Chief Executive Officer, Roadway Corporation*\n\nJohn W. Croghan 2, 3, 4 *Chairman, Nominating and Corporate Governance Committee Chairman, Rail-Splitter Capital Management, LLC (an investment management firm)*\n\n*Regional Vice President - Southwest Region*\n\n*Vice President & Chief Accounting Officer*\n\n*Regional Vice President - Southern Region*\n\n*Regional Vice President - Central Region*\n\n*Vice President, Purchasing & Maintenance*\n\n*Regional Vice President - Eastern Region*\n\n*Vice President, Finance & Treasurer*\n\n*Vice President, Human Resources*\n\n*Vice President, Marketing & Sales*\n\nUltimately, all the things we do as a Company are aimed at increasing value for our shareholders. We know the importance of strong and predictable cash flow in meeting our shareholders' expectations. Over time, our cash flow has proven to be a strong indicator of the quality of our earnings. Last year's record free cash flow enabled us to reinvest in our business, acquire new companies, repurchase $266 million of our common stock and double the quarterly dividend to $0.12 per share. The plan this year is similar. We will continue to use our strong free cash flow to grow and strengthen the Company by building our customer base through internal growth and strategic acquisitions. Additionally, we plan to repurchase Republic stock worth up to $275 million and pay a regular quarterly cash dividend to\n\nWe are focused on improving our service and strengthening relationships with our customers. Exceptional service allows us to build loyalty and create lasting bonds with those we serve. We will continue to train and develop our people, too, so they may grow as we grow as a Company. 
And we will continue to focus on improving the safety of our operations, an important commitment we have\n\nThe last year was indeed an outstanding one for Republic. Our goal is to continue to deliver\n\nI am both privileged and grateful to have the opportunity to lead a team of such exceptional people. Everyday, I grow more impressed with the experience, knowledge, loyalty and hard work they\n\nOn behalf of all of us at Republic, I want to thank our shareholders for the trust they have placed in us. We are a Company that cares about you, and we pledge to continue working hard to serve you in\n\ncontribute. Republic truly has one of the best management and operations teams in America.\n\nour shareholders. We believe these steps will increase shareholder value.\n\nRevenue in 2004 grew 7.6 percent to $2.7 billion, a record. The increases came largely from new municipal contracts and improved pricing. At the same time, we benefited from our presence in highgrowth markets, especially those in the rapidly expanding Sunbelt states.\n\nWe met last year's guidance. Net income per diluted share rose 15 percent to $1.53. Our revenue enhancement and cost reduction efforts produced results. We generated a record level of free cash flow - $388 million to be exact. Republic continues to generate strong and predictable levels of cash flow. As in the past year, we will concentrate on free cash flow and use it for acquisitions, reinvestment, repurchases of our stock and regular quarterly cash dividends.\n\nAs I thought about these achievements, I realized they result from the environment that we work to create for both our customers and our people. We care about our customers and the communities we serve. About our people. About the environment. And, of course, we care about you -- our shareholders. Every year we adopt a theme that captures our Company and our values. 
Our theme for 2005 is \"Republic Services…A Company that cares\".\n\nOur 13,400 dedicated people worked hard last year to create real value. We improved the way we deliver our services, increasing our efficiency in routing our collection trucks. We improved the way we construct disposal cells at numerous landfills, lowering costs. We worked with our vendors to control prices. And, we communicated to our customers the value of the services we offer. This year will be no different. We will continue to concentrate on these fundamentals.\n\nRepublic's future is bright. We are mindful of our mission. We know our business exists to ease the burden of managing society's waste. It's not a glamorous business, but it is an essential one, and we take this responsibility very seriously.\n\nAt the end of the year, Republic had 140 collection companies, 58 landfills, 96 transfer stations and 35 recycling facilities in 22 states. These resources give us many opportunities to listen to our customers, anticipate their needs and quickly respond to them. Each customer faces challenges unique to his or her business and community. Our goal is to remain flexible and to tailor our services to each customer.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "# **Executive Officers**\n\nExecutive Vice President, Executive Vice President,\n\n**Brian K. Dennehy,** 49 Executive Vice President and Apparel Executive Vice President and Regional Manager, Chief Marketing Officer Southern California **Geevy S. K. Thomas,** 50\n\nExecutive Vice President and Nordstrom Rack Chief Financial Officer **Blake W. Nordstrom,** 54\n\n**Gemma Lionello,** 49 Executive Vice President and General Merchandise Manager, Executive Vice President and Cosmetics Division President, Nordstrom.com **David M. Witman,** 56\n\nChief Information Officer President, Stores\n\nFinance and Operations, President, Merchandising Nordstrom.com\n\n**Teri Bariquit,** 49 **Steven C. 
Mattics,** 46 **Robert B. Sari,** 58 Executive Vice President, Executive Vice President; Executive Vice President, Nordstrom Merchandising Group Chairman and Chief Executive Officer of General Counsel and Secretary Nordstrom fsb, **Kirk Beardsley,** 46 President of Nordstrom Credit, Inc. **Michael Sato,** 48\n\nOnline Merchandising **Scott A. Meden,** 52 Supply Chain Executive Vice President and **Terence Boyle,** 42 General Merchandise Manager, **Tricia D. Smith,** 43 Executive Vice President, Shoe Division Executive Vice President and\n\n**James A. Howell,** 49 **Margaret Myers,** 68 President, Nordstrom Rack Executive Vice President, Executive Vice President and Finance and Treasurer General Merchandise Manager, **Paige L. Thomas,** 43 Accessories and Women's Executive Vice President and **Michael G. Koppel,** 58 Specialized Divisions General Merchandise Manager,\n\nPresident **Mark J. Tritton,** 51\n\nExecutive Vice President and Executive Vice President and Men's Apparel\n\n**Lisa Luther,** 46 **Peter E. Nordstrom,** 53 Executive Vice President, Executive Vice President of Executive Vice President and Strategy and Development\n\n> **Brian Saltzman,** 47 Executive Vice President, User Experience and Optimization\n\nNordstromrack.com|HauteLook General Merchandise Manager, **Robert J. Middlemas,** 58 Designer, Women's and Kids'\n\nExecutive Vice President and\n\nExecutive Vice President and **Erik B. Nordstrom,** 51 President, Nordstrom Product Group\n\nExecutive Vice President and **Daniel F. Little,** 53 **James F. Nordstrom, Jr.,** 42 General Merchandise Manager,\n\n**Kenneth J. Worzel,** 50", - "page_start": 91, - "page_end": 91, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "#### **Information on Directors**\n\n#### **Michael Damer Hannell**\n\n*Chairman, BSc Eng (Hons), FIEAust*\n\n#### *Experience*\n\nMike has been a Director of Sundance since March 2006 and chairman of our board of directors since December 2008. Mr. 
Hannell has over 45 years of experience in the oil and gas industry, initially in the downstream sector and subsequently in the upstream sector. His extensive experience has been in a wide range of design and construction, engineering, operations, exploration and development, marketing and commercial, financial and corporate areas in the United States, United Kingdom, continental Europe and Australia at the senior executive level with Mobil Oil (now Exxon) and Santos Ltd. Mr. Hannell recently finished his term as the chairman of Rees Operations Pty Ltd (doing business as Milford Industries Pty Ltd), an Australian automotive components and transportation container manufacturer and supplier. He has also held a number of other board appointments including the chairman of Sydac Pty Ltd, a designer and producer of simulation training products for industry. Mr. Hannell has also served on a number of not-for-profit boards, with appointments as president of the Adelaide-based Chamber of Mines and Energy, president of Business SA (formerly the South Australian Chamber of Commerce and Industry), chairman of the Investigator Science and Technology Centre, chairman of the Adelaide Graduate School of Business, and a member of the South Australian Legal Practitioners Conduct Board. Mr. Hannell holds a Bachelor of Science degree in Engineering (with Honors) from the University of London and is a Fellow of the Institution of Engineers Australia.\n\n*Interest in Shares*: 1,059,000 ordinary shares in Sundance Energy Australia Limited\n\n*Special Responsibilities*: -Chairman of the Board of Directors -Chairman of the Remuneration and Nominations Committee -Member of the Audit and Risk Management Committee -Member of the Reserves Committee\n\n*Other Directorships*: Nil\n\n#### **Eric P. 
McCrady**\n\n*Director, BS in Business* Administration\n\n#### *Experience*\n\nEric has been our Chief Executive Officer since April 2011 and Managing Director of our board of directors since November 2011. He also served as our Chief Financial Officer from June 2010 until becoming Chief Executive Officer in 2011. Mr. McCrady has served in numerous positions in the energy, private investment and retail industries. From 2004 to 2010, Mr. McCrady was employed by The Broe Group, a private investment firm, in various financial and executive management positions across a variety of industry investment platforms, including energy, transportation and real estate. From 1997 to 2003, Mr. McCrady was employed by American Coin Merchandising, Inc. in various corporate finance roles. Mr. McCrady holds a degree in Business Administration from the University of Colorado, Boulder.\n\n*Interest in Shares, Restricted Share Units and Options:* 1,908,581 Ordinary Shares in Sundance Energy Australia Limited and 791,561 Restricted Share Units\n\n*Special Responsibilities*: Managing Director and Chief Executive Officer of the Company\n\n*Other Directorships*: Nil", - "page_start": 24, - "page_end": 24, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "The external auditor is requested to attend the Company's Annual General Meeting and be available to answer shareholder questions about the conduct of the audit and the preparation and content of the Audit Report.\n\nPricewaterhouseCoopers was appointed as external auditor of the Company for the 2013 financial year.\n\n## Risk Oversight and Management\n\nThe Board, through the Audit Committee, is responsible for ensuring that there are adequate policies in place in relation to risk management, compliance and internal control systems.\n\nKingsgate has a systematic and structured risk oversight and management program that involves a detailed analysis of material risks to the business and operates at various levels underpinned by 
specific systems and procedures.\n\nRisk monitoring, managing, mitigating and reporting is conducted regularly and includes the following:\n\n- 〉 regular internal management reporting;\n- 〉 reporting at Board and Committee meetings by relevant managers;\n- 〉 site visits by the Board and senior management;\n- 〉 internal and external audits; and\n- 〉 training, procedural manuals and meetings.\n\nThe Board has received assurance from the Managing Director and the Chief Financial Officer that the solvency declaration provided in accordance with section 295A of the *Corporations Act 2001* (Cth) is founded on a sound system of risk management and internal control and that the system is operating effectively in all material respects in relation to financial reporting risks.\n\nA summary of the Company's Risk Oversight and Management Policy is published in the 'Corporate Governance' section of the Company's website.\n\n## Remuneration Committee\n\nThe members of the Remuneration Committee as at the date of this Report are:\n\n- 〉 Mr Ross Smyth-Kirk (Chairman of Remuneration Committee);\n- 〉 Mr Peter McAleer;\n- 〉 Mr Craig Carracher; and\n- 〉 Mr Peter Alexander.\n\nThe Remuneration Committee's role is to oversee the Company's remuneration and compensation plans.\n\nTo ensure that the review of remuneration practices and strategies on which decision making is based is objective and well founded, the Remuneration Committee engages external remuneration consultants.\n\nThe Remuneration Committee supports and advises the Board in fulfilling its responsibilities to shareholders by:\n\n- 〉 ensuring shareholder and employee interests are aligned;\n- 〉 ensuring the Company is able to attract, develop and retain talented employees;\n- 〉 recommending to the Board, with the Managing Director, an appropriate executive remuneration policy;\n- 〉 determining the remuneration of Directors;\n- 〉 having regard to the Company's Diversity Policy, including issues relating to remuneration by 
gender;\n- 〉 reviewing and approving the remuneration of those reporting directly to the Managing Director and other senior executives, as appropriate; and\n- 〉 reviewing all equity based plans for approval by the Board.\n\nThe Remuneration Committee operates in accordance with the Company's Remuneration Policy. The policy is designed so that it motivates senior executives to pursue the long-term growth and success of the Company and demonstrates a clear relationship between senior executives' performance and remuneration.\n\nThe Remuneration Committee met one time during the 2013 financial year.\n\nThe Remuneration Committee operates in accordance with a charter published in the 'Corporate Governance' section of the Company's website.\n\n## Nomination Committee\n\nThe members of the Nomination Committee as at the date of this Report are:\n\n- 〉 Mr Ross Smyth-Kirk (Chairman of Nomination Committee);\n- 〉 Mr Peter McAleer; and\n- 〉 Mr Craig Carracher.\n\nThe role of the Nomination Committee supports and advises the Board in fulfilling its responsibility to ensure that it comprises individuals who are best able to discharge the responsibilities of the Directors, having regard to the law and the highest standards of governance, by:\n\n- 〉 assessing the skills required on the Board;\n- 〉 reviewing the structure, size and composition of the Board;\n- 〉 from time to time assessing the extent to which the required skills are represented on the Board and ensuring an appropriate succession planning is in place;\n- 〉 establishing processes for the review of the performance of individual Directors and the Board as a whole, its committees and key executives; and\n- 〉 establishing processes for the identification of suitable candidates for appointment to the Board.\n\nTo ensure that the Board has an appropriate mix of skills and experience, the Nomination Committee will consider men and women from diverse backgrounds for Board membership who have demonstrated high levels of 
integrity and performance in improving shareholder returns, and who can apply such skills and experience to the benefit of the Company.\n\nThe Nomination Committee met once during the 2013 financial year.\n\nThe Nomination Committee operates in accordance with a charter published in the 'Corporate Governance' section of the Company's website.\n\n## Ethical Standards and Code of Conduct\n\nThe Board and the Company's employees are expected to maintain the highest level of corporate ethics and personal behaviour.\n\nThe Company has established a Code of Conduct which provides an ethical and legal framework for all employees in the conduct of its business. The Code of Conduct defines how the Company relates to its employees, shareholders and the community in which the Company operates.\n\nThe core values of the Code of Conduct are:\n\n- 〉 honesty and integrity;\n- 〉 fairness and respect; and\n- 〉 trust and openness.\n\nThe Code of Conduct provides clear directions on conducting business internationally, interacting with governments, communities, business partners and general workplace behaviour having regard to the best practice corporate governance models. The Code of Conduct sets out a behavioural framework for all employees in the context of a wide range of ethical and legal issues.\n\nThe Code of Conduct is published in the 'Corporate Governance' section of the Company's website.", - "page_start": 37, - "page_end": 37, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "### Information on Directors\n\nRoss Smyth-Kirk B Com, CPA, F Fin\n\n#### Chairman – Non-Executive\n\nRoss Smyth-Kirk was a founding Director of the former leading investment management company, Clayton Robard Management Limited and has had extensive experience over a number of years in investment management including a close involvement with the minerals and mining sectors. He has been a Director of a number of companies over the past 33 years in Australia and the UK. 
Mr Smyth-Kirk was previously Chairman of the Australian Jockey Club Limited and retired in May 2013 as a Director of Argent Minerals Limited.\n\n#### Responsibilities:\n\nChairman of the Board, member of the Audit Committee and Chairman of the Remuneration Committee and Nomination Committee.\n\n#### Peter McAleer B Com (Hons), B L (Kings Inn – Dublin, Ireland)\n\n#### Non-Executive Director\n\nPeter McAleer was until the end of May 2013 the Senior Independent Director and Chairman of the Audit Committee of Kenmare Resources PLC (Ireland). He is now a member of the Advisory Panel to the Board of Kenmare. Previously, he was Chairman of Latin Gold Limited, Director and Chief Executive Officer of Equatorial Mining Limited and was a Director of Minera El Tesoro (Chile).\n\n#### Responsibilities:\n\nMember of the Audit Committee, Remuneration Committee and Nomination Committee.\n\n## Craig Carracher\n\nLLB (Sydney), BCL (Oxford)\n\n#### Non-Executive Director\n\nCraig Carracher graduated from Sydney University Law School with an LLB (First Class Honours) (1991) and the University Medal and also graduated on a Commonwealth Scholarship with a BCL Law Degree from Magdalen College, Oxford University (First Class Honours) (1993). He has considerable commercial experience in Asia and was managing partner of an international law firm based in Thailand for many years. Mr Carracher has held numerous directorships of listed and private groups throughout Asia. He was previously Group General Counsel with Consolidated Press Holdings Limited, Managing Director of Asian private equity firm Arctic Capital based in Hong Kong, Special Advisor to the Chairman of the Australian Securities and Investment Commission and Associate to the former Chief Justice of the Supreme Court of New South Wales. Mr Carracher is Managing Director of Telopea Capital Partners, an Asiafocussed private equity group based in Sydney. 
Mr Carracher is also a Non-Executive Director of ASX listed Sunland Group Limited.\n\n#### Responsibilities:\n\nChairman of the Audit Committee, member of the Nomination and Remuneration Committees.\n\n#### Peter Alexander\n\nAss. Appl. Geol\n\n#### Non-Executive Director\n\nPeter Alexander has had 40 years experience in the Australian and off-shore mining and exploration industry. He was Managing Director of Dominion Mining Limited for 10 years prior to his retirement in January 2008. Mr Alexander was appointed a Non-Executive Director of Dominion Mining Limited in February 2008 and resigned on 21 February 2011. Mr Alexander is Chairman of the ASX listed company Doray Minerals Limited, a Director of ASX listed companies Fortunis Resources Limited and Caravel Minerals Limited.\n\n#### Responsibilities:\n\nMember of the Remuneration Committee.\n\n#### Gavin Thomas BSc FAusIMM\n\n#### Managing Director\n\nGavin Thomas has had a successful career in developing mining companies from the exploration phase into mid-tier gold and / or copper production entities. He has over 42 years of international experience in exploring for, evaluating, developing, operating and reclaiming mines in North America, South America, Australia, the Southwest Pacific, Asia and Europe. Amongst other things he was credited with the discovery of the Lihir gold deposit in Papua New Guinea, one of the largest gold deposits in the world. In particular he has extensive experience in Thailand, south-west Pacific and South America. 
Mr Thomas was previously Chairman of the TSX listed company Mercator Minerals and Chairman of the formerly ASX listed company Laguna Resources NL.\n\n#### Responsibilities:\n\nManaging Director and Chief Executive Officer.\n\n#### Company Secretary\n\nRoss Coyle BA, FCPA, FCIS\n\nBefore joining Kingsgate Consolidated Limited Mr Coyle was Company Secretary of Dominion Mining Limited.", "page_start": 49, "page_end": 49, "source_file": "ASX_KCN_2013.pdf" } ] }, { "references": { "source_file": "ASX_STO_2004.pdf", "query": "How much did the Moomba incident cost Santos in 2004?", "target_page": 12, "target_passage": " the Moomba incident resulted in $17 million of one-off costs in 2004.", "chunk_present": { "presence": true, "index": 0 } }, "top_chunk": [ { "text": "# ANALYSING FINANCIAL PERFORMANCE\n\nSAN165 WWW Text 30/3/05 12:06 PM Page 10\n\n**'The sound operating results achieved in 2004 underline the changing face of Santos towards a higher value, higher margin business. We ended the year with a strong financial position and our financial flexibility intact.'** \n\n#### **PETER WASOW**\n\nChief Financial Officer\n\n#### **2004 WAS A YEAR OF GOOD OPERATING RESULTS**\n\nOverall the increase in 2004 profit of 16% reflected a year of sound operating performance. Sales revenue was a record $1,501 million, up 2.5% on 2003, reflecting higher prices across most products and was achieved despite lower production as a result of the Moomba incident and declining output from late life fields.\n\nSantos benefited from higher world oil prices and realised US$51.83 per boe in 2004, an increase of 19% over 2003. 
The benefit of higher world oil prices substantially offset the impact of lower production volumes.\n\nSantos was also able to negotiate higher domestic gas prices (up 4% on average) and deliver new revenue streams from project start-ups and acquisitions during the year.\n\n## **PRODUCTION HAMPERED BY MOOMBA INCIDENT**\n\n2004 production was lower due to the Moomba incident, which reduced production by 4.6 million boe. Field decline reduced production by a further 5.0 million boe.\n\nOffsetting these factors, Santos' growth projects are starting to come on line and have begun to reverse the decline experienced over the past three years. Two projects were commissioned in 2004: the Bayu-Undan liquids project and the Minerva gas project. In addition, acquisitions contributed 0.8 million boe to production.\n\nFor 2005, production is expected to improve by around 15%, or 4% excluding the impact of the Moomba incident. Santos now expects production to be around 54 million boe in 2005. This increase is largely driven by the commissioning of Mutineer-Exeter in March 2005 and the John Brookes gas field in the middle of the year.\n\n## **PRODUCTION COSTS UNDER CONTROL**\n\nProduction costs in 2004 were $309 million, up $45 million or 17% on 2003. 
Analysis shows that Santos was able to continue to effectively control its costs in the face of significant external pressures in the form of rising services and materials prices.\n\nExamining production costs in detail reveals:\n\n- the start-up of Bayu-Undan and acquisitions added $16 million to Santos' cost base\n- changes in our accounting added a further $16 million to Santos' production costs\n- higher insurance premiums ($8 million) and one-off stock write-offs ($5 million) were offset by $17 million in cost savings largely as a result of Santos' continuous improvement initiatives\n- the Moomba incident resulted in $17 million of one-off costs in 2004.\n\nPiecing this together, the key themes in our financial performance were:\n\n- cost savings in established production areas more than offset increases in the price of services and materials\n- Santos' cost base rose as production from new developments and acquisitions were added to the Company's expanding portfolio of producing assets.\n\n### **PRODUCTION AND SALES REVENUE**", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "**Santos employees rehabilitating a section of the River Torrens in Adelaide, as part of Santos' three-year commitment to the Our Patch project.**\n\nof opportunities to use fewer greenhouse-emitting or renewable sources of energy.\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 28\n\nTo achieve these commitments Santos is actively pursuing an emissions intensity reduction target (greenhouse emissions per unit of production) of 20% in the period from 2002 to 2008.\n\n#### **SUPPORTING COMMUNITIES**\n\nSantos has relationships with a number of communities where it operates. Some have been longterm and others are just beginning. 
Relationships with communities outside Australia, such as Indonesia and the United States, are also emerging as Santos' business grows in these locations.\n\nSantos made contributions during 2004 to a wide variety of organisations and events through the sponsorship program as part of the Company's commitment to supporting the communities to which it belongs.\n\nPartnerships continued in 2004 with the Australian School of Petroleum, the Adelaide Symphony Orchestra, the State Opera Company of South Australia, the Art Gallery of South Australia and the Lloyd McDermott Foundation.\n\nOne of the highlights of the 2004 program was the establishment of the Santos Community Fund. It brings together all of the contributions Santos makes to community-based organisations and recognises and supports the efforts of Santos employees who choose to contribute their own time and resources to improving their communities.\n\nThe 'Our Patch' program was a recipient of this fund in 2004. This is a joint initiative of the Patawalonga and Torrens Catchment Management Boards which encourages the local community to assist with the rehabilitation and management of Adelaide's water catchment.\n\nSantos has adopted a patch of the River Torrens and employees are assisting with the remediation and revegetation of this area in a volunteering program.\n\n#### **CORPORATE GOVERNANCE**\n\nFor the third year running, the integrity of Santos' corporate governance was recognised in 2004 with the maximum five-star rating in the Corporate Governance Research Report prepared by Horwath and the University of Newcastle.\n\nA more detailed overview of corporate governance at Santos follows on page 29 of this Annual Report.\n\nMore detailed information about sustainability at Santos is contained in the Sustainability Review and copies are available from the Company and via the Santos website www.santos.com.", - "page_start": 29, - "page_end": 29, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "is 
also located in relatively shallow water with infrastructure nearby, creating options for early production.\n\nSAN165 WWW Text 30/3/05 12:06 PM Page 5\n\nAt Santos, we are proud that an Australian company took on that challenge and succeeded, and I congratulate the exploration and drilling teams on a great effort. With the Jeruk discovery behind us, Indonesia is at the forefront of our international exploration efforts. With eight wells planned in the region for 2005, Santos is currently the most active explorer in Indonesia.\n\n## **A STRONG FINANCIAL PERFORMANCE**\n\nIt was pleasing that Santos was able to conclude 2004 on a higher note than it started.\n\nWe achieved record annual revenue thanks to higher oil and gas prices combined with the return of full production at Moomba to produce a 21.5% jump in second half sales: the best result for any six-month period in Santos' history.\n\nThe average realised price for crude oil was up nearly 19% to A$51.83 per barrel.\n\nThese results have left Santos well positioned to continue its strong investment program which saw capital expenditure peak at $930 million in 2004.\n\nIn 2005 we expect to invest around $850 million of new capital in projects and our strategy is to plan for firm developments based on affordability at relatively low oil prices. If higher prices continue and some projects mature quickly and can be given the green light, our overall capital expenditure may be higher.\n\nProduction is expected to rise in 2005 when, as usual, our financial performance will be subject to oil prices, exchange rates and interest rates. These factors have a significant effect on our bottom line. 
A US$1 per barrel change in the oil price equates to a A$16 million change in net profit after tax in 2005.\n\nA one US cent movement in the Australia–US dollar exchange rate would produce a change in profit after tax of A$8 million, and a 1% change in interest rates equates to a change in net profit after tax of A$9 million.\n\n2004 has also been an important period for shareholders, with a significant improvement in the Santos share price combined with an increase in the dividend.\n\n### **PRODUCTION TO REBOUND**\n\nWhile we expected lower production overall in 2004, our output was obviously curtailed further by the incident at the Moomba plant. The good news is that several projects emerged from the development pipeline during the year and made positive contributions to our expanding suite of oil and gas facilities.\n\nProduction is forecast to increase by 15% in 2005, or by 4% after excluding the effect of the Moomba downtime, to about 54 million boe. We expect this positive forward trend to be followed by further production growth of more than 10% in 2006.\n\nThe Bayu-Undan liquids project came on line in April 2004 and, at its increased design throughput of just over one billion cubic feet of gas per day, produced liquids at a rate of 100,000 barrels per day.\n\nBayu-Undan is currently stripping liquids and re-injecting the gas pending tie-in of the pipeline to Darwin in May 2005 for future LNG production. The onshore LNG facilities are more than two-thirds complete. With a gross production of 19 million barrels, 22% above expectations for the year, we were pleased with the performance of Bayu-Undan and look forward to a full year contribution from this exciting project in 2005.\n\nThe Minerva gas field off Victoria's western coast started production in December 2004 and is ramping up to full field production of around 150 TJ per day. 
Our share in this project is 10%, and is significant because it represents our first foray into marketing gas directly to customers or into the Victorian spot market through our sales vehicle, Santos Direct, aimed at delivering higher prices.\n\n### **RECORD EXPLORATION EFFORT AHEAD**\n\nExploration is a great way to increase shareholder value so I am pleased to be able to report that in 2004, Santos drilled 16 wildcat wells resulting in seven hydrocarbon discoveries.\n\nGrowing our oil and gas reserves for future production is the goal of our exploration efforts. On a rolling three-year average we have replaced the hydrocarbons that Santos has produced at a rate of 130% of Proven (1P) reserves, at an average replacement cost of around US$7 per boe.\n\nSantos has an exciting exploration program for 2005: one that I believe holds the highest resource potential of any program in the Company's 50-year history.\n\nWe expect to participate in drilling a record 157 wells during 2005, of which 25 are exploration wildcat wells. Consistent with the growing internationalisation of Santos, this includes eight wells in Indonesia and six wells in the Gulf of Suez, Egypt. This program offers an attractive combination of risk and reward and is a new focus to our overseas exploration effort.\n\nIn the US, two exploration wells are planned, one onshore, and one offshore in the shallow waters of the Gulf of Mexico.\n\nIn Australia, our increasing focus on the potential of offshore areas will see Santos drill three wells off Western Australia in 2005, one off southern Australia and two wells off northern Australia. We will also drill two wells onshore in Queensland and one onshore in Victoria.\n\nThe discovery of oil and gas at Hiu Aman in the Kutei Basin, offshore East Kalimantan, has provided a strong start to our 2005 exploration program and we look forward with anticipation to further work on that significant find. Santos has a 50% interest in the discovery. 
We believe this region of Indonesia is very promising and Santos expects to drill four wells in the Kutei Basin in 2005.\n\n## **BIGGEST DEVELOPMENT YEAR YET**\n\nI am pleased also to report that 2004 was a record year for development with six projects advancing through the pipeline.\n\nThe start-up of the Mutineer-Exeter oil field is a significant milestone in Santos' development history. This project off the", - "page_start": 6, - "page_end": 6, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## **MALEO NEGOTIATIONS ADVANCED**\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 21\n\nOutside Australia, Santos and its co-venturers have executed a Heads of Agreement for the sale of the entire gas reserves of the Maleo field offshore East Java, Indonesia. Santos continued negotiations with PT Perusahaan Gas Negara, Indonesia's stateowned gas distributor, on behalf of the joint venture to finalise the Gas Sales Agreement. The project is targeting first production in the first half of 2006 at rates of up to 100 mmcf/d for more than five years.\n\n## **FIRST RETAIL GAS SALES WITH SANTOS DIRECT**\n\nAs well as selling gas into the wholesale gas market, Santos secured a retail gas licence from the Victorian Government in 2004. 
This allows Santos to sell gas direct to industrial customers and into the Victorian spot market through a wholly-owned\n\nsubsidiary, Santos Direct Pty Ltd ('Santos Direct').\n\nSantos Direct will market Santos' 10% share of gas production from the Minerva field – around 15 TJ/d – in the offshore Otway Basin, which commenced production at the end of 2004.\n\nThe move to market and sell gas directly into the Victorian retail market is a first for Santos and leverages off Santos' position as one of Australia's largest gas producers, supplying wholesale gas to major industrial customers and specialist marketers in all mainland Australian states and territories.\n\n## **LIQUIDS MARKETING ALLIANCE WITH BP**\n\nAnother important marketing development during the year was the decision to outsource the marketing of crude oil and natural gas liquids to BP. The new marketing arrangements are in response to the significantly\n\nhigher volumes of crude oil that Santos will receive from the Mutineer-Exeter and Oyong projects, coming on stream in 2005, and the increasing globalisation of the liquids marketplace.\n\nThe validity of this approach has already been demonstrated by the sale of the first Mutineer-Exeter oil cargo at a premium to Tapis despite a discount for the uncertain delivery date.\n\nSantos continues to build an inventory of high quality options to provide a platform for production growth over the coming years. Santos is committed to a program of diversification while capitalising on the long-term Cooper Basin legacy asset. Most importantly, this involves leveraging the strengths of the core competencies built up over a number of years and Santos' well-positioned domestic gas franchise.\n\n**'During 2004 we brought together everyone at Santos responsible for commercialisation into a single team. 
One of the outcomes from this was the introduction of gas swaps, where we were able to move gas between Santos assets in different states.'**\n\n#### **RICK WILKINSON**\n\nVice President Gas Marketing and Commercialisation\n\n**The alignment of joint venture interests in the John Brookes and East Spar fields has created an important production hub at Varanus Island, Carnarvon Basin, offshore Western Australia.**", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "SAN165 WWW Text 30/3/05 12:06 PM Page 11\n\n## **DEPRECIATION, DEPLETION AND AMORTISATION**\n\nAll things being equal, DD&A could have been expected to be lower this year, as Santos produced lower volumes and had written off the Heytesbury plant in the onshore Otway Basin last year.\n\nHowever, two factors caused an increase in 2004 DD&A. Firstly, while reserve revisions were positive overall, negative revisions were predominantly in producing areas which increased depletion rates in 2004, while positive reserve revisions were in areas where Santos is not yet producing or where straight line depreciation is dominant; for example, Casino and John Brookes.\n\nSecondly, on the future development cost side, depletion is up partly because Santos is starting to factor in higher steel and service company costs into long-term economic models.\n\n## **CASH FLOW LOWER**\n\nWhile Santos had a strong profit year, this is not fully reflected in cash flows.\n\nThere were large movements in trade debtors between years, reflecting the timing of liftings and the payments for them.\n\nIn addition, Santos has not yet been paid for the insurance claim relating to the Moomba incident. A total of $117 million was recognised in sundry income, which represents an estimate of the amount receivable from insurers for lost revenue, additional costs and replacement plant and equipment. At year end the money was still owed and so is not shown as part of operating cash flow. 
The final quantification of the claim with insurers is progressing.\n\n#### **RECORD CAPITAL EXPENDITURE**\n\nCapital expenditure ended right on target at $930 million – a record year for Santos – approaching a level which is double DD&A, reflecting how rapidly the portfolio is changing.\n\nSantos will continue with a high development expenditure in 2005, but expects to spend more in line with cash generation. Exploration spend is estimated to be about $150 million, while development spend is expected to be reduced to $530 million and delineation to $90 million. Other capital spending is expected to be reduced to $80 million.\n\nThis results in a total planned capital expenditure for 2005 of approximately $850 million.\n\n#### **FINANCIAL FLEXIBILITY INTACT**\n\nSantos ended the year in a strong financial position with its financial flexibility intact, despite the record development spending.\n\nThe FUELS issue was successful and Santos' gearing increased only marginally, despite the large capital program in 2004.\n\nThis is important in Santos' business as the Company needs to be able to fund exploration success as it occurs, and our development projects are increasing in size.", - "page_start": 12, - "page_end": 12, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# NOTES TO THE FINANCIAL STATEMENTS\n\nfor the year ended 31 December 2004\n\nSAN165 WWW Fins 30/3/05 11:55 AM Page 70\n\n#### **22. Investments in Controlled Entities**\n\n| Name | Place of | Name | Place of |\n| --- | --- | --- | --- |\n| | incorporation | | incorporation |\n| Santos Ltd (Parent Entity) | SA | Santos Brantas Pty Ltd3 | VIC |\n| Controlled entities1 : | | Santos (Donggala) Pty Ltd3 | VIC |\n| Alliance Petroleum Australia Pty Ltd | VIC | Santos Egypt Pty Ltd3 | VIC |\n| Boston L.H.F. 
Pty Ltd | VIC | Santos Hides Ltd | PNG |\n| Bridgefield Pty Ltd | QLD | Santos International Operations Pty Ltd | QLD |\n| Bridge Oil Developments Pty Limited | NSW | Santos (Madura Offshore) Pty Ltd | WA |\n| Canso Resources Pty Ltd | NSW | Santos Niugini Exploration Limited | PNG |\n| Coveyork Pty Ltd | NSW | Santos (Nth Bali 1) Pty Ltd | SA |\n| Doce Pty Ltd | QLD | Santos (Papalang) Pty Ltd | SA |\n| Farmout Drillers Pty Ltd | NSW | Santos (Popodi) Pty Ltd | SA |\n| Kipper GS Pty Ltd | WA | Santos (JPDA 91-01) Pty Ltd | ACT |\n| Controlled entity of Kipper GS Pty Ltd | | Santos (JPDA 91-12) Pty Ltd | ACT |\n| Crusader (Victoria) Pty Ltd | VIC | Santos (NGA) Pty Ltd | VIC |\n| Moonie Pipeline Company Pty Ltd | QLD | Santos (N.T.) Pty Ltd | ACT |\n| Novus Australia Resources NL2 | VIC | Controlled entity of Santos (N.T.) Pty Ltd | |\n| Reef Oil Pty Ltd | NSW | Bonaparte Gas & Oil Pty Limited | NSW |\n| Santos Asia Pacific Pty Ltd | QLD | Santos Offshore Pty Ltd | VIC |\n| Controlled entities of Santos Asia Pacific Pty Ltd | | Santos Oil Exploration (Malaysia) Sdn Bhd (in liquidation) | MAL |\n| Santos (Sampang) Pty Ltd | SA | Santos Petroleum Pty Ltd | NSW |\n| Santos (Warim) Pty Ltd | SA | Santos QNT Pty Ltd | QLD |\n| Santos Australian Hydrocarbons Pty Ltd | QLD | Controlled entities of Santos QNT Pty Ltd | |\n| Santos (BOL) Pty Ltd | NSW | Santos QNT (No. 1) Pty Ltd | QLD |\n| Controlled entity of Santos (BOL) Pty Ltd | | Controlled entities of Santos QNT (No. 1) Pty Ltd | |\n| Bridge Oil Exploration Pty Limited | ACT | Santos Petroleum Management Pty Ltd | QLD |\n| Santos Darwin LNG Pty Ltd | ACT | Santos Petroleum Operations Pty Ltd | QLD |\n| Santos Direct Pty Ltd3 | SA | TMOC Exploration Proprietary Limited | QLD |\n| Santos Facilities Pty Ltd | SA | Santos QNT (No. 2) Pty Ltd | QLD |\n| Santos Finance Ltd | NSW | Controlled entities of Santos QNT (No. 
2) Pty Ltd | |\n| Santos Globe Pty Ltd (formerly Globex Far East Pty Ltd) | WA | Associated Petroleum Pty Ltd | QLD |\n| Santos International Holdings Pty Ltd | ACT | Moonie Oil Pty Ltd | QLD |\n| Controlled entities of Santos International Holdings Pty Ltd | | Petromin Pty Ltd | QLD |\n| Barracuda Limited | PNG | Santos (299) Pty Ltd | QLD |\n| Lavana Limited | PNG | Santos Exploration Pty Ltd | VIC |\n| Novus UK (Kakap 2) Limited2 | UK | Santos Gnuco Pty Ltd | QLD |\n| Peko Offshore Ltd | BER | Transoil Pty Ltd | QLD |\n| Sanro Insurance Pte Ltd | SING | Santos Resources Pty Ltd | QLD |\n| Santos Americas and Europe Corporation | USA | Santos Timor Sea Pipeline Pty Ltd | NSW |\n| Controlled entity of Santos Americas and Europe Corporation | | Sesap Pty Ltd2 | VIC |\n| Santos USA Corp | USA | Vamgas Pty Ltd | VIC |\n| Santos (Bawean) Pty Ltd | SA | | |\n\n1 Beneficial interests in all controlled entities is 100% except for Kipper GS Pty Ltd in which two shares of the total issued capital of 9,246,353 shares are owned by a third party.\n\n2 Company acquired during the year.\n\n3 Company incorporated during the year.", - "page_start": 71, - "page_end": 71, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# THE WORLD OF SANTOS\n\nSAN165 WWW Text 30/3/05 12:06 PM Page 8", - "page_start": 9, - "page_end": 9, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# ENHANCING THE PORTFOLIO\n\nIn 2004, Santos continued its normal business of actively managing its portfolio through the divestment of non-core assets and the acquisition of assets that fit well with existing Santos assets or can add to the ability of the Company to meet its strategic goals.\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 23\n\nAs a result of this activity, Santos realised an after-tax profit of $47.4 million on oil and gas asset sales and will continue to high-grade its portfolio on an ongoing basis.\n\nSantos entered into an agreement with PT Medco during the first half of 2004 to acquire 
some of Novus Petroleum's Indonesian and Cooper Basin assets conditional on the success of PT Medco's takeover offer for Novus, which was ultimately successful.\n\nSpecifically, Santos announced in September 2004 that it had executed formal agreements to acquire an additional 4.75% of the South Australian Cooper Basin, 18% of the Brantas PSC and 9% of the Kakap PSC from Medco for US$110 million. On 31 December 2004, Santos paid Medco US$98 million for the majority of the assets, with payment for the remaining 2.75% of Kakap PSC expected to be made in the first quarter of 2005.\n\nThis acquisition was an important piece in the strategic puzzle to tie up access to follow-up potential from the successful exploration at Jeruk and to provide a production base for the newly established Indonesian core area.\n\nAlso during the first half of 2004, Santos divested its remaining 18.4% shareholding in Magellan\n\nPetroleum Australia Ltd, raising approximately $10.6 million.\n\nEarly in the second half of 2004, Santos concluded the sale of its non-core onshore Otway Basin interests to Origin Energy for $25.75 million. This sale resulted in an after-tax profit of $18 million that was booked in 2004.\n\nIn addition, an exploration joint venture was formed with ConocoPhillips in the NT/P61 block offshore Darwin, Northern Territory, to drill the Caldita well and provide Santos with access rights to a potential expansion of the Wickham Point LNG facility. 
This deal further enhances Santos' infrastructure strategy to leverage its position within vital infrastructure to improve shareholder value while reducing the risk profile of the wildcat exploration program.\n\nDuring the third quarter, Santos expanded its offshore Victorian gas interests to 50% in both the Patricia-Baleen and the Sole gas fields through the acquisition from Trinity Gas Resources of an additional 30% interest in the Patricia-Baleen gas field and associated processing facilities in eastern Victoria and an additional 15% interest in the Sole gas field.\n\nSantos earned its 30% additional equity in the Patricia-Baleen gas field by meeting Trinity's remaining share of drilling costs on the Baleen 4 well which was drilled successfully as a sidetrack well of Baleen 3. Santos will earn its 15% additional equity in the Sole gas field by meeting certain development costs on behalf of Trinity, if and when the Sole joint venture partners proceed to develop this gas resource.\n\nThe acquisition of these Victorian gas interests strengthens Santos' domestic gas and infrastructure strategy that was further enhanced by the OMV purchase announced early in 2005. 
Importantly, Santos is now the operator of the strategic Orbost gas processing facility.\n\nLate in the year, Santos sold its 18.02% share in the Carpentaria Gas Pipeline between Ballera and Mount Isa in Queensland to Australian Pipeline Trust for $59 million, resulting in a $21 million after-tax profit that was booked in the 2004 financial year.", - "page_start": 24, - "page_end": 24, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "SAN165 WWW Text 30/3/05 12:06 PM Page 2\n\n# DELIVERING ON THE STRATEGY\n\nDear Shareholder,\n\nI am pleased to report that in 2004 Santos continued to deliver on its strategy to transform the Company into a truly international exploration and production business with world-class operations.\n\nWhile the year saw many positives in terms of development and exploration success, it did not get off to a good start with the incident on New Year's Day at the Moomba processing facility in central Australia.\n\nImportantly, Santos was able to work effectively with its key stakeholders, including customers, joint venturers and government departments, to minimise the commercial impacts.\n\nNatural gas supplies were quickly restored, in part by recovering processed gas from underground storage reservoirs. 
Liquids processing facilities were progressively reinstated allowing further increases to gas production and sales volumes, with the ramp-up to full liquids production achieved by August as planned.\n\nA large proportion of the costs and foregone revenues associated with the repair of the damaged plant and the reduced oil and gas production volumes are being recovered under insurance policies.\n\nDue to the long cycle times inherent in the oil and gas business, it had been recognised that 2004 would be a year in which production was marginally below the previous year, with subsequent increases in 2005 and beyond driven by new development projects.\n\nIn this light, it is pleasing to report that the Minerva gas and Bayu-Undan liquids projects commenced production during the year as planned, while first oil from Mutineer-Exeter and several other key growth projects are progressing to plan.\n\nIndonesia matured into a core area during 2004, through a strategy of prudent acquisition, portfolio management and exploration. In particular, the Jeruk discovery has the potential to add significant value, with further evaluation activities underway.\n\nEven with the large effort expended on the Moomba incident, Santos was able to deliver strong results for 2004, reflecting higher average prices across most products.\n\nGroup sales revenue increased by 2.5% to a record $1,501 million, earnings before interest and tax improved by 23% to $574 million and net profit after tax rose by 16% to $380 million.\n\nThis strong financial performance, combined with the confidence that Santos will continue to grow earnings in the future, enabled the Board to increase the final dividend on ordinary shares by 20% from 15 cents to 18 cents per share, fully franked. For the full year, dividends increased by 10% to 33 cents per share, compared with 30 cents per share in each of the four previous years. 
On a grossed up basis, this represents a yield of over 5%.\n\nIn response to increasing interest and enquiry from shareholders, the Dividend Reinvestment Plan has been reintroduced and applied to the final dividend paid during March 2005.\n\nSantos continued its proactive approach to capital management with the redemption and buyback of the outstanding Preference Shares and the issue of FUELS (Franked Unsecured Equity Listed Securities). This initiative was driven by the alignment of Australian accounting standards with international requirements, and closed oversubscribed, raising $600 million in new equity.\n\nThe total shareholder return for the year, including share price appreciation and dividends paid, was 28% – an excellent result.\n\nIn addition to our focus on shareholder value, Santos takes its corporate social responsibilities seriously and is committed to sustainability as a core value in all operations. The Company's first Sustainability Review was released during the year.\n\nSantos continues to be recognised for the high quality of its corporate governance, receiving a measure of five out of five for corporate governance for the third successive year in an independent report prepared by leading accounting and management firm, Horwath, and the University of Newcastle.\n\nThe safety of our employees and contractors is the highest priority for the Board and I'm pleased that Santos has delivered another year of safety improvement with an 11% reduction in the 2004 total recordable case frequency rate.\n\nMr Frank Conroy retired from the Board of Directors during December 2004. A member of the Board for five years, Mr Conroy brought extensive business and corporate experience to the Board and I thank him for his outstanding contribution.\n\nIn February 2005 we appointed two new Board members, Mr Kenneth Dean from Shell, and Mr Christopher Recny from the international management consultancy firm, L.E.K. 
These individuals further strengthen the composition of the Board, bringing strong international oil and gas expertise and outstanding management experience.\n\nFinally, I'd like to acknowledge the extraordinary effort made by everyone at Santos to keep the Company moving forward during this challenging year.\n\nI am confident that the significant achievements made during 2004 provide Santos with a solid platform from which to achieve future growth with increased value for our shareholders.\n\nStephen Gerlach **Chairman** 21 March 2005", - "page_start": 3, - "page_end": 3, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# Sustainability\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 26\n\n# MANAGING FOR SUSTAINABLE GROWTH\n\n**'The publication of our first Sustainability Review in 2004 was a major achievement for Santos. The next steps are to undertake projects to improve our performance – not just in Australia but worldwide – and to accurately collect, verify and report on a range of sustainability data.'**\n\n#### **MARTYN EAMES**\n\nVice President Corporate and People\n\nLate in 2004 Santos published *First Steps: Sustainability Review*, the Company's first standalone publication on this topic. It describes how Santos is implementing the principles of sustainability in the areas of corporate governance, the environment, social responsibility and economic performance.\n\nThis was a significant milestone for Santos as it represents a starting point for the collection of data and the ongoing measurement of performance in the area of sustainability.\n\nCommunicating with stakeholders is an important activity and the publication of the Sustainability Review is a further extension of Santos' commitment in this regard. 
Santos applies considerable resources to the communication effort and aims to present information in a clear and concise manner in order to generate a greater understanding of the business by its stakeholders.\n\nSantos has been recognised for its achievements in this area. Santos' 2003 Annual Report was featured as an example of best practice reporting in PricewaterhouseCoopers' *Trends in Corporate Reporting 2004* publication. Reports from companies worldwide are considered in compiling this publication and they must meet specified criteria. This is the third time a Santos annual report has been featured. Santos was also awarded a 2004 Silver Award for Excellence in Annual Reporting for the 2002 Annual Report by the Australasian Reporting Awards.\n\nReceiving independent recognition for these activities serves as a reference point for Santos' desire to continually improve communication performance.\n\nSantos has been listed as an inaugural member of the Australian SAM Sustainability Index (AuSSI). The AuSSI tracks the performance of around 70 Australian companies that lead their industry in terms of economic, environmental and\n\n#### **TOTAL RECORDABLE CASE FREQUENCY RATE**\n\nTRCFR per millions hours worked\n\nsocial criteria. The index is calculated daily by Dow Jones Indexes and published in *The Australian* newspaper.\n\nFollowing is an overview of progress and achievements in the area of sustainability for 2004.\n\n#### **SAFETY IMPROVING**\n\nThe health and safety of employees is of paramount concern to Santos. 
Santos delivered another year of improvement in 2004 and achieved its lowest total recordable case frequency rate of 6.4.\n\nFurther improvements were also made with the implementation of the Environment, Health and Safety Management System standards, with Santos operations undergoing full assessments against standards for the first time.\n\nThe results demonstrated considerable improvement over the baseline assessments conducted in 2003 with steady progress in the implementation of the procedures, processes and tools needed to achieve the requirements of the standards.\n\nProcess safety capability which deals with plant and equipment integrity assurance, design and construction, and maintenance, is being developed through the formation of a new set of standards to be incorporated\n\ninto the health and safety management system.\n\nThe safety focus in 2005 will be on finalising a comprehensive set of hazard standards which outline the required controls to ensure that hazards encountered across Santos' operations and activities are well managed.\n\n## **POSITIONING THE WORKFORCE FOR THE FUTURE**\n\nSantos commenced a major company-wide transformational change program in late 2003. The program was designed to significantly improve Santos' performance in four areas: key business processes, financial performance, organisation structure and company culture.\n\nReorganising and simplifying the Company's structure was one of the major outcomes and in May 2004 Santos began operating under a new functionally-based organisation structure.\n\nThe new structure is designed to support the explorationfocused growth strategy. 
It mirrors the 'conveyor belt' lifecycle of an exploration and production company where exploration success leads to commercialisation and development activity and finally revenue-generating production.\n\nIt also follows the principle that the majority of employees should", - "page_start": 27, - "page_end": 27, - "source_file": "ASX_STO_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_STO_2004.pdf", - "query": "What is the main focus of the Santos 2005 program ?", - "target_page": 19, - "target_passage": " Oil is the main focus of the 2005 program", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "**Santos employees rehabilitating a section of the River Torrens in Adelaide, as part of Santos' three-year commitment to the Our Patch project.**\n\nof opportunities to use fewer greenhouse-emitting or renewable sources of energy.\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 28\n\nTo achieve these commitments Santos is actively pursuing an emissions intensity reduction target (greenhouse emissions per unit of production) of 20% in the period from 2002 to 2008.\n\n#### **SUPPORTING COMMUNITIES**\n\nSantos has relationships with a number of communities where it operates. Some have been longterm and others are just beginning. Relationships with communities outside Australia, such as Indonesia and the United States, are also emerging as Santos' business grows in these locations.\n\nSantos made contributions during 2004 to a wide variety of organisations and events through the sponsorship program as part of the Company's commitment to supporting the communities to which it belongs.\n\nPartnerships continued in 2004 with the Australian School of Petroleum, the Adelaide Symphony Orchestra, the State Opera Company of South Australia, the Art Gallery of South Australia and the Lloyd McDermott Foundation.\n\nOne of the highlights of the 2004 program was the establishment of the Santos Community Fund. 
It brings together all of the contributions Santos makes to community-based organisations and recognises and supports the efforts of Santos employees who choose to contribute their own time and resources to improving their communities.\n\nThe 'Our Patch' program was a recipient of this fund in 2004. This is a joint initiative of the Patawalonga and Torrens Catchment Management Boards which encourages the local community to assist with the rehabilitation and management of Adelaide's water catchment.\n\nSantos has adopted a patch of the River Torrens and employees are assisting with the remediation and revegetation of this area in a volunteering program.\n\n#### **CORPORATE GOVERNANCE**\n\nFor the third year running, the integrity of Santos' corporate governance was recognised in 2004 with the maximum five-star rating in the Corporate Governance Research Report prepared by Horwath and the University of Newcastle.\n\nA more detailed overview of corporate governance at Santos follows on page 29 of this Annual Report.\n\nMore detailed information about sustainability at Santos is contained in the Sustainability Review and copies are available from the Company and via the Santos website www.santos.com.", - "page_start": 29, - "page_end": 29, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## **MALEO NEGOTIATIONS ADVANCED**\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 21\n\nOutside Australia, Santos and its co-venturers have executed a Heads of Agreement for the sale of the entire gas reserves of the Maleo field offshore East Java, Indonesia. Santos continued negotiations with PT Perusahaan Gas Negara, Indonesia's stateowned gas distributor, on behalf of the joint venture to finalise the Gas Sales Agreement. 
The project is targeting first production in the first half of 2006 at rates of up to 100 mmcf/d for more than five years.\n\n## **FIRST RETAIL GAS SALES WITH SANTOS DIRECT**\n\nAs well as selling gas into the wholesale gas market, Santos secured a retail gas licence from the Victorian Government in 2004. This allows Santos to sell gas direct to industrial customers and into the Victorian spot market through a wholly-owned\n\nsubsidiary, Santos Direct Pty Ltd ('Santos Direct').\n\nSantos Direct will market Santos' 10% share of gas production from the Minerva field – around 15 TJ/d – in the offshore Otway Basin, which commenced production at the end of 2004.\n\nThe move to market and sell gas directly into the Victorian retail market is a first for Santos and leverages off Santos' position as one of Australia's largest gas producers, supplying wholesale gas to major industrial customers and specialist marketers in all mainland Australian states and territories.\n\n## **LIQUIDS MARKETING ALLIANCE WITH BP**\n\nAnother important marketing development during the year was the decision to outsource the marketing of crude oil and natural gas liquids to BP. The new marketing arrangements are in response to the significantly\n\nhigher volumes of crude oil that Santos will receive from the Mutineer-Exeter and Oyong projects, coming on stream in 2005, and the increasing globalisation of the liquids marketplace.\n\nThe validity of this approach has already been demonstrated by the sale of the first Mutineer-Exeter oil cargo at a premium to Tapis despite a discount for the uncertain delivery date.\n\nSantos continues to build an inventory of high quality options to provide a platform for production growth over the coming years. Santos is committed to a program of diversification while capitalising on the long-term Cooper Basin legacy asset. 
Most importantly, this involves leveraging the strengths of the core competencies built up over a number of years and Santos' well-positioned domestic gas franchise.\n\n**'During 2004 we brought together everyone at Santos responsible for commercialisation into a single team. One of the outcomes from this was the introduction of gas swaps, where we were able to move gas between Santos assets in different states.'**\n\n#### **RICK WILKINSON**\n\nVice President Gas Marketing and Commercialisation\n\n**The alignment of joint venture interests in the John Brookes and East Spar fields has created an important production hub at Varanus Island, Carnarvon Basin, offshore Western Australia.**", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## **HIGH IMPACT DRILLING IN 2005**\n\nThe 2005 exploration program has the highest resource potential of any program undertaken at Santos.\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 17\n\nSantos is planning a large, high impact drilling campaign that is already well underway.\n\nSantos plans to drill 25 wells and will invest $150 million testing prospects within its expanding domestic and international exploration portfolio – up 19% from the $126 million spent on exploration in 2004.\n\nOil is the main focus of the 2005 program with most activity in the Kutei and East Java Basins offshore Indonesia, the Gulf of\n\nSuez in Egypt, the Bonaparte Basin in the Timor Sea and the Carnarvon Basin offshore Western Australia.\n\nThe 2005 program reflects the increasing materiality of Santos' exploration portfolio and continues the emphasis on more globally-focused exploration as an important part of the Company's growth strategy.\n\nSantos has already had drilling success early in 2005 with the Hiu Aman 1 well – the first to be drilled by Santos in the Donggala PSC. Hiu Aman 1 has indicated the presence of a prolific hydrocarbon system in this area. 
The discovery should add other lower risk prospects to Santos'\n\nexploration portfolio. A multi-well drilling program will be undertaken in Santos' Kutei Basin PSCs during 2005.\n\nAnother gas discovery has been made at Hurricane 1 in the Carnarvon Basin, offshore Western Australia. While both wells were discoveries, they require further evaluation to determine their commercial significance.", - "page_start": 18, - "page_end": 18, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "SAN165 WWW Text 30/3/05 12:07 PM Page 27\n\n**Santos is investing in the future of Australia's petroleum industry through the funding of the Australian School of Petroleum at the University of Adelaide.**\n\nbe working in business operations with a lean and efficient corporate and services group.\n\nWith the exception of a small number of project teams, all non-award based positions in the Company were declared vacant and a selection process commenced around a set of criteria designed to ensure that people with the right skills and the ability to successfully grow Santos were appointed. As is often the case with transformational change initiatives, not everyone was re-appointed and, as a result, the workforce was reduced by 9%.\n\n#### **CULTURE CHANGE**\n\nThe need to develop a culture that supports the newly designed business processes was another of the major outcomes of the change program. A Santos-wide culture change program led by employees is currently underway.\n\nThis long-term program is designed to ensure that the way employees work together enhances Santos' ability to be successful.\n\nOne of the first tasks undertaken was a voluntary employee survey to identify the gaps between the existing culture and the desired culture. 
The outcomes of the survey will assist in the development of programs and activities that will better align work practices with Santos' strategic goals.\n\n#### **TRAINING AND DEVELOPING PEOPLE**\n\nMaking sure training and development supports current and future business requirements, and provides opportunities for people to develop their skills to achieve optimum performance, are key aspects of Santos' human resources strategy.\n\nSantos has a number of long-term projects underway which will optimise the substantial investment the Company makes in training people. Importantly, these projects will deliver programs that are targeted to meet business and individual needs and to support culture change initiatives.\n\n#### **BANKSIA AWARDS**\n\nSantos was selected in 2004 as a finalist in the Banksia Environmental Awards for the work undertaken in the Companyled initiative to protect the world-renowned Coongie Lakes, resulting in the area being declared a new National Park by the South Australian Government.\n\nAs a finalist for this award Santos was recognised for its leadership role in bringing together a group of disparate parties to develop a Memorandum of Understanding recommending further protection for the Coongie Lakes.\n\n#### **WASTE MANAGEMENT**\n\nSantos trialled innovative waste management techniques during 2004 to reduce the volume of hydrocarbon waste generated from Cooper Basin operations. Preliminary results indicate that these waste volumes can be reduced to 3-5% of their original volume, which is a significant achievement.\n\nThis technology will be implemented where possible\n\n#### **OIL SPILL VOLUMES**\n\nacross Santos operations. 
The long-term environmental and financial benefits of using this technology are expected to be considerable.\n\n#### **REDUCED OIL SPILLS**\n\nThere was a substantial reduction in the volume of hydrocarbons released to the environment in 2004, with uncontained hydrocarbons spilt reducing from 1,943 cubic metres to 83 cubic metres and Santos continues to focus on reducing oil spills.\n\n#### **GREENHOUSE POLICY**\n\nSantos released its Greenhouse Policy in 2004 to drive performance improvements in this area through reducing emissions and producing oil and gas more efficiently.\n\nSantos' Greenhouse Policy is being rolled out across the organisation through crossfunctional greenhouse gas teams that have the right skill sets and responsibilities to progress this initiative. These teams will manage Greenhouse Policy and regulation, carbon management and trading opportunities, and energy efficiency. A key internal driver for emissions reduction will be the promotion of energy efficiency.\n\nSantos is committed to achieving effective emission reduction targets, to the pursuit of energy efficiency strategies and to the identification and implementation", - "page_start": 28, - "page_end": 28, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "SAN165 WWW Text 30/3/05 12:06 PM Page 11\n\n## **DEPRECIATION, DEPLETION AND AMORTISATION**\n\nAll things being equal, DD&A could have been expected to be lower this year, as Santos produced lower volumes and had written off the Heytesbury plant in the onshore Otway Basin last year.\n\nHowever, two factors caused an increase in 2004 DD&A. 
Firstly, while reserve revisions were positive overall, negative revisions were predominantly in producing areas which increased depletion rates in 2004, while positive reserve revisions were in areas where Santos is not yet producing or where straight line depreciation is dominant; for example, Casino and John Brookes.\n\nSecondly, on the future development cost side, depletion is up partly because Santos is starting to factor in higher steel and service company costs into long-term economic models.\n\n## **CASH FLOW LOWER**\n\nWhile Santos had a strong profit year, this is not fully reflected in cash flows.\n\nThere were large movements in trade debtors between years, reflecting the timing of liftings and the payments for them.\n\nIn addition, Santos has not yet been paid for the insurance claim relating to the Moomba incident. A total of $117 million was recognised in sundry income, which represents an estimate of the amount receivable from insurers for lost revenue, additional costs and replacement plant and equipment. At year end the money was still owed and so is not shown as part of operating cash flow. The final quantification of the claim with insurers is progressing.\n\n#### **RECORD CAPITAL EXPENDITURE**\n\nCapital expenditure ended right on target at $930 million – a record year for Santos – approaching a level which is double DD&A, reflecting how rapidly the portfolio is changing.\n\nSantos will continue with a high development expenditure in 2005, but expects to spend more in line with cash generation. Exploration spend is estimated to be about $150 million, while development spend is expected to be reduced to $530 million and delineation to $90 million. 
Other capital spending is expected to be reduced to $80 million.\n\nThis results in a total planned capital expenditure for 2005 of approximately $850 million.\n\n#### **FINANCIAL FLEXIBILITY INTACT**\n\nSantos ended the year in a strong financial position with its financial flexibility intact, despite the record development spending.\n\nThe FUELS issue was successful and Santos' gearing increased only marginally, despite the large capital program in 2004.\n\nThis is important in Santos' business as the Company needs to be able to fund exploration success as it occurs, and our development projects are increasing in size.", - "page_start": 12, - "page_end": 12, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# Sustainability\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 26\n\n# MANAGING FOR SUSTAINABLE GROWTH\n\n**'The publication of our first Sustainability Review in 2004 was a major achievement for Santos. The next steps are to undertake projects to improve our performance – not just in Australia but worldwide – and to accurately collect, verify and report on a range of sustainability data.'**\n\n#### **MARTYN EAMES**\n\nVice President Corporate and People\n\nLate in 2004 Santos published *First Steps: Sustainability Review*, the Company's first standalone publication on this topic. It describes how Santos is implementing the principles of sustainability in the areas of corporate governance, the environment, social responsibility and economic performance.\n\nThis was a significant milestone for Santos as it represents a starting point for the collection of data and the ongoing measurement of performance in the area of sustainability.\n\nCommunicating with stakeholders is an important activity and the publication of the Sustainability Review is a further extension of Santos' commitment in this regard. 
Santos applies considerable resources to the communication effort and aims to present information in a clear and concise manner in order to generate a greater understanding of the business by its stakeholders.\n\nSantos has been recognised for its achievements in this area. Santos' 2003 Annual Report was featured as an example of best practice reporting in PricewaterhouseCoopers' *Trends in Corporate Reporting 2004* publication. Reports from companies worldwide are considered in compiling this publication and they must meet specified criteria. This is the third time a Santos annual report has been featured. Santos was also awarded a 2004 Silver Award for Excellence in Annual Reporting for the 2002 Annual Report by the Australasian Reporting Awards.\n\nReceiving independent recognition for these activities serves as a reference point for Santos' desire to continually improve communication performance.\n\nSantos has been listed as an inaugural member of the Australian SAM Sustainability Index (AuSSI). The AuSSI tracks the performance of around 70 Australian companies that lead their industry in terms of economic, environmental and\n\n#### **TOTAL RECORDABLE CASE FREQUENCY RATE**\n\nTRCFR per millions hours worked\n\nsocial criteria. The index is calculated daily by Dow Jones Indexes and published in *The Australian* newspaper.\n\nFollowing is an overview of progress and achievements in the area of sustainability for 2004.\n\n#### **SAFETY IMPROVING**\n\nThe health and safety of employees is of paramount concern to Santos. 
Santos delivered another year of improvement in 2004 and achieved its lowest total recordable case frequency rate of 6.4.\n\nFurther improvements were also made with the implementation of the Environment, Health and Safety Management System standards, with Santos operations undergoing full assessments against standards for the first time.\n\nThe results demonstrated considerable improvement over the baseline assessments conducted in 2003 with steady progress in the implementation of the procedures, processes and tools needed to achieve the requirements of the standards.\n\nProcess safety capability which deals with plant and equipment integrity assurance, design and construction, and maintenance, is being developed through the formation of a new set of standards to be incorporated\n\ninto the health and safety management system.\n\nThe safety focus in 2005 will be on finalising a comprehensive set of hazard standards which outline the required controls to ensure that hazards encountered across Santos' operations and activities are well managed.\n\n## **POSITIONING THE WORKFORCE FOR THE FUTURE**\n\nSantos commenced a major company-wide transformational change program in late 2003. The program was designed to significantly improve Santos' performance in four areas: key business processes, financial performance, organisation structure and company culture.\n\nReorganising and simplifying the Company's structure was one of the major outcomes and in May 2004 Santos began operating under a new functionally-based organisation structure.\n\nThe new structure is designed to support the explorationfocused growth strategy. 
It mirrors the 'conveyor belt' lifecycle of an exploration and production company where exploration success leads to commercialisation and development activity and finally revenue-generating production.\n\nIt also follows the principle that the majority of employees should", - "page_start": 27, - "page_end": 27, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# ENHANCING THE PORTFOLIO\n\nIn 2004, Santos continued its normal business of actively managing its portfolio through the divestment of non-core assets and the acquisition of assets that fit well with existing Santos assets or can add to the ability of the Company to meet its strategic goals.\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 23\n\nAs a result of this activity, Santos realised an after-tax profit of $47.4 million on oil and gas asset sales and will continue to high-grade its portfolio on an ongoing basis.\n\nSantos entered into an agreement with PT Medco during the first half of 2004 to acquire some of Novus Petroleum's Indonesian and Cooper Basin assets conditional on the success of PT Medco's takeover offer for Novus, which was ultimately successful.\n\nSpecifically, Santos announced in September 2004 that it had executed formal agreements to acquire an additional 4.75% of the South Australian Cooper Basin, 18% of the Brantas PSC and 9% of the Kakap PSC from Medco for US$110 million. 
On 31 December 2004, Santos paid Medco US$98 million for the majority of the assets, with payment for the remaining 2.75% of Kakap PSC expected to be made in the first quarter of 2005.\n\nThis acquisition was an important piece in the strategic puzzle to tie up access to follow-up potential from the successful exploration at Jeruk and to provide a production base for the newly established Indonesian core area.\n\nAlso during the first half of 2004, Santos divested its remaining 18.4% shareholding in Magellan\n\nPetroleum Australia Ltd, raising approximately $10.6 million.\n\nEarly in the second half of 2004, Santos concluded the sale of its non-core onshore Otway Basin interests to Origin Energy for $25.75 million. This sale resulted in an after-tax profit of $18 million that was booked in 2004.\n\nIn addition, an exploration joint venture was formed with ConocoPhillips in the NT/P61 block offshore Darwin, Northern Territory, to drill the Caldita well and provide Santos with access rights to a potential expansion of the Wickham Point LNG facility. This deal further enhances Santos' infrastructure strategy to leverage its position within vital infrastructure to improve shareholder value while reducing the risk profile of the wildcat exploration program.\n\nDuring the third quarter, Santos expanded its offshore Victorian gas interests to 50% in both the Patricia-Baleen and the Sole gas fields through the acquisition from Trinity Gas Resources of an additional 30% interest in the Patricia-Baleen gas field and associated processing facilities in eastern Victoria and an additional 15% interest in the Sole gas field.\n\nSantos earned its 30% additional equity in the Patricia-Baleen gas field by meeting Trinity's remaining share of drilling costs on the Baleen 4 well which was drilled successfully as a sidetrack well of Baleen 3. 
Santos will earn its 15% additional equity in the Sole gas field by meeting certain development costs on behalf of Trinity, if and when the Sole joint venture partners proceed to develop this gas resource.\n\nThe acquisition of these Victorian gas interests strengthens Santos' domestic gas and infrastructure strategy that was further enhanced by the OMV purchase announced early in 2005. Importantly, Santos is now the operator of the strategic Orbost gas processing facility.\n\nLate in the year, Santos sold its 18.02% share in the Carpentaria Gas Pipeline between Ballera and Mount Isa in Queensland to Australian Pipeline Trust for $59 million, resulting in a $21 million after-tax profit that was booked in the 2004 financial year.", - "page_start": 24, - "page_end": 24, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# Managing Options\n\nSAN165 WWW Text 30/3/05 12:07 PM Page 22\n\n# UNLOCKING THE VALUE OF STRATEGIC ASSETS\n\n**'Our objective is to derive value from undeveloped assets which have been outside of Santos' base business.'**\n\n**BRUCE WOOD** Vice President Strategic Projects Santos' Strategic Projects team focuses on assets that have proven difficult to commercialise or that need to be considered in a regional context rather than on an individual basis.\n\nThe other key activity for this team has been to lead Santos' continuous improvement focus.\n\n#### **UNITED STATES GAS**\n\nThe US gas business was a major focus in 2004 for a number of reasons, not the least of which are the higher gas prices in the US compared with the domestic Australian market, and the ability to rapidly commercialise new discoveries.\n\nAn ongoing development and delineation program was carried out during the year, yielding better than planned production. The exploration initiative also continued to seek higher risk but more material prospects, aimed at enhancing the move into the shallow water area of the Gulf of Mexico. 
Exploration results in this area during 2005 will shape Santos' future strategy in the US.\n\n#### **TIGHT GAS**\n\nHydrocarbons contained in traps with poor permeability are known as 'tight gas'. Large tight gas resources are known to exist in the Cooper Basin. Under current circumstances, this gas cannot be economically developed but, with the combination of improved production techniques and better commercial terms, could prove attractive.\n\nSantos assessed the resources and potential technologies that could be applied to unlock these resources during 2004 and is now working up a range of possible evaluation projects to be undertaken in 2005.\n\n#### **NORTHERN AUSTRALIA GAS**\n\nSantos has a significant existing gas resource base and some promising exploration acreage in the waters offshore Darwin, where it intends to drill a gas exploration well later this year.\n\nThe Company currently operates the Mereenie gas field in the Amadeus Basin in central Australia, which supplies gas to Darwin. Santos' first offshore gas production in northern Australia begins in 2006, sending Bayu-Undan gas to Darwin for conversion to LNG. Santos plans to build upon its growing position in the region to target further development which could ensure long-term gas supplies for the current market, or an expanded Northern Territory domestic market, or for export.\n\n#### **PAPUA NEW GUINEA GAS**\n\nSantos is in active discussions with the PNG Gas Project participants to potentially re-enter the PNG Gas Project. 
Santos has a significant interest in a large part of the liquids-rich Hides gas field which is integral to the development of the Project.\n\n## **2004 CONTINGENT RESOURCES** (TOTAL 1,443 mmboe)\n\n- Northern Australia 709 mmboe\n- Western Australia 71 mmboe\n- Central Australia 240 mmboe\n- Southern Australia 32 mmboe\n- Papua New Guinea 391 mmboe", - "page_start": 23, - "page_end": 23, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# ANALYSING FINANCIAL PERFORMANCE\n\nSAN165 WWW Text 30/3/05 12:06 PM Page 10\n\n**'The sound operating results achieved in 2004 underline the changing face of Santos towards a higher value, higher margin business. We ended the year with a strong financial position and our financial flexibility intact.'** \n\n#### **PETER WASOW**\n\nChief Financial Officer\n\n#### **2004 WAS A YEAR OF GOOD OPERATING RESULTS**\n\nOverall the increase in 2004 profit of 16% reflected a year of sound operating performance. Sales revenue was a record $1,501 million, up 2.5% on 2003, reflecting higher prices across most products and was achieved despite lower production as a result of the Moomba incident and declining output from late life fields.\n\nSantos benefited from higher world oil prices and realised US$51.83 per boe in 2004, an increase of 19% over 2003. The benefit of higher world oil prices substantially offset the impact of lower production volumes.\n\nSantos was also able to negotiate higher domestic gas prices (up 4% on average) and deliver new revenue streams from project start-ups and acquisitions during the year.\n\n## **PRODUCTION HAMPERED BY MOOMBA INCIDENT**\n\n2004 production was lower due to the Moomba incident, which reduced production by 4.6 million boe. Field decline reduced production by a further 5.0 million boe.\n\nOffsetting these factors, Santos' growth projects are starting to come on line and have begun to reverse the decline experienced over the past three years. 
Two projects were commissioned in 2004: the Bayu-Undan liquids project and the Minerva gas project. In addition, acquisitions contributed 0.8 million boe to production.\n\nFor 2005, production is expected to improve by around 15%, or 4% excluding the impact of the Moomba incident. Santos now expects production to be around 54 million boe in 2005. This increase is largely driven by the commissioning of Mutineer-Exeter in March 2005 and the John Brookes gas field in the middle of the year.\n\n## **PRODUCTION COSTS UNDER CONTROL**\n\nProduction costs in 2004 were $309 million, up $45 million or 17% on 2003. Analysis shows that Santos was able to continue to effectively control its costs in the face of significant external pressures in the form of rising services and materials prices.\n\nExamining production costs in detail reveals:\n\n- the start-up of Bayu-Undan and acquisitions added $16 million to Santos' cost base\n- changes in our accounting added a further $16 million to Santos' production costs\n- higher insurance premiums ($8 million) and one-off stock write-offs ($5 million) were offset by $17 million in cost savings largely as a result of Santos' continuous improvement initiatives\n- the Moomba incident resulted in $17 million of one-off costs in 2004.\n\nPiecing this together, the key themes in our financial performance were:\n\n- cost savings in established production areas more than offset increases in the price of services and materials\n- Santos' cost base rose as production from new developments and acquisitions were added to the Company's expanding portfolio of producing assets.\n\n### **PRODUCTION AND SALES REVENUE**", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "SAN165 WWW Cover 30/3/05 12:21 PM Page 2\n\n#### **Cover photograph:**\n\nClose-up of spinning Kelly Bushing (KB) on the drill floor of an exploration rig.\n\n#### **Page 1 photographs (top to bottom):**\n\nInspection of coiled tubing drilling 
activities, Cooper Basin, central Australia; installation of mid water arches, Mutineer-Exeter oil fields, Carnarvon Basin, offshore Western Australia; site inspection and liaison with contractors, offshore Western Australia; inspection of MODEC Venture 11 Floating Production Storage and Offloading facility, Jurong Shipyard, Singapore.\n\n#### INSIDE\n\n#### CHAIRMAN'S REVIEW\n\n- **2** Stephen Gerlach comments on Santos' performance in 2004.\n#### 2004 ACHIEVEMENTS 2005 AND BEYOND\n\n- **3** Key achievements in 2004 and three-year performance, plus what to look for in the near-term future.\n#### MANAGING DIRECTOR'S REVIEW\n\n- **4** John Ellice-Flint reviews Santos' 50th year, where the values embodied in the great explorers of yesteryear are shaping Santos today.\n#### THE WORLD OF SANTOS\n\n- **8** Locations of Santos' global exploration, development and production activities.\n#### ANALYSING FINANCIAL PERFORMANCE\n\n- **10** Putting the numbers in perspective and explaining the 2004 financial results.\n#### LEVERAGING BASE BUSINESS\n\n- **12** Production results for 2004 plus a review of activities that are creating value in Santos' base business.\n#### CREATING OPPORTUNITIES\n\n- **15** Exploration strategy, results and acreage acquisitions, 2005 program and new venture opportunities.\n#### CAPTURING AND DELIVERING GROWTH\n\n- **18** Progress on Santos' development projects and gas commercialisation highlights.\n#### MANAGING OPTIONS\n\n- **22** Strategic projects, portfolio management activities and reserves movements in 2004.\n## SUSTAINABILITY\n\n- **26** Sustainability activities undertaken in 2004, including safety and environmental performance, employees and communities.\n#### CORPORATE GOVERNANCE\n\n- **29** Details of the main corporate governance practices Santos has in place.\n#### DIRECTORS' AND SENIOR EXECUTIVES' REMUNERATION\n\n- **37** Remuneration details for Directors and key executives.\n#### BOARD OF DIRECTORS\n\n- **41** Directors' 
biographical details.\n#### GROUP INTERESTS\n\n- **42** Santos licence areas and percentage interests.\n#### 10 YEAR SUMMARY\n\n- **44** Statistical summary of financial performance.\n#### DIRECTORS' STATUTORY REPORT\n\nGLOSSARY\n\nThe standard unit of measurement for all\n\nproduction and sales. One barrel = 159 litres\n\n**hydrocarbons**\n\n**LNG**\n\n**LPG**\n\n**mbbls**\n\nresources.\n\n**mmbbls**\n\n**mmboe**\n\n**mmscf/d**\n\n**PJ**\n\nMillion barrels.\n\n**petroleum liquids**\n\npropane and butane.\n\nkilojoule = 0.9478 BTU.\n\n**Proven reserves (1P)**\n\nhydrogen and carbon.\n\nLiquefied natural gas.\n\nThousand barrels.\n\n**mean resource potential**\n\nSolid, liquid or gas compounds of the elements\n\nadditions net of acquisitions and divestments.\n\nfixed asset expenditure net of stay-in-business\n\nReserves added during the reporting period\n\nResource potential refers to those quantities\n\nof petroleum yet to be discovered. It may\n\nrefer to single opportunities or a group\n\nReturn on average capital employed.\n\nData used to gain an understanding of rock\n\nformations beneath the earth's surface using\n\nTerajoules. Joules are the metric measurement\n\nunit for energy. A terajoule is equal to 1 joule\n\n**total recordable case frequency rate (TRCFR)**\n\nA statistical measure of safety performance.\n\ncalculated as the total number of recordable\n\ncases (medical treatment injuries and lost time\n\ninjuries) per million hours worked. A lost time\n\ninjury is a work-related injury or illness that\n\ndisability or time lost of one complete shift\n\nillness. 
A medical treatment injury is a work-related injury or illness, other than a lost time injury, where the injury is serious enough to require more than minor first aid treatment. Santos classifies injuries that result in modified duties as medical treatment injuries.\n\n... or day or more any time after the injury or results, or would result, in a permanent ...\n\nTotal recordable case frequency rate is ...\n\nDevelopment includes all development and ... and corporate capital expenditure.\n\n**barrel/bbl**\n\n... or 35 imperial gallons.\n\n**bcf**\n\nBillion cubic feet, a billion defined as 10^9; on average 1 bcf of sales gas = 1.055 petajoules.\n\n**boe**\n\nBarrels of oil equivalent. The factor used by Santos to convert volumes of different hydrocarbon production to barrels of oil equivalent.\n\n**bopd**\n\nBarrels of oil per day.\n\n**the Company** or **Santos**\n\nSantos Ltd and its subsidiaries.\n\n**contingent resources**\n\nThose quantities of hydrocarbons which are estimated, on a given date, to be potentially recoverable from known accumulations, but which are not currently considered to be commercially recoverable. Contingent resources may be of a significant size, but still have constraints to development. These constraints, preventing the booking of reserves, may relate to lack of gas marketing arrangements or to technical, environmental or political barriers.\n\n**Conversion**\n\ncrude oil 1 barrel = 1 boe; condensate/naphtha 1 barrel = 0.935 boe; sales gas 1 petajoule = 171.937 boe x 10^3; LPG 1 tonne = 8.458 boe. For a comprehensive online conversion calculator tool, visit the Santos website, www.santos.com.\n\n**DD&A**\n\nDepreciation, depletion and amortisation of building, plant and equipment, exploration and development expenditure.\n\n**delineation well**\n\n... appraisal or development wells. An appraisal well is a well drilled for the purpose of identifying extensions to known fields or discoveries.\n\n**development well**\n\nWells designed to produce hydrocarbons from a gas or oil field within a proven productive reservoir defined by exploration or appraisal drilling.\n\n**EBIT**\n\nEarnings before interest and tax.\n\n**EBITDA**\n\nEarnings before interest and tax, depreciation, depletion and amortisation of building, plant and equipment, exploration and development expenditure and amortisation of goodwill.\n\nComprises two categories: near-field exploration wells and appraisal wells. Near-field exploration wells are wells located near existing fields/discoveries and have a higher expectation of success than wildcat exploration wells.\n\n**finding cost per barrel of oil equivalent**\n\nExploration and delineation expenditure per annum divided by reserve additions net of acquisitions and divestments.\n\nLiquefied petroleum gas, the name given to propane and butane in their liquid state.\n\nMillion barrels of oil equivalent.\n\nMillion standard cubic feet per day.\n\nCrude oil, condensate, or its derivative naphtha, and the liquefied petroleum gases ...\n\nPetajoules. Joules are the metric measurement unit for energy. A petajoule is equal to 1 joule x 10^15. The equivalent imperial measure to joules is British Thermal Units (BTU). One ...\n\nProven reserves (1P) are those reserves that, to a high degree of certainty (90% confidence), are recoverable. There is relatively little risk associated with these reserves. Proven developed reserves are reserves that can be recovered from existing wells with existing infrastructure and operating methods. Proven undeveloped reserves require development.\n\n**Proven plus Probable reserves (2P)**\n\nProven plus Probable reserves (2P) are those reserves that analysis of geological and engineering data suggests are more likely than not to be recoverable. There is at least a 50% probability that reserves recovered will exceed Proven plus Probable reserves.\n\n**Proven, Probable plus Possible reserves (3P)**\n\nProven, Probable plus Possible reserves (3P) are those reserves that, to a low degree of certainty (10% confidence), are recoverable. There is relatively high risk associated with these reserves.\n\n**PSC**\n\nProduction sharing contract.\n\n**reserve replacement cost per barrel of oil equivalent**\n\nExploration, delineation and development expenditure per annum divided by reserve ...\n\n**reserve replacement ratio**\n\n... divided by the production over the same period, reported as a percentage.\n\n**resource potential**\n\nThe average of the range of recoverable ... of opportunities.\n\n**ROAE**\n\nReturn on average equity.\n\n**ROACE**\n\n...\n\n**seismic**\n\n... reflected sound waves.\n\n**tcf**\n\nTrillion cubic feet.\n\n**TJ**\n\n... x 10^12.\n\n**wildcat exploration**\n\nExploration wells testing new play concepts or structures distanced from current fields. These wells test independent structures or traps and have a higher risk of failure than ...\n\n- **47** Directors' shareholdings, meetings, activities and emoluments.\n#### FINANCIAL REPORT\n\n- **50** Statements of financial performance, financial position and cash flows and notes to the financial statements.\n#### STOCK EXCHANGE AND SHAREHOLDER INFORMATION\n\n- **90** Listing of top 20 shareholders, analysis of shares and voting rights.\n#### INFORMATION FOR SHAREHOLDERS\n\n- **92** Annual General Meeting, final dividend, shareholder enquiries and information resources for shareholders.\n#### GLOSSARY\n\n- **93** Most frequently used terms explained.\nAnnual Report 2004\n\n93\n\n#### BACK COVER\n\nCorporate directory", - "page_start": 1, - "page_end": 1, - "source_file": "ASX_STO_2004.pdf" - } - ] - }, - { - "references": { - "source_file": 
"pubmed5.pdf", - "query": "What is the primary aim of the OSPRO cohort study ?", - "target_page": 2, - "target_passage": " The primary aim of the OSPRO cohort study was to develop and validate review of systems (i.e. evidence of systemic involvement) and yellow flag (i.e. pain-related psychological distress) screening tools for use in outpatient orthopedic physical therapy settings", - "chunk_present": { - "presence": true, - "index": 6 - } - }, - "top_chunk": [ - { - "text": "shown to identify approximately 95% of positive red-flag responders. For statistical analyses, the \"yes\" responses were added for each version and included in each model as a continuous independent variable.\n\n#### OSPRO Yellow Flag tool (OSPRO-YF)\n\nThe OSPRO-YF is a yellow flag assessment tool that includes items from pain vulnerability domains (negative affect and fear-avoidance) and pain resilience domains (positive affect and self-efficacy) to aid with identification of pain-related psychological distress in outpatient orthopedic physical therapy settings [37]. The OSPRO-YF has good concurrent validity with pain intensity and region-specific disability [37] and is capable of predicting pain intensity, disability, quality of life and persistent pain 12 months following physical therapy in patients with musculoskeletal pain [20, 21]. The full-length OSPRO-YF has 17-items, however a shortened 10-item version is also available with an acceptable trade-off in accuracy. Like the OSPRO-ROS, the OSPRO-YF is designed for implementation into electronic medical record (EMR) systems to quickly and accurately identify risk for a variety of clinical outcomes [19]. For statistical analyses, a summary score was derived for each version by adding the item responses after reverse-scoring items 2, 13, 14, 15 and 17 so that higher scores indicate higher pain-related psychological distress. 
The summary score was then included in each model as a continuous independent variable.\n\n#### Intervention\n\nAll physical therapy treatment was provided at the discretion of the treating clinician. The duration of the episode, the number of physical therapy visits, and individual treatment parameters (type, intensity, duration, frequency) were not collected for pragmatic reasons. In particular, clinical and utilization data are not commonly collected in a standardized format and would need to be extracted from disparate medical record databases across different health care systems to assess treatment. This was not feasible given the scope and design of this multisite survey-based study. However, instead of coding treatment type we included baseline-to-4 week change in pain intensity, region-specific disability, and OSPRO-YF scores in each model as measures of treatment response. In that manner the individual effects of the treatment received were included in the predictive models, without directly accounting for the type of treatment.\n\n#### Healthcare utilization outcomes\n\nSelf-reported health care utilization was assessed at 6- and 12-months following initial evaluation by online assessment. Questions were derived from previous population-based studies involving musculoskeletal pain that have used survey methods for follow-up assessment [22, 23]. Study participants were asked whether they used any of the following healthcare services for their primary musculoskeletal pain complaint in the time following their physical therapy treatment:\n\n- 1. Opioid painkillers (eg. Vicodin, Lortab, Hydrocodone, Fentanyl, Percocet, Oxycontin, Oxycodone, tramadol, Ultram, Diludid, etc)\n- 2. Injections\n- 3. Surgery\n- 4. Diagnostic tests or Imaging (eg. xray, MRI, CT scan, nerve conduction test, etc.)\n- 5. Emergency room visits\n\n\"Yes\" responses were followed by questions regarding the quantity of services utilized (i.e. 
number of opioid painkillers, number of diagnostic tests or number of emergency room visits). All utilization questions were answered on a categorical scale (0, 1, 2–5, 5–10, or > 10) indicating the quantity of a particular service received during the applicable follow-up timeframe. At 6-month follow-up, study participants reported their use of services for the previous 2 months, allowing a timeframe of 4 months from initial evaluation for them to complete physical therapy. At 12-month follow-up, study participants reported their use of services over the previous 6 months since their last survey. This method provided an 8-month overall follow-up period after physical therapy and two follow-up points were included to minimize recall bias.\n\n#### Statistical analysis\n\nAll data analyses were preformed using SPSS Version 22.0 (IBM Corp., Armonk, NY). We developed models to separately predict over the course of the entire follow-up period: 1) the dichotomous outcome of no healthcare utilization versus any healthcare utilization and 2) the utilization of specific services. We decided to develop separate models since each outcome predicted by these models might have unique future policy implications. For instance, those who utilize no additional services might represent a \"low risk\" group for which physical therapy alone might be particularly appropriate. 
Predicting use of specific services would inform policy where reduction of specific services is a high priority, such as utilization of opioids or unnecessary use of emergency room services.\n\nAll prediction models used the following hierarchical design, which is similar to prior analyses in this cohort [20, 21]:\n\nBlock 1: age, sex, race, anatomical region of pain, insurance, chronicity of pain, surgery for current condition (yes/no), Charlson comorbidity index, baseline disability, baseline pain intensity.", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed5.pdf" - }, - { - "text": "Block 2: 10-item OSPRO-YF and 10-item OSPRO-ROS at baseline.\n\nBlock 3: Remaining items from the OSPRO-YF (+ 7 items) and OSPRO-ROS (+ 13 items). These were included to determine whether full-length versions of the tools provided better prediction over shortened versions.\n\nBlock 4: Baseline-to-4 week change in pain intensity, region-specific disability, and OSPRO-YF scores. Early changes in these variables may be associated with improved prediction of outcomes over baseline variables alone [38]. This approach modeled change in these variables as a measure of treatment response and allowed us to assess the relative value of treatment monitoring for the prediction of healthcare utilization outcomes.\n\nFor the first analysis, binary logistic regression was used to determine predictors of any healthcare utilization following physical therapy, with the dependent variable defined as reporting one or more utilization events for any of the potential healthcare services over the entire follow-up period. For analyses of specific services, utilization was dichotomized for each service. Specific service utilization over early (through 6 months) and late (6 months to 12 months) phases following physical therapy were collapsed to create a single dichotomous utilization indicator for each service over the entire study follow-up period. 
Any utilization of the service over that period was categorized as YES. Separate multivariate binary logistic regression models were then fitted for the dichotomous utilization indicator (i.e. YES or NO) of each healthcare service (e.g. opioid use, injection, imaging, surgery, and emergency room visits).\n\nFor all analyses, full hierarchical multivariate models were first fit to assess the unique contributions of each block. This approach allowed us to determine the relative contributions of baseline demographic and health-related variables, the newly developed OSPRO-ROS and OSPRO-YF tools, and response to treatment via time varying variables (e.g., pain intensity and region specific function). However, since our primary aim was to develop concise and accurate utilization prediction models for efficient assessment of risk, we then separately developed stepwise models using backward selection for each dependent variable to derive parsimonious prediction item sets. Parsimonious models were chosen as a more conservative approach to identifying individual predictors given the potential for overfitting full multivariate models because of high subject attrition. For stepwise models, the p-value threshold was 0.05 for entry and 0.10 for removal. Overall fit for each model was examined with Hosmer & Lemeshow test, chi-square and pseudo-R2 values (e.g. Nagelkerke) when appropriate. Comparison of adjusted odds ratios (OR) and 95% confidence interval (CI) were used to determine the relative strength of each predictor in parsimonious models. Multicollinearity was assessed using variance inflation factor (VIF) and tolerance, where VIFs < 10 and tolerances > 0.1 suggested no significant collinearity among independent variables [39].\n\n#### Planned sensitivity analyses for missing data\n\nThe electronic OPT-IN data collection forms required complete data from respondents before they were allowed to proceed to subsequent survey pages. 
Therefore, the occurrence of missing data for independent predictor variables was minimal (< 1% of sample). However, for subjects who were lost to follow-up, we planned two approaches to assess the potential influence of missing data on study outcomes. First, demographic and baseline health variables would be compared between those with complete follow-up at 1 year and those without follow-up at 1 year to identify any potential group differences related to completion of follow-up. Second, sensitivity analyses would be conducted by repeating each analysis using inverse probability of attrition weighting (IPAW). This propensity scoring approach accounts for attrition-related selection bias in longitudinal studies by more heavily weighting observations associated with a lower probability of study completion [40]. Thus, the resulting analysis is compensated for under-representation of subjects who are more likely to be lost to follow-up. IPAW produces smaller effect estimate biases than more conventional methods that adjust for baseline predictors of attrition [41]. Briefly, logistic regression will be performed to identify predictors of attrition using an opportunistic approach that optimizes model fit, with an area under the curve (AUC) target value of > 0.7. Demographic and baseline health variables that differ between follow-up status cohorts will be used as candidate variables for the regression model to derive weights. Then, inverse of predicted probabilities for remaining in the study will be used to weight observations, and all analyses will be repeated. Regression results using IPAW will be compared with those obtained from complete case only analyses to assess the potential influence of missing data on the findings and identify robust predictors. 
We will focus our interpretation on predictors that are consistent across complete case and IPAW models for each type of healthcare service as they are more robust and most likely to be reproduced in future studies.\n\n#### Power analysis\n\nFor logistic regression analyses, event-per-variable values of 10 or greater are suggested, since overfitting will weaken the probability that original findings will be", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed5.pdf" - }, - { - "text": "#### Abbreviations\n\nCCI: Charlson comorbidity index; OSPRO: Optimal Screening for Prediction of Referral and Outcome; OSPRO-ROS: Review of systems screening tool from OSPRO cohort study; OSPRO-YF: Pain-related psychological distress screening tool from OSPRO cohort study\n\n#### Acknowledgements\n\nThe authors wish to acknowledge Dr. Roger B. Fillingim and Dr. Nicole M. Marlow for their input on study design and analysis. OPT-IN Network Participants included: University of Florida: Joel Bialosky; UF Health: Giorgio Zeppieri, Jr., Daniel Broome, Marty Huegel, Debi Jones, Steve Emery, Mike Hodges, Derek Miles, Jodi Davis, Charlene Stubbington, Mike Darcy; ATI Physical Therapy: Ellen Shanley, Thomas Denninger, Jenna Bartsokas, Elise Harris, Jordan Floyd, Wade Harrell; University of Southern California: Lori Michener, Amy Pomrantz, Brooks Rehabilitation: Raine Osborne, Nata Salvatori, John Leschitz, Brian Hagist, Laura Langer, Tim Shreve, Nando Malaman, Michael Bourassa, Justin Zych, Tasha Mouton Shanklin; University of Illinois at Chicago: Aaron Keil, Brad Myers, Deb Davey, Justin Payette, Adam Wielechowski, Richard Severin, Erik Martinez; Indiana State University: Ryan Hanigan, Carolina Valencia, Danielle Jena, Nicole Woodard; Arcadia University: Angela Tate; Life's Work Physical Therapy: Sandra Stryker, Aaron Leonard, Erin Courtney, Brandon Little, Kathryn Jankord, Brad Simpson, Charleen Hall, Paige Nixon, Julia Neufeld; University of Colorado, Denver: Paul Mintken, 
Virginia Arnette, Andrea Barsch.\n\n#### Funding\n\nThis project was supported by the 2013 Clinical Research Network grant from the Orthopaedic Section, American Physical Therapy Association. The funding body had no role in the design of the study or collection, analysis, and interpretation of the data or in writing the manuscript. TAL received additional support from the Foundation for Physical Therapy with Promotion of Doctoral Studies I & II (PODS I& II) Awards. SZG and JMB received additional support from Brooks Rehabilitation while designing this study. JMB received support from the American National Institutes of Health (NIH) Rehabilitation Research Career Development Program (K12-HD055929).\n\n#### Availability of data and materials\n\nThe data that support the findings of this study are available from the corresponding author upon reasonable request.\n\n#### Authors' contributions\n\nTAL provided input on study design and analysis plan, drafted the manuscript and approved final version of the manuscript. SZG secured funding, provided overall design, gave input on the analysis plan and approved final version of the manuscript. JMB provided input on design and analysis plan and approved final version of the manuscript.\n\n#### Ethics approval and consent to participate\n\nEthics approval for this study was granted by the University of Florida Institutional Review Board-01 (Study #: 525–2012). All participants provided written consent to participate in the study.\n\n#### Consent for publication\n\nNot applicable.\n\n#### Competing interests\n\nThe authors declare that they have no competing interests.\n\n#### Publisher's Note\n\nSpringer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n#### Author details\n\n1 Duke Clinical Research Institute, Duke University, 2400 Pratt Street, Durham, NC 27705, USA. 
2 Department of Physical Therapy, College of Public Health & Health Professions, University of Florida, Box 100154, UFHSC, Gainesville, FL 32610-0154, USA. 3 Brooks Rehabilitation Clinical Research Center, 3901 University Blvd. South, Suite 103, Jacksonville, FL 32216, USA. 4 Duke Clinical Research Institute, Department of Orthopaedic Surgery, Duke University, 2400 Pratt Street, Durham, NC 27705, USA.\n\nReceived: 9 November 2017 Accepted: 14 August 2018\n\n#### References\n\n- 1. Von Korff M, Scher AI, Helmick C, Carter-Pokras O, Dodick DW, Goulet J, et al. United states national pain strategy for population research: concepts, definitions, and pilot data. J Pain Off J Am Pain Soc. 2016;17:1068–80.\n- 2. Clarke JL, Skoufalos A, Scranton R. The American opioid epidemic: population health implications and potential solutions. Report from the national stakeholder panel. Popul Health Manag. 2016;19 Suppl 1:S1–10.\n- 3. Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain--United States, 2016. JAMA. 2016;315:1624–45.\n- 4. Boyles R, Toy P, Mellon J, Hayes M, Hammer B. Effectiveness of manual physical therapy in the treatment of cervical radiculopathy: a systematic review. J Man Manip Ther. 2011;19:135–42.\n- 5. Bürge E, Monnin D, Berchtold A, Allet L. Cost-effectiveness of physical therapy only and of usual care for various health conditions: systematic review. Phys Ther. 2016;96:774–86.\n- 6. Deyle GD, Allison SC, Matekel RL, Ryder MG, Stang JM, Gohdes DD, et al. Physical therapy treatment effectiveness for osteoarthritis of the knee: a randomized comparison of supervised clinical exercise and manual therapy procedures versus a home exercise program. Phys Ther. 2005;85:1301–17.\n- 7. Deyle GD, Henderson NE, Matekel RL, Ryder MG, Garber MB, Allison SC. Effectiveness of manual physical therapy and exercise in osteoarthritis of the knee. A randomized, controlled trial. Ann Intern Med. 2000;132:173–81.\n- 8. 
Freburger JK, Carey TS, Holmes GM. Effectiveness of physical therapy for the management of chronic spine disorders: a propensity score approach. Phys Ther. 2006;86:381–94.\n- 9. Kuhn JE, Dunn WR, Sanders R, An Q, Baumgarten KM, Bishop JY, et al. Effectiveness of physical therapy in treating atraumatic full-thickness rotator cuff tears: a multicenter prospective cohort study. J Shoulder Elb Surg. 2013; 22:1371–9.\n- 10. Fritz JM, Childs JD, Wainner RS, Flynn TW. Primary care referral of patients with low back pain to physical therapy: impact on future health care utilization and costs. Spine. 2012;37:2114–21.\n- 11. Fritz JM, Brennan GP, Hunter SJ, Magel JS. Initial management decisions after a new consultation for low back pain: implications of the usage of physical therapy for subsequent health care costs and utilization. Arch Phys Med Rehabil. 2013;94:808–16.\n- 12. Hill JC, Dunn KM, Lewis M, Mullis R, Main CJ, Foster NE, et al. A primary care back pain screening tool: identifying patient subgroups for initial treatment. Arthritis Rheum. 2008;59:632–41.\n- 13. Traeger AC, Henschke N, Hübscher M, Williams CM, Kamper SJ, Maher CG, et al. Estimating the risk of chronic pain: development and validation of a prognostic model (PICKUP) for patients with acute low back pain. PLoS Med. 2016;13:e1002019.\n- 14. Karran EL, McAuley JH, Traeger AC, Hillier SL, Grabherr L, Russek LN, et al. Can screening instruments accurately determine poor outcome risk in adults with recent onset low back pain? A systematic review and metaanalysis. BMC Med. 2017;15:13.\n- 15. Azevedo LF, Costa-Pereira A, Mendonça L, Dias CC, Castro-Lopes JM. Chronic pain and health services utilization: is there overuse of diagnostic tests and inequalities in nonpharmacologic treatment methods utilization? Med Care. 2013;51:859–69.\n- 16. Langley P, Müller-Schwefe G, Nicolaou A, Liedgens H, Pergolizzi J, Varrassi G. 
The societal impact of pain in the European Union: health-related quality of life and healthcare resource utilization. J Med Econ. 2010;13:571–81.\n- 17. Pérez C, Navarro A, Saldaña MT, Wilson K, Rejas J. Modeling the predictive value of pain intensity on costs and resources utilization in patients with peripheral neuropathic pain. Clin J Pain. 2015;31:273–9.\n- 18. Hill JC, Fritz JM. Psychosocial influences on low back pain, disability, and response to treatment. Phys Ther. 2011;91:712–21.\n- 19. George SZ, Beneciuk JM, Lentz TA, Wu SS. The Optimal Screening for Prediction of Referral and Outcome (OSPRO) in patients with musculoskeletal pain conditions: a longitudinal validation cohort from the USA. BMJ Open. 2017;7:e015188.\n- 20. George SZ, Beneciuk JM, Lentz TA, Wu SS, Dai Y, Bialosky JE, Zeppieri G Jr. Optimal Screening for Prediction of Referral and Outcome (OSPRO) for Musculoskeletal Pain Conditions: Results From the Validation Cohort. J Orthop Sports Phys Ther. 2018;48(6):460–75.", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed5.pdf" - }, - { - "text": "# Appendix\n\n#### **Charts showing age-of-onset distributions (by percentage of total cohort) for different cohorts based on year of first treatment**", - "page_start": 30, - "page_end": 30, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "reproduced in an independent sample [42, 43]. With 18 potential predictors, a sample of n = 180 reporting healthcare utilization at follow-up would be sufficient for the proposed analyses. However, this estimate is conservative. Other methods for determining sample size for prediction analyses suggest an overall sample size of N > 50 + 8*m (where m = number of independent variables) [44] or N > 104 + number of independent predictors [45, 46]. 
For these less conservative estimates, the projected study sample size is sufficient for the proposed analyses.\n\n#### Results\n\nFour hundred and forty subjects were recruited at initial evaluation. Follow-up at 4 weeks was 75.0% (n = 330), at 6 months was 69.0% (n = 304) and at 12 months was 65.2% (n = 287). Baseline demographics and health-related characteristics for the full cohort, as well as those who did and did not complete all follow-up are presented in Tables 1, 2 and 3. Those who did not complete follow-up were younger, more likely to be non-white, had less than a college degree, were more likely to have had sudden symptom onset, had higher baseline pain intensity, and had higher baseline pain-related psychological distress measured by the OSPRO-YF. Only those with complete follow-up data at each time point were considered for prediction analyses (n = 246, 55.9%).\n\nOverall, 43.1% (n = 106/246) of those with complete follow-up data utilized at least one healthcare service following the physical therapy episode. Distribution of utilization for specific services is provided in Table 4. For multivariate analyses, all VIFs were less than 10 and tolerance values greater than 0.1 suggesting no significant multicollinearity among independent variables.\n\n#### Full multivariate model performance\n\nOverall performance for each full multivariate model is listed in Table 5. Block 1 (Demographic, clinical and comorbidity) consistently contributed to prediction of healthcare utilization and accounted for the greatest amount of variation in utilization outcome for all models. Block 4 (change scores for pain, disability, and OSPRO-YF) provided statistically significant contributions in all models except prediction of injection. Blocks including baseline OSPRO-YF and OSPRO-ROS, both short and long forms, did not predict utilization outcomes. 
Weighted models consistently outperformed their complete case analysis model counterparts with overall model pseudo-R2 values ranging from .337 (Any care) to .611 (Emergency room).\n\nTable 1 Demographic information for the full cohort, and for those with complete and incomplete follow-up\n\n| Variable | Label | Full cohort at baseline | Completed follow-up | Did not complete follow-up | p-value a |\n| --- | --- | --- | --- | --- | --- |\n| | | (n = 440) | (n = 246) | (n = 194) | |\n| Demographic information | | | | | |\n| Age | Mean ± SD | 45.06 ± 15.82 | 46.59 ± 16.00 | 43.15 ± 15.43 | 0.02 |\n| | Median (min, max) | 45 (18–75) | 47 (18–75) | 42 (18–74) | |\n| Sex (1 missing) | Male | 164 (37.3%) | 85 (34.6%) | 79 (40.7%) | 0.20 |\n| | Female | 275 (62.5%) | 160 (65.0%) | 115 (59.3%) | |\n| Race (7 missing) | White | 343 (78.0%) | 200 (81.3%) | 143 (73.7%) | 0.05 |\n| | Non-white | 90 (20.5%) | 42 (17.1%) | 48 (24.7%) | |\n| Ethnicity (33 missing) | Hispanic or Latino | 31 (7.0%) | 20 (8.1%) | 11 (5.7%) | 0.36 |\n| | Not Hispanic or Latino | 376 (85.5%) | 211 (85.8%) | 165 (85.1%) | |\n| Education (6 missing) | Less than college graduate | 161 (36.6%) | 71 (28.9%) | 90 (46.4%) | < 0.001 |\n| | College graduate or higher | 273 (62.0%) | 172 (69.9%) | 101 (52.1%) | |\n| Income (66 missing) | $35,000 or less | 112 (25.5%) | 62 (25.2%) | 50 (25.8%) | 0.30 |\n| | $35,000 to $70,000 | 106 (24.1%) | 59 (24.0%) | 47 (24.2%) | |\n| | Greater than 70,000 | 156 (35.5%) | 99 (40.2%) | 57 (29.4%) | |\n| Insurance (26 missing) | Private | 273 (62.0%) | 156 (63.4%) | 117 (60.3%) | 0.70 |\n| | Public | 75 (17.0%) | 46 (18.7%) | 29 (14.9%) | |\n| | Other | 66 (15.0%) | 36 (14.6%) | 30 (15.5%) | |\n| Geographic region | Southeast | 275 (62.5%) | 146 (59.3%) | 129 (66.5%) | 0.10 |\n| | Midwest | 47 (10.7%) | 23 (9.3%) | 24 (12.4%) | |\n| | West | 98 (22.3%) | 65 (26.4%) | 33 (17.0%) | |\n| | Northeast | 20 (4.5%) | 12 (4.9%) | 8 (4.1%) | |\n\na Group comparisons with independent 
samples t-tests for continuous variables and chi-square tests for categorical variables", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed5.pdf" - }, - { - "text": "Policy information about availability of data\n\nAll manuscripts must include a data availability statement. This statement should provide the following information, where applicable:\n\n- Accession codes, unique identifiers, or web links for publicly available datasets\n- A description of any restrictions on data availability\n- For clinical datasets or third party data, please ensure that the statement adheres to our policy\n\nThe dataset consists of 26 MRI scans (T1w, T2w, and diffusion scans) alongside state-dependent measures and serum assessments of ovarian sex hormones for each session. The data is publicly available on https://openneuro.org/datasets/ds005299.\n\n# Research involving human participants, their data, or biological material\n\nPolicy information about studies with human participants or human data. See also policy information about sex, gender (identity/presentation), and sexual orientation and race, ethnicity and racism.\n\n| Reporting on sex and gender | Our study focused on a single female participant to explore how pregnancy shapes the human brain. |\n| --- | --- |\n| Reporting on race, ethnicity, or | The subject was white. |\n| other socially relevant | |\n| groupings | |\n| Population characteristics | This was a precision imaging study of one 38-year old primiparous woman. |\n| Recruitment | Our participant (corresponding author E.R.C.) was a healthy primiparous woman who underwent in-vitro fertilization (IVF) to |\n| | achieve pregnancy. The project was conceived by E.R.C. and she wished to use herself as the participant, as has been done in |\n| | previous \"dense-sampling\" studies (cf. Poldrack et al., 2015; Pritschet et al., 2020). 
|\n| Ethics oversight | The participant gave written informed consent and the study was approved by the University of California, Irvine Human |\n| | Subjects Committee. |\n\nNote that full information on the approval of the study protocol must also be provided in the manuscript.\n\n# Field-specific reporting\n\nPlease select the one below that is the best fit for your research. If you are not sure, read the appropriate sections before making your selection.\n\n|\n| |\n\nFor a reference copy of the document with all sections, see nature.com/documents/nr-reporting-summary-flat.pdf\n\n# Life sciences study design\n\nAll studies must disclose on these points even when the disclosure is negative.\n\n| Sample size | We used precision imaging to deeply-phenotype, densely-sample an individual over the gestational window. As this study was the first of it's |\n| --- | --- |\n| | kind, our sample size was an N=1 design. Although this limits the generalizability of our findings, this project serves as a proof-of-concept, |\n| | showcasing the value and feasibility of studying a woman's brain during the transition to motherhood. |\n| Data exclusions | no history of neuropsychiatric diagnosis, endocrine disorders, prior head trauma or history of smoking |\n| Replication | This is the first study of it's kind; therefore, there are no study replications as of yet. However, to reproduce our results internally across |\n| | software packages, we also ran the T1w data through the longitudinal FreeSurfer cortical thickness pipeline (Dale et al., 1999), which |\n| | corroborated our finding that gray matter volume declines throughout gestation (e.g., successful internal replication). This pattern of results |\n| | not only held across software packages, but also brain parcellations (e.g., Schaefer 400-cortical atlas and Desikan-Killiany cortical atlas). |\n| Randomization | This was an observational study design, and therefore not randomized. 
|\n| Blinding | For medial temporal lobe segmentation, scans were randomized and segmentation was performed in a random order, blind to pregnancy |\n| | stage. No other blinding was applicable, given the observational study of brain changes in response to advancing gestational week. |\n\n# Reporting for specific materials, systems and methods\n\nWe require information from authors about some types of materials, experimental systems and methods used in many studies. Here, indicate whether each material, system or method listed is relevant to your study. If you are not sure if a list item applies to your research, read the appropriate section before selecting a response.", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed4.pdf" - }, - { - "text": "identifying risk for additional utilization has emerged due to the growth of cost-sharing and capitated payment models, particularly in the United States (US). As a result, many US health care services organizations have begun to prioritize early identification of individuals at risk for downstream healthcare use at the onset of treatment [10, 11]. Early risk assessment allows systems to deliver greater value by 1) focusing limited health care resources towards patients who are most in need, and 2) identifying those who may require coordination of multiple providers and services to optimize outcomes.\n\nProspective identification of risk for high subsequent healthcare utilization is a different approach to outcomes prediction for musculoskeletal pain [12, 13] and one that has not been evaluated in physical therapy settings in the US. Most existing outcomes prediction models focus on pain and disability endpoints [12–14]. They also concentrate on condition-specific and psychological predictors, with less attention to factors that could influence healthcare utilization more directly [15–17]. These factors include insurance, comorbidities, symptoms unrelated to the pain condition, and treatment response. 
As a result, predictors of pain-related healthcare utilization beyond physical therapy are unknown. A better understanding of these predictors will have significant implications for future healthcare pathway development. For instance, an influence of modifiable factors like pain-related psychological distress might imply the need to build clinical pathways that address those factors directly through physical therapist provided intervention. Additionally, understanding the relative predictive capabilities of baseline versus change estimates for modifiable factors would clarify whether prediction is improved by routinely assessing outcomes during the course of treatment (i.e. treatment monitoring) [18].\n\nThis study was undertaken in a nationwide, US cohort of patients receiving outpatient physical therapy for a primary complaint of knee, shoulder, back or neck pain. The primary aim of the analysis was to predict incidence of additional pain-related healthcare utilization in the year following the episode of physical therapy for musculoskeletal pain. We considered factors not commonly assessed in outcomes prediction for musculoskeletal pain, like insurance, comorbidities, and treatment response, as well as those more often associated with pain-related outcomes (e.g. psychological distress). This project will lead to the development of potentially novel outcome prediction models for this population in a common, non-pharmacological US healthcare setting. 
The results of this study will be particularly important in value-based payment settings where enhanced clinical decision-making drives treatment effectiveness and system efficiency.\n\n#### Methods\n\n#### Dataset and patient population\n\nThis study used data from the Orthopedic Physical Therapy – Investigative Network's (OPT-IN) Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study, a longitudinal prospective study of individuals with knee, shoulder, back or neck pain seeking Physical Therapy in the US. A convenience sample was recruited from December 2014 and December 2015 by participating OPT-IN clinics. The OPT-IN clinics that participated in data collection represented multiple geographic regions in the US including the Mideast, Southeast, Great Lakes, Rocky Mountain States and Far West, with an attempt to balance recruitment between urban and rural settings over the entire OPT-IN network. Physical therapists practicing in these clinics identified eligible participants at initial evaluation and directed them to a secure study website for the informed consent process and baseline self-report assessment. Eligibility criteria have been thoroughly reported elsewhere [19] and were intentionally broad to develop a cohort that was generalizable to those seeking physical therapy for common musculoskeletal conditions in the US. Participants completed follow-up self-reported assessments on the study website at 4 weeks, 6 months and 12 months. Participants were notified of a pending assessment by an email that directed them back to the study website to complete their follow-up assessment. For additional details of the dataset and cohort, readers are directed to the published cohort profile [19].\n\nThe primary aim of the OSPRO cohort study was to develop and validate review of systems (i.e. evidence of systemic involvement) and yellow flag (i.e. 
pain-related psychological distress) screening tools for use in outpatient orthopedic physical therapy settings. These screening tools, once validated and refined for clinical decision making, may improve the value of care delivery by accurately identifying individuals who 1) are appropriate for referral to other providers for management of non-musculoskeletal symptoms, and/or 2) would benefit from enhanced, psychologically-informed physical therapy. Early identification of individuals most appropriate for these modified pathways of care has the potential to reduce wasteful downstream health care utilization, limit the risk of unwarranted and costly care escalation, and improve clinical outcomes. Results of the primary analyses examining the predictive ability of the OSPRO tools for pain, disability, health status, and comorbidity outcomes have been previously published [20]. Pre-planned secondary analyses included prediction of persistent pain state [21] and this current analysis predicting future healthcare utilization. All subjects consented to participation in the study and ethics approval was granted by the University of Florida Institutional Review Board.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed5.pdf" - }, - { - "text": "routine pain-related psychological distress monitoring throughout the early phases of rehabilitation especially if the goal is to identify risk for subsequent pain-related healthcare utilization. 
The implications of these collective findings are that treatment pathways may provide greater value by 1) addressing modifiable health-related variables like pain, disability and pain-related psychological distress, 2) routine monitoring of these health-related variables and 3) offering treatment alternatives that safely escalate care if needed while minimizing risk of harm and unhelpful utilization.\n\nOpioids and diagnostic tests and imaging were the two most common subsequent healthcare services utilized following physical therapy. Of the individuals that completed follow up and had any subsequent healthcare utilization, approximately 42% reported opioid use and 70% reported use of diagnostic tests and imaging. An important health-related predictor of these services was level of comorbidity burden. For those with high comorbidity burden and inadequate treatment response to physical therapy, use of additional diagnostic tests and imaging or low-dose opioids may be appropriate in some cases. But given the growing public health concern over opioid use and the desire to avoid unnecessary treatment driven by imaging, our results suggest the importance of considering disease burden when developing treatment pathways and healthcare policy to mitigate risk for avoidable use of these services. Interestingly, neither versions of the OSPRO-ROS predicted utilization outcomes even though it has been linked to mental health, comorbidity, and persistent pain state in other analyses [20, 21]. Systemic symptom burden is a measure of patient complexity that is related to but distinct from comorbidity burden [36, 47]. In these analyses, the chronic condition measure (i.e. the CCI) was a better predictor of utilization than symptom burden (i.e. OSPRO-ROS). 
The reasons for this finding are unclear but may be related to providers and patients being more likely to pursue follow-up medical care for musculoskeletal pain when known co-existing conditions are present as opposed to reporting of symptoms alone. The distinction between symptom and disease burden in defining musculoskeletal patient complexity, and its influence on clinical decision-making and outcomes, should be the subject of future research particularly related to aging populations [48].\n\nUtilization outcomes benchmarks have not been established to determine how the percentage of subsequent healthcare use in this study compares to outcomes using other health services. Prior studies suggest physical therapy is associated with reduced incidence of additional healthcare use compared to not using physical therapy in patients with acute low back pain [10, 49]. Some additional healthcare use is expected following physical therapy, especially among individuals that are on long-term pain management pathways due to chronic or persistent symptoms. Yet with over 40% reporting subsequent pain-related healthcare among those completing follow-up, it is apparent that opportunities exist to improve pathway selection and/or the effectiveness of physical therapy for individuals with musculoskeletal pain. This finding is particularly notable given recent efforts to define physical therapy as an effective first line, non-pharmacological treatment option against more invasive or higher risk services, such as surgery or opioid use, respectively. Predictive variables identified in this analysis can be used to develop risk models that better inform pathway selection for those seeking physical therapy for musculoskeletal pain. The precise application of these risk models, and how they inform policy and practice should be the target of future study. 
However, physical therapy re-design might incorporate enhanced treatment monitoring to assess ongoing risk for downstream utilization, as well as physical therapist-led interventions to more thoroughly address important modifiable factors such as pain intensity, disability and pain-related psychological distress [38]. Improved pathway selection might entail the consideration of referral to or co-treatment with other providers to more adequately address non-modifiable characteristics. Collectively, these approaches could improve the value of physical therapy by minimizing risk for high downstream healthcare utilization and potentially unwarranted escalation of care.\n\nThe primary strength of the study is longitudinal follow-up at multiple time points following an episode of physical therapy for a variety of musculoskeletal pain conditions. Anatomical location of pain was not a significant predictor of healthcare use in all but one model, suggesting results are widely applicable across a spectrum of musculoskeletal pain conditions. Another strength of this cohort study is the assessment of various healthcare utilization outcomes of interest for establishing health policy. When considered alongside more traditional pain- or disability-related outcomes prediction models, these findings will improve the ability of healthcare systems and providers to make decisions in value-based purchasing environments. The consideration of multiple screening tools (i.e. yellow flags and review of systems) and treatment monitoring variables is also a strength of this study as screening and systematic treatment monitoring are not routine in clinical practice. A final strength is inclusion of multiple sociodemographic, health-related and psychosocial factors as potential predictors. 
Healthcare outcomes and utilization exhibit emergent properties that require the consideration of multiple, competing factors to fully explain [50].", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed5.pdf" - }, - { - "text": "composition of the workforce and the climate influence on working conditions challenge all stakeholders in the field of OSH to keep pace with all these developments. In addition, the EU and consequently also OSH in the EU is increasingly and significantly influenced by the globalisation of product and service chains and the internationalisation of its workforce.\n\nThis report is a common effort of EU-OSHA and major stakeholders. It is a product of EU-OSHA's activity 'EU OSH Info system' and aims at interpreting and analysing the quantitative and qualitative data on OSH in the EU that have been collected during the past six years in this activity. The purpose of this report is to **summarise status, trends and key aspects**, preferably based on statistics, data and related analytical research findings.\n\nThe idea of a permanent observation or monitoring of the situation of working conditions and OSH is not new. In the past three EU OSH strategies between 2002 and 2020, it was always an objective to improve the knowledge on working conditions and OSH and by doing that to facilitate **better evidence for stakeholders** and give them a solid base for their activities and prioritisation.\n\nThe idea of better monitoring was strengthened by DG EMPL in 2015 in a broad and systematic effort to develop a new **EU OSH Info system, based on indicators**. The development and design of such a system was done in collaboration between DG EMPL, EU-OSHA, newly established National Contact Points, and the Advisory Committee on Safety and Health. Many indicators were discussed and subsequently included or discarded, depending on their relevance but also depending on the availability of reliable data and the efforts needed to collect such data. 
For example, a good description of working conditions based on EU-wide surveys or statistical data is available, whilst a detailed description of national prevention systems would require considerable research efforts and is until now not part of this info system. In 2023, the info system provides more than 125 datasets for 16 major indicators.\n\nAll these data are presented online in a visualised mode, the OSH Barometer1. This data visualisation tool does **not create new data** but **combines major OSH-related and publicly available quantitative data** with qualitative descriptions and analysis. Many of the indicators and data collections that are used in this report are published in the OSH Barometer. All quantitative indicators are based on available data, that is, they are not based on new research but on existing sources. These sources are dispersed, the info system brings it all together and makes access and overview significantly easier.\n\nMany of the **key findings of this report are based on previous work conducted by EU-OSHA**; many of these data and research results show obvious and (nearly) unambiguously accepted findings. Sometimes the existing data and findings are weak and ambiguous and allow quite diverging interpretations of the reasons and reality behind such data. For these areas and topics, even a combined analyses of quite different sources can only present hypotheses and no clear evidence; in these cases, the current knowledge is not more than a starting point to clarify open questions and to undertake research, data collection and data analysis efforts.\n\nThis report aims at **contributing to better evidence** as a base for more effective and comprehensive actions. A precise picture can better inform priority choices to be made by the legislators and state institutions, by enterprises, workers and their associations, and by OSH professionals. 
It can result in the ultimately desired effective protection of all groups of workers, in all sectors, all occupations, all work tasks and all forms of work.\n\nThis report paints a mostly **quantitative picture** of the current OSH status in the EU. It uses data from European surveys and statistics that were compiled in the frame of EU-OSHA's activity 'EU OSH Information System' and combines quantitative data with explanatory and analytical descriptions. The report covers trends that reach back between 10 and 25 years — depending on data availability and methodological issues. It also takes into account relevant context factors, be it economy, workforce and demography, industrial relations or technological developments.\n\nThe report covers as many indicators, trends and context developments as possible. Short overviews and summaries form the character and shape of this report, not detailed descriptions. This is slightly compensated by extensive referencing to literature, particularly the OSH Barometer data visualisation tool, reports by EU-OSHA and other EU agencies (e.g. Eurofound), and other EU institutions and international agencies.", - "page_start": 20, - "page_end": 20, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "# **2 Setting the scene**\n\nThe ethical and economic importance of safe and healthy working conditions was the root cause for the development of a strong legal framework and comprehensive policy actions targeting EU workplaces. The objective of all related measures is to reduce the avoidable burden for individuals and society, that is annually more than 3,000 fatal accidents at work, and more than 230,000 severe accidents at work, and an estimated 180,000 deaths from work-related illness.\n\nDuring the last 50 years, we have witnessed **significant progress** in the field of OSH in EU Member States. 
Milestones along the way provide evidence that a preventive, proactive and often participative approach has become mainstream in policies and many businesses. The number of work accidents that Eurostat registers has decreased significantly in the period between 1994 and 2020. The EU stabilised and promoted this development, particularly in the 1990s, by adopting the overarching OSH Framework Directive and 24 specific OSH directives. OSH strategies and strategic frameworks at EU and Member State levels have contributed to streamlined approaches in priority areas. Higher safety and health standards, better preventive technologies and OSH management, improved training and education of OSH professionals, and scientific, technical and medical progress have contributed considerably to improving safety and health at work. Member States, the EU and international organisations have been providing comprehensive and manifold guidance and support for enterprises, covering virtually every kind of OSH-related issue and proposing practical preventive measures. Broad and extensive research at national institutes and universities and by EU institutions has considerably improved the level of evidence and knowledge on OSH.\n\nLooking at the challenging and weaker aspects of the last 30 years, we still **observe deficits** concerning the level of compliance and enforcement of OSH legislation, particularly in some sectors, types of work (e.g. mobile or domestic work), types of enterprises (e.g., micro and small enterprises), and in less secure and irregular forms of work. During the COVID-19 pandemic in 2020 and 2021, quite a few media reported on insufficient safety and health measures in irregular, informal, insecure and illegal forms of work, for example, in several types of seasonal or subcontracted work. 
Permanent and seemingly accelerating changes in economic and social policies, technologies and forms of work, the demographic", - "page_start": 19, - "page_end": 19, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed5.pdf", - "query": "What is the range of the pain rating scale ?", - "target_page": 3, - "target_passage": "Pain intensity was assessed by the numerical pain rating scale (NPRS) ranging from “0” (no pain) to “10” (worst pain imaginable)", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "#### Healthcare utilization predictors\n\nWe collected potential predictors by self-reported questionnaires at initial evaluation using an online study website. Participants were directed back to the study website 4 weeks following initial evaluation to again complete questions on pain intensity, disability, and pain-related psychological distress. Change in pain intensity, disability, and pain-related psychological distress from baseline to 4 weeks were modeled as treatment response variables and included as potential predictors.\n\n#### Sociodemographic and health-related information\n\nParticipants completed a standard intake questionnaire form previously used in our clinical studies that assessed age, sex, race, and insurance provider type. This questionnaire also assessed health-related variables included anatomical region of primary pain complaint (low back, neck, shoulder, or knee) and whether the patient had undergone surgery for their primary pain complaints (yes or no). Due to small cell sizes for certain categories, race was dichotomized as white or non-white. For insurance type, participants were asked to choose one of the following options: private, public (Medicare and/or Medicaid), uninsured/self-pay, worker's compensation, and other/commercial insurance. 
Among the study sample, we observed few with no insurance (n = 7) or worker's compensation (n = 14). The study also included relatively few with 'other/commercial insurance' (n = 45). Within this group, informal assessment of these various plans suggested high heterogeneity of plan characteristics and coverage. Due to the small number of subjects in these individual insurance strata and to improve interpretability of results, we collapsed those reporting no insurance, worker's compensation and other/commercial insurance into a single category (i.e., 'Other'). Therefore, insurance type was categorized as private, public, or other (no insurance, worker's compensation, or other/commercial insurance) for purposes of analysis.\n\n#### Pain-related clinical variables\n\nPain status was determined using established definitions that account for the duration of pain and activity limitations [22, 23] using the following two questions: 1) \"How long have you been experiencing your current painful symptoms?\" and 2) \"Have you experienced ANY pain and activity limitations every day for the past 3 months?\" Responses to question 1 of \"greater than 90 days\" or responses to question 2 of \"Yes\" were used to classify patients as having persistent pain at initial evaluation.\n\n#### Pain intensity\n\nPain intensity was assessed by the numerical pain rating scale (NPRS) ranging from \"0\" (no pain) to \"10\" (worst pain imaginable) [24–26]. Participants rated their current pain intensity, as well as their best (lowest) and worst (highest) pain intensity over the past 24 h. 
Current, best and worst pain ratings were averaged for purposes of analysis.\n\n#### Region-specific disability\n\nSelf-reported region-specific disability was assessed with the Neck Disability Index [27, 28], Oswestry Disability Questionnaire [29, 30], Quick Disability of Arm Shoulder and Hand [31] or International Knee Documentation Committee Subjective Knee Form [32] for cervical, low back, shoulder and knee pain, respectively. Region-specific disability measures were z-transformed for purposes of analysis, consistent with our prior work involving multiple anatomical regions [33].\n\n#### Comorbidities\n\n#### Charlson comorbidity index (CCI)\n\nThe Charlson Comorbidity Index was used to measure the presence of chronic comorbid medical conditions [34]. It lists 19 medical conditions that participants are asked to indicate whether they \"have ever been diagnosed with by a physician\". Conditions are weighted and added for an overall measure of comorbidity burden. The CCI has demonstrated good test-retest reliability (0.91) and positive but weak to modest correlations with medication use, hospitalizations, length of stay, total charges, and pharmacy and laboratory charges for older adults in general medical care and surgical care settings [35].\n\n#### Assessment tools\n\n#### OSPRO Review of Systems tool (OSPRO-ROS)\n\nThe OSPRO-ROS is a review-of-systems screening tool for use in outpatient orthopedic physical therapy settings [36]. The OSPRO-ROS has demonstrated good concurrent validity with depression and a comprehensive 97-item battery of non-musculoskeletal symptoms (i.e., red flags). [36] Moderate to strong predictive capabilities of the OSPRO-ROS have been reported for persistence of pain, quality of life, and change in comorbidity 12 months following physical therapy in patients with musculoskeletal pain [20, 21]. 
The OSPRO-ROS includes standard symptom descriptors to aid with identification of systemic or non-musculoskeletal origins of musculoskeletal pain. It includes questions related to symptoms of the cardiovascular, gastrointestinal, endocrine, nervous, integumentary, pulmonary, and musculoskeletal systems. The full-length 23-item version of the OSPRO-ROS is capable of identifying 100% of positive red-flag responders (i.e. indicating \"yes\" to at least one systemic symptom on a questionnaire) in outpatient orthopedic physical therapy settings. [36] A shorter, 10-item version is also available that has been", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed5.pdf" - }, - { - "text": "## R E S EAR CH A R TIC L E Open Access\n\n# Prediction of healthcare utilization following an episode of physical therapy for musculoskeletal pain\n\nTrevor A. Lentz1* , Jason M. Beneciuk2,3 and Steven Z. George4\n\n## Abstract\n\nBackground: In the United States, value-based purchasing has created the need for healthcare systems to prospectively identify patients at risk for high healthcare utilization beyond a physical therapy episode for musculoskeletal pain. The purpose of this study was to determine predictors of pain-related healthcare utilization subsequent to an index episode of physical therapy for musculoskeletal pain.\n\nMethods: This study assessed data from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) longitudinal cohort study that recruited individuals with a primary complaint of neck, low back, knee or shoulder pain in physical therapy (n = 440). Demographics, health-related information, review of systems, comorbidity and pain-related psychological distress measures were collected at baseline evaluation. Baseline to 4-week changes in pain intensity, disability, and pain-related psychological distress were measured as treatment response variables. 
At 6-months and 1-year after baseline evaluation, individuals reported use of opioids, injection, surgery, diagnostic tests or imaging, and emergency room visits for their pain condition over the follow-up period. Separate prediction models were developed for any subsequent care and service-specific utilization.\n\nResults: Subsequent pain-related healthcare utilization was reported by 43% (n = 106) of the study sample that completed the 12-month follow-up (n = 246). Baseline disability and 4-week change in pain intensity were important global predictors of subsequent healthcare utilization. Age, insurance status, comorbidity burden, baseline pain, and 4-week changes in pain intensity, disability and pain-related psychological distress predicted specific service utilization.\n\nConclusion: In those completing follow up measures, risk of additional pain-related healthcare utilization after physical therapy was best predicted by baseline characteristics and 4-week treatment response variables for pain intensity, disability and pain-related psychological distress. These findings suggest treatment monitoring of specific response variables could enhance identification of those at risk for future healthcare utilization in addition to baseline assessment. Further study is required to determine how specific characteristics of the clinical encounter influence future utilization.\n\nKeywords: Screening, Psychological distress, Multimorbidity, Value, Treatment monitoring\n\n### Background\n\nMusculoskeletal pain is a prevalent and costly health condition with far-reaching public health consequences including chronic pain, disability and opioid-related addiction [1]. Clinical practice guidelines now recommend non-pharmacological treatment as frontline management for musculoskeletal pain, which will lead\n\nto increased utilization of services such as physical\n\n© The Author(s). 
2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.\n\n* Correspondence: trevor.lentz@duke.edu 1\n\nDuke Clinical Research Institute, Duke University, 2400 Pratt Street, Durham, NC 27705, USA\n\nFull list of author information is available at the end of the article", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed5.pdf" - }, - { - "text": "- 21. Beneciuk JM, Lentz TA, He Y, Wu SS, George SZ. Prediction of persistent musculoskeletal pain at 12 months: a secondary analysis of the Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study. Phys Ther. 2018;98:290–301.\n- 22. Freburger JK, Holmes GM, Agans RP, Jackman AM, Darter JD, Wallace AS, et al. The rising prevalence of chronic low back pain. Arch Intern Med. 2009; 169:251–8.\n- 23. Carey TS, Freburger JK, Holmes GM, Jackman A, Knauer S, Wallace A, et al. Race, care seeking, and utilization for chronic back and neck pain: population perspectives. J Pain Off J Am Pain Soc. 2010;11:343–50.\n- 24. Jensen MP, Turner JA, Romano JM, Fisher LD. Comparative reliability and validity of chronic pain intensity measures. Pain. 1999;83:157–62.\n- 25. Bolton JE. Accuracy of recall of usual pain intensity in back pain patients. Pain. 1999;83:533–9.\n- 26. Childs JD, Piva SR, Fritz JM. Responsiveness of the numeric pain rating scale in patients with low back pain. Spine. 2005;30:1331–4.\n- 27. Vernon H. The neck disability index: state-of-the-art, 1991-2008. 
J Manip Physiol Ther. 2008;31:491–502.\n- 28. Vernon H, Mior S. The neck disability index: a study of reliability and validity. J Manip Physiol Ther. 1991;14:409–15.\n- 29. Hudson-Cook N, Tomes-Nicholson K, Breen A. A revised Oswestry disability questionnaire. In: Roland M, Jenner J, editors. Back pain: new approaches to rehabilitation and education. New York: Manchester University Press; 1989. p. 187–204.\n- 30. Fritz JM, Irrgang JJ. A comparison of a modified Oswestry low back pain disability questionnaire and the Quebec back pain disability scale. Phys Ther. 2001;81:776–88.\n- 31. Beaton DE, Wright JG, Katz JN, Upper Extremity Collaborative Group. Development of the QuickDASH: comparison of three item-reduction approaches. J Bone Joint Surg Am. 2005;87:1038–46.\n- 32. Irrgang JJ, Anderson AF, Boland AL, Harner CD, Kurosaka M, Neyret P, et al. Development and validation of the international knee documentation committee subjective knee form. Am J Sports Med. 2001;29:600–13.\n- 33. Butera KA, Lentz TA, Beneciuk JM, George SZ. Preliminary evaluation of a modified STarT back screening tool across different musculoskeletal pain conditions. Phys Ther. 2016;96:1251–61.\n- 34. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373–83.\n- 35. Katz JN, Chang LC, Sangha O, Fossel AH, Bates DW. Can comorbidity be measured by questionnaire rather than medical record review? Med Care. 1996;34:73–84.\n- 36. George SZ, Beneciuk JM, Bialosky JE, Lentz TA, Zeppieri G, Pei Q, et al. Development of a review-of-systems screening tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2015;45: 512–26.\n- 37. Lentz TA, Beneciuk JM, Bialosky JE, Zeppieri G, Dai Y, Wu SS, et al. 
Development of a yellow flag assessment tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2016;46:327–43.\n- 38. Beneciuk JM, Fritz JM, George SZ. The STarT back screening tool for prediction of 6-month clinical outcomes: relevance of change patterns in outpatient physical therapy settings. J Orthop Sports Phys Ther. 2014;44: 656–64.\n- 39. Myers RH. Classical and modern regression with applications. 2nd ed. Pacific Grove: Duxbury Press; 2000.\n- 40. Weuve J, Tchetgen Tchetgen EJ, Glymour MM, Beck TL, Aggarwal NT, Wilson RS, et al. Accounting for bias due to selective attrition: the example of smoking and cognitive decline. Epidemiol Camb Mass. 2012;23:119–28.\n- 41. Hernán MA, Hernández-Díaz S, Robins JM. A structural approach to selection bias. Epidemiol Camb Mass. 2004;15:615–25.\n- 42. Kent P, Keating JL, Leboeuf-Yde C. Research methods for subgrouping low back pain. BMC Med Res Methodol. 2010;10:62.\n- 43. Peduzzi P, Concato J, Kemper E, Holford TR, Feinstein AR. A simulation study of the number of events per variable in logistic regression analysis. J Clin Epidemiol. 1996;49:1373–9.\n- 44. Tabachnick BG, Fidell LS. Using multivariate statistics. 5th ed. Boston: Pearson; 2006.\n- 45. Green SB. How many subjects does it take to do a regression analysis. Multivar Behav Res. 1991;26:499–510.\n- 46. Harris RJ. A primer of multivariate statistics. 3rd ed. Mahwah: Psychology Press; 2001.\n- 47. Piette JD, Kerr EA. The impact of comorbid chronic conditions on diabetes care. Diabetes Care. 2006;29:725–31.\n- 48. Rice ASC, Smith BH, Blyth FM. Pain and the global burden of disease. Pain. 2016;157:791–6.\n- 49. Fritz JM, Cleland JA, Speckman M, Brennan GP, Hunter SJ. Physical therapy for acute low back pain: associations with subsequent healthcare costs. Spine. 2008;33:1800–5.\n- 50. Lentz TA, Harman JS, Marlow NM, George SZ. 
Application of a value model for the prevention and management of chronic musculoskeletal pain by physical therapists. Phys Ther. 2017;97:354–64.\n- 51. Sterne JAC, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ. 2009;338:b2393.\n- 52. Bishop MD, Mintken PE, Bialosky JE, Cleland JA. Patient expectations of benefit from interventions for neck pain and resulting influence on outcomes. J Orthop Sports Phys Ther. 2013;43:457–65.\n- 53. Bialosky JE, Bishop MD, Cleland JA. Individual expectation: an overlooked, but pertinent, factor in the treatment of individuals experiencing musculoskeletal pain. Phys Ther. 2010;90:1345–55.\n- 54. Hanney WJ, Masaracchio M, Liu X, Kolber MJ. The influence of physical therapy guideline adherence on healthcare utilization and costs among patients with low back pain: a systematic review of the literature. PLoS One. 2016;11:e0156799.\n- 55. Childs JD, Fritz JM, Wu SS, Flynn TW, Wainner RS, Robertson EK, et al. Implications of early and guideline adherent physical therapy for low back pain on utilization and costs. BMC Health Serv Res. 2015;15 https://doi.org/ 10.1186/s12913-015-0830-3.\n- 56. Yu S-T, Chang H-Y, Lin M-C, Lin Y-H. Agreement between self-reported and health insurance claims on utilization of health care: a population study. J Clin Epidemiol. 2009;62:1316–22.\n- 57. Petrou S, Murray L, Cooper P, Davidson LL. The accuracy of self-reported healthcare resource utilization in health economic studies. Int J Technol Assess Health Care. 2002;18:705–10.\n- 58. Short ME, Goetzel RZ, Pei X, Tabrizi MJ, Ozminkowski RJ, Gibson TB, et al. How accurate are self-reports? Analysis of self-reported health care utilization and absence when compared with administrative data. J Occup Environ Med. 
2009;51:786–96.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed5.pdf" - }, - { - "text": "shown to identify approximately 95% of positive red-flag responders. For statistical analyses, the \"yes\" responses were added for each version and included in each model as a continuous independent variable.\n\n#### OSPRO Yellow Flag tool (OSPRO-YF)\n\nThe OSPRO-YF is a yellow flag assessment tool that includes items from pain vulnerability domains (negative affect and fear-avoidance) and pain resilience domains (positive affect and self-efficacy) to aid with identification of pain-related psychological distress in outpatient orthopedic physical therapy settings [37]. The OSPRO-YF has good concurrent validity with pain intensity and region-specific disability [37] and is capable of predicting pain intensity, disability, quality of life and persistent pain 12 months following physical therapy in patients with musculoskeletal pain [20, 21]. The full-length OSPRO-YF has 17-items, however a shortened 10-item version is also available with an acceptable trade-off in accuracy. Like the OSPRO-ROS, the OSPRO-YF is designed for implementation into electronic medical record (EMR) systems to quickly and accurately identify risk for a variety of clinical outcomes [19]. For statistical analyses, a summary score was derived for each version by adding the item responses after reverse-scoring items 2, 13, 14, 15 and 17 so that higher scores indicate higher pain-related psychological distress. The summary score was then included in each model as a continuous independent variable.\n\n#### Intervention\n\nAll physical therapy treatment was provided at the discretion of the treating clinician. The duration of the episode, the number of physical therapy visits, and individual treatment parameters (type, intensity, duration, frequency) were not collected for pragmatic reasons.
In particular, clinical and utilization data are not commonly collected in a standardized format and would need to be extracted from disparate medical record databases across different health care systems to assess treatment. This was not feasible given the scope and design of this multisite survey-based study. However, instead of coding treatment type we included baseline-to-4 week change in pain intensity, region-specific disability, and OSPRO-YF scores in each model as measures of treatment response. In that manner the individual effects of the treatment received were included in the predictive models, without directly accounting for the type of treatment.\n\n#### Healthcare utilization outcomes\n\nSelf-reported health care utilization was assessed at 6- and 12-months following initial evaluation by online assessment. Questions were derived from previous population-based studies involving musculoskeletal pain that have used survey methods for follow-up assessment [22, 23]. Study participants were asked whether they used any of the following healthcare services for their primary musculoskeletal pain complaint in the time following their physical therapy treatment:\n\n- 1. Opioid painkillers (eg. Vicodin, Lortab, Hydrocodone, Fentanyl, Percocet, Oxycontin, Oxycodone, tramadol, Ultram, Dilaudid, etc)\n- 2. Injections\n- 3. Surgery\n- 4. Diagnostic tests or Imaging (eg. xray, MRI, CT scan, nerve conduction test, etc.)\n- 5. Emergency room visits\n\n\"Yes\" responses were followed by questions regarding the quantity of services utilized (i.e. number of opioid painkillers, number of diagnostic tests or number of emergency room visits). All utilization questions were answered on a categorical scale (0, 1, 2–5, 5–10, or > 10) indicating the quantity of a particular service received during the applicable follow-up timeframe. 
At 6-month follow-up, study participants reported their use of services for the previous 2 months, allowing a timeframe of 4 months from initial evaluation for them to complete physical therapy. At 12-month follow-up, study participants reported their use of services over the previous 6 months since their last survey. This method provided an 8-month overall follow-up period after physical therapy and two follow-up points were included to minimize recall bias.\n\n#### Statistical analysis\n\nAll data analyses were performed using SPSS Version 22.0 (IBM Corp., Armonk, NY). We developed models to separately predict over the course of the entire follow-up period: 1) the dichotomous outcome of no healthcare utilization versus any healthcare utilization and 2) the utilization of specific services. We decided to develop separate models since each outcome predicted by these models might have unique future policy implications. For instance, those who utilize no additional services might represent a \"low risk\" group for which physical therapy alone might be particularly appropriate. 
Predicting use of specific services would inform policy where reduction of specific services is a high priority, such as utilization of opioids or unnecessary use of emergency room services.\n\nAll prediction models used the following hierarchical design, which is similar to prior analyses in this cohort [20, 21]:\n\nBlock 1: age, sex, race, anatomical region of pain, insurance, chronicity of pain, surgery for current condition (yes/no), Charlson comorbidity index, baseline disability, baseline pain intensity.", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed5.pdf" - }, - { - "text": "subsequently evaluated 2 ED-to-inpatient handoff notes for each patient: (1) the physician-written note and (2) the LLM-generated note.\n\nOn a Likert scale of 1 to 5, where 1 is unacceptable and 5 is excellent, the 3 physicians rated the completeness, curation, readability, and correctness of the summary as shown in eTable 1 in Supplement 1. Physicians rated the usefulness of the summary, defined as the capability of the summary being incorporated into a workflow where a physician would make edits before final completion, mitigating potential future self-referential learning loops and the downstream adverse consequences.51 Likewise, the raters assessed potential patient safety implications of unmitigated model errors using a scale from 1 to 5, where 1 denotes life-threatening risks and 5 denotes no identified patient safety risk for completeness, curation, readability, and the 4 subcategories within correctness (hallucination, faulty logic, knowledge gap, and bias), as well as the overall patient safety risk.45 Evaluators arrived at prestudy consensus that a usefulness Likert score of at least a 3 out of 5 indicated that the LLM-generated summary likely demonstrated baseline acceptability for such a workflow. 
To extrapolate a theoretical worst case scenario, the physicians rated the safety of the LLM-generated summary as defined as the capability of the summary to fully replace a physicianwritten note (unmitigated).\n\nTo improve consistency and agreement, the 3 reviewers met to familiarize themselves with the framework and evaluated 10 separate cases from the test dataset that were not included in the clinical evaluation results. Additionally, after independently scoring the summaries, they met to ensure consensus interpretation of the multidimensional scoring framework. Interrater reliability was calculated using intraclass correlation coefficient (ICC), using a 2-way random effects model for consistency with the Pingouin statistical package version 0.5.4 in Python (Python Software Foundation). The ICC measures the similarity of the 3 raters to confirm the consistency and validity of the evaluation protocol; the scores are from 0 to 1, where 1 indicates unanimous agreement and 0 represents no agreement.52 Data were analyzed from October 2023 to March 2024.\n\n## **Results**\n\n#### **Automated Tasks**\n\nOf 1600 patients, the mean (SD) age was 59.8 (18.9) years and 832 (52%) were female. In **Table 2**, ROUGE and BERTScore compare the summaries with the testing set from our annotations, and SCALE score compares the summaries with the source notes. From automatic evaluation results, we observed that LLM-generated summaries had better scores than the physician summaries, such that ROUGE-2 was 0.322 vs 0.088, BERT-precision was 0.859 vs 0.796, and SCALE was 0.691 vs 0.456, suggesting the LLM-generated summaries were more similar and more detailed than the physician summaries.\n\n### **Clinical Evaluation Tasks**\n\nThe clinical evaluation results for LLM-generated summaries and physician-written summaries are shown in **Table 3** and **Table 4**. 
The mean clinical quality scores of the automated summaries are in a comparable range (4-5) to those of the physician summaries. However, the automated summaries were observed to be of lower quality compared with the physician-written summaries with regards to mean (SD) usefulness (4.04 [0.85] vs 4.36 [0.71]), completeness (4.00 [0.88] vs 4.16 [0.84]),\n\n| | Table 2. Automated Evaluation Scores, Large Language Model (LLM)–Generated and Physician-Written | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Summary type | R-1a | R-2a | R-La | BERT-p | BERT-r | SCALE |\n| LLM-generated | 0.494 | 0.322 | 0.391 | 0.859 | 0.876 | 0.691 |\n| Physician-written | 0.251 | 0.088 | 0.154 | 0.796 | 0.827 | 0.456 |\n\nAbbreviations: BERT, bidirectional encoder representations from transformers; p, precision-based scores; r, recall-based scores; R, recall-oriented understudy for gisting evaluation; SCALE, source chunking approach for large-scale inconsistency evaluation.\n\na R-1, R-2, R-L are the 3 types of recall-oriented understudy for gisting evaluation scores. Higher is better for all metrics.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 6/12", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed8.pdf" - }, - { - "text": "#### **Evaluation**\n\nIt is critical to ensure that AI systems are safe, ethical, and without bias in the clinical domain. For the proposed approach, we performed comprehensive automatic evaluations and a novel, rigorous, patient safety-focused clinical evaluation. 
The unique clinical evaluation framework was designed to (1) screen for and identify the common, specific correctness issues in LLMs observed in longform clinical summarization and (2) assess the potential patient safety implications associated with any incorrectness identified using a modified version of the World Health Organization's International Classification for Patient Safety.45\n\n### **Automated Evaluations**\n\nWe used the summarization evaluation metrics of recall-oriented understudy for gisting evaluation (ROUGE),46 bidirectional encoder representations from transformers score (BERTScore),47 and source chunking approach for large-scale inconsistency evaluation (SCALE).48 ROUGE computes the overlap of n-grams between the generated and reference summaries. For longform document summarization, the following ROUGE scores are considered to be close to the reference summaries: ROUGE-1, above 0.4; ROUGE-2, above 0.2; and ROUGE-L, above 0.3.46 BERTScore leverages the pretrained contextual embeddings from BERT and matches words to compute a similarity score for each token in the candidate sentence with each token in the reference sentence. We used SCALE,48 a natural language inference–based approach, to measure the faithfulness between the source document and the generated text. Further background is provided about SCALE in eAppendix 2 in Supplement 1.\n\n#### **Statistical Analysis**\n\nBased on prior work, 3 board certified EM physician leaders (M.M., A.F., and P.S.) 
with experience in formal quality and patient safety review processes performed retrospective reviews of ED-based EHR records of 50 individual ED patient encounters, randomly selected from the test dataset.49 Based on prior published clinical evaluations of LLM, as well as the study feasibility of using EM physician quality and patient safety leaders, 50 ED patient encounters were evaluated.50 Reviewers\n\nCBC indicates complete blood count; CMP, comprehensive metabolic panel; CTH, computed tomography of the head; EHR, electronic health record; Hct, hematocrit; Hgb, hemoglobin; HPI, history of present illness; HR, heart rate; IP, inpatient; IVF, intravenous fluid; N/V/D, nausea, vomiting, and diarrhea; RR, respiratory rate; SDU, step down unit; SPO2, peripheral capillary oxygen saturation; WBC, white blood cell; WBG, whole blood glucose.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 5/12", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed8.pdf" - }, - { - "text": "Figure 2.26. 
Range Performance", - "page_start": 186, - "page_end": 186, - "source_file": "00-80T-80.pdf" - }, - { - "text": "| Dependent variable | Utilization outcome | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | Any care | Opioids | Injection | Surgery | Diagnostic tests or imaging | Emergency room |\n| Age | | | | | | X |\n| Insurance | | | | | | X |\n| Comorbidities (CCI) | | X | | | X | |\n| Baseline disability | X | | X | X | X | X |\n| Baseline pain | | X | | | | |\n| Change in pain | X | X | | | X | X |\n| Change in disability | | | | X | | |\n| Change in 10-item OSPRO-YF | | | | X | | |\n\nTable 7 Summary of consistent individual predictors for each utilization outcome *\n\nCCI Charlson comorbidity index, OSPRO-YF Pain-related psychological distress screening tool *\n\nSignificant predictors (p < .05) for each dependent variable denoted with \"X\"\n\nservices, suggesting injection may be the most difficult service to predict with the included variable set.\n\n#### Surgery\n\nBaseline disability (OR = 3.13–3.25, p < 0.001), change in disability (OR = 3.04–3.05, p = 0.01) and change in 10-item OSPRO-YF score (OR = 1.12–1.14, p < 0.05) were consistent predictors of subsequent surgery. Notably, magnitude of prediction was comparable between change in disability and baseline disability. This was the only parsimonious model to include an OSPRO tool. In this case, an increase in pain-related psychological distress measured by the OSPRO-YF 10-item questionnaire over the first 4 weeks was associated with higher odds of surgery. The 3 predictors in this model explained just over 30% of the variance in surgery utilization.\n\n#### Diagnostic tests or imaging\n\nComorbidity index score (OR = 1.35–1.45, p < 0.05), baseline disability (OR = 2.25–2.66, p < 0.001), and baseline to 4-week change in pain intensity (OR = 3.04–3.05, p = 0.01) were significant predictors of diagnostic test or imaging utilization. 
Among these, baseline disability was the strongest predictor. In these models, higher comorbidity index, higher baseline disability and worsening pain were associated with higher odds of utilization. Together, these variables explained approximately 30% of the variance in utilization.\n\n#### Emergency room\n\nModels for emergency room use had the highest pseudo-R2 values of any individual service (0.48–0.50), but also had the largest number of predictors (8–9). Agreement between complete case and weighted models was moderate. The models converged on the following predictors: age (OR = 0.91–0.94, p < 0.05), insurance (OR = 8.99–13.15, p < 0.05), baseline disability (OR = 3.33–4.88, p < 0.001), and change in pain (OR = 1.59–1.77, p < 0.05). Higher utilization was associated with younger age, other insurance (e.g., self-pay, Worker's Compensation, or other commercial insurance) compared to private insurance, higher baseline disability and worsening of pain. In the weighted analysis, subjects with knee pain were less likely to utilize the emergency room than those with low back pain. However, this relationship was not significant (p = .06) in the complete case analysis. Of the significant predictors in both models, insurance status was the strongest individual predictor of subsequent emergency room use.\n\n#### Discussion\n\nThis study identified novel predictors for pain-related utilization outcomes following an episode of physical therapy for a primary complaint of musculoskeletal pain. The most robust finding from these analyses was that baseline disability and change in pain intensity over the first 4 weeks following physical therapy evaluation were consistent predictors of subsequent pain-related healthcare utilization in those participants that completed all follow up. Aside from these robust predictors, other individual predictors of utilization were highly outcome-specific. 
High model specificity for utilization outcomes observed in this study is consistent with a recent systematic review that found similar levels of model specificity for more traditional outcomes like pain intensity, disability and work absenteeism [14]. Across models, health-related variables were generally stronger predictors than sociodemographic factors, which is also supported by prior research [15, 16]. Additionally, there were cases when prediction models were improved for specific services (e.g. surgery, use of opioids) when considering change in pain, disability or pain-related psychological distress. A notable finding is that the OSPRO-YF had the greatest utility when used to measure change in pain-related psychological distress. Current risk prediction paradigms in musculoskeletal pain consider only baseline pain-related psychological distress. However, these results underscore the importance of", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed5.pdf" - }, - { - "text": "identifying risk for additional utilization has emerged due to the growth of cost-sharing and capitated payment models, particularly in the United States (US). As a result, many US health care services organizations have begun to prioritize early identification of individuals at risk for downstream healthcare use at the onset of treatment [10, 11]. Early risk assessment allows systems to deliver greater value by 1) focusing limited health care resources towards patients who are most in need, and 2) identifying those who may require coordination of multiple providers and services to optimize outcomes.\n\nProspective identification of risk for high subsequent healthcare utilization is a different approach to outcomes prediction for musculoskeletal pain [12, 13] and one that has not been evaluated in physical therapy settings in the US. Most existing outcomes prediction models focus on pain and disability endpoints [12–14]. 
They also concentrate on condition-specific and psychological predictors, with less attention to factors that could influence healthcare utilization more directly [15–17]. These factors include insurance, comorbidities, symptoms unrelated to the pain condition, and treatment response. As a result, predictors of pain-related healthcare utilization beyond physical therapy are unknown. A better understanding of these predictors will have significant implications for future healthcare pathway development. For instance, an influence of modifiable factors like pain-related psychological distress might imply the need to build clinical pathways that address those factors directly through physical therapist provided intervention. Additionally, understanding the relative predictive capabilities of baseline versus change estimates for modifiable factors would clarify whether prediction is improved by routinely assessing outcomes during the course of treatment (i.e. treatment monitoring) [18].\n\nThis study was undertaken in a nationwide, US cohort of patients receiving outpatient physical therapy for a primary complaint of knee, shoulder, back or neck pain. The primary aim of the analysis was to predict incidence of additional pain-related healthcare utilization in the year following the episode of physical therapy for musculoskeletal pain. We considered factors not commonly assessed in outcomes prediction for musculoskeletal pain, like insurance, comorbidities, and treatment response, as well as those more often associated with pain-related outcomes (e.g. psychological distress). This project will lead to the development of potentially novel outcome prediction models for this population in a common, non-pharmacological US healthcare setting. 
The results of this study will be particularly important in value-based payment settings where enhanced clinical decision-making drives treatment effectiveness and system efficiency.\n\n#### Methods\n\n#### Dataset and patient population\n\nThis study used data from the Orthopedic Physical Therapy – Investigative Network's (OPT-IN) Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study, a longitudinal prospective study of individuals with knee, shoulder, back or neck pain seeking Physical Therapy in the US. A convenience sample was recruited from December 2014 and December 2015 by participating OPT-IN clinics. The OPT-IN clinics that participated in data collection represented multiple geographic regions in the US including the Mideast, Southeast, Great Lakes, Rocky Mountain States and Far West, with an attempt to balance recruitment between urban and rural settings over the entire OPT-IN network. Physical therapists practicing in these clinics identified eligible participants at initial evaluation and directed them to a secure study website for the informed consent process and baseline self-report assessment. Eligibility criteria have been thoroughly reported elsewhere [19] and were intentionally broad to develop a cohort that was generalizable to those seeking physical therapy for common musculoskeletal conditions in the US. Participants completed follow-up self-reported assessments on the study website at 4 weeks, 6 months and 12 months. Participants were notified of a pending assessment by an email that directed them back to the study website to complete their follow-up assessment. For additional details of the dataset and cohort, readers are directed to the published cohort profile [19].\n\nThe primary aim of the OSPRO cohort study was to develop and validate review of systems (i.e. evidence of systemic involvement) and yellow flag (i.e. 
pain-related psychological distress) screening tools for use in outpatient orthopedic physical therapy settings. These screening tools, once validated and refined for clinical decision making, may improve the value of care delivery by accurately identifying individuals who 1) are appropriate for referral to other providers for management of non-musculoskeletal symptoms, and/or 2) would benefit from enhanced, psychologically-informed physical therapy. Early identification of individuals most appropriate for these modified pathways of care has the potential to reduce wasteful downstream health care utilization, limit the risk of unwarranted and costly care escalation, and improve clinical outcomes. Results of the primary analyses examining the predictive ability of the OSPRO tools for pain, disability, health status, and comorbidity outcomes have been previously published [20]. Pre-planned secondary analyses included prediction of persistent pain state [21] and this current analysis predicting future healthcare utilization. 
All subjects consented to participation in the study and ethics approval was granted by the University of Florida Institutional Review Board.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed5.pdf" - }, - { - "text": "Table 4 Frequency of healthcare utilization reported at 6-month and 12-month follow-up (n = 246)\n\n| Label | | Utilization reported | | | | | Utilization reported | | | | | Dichotomous indicator for any healthcare |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | at 6-month follow-up | | | | | at 12 month follow-up | | | | | utilization over entire follow-up |\n| Utilization volume | 0 | 1 | 2–5 | 5–10 | > 10 | 0 | 1 | 2–5 | 5–10 | > 10 | No | Yes |\n| Opioids | 209 | 18 | 16 | 1 | 2 | 204 | 19 | 16 | 7 | 0 | 201 | 45 |\n| Injection | 212 | 17 | 14 | 2 | 1 | 217 | 17 | 12 | 0 | 0 | 206 | 40 |\n| Surgery | 240 | 4 | 2 | 0 | 0 | 231 | 13 | 2 | 0 | 0 | 227 | 19 |\n| Diagnostic tests or imaging | 183 | 40 | 22 | 1 | 0 | 188 | 26 | 28 | 4 | 0 | 172 | 74 |\n| Emergency room | 237 | 7 | 2 | 0 | 0 | 232 | 11 | 2 | 1 | 0 | 228 | 18 |\n| Any care | | | | | | | | | | | 140 | 106 |\n\nweighted analytic models for each type of healthcare service.\n\n#### Any healthcare\n\nThe final parsimonious models for any healthcare utilization differed slightly between complete case and weighted analyses (Table 6). Included in the models were chronicity of symptoms, CCI, baseline pain, baseline disability, and change in pain from baseline to 4-week follow-up. However, only baseline disability (OR = 1.48– 2.47, p < 0.05) and change in pain (OR = 1.28–1.45, p < 0.05) were significant predictors in both models, with greater baseline disability and worsening pain associated with higher odds of any utilization.\n\n#### Utilization of individual services Opioids\n\nComorbidity index score (i.e. CCI), baseline pain and change in pain were consistent predictors between the models of opioid utilization. 
In these models, higher pain (OR = 1.70–1.76, p < 0.001), CCI (OR = 1.54–1.60, p < 0.001) and increase in pain (OR = 1.70–1.71, p < 0.001) were associated with higher odds of opioid utilization. These models explained approximately 30% of the variation in opioid use.\n\n#### Injection\n\nA combination of race, chronicity and baseline disability explained slightly more than 20% of the variance in\n\nTable 5 Overall performance of full logistic multivariate regression models (n = 246)\n\n| Dependent variable | Any care | Opioids | Injection | Surgery | Diagnostic tests or imaging | Emergency room |\n| --- | --- | --- | --- | --- | --- | --- |\n| Complete Case | | | | | | |\n| Block 1 | .258** | .274** | .292** | .226* | .250* | .400** |\n| Demographic, Clinical and Comorbidity | | | | | | |\n| Block 2 | .267 | .294 | .293 | .234 | .253 | .404 |\n| OSPRO-YF (10 items) | | | | | | |\n| OSPRO-ROS (10 items) | | | | | | |\n| Block 3 | .275 | .296 | .315 | .259 | .271 | .457 |\n| OSPRO-YF (+ 7 items) | | | | | | |\n| OSPRO-ROS (+ 13 items) | | | | | | |\n| Block 4 | .337* | .424** | .353 | .426** | .340* | .560* |\n| 4-week change | | | | | | |\n| (Pain, Disability, OSPRO-YF) | | | | | | |\n| Inverse Probability of Attrition Weighted | | | | | | |\n| Block 1 | .306** | .294** | .313** | .236 | .304** | .430** |\n| Demographic, Clinical and Comorbidity | | | | | | |\n| Block 2 | .314 | .317 | .317 | .250 | .305 | .435 |\n| OSPRO-YF (10 items) | | | | | | |\n| OSPRO-ROS (10 items) | | | | | | |\n| Block 3 | .321 | .321 | .334 | .284 | .321 | .473 |\n| OSPRO-YF (+ 7 items) | | | | | | |\n| OSPRO-ROS (+ 13 items) | | | | | | |\n| Block 4 | .382* | .448** | .373 | .464** | .389* | .611* |\n| 4-week change | | | | | | |\n| (Pain, Disability, OSPRO-YF) | | | | | | |\n\nAll values are variance explained (pseudo-R2 ); Health care utilization (dependent) variables refer to utilization during the follow-up period, whereas independent variables refer to measurements taken before the 
follow-up period *\n\np < 0.05, **p < 0.01", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed5.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed5.pdf", - "query": "What are the health consequences of musculoskeletal pain ?", - "target_page": 1, - "target_passage": "Musculoskeletal pain is a prevalent and costly health condition with far-reaching public health consequences including chronic pain, disability and opioid-related ad diction [1].", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "In a similar way, **the levels of ergonomic risks** are related with the sectoral structure of a country, determining the type of occupations and work tasks. EU-OSHA provided a detailed analysis of the prevalence of musculoskeletal disorders (MSDs) and the related risk factors in several studies on musculoskeletal diseases, for example, 'Work-related musculoskeletal disorders: why are they still so prevalent?'58\n\nAn example of the **interrelation between sectors and risks is the connection** between the sector aggregate 'Trade, transport, food/accommodation and recreation activities' and three major indicators of ergonomic burden, that is, 'Painful, tiring positions', 'Repetitive hand or arm movements', and 'Carrying or moving heavy loads'.\n\nSeven countries have a share of employees in this sector of more than 30% (Cyprus, Greece, Spain, Malta, Bulgaria, Croatia and Latvia), and many of them are present in two or three lists of countries with the highest number of responses regarding the indicators.", - "page_start": 42, - "page_end": 42, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## R E S EAR CH A R TIC L E Open Access\n\n# Prediction of healthcare utilization following an episode of physical therapy for musculoskeletal pain\n\nTrevor A. Lentz1* , Jason M. Beneciuk2,3 and Steven Z. 
George4\n\n## Abstract\n\nBackground: In the United States, value-based purchasing has created the need for healthcare systems to prospectively identify patients at risk for high healthcare utilization beyond a physical therapy episode for musculoskeletal pain. The purpose of this study was to determine predictors of pain-related healthcare utilization subsequent to an index episode of physical therapy for musculoskeletal pain.\n\nMethods: This study assessed data from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) longitudinal cohort study that recruited individuals with a primary complaint of neck, low back, knee or shoulder pain in physical therapy (n = 440). Demographics, health-related information, review of systems, comorbidity and pain-related psychological distress measures were collected at baseline evaluation. Baseline to 4-week changes in pain intensity, disability, and pain-related psychological distress were measured as treatment response variables. At 6-months and 1-year after baseline evaluation, individuals reported use of opioids, injection, surgery, diagnostic tests or imaging, and emergency room visits for their pain condition over the follow-up period. Separate prediction models were developed for any subsequent care and service-specific utilization.\n\nResults: Subsequent pain-related healthcare utilization was reported by 43% (n = 106) of the study sample that completed the 12-month follow-up (n = 246). Baseline disability and 4-week change in pain intensity were important global predictors of subsequent healthcare utilization. 
Age, insurance status, comorbidity burden, baseline pain, and 4-week changes in pain intensity, disability and pain-related psychological distress predicted specific service utilization.\n\nConclusion: In those completing follow up measures, risk of additional pain-related healthcare utilization after physical therapy was best predicted by baseline characteristics and 4-week treatment response variables for pain intensity, disability and pain-related psychological distress. These findings suggest treatment monitoring of specific response variables could enhance identification of those at risk for future healthcare utilization in addition to baseline assessment. Further study is required to determine how specific characteristics of the clinical encounter influence future utilization.\n\nKeywords: Screening, Psychological distress, Multimorbidity, Value, Treatment monitoring\n\n### Background\n\nMusculoskeletal pain is a prevalent and costly health condition with far-reaching public health consequences including chronic pain, disability and opioid-related addiction [1]. Clinical practice guidelines now recommend non-pharmacological treatment as frontline management for musculoskeletal pain, which will lead\n\nto increased utilization of services such as physical\n\n© The Author(s). 2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 
The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.\n\n* Correspondence: trevor.lentz@duke.edu 1\n\nDuke Clinical Research Institute, Duke University, 2400 Pratt Street, Durham, NC 27705, USA\n\nFull list of author information is available at the end of the article", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed5.pdf" - }, - { - "text": "224 Pega et al., 2022: Global, regional and national burden of disease attributable to 19 selected occupational risk factors for 183 countries, 2000–2016: A systematic analysis from the WHO/ILO Joint Estimates of the Workrelated Burden of Disease and Injury, here\n\n225 Kauppinen et al., 1998: Occupational exposure to carcinogens in the European Union in 1990-1993: international information system on occupational exposure to carcinogens, here CAREX Canada\n\nFevotte et al., 2011: Matgéné: A Program to Develop Job-Exposure Matrices in the General Population in France Mannetje et al., 2011: Developing a general population job-exposure matrix in the absence of sufficient exposure monitoring data\n\n226 YLDs = years lived with disability, together with YLLs = years of life lost, it composes the DALY (DALY = YLL + YLD).\n\n227 GBD 2019 Mental Disorders Collaborators, 2022: Global, regional, and national burden of 12 mental disorders in 204 countries and territories, 1990–2019: a systematic analysis from the Global Burden of Disease Study 2019, here\n\n228 WHO: Mental disorders, Key facts and\n\nIHME: Global Health Data Exchange (GHDx), here\n\n229 OECD, 2015: Sick on the Job?: Myths and Realities about Mental Health and Work\n\n230 OECD/European Union, 2018: Health at a Glance: Europe 2018: State of Health in the EU Cycle\n\n231 Andlin-Sobocki et al., 2005: Cost of disorders of the brain in Europe\n\n232 Niedhammer et al.; 2021: Update of the fractions of cardiovascular diseases and mental 
disorders attributable to psychosocial work factors in Europe, here\n\n233 Norder et al., 2017: Beyond return to work from sickness absence due to mental disorders: 5-year longitudinal study of employment status among production workers, here\n\n234 Leka & Jain, 2017: EU Compass for Action on Mental Health and Well-Being - Mental Health in the Workplace in Europe\n\n235 Musculoskeletal disorders refer to backache and/or muscular pains in shoulders, neck, upper limbs and/or lower limbs (hips, legs, knees, feet, etc.). In the medical systematic it is the IC 10 group of diseases: Diseases of the musculoskeletal system and connective tissue.\n\n236 EU-OSHA, 2019: Work-related musculoskeletal disorders: prevalence, costs and demographics in the EU 237 Graveling, 2018: Ergonomics and Musculoskeletal Disorders (MSDs) in the Workplace. A Forensic and Epidemiological Analysis\n\n238 Da Costa & Viera, 2010: Risk factors for work-related musculoskeletal disorders: a systematic review of recent longitudinal studies, here\n\n239 EU-OSHA, 2020: Work-related musculoskeletal disorders: why are they still so prevalent? Evidence from a literature review (p. 15).\n\n240 EU-OSHA, 2019: Summary - Work-related musculoskeletal disorders: prevalence, costs and demographics in the EU (p. 8).\n\n241 EU-OSHA, 2019: Work-related musculoskeletal disorders: prevalence, costs and demographics in the EU 242 Ibid., p. 174ff.\n\n243 Eurofound, 2007: Fourth European Working Conditions Survey (2005) (p. 77).\n\n244 United Nations Economic Commission for Europe (UNECE), 2015: Handbook on measuring quality of employment: A statistical framework, here\n\n245 Quinlan & Bohle, 2013: Re-invigorating industrial relations as a field of study: Changes at work, substantive working conditions and the case of OHS, here (p. 8).\n\n246 The percentages of responses to this question in the European Working Conditions Survey (EWCS, 2015) are displayed. 
Each bar shows the percentages of the four possible responses for each EU Member State, the average for the EU Member States, and the responses for Switzerland and Norway. Responses are displayed for the question below: How satisfied are you with working conditions in your main paid job? Answer options were: Not at all satisfied; Not very satisfied; Satisfied; Very satisfied. See here\n\n247 Flash Eurobarometer 398, 2014, p 2, https://www.cesi.org/wp-content/uploads/2014/04/fl_398_sum_en.pdf . The displayed Flash Eurobarometer data refer to the 'working population', with two subgroups A (employees and manual workers), and B (self-employed). In the Flash Eurobarometer sample these two groups are separated from three further groups forming the 'Not working' population These groups are: subgroups: students, retired, looking for a job.\n\n248 Ibid., p. 58.\n\n249 Eurofound, 2007: Fourth European Working Conditions Survey (2005) (pp. 77-81).", - "page_start": 149, - "page_end": 149, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "routine pain-related psychological distress monitoring throughout the early phases of rehabilitation especially if the goal is to identify risk for subsequent pain-related healthcare utilization. The implications of these collective findings are that treatment pathways may provide greater value by 1) addressing modifiable health-related variables like pain, disability and pain-related psychological distress, 2) routine monitoring of these health-related variables and 3) offering treatment alternatives that safely escalate care if needed while minimizing risk of harm and unhelpful utilization.\n\nOpioids and diagnostic tests and imaging were the two most common subsequent healthcare services utilized following physical therapy. 
Of the individuals that completed follow up and had any subsequent healthcare utilization, approximately 42% reported opioid use and 70% reported use of diagnostic tests and imaging. An important health-related predictor of these services was level of comorbidity burden. For those with high comorbidity burden and inadequate treatment response to physical therapy, use of additional diagnostic tests and imaging or low-dose opioids may be appropriate in some cases. But given the growing public health concern over opioid use and the desire to avoid unnecessary treatment driven by imaging, our results suggest the importance of considering disease burden when developing treatment pathways and healthcare policy to mitigate risk for avoidable use of these services. Interestingly, neither versions of the OSPRO-ROS predicted utilization outcomes even though it has been linked to mental health, comorbidity, and persistent pain state in other analyses [20, 21]. Systemic symptom burden is a measure of patient complexity that is related to but distinct from comorbidity burden [36, 47]. In these analyses, the chronic condition measure (i.e. the CCI) was a better predictor of utilization than symptom burden (i.e. OSPRO-ROS). The reasons for this finding are unclear but may be related to providers and patients being more likely to pursue follow-up medical care for musculoskeletal pain when known co-existing conditions are present as opposed to reporting of symptoms alone. The distinction between symptom and disease burden in defining musculoskeletal patient complexity, and its influence on clinical decision-making and outcomes, should be the subject of future research particularly related to aging populations [48].\n\nUtilization outcomes benchmarks have not been established to determine how the percentage of subsequent healthcare use in this study compares to outcomes using other health services. 
Prior studies suggest physical therapy is associated with reduced incidence of additional healthcare use compared to not using physical therapy in patients with acute low back pain [10, 49]. Some additional healthcare use is expected following physical therapy, especially among individuals that are on long-term pain management pathways due to chronic or persistent symptoms. Yet with over 40% reporting subsequent pain-related healthcare among those completing follow-up, it is apparent that opportunities exist to improve pathway selection and/or the effectiveness of physical therapy for individuals with musculoskeletal pain. This finding is particularly notable given recent efforts to define physical therapy as an effective first line, non-pharmacological treatment option against more invasive or higher risk services, such as surgery or opioid use, respectively. Predictive variables identified in this analysis can be used to develop risk models that better inform pathway selection for those seeking physical therapy for musculoskeletal pain. The precise application of these risk models, and how they inform policy and practice should be the target of future study. However, physical therapy re-design might incorporate enhanced treatment monitoring to assess ongoing risk for downstream utilization, as well as physical therapist-led interventions to more thoroughly address important modifiable factors such as pain intensity, disability and pain-related psychological distress [38]. Improved pathway selection might entail the consideration of referral to or co-treatment with other providers to more adequately address non-modifiable characteristics. 
Collectively, these approaches could improve the value of physical therapy by minimizing risk for high downstream healthcare utilization and potentially unwarranted escalation of care.\n\nThe primary strength of the study is longitudinal follow-up at multiple time points following an episode of physical therapy for a variety of musculoskeletal pain conditions. Anatomical location of pain was not a significant predictor of healthcare use in all but one model, suggesting results are widely applicable across a spectrum of musculoskeletal pain conditions. Another strength of this cohort study is the assessment of various healthcare utilization outcomes of interest for establishing health policy. When considered alongside more traditional pain- or disability-related outcomes prediction models, these findings will improve the ability of healthcare systems and providers to make decisions in value-based purchasing environments. The consideration of multiple screening tools (i.e. yellow flags and review of systems) and treatment monitoring variables is also a strength of this study as screening and systematic treatment monitoring are not routine in clinical practice. A final strength is inclusion of multiple sociodemographic, health-related and psychosocial factors as potential predictors. Healthcare outcomes and utilization exhibit emergent properties that require the consideration of multiple, competing factors to fully explain [50].", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed5.pdf" - }, - { - "text": "- 21. Beneciuk JM, Lentz TA, He Y, Wu SS, George SZ. Prediction of persistent musculoskeletal pain at 12 months: a secondary analysis of the Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study. Phys Ther. 2018;98:290–301.\n- 22. Freburger JK, Holmes GM, Agans RP, Jackman AM, Darter JD, Wallace AS, et al. The rising prevalence of chronic low back pain. Arch Intern Med. 2009; 169:251–8.\n- 23. 
Carey TS, Freburger JK, Holmes GM, Jackman A, Knauer S, Wallace A, et al. Race, care seeking, and utilization for chronic back and neck pain: population perspectives. J Pain Off J Am Pain Soc. 2010;11:343–50.\n- 24. Jensen MP, Turner JA, Romano JM, Fisher LD. Comparative reliability and validity of chronic pain intensity measures. Pain. 1999;83:157–62.\n- 25. Bolton JE. Accuracy of recall of usual pain intensity in back pain patients. Pain. 1999;83:533–9.\n- 26. Childs JD, Piva SR, Fritz JM. Responsiveness of the numeric pain rating scale in patients with low back pain. Spine. 2005;30:1331–4.\n- 27. Vernon H. The neck disability index: state-of-the-art, 1991-2008. J Manip Physiol Ther. 2008;31:491–502.\n- 28. Vernon H, Mior S. The neck disability index: a study of reliability and validity. J Manip Physiol Ther. 1991;14:409–15.\n- 29. Hudson-Cook N, Tomes-Nicholson K, Breen A. A revised Oswestry disability questionnaire. In: Roland M, Jenner J, editors. Back pain: new approaches to rehabilitation and education. New York: Manchester University Press; 1989. p. 187–204.\n- 30. Fritz JM, Irrgang JJ. A comparison of a modified Oswestry low back pain disability questionnaire and the Quebec back pain disability scale. Phys Ther. 2001;81:776–88.\n- 31. Beaton DE, Wright JG, Katz JN, Upper Extremity Collaborative Group. Development of the QuickDASH: comparison of three item-reduction approaches. J Bone Joint Surg Am. 2005;87:1038–46.\n- 32. Irrgang JJ, Anderson AF, Boland AL, Harner CD, Kurosaka M, Neyret P, et al. Development and validation of the international knee documentation committee subjective knee form. Am J Sports Med. 2001;29:600–13.\n- 33. Butera KA, Lentz TA, Beneciuk JM, George SZ. Preliminary evaluation of a modified STarT back screening tool across different musculoskeletal pain conditions. Phys Ther. 2016;96:1251–61.\n- 34. Charlson ME, Pompei P, Ales KL, MacKenzie CR. 
A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373–83.\n- 35. Katz JN, Chang LC, Sangha O, Fossel AH, Bates DW. Can comorbidity be measured by questionnaire rather than medical record review? Med Care. 1996;34:73–84.\n- 36. George SZ, Beneciuk JM, Bialosky JE, Lentz TA, Zeppieri G, Pei Q, et al. Development of a review-of-systems screening tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2015;45: 512–26.\n- 37. Lentz TA, Beneciuk JM, Bialosky JE, Zeppieri G, Dai Y, Wu SS, et al. Development of a yellow flag assessment tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2016;46:327–43.\n- 38. Beneciuk JM, Fritz JM, George SZ. The STarT back screening tool for prediction of 6-month clinical outcomes: relevance of change patterns in outpatient physical therapy settings. J Orthop Sports Phys Ther. 2014;44: 656–64.\n- 39. Myers RH. Classical and modern regression with applications. 2nd ed. Pacific Grove: Duxbury Press; 2000.\n- 40. Weuve J, Tchetgen Tchetgen EJ, Glymour MM, Beck TL, Aggarwal NT, Wilson RS, et al. Accounting for bias due to selective attrition: the example of smoking and cognitive decline. Epidemiol Camb Mass. 2012;23:119–28.\n- 41. Hernán MA, Hernández-Díaz S, Robins JM. A structural approach to selection bias. Epidemiol Camb Mass. 2004;15:615–25.\n- 42. Kent P, Keating JL, Leboeuf-Yde C. Research methods for subgrouping low back pain. BMC Med Res Methodol. 2010;10:62.\n- 43. Peduzzi P, Concato J, Kemper E, Holford TR, Feinstein AR. A simulation study of the number of events per variable in logistic regression analysis. J Clin Epidemiol. 1996;49:1373–9.\n- 44. Tabachnick BG, Fidell LS. Using multivariate statistics. 5th ed. Boston: Pearson; 2006.\n- 45. 
Green SB. How many subjects does it take to do a regression analysis. Multivar Behav Res. 1991;26:499–510.\n- 46. Harris RJ. A primer of multivariate statistics. 3rd ed. Mahwah: Psychology Press; 2001.\n- 47. Piette JD, Kerr EA. The impact of comorbid chronic conditions on diabetes care. Diabetes Care. 2006;29:725–31.\n- 48. Rice ASC, Smith BH, Blyth FM. Pain and the global burden of disease. Pain. 2016;157:791–6.\n- 49. Fritz JM, Cleland JA, Speckman M, Brennan GP, Hunter SJ. Physical therapy for acute low back pain: associations with subsequent healthcare costs. Spine. 2008;33:1800–5.\n- 50. Lentz TA, Harman JS, Marlow NM, George SZ. Application of a value model for the prevention and management of chronic musculoskeletal pain by physical therapists. Phys Ther. 2017;97:354–64.\n- 51. Sterne JAC, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ. 2009;338:b2393.\n- 52. Bishop MD, Mintken PE, Bialosky JE, Cleland JA. Patient expectations of benefit from interventions for neck pain and resulting influence on outcomes. J Orthop Sports Phys Ther. 2013;43:457–65.\n- 53. Bialosky JE, Bishop MD, Cleland JA. Individual expectation: an overlooked, but pertinent, factor in the treatment of individuals experiencing musculoskeletal pain. Phys Ther. 2010;90:1345–55.\n- 54. Hanney WJ, Masaracchio M, Liu X, Kolber MJ. The influence of physical therapy guideline adherence on healthcare utilization and costs among patients with low back pain: a systematic review of the literature. PLoS One. 2016;11:e0156799.\n- 55. Childs JD, Fritz JM, Wu SS, Flynn TW, Wainner RS, Robertson EK, et al. Implications of early and guideline adherent physical therapy for low back pain on utilization and costs. BMC Health Serv Res. 2015;15 https://doi.org/ 10.1186/s12913-015-0830-3.\n- 56. Yu S-T, Chang H-Y, Lin M-C, Lin Y-H. 
Agreement between self-reported and health insurance claims on utilization of health care: a population study. J Clin Epidemiol. 2009;62:1316–22.\n- 57. Petrou S, Murray L, Cooper P, Davidson LL. The accuracy of self-reported healthcare resource utilization in health economic studies. Int J Technol Assess Health Care. 2002;18:705–10.\n- 58. Short ME, Goetzel RZ, Pei X, Tabrizi MJ, Ozminkowski RJ, Gibson TB, et al. How accurate are self-reports? Analysis of self-reported health care utilization and absence when compared with administrative data. J Occup Environ Med. 2009;51:786–96.\n\n- \n- \n- \n- \n- \n-", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed5.pdf" - }, - { - "text": "identifying risk for additional utilization has emerged due to the growth of cost-sharing and capitated payment models, particularly in the United States (US). As a result, many US health care services organizations have begun to prioritize early identification of individuals at risk for downstream healthcare use at the onset of treatment [10, 11]. Early risk assessment allows systems to deliver greater value by 1) focusing limited health care resources towards patients who are most in need, and 2) identifying those who may require coordination of multiple providers and services to optimize outcomes.\n\nProspective identification of risk for high subsequent healthcare utilization is a different approach to outcomes prediction for musculoskeletal pain [12, 13] and one that has not been evaluated in physical therapy settings in the US. Most existing outcomes prediction models focus on pain and disability endpoints [12–14]. They also concentrate on condition-specific and psychological predictors, with less attention to factors that could influence healthcare utilization more directly [15–17]. These factors include insurance, comorbidities, symptoms unrelated to the pain condition, and treatment response. 
As a result, predictors of pain-related healthcare utilization beyond physical therapy are unknown. A better understanding of these predictors will have significant implications for future healthcare pathway development. For instance, an influence of modifiable factors like pain-related psychological distress might imply the need to build clinical pathways that address those factors directly through physical therapist provided intervention. Additionally, understanding the relative predictive capabilities of baseline versus change estimates for modifiable factors would clarify whether prediction is improved by routinely assessing outcomes during the course of treatment (i.e. treatment monitoring) [18].\n\nThis study was undertaken in a nationwide, US cohort of patients receiving outpatient physical therapy for a primary complaint of knee, shoulder, back or neck pain. The primary aim of the analysis was to predict incidence of additional pain-related healthcare utilization in the year following the episode of physical therapy for musculoskeletal pain. We considered factors not commonly assessed in outcomes prediction for musculoskeletal pain, like insurance, comorbidities, and treatment response, as well as those more often associated with pain-related outcomes (e.g. psychological distress). This project will lead to the development of potentially novel outcome prediction models for this population in a common, non-pharmacological US healthcare setting. 
The results of this study will be particularly important in value-based payment settings where enhanced clinical decision-making drives treatment effectiveness and system efficiency.\n\n#### Methods\n\n#### Dataset and patient population\n\nThis study used data from the Orthopedic Physical Therapy – Investigative Network's (OPT-IN) Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study, a longitudinal prospective study of individuals with knee, shoulder, back or neck pain seeking Physical Therapy in the US. A convenience sample was recruited from December 2014 and December 2015 by participating OPT-IN clinics. The OPT-IN clinics that participated in data collection represented multiple geographic regions in the US including the Mideast, Southeast, Great Lakes, Rocky Mountain States and Far West, with an attempt to balance recruitment between urban and rural settings over the entire OPT-IN network. Physical therapists practicing in these clinics identified eligible participants at initial evaluation and directed them to a secure study website for the informed consent process and baseline self-report assessment. Eligibility criteria have been thoroughly reported elsewhere [19] and were intentionally broad to develop a cohort that was generalizable to those seeking physical therapy for common musculoskeletal conditions in the US. Participants completed follow-up self-reported assessments on the study website at 4 weeks, 6 months and 12 months. Participants were notified of a pending assessment by an email that directed them back to the study website to complete their follow-up assessment. For additional details of the dataset and cohort, readers are directed to the published cohort profile [19].\n\nThe primary aim of the OSPRO cohort study was to develop and validate review of systems (i.e. evidence of systemic involvement) and yellow flag (i.e. 
pain-related psychological distress) screening tools for use in outpatient orthopedic physical therapy settings. These screening tools, once validated and refined for clinical decision making, may improve the value of care delivery by accurately identifying individuals who 1) are appropriate for referral to other providers for management of non-musculoskeletal symptoms, and/or 2) would benefit from enhanced, psychologically-informed physical therapy. Early identification of individuals most appropriate for these modified pathways of care has the potential to reduce wasteful downstream health care utilization, limit the risk of unwarranted and costly care escalation, and improve clinical outcomes. Results of the primary analyses examining the predictive ability of the OSPRO tools for pain, disability, health status, and comorbidity outcomes have been previously published [20]. Pre-planned secondary analyses included prediction of persistent pain state [21] and this current analysis predicting future healthcare utilization. All subjects consented to participation in the study and ethics approval was granted by the University of Florida Institutional Review Board.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed5.pdf" - }, - { - "text": "Looking at the data, it is quite obvious that the **northern and the central European countries are underrepresented** in the group of countries with the highest share of physical and ergonomic risks. The central European countries (Austria, Germany, the Netherlands, Belgium, Luxembourg and France, and the two northern European countries Denmark and Sweden) are practically not present in these lists. 
The picture changes if it is about lifting or moving of people, a consequence of the relatively larger relevance of care work in these countries.\n\n**Physical inactivity and permanent or prolonged sitting or standing** is a specific ergonomic risk with health impacts for the musculoskeletal system but also contributing to other health impacts like cardiovascular diseases, tendency to overweight and so on.60 According to ESENER 2019, the second most frequently reported risk factor in the EU27 was **prolonged sitting**. By sector, it was most frequently reported by enterprises in financial and insurance activities (92% of establishments in the sector in the EU28), information and communication (92%), and public administration (89%). On average, three to four hours of this sedentary behaviour occurs at work. In the EU, 28% of workers report that their work involves sitting almost all the time and a further 30% report sitting a quarter to three quarters of the time, and throughout Europe 18% of the workers sit more than 7.5 hours a day.\n\nAs mentioned in previous chapters, there exists a **share of workers exposed to physical risks** that is prevalent in spite of all structural and sectoral changes. Some of the structural changes of the economy, for example, from industrial production to maintenance and repair,61 might even cause higher ergonomic risks; in general it will be more difficult to use technical help tools in varying maintenance and repair situations, compared to more homogenous tasks in industry. 
Growing sectors, for example, home care of ill or elderly people, involve ergonomic risks due to transport and moving of patients and/or tiring positions.\n\n#### **OSH Barometer – Physical risks:**\n\nhttps://visualisation.osha.europa.eu/osh-barometer/working-conditions-preventions/physicalrisk/vibrations-loud-noise-and-temperature\n\n**ESENER – Data visualisation:** \n\nhttps://visualisation.osha.europa.eu/esener/en/survey/datavisualisation/2019\n\n**EU-OSHA Themes – Musculoskeletal disorders:**\n\nhttps://osha.europa.eu/en/themes/musculoskeletal-disorders\n\n# **3.3 Contract types and work locations**\n\nThe chapter deals with the impact of **non-standard types** of work on working conditions in comparison to standard work, focusing on the impact of the 'Conditions of employment' on OSH.\n\nMost studies that dealt with the **connection between the employment forms and health outcomes** and in particular safety and health aspects found significant correlations.62 A census-based study from Belgium on non-standard forms of work and mortality from Belgium concluded (2021):\n\n'*Our study, which to our knowledge is the first one to assess associations between forms of nonstandard employment and mortality using population-wide data, revealed considerable mortality inequalities within the salaried employee population in Belgium. 
Over the subsequent 13 years and three months of follow-up, certain non-standard workers were at increased risk of death compared to permanently employed workers.'*63\n\nThe **conventional non-standard types** of work start with widespread temporary (or fixed-term) work, seasonal work, casual work, remote work in different forms (at home or other places), self-employed work, family work, mobile work in transport and often in construction, domestic work, care and craft work at the places of clients, plus several types of less regular and undeclared work.\n\nHigh public awareness is directed to those types of non-standard work that are connected either to **new forms of contracts** (voucher, platform, zero-hours, etc.) or new types of work made possible by the", - "page_start": 44, - "page_end": 44, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "#### Healthcare utilization predictors\n\nWe collected potential predictors by self-reported questionnaires at initial evaluation using an online study website. Participants were directed back to the study website 4 weeks following initial evaluation to again complete questions on pain intensity, disability, and pain-related psychological distress. Change in pain intensity, disability, and pain-related psychological distress from baseline to 4 weeks were modeled as treatment response variables and included as potential predictors.\n\n#### Sociodemographic and health-related information\n\nParticipants completed a standard intake questionnaire form previously used in our clinical studies that assessed age, sex, race, and insurance provider type. This questionnaire also assessed health-related variables included anatomical region of primary pain complaint (low back, neck, shoulder, or knee) and whether the patient had undergone surgery for their primary pain complaints (yes or no). Due to small cell sizes for certain categories, race was dichotomized as white or non-white. 
For insurance type, participants were asked to choose one of the following options: private, public (Medicare and/or Medicaid), uninsured/self-pay, worker's compensation, and other/commercial insurance. Among the study sample, we observed few with no insurance (n = 7) or worker's compensation (n = 14). The study also included relatively few with 'other/commercial insurance' (n = 45). Within this group, informal assessment of these various plans suggested high heterogeneity of plan characteristics and coverage. Due to the small number of subjects in these individual insurance strata and to improve interpretability of results, we collapsed those reporting no insurance, worker's compensation and other/commercial insurance into a single category (i.e., 'Other'). Therefore, insurance type was categorized as private, public, or other (no insurance, worker's compensation, or other/commercial insurance) for purposes of analysis.\n\n#### Pain-related clinical variables\n\nPain status was determined using established definitions that account for the duration of pain and activity limitations [22, 23] using the following two questions: 1) \"How long have you been experiencing your current painful symptoms?\" and 2) \"Have you experienced ANY pain and activity limitations every day for the past 3 months?\" Responses to question 1 of \"greater than 90 days\" or responses to question 2 of \"Yes\" were used to classify patients as having persistent pain at initial evaluation.\n\n#### Pain intensity\n\nPain intensity was assessed by the numerical pain rating scale (NPRS) ranging from \"0\" (no pain) to \"10\" (worst pain imaginable) [24–26]. Participants rated their current pain intensity, as well as their best (lowest) and worst (highest) pain intensity over the past 24 h. 
Current, best and worst pain ratings were averaged for purposes of analysis.\n\n#### Region-specific disability\n\nSelf-reported region-specific disability was assessed with the Neck Disability Index [27, 28], Oswestry Disability Questionnaire [29, 30], Quick Disability of Arm Shoulder and Hand [31] or International Knee Documentation Committee Subjective Knee Form [32] for cervical, low back, shoulder and knee pain, respectively. Region-specific disability measures were z-transformed for purposes of analysis, consistent with our prior work involving multiple anatomical regions [33].\n\n#### Comorbidities\n\n#### Charlson comorbidity index (CCI)\n\nThe Charlson Comorbidity Index was used to measure the presence of chronic comorbid medical conditions [34]. It lists 19 medical conditions that participants are asked to indicate whether they \"have ever been diagnosed with by a physician\". Conditions are weighted and added for an overall measure of comorbidity burden. The CCI has demonstrated good test-retest reliability (0.91) and positive but weak to modest correlations with medication use, hospitalizations, length of stay, total charges, and pharmacy and laboratory charges for older adults in general medical care and surgical care settings [35].\n\n#### Assessment tools\n\n#### OSPRO Review of Systems tool (OSPRO-ROS)\n\nThe OSPRO-ROS is a review-of-systems screening tool for use in outpatient orthopedic physical therapy settings [36]. The OSPRO-ROS has demonstrated good concurrent validity with depression and a comprehensive 97-item battery of non-musculoskeletal symptoms (i.e., red flags). [36] Moderate to strong predictive capabilities of the OSPRO-ROS have been reported for persistence of pain, quality of life, and change in comorbidity 12 months following physical therapy in patients with musculoskeletal pain [20, 21]. 
The OSPRO-ROS includes standard symptom descriptors to aid with identification of systemic or non-musculoskeletal origins of musculoskeletal pain. It includes questions related to symptoms of the cardiovascular, gastrointestinal, endocrine, nervous, integumentary, pulmonary, and musculoskeletal systems. The full-length 23-item version of the OSPRO-ROS is capable of identifying 100% of positive red-flag responders (i.e. indicating \"yes\" to at least one systemic symptom on a questionnaire) in outpatient orthopedic physical therapy settings. [36] A shorter, 10-item version is also available that has been", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed5.pdf" - }, - { - "text": "clients' places. Also **ergonomic risks** — repetitive hand-arm movements, tiring and painful positions, lifting and carrying, and prolonged sitting — can pose major health risks, and the statistics show no significant decrease.\n\nThere is a shift of **workforce to administrative, communicative, and emotionally demanding and client-oriented sectors**, like the sectors 'Education, human health and social work activities' and 'Trade, transport, food/accommodation and recreation activities' (more human–human interaction, less human– machine interaction). Consequently, this development caused an overall **shift of risks to psychosocial and emotional challenges** and — mostly but by far not always — less physical activity. Some health risks worsen in such types of work, like work with difficult clients or long working hours. Many approaches and pilot projects have been developed to mitigate these workloads, but the implementation seems to be limited to a minority of workplaces with high awareness of work-related health issues. 
Also, since 2005, statistics and surveys find a stagnation (practically no increase and no decrease) concerning the development of **working time**, **time pressure and high workload** for workers.\n\nWhen looking at the **overall relationship between work and some major diseases** in the adult population (cardiovascular diseases, cancer, musculoskeletal disorders, pulmonary diseases, hearing loss), there is a clear connection to socioeconomic status that is a major cause of low life expectancy and high morbidity. In public health morbidity and mortality studies, a more precise analysis of impact of working conditions on health, as a very important factor of socioeconomic status, is very rare. This would require more detailed knowledge and analysis of the health impacts of occupations and work tasks and of the preventive measures at work, as well as an improvement in the detection capacities of preventive and monitoring health systems. Identification of the approximate **attributable fraction of work to diseases** is still the subject of intense scientific debate, with clearer results for some relations and less clear results for others.\n\nThe **level of implementation and enforcement** of compliance with legislation seems to stagnate. The capacities of the OSH infrastructure at national levels show a mixed picture in EU Member States. Across the EU, between 2010 and 2020, the labour inspectorates performed on average **two million labour inspections per year**, in approximately 22 million businesses. To enhance the level of implementation in terms of coverage and quality, many labour inspections tried to enhance the effectiveness of common drop-in company inspections by **smart enforcement and supervision concepts**.\n\nThere is no measurable progress in the types of **work with eroded employer–worker relations** (subcontracts, involuntary self-employed). 
The reliability of statistical monitoring fades where the employer–worker relationship is less clear (regarding aspects such as working conditions, work accidents and work-related diseases, and of compliance with legislation).\n\nMany enterprises and particularly MSEs and the self-employed very often **cannot fully comply with more complex risk prevention tasks** (e.g. psychosocial, chemical, biological, optical, electromagnetic risks) due to lack of resources, expertise and awareness (ESENER data). In general, enforcement authorities can only supervise a small percentage of enterprises, particularly not a substantial portion of MSEs, of self-employed or of non-standard types of work; some Member States included in their strategic approaches the objective to reach these enterprises/self-employed. The reason for the continued levels of intensification of work from 2005 onwards might be that the related tasks were contracted out or put on the shoulders of non-standard workers, for example, self-employed, temporary and seasonal workers.\n\nSome **EU OSH legislation** may be adapted and modernised to cope with the changes in technologies, employment conditions, longer working life, and a growing share of mobile and remote work. Many of these changes in the world of work have caused higher insecurity, less clear employer–worker relations, and a higher burden of psychosocial and ergonomic risks.\n\n#### **Which are the areas of concern?**\n\n**Incomplete compliance with OSH regulation is more noticeable in** certain sectors and types of work. Most of these types of work — mobile and home-based work, domestic work, care work and long-term domestic care work, seasonal work, platform work, non-voluntary self-employed — are growing in terms of workforce. 
But many of these work and employment formats are until now not covered in the same", - "page_start": 17, - "page_end": 17, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "#### **Women**\n\n· Most women in the EU bear the main responsibility of household work and childcare. The health risks from this non-paid work add up to the risks from their paid work; this double burden in general is not, however, considered when addressing occupational health problems faced by women. · In surveys, about 6% of women under 30 in the EU have reported sexual harassment at work (though this may be an under-estimate).\n\n· Overall, women report fewer work-related accidents than men, but higher levels of work-related health problems, including MSDs and stress (see also the overview in the EIGE 'Gender statistics database', section: Working conditions).99,100\n\n#### **Migrants**\n\nWhile a minority of migrant workers hold high-skilled jobs, many have jobs that are \"dirty, dangerous and demanding\" and consequently face high risks of work-related accidents and disease.\n\n· Language and cultural barriers also contribute to higher risks for migrant workers.\n\n· While EU-wide statistics are not available, country studies confirm that migrant workers suffer higher levels of work-related accidents and disease. 
Health and safety risks are believed to be higher for undocumented migrant workers although, because of their situation, there is a lack of data on their conditions.\n\n#### **Low-qualified workers**\n\n· Low-qualified workers are found mainly in traditional sectors, including manufacturing, agriculture, construction, wholesale and retail trades.\n\n· Very often these workers have high-risk or elementary occupations that expose them to a higher rate of injuries and health-related problems.\n\n· Low-qualified workers have less autonomy, less responsibility and overall experience less job satisfaction than workers with higher qualifications. Most low-qualified workers have low-paid jobs and many have temporary contracts.\n\n#### **Ageing workers**\n\n· Ageing workers are more at risk of occupational health problems than younger workers because they have been exposed longer to certain hazards. Older workers report more work-related health problems than younger workers, with backache and muscular pain for more than 70% of workers aged 55 and more.\n\n· Older workers are at lesser risk of non-fatal accidents because they have greater experience;\n\nhowever fatal accidents are more frequent than for younger workers.\n\n· Recovery time and return to work after illness are key issues to address when aiming to increase the employment rate of ageing workers.\n\n#### **Young workers101**\n\n· Overall, young workers have a higher rate of non-fatal injuries than older workers.\n\n· Young workers are more likely to be employed under non-standard forms of contractual arrangements such as part-time or temporary contracts.\n\n· Younger workers have less training, experience and maturity in their job, which puts them at risk of overestimating their physical capacities or underestimating the safety and health risks associated with their tasks.\n\n· A further concern is that exposure to workplace risks when young can contribute to later disease – this factor is not, however, addressed 
by worker health and safety surveillance.", - "page_start": 54, - "page_end": 54, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf", - "query": "What is Creative Commons ?", - "target_page": 2, - "target_passage": "Creative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "This is a frame from \"Twenty Years of Creative Commons (in Sixty Seconds)\" by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. View full licensing and attribution information about all works included in the video on Flickr.\n\n**Creative Commons**\n\nPO Box 1866 Mountain View CA 94042 USA\n\n+1 415 429 6753 info@creativecommons.org", - "page_start": 11, - "page_end": 11, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work — on conditions of your choice. CC licenses let you change your copyright terms from the default of \"all rights reserved\" to \"some rights reserved.\"\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. 
When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n### Public domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark. Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n#### Where public domain tools fit in the copyright spectrum\n\n# The CC0 Public Domain Dedication\n\n**Use this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.**\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. 
When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\n## What is the difference between CC0 and the Public Domain Mark?\n\nCC0 (\"CC Zero\") is intended for use only by authors or holders of copyright and related rights (including database rights), in connection\n\nwith works that are still subject to those rights in one or more countries.\n\nWhen CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. Unlike CC0, PDM doesn't\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\nchange the copyright status of a work.\n\n# Public Domain Mark\n\n**Use this tool if you have identified a work that is free of known copyright restrictions.**\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. 
Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "content repositories, like libraries, with that of AI developers. A \"books data commons\" needs to be both responsibly managed, and useful for developers of AI models.\n\nWe use \"commons\" here in the sense of a resource that is broadly shared and accessible, and thus obviates the need for each individual actor to acquire, digitize, and format their own corpus of books for AI training. This resource could be collectively and intentionally managed, though we do not mean to select a particular form of governance in this paper. 4\n\nThis paper is descriptive, rather than prescriptive, mapping possible paths to building a books data commons as defined above and key questions relevant to developers, repositories, and other stakeholders, building on our workshop discussions. We first explain why books matter for AI training and how broader access could be beneficial. We then summarize two tracks that might be considered for developing such a resource, highlighting existing projects that help foreground both the potential and challenges. Finally, we present several key design choices, and next steps that could advance further development of this approach.5\n\nIn this way, we do not use \"commons\" in the narrow sense of permissively licensed. What's more, this 4 resource could also be governed as more of a data \"trust,\" and, indeed, we discuss extensively the work of HathiTrust as a relevant project in this domain. However, our use of the word \"commons\" is not meant to preclude this or other arrangements.\n\nThere are, of course, a range of other types of texts that are not on the web and/or not digital at all - 5 e.g., periodicals, journals, government documents. 
These are out of scope for this paper, but also worthy of further analysis.", - "page_start": 2, - "page_end": 2, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "\"great colors of nature\" by marcostetter is published under Public Domain Mark 1.0.\n\n# **About Us**\n\nCreative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy. Since 2002, the CC Licenses have served as an alternative to traditional copyright, providing a simple, standardized, and legal way for individuals and institutions to freely share images, music, research, educational resources, and cultural artifacts.\n\n#### **Chief Executive Officer**\n\nAnna Tumadóttir\n\n#### **General Counsel**\n\nKat Walsh\n\n# **Board of Directors**\n\nMarta Belcher Glenn Otis Brown Delia Browne James Grimmelmann Lawrence Lessig **Emeritus* Angela Oduor Lungati Bilal Randeree Alek Tarkowski Jeni Tennison Luis Villa\n\n**Except where otherwise noted, \"Annual Report 2023\" by Creative Commons is licensed under CC BY 4.0.**", - "page_start": 1, - "page_end": 1, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# **A Note from Leadership**\n\nCC staff photos are licensed under CC BY 4.0.\n\n2023 was a busy year at Creative Commons. Our **Open Culture** program and **Open Climate Campaign** entered their third and second years, respectively. We hosted our first in-person CC Global Summit since 2019 in Mexico City. We held critical consultations and open panels on AI, copyright, and the CC Licenses, cultural heritage, education, and science; and we launched our **Open Infrastructure Circle** in an effort to ensure the CC Licenses are funded well into the future.\n\nWe also marked transitions in leadership. 
At the end of December, Catherine Stihler concluded her time as Chief Executive Officer (CEO) at Creative Commons, and I transitioned in as Interim. In March 2024, I was appointed CC's permanent CEO. I look forward to working closely with our Board of Directors, staff, and larger community on **the critical work that awaits us in 2024**.\n\n**Anna Tumadóttir, CEO**", - "page_start": 2, - "page_end": 2, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# *Acknowledgements*\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus Strategies) in collaboration with Creative Commons.\n\nWe are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/ NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. All mistakes or errors are the authors'.\n\nThis report is published under the terms of the Creative Commons Attribution License.", - "page_start": 21, - "page_end": 21, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# *7. Conclusion*\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. 
But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development.41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception — it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else — independent researchers, entrepreneurs, and smaller entities — will have access. The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. 
Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.\n\nFor other existing and past examples, one might look to the work of Europeana, https:// 41 www.europeana.eu/en, as well as the mountain of commentary on the failed class action settlement between Google, the Authors Guild, and the Association of American Publishers — see e.g. the excellent collection of court filings created by James Grimmelmann and colleagues (now archived at the Internet Archive) — https://web.archive.org/web/20140425012526/http://thepublicindex.org/. The Settlement expressly would have set up a \"Research Corpus\" for non-consumptive research. HathiTrust created a Research Center, with the intention of becoming one of the hosts for the \"Research Corpus.\" The Settlement was criticized and was ultimately rejected by the district court for both substantive reasons (that is, what the settlement would specifically do) and procedural (in the sense of violating class-action law, but also in a broader sense of representing a \"backroom deal\" without sufficient participation from impacted interests). The Research Corpus was not a core locus of critique, though it did receive concern in terms of providing too much control to Google, for example. Our purpose in mentioning this is not to relitigate the issue, but rather to call out that design decisions of this sort have been considered in the past.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## *1. Introduction*1\n\nWhile the field of artificial intelligence research and technology has a long history, broad public attention grew over the last year in light of the wide availability of new generative AI systems, including large language models (LLMs) like GPT-4, Claude, and LLaMA-2. 
These tools are developed using machine learning and other techniques that analyze large datasets of written text, and they are capable of generating text in response to a user's prompts.\n\nWhile many large language models rely on website text for training, books have also played an important role in developing and improving AI systems. Despite the widespread use of ebooks and growth of sales in that market, books remain difficult for researchers and entrepreneurs to access at scale in digital form for the purposes of training AI.\n\nIn 2023, multiple news publications reported on the availability and use of a dataset of books called \"Books3\" to train LLMs.2 The Books3 dataset contains text from over 170,000 books, which are a mix of in-copyright and out-of-copyright works. It is believed to have been originally sourced from a website that was not authorized to distribute all of the works contained in the dataset. In lawsuits brought against OpenAI, Microsoft, Meta, and Bloomberg related to their LLMs, the use of Books3 as training data was specifically cited.3\n\nThe Books3 controversy highlights a critical question at the heart of generative AI: what role do books play in training AI models, and how might digitized books be made widely accessible for the purposes of training AI? What dataset of books could be constructed and under what circumstances?\n\nIn February 2024, Creative Commons, Open Future and Proteus Strategies convened a series of workshops to investigate the concept of a responsibly designed, broadly accessible dataset of digitized books to be used in training AI models. Conducted under the Chatham House Rule, we set out to ask if there is a possible future in which a \"books data commons for AI training\" might exist, and what such a commons might look like. 
The workshops brought together practitioners on the front lines of building next-generation AI models, as well as legal and policy scholars with expertise in the copyright and licensing challenges surrounding digitized books. Our goal was also to bridge the perspective of stewards of\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus 1 Strategies) in collaboration with Creative Commons. We are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. All mistakes or errors are the authors'.\n\nSee e.g. Knibbs, Kate. \"The Battle over Books3 Could Change AI Forever.\" *Wired*, 4 Sept. 2023, 2 www.wired.com/story/battle-over-books3/.\n\nFor key documents in these cases, see the helpful compendium at \"Master List of Lawsuits v. AI, 3 ChatGPT, OpenAI, Microsoft, Meta, Midjourney & Other AI Cos.\" *Chat GPT Is Eating the World*, 27 Dec. 2023, chatgptiseatingtheworld.com/2023/12/27/master-list-of-lawsuits-v-ai-chatgpt-openai-microsoftmeta-midjourney-other-ai-cos. See also \"Fair Use Week 2024: Day Two with Guest Expert Brandon Butler.\" *Fair Use Week*, sites.harvard.edu/fair-use-week/2024/02/26/fair-use-week-2024-day-two-withguest-expert-brandon-butler/. Accessed 20 Mar. 2024 (arguing that use of this dataset is not consequential for the fair use analysis).", - "page_start": 1, - "page_end": 1, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# **Our Impact**\n\nCC believes that opening up knowledge is key to addressing the world's most pressing challenges. 
Today, we steer campaigns, programming, and training in many areas:\n\n### **Open Culture**\n\n2023 was quite a year for the CC Open Culture Program, thanks to generous funding from **Arcadia**. We grew our Open Culture team from one to two and a half staff, rolling out new initiatives like TAROC (Towards a Recommendation on Open Culture) and **Open Culture Live: A Webinar Series**. We invite you to read \"**What did Creative Commons do for Open Culture in 2023?**\" to learn more.\n\n### **Open Journalism**\n\nThanks to generous funding from the **John D. and Catherine T. MacArthur Foundation**, CC hosted its very first Open Journalism track at the CC Global Summit, including eight presentations, lightning talks, panel discussions, and workshops as well as a **keynote by Anya Kamenetz**.\n\nRepresentatives from 33 news outlets and digital rights-focused organizations attended the CC Summit sessions. The Open Journalism track built on **numerous collaborations and workshops** throughout 2023.\n\n### **Open Education**\n\nWe delivered workshops and presentations on CC Licenses and Open Educational Resources at over 16 conferences and events. The CC Open Education Platform also funded six global projects, **including work to advance the UNESCO Recommendation on OER.**\n\n\"Follow the Color Brick Road\" by Bert Kaufmann is licensed under CC BY-SA 2.0.", - "page_start": 6, - "page_end": 6, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "different rightsholders and authors. Managing opt-outs for so many different interests within one book may get overly complicated very fast.\n\nIn any event, creating an opt-out system will need some ways of authenticating whether someone has the relevant authority to make choices about inclusion of a work.\n\n## *Who would get to use the books data commons? For what?*\n\nA commons might be made publicly available to all, as has been done with datasets like The Pile. 
Another possible design choice is to restrict access only to authorized users and to enforce particular responsibilities or obligations in return for authorization. Three particular dimensions of permitted uses and users came up in our discussions:\n\n- **Defining and ensuring acceptable and ethical use:** Participants discussed to what extent restrictions should be put on use of the resource. In the case of HathiTrust, acceptable use is implicitly ensured by limiting access to researchers from member institutions; other forms of \"gated access\" are possible, allowing access only to certain types of users and for certain uses. One can imagine more fine-grained 39 mechanisms, based on a review of the purpose for which datasets are used. This imagined resource could become a useful lever to demand responsible development and use of AI; alongside \"sticks\" like legal penalties, this would be a \"carrot\" that could incentivize good behavior. At the same time, drawing the lines around, let alone enforcing, \"good behavior\" would constitute a significant challenge.\n- **Charging for use to support sustainability of the training corpus itself:** While wanting to ensure broad access to this resource, it is important to consider economic sustainability, including support for continuing to update the resource with new works and appropriate tooling for AI training. Requiring some form of payment to use the resource could support sustainability, perhaps with different requirements for different types of users (e.g., differentiating between non-commercial and commercial users, or high-volume, well-resourced users and others).40\n- **Ensuring benefits of AI are broadly shared, including with book authors or publishers:** The creation of a training resource might lower barriers to the development of AI tools, and in that way support broadly shared benefits by facilitating greater competition and mitigating concentration of power. 
On the other hand, just as concentration of technology industries is already a significant challenge, AI might not look much different, and the benefits of this resource may still simply go to a few large firms in \"winner takes all-or-most\" markets. The workshops discussed how, for instance, large commercial users might be expected to contribute to a fund that supported contributors of training data, or more generally to fund writers, to ensure everyone contributing to the development of AI benefits.\n\nFor examples of gated access to AI models, see https://huggingface.co./docs/hub/en/models-gated. 39\n\nAs an analogy, consider for instance Wikimedia Enterprise, which \"build[s] services for high-volume 40 commercial reusers of Wikimedia content\" and charges for that access. https://meta.wikimedia.org/ wiki/Wikimedia_Enterprise.", - "page_start": 18, - "page_end": 18, - "source_file": "creative_common_ai.pdf" - } - ] - }, - { - "references": { - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf", - "query": "When was the first CC licence created?", - "target_page": 4, - "target_passage": "The first CC License was created in 2002.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "# Understanding Creative Commons license\n\nbefore licensing your work\n\n## **THREE-LAYER DESIGN**\n\nCreative Commons (CC) license has three layers:\n\n- \"Legal Code\" (base layer): contains terms and conditions to be used by lawyers and legally applicable in court.\n- \"Human Readable\" (commons deeds): contain the summary of the legal code and key terms.\n- \"Machine Readable\" : contains HTML or codes for machines to recognize a work is available under a Creative Commons license.\n\n# **FOUR ELEMENTS**\n\n- BY (\"Attribution\"): users must credit the author of the work they are using.\n- SA (\"ShareAlike\"): adaptations based on this work must be licensed under the same license.\n- NC (\"NonCommercial\"): the work is only 
available to be used for\n\nND\n\nSA\n\nnoncommercial purposes.\n\n- ND (\"NoDerivative\"): reusers making cannot share adaptations of the work.\n# **SIX LICENSES**\n\n- CC BY (\"Attribution\") allows people to use the work for any purpose (even commercially and even in modified form) as long as they give attribution to the creator.\n- CC BY-SA (\"Attribution-ShareAlike\") allows people to use the work for any purpose (even commercially and even in modified form), as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-NC (\"Attribution-NonCommercial\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator.\n- CC BY-NC-SA (\"Attribution-NonCommercial-ShareAlike\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-ND (\"Attribution-NoDerivative\") allows people to use the unadapted work for any purpose (even commercially), as long as they give attribution to the creator.\n- CC BY-NC-ND (\"Attribution-NonCommercial-NoDerivative\") allows people to use the unadapted work for noncommercial purposes only, and only as long as they give attribution to the licensor.\n\n# **REMIND THAT…**\n\nCC license only applicable to the work that is within the scope of copyright law. CC license can be used when …\n\n- you want to give others permissions to freely copy and redistribute your work, and\n- you want to give others permission to freely transform, alter, or otherwise create derivative works based on your work.\n\n#### **CC LICENSE CAN'T BE USED FOR …**\n\nfair use, fair dealing, or some other limitation and exception to copyright applies the the work.\n\n### **ALSO FOR …**\n\nthe work that is already in the Public Domain. 
For those who want to waive their rights from copyright protection, use CC0 (\"CC Zero\").\n\n# **NOW, SHARE YOUR WORK!** https://creativecommons.org/choose/\n\nTexts are adapted from CC Certification for Educators. CC BY license.\n\nBY, SA, NC, ND icons, CC BY, CC BY-SA, CC BY-NC, CC BY-NC-SA, CC BY-ND, and CC BY-NC-ND buttons are trademark of Creative Commons, and subject to their policies. 3-layer design of CC license image is taken from CC Certification for Educators. CC BY license. Line, icons, and gradients are from Canva, and subject to their policies.", - "page_start": 0, - "page_end": 0, - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf" - }, - { - "text": "# **Training in how to use CC Licenses is key to their adoption.**\n\nWe offer a ten-week **CC Certificate** program that is now tailored not only to the education and library sectors, but also galleries, archives, libraries, and museums and **available in 10 languages**.\n\nAs of 2023, we've certified:\n\n### **In 2023, we greatly expanded our CC Licenses training and education offerings:**\n\n#### **19 Workshops & Trainings**\n\nwith institutions like ALA, Connecticut Humanities & State University of New York, Digital Research Alliance of Canada, and WikiConf North America.\n\n#### **2 Week-Long CC Certificate Bootcamps**\n\nfor California Community Colleges.\n\n#### **27 Webinars**\n\non topics like the basics of Open Culture, the possibilties of Open Educational Resources (OER) for business-university cooperation, and the future of CC Licenses in digital and online education.\n\n#### **12 CC Legal Open Office Hours**\n\nhosted by our legal team, providing a personalized opportunity for the CC community to ask questions about CC Licenses, open access, and sharing.", - "page_start": 4, - "page_end": 4, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "### **3.2.6 How to view licensing information**\n\nLicensing information is available for all 
datasets associated with common licences, which are supported by the Licence Assistant. When available a link to the assistant is provided on left side of a dataset page.\n\nBy clicking on the **licence name** (here: cc-by), the Licence Assistant tool is opened in a new window, displaying relevant information for this particular licence.\n\n| IROPFAN | | | Newsletter FAQ Search Contact Cookies Legal notice | English (en) | ◀ |\n| --- | --- | --- | --- | --- | --- |\n| | | European Data Portal > Datasets > Daten über Anbieter von Hochs ... | | Search site content ... | ರ |\n| 1 European Data Po | | What we do ▼ Data ▼ Pro | | | |\n| WI G | | Data · Dataset Categories Similar Datasets | Using Data - | Resources . | |\n| | | Higher Education Provider Daing Assistant | SPARQL Manager | Statistics | |\n| | | data.gov.uk | | | |\n| Licensi | | | | | |\n| | | We publish the full HESA Finance return as open data | | | |\n| | | providers for the reference of funding and requlatory | | | |\n| CC-BY | | | | | |\n| Open licer | | Distributions (21) | | | |\n| You are f | | | | Comparable licences | |\n| Deriva | | Tahle 12 Analysis of staff costs 2016/17 to 2017/18 | | · CC-BY-NC-ND4.0 | |\n| CSV Create | | | | · CC-BY-NC-SA4.0 | |\n| Distrib | | Licence: cc-by (i | | · CC-BY-NC4.0 | |\n| | | | | · CC-BY-ND4.0 | |\n| Redistr | | | | · CC-BY-SA3.0NL | |\n| Reproc CSV | | Table 1 - Consolidated statement of comprehensive | | | |\n| \"Repro | | expenditure year ended 31 July 2015/16 to 2017/18 | without limitation by sound or | · CC-BY-SA4.0 | |\n| | | | | · CC-BY3.0NL | |\n| visual r | Licence: cc-by (i | | Work, including storage of a | | |\n| protect | | | edium. 
| · CC-BY4.0 | |\n| | | | | · CCBY3.0Austria | |\n| | | | | · DL-DE-BY-NC1.0 | |\n| You are obligated to: | | | | · DL-DE-BY1.0 | |\n| | | | | · DL-DE-BY2.0 | |\n| Attribution | | | | · EUPL-1.1 | |\n| Give proper credit to the copyright holder and/or author | | | | · FR-LO | |\n| Notice | | | | · GFDL-1.1 | |\n| Keep copyright and licence notices intact | | | | · GFDL-1.2 | |\n| State Changes | | | | · GFDL-1.3 | |\n| | | Indicate which changes have been made to the original licenced work in a manner that permits attribution. | | · IODLv1.0 | |\n| | | | | · IODLv2.0 | |\n| | | | | · NLOD | |\n| | | | | · ODC-BY | |\n| | | | | · ODC-ODbL | |\n| | | | | · OGL-NC | |\n| | | | | · OGL-ROU-1.0 | |\n| | | | | · OGL1.0 | |\n| | | | | · OGL2.0 | |\n| | | | | · OGL3.0 | |\n| | | | | · PSEUL | |", - "page_start": 33, - "page_end": 33, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "# **Licenses and Public Domain Tools**\n\nThe first CC License was created in 2002. Today, we boast **six CC Licenses** and two public domain tools, setting a global standard for sharing.\n\n### **We've estimated that over 2.5 billion pieces of content were CC Licensed by the end of 2023.**\n\n\"The great growling engine of change - technology. Alvin Toffler\" by katerha is licensed under CC BY 2.0. 
Our legal and technology staff continued to make key infrastructure updates and manage daily maintenance to ensure these Licenses work for everyone.\n\n### **In 2023, we launched the Open Infrastructure Circle (OIC) to ensure consistent funding for this work.**\n\nWe're grateful to the early supporters of the OIC, including the William + Flora Hewlett Foundation, Bill & Melinda Gates Foundation, Filecoin Foundation for the Decentralized Web, Robert Wood Johnson Foundation, Chan Zuckerberg Initiative, Endless, Siegel Family Endowment, Flickr, Microsoft, and Paul and Iris Brest.", - "page_start": 3, - "page_end": 3, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work — on conditions of your choice. CC licenses let you change your copyright terms from the default of \"all rights reserved\" to \"some rights reserved.\"\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. 
When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n### Public domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark. Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n#### Where public domain tools fit in the copyright spectrum\n\n# The CC0 Public Domain Dedication\n\n**Use this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.**\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. 
When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\n## What is the difference between CC0 and the Public Domain Mark?\n\nCC0 (\"CC Zero\") is intended for use only by authors or holders of copyright and related rights (including database rights), in connection\n\nwith works that are still subject to those rights in one or more countries.\n\nWhen CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. Unlike CC0, PDM doesn't\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\nchange the copyright status of a work.\n\n# Public Domain Mark\n\n**Use this tool if you have identified a work that is free of known copyright restrictions.**\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "with. 
The vast majority of in-copyright books are out-of-print or out-of-commerce, and most are not actively managed by their rightsholders. There is no official registry of copyrighted works and their owners, and existing datasets can be incomplete or erroneous. 16\n\nAs a result, there may be no way to license the vast majority of in-copyright books, especially those that have or have had limited commercial value. Put differently, the barrier to using 17 most books is not simply to pay publishers; even if one had significant financial resources, licensing would not enable access to most works.\n\n#### **Permissively licensed works**\n\nThere are books that have been permissively licensed in an easily identifiable way, such as works placed under Creative Commons (CC) licenses. Such works explicitly allow particular uses of works subject to various responsibilities (e.g., requiring attribution by the user in their follow-on use).\n\nWhile such works could be candidates for inclusion in a books data commons, their inclusion depends on whether the license's terms can be complied with in the context of AI training. For instance, in the context of CC licensed works, there are requirements for proper attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public Domain Mark (PDM) are not licenses and do not require attribution).18\n\nSee e.g. Heald, Paul J. \"How Copyright Makes Books and Music Disappear (and How Secondary 16 Liability Rules Help Resurrect Old Songs).\" Illinois Program in Law, Behavior and Social Science Paper No. LBSS14-07 Illinois Public Law Research Paper No. 13-54 https://doi.org/10.2139/ssrn.2290181. Accessed 4 Jan. 2020, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2290181; Rosen, Rebecca J. \"Why Are so Few Books from the 20th Century Available as Ebooks?\" *The Atlantic*, 18 Mar. 2014, www.theatlantic.com/business/archive/2014/03/why-are-so-few-books-from-the-20th-centuryavailable-as-ebooks/284486/. 
See also \"Google Book Search Settlement and Access to Out of Print Books.\" *Google Public Policy Blog*, publicpolicy.googleblog.com/2009/06/google-book-searchsettlement-and.html. Accessed 20 Mar. 2024 (discussing this issue in the context of the failed classaction settlement between Google, the Authors Guild, and the Association of American Publishers). Google's final brief in the settlement proceedings notes the \"prohibitive transaction costs of identifying and locating individual Rightsholders of these largely older, out-of-print books\" — see this brief at https:// web.archive.org/web/20130112060651/http://thepublicindex.org/docs/amended_settlement/ google_final_approval_support.pdf. The Authors Guild and Association of American Publishers also justified the settlement's terms in light of the fact that \"the transaction costs involved in finding copyright owners and clearing the rights are too high\"; while they argued that most works are not truly \"orphans,\" they note that total transaction costs as a whole (including, for example, determining whether the author or publisher holds the rights and then negotiating rates) are so high as to block uses of outof-print works anyway — see this brief at https://web.archive.org/web/20130112060213/http:// thepublicindex.org/docs/amended_settlement/Supplemental_memorandum_of_law.pdf.\n\nIn the EU, the 2019 Copyright Directive introduced specific provisions on the \"use of out-of-commerce 17 works and other subject matter by cultural heritage institutions\" (Articles 8-11 CDSMD). These provisions allow cultural heritage institutions to \"make available, for non-commercial purposes, out-ofcommerce works or other subject matter permanently in their collections\". 
The limitation to noncommercial purposes means that works made available under these provisions would be of limited use in building a books data commons.\n\nFor one assessment of the difficulties of complying with the CC licenses in this context, to the extent 18 they are applicable, see Lee, K., A. Feder Cooper, & Grimmelmann, J. (2023). Talkin' 'Bout AI Generation: Copyright and the Generative AI Supply Chain. Forthcoming, *Journal of the Copyright Society* 2024. https://doi.org/10.2139/ssrn.4523551.", - "page_start": 9, - "page_end": 9, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "By clicking on the \"**Data->Licensing Assistant**\" link in the main menu, the Licence Assistant is opened in a new window, displaying relevant information of all supported licences by the tool.\n\n| | | Newsletter FAQ Search Contact Cookies Legal notice English (en) | > |\n| --- | --- | --- | --- |\n| | | Search site content ... | ರ |\n| European Data Portal > Licensing Assistant | | | |\n| 11 What we do - | Data~ Providing Data . | Using Data - Resources . | |\n| Datasets Cataloques | Metadata Quality Licensing Assistant | SPARQL Manager Statistics | |\n| Licensing Assistant | | | |\n| Data which is shared with a licence becomes Open Data. There are many licences available. | The licence assistant provides a description of the available licences. It also gives an overview | | |\n| of how to apply licences as re-publisher/distributor of Open Data and how to combine multiple | | | |\n| licences. 
| | | |\n| Please find a licence by selecting the preferred licence terms below: | | | |\n| Advanced settings | | | |\n| Obligation | Permission | Prohibition | |\n| Lesser Copyleft Attribution | Derivative Works Distribution | Commercial use | |\n| Sharealike Notice Copyleft | Reproduction Sublicensing | | |\n| State Changes | Use patent claims | | |\n| Name Terms | | | |\n| CC BY 3.0 Austria | Obligation: Attribution Permission: Derivative Works | Obligation: Notice Permission: Distribution | |\n| | Permission: Reproduction | | |\n| CC-BY 4.0 | Obligation: Attribution Permission: Derivative Works | Permission: Distribution Obligation: Notice | |\n| | Obligation: State Changes Permission: Reproduction | | |\n| CC-BY 3.0 NL | Obligation: Attribution Permission: Derivative Works | Obligation: Notice Permission: Distribution | |\n| | Permission: Reproduction | | |\n| CC-BY-NC 4.0 | Obligation: Attribution Permission: Derivative Works | Obligation: Notice | |\n| | Prohibition: Commercial use Permission: Distribution | Obligation: State Changes | |\n| | Permission: Reproduction | | |\n| CC-BY-NC-ND 4.0 | Obligation: Attribution Obligation: Notice | Prohibition: Commercial use Permission: Distribution | |\n| | Obligation: State Changes Permission: Reproduction | | |", - "page_start": 34, - "page_end": 34, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "ISBN: 978-1-78655-073-6\n\nISSN: 1756-3666\n\n© Crown copyright 2016\n\nThis publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. 
To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gsi.gov.uk.\n\nWhere we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.", - "page_start": 44, - "page_end": 44, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "Combined, these limits can enable effective foreign control of up to 46.7%.\n\nThe chief executive officer and 80% of the members of the Board of Directors of the operating licensee must be resident Canadians. There are no restrictions on the number of non-voting shares that may be held by non-Canadians at either the holding-company or licenseecompany level. Neither the Canadian carrier nor its parent may be otherwise controlled in fact by non-Canadians. Subject to appeal to the federal Cabinet, the CRTC has the jurisdiction to determine as a question of fact whether a given licensee is controlled by non-Canadians.\n\nPursuant to the Telecommunications Act and associated regulations, the same rules also apply to Canadian telecommunications carriers such as Wireless, except that there is no requirement that the chief executive officer be a resident Canadian. We believe we are in compliance with the foregoing foreign ownership and control requirements.\n\nOn June 29, 2012, Bill C-38 amending the Telecommunications Act passed into law. The amendments exempt telecommunications companies with less than 10% of total Canadian telecommunications market measured by revenue from foreign investment restrictions. 
Companies that are successful in growing their market shares in excess of 10% of total Canadian telecommunications market revenues other than by way of merger or acquisitions will continue to be exempt from the restrictions.\n\n#### WIRELESS\n\n#### **Consultation on the Renewal of Cellular and Personal Communications Services (PCS) Spectrum Licences**\n\nIn March 2011, Industry Canada released its decisions about the renewal process for cellular and PCS licences that began expiring at that time. Key things to note:\n\n- At the end of the current licence term, new cellular and PCS licences with a 20-year term will be issued to licensees that are in compliance with all licence conditions.\n- The previously existing annual fee of $0.0351 per MHz per population of the licenced area will continue to apply to all cellular and PCS licences, including those initially assigned by auction. The Minister of Industry Canada may review and amend the fees during the licence term after further consultation with licensees.\n- A determination regarding existing research and development conditions of licence was not released at that time and will be released separately. A decision has not been made to date, and until such a time, the current conditions of licence remain in effect.\n\n#### **Consultation on a Policy and Technical Framework for the 700Mhz and 2500-2690Mhz Band and Aspects Related to Commercial Mobile Spectrum**\n\nIn March 2012, Industry Canada released its policy and technical framework for the auction of spectrum in the 700 MHz and 2500–2690 MHz spectrum bands. Key things to note:\n\n- Industry Canada adopted an auction cap for the 700 MHZ (not a setaside like in the 2008 Advanced Wireless Services (AWS) spectrum auction). There are four blocks of spectrum that are considered \"prime\". Large domestic wireless carriers are restricted to a single block of prime spectrum each, while all other carriers are restricted to two blocks. 
Rogers, Bell and Telus are considered large carriers nationally. SaskTel is considered a large carrier in Saskatchewan, and MTS is considered a large carrier in Manitoba.\n- To encourage rural deployments, single carriers who win two paired blocks, or two carriers who share their two paired blocks, are required to use their 700 MHz spectrum to provide coverage to 90% of their HSPA+ territory within five years and 97% within seven years. Industry Canada will use Tier 2 licence areas for the 700Mhz auction. These are 14 large service areas covering all of Canada, and are generally the same size as individual provinces.\nIn March 2013, Industry Canada released *Licensing Framework for Mobile Broadband Services (MBS) – 700 MHz Band*. Key things to note:\n\n- Industry Canada confirmed that, for the most part, the policy and technical framework to auction spectrum in the 700 MHz band are the same as proposed in its March 14, 2012 consultation document.\n- The auction will use a combinatorial clock auction (CCA) format, where bids are made for packages of spectrum licences, rather than the simultaneous multiple round auction (SMRA) format used in the past, where bids are made on individual licences.\n- Associated entities can apply to bid separately and to have the auction cap applied individually. These bidders must demonstrate that they \"intend to separately and actively provide services\" within a given licence area for the duration of the spectrum caps (five years after licensing). Industry Canada has determined that no registered bidders were associated with each other.\n\nThe auction was initially set to begin on November 19, 2013. 
In June 2013, Industry Canada moved the application deadline to September 17, 2013, and the auction start to January 14, 2014.\n\nIn October 2013, Industry Canada released its consultation paper, seeking comments on licencing considerations related to auction format, rules and processes, as well as on conditions of licence for spectrum in the 2500–2690 MHz band. The final policy was released on January 10, 2014.\n\nKey things to note about 2500–2690 MHz spectrum policy:\n\n- Industry Canada adopted a spectrum cap (not an auction cap like in the 700 MHz auction). No carrier participating in the auction may possess more than 40 MHz of 2500–2690 MHz spectrum. Rogers is grandfathered with respect to our holdings in those situations where we already hold more than 40 MHz of this spectrum. We will not be required to return spectrum.\n- There is no special roll-out requirement for 2500–2690 MHz spectrum. A general roll-out rule will be determined in the policy.\n- The auction is set to commence on April 15, 2015.\n- The 2500MHz auction will use Tier 3 licence areas.\n\n#### **Roaming and Tower Sharing Policy**\n\nIn March 2013, Industry Canada released *Revised Frameworks for Mandatory Roaming and Antenna Tower and Site Sharing*, concluding a consultation initiated in 2012. It sets out the current rules for roaming and tower and site sharing. 
Its key terms are:\n\n- All holders of spectrum licences, radio licences and broadcasting certificates must share towers and antenna sites, where technically feasible, at commercial rates.\n- All licensees were permitted to request roaming from other licensees at commercial rates.\n- The timeframe for negotiating agreements is 60 days, after which arbitration according to Industry Canada arbitration rules will begin.\n- The roaming capabilities must provide connectivity for digital voice and data services regardless of the spectrum band or underlying technology used.", - "page_start": 71, - "page_end": 71, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "# **A Note from Leadership**\n\nCC staff photos are licensed under CC BY 4.0.\n\n2023 was a busy year at Creative Commons. Our **Open Culture** program and **Open Climate Campaign** entered their third and second years, respectively. We hosted our first in-person CC Global Summit since 2019 in Mexico City. We held critical consultations and open panels on AI, copyright, and the CC Licenses, cultural heritage, education, and science; and we launched our **Open Infrastructure Circle** in an effort to ensure the CC Licenses are funded well into the future.\n\nWe also marked transitions in leadership. At the end of December, Catherine Stihler concluded her time as Chief Executive Officer (CEO) at Creative Commons, and I transitioned in as Interim. In March 2024, I was appointed CC's permanent CEO. 
I look forward to working closely with our Board of Directors, staff, and larger community on **the critical work that awaits us in 2024**.\n\n**Anna Tumadóttir, CEO**", - "page_start": 2, - "page_end": 2, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - } - ] - }, - { - "references": { - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf", - "query": "To what subjects Creative Commons expand its work in 2023 ?", - "target_page": 8, - "target_passage": "We expanded our work in biodiversity, climate, and life sciences focused on ensuring that science research and data are open", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "This is a frame from \"Twenty Years of Creative Commons (in Sixty Seconds)\" by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. View full licensing and attribution information about all works included in the video on Flickr.\n\n**Creative Commons**\n\nPO Box 1866 Mountain View CA 94042 USA\n\n+1 415 429 6753 info@creativecommons.org", - "page_start": 11, - "page_end": 11, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# **A Note from Leadership**\n\nCC staff photos are licensed under CC BY 4.0.\n\n2023 was a busy year at Creative Commons. Our **Open Culture** program and **Open Climate Campaign** entered their third and second years, respectively. We hosted our first in-person CC Global Summit since 2019 in Mexico City. We held critical consultations and open panels on AI, copyright, and the CC Licenses, cultural heritage, education, and science; and we launched our **Open Infrastructure Circle** in an effort to ensure the CC Licenses are funded well into the future.\n\nWe also marked transitions in leadership. 
At the end of December, Catherine Stihler concluded her time as Chief Executive Officer (CEO) at Creative Commons, and I transitioned in as Interim. In March 2024, I was appointed CC's permanent CEO. I look forward to working closely with our Board of Directors, staff, and larger community on **the critical work that awaits us in 2024**.\n\n**Anna Tumadóttir, CEO**", - "page_start": 2, - "page_end": 2, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work — on conditions of your choice. CC licenses let you change your copyright terms from the default of \"all rights reserved\" to \"some rights reserved.\"\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. 
When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n### Public domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark. Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n#### Where public domain tools fit in the copyright spectrum\n\n# The CC0 Public Domain Dedication\n\n**Use this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.**\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. 
When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\n## What is the difference between CC0 and the Public Domain Mark?\n\nCC0 (\"CC Zero\") is intended for use only by authors or holders of copyright and related rights (including database rights), in connection\n\nwith works that are still subject to those rights in one or more countries.\n\nWhen CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. Unlike CC0, PDM doesn't\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\nchange the copyright status of a work.\n\n# Public Domain Mark\n\n**Use this tool if you have identified a work that is free of known copyright restrictions.**\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. 
Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "content repositories, like libraries, with that of AI developers. A \"books data commons\" needs to be both responsibly managed, and useful for developers of AI models.\n\nWe use \"commons\" here in the sense of a resource that is broadly shared and accessible, and thus obviates the need for each individual actor to acquire, digitize, and format their own corpus of books for AI training. This resource could be collectively and intentionally managed, though we do not mean to select a particular form of governance in this paper. 4\n\nThis paper is descriptive, rather than prescriptive, mapping possible paths to building a books data commons as defined above and key questions relevant to developers, repositories, and other stakeholders, building on our workshop discussions. We first explain why books matter for AI training and how broader access could be beneficial. We then summarize two tracks that might be considered for developing such a resource, highlighting existing projects that help foreground both the potential and challenges. Finally, we present several key design choices, and next steps that could advance further development of this approach.5\n\nIn this way, we do not use \"commons\" in the narrow sense of permissively licensed. What's more, this 4 resource could also be governed as more of a data \"trust,\" and, indeed, we discuss extensively the work of HathiTrust as a relevant project in this domain. However, our use of the word \"commons\" is not meant to preclude this or other arrangements.\n\nThere are, of course, a range of other types of texts that are not on the web and/or not digital at all - 5 e.g., periodicals, journals, government documents. 
These are out of scope for this paper, but also worthy of further analysis.", - "page_start": 2, - "page_end": 2, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "\"great colors of nature\" by marcostetter is published under Public Domain Mark 1.0.\n\n# **About Us**\n\nCreative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy. Since 2002, the CC Licenses have served as an alternative to traditional copyright, providing a simple, standardized, and legal way for individuals and institutions to freely share images, music, research, educational resources, and cultural artifacts.\n\n#### **Chief Executive Officer**\n\nAnna Tumadóttir\n\n#### **General Counsel**\n\nKat Walsh\n\n# **Board of Directors**\n\nMarta Belcher Glenn Otis Brown Delia Browne James Grimmelmann Lawrence Lessig **Emeritus* Angela Oduor Lungati Bilal Randeree Alek Tarkowski Jeni Tennison Luis Villa\n\n**Except where otherwise noted, \"Annual Report 2023\" by Creative Commons is licensed under CC BY 4.0.**", - "page_start": 1, - "page_end": 1, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "## *1. Introduction*1\n\nWhile the field of artificial intelligence research and technology has a long history, broad public attention grew over the last year in light of the wide availability of new generative AI systems, including large language models (LLMs) like GPT-4, Claude, and LLaMA-2. These tools are developed using machine learning and other techniques that analyze large datasets of written text, and they are capable of generating text in response to a user's prompts.\n\nWhile many large language models rely on website text for training, books have also played an important role in developing and improving AI systems. 
Despite the widespread use of ebooks and growth of sales in that market, books remain difficult for researchers and entrepreneurs to access at scale in digital form for the purposes of training AI.\n\nIn 2023, multiple news publications reported on the availability and use of a dataset of books called \"Books3\" to train LLMs.2 The Books3 dataset contains text from over 170,000 books, which are a mix of in-copyright and out-of-copyright works. It is believed to have been originally sourced from a website that was not authorized to distribute all of the works contained in the dataset. In lawsuits brought against OpenAI, Microsoft, Meta, and Bloomberg related to their LLMs, the use of Books3 as training data was specifically cited.3\n\nThe Books3 controversy highlights a critical question at the heart of generative AI: what role do books play in training AI models, and how might digitized books be made widely accessible for the purposes of training AI? What dataset of books could be constructed and under what circumstances?\n\nIn February 2024, Creative Commons, Open Future and Proteus Strategies convened a series of workshops to investigate the concept of a responsibly designed, broadly accessible dataset of digitized books to be used in training AI models. Conducted under the Chatham House Rule, we set out to ask if there is a possible future in which a \"books data commons for AI training\" might exist, and what such a commons might look like. The workshops brought together practitioners on the front lines of building next-generation AI models, as well as legal and policy scholars with expertise in the copyright and licensing challenges surrounding digitized books. Our goal was also to bridge the perspective of stewards of\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus 1 Strategies) in collaboration with Creative Commons. 
We are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. All mistakes or errors are the authors'.\n\nSee e.g. Knibbs, Kate. \"The Battle over Books3 Could Change AI Forever.\" *Wired*, 4 Sept. 2023, 2 www.wired.com/story/battle-over-books3/.\n\nFor key documents in these cases, see the helpful compendium at \"Master List of Lawsuits v. AI, 3 ChatGPT, OpenAI, Microsoft, Meta, Midjourney & Other AI Cos.\" *Chat GPT Is Eating the World*, 27 Dec. 2023, chatgptiseatingtheworld.com/2023/12/27/master-list-of-lawsuits-v-ai-chatgpt-openai-microsoftmeta-midjourney-other-ai-cos. See also \"Fair Use Week 2024: Day Two with Guest Expert Brandon Butler.\" *Fair Use Week*, sites.harvard.edu/fair-use-week/2024/02/26/fair-use-week-2024-day-two-withguest-expert-brandon-butler/. Accessed 20 Mar. 2024 (arguing that use of this dataset is not consequential for the fair use analysis).", - "page_start": 1, - "page_end": 1, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# **Our Impact**\n\nCC believes that opening up knowledge is key to addressing the world's most pressing challenges. Today, we steer campaigns, programming, and training in many areas:\n\n### **Open Culture**\n\n2023 was quite a year for the CC Open Culture Program, thanks to generous funding from **Arcadia**. We grew our Open Culture team from one to two and a half staff, rolling out new initiatives like TAROC (Towards a Recommendation on Open Culture) and **Open Culture Live: A Webinar Series**. 
We invite you to read \"**What did Creative Commons do for Open Culture in 2023?**\" to learn more.\n\n### **Open Journalism**\n\nThanks to generous funding from the **John D. and Catherine T. MacArthur Foundation**, CC hosted its very first Open Journalism track at the CC Global Summit, including eight presentations, lightning talks, panel discussions, and workshops as well as a **keynote by Anya Kamenetz**.\n\nRepresentatives from 33 news outlets and digital rights-focused organizations attended the CC Summit sessions. The Open Journalism track built on **numerous collaborations and workshops** throughout 2023.\n\n### **Open Education**\n\nWe delivered workshops and presentations on CC Licenses and Open Educational Resources at over 16 conferences and events. The CC Open Education Platform also funded six global projects, **including work to advance the UNESCO Recommendation on OER.**\n\n\"Follow the Color Brick Road\" by Bert Kaufmann is licensed under CC BY-SA 2.0.", - "page_start": 6, - "page_end": 6, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# *7. Conclusion*\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. 
For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development.41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception — it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else — independent researchers, entrepreneurs, and smaller entities — will have access. The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.\n\nFor other existing and past examples, one might look to the work of Europeana, https:// 41 www.europeana.eu/en, as well as the mountain of commentary on the failed class action settlement between Google, the Authors Guild, and the Association of American Publishers — see e.g. 
the excellent collection of court filings created by James Grimmelmann and colleagues (now archived at the Internet Archive) — https://web.archive.org/web/20140425012526/http://thepublicindex.org/. The Settlement expressly would have set up a \"Research Corpus\" for non-consumptive research. HathiTrust created a Research Center, with the intention of becoming one of the hosts for the \"Research Corpus.\" The Settlement was criticized and was ultimately rejected by the district court for both substantive reasons (that is, what the settlement would specifically do) and procedural (in the sense of violating class-action law, but also in a broader sense of representing a \"backroom deal\" without sufficient participation from impacted interests). The Research Corpus was not a core locus of critique, though it did receive concern in terms of providing too much control to Google, for example. Our purpose in mentioning this is not to relitigate the issue, but rather to call out that design decisions of this sort have been considered in the past.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# *Acknowledgements*\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus Strategies) in collaboration with Creative Commons.\n\nWe are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/ NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. 
All mistakes or errors are the authors'.\n\nThis report is published under the terms of the Creative Commons Attribution License.", - "page_start": 21, - "page_end": 21, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "#### **Table 28: Digitalisation and OSH discussed – ESENER 2019279**\n\n| Digitalisation and impacts on OSH discussed | % Establishments |\n| --- | --- |\n| Need for continuous training to keep skills updated | 77% |\n| Prolonged sitting | ୧୮୭୧ |\n| More flexibility in terms of place of work and working time | 63% |\n| Increased work intensity or time pressure | 58% |\n| Repetitive movements | 58% |\n| Information overload | 52% |\n| Blurring boundaries between work and private life | 47% |\n| Fear of job loss | 21% |\n\nActually, only 24% of surveyed establishments in ESENER 2019 reported discussing about the potential impact of digitalisation on the health and safety of workers. Of those 24% of all surveyed establishments, 77% discuss the need for continuous training to keep skills updated. The next major topics are prolonged sitting (65%) and the request for more flexibility for employees in terms of place of work and working time (63%).280\n\nSome obvious **side effects** on working conditions require political actions. In response to the rapid development of online platform work in the EU, the European Commission started several activities on how to **protect people working through digital platforms**. The new Strategic Framework on OSH aims at adapting the OSH directives on Workplace minimum requirements and Digital screen equipment.\n\nThese fast and far-reaching changes by digitalisation have also triggered **ethical concerns**. 
The High-Level Expert Group of the EU Commission on Ethics adds, referring to the development of AI: *'In an AI context, freedom of the individual for instance requires mitigation of (in) direct illegitimate coercion, threats to mental autonomy and mental health, unjustified surveillance, deception and unfair manipulation.'* 281 In a report from 2022, EU-OSHA highlighted the possible consequences of AI for worker management.282\n\n**Major environmental changes and policies** influence OSH. **The enhanced and accelerated introduction of environmental technologies is widely supported by national and EU policies.** *(Green deal283 and circular economy.*284*)* Consequently, the number of workers in these sectors will increase and impact the working conditions of many workers. Sectors/enterprises dealing with sustainable technologies grow fast, for example, decentralised and carbon-free energy production, green products, waste and recycling, green mobility and transport, and energy saving buildings' renovation. These 'green jobs' have gained a relevant and sometimes essential share in several economic areas.285\n\nSectors like **construction and crafts** will profit significantly from this development. That would also mean that sector-typical OSH risks — accident risks — will 'return'. Also new risks will emerge, a circular economy approach286 will pose additional **risks in recycling and waste treatment**, due to more handling of contaminated materials and probable exposure to more chemical contaminants and infectious biological agents.\n\nEU-OSHA summarises: *'The new technologies or working processes associated with green jobs can lead to new hazards, which call for new combinations of skills to deal with them: the \"old\" OSH knowledge cannot simply be transferred to them. 
Installing a solar water heater, for example, involves combining the skills of a roofer, a plumber and an electrician.'*287\n\nIn addition, many of the new green technologies often require new skills and new processes and might **produce unprecedented OSH risks** — for example, fire and explosion from less environmentally harmful but less safe chemicals. However, at the same time, **green technologies support risk reduction at source**, due to principles such as limitation of hazardous chemicals and materials and", - "page_start": 105, - "page_end": 105, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "TSX_KMP_2013.pdf", - "query": "From which country does Killam Properties Inc originate ?", - "target_page": 3, - "target_passage": "Killam Properties Inc. is a growth oriented Canadian real estate company.", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "# Killam properties Inc **2013 annual report**", - "page_start": 0, - "page_end": 0, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **PART II**\n\n## **Business Overview**\n\nKillam Properties Inc., based in Halifax, Nova Scotia, is one of Canada's largest residential landlords, owning, operating, managing and developing multi‑family residential and Manufactured Home Community (\"MHC\") properties. Killam's 164 apartment properties are located in Atlantic Canada's six largest urban centres and in Ontario. The Company's 35 MHCs are located in Ontario and Atlantic Canada. The value of Killam's real estate assets at December 31, 2013, was $1.5 billion. 
Killam is focused on growing its portfolio, maximizing the value of its properties and increasing FFO per share.\n\nKillam was founded in 2000, based on the recognition of an opportunity to create value through the consolidation of apartments in Atlantic Canada and MHCs across Canada. Killam's first apartment was purchased in 2002 and its first MHC was purchased in 2003. From 2002 to 2009, Killam's apartment portfolio grew through the acquisition of properties in Atlantic Canada's six largest cities, namely Halifax, Moncton, Saint John, Fredericton, St. John's and Charlottetown. Killam is now Atlantic Canada's largest residential landlord, with a 14.2% market share of the multi‑family rental units in these core markets. Killam entered the Ontario apartment market in 2010, and today owns twelve properties in the province, including assets in Toronto, Ottawa, London and Cambridge. Killam plans to expand its presence in Ontario with additional acquisitions and developments. The apartment business is Killam's largest business segment, accounting for 86% of the Company's NOI from property operations and equity income in 2013. At December 31, 2013, Killam's apartment portfolio consisted of 12,647 units.\n\nKillam complements its acquisition program with the construction of apartment buildings. During 2013, Killam completed the development of four projects totalling 282 units and commenced two additional projects in the second half of the year. Management does not expect developments to exceed 5% of the total asset base in any given year.\n\nIn addition, the Company owns MHCs, also known as land‑lease communities or trailer parks. Killam owns the land and infrastructure supporting each community and leases the lots to tenants, who own their own homes and pay Killam a monthly site rent. Killam owns 35 communities which accounted for 14% of Killam's NOI in 2013. 
During the year Killam sold ten MHC properties located in New Brunswick, allowing the Company to crystallize the value of the properties at attractive cap‑rates and use the funds to continue to grow the apartment portfolio.\n\n## **Key Performance Indicators (KPIs)**\n\nManagement measures Killam's performance based on the following KPIs:\n\n- 1. FFO per Share A standard measure of earnings for real estate entities. Management is focused on growing FFO per share on an annual basis.\n- 2. Rental Increases Management expects to achieve increases in average rental rates on an annual basis and measures the average rental increases achieved.\n- 3. Occupancy Management is focused on maximizing occupancy levels while also managing the impact of higher rents. This measure considers units rented as a percentage of total stabilized units at a point in time.\n- 4. Same Store NOI Growth This measure considers the Company's ability to increase the NOI at properties that it has owned for equivalent periods year‑over‑year, removing the impact of acquisitions, dispositions, developments and other non same store operating adjustments.\n- 5. Weighted Average Cost of Debt Killam monitors the weighted average cost of its mortgage debt and total debt.\n- 6. Debt to Total Assets Killam measures its debt levels as a percentage of total assets and works to ensure that the debt to total assets remains at a range of 55% to 65%.\n- 7. Term to Maturity Management monitors the average number of years to maturity on its debt.\n- 8. Interest Coverage Ratio A common measure of credit risk used by lenders, this measure considers Killam's ability to pay interest on outstanding debt. Generally, the higher the interest coverage ratio, the lower the credit risk.\n- 9. Debt Service Coverage Ratio A common measure of credit risk used by lenders, this measure considers Killam's ability to pay interest and principal on outstanding debt. 
Generally the higher the debt service coverage ratio, the lower the credit risk.", - "page_start": 22, - "page_end": 22, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# A Diversified Portfolio\n\nKillam has a diverse portfolio of both apartments and manufactured home communities. The apartment portfolio represents 86% of Killam's earnings and includes a variety of property types, such as high-rises, mid-rises and walk-ups, in nine urban centres across five provinces. With a wide selection of properties and price points in each city, Killam caters to a broad tenant base. Killam's 35 manufactured home communities represent 14% of earnings and are located primarily in Nova Scotia and Ontario. The manufactured home communities complement the apartment business, providing stable and predictable cash flows.\n\nS2, Halifax, Nova Scotia", - "page_start": 12, - "page_end": 12, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **Business Strategy**\n\n#### **Maximize NOI from Existing Portfolio**\n\nManagement is focused on increasing the value of its real estate portfolio by maximizing revenue and operating efficiencies. To achieve NOI growth, Killam must address three critical factors; occupancy, rental rates, and operating costs. The Company focuses on customer service, investing in its properties, leasing and marketing initiatives, and training its employees to maximize these outcomes.\n\nManagement is able to directly control approximately 40% of operating expenses, including labour costs, repairs and maintenance and property general and administrative expenses. The remaining operating costs, including utilities and property taxes, are less controllable. Killam's apartments are currently heated with a combination of natural gas, electricity and oil. Volatile oil and natural gas prices have an impact on Killam's operating costs. 
To mitigate this volatility, the Company is active in energy conservation initiatives and regularly monitors its energy usage.\n\n#### **Growth through Acquisitions**\n\nKillam is expanding its portfolio by acquiring newer, centrally located buildings and is focused on Ontario. During 2013 Killam completed $121.1 million in acquisitions, including properties in Toronto, Ottawa, Moncton and Prince Edward Island.\n\n#### **Growth through Development**\n\nKillam enhances its portfolio growth opportunities by developing properties. Killam started apartment developments in 2010 and has completed five properties to‑date, including four in 2013. Building new properties directly allows Killam to control the quality and features of the buildings, maximizes the use of excess land and eliminates the seller's profit, generating higher returns than through acquisitions. Management expects to limit development projects to approximately 5% of the balance sheet on an annual basis.\n\n#### **Investment in New Properties**\n\nIn addition to developing new properties, Killam also acquires newly constructed assets. Management believes that increasing Killam's ownership in new, high‑quality buildings will result in above‑market and long‑term demand for the Company's assets from an aging population, reduce annual capital requirements for deferred maintenance, and transform Killam's portfolio, over time, into one of the highest quality portfolios in Canada.\n\nDemand by renters for newly constructed rental apartments is strong, with high occupancy rates and above‑average rents. CMHC's Fall 2013 Halifax Rental Market Report reported 97.3% occupancy for properties built in 2000 or later, compared to 96.8% for all rental markets in the city. 
The average rent for a two‑bedroom unit in these newer buildings was $1,320 per month, compared to a market average two‑bedroom rent of $976.\n\nThe new properties added to Killam's portfolio are condo quality, providing tenants with features and amenities traditionally associated with ownership. The Company believes that demand for this type of rental accommodation will grow given an increasing number of homeowners reaching retirement age and looking for alternatives to home ownership. Killam is also attracted to the low capital spend requirements from new assets compared to older buildings, which often include significant capital investment to address deferred maintenance. Generally, the amount of annual capital to maintain a property increases as the building ages. In addition, with energy efficient features, the NOI margins are generally higher in newer buildings.\n\nWith strong demand for the acquisition of apartments over the last three years, cap‑rates have declined and the pricing differential between older and newer buildings has reduced. This enables Killam to increase the amount of newer apartments in its portfolio without paying a significant premium for quality assets.\n\n#### **Geographic Diversification**\n\nGeographic diversification in the apartment segment is a priority for Killam. With a 14.2% market share in its core markets in Atlantic Canada, Killam is the region's largest residential landlord. The maximum market share Management foresees Killam reaching in Atlantic Canada is between 15%‑18%. With Atlantic Canada representing only 4.9% of the Canadian rental market, Killam's growth opportunities increase significantly when considering assets outside Atlantic Canada.\n\nWith its strong operating platform, Killam can support a larger and more geographically diverse portfolio. The Company is actively building a portfolio in targeted Ontario markets, including Ottawa, the Greater Toronto Area, and Southwestern Ontario. 
An increased investment in Ontario, and potentially Western Canada, will increase the Company's diversification and exposure in high growth centres in Canada. Based on the Company's portfolio at year‑end, 15% of Killam's 2014 NOI will be generated in Ontario. Management has set a long‑term target of growing the amount of NOI generated outside of Atlantic Canada to 50%.\n\nIn 2013, Killam sold a portfolio of ten MHCs in New Brunswick that allowed Killam to crystallize the increased value of this portfolio at attractive cap‑rates. This creates moderate short‑term dilution but it provides the Company with funds to continue its geographic diversification by accretively growing its apartment portfolio in Ontario.", - "page_start": 28, - "page_end": 28, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# Increasing Geographic Diversification\n\nWith a home base in Halifax, Killam's roots are in Atlantic Canada and the Company has successfully grown by consolidating the residential real estate market in the region's urban centres. In order to meet its long-term growth targets and increase its investment in Canada's most dynamic real estate markets, Killam has been actively expanding its apartment portfolio in Ontario and is exploring investment opportunities in Western Canada. Since 2010, Killam has expanded its apartment target markets to include specific cities in Ontario, and has invested approximately $200 million in real estate assets in the province. Approximately 15% of Killam's 2014 net operating income is expected to be earned in Ontario. The Company has set a long-term target to earn 50% of its net operating income outside Atlantic Canada.", - "page_start": 16, - "page_end": 16, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# About Killam Properties Inc.\n\nKillam Properties Inc. is a growth oriented Canadian real estate company. We own, manage and develop multi-family residential properties in Atlantic Canada and Ontario. 
Since our first acquisition in 2002, our real estate portfolio has grown to $1.5 billion and includes 12,647 apartment units and 5,164 manufactured home community (MHC) sites. We are committed to growing Killam's earnings by maximizing the returns from our existing portfolio and expanding through acquisitions and development.\n\n# Our Mission\n\nTo have a team of caring staff deliver clean, safe, quality housing to tenants who are proud to call our properties home.\n\n> Strong **Customer** Relationships\n\nCreative **Solutions**\n\n# Our Core Values\n\nCurb **Appeal** Do the **Right** Thing\n\n**President's Letter 9 Asset Portfolio 18 MD&A 21 Financial Statements 66 Five-Year Summary 96**\n\n180 Mill Street, London, Ontario", - "page_start": 2, - "page_end": 2, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **Killam's NOI by Province**\n\nCombining apartment and MHC's, the following chart highlights the percentage of Killam's forward‑looking NOI by province based on ownership interest at December 31, 2013:\n\n## **NOI by Province**\n\n## **The Multi‑family Market Leader in Atlantic Canada**\n\nAtlantic Canada is home to 2.3 million people, approximately 43% of whom live in the six largest cities, representing Killam's core markets in the region. Killam has a 14.2% market share of apartment units in these six largest centres. The chart below highlights the apartment NOI generated from each of the key urban markets in Atlantic Canada in 2013, and Killam's market share in each.", - "page_start": 30, - "page_end": 30, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# Opportunities for Growth\n\nKillam's growth opportunities include increasing earnings of its existing portfolio and expanding the portfolio through acquisitions and development. 
Acquisitions have been an important part of Killam's growth, having completed over $1.1 billion in acquisitions since the first property was acquired in 2002. Killam began development as a complement to its acquisition program in 2010, and to-date has invested approximately $90 million in new developments. 2013 was Killam's largest year for growth since 2005, adding $191 million of properties to the portfolio, including $121 million in acquisitions and $70 million in new developments. Looking ahead to 2014, Killam has targeted a minimum of $75 million in acquisitions, and the development of two new apartment buildings totaling approximately $46 million.", - "page_start": 13, - "page_end": 13, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except share and per share amounts)*\n\n## **28. Related Party Transactions**\n\nKillam has contracted APM Construction Services Inc. (\"APM\") to act as Project Manager on the new construction project in St. John's, NL. APM was previously the Project Manager on The Plaza, which was completed in May 2013. APM is an entity controlled by a director of Killam. APM will be paid an industry standard management fee of approximately 4% of the construction costs. For the year ended December 31, 2013, Killam paid APM $0.5 million for construction management services (December 31, 2012 ‑ $0.2 million).\n\n On December 6, 2013, Killam acquired Northgate Apartments from a director of Killam for a purchase price of $3.7 million, the fair value at the acquisition date.\n\nKillam has a 50% interest in a commercial complex that houses its head office. The remaining 50% interest is owned by a Company controlled by an executive and director of Killam. 
In addition, the property manager for the commercial complex is controlled by the executive and director and is paid an industry standard property management fee.\n\n#### Key management personnel remuneration\n\nThe remuneration of directors and other key management personnel which include the Directors, President & Chief Executive Officer, Executive Vice‑President and Chief Financial Officer, and Vice‑Presidents of Killam during the years ended December 31, 2013, and 2012 was at follows:\n\n| | 2013 | 2012 |\n| --- | --- | --- |\n| Salaries, board compensation and incentives | $2,371 | $2,892 |\n| Restricted share awards | 847 | 633 |\n| Total | $3,218 | $3,525 |\n\n## **29. Subsequent Events**\n\nOn January 20, 2014, and February 18, 2014, the Company announced dividends of $0.05 per share, payable on February 17, 2014, and March 17, 2014, to shareholders of record on January 31, 2014, and February 28, 2014.", - "page_start": 94, - "page_end": 94, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# President's Letter\n\n## **Dear Shareholders,**\n\nI am pleased to review Killam's 2013 performance with you, and outline our strategy and plans for the future. We are progressing nicely with our priorities to increase the quality of our portfolio and expand geographically. In addition, we are focused on three key areas of growth for the Company: increase the value of our existing portfolio, acquire accretively and develop profitably.\n\nDuring the past year we expanded communication of our corporate strategy to reach the broader Killam community with the introduction of Killam's Core Values. These values have been inherent in the Company since our first acquisition in 2002, but had not been broadly promoted until this past year. Our Core Values (Curb Appeal, Build Community, Strong Customer Relationships, Do the Right Thing and Creative Solutions)\n\nare represented in the colourful squares you will see throughout this year's report. 
Killam employees across the Company demonstrate these values in their daily work, distinguishing Killam as a high-quality landlord. The introduction of a quarterly awards program, which recognizes employees who exemplify Killam's\n\nCore Values, enables us to celebrate these values. I have been impressed by both the number and quality of nominations. We truly have a remarkable group of employees who go above and beyond in providing exceptional service to our tenants.\n\n## **A Look Back at 2013**\n\nI would summarize 2013 as a mixed year for Killam. We were successful in achieving many of the objectives and targets we had set for ourselves, as summarized in the adjacent chart, but faced challenges that impacted our financial performance. We added $191 million in new assets to our portfolio through acquisitions and the completion of four new developments. We also enhanced our leasing and marketing programs, which allowed us to realize gains in occupancy in the second half of the year and improve our position for 2014. We further benefited from both interest and administrative cost savings in the year. These improvements were mitigated somewhat by large increases in natural gas costs in Atlantic Canada and a more competitive rental market in the Maritimes, which resulted in increased year-over-year vacancy. The challenges we faced in 2013 resulted in funds from operations (FFO) per share of $0.72, the same as Killam's 2012 FFO per share.\n\n## **Growing the Cash Flow from our Properties**\n\nWe expect to generate, on average, between 2% and 4% in net operating income (NOI) growth through our same store portfolio on an annual basis. Our same store portfolio represents properties we have owned for equivalent periods year-over-year. Due to commodity price volatility, we experienced an unexpected spike in natural gas prices in Nova Scotia and New Brunswick throughout the 2013 heating season that increased same store utility and fuel expenses by 14%. 
We were able to partially offset this unprecedented increase by managing controllable expenses to a modest 0.3% increase in the year; however, overall same store operating costs grew by 5.0%. These higher expenses more than offset a 1.8% growth in revenue, resulting in a disappointing 0.4% decline in same store NOI for the year.\n\nWe are targeting positive same store growth in 2014 of up to 2%. Year-over-year occupancy improvements and increased rental rates are expected to generate revenue growth. Increasing our leasing staff and refining our marketing and leasing process is proving effective, resulting in improved occupancy levels in many of our core markets, especially in Ontario and New Brunswick. A colder than normal winter this year (2014) is translating into increased energy consumption and continued volatility in natural gas prices in Atlantic Canada, expected to result in higher than normal heating costs. We continue to invest in energy and operational efficiencies which we expect will keep our controllable costs down throughout the year and partially offset higher heating costs.", - "page_start": 8, - "page_end": 8, - "source_file": "TSX_KMP_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "TSX_KMP_2013.pdf", - "query": "How Killam Properties Inc does increase its geographic diversification ? ", - "target_page": 5, - "target_passage": "We are increasing our geographic diversification by expanding our apartment ownership outside Atlantic Canada. 
", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# Killam properties Inc **2013 annual report**", - "page_start": 0, - "page_end": 0, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **PART II**\n\n## **Business Overview**\n\nKillam Properties Inc., based in Halifax, Nova Scotia, is one of Canada's largest residential landlords, owning, operating, managing and developing multi‑family residential and Manufactured Home Community (\"MHC\") properties. Killam's 164 apartment properties are located in Atlantic Canada's six largest urban centres and in Ontario. The Company's 35 MHCs are located in Ontario and Atlantic Canada. The value of Killam's real estate assets at December 31, 2013, was $1.5 billion. Killam is focused on growing its portfolio, maximizing the value of its properties and increasing FFO per share.\n\nKillam was founded in 2000, based on the recognition of an opportunity to create value through the consolidation of apartments in Atlantic Canada and MHCs across Canada. Killam's first apartment was purchased in 2002 and its first MHC was purchased in 2003. From 2002 to 2009, Killam's apartment portfolio grew through the acquisition of properties in Atlantic Canada's six largest cities, namely Halifax, Moncton, Saint John, Fredericton, St. John's and Charlottetown. Killam is now Atlantic Canada's largest residential landlord, with a 14.2% market share of the multi‑family rental units in these core markets. Killam entered the Ontario apartment market in 2010, and today owns twelve properties in the province, including assets in Toronto, Ottawa, London and Cambridge. Killam plans to expand its presence in Ontario with additional acquisitions and developments. The apartment business is Killam's largest business segment, accounting for 86% of the Company's NOI from property operations and equity income in 2013. 
At December 31, 2013, Killam's apartment portfolio consisted of 12,647 units.\n\nKillam complements its acquisition program with the construction of apartment buildings. During 2013, Killam completed the development of four projects totalling 282 units and commenced two additional projects in the second half of the year. Management does not expect developments to exceed 5% of the total asset base in any given year.\n\nIn addition, the Company owns MHCs, also known as land‑lease communities or trailer parks. Killam owns the land and infrastructure supporting each community and leases the lots to tenants, who own their own homes and pay Killam a monthly site rent. Killam owns 35 communities which accounted for 14% of Killam's NOI in 2013. During the year Killam sold ten MHC properties located in New Brunswick, allowing the Company to crystallize the value of the properties at attractive cap‑rates and use the funds to continue to grow the apartment portfolio.\n\n## **Key Performance Indicators (KPIs)**\n\nManagement measures Killam's performance based on the following KPIs:\n\n- 1. FFO per Share A standard measure of earnings for real estate entities. Management is focused on growing FFO per share on an annual basis.\n- 2. Rental Increases Management expects to achieve increases in average rental rates on an annual basis and measures the average rental increases achieved.\n- 3. Occupancy Management is focused on maximizing occupancy levels while also managing the impact of higher rents. This measure considers units rented as a percentage of total stabilized units at a point in time.\n- 4. Same Store NOI Growth This measure considers the Company's ability to increase the NOI at properties that it has owned for equivalent periods year‑over‑year, removing the impact of acquisitions, dispositions, developments and other non same store operating adjustments.\n- 5. 
Weighted Average Cost of Debt Killam monitors the weighted average cost of its mortgage debt and total debt.\n- 6. Debt to Total Assets Killam measures its debt levels as a percentage of total assets and works to ensure that the debt to total assets remains at a range of 55% to 65%.\n- 7. Term to Maturity Management monitors the average number of years to maturity on its debt.\n- 8. Interest Coverage Ratio A common measure of credit risk used by lenders, this measure considers Killam's ability to pay interest on outstanding debt. Generally, the higher the interest coverage ratio, the lower the credit risk.\n- 9. Debt Service Coverage Ratio A common measure of credit risk used by lenders, this measure considers Killam's ability to pay interest and principal on outstanding debt. Generally the higher the debt service coverage ratio, the lower the credit risk.", - "page_start": 22, - "page_end": 22, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# Increasing Geographic Diversification\n\nWith a home base in Halifax, Killam's roots are in Atlantic Canada and the Company has successfully grown by consolidating the residential real estate market in the region's urban centres. In order to meet its long-term growth targets and increase its investment in Canada's most dynamic real estate markets, Killam has been actively expanding its apartment portfolio in Ontario and is exploring investment opportunities in Western Canada. Since 2010, Killam has expanded its apartment target markets to include specific cities in Ontario, and has invested approximately $200 million in real estate assets in the province. Approximately 15% of Killam's 2014 net operating income is expected to be earned in Ontario. 
The Company has set a long-term target to earn 50% of its net operating income outside Atlantic Canada.", - "page_start": 16, - "page_end": 16, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# Opportunities for Growth\n\nKillam's growth opportunities include increasing earnings of its existing portfolio and expanding the portfolio through acquisitions and development. Acquisitions have been an important part of Killam's growth, having completed over $1.1 billion in acquisitions since the first property was acquired in 2002. Killam began development as a complement to its acquisition program in 2010, and to-date has invested approximately $90 million in new developments. 2013 was Killam's largest year for growth since 2005, adding $191 million of properties to the portfolio, including $121 million in acquisitions and $70 million in new developments. Looking ahead to 2014, Killam has targeted a minimum of $75 million in acquisitions, and the development of two new apartment buildings totaling approximately $46 million.", - "page_start": 13, - "page_end": 13, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **Business Strategy**\n\n#### **Maximize NOI from Existing Portfolio**\n\nManagement is focused on increasing the value of its real estate portfolio by maximizing revenue and operating efficiencies. To achieve NOI growth, Killam must address three critical factors; occupancy, rental rates, and operating costs. The Company focuses on customer service, investing in its properties, leasing and marketing initiatives, and training its employees to maximize these outcomes.\n\nManagement is able to directly control approximately 40% of operating expenses, including labour costs, repairs and maintenance and property general and administrative expenses. The remaining operating costs, including utilities and property taxes, are less controllable. 
Killam's apartments are currently heated with a combination of natural gas, electricity and oil. Volatile oil and natural gas prices have an impact on Killam's operating costs. To mitigate this volatility, the Company is active in energy conservation initiatives and regularly monitors its energy usage.\n\n#### **Growth through Acquisitions**\n\nKillam is expanding its portfolio by acquiring newer, centrally located buildings and is focused on Ontario. During 2013 Killam completed $121.1 million in acquisitions, including properties in Toronto, Ottawa, Moncton and Prince Edward Island.\n\n#### **Growth through Development**\n\nKillam enhances its portfolio growth opportunities by developing properties. Killam started apartment developments in 2010 and has completed five properties to‑date, including four in 2013. Building new properties directly allows Killam to control the quality and features of the buildings, maximizes the use of excess land and eliminates the seller's profit, generating higher returns than through acquisitions. Management expects to limit development projects to approximately 5% of the balance sheet on an annual basis.\n\n#### **Investment in New Properties**\n\nIn addition to developing new properties, Killam also acquires newly constructed assets. Management believes that increasing Killam's ownership in new, high‑quality buildings will result in above‑market and long‑term demand for the Company's assets from an aging population, reduce annual capital requirements for deferred maintenance, and transform Killam's portfolio, over time, into one of the highest quality portfolios in Canada.\n\nDemand by renters for newly constructed rental apartments is strong, with high occupancy rates and above‑average rents. CMHC's Fall 2013 Halifax Rental Market Report reported 97.3% occupancy for properties built in 2000 or later, compared to 96.8% for all rental markets in the city. 
The average rent for a two‑bedroom unit in these newer buildings was $1,320 per month, compared to a market average two‑bedroom rent of $976.\n\nThe new properties added to Killam's portfolio are condo quality, providing tenants with features and amenities traditionally associated with ownership. The Company believes that demand for this type of rental accommodation will grow given an increasing number of homeowners reaching retirement age and looking for alternatives to home ownership. Killam is also attracted to the low capital spend requirements from new assets compared to older buildings, which often include significant capital investment to address deferred maintenance. Generally, the amount of annual capital to maintain a property increases as the building ages. In addition, with energy efficient features, the NOI margins are generally higher in newer buildings.\n\nWith strong demand for the acquisition of apartments over the last three years, cap‑rates have declined and the pricing differential between older and newer buildings has reduced. This enables Killam to increase the amount of newer apartments in its portfolio without paying a significant premium for quality assets.\n\n#### **Geographic Diversification**\n\nGeographic diversification in the apartment segment is a priority for Killam. With a 14.2% market share in its core markets in Atlantic Canada, Killam is the region's largest residential landlord. The maximum market share Management foresees Killam reaching in Atlantic Canada is between 15%‑18%. With Atlantic Canada representing only 4.9% of the Canadian rental market, Killam's growth opportunities increase significantly when considering assets outside Atlantic Canada.\n\nWith its strong operating platform, Killam can support a larger and more geographically diverse portfolio. The Company is actively building a portfolio in targeted Ontario markets, including Ottawa, the Greater Toronto Area, and Southwestern Ontario. 
An increased investment in Ontario, and potentially Western Canada, will increase the Company's diversification and exposure in high growth centres in Canada. Based on the Company's portfolio at year‑end, 15% of Killam's 2014 NOI will be generated in Ontario. Management has set a long‑term target of growing the amount of NOI generated outside of Atlantic Canada to 50%.\n\nIn 2013, Killam sold a portfolio of ten MHCs in New Brunswick that allowed Killam to crystallize the increased value of this portfolio at attractive cap‑rates. This creates moderate short‑term dilution but it provides the Company with funds to continue its geographic diversification by accretively growing its apartment portfolio in Ontario.", - "page_start": 28, - "page_end": 28, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# President's Letter\n\n## **Dear Shareholders,**\n\nI am pleased to review Killam's 2013 performance with you, and outline our strategy and plans for the future. We are progressing nicely with our priorities to increase the quality of our portfolio and expand geographically. In addition, we are focused on three key areas of growth for the Company: increase the value of our existing portfolio, acquire accretively and develop profitably.\n\nDuring the past year we expanded communication of our corporate strategy to reach the broader Killam community with the introduction of Killam's Core Values. These values have been inherent in the Company since our first acquisition in 2002, but had not been broadly promoted until this past year. Our Core Values (Curb Appeal, Build Community, Strong Customer Relationships, Do the Right Thing and Creative Solutions)\n\nare represented in the colourful squares you will see throughout this year's report. Killam employees across the Company demonstrate these values in their daily work, distinguishing Killam as a high-quality landlord. 
The introduction of a quarterly awards program, which recognizes employees who exemplify Killam's\n\nCore Values, enables us to celebrate these values. I have been impressed by both the number and quality of nominations. We truly have a remarkable group of employees who go above and beyond in providing exceptional service to our tenants.\n\n## **A Look Back at 2013**\n\nI would summarize 2013 as a mixed year for Killam. We were successful in achieving many of the objectives and targets we had set for ourselves, as summarized in the adjacent chart, but faced challenges that impacted our financial performance. We added $191 million in new assets to our portfolio through acquisitions and the completion of four new developments. We also enhanced our leasing and marketing programs, which allowed us to realize gains in occupancy in the second half of the year and improve our position for 2014. We further benefited from both interest and administrative cost savings in the year. These improvements were mitigated somewhat by large increases in natural gas costs in Atlantic Canada and a more competitive rental market in the Maritimes, which resulted in increased year-over-year vacancy. The challenges we faced in 2013 resulted in funds from operations (FFO) per share of $0.72, the same as Killam's 2012 FFO per share.\n\n## **Growing the Cash Flow from our Properties**\n\nWe expect to generate, on average, between 2% and 4% in net operating income (NOI) growth through our same store portfolio on an annual basis. Our same store portfolio represents properties we have owned for equivalent periods year-over-year. Due to commodity price volatility, we experienced an unexpected spike in natural gas prices in Nova Scotia and New Brunswick throughout the 2013 heating season that increased same store utility and fuel expenses by 14%. 
We were able to partially offset this unprecedented increase by managing controllable expenses to a modest 0.3% increase in the year; however, overall same store operating costs grew by 5.0%. These higher expenses more than offset a 1.8% growth in revenue, resulting in a disappointing 0.4% decline in same store NOI for the year.\n\nWe are targeting positive same store growth in 2014 of up to 2%. Year-over-year occupancy improvements and increased rental rates are expected to generate revenue growth. Increasing our leasing staff and refining our marketing and leasing process is proving effective, resulting in improved occupancy levels in many of our core markets, especially in Ontario and New Brunswick. A colder than normal winter this year (2014) is translating into increased energy consumption and continued volatility in natural gas prices in Atlantic Canada, expected to result in higher than normal heating costs. We continue to invest in energy and operational efficiencies which we expect will keep our controllable costs down throughout the year and partially offset higher heating costs.", - "page_start": 8, - "page_end": 8, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# A Diversified Portfolio\n\nKillam has a diverse portfolio of both apartments and manufactured home communities. The apartment portfolio represents 86% of Killam's earnings and includes a variety of property types, such as high-rises, mid-rises and walk-ups, in nine urban centres across five provinces. With a wide selection of properties and price points in each city, Killam caters to a broad tenant base. Killam's 35 manufactured home communities represent 14% of earnings and are located primarily in Nova Scotia and Ontario. 
The manufactured home communities complement the apartment business, providing stable and predictable cash flows.\n\nS2, Halifax, Nova Scotia", - "page_start": 12, - "page_end": 12, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **Killam's NOI by Province**\n\nCombining apartment and MHC's, the following chart highlights the percentage of Killam's forward‑looking NOI by province based on ownership interest at December 31, 2013:\n\n## **NOI by Province**\n\n## **The Multi‑family Market Leader in Atlantic Canada**\n\nAtlantic Canada is home to 2.3 million people, approximately 43% of whom live in the six largest cities, representing Killam's core markets in the region. Killam has a 14.2% market share of apartment units in these six largest centres. The chart below highlights the apartment NOI generated from each of the key urban markets in Atlantic Canada in 2013, and Killam's market share in each.", - "page_start": 30, - "page_end": 30, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# About Killam Properties Inc.\n\nKillam Properties Inc. is a growth oriented Canadian real estate company. We own, manage and develop multi-family residential properties in Atlantic Canada and Ontario. Since our first acquisition in 2002, our real estate portfolio has grown to $1.5 billion and includes 12,647 apartment units and 5,164 manufactured home community (MHC) sites. 
We are committed to growing Killam's earnings by maximizing the returns from our existing portfolio and expanding through acquisitions and development.\n\n# Our Mission\n\nTo have a team of caring staff deliver clean, safe, quality housing to tenants who are proud to call our properties home.\n\n> Strong **Customer** Relationships\n\nCreative **Solutions**\n\n# Our Core Values\n\nCurb **Appeal** Do the **Right** Thing\n\n**President's Letter 9 Asset Portfolio 18 MD&A 21 Financial Statements 66 Five-Year Summary 96**\n\n180 Mill Street, London, Ontario", - "page_start": 2, - "page_end": 2, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n#### **Continued Geographic Expansion in Ontario**\n\nKillam acquired two buildings in Ontario during 2013 including a 102‑unit building located in Ottawa for $10.4 million as well as a newly constructed, 8‑storey, mixed‑use complex containing 21,242 square feet of street level retail (TD Bank, Shoppers Drug Mart and Tim Hortons) and 179 apartment units in downtown Toronto for $40.0 million. With the completion of these two acquisitions, Killam's future NOI generated from its Ontario properties is expected to increase to 15.0% from 7.5%.\n\n#### **Reduced Cap‑Rate Compression in 2013**\n\nDuring 2013 Killam recorded $13.1 million in fair value gains related to its portfolio compared to $37.7 million in 2012. This decrease year‑over‑year was driven by a combination of reduced cap‑rate compression in 2013 and a slight uptick in cap‑rates of 25 bps in the Saint John market in the fourth quarter of 2013. The net gain in real estate valuations does not impact the Company's FFO per share, its key measure of performance.\n\n#### **Dividend Increase**\n\nOn December 23, 2013, Killam announced an increase in its annual dividend by 3.4% to $0.60 per share from $0.58 per share. 
The increase reflects Management's expectation of earning's growth to be generated in 2014.\n\n## **Performance Compared to 2013 Key Objectives**\n\n| Consolidation of Multi‑family Residential Real Estate Market | |\n| --- | --- |\n| 2013 Target Complete approximately $75‑$125 million in acquisitions. | |\n| 2013 Performance | Killam completed $121.1 million in acquisitions in 2013 which includes $112.8 million in apartment |\n| acquisitions, $1.4 million for 65 MHC sites and $6.9 million in vacant land for future developments. | |\n| Increase Investment in New Properties | |\n| 2013 Target | Focus on newer properties as part of the acquisition program in 2013. Complete and lease‑up Killam's four |\n| developments, and commence two new development projects. | |\n| 2013 Performance | During 2013 Killam acquired 552 units which were constructed after 2001, representing 74% of the total |\n| units added to the portfolio during the year. The acquisitions included three buildings constructed in 2013, | |\n| an 83‑unit luxury building in Halifax, a 48‑unit building in Moncton, and a 179‑unit building on Queen Street | |\n| West in Toronto. | |\n| The Company also completed the construction of four development projects totaling 282 units during | |\n| the first half of the year. These buildings were all ready for occupancy by the beginning of May 2013 with | |\n| lease‑up periods varying by project. Bennett House and Brighton House were fully leased within three | |\n| months of opening while the S2 and The Plaza are currently 62% and 61% leased. Both properties are | |\n| expected to be substantially leased by mid‑2014. | |\n| Killam commenced two new development projects during the year. Development started on a 101‑unit | |\n| project in St. John's in Q3‑2013 and a 122‑unit project in Cambridge broke ground in December 2013. Please | |\n| refer to the Investment Properties Under Construction section of the MD&A on page 49 for further details on | |\n| these projects. 
| |", - "page_start": 25, - "page_end": 25, - "source_file": "TSX_KMP_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "TSX_KMP_2013.pdf", - "query": "What is the Killam Properties Inc 2013 performance about the Geographic Diversification objective ?", - "target_page": 8, - "target_passage": "Target achieved. Killam acquired $55 million in Ontario real estate in 2013, representing 45% of its acquisition program in the year. Assets acquired included a 102-unit property in Ottawa, a newly built, 179-unit, mixed-used property in downtown Toronto and a 5.2 acre parcel of land for development in Cambridge, Ontario. ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# Killam properties Inc **2013 annual report**", - "page_start": 0, - "page_end": 0, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **PART II**\n\n## **Business Overview**\n\nKillam Properties Inc., based in Halifax, Nova Scotia, is one of Canada's largest residential landlords, owning, operating, managing and developing multi‑family residential and Manufactured Home Community (\"MHC\") properties. Killam's 164 apartment properties are located in Atlantic Canada's six largest urban centres and in Ontario. The Company's 35 MHCs are located in Ontario and Atlantic Canada. The value of Killam's real estate assets at December 31, 2013, was $1.5 billion. Killam is focused on growing its portfolio, maximizing the value of its properties and increasing FFO per share.\n\nKillam was founded in 2000, based on the recognition of an opportunity to create value through the consolidation of apartments in Atlantic Canada and MHCs across Canada. Killam's first apartment was purchased in 2002 and its first MHC was purchased in 2003. 
From 2002 to 2009, Killam's apartment portfolio grew through the acquisition of properties in Atlantic Canada's six largest cities, namely Halifax, Moncton, Saint John, Fredericton, St. John's and Charlottetown. Killam is now Atlantic Canada's largest residential landlord, with a 14.2% market share of the multi‑family rental units in these core markets. Killam entered the Ontario apartment market in 2010, and today owns twelve properties in the province, including assets in Toronto, Ottawa, London and Cambridge. Killam plans to expand its presence in Ontario with additional acquisitions and developments. The apartment business is Killam's largest business segment, accounting for 86% of the Company's NOI from property operations and equity income in 2013. At December 31, 2013, Killam's apartment portfolio consisted of 12,647 units.\n\nKillam complements its acquisition program with the construction of apartment buildings. During 2013, Killam completed the development of four projects totalling 282 units and commenced two additional projects in the second half of the year. Management does not expect developments to exceed 5% of the total asset base in any given year.\n\nIn addition, the Company owns MHCs, also known as land‑lease communities or trailer parks. Killam owns the land and infrastructure supporting each community and leases the lots to tenants, who own their own homes and pay Killam a monthly site rent. Killam owns 35 communities which accounted for 14% of Killam's NOI in 2013. During the year Killam sold ten MHC properties located in New Brunswick, allowing the Company to crystallize the value of the properties at attractive cap‑rates and use the funds to continue to grow the apartment portfolio.\n\n## **Key Performance Indicators (KPIs)**\n\nManagement measures Killam's performance based on the following KPIs:\n\n- 1. FFO per Share A standard measure of earnings for real estate entities. 
Management is focused on growing FFO per share on an annual basis.\n- 2. Rental Increases Management expects to achieve increases in average rental rates on an annual basis and measures the average rental increases achieved.\n- 3. Occupancy Management is focused on maximizing occupancy levels while also managing the impact of higher rents. This measure considers units rented as a percentage of total stabilized units at a point in time.\n- 4. Same Store NOI Growth This measure considers the Company's ability to increase the NOI at properties that it has owned for equivalent periods year‑over‑year, removing the impact of acquisitions, dispositions, developments and other non same store operating adjustments.\n- 5. Weighted Average Cost of Debt Killam monitors the weighted average cost of its mortgage debt and total debt.\n- 6. Debt to Total Assets Killam measures its debt levels as a percentage of total assets and works to ensure that the debt to total assets remains at a range of 55% to 65%.\n- 7. Term to Maturity Management monitors the average number of years to maturity on its debt.\n- 8. Interest Coverage Ratio A common measure of credit risk used by lenders, this measure considers Killam's ability to pay interest on outstanding debt. Generally, the higher the interest coverage ratio, the lower the credit risk.\n- 9. Debt Service Coverage Ratio A common measure of credit risk used by lenders, this measure considers Killam's ability to pay interest and principal on outstanding debt. Generally the higher the debt service coverage ratio, the lower the credit risk.", - "page_start": 22, - "page_end": 22, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# President's Letter\n\n## **Dear Shareholders,**\n\nI am pleased to review Killam's 2013 performance with you, and outline our strategy and plans for the future. We are progressing nicely with our priorities to increase the quality of our portfolio and expand geographically. 
In addition, we are focused on three key areas of growth for the Company: increase the value of our existing portfolio, acquire accretively and develop profitably.\n\nDuring the past year we expanded communication of our corporate strategy to reach the broader Killam community with the introduction of Killam's Core Values. These values have been inherent in the Company since our first acquisition in 2002, but had not been broadly promoted until this past year. Our Core Values (Curb Appeal, Build Community, Strong Customer Relationships, Do the Right Thing and Creative Solutions)\n\nare represented in the colourful squares you will see throughout this year's report. Killam employees across the Company demonstrate these values in their daily work, distinguishing Killam as a high-quality landlord. The introduction of a quarterly awards program, which recognizes employees who exemplify Killam's\n\nCore Values, enables us to celebrate these values. I have been impressed by both the number and quality of nominations. We truly have a remarkable group of employees who go above and beyond in providing exceptional service to our tenants.\n\n## **A Look Back at 2013**\n\nI would summarize 2013 as a mixed year for Killam. We were successful in achieving many of the objectives and targets we had set for ourselves, as summarized in the adjacent chart, but faced challenges that impacted our financial performance. We added $191 million in new assets to our portfolio through acquisitions and the completion of four new developments. We also enhanced our leasing and marketing programs, which allowed us to realize gains in occupancy in the second half of the year and improve our position for 2014. We further benefited from both interest and administrative cost savings in the year. 
These improvements were mitigated somewhat by large increases in natural gas costs in Atlantic Canada and a more competitive rental market in the Maritimes, which resulted in increased year-over-year vacancy. The challenges we faced in 2013 resulted in funds from operations (FFO) per share of $0.72, the same as Killam's 2012 FFO per share.\n\n## **Growing the Cash Flow from our Properties**\n\nWe expect to generate, on average, between 2% and 4% in net operating income (NOI) growth through our same store portfolio on an annual basis. Our same store portfolio represents properties we have owned for equivalent periods year-over-year. Due to commodity price volatility, we experienced an unexpected spike in natural gas prices in Nova Scotia and New Brunswick throughout the 2013 heating season that increased same store utility and fuel expenses by 14%. We were able to partially offset this unprecedented increase by managing controllable expenses to a modest 0.3% increase in the year; however, overall same store operating costs grew by 5.0%. These higher expenses more than offset a 1.8% growth in revenue, resulting in a disappointing 0.4% decline in same store NOI for the year.\n\nWe are targeting positive same store growth in 2014 of up to 2%. Year-over-year occupancy improvements and increased rental rates are expected to generate revenue growth. Increasing our leasing staff and refining our marketing and leasing process is proving effective, resulting in improved occupancy levels in many of our core markets, especially in Ontario and New Brunswick. A colder than normal winter this year (2014) is translating into increased energy consumption and continued volatility in natural gas prices in Atlantic Canada, expected to result in higher than normal heating costs. 
We continue to invest in energy and operational efficiencies which we expect will keep our controllable costs down throughout the year and partially offset higher heating costs.", - "page_start": 8, - "page_end": 8, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **Business Strategy**\n\n#### **Maximize NOI from Existing Portfolio**\n\nManagement is focused on increasing the value of its real estate portfolio by maximizing revenue and operating efficiencies. To achieve NOI growth, Killam must address three critical factors; occupancy, rental rates, and operating costs. The Company focuses on customer service, investing in its properties, leasing and marketing initiatives, and training its employees to maximize these outcomes.\n\nManagement is able to directly control approximately 40% of operating expenses, including labour costs, repairs and maintenance and property general and administrative expenses. The remaining operating costs, including utilities and property taxes, are less controllable. Killam's apartments are currently heated with a combination of natural gas, electricity and oil. Volatile oil and natural gas prices have an impact on Killam's operating costs. To mitigate this volatility, the Company is active in energy conservation initiatives and regularly monitors its energy usage.\n\n#### **Growth through Acquisitions**\n\nKillam is expanding its portfolio by acquiring newer, centrally located buildings and is focused on Ontario. During 2013 Killam completed $121.1 million in acquisitions, including properties in Toronto, Ottawa, Moncton and Prince Edward Island.\n\n#### **Growth through Development**\n\nKillam enhances its portfolio growth opportunities by developing properties. Killam started apartment developments in 2010 and has completed five properties to‑date, including four in 2013. 
Building new properties directly allows Killam to control the quality and features of the buildings, maximizes the use of excess land and eliminates the seller's profit, generating higher returns than through acquisitions. Management expects to limit development projects to approximately 5% of the balance sheet on an annual basis.\n\n#### **Investment in New Properties**\n\nIn addition to developing new properties, Killam also acquires newly constructed assets. Management believes that increasing Killam's ownership in new, high‑quality buildings will result in above‑market and long‑term demand for the Company's assets from an aging population, reduce annual capital requirements for deferred maintenance, and transform Killam's portfolio, over time, into one of the highest quality portfolios in Canada.\n\nDemand by renters for newly constructed rental apartments is strong, with high occupancy rates and above‑average rents. CMHC's Fall 2013 Halifax Rental Market Report reported 97.3% occupancy for properties built in 2000 or later, compared to 96.8% for all rental markets in the city. The average rent for a two‑bedroom unit in these newer buildings was $1,320 per month, compared to a market average two‑bedroom rent of $976.\n\nThe new properties added to Killam's portfolio are condo quality, providing tenants with features and amenities traditionally associated with ownership. The Company believes that demand for this type of rental accommodation will grow given an increasing number of homeowners reaching retirement age and looking for alternatives to home ownership. Killam is also attracted to the low capital spend requirements from new assets compared to older buildings, which often include significant capital investment to address deferred maintenance. Generally, the amount of annual capital to maintain a property increases as the building ages. 
In addition, with energy efficient features, the NOI margins are generally higher in newer buildings.\n\nWith strong demand for the acquisition of apartments over the last three years, cap‑rates have declined and the pricing differential between older and newer buildings has reduced. This enables Killam to increase the amount of newer apartments in its portfolio without paying a significant premium for quality assets.\n\n#### **Geographic Diversification**\n\nGeographic diversification in the apartment segment is a priority for Killam. With a 14.2% market share in its core markets in Atlantic Canada, Killam is the region's largest residential landlord. The maximum market share Management foresees Killam reaching in Atlantic Canada is between 15%‑18%. With Atlantic Canada representing only 4.9% of the Canadian rental market, Killam's growth opportunities increase significantly when considering assets outside Atlantic Canada.\n\nWith its strong operating platform, Killam can support a larger and more geographically diverse portfolio. The Company is actively building a portfolio in targeted Ontario markets, including Ottawa, the Greater Toronto Area, and Southwestern Ontario. An increased investment in Ontario, and potentially Western Canada, will increase the Company's diversification and exposure in high growth centres in Canada. Based on the Company's portfolio at year‑end, 15% of Killam's 2014 NOI will be generated in Ontario. Management has set a long‑term target of growing the amount of NOI generated outside of Atlantic Canada to 50%.\n\nIn 2013, Killam sold a portfolio of ten MHCs in New Brunswick that allowed Killam to crystallize the increased value of this portfolio at attractive cap‑rates. 
This creates moderate short‑term dilution but it provides the Company with funds to continue its geographic diversification by accretively growing its apartment portfolio in Ontario.", - "page_start": 28, - "page_end": 28, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# Increasing Geographic Diversification\n\nWith a home base in Halifax, Killam's roots are in Atlantic Canada and the Company has successfully grown by consolidating the residential real estate market in the region's urban centres. In order to meet its long-term growth targets and increase its investment in Canada's most dynamic real estate markets, Killam has been actively expanding its apartment portfolio in Ontario and is exploring investment opportunities in Western Canada. Since 2010, Killam has expanded its apartment target markets to include specific cities in Ontario, and has invested approximately $200 million in real estate assets in the province. Approximately 15% of Killam's 2014 net operating income is expected to be earned in Ontario. The Company has set a long-term target to earn 50% of its net operating income outside Atlantic Canada.", - "page_start": 16, - "page_end": 16, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# A Diversified Portfolio\n\nKillam has a diverse portfolio of both apartments and manufactured home communities. The apartment portfolio represents 86% of Killam's earnings and includes a variety of property types, such as high-rises, mid-rises and walk-ups, in nine urban centres across five provinces. With a wide selection of properties and price points in each city, Killam caters to a broad tenant base. Killam's 35 manufactured home communities represent 14% of earnings and are located primarily in Nova Scotia and Ontario. 
The manufactured home communities complement the apartment business, providing stable and predictable cash flows.\n\nS2, Halifax, Nova Scotia", - "page_start": 12, - "page_end": 12, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# Opportunities for Growth\n\nKillam's growth opportunities include increasing earnings of its existing portfolio and expanding the portfolio through acquisitions and development. Acquisitions have been an important part of Killam's growth, having completed over $1.1 billion in acquisitions since the first property was acquired in 2002. Killam began development as a complement to its acquisition program in 2010, and to-date has invested approximately $90 million in new developments. 2013 was Killam's largest year for growth since 2005, adding $191 million of properties to the portfolio, including $121 million in acquisitions and $70 million in new developments. Looking ahead to 2014, Killam has targeted a minimum of $75 million in acquisitions, and the development of two new apartment buildings totaling approximately $46 million.", - "page_start": 13, - "page_end": 13, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# About Killam Properties Inc.\n\nKillam Properties Inc. is a growth oriented Canadian real estate company. We own, manage and develop multi-family residential properties in Atlantic Canada and Ontario. Since our first acquisition in 2002, our real estate portfolio has grown to $1.5 billion and includes 12,647 apartment units and 5,164 manufactured home community (MHC) sites. 
We are committed to growing Killam's earnings by maximizing the returns from our existing portfolio and expanding through acquisitions and development.\n\n# Our Mission\n\nTo have a team of caring staff deliver clean, safe, quality housing to tenants who are proud to call our properties home.\n\n> Strong **Customer** Relationships\n\nCreative **Solutions**\n\n# Our Core Values\n\nCurb **Appeal** Do the **Right** Thing\n\n**President's Letter 9 Asset Portfolio 18 MD&A 21 Financial Statements 66 Five-Year Summary 96**\n\n180 Mill Street, London, Ontario", - "page_start": 2, - "page_end": 2, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n#### **Continued Geographic Expansion in Ontario**\n\nKillam acquired two buildings in Ontario during 2013 including a 102‑unit building located in Ottawa for $10.4 million as well as a newly constructed, 8‑storey, mixed‑use complex containing 21,242 square feet of street level retail (TD Bank, Shoppers Drug Mart and Tim Hortons) and 179 apartment units in downtown Toronto for $40.0 million. With the completion of these two acquisitions, Killam's future NOI generated from its Ontario properties is expected to increase to 15.0% from 7.5%.\n\n#### **Reduced Cap‑Rate Compression in 2013**\n\nDuring 2013 Killam recorded $13.1 million in fair value gains related to its portfolio compared to $37.7 million in 2012. This decrease year‑over‑year was driven by a combination of reduced cap‑rate compression in 2013 and a slight uptick in cap‑rates of 25 bps in the Saint John market in the fourth quarter of 2013. The net gain in real estate valuations does not impact the Company's FFO per share, its key measure of performance.\n\n#### **Dividend Increase**\n\nOn December 23, 2013, Killam announced an increase in its annual dividend by 3.4% to $0.60 per share from $0.58 per share. 
The increase reflects Management's expectation of earning's growth to be generated in 2014.\n\n## **Performance Compared to 2013 Key Objectives**\n\n| Consolidation of Multi‑family Residential Real Estate Market | |\n| --- | --- |\n| 2013 Target Complete approximately $75‑$125 million in acquisitions. | |\n| 2013 Performance | Killam completed $121.1 million in acquisitions in 2013 which includes $112.8 million in apartment |\n| acquisitions, $1.4 million for 65 MHC sites and $6.9 million in vacant land for future developments. | |\n| Increase Investment in New Properties | |\n| 2013 Target | Focus on newer properties as part of the acquisition program in 2013. Complete and lease‑up Killam's four |\n| developments, and commence two new development projects. | |\n| 2013 Performance | During 2013 Killam acquired 552 units which were constructed after 2001, representing 74% of the total |\n| units added to the portfolio during the year. The acquisitions included three buildings constructed in 2013, | |\n| an 83‑unit luxury building in Halifax, a 48‑unit building in Moncton, and a 179‑unit building on Queen Street | |\n| West in Toronto. | |\n| The Company also completed the construction of four development projects totaling 282 units during | |\n| the first half of the year. These buildings were all ready for occupancy by the beginning of May 2013 with | |\n| lease‑up periods varying by project. Bennett House and Brighton House were fully leased within three | |\n| months of opening while the S2 and The Plaza are currently 62% and 61% leased. Both properties are | |\n| expected to be substantially leased by mid‑2014. | |\n| Killam commenced two new development projects during the year. Development started on a 101‑unit | |\n| project in St. John's in Q3‑2013 and a 122‑unit project in Cambridge broke ground in December 2013. Please | |\n| refer to the Investment Properties Under Construction section of the MD&A on page 49 for further details on | |\n| these projects. 
| |", - "page_start": 25, - "page_end": 25, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **Killam's NOI by Province**\n\nCombining apartment and MHC's, the following chart highlights the percentage of Killam's forward‑looking NOI by province based on ownership interest at December 31, 2013:\n\n## **NOI by Province**\n\n## **The Multi‑family Market Leader in Atlantic Canada**\n\nAtlantic Canada is home to 2.3 million people, approximately 43% of whom live in the six largest cities, representing Killam's core markets in the region. Killam has a 14.2% market share of apartment units in these six largest centres. The chart below highlights the apartment NOI generated from each of the key urban markets in Atlantic Canada in 2013, and Killam's market share in each.", - "page_start": 30, - "page_end": 30, - "source_file": "TSX_KMP_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv2_taclccby4_license.pdf", - "query": "What is the conventional workflow for BERT ?", - "target_page": 1, - "target_passage": "The conventional workflow for BERT consists of two stages: pre-training and fine-tuning. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# A Primer in BERTology: What We Know About How BERT Works\n\nAnna Rogers Center for Social Data Science University of Copenhagen arogers@sodas.ku.dk\n\nOlga Kovaleva Dept. of Computer Science University of Massachusetts Lowell okovalev@cs.uml.edu\n\n### Anna Rumshisky\n\nDept. of Computer Science University of Massachusetts Lowell arum@cs.uml.edu\n\n### Abstract\n\nTransformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. 
We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.\n\n### 1 Introduction\n\nSince their introduction in 2017, Transformers (Vaswani et al., 2017) have taken NLP by storm, offering enhanced parallelization and better modeling of long-range dependencies. The best known Transformer-based model is BERT (Devlin et al., 2019); it obtained state-of-the-art results in numerous benchmarks and is still a must-have baseline.\n\nWhile it is clear that BERT works remarkably well, it is less clear *why*, which limits further hypothesis-driven improvement of the architecture. Unlike CNNs, the Transformers have little cognitive motivation, and the size of these models limits our ability to experiment with pre-training and perform ablation studies. This explains a large number of studies over the past year that attempted to understand the reasons behind BERT's performance.\n\nIn this paper, we provide an overview of what has been learned to date, highlighting the questions which are still unresolved. We first consider the linguistic aspects of it, i.e., the current evidence regarding the types of linguistic and world knowledge learned by BERT, as well as where and how this knowledge may be stored in the model. We then turn to the technical aspects of the model and provide an overview of the current proposals to\n\nimprove BERT's architecture, pre-training and finetuning. We conclude by discussing the issue of overparameterization, the approaches to compressing BERT, and the nascent area of pruning as a model analysis technique.\n\n### 2 Overview of BERT architecture\n\nFundamentally, BERT is a stack of Transformer encoder layers (Vaswani et al., 2017) which consist of multiple self-attention \"heads\". 
For every input token in a sequence, each head computes key, value and query vectors, used to create a weighted representation. The outputs of all heads in the same layer are combined and run through a fully-connected layer. Each layer is wrapped with a skip connection and followed by layer normalization.\n\nThe conventional workflow for BERT consists of two stages: pre-training and fine-tuning. Pretraining uses two self-supervised tasks: masked language modeling (MLM, prediction of randomly masked input tokens) and next sentence prediction (NSP, predicting if two input sentences are adjacent to each other). In fine-tuning for downstream applications, one or more fully-connected layers are typically added on top of the final encoder layer.\n\nThe input representations are computed as follows: each word in the input is first tokenized into wordpieces (Wu et al., 2016), and then three embedding layers (token, position, and segment) are combined to obtain a fixed-length vector. Special token [CLS] is used for classification predictions, and [SEP] separates input segments.\n\nGoogle1 and HuggingFace (Wolf et al., 2020) provide many variants of BERT, including the original \"base\" and \"large\" versions. They vary in the number of heads, layers, and hidden state size.\n\n1https://github.com/ google-research/bert", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "| | | | Compression Performance Speedup | | Model | Evaluation |\n| --- | --- | --- | --- | --- | --- | --- |\n| | BERT-base (Devlin et al., 2019) | ×1 | 100% | ×1 | BERT12 | All GLUE tasks, SQuAD |\n| | BERT-small | ×3.8 | 91% | - | BERT4† | All GLUE tasks |\n| | DistilBERT (Sanh et al., 2019a) BERT6-PKD (Sun et al., 2019a) | ×1.5 ×1.6 | 90%§ 98% | ×1.6 ×1.9 | BERT6 BERT6 | All GLUE tasks, SQuAD No WNLI, CoLA, STS-B; RACE |\n| | BERT3-PKD (Sun et al., 2019a) | ×2.4 | 92% | ×3.7 | BERT3 | No WNLI, CoLA, STS-B; RACE |\n| | Aguilar et al. (2019), Exp. 
3 | ×1.6 | 93% | - | BERT6 | CoLA, MRPC, QQP, RTE |\n| | BERT-48 (Zhao et al., 2019) | ×62 | 87% | ×77 | BERT12 | ∗† MNLI, MRPC, SST-2 |\n| | BERT-192 (Zhao et al., 2019) | ×5.7 | 93% | ×22 | BERT12 | ∗† MNLI, MRPC, SST-2 |\n| Distillation | TinyBERT (Jiao et al., 2019) | ×7.5 | 96% | ×9.4 | † BERT4 | No WNLI; SQuAD |\n| | MobileBERT (Sun et al., 2020) | ×4.3 | 100% | ×4 | † BERT24 | No WNLI; SQuAD |\n| | PD (Turc et al., 2019) | ×1.6 | 98% | ×2.5‡ | † BERT6 | No WNLI, CoLA and STS-B |\n| | WaLDORf (Tian et al., 2019) | ×4.4 | 93% | ×9 | †k BERT8 | SQuAD |\n| | MiniLM (Wang et al., 2020b) | ×1.65 | 99% | ×2 | BERT6 | No WNLI, STS-B, MNLImm; SQuAD |\n| | MiniBERT(Tsai et al., 2019) | ∗∗ ×6 | 98% | ×27∗∗ | mBERT3 | † CoNLL-18 POS and morphology |\n| | BiLSTM-soft (Tang et al., 2019) | ×110 | 91% | ×434‡ | | BiLSTM1 MNLI, QQP, SST-2 |\n| | Q-BERT-MP (Shen et al., 2019) | ×13 | 98%¶ | - | BERT12 | MNLI, SST-2, CoNLL-03, SQuAD |\n| Quanti zation | BERT-QAT (Zafrir et al., 2019) | ×4 | 99% | - | BERT12 | No WNLI, MNLI; SQuAD |\n| | GOBO(Zadeh and Moshovos, 2020) | ×9.8 | 99% | - | BERT12 | MNLI |\n| | McCarley et al. (2020), ff2 | ×2.2‡ | 98%‡ | ×1.9‡ | BERT24 | SQuAD, Natural Questions |\n| Pruning | RPP (Guo et al., 2019) | ×1.7‡ | 99%‡ | - | BERT24 | No WNLI, STS-B; SQuAD |\n| | Soft MvP (Sanh et al., 2020) | ×33 | 94%¶ | - | BERT12 | MNLI, QQP, SQuAD |\n| | IMP (Chen et al., 2020), rewind 50% | ×1.4–2.5 | 94–100% | - | BERT12 | No MNLI-mm; SQuAD |\n| | ALBERT-base (Lan et al., 2020b) | ×9 | 97% | - | † BERT12 | MNLI, SST-2 |\n| | ALBERT-xxlarge (Lan et al., 2020b) | ×0.47 | 107% | - | † BERT12 | MNLI, SST-2 |\n| Other | BERT-of-Theseus (Xu et al., 2020) | ×1.6 | 98% | ×1.9 | BERT6 | No WNLI |\n| | PoWER-BERT (Goyal et al., 2020) | N/A | 99% | ×2–4.5 | BERT12 | No WNLI; RACE |\n\nTable 1: Comparison of BERT compression studies. Compression, performance retention, inference time speedup figures are given with respect to BERTbase, unless indicated otherwise. 
Performance retention is measured as a ratio of average scores achieved by a given model and by BERTbase. The subscript in the model description reflects the number of layers used. ∗Smaller vocabulary used. †The dimensionality of the hidden layers is reduced. kConvolutional layers used. ‡Compared to BERTlarge. ∗∗Compared to mBERT. §As reported in (Jiao et al., 2019).¶In comparison to the dev set.\n\nthis strategy often requires compatible hardware.\n\nAs discussed in section 6, individual selfattention heads and BERT layers can be disabled without significant drop in performance (Michel et al., 2019; Kovaleva et al., 2019; Baan et al., 2019). Pruning is a compression technique that takes advantage of that fact, typically reducing the amount of computation via zeroing out of certain parts of the large model. In structured pruning, architecture blocks are dropped, as in LayerDrop (Fan et al., 2019). In unstructured, the weights in the entire model are pruned irrespective of their location, as in magnitude pruning (Chen et al., 2020) or movement pruning (Sanh et al., 2020).\n\nPrasanna et al. (2020) and Chen et al. (2020) explore BERT from the perspective of the lottery ticket hypothesis (Frankle and Carbin, 2019), looking specifically at the \"winning\" subnetworks in pre-trained BERT. They independently find that such subnetworks do exist, and that transferability between subnetworks for different tasks varies.\n\nIf the ultimate goal of training BERT is compression, Li et al. (2020) recommend training larger\n\nmodels and compressing them heavily rather than compressing smaller models lightly.\n\nOther techniques include decomposing BERT's embedding matrix into smaller matrices (Lan et al., 2020a), progressive module replacing (Xu et al., 2020) and dynamic elimination of intermediate encoder outputs (Goyal et al., 2020). See Ganesh et al. 
(2020) for a more detailed discussion of compression methods.\n\n#### 6.3 Pruning and model analysis\n\nThere is a nascent discussion around pruning as a model analysis technique. The basic idea is that a compressed model a priori consists of elements that are useful for prediction; therefore by finding out what they do we may find out what the whole network does. For instance, BERT has heads that seem to encode frame-semantic relations, but disabling them might not hurt downstream task performance Kovaleva et al. (2019); this suggests that this knowledge is not actually used.\n\nFor the base Transformer, Voita et al. (2019b) identify the functions of self-attention heads and", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Figure 5: Pre-trained weights help BERT find wider optima in fine-tuning on MRPC (right) than training from scratch (left) (Hao et al., 2019)\n\nbeddings as input for training BERT, while Poerner et al. (2019) adapt entity vectors to BERT representations. As mentioned above, Wang et al. (2020c) integrate knowledge not through entity embeddings, but through additional pre-training objective of knowledge base completion. Sun et al. (2019b,c) modify the standard MLM task to mask named entities rather than random words, and Yin et al. (2020) train with MLM objective over both text and linearized table data. Wang et al. (2020a) enhance RoBERTa with both linguistic and factual knowledge with task-specific adapters.\n\nPre-training is the most expensive part of training BERT, and it would be informative to know how much benefit it provides. On some tasks, a randomly initialized and fine-tuned BERT obtains competitive or higher results than the pre-trained BERT with the task classifier and frozen weights (Kovaleva et al., 2019). The consensus in the community is that pre-training does help in most situations, but the degree and its exact contribution requires further investigation. 
Prasanna et al. (2020) found that *most* weights of pre-trained BERT are useful in fine-tuning, although there are \"better\" and \"worse\" subnetworks. One explanation is that pre-trained weights help the fine-tuned BERT find wider and flatter areas with smaller generalization error, which makes the model more robust to overfitting (see Figure 5 from Hao et al. (2019)).\n\nGiven the large number and variety of proposed modifications, one would wish to know how much impact each of them has. However, due to the overall trend towards large model sizes, systematic ablations have become expensive. Most new models claim superiority on standard benchmarks, but gains are often marginal, and estimates of model stability and significance testing are very rare.\n\n### 5.4 Fine-tuning BERT\n\nPre-training + fine-tuning workflow is a crucial part of BERT. The former is supposed to provide task-independent knowledge, and the latter would presumably teach the model to rely more on the representations useful for the task at hand.\n\nKovaleva et al. (2019) did not find that to be the case for BERT fine-tuned on GLUE tasks5 : during fine-tuning, the most changes for 3 epochs occurred in the last two layers of the models, but those changes caused self-attention to focus on [SEP] rather than on linguistically interpretable patterns. It is understandable why fine-tuning would increase the attention to [CLS], but not [SEP]. If Clark et al. 
(2019) are correct that [SEP] serves as \"noop\" indicator, fine-tuning basically tells BERT what to ignore.\n\nSeveral studies explored the possibilities of improving the fine-tuning of BERT:\n\n- Taking more layers into account: learning a complementary representation of the information in deep and output layers (Yang and Zhao, 2019), using a weighted combination of all layers instead of the final one (Su and Cheng, 2019; Kondratyuk and Straka, 2019), and layer dropout (Kondratyuk and Straka, 2019).\n- Two-stage fine-tuning introduces an intermediate supervised training stage between pre-training and fine-tuning (Phang et al., 2019; Garg et al., 2020; Arase and Tsujii, 2019; Pruksachatkun et al., 2020; Glavaš and Vulic´, 2020). Ben-David et al. (2020) propose a pivot-based variant of MLM to fine-tune BERT for domain adaptation.\n- Adversarial token perturbations improve robustness of the model (Zhu et al., 2019).\n- Adversarial regularization in combination with *Bregman Proximal Point Optimization* helps alleviate pre-trained knowledge forgetting and therefore prevents BERT from overfitting to downstream tasks (Jiang et al., 2019a).\n- Mixout regularization improves the stability of BERT fine-tuning even for a small number of training examples (Lee et al., 2019).\n\nWith large models, even fine-tuning becomes expensive, but Houlsby et al. (2019) show that it can\n\n5Kondratyuk and Straka (2019) suggest that fine-tuning on Universal Dependencies does result in syntactically meaningful attention patterns, but there was no quantitative evaluation.", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "report that an intermediate fine-tuning step with supervised parsing does not make much difference for downstream task performance. models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as \"fillin-the-blank\" cloze statements. 
#### 3.2 Semantic knowledge\n\nTo date, more studies have been devoted to BERT's knowledge of syntactic rather than semantic phenomena. However, we do have evidence from an MLM probing study that BERT has some knowledge of semantic roles (Ettinger, 2019). BERT even displays some preference for the incorrect fillers for semantic roles that are semantically related to the correct ones, as opposed to those that are unrelated (e.g. \"to tip a chef\" is better than \"to tip a robin\", but worse than \"to tip a waiter\").\n\nTenney et al. (2019b) showed that BERT encodes information about entity types, relations, semantic roles, and proto-roles, since this information can be detected with probing classifiers.\n\nBERT struggles with representations of numbers. Addition and number decoding tasks showed that BERT does not form good representations for floating point numbers and fails to generalize away from the training data (Wallace et al., 2019b). A part of the problem is BERT's wordpiece tokenization, since numbers of similar values can be divided up into substantially different word chunks.\n\nOut-of-the-box BERT is surprisingly brittle to named entity replacements: e.g. replacing names in the coreference task changes 85% of predictions (Balasubramanian et al., 2020). This suggests that the model does not actually form a generic idea of named entities, although its F1 scores on NER probing tasks are high (Tenney et al., 2019a). Broscheit (2019) find that fine-tuning BERT on Wikipedia entity linking \"teaches\" it additional entity knowledge, which would suggest that it did not absorb all the relevant entity information during pre-training on Wikipedia.\n\n#### 3.3 World knowledge\n\nThe bulk of evidence about commonsense knowledge captured in BERT comes from practitioners using it to extract such knowledge. One direct probing study of BERT reports that BERT struggles with pragmatic inference and role-based event knowledge (Ettinger, 2019). BERT also struggles with abstract attributes of objects, as well as visual and perceptual properties that are likely to be assumed rather than mentioned (Da and Kasai, 2019).\n\nThe MLM component of BERT is easy to adapt for knowledge induction by filling in the blanks (e.g. \"Cats like to chase [___]\"). Petroni et al. (2019) showed that, for some relation types, vanilla BERT is competitive with methods relying on knowledge bases (Figure 2), and Roberts et al. (2020) show the same for open-domain QA using T5 model (Raffel et al., 2019). Davison et al. (2019) suggest that it generalizes better to unseen data. In order to retrieve BERT's knowledge, we need good template sentences, and there is work on their automatic extraction and augmentation (Bouraoui et al., 2019; Jiang et al., 2019b).\n\nFigure 2: BERT world knowledge (Petroni et al., 2019)\n\nHowever, BERT cannot reason based on its world knowledge. Forbes et al. (2019) show that BERT can \"guess\" the affordances and properties of many objects, but can not reason about the relationship between properties and affordances. For example, it \"knows\" that people can walk into houses, and that houses are big, but it cannot infer that houses are bigger than people. Zhou et al. (2020) and Richardson and Sabharwal (2019) also show that the performance drops with the number of necessary inference steps. Some of BERT's world knowledge success comes from learning stereotypical associations (Poerner et al., 2019), e.g., a person with an Italian-sounding name is predicted to be Italian, even when it is incorrect.\n\n#### 3.4 Limitations\n\nMultiple probing studies in section 3 and section 4 report that BERT possesses a surprising amount of syntactic, semantic, and world knowledge. However, Tenney et al. (2019a) remarks, \"the fact that a linguistic pattern is not observed by our probing classifier does not guarantee that it is not there, and the observation of a pattern does not tell us how it is used.\" There is also the issue of how complex a probe should be allowed to be (Liu et al., 2019a). 
If a more complex probe recovers more information, to what extent are we still relying on the original model?\n\nFurthermore, different probing methods may lead to complementary or even contradictory conclusions, which makes a single test (as in most stud-", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "be successfully approximated with adapter modules. They achieve competitive performance on 26 classification tasks at a fraction of the computational cost. Adapters in BERT were also used for multi-task learning (Stickland and Murray, 2019) and cross-lingual transfer (Artetxe et al., 2019). An alternative to fine-tuning is extracting features from frozen representations, but fine-tuning works better for BERT (Peters et al., 2019b).\n\nA big methodological challenge in the current NLP is that the reported performance improvements of new models may well be within variation induced by environment factors (Crane, 2018). BERT is not an exception. Dodge et al. (2020) report significant variation for BERT fine-tuned on GLUE tasks due to both weight initialization and training data order. They also propose early stopping on the less-promising seeds.\n\nAlthough we hope that the above observations may be useful for the practitioners, this section does not exhaust the current research on fine-tuning and its alternatives. For example, we do not cover such topics as Siamese architectures, policy gradient training, automated curriculum learning, and others.\n\n## 6 How big should BERT be?\n\n### 6.1 Overparameterization\n\nTransformer-based models keep growing by orders of magnitude: the 110M parameters of base BERT are now dwarfed by 17B parameters of Turing-NLG (Microsoft, 2020), which is dwarfed by 175B of GPT-3 (Brown et al., 2020). 
This trend raises concerns about computational complexity of self-attention (Wu et al., 2019a), environmental issues (Strubell et al., 2019; Schwartz et al., 2019), fair comparison of architectures (Aßenmacher and Heumann, 2020), and reproducibility.\n\nHuman language is incredibly complex, and would perhaps take many more parameters to describe fully, but the current models do not make good use of the parameters they already have. Voita et al. (2019b) showed that all but a few Transformer heads could be pruned without significant losses in performance. For BERT, Clark et al. (2019) observe that most heads in the same layer show similar self-attention patterns (perhaps related to the fact that the output of all self-attention heads in a layer is passed through the same MLP), which explains why Michel et al. (2019) were able to reduce most layers to a single head.\n\nDepending on the task, some BERT heads/layers are not only redundant (Kao et al., 2020), but also harmful to the downstream task performance. Positive effect from head disabling was reported for machine translation (Michel et al., 2019), abstractive summarization (Baan et al., 2019), and GLUE tasks (Kovaleva et al., 2019). Additionally, Tenney et al. (2019a) examine the cumulative gains of their structural probing classifier, observing that in 5 out of 8 probing tasks some layers cause a drop in scores (typically in the final layers). Gordon et al. (2020) find that 30–40% of the weights can be pruned without impact on downstream tasks.\n\nIn general, larger BERT models perform better (Liu et al., 2019a; Roberts et al., 2020), but not always: BERT-base outperformed BERT-large on subject-verb agreement (Goldberg, 2019) and sentence subject detection (Lin et al., 2019). Given the complexity of language, and amounts of pretraining data, it is not clear why BERT ends up with redundant heads and layers. Clark et al. 
(2019) suggest that one possible reason is the use of attention dropouts, which causes some attention weights to be zeroed-out during training.\n\n#### 6.2 Compression techniques\n\nGiven the above evidence of overparameterization, it does not come as a surprise that BERT can be efficiently compressed with minimal accuracy loss, which would be highly desirable for real-world applications. Such efforts to date are summarized in Table 1. The main approaches are knowledge distillation, quantization, and pruning.\n\nThe studies in the knowledge distillation framework (Hinton et al., 2014) use a smaller student-network trained to mimic the behavior of a larger teacher-network. For BERT, this has been achieved through experiments with loss functions (Sanh et al., 2019b; Jiao et al., 2019), mimicking the activation patterns of individual portions of the teacher network (Sun et al., 2019a), and knowledge transfer at the pre-training (Turc et al., 2019; Jiao et al., 2019; Sun et al., 2020) or fine-tuning stage (Jiao et al., 2019). McCarley et al. (2020) suggest that distillation has so far worked better for GLUE than for reading comprehension, and report good results for QA from a combination of structured pruning and task-specific distillation.\n\nQuantization decreases BERT's memory footprint through lowering the precision of its weights (Shen et al., 2019; Zafrir et al., 2019). Note that", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Pre-Training for Deep Language Understanding. *arXiv:1908.04577 [cs]*.\n\n- Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020b. MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers. *arXiv preprint arXiv:2002.10957*.\n- Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2020c. KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation. 
*arXiv:1911.06136 [cs]*.\n- Yile Wang, Leyang Cui, and Yue Zhang. 2020d. How Can BERT Help Lexical Semantics Tasks? *arXiv:1911.02929 [cs]*.\n- Zihan Wang, Stephen Mayhew, Dan Roth, et al. 2019b. Cross-Lingual Ability of Multilingual BERT: An Empirical Study. *arXiv preprint arXiv:1912.07840*.\n- Alex Warstadt and Samuel R. Bowman. 2020. Can neural networks acquire a structural bias from raw linguistic data? In *Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society*, Online.\n- Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, et al. 2019. Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pages 2870–2880.\n- Gregor Wiedemann, Steffen Remus, Avi Chawla, and Chris Biemann. 2019. Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings. *arXiv preprint arXiv:1909.10430*.\n- Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not Explanation. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pages 11– 20, Hong Kong, China. Association for Computational Linguistics.\n- Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2020. Hugging-Face's Transformers: State-of-the-Art Natural Language Processing. *arXiv:1910.03771 [cs]*.\n- Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019a. Pay Less Attention with Lightweight and Dynamic Convolutions. 
In *International Conference on Learning Representations*.\n- Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019b. Conditional BERT Contextual Augmentation. In *ICCS 2019: Computational Science – ICCS 2019*, pages 84–95. Springer.\n- Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. *arXiv preprint arXiv:1609.08144*.\n- Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020. Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4166–4176, Online. Association for Computational Linguistics.\n- Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020. BERT-of-Theseus: Compressing BERT by Progressive Module Replacing. *arXiv preprint arXiv:2002.02925*.\n- Junjie Yang and Hai Zhao. 2019. Deepening Hidden Representations from Pre-Trained Language Models for Natural Language Understanding. *arXiv:1911.01940 [cs]*.\n- Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. *arXiv:1906.08237 [cs]*.\n- Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for Joint Understanding of Textual and Tabular", - "page_start": 21, - "page_end": 21, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "#### **3.3.2 Core capabilities**\n\nIntegral to the IBM Cloud Pak for Automation is a set of containerized software, both IBM and open source, which includes the following components:\n\n- -IBM Business Automation Workflow\nManual workflows can easily disrupt or slow operations. 
Lack of transparency and dependencies on employees leave businesses vulnerable to various bottlenecks that create inefficiencies. Automating workflows safeguards against potential barriers and empowers business professionals to directly participate in designing business solutions. Workflow automation orchestrates multiple business processes straight-through, human-assisted or case management within operations and provides visibility into each step.\n\n- -IBM Operational Decision Manager\nA business rules management system (BRMS) enables businesses to create and manage business logic independently from applications and processes. Through business rules, your team can specify decision logic in simple terms, close to natural language. Because rules are easily integrated with other IT systems, your applications can scale and run automated decisions across multiple channels.\n\nWhen changes to business rules are required, business users can quickly update them, which provides the agility and speed that is needed to meet changing business demands. Decision automation uses business rules to remove manual work from a decision process, which improves business agility and reducing IT reliance.\n\n- -IBM Business Automation Content Analyzer\nTo reinvent under-performing, high-friction business processes, enterprises are investing in digital transformation. This investment requires processes and applications to access and control a wide range of content, including documents, images, and audio files.\n\nContent services are accessible in multiple ways, including mobile devices and desktops, and as discrete capabilities embedded in workflows or applications, such as Enterprise Resource Planning (ERP) systems. This content analyzer enables efficient, consistent, and accurate content collaboration and decision-making across the organization. 
Content services are capabilities for collecting, governing, managing, and enriching enterprise content to be deployed efficiently across any cloud and within any application.\n\n- -Process Mapping\nInefficient processes cost you time and money. Bottlenecks, complexities, and a lack of understanding mask opportunities for process improvement. Process modeling helps you to gain better visibility into business operations, which helps you create efficiencies at scale. Process mapping is any automation strategy's first step. It enables non-technical people to work across departments to see a process landscape.\n\n- -Data Capture\nEnterprises produce and receive massive volumes of new information every day to make decisions, manage operations, and create value. Most that information is inaccessible and invisible to the business applications that need it most, which undermines the ability of decision makers to truly understand the opportunities and constraints that are affecting their organization.", - "page_start": 62, - "page_end": 62, - "source_file": "sg248459.pdf" - }, - { - "text": "layers are more transferable (Liu et al., 2019a). In fine-tuning, it explains why the final layers change the most (Kovaleva et al., 2019), and why restoring the weights of lower layers of fine-tuned BERT to their original values does not dramatically hurt the model performance (Hao et al., 2019).\n\nTenney et al. (2019a) suggest that while syntactic information appears early in the model and can be localized, semantics is spread across the entire model, which explains why certain non-trivial examples get solved incorrectly at first but correctly at the later layers. This is rather to be expected: semantics permeates all language, and linguists debate whether meaningless structures can exist at all (Goldberg, 2006, p.166-182). 
But this raises the question of what stacking more Transformer layers in BERT actually achieves in terms of the spread of semantic knowledge, and whether that is beneficial. Tenney et al. compared BERT-base and BERT-large, and found that the overall pattern of cumulative score gains is the same, only more spread out in the larger model.\n\nNote that Tenney et al. (2019a)'s experiments concern sentence-level semantic relations; Cui et al. (2020) report that the encoding of ConceptNet semantic relations is the worst in the early layers and increases towards the top. Jawahar et al. (2019) place \"surface features in lower layers, syntactic features in middle layers and semantic features in higher layers\", but their conclusion is surprising, given that only one semantic task in this study actually topped at the last layer, and three others peaked around the middle and then considerably degraded by the final layers.\n\n### 5 Training BERT\n\nThis section reviews the proposals to optimize the training and architecture of the original BERT.\n\n### 5.1 Model architecture choices\n\nTo date, the most systematic study of BERT architecture was performed by Wang et al. (2019b), who experimented with the number of layers, heads, and model parameters, varying one option and freezing the others. They concluded that the number of heads was not as significant as the number of layers. That is consistent with the findings of Voita et al. (2019b) and Michel et al. (2019) (section 6), and also the observation by Liu et al. (2019a) that the middle layers were the most transferable. Larger hidden representation size was consistently better, but the gains varied by setting.\n\nAll in all, changes in the number of heads and layers appear to perform different functions. 
The issue of model depth must be related to the information flow from the most task-specific layers closer to the classifier (Liu et al., 2019a), to the initial layers which appear to be the most task-invariant (Hao et al., 2019), and where the tokens resemble the input tokens the most (Brunner et al., 2020) (see subsection 4.3). If that is the case, a deeper model has more capacity to encode information that is not task-specific.\n\nOn the other hand, many self-attention heads in vanilla BERT seem to naturally learn the same patterns (Kovaleva et al., 2019). This explains why pruning them does not have too much impact. The question that arises from this is how far we could get with intentionally encouraging diverse self-attention patterns: theoretically, this would mean increasing the amount of information in the model with the same number of weights. Raganato et al. (2020) show that for Transformer-based machine translation we can simply pre-set the patterns that we already know the model would learn, instead of learning them from scratch.\n\nVanilla BERT is symmetric and balanced in terms of self-attention and feed-forward layers, but it may not have to be. For the base Transformer, Press et al. (2020) report benefits from more self-attention sublayers at the bottom and more feed-forward sublayers at the top.\n\n#### 5.2 Improvements to the training regime\n\nLiu et al. (2019b) demonstrate the benefits of large-batch training: with 8k examples both the language model perplexity and downstream task performance are improved. They also publish their recommendations for other parameters. You et al. (2019) report that with a batch size of 32k BERT's training time can be significantly reduced with no degradation in performance. Zhou et al. (2019) observe that the normalization of the trained [CLS] token stabilizes the training and slightly improves performance on text classification tasks.\n\nGong et al. 
(2019) note that, since self-attention patterns in higher and lower layers are similar, the model training can be done in a recursive manner, where the shallower version is trained first and then the trained parameters are copied to deeper layers. Such a \"warm-start\" can lead to a 25% faster training without sacrificing performance.", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Figure 3: Attention patterns in BERT (Kovaleva et al., 2019)\n\nies) insufficient (Warstadt et al., 2019). A given method might also favor one model over another, e.g., RoBERTa trails BERT with one tree extraction method, but leads with another (Htut et al., 2019). The choice of linguistic formalism also matters (Kuznetsov and Gurevych, 2020).\n\nIn view of all that, the alternative is to focus on identifying what BERT actually relies on at inference time. This direction is currently pursued both at the level of architecture blocks (to be discussed in detail in subsection 6.3), and at the level of information encoded in model weights. Amnesic probing (Elazar et al., 2020) aims to specifically remove certain information from the model and see how it changes performance, finding, for example, that language modeling does rely on part-of-speech information.\n\nAnother direction is information-theoretic probing. Pimentel et al. (2020) operationalize probing as estimating mutual information between the learned representation and a given linguistic property, which highlights that the focus should be not on the amount of information contained in a representation, but rather on how easily it can be extracted from it. 
Voita and Titov (2020) quantify the amount of effort needed to extract information from a given representation as minimum description length needed to communicate both the probe size and the amount of data required for it to do well on a task.\n\n### 4 Localizing linguistic knowledge\n\n#### 4.1 BERT embeddings\n\nIn studies of BERT, the term \"embedding\" refers to the output of a Transformer layer (typically, the final one). Both conventional static embeddings (Mikolov et al., 2013) and BERT-style embeddings can be viewed in terms of mutual information maximization (Kong et al., 2019), but the latter are contextualized. Every token is represented by a vector dependent on the particular context of occurrence, and contains at least some information about that context (Miaschi and Dell'Orletta, 2020).\n\nSeveral studies reported that distilled contextualized embeddings better encode lexical semantic information (i.e. they are better at traditional word-level tasks such as word similarity). The methods to distill a contextualized representation into static include aggregating the information across multiple contexts (Akbik et al., 2019; Bommasani et al., 2020), encoding \"semantically bleached\" sentences that rely almost exclusively on the meaning of a given word (e.g. \"This is <>\") (May et al., 2019), and even using contextualized embeddings to train static embeddings (Wang et al., 2020d).\n\nBut this is not to say that there is no room for improvement. Ethayarajh (2019) measure how similar the embeddings for identical words are in every layer, reporting that later BERT layers produce more context-specific representations3 . They also find that BERT embeddings occupy a narrow cone in the vector space, and this effect increases from the earlier to later layers. That is, two random words will on average have a much higher cosine similarity than expected if embeddings were directionally uniform (isotropic). 
Since isotropy was shown to be beneficial for static word embeddings (Mu and Viswanath, 2018), this might be a fruitful direction to explore for BERT.\n\nSince BERT embeddings are contextualized, an interesting question is to what extent they capture phenomena like polysemy and homonymy. There is indeed evidence that BERT's contextualized embeddings form distinct clusters corresponding to word senses (Wiedemann et al., 2019; Schmidt and Hofmann, 2020), making BERT successful at word sense disambiguation task. However, Mickus et al. (2019) note that the representations of the same word depend on the position of the sentence in which it occurs, likely due to the NSP objective. This is not desirable from the linguistic point of view, and could be a promising\n\n3Voita et al. (2019a) look at the evolution of token embeddings, showing that in the earlier Transformer layers, MLM forces the acquisition of contextual information at the expense of the token identity, which gets recreated in later layers.", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "### 3 What knowledge does BERT have?\n\nA number of studies have looked at the knowledge encoded in BERT weights. The popular approaches include fill-in-the-gap probes of MLM, analysis of self-attention weights, and probing classifiers with different BERT representations as inputs.\n\n#### 3.1 Syntactic knowledge\n\nLin et al. (2019) showed that BERT representations are hierarchical rather than linear, i.e. there is something akin to syntactic tree structure in addition to the word order information. Tenney et al. (2019b) and Liu et al. (2019a) also showed that BERT embeddings encode information about parts of speech, syntactic chunks and roles. 
Enough syntactic information seems to be captured in the token embeddings themselves to recover syntactic trees (Vilares et al., 2020; Kim et al., 2020; Rosa and Mareček, 2019), although probing classifiers could not recover the labels of distant parent nodes in the syntactic tree (Liu et al., 2019a). Warstadt and Bowman (2020) report evidence of hierarchical structure in three out of four probing tasks.\n\nAs far as *how* syntax is represented, it seems that syntactic structure is not directly encoded in self-attention weights. Htut et al. (2019) were unable to extract full parse trees from BERT heads even with the gold annotations for the root. Jawahar et al. (2019) include a brief illustration of a dependency tree extracted directly from self-attention weights, but provide no quantitative evaluation.\n\nHowever, syntactic information can be recovered from BERT token representations. Hewitt and Manning (2019) were able to learn transformation matrices that successfully recovered syntactic dependencies in PennTreebank data from BERT's token embeddings (see also Manning et al., 2020). Jawahar et al. (2019) experimented with transformations of the [CLS] token using Tensor Product Decomposition Networks (McCoy et al., 2019a), concluding that dependency trees are the best match among 5 decomposition schemes (although the reported MSE differences are very small). Miaschi and Dell'Orletta (2020) performs a range of syntactic probing experiments with concatenated token representations as input.\n\nNote that all these approaches look for the evidence of gold-standard linguistic structures, and add some amount of extra knowledge to the probe. Most recently, Wu et al. (2020) proposed a
We then utilize graph-based algorithms to induce a dependency tree from F, and compare it against ground-truth whose annotations\n\nperiments in Section 4.2.\n\nuate these observations.\n\n4 Syntactic Probe\n\nprobe and constituency probe.\n\n4.1 Dependency Probe\n\ntent?\n\n3 Visualization with Impact Maps Before we discuss specific syntactic phenomena, Figure 2: Part of the constituency tree. Constituency. Figure 2 shows part of the con-Figure 1: Parameter-free probe for syntactic knowledge: words sharing syntactic subtrees have larger impact on each other in the MLM prediction (Wu et al., 2020)\n\n1\n\n2\n\n3\n\n4\n\n5\n\nlet us first analyze some example impact matrices derived from sample sentences. We visualize an impact matrix of a sentence by displaying a heatmap. We use the term \"impact map\" to refer to a heatmap of an impact matrix. Setup. We extract impact matrices by feeding BERT with 1,000 sentences from the English stituency tree of our example sentence generated by Stanford CoreNLP (Manning et al., 2014). In this sentence, \"*media*\" and \"*on*\" are two words that are adjacent to \"*transitions*\". From the tree, however, we see that \"*media*\" is closer to \"*transitions*\" than \"*on*\" is in terms of syntactic distance. If a model is syntactically uninformed, we would expect \"*media*\" and \"*on*\" to have comparable imparameter-free approach based on measuring the impact that one word has on predicting another word within a sequence in the MLM task (Figure 1). They concluded that BERT \"naturally\" learns some syntactic information, although it is not very similar to linguistic annotated resources.\n\nParallel Universal Dependencies (PUD) treebank of the CoNLL 2017 Shared Task (Zeman et al., 2017). We follow the setup and pre-processing steps employed in pre-training BERT. An example impact map is shown in Figure 1. Dependency. 
We notice that the impact map contains many *stripes*, which are short series of vertical/horizontal cells, typically located along the diagonal. Take the word \"*different*\" as an example (which is illustrated by the second-to-last column in the impact matrix). We observe a clear pacts on the prediction of \"*transitions*\", and vice versa. However, we observe a far greater impact (darker color) between \"*media*\" and \"*transitions*\" than that between \"*on*\" and \"*transitions*\". We will further support this observation with empirical experiments in Section 4.2. Other Structures. Along the diagonal of the impact map, we see that words are grouped into four contiguous chunks that have specific intents (e.g., a noun phrase – *on Capitol Hill*). We also observe that the two middle chunks have relatively strong inter-chunk word impacts and thus a bonding that groups them together, forming a larger The fill-in-the-gap probes of MLM showed that BERT takes subject-predicate agreement into account when performing the cloze task (Goldberg, 2019; van Schijndel et al., 2019), even for meaningless sentences and sentences with distractor clauses between the subject and the verb (Goldberg, 2019). A study of negative polarity items (NPIs) by Warstadt et al. (2019) showed that BERT is better able to detect the presence of NPIs (e.g. \"ever\") and the words that allow their use (e.g. \"whether\") than scope violations.\n\nvertical stripe above the main diagonal. The interpretation is that this particular occurrence of the word \"*different*\" strongly affects the occurrences of those words before it. These strong influences are shown by the darker-colored pixels seen in the second last column of the impact map. This observation agrees with the ground-truth dependency tree, which selects \"*different*\" as the head of all remaining words in the phrase \"*this will be a little different*.\" We also observe similar patterns on \"*transitions*\" and \"*Hill*\". 
Such correlations lead us verb phrase. This observation suggest that BERT may capture the compositionality of the language. In the following sections we quantitatively evaluate these observations. 4 Syntactic Probe We start with two syntactic probes – dependency probe and constituency probe. 4.1 Dependency Probe With the goal of exploring the extent dependency The above claims of syntactic knowledge are belied by the evidence that BERT does not \"understand\" negation and is insensitive to malformed input. In particular, its predictions were not altered2 even with shuffled word order, truncated sentences, removed subjects and objects (Ettinger, 2019). This could mean that either BERT's syntactic knowledge is incomplete, or it does not need to rely on it for solving its tasks. The latter seems more likely, since Glavaš and Vulic´ (2020)\n\nWe begin by using the token-level perturbed masking technique to extract an impact matrix F for each sentence. We then utilize graph-based algorithms to induce a dependency tree from F, and compare it against ground-truth whose annotations\n\n4168\n\nto explore the idea of extracting dependency trees\n\nrelations are captured in BERT, we set out to an-\n\n4168 from the matrices (see Section 4.1). swer the following question: Can BERT outperform linguistically uninformed baselines in unsupervised dependency parsing? If so, to what extent? 
2 See also the recent findings on adversarial triggers, which get the model to produce a certain output even though they are not well-formed from the point of view of a human reader (Wallace et al., 2019a).", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv2_taclccby4_license.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv2_taclccby4_license.pdf", - "query": "Is syntaxis encoded with Bert model ?", - "target_page": 2, - "target_passage": " As far as how syntaxis represented, it seems that syntactic structure is not directly encoded in self-attention weights.", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "Figure 5: Pre-trained weights help BERT find wider optima in fine-tuning on MRPC (right) than training from scratch (left) (Hao et al., 2019)\n\nbeddings as input for training BERT, while Poerner et al. (2019) adapt entity vectors to BERT representations. As mentioned above, Wang et al. (2020c) integrate knowledge not through entity embeddings, but through additional pre-training objective of knowledge base completion. Sun et al. (2019b,c) modify the standard MLM task to mask named entities rather than random words, and Yin et al. (2020) train with MLM objective over both text and linearized table data. Wang et al. (2020a) enhance RoBERTa with both linguistic and factual knowledge with task-specific adapters.\n\nPre-training is the most expensive part of training BERT, and it would be informative to know how much benefit it provides. On some tasks, a randomly initialized and fine-tuned BERT obtains competitive or higher results than the pre-trained BERT with the task classifier and frozen weights (Kovaleva et al., 2019). The consensus in the community is that pre-training does help in most situations, but the degree and its exact contribution requires further investigation. Prasanna et al. 
(2020) found that *most* weights of pre-trained BERT are useful in fine-tuning, although there are \"better\" and \"worse\" subnetworks. One explanation is that pre-trained weights help the fine-tuned BERT find wider and flatter areas with smaller generalization error, which makes the model more robust to overfitting (see Figure 5 from Hao et al. (2019)).\n\nGiven the large number and variety of proposed modifications, one would wish to know how much impact each of them has. However, due to the overall trend towards large model sizes, systematic ablations have become expensive. Most new models claim superiority on standard benchmarks, but gains are often marginal, and estimates of model stability and significance testing are very rare.\n\n### 5.4 Fine-tuning BERT\n\nPre-training + fine-tuning workflow is a crucial part of BERT. The former is supposed to provide task-independent knowledge, and the latter would presumably teach the model to rely more on the representations useful for the task at hand.\n\nKovaleva et al. (2019) did not find that to be the case for BERT fine-tuned on GLUE tasks5 : during fine-tuning, the most changes for 3 epochs occurred in the last two layers of the models, but those changes caused self-attention to focus on [SEP] rather than on linguistically interpretable patterns. It is understandable why fine-tuning would increase the attention to [CLS], but not [SEP]. If Clark et al. 
(2019) are correct that [SEP] serves as \"noop\" indicator, fine-tuning basically tells BERT what to ignore.\n\nSeveral studies explored the possibilities of improving the fine-tuning of BERT:\n\n- Taking more layers into account: learning a complementary representation of the information in deep and output layers (Yang and Zhao, 2019), using a weighted combination of all layers instead of the final one (Su and Cheng, 2019; Kondratyuk and Straka, 2019), and layer dropout (Kondratyuk and Straka, 2019).\n- Two-stage fine-tuning introduces an intermediate supervised training stage between pre-training and fine-tuning (Phang et al., 2019; Garg et al., 2020; Arase and Tsujii, 2019; Pruksachatkun et al., 2020; Glavaš and Vulic´, 2020). Ben-David et al. (2020) propose a pivot-based variant of MLM to fine-tune BERT for domain adaptation.\n- Adversarial token perturbations improve robustness of the model (Zhu et al., 2019).\n- Adversarial regularization in combination with *Bregman Proximal Point Optimization* helps alleviate pre-trained knowledge forgetting and therefore prevents BERT from overfitting to downstream tasks (Jiang et al., 2019a).\n- Mixout regularization improves the stability of BERT fine-tuning even for a small number of training examples (Lee et al., 2019).\n\nWith large models, even fine-tuning becomes expensive, but Houlsby et al. (2019) show that it can\n\n5Kondratyuk and Straka (2019) suggest that fine-tuning on Universal Dependencies does result in syntactically meaningful attention patterns, but there was no quantitative evaluation.", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "report that an intermediate fine-tuning step with supervised parsing does not make much difference for downstream task performance. models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as \"fillin-the-blank\" cloze statements. 
Language\n\nAbstract\n\nRecent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these\n\nFabio Petroni1 Tim Rocktaschel ¨\n\n#### 3.2 Semantic knowledge models have many advantages over structured knowledge bases: they require no schema en-\n\narXiv:1909.01066v2 [cs.CL] 4 Sep 2019\n\nTo date, more studies have been devoted to BERT's knowledge of syntactic rather than semantic phenomena. However, we do have evidence from an MLM probing study that BERT has some knowledge of semantic roles (Ettinger, 2019). BERT even displays some preference for the incorrect fillers for semantic roles that are semantically related to the correct ones, as opposed to those that are unrelated (e.g. \"to tip a chef\" is better than \"to tip a robin\", but worse than \"to tip a waiter\"). gineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-theart pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answer-\n\nTenney et al. (2019b) showed that BERT encodes information about entity types, relations, semantic roles, and proto-roles, since this information can be detected with probing classifiers. ing against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to re-\n\nBERT struggles with representations of numbers. 
Addition and number decoding tasks showed that BERT does not form good representations for floating point numbers and fails to generalize away from the training data (Wallace et al., 2019b). A part of the problem is BERT's wordpiece tokenization, since numbers of similar values can be divided up into substantially different word chunks. call factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https: //github.com/facebookresearch/LAMA. 1 Introduction Recently, pretrained high-capacity language models such as ELMo (Peters et al., 2018a) and BERT\n\nOut-of-the-box BERT is surprisingly brittle to named entity replacements: e.g. replacing names in the coreference task changes 85% of predictions (Balasubramanian et al., 2020). This suggests that the model does not actually form a generic idea of named entities, although its F1 scores on NER probing tasks are high (Tenney et al., 2019a). Broscheit (2019) find that fine-tuning BERT on Wikipedia entity linking \"teaches\" it additional entity knowledge, which would suggest that it did not absorb all the relevant entity information during pre-training on Wikipedia. (Devlin et al., 2018a) have become increasingly important in NLP. They are optimised to either predict the next word in a sequence or some masked word anywhere in a given sequence (*e.g.* \"Dante was born in [Mask] in the year 1265.\"). The parameters of these models appear to store\n\n#### 3.3 World knowledge\n\nThe bulk of evidence about commonsense knowledge captured in BERT comes from practitioners using it to extract such knowledge. One direct probing study of BERT reports that BERT struggles with pragmatic inference and role-based event knowledge (Ettinger, 2019). 
BERT also struggles with abstract attributes of objects, as well as visual and perceptual properties that are likely to be assumed rather than mentioned (Da and Kasai, 2019).\n\nThe MLM component of BERT is easy to adapt for knowledge induction by filling in the\n\nMemory Query Answer\n\nSymbolic Memory Access\n\nFlorence\n\n(Dante, born-in, X)\n\n1,2 Patrick Lewis1,2 Anton Bakhtin1\n\nDante\n\nFlorence born-in\n\nLanguage Models as Knowledge Bases?\n\nYuxiang Wu1,2 Alexander H. Miller1 Sebastian Riedel1,2 1Facebook AI Research 2University College London {fabiopetroni, rockt, plewis, yolo, yuxiangwu, ahm, sriedel}@fb.com\n\nKG\n\nFigure 1: Querying knowledge bases (KB) and language models (LM) for factual knowledge. Figure 2: BERT world knowledge (Petroni et al., 2019)\n\nvast amounts of linguistic knowledge (Peters et al., 2018b; Goldberg, 2019; Tenney et al., 2019) useful for downstream tasks. This knowledge is usually accessed either by conditioning on latent context representations produced by the original model or by using the original model weights to initialize a task-specific model which is then further fine-tuned. This type of knowledge transfer is crucial for current state-of-the-art results on a wide range of tasks. blanks (e.g. \"Cats like to chase [___]\"). Petroni et al. (2019) showed that, for some relation types, vanilla BERT is competitive with methods relying on knowledge bases (Figure 2), and Roberts et al. (2020) show the same for open-domain QA using T5 model (Raffel et al., 2019). Davison et al. (2019) suggest that it generalizes better to unseen data. In order to retrieve BERT's knowledge, we need good template sentences, and there is work on their automatic extraction and augmentation (Bouraoui et al., 2019; Jiang et al., 2019b).\n\nIn contrast, knowledge bases are effective solutions for accessing annotated gold-standard relational data by enabling queries such as (Dante, born-in, X). 
However, in practice we often need to *extract* relational data from text or other modalities to populate these knowledge bases. This requires complex NLP pipelines involving entity extraction, coreference resolution, entity linking and relation extraction (Surdeanu and Ji, 2014) components that often need supervised data and fixed schemas. Moreover, errors can easily propagate and accumulate throughout the pipeline. Instead, we could attempt to query neural language models for relational data by asking them to fill in masked tokens in sequences like \"Dante was born However, BERT cannot reason based on its world knowledge. Forbes et al. (2019) show that BERT can \"guess\" the affordances and properties of many objects, but can not reason about the relationship between properties and affordances. For example, it \"knows\" that people can walk into houses, and that houses are big, but it cannot infer that houses are bigger than people. Zhou et al. (2020) and Richardson and Sabharwal (2019) also show that the performance drops with the number of necessary inference steps. Some of BERT's world knowledge success comes from learning stereotypical associations (Poerner et al., 2019), e.g., a person with an Italian-sounding name is predicted to be Italian, even when it is incorrect.\n\n#### 3.4 Limitations\n\nMultiple probing studies in section 3 and section 4 report that BERT possesses a surprising amount of syntactic, semantic, and world knowledge. However, Tenney et al. (2019a) remarks, \"the fact that a linguistic pattern is not observed by our probing classifier does not guarantee that it is not there, and the observation of a pattern does not tell us how it is used.\" There is also the issue of how complex a probe should be allowed to be (Liu et al., 2019a). 
If a more complex probe recovers more information, to what extent are we still relying on the original model?\n\nFurthermore, different probing methods may lead to complementary or even contradictory conclusions, which makes a single test (as in most stud-", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "layers are more transferable (Liu et al., 2019a). In fine-tuning, it explains why the final layers change the most (Kovaleva et al., 2019), and why restoring the weights of lower layers of fine-tuned BERT to their original values does not dramatically hurt the model performance (Hao et al., 2019).\n\nTenney et al. (2019a) suggest that while syntactic information appears early in the model and can be localized, semantics is spread across the entire model, which explains why certain non-trivial examples get solved incorrectly at first but correctly at the later layers. This is rather to be expected: semantics permeates all language, and linguists debate whether meaningless structures can exist at all (Goldberg, 2006, p.166-182). But this raises the question of what stacking more Transformer layers in BERT actually achieves in terms of the spread of semantic knowledge, and whether that is beneficial. Tenney et al. compared BERT-base and BERT-large, and found that the overall pattern of cumulative score gains is the same, only more spread out in the larger model.\n\nNote that Tenney et al. (2019a)'s experiments concern sentence-level semantic relations; Cui et al. (2020) report that the encoding of ConceptNet semantic relations is the worst in the early layers and increases towards the top. Jawahar et al. 
(2019) place \"surface features in lower layers, syntactic features in middle layers and semantic features in higher layers\", but their conclusion is surprising, given that only one semantic task in this study actually topped at the last layer, and three others peaked around the middle and then considerably degraded by the final layers.\n\n### 5 Training BERT\n\nThis section reviews the proposals to optimize the training and architecture of the original BERT.\n\n### 5.1 Model architecture choices\n\nTo date, the most systematic study of BERT architecture was performed by Wang et al. (2019b), who experimented with the number of layers, heads, and model parameters, varying one option and freezing the others. They concluded that the number of heads was not as significant as the number of layers. That is consistent with the findings of Voita et al. (2019b) and Michel et al. (2019) (section 6), and also the observation by Liu et al. (2019a) that the middle layers were the most transferable. Larger hidden representation size was consistently better, but the gains varied by setting.\n\nAll in all, changes in the number of heads and layers appear to perform different functions. The issue of model depth must be related to the information flow from the most task-specific layers closer to the classifier (Liu et al., 2019a), to the initial layers which appear to be the most task-invariant (Hao et al., 2019), and where the tokens resemble the input tokens the most (Brunner et al., 2020) (see subsection 4.3). If that is the case, a deeper model has more capacity to encode information that is not task-specific.\n\nOn the other head, many self-attention heads in vanilla BERT seem to naturally learn the same patterns (Kovaleva et al., 2019). This explains why pruning them does not have too much impact. 
The question that arises from this is how far we could get with intentionally encouraging diverse self-attention patterns: theoretically, this would mean increasing the amount of information in the model with the same number of weights. Raganato et al. (2020) show for Transformer-based machine translation we can simply pre-set the patterns that we already know the model would learn, instead of learning them from scratch.\n\nVanilla BERT is symmetric and balanced in terms of self-attention and feed-forward layers, but it may not have to be. For the base Transformer, Press et al. (2020) report benefits from more selfattention sublayers at the bottom and more feedforward sublayers at the top.\n\n#### 5.2 Improvements to the training regime\n\nLiu et al. (2019b) demonstrate the benefits of large-batch training: with 8k examples both the language model perplexity and downstream task performance are improved. They also publish their recommendations for other parameters. You et al. (2019) report that with a batch size of 32k BERT's training time can be significantly reduced with no degradation in performance. Zhou et al. (2019) observe that the normalization of the trained [CLS] token stabilizes the training and slightly improves performance on text classification tasks.\n\nGong et al. (2019) note that, since self-attention patterns in higher and lower layers are similar, the model training can be done in a recursive manner, where the shallower version is trained first and then the trained parameters are copied to deeper layers. Such a \"warm-start\" can lead to a 25% faster training without sacrificing performance.", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "# A Primer in BERTology: What We Know About How BERT Works\n\nAnna Rogers Center for Social Data Science University of Copenhagen arogers@sodas.ku.dk\n\nOlga Kovaleva Dept. 
of Computer Science University of Massachusetts Lowell okovalev@cs.uml.edu\n\n### Anna Rumshisky\n\nDept. of Computer Science University of Massachusetts Lowell arum@cs.uml.edu\n\n### Abstract\n\nTransformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.\n\n### 1 Introduction\n\nSince their introduction in 2017, Transformers (Vaswani et al., 2017) have taken NLP by storm, offering enhanced parallelization and better modeling of long-range dependencies. The best known Transformer-based model is BERT (Devlin et al., 2019); it obtained state-of-the-art results in numerous benchmarks and is still a must-have baseline.\n\nWhile it is clear that BERT works remarkably well, it is less clear *why*, which limits further hypothesis-driven improvement of the architecture. Unlike CNNs, the Transformers have little cognitive motivation, and the size of these models limits our ability to experiment with pre-training and perform ablation studies. This explains a large number of studies over the past year that attempted to understand the reasons behind BERT's performance.\n\nIn this paper, we provide an overview of what has been learned to date, highlighting the questions which are still unresolved. We first consider the linguistic aspects of it, i.e., the current evidence regarding the types of linguistic and world knowledge learned by BERT, as well as where and how this knowledge may be stored in the model. 
We then turn to the technical aspects of the model and provide an overview of the current proposals to\n\nimprove BERT's architecture, pre-training and finetuning. We conclude by discussing the issue of overparameterization, the approaches to compressing BERT, and the nascent area of pruning as a model analysis technique.\n\n### 2 Overview of BERT architecture\n\nFundamentally, BERT is a stack of Transformer encoder layers (Vaswani et al., 2017) which consist of multiple self-attention \"heads\". For every input token in a sequence, each head computes key, value and query vectors, used to create a weighted representation. The outputs of all heads in the same layer are combined and run through a fully-connected layer. Each layer is wrapped with a skip connection and followed by layer normalization.\n\nThe conventional workflow for BERT consists of two stages: pre-training and fine-tuning. Pretraining uses two self-supervised tasks: masked language modeling (MLM, prediction of randomly masked input tokens) and next sentence prediction (NSP, predicting if two input sentences are adjacent to each other). In fine-tuning for downstream applications, one or more fully-connected layers are typically added on top of the final encoder layer.\n\nThe input representations are computed as follows: each word in the input is first tokenized into wordpieces (Wu et al., 2016), and then three embedding layers (token, position, and segment) are combined to obtain a fixed-length vector. Special token [CLS] is used for classification predictions, and [SEP] separates input segments.\n\nGoogle1 and HuggingFace (Wolf et al., 2020) provide many variants of BERT, including the original \"base\" and \"large\" versions. They vary in the number of heads, layers, and hidden state size.\n\n1https://github.com/ google-research/bert", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "be successfully approximated with adapter modules. 
They achieve competitive performance on 26 classification tasks at a fraction of the computational cost. Adapters in BERT were also used for multi-task learning (Stickland and Murray, 2019) and cross-lingual transfer (Artetxe et al., 2019). An alternative to fine-tuning is extracting features from frozen representations, but fine-tuning works better for BERT (Peters et al., 2019b).\n\nA big methodological challenge in the current NLP is that the reported performance improvements of new models may well be within variation induced by environment factors (Crane, 2018). BERT is not an exception. Dodge et al. (2020) report significant variation for BERT fine-tuned on GLUE tasks due to both weight initialization and training data order. They also propose early stopping on the less-promising seeds.\n\nAlthough we hope that the above observations may be useful for the practitioners, this section does not exhaust the current research on fine-tuning and its alternatives. For example, we do not cover such topics as Siamese architectures, policy gradient training, automated curriculum learning, and others.\n\n## 6 How big should BERT be?\n\n### 6.1 Overparameterization\n\nTransformer-based models keep growing by orders of magnitude: the 110M parameters of base BERT are now dwarfed by 17B parameters of Turing-NLG (Microsoft, 2020), which is dwarfed by 175B of GPT-3 (Brown et al., 2020). This trend raises concerns about computational complexity of self-attention (Wu et al., 2019a), environmental issues (Strubell et al., 2019; Schwartz et al., 2019), fair comparison of architectures (Aßenmacher and Heumann, 2020), and reproducibility.\n\nHuman language is incredibly complex, and would perhaps take many more parameters to describe fully, but the current models do not make good use of the parameters they already have. Voita et al. (2019b) showed that all but a few Transformer heads could be pruned without significant losses in performance. For BERT, Clark et al. 
(2019) observe that most heads in the same layer show similar self-attention patterns (perhaps related to the fact that the output of all self-attention heads in a layer is passed through the same MLP), which explains why Michel et al. (2019) were able to reduce most layers to a single head.\n\nDepending on the task, some BERT heads/layers are not only redundant (Kao et al., 2020), but also harmful to the downstream task performance. Positive effect from head disabling was reported for machine translation (Michel et al., 2019), abstractive summarization (Baan et al., 2019), and GLUE tasks (Kovaleva et al., 2019). Additionally, Tenney et al. (2019a) examine the cumulative gains of their structural probing classifier, observing that in 5 out of 8 probing tasks some layers cause a drop in scores (typically in the final layers). Gordon et al. (2020) find that 30–40% of the weights can be pruned without impact on downstream tasks.\n\nIn general, larger BERT models perform better (Liu et al., 2019a; Roberts et al., 2020), but not always: BERT-base outperformed BERT-large on subject-verb agreement (Goldberg, 2019) and sentence subject detection (Lin et al., 2019). Given the complexity of language, and amounts of pretraining data, it is not clear why BERT ends up with redundant heads and layers. Clark et al. (2019) suggest that one possible reason is the use of attention dropouts, which causes some attention weights to be zeroed-out during training.\n\n#### 6.2 Compression techniques\n\nGiven the above evidence of overparameterization, it does not come as a surprise that BERT can be efficiently compressed with minimal accuracy loss, which would be highly desirable for real-world applications. Such efforts to date are summarized in Table 1. 
The main approaches are knowledge distillation, quantization, and pruning.\n\nThe studies in the knowledge distillation framework (Hinton et al., 2014) use a smaller student-network trained to mimic the behavior of a larger teacher-network. For BERT, this has been achieved through experiments with loss functions (Sanh et al., 2019b; Jiao et al., 2019), mimicking the activation patterns of individual portions of the teacher network (Sun et al., 2019a), and knowledge transfer at the pre-training (Turc et al., 2019; Jiao et al., 2019; Sun et al., 2020) or fine-tuning stage (Jiao et al., 2019). McCarley et al. (2020) suggest that distillation has so far worked better for GLUE than for reading comprehension, and report good results for QA from a combination of structured pruning and task-specific distillation.\n\nQuantization decreases BERT's memory footprint through lowering the precision of its weights (Shen et al., 2019; Zafrir et al., 2019). Note that", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Figure 3: Attention patterns in BERT (Kovaleva et al., 2019)\n\nies) insufficient (Warstadt et al., 2019). A given method might also favor one model over another, e.g., RoBERTa trails BERT with one tree extraction method, but leads with another (Htut et al., 2019). The choice of linguistic formalism also matters (Kuznetsov and Gurevych, 2020).\n\nIn view of all that, the alternative is to focus on identifying what BERT actually relies on at inference time. This direction is currently pursued both at the level of architecture blocks (to be discussed in detail in subsection 6.3), and at the level of information encoded in model weights. Amnesic probing (Elazar et al., 2020) aims to specifically remove certain information from the model and see how it changes performance, finding, for example, that language modeling does rely on part-of-speech information.\n\nAnother direction is information-theoretic probing. 
Pimentel et al. (2020) operationalize probing as estimating mutual information between the learned representation and a given linguistic property, which highlights that the focus should be not on the amount of information contained in a representation, but rather on how easily it can be extracted from it. Voita and Titov (2020) quantify the amount of effort needed to extract information from a given representation as minimum description length needed to communicate both the probe size and the amount of data required for it to do well on a task.\n\n### 4 Localizing linguistic knowledge\n\n#### 4.1 BERT embeddings\n\nIn studies of BERT, the term \"embedding\" refers to the output of a Transformer layer (typically, the final one). Both conventional static embeddings (Mikolov et al., 2013) and BERT-style embeddings can be viewed in terms of mutual information maximization (Kong et al., 2019), but the latter are contextualized. Every token is represented by a vector dependent on the particular context of occurrence, and contains at least some information about that context (Miaschi and Dell'Orletta, 2020).\n\nSeveral studies reported that distilled contextualized embeddings better encode lexical semantic information (i.e. they are better at traditional word-level tasks such as word similarity). The methods to distill a contextualized representation into static include aggregating the information across multiple contexts (Akbik et al., 2019; Bommasani et al., 2020), encoding \"semantically bleached\" sentences that rely almost exclusively on the meaning of a given word (e.g. \"This is <>\") (May et al., 2019), and even using contextualized embeddings to train static embeddings (Wang et al., 2020d).\n\nBut this is not to say that there is no room for improvement. Ethayarajh (2019) measure how similar the embeddings for identical words are in every layer, reporting that later BERT layers produce more context-specific representations3 . 
They also find that BERT embeddings occupy a narrow cone in the vector space, and this effect increases from the earlier to later layers. That is, two random words will on average have a much higher cosine similarity than expected if embeddings were directionally uniform (isotropic). Since isotropy was shown to be beneficial for static word embeddings (Mu and Viswanath, 2018), this might be a fruitful direction to explore for BERT.\n\nSince BERT embeddings are contextualized, an interesting question is to what extent they capture phenomena like polysemy and homonymy. There is indeed evidence that BERT's contextualized embeddings form distinct clusters corresponding to word senses (Wiedemann et al., 2019; Schmidt and Hofmann, 2020), making BERT successful at word sense disambiguation task. However, Mickus et al. (2019) note that the representations of the same word depend on the position of the sentence in which it occurs, likely due to the NSP objective. This is not desirable from the linguistic point of view, and could be a promising\n\n3Voita et al. (2019a) look at the evolution of token embeddings, showing that in the earlier Transformer layers, MLM forces the acquisition of contextual information at the expense of the token identity, which gets recreated in later layers.", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "tation and, in practical applications, the underlying storage and compute costs. We selected models with embedding dimensions ranging from 384 to 4096.\n\n- *Sequence length:* Being the number of tokens that a model can consider as input, the sequence length is important as it impacts the unit that can be encoded (sentence, paragraph, document). However, encoding overly long sequences requires efficiently storing the relevant information into a single vector. 
Among the selected methods, this criterion varies from 128 tokens to 32768.\n- *Model parameters:* Often correlated with the two first characteristics, parameter count is important for practical applications as it affects usability on resource-efficient machines. The selected models have a number of parameters ranging from 20 million (∼100Mb in float32) to 7 billion (∼28Gb).\n- *Language:* This is a major feature of language models. Some are monolingual, and others are multilingual. Language is usually acquired during pre-training, but sometimes, models familiarize themselves with new languages at tuning. For the benchmark, we selected French models, as well as bilingual or multilingual models. We also included a few ones that claimed to be English (e.g. *all-MiniLM-L12-v2*9 ).\n- *Model types:* There are several strategies to generate text embeddings such as aggregating (e.g. with average pooling) token-level embeddings from raw pre-trained models, or adding an extra contrastive learning step on a sentence similarity task with, optionally, additional transformation layers. We included models of all types in our benchmark, summarizing the model type information under two relevant criteria: finetuned vs pretrained, and trained for sentence similarity or not.\n\nThe selected models are visible in Figure 1, and all of their characteristics are summarized in appendix Table 7. Overall, the selection includes the best models from the sentence transformers framework (Reimers and Gurevych, 2019), the most popular French NLP models (Le et al., 2020; Martin\n\net al., 2019), their variants optimized for semantic similarity (Reimers and Gurevych, 2019), numerous multilingual models performing at the top on MTEB (e.g *E5* and *T5*), *Bloom* variants (Zhang et al., 2023), models based on very recent powerful LLMs (Wang et al., 2023; Faysse et al., 2024) and finally the proprietary models of OpenAI, Cohere and Voyage. 
Certain models were selected in multiple sizes to isolate the dimensionality effect effectively. We provide information on the models' licenses as reported in the Hugging Face hub10 . However, we encourage readers to conduct further research before utilizing a model.\n\n#### 3.3 Evaluation\n\nFor the sake of homogeneity, models are evaluated using the same metrics per task as in MTEB (Muennighoff et al., 2022): Classification (Accuracy), Bitext mining (F1 score), Pair classification (AP), Clustering (V measure), Reranking (MAP), Retrieval (NDCG@10), Summarization and STS (Spearman correlation based on cosine similarity). BitextMining tasks are excluded from the average performance scores and therefore the figures, as this task evaluates 2 languages instead of one, and this benchmark focuses only on one language (French). We present the results for both *DiaBlaBitextMining* and *FloresBitextMining* in Table 12.\n\nUsing the overall benchmark results, our goal will be to answer the following research questions: Q1: Is a model outstanding on all tasks?\n\nAs we are trying to find out whether one embedding model is statistically better than the others for French, the objective will also be to analyze the performance of the models by tasks to facilitate model choice for specific applications.\n\nQ2: Are there any links between the model characteristics and performance?\n\nIn section 3.2, we undertook the substantial task of gathering the characteristics of all evaluated models. 
The goal here will be to analyze their impact on performance and draw conclusions about, for example, the relationship between embedding dimension and model ranking on the benchmark.\n\nQ3: Do monolingual models have multilingual capabilities?\n\nWe interrogate the ability of a model trained exclusively in one language to perform well in another language.\n\nQ4: Are there any correlations between datasets\n\n9 https://huggingface.co./sentence-transformers/ all-MiniLM-L12-v2\n\n10https://huggingface.co./models", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv4.pdf" - }, - { - "text": "### 3 What knowledge does BERT have?\n\nA number of studies have looked at the knowledge encoded in BERT weights. The popular approaches include fill-in-the-gap probes of MLM, analysis of self-attention weights, and probing classifiers with different BERT representations as inputs.\n\n#### 3.1 Syntactic knowledge\n\nLin et al. (2019) showed that BERT representations are hierarchical rather than linear, i.e. there is something akin to syntactic tree structure in addition to the word order information. Tenney et al. (2019b) and Liu et al. (2019a) also showed that BERT embeddings encode information about parts of speech, syntactic chunks and roles. Enough syntactic information seems to be captured in the token embeddings themselves to recover syntactic trees (Vilares et al., 2020; Kim et al., 2020; Rosa and Marecek ˇ , 2019), although probing classifiers could not recover the labels of distant parent nodes in the syntactic tree (Liu et al., 2019a). Warstadt and Bowman (2020) report evidence of hierarchical structure in three out of four probing tasks. [CLS]For thosewho follow social media transitions on CapitolHill , thiswill be a little different . [CLS] For those who follow social media transitions on Capitol Hill , this will be a little different . 
0 Figure 1: Heatmap of the impact matrix for the sentence \"For those who follow social media transitions on Capitol Hill, this will be a little different.\"\n\nAs far as *how* syntax is represented, it seems that syntactic structure is not directly encoded in self-attention weights. Htut et al. (2019) were unable to extract full parse trees from BERT heads even with the gold annotations for the root. Jawahar et al. (2019) include a brief illustration of a dependency tree extracted directly from self-attention weights, but provide no quantitative evaluation. 3 Visualization with Impact Maps Before we discuss specific syntactic phenomena, let us first analyze some example impact matrices derived from sample sentences. We visualize an impact matrix of a sentence by displaying a heatmap. We use the term \"impact map\" to refer\n\nHowever, syntactic information can be recovered from BERT token representations. Hewitt and Manning (2019) were able to learn transformation matrices that successfully recovered syntactic dependencies in PennTreebank data from BERT's token embeddings (see also Manning et al., 2020). Jawahar et al. (2019) experimented with transformations of the [CLS] token using Tensor Product Decomposition Networks (McCoy et al., 2019a), concluding that dependency trees are the best match among 5 decomposition schemes (although the reported MSE differences are very small). Miaschi and Dell'Orletta (2020) performs a range of syntactic probing experiments with concatenated token representations as input. to a heatmap of an impact matrix. Setup. We extract impact matrices by feeding BERT with 1,000 sentences from the English Parallel Universal Dependencies (PUD) treebank of the CoNLL 2017 Shared Task (Zeman et al., 2017). We follow the setup and pre-processing steps employed in pre-training BERT. An example impact map is shown in Figure 1. Dependency. 
We notice that the impact map contains many *stripes*, which are short series of vertical/horizontal cells, typically located along the diagonal. Take the word \"*different*\" as an example (which is illustrated by the second-to-last column in the impact matrix). We observe a clear vertical stripe above the main diagonal. The interpretation is that this particular occurrence of the word \"*different*\" strongly affects the occurrences\n\nNote that all these approaches look for the evidence of gold-standard linguistic structures, and add some amount of extra knowledge to the probe. Most recently, Wu et al. (2020) proposed a of those words before it. These strong influences are shown by the darker-colored pixels seen in the second last column of the impact map. This observation agrees with the ground-truth dependency tree, which selects \"*different*\" as the head of all\n\nfrom the matrices (see Section 4.1).\n\nremaining words in the phrase \"*this will be a little different*.\" We also observe similar patterns on \"*transitions*\" and \"*Hill*\". Such correlations lead us to explore the idea of extracting dependency trees\n\nfollow social media transitions on Capitol Hill\n\nFigure 2: Part of the constituency tree.\n\nConstituency. Figure 2 shows part of the constituency tree of our example sentence generated by Stanford CoreNLP (Manning et al., 2014). In this sentence, \"*media*\" and \"*on*\" are two words that are adjacent to \"*transitions*\". From the tree, however, we see that \"*media*\" is closer to \"*transitions*\" than \"*on*\" is in terms of syntactic distance. If a model is syntactically uninformed, we would expect \"*media*\" and \"*on*\" to have comparable impacts on the prediction of \"*transitions*\", and vice versa. However, we observe a far greater impact (darker color) between \"*media*\" and \"*transitions*\" than that between \"*on*\" and \"*transitions*\". We will further support this observation with empirical ex-\n\nOther Structures. 
Along the diagonal of the impact map, we see that words are grouped into four contiguous chunks that have specific intents (e.g., a noun phrase – *on Capitol Hill*). We also observe that the two middle chunks have relatively strong inter-chunk word impacts and thus a bonding that groups them together, forming a larger verb phrase. This observation suggest that BERT may capture the compositionality of the language. In the following sections we quantitatively eval-\n\nWe start with two syntactic probes – dependency\n\nWith the goal of exploring the extent dependency relations are captured in BERT, we set out to answer the following question: Can BERT outperform linguistically uninformed baselines in unsupervised dependency parsing? If so, to what ex-\n\nWe begin by using the token-level perturbed masking technique to extract an impact matrix F for each sentence. We then utilize graph-based algorithms to induce a dependency tree from F, and compare it against ground-truth whose annotations\n\nperiments in Section 4.2.\n\nuate these observations.\n\n4 Syntactic Probe\n\nprobe and constituency probe.\n\n4.1 Dependency Probe\n\ntent?\n\n3 Visualization with Impact Maps Before we discuss specific syntactic phenomena, Figure 2: Part of the constituency tree. Constituency. Figure 2 shows part of the con-Figure 1: Parameter-free probe for syntactic knowledge: words sharing syntactic subtrees have larger impact on each other in the MLM prediction (Wu et al., 2020)\n\n1\n\n2\n\n3\n\n4\n\n5\n\nlet us first analyze some example impact matrices derived from sample sentences. We visualize an impact matrix of a sentence by displaying a heatmap. We use the term \"impact map\" to refer to a heatmap of an impact matrix. Setup. We extract impact matrices by feeding BERT with 1,000 sentences from the English stituency tree of our example sentence generated by Stanford CoreNLP (Manning et al., 2014). 
In this sentence, \"*media*\" and \"*on*\" are two words that are adjacent to \"*transitions*\". From the tree, however, we see that \"*media*\" is closer to \"*transitions*\" than \"*on*\" is in terms of syntactic distance. If a model is syntactically uninformed, we would expect \"*media*\" and \"*on*\" to have comparable imparameter-free approach based on measuring the impact that one word has on predicting another word within a sequence in the MLM task (Figure 1). They concluded that BERT \"naturally\" learns some syntactic information, although it is not very similar to linguistic annotated resources.\n\nParallel Universal Dependencies (PUD) treebank of the CoNLL 2017 Shared Task (Zeman et al., 2017). We follow the setup and pre-processing steps employed in pre-training BERT. An example impact map is shown in Figure 1. Dependency. We notice that the impact map contains many *stripes*, which are short series of vertical/horizontal cells, typically located along the diagonal. Take the word \"*different*\" as an example (which is illustrated by the second-to-last column in the impact matrix). We observe a clear pacts on the prediction of \"*transitions*\", and vice versa. However, we observe a far greater impact (darker color) between \"*media*\" and \"*transitions*\" than that between \"*on*\" and \"*transitions*\". We will further support this observation with empirical experiments in Section 4.2. Other Structures. Along the diagonal of the impact map, we see that words are grouped into four contiguous chunks that have specific intents (e.g., a noun phrase – *on Capitol Hill*). 
We also observe that the two middle chunks have relatively strong inter-chunk word impacts and thus a bonding that groups them together, forming a larger The fill-in-the-gap probes of MLM showed that BERT takes subject-predicate agreement into account when performing the cloze task (Goldberg, 2019; van Schijndel et al., 2019), even for meaningless sentences and sentences with distractor clauses between the subject and the verb (Goldberg, 2019). A study of negative polarity items (NPIs) by Warstadt et al. (2019) showed that BERT is better able to detect the presence of NPIs (e.g. \"ever\") and the words that allow their use (e.g. \"whether\") than scope violations.\n\nvertical stripe above the main diagonal. The interpretation is that this particular occurrence of the word \"*different*\" strongly affects the occurrences of those words before it. These strong influences are shown by the darker-colored pixels seen in the second last column of the impact map. This observation agrees with the ground-truth dependency tree, which selects \"*different*\" as the head of all remaining words in the phrase \"*this will be a little different*.\" We also observe similar patterns on \"*transitions*\" and \"*Hill*\". Such correlations lead us verb phrase. This observation suggest that BERT may capture the compositionality of the language. In the following sections we quantitatively evaluate these observations. 4 Syntactic Probe We start with two syntactic probes – dependency probe and constituency probe. 4.1 Dependency Probe With the goal of exploring the extent dependency The above claims of syntactic knowledge are belied by the evidence that BERT does not \"understand\" negation and is insensitive to malformed input. In particular, its predictions were not altered2 even with shuffled word order, truncated sentences, removed subjects and objects (Ettinger, 2019). 
This could mean that either BERT's syntactic knowledge is incomplete, or it does not need to rely on it for solving its tasks. The latter seems more likely, since Glavaš and Vulic´ (2020)\n\nWe begin by using the token-level perturbed masking technique to extract an impact matrix F for each sentence. We then utilize graph-based algorithms to induce a dependency tree from F, and compare it against ground-truth whose annotations\n\n4168\n\nto explore the idea of extracting dependency trees\n\nrelations are captured in BERT, we set out to an-\n\n4168 from the matrices (see Section 4.1). swer the following question: Can BERT outperform linguistically uninformed baselines in unsupervised dependency parsing? If so, to what extent? 2 See also the recent findings on adversarial triggers, which get the model to produce a certain output even though they are not well-formed from the point of view of a human reader (Wallace et al., 2019a).", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "| | | | Compression Performance Speedup | | Model | Evaluation |\n| --- | --- | --- | --- | --- | --- | --- |\n| | BERT-base (Devlin et al., 2019) | ×1 | 100% | ×1 | BERT12 | All GLUE tasks, SQuAD |\n| | BERT-small | ×3.8 | 91% | - | BERT4† | All GLUE tasks |\n| | DistilBERT (Sanh et al., 2019a) BERT6-PKD (Sun et al., 2019a) | ×1.5 ×1.6 | 90%§ 98% | ×1.6 ×1.9 | BERT6 BERT6 | All GLUE tasks, SQuAD No WNLI, CoLA, STS-B; RACE |\n| | BERT3-PKD (Sun et al., 2019a) | ×2.4 | 92% | ×3.7 | BERT3 | No WNLI, CoLA, STS-B; RACE |\n| | Aguilar et al. (2019), Exp. 
3 | ×1.6 | 93% | - | BERT6 | CoLA, MRPC, QQP, RTE |\n| | BERT-48 (Zhao et al., 2019) | ×62 | 87% | ×77 | BERT12 | ∗† MNLI, MRPC, SST-2 |\n| | BERT-192 (Zhao et al., 2019) | ×5.7 | 93% | ×22 | BERT12 | ∗† MNLI, MRPC, SST-2 |\n| Distillation | TinyBERT (Jiao et al., 2019) | ×7.5 | 96% | ×9.4 | † BERT4 | No WNLI; SQuAD |\n| | MobileBERT (Sun et al., 2020) | ×4.3 | 100% | ×4 | † BERT24 | No WNLI; SQuAD |\n| | PD (Turc et al., 2019) | ×1.6 | 98% | ×2.5‡ | † BERT6 | No WNLI, CoLA and STS-B |\n| | WaLDORf (Tian et al., 2019) | ×4.4 | 93% | ×9 | †k BERT8 | SQuAD |\n| | MiniLM (Wang et al., 2020b) | ×1.65 | 99% | ×2 | BERT6 | No WNLI, STS-B, MNLImm; SQuAD |\n| | MiniBERT(Tsai et al., 2019) | ∗∗ ×6 | 98% | ×27∗∗ | mBERT3 | † CoNLL-18 POS and morphology |\n| | BiLSTM-soft (Tang et al., 2019) | ×110 | 91% | ×434‡ | | BiLSTM1 MNLI, QQP, SST-2 |\n| | Q-BERT-MP (Shen et al., 2019) | ×13 | 98%¶ | - | BERT12 | MNLI, SST-2, CoNLL-03, SQuAD |\n| Quanti zation | BERT-QAT (Zafrir et al., 2019) | ×4 | 99% | - | BERT12 | No WNLI, MNLI; SQuAD |\n| | GOBO(Zadeh and Moshovos, 2020) | ×9.8 | 99% | - | BERT12 | MNLI |\n| | McCarley et al. (2020), ff2 | ×2.2‡ | 98%‡ | ×1.9‡ | BERT24 | SQuAD, Natural Questions |\n| Pruning | RPP (Guo et al., 2019) | ×1.7‡ | 99%‡ | - | BERT24 | No WNLI, STS-B; SQuAD |\n| | Soft MvP (Sanh et al., 2020) | ×33 | 94%¶ | - | BERT12 | MNLI, QQP, SQuAD |\n| | IMP (Chen et al., 2020), rewind 50% | ×1.4–2.5 | 94–100% | - | BERT12 | No MNLI-mm; SQuAD |\n| | ALBERT-base (Lan et al., 2020b) | ×9 | 97% | - | † BERT12 | MNLI, SST-2 |\n| | ALBERT-xxlarge (Lan et al., 2020b) | ×0.47 | 107% | - | † BERT12 | MNLI, SST-2 |\n| Other | BERT-of-Theseus (Xu et al., 2020) | ×1.6 | 98% | ×1.9 | BERT6 | No WNLI |\n| | PoWER-BERT (Goyal et al., 2020) | N/A | 99% | ×2–4.5 | BERT12 | No WNLI; RACE |\n\nTable 1: Comparison of BERT compression studies. Compression, performance retention, inference time speedup figures are given with respect to BERTbase, unless indicated otherwise. 
Performance retention is measured as a ratio of average scores achieved by a given model and by BERTbase. The subscript in the model description reflects the number of layers used. ∗Smaller vocabulary used. †The dimensionality of the hidden layers is reduced. kConvolutional layers used. ‡Compared to BERTlarge. ∗∗Compared to mBERT. §As reported in (Jiao et al., 2019).¶In comparison to the dev set.\n\nthis strategy often requires compatible hardware.\n\nAs discussed in section 6, individual selfattention heads and BERT layers can be disabled without significant drop in performance (Michel et al., 2019; Kovaleva et al., 2019; Baan et al., 2019). Pruning is a compression technique that takes advantage of that fact, typically reducing the amount of computation via zeroing out of certain parts of the large model. In structured pruning, architecture blocks are dropped, as in LayerDrop (Fan et al., 2019). In unstructured, the weights in the entire model are pruned irrespective of their location, as in magnitude pruning (Chen et al., 2020) or movement pruning (Sanh et al., 2020).\n\nPrasanna et al. (2020) and Chen et al. (2020) explore BERT from the perspective of the lottery ticket hypothesis (Frankle and Carbin, 2019), looking specifically at the \"winning\" subnetworks in pre-trained BERT. They independently find that such subnetworks do exist, and that transferability between subnetworks for different tasks varies.\n\nIf the ultimate goal of training BERT is compression, Li et al. (2020) recommend training larger\n\nmodels and compressing them heavily rather than compressing smaller models lightly.\n\nOther techniques include decomposing BERT's embedding matrix into smaller matrices (Lan et al., 2020a), progressive module replacing (Xu et al., 2020) and dynamic elimination of intermediate encoder outputs (Goyal et al., 2020). See Ganesh et al. 
(2020) for a more detailed discussion of compression methods.\n\n#### 6.3 Pruning and model analysis\n\nThere is a nascent discussion around pruning as a model analysis technique. The basic idea is that a compressed model a priori consists of elements that are useful for prediction; therefore by finding out what they do we may find out what the whole network does. For instance, BERT has heads that seem to encode frame-semantic relations, but disabling them might not hurt downstream task performance Kovaleva et al. (2019); this suggests that this knowledge is not actually used.\n\nFor the base Transformer, Voita et al. (2019b) identify the functions of self-attention heads and", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "```\n# Initialize model for single subject\n single_subject_model = create_model (\n agent , # ActionModels agent object\n priors , # Dictionary with parameter priors\n observations , # Vector of observations\n actions , # Vector of actions\n )\n✝ ✆\n```\nIf we have multiple subjects whose parameters we wish to estimate, we can achieve this by passing a dataframe object to the same function. Here, we specify which columns are actions and inputs, as well as which column to use for grouping the specific time series:\n\n✞ ☎\n\n✞ ☎\n\n```\n# Initialize model for multiple subjects\nmultiple_subjects_model = create_model (\n agent , # ActionModels agent\n priors , # Dictionary with parameter priors\n data ; # Dataframe with data from multiple subjects\n grouping_cols = [\"ID\"], # Column to split dataframe\n input_cols = [\" observations \"], # Columns with observations\n action_cols = [\" actions \"], # Columns with actions\n)\n```\nThis model can be used as a normal Turing model object. 
ActionModels provides a convenience function for doing this with appropriate defaults:\n\n✞ ☎\n\n✝ ✆\n\n```\nresults = fit_model (\n single_subject_model ; # Model object\n n_iterations = 1000 , # Number of iterations\n n_chains = 1 # Number of chains\n)\n```\nThe output of the fit_model function is an object containing the standard Turing chains, which we can use to access the chain statistics:\n\n✝ ✆\n\n```\n✞ ☎\n# Extract the Chains object\nchains = results . chains\n# Rename the chains for interpretable parameter names\nrename_chains ( chains , model )\n✝ ✆\n```\nActionModels provides a range of convenience functions for behavioural modelling. We can extract the posterior parameter estimates for each participant, and extract it in a convenient data frame structure for later processing:\n\n✞ ☎\n\n✝ ✆\n\n```\n# Extract quantities from the chain\nagent_parameters = extract_quantities ( single_subject_model , chains )\n# Get posterior median\nestimates = get_estimates ( agent_parameters )\n```\nWe can also sample parameter values from the prior and plot the posteriors against the priors:\n\n```\n✞ ☎\n # Sample from the prior\n prior_chains = sample ( single_subject_model , Prior (), 1000 )\n # Plot parameter distribution .\n plot_parameter_distribution ( chains , prior_chains )\n```\nSee the documentation for ActionModels at github.com/ilabcode/ActionModels.jl for various other functionalities, including modelling how parameters vary across a population,\n\n✝ ✆", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed7_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv2_taclccby4_license.pdf", - "query": "Is BERT good with numbers representations ?", - "target_page": 3, - "target_passage": " BERTstruggles with representations of numbers. 
", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "# A Primer in BERTology: What We Know About How BERT Works\n\nAnna Rogers Center for Social Data Science University of Copenhagen arogers@sodas.ku.dk\n\nOlga Kovaleva Dept. of Computer Science University of Massachusetts Lowell okovalev@cs.uml.edu\n\n### Anna Rumshisky\n\nDept. of Computer Science University of Massachusetts Lowell arum@cs.uml.edu\n\n### Abstract\n\nTransformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.\n\n### 1 Introduction\n\nSince their introduction in 2017, Transformers (Vaswani et al., 2017) have taken NLP by storm, offering enhanced parallelization and better modeling of long-range dependencies. The best known Transformer-based model is BERT (Devlin et al., 2019); it obtained state-of-the-art results in numerous benchmarks and is still a must-have baseline.\n\nWhile it is clear that BERT works remarkably well, it is less clear *why*, which limits further hypothesis-driven improvement of the architecture. Unlike CNNs, the Transformers have little cognitive motivation, and the size of these models limits our ability to experiment with pre-training and perform ablation studies. This explains a large number of studies over the past year that attempted to understand the reasons behind BERT's performance.\n\nIn this paper, we provide an overview of what has been learned to date, highlighting the questions which are still unresolved. 
We first consider the linguistic aspects of it, i.e., the current evidence regarding the types of linguistic and world knowledge learned by BERT, as well as where and how this knowledge may be stored in the model. We then turn to the technical aspects of the model and provide an overview of the current proposals to\n\nimprove BERT's architecture, pre-training and finetuning. We conclude by discussing the issue of overparameterization, the approaches to compressing BERT, and the nascent area of pruning as a model analysis technique.\n\n### 2 Overview of BERT architecture\n\nFundamentally, BERT is a stack of Transformer encoder layers (Vaswani et al., 2017) which consist of multiple self-attention \"heads\". For every input token in a sequence, each head computes key, value and query vectors, used to create a weighted representation. The outputs of all heads in the same layer are combined and run through a fully-connected layer. Each layer is wrapped with a skip connection and followed by layer normalization.\n\nThe conventional workflow for BERT consists of two stages: pre-training and fine-tuning. Pretraining uses two self-supervised tasks: masked language modeling (MLM, prediction of randomly masked input tokens) and next sentence prediction (NSP, predicting if two input sentences are adjacent to each other). In fine-tuning for downstream applications, one or more fully-connected layers are typically added on top of the final encoder layer.\n\nThe input representations are computed as follows: each word in the input is first tokenized into wordpieces (Wu et al., 2016), and then three embedding layers (token, position, and segment) are combined to obtain a fixed-length vector. Special token [CLS] is used for classification predictions, and [SEP] separates input segments.\n\nGoogle1 and HuggingFace (Wolf et al., 2020) provide many variants of BERT, including the original \"base\" and \"large\" versions. 
They vary in the number of heads, layers, and hidden state size.\n\n1https://github.com/ google-research/bert", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Figure 5: Pre-trained weights help BERT find wider optima in fine-tuning on MRPC (right) than training from scratch (left) (Hao et al., 2019)\n\nbeddings as input for training BERT, while Poerner et al. (2019) adapt entity vectors to BERT representations. As mentioned above, Wang et al. (2020c) integrate knowledge not through entity embeddings, but through additional pre-training objective of knowledge base completion. Sun et al. (2019b,c) modify the standard MLM task to mask named entities rather than random words, and Yin et al. (2020) train with MLM objective over both text and linearized table data. Wang et al. (2020a) enhance RoBERTa with both linguistic and factual knowledge with task-specific adapters.\n\nPre-training is the most expensive part of training BERT, and it would be informative to know how much benefit it provides. On some tasks, a randomly initialized and fine-tuned BERT obtains competitive or higher results than the pre-trained BERT with the task classifier and frozen weights (Kovaleva et al., 2019). The consensus in the community is that pre-training does help in most situations, but the degree and its exact contribution requires further investigation. Prasanna et al. (2020) found that *most* weights of pre-trained BERT are useful in fine-tuning, although there are \"better\" and \"worse\" subnetworks. One explanation is that pre-trained weights help the fine-tuned BERT find wider and flatter areas with smaller generalization error, which makes the model more robust to overfitting (see Figure 5 from Hao et al. (2019)).\n\nGiven the large number and variety of proposed modifications, one would wish to know how much impact each of them has. 
However, due to the overall trend towards large model sizes, systematic ablations have become expensive. Most new models claim superiority on standard benchmarks, but gains are often marginal, and estimates of model stability and significance testing are very rare.\n\n### 5.4 Fine-tuning BERT\n\nPre-training + fine-tuning workflow is a crucial part of BERT. The former is supposed to provide task-independent knowledge, and the latter would presumably teach the model to rely more on the representations useful for the task at hand.\n\nKovaleva et al. (2019) did not find that to be the case for BERT fine-tuned on GLUE tasks5 : during fine-tuning, the most changes for 3 epochs occurred in the last two layers of the models, but those changes caused self-attention to focus on [SEP] rather than on linguistically interpretable patterns. It is understandable why fine-tuning would increase the attention to [CLS], but not [SEP]. If Clark et al. (2019) are correct that [SEP] serves as \"noop\" indicator, fine-tuning basically tells BERT what to ignore.\n\nSeveral studies explored the possibilities of improving the fine-tuning of BERT:\n\n- Taking more layers into account: learning a complementary representation of the information in deep and output layers (Yang and Zhao, 2019), using a weighted combination of all layers instead of the final one (Su and Cheng, 2019; Kondratyuk and Straka, 2019), and layer dropout (Kondratyuk and Straka, 2019).\n- Two-stage fine-tuning introduces an intermediate supervised training stage between pre-training and fine-tuning (Phang et al., 2019; Garg et al., 2020; Arase and Tsujii, 2019; Pruksachatkun et al., 2020; Glavaš and Vulic´, 2020). Ben-David et al. 
(2020) propose a pivot-based variant of MLM to fine-tune BERT for domain adaptation.\n- Adversarial token perturbations improve robustness of the model (Zhu et al., 2019).\n- Adversarial regularization in combination with *Bregman Proximal Point Optimization* helps alleviate pre-trained knowledge forgetting and therefore prevents BERT from overfitting to downstream tasks (Jiang et al., 2019a).\n- Mixout regularization improves the stability of BERT fine-tuning even for a small number of training examples (Lee et al., 2019).\n\nWith large models, even fine-tuning becomes expensive, but Houlsby et al. (2019) show that it can\n\n5Kondratyuk and Straka (2019) suggest that fine-tuning on Universal Dependencies does result in syntactically meaningful attention patterns, but there was no quantitative evaluation.", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "| | | | Compression Performance Speedup | | Model | Evaluation |\n| --- | --- | --- | --- | --- | --- | --- |\n| | BERT-base (Devlin et al., 2019) | ×1 | 100% | ×1 | BERT12 | All GLUE tasks, SQuAD |\n| | BERT-small | ×3.8 | 91% | - | BERT4† | All GLUE tasks |\n| | DistilBERT (Sanh et al., 2019a) BERT6-PKD (Sun et al., 2019a) | ×1.5 ×1.6 | 90%§ 98% | ×1.6 ×1.9 | BERT6 BERT6 | All GLUE tasks, SQuAD No WNLI, CoLA, STS-B; RACE |\n| | BERT3-PKD (Sun et al., 2019a) | ×2.4 | 92% | ×3.7 | BERT3 | No WNLI, CoLA, STS-B; RACE |\n| | Aguilar et al. (2019), Exp. 
3 | ×1.6 | 93% | - | BERT6 | CoLA, MRPC, QQP, RTE |\n| | BERT-48 (Zhao et al., 2019) | ×62 | 87% | ×77 | BERT12 | ∗† MNLI, MRPC, SST-2 |\n| | BERT-192 (Zhao et al., 2019) | ×5.7 | 93% | ×22 | BERT12 | ∗† MNLI, MRPC, SST-2 |\n| Distillation | TinyBERT (Jiao et al., 2019) | ×7.5 | 96% | ×9.4 | † BERT4 | No WNLI; SQuAD |\n| | MobileBERT (Sun et al., 2020) | ×4.3 | 100% | ×4 | † BERT24 | No WNLI; SQuAD |\n| | PD (Turc et al., 2019) | ×1.6 | 98% | ×2.5‡ | † BERT6 | No WNLI, CoLA and STS-B |\n| | WaLDORf (Tian et al., 2019) | ×4.4 | 93% | ×9 | †k BERT8 | SQuAD |\n| | MiniLM (Wang et al., 2020b) | ×1.65 | 99% | ×2 | BERT6 | No WNLI, STS-B, MNLImm; SQuAD |\n| | MiniBERT(Tsai et al., 2019) | ∗∗ ×6 | 98% | ×27∗∗ | mBERT3 | † CoNLL-18 POS and morphology |\n| | BiLSTM-soft (Tang et al., 2019) | ×110 | 91% | ×434‡ | | BiLSTM1 MNLI, QQP, SST-2 |\n| | Q-BERT-MP (Shen et al., 2019) | ×13 | 98%¶ | - | BERT12 | MNLI, SST-2, CoNLL-03, SQuAD |\n| Quanti zation | BERT-QAT (Zafrir et al., 2019) | ×4 | 99% | - | BERT12 | No WNLI, MNLI; SQuAD |\n| | GOBO(Zadeh and Moshovos, 2020) | ×9.8 | 99% | - | BERT12 | MNLI |\n| | McCarley et al. (2020), ff2 | ×2.2‡ | 98%‡ | ×1.9‡ | BERT24 | SQuAD, Natural Questions |\n| Pruning | RPP (Guo et al., 2019) | ×1.7‡ | 99%‡ | - | BERT24 | No WNLI, STS-B; SQuAD |\n| | Soft MvP (Sanh et al., 2020) | ×33 | 94%¶ | - | BERT12 | MNLI, QQP, SQuAD |\n| | IMP (Chen et al., 2020), rewind 50% | ×1.4–2.5 | 94–100% | - | BERT12 | No MNLI-mm; SQuAD |\n| | ALBERT-base (Lan et al., 2020b) | ×9 | 97% | - | † BERT12 | MNLI, SST-2 |\n| | ALBERT-xxlarge (Lan et al., 2020b) | ×0.47 | 107% | - | † BERT12 | MNLI, SST-2 |\n| Other | BERT-of-Theseus (Xu et al., 2020) | ×1.6 | 98% | ×1.9 | BERT6 | No WNLI |\n| | PoWER-BERT (Goyal et al., 2020) | N/A | 99% | ×2–4.5 | BERT12 | No WNLI; RACE |\n\nTable 1: Comparison of BERT compression studies. Compression, performance retention, inference time speedup figures are given with respect to BERTbase, unless indicated otherwise. 
Performance retention is measured as a ratio of average scores achieved by a given model and by BERTbase. The subscript in the model description reflects the number of layers used. ∗Smaller vocabulary used. †The dimensionality of the hidden layers is reduced. kConvolutional layers used. ‡Compared to BERTlarge. ∗∗Compared to mBERT. §As reported in (Jiao et al., 2019).¶In comparison to the dev set.\n\nthis strategy often requires compatible hardware.\n\nAs discussed in section 6, individual selfattention heads and BERT layers can be disabled without significant drop in performance (Michel et al., 2019; Kovaleva et al., 2019; Baan et al., 2019). Pruning is a compression technique that takes advantage of that fact, typically reducing the amount of computation via zeroing out of certain parts of the large model. In structured pruning, architecture blocks are dropped, as in LayerDrop (Fan et al., 2019). In unstructured, the weights in the entire model are pruned irrespective of their location, as in magnitude pruning (Chen et al., 2020) or movement pruning (Sanh et al., 2020).\n\nPrasanna et al. (2020) and Chen et al. (2020) explore BERT from the perspective of the lottery ticket hypothesis (Frankle and Carbin, 2019), looking specifically at the \"winning\" subnetworks in pre-trained BERT. They independently find that such subnetworks do exist, and that transferability between subnetworks for different tasks varies.\n\nIf the ultimate goal of training BERT is compression, Li et al. (2020) recommend training larger\n\nmodels and compressing them heavily rather than compressing smaller models lightly.\n\nOther techniques include decomposing BERT's embedding matrix into smaller matrices (Lan et al., 2020a), progressive module replacing (Xu et al., 2020) and dynamic elimination of intermediate encoder outputs (Goyal et al., 2020). See Ganesh et al. 
(2020) for a more detailed discussion of compression methods.\n\n#### 6.3 Pruning and model analysis\n\nThere is a nascent discussion around pruning as a model analysis technique. The basic idea is that a compressed model a priori consists of elements that are useful for prediction; therefore by finding out what they do we may find out what the whole network does. For instance, BERT has heads that seem to encode frame-semantic relations, but disabling them might not hurt downstream task performance Kovaleva et al. (2019); this suggests that this knowledge is not actually used.\n\nFor the base Transformer, Voita et al. (2019b) identify the functions of self-attention heads and", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "report that an intermediate fine-tuning step with supervised parsing does not make much difference for downstream task performance. models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as \"fillin-the-blank\" cloze statements. Language\n\nAbstract\n\nRecent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these\n\nFabio Petroni1 Tim Rocktaschel ¨\n\n#### 3.2 Semantic knowledge models have many advantages over structured knowledge bases: they require no schema en-\n\narXiv:1909.01066v2 [cs.CL] 4 Sep 2019\n\nTo date, more studies have been devoted to BERT's knowledge of syntactic rather than semantic phenomena. However, we do have evidence from an MLM probing study that BERT has some knowledge of semantic roles (Ettinger, 2019). BERT even displays some preference for the incorrect fillers for semantic roles that are semantically related to the correct ones, as opposed to those that are unrelated (e.g. \"to tip a chef\" is better than \"to tip a robin\", but worse than \"to tip a waiter\"). 
gineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-theart pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answer-\n\nTenney et al. (2019b) showed that BERT encodes information about entity types, relations, semantic roles, and proto-roles, since this information can be detected with probing classifiers. ing against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to re-\n\nBERT struggles with representations of numbers. Addition and number decoding tasks showed that BERT does not form good representations for floating point numbers and fails to generalize away from the training data (Wallace et al., 2019b). A part of the problem is BERT's wordpiece tokenization, since numbers of similar values can be divided up into substantially different word chunks. call factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https: //github.com/facebookresearch/LAMA. 1 Introduction Recently, pretrained high-capacity language models such as ELMo (Peters et al., 2018a) and BERT\n\nOut-of-the-box BERT is surprisingly brittle to named entity replacements: e.g. replacing names in the coreference task changes 85% of predictions (Balasubramanian et al., 2020). 
This suggests that the model does not actually form a generic idea of named entities, although its F1 scores on NER probing tasks are high (Tenney et al., 2019a). Broscheit (2019) find that fine-tuning BERT on Wikipedia entity linking \"teaches\" it additional entity knowledge, which would suggest that it did not absorb all the relevant entity information during pre-training on Wikipedia. (Devlin et al., 2018a) have become increasingly important in NLP. They are optimised to either predict the next word in a sequence or some masked word anywhere in a given sequence (*e.g.* \"Dante was born in [Mask] in the year 1265.\"). The parameters of these models appear to store\n\n#### 3.3 World knowledge\n\nThe bulk of evidence about commonsense knowledge captured in BERT comes from practitioners using it to extract such knowledge. One direct probing study of BERT reports that BERT struggles with pragmatic inference and role-based event knowledge (Ettinger, 2019). BERT also struggles with abstract attributes of objects, as well as visual and perceptual properties that are likely to be assumed rather than mentioned (Da and Kasai, 2019).\n\nThe MLM component of BERT is easy to adapt for knowledge induction by filling in the\n\nMemory Query Answer\n\nSymbolic Memory Access\n\nFlorence\n\n(Dante, born-in, X)\n\n1,2 Patrick Lewis1,2 Anton Bakhtin1\n\nDante\n\nFlorence born-in\n\nLanguage Models as Knowledge Bases?\n\nYuxiang Wu1,2 Alexander H. Miller1 Sebastian Riedel1,2 1Facebook AI Research 2University College London {fabiopetroni, rockt, plewis, yolo, yuxiangwu, ahm, sriedel}@fb.com\n\nKG\n\nFigure 1: Querying knowledge bases (KB) and language models (LM) for factual knowledge. Figure 2: BERT world knowledge (Petroni et al., 2019)\n\nvast amounts of linguistic knowledge (Peters et al., 2018b; Goldberg, 2019; Tenney et al., 2019) useful for downstream tasks. 
This knowledge is usually accessed either by conditioning on latent context representations produced by the original model or by using the original model weights to initialize a task-specific model which is then further fine-tuned. This type of knowledge transfer is crucial for current state-of-the-art results on a wide range of tasks. blanks (e.g. \"Cats like to chase [___]\"). Petroni et al. (2019) showed that, for some relation types, vanilla BERT is competitive with methods relying on knowledge bases (Figure 2), and Roberts et al. (2020) show the same for open-domain QA using T5 model (Raffel et al., 2019). Davison et al. (2019) suggest that it generalizes better to unseen data. In order to retrieve BERT's knowledge, we need good template sentences, and there is work on their automatic extraction and augmentation (Bouraoui et al., 2019; Jiang et al., 2019b).\n\nIn contrast, knowledge bases are effective solutions for accessing annotated gold-standard relational data by enabling queries such as (Dante, born-in, X). However, in practice we often need to *extract* relational data from text or other modalities to populate these knowledge bases. This requires complex NLP pipelines involving entity extraction, coreference resolution, entity linking and relation extraction (Surdeanu and Ji, 2014) components that often need supervised data and fixed schemas. Moreover, errors can easily propagate and accumulate throughout the pipeline. Instead, we could attempt to query neural language models for relational data by asking them to fill in masked tokens in sequences like \"Dante was born However, BERT cannot reason based on its world knowledge. Forbes et al. (2019) show that BERT can \"guess\" the affordances and properties of many objects, but can not reason about the relationship between properties and affordances. For example, it \"knows\" that people can walk into houses, and that houses are big, but it cannot infer that houses are bigger than people. Zhou et al. 
(2020) and Richardson and Sabharwal (2019) also show that the performance drops with the number of necessary inference steps. Some of BERT's world knowledge success comes from learning stereotypical associations (Poerner et al., 2019), e.g., a person with an Italian-sounding name is predicted to be Italian, even when it is incorrect.\n\n#### 3.4 Limitations\n\nMultiple probing studies in section 3 and section 4 report that BERT possesses a surprising amount of syntactic, semantic, and world knowledge. However, Tenney et al. (2019a) remarks, \"the fact that a linguistic pattern is not observed by our probing classifier does not guarantee that it is not there, and the observation of a pattern does not tell us how it is used.\" There is also the issue of how complex a probe should be allowed to be (Liu et al., 2019a). If a more complex probe recovers more information, to what extent are we still relying on the original model?\n\nFurthermore, different probing methods may lead to complementary or even contradictory conclusions, which makes a single test (as in most stud-", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "layers are more transferable (Liu et al., 2019a). In fine-tuning, it explains why the final layers change the most (Kovaleva et al., 2019), and why restoring the weights of lower layers of fine-tuned BERT to their original values does not dramatically hurt the model performance (Hao et al., 2019).\n\nTenney et al. (2019a) suggest that while syntactic information appears early in the model and can be localized, semantics is spread across the entire model, which explains why certain non-trivial examples get solved incorrectly at first but correctly at the later layers. This is rather to be expected: semantics permeates all language, and linguists debate whether meaningless structures can exist at all (Goldberg, 2006, p.166-182). 
But this raises the question of what stacking more Transformer layers in BERT actually achieves in terms of the spread of semantic knowledge, and whether that is beneficial. Tenney et al. compared BERT-base and BERT-large, and found that the overall pattern of cumulative score gains is the same, only more spread out in the larger model.\n\nNote that Tenney et al. (2019a)'s experiments concern sentence-level semantic relations; Cui et al. (2020) report that the encoding of ConceptNet semantic relations is the worst in the early layers and increases towards the top. Jawahar et al. (2019) place \"surface features in lower layers, syntactic features in middle layers and semantic features in higher layers\", but their conclusion is surprising, given that only one semantic task in this study actually topped at the last layer, and three others peaked around the middle and then considerably degraded by the final layers.\n\n### 5 Training BERT\n\nThis section reviews the proposals to optimize the training and architecture of the original BERT.\n\n### 5.1 Model architecture choices\n\nTo date, the most systematic study of BERT architecture was performed by Wang et al. (2019b), who experimented with the number of layers, heads, and model parameters, varying one option and freezing the others. They concluded that the number of heads was not as significant as the number of layers. That is consistent with the findings of Voita et al. (2019b) and Michel et al. (2019) (section 6), and also the observation by Liu et al. (2019a) that the middle layers were the most transferable. Larger hidden representation size was consistently better, but the gains varied by setting.\n\nAll in all, changes in the number of heads and layers appear to perform different functions. 
The issue of model depth must be related to the information flow from the most task-specific layers closer to the classifier (Liu et al., 2019a), to the initial layers which appear to be the most task-invariant (Hao et al., 2019), and where the tokens resemble the input tokens the most (Brunner et al., 2020) (see subsection 4.3). If that is the case, a deeper model has more capacity to encode information that is not task-specific.\n\nOn the other head, many self-attention heads in vanilla BERT seem to naturally learn the same patterns (Kovaleva et al., 2019). This explains why pruning them does not have too much impact. The question that arises from this is how far we could get with intentionally encouraging diverse self-attention patterns: theoretically, this would mean increasing the amount of information in the model with the same number of weights. Raganato et al. (2020) show for Transformer-based machine translation we can simply pre-set the patterns that we already know the model would learn, instead of learning them from scratch.\n\nVanilla BERT is symmetric and balanced in terms of self-attention and feed-forward layers, but it may not have to be. For the base Transformer, Press et al. (2020) report benefits from more selfattention sublayers at the bottom and more feedforward sublayers at the top.\n\n#### 5.2 Improvements to the training regime\n\nLiu et al. (2019b) demonstrate the benefits of large-batch training: with 8k examples both the language model perplexity and downstream task performance are improved. They also publish their recommendations for other parameters. You et al. (2019) report that with a batch size of 32k BERT's training time can be significantly reduced with no degradation in performance. Zhou et al. (2019) observe that the normalization of the trained [CLS] token stabilizes the training and slightly improves performance on text classification tasks.\n\nGong et al. 
(2019) note that, since self-attention patterns in higher and lower layers are similar, the model training can be done in a recursive manner, where the shallower version is trained first and then the trained parameters are copied to deeper layers. Such a \"warm-start\" can lead to a 25% faster training without sacrificing performance.", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "be successfully approximated with adapter modules. They achieve competitive performance on 26 classification tasks at a fraction of the computational cost. Adapters in BERT were also used for multi-task learning (Stickland and Murray, 2019) and cross-lingual transfer (Artetxe et al., 2019). An alternative to fine-tuning is extracting features from frozen representations, but fine-tuning works better for BERT (Peters et al., 2019b).\n\nA big methodological challenge in the current NLP is that the reported performance improvements of new models may well be within variation induced by environment factors (Crane, 2018). BERT is not an exception. Dodge et al. (2020) report significant variation for BERT fine-tuned on GLUE tasks due to both weight initialization and training data order. They also propose early stopping on the less-promising seeds.\n\nAlthough we hope that the above observations may be useful for the practitioners, this section does not exhaust the current research on fine-tuning and its alternatives. For example, we do not cover such topics as Siamese architectures, policy gradient training, automated curriculum learning, and others.\n\n## 6 How big should BERT be?\n\n### 6.1 Overparameterization\n\nTransformer-based models keep growing by orders of magnitude: the 110M parameters of base BERT are now dwarfed by 17B parameters of Turing-NLG (Microsoft, 2020), which is dwarfed by 175B of GPT-3 (Brown et al., 2020). 
This trend raises concerns about computational complexity of self-attention (Wu et al., 2019a), environmental issues (Strubell et al., 2019; Schwartz et al., 2019), fair comparison of architectures (Aßenmacher and Heumann, 2020), and reproducibility.\n\nHuman language is incredibly complex, and would perhaps take many more parameters to describe fully, but the current models do not make good use of the parameters they already have. Voita et al. (2019b) showed that all but a few Transformer heads could be pruned without significant losses in performance. For BERT, Clark et al. (2019) observe that most heads in the same layer show similar self-attention patterns (perhaps related to the fact that the output of all self-attention heads in a layer is passed through the same MLP), which explains why Michel et al. (2019) were able to reduce most layers to a single head.\n\nDepending on the task, some BERT heads/layers are not only redundant (Kao et al., 2020), but also harmful to the downstream task performance. Positive effect from head disabling was reported for machine translation (Michel et al., 2019), abstractive summarization (Baan et al., 2019), and GLUE tasks (Kovaleva et al., 2019). Additionally, Tenney et al. (2019a) examine the cumulative gains of their structural probing classifier, observing that in 5 out of 8 probing tasks some layers cause a drop in scores (typically in the final layers). Gordon et al. (2020) find that 30–40% of the weights can be pruned without impact on downstream tasks.\n\nIn general, larger BERT models perform better (Liu et al., 2019a; Roberts et al., 2020), but not always: BERT-base outperformed BERT-large on subject-verb agreement (Goldberg, 2019) and sentence subject detection (Lin et al., 2019). Given the complexity of language, and amounts of pretraining data, it is not clear why BERT ends up with redundant heads and layers. Clark et al. 
(2019) suggest that one possible reason is the use of attention dropouts, which causes some attention weights to be zeroed-out during training.\n\n#### 6.2 Compression techniques\n\nGiven the above evidence of overparameterization, it does not come as a surprise that BERT can be efficiently compressed with minimal accuracy loss, which would be highly desirable for real-world applications. Such efforts to date are summarized in Table 1. The main approaches are knowledge distillation, quantization, and pruning.\n\nThe studies in the knowledge distillation framework (Hinton et al., 2014) use a smaller student-network trained to mimic the behavior of a larger teacher-network. For BERT, this has been achieved through experiments with loss functions (Sanh et al., 2019b; Jiao et al., 2019), mimicking the activation patterns of individual portions of the teacher network (Sun et al., 2019a), and knowledge transfer at the pre-training (Turc et al., 2019; Jiao et al., 2019; Sun et al., 2020) or fine-tuning stage (Jiao et al., 2019). McCarley et al. (2020) suggest that distillation has so far worked better for GLUE than for reading comprehension, and report good results for QA from a combination of structured pruning and task-specific distillation.\n\nQuantization decreases BERT's memory footprint through lowering the precision of its weights (Shen et al., 2019; Zafrir et al., 2019). Note that", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Figure 3: Attention patterns in BERT (Kovaleva et al., 2019)\n\nies) insufficient (Warstadt et al., 2019). A given method might also favor one model over another, e.g., RoBERTa trails BERT with one tree extraction method, but leads with another (Htut et al., 2019). The choice of linguistic formalism also matters (Kuznetsov and Gurevych, 2020).\n\nIn view of all that, the alternative is to focus on identifying what BERT actually relies on at inference time. 
This direction is currently pursued both at the level of architecture blocks (to be discussed in detail in subsection 6.3), and at the level of information encoded in model weights. Amnesic probing (Elazar et al., 2020) aims to specifically remove certain information from the model and see how it changes performance, finding, for example, that language modeling does rely on part-of-speech information.\n\nAnother direction is information-theoretic probing. Pimentel et al. (2020) operationalize probing as estimating mutual information between the learned representation and a given linguistic property, which highlights that the focus should be not on the amount of information contained in a representation, but rather on how easily it can be extracted from it. Voita and Titov (2020) quantify the amount of effort needed to extract information from a given representation as minimum description length needed to communicate both the probe size and the amount of data required for it to do well on a task.\n\n### 4 Localizing linguistic knowledge\n\n#### 4.1 BERT embeddings\n\nIn studies of BERT, the term \"embedding\" refers to the output of a Transformer layer (typically, the final one). Both conventional static embeddings (Mikolov et al., 2013) and BERT-style embeddings can be viewed in terms of mutual information maximization (Kong et al., 2019), but the latter are contextualized. Every token is represented by a vector dependent on the particular context of occurrence, and contains at least some information about that context (Miaschi and Dell'Orletta, 2020).\n\nSeveral studies reported that distilled contextualized embeddings better encode lexical semantic information (i.e. they are better at traditional word-level tasks such as word similarity). 
The methods to distill a contextualized representation into static include aggregating the information across multiple contexts (Akbik et al., 2019; Bommasani et al., 2020), encoding \"semantically bleached\" sentences that rely almost exclusively on the meaning of a given word (e.g. \"This is <>\") (May et al., 2019), and even using contextualized embeddings to train static embeddings (Wang et al., 2020d).\n\nBut this is not to say that there is no room for improvement. Ethayarajh (2019) measure how similar the embeddings for identical words are in every layer, reporting that later BERT layers produce more context-specific representations3 . They also find that BERT embeddings occupy a narrow cone in the vector space, and this effect increases from the earlier to later layers. That is, two random words will on average have a much higher cosine similarity than expected if embeddings were directionally uniform (isotropic). Since isotropy was shown to be beneficial for static word embeddings (Mu and Viswanath, 2018), this might be a fruitful direction to explore for BERT.\n\nSince BERT embeddings are contextualized, an interesting question is to what extent they capture phenomena like polysemy and homonymy. There is indeed evidence that BERT's contextualized embeddings form distinct clusters corresponding to word senses (Wiedemann et al., 2019; Schmidt and Hofmann, 2020), making BERT successful at word sense disambiguation task. However, Mickus et al. (2019) note that the representations of the same word depend on the position of the sentence in which it occurs, likely due to the NSP objective. This is not desirable from the linguistic point of view, and could be a promising\n\n3Voita et al. 
(2019a) look at the evolution of token embeddings, showing that in the earlier Transformer layers, MLM forces the acquisition of contextual information at the expense of the token identity, which gets recreated in later layers.", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "More recently, Kobayashi et al. (2020) showed that the norms of attention-weighted input vectors, which yield a more intuitive interpretation of self-attention, reduce the attention to special tokens. However, even when the attention weights are normed, it is still not the case that most heads that do the \"heavy lifting\" are even potentially interpretable (Prasanna et al., 2020).\n\nOne methodological choice in in many studies of attention is to focus on inter-word attention and simply exclude special tokens (e.g. Lin et al. (2019) and Htut et al. (2019)). However, if attention to special tokens actually matters at inference time, drawing conclusions purely from inter-word attention patterns does not seem warranted.\n\nThe functions of special tokens are not yet well understood. [CLS] is typically viewed as an aggregated sentence-level representation (although all token representations also contain at least some sentence-level information, as discussed in subsection 4.1); in that case, we may not see e.g. full syntactic trees in inter-word attention because part of that information is actually packed in [CLS].\n\nClark et al. (2019) experiment with encoding Wikipedia paragraphs with base BERT to consider specifically the attention to special tokens, noting that heads in early layers attend more to [CLS], in middle layers to [SEP], and in final layers to periods and commas. They hypothesize that its function might be one of \"no-op\", a signal to ignore the head if its pattern is not applicable to the current case. As a result, for example, [SEP] gets increased attention starting in layer 5, but its importance for prediction drops. 
However, after fine-tuning both [SEP] and [CLS] get a lot of attention, depending on the task (Kovaleva et al., 2019). Interestingly, BERT also pays a lot of attention to punctuation, which Clark et al. (2019) explain by the fact that periods and commas are simply almost as frequent as the special tokens, and so the model might learn to rely on them for the same reasons.\n\n#### 4.3 BERT layers\n\nThe first layer of BERT receives as input a combination of token, segment, and positional embeddings.\n\nIt stands to reason that the lower layers have the most information about linear word order. Lin et al. (2019) report a decrease in the knowledge of linear word order around layer 4 in BERT-base. This is accompanied by an increased knowledge\n\n(a) ELMo (original)\n\nestingly, we see that layers 1 and 2 in the 4-layer ELMo model have very similar performance—this warrants further exploration. On the other hand, the layers of the ELMo (transformer) model do not exhibit such a monotonic increase. While the topmost layer is best (which we expected, since this is the vector originally fed into a softmax classifier during pretraining), the middle layers show varying performance. Across all models, the representations that are better-suited for language modeling are also those that exhibit worse probing task performance (Figure 3), indicating that contextualizer layers trade off between encoding general\n\nThese results also reveal a difference in the layerwise behavior of LSTMs and transformers; moving up the LSTM layers yields more taskspecific representations, but the same does not hold for transformers. 
Better understanding the differences between transformers and LSTMs is an active area of research (Chen et al., 2018; Tang et al., 2018), and we leave further exploration of\n\nThese observations motivate the gradual unfreezing method of Howard and Ruder (2018), where the model layers are progressively unfrozen (starting from the final layer) during the finetuning process. Given our observation that higherlevel LSTM layers are less general (and more pretraining task-specific), they likely have to be finetuned a bit more in order to make them appropriately task specific. Meanwhile, the base layer of the LSTM already learns highly transferable features, and may not benefit from fine-tuning.\n\nand task-specific features.\n\nthese observations to future work.\n\n6 Transferring Between Tasks\n\nguage model pretraining.\n\nSuccessful pretrained contextualizers have used self-supervised tasks such as bidirectional language modeling (Peters et al., 2018a) and next sentence prediction (Devlin et al., 2018), which enable the use of large, unannotated text corpora. However, contextualizers can also be pretrained on explicitly supervised objectives, as done in pretrained *sentence* embedding methods (Conneau et al., 2017). To better understand how the choice of pretraining task affects the linguistic knowledge within and transferability of CWRs, we compare pretraining on a range of different explicitly-supervised tasks with bidirectional lan-\n\n(b) ELMo (4-layer)\n\n(c) ELMo (transformer)\n\n(d) OpenAI transformer\n\nLayer 0 Layer 2\n\nLayer 0 Layer 4\n\nLayer 0 Layer 6\n\nLayer 0 Layer 12\n\nFigure 3: A visualization of layerwise patterns in task performance. Each column represents a probing task, and each row represents a contextualizer layer. Figure 4: BERT layer transferability (columns correspond to probing tasks, Liu et al. (2019a).\n\ntextualizers. 
Furthermore, the ELMo-based models facilitate a controlled comparison—they only of hierarchical sentence structure, as detected by the probing tasks of predicting the token index, the main auxiliary verb and the sentence subject.\n\ndiffer in the contextualizer architecture used. We evaluate how well CWR features perform the pretraining task—bidirectional language modeling. Specifically, we take the pretrained representations for each layer and relearn the language model softmax classifiers used to predict the next and previous token. The ELMo models are trained on the Billion Word Benchmark, so we retrain the softmax classifier on similar data to mitigate any possible effects from domain shift. We split the held-out portion of the Billion Word Benchmark into train (80%, 6.2M tokens) and evaluation (20%, 1.6M tokens) sets and use this data to retrain and evaluate the softmax classifiers. We expect that biLM perplexity will be lower when training the softmax classifiers on representations from layers that capture more information about There is a wide consensus in studies with different tasks, datasets and methodologies that syntactic information is most prominent in the middle layers of BERT.4 Hewitt and Manning (2019) had the most success reconstructing syntactic tree depth from the middle BERT layers (6-9 for base-BERT, 14-19 for BERT-large). Goldberg (2019) reports the best subject-verb agreement around layers 8- 9, and the performance on syntactic probing tasks used by Jawahar et al. (2019) also seems to peak around the middle of the model. The prominence of syntactic information in the middle BERT layers is related to Liu et al. (2019a)'s observation that the middle layers of Transformers are best-performing overall and the most transferable across tasks (see Figure 4).\n\nthe pretraining task. 
5.2 Results and Discussion Figure 4 presents the performance of softmax classifiers trained to perform the bidirectional language modeling task, given just the CWRs as input. We notice that higher layers in recurrent models consistently achieve lower perplexities. Inter-There is conflicting evidence about syntactic chunks. Tenney et al. (2019a) conclude that \"the basic syntactic information appears earlier in the network while high-level semantic features appear at the higher layers\", drawing parallels between this order and the order of components in a typical NLP pipeline – from POS-tagging to dependency parsing to semantic role labeling. Jawahar et al. (2019) also report that the lower layers were more useful for chunking, while middle layers were more useful for parsing. At the same time, the probing experiments by Liu et al. (2019a) find the opposite: both POS-tagging and chunking were performed best at the middle layers, in both BERT-base and BERT-large. However, all three studies use different suites of probing tasks.\n\nThe final layers of BERT are the most taskspecific. In pre-training, this means specificity to the MLM task, which explains why the middle\n\n4These BERT results are also compatible with findings by Vig and Belinkov (2019), who report the highest attention to tokens in dependency relations in the middle layers of GPT-2.", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "### 3 What knowledge does BERT have?\n\nA number of studies have looked at the knowledge encoded in BERT weights. The popular approaches include fill-in-the-gap probes of MLM, analysis of self-attention weights, and probing classifiers with different BERT representations as inputs.\n\n#### 3.1 Syntactic knowledge\n\nLin et al. (2019) showed that BERT representations are hierarchical rather than linear, i.e. there is something akin to syntactic tree structure in addition to the word order information. Tenney et al. 
(2019b) and Liu et al. (2019a) also showed that BERT embeddings encode information about parts of speech, syntactic chunks and roles. Enough syntactic information seems to be captured in the token embeddings themselves to recover syntactic trees (Vilares et al., 2020; Kim et al., 2020; Rosa and Marecek ˇ , 2019), although probing classifiers could not recover the labels of distant parent nodes in the syntactic tree (Liu et al., 2019a). Warstadt and Bowman (2020) report evidence of hierarchical structure in three out of four probing tasks. [CLS]For thosewho follow social media transitions on CapitolHill , thiswill be a little different . [CLS] For those who follow social media transitions on Capitol Hill , this will be a little different . 0 Figure 1: Heatmap of the impact matrix for the sentence \"For those who follow social media transitions on Capitol Hill, this will be a little different.\"\n\nAs far as *how* syntax is represented, it seems that syntactic structure is not directly encoded in self-attention weights. Htut et al. (2019) were unable to extract full parse trees from BERT heads even with the gold annotations for the root. Jawahar et al. (2019) include a brief illustration of a dependency tree extracted directly from self-attention weights, but provide no quantitative evaluation. 3 Visualization with Impact Maps Before we discuss specific syntactic phenomena, let us first analyze some example impact matrices derived from sample sentences. We visualize an impact matrix of a sentence by displaying a heatmap. We use the term \"impact map\" to refer\n\nHowever, syntactic information can be recovered from BERT token representations. Hewitt and Manning (2019) were able to learn transformation matrices that successfully recovered syntactic dependencies in PennTreebank data from BERT's token embeddings (see also Manning et al., 2020). Jawahar et al. 
(2019) experimented with transformations of the [CLS] token using Tensor Product Decomposition Networks (McCoy et al., 2019a), concluding that dependency trees are the best match among 5 decomposition schemes (although the reported MSE differences are very small). Miaschi and Dell'Orletta (2020) performs a range of syntactic probing experiments with concatenated token representations as input. to a heatmap of an impact matrix. Setup. We extract impact matrices by feeding BERT with 1,000 sentences from the English Parallel Universal Dependencies (PUD) treebank of the CoNLL 2017 Shared Task (Zeman et al., 2017). We follow the setup and pre-processing steps employed in pre-training BERT. An example impact map is shown in Figure 1. Dependency. We notice that the impact map contains many *stripes*, which are short series of vertical/horizontal cells, typically located along the diagonal. Take the word \"*different*\" as an example (which is illustrated by the second-to-last column in the impact matrix). We observe a clear vertical stripe above the main diagonal. The interpretation is that this particular occurrence of the word \"*different*\" strongly affects the occurrences\n\nNote that all these approaches look for the evidence of gold-standard linguistic structures, and add some amount of extra knowledge to the probe. Most recently, Wu et al. (2020) proposed a of those words before it. These strong influences are shown by the darker-colored pixels seen in the second last column of the impact map. This observation agrees with the ground-truth dependency tree, which selects \"*different*\" as the head of all\n\nfrom the matrices (see Section 4.1).\n\nremaining words in the phrase \"*this will be a little different*.\" We also observe similar patterns on \"*transitions*\" and \"*Hill*\". 
Such correlations lead us to explore the idea of extracting dependency trees\n\nfollow social media transitions on Capitol Hill\n\nFigure 2: Part of the constituency tree.\n\nConstituency. Figure 2 shows part of the constituency tree of our example sentence generated by Stanford CoreNLP (Manning et al., 2014). In this sentence, \"*media*\" and \"*on*\" are two words that are adjacent to \"*transitions*\". From the tree, however, we see that \"*media*\" is closer to \"*transitions*\" than \"*on*\" is in terms of syntactic distance. If a model is syntactically uninformed, we would expect \"*media*\" and \"*on*\" to have comparable impacts on the prediction of \"*transitions*\", and vice versa. However, we observe a far greater impact (darker color) between \"*media*\" and \"*transitions*\" than that between \"*on*\" and \"*transitions*\". We will further support this observation with empirical ex-\n\nOther Structures. Along the diagonal of the impact map, we see that words are grouped into four contiguous chunks that have specific intents (e.g., a noun phrase – *on Capitol Hill*). We also observe that the two middle chunks have relatively strong inter-chunk word impacts and thus a bonding that groups them together, forming a larger verb phrase. This observation suggest that BERT may capture the compositionality of the language. In the following sections we quantitatively eval-\n\nWe start with two syntactic probes – dependency\n\nWith the goal of exploring the extent dependency relations are captured in BERT, we set out to answer the following question: Can BERT outperform linguistically uninformed baselines in unsupervised dependency parsing? If so, to what ex-\n\nWe begin by using the token-level perturbed masking technique to extract an impact matrix F for each sentence. 
We then utilize graph-based algorithms to induce a dependency tree from F, and compare it against ground-truth whose annotations\n\nperiments in Section 4.2.\n\nuate these observations.\n\n4 Syntactic Probe\n\nprobe and constituency probe.\n\n4.1 Dependency Probe\n\ntent?\n\n3 Visualization with Impact Maps Before we discuss specific syntactic phenomena, Figure 2: Part of the constituency tree. Constituency. Figure 2 shows part of the con-Figure 1: Parameter-free probe for syntactic knowledge: words sharing syntactic subtrees have larger impact on each other in the MLM prediction (Wu et al., 2020)\n\n1\n\n2\n\n3\n\n4\n\n5\n\nlet us first analyze some example impact matrices derived from sample sentences. We visualize an impact matrix of a sentence by displaying a heatmap. We use the term \"impact map\" to refer to a heatmap of an impact matrix. Setup. We extract impact matrices by feeding BERT with 1,000 sentences from the English stituency tree of our example sentence generated by Stanford CoreNLP (Manning et al., 2014). In this sentence, \"*media*\" and \"*on*\" are two words that are adjacent to \"*transitions*\". From the tree, however, we see that \"*media*\" is closer to \"*transitions*\" than \"*on*\" is in terms of syntactic distance. If a model is syntactically uninformed, we would expect \"*media*\" and \"*on*\" to have comparable imparameter-free approach based on measuring the impact that one word has on predicting another word within a sequence in the MLM task (Figure 1). They concluded that BERT \"naturally\" learns some syntactic information, although it is not very similar to linguistic annotated resources.\n\nParallel Universal Dependencies (PUD) treebank of the CoNLL 2017 Shared Task (Zeman et al., 2017). We follow the setup and pre-processing steps employed in pre-training BERT. An example impact map is shown in Figure 1. Dependency. 
We notice that the impact map contains many *stripes*, which are short series of vertical/horizontal cells, typically located along the diagonal. Take the word \"*different*\" as an example (which is illustrated by the second-to-last column in the impact matrix). We observe a clear pacts on the prediction of \"*transitions*\", and vice versa. However, we observe a far greater impact (darker color) between \"*media*\" and \"*transitions*\" than that between \"*on*\" and \"*transitions*\". We will further support this observation with empirical experiments in Section 4.2. Other Structures. Along the diagonal of the impact map, we see that words are grouped into four contiguous chunks that have specific intents (e.g., a noun phrase – *on Capitol Hill*). We also observe that the two middle chunks have relatively strong inter-chunk word impacts and thus a bonding that groups them together, forming a larger The fill-in-the-gap probes of MLM showed that BERT takes subject-predicate agreement into account when performing the cloze task (Goldberg, 2019; van Schijndel et al., 2019), even for meaningless sentences and sentences with distractor clauses between the subject and the verb (Goldberg, 2019). A study of negative polarity items (NPIs) by Warstadt et al. (2019) showed that BERT is better able to detect the presence of NPIs (e.g. \"ever\") and the words that allow their use (e.g. \"whether\") than scope violations.\n\nvertical stripe above the main diagonal. The interpretation is that this particular occurrence of the word \"*different*\" strongly affects the occurrences of those words before it. These strong influences are shown by the darker-colored pixels seen in the second last column of the impact map. This observation agrees with the ground-truth dependency tree, which selects \"*different*\" as the head of all remaining words in the phrase \"*this will be a little different*.\" We also observe similar patterns on \"*transitions*\" and \"*Hill*\". 
Such correlations lead us verb phrase. This observation suggest that BERT may capture the compositionality of the language. In the following sections we quantitatively evaluate these observations. 4 Syntactic Probe We start with two syntactic probes – dependency probe and constituency probe. 4.1 Dependency Probe With the goal of exploring the extent dependency The above claims of syntactic knowledge are belied by the evidence that BERT does not \"understand\" negation and is insensitive to malformed input. In particular, its predictions were not altered2 even with shuffled word order, truncated sentences, removed subjects and objects (Ettinger, 2019). This could mean that either BERT's syntactic knowledge is incomplete, or it does not need to rely on it for solving its tasks. The latter seems more likely, since Glavaš and Vulic´ (2020)\n\nWe begin by using the token-level perturbed masking technique to extract an impact matrix F for each sentence. We then utilize graph-based algorithms to induce a dependency tree from F, and compare it against ground-truth whose annotations\n\n4168\n\nto explore the idea of extracting dependency trees\n\nrelations are captured in BERT, we set out to an-\n\n4168 from the matrices (see Section 4.1). swer the following question: Can BERT outperform linguistically uninformed baselines in unsupervised dependency parsing? If so, to what extent? 2 See also the recent findings on adversarial triggers, which get the model to produce a certain output even though they are not well-formed from the point of view of a human reader (Wallace et al., 2019a).", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "mBERT across 29 tasks. 
Either way, these models do not address the inclusion problems raised by [65], who note that over 90% of the world's languages used by more than a billion people currently have little to no support in terms of language technology.\n\nAlongside work investigating what information the models retain from the data, we see a trend in reducing the size of these models using various techniques such as knowledge distillation [26, 58], quantization [118, 153], factorized embedding parameterization and cross-layer parameter sharing [70], and progressive module replacing [146]. Rogers et al. [110] provide a comprehensive comparison of models derived from BERT using these techniques, such as DistilBERT [113] and ALBERT [70]. While these models maintain and sometimes exceed the performance of the original BERT model, despite their much smaller size, they ultimately still rely on large quantities of data and significant processing and storage capabilities to both hold and reduce the model.\n\nWe note that the change from n-gram LMs to word vectors distilled from neural LMs to pretrained Transformer LMs is paralleled by an expansion and change in the types of tasks they are useful for: n-gram LMs were initially typically deployed in selecting among the outputs of e.g. acoustical or translation models; the LSTM-derived word vectors were quickly picked up as more effective representations of words (in place of bag of words features) in a variety of NLP tasks involving labeling and classification; and the pretrained Transformer models can be retrained on very small datasets (few-shot, one-shot or even zero-shot learning) to perform apparently meaning-manipulating tasks such as summarization, question answering and the like. Nonetheless, all of these systems share the property of being LMs in the sense we give above, that is, systems trained to predict sequences of words (or characters or sentences). 
Where they differ is in the size of the training datasets they leverage and the spheres of influence they can possibly affect. By scaling up in these two ways, modern very large LMs incur new kinds of risk, which we turn to in the following sections.\n\n# 3 ENVIRONMENTAL AND FINANCIAL COST\n\nStrubell et al. recently benchmarked model training and development costs in terms of dollars and estimated 퐶푂2 emissions [129]. While the average human is responsible for an estimated 5t 퐶푂2푒 per year,2 the authors trained a Transformer (big) model [136] with neural architecture search and estimated that the training procedure emitted 284t of 퐶푂2. Training a single BERT base model (without hyperparameter tuning) on GPUs was estimated to require as much energy as a trans-American flight.\n\nWhile some of this energy comes from renewable sources, or cloud compute companies' use of carbon credit-offset sources, the authors note that the majority of cloud compute providers' energy is not sourced from renewable sources and many energy sources in the world are not carbon neutral. In addition, renewable energy sources are still costly to the environment,3 and data centers with increasing computation requirements take away from other potential uses of\n\ngreen energy,4 underscoring the need for energy efficient model architectures and training paradigms.\n\nStrubell et al. also examine the cost of these models vs. their accuracy gains. For the task of machine translation where large LMs have resulted in performance gains, they estimate that an increase in 0.1 BLEU score using neural architecture search for English to German translation results in an increase of $150,000 compute cost in addition to the carbon emissions. To encourage more equitable access to NLP research and reduce carbon footprint, the authors give recommendations to report training time and sensitivity to hyperparameters when the released model is meant to be re-trained for downstream use. 
They also urge governments to invest in compute clouds to provide equitable access to researchers.\n\nInitiatives such as the SustainNLP workshop5 have since taken up the goal of prioritizing computationally efficient hardware and algorithms. Schwartz et al. [115] also call for the development of green AI, similar to other environmentally friendly scientific developments such as green chemistry or sustainable computing. As shown in [5], the amount of compute used to train the largest deep learning models (for NLP and other applications) has increased 300,000x in 6 years, increasing at a far higher pace than Moore's Law. To promote green AI, Schwartz et al. argue for promoting efficiency as an evaluation metric and show that most sampled papers from ACL 2018, NeurIPS 2018, and CVPR 2019 claim accuracy improvements alone as primary contributions to the field, and none focused on measures of efficiency as primary contributions. Since then, works such as [57, 75] have released online tools to help researchers benchmark their energy usage. Among their recommendations are to run experiments in carbon friendly regions, consistently report energy and carbon metrics, and consider energyperformance trade-offs before deploying energy hungry models. In addition to these calls for documentation and technical fixes, Bietti and Vatanparast underscore the need for social and political engagement in shaping a future where data driven systems have minimal negative impact on the environment [16].\n\nWhile [129] benchmarks the training process in a research setting, many LMs are deployed in industrial or other settings where the cost of inference might greatly outweigh that of training in the long run. In this scenario, it may be more appropriate to deploy models with lower energy costs during inference even if their training costs are high. 
In addition to benchmarking tools, works estimating the cost increase associated with the introduction of LMs for particular applications, and how they compare to alternative NLP methods, will be important for understanding the trade-offs.\n\nWhen we perform risk/benefit analyses of language technology, we must keep in mind how the risks and benefits are distributed, because they do not accrue to the same people. On the one hand, it is well documented in the literature on environmental racism that the negative effects of climate change are reaching and impacting the world's most marginalized communities first [1, 27].6 Is it fair or just to ask, for example, that the residents of the Maldives (likely to be underwater by 2100 [6]) or the 800,000 people in Sudan affected\n\n2Data for 2017, from https://ourworldindata.org/co2-emissions, accessed Jan 21, 2021 3https://www.heraldscotland.com/news/18270734.14m-trees-cut-scotland-make-waywind-farms/\n\n4https://news.microsoft.com/2017/11/02/microsoft-announces-one-of-the-largestwind-deals-in-the-netherlands-with-vattenfall/\n\n5https://sites.google.com/view/sustainlp2020/organization\n\n6https://www.un.org/sustainabledevelopment/blog/2016/10/report-inequalitiesexacerbate-climate-impacts-on-poor/", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv5_ccby4license.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_FFIN_2002.pdf", - "query": "How many affiliate banks has First Financial Bankshares ?", - "target_page": 4, - "target_passage": "The corporation has 10 affiliate banks, which provide services from 28 full-service locations in the Central, West and High Plains regions of Texas. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "First Financial Bankshares, Inc. is a financial holding company\n\nheadquartered in Abilene, Texas, with consolidated assets of $2.0 billion as of December 31, 2002. 
The corporation has 10 affiliate banks, which provide services from 28 full-service locations in the Central, West and High Plains regions of Texas. The common stock of First Financial Bankshares, Inc. is held by more than 3,500 shareholders and is listed on The NASDAQ Stock Market® under the symbol FFIN.\n\n\"Our 10 affiliate banks provide services from 28 full-service locations in the Central, West and High Plains regions of Texas.\"", - "page_start": 3, - "page_end": 3, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "#### 1. SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES:\n\n#### Nature of Operations\n\nFirst Financial Bankshares, Inc. (a Texas corporation) (\"Bankshares\") is a financial holding company which owns (through its wholly-owned Delaware subsidiary) all of the capital stock of ten banks located in Texas as of December 31, 2002. Those subsidiary banks are First National Bank of Abilene; Hereford State Bank; First National Bank, Sweetwater; Eastland National Bank; First Financial Bank, National Association, Cleburne; Stephenville Bank & Trust Co.; San Angelo National Bank; Weatherford National Bank; First Financial Bank, National Association, Southlake and City National Bank, Mineral Wells. Each subsidiary bank's primary source of revenue is providing loans and banking services to consumers and commercial customers in the market area in which the subsidiary is located.\n\nA summary of significant accounting policies of Bankshares and subsidiaries (collectively, the \"Company\") applied in the preparation of the accompanying consolidated financial statements follows. 
The accounting principles followed by the Company and the methods of applying them are in conformity with both accounting principles generally accepted in the United States of America and prevailing practices of the banking industry.\n\n#### Use of Estimates in Preparation of Financial Statements\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the United States of America requires management to make estimates and assumptions that affect the reported amounts of assets and liabilities and disclosure of contingent assets and liabilities at the date of the financial statements and reported amounts of revenues and expenses during the reporting period. Actual results could differ from those estimates. Material estimates that are particularly susceptible to significant change in the near term relate to the determination of the allowance for loan losses, the valuations of foreclosed real estate, deferred income tax assets, and the fair value of financial instruments.\n\n#### Consolidation\n\nThe accompanying consolidated financial statements include the accounts of Bankshares and its subsidiaries, all of which are wholly-owned. All significant intercompany accounts and transactions have been eliminated.\n\n#### Investment Securities\n\nManagement classifies debt and equity securities as held-to-maturity, available-for-sale, or trading based on its intent. Debt securities that management has the positive intent and ability to hold to maturity are classified as heldto-maturity and recorded at cost, adjusted for amortization of premiums and accretion of discounts, which are recognized as adjustments to interest income using the interest method. 
Securities not classified as held-to-maturity or trading are classified as available-for-sale and recorded at estimated fair value, with unrealized gains and losses, net of deferred income taxes, excluded from earnings and reported in a separate component of shareholders' equity. Securities classified as trading are recorded at estimated fair value, with unrealized gains and losses included in earnings. The Company had no trading securities at December 31, 2002, 2001, or 2000.\n\n#### Loans and Allowance for Loan Losses\n\nLoans are stated at the amount of unpaid principal, reduced by unearned income and an allowance for loan losses. Unearned income on installment loans is recognized in income over the terms of the loans in decreasing amounts using a method which approximates the interest method. Interest on other loans is calculated by using the simple interest method on daily balances of the principal amounts outstanding. The Company expenses its net loan origination costs, a method which does not materially differ from deferring and amortizing such amounts as an adjustment to yield. The allowance for loan losses is established through a provision for loan losses charged to", - "page_start": 72, - "page_end": 72, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "range of services to individuals, associations, and corporations. These services include administering estates, testamentary trusts, various types of living trusts, and agency accounts. In addition, First National Bank of Abilene, First Financial Bank, Cleburne, San Angelo National Bank and First Financial Bank, National Association, Southlake, Texas provide securities brokerage services through arrangements with various third parties.\n\nWe have filed an application with the office of the Comptroller of the Currency to form a limited purpose national bank under which we will consolidate the management of our current trust departments. 
The new entity will operate as a subsidiary of our subsidiary holding company, First Financial Bankshares of Delaware, Inc. We believe that with this structure we can more effectively manage our current trust operations and provide trust services to customers of our banks that do not currently have trust departments. We anticipate that the new trust company will begin operations in the latter part of 2003.\n\n#### **Competition**\n\nCommercial banking in Texas is highly competitive, and because we hold less than 1% of the state's deposits, we represent only a minor segment of the industry. To succeed in this industry, our management believes that our banks must have the capability to compete in the areas of (1) interest rates paid or charged; (2) scope of services offered; and (3) prices charged for such services. Our subsidiary banks compete in their respective service areas against highly competitive banks, thrifts, savings and loan associations, small loan companies, credit unions, mortgage companies, and brokerage firms, all of which are engaged in providing financial products and services and some of which are larger than our subsidiary banks in terms of capital, resources and personnel.\n\nOur business does not depend on any single customer or any few customers, the loss of any one of which would have a materially adverse effect upon our business. Although we have a broad base of customers that are not related to us, our customers also occasionally include our officers and directors, as well as other entities with which we are affiliated. With our subsidiary banks we may make loans to officers and directors, and entities with which we are affiliated, in the ordinary course of business. We make these loans on substantially the same terms, including interest rates and collateral, as those prevailing at the time for comparable transactions with other persons. 
Loans to directors, officers and their affiliates are also subject to numerous restrictions under federal and state banking laws which we describe in greater detail below.\n\n#### **Employees**\n\nWith our subsidiary banks we employed approximately 750 full-time equivalent employees at February 1, 2003. Our management believes that our employee relations have been and will continue to be good.\n\n#### **Supervision and Regulation**\n\nBoth federal and state laws extensively regulate bank holding companies, financial holding companies and banks. These laws (and the regulations promulgated thereunder) are primarily intended to protect depositors and the deposit insurance fund of the Federal Deposit Insurance Corporation, or FDIC, although shareholders may also benefit. The following information describes particular laws and regulatory provisions relating to financial holding companies and banks. This discussion is qualified in its entirety by reference to the particular laws and regulatory provisions. A change in any of these laws or regulations may have a material effect on our business and the business of our subsidiary banks.\n\n#### *Bank Holding Companies and Financial Holding Companies*\n\nTraditionally, the activities of bank holding companies were limited to the business of banking and activities closely related or incidental to banking. Bank holding companies were generally prohibited from acquiring control of any company which was not a bank and from engaging in any business other than the business of banking or managing and controlling banks. The Gramm-Leach-Bliley Act, which took effect on March 12, 2000, dismantled many Depression-era restrictions against affiliation between banking, securities and insurance firms by permitting bank holding companies to engage in a broader range of financial activities, so long as certain safeguards are observed. 
Specifically, bank holding companies may elect to become \"financial holding companies\" that may affiliate with securities firms and insurance companies and engage in other activities that are financial in nature or", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "the parent's general unsecured creditors. If a depository institution fails to submit an acceptable capital restoration plan, it shall be treated as if it is significantly undercapitalized. \"Significantly undercapitalized\" depository institutions may be subject to a number of requirements and restrictions, including orders to sell sufficient voting stock to become \"adequately capitalized,\" requirements to reduce total assets, and cessation of receipt of deposits from correspondent banks. \"Critically undercapitalized\" institutions are subject to the appointment of a receiver or conservator. Finally, FDICIA requires the various regulatory agencies to set forth certain standards that do not relate to capital. Such standards relate to the safety and soundness of operations and management and to asset quality and executive compensation, and permit regulatory action against a financial institution that does not meet such standards.\n\nIf an insured bank fails to meet its capital guidelines, it may be subject to a variety of other enforcement remedies, including a prohibition on the taking of brokered deposits and the termination of deposit insurance by the FDIC. Bank regulators continue to indicate their desire to raise capital requirements beyond their current levels.\n\nIn addition to FDICIA capital standards, Texas-chartered banks must also comply with the capital requirements imposed by the Texas Banking Department. Neither the Texas Finance Code nor its regulations specify any minimum capital-to-assets ratio that must be maintained by a Texas-chartered bank. 
Instead, the Texas Banking Department determines the appropriate ratio on a bank by bank basis, considering factors such as the nature of a bank's business, its total revenue, and the bank's total assets. As of December 31, 2002, all of our Texas-chartered banks exceeded the minimum ratios applied to them.\n\n#### *Our Support of Our Subsidiary Banks*\n\nUnder Federal Reserve Board policy, we are expected to commit resources to act as a source of strength to support each of our subsidiary banks. This support may be required at times when, absent such Federal Reserve Board policy, we would not otherwise be required to provide it. In addition, any loans we make to our subsidiary banks would be subordinate in right of payment to deposits and to other indebtedness of our banks. In the event of a bank holding company's bankruptcy, any commitment by the bank holding company to a federal bank regulatory agency to maintain the capital of a subsidiary bank will be assumed by the bankruptcy trustee and be subject to a priority of payment.\n\nUnder the National Bank Act, if the capital stock of a national bank is impaired by losses or otherwise, the OCC is authorized to require the bank's shareholders to pay the deficiency on a pro-rata basis. If any shareholder refuses to pay the pro-rata assessment after three months notice, then the bank's board of directors must sell an appropriate amount of the shareholder's stock at a public auction to make up the deficiency. To the extent necessary, if a deficiency in capital still exists and the bank refuses to go into liquidation, then a receiver may be appointed to wind up the bank's affairs. 
Additionally, under the Federal Deposit Insurance Act, in the event of a loss suffered or anticipated by the FDIC (either as a result of the default of a banking subsidiary or related to FDIC assistance provided to a subsidiary in danger of default) our other banking subsidiaries may be assessed for the FDIC's loss.\n\n#### *Interstate Banking and Branching Act*\n\nPursuant to the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994, or Riegle-Neal Act, a bank holding company or financial holding company is able to acquire banks in states other than its home state. The Riegle-Neal Act also authorized banks to merge across state lines, thereby creating interstate branches, beginning June 1, 1997. Furthermore, under this act, a bank is now able to open new branches in a state in which it does not already have banking operations, if the laws of such state permit it to do so. Accordingly, both the OCC and the Texas Banking Department accept applications for interstate merger and branching transactions, subject to certain limitations on ages of the banks to be acquired and the total amount of deposits within the state a bank or financial holding company may control. Since our primary service area is Texas, we do not expect that the ability to operate in other states will have any material impact on our growth strategy. We may, however, face increased competition from out-of-state banks that branch or make acquisitions in our primary markets in Texas.", - "page_start": 35, - "page_end": 35, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "Our subsidiary banks paid aggregate dividends of approximately $26.6 million in 2002 and approximately $25.5 million in 2001. 
Under the dividend restrictions discussed above, as of December 31, 2002, our subsidiary banks, without obtaining governmental approvals, could have declared in the aggregate additional dividends of approximately $20.7 million from retained net profits.\n\nTo pay dividends, we and our subsidiary banks must maintain adequate capital above regulatory guidelines. In addition, if the applicable regulatory authority believes that a bank under its jurisdiction is engaged in or is about to engage in an unsafe or unsound practice (which, depending on the financial condition of the bank, could include the payment of dividends), the authority may require, after notice and hearing, that such bank cease and desist from the unsafe practice. The Federal Reserve Board and the OCC have each indicated that paying dividends that deplete a bank's capital base to an inadequate level would be an unsafe and unsound banking practice. The Federal Reserve Board, the OCC and the FDIC have issued policy statements that recommend that bank holding companies and insured banks should generally only pay dividends to the extent that net income is sufficient to cover both cash dividends and rate of earnings retention consistent with capital needs, asset quality and overall financial condition. No undercapitalized institution may pay a dividend.\n\n#### *Affiliate Transactions*\n\nThe Federal Reserve Act, the FDIA and the rules adopted under these statutes restrict the extent to which we can borrow or otherwise obtain credit from, or engage in certain other transactions with, our depository subsidiaries. These laws regulate \"covered transactions\" between insured depository institutions and their subsidiaries, on the one hand, and their nondepository affiliates, on the other hand. 
\"Covered transactions\" include a loan or extension of credit to a nondepository affiliate, a purchase of securities issued by such an affiliate, a purchase of assets from such an affiliate (unless otherwise exempted by the Federal Reserve Board), an acceptance of securities issued by such an affiliate as collateral for a loan, and an issuance of a guarantee, acceptance, or letter of credit for the benefit of such an affiliate. The \"covered transactions\" that an insured depository institution and its subsidiaries are permitted to engage in with their nondepository affiliates are limited to the following amounts: (1) in the case of any one such affiliate, the aggregate amount of \"covered transactions\" cannot exceed ten percent of the capital stock and the surplus of the insured depository institution; and (2) in the case of all affiliates, the aggregate amount of \"covered transactions\" cannot exceed twenty percent of the capital stock and surplus of the insured depository institution. In addition, extensions of credit that constitute \"covered transactions\" must be collateralized in prescribed amounts. Further, a bank holding company and its subsidiaries are prohibited from engaging in certain tie-in arrangements in connection with any extension of credit, lease or sale of property or furnishing of services. Finally, when we and our subsidiary banks conduct transactions internally among us, we are required to do so at arm's length.\n\n#### *Loans to Directors, Executive Officers and Principal Shareholders*\n\nThe authority of our subsidiary banks to extend credit to our directors, executive officers and principal shareholders, including their immediate family members and corporations and other entities that they control, is subject to substantial restrictions and requirements under Sections 22(g) and 22(h) of the Federal Reserve Act and Regulation O promulgated thereunder. 
These statutes and regulations impose specific limits on the amount of loans our subsidiary banks may make to directors and other insiders, and specified approval procedures must be followed in making loans that exceed certain amounts. In addition, all loans our subsidiary banks make to directors and other insiders must satisfy the following requirements:\n\n- The loans must be made on substantially the same terms, including interest rates and collateral, as prevailing at the time for comparable transactions with persons not affiliated with us or the subsidiary banks;\n- The subsidiary banks must follow credit underwriting procedures at least as stringent as those applicable to comparable transactions with persons who are not affiliated with us or the subsidiary banks; and\n- The loans must not involve a greater than normal risk of repayment or other unfavorable features.", - "page_start": 33, - "page_end": 33, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "incidental to a financial activity. Thus, with the enactment of the Gramm-Leach-Bliley Act, banks, securities firms and insurance companies find it easier to acquire or affiliate with each other and cross-sell financial products. The act permits a single financial services organization to offer a more complete array of financial products and services than historically was permitted.\n\nA financial holding company is essentially a bank holding company with significantly expanded powers. Under the Gramm-Leach-Bliley Act, among the activities that will be deemed \"financial in nature\" for financial holding companies are, in addition to traditional lending activities, securities underwriting, dealing in or making a market in securities, sponsoring mutual funds and investment companies, insurance underwriting and agency activities, activities which the Federal Reserve Board determines to be closely related to banking, and certain merchant banking activities. 
The Federal Reserve Board has proposed permitting a number of additional financial activities, but we cannot predict whether any of these additional proposals will be adopted or the form any final rule will take.\n\nWe elected to become a financial holding company in September 2001. As a financial holding company, we have very broad discretion to affiliate with securities firms and insurance companies, make merchant banking investments, and engage in other activities that the Federal Reserve Board has deemed financial in nature. In order to continue as a financial holding company, we must continue to be well-capitalized, well-managed and maintain compliance with the Community Reinvestment Act. Depending on the types of financial activities that we may engage in in the future, under Gramm-Leach-Bliley's fractional regulation principles, we may become subject to supervision by additional government agencies. The election to be treated as a financial holding company increases our ability to offer financial products and services that historically we were either unable to provide or were only able to provide on a limited basis. As a result, we will face increased competition in the markets for any new financial products and services that we may offer. Likewise, an increased amount of consolidation among banks and securities firms or banks and insurance firms could result in a growing number of large financial institutions that could compete aggressively with us.\n\n#### *Mergers and Acquisitions*\n\nWe generally must obtain approval from the banking regulators before we can acquire other financial institutions. We must not engage in certain acquisitions if we are undercapitalized. Furthermore, the BHCA provides that the Federal Reserve Board cannot approve any acquisition, merger or consolidation that may substantially lessen competition in the banking industry, create a monopoly in any section of the country, or be a restraint of trade. 
However, the Federal Reserve Board may approve such a transaction if the convenience and needs of the community clearly outweigh any anti-competitive effects. Specifically, the Federal Reserve Board would consider, among other factors, the expected benefits to the public (greater convenience, increased competition, greater efficiency, etc.) against the risks of possible adverse effects (undue concentration of resources, decreased or unfair competition, conflicts of interest, unsound banking practices, etc.).\n\n#### *Banks*\n\nFederal and state laws and regulations that govern banks have the effect of, among other things, regulating the scope of business, investments, cash reserves, the purpose and nature of loans, the maximum interest rate chargeable on loans, the amount of dividends declared, and required capitalization ratios.\n\n*National Banking Associations*. Banks that are organized as national banking associations under the National Bank Act are subject to regulation and examination by the Office of the Comptroller of the Currency, or OCC. The OCC supervises, regulates and regularly examines the First National Bank of Abilene, First National Bank, Sweetwater, First Financial Bank, National Association, Cleburne, Eastland National Bank, San Angelo National Bank, Weatherford National Bank, First Financial Bank, National Association, Southlake and City National Bank, Mineral Wells. The OCC's supervision and regulation of banks is primarily intended to protect the interests of depositors. The National Bank Act:\n\n- requires each national banking association to maintain reserves against deposits,\n- restricts the nature and amount of loans that may be made and the interest that may be charged, and\n- restricts investments and other activities.", - "page_start": 31, - "page_end": 31, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "*State Banks*. 
Banks that are organized as state banks under Texas law are subject to regulation and examination by the Banking Commissioner of the State of Texas. The Commissioner regulates and supervises, and the Texas Banking Department regularly examines, Hereford State Bank and Stephenville Bank and Trust Co. The Commissioner's supervision and regulation of banks is primarily designed to protect the interests of depositors. Texas law\n\n- requires each state bank to maintain reserves against deposits,\n- restricts the nature and amount of loans that may be made and the interest that may be charged, and\n- restricts investments and other activities.\n\nBecause our Texas-chartered banks are members of the FDIC, they are also subject to regulation at the federal level by the FDIC, and are subject to most of the federal laws described below.\n\n#### *Deposit Insurance*\n\nEach of our subsidiary banks is a member of the FDIC. The FDIC provides deposit insurance protection that covers all deposit accounts in FDIC-insured depository institutions and generally does not exceed $100,000 per depositor. Our subsidiary banks must pay assessments to the FDIC under a risk-based assessment system for federal deposit insurance protection. FDIC-insured depository institutions that are members of the Bank Insurance Fund pay insurance premiums at rates based on their risk classification. Institutions assigned to higher risk classifications (i.e., institutions that pose a greater risk of loss to their respective deposit insurance funds) pay assessments at higher rates than institutions that pose a lower risk. An institution's risk classification is assigned based on its capital levels and the level of supervisory concern the institution poses to bank regulators. In addition, the FDIC can impose special assessments to cover the costs of borrowings from the U.S. Treasury, the Federal Financing Bank and the Bank Insurance Fund member banks. 
As of December 31, 2002, the assessment rate for each of our subsidiary banks is at the lowest level risk-based premium available.\n\nUnder the Financial Institutions Reform, Recovery, and Enforcement Act of 1989, or FIRREA, an FDICinsured depository institution can be held liable for any losses incurred by the FDIC in connection with (1) the \"default\" of one of its FDIC-insured subsidiaries or (2) any assistance provided by the FDIC to one of its FDICinsured subsidiaries \"in danger of default.\" \"Default\" is defined generally as the appointment of a conservator or receiver, and \"in danger of default\" is defined generally as the existence of certain conditions indicating that a default is likely to occur in the absence of regulatory assistance.\n\nThe Federal Deposit Insurance Act, or FDIA requires that the FDIC review (1) any merger or consolidation by or with an insured bank, or (2) any establishment of branches by an insured bank. The FDIC is also empowered to regulate interest rates paid by insured banks. Approval of the FDIC is also required before an insured bank retires any part of its common or preferred stock, or any capital notes or debentures. Insured banks that are also members of the Federal Reserve System, however, are regulated with respect to the foregoing matters by the Federal Reserve System.\n\n#### *Payment of Dividends*\n\nWe are a legal entity separate and distinct from our banking and other subsidiaries. We receive most of our revenue from dividends paid to us by our Delaware holding company subsidiary. Similarly, the Delaware holding company subsidiary receives dividends from our bank subsidiaries. 
Described below are some of the laws and regulations that apply when either we or our subsidiary banks pay dividends.\n\nEach state bank that is a member of the Federal Reserve System and each national banking association is required by federal law to obtain the prior approval of the Federal Reserve Board and the OCC, respectively, to declare and pay dividends if the total of all dividends declared in any calendar year would exceed the total of (1) such bank's net profits (as defined and interpreted by regulation) for that year plus (2) its retained net profits (as defined and interpreted by regulation) for the preceding two calendar years, less any required transfers to surplus. In addition, these banks may only pay dividends to the extent that retained net profits (including the portion transferred to surplus) exceed bad debts (as defined by regulation).", - "page_start": 32, - "page_end": 32, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "amounted to 49.1%, 48.9% and 45.2% of net earnings, respectively, in 2002, 2001 and 2000. Given our current strong capital position and projected earnings and asset growth rates, we do not anticipate any change in our current dividend policy.\n\nEach state bank that is a member of the Federal Reserve System and each national banking association is required by federal law to obtain the prior approval of the Federal Reserve Board and the OCC, respectively, to declare and pay dividends if the total of all dividends declared in any calendar year would exceed the total of (1) such bank's net profits (as defined and interpreted by regulation) for that year plus (2) its retained net profits (as defined and interpreted by regulation) for the preceding two calendar years, less any required transfers to surplus. 
In addition, these banks may only pay dividends to the extent that retained net profits (including the portion transferred to surplus) exceed bad debts (as defined by regulation).\n\nTo pay dividends, we and our subsidiary banks must maintain adequate capital above regulatory guidelines. In addition, if the applicable regulatory authority believes that a bank under its jurisdiction is engaged in or is about to engage in an unsafe or unsound practice (which, depending on the financial condition of the bank, could include the payment of dividends), the authority may require, after notice and hearing, that such bank cease and desist from the unsafe practice. The Federal Reserve Board and the OCC have each indicated that paying dividends that deplete a bank's capital base to an inadequate level would be an unsafe and unsound banking practice. The Federal Reserve Board, the OCC and the FDIC have issued policy statements that recommend that bank holding companies and insured banks should generally only pay dividends out of current operating earnings.\n\n#### **ITEM 7A. QUANTITATIVE AND QUALITATIVE DISCLOSURES ABOUT MARKET RISK**\n\nOur management considers interest rate risk to be a significant market risk for us. 
See \"Item 7—Management's Discussion and Analysis of Financial Condition and Results of Operations—Balance Sheet Review—Interest Rate Risk\" for disclosure regarding this market risk.", - "page_start": 55, - "page_end": 55, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "- Eastland National Bank, Eastland, Texas;\n- First Financial Bank, National Association, Cleburne, Texas;\n- Stephenville Bank and Trust Co., Stephenville, Texas;\n- San Angelo National Bank, San Angelo, Texas;\n- Weatherford National Bank, Weatherford, Texas;\n- First Financial Bank, National Association, Southlake, Texas; and\n- City National Bank, Mineral Wells, Texas.\n\nAs described in more detail below, we elected to be treated as a financial holding company in September 2001.\n\nOur service centers are located primarily in North Central and West Texas. Considering the branches and locations of all our subsidiary banks, as of December 31, 2002, we had 28 financial centers across Texas, with seven locations in Abilene, two locations in Cleburne, two locations in Stephenville, two locations in San Angelo, three locations in Weatherford, and one location each in Mineral Wells, Hereford, Sweetwater, Eastland, Southlake, Aledo, Alvarado, Burleson, Keller, Trophy Club, Roby, and Trent.\n\nInformation on our revenues, profits and losses and total assets appears in the discussion of our Results of Operations contained in Item 7 hereof.\n\n#### **First Financial Bankshares, Inc.**\n\nWe provide management and technical resources and policy direction to our subsidiary banks, which enables them to improve or expand their banking services while continuing their local activity and identity. Each of our subsidiary banks operates under the day-to-day management of its own board of directors and officers, with substantial authority in making decisions concerning their own investments, loan policies, interest rates, and service charges. 
We provide resources and policy direction in, among other things, the following areas:\n\n- asset and liability management;\n- accounting, budgeting, planning and insurance;\n- capitalization; and\n- regulatory compliance.\n\nIn particular, we assist our subsidiary banks with, among other things, decisions concerning major capital expenditures, employee fringe benefits, including pension plans and group insurance, dividend policies, and appointment of officers and directors and their compensation. We also perform, through corporate staff groups or by outsourcing to third parties, internal audits and loan reviews of our subsidiary banks. Through First National Bank of Abilene, we provide advice and specialized services for our banks related to lending, investing, purchasing, advertising, public relations, and computer services.\n\nWhile we have no specific acquisition agreements in place or commitments to expand our branch network, we periodically evaluate various potential financial institution acquisition opportunities and also periodically evaluate potential locations for new branch offices. We anticipate that funding for any acquisitions or expansions would be provided from our existing cash balances, available dividends from subsidiary banks, utilization of available lines of credit and future debt or equity offerings.\n\n#### **Services Offered by Our Subsidiary Banks**\n\nEach of our subsidiary banks is a separate legal entity that operates under the day-to-day management of its own board of directors and officers. Each of our subsidiary banks provides general commercial banking services, which include accepting and holding checking, savings and time deposits, making loans, automated teller machines, drivein and night deposit services, safe deposit facilities, transmitting funds, and performing other customary commercial banking services. Certain of our subsidiary banks also administer pension plans, profit sharing plans and other employee benefit plans. 
First National Bank of Abilene, First National Bank, Sweetwater, Stephenville Bank and Trust Co. and San Angelo National Bank have active trust departments. The trust departments offer a complete", - "page_start": 29, - "page_end": 29, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "#### **S.L. Garrison knows how to grow things.**\n\nIn fact, two of his businesses *specialize* in growth. As founder/partner of Bar-G Feedyard and the Garrison and Townsend Inc. hybrid seed company, Garrison has a keen perspective on what it takes to build successful companies. His other interests include Backyard Adventures, a fast-rising maker of high-quality playground equipment.\n\n\"I've been a customer of Hereford State Bank since 1966, when we were first starting out,\" says Garrison. \"As our company grew, the bank was always willing to grow with us to meet our loan needs. Of course, we tried to be good customers and pay them back!\n\n\"I've owned First Financial Bankshares stock since the early '80s ... they are a strong company. They've paid good dividends, the value has grown, and their strategy of acquiring solid banks has been good for shareholders.\n\n\"When they acquire a bank, they keep a local board of directors for that bank – that's important for strong support of the community. They are leaders who help grow the communities they serve.\n\n\"As with all businesses, it's people that make the difference, and First Financial Bankshares' emphasis on making sure they have quality, informed people from top to bottom is obvious. They understand business, and they are active and involved in community affairs. No matter what the need, they always step in to help.\"\n\nS.L. Garrison Founder/Partner Bar-G Feedyard Garrison and Townsend Inc. 
Hereford, Texas\n\n7\n\n## \"As with all businesses, it's people that make the difference.\".", - "page_start": 8, - "page_end": 8, - "source_file": "NASDAQ_FFIN_2002.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_FFIN_2002.pdf", - "query": "What was the net income of First Financial Bankshares in 1995 ?", - "target_page": 14, - "target_passage": " 16,355", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### 1. SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES:\n\n#### Nature of Operations\n\nFirst Financial Bankshares, Inc. (a Texas corporation) (\"Bankshares\") is a financial holding company which owns (through its wholly-owned Delaware subsidiary) all of the capital stock of ten banks located in Texas as of December 31, 2002. Those subsidiary banks are First National Bank of Abilene; Hereford State Bank; First National Bank, Sweetwater; Eastland National Bank; First Financial Bank, National Association, Cleburne; Stephenville Bank & Trust Co.; San Angelo National Bank; Weatherford National Bank; First Financial Bank, National Association, Southlake and City National Bank, Mineral Wells. Each subsidiary bank's primary source of revenue is providing loans and banking services to consumers and commercial customers in the market area in which the subsidiary is located.\n\nA summary of significant accounting policies of Bankshares and subsidiaries (collectively, the \"Company\") applied in the preparation of the accompanying consolidated financial statements follows. 
The accounting principles followed by the Company and the methods of applying them are in conformity with both accounting principles generally accepted in the United States of America and prevailing practices of the banking industry.\n\n#### Use of Estimates in Preparation of Financial Statements\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the United States of America requires management to make estimates and assumptions that affect the reported amounts of assets and liabilities and disclosure of contingent assets and liabilities at the date of the financial statements and reported amounts of revenues and expenses during the reporting period. Actual results could differ from those estimates. Material estimates that are particularly susceptible to significant change in the near term relate to the determination of the allowance for loan losses, the valuations of foreclosed real estate, deferred income tax assets, and the fair value of financial instruments.\n\n#### Consolidation\n\nThe accompanying consolidated financial statements include the accounts of Bankshares and its subsidiaries, all of which are wholly-owned. All significant intercompany accounts and transactions have been eliminated.\n\n#### Investment Securities\n\nManagement classifies debt and equity securities as held-to-maturity, available-for-sale, or trading based on its intent. Debt securities that management has the positive intent and ability to hold to maturity are classified as heldto-maturity and recorded at cost, adjusted for amortization of premiums and accretion of discounts, which are recognized as adjustments to interest income using the interest method. 
Securities not classified as held-to-maturity or trading are classified as available-for-sale and recorded at estimated fair value, with unrealized gains and losses, net of deferred income taxes, excluded from earnings and reported in a separate component of shareholders' equity. Securities classified as trading are recorded at estimated fair value, with unrealized gains and losses included in earnings. The Company had no trading securities at December 31, 2002, 2001, or 2000.\n\n#### Loans and Allowance for Loan Losses\n\nLoans are stated at the amount of unpaid principal, reduced by unearned income and an allowance for loan losses. Unearned income on installment loans is recognized in income over the terms of the loans in decreasing amounts using a method which approximates the interest method. Interest on other loans is calculated by using the simple interest method on daily balances of the principal amounts outstanding. The Company expenses its net loan origination costs, a method which does not materially differ from deferring and amortizing such amounts as an adjustment to yield. The allowance for loan losses is established through a provision for loan losses charged to", - "page_start": 72, - "page_end": 72, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "#### **REPORT OF INDEPENDENT AUDITORS**\n\nTo the Board of Directors and Shareholders of First Financial Bankshares, Inc.\n\nWe have audited the accompanying consolidated balance sheet of First Financial Bankshares, Inc. (a Texas corporation) and subsidiaries as of December 31, 2002, and the related consolidated statements of earnings, comprehensive earnings, shareholders' equity, and cash flows for the year then ended. These financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these financial statements based on our audit. The consolidated financial statements of First Financial Bankshares, Inc. 
and subsidiaries as of December 31, 2001 and for each of the two years then ended, were audited by other auditors who have ceased operations and whose report dated January 11, 2002, expressed an unqualified opinion on those statements.\n\nWe conducted our audit in accordance with auditing standards generally accepted in the United States. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management, as well as evaluating the overall financial statement presentation. We believe that our audit provides a reasonable basis for our opinion.\n\nIn our opinion, the financial statements referred to above present fairly, in all material respects, the financial position of First Financial Bankshares, Inc. and subsidiaries at December 31, 2002, and the consolidated results of their operations and their cash flows for the year then ended in conformity with accounting principles generally accepted in the United States.\n\nAs discussed above, the financial statements of First Financial Bankshares, Inc. as of December 31, 2001 and the two years then ended were audited by other auditors who have ceased operations. As described in Note 1, these financial statements have been revised to include the transitional disclosures required by Statement of Financial Accounting Standards No. 142, *Goodwill and Other Intangible Assets*, which was adopted by the Company as of January 1, 2002. 
Our audit procedures with respect to the disclosures in Note 1 with respect to 2001 and 2000 included (a) agreeing the previously reported net income to the previously issued financial statements and the adjustments to reported net income representing amortization expense including related tax effects recognized in those periods related to goodwill to the Company's underlying records obtained from management, and (b) testing the mathematical accuracy of the reconciliation of adjusted net income to reported net income, and the related earnings per share amounts. In our opinion, the disclosures for 2001 and 2000 are appropriate. However, we were not engaged to audit, review, or apply any procedures to the 2001 and 2000 financial statements of the Company other than with respect to such disclosures and, accordingly, we do not express an opinion or any other form of assurance on the 2001 and 2000 financial statements taken as a whole.\n\nErnst & Young LLP\n\nDallas, Texas January 14, 2003", - "page_start": 64, - "page_end": 64, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "amounted to 49.1%, 48.9% and 45.2% of net earnings, respectively, in 2002, 2001 and 2000. Given our current strong capital position and projected earnings and asset growth rates, we do not anticipate any change in our current dividend policy.\n\nEach state bank that is a member of the Federal Reserve System and each national banking association is required by federal law to obtain the prior approval of the Federal Reserve Board and the OCC, respectively, to declare and pay dividends if the total of all dividends declared in any calendar year would exceed the total of (1) such bank's net profits (as defined and interpreted by regulation) for that year plus (2) its retained net profits (as defined and interpreted by regulation) for the preceding two calendar years, less any required transfers to surplus. 
In addition, these banks may only pay dividends to the extent that retained net profits (including the portion transferred to surplus) exceed bad debts (as defined by regulation).\n\nTo pay dividends, we and our subsidiary banks must maintain adequate capital above regulatory guidelines. In addition, if the applicable regulatory authority believes that a bank under its jurisdiction is engaged in or is about to engage in an unsafe or unsound practice (which, depending on the financial condition of the bank, could include the payment of dividends), the authority may require, after notice and hearing, that such bank cease and desist from the unsafe practice. The Federal Reserve Board and the OCC have each indicated that paying dividends that deplete a bank's capital base to an inadequate level would be an unsafe and unsound banking practice. The Federal Reserve Board, the OCC and the FDIC have issued policy statements that recommend that bank holding companies and insured banks should generally only pay dividends out of current operating earnings.\n\n#### **ITEM 7A. QUANTITATIVE AND QUALITATIVE DISCLOSURES ABOUT MARKET RISK**\n\nOur management considers interest rate risk to be a significant market risk for us. See \"Item 7—Management's Discussion and Analysis of Financial Condition and Results of Operations—Balance Sheet Review—Interest Rate Risk\" for disclosure regarding this market risk.", - "page_start": 55, - "page_end": 55, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "First Financial Bankshares, Inc. is a financial holding company\n\nheadquartered in Abilene, Texas, with consolidated assets of $2.0 billion as of December 31, 2002. The corporation has 10 affiliate banks, which provide services from 28 full-service locations in the Central, West and High Plains regions of Texas. The common stock of First Financial Bankshares, Inc. 
is held by more than 3,500 shareholders and is listed on The NASDAQ Stock Market® under the symbol FFIN.\n\n\"Our 10 affiliate banks provide services from 28 full-service locations in the Central, West and High Plains regions of Texas.\"", - "page_start": 3, - "page_end": 3, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "range of services to individuals, associations, and corporations. These services include administering estates, testamentary trusts, various types of living trusts, and agency accounts. In addition, First National Bank of Abilene, First Financial Bank, Cleburne, San Angelo National Bank and First Financial Bank, National Association, Southlake, Texas provide securities brokerage services through arrangements with various third parties.\n\nWe have filed an application with the office of the Comptroller of the Currency to form a limited purpose national bank under which we will consolidate the management of our current trust departments. The new entity will operate as a subsidiary of our subsidiary holding company, First Financial Bankshares of Delaware, Inc. We believe that with this structure we can more effectively manage our current trust operations and provide trust services to customers of our banks that do not currently have trust departments. We anticipate that the new trust company will begin operations in the latter part of 2003.\n\n#### **Competition**\n\nCommercial banking in Texas is highly competitive, and because we hold less than 1% of the state's deposits, we represent only a minor segment of the industry. To succeed in this industry, our management believes that our banks must have the capability to compete in the areas of (1) interest rates paid or charged; (2) scope of services offered; and (3) prices charged for such services. 
Our subsidiary banks compete in their respective service areas against highly competitive banks, thrifts, savings and loan associations, small loan companies, credit unions, mortgage companies, and brokerage firms, all of which are engaged in providing financial products and services and some of which are larger than our subsidiary banks in terms of capital, resources and personnel.\n\nOur business does not depend on any single customer or any few customers, the loss of any one of which would have a materially adverse effect upon our business. Although we have a broad base of customers that are not related to us, our customers also occasionally include our officers and directors, as well as other entities with which we are affiliated. With our subsidiary banks we may make loans to officers and directors, and entities with which we are affiliated, in the ordinary course of business. We make these loans on substantially the same terms, including interest rates and collateral, as those prevailing at the time for comparable transactions with other persons. Loans to directors, officers and their affiliates are also subject to numerous restrictions under federal and state banking laws which we describe in greater detail below.\n\n#### **Employees**\n\nWith our subsidiary banks we employed approximately 750 full-time equivalent employees at February 1, 2003. Our management believes that our employee relations have been and will continue to be good.\n\n#### **Supervision and Regulation**\n\nBoth federal and state laws extensively regulate bank holding companies, financial holding companies and banks. These laws (and the regulations promulgated thereunder) are primarily intended to protect depositors and the deposit insurance fund of the Federal Deposit Insurance Corporation, or FDIC, although shareholders may also benefit. The following information describes particular laws and regulatory provisions relating to financial holding companies and banks. 
This discussion is qualified in its entirety by reference to the particular laws and regulatory provisions. A change in any of these laws or regulations may have a material effect on our business and the business of our subsidiary banks.\n\n#### *Bank Holding Companies and Financial Holding Companies*\n\nTraditionally, the activities of bank holding companies were limited to the business of banking and activities closely related or incidental to banking. Bank holding companies were generally prohibited from acquiring control of any company which was not a bank and from engaging in any business other than the business of banking or managing and controlling banks. The Gramm-Leach-Bliley Act, which took effect on March 12, 2000, dismantled many Depression-era restrictions against affiliation between banking, securities and insurance firms by permitting bank holding companies to engage in a broader range of financial activities, so long as certain safeguards are observed. Specifically, bank holding companies may elect to become \"financial holding companies\" that may affiliate with securities firms and insurance companies and engage in other activities that are financial in nature or", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "# **NOTE 33 – FINANCIAL RISK MANAGEMENT continued**\n\n# **b) Net Fair Value of Financial Assets and Liabilities**\n\nThe net fair value of cash and cash equivalent and non-interest bearing monetary financial assets and financial liabilities of the consolidated entity approximate their carrying value.\n\nThe net fair value of other monetary financial assets and financial liabilities is based on discounting future cash flows by the current interest rates for assets and liabilities with similar risk profiles. 
Other than the Junior Credit Facility, the balances are not materially different from those disclosed in the consolidated statement of financial position of the Group.\n\n### **c) Credit Risk**\n\nCredit risk for the Group arises from investments in cash and cash equivalents, derivative financial instruments and deposits with banks and financial institutions, as well as credit exposures to customers including outstanding receivables and committed transactions, and represents the potential financial loss if counterparties fail to perform as contracted. The Group trades only with recognised, creditworthy third parties.\n\nThe maximum exposure to credit risk, excluding the value of any collateral or other security, at balance date to recognise the financial assets, is the carrying amount, net of any impairment of those assets, as disclosed in the balance sheet and notes to the financial statements. Receivable balances are monitored on an ongoing basis at the individual customer level.\n\nAt 31 December 2014, the Group had three customers that owed the Group more than $1.0 million each and accounted for approximately 75% of total accrued revenue receivables. There was one customer with balances greater than $5.0 million accounting for approximately 56% of total accrued revenue receivables. For joint interest billing receivables, if payment is not made, the Group can withhold future payments of revenue, as such, there is minimal to no credit risk associated with these receivables.\n\n# **d) Liquidity Risk**\n\nLiquidity risk is the risk that the Group will not be able to meet its financial obligations as they fall due. The Group's approach to managing liquidity is to ensure that it will have sufficient liquidity to meet its liabilities as they become due, without incurring unacceptable losses or risking damage to the Group's reputation. 
The Group manages liquidity risk by maintaining adequate reserves and banking facilities by continuously monitoring forecast and actual cash flows, and by matching the maturity profiles of financial assets and liabilities.\n\nAs at 31 December 2014, based on the current borrowing based, the Group had $15.0 million of undrawn borrowing facilities.", - "page_start": 100, - "page_end": 100, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "#### **REPORT OF INDEPENDENT PUBLIC ACCOUNTANTS**\n\nTo the Board of Directors and Shareholders of First Financial Bankshares, Inc.\n\nWe have audited the accompanying consolidated balance sheets of First Financial Bankshares, Inc. (a Texas corporation) and subsidiaries as of December 31, 2001 and 2000, and the related consolidated statements of earnings, comprehensive earnings, shareholders' equity, and cash flows for each of the three years in the period ended December 31, 2001. These financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these financial statements based on our audits.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management, as well as evaluating the overall financial statement presentation. We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the financial statements referred to above present fairly, in all material respects, the financial position of First Financial Bankshares, Inc. 
and subsidiaries as of December 31, 2001 and 2000, and the results of their operations and their cash flows for each of the three years in the period ended December 31, 2001, in conformity with accounting principles generally accepted in the United States.\n\nArthur Andersen LLP\n\nDallas, Texas, January 11, 2002\n\n### NOTE: THIS IS A COPY OF A REPORT PREVIOUSLY ISSUED BY ARTHUR ANDERSEN LLP WHICH CEASED OPERATIONS. THIS REPORT ADDRESSES CERTAIN FINANCIAL STATEMENTS FOR PERIODS THAT ARE NOT OTHERWISE REQUIRED TO BE INCLUDED IN THIS FORM 10-K.", - "page_start": 65, - "page_end": 65, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "The Company's policy requires measurement of the allowance for an impaired collateral dependent loan based on the fair value of the collateral. Other loan impairments are measured based on the present value of expected future cash flows or the loan's observable market price.\n\n#### **Results of Operations**\n\n*Performance Summary*. Net earnings for 2002 were $34.0 million, an increase of $4.6 million, or 15.7%, over net earnings for 2001 of $29.4 million. Net earnings for 2000 were $28.3 million. The increase in net earnings for 2002 over 2001 was primarily attributable to an increase in net interest income resulting primarily from growth in average earning assets and an improved net interest margin. The increase in net earnings for 2001 over 2000 was primarily attributable to an increase in net interest income resulting primarily from the growth in average earning assets and an increase in noninterest income resulting primarily from increases in service fees on deposit accounts and real estate mortgage fees.\n\nOn a basic net earnings per share basis, net earnings were $2.75 for 2002 as compared to $2.38 for 2001 and $2.28 for 2000. Return on average assets was 1.78% for 2002 as compared to 1.62% for 2001 and 1.67% for 2000. 
Return on average equity was 15.13% for 2002 as compared to 14.35% for 2001 and 15.39% for 2000.\n\nAffecting our 2002 net earnings and basic and diluted earnings per share is the implementation of Statement of Financial Accounting Standards No. 141, \"Business Combinations\" (\"SFAS No. 141\") and Statement of Financial Accounting Standards No. 142, \"Goodwill and Other Intangible Assets\" (\"SFAS No. 142\"). SFAS No. 141 requires that all business combinations initiated after June 30, 2001 be accounted for under the purchase method and addresses the initial recognition and measurement of goodwill and other intangible assets acquired in a business combination. SFAS No. 142 addresses the initial recognition and measurement of intangible assets acquired outside of a business combination and the accounting for goodwill and other intangible assets subsequent to their acquisition. SFAS No. 142 provides that intangible assets with finite useful lives be amortized and that goodwill and intangible assets with indefinite lives not be amortized, but rather be tested at least annually for impairment. SFAS No. 142 was effective January 1, 2002 for calendar year companies; however, acquired goodwill and intangible assets recorded in the acquisition of City Bancshares, Inc. closed subsequent to June 30, 2001 were subject immediately to its provisions.\n\nOn January 1, 2002, goodwill amounting to $23,765,896 was not subject to further amortization as a result of SFAS No. 142. The Company conducted its initial impairment test in 2002, with no reduction of recorded goodwill resulting from the test. 
A reconciliation adjusting comparative net earnings and earnings per share for the years ended December 31, 2001 and 2000, to show the effect of no longer amortizing the Company's goodwill, follows:\n\n| | 2001 | | 2000 | |\n| --- | --- | --- | --- | --- |\n| Reported net earnings | $ 29,354,505 | | $ 28,316,047 | |\n| Add back: goodwill amortization | | | | |\n| Goodwill amortization, before income tax | 1,641,367 | | 1,641,367 | |\n| Income tax benefit | (420,000) | | (420,000) | |\n| Adjusted net earnings | $ 30,575,872 | | $ 29,537,414 | |\n| Basic earnings per share: | | | | |\n| Reported net earnings | $ | 2.38 | $ | 2.28 |\n| Goodwill amortization, net of income tax benefit | | .10 | | .10 |\n| Adjusted net earnings | $ | 2.48 | $ | 2.38 |\n| Earnings per share, assuming dilution: | | | | |\n| Reported net earnings | $ 2.37 | | $ | 2.27 |\n| Goodwill amortization, net of income tax benefit | .10 | | | .10 |\n| Adjusted net earnings | $ 2.47 | | $ | 2.37 |\n\n*Net Interest Income*. Net interest income is the difference between interest income on earning assets and interest expense on liabilities incurred to fund those assets. Our earning assets consist primarily of loans and investment securities. Our liabilities to fund those assets consist primarily of noninterest-bearing and interestbearing deposits. Tax-equivalent net interest income was $84.2 million in 2002 as compared to $74.8 million in", - "page_start": 43, - "page_end": 43, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "Our subsidiary banks paid aggregate dividends of approximately $26.6 million in 2002 and approximately $25.5 million in 2001. 
Under the dividend restrictions discussed above, as of December 31, 2002, our subsidiary banks, without obtaining governmental approvals, could have declared in the aggregate additional dividends of approximately $20.7 million from retained net profits.\n\nTo pay dividends, we and our subsidiary banks must maintain adequate capital above regulatory guidelines. In addition, if the applicable regulatory authority believes that a bank under its jurisdiction is engaged in or is about to engage in an unsafe or unsound practice (which, depending on the financial condition of the bank, could include the payment of dividends), the authority may require, after notice and hearing, that such bank cease and desist from the unsafe practice. The Federal Reserve Board and the OCC have each indicated that paying dividends that deplete a bank's capital base to an inadequate level would be an unsafe and unsound banking practice. The Federal Reserve Board, the OCC and the FDIC have issued policy statements that recommend that bank holding companies and insured banks should generally only pay dividends to the extent that net income is sufficient to cover both cash dividends and rate of earnings retention consistent with capital needs, asset quality and overall financial condition. No undercapitalized institution may pay a dividend.\n\n#### *Affiliate Transactions*\n\nThe Federal Reserve Act, the FDIA and the rules adopted under these statutes restrict the extent to which we can borrow or otherwise obtain credit from, or engage in certain other transactions with, our depository subsidiaries. These laws regulate \"covered transactions\" between insured depository institutions and their subsidiaries, on the one hand, and their nondepository affiliates, on the other hand. 
\"Covered transactions\" include a loan or extension of credit to a nondepository affiliate, a purchase of securities issued by such an affiliate, a purchase of assets from such an affiliate (unless otherwise exempted by the Federal Reserve Board), an acceptance of securities issued by such an affiliate as collateral for a loan, and an issuance of a guarantee, acceptance, or letter of credit for the benefit of such an affiliate. The \"covered transactions\" that an insured depository institution and its subsidiaries are permitted to engage in with their nondepository affiliates are limited to the following amounts: (1) in the case of any one such affiliate, the aggregate amount of \"covered transactions\" cannot exceed ten percent of the capital stock and the surplus of the insured depository institution; and (2) in the case of all affiliates, the aggregate amount of \"covered transactions\" cannot exceed twenty percent of the capital stock and surplus of the insured depository institution. In addition, extensions of credit that constitute \"covered transactions\" must be collateralized in prescribed amounts. Further, a bank holding company and its subsidiaries are prohibited from engaging in certain tie-in arrangements in connection with any extension of credit, lease or sale of property or furnishing of services. Finally, when we and our subsidiary banks conduct transactions internally among us, we are required to do so at arm's length.\n\n#### *Loans to Directors, Executive Officers and Principal Shareholders*\n\nThe authority of our subsidiary banks to extend credit to our directors, executive officers and principal shareholders, including their immediate family members and corporations and other entities that they control, is subject to substantial restrictions and requirements under Sections 22(g) and 22(h) of the Federal Reserve Act and Regulation O promulgated thereunder. 
These statutes and regulations impose specific limits on the amount of loans our subsidiary banks may make to directors and other insiders, and specified approval procedures must be followed in making loans that exceed certain amounts. In addition, all loans our subsidiary banks make to directors and other insiders must satisfy the following requirements:\n\n- The loans must be made on substantially the same terms, including interest rates and collateral, as prevailing at the time for comparable transactions with persons not affiliated with us or the subsidiary banks;\n- The subsidiary banks must follow credit underwriting procedures at least as stringent as those applicable to comparable transactions with persons who are not affiliated with us or the subsidiary banks; and\n- The loans must not involve a greater than normal risk of repayment or other unfavorable features.", - "page_start": 33, - "page_end": 33, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "incidental to a financial activity. Thus, with the enactment of the Gramm-Leach-Bliley Act, banks, securities firms and insurance companies find it easier to acquire or affiliate with each other and cross-sell financial products. The act permits a single financial services organization to offer a more complete array of financial products and services than historically was permitted.\n\nA financial holding company is essentially a bank holding company with significantly expanded powers. Under the Gramm-Leach-Bliley Act, among the activities that will be deemed \"financial in nature\" for financial holding companies are, in addition to traditional lending activities, securities underwriting, dealing in or making a market in securities, sponsoring mutual funds and investment companies, insurance underwriting and agency activities, activities which the Federal Reserve Board determines to be closely related to banking, and certain merchant banking activities. 
The Federal Reserve Board has proposed permitting a number of additional financial activities, but we cannot predict whether any of these additional proposals will be adopted or the form any final rule will take.\n\nWe elected to become a financial holding company in September 2001. As a financial holding company, we have very broad discretion to affiliate with securities firms and insurance companies, make merchant banking investments, and engage in other activities that the Federal Reserve Board has deemed financial in nature. In order to continue as a financial holding company, we must continue to be well-capitalized, well-managed and maintain compliance with the Community Reinvestment Act. Depending on the types of financial activities that we may engage in in the future, under Gramm-Leach-Bliley's fractional regulation principles, we may become subject to supervision by additional government agencies. The election to be treated as a financial holding company increases our ability to offer financial products and services that historically we were either unable to provide or were only able to provide on a limited basis. As a result, we will face increased competition in the markets for any new financial products and services that we may offer. Likewise, an increased amount of consolidation among banks and securities firms or banks and insurance firms could result in a growing number of large financial institutions that could compete aggressively with us.\n\n#### *Mergers and Acquisitions*\n\nWe generally must obtain approval from the banking regulators before we can acquire other financial institutions. We must not engage in certain acquisitions if we are undercapitalized. Furthermore, the BHCA provides that the Federal Reserve Board cannot approve any acquisition, merger or consolidation that may substantially lessen competition in the banking industry, create a monopoly in any section of the country, or be a restraint of trade. 
However, the Federal Reserve Board may approve such a transaction if the convenience and needs of the community clearly outweigh any anti-competitive effects. Specifically, the Federal Reserve Board would consider, among other factors, the expected benefits to the public (greater convenience, increased competition, greater efficiency, etc.) against the risks of possible adverse effects (undue concentration of resources, decreased or unfair competition, conflicts of interest, unsound banking practices, etc.).\n\n#### *Banks*\n\nFederal and state laws and regulations that govern banks have the effect of, among other things, regulating the scope of business, investments, cash reserves, the purpose and nature of loans, the maximum interest rate chargeable on loans, the amount of dividends declared, and required capitalization ratios.\n\n*National Banking Associations*. Banks that are organized as national banking associations under the National Bank Act are subject to regulation and examination by the Office of the Comptroller of the Currency, or OCC. The OCC supervises, regulates and regularly examines the First National Bank of Abilene, First National Bank, Sweetwater, First Financial Bank, National Association, Cleburne, Eastland National Bank, San Angelo National Bank, Weatherford National Bank, First Financial Bank, National Association, Southlake and City National Bank, Mineral Wells. The OCC's supervision and regulation of banks is primarily intended to protect the interests of depositors. 
The National Bank Act:\n\n- requires each national banking association to maintain reserves against deposits,\n- restricts the nature and amount of loans that may be made and the interest that may be charged, and\n- restricts investments and other activities.", - "page_start": 31, - "page_end": 31, - "source_file": "NASDAQ_FFIN_2002.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_FFIN_2002.pdf", - "query": "What is the address of the San Angelo National Bank main office ?", - "target_page": 21, - "target_passage": "Main Office 301 W. Beauregard San Angelo, Texas 76903 Chartered 1997 ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- Eastland National Bank, Eastland, Texas;\n- First Financial Bank, National Association, Cleburne, Texas;\n- Stephenville Bank and Trust Co., Stephenville, Texas;\n- San Angelo National Bank, San Angelo, Texas;\n- Weatherford National Bank, Weatherford, Texas;\n- First Financial Bank, National Association, Southlake, Texas; and\n- City National Bank, Mineral Wells, Texas.\n\nAs described in more detail below, we elected to be treated as a financial holding company in September 2001.\n\nOur service centers are located primarily in North Central and West Texas. 
Considering the branches and locations of all our subsidiary banks, as of December 31, 2002, we had 28 financial centers across Texas, with seven locations in Abilene, two locations in Cleburne, two locations in Stephenville, two locations in San Angelo, three locations in Weatherford, and one location each in Mineral Wells, Hereford, Sweetwater, Eastland, Southlake, Aledo, Alvarado, Burleson, Keller, Trophy Club, Roby, and Trent.\n\nInformation on our revenues, profits and losses and total assets appears in the discussion of our Results of Operations contained in Item 7 hereof.\n\n#### **First Financial Bankshares, Inc.**\n\nWe provide management and technical resources and policy direction to our subsidiary banks, which enables them to improve or expand their banking services while continuing their local activity and identity. Each of our subsidiary banks operates under the day-to-day management of its own board of directors and officers, with substantial authority in making decisions concerning their own investments, loan policies, interest rates, and service charges. We provide resources and policy direction in, among other things, the following areas:\n\n- asset and liability management;\n- accounting, budgeting, planning and insurance;\n- capitalization; and\n- regulatory compliance.\n\nIn particular, we assist our subsidiary banks with, among other things, decisions concerning major capital expenditures, employee fringe benefits, including pension plans and group insurance, dividend policies, and appointment of officers and directors and their compensation. We also perform, through corporate staff groups or by outsourcing to third parties, internal audits and loan reviews of our subsidiary banks. 
Through First National Bank of Abilene, we provide advice and specialized services for our banks related to lending, investing, purchasing, advertising, public relations, and computer services.\n\nWhile we have no specific acquisition agreements in place or commitments to expand our branch network, we periodically evaluate various potential financial institution acquisition opportunities and also periodically evaluate potential locations for new branch offices. We anticipate that funding for any acquisitions or expansions would be provided from our existing cash balances, available dividends from subsidiary banks, utilization of available lines of credit and future debt or equity offerings.\n\n#### **Services Offered by Our Subsidiary Banks**\n\nEach of our subsidiary banks is a separate legal entity that operates under the day-to-day management of its own board of directors and officers. Each of our subsidiary banks provides general commercial banking services, which include accepting and holding checking, savings and time deposits, making loans, automated teller machines, drivein and night deposit services, safe deposit facilities, transmitting funds, and performing other customary commercial banking services. Certain of our subsidiary banks also administer pension plans, profit sharing plans and other employee benefit plans. First National Bank of Abilene, First National Bank, Sweetwater, Stephenville Bank and Trust Co. and San Angelo National Bank have active trust departments. The trust departments offer a complete", - "page_start": 29, - "page_end": 29, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "# \"They stuck with me and were always team players.\"\n\n#### **Bob Housley appreciates loyalty.**\n\nHis company, Housley Communications, is a thriving business with a staff of 225 and contracting relationships with over 700 firms. The company provides engineering and implementation of advanced telecommunications systems. 
\"We provide everything a company needs to go from zero to 100 percent.\"\n\nSuccess hasn't necessarily been easy. \"We had some difficult times when we were starting out in the '80s,\" says Housley. \"San Angelo National Bank worked very diligently to help me get where I am today. They stuck with me and were always team players.\"\n\nHousley is a demanding customer – a trait to which he credits much of his success. \"I am very customer service-oriented. It's how I built my business. I appreciate that I can get that same type of dedication from San Angelo National Bank, and I see it reflected throughout the First Financial Bankshares organization.\"\n\nHousley the shareholder is no less demanding, but he's had good reason to be pleased with his returns from First Financial Bankshares. \"First Financial's expansion strategy is excellent – they do their research and find banks with good opportunity. Their operations are sound, and their growth is well-managed. I believe they are one of the best mid-size banking organizations around.\"\n\n> Bob Housley President Housley Communications San Angelo, Texas", - "page_start": 10, - "page_end": 10, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "Assets managed by the Trust Departments at First National Bank of Abilene, San Angelo National Bank, Stephenville Bank & Trust Co. and First National Bank, Sweetwater, increased $27.3 million during the past year to a December 31, 2002 book value of $986.2 million. However, due to depressed stock market values and volumes, trust department revenue declined in 2002. Trust combined revenues for the year were down slightly from $5.89 million in 2001 to $5.83 million for 2002. In 2003, we anticipate a return to improved income growth.\n\nThe performance of the stock market the past three years has been a challenge that our trust investment professionals have managed well. Not since 1939-1941 have we seen the S&P 500 drop 35% in a three-year period. 
Our portfolio managers outperformed their indices in Large Cap stocks by 83 basis points and Fixed Income securities by 168 basis points. This performance bodes well for the present and future of our client accounts.\n\nDuring 2002, we saw a successful conversion of Stephenville Bank & Trust to the SEI Corporation accounting system. In March 2003, we will be converting First National Bank, Sweetwater, to this system as well. This will provide all First Financial Bankshares trust clients with the strength and advantages of a uniform accounting system. Other operational systems have been examined and consistent practices and procedures have been implemented.\n\nTo further enhance our risk management assessments in 2003, we will be introducing an Operational Peer Review Team similar to the successful peer review teams used in the Personal Trust areas of our four locations.\n\nRobert S. Patterson *First National Bank of Abilene*\n\nDavid Byrd *San Angelo National Bank*\n\nJanis McDowell *First National Bank, Sweetwater*\n\nPlans for the formation of a First Financial Bankshares trust company are moving forward with regulatory approval anticipated in late Spring or early Summer. This will permit your Company to provide quality, locally delivered trust services to additional markets.\n\nWith skilled trust professionals offering a complete range of financial products and services, the future of our trust departments look bright. Through dedication to individualized portfolio design and personalized service, our trust departments stand ready to meet the needs of our present and future clients.\n\nSenior Vice President, Trust Services", - "page_start": 14, - "page_end": 14, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "range of services to individuals, associations, and corporations. These services include administering estates, testamentary trusts, various types of living trusts, and agency accounts. 
In addition, First National Bank of Abilene, First Financial Bank, Cleburne, San Angelo National Bank and First Financial Bank, National Association, Southlake, Texas provide securities brokerage services through arrangements with various third parties.\n\nWe have filed an application with the office of the Comptroller of the Currency to form a limited purpose national bank under which we will consolidate the management of our current trust departments. The new entity will operate as a subsidiary of our subsidiary holding company, First Financial Bankshares of Delaware, Inc. We believe that with this structure we can more effectively manage our current trust operations and provide trust services to customers of our banks that do not currently have trust departments. We anticipate that the new trust company will begin operations in the latter part of 2003.\n\n#### **Competition**\n\nCommercial banking in Texas is highly competitive, and because we hold less than 1% of the state's deposits, we represent only a minor segment of the industry. To succeed in this industry, our management believes that our banks must have the capability to compete in the areas of (1) interest rates paid or charged; (2) scope of services offered; and (3) prices charged for such services. Our subsidiary banks compete in their respective service areas against highly competitive banks, thrifts, savings and loan associations, small loan companies, credit unions, mortgage companies, and brokerage firms, all of which are engaged in providing financial products and services and some of which are larger than our subsidiary banks in terms of capital, resources and personnel.\n\nOur business does not depend on any single customer or any few customers, the loss of any one of which would have a materially adverse effect upon our business. 
Although we have a broad base of customers that are not related to us, our customers also occasionally include our officers and directors, as well as other entities with which we are affiliated. With our subsidiary banks we may make loans to officers and directors, and entities with which we are affiliated, in the ordinary course of business. We make these loans on substantially the same terms, including interest rates and collateral, as those prevailing at the time for comparable transactions with other persons. Loans to directors, officers and their affiliates are also subject to numerous restrictions under federal and state banking laws which we describe in greater detail below.\n\n#### **Employees**\n\nWith our subsidiary banks we employed approximately 750 full-time equivalent employees at February 1, 2003. Our management believes that our employee relations have been and will continue to be good.\n\n#### **Supervision and Regulation**\n\nBoth federal and state laws extensively regulate bank holding companies, financial holding companies and banks. These laws (and the regulations promulgated thereunder) are primarily intended to protect depositors and the deposit insurance fund of the Federal Deposit Insurance Corporation, or FDIC, although shareholders may also benefit. The following information describes particular laws and regulatory provisions relating to financial holding companies and banks. This discussion is qualified in its entirety by reference to the particular laws and regulatory provisions. A change in any of these laws or regulations may have a material effect on our business and the business of our subsidiary banks.\n\n#### *Bank Holding Companies and Financial Holding Companies*\n\nTraditionally, the activities of bank holding companies were limited to the business of banking and activities closely related or incidental to banking. 
Bank holding companies were generally prohibited from acquiring control of any company which was not a bank and from engaging in any business other than the business of banking or managing and controlling banks. The Gramm-Leach-Bliley Act, which took effect on March 12, 2000, dismantled many Depression-era restrictions against affiliation between banking, securities and insurance firms by permitting bank holding companies to engage in a broader range of financial activities, so long as certain safeguards are observed. Specifically, bank holding companies may elect to become \"financial holding companies\" that may affiliate with securities firms and insurance companies and engage in other activities that are financial in nature or", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "the parent's general unsecured creditors. If a depository institution fails to submit an acceptable capital restoration plan, it shall be treated as if it is significantly undercapitalized. \"Significantly undercapitalized\" depository institutions may be subject to a number of requirements and restrictions, including orders to sell sufficient voting stock to become \"adequately capitalized,\" requirements to reduce total assets, and cessation of receipt of deposits from correspondent banks. \"Critically undercapitalized\" institutions are subject to the appointment of a receiver or conservator. Finally, FDICIA requires the various regulatory agencies to set forth certain standards that do not relate to capital. 
Such standards relate to the safety and soundness of operations and management and to asset quality and executive compensation, and permit regulatory action against a financial institution that does not meet such standards.\n\nIf an insured bank fails to meet its capital guidelines, it may be subject to a variety of other enforcement remedies, including a prohibition on the taking of brokered deposits and the termination of deposit insurance by the FDIC. Bank regulators continue to indicate their desire to raise capital requirements beyond their current levels.\n\nIn addition to FDICIA capital standards, Texas-chartered banks must also comply with the capital requirements imposed by the Texas Banking Department. Neither the Texas Finance Code nor its regulations specify any minimum capital-to-assets ratio that must be maintained by a Texas-chartered bank. Instead, the Texas Banking Department determines the appropriate ratio on a bank by bank basis, considering factors such as the nature of a bank's business, its total revenue, and the bank's total assets. As of December 31, 2002, all of our Texas-chartered banks exceeded the minimum ratios applied to them.\n\n#### *Our Support of Our Subsidiary Banks*\n\nUnder Federal Reserve Board policy, we are expected to commit resources to act as a source of strength to support each of our subsidiary banks. This support may be required at times when, absent such Federal Reserve Board policy, we would not otherwise be required to provide it. In addition, any loans we make to our subsidiary banks would be subordinate in right of payment to deposits and to other indebtedness of our banks. 
In the event of a bank holding company's bankruptcy, any commitment by the bank holding company to a federal bank regulatory agency to maintain the capital of a subsidiary bank will be assumed by the bankruptcy trustee and be subject to a priority of payment.\n\nUnder the National Bank Act, if the capital stock of a national bank is impaired by losses or otherwise, the OCC is authorized to require the bank's shareholders to pay the deficiency on a pro-rata basis. If any shareholder refuses to pay the pro-rata assessment after three months notice, then the bank's board of directors must sell an appropriate amount of the shareholder's stock at a public auction to make up the deficiency. To the extent necessary, if a deficiency in capital still exists and the bank refuses to go into liquidation, then a receiver may be appointed to wind up the bank's affairs. Additionally, under the Federal Deposit Insurance Act, in the event of a loss suffered or anticipated by the FDIC (either as a result of the default of a banking subsidiary or related to FDIC assistance provided to a subsidiary in danger of default) our other banking subsidiaries may be assessed for the FDIC's loss.\n\n#### *Interstate Banking and Branching Act*\n\nPursuant to the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994, or Riegle-Neal Act, a bank holding company or financial holding company is able to acquire banks in states other than its home state. The Riegle-Neal Act also authorized banks to merge across state lines, thereby creating interstate branches, beginning June 1, 1997. Furthermore, under this act, a bank is now able to open new branches in a state in which it does not already have banking operations, if the laws of such state permit it to do so. 
Accordingly, both the OCC and the Texas Banking Department accept applications for interstate merger and branching transactions, subject to certain limitations on ages of the banks to be acquired and the total amount of deposits within the state a bank or financial holding company may control. Since our primary service area is Texas, we do not expect that the ability to operate in other states will have any material impact on our growth strategy. We may, however, face increased competition from out-of-state banks that branch or make acquisitions in our primary markets in Texas.", - "page_start": 35, - "page_end": 35, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "#### 1. SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES:\n\n#### Nature of Operations\n\nFirst Financial Bankshares, Inc. (a Texas corporation) (\"Bankshares\") is a financial holding company which owns (through its wholly-owned Delaware subsidiary) all of the capital stock of ten banks located in Texas as of December 31, 2002. Those subsidiary banks are First National Bank of Abilene; Hereford State Bank; First National Bank, Sweetwater; Eastland National Bank; First Financial Bank, National Association, Cleburne; Stephenville Bank & Trust Co.; San Angelo National Bank; Weatherford National Bank; First Financial Bank, National Association, Southlake and City National Bank, Mineral Wells. Each subsidiary bank's primary source of revenue is providing loans and banking services to consumers and commercial customers in the market area in which the subsidiary is located.\n\nA summary of significant accounting policies of Bankshares and subsidiaries (collectively, the \"Company\") applied in the preparation of the accompanying consolidated financial statements follows. 
The accounting principles followed by the Company and the methods of applying them are in conformity with both accounting principles generally accepted in the United States of America and prevailing practices of the banking industry.\n\n#### Use of Estimates in Preparation of Financial Statements\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the United States of America requires management to make estimates and assumptions that affect the reported amounts of assets and liabilities and disclosure of contingent assets and liabilities at the date of the financial statements and reported amounts of revenues and expenses during the reporting period. Actual results could differ from those estimates. Material estimates that are particularly susceptible to significant change in the near term relate to the determination of the allowance for loan losses, the valuations of foreclosed real estate, deferred income tax assets, and the fair value of financial instruments.\n\n#### Consolidation\n\nThe accompanying consolidated financial statements include the accounts of Bankshares and its subsidiaries, all of which are wholly-owned. All significant intercompany accounts and transactions have been eliminated.\n\n#### Investment Securities\n\nManagement classifies debt and equity securities as held-to-maturity, available-for-sale, or trading based on its intent. Debt securities that management has the positive intent and ability to hold to maturity are classified as heldto-maturity and recorded at cost, adjusted for amortization of premiums and accretion of discounts, which are recognized as adjustments to interest income using the interest method. 
Securities not classified as held-to-maturity or trading are classified as available-for-sale and recorded at estimated fair value, with unrealized gains and losses, net of deferred income taxes, excluded from earnings and reported in a separate component of shareholders' equity. Securities classified as trading are recorded at estimated fair value, with unrealized gains and losses included in earnings. The Company had no trading securities at December 31, 2002, 2001, or 2000.\n\n#### Loans and Allowance for Loan Losses\n\nLoans are stated at the amount of unpaid principal, reduced by unearned income and an allowance for loan losses. Unearned income on installment loans is recognized in income over the terms of the loans in decreasing amounts using a method which approximates the interest method. Interest on other loans is calculated by using the simple interest method on daily balances of the principal amounts outstanding. The Company expenses its net loan origination costs, a method which does not materially differ from deferring and amortizing such amounts as an adjustment to yield. The allowance for loan losses is established through a provision for loan losses charged to", - "page_start": 72, - "page_end": 72, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "#### **Target Markets Clients** ■ Banks ■ Credit unions ■ Independent ATM owners ■ Mobile operators ■ Payment associations ■ Retailers and merchants ■ Bank Austria/Creditanstalt (CZE) ■ Budapest Bank (HUN) ■ Citibank (GRC, HUN, POL, CZE) ■ Deutsche Bank (HUN, POL) ■ DiBa (DEU) ■ Dillards Inc. (USA) ■ Metropolitan National Bank (USA) ■ Millennium Bank (POL) ■ Raiffeisenbank (HRV) ■ Saks Inc. 
(USA)\n\n■ ABN Amro (HUN, CUR)\n\n■ Bank Slaski (POL) ■ Century Bank (ZWE)\n\n■ Banco Comercial Português (MOZ) ■ Banco de Oro, Unibank (PHL)\n\n■ Cayman National Bank (CYM)\n\n■ Commercial Bank of Romania (ROM)\n\n- Banks\n- Credit unions\n- EFT networks\n- Independent ATM owners\n- Resellers\n- Retailers and merchants\n\n■ Mobile phone operators\n\n■ Third-party prepaid suppliers for mobile phone operators\n\n- ALLTEL (USA)\n- Centertel (POL)\n- Eurotel (CZE)\n- ERA GSM (POL)\n\n- Old National Service Corp. (USA)\n■ Maduro and Curiel's Bank N.V. (CUR)\n\n■ Old National Service Corp. (USA)\n\n■ WestPac Banking Corp. (FJI, PNG)\n\n- Plus GSM (POL)\n■ Nova Bank (GRC)\n\n■ Seylan Bank (LKA) ■ VIFI Card Services (USA)\n\n- VIPnet (HRV)\n- Banks\n- Brokerages\n- Credit card issuers\n- Credit unions\n- Investment community\n- Retailers and merchants\n- Bank of Cyprus (GRC, GBR)\n- Commercial Bank of Ceylon (LKA)\n- First Federal Savings Bank of LaCrosse (USA)\n- Maduro and Curiel's Bank N.V. (CUR)\n- National Bank of Kuwait-Lebanon (LBN)\n\n- Banks\n- Credit unions\n- Independent ATM owners\n- Retailers\n- Fortis Bank (POL)\n- ING/Bank Slaski (POL)\n- Oyak Bank (TUR)\n- Splitska Banka (HRV)\n- Union Bank Ltd. (PAK)\n- Union Banka (CZE)", - "page_start": 8, - "page_end": 8, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## **Linking consumers with any time, any place mobile banking**\n\n*In today's increasingly wireless world, consumers are turning in record numbers to mobile devices for greater convenience and access to banking and information services.*\n\nith the freedom of mobile devices, bank customers can instantly obtain account balances, transfer money and even view a mini-bank statement–or set up instant alerts to monitor their daily account balances, deposit notifications and other personalized information 24 hours a day, 7 days a week. 
W\n\nThe exciting potential of wireless is creating unprecedented opportunities for banks to connect with their customers. In Western Europe, the number of mobile banking accounts is expected to reach 31.8 million by 2004.1 Expanding wireless capabilities are also helping to drive growth in North America, where the number of wireless financial services users is projected to skyrocket to 35 million by 2005.2 The Asia-Pacific region is forecast at 12 million subscribers of wireless financial services alone in 2003.3\n\nThe quickly evolving market for mobile banking represents a tremendous opportunity for Euronet Worldwide. Last spring we introduced Euronet® Mobile Banking as the first financial application that offered both secure account access and a personalized accounting alerting system. Among our new mobile banking clients in 2000 were the Bank of Cyprus, for its branches in London and Greece, and the National Bank of Kuwait, for its Lebanon branch, who were both first to market in their regions.\n\nTo further strengthen our capabilities, we announced strategic alliances to market and deliver Euronet's suite of mobile banking solutions with\n\nAether Systems, Inc. for the US market and with Sila Communications for the European, Middle Eastern and Asian markets. In addition, we formed similar regional strategic alliances with companies like Stet Hellas Telecommunications S.A., a Greek mobile operator and subsidiary of Telecom Italia Mobile (TIM).\n\nAs next-generation mobile technology brings higher data speeds, personalization and other enhancements, we believe the future of mobile banking presents great opportunities for Euronet.\n\n## **National Bank of Kuwait-Lebanon**\n\n#### First-to-Market Mobile Banking\n\nTo broaden its customer and account\n\nbase, the National Bank of Kuwait-Lebanon (NBK-L) wanted to be first in their market with a mobile banking solution. 
In a tight race with a competing bank, Euronet's mobile solution was integrated quickly into the NBK-L's IT infrastructure, enabling the bank to be the first to deliver services in its market.\n\nTogether with GSM operator Libancell, NBK-L's new mobile banking system offers customers any time, any place access to their account information from their GSM telephones.", - "page_start": 12, - "page_end": 12, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "#### *Pending and Proposed Legislation*\n\nNew regulations and statutes are regularly proposed containing wide-ranging proposals for altering the structures, regulations and competitive relationships of financial institutions operating in the United States. We cannot predict whether or in what form any proposed regulation or statute will be adopted or the extent to which our business may be affected by any new regulation or statute.\n\n#### *Enforcement Powers of Federal Banking Agencies*\n\nThe Federal Reserve and other state and federal banking agencies and regulators have broad enforcement powers, including the power to terminate deposit insurance, issue cease-and-desist orders, impose substantial fees and other civil and criminal penalties and appoint a conservator or receiver. Our failure to comply with applicable laws, regulations and other regulatory pronouncements could subject us, as well as our officers and directors, to administrative sanctions and potentially substantial civil penalties.\n\n#### *Available Information*\n\nWe file annual, quarterly and special reports, proxy statements and other information with the Securities and Exchange Commission. You may read and copy any document we file at the Securities and Exchange Commission's Public Reference Room at 450 Fifth Street, N.W., Washington, D.C. 20549. Please call the Securities and Exchange Commission at 1-800-SEC-0330 for further information on the public reference room. 
Our SEC filings are also available to the public at the Securities and Exchange Commission's web site at http://www.sec.gov. No information from this web page is incorporated by reference herein. Our web site is http://www.ffin.com. You may also obtain copies of our annual, quarterly and special reports, proxy statements and certain other information filed with the SEC, as well as amendments thereto, free of charge from our web site. These documents are posted to our web site as soon as reasonably practicable after we have filed them with the SEC.\n\n#### **ITEM 2. PROPERTIES**\n\nOur principal office is located in the First National Bank Building at 400 Pine Street in downtown Abilene, Texas. We lease two spaces in a building owned by First National Bank of Abilene. The lease for approximately 2,300 square feet of space expires December 31, 2004. The lease for approximately 1,100 square feet of space expires May 31, 2006. Our subsidiary banks collectively own 22 banking facilities, some of which are detached drive-ins, and they also lease six banking facilities. Our management considers all of our existing locations to be well-suited for conducting the business of banking. We believe that our existing facilities are adequate to meet our requirements and our subsidiary banks' requirements for the foreseeable future.\n\n#### **ITEM 3. LEGAL PROCEEDINGS**\n\nFrom time to time we and our subsidiary banks are parties to lawsuits arising in the ordinary course of our banking business. However, there are no material pending legal proceedings to which we, our subsidiary banks or our other direct and indirect subsidiaries, or any of their properties, are currently subject. Other than regular, routine examinations by state and federal banking authorities, there are no proceedings pending or known to be contemplated by any governmental authorities.\n\n#### **ITEM 4. 
SUBMISSION OF MATTERS TO A VOTE OF SECURITY HOLDERS**\n\nNo matters were submitted to a vote of our security holders during the fourth quarter of our fiscal year ended December 31, 2002.", - "page_start": 38, - "page_end": 38, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "#### *Community Reinvestment Act of 1977*\n\nThe Community Reinvestment Act of 1977, or CRA subjects a bank to regulatory assessment to determine if the institution meets the credit needs of its entire community, including low- and moderate-income neighborhoods served by the bank, and to take that determination into account in its evaluation of any application made by such bank for, among other things, approval of the acquisition or establishment of a branch or other deposit facility, an office relocation, a merger, or the acquisition of shares of capital stock of another financial institution. The regulatory authority prepares a written evaluation of an institution's record of meeting the credit needs of its entire community and assigns a rating. We believe our subsidiary banks have taken significant actions to comply with the CRA, and each has received at least a \"satisfactory\" commendation in its most recent review by federal regulators with respect to its compliance with the CRA.\n\n#### *Monitoring and Reporting Suspicious Activity*\n\nUnder the Bank Secrecy Act, IRS rules and other regulations, we are required to monitor and report unusual or suspicious account activity as well as transactions involving the transfer or withdrawal of amounts in excess of prescribed limits. In the wake of the tragic events of September 11th, on October 26, 2001, the President signed the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism, or USA PATRIOT Act, of 2001. 
Under the USA PATRIOT Act, financial institutions are subject to prohibitions against specified financial transactions and account relationships as well as enhanced due diligence and \"know your customer\" standards in their dealings with foreign financial institutions and foreign customers. For example, the enhanced due diligence policies, procedures, and controls generally require financial institutions to take reasonable steps to:\n\n- to conduct enhanced scrutiny of account relationships to guard against money laundering and report any suspicious transaction;\n- to ascertain the identity of the nominal and beneficial owners of, and the source of funds deposited into, each account as needed to guard against money laundering and report any suspicious transactions;\n- to ascertain for any foreign bank, the shares of which are not publicly traded, the identity of the owners of the foreign bank, and the nature and extent of the ownership interest of each such owner; and\n- to ascertain whether any foreign bank provides correspondent accounts to other foreign banks and, if so, the identity of those foreign banks and related due diligence information.\n\nUnder the USA PATRIOT Act, financial institutions are also required to establish anti-money laundering programs. 
The USA PATRIOT Act sets forth minimum standards for these programs, including:\n\n- the development of internal policies, procedures, and controls;\n- the designation of a compliance officer;\n- an ongoing employee training program; and\n- an independent audit function to test the programs.\n\nIn addition, the USA PATRIOT Act also requires the Secretary of the Treasury to adopt rules addressing a number of related issues, including increasing the cooperation and information sharing between financial institutions, regulators, and law enforcement authorities regarding individuals, entities and organizations engaged in, or reasonably suspected based on credible evidence of engaging in, terrorist acts or money laundering activities. Any financial institution complying with these rules will not be deemed to violate the privacy provisions of the Gramm-Leach-Bliley Act that are discussed below. Finally, under the regulations of the Office of Foreign Asset Control, we are required to monitor and block transactions with certain \"specially designated nationals\" who OFAC has determined pose a risk to U.S. national security.", - "page_start": 36, - "page_end": 36, - "source_file": "NASDAQ_FFIN_2002.pdf" - } - ] - }, - { - "references": { - "source_file": "news3.pdf", - "query": "What kind of scholarship programs are available to start a financial career?", - "target_page": 1, - "target_passage": "Some are offered directly through colleges and universities that have financial planning degree and certificate programs. 
Others are available through nonprofits and organizations like the CFP Board Center for Financial Planning", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Home / Money / 3 Great Resources to Kick-Start Your Financial Planning Career\n\n#### MONEY\n\n### 3 Great Resources to Kick-Start Your Financial Planning Career\n\n11/23/2022\n\n(NewsUSA) - Finding a rewarding career that offers growth potential, work-life balance and the satisfaction of helping others is a key priority for many job seekers. With those goals in mind, a career in financial planning should be a top contender, whether you are just starting out or looking to make a career change. But once you have decided that financial planning is the field for you, how do you get started? Here are three resources that can help you launch a successful financial planning career.\n\n1. Guide to Careers in Financial Planning. Based on interviews with leading financial services firms, this guide introduces you to the wide range of career opportunities in the financial planning profession. It identifies typical entry points and career tracks, explores the types of companies that hire financial planners and provides information on how to find financial planning career opportunities. It also includes resources such as a list of recommended questions to ask in a job interview.\n\n2. Scholarship Programs. Dozens of scholarship programs are available to support you on your professional journey. Some are offered directly through colleges and universities that have financial planning degree and certificate programs. Others are available through nonprofits and organizations like the CFP Board Center for Financial Planning, which administers 16 scholarship programs that help pay for the education and exam requirements to become a CERTIFIED FINANCIAL PLANNERTM professional. 
Financial services firms may offer scholarships or tuition reimbursements to employees to cover the costs of obtaining professional designations and credentials such as CFP® certification -- some of which may be required to advance within the company.\n\n3. Career Fairs. In-person and virtual career fairs provide valuable opportunities to connect with prospective employers. CFP Board's spring and fall career fairs are some of the most popular hiring events in the profession, with dozens of firms participating in these online exhibitions. Job seekers can visit employers' virtual exhibit booths and view open jobs and internships, apply for open positions and interact with employers through one-on-one video meetings and messaging. You can also visit the CFP Board Career Center to browse current job and internship opportunities in financial planning, as well as a collection of articles providing career guidance.\n\nOther top resources include career offices at your college or university, financial services companies' career websites and professional organizations that may have a local chapter near you.\n\nMaking the most of these resources will not only help you find a financial planning job, but also support your growth and development as a future financial planning professional. To learn more about CFP® certification, visit the CFP Board website.\n\nArticle Link\n\nhttps://about.newsusa.com/3-great-resources-to-kick-start-your-financial-planni…\n\n### RELATED ARTICLES", - "page_start": 0, - "page_end": 0, - "source_file": "news3.pdf" - }, - { - "text": "to selected students pursuing careers in finance, economics, accounting, marketing, business administration, computer science and information technology. 
In addition, scholars will take part in a Chesapeake Presidential Leadership Course facilitated by faculty members in coordination with designated Chesapeake leadership coaches, including a Chesapeake senior vice president and OCU alumni.\n\nIn 2007 Chesapeake launched a scholarship program in Texas with an initial $1.25 million contribution, challenging the cities of Fort Worth and Dallas to match its gift within a year. The cities responded and matched the gift, so Chesapeake in 2008 added another $1.25 million to the fund, bringing the total to $3.75 million. The Chesapeake Scholarship Fund currently funds the cost of higher education for 48 minority students. The fund provides each student $20,000 a year for up to four years at the school of their choice. To date more than $1.0 million has been distributed to deserving local students.\n\nTo help ensure the training of qualified geologists, engineers, landmen and energy lawyers in the next generation, we award scholarships to students pursuing energy-related degrees. We also help mentor them through Chesapeake's Peak Program. Junior- and senior-level scholarship recipients are paired with Chesapeake employee mentors who help develop students' knowledge and provide career advice. There are currently 25 mentors and 40 scholarship recipients participating in the Peak Program.\n\nOur recruiting team also initiated a strategic military recruitment effort during the past two years to hire former military personnel to work in a variety of leadership and crew positions. This effort earned Chesapeake an honor from G.I. JOBS magazine when we were named a 2011 Top 100 Military-Friendly Employer. 
Chesapeake currently employs 37 men and women who formerly served as junior military officers and more than 100 former servicemen and servicewomen who joined the company through a program called Troops 2 Roughnecks.\n\nIn addition to our specific scholarship programs, one-time educational donations and recruitment efforts, in 2010 we gave more than $1.8 million to fund higher education for nearly 400 other students in 12 states through our Chesapeake Scholars program. Chesapeake's scholarships help recruit the best and brightest students and provide educational opportunities in communities where we operate. In Oklahoma City, more than 400 employees volunteer for up to an hour a week on company time at four local public schools. Chesapeake's program has grown to become the largest corporate mentoring program in Oklahoma.\n\n# **Community Impact**\n\nChesapeake employees have been enriching their hometowns as volunteers for many years. We formalized those efforts in 2009 by establishing an official employee volunteer program, the H.E.L.P. (Helping Energize Local Progress) Initiative, wherein employees are invited to volunteer each month for a variety of organizations from food pantries to animal shelters. Through that program, employees donated more than 26,000 hours to their communities in 2009.\n\nIn the summer of 2010, Chesapeake took the H.E.L.P. Initiative to a higher level through the launch of Operation Blue. From Memorial Day through Labor Day, each employee was given four hours of company time to complete the volunteer project of their choice. Our employees eagerly accepted the challenge, and in three months more than 4,900 employees donated 30,900 hours of service to 519 organizations in more than 96 communities across the country. 
Operation Blue is now an annual volunteer program in which employees roll up their sleeves in the communities they call home.\n\nChesapeake's contributions take many forms: financial and equipment donations, volunteerism and scholarships. Last year, we made numerous in-kind donations of laptops, reconditioned Chesapeake fleet vehicles and subsidized office space. These contributions provide essential operating tools as nonprofit organizations across the nation attempt to serve more people — often with lower budgets — in tough economic times.\n\nFor example, in Louisiana we donated 12 vehicles in 2010, including one to the Panola College Oil and Natural Gas Technology Program, which teaches students about the natural gas industry and provides them with hands-on technical training. Across many of the company's operating areas, we've donated computers to deserving students, schools and organizations through Chesapeake's Discovering Tomorrow's Leaders program. In 2010 the company equipped 14 students with laptops and donated 70 computers to schools or supporting nonprofit organizations.\n\nChesapeake partners with other companies and organizations to meet basic, practical needs in hundreds of communities. An example is our\n\n*Putting food on the table — Employees volunteer at the Regional Food Bank of Oklahoma as part of Operation Blue.*\n\nsponsorship of the annual Day of Caring at the Ganus Center of Harding University in White County, Arkansas. During the event, approximately 1,200 uninsured or underinsured residents received a day of free medical, dental and eye screenings.\n\nTo help cultivate an appreciation for the great outdoors, in 2010 Chesapeake provided $25,000 to REAL School Gardens, a Fort Worthbased organization that establishes gardens at approximately 70 lower income elementary schools in North Texas. At I.M. 
Terrell Elementary School, students, parents, teachers and volunteers from Chesapeake and other groups worked together to prepare vegetable gardens and flower beds. In addition to teamwork skills and gardening, students learned about nutrition and took home food from the garden's bounty.\n\nWe supported servicemen and servicewomen by partnering with the Shreveport Chapter of Operation Support Our Troops, Inc. Our contribution helped offset the postage to send more than 100 care packages to troops overseas. The shipment was the largest in the organization's history and included Christmas cards, games and nonperishable food items.\n\nBy investing in the communities where we operate and the people whose lives we touch, we ensure a stronger today and a more hopeful tomorrow.", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "Financial Information", - "page_start": 55, - "page_end": 55, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "# DEVOTION TO SERVICE Giving back to the\n\n**or us, it is a measure of responsible corporate citizenship. The MGM MIRAGE Corporate Charitable Giving Program is the principal source of financial donations to community and social initiatives. Funded by a percentage of the company's net profits, the Corporate Charitable Giving Program supports various community efforts impacting four critical areas:** F\n\n> **CHILDHOOD DEVELOPMENT Community-based programs that focus on the overall development and well-being of children.**\n\n> **COMMUNITY DEVELOPMENT Programs that focus on low-income or socio-economically disadvantaged communities.**\n\n> **DIVERSITY Programs which are inclusive receive priority in funding. 
This includes efforts that encourage economic development and enhance individual and community resources.**\n\n> **EDUCATION Programs and efforts to strengthen public education from kindergarten through higher education.**\n\n**Through various education partnerships with institutions such as the University of Nevada, we award scholarships to help students achieve their educational goals and to encourage their interest in our business. Additionally, scholarship programs assist the children of our employees with their higher education aspirations**\n\n**MGM MIRAGE** supports a variety of programs to further educational aspirations of both students and employees, including tuition reimbursement for employees, scholarships for children of employees, and on-site GED, naturalization and English-asa-second-language (ESL) classes.\n\ncommunities in which MGM MIRAGE operates its businesses and where our employees live, work, and care for their families is a serious and dedicated commitment.\n\n**MGM GRAND DETROIT** President George Boyer epitomizes the company's commitment to corporate social responsibility. Boyer reads to a child at the Northwest Community Center in Detroit during an after-school mentoring program funded by the Voice Foundation.\n\n**MGM MIRAGE** employee Christina Fuentes embraces a child during an event to benefit the Variety Day Home's Emergency Childcare Assistance Program in Las Vegas, one of the many programs supported by MGM MIRAGE to support the well-being of children. The program helps underwrite childcare assistance for low-income working parents.\n\n**In 2004, MGM MIRAGE** employees raised nearly $3 million for the Voice Foundation. 
Companywide, Aid for AIDS of Nevada (AFAN) was among one of the leading nonprofit agencies to receive the most funding support from the Voice Foundation.", - "page_start": 21, - "page_end": 21, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "FINANCIAL SECTION", - "page_start": 69, - "page_end": 69, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## Financial Information", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "incidental to a financial activity. Thus, with the enactment of the Gramm-Leach-Bliley Act, banks, securities firms and insurance companies find it easier to acquire or affiliate with each other and cross-sell financial products. The act permits a single financial services organization to offer a more complete array of financial products and services than historically was permitted.\n\nA financial holding company is essentially a bank holding company with significantly expanded powers. Under the Gramm-Leach-Bliley Act, among the activities that will be deemed \"financial in nature\" for financial holding companies are, in addition to traditional lending activities, securities underwriting, dealing in or making a market in securities, sponsoring mutual funds and investment companies, insurance underwriting and agency activities, activities which the Federal Reserve Board determines to be closely related to banking, and certain merchant banking activities. The Federal Reserve Board has proposed permitting a number of additional financial activities, but we cannot predict whether any of these additional proposals will be adopted or the form any final rule will take.\n\nWe elected to become a financial holding company in September 2001. As a financial holding company, we have very broad discretion to affiliate with securities firms and insurance companies, make merchant banking investments, and engage in other activities that the Federal Reserve Board has deemed financial in nature. 
In order to continue as a financial holding company, we must continue to be well-capitalized, well-managed and maintain compliance with the Community Reinvestment Act. Depending on the types of financial activities that we may engage in in the future, under Gramm-Leach-Bliley's fractional regulation principles, we may become subject to supervision by additional government agencies. The election to be treated as a financial holding company increases our ability to offer financial products and services that historically we were either unable to provide or were only able to provide on a limited basis. As a result, we will face increased competition in the markets for any new financial products and services that we may offer. Likewise, an increased amount of consolidation among banks and securities firms or banks and insurance firms could result in a growing number of large financial institutions that could compete aggressively with us.\n\n#### *Mergers and Acquisitions*\n\nWe generally must obtain approval from the banking regulators before we can acquire other financial institutions. We must not engage in certain acquisitions if we are undercapitalized. Furthermore, the BHCA provides that the Federal Reserve Board cannot approve any acquisition, merger or consolidation that may substantially lessen competition in the banking industry, create a monopoly in any section of the country, or be a restraint of trade. However, the Federal Reserve Board may approve such a transaction if the convenience and needs of the community clearly outweigh any anti-competitive effects. Specifically, the Federal Reserve Board would consider, among other factors, the expected benefits to the public (greater convenience, increased competition, greater efficiency, etc.) 
against the risks of possible adverse effects (undue concentration of resources, decreased or unfair competition, conflicts of interest, unsound banking practices, etc.).\n\n#### *Banks*\n\nFederal and state laws and regulations that govern banks have the effect of, among other things, regulating the scope of business, investments, cash reserves, the purpose and nature of loans, the maximum interest rate chargeable on loans, the amount of dividends declared, and required capitalization ratios.\n\n*National Banking Associations*. Banks that are organized as national banking associations under the National Bank Act are subject to regulation and examination by the Office of the Comptroller of the Currency, or OCC. The OCC supervises, regulates and regularly examines the First National Bank of Abilene, First National Bank, Sweetwater, First Financial Bank, National Association, Cleburne, Eastland National Bank, San Angelo National Bank, Weatherford National Bank, First Financial Bank, National Association, Southlake and City National Bank, Mineral Wells. The OCC's supervision and regulation of banks is primarily intended to protect the interests of depositors. The National Bank Act:\n\n- requires each national banking association to maintain reserves against deposits,\n- restricts the nature and amount of loans that may be made and the interest that may be charged, and\n- restricts investments and other activities.", - "page_start": 31, - "page_end": 31, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "# **NOTE 1 - STATEMENT OF SIGNIFICANT ACCOUNTING POLICIES continued**\n\n# **u) Adoption of New and Revised Accounting Standards**\n\nDuring the current reporting period the Group adopted all of the new and revised Australian Accounting Standards and Interpretations applicable to its operations which became mandatory. The nature and effect of selected new standards and amendments on the Group's consolidated financial report are described below. 
Adoption of the other new mandatorily applicable standards did not have a material impact on the financial statement, financial position or performance of the Group.\n\n# **AASB 2011-4 -** *Amendments to Australian Accounting Standards to Remove Individual Key Management Personnel Disclosure*\n\nThis standard removes the requirements to include individual key management personnel disclosures in the notes to and forming part of the Financial Report. This standard also removes the individual KMP disclosure requirements for all disclosing entities in relation to equity holdings, loans and other related party transactions.\n\n# **Amendments to IAS 32 -** *Offsetting Financial Assets and Financial Liabilities*\n\nThe amendments to IAS 32 clarify the requirements relating to the offset of financial assets and financial liabilities. Specifically, the amendments clarify the meaning of 'currently has a legally enforceable right of set-off' and 'simultaneous realization and settlement'. As the Group does not have any financial assets and financial liabilities that qualify for offset, the application of the amendments has had no impact on the disclosure or the Group's consolidated financial statements.\n\n# **Recently issued accounting standards to be applied in future reporting periods:**\n\nThe following Standards and Interpretations have been issued but are not yet effective. These are the standards that the Group reasonably expects will have an impact on its disclosures, financial position or performance with applied at a future date. The Group's assessment of the impact of these new standards, amendments to standards, and interpretations is set out below.\n\n# **AASB 9/IFRS 9 –** *Financial Instruments*\n\nAASB 9/IFRS 9 introduces new requirements for the classification, measurement, and derecognition of financial assets and financial liabilities. The final version of IFRS 9 supersedes all previous versions of the standard. 
However, for annual periods beginning before 1 January 2018, an entity may elect to apply those earlier versions of IFRS 9 if the entity's relevant date of initial application is before 1 February 2015. The effective date of this standard is for fiscal years beginning on or after 1 January 2018. Management is currently assessing the impact of the new standard but it is not expected to have a material impact on the Group's consolidated financial statements.", - "page_start": 72, - "page_end": 72, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "#### informed judgements and estimates. Management is responsible for the integrity and objectivity of these consolidated financial statements. The financial information presented in the MD&A is consistent with that in the **Management's Responsibility for Financial Statements** informed judgements and estimates. Management is responsible for the integrity and objectivity of these consolidated financial statements. The financial information presented in the MD&A is consistent with that in the\n\nconsolidated financial statements in all material respects.\n\n**Management's Responsibility for Financial Statements**\n\nconsolidated financial statements in all material respects.\n\n**Management's Responsibility for Financial Statements**\n\nThe accompanying consolidated financial statements and management's discussion and analysis of results of operations and financial condition (MD&A) have been prepared by the management of Killam Properties Inc. in accordance with International Financial Reporting Standards, and include amounts based on management's\n\non that assessment, determined that our internal controls over financial reporting were appropriately designed\n\nAudit Committee. 
This committee meets regularly with management and the auditors, who have full and free\n\nTo assist management in the discharge of these responsibilities, management has established the necessary internal controls designed to ensure that our financial records are reliable for preparing financial statements and other financial information, transactions are properly authorized and recorded, and assets are safeguarded. The accompanying consolidated financial statements and management's discussion and analysis of results of operations and financial condition (MD&A) have been prepared by the management of Killam Properties Inc. in accordance with International Financial Reporting Standards, and include amounts based on management's informed judgements and estimates. Management is responsible for the integrity and objectivity of these consolidated financial statements. The financial information presented in the MD&A is consistent with that in the consolidated financial statements in all material respects. To assist management in the discharge of these responsibilities, management has established the necessary internal controls designed to ensure that our financial records are reliable for preparing financial statements and other financial information, transactions are properly authorized and recorded, and assets are safeguarded.\n\nThe accompanying consolidated financial statements and management's discussion and analysis of results of operations and financial condition (MD&A) have been prepared by the management of Killam Properties Inc. 
in accordance with International Financial Reporting Standards, and include amounts based on management's\n\nAs at December 31, 2013, our Chief Executive Officer and Chief Financial Officer evaluated, or caused an evaluation under their direct supervision of, the design and operation of our internal controls over financial reporting (as defined in National Instrument 52‐109, Certification of Disclosure in Issuers' Annual and Interim Filings) and, based To assist management in the discharge of these responsibilities, management has established the necessary internal controls designed to ensure that our financial records are reliable for preparing financial statements and other financial information, transactions are properly authorized and recorded, and assets are safeguarded. As at December 31, 2013, our Chief Executive Officer and Chief Financial Officer evaluated, or caused an evaluation under their direct supervision of, the design and operation of our internal controls over financial reporting (as defined in National Instrument 52‐109, Certification of Disclosure in Issuers' Annual and Interim Filings) and, based\n\non that assessment, determined that our internal controls over financial reporting were appropriately designed\n\nand operating effectively. Ernst & Young LLP, the auditors appointed by the Shareholders, have examined the consolidated financial statements in accordance with Canadian generally accepted auditing standards to enable them to express to the As at December 31, 2013, our Chief Executive Officer and Chief Financial Officer evaluated, or caused an evaluation under their direct supervision of, the design and operation of our internal controls over financial reporting (as defined in National Instrument 52‐109, Certification of Disclosure in Issuers' Annual and Interim Filings) and, based on that assessment, determined that our internal controls over financial reporting were appropriately designed and operating effectively. 
and operating effectively. Ernst & Young LLP, the auditors appointed by the Shareholders, have examined the consolidated financial statements in accordance with Canadian generally accepted auditing standards to enable them to express to the\n\nShareholders their opinion on the consolidated financial statements. Their report as auditors is set forth below. The consolidated financial statements have been further reviewed and approved by the Board of Directors and its Ernst & Young LLP, the auditors appointed by the Shareholders, have examined the consolidated financial statements in accordance with Canadian generally accepted auditing standards to enable them to express to the Shareholders their opinion on the consolidated financial statements. Their report as auditors is set forth below. Shareholders their opinion on the consolidated financial statements. Their report as auditors is set forth below. The consolidated financial statements have been further reviewed and approved by the Board of Directors and its\n\nAudit Committee. This committee meets regularly with management and the auditors, who have full and free access to the Audit Committee. The consolidated financial statements have been further reviewed and approved by the Board of Directors and its Audit Committee. This committee meets regularly with management and the auditors, who have full and free access to the Audit Committee. 
access to the Audit Committee.\n\nFebruary 18, 2014\n\nFebruary 18, 2014\n\n Philip Fraser Robert Richardson President and Chief Executive Officer Executive Vice President and Chief Financial Officer Philip Fraser Robert Richardson February 18, 2014 Philip Fraser Robert Richardson President and Chief Executive Officer Executive Vice President and Chief Financial Officer\n\nPresident and Chief Executive Officer Executive Vice President and Chief Financial Officer", - "page_start": 63, - "page_end": 63, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "FIN 46R is effective at the end of the first interim period ending after March 15, 2004. Entities that have adopted FIN 46 prior to this effective date can continue to apply the provision of FIN 46 until the effective date of FIN 46R. The Company adopted FIN 46 on January 3, 2004, and it did not have an impact on the Company's financial statements.\n\nThe Financial Accounting Standards Board finalized SFAS No. 150, \"Accounting for Certain Financial Instruments with Characteristics of both Liabilities and Equity,\" effective for financial instruments entered into or modified after May 31, 2003, and otherwise is effective at the beginning of the first interim period beginning after June 15, 2003. The adoption of SFAS No. 150 did not have an impact on the Company's financial statements.\n\nDuring 2002, the Financial Accounting Standards Board finalized SFAS No. 146, \"Accounting for Costs Associated with Exit or Disposal Activities\" for exit and disposal activities that are initiated after December 31, 2002. This Statement requires that a liability for a cost associated with an exit or disposal activity be recognized when the liability is incurred. The Company applied this statement to its 2003 restructuring activities which resulted in a charge of $8.5 million during 2003.\n\nThe Financial Accounting Standards Board also issued Interpretation No. 
45, \"Guarantor's Accounting and Disclosure Requirements for Guarantees, Including Indirect Guarantees of Indebtedness to Other.\" FIN 45 clarifies the requirements of SFAS No. 5, \"Accounting for Contingencies\" relating to the guarantor's accounting for and disclosure of the issuance of certain types of guarantees. The provisions for initial recognition and measurement are effective on a prospective basis for guarantees that are issued or modified after December 31, 2002. The adoption did not have a material impact on the Company's financial statements.\n\nIn December 2003, the Financial Accounting Standards Board issued a revised SFAS No. 132, \"Employers' Disclosures about Pensions and Other Postretirement Benefits.\" In 2003, the Company adopted the revised disclosure requirements of this pronouncement.\n\n#### *R E C L A S S I F I C A T I O N S*\n\nCertain prior year amounts have been reclassified to conform to the 2003 presentation.\n\n#### **Restructuring Related Charges**\n\nAs a result of the Company's business simplification and cost reduction strategies, the Company closed two office furniture facilities located in Milan, Tennessee, and Hazleton, Pennsylvania, and consolidated production into other U.S. manufacturing locations. Charges for the closures totaled $15.7 million, which consists of $6.7 million of accelerated depreciation of machinery and equipment which was recorded in cost of sales, $3.4 million of severance, and $5.6 million of facility exit, production relocation, and other costs which were recorded as restructuring costs. A total of 316 members were terminated and received severance due to these shutdowns. The closures and consolidation are substantially complete.\n\nThe Hazleton, Pennsylvania, facility is an owned facility and has been reclassified to current assets as it is currently being held as available for sale. 
It is included in the \"Prepaid expenses and other current assets\" in the January 3, 2004, condensed consolidated balance sheet at its carrying value of $2.1 million. The Milan, Tennessee, facility is a leased facility that is no longer being used in the production of goods. The restructuring expense for 2003 included $1.4 million of costs that will continue to be incurred under the lease contract reduced by estimated sublease rentals that could be reasonably obtained.\n\nDuring 2002, the Company recorded a pretax charge of approximately $5.4 million due to the shutdown of an office furniture facility in Jackson, Tennessee. A total of 125 members were terminated and received severance due to this shutdown. During the second quarter of 2003, a restructuring credit of approximately $0.6 million was taken back into income relating to this charge. This was due to the fact that the Company was able to exit a lease with the lessor at more favorable terms than previously estimated.\n\nDuring the second quarter of 2001, the Company recorded a pretax charge of $24.0 million or $0.26 per diluted share for a restructuring plan that involved consolidating physical facilities, discontinuing low-volume product lines, and reductions of workforce. Included in the charge was the closedown of three of its office furniture facilities located in Williamsport, Pennsylvania; Tupelo, Mississippi; and Santa Ana, California. Approximately 500 members were terminated and received severance due to the closedown of these facilities. During the second quarter of 2002, a restructuring credit of approximately $2.4 million was taken back into income relating to this charge. 
This was mainly due to the fact that the Company was able to exit a lease with a lessor at more favorable terms than originally estimated and the Company's ability to minimize the number of members terminated as compared to the original plan.\n\nThe following table details the change in restructuring reserve for the last three years:", - "page_start": 45, - "page_end": 45, - "source_file": "NYSE_HNI_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "news3.pdf", - "query": "what are career fairs for?", - "target_page": 1, - "target_passage": " In-person and virtual career fairs provide valuable opportunities to connect with prospective employers.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Home / Money / 3 Great Resources to Kick-Start Your Financial Planning Career\n\n#### MONEY\n\n### 3 Great Resources to Kick-Start Your Financial Planning Career\n\n11/23/2022\n\n(NewsUSA) - Finding a rewarding career that offers growth potential, work-life balance and the satisfaction of helping others is a key priority for many job seekers. With those goals in mind, a career in financial planning should be a top contender, whether you are just starting out or looking to make a career change. But once you have decided that financial planning is the field for you, how do you get started? Here are three resources that can help you launch a successful financial planning career.\n\n1. Guide to Careers in Financial Planning. Based on interviews with leading financial services firms, this guide introduces you to the wide range of career opportunities in the financial planning profession. It identifies typical entry points and career tracks, explores the types of companies that hire financial planners and provides information on how to find financial planning career opportunities. It also includes resources such as a list of recommended questions to ask in a job interview.\n\n2. Scholarship Programs. 
Dozens of scholarship programs are available to support you on your professional journey. Some are offered directly through colleges and universities that have financial planning degree and certificate programs. Others are available through nonprofits and organizations like the CFP Board Center for Financial Planning, which administers 16 scholarship programs that help pay for the education and exam requirements to become a CERTIFIED FINANCIAL PLANNERTM professional. Financial services firms may offer scholarships or tuition reimbursements to employees to cover the costs of obtaining professional designations and credentials such as CFP® certification -- some of which may be required to advance within the company.\n\n3. Career Fairs. In-person and virtual career fairs provide valuable opportunities to connect with prospective employers. CFP Board's spring and fall career fairs are some of the most popular hiring events in the profession, with dozens of firms participating in these online exhibitions. Job seekers can visit employers' virtual exhibit booths and view open jobs and internships, apply for open positions and interact with employers through one-on-one video meetings and messaging. You can also visit the CFP Board Career Center to browse current job and internship opportunities in financial planning, as well as a collection of articles providing career guidance.\n\nOther top resources include career offices at your college or university, financial services companies' career websites and professional organizations that may have a local chapter near you.\n\nMaking the most of these resources will not only help you find a financial planning job, but also support your growth and development as a future financial planning professional. 
To learn more about CFP® certification, visit the CFP Board website.\n\nArticle Link\n\nhttps://about.newsusa.com/3-great-resources-to-kick-start-your-financial-planni…\n\n### RELATED ARTICLES", - "page_start": 0, - "page_end": 0, - "source_file": "news3.pdf" - }, - { - "text": "Dollar and share amounts in millions except per share, per option and per unit amounts\n\n#### **NOTE 9: FAIR VALUE MEASUREMENTS**\n\nWe disclose our financial assets and liabilities that are measured at fair value in our Consolidated Balance Sheets by level within the fair value hierarchy as defined by applicable accounting standards:\n\n- Level 1: Quoted market prices in active markets for identical assets or liabilities\n- Level 2: Other observable market-based inputs or unobservable inputs that are corroborated by market data\n- Level 3: Unobservable inputs that cannot be corroborated by market data that reflect the reporting entity's own\n\t- assumptions\n\nWe did not have any financial assets or liabilities that were measured at fair value on a recurring basis as of January 31, 2015 or February 1, 2014.\n\nFinancial instruments not measured at fair value on a recurring basis include cash and cash equivalents, accounts receivable and accounts payable and approximate fair value due to their short-term nature. We estimate the fair value of long-term debt using quoted market prices of the same or similar issues and, as such, this is considered a Level 2 fair value measurement. 
The following table summarizes the carrying value and fair value estimate of our long-term debt, including current maturities:\n\n| | January 31, 2015 | February 1, 2014 |\n| --- | --- | --- |\n| Carrying value of long-term debt1 | $3,131 | $3,113 |\n| Fair value of long-term debt | 3,693 | 3,511 |\n\n1 The carrying value of long-term debt includes the remaining unamortized adjustment from our previous effective fair value hedge.\n\nWe also measure certain non-financial assets at fair value on a nonrecurring basis, primarily goodwill and long-lived tangible and intangible assets, in connection with periodic evaluations for potential impairment. See Note 1: Nature of Operations and Summary of Significant Accounting Policies for additional information related to goodwill, intangible assets and long-lived assets. We recorded no material impairment charges for these assets in 2014, 2013 and 2012. We estimate the fair value of goodwill and long-lived tangible and intangible assets using primarily unobservable inputs and, as such, these are considered Level 3 fair value measurements.\n\n#### **NOTE 10: LEASES**\n\nWe lease the land or the land and buildings at many of our stores. Additionally, we lease office facilities, warehouses and equipment. Most of these leases are classified as operating leases and they expire at various dates through 2080. The majority of our fixed, non-cancelable lease terms are 15 to 30 years for Nordstrom full-line stores and 10 to 15 years for Nordstrom Rack stores. Many of our leases include options that allow us to extend the lease term beyond the initial commitment period, subject to terms agreed to at lease inception. 
Most of our leases also provide for payment of operating expenses, such as common area charges, real estate taxes and other executory costs, and some leases require additional payments based on sales, referred to as \"percentage rent.\"\n\nFuture minimum lease payments as of January 31, 2015 are as follows:\n\n| Fiscal year | Capital leases | Operating leases |\n| --- | --- | --- |\n| 2015 | $2 | $210 |\n| 2016 | 2 | 231 |\n| 2017 | 1 | 229 |\n| 2018 | 1 | 227 |\n| 2019 | — | 219 |\n| Thereafter | — | 1,202 |\n| Total minimum lease payments | $6 | $2,318 |\n| Less: amount representing interest | (1) | |\n| Present value of net minimum lease payments | $5 | |", - "page_start": 65, - "page_end": 65, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "# **NOTE 14 – FAIR VALUE MEASUREMENT**\n\nThe following table presents financial assets and liabilities measured at fair value in the consolidated statement of financial position in accordance with the fair value hierarchy. This hierarchy groups financial assets and liabilities into three levels based on the significance of inputs used in measuring the fair value of the financial assets and liabilities. The fair value hierarchy has the following levels:\n\n- Level 1: quoted prices (unadjusted) in active markets for identical assets or liabilities;\n- Level 2: inputs other than quoted prices included within Level 1 that are observable for the asset or liability, either directly (i.e. as prices) or indirectly (i.e. derived from prices); and\n- Level 3: inputs for the asset or liability that are not based on observable market data (unobservable inputs).\n\nThe Level within which the financial asset or liability is classified is determined based on the lowest level of significant input to the fair value measurement. 
The financial assets and liabilities measured at fair value in the statement of financial position are grouped into the fair value hierarchy as follows:\n\n| Consolidated 31 December 2014 | | | | |\n| --- | --- | --- | --- | --- |\n| (US$'000) | Level 1 | Level 2 | Level 3 | Total |\n| Assets measured at fair value | | | | |\n| Derivative commodity contracts | - | 9,476 | - | 9,476 |\n| Interest rate swap contracts | - | 107 | - | 107 |\n| Development and production | - | - | 455,084 | 455,084 |\n| assets (1) | | | | |\n| Liabilities measured at fair value | | | | |\n| Interest rate swap contracts | - | (130) | - | (130) |\n| Net fair value | - | 9,453 | 455,084 | 464,537 |\n\n(1) Excludes work-in-progress and restoration provision assets totaling $63.9 million.\n\n| Consolidated 31 December 2013 | | | | |\n| --- | --- | --- | --- | --- |\n| (US$'000) | Level 1 | Level 2 | Level 3 | Total |\n| Assets measured at fair value | | | | |\n| Interest rate swap contract | - | 176 | - | 176 |\n| Liabilities measured at fair value | | | | |\n| Derivative commodity contracts | - | (219) | - | (219) |\n| Interest rate swap contracts | - | (147) | - | (147) |\n| Net fair value | - | (190) | - | (190) |\n\nDuring the years ended 31 December 2014 and 2013, respectively, there were no transfers between level 1 and level 2 fair value measurements, and no transfer into or out of level 3 fair value measurements.", - "page_start": 82, - "page_end": 82, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "#### **29. 
FINANCIAL INSTRUMENTS (continued)**\n\n#### **(c) Net fair values**\n\nThe aggregrate net fair values of financial assets and liabilities are identical to the carrying amount in the balance sheet.\n\nThe following methods and assumptions are used to determine the net fair values of financial assets and liabilities:\n\n#### **Cash and cash equivalents**\n\nThe carrying amount approximates fair value because of their short term to maturity.\n\n#### **Trade debtors, other debtors and loans**\n\nThe carrying amount approximates fair value.\n\n#### **Investments**\n\nFor investments where there is no quoted market price, a reasonable estimate of the fair value is calculated based on the underlying net asset base of the investment.\n\n#### **Trade creditors, other creditors and accruals** The carrying amount approximates fair value.\n\n#### **(d) Credit risk exposures**\n\nThe economic entity's maximum exposure to credit risk at balance date in relation to each class of recognised financial assets is the carrying amount of those assets as indicated in the balance sheet.\n\n| | | Company | |\n| --- | --- | --- | --- |\n| | | 2000 | 1999 |\n| | | $ | $ |\n| 30. | CONTINGENT LIABILITIES | | |\n| | As detailed in Note 11, the company has entered | | |\n| | into a deed of cross-guarantee with certain wholly | | |\n| | owned controlled entities. The total liabilities of | | |\n| | these wholly-owned controlled entities (excluding | | |\n| | amounts owed to the parent entity) for which | | |\n| | the Company is potentially liable are: | 18,447,843 | 15,578,523 |", - "page_start": 64, - "page_end": 64, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\nIPUC is also valued at fair value, except if such values cannot be reliably determined. In the case when a fair value cannot be reliably determined, such property is recorded at cost. 
The fair value of IPUC is determined using the capitalization of net income method.\n\nThe determination of the fair value of investment property requires the use of estimates such as future cash flows from assets and cap‑rates applicable to those assets. In addition, development risks (such as construction and leasing risks) are also taken into consideration when determining the fair value of IPUC. These estimates are based on local market conditions existing at the reporting date. In arriving at their estimates of market values, the external valuator uses their market knowledge and professional judgment and does not rely solely on historical transaction comparables. The critical estimates and assumptions underlying the valuation of investment properties and developments are set out in Note 5.\n\n#### *Fair Value of Financial Instruments*\n\nWhere the fair value of financial assets and financial liabilities recorded in the Notes to the Consolidated Financial Statements cannot be derived from active markets, they are determined using valuation techniques, including the discounted cash flow model. Inputs to these models are taken from observable markets where possible, but where this is not feasible a degree of judgment is required in establishing fair values. The judgments include considerations of inputs such as liquidity risk, credit risk and volatility. 
Changes in assumptions about these factors could affect the reported fair value of financial instruments.\n\n## **Changes in Accounting Policies**\n\nThe accounting policies applied during the year ended December 31, 2013, are consistent with those used in the audited consolidated financial statements for the year ended December 31, 2012, except for the following new and amended IFRS and International Financial Reporting Interpretations Committee (\"IFRIC\") interpretations which were effective for periods beginning on or after July 1, 2012, and January 1, 2013:\n\n#### IAS 1 ‑ Financial Statement Presentation (\"IAS 1\") — Presentation of Items of Other Comprehensive Income (\"OCI\")\n\nThe amendments to IAS 1 change the grouping of items presented in OCI. Items that could be reclassified (or recycled) to profit or loss at a future point in time (for example, upon derecognition or settlement) would be presented separately from items that will never be reclassified. The adoption of this standard did not have an impact on the Company's financial position or performance.\n\n#### IFRS 10 ‑ Consolidated Financial Statements (\"IFRS 10\")\n\nIFRS 10 replaces the portion of IAS 27 ‑ Consolidated and Separate Financial Statements (\"IAS 27\") that addresses the accounting for consolidated financial statements. IFRS 10 establishes a single control model that applies to all entities including special purpose entities. The changes introduced by IFRS 10 require Management to exercise significant judgment to determine which entities are controlled, and therefore, are required to be consolidated by a parent, compared with the requirements that were in IAS 27. The adoption of this standard did not have an impact on the Company's financial position or performance.\n\n#### IFRS 11 ‑ Joint Arrangements (\"IFRS 11\")\n\nIFRS 11 replaces IAS 31 ‑ Interests in Joint Ventures and SIC 13 ‑ Jointly controlled Entities — Non monetary Contributions by Venturers. 
IFRS 11 removes the option to account for jointly controlled entities using proportionate consolidation. Instead, joint arrangements that meet the definition of a joint venture must be accounted for using the equity method. Otherwise joint arrangements are classified as joint operations and are accounted for by recognizing the Company's share of the arrangement's assets and liabilities. The adoption of this standard did not have an impact on the Company's accounting treatment of its joint arrangements as they meet the definition of joint ventures and were previously accounted for using the equity method.\n\n#### IFRS 12 ‑ Disclosure of Interest in Other Entities (\"IFRS 12\")\n\nIFRS 12 includes all of the disclosures that were previously in IAS 27 related to consolidated financial statements, as well as all of the disclosures that were previously included in IAS 31 and IAS 28. These disclosures relate to an entity's interests in subsidiaries, joint arrangements, associates and structured entities. A number of new disclosures are also required, including:\n\n• A requirement to disclose judgments made in determining if the Company controls, has joint control, or significant influence over an entity; and\n\n• A requirement to disclose judgments made in determining the type of joint arrangement in which the Company has an interest.\n\nThe Company adopted this standard and included the required disclosures related to the Company's interest in subsidiaries, joint arrangements and associates in the notes of these consolidated financial statements.\n\n#### IFRS 13 ‑ Fair Value Measurement (\"IFRS 13\")\n\nIFRS 13 establishes a single source of guidance under IFRS for all fair value measurements. IFRS 13 does not change when an entity is required to use fair value, but rather provides guidance on how to measure fair value under IFRS when fair value is required or permitted. 
The Company adopted the standard and concluded that the definition of fair value applied in IFRS 13 does not differ materially from the Company's current definition and therefore there was no impact on the Company's financial position. However, IFRS 13 does expand the disclosure requirements in respect of fair value measurement, and these additional disclosures are included in Note 5 of these consolidated financial statements.", - "page_start": 61, - "page_end": 61, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "### **NOTES TO CONSOLIDATED FINANCIAL STATEMENTS (All tables in millions, except per share data) — (Continued)**\n\n| | | Accumulated Amortization | |\n| --- | --- | --- | --- |\n| | Goodwill | Other | Total |\n| Balance, December 31, 2003 | $(143.4) | $(18.2) | $(161.6) |\n| Amortization expense | — | (5.8) | (5.8) |\n| Balance, December 31, 2004 | $(143.4) | $(24.0) | $(167.4) |\n\nIn general, goodwill is tested for impairment on an annual basis. In testing for impairment, the Company estimates the fair value of each operating segment and compares the fair values with the carrying values. If the fair value of an operating segment is greater than its carrying value, then no impairment results. If the fair value is less than its carrying value, then the Company would determine the fair value of the goodwill. The fair value of goodwill is determined by deducting the fair value of an operating segment's identifiable assets and liabilities from the fair value of the operating segment as a whole, as if that operating segment had just been acquired and the purchase price were being initially allocated. 
If the fair value of the goodwill were less than its carrying value for a segment, an impairment charge would be recorded to earnings in the Company's Consolidated Statement of Income.\n\nIn addition, the Company would evaluate an operating segment for impairment if events or circumstances change between annual tests indicating a possible impairment. Examples of such events or circumstances include:\n\n- A significant adverse change in legal factors or in the business climate,\n- An adverse action or assessment by a regulator,\n- A more likely than not expectation that a segment or a significant portion thereof will be sold, or\n- The testing for recoverability under Statement of Financial Accounting Standards No. 144, \"Accounting for the Impairment of Long-Lived Assets,\" of a significant asset group within the segment.\n\nThe Company incurred no impairment of goodwill as a result of its goodwill impairment test in 2004. However, there can be no assurance that goodwill will not be impaired at any time in the future.\n\n### **Accrued Liabilities**\n\nA summary of accrued liabilities is as follows:\n\n| | | December 31, |\n| --- | --- | --- |\n| | 2004 | 2003 |\n| Accrued payroll and benefits | $ 51.3 | $ 43.2 |\n| Accrued fees and taxes | 31.4 | 29.6 |\n| Accrued interest | 18.4 | 18.7 |\n| Accrued dividends | 18.1 | 9.5 |\n| Other | 16.1 | 17.6 |\n| | $135.3 | $118.6 |", - "page_start": 69, - "page_end": 69, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except share and per share amounts)*\n\n## **25. 
Financial Risk Management Objectives and Policies (continued)**\n\n## **Fair Value Measurement**\n\nFinancial instruments are defined as a contractual right or obligation to receive or deliver cash or another financial asset. The following table presents the classification, subsequent measurement, carrying values and fair values of the Company's financial assets and liabilities:\n\n| | | | December 31, 2013 | | December 31, 2012 |\n| --- | --- | --- | --- | --- | --- |\n| | Subsequent | Carrying | | Carrying | |\n| Classification | Measurement | Value | Fair Value | Value | Fair Value |\n| Other Financial Liabilities: | | | | | |\n| Mortgages (b) | Amortized Cost | $699,130 | $748,806 | $625,081 | $687,119 |\n| Convertible debentures (a) | Amortized Cost | $96,419 | $100,461 | $98,042 | $102,942 |\n| Subordinated debentures (b) | Amortized Cost | $‑ | $‑ | $9,998 | $10,104 |\n\nCash and cash equivalents are classified as held for trading and carried at their fair values. The Company's short‑term financial instruments, comprising accounts receivable, restricted cash, accounts payable and accrued liabilities, security deposits, loans and construction loans are carried at amortized cost which, due to their short‑term nature, approximates their fair value.\n\n(a) The fair value of the convertible debentures are based on a quoted market price as at the reporting date (level 1).\n\n(b) The mortgages and subordinated debentures are based upon discounted future cash flows using discount rates that reflect current market conditions for instruments with similar terms and risks. 
Such fair value estimates are not necessarily indicative of the amounts the Company might pay or receive in actual market transactions (level 2).\n\nThe interest rates used to discount the estimated cash flows, when applicable, are based on the 5‑year government yield curve at the reporting date, plus an adequate credit spread, and were as follows:\n\n| | December 31, | December 31, |\n| --- | --- | --- |\n| As at | 2013 | 2012 |\n| Mortgages ‑ Apartments | 2.60% | 2.27% |\n| Mortgages ‑ MHCs | 4.45% | 4.02% |\n\nAs at December 31, 2013, the Company did not have any financial assets or liabilities measured at fair value on the Consolidated Statements of Financial Position.\n\n## **26. Commitments**\n\nAs at December 31, 2013, Killam has committed development costs of $16.6 million.\n\nThe Company is subject to various legal proceedings and claims that arise in the ordinary course of business. These matters are generally covered by insurance. Management believes that the final outcome of such matters will not have a material adverse effect on the financial position, results of operations or liquidity of the Company. However, actual outcomes may differ from Management's expectations.\n\n## **27. Financial Guarantees**\n\nKillam Properties Inc. is the guarantor for borrowings held through its three equity investments. As at December 31, 2013, the maximum potential obligation resulting from these guarantees is $70.5 million, all related to long‑term mortgage financing (December 31, 2012 – $72.3 million). These loans are secured by a first ranking mortgage over the associated investment property. Management has reviewed the contingent liability associated with its financial guarantee contracts and, at December 31, 2013, determined that a provision is not required to be recognized in the Statement of Financial Position. 
(December 31, 2012 ‑ $nil).", - "page_start": 93, - "page_end": 93, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# **NOTE 1 - STATEMENT OF SIGNIFICANT ACCOUNTING POLICIES continued**\n\nFinance leases are capitalised by recording an asset and a liability at the lower of the amounts equal to the fair value of the leased property or the present value of the minimum lease payments, including any guaranteed residual values. Lease payments are allocated between the reduction of the lease liability and the lease interest expense for the period.\n\nAssets under financing leases are depreciated on a straight-line basis over the shorter of their estimated useful lives or the lease term. Lease payments for operating leases, where substantially all the risks and benefits remain with the lessor, are charged as expenses in the periods in which they are incurred.\n\nLease incentives under operating leases are recognised as a liability and amortised on a straight-line basis over the life of the lease term.\n\n### **e) Financial Instruments**\n\n### **Recognition and Initial Measurement**\n\nFinancial instruments, incorporating financial assets and financial liabilities, are recognised when the entity becomes a party to the contractual provisions of the instrument. Trade date accounting is adopted for financial assets that are delivered within timeframes established by marketplace convention.\n\nFinancial instruments are initially measured at fair value plus transactions costs where the instrument is not classified at fair value through profit or loss. Transaction costs related to instruments classified at fair value through profit or loss are expensed to profit or loss immediately. Financial instruments are classified and measured as set out below.\n\n# **Derivative Financial Instruments**\n\nThe Group uses derivative financial instruments to economically hedge its exposure to changes in commodity prices arising in the normal course of business. 
The principal derivatives that may be used are commodity crude oil price swap, option and costless collar contracts and interest rate swaps. Their use is subject to policies and procedures as approved by the Board of Directors. The Group does not trade in derivative financial instruments for speculative purposes.\n\nDerivative financial instruments are recognised at fair value. Subsequent to initial recognition, derivative financial instruments are recognised at fair value. The fair value of these derivative financial instruments is the estimated amount that the Group would receive or pay to terminate the contracts at the reporting date, taking into account current market prices and the current creditworthiness of the contract counterparties. The derivatives are valued on a mark to market valuation and the gain or loss on re-measurement to fair value is recognised through the statement of profit or loss and other comprehensive income.\n\n# i) Financial assets at fair value through profit or loss\n\nFinancial assets are classified at fair value through profit or loss when they are held for trading for the purpose of short term profit taking, when they are derivatives not held for hedging purposes, or designated as such to avoid an accounting mismatch or to enable performance evaluation where a group of financial assets is managed by key management personnel on a fair value basis in accordance with a documented risk management or investment strategy. Realised and unrealised gains and losses arising from changes in fair value are included in profit or loss in the period in which they arise.", - "page_start": 64, - "page_end": 64, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "### **NOTES TO CONSOLIDATED FINANCIAL STATEMENTS (All tables in millions, except per share data) — (Continued)**\n\nHedging Activities\" (\"SFAS 133\"), as amended. (For further information, see Note 11, Fuel Hedge.) 
Of this amount, $1.6 million, net of tax, representing the effective portion of the change in fair value was recorded to other comprehensive income for the year ended December 31, 2002.\n\nAt December 31, 2004, the Company had $38.7 million of restricted marketable securities held as financial guarantees. These securities consist of mutual funds invested in short-term investment grade securities, including mortgage-backed securities and U.S. Government obligations. These securities are available for sale and, as a result, are stated at fair value based on quoted market prices. During the years ended December 31, 2004 and 2003, the Company recorded a $.1 million and ($.1) million unrealized gain/(loss), net of tax, respectively, to other comprehensive income related to the change in fair value of these securities.\n\nThe Company had no other components of other comprehensive income for the periods presented.\n\n### **Statements of Cash Flows**\n\nThe Company considers all unrestricted highly liquid investments with purchased maturities of three months or less to be cash equivalents. The effect of non-cash transactions related to business combinations, as discussed in Note 4, Business Combinations, and other non-cash transactions are excluded from the accompanying Consolidated Statements of Cash Flows.\n\n### **Fair Value of Financial Instruments**\n\nThe carrying amounts of cash and cash equivalents, restricted cash and marketable securities, receivables, accounts payable and accrued liabilities approximate fair value due to the short maturity of these instruments. The fair value of the Company's fixed rate unsecured notes and tax-exempt financing using quoted market rates is $1,227.4 million at December 31, 2004. The carrying value of the unsecured notes and tax exempt financing is $1,123.3 million at December 31, 2004. 
The carrying amounts of the Company's remaining notes payable and tax-exempt financing approximate fair value because interest rates are variable and, accordingly, approximate current market rates.\n\n### **Concentration of Credit Risk**\n\nThe Company provides services to commercial, industrial, municipal and residential customers in the United States. Concentrations of credit risk with respect to trade receivables are limited due to the wide variety of customers and markets in which services are provided as well as their dispersion across many geographic areas in the United States. The Company performs ongoing credit evaluations of its customers, but does not require collateral to support customer receivables. The Company establishes an allowance for doubtful accounts based on various factors including the credit risk of specific customers, age of receivables outstanding, historical trends, economic conditions and other information.\n\n### **New Accounting Pronouncement**\n\nOn December 16, 2004, the Financial Accounting Standards Board issued Statement of Financial Accounting Standards No. 123 (revised 2004), \"Share-Based Payment\" (\"SFAS 123(R)\"), which is a revision of Statement of Financial Accounting Standards No. 123, \"Accounting for Stock-Based Compensation\" (\"SFAS 123\"). SFAS 123(R) supersedes APB Opinion No. 25, \"Accounting for Stock Issued to Employees,\" and amends SFAS 95, \"Statement of Cash Flows.\" Generally, the approach in SFAS 123(R) is similar to the approach described in SFAS 123. However, SFAS 123(R) requires all share-based payments to employees, including grants of employee stock options, to be recognized in the income statement based on their fair values. Pro forma disclosure is no longer an alternative.", - "page_start": 72, - "page_end": 72, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except share and per share amounts)*\n\n## **2. 
Significant Accounting Policies (continued)**\n\n#### *(i) Completed investment property*\n\nInvestment properties are measured initially at cost, including transaction costs. Transaction costs include deed transfer taxes and various professional fees. Subsequent to initial recognition, investment properties are recorded at fair value. Fair value is determined based on a combination of internal and external processes and valuation techniques. Gains and losses arising from changes in fair values are included in the income statement in the year in which they arise.\n\nInvestment property is derecognized when it has been disposed of or permanently withdrawn from use and no future economic benefit is expected. Any gains or losses on the retirement or disposal of investment property are recognized in the Statements of Income and Comprehensive Income in the year of retirement or disposal.\n\nTransfers are made to investment property when, and only when, there is a change in use, evidenced by the commencement of operating leases. Transfers are made from investment property when, and only when, there is a change in use, evidenced by commencement of development.\n\n#### *(ii) Investment property under construction (\"IPUC\")*\n\nThe cost of development properties includes direct development costs, realty taxes and borrowing costs directly attributable to the development. Under the requirements of International Accounting Standard 40 ‑ Investment Property (\"IAS 40\"), IPUC is measured at fair value at each reporting date, with the recognition of gains or losses in the income statement. 
If the fair value of IPUC is not reliably determinable, but the Company expects the fair value of the property to be reliably determinable when construction is complete, it measures that investment property under construction at cost until either its fair value becomes reliably determinable or construction is completed (whichever is earlier).\n\n#### *(iii) Borrowing costs related to IPUC*\n\nAlthough IPUC is measured at fair value, Killam's policy is to present its Statements of Income and Comprehensive Income as if borrowing costs related to the construction are capitalized. Borrowing costs directly attributable to the acquisition or construction of an asset that necessarily takes a substantial period of time to get ready for its intended use or sale are recorded as part of the cost of the respective assets. The interest is calculated using the Company's weighted average cost of borrowings after adjusting for borrowings associated with specific developments. Where borrowings are associated with specific developments, the amount is the gross interest incurred on those borrowings less any investment income arising on their temporary investment. Interest is capitalized from the commencement of the development work until the date of substantial completion. The capitalization of borrowing costs is suspended if there are prolonged periods when development activity is interrupted. Interest is also capitalized on the purchase cost of a site or property acquired specifically for redevelopment but only where activities necessary to prepare the asset for redevelopment are in progress. The Company considers substantial completion to have occurred when the property is capable of operating in the manner intended by management.\n\n#### **(G) Property and Equipment**\n\nProperty and equipment are stated at historical cost less accumulated depreciation and are mainly comprised of head office buildings, leasehold improvements and IT systems. 
The estimated useful lives, residual values and depreciation method are reviewed at each year end, with the effect of any changes in estimate accounted for prospectively. These items are amortized on a straight‑line basis over their estimated useful lives ranging as follows:\n\n| Building | 40 years |\n| --- | --- |\n| Heavy equipment | 15 years |\n| Vehicles | 10 years |\n| Furniture, fixtures and office equipment | 5‑10 years |\n| Leaseholds | Lease term |\n\n#### **(H) Inventory**\n\nInventory represents manufactured homes available for sale. The manufactured homes are valued at the lower of cost (purchase price plus delivery and setup costs) and net realizable value. Net realizable value is the estimated selling price in the ordinary course of business based on market prices at the reporting date less costs to complete and the estimated costs of sale.\n\n#### **(I) Cash**\n\nCash is comprised of bank balances and interest‑earning bank accounts.\n\n#### **(J) Share‑Based Compensation**\n\nThe Company issues share‑based awards to certain employees and non‑employee directors whereby employees render services as consideration for equity instruments (equity‑settled transactions).", - "page_start": 71, - "page_end": 71, - "source_file": "TSX_KMP_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "news3.pdf", - "query": "What are the priorities for job seekers ?", - "target_page": 1, - "target_passage": " Finding a rewarding career that offers growth potential, work-life balance and the satisfaction of helping others is a key priority for many job seekers.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Home / Money / 3 Great Resources to Kick-Start Your Financial Planning Career\n\n#### MONEY\n\n### 3 Great Resources to Kick-Start Your Financial Planning Career\n\n11/23/2022\n\n(NewsUSA) - Finding a rewarding career that offers growth potential, work-life balance and the satisfaction of helping others is a 
key priority for many job seekers. With those goals in mind, a career in financial planning should be a top contender, whether you are just starting out or looking to make a career change. But once you have decided that financial planning is the field for you, how do you get started? Here are three resources that can help you launch a successful financial planning career.\n\n1. Guide to Careers in Financial Planning. Based on interviews with leading financial services firms, this guide introduces you to the wide range of career opportunities in the financial planning profession. It identifies typical entry points and career tracks, explores the types of companies that hire financial planners and provides information on how to find financial planning career opportunities. It also includes resources such as a list of recommended questions to ask in a job interview.\n\n2. Scholarship Programs. Dozens of scholarship programs are available to support you on your professional journey. Some are offered directly through colleges and universities that have financial planning degree and certificate programs. Others are available through nonprofits and organizations like the CFP Board Center for Financial Planning, which administers 16 scholarship programs that help pay for the education and exam requirements to become a CERTIFIED FINANCIAL PLANNER™ professional. Financial services firms may offer scholarships or tuition reimbursements to employees to cover the costs of obtaining professional designations and credentials such as CFP® certification -- some of which may be required to advance within the company.\n\n3. Career Fairs. In-person and virtual career fairs provide valuable opportunities to connect with prospective employers. CFP Board's spring and fall career fairs are some of the most popular hiring events in the profession, with dozens of firms participating in these online exhibitions. 
Job seekers can visit employers' virtual exhibit booths and view open jobs and internships, apply for open positions and interact with employers through one-on-one video meetings and messaging. You can also visit the CFP Board Career Center to browse current job and internship opportunities in financial planning, as well as a collection of articles providing career guidance.\n\nOther top resources include career offices at your college or university, financial services companies' career websites and professional organizations that may have a local chapter near you.\n\nMaking the most of these resources will not only help you find a financial planning job, but also support your growth and development as a future financial planning professional. To learn more about CFP® certification, visit the CFP Board website.\n\nArticle Link\n\nhttps://about.newsusa.com/3-great-resources-to-kick-start-your-financial-planni…\n\n### RELATED ARTICLES", - "page_start": 0, - "page_end": 0, - "source_file": "news3.pdf" - }, - { - "text": "## Diversity\n\nThe Company has a policy to improve the diversity of its workforce over time by identifying women and individuals from under-represented backgrounds for recruitment, and by rewarding and promoting employees on the basis of performance.\n\nHowever, at this stage of its development, the Company has a small Board of Directors, and a small management team which is geographically dispersed and because of the industry in which the Company operates, the Board does not consider it to be practicable to set measurable objectives to achieve greater gender diversity at this time.\n\nIn addition, the Board acknowledges the benefits of seeking to improve gender diversity at all levels in the Company over time and will keep this issue under review.\n\nThe Company aims to foster continuous improvement in the area of diversity; building on achievement realised through the implementation of historical diversity initiatives, by applying principles 
successfully used at our leading operation in this area, to other parts of the business.\n\nOur flagship 'Chatree' Mine in Thailand boasts the enviable statistic of having equal representation by women on the senior management team. Recruitment, training and promotion principles employed at Chatree are currently being applied to our 'Challenger' Mine in Australia, where we currently have 14% representation of women across the senior management and professional categories and to other parts of the business.\n\nThere is currently no representation by women on our Board of Directors. Whilst this is in part reflective of the relatively small size of the Board and stage of development of key elements of the business, it forms part of an overall business review process to consider the issue of gender diversity at this level and will be the subject of ongoing review.\n\nThe Company considers that it will benefit from its ongoing commitment to promote a diverse workforce with treatment of employees and future employees on the basis of merit, abilities and potential, regardless of gender, colour, ethnic or national origin, race, disability, age, sexual orientation, gender reassignment, socioeconomic background, religious or political belief, non / trade union membership, family circumstances or other irrelevant distinction.\n\nThe Company has set various criteria and procedures in order to support equality and diversity in the workforce and applies these principles to:\n\n- 〉 Provide fair access to workplace opportunities and benefits, including internal promotion, leadership development, flexible work practices and fair and comparable wages;\n- 〉 Attracting and retaining a skilled and diverse workforce;\n- 〉 Creating an inclusive workplace culture where discriminatory behaviour is unacceptable; and\n- 〉 Providing an effective grievance mechanism for employees.\n\n### Current Proportion of Women Employees\n\n| Board | 0.0% |\n| --- | --- |\n| Senior Executives | 0.0% |\n| 
Senior Managers | 1.8% |\n| Managers | 1.0% |\n| Professionals | 8.6% |\n| Non-professionals | 6.4% |\n| Total Workforce | 17.8% |\n\n## Share Trading Policy\n\nIn the interests of shareholder confidence and compliance with insider trading laws, the Company has formal policies governing the trading of the Company's securities by Directors, officers and employees. Details of Directors' shareholdings are disclosed in the Directors' Report.\n\nThe policy prohibits Directors and employees from engaging in short-term trading of any of the Company's securities and buying or selling the Company's securities if they possess unpublished, price-sensitive information.\n\nDirectors and senior management may buy or sell Company securities in the four week period following significant announcements by the Company, including the release of the quarterly report, half-yearly results, the preliminary annual results and the lodgement of the Company's Annual Report (subject to the prohibition of dealing in the Company's securities if they possess unpublished price sensitive information).\n\nDirectors and senior management must also receive approval from the Chairman before buying or selling Company securities.\n\nThe Company's Share Trading Policy is available in the 'Corporate Governance' section of the Company's website.\n\n## Communication with Shareholders and Continuous Disclosure\n\nThe Company is committed to providing relevant and timely information to its shareholders in accordance with its continuous disclosure obligations under the ASX Listing Rules and the *Corporations Act 2001* (Cth).\n\nInformation is communicated to shareholders through the distribution of the Company's Annual Report and other communications. 
All releases are posted on the Company's website and released to the ASX in a timely manner.\n\nThe Company has practices in place throughout the year governing who may authorise and make disclosures and the method by which the market is to be informed of any price sensitive information.\n\nThe Company Secretary is responsible for communications with the ASX and ensuring that the Company meets its continuous disclosure obligations.\n\nThe Company's Continuous Disclosure is available in the 'Corporate Governance' section of the Company's website.\n\n## Annual General Meeting\n\nAll shareholders are encouraged to attend and participate in the Company's Annual General Meeting. Shareholders may attend in person or send a proxy as their representative.\n\nThe Company's external auditor is routinely invited to and attends the Annual General Meeting in order to respond to questions raised by shareholders relating to the content and conduct of the audit and accounting policies adopted by the Company in relation to the preparation of the financial statements.\n\n## Corporate Governance Disclosure\n\nThe Company's governance policies and procedures comply in all substantial respects with the Australian Securities Exchange Corporate Governance Principles and Recommendations with 2010 Amendments. The following table compares the ASX Recommendations and the Company's corporate governance policies and practices.", - "page_start": 38, - "page_end": 38, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "For over 100 years Shenandoah Telecommunications Company has been committed to providing outstanding service to our customers. Our employees take that same dedication after hours to make a difference in their community.\n\nWe take this opportunity to share with you, our shareholders, the stories of just a few of your dedicated employees.\n\n*Patty Pomeroy* **help people.\"**\n\nVolunteerism is in Patty Pomeroy's blood. 
Her grandfather was a dispatcher for the rescue squad in Middletown, VA for 25 years and her grandmother was in the ladies auxiliary. Her father was a charter member of the Middletown Rescue Squad. In 1997, Patty, a customer service representative at Shentel for four years, continued the family tradition by earning her Emergency Medical Technician certification and going to \"work\" for the Strasburg Rescue Squad. Patty is the administrator of membership recruitment and retention for the squad and is the liaison coordinator for junior squad members under 18. It is her job to make sure that new members are brought in to the squad and current members stay active.\n\n# **\"There is a great satisfaction that comes from knowing that what you can do will**\n\nJeff Beard has been an installer repairman with Shentel for almost five years. Two years ago, Jeff helped start Project Isaiah 58, a faith-based recovery ministry that reaches out to people who are struggling with addiction. Project Isaiah 58 has weekly group meetings in Winchester, Woodstock and Warrenton, VA. Jeff, who lives in Winchester, participates in the group meetings and also makes time to meet one-on-one with people who need personal attention.\n\n**\"I feel the need to reach out to people who are suffering.\"** \n\n*Jeff Beard*\n\nJohn Gardner has been with Shentel for two years as a PCS technician in Central Pennsylvania, but for almost a year of that time he was on Naval Reserve duty in Sasebo, Japan. John joined the Reserves after serving 10 years of active duty. In October 2002, he was activated under Noble Eagle-Enduring Freedom as part of the increase in security at bases around the world. John worked on Motorola radios and repeater systems while stationed in Japan. 
It was tough for the serviceman to be away from his wife and children, but John believes very strongly in serving his country.\n\n**\"Being in the Reserves is a way for me to be a civilian and still serve my country.\"**\n\n## *John Gardner*\n\nAt Shentel, George Brinkley, the store manager in Front Royal, VA, is known for being one of the biggest fund-raisers for the Shenandoah County American Cancer Society Relay for Life event. In his six years at the Company, George has raised nearly $20,000. In 2003, he raised $4,246 and was recognized as the top individual fund-raiser for the entire event.\n\nIn 2002, George was chairman of the parade committee for the Woodstock, VA 250th anniversary celebration. Under George's leadership, the 26-member committee worked for a year preparing for the parade, which was the largest in the town's history.\n\n**\"I just have a knack for volunteering. I want to make my community better any way I can.\"**\n\n*George Brinkley* 3 ■ 2003 ANNUAL REPORT", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## CHAIRMAN ' S REPORT\n\nLabour hire is heavily dependent upon the quality of the personnel database and our intention has been announced to offer training at Dampier, Broome and Darwin for those who live in the North West and wish to work in the offshore industry there. Planning for this new initiative is well advanced and we expect to be running courses for prospective offshore employees in coming months. 
Although the training program is not directed to any particular community group, it has been encouraging to have active support from Aboriginal leaders in the Kimberley region.\n\nWorld prospects for energy, the need for Australia to add value to its resources, Government initiatives for the support of these activities and environmental imperatives, heavily favour gas, giving every indication that Mermaid Marine's development push has been extremely timely.\n\nIt is also important to draw attention to increased efforts in terms of health, safety and environmental protection. Our workplace is largely at sea, where operations involve natural dangers and the safety of our people is paramount. We also work in a setting where the tasks in which we are involved cast us in the role of environmental caretakers of the sea and coastline.\n\nOver the past twelve months, we have worked even more closely with producers to take this side of our business to the highest possible standard. We are proud of the achievement and at the time of this report, despite the inherent dangers involved in the work, our employees have accrued a record 348 days free of Lost Time Injuries, a tremendous effort.\n\nAverage turnover for the last two years was $20 million, our target in the near term is to achieve earnings of at least $100million, with appropriate levels of accompanying profit. That will be addressed through our policy of strategic positioning and development in the North West of Australia, and also by acquisition where merger or purchase will add to our earnings and strengths. Mermaid Marine Australia Limited is in excellent shape, with confidence that we are well able to pursue and secure our ambitious program.\n\nAlan Birchmore Chairman", - "page_start": 9, - "page_end": 9, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "example, in Sweden. 
378 Meanwhile, the spectrum of guidance developed regarding work-related psychosocial risks is very wide; it covers aspects such as job satisfaction (overall level of wellbeing), engagement, performance and work-related stress,379 and also discrimination, harassment, aggression and violence.380\n\n### **6.2 EU and national OSH strategies**\n\nThe EU and many Member States **applied and apply strategic approaches**, based on EU or national evidence of the state of OSH. OSH strategies are a steering instrument to focus the activities of all actors on major recognised deficits of OSH infrastructures or processes.381\n\nThe newest **EU Strategic Framework on Health and Safety at Work 2021-2027** puts the focus on change, with the title *'Occupational safety and health in a changing world of work'*.382 Consequently, the strategic framework focuses on three key objectives for these years:\n\n- • *anticipating and managing change in the new world of work brought about by the green, digital and demographic transitions;*\n- •*improving prevention of workplace accidents and illnesses;*\n- •*increasing preparedness for any potential future health crises.*\n\nThe proposed focus areas and actions are related to these three objectives. Under the first key objective there are actions like 'Modernising and simplifying EU OSH rules in the context of the green and digital transitions'; a special focus is on psychosocial and ergonomic risks. The second objective promotes a vision zero approach to work-related deaths, particularly referring to hazardous substances and cardiovascular diseases, the promotion of health at work and inclusive workplaces for all.383\n\nThe third objective responds to the impact of the pandemic situation in 2020 and 2021. It includes the development of emergency procedures for future similar situations ('Health crisis'). 
The Strategic Framework repeats and corroborates the value of research and data-based evidence by stating: *'Research and data collection, both at EU and national level, are a pre-condition for the prevention of work-related diseases and accidents. Scientific advice and the latest technological developments feed into OSH legislation and policy.'*\n\nAlso, many Member States have agreed on provision of better data as an objective in their national strategies.384 The EU strategy often gives orientation for the development of national OSH strategies. Under the last strategy period, 24 of the 27 Member States had applied a strategy. Many national OSH strategies contained similar targets. EU-OSHA published an overview report on national strategies, and the OSH Barometer contains as one indicator a harmonised overview on the aspects of national strategies.385\n\nOSH strategies are regarded as an important and innovative policy area, a chance for better collaboration, and also a very relevant joint national OSH activity. Those strategies help in priority setting and focused action on weaknesses. 
Strategies were often agreed in social dialogue processes, and many strategy actors also developed new and better monitoring instruments and indicators.386 Labour inspections play an important or essential role in most of these strategies.387\n\n#### **OSH Barometer – Steering of OSH, National strategies:**\n\nhttps://visualisation.osha.europa.eu/osh-barometer/osh-steering/national-strategies\n\n**OSHWiki: Section 'OSH System at national level', descriptions of the OSH Systems of the EU Member States:** https://oshwiki.eu/wiki/Category:OSH_systems_at_national_level", - "page_start": 123, - "page_end": 123, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## **SOCIAL** RESPONSIBILITY\n\n#### ATTAINING LEADERSHIP IN OUR INDUSTRY AND THE PRIVILEGE OF BEING CANADIANS' COMPANY-OF-CHOICE IS ABOUT DELIVERING THE BEST INNOVATIVE SERVICES WHILE BEING A RESPONSIBLE BUSINESS – AIMS THAT ARE DEEPLY CONNECTED.\n\nEach year we work hard to build a more sustainable business and contribute to building a more sustainable world. 
Applying social and environmental responsibility throughout Rogers' daily operations – and beyond our own walls to our supply chain and communities – helps us attract customers, enhance employee recruitment and retention, mitigate risks and provide value to all of our stakeholders.\n\nTo create a great workplace, we focus on all aspects of the employee experience – investing millions in employee training and development, providing attractive compensation and benefits, and developing a", - "page_start": 19, - "page_end": 19, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Annually, Internal Audit facilitates and monitors Management's completion of the financial fraud risk assessment to identify areas of potential fraud in our financial statements and to make sure we have documented and verified controls to mitigate that risk.\n\nOur Enterprise Risk Management methodology and policies rely on the expertise of our management and employees to identify risks and opportunities, and implement risk mitigation strategies as required.\n\n#### **Corporate Social Responsibility**\n\nBeing a responsible corporate citizen and sustainable business are part of good governance. 
We believe corporate social responsibility is increasingly important to our growth, competitive advantage and engagement with key stakeholders, and we strive to be a sustainable business and contribute to a better world.\n\nWe focus on five general areas:\n\n| Product stewardship | • looking at health, safety, the environment and other issues across the product life cycle – from design, manufacturing and transport |\n| --- | --- |\n| | to packaging, usage and end-of-life |\n| | • focusing on ensuring our products and services meet the expectations of our customer and communities, and our own criteria for |\n| | quality, social responsibility and environmental respect |\n| Employee engagement | • working hard to create a culture of employee engagement and encourage and respect diversity |\n| | • establishing Rogers as a place where employees feel proud, look forward to making a contribution and have chance to do their best |\n| | work every day |\n| | • providing leading workplace initiatives, from far-reaching benefits to customized training, development and personal assistance |\n| | programs |\n| Community investment | • promoting the principles of corporate citizenship and benchmarks for community investment established by Imagine Canada by |\n| | committing at least 1% of our net earnings before tax annually to charities and other non-profit organizations |\n| | • investing in many worthy causes to help create vibrant, healthy, talent-rich communities. 
Our flagship program, Rogers Youth Fund, |\n| | supports educational opportunities for at-risk Canadian youth |\n| Environmental responsibility | • proactively managing the environmental aspects of our business |\n| | • measuring our carbon footprint every year and implementing a wide range of initiatives to increase our energy efficiency, reduce |\n| | paper use and divert materials from our operations from landfills |\n| | • focusing on building environmental awareness and engagement among our employees, customers and communities |\n| Ethical supply chain | • reinforcing the importance of an ethical supply chain because it is critical to our reputation and success. Rogers is a large purchaser, |\n| | with over 37,000 suppliers across Canada and internationally |\n| | • paying special attention to sound sourcing, production and delivery of supplier products and services by setting strong expectations |\n| | of corporate social responsibility throughout our supply chain, including compliance with the Rogers Supplier Code of Conduct |\n\nSee our annual Corporate Social Responsibility report (available on our website rogers.com/csr) for more about our social, environmental and community contributions and performance.", - "page_start": 76, - "page_end": 76, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "# …TO DELIVER ON THE STRATEGY\n\nSantos continues to tap into the spirit and commitment of the entrepreneurs and explorers who laid the Company's foundations as we deliver on our growth strategy.\n\nToday, Santos is a major Australian oil and gas exploration and production company growing a global energy business through:\n\n# LEVERAGING BASE BUSINESS\n\nCreating value from the base business through environment, health, safety and operational excellence, optimisation programs and cost leadership.\n\n# CREATING OPPORTUNITIES\n\nMaximising the value of the exploration program, building a better and more balanced 
portfolio and pursuing new opportunities.\n\n# CAPTURING AND DELIVERING GROWTH\n\nCommencing new production, advancing key projects, extracting value from our infrastructure position and seeking innovative commercial arrangements.\n\n# MANAGING OPTIONS\n\nDelivering improved returns, strong cash flow and reserve replacement through disciplined portfolio management, strategic acquisitions and divestments, and making sustainable progress.", - "page_start": 2, - "page_end": 2, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## CHALLENGER SUSTAINABILITY\n\n### Employees\n\nThe Challenger workforce totalled 272 at the end of the financial year comprising 100 Kingsgate personnel (employees and casual contractors to fill vacancies); and 172 contractors. Contractors on site include: Leighton with 154 personnel providing mining services; Sodexo, 12 personnel providing catering and cleaning services; Powerwest, 2 personnel for power supply services; and AWG, 4 personnel for air leg and rise mining services.\n\nTurnover for Challenger permanent employees during the financial year was 23%, with 23 terminations and 22 new starters. New employees recruited on a casual basis with a view to permanency accounted for 19 positions.\n\nDuring the year Kingsgate have rebuilt the Challenger management team improving the depth of mining engineering experience. The new management, combined with targeted training has brought about a cultural change with the emphasis now being on proper planning, appropriate contractor management, accountability. To encourage staff retention, there was a focus on improving the site facilities with an upgrade to the mining office as well as site communications to allow employees to communicate with their families while on site.\n\n## Community\n\nThe remoteness of Challenger mine – 310 kilometres by road from the nearest town at Coober Pedy – reduces the capacity for local involvement with surrounding communities. 
Challenger continued to support its nearest communities with local sponsorships including:\n\n- 〉 The Umoona Community Council;\n- 〉 Glendambo Pastoralists Ball;\n- 〉 The Royal Flying Doctor Service; and\n- 〉 The Coober Pedy Football Club.\n\nChallenger is located within the Commonwealth Government, Woomera Prohibitive Area (WPA). The Department of Defence (DOD) continues to utilise the area for rocket testing and other commercial activities. In the last 10 years DOD have not impacted on mine operations.\n\nChallenger Mine has fostered strong relations with the University of Adelaide over the past nine years. Each year selected students from the Schools of Geology and Mining Engineering undertake field trips to Challenger, where they experience a very detailed and hands-on introduction to mining. Kingsgate offers academic Bursaries and Prizes to students in both disciplines.\n\n## Environment\n\nFull details of all environmental monitoring reports and a detailed review of all environmental issues are contained within the 2013 Mining and Rehabilitation Compliance Report (MARCR). The MARCR can be downloaded from DMITRE's website www.minerals.dmitre.sa.gov. au and can be found using the search word \"Challenger\".\n\n#### Water usage\n\nA supplementary groundwater extraction bore (Gusher 3) was commissioned at Challenger to increase the supply of potable water made available to the accommodation camp. A third reverse osmosis plant was also commissioned to accommodate the increase in volume of water that needs to be filtered for potable use.\n\nA total of 436,175 tonnes of water was used to process 556,631 tonnes of ore during the financial year with a ratio of 0.78 tonnes of water to one tonne of ore. 
Water usage was reduced onsite via recycling of supernatant water from Tailings Storage Facility (TSF) 2 via the decant water return system.", - "page_start": 25, - "page_end": 25, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "### Exploration\n\nWith the approvals of the Special Prospecting Licence (\"SPL\") applications in Thailand still awaiting the Minister of Industry's consent, exploration attention over the past 12 months has focused on new exploration opportunities and Mineral Resource enhancement targets within the Mining Leases. This exploration formed part of a strategic exploration program within the mining leases at Chatree that commenced in late 2012. The program has successfully defined several new areas of mineralisation within the Mining Lease, most notably at Q and A North Prospects, and has also upgraded several larger areas of Inferred Resources to the Measured and Indicated Mineral Resource category.\n\n## Looking Ahead\n\nOver the current financial year and beyond, Kingsgate remains focused on optimising production within an uncertain metal price environment, continuing to build resources and reserves and advancing the development project pipeline of Nueva Esperanza and Bowdens. These initiatives are designed to grow earnings per share for the benefit of all shareholders.\n\nIn late September, Kingsgate's Thai subsidiary, Akara Resources Public Company Limited (\"Akara\") has submitted its listing application and draft Prospectus to the Thai Securities Exchange Commission (SEC) and the Stock Exchange of Thailand (SET) for an initial public offering of its shares on the SET.\n\nThe SEC and SET will review the draft Prospectus in the coming months in order to approve the listing of Akara. The decision to list Akara will depend on market conditions and other factors at the time of approval.\n\nGroup gold production for the full year to 30 June 2014 is expected to be in the range of 190,000 to 210,000 ounces. 
This includes 120,000 to 130,000 ounces from Chatree and 70,000 to 80,000 ounces from Challenger.", - "page_start": 6, - "page_end": 6, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf", - "query": "What does ShareAlike mean in terms of licencing ?", - "target_page": 1, - "target_passage": "adaptations based on this work must be licensed under the same license.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# Understanding Creative Commons license\n\nbefore licensing your work\n\n## **THREE-LAYER DESIGN**\n\nCreative Commons (CC) license has three layers:\n\n- \"Legal Code\" (base layer): contains terms and conditions to be used by lawyers and legally applicable in court.\n- \"Human Readable\" (commons deeds): contain the summary of the legal code and key terms.\n- \"Machine Readable\" : contains HTML or codes for machines to recognize a work is available under a Creative Commons license.\n\n# **FOUR ELEMENTS**\n\n- BY (\"Attribution\"): users must credit the author of the work they are using.\n- SA (\"ShareAlike\"): adaptations based on this work must be licensed under the same license.\n- NC (\"NonCommercial\"): the work is only available to be used for\n\nND\n\nSA\n\nnoncommercial purposes.\n\n- ND (\"NoDerivative\"): reusers making cannot share adaptations of the work.\n# **SIX LICENSES**\n\n- CC BY (\"Attribution\") allows people to use the work for any purpose (even commercially and even in modified form) as long as they give attribution to the creator.\n- CC BY-SA (\"Attribution-ShareAlike\") allows people to use the work for any purpose (even commercially and even in modified form), as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-NC (\"Attribution-NonCommercial\") allows people to use the work for 
noncommercial purposes only, and only as long as they give attribution to the creator.\n- CC BY-NC-SA (\"Attribution-NonCommercial-ShareAlike\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-ND (\"Attribution-NoDerivative\") allows people to use the unadapted work for any purpose (even commercially), as long as they give attribution to the creator.\n- CC BY-NC-ND (\"Attribution-NonCommercial-NoDerivative\") allows people to use the unadapted work for noncommercial purposes only, and only as long as they give attribution to the licensor.\n\n# **REMIND THAT…**\n\nCC license only applicable to the work that is within the scope of copyright law. CC license can be used when …\n\n- you want to give others permissions to freely copy and redistribute your work, and\n- you want to give others permission to freely transform, alter, or otherwise create derivative works based on your work.\n\n#### **CC LICENSE CAN'T BE USED FOR …**\n\nfair use, fair dealing, or some other limitation and exception to copyright applies the the work.\n\n### **ALSO FOR …**\n\nthe work that is already in the Public Domain. For those who want to waive their rights from copyright protection, use CC0 (\"CC Zero\").\n\n# **NOW, SHARE YOUR WORK!** https://creativecommons.org/choose/\n\nTexts are adapted from CC Certification for Educators. CC BY license.\n\nBY, SA, NC, ND icons, CC BY, CC BY-SA, CC BY-NC, CC BY-NC-SA, CC BY-ND, and CC BY-NC-ND buttons are trademark of Creative Commons, and subject to their policies. 3-layer design of CC license image is taken from CC Certification for Educators. CC BY license. 
Line, icons, and gradients are from Canva, and subject to their policies.", - "page_start": 0, - "page_end": 0, - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf" - }, - { - "text": "Even in the **short period between 2013 and 2018** (the period covered by these pilot statistics) the data show an overall decline and a decline of several relevant occupational diseases. The strongest decrease — practically a halving — can be seen for hearing impairments (diseases of the inner ear). Pneumoconiosis, mesothelioma and selected occupational cancers went down between 7% and 14%. **Asthma and some recognised MSDs** are more or less stagnating, probably due to unchanged exposure to biological or chemical substances and no change regarding the health outcomes of ergonomic working conditions.\n\nIf work is **one of some** causative factors, a clear assignment of work to a health outcome is complex. Moreover, in many cases a quite **long observation period** is necessary simply due to the **latency time between exposure at work, outbreak and detection of a disease**, which is obviously very different from the clear and immediate consequence of an accident at work.\n\nThe detection of a disease and the correlation between work and this disease depends highly on the **monitoring capacities of the health system and its ability, tradition and standards to connect diseases and work-related causes**. In a study on 'Asbestos‐related occupational diseases in Central and East European Countries' the authors refer to different policies for identifying workers formerly exposed to asbestos and conclude:\n\n*'Consequently, large differences are observed from one country to another regarding the number of recognised asbestos-related cases. In Slovenia, for example, the annual asbestosis rate (cases of asbestosis/population) amounts to 14.9, in Croatia 5.3, and in Poland 2.1. 
Moreover, in Estonia, the incidence of asbestosis is unknown as there is no systematic collection of data.'*181\n\nFor example, until now very few occupational diseases have been recognised as outcomes of psychosocial risks at work. The ILO proposes in its 'List of Occupational Diseases Recommendation' a large number of very specific and 'classic' occupational diseases — a very broad definition of *'Mental and behavioural disorders'* but leaving the responsibility to science and to 'national conditions'. 182 Similarly, the development of the European Schedule of Occupational Diseases (ESOD) aims to improve knowledge, step up prevention and provide assistance in linking occupational activities and diseases.", - "page_start": 74, - "page_end": 74, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "for his or her education or welfare during any period ending not later than the date when he or she attains the age of 18 years;\n\n- (g) for the purpose of preventing the spread of an infectious or contagious disease;\n- (h) in the case of a person who is, or is reasonably suspected to be, of unsound mind, addicted to drugs or alcohol, or a vagrant, for the purpose of his or her care or treatment or the protection of the community;\n- (i) for the purpose of preventing the unlawful entry of that person into Botswana, or for the purpose of effecting the expulsion, extradition or other lawful removal of that person from Botswana, or for the purpose of restricting that person while he or she is being conveyed through Botswana in the course of his or her extradition or removal as a convicted prisoner from one country to another;\n- (j) to such extent as may be necessary in the execution of a lawful order requiring that person to remain within a specified area within Botswana or prohibiting him or her from being within such an area, or to such extent as may be reasonably justifiable for the taking of proceedings against that person 
relating to the making of any such order, or to such extent as may be reasonably justifiable for restraining that person during any visit that he or she is permitted to make to any part of Botswana in which, in consequence of any such order, his or her presence would otherwise be unlawful; or\n- (k) for the purpose of ensuring the safety of aircraft in flight.\n\n(2) Any person who is arrested or detained shall be informed as soon as reasonably practicable, in a language that he or she understands, of the reasons for his or her arrest or detention.\n\n(3) Any person who is arrested or detained-\n\n- (a) for the purpose of bringing him or her before a court in execution of the order of a court; or\n- (b) upon reasonable suspicion of his or her having committed, or being about to commit, a criminal offence under the law in force in Botswana,\n\nand who is not released, shall be brought as soon as is reasonably practicable before a court; and if any person arrested or detained as mentioned in paragraph (b) of this subsection is not tried within a reasonable time, then, without prejudice to any further proceedings that may be brought against him or her, he or she shall be released either unconditionally or upon reasonable conditions, including in particular such conditions as are reasonably necessary to ensure that he or she appears at a later date for trial or for proceedings preliminary to trial.\n\n(4) Any person who is unlawfully arrested or detained by any other person shall be entitled to compensation therefor from that other person.\n\n# **6. 
Protection from slavery and forced labour**\n\n(1) No person shall be held in slavery or servitude.\n\n(2) No person shall be required to perform forced labour.\n\n(3) For the purposes of this section, the expression \"forced labour\" does not include-\n\n- (a) any labour required in consequence of the sentence or order of a court;\n- (b) labour required of any person while he or she is lawfully detained that, though not required in consequence of the sentence or order of a court, is reasonably necessary in the interests of hygiene or for the maintenance of the place at which he or she is detained;\n- (c) any labour required of a member of a disciplined force in pursuance of his or her duties as such or, in the case of a person who has conscientious objections to service as a member of a naval, military or air force, any labour that that person is required by law to perform in place of such service;", - "page_start": 5, - "page_end": 5, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "The name of the city has taken the forms *Lugdon*, *Luon*, and since the 13th century, *Lyon*. The Gallic *Lugdun* or *Lugdunon* that was Latinized in Roman as Lugdunum is composed of two words. The first may be the name of the Celtic god Lug (in charge of order and law), or the derived word *lugon*, meaning \"crow\" (the crow being the messenger of Lug), but might also be another word *lug*, meaning \"light\". The second is *dunos* ('fortress', 'hill'). 
The name thus may designate the hill of Fourvière, on which the ancient city of Lyon is founded, but could mean \"hill of the god Lug\", \"hill of the crows\" or \"shining hill\".[21] [22]\n\nAlternatively Julius Pokorny associates the first part of the word with the Indo-European radical **lūg* ('dark, black, swamp'), the basis of the toponyms Ludza in Latvia, Lusatia in Germany (from Sorbian *Łužica*), and several places in the Czech Republic named Lužice;[23] it could then also be compared to Luze in Franche-Comté and various hydronyms such as Louge.\n\nFurther down, in the current Saint-Vincent district, was the Gallic village of Condate, probably a simple hamlet of sailors or fishermen living on the banks of the Saône. *Condate* is a Gallic word meaning \"confluence\", from which the Confluence district gets its name.\n\nIn Roman times the city was called *Caput Galliæ*, meaning \"capital of the Gauls\". As an homage to this title, the Archbishop of Lyon is still called the Primate of Gaul.\n\nDuring the revolutionary period, Lyon was renamed *Commune-Affranchie* (\"Emancipated Commune\") on 12 October 1793 by a decree of the Convention Nationale. It resumed its name in 1794, after the end of the Terror.\n\nLyon is called *Liyon* in Franco-Provençal. [24]\n\n#### **Ancient Lyon**\n\nAccording to the historian Dio Cassius, in 43 BC, the Roman Senate ordered the creation of a settlement for Roman refugees of war with the Allobroges. These refugees had been expelled from Vienne and were now encamped at the confluence of the Saône and Rhône rivers. The foundation was built on Fourvière hill and officially called *Colonia Copia Felix Munatia*, a name invoking prosperity and the blessing of the gods. The city became increasingly referred to as *Lugdunum* (and occasionally *Lugudunum*[25] ).[26] The earliest translation of this Gaulish place-name as \"Desired Mountain\" is offered by the 9th-century *Endlicher Glossary*. 
[27] In contrast, some modern scholars have proposed a Gaulish hill-fort named Lug[o]dunon, after the Celtic god Lugus (cognate with Old Irish *Lugh*, Modern Irish *Lú*), and *dúnon* (hillfort).\n\nThe Romans recognised that Lugdunum's strategic location at the convergence of two navigable rivers made it a natural communications hub. The city became the starting point of main Roman roads in the area, and it quickly became the capital of the province, Gallia Lugdunensis. Two Emperors were born in this city: Claudius, whose speech is preserved in the Lyon Tablet in which he justifies the nomination of Gallic Senators, and Caracalla.\n\n| Country | France |\n| --- | --- |\n| Region | Auvergne-Rhône-Alpes |\n| Metropolis | Lyon Metropolis |\n| Arrondissement | Lyon |\n| Subdivisions | 9 arrondissements |\n| Government | |\n| • Mayor (2020– | [2] Grégory Doucet |\n| 2026) | (EELV) |\n| 1 Area | 47.87 km2 (18.48 sq mi) |\n| [3]) • Urban (2020 | 1,141.4 km2 |\n| | (440.7 sq mi) |\n| [4] • Metro (2020 ) | 4,605.8 km2 |\n| | (1,778.3 sq mi) |\n| [5] Population (2022) | 520,774 |\n| • Rank | 3rd in France |\n| • Density | 11,000/km2 |\n| | (28,000/sq mi) |\n| • Urban (Jan. | 1,702,921 |\n| [6] 2021 ) | |\n| • Urban density | 1,500/km2 (3,900/sq mi) |\n| • Metro (Jan. | 2,308,818 |\n| [7] 2021 ) | |", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Ballet dancing was used by Louis as a political tool to hold power over his state. He integrated ballet deeply into court social functions and fixated his nobles' attention on upholding standards in ballet dancing, effectively distracting them from political activities.[120] In 1661, the Royal Academy of Dance was founded by Louis to further his ambition. Pierre Beauchamp, his private dance instructor, was ordered by Louis to come up with a notation system to record ballet performances, which he did with great success. 
His work was adopted and published by Feuillet in 1700 as *Choregraphie*. This major development in ballet played an important role in promoting French culture and ballet throughout Europe during Louis's time.[121]\n\nLouis greatly emphasized etiquettes in ballet dancing, evidently seen in \"La belle danse\" (the French noble style). More challenging skills were required to perform this dance with movements very much resembling court behaviours, as a way to remind the nobles of the king's absolute power and their own status. All the details and rules were compressed in five positions of the bodies codified by Beauchamp.[122]\n\n### **Unofficial image**\n\nBesides the official depiction and image of Louis, his subjects also followed a non-official discourse consisting mainly of clandestine publications, popular songs, and rumours that provided an alternative interpretation of Louis and his government. They often focused on the miseries arising from poor government, but also carried the hope for a better future when Louis escaped the malignant influence of his ministers and mistresses, and took the government into his own hands. On the other hand, petitions addressed either directly to Louis or to his ministers exploited the traditional imagery and language of monarchy. These varying interpretations of Louis abounded in self-contradictions that reflected the people's amalgamation of their everyday experiences with the idea of monarchy. [123]\n\nLouis XIV as Apollo in the *Ballet Royal de la Nuit* (1653)\n\nHall of Mirrors, Palace of Versailles\n\n### **In fiction**\n\n#### **Literature**\n\n- Alexandre Dumas portrayed Louis in his two sequels to his 1844 novel *The Three Musketeers*: first as a child in *Twenty Years After* (1845), then as a young man in *The Vicomte de Bragelonne* (1847–1850), in which he is a central character. 
The final part of the latter novel recounts the legend that a mysterious prisoner in an iron mask was actually Louis's twin brother and has spawned numerous film adaptations generally titled *The Man in the Iron Mask*.\n- In 1910, the American historical novelist Charles Major wrote *\"The Little King: A Story of the Childhood of King Louis XIV\"*.\n- Louis is a major character in the 1959 historical novel *Angélique et le Roy* (\"Angélique and the King\"), part of the *Angélique* series. The protagonist, a strong-willed lady at Versailles, rejects the King's advances and refuses to become his mistress. A later book, the 1961 *Angélique se révolte* (\"Angélique in Revolt\"), details the dire consequences of her defying this powerful monarch.\n- A character based on Louis plays an important role in *The Age of Unreason*, a series of four alternate history novels written by American science fiction and fantasy author Gregory Keyes.\n- Louis features significantly in Neal Stephenson's Baroque Cycle, specifically in the 2003 novel *The Confusion*, the greater part of which takes place at Versailles.\n- In the *39 Clues* series universe, it has been noted that Louis was part of the Cahill branch, Tomas.\n- He is called the son of Apollo in Rick Riordan's *Trials of Apollo* series.\n- Louis XIV is portrayed in Vonda N. McIntyre's 1997 novel *The Moon and the Sun*.\n\n#### **Films**\n\n- The film, *The Taking of Power by Louis XIV* (1966), directed by Roberto Rossellini, shows Louis's rise to power after the death of Cardinal Mazarin.\n- The film *Man in the Iron Mask* (1998), directed by Randall Wallace, focused on the identity of an anonymous masked prisoner who spent decades in the Bastille and other French prisons, and his true identity remains somewhat a mystery till date. 
The monarch was played by Leonardo DiCaprio.", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia5.pdf" - }, - { - "text": "By clicking on the \"**Data->Licensing Assistant**\" link in the main menu, the Licence Assistant is opened in a new window, displaying relevant information of all supported licences by the tool.\n\n| | | Newsletter FAQ Search Contact Cookies Legal notice English (en) | > |\n| --- | --- | --- | --- |\n| | | Search site content ... | ರ |\n| European Data Portal > Licensing Assistant | | | |\n| 11 What we do - | Data~ Providing Data . | Using Data - Resources . | |\n| Datasets Cataloques | Metadata Quality Licensing Assistant | SPARQL Manager Statistics | |\n| Licensing Assistant | | | |\n| Data which is shared with a licence becomes Open Data. There are many licences available. | The licence assistant provides a description of the available licences. It also gives an overview | | |\n| of how to apply licences as re-publisher/distributor of Open Data and how to combine multiple | | | |\n| licences. 
| | | |\n| Please find a licence by selecting the preferred licence terms below: | | | |\n| Advanced settings | | | |\n| Obligation | Permission | Prohibition | |\n| Lesser Copyleft Attribution | Derivative Works Distribution | Commercial use | |\n| Sharealike Notice Copyleft | Reproduction Sublicensing | | |\n| State Changes | Use patent claims | | |\n| Name Terms | | | |\n| CC BY 3.0 Austria | Obligation: Attribution Permission: Derivative Works | Obligation: Notice Permission: Distribution | |\n| | Permission: Reproduction | | |\n| CC-BY 4.0 | Obligation: Attribution Permission: Derivative Works | Permission: Distribution Obligation: Notice | |\n| | Obligation: State Changes Permission: Reproduction | | |\n| CC-BY 3.0 NL | Obligation: Attribution Permission: Derivative Works | Obligation: Notice Permission: Distribution | |\n| | Permission: Reproduction | | |\n| CC-BY-NC 4.0 | Obligation: Attribution Permission: Derivative Works | Obligation: Notice | |\n| | Prohibition: Commercial use Permission: Distribution | Obligation: State Changes | |\n| | Permission: Reproduction | | |\n| CC-BY-NC-ND 4.0 | Obligation: Attribution Obligation: Notice | Prohibition: Commercial use Permission: Distribution | |\n| | Obligation: State Changes Permission: Reproduction | | |", - "page_start": 34, - "page_end": 34, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "# Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work — on conditions of your choice. 
CC licenses let you change your copyright terms from the default of \"all rights reserved\" to \"some rights reserved.\"\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n### Public domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark. Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n#### Where public domain tools fit in the copyright spectrum\n\n# The CC0 Public Domain Dedication\n\n**Use this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.**\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. 
Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\n## What is the difference between CC0 and the Public Domain Mark?\n\nCC0 (\"CC Zero\") is intended for use only by authors or holders of copyright and related rights (including database rights), in connection\n\nwith works that are still subject to those rights in one or more countries.\n\nWhen CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. Unlike CC0, PDM doesn't\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\nchange the copyright status of a work.\n\n# Public Domain Mark\n\n**Use this tool if you have identified a work that is free of known copyright restrictions.**\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. 
Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "\"great colors of nature\" by marcostetter is published under Public Domain Mark 1.0.\n\n# **About Us**\n\nCreative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy. Since 2002, the CC Licenses have served as an alternative to traditional copyright, providing a simple, standardized, and legal way for individuals and institutions to freely share images, music, research, educational resources, and cultural artifacts.\n\n#### **Chief Executive Officer**\n\nAnna Tumadóttir\n\n#### **General Counsel**\n\nKat Walsh\n\n# **Board of Directors**\n\nMarta Belcher Glenn Otis Brown Delia Browne James Grimmelmann Lawrence Lessig **Emeritus* Angela Oduor Lungati Bilal Randeree Alek Tarkowski Jeni Tennison Luis Villa\n\n**Except where otherwise noted, \"Annual Report 2023\" by Creative Commons is licensed under CC BY 4.0.**", - "page_start": 1, - "page_end": 1, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "1,600,000 m 2 (17,222,256.67 sq ft) of office space and services and more than 55,000 jobs.[48] *Cité Internationale*, created by the architect Renzo Piano is located in the border of the Parc de la Tête d'Or in the 6th arrondissement. The worldwide headquarters of Interpol is located there. The district of *Confluence*, in the south of the historic centre, is a new pole of economical and cultural development.\n\nTourism is an important part of the Lyon economy, with one billion euros in 2007 and 3.5 million hotel-nights in 2006 provided by non-residents. Approximately 60% of tourists visit for business, with the rest for leisure. 
In January 2009, Lyon ranked first in France for hostels business. The festivals most important for attracting tourists are the *Fête des lumières*, the *Nuits de Fourvière* every summer, the *Biennale d'art contemporain* and the *Nuits Sonores*.\n\n# **Culture**\n\nSince the Middle Ages, the region residents have spoken several dialects of Franco-Provençal. The Lyonnais dialect was replaced by the French language as the importance of the city grew. However some \"frenchified\" Franco-Provençal words can also be heard in the French of the Lyonnais, who call their little boys and girls \"gones\" and \"fenottes\" for example.[49]\n\n- The Lumière brothers pioneered cinema in the town in 1895. The Institut Lumière, built as Auguste Lumiere's house, and a fascinating piece of architecture in its own right, holds many of their first inventions and other early cinematic and photographic artifacts.\nGuignol, created in the early 19th C., associated with the silk-workers\n\n8 December each year is marked by the Festival of Lights (la Fête des lumières), a celebration of thanks to the Virgin Mary, who purportedly saved the city from a deadly plague in the Middle Ages. During the event, the local population places candles (*luminions*) at their windows and the city of Lyon organizes large-scale light shows onto the sides of important Lyonnais monuments, such as the medieval Cathédrale St-Jean.\n\n- The Saint Francis of Sales church is famous for its large and unaltered Cavaillé-Coll pipe organ, attracting audiences from around the world.\n- The Opéra Nouvel (New Opera House) is the home of the Opéra National de Lyon. The original opera house was re-designed by the distinguished French architect Jean Nouvel between 1985 and 1993 and is named after him.\n- Lyon is also the French capital of \"*trompe l'œil*\" walls, a very ancient tradition. Many are to be seen around the city. 
This old tradition is now finding a contemporary expression, for example in the art of Guillaume Bottazzi.[50][51]\n- The Brothers of the Sacred Heart, a Roman Catholic congregation that operates schools in Europe and North America, was founded in Lyon in 1821.\n- The African Museum of Lyon is one of the oldest museums situated in Lyon.[52]\n- The Museum of Resistance and Deportation looks at the various individuals prominent in the Resistance movement in World War II. The building is strongly linked to Klaus Barbie. Lyon sees itself as the centre of the French resistance and many members were shot in Place Bellecour in the town centre. The exhibition is largely a series of , mini-biographies of those involved.\n- Lyon is a pilot city of the Council of Europe and the European Commission Intercultural cities program.\n\n## **UNESCO World Heritage Site**\n\nThe historic site of Lyon was designated a UNESCO World Heritage Site in 1998. In its designation, UNESCO cited the \"exceptional testimony to the continuity of urban settlement over more than two millennia on a site of great commercial and strategic significance.\"[37] The specific regions comprising the historic site include the Roman district and Fourvière, the Renaissance district (Vieux Lyon), the silk district (slopes of Croix-Rousse), and the Presqu'île, which features architecture from the 12th century to modern times.[53]", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia4.pdf" - }, - { - "text": "### **3.2.6 How to view licensing information**\n\nLicensing information is available for all datasets associated with common licences, which are supported by the Licence Assistant. 
When available a link to the assistant is provided on left side of a dataset page.\n\nBy clicking on the **licence name** (here: cc-by), the Licence Assistant tool is opened in a new window, displaying relevant information for this particular licence.\n\n| IROPFAN | | | Newsletter FAQ Search Contact Cookies Legal notice | English (en) | ◀ |\n| --- | --- | --- | --- | --- | --- |\n| | | European Data Portal > Datasets > Daten über Anbieter von Hochs ... | | Search site content ... | ರ |\n| 1 European Data Po | | What we do ▼ Data ▼ Pro | | | |\n| WI G | | Data · Dataset Categories Similar Datasets | Using Data - | Resources . | |\n| | | Higher Education Provider Daing Assistant | SPARQL Manager | Statistics | |\n| | | data.gov.uk | | | |\n| Licensi | | | | | |\n| | | We publish the full HESA Finance return as open data | | | |\n| | | providers for the reference of funding and requlatory | | | |\n| CC-BY | | | | | |\n| Open licer | | Distributions (21) | | | |\n| You are f | | | | Comparable licences | |\n| Deriva | | Tahle 12 Analysis of staff costs 2016/17 to 2017/18 | | · CC-BY-NC-ND4.0 | |\n| CSV Create | | | | · CC-BY-NC-SA4.0 | |\n| Distrib | | Licence: cc-by (i | | · CC-BY-NC4.0 | |\n| | | | | · CC-BY-ND4.0 | |\n| Redistr | | | | · CC-BY-SA3.0NL | |\n| Reproc CSV | | Table 1 - Consolidated statement of comprehensive | | | |\n| \"Repro | | expenditure year ended 31 July 2015/16 to 2017/18 | without limitation by sound or | · CC-BY-SA4.0 | |\n| | | | | · CC-BY3.0NL | |\n| visual r | Licence: cc-by (i | | Work, including storage of a | | |\n| protect | | | edium. 
| · CC-BY4.0 | |\n| | | | | · CCBY3.0Austria | |\n| | | | | · DL-DE-BY-NC1.0 | |\n| You are obligated to: | | | | · DL-DE-BY1.0 | |\n| | | | | · DL-DE-BY2.0 | |\n| Attribution | | | | · EUPL-1.1 | |\n| Give proper credit to the copyright holder and/or author | | | | · FR-LO | |\n| Notice | | | | · GFDL-1.1 | |\n| Keep copyright and licence notices intact | | | | · GFDL-1.2 | |\n| State Changes | | | | · GFDL-1.3 | |\n| | | Indicate which changes have been made to the original licenced work in a manner that permits attribution. | | · IODLv1.0 | |\n| | | | | · IODLv2.0 | |\n| | | | | · NLOD | |\n| | | | | · ODC-BY | |\n| | | | | · ODC-ODbL | |\n| | | | | · OGL-NC | |\n| | | | | · OGL-ROU-1.0 | |\n| | | | | · OGL1.0 | |\n| | | | | · OGL2.0 | |\n| | | | | · OGL3.0 | |\n| | | | | · PSEUL | |", - "page_start": 33, - "page_end": 33, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - } - ] - }, - { - "references": { - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf", - "query": "What is the most restricive Creative Common licence ?", - "target_page": 1, - "target_passage": "CC BY-NC-ND (\"Attribution-NonCommercial-NoDerivative\") allows people to use the unadapted work for noncommercial purposes only, and only as long as they give attribution to the licensor.", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "This is a frame from \"Twenty Years of Creative Commons (in Sixty Seconds)\" by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. 
View full licensing and attribution information about all works included in the video on Flickr.\n\n**Creative Commons**\n\nPO Box 1866 Mountain View CA 94042 USA\n\n+1 415 429 6753 info@creativecommons.org", - "page_start": 11, - "page_end": 11, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work — on conditions of your choice. CC licenses let you change your copyright terms from the default of \"all rights reserved\" to \"some rights reserved.\"\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n### Public domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark. 
Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n#### Where public domain tools fit in the copyright spectrum\n\n# The CC0 Public Domain Dedication\n\n**Use this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.**\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\n## What is the difference between CC0 and the Public Domain Mark?\n\nCC0 (\"CC Zero\") is intended for use only by authors or holders of copyright and related rights (including database rights), in connection\n\nwith works that are still subject to those rights in one or more countries.\n\nWhen CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. 
Unlike CC0, PDM doesn't\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\nchange the copyright status of a work.\n\n# Public Domain Mark\n\n**Use this tool if you have identified a work that is free of known copyright restrictions.**\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "Combined, these limits can enable effective foreign control of up to 46.7%.\n\nThe chief executive officer and 80% of the members of the Board of Directors of the operating licensee must be resident Canadians. There are no restrictions on the number of non-voting shares that may be held by non-Canadians at either the holding-company or licenseecompany level. Neither the Canadian carrier nor its parent may be otherwise controlled in fact by non-Canadians. 
Subject to appeal to the federal Cabinet, the CRTC has the jurisdiction to determine as a question of fact whether a given licensee is controlled by non-Canadians.\n\nPursuant to the Telecommunications Act and associated regulations, the same rules also apply to Canadian telecommunications carriers such as Wireless, except that there is no requirement that the chief executive officer be a resident Canadian. We believe we are in compliance with the foregoing foreign ownership and control requirements.\n\nOn June 29, 2012, Bill C-38 amending the Telecommunications Act passed into law. The amendments exempt telecommunications companies with less than 10% of total Canadian telecommunications market measured by revenue from foreign investment restrictions. Companies that are successful in growing their market shares in excess of 10% of total Canadian telecommunications market revenues other than by way of merger or acquisitions will continue to be exempt from the restrictions.\n\n#### WIRELESS\n\n#### **Consultation on the Renewal of Cellular and Personal Communications Services (PCS) Spectrum Licences**\n\nIn March 2011, Industry Canada released its decisions about the renewal process for cellular and PCS licences that began expiring at that time. Key things to note:\n\n- At the end of the current licence term, new cellular and PCS licences with a 20-year term will be issued to licensees that are in compliance with all licence conditions.\n- The previously existing annual fee of $0.0351 per MHz per population of the licenced area will continue to apply to all cellular and PCS licences, including those initially assigned by auction. The Minister of Industry Canada may review and amend the fees during the licence term after further consultation with licensees.\n- A determination regarding existing research and development conditions of licence was not released at that time and will be released separately. 
A decision has not been made to date, and until such a time, the current conditions of licence remain in effect.\n\n#### **Consultation on a Policy and Technical Framework for the 700Mhz and 2500-2690Mhz Band and Aspects Related to Commercial Mobile Spectrum**\n\nIn March 2012, Industry Canada released its policy and technical framework for the auction of spectrum in the 700 MHz and 2500–2690 MHz spectrum bands. Key things to note:\n\n- Industry Canada adopted an auction cap for the 700 MHZ (not a setaside like in the 2008 Advanced Wireless Services (AWS) spectrum auction). There are four blocks of spectrum that are considered \"prime\". Large domestic wireless carriers are restricted to a single block of prime spectrum each, while all other carriers are restricted to two blocks. Rogers, Bell and Telus are considered large carriers nationally. SaskTel is considered a large carrier in Saskatchewan, and MTS is considered a large carrier in Manitoba.\n- To encourage rural deployments, single carriers who win two paired blocks, or two carriers who share their two paired blocks, are required to use their 700 MHz spectrum to provide coverage to 90% of their HSPA+ territory within five years and 97% within seven years. Industry Canada will use Tier 2 licence areas for the 700Mhz auction. These are 14 large service areas covering all of Canada, and are generally the same size as individual provinces.\nIn March 2013, Industry Canada released *Licensing Framework for Mobile Broadband Services (MBS) – 700 MHz Band*. 
Key things to note:\n\n- Industry Canada confirmed that, for the most part, the policy and technical framework to auction spectrum in the 700 MHz band are the same as proposed in its March 14, 2012 consultation document.\n- The auction will use a combinatorial clock auction (CCA) format, where bids are made for packages of spectrum licences, rather than the simultaneous multiple round auction (SMRA) format used in the past, where bids are made on individual licences.\n- Associated entities can apply to bid separately and to have the auction cap applied individually. These bidders must demonstrate that they \"intend to separately and actively provide services\" within a given licence area for the duration of the spectrum caps (five years after licensing). Industry Canada has determined that no registered bidders were associated with each other.\n\nThe auction was initially set to begin on November 19, 2013. In June 2013, Industry Canada moved the application deadline to September 17, 2013, and the auction start to January 14, 2014.\n\nIn October 2013, Industry Canada released its consultation paper, seeking comments on licencing considerations related to auction format, rules and processes, as well as on conditions of licence for spectrum in the 2500–2690 MHz band. The final policy was released on January 10, 2014.\n\nKey things to note about 2500–2690 MHz spectrum policy:\n\n- Industry Canada adopted a spectrum cap (not an auction cap like in the 700 MHz auction). No carrier participating in the auction may possess more than 40 MHz of 2500–2690 MHz spectrum. Rogers is grandfathered with respect to our holdings in those situations where we already hold more than 40 MHz of this spectrum. We will not be required to return spectrum.\n- There is no special roll-out requirement for 2500–2690 MHz spectrum. 
A general roll-out rule will be determined in the policy.\n- The auction is set to commence on April 15, 2015.\n- The 2500MHz auction will use Tier 3 licence areas.\n\n#### **Roaming and Tower Sharing Policy**\n\nIn March 2013, Industry Canada released *Revised Frameworks for Mandatory Roaming and Antenna Tower and Site Sharing*, concluding a consultation initiated in 2012. It sets out the current rules for roaming and tower and site sharing. Its key terms are:\n\n- All holders of spectrum licences, radio licences and broadcasting certificates must share towers and antenna sites, where technically feasible, at commercial rates.\n- All licensees were permitted to request roaming from other licensees at commercial rates.\n- The timeframe for negotiating agreements is 60 days, after which arbitration according to Industry Canada arbitration rules will begin.\n- The roaming capabilities must provide connectivity for digital voice and data services regardless of the spectrum band or underlying technology used.", - "page_start": 71, - "page_end": 71, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "### *4.3.4 Working life perspective – health*\n\nThis EWCS 2015 question on the **working life perspective** (*'Will you be able to do this or a similar job at 60 years of age?'*) gives quite a good hint to the individual long-term prospects, which might even be more valuable than the question on currently affected health because it is a personal assessment of the overall status of health.\n\n**Differences between countries are significant but not as significant as between other categories, for example, between sectors and occupations**. The EU average of 'No' responses to the question *'Do you think you will be able to do your current job or a similar one until you are 60 years old?'* is at 27%; the eight countries with the highest rates of 'No' responses (between 44% and 33%) are France, Slovenia, Poland, Slovakia, Croatia, Belgium, Malta and Bulgaria. 
Under 25% of 'No' responses were given in eight countries, starting from Portugal (16%) over Germany, Denmark, Ireland, Sweden, Italy, Estonia and Lithuania (24%).263\n\n#### **Figure 35: Opinion on work until the age of 60 – EWCS 2015**\n\n**Young workers under 35 are much more sceptic** than those over 50; 38% say that they will not be able, a much higher percentage than the 22% of workers aged over 50. The employment status is also very important; 26% of the permanently employed respond with a 'No' compared to 39% of those with 'Other arrangements'. Remarkably, only 19% of the self-employed do not believe that they will be able to do their job at 60 years.\n\n**Large differences can be seen between occupation levels.** 37% per cent of the low-skilled manual workers respond with 'No', and 30% of the highly skilled manual workers respond 'No', as do 27% of the low-skilled clerical workers and only 21% of the high-skilled clerical workers, a 16% difference between high-skilled clerical workers and low-skilled manual workers. In some countries only 10% to 15% of the highly skilled clerical workers respond with 'No' while in a number of countries more than 50% of the low-skilled manual workers respond with 'No', for example, in Slovenia, Croatia, Slovakia and Czechia.\n\nThe authors of the Senior Working Life study describe these differences as follows:264\n\n*'For ISCO groups 1–4 (seated work) main expected reasons for retiring were freedom to choose and desire for more leisure time, but many would consider staying longer if there were better possibilities for additional senior days, longer vacations and flexible working hours. For ISCO groups 5–9 (physical work), poor physical health and not being capable of doing the job were common expected reasons for retiring, but many would consider staying longer if the work were less physically demanding and there were more senior days. Possibility for pension was a general expected reason for retiring. 
Expected reasons differed to a less extent between genders than between ISCO groups, e.g. economic factors were more important for men and high work demands more important for women.*", - "page_start": 95, - "page_end": 95, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "with. The vast majority of in-copyright books are out-of-print or out-of-commerce, and most are not actively managed by their rightsholders. There is no official registry of copyrighted works and their owners, and existing datasets can be incomplete or erroneous. 16\n\nAs a result, there may be no way to license the vast majority of in-copyright books, especially those that have or have had limited commercial value. Put differently, the barrier to using 17 most books is not simply to pay publishers; even if one had significant financial resources, licensing would not enable access to most works.\n\n#### **Permissively licensed works**\n\nThere are books that have been permissively licensed in an easily identifiable way, such as works placed under Creative Commons (CC) licenses. Such works explicitly allow particular uses of works subject to various responsibilities (e.g., requiring attribution by the user in their follow-on use).\n\nWhile such works could be candidates for inclusion in a books data commons, their inclusion depends on whether the license's terms can be complied with in the context of AI training. For instance, in the context of CC licensed works, there are requirements for proper attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public Domain Mark (PDM) are not licenses and do not require attribution).18\n\nSee e.g. Heald, Paul J. \"How Copyright Makes Books and Music Disappear (and How Secondary 16 Liability Rules Help Resurrect Old Songs).\" Illinois Program in Law, Behavior and Social Science Paper No. LBSS14-07 Illinois Public Law Research Paper No. 13-54 https://doi.org/10.2139/ssrn.2290181. 
Accessed 4 Jan. 2020, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2290181; Rosen, Rebecca J. \"Why Are so Few Books from the 20th Century Available as Ebooks?\" *The Atlantic*, 18 Mar. 2014, www.theatlantic.com/business/archive/2014/03/why-are-so-few-books-from-the-20th-centuryavailable-as-ebooks/284486/. See also \"Google Book Search Settlement and Access to Out of Print Books.\" *Google Public Policy Blog*, publicpolicy.googleblog.com/2009/06/google-book-searchsettlement-and.html. Accessed 20 Mar. 2024 (discussing this issue in the context of the failed classaction settlement between Google, the Authors Guild, and the Association of American Publishers). Google's final brief in the settlement proceedings notes the \"prohibitive transaction costs of identifying and locating individual Rightsholders of these largely older, out-of-print books\" — see this brief at https:// web.archive.org/web/20130112060651/http://thepublicindex.org/docs/amended_settlement/ google_final_approval_support.pdf. The Authors Guild and Association of American Publishers also justified the settlement's terms in light of the fact that \"the transaction costs involved in finding copyright owners and clearing the rights are too high\"; while they argued that most works are not truly \"orphans,\" they note that total transaction costs as a whole (including, for example, determining whether the author or publisher holds the rights and then negotiating rates) are so high as to block uses of outof-print works anyway — see this brief at https://web.archive.org/web/20130112060213/http:// thepublicindex.org/docs/amended_settlement/Supplemental_memorandum_of_law.pdf.\n\nIn the EU, the 2019 Copyright Directive introduced specific provisions on the \"use of out-of-commerce 17 works and other subject matter by cultural heritage institutions\" (Articles 8-11 CDSMD). 
These provisions allow cultural heritage institutions to \"make available, for non-commercial purposes, out-ofcommerce works or other subject matter permanently in their collections\". The limitation to noncommercial purposes means that works made available under these provisions would be of limited use in building a books data commons.\n\nFor one assessment of the difficulties of complying with the CC licenses in this context, to the extent 18 they are applicable, see Lee, K., A. Feder Cooper, & Grimmelmann, J. (2023). Talkin' 'Bout AI Generation: Copyright and the Generative AI Supply Chain. Forthcoming, *Journal of the Copyright Society* 2024. https://doi.org/10.2139/ssrn.4523551.", - "page_start": 9, - "page_end": 9, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# Understanding Creative Commons license\n\nbefore licensing your work\n\n## **THREE-LAYER DESIGN**\n\nCreative Commons (CC) license has three layers:\n\n- \"Legal Code\" (base layer): contains terms and conditions to be used by lawyers and legally applicable in court.\n- \"Human Readable\" (commons deeds): contain the summary of the legal code and key terms.\n- \"Machine Readable\" : contains HTML or codes for machines to recognize a work is available under a Creative Commons license.\n\n# **FOUR ELEMENTS**\n\n- BY (\"Attribution\"): users must credit the author of the work they are using.\n- SA (\"ShareAlike\"): adaptations based on this work must be licensed under the same license.\n- NC (\"NonCommercial\"): the work is only available to be used for\n\nND\n\nSA\n\nnoncommercial purposes.\n\n- ND (\"NoDerivative\"): reusers making cannot share adaptations of the work.\n# **SIX LICENSES**\n\n- CC BY (\"Attribution\") allows people to use the work for any purpose (even commercially and even in modified form) as long as they give attribution to the creator.\n- CC BY-SA (\"Attribution-ShareAlike\") allows people to use the work for any purpose (even commercially and even in modified form), 
as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-NC (\"Attribution-NonCommercial\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator.\n- CC BY-NC-SA (\"Attribution-NonCommercial-ShareAlike\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-ND (\"Attribution-NoDerivative\") allows people to use the unadapted work for any purpose (even commercially), as long as they give attribution to the creator.\n- CC BY-NC-ND (\"Attribution-NonCommercial-NoDerivative\") allows people to use the unadapted work for noncommercial purposes only, and only as long as they give attribution to the licensor.\n\n# **REMIND THAT…**\n\nCC license only applicable to the work that is within the scope of copyright law. CC license can be used when …\n\n- you want to give others permissions to freely copy and redistribute your work, and\n- you want to give others permission to freely transform, alter, or otherwise create derivative works based on your work.\n\n#### **CC LICENSE CAN'T BE USED FOR …**\n\nfair use, fair dealing, or some other limitation and exception to copyright applies the the work.\n\n### **ALSO FOR …**\n\nthe work that is already in the Public Domain. For those who want to waive their rights from copyright protection, use CC0 (\"CC Zero\").\n\n# **NOW, SHARE YOUR WORK!** https://creativecommons.org/choose/\n\nTexts are adapted from CC Certification for Educators. CC BY license.\n\nBY, SA, NC, ND icons, CC BY, CC BY-SA, CC BY-NC, CC BY-NC-SA, CC BY-ND, and CC BY-NC-ND buttons are trademark of Creative Commons, and subject to their policies. 3-layer design of CC license image is taken from CC Certification for Educators. 
CC BY license. Line, icons, and gradients are from Canva, and subject to their policies.", - "page_start": 0, - "page_end": 0, - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf" - }, - { - "text": "# *6. Cross-cutting design questions*\n\nThe workshops briefly touched on several cross-cutting design questions. While most relevant for approaches that depend on limitations and exceptions, considerations of these questions may be relevant across both tracks.\n\n### *Would authors, publishers, and other relevant rightsholders and creators have any ability to exclude their works?*\n\nOne of the greatest sources of controversy in this area is the extent to which rightsholders of copyrighted works, as well as the original creators of such works (e.g., book authors in this context), should be able to prevent use of their works for AI training.\n\nWhile a system that required affirmative \"opt-in\" consent would limit utility significantly (as discussed above in the context of directly licensing works), a system that allowed some forms of \"opt-out\" could still be quite useful to some types of AI development. In the context of use cases like development of LLMs, the performance impact may not be so significant. Since most in-copyright books are not actively managed, the majority of books would remain in the corpus by default. The performance of LLMs can still be improved across various dimensions without including, for example, the most famous writers or those who continue to commercially exploit their works and may choose to exercise an opt-out. Perhaps the potential for licensing relationships (and revenue) may induce some rightsholders to come forward and begin actively managing their works. In such a case, uses that do require a license may once again become more feasible once the rightsholder can be reached.\n\nWorkshop participants discussed different types of opt-outs that could be built. 
For example, opt-outs could be thought of not in blanket terms, but only as applied to certain uses, for example to commercial uses of the corpus, but not research uses. This could build on or mirror the approach that the EU has taken in its text and data mining exceptions to copyright. Opt-outs might be more granular, by focusing on allowing or forbidding particular 38 uses or other categories of users, given that rights holders have many different sets of preferences.\n\nAnother question is about *who* can opt-out particular works from the dataset. This could solely be an option for copyright holders, although authors might be allowed to exercise an opt-out for their books even if they don't hold the copyrights. This might create challenges if the author and rightsholder disagree about whether to opt a particular book out of the corpus. Another related issue is that individual books, such as anthologies, may comprise works created (and rights held) by many different entities. The images in a book may have come from third-party sources, for instance, or a compendium of poetry might involve many\n\nIn fact, as noted above, to the extent an AI model developer intends for their model to abide by the 38 EU's legal regime, they will have to abide by such opt-outs, at least if they are engaged in text and data mining for commercial uses and/or are users outside of the covered set of research and heritage institutions. 
A books data commons may incorporate opt-outs in particular to serve such EU-focused AI developers.", - "page_start": 17, - "page_end": 17, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## Regulation in Our Industry\n\nOur business, except for the non-broadcasting operations of Media, is regulated by two groups:\n\n- the Canadian Federal Department of Industry on behalf of the Minister of Industry (Canada) (together, Industry Canada)\n- the CRTC, under the *Telecommunications Act (Canada)* (Telecommunications Act) and the *Broadcasting Act (Canada)* (Broadcasting Act).\n\nRegulation relates to the following, among other things:\n\n- wireless spectrum and broadcasting licensing\n- competition\n- the cable television programming services we must, and can, distribute\n- wireless and wireline interconnection agreements\n- rates we can charge third parties for access to our network\n- the resale of our networks\n- roaming on our networks\n- ownership and operation of our communications systems\n- our ability to acquire an interest in other communications systems.\n\nRegulatory changes or decisions can adversely affect our consolidated results of operations.\n\nOur costs of providing services may increase from time to time as we comply with industry or legislative initiatives to address consumer protection concerns or Internet-related issues like copyright infringement, unsolicited commercial e-mail, cybercrime and lawful access.\n\nGenerally, our spectrum and broadcast licences are granted for a specified term and are subject to conditions for maintaining these licences. The regulators can modify these licensing conditions at any time, and they can decide not to renew a licence when it expires. 
If we do not comply with the conditions, a licence may be forfeited or revoked, or we may be fined.\n\nThe licences have conditions that require us, amongst other things, to comply with Canadian ownership restrictions of the applicable legislation, and we are currently in compliance with them. If we violate the requirements, we would be subject to various penalties and it could include losing a licence in extreme cases.\n\nCable, wireless and broadcasting licences generally cannot be transferred without regulatory approval.\n\n#### **Canadian Broadcasting Operations**\n\nOur Canadian broadcasting operations – including our cable television systems, radio and television stations, and specialty services – are licenced (or operated under an exemption order) and regulated by the CRTC under the Broadcasting Act.\n\nThe CRTC is responsible for regulating and supervising all aspects of the Canadian broadcasting system. It is also responsible under the Telecommunications Act for the regulation of telecommunications carriers, including:\n\n- Wireless' mobile voice and data operations\n- Cable's Internet and telephone services.\n\nOur cable and telecommunications retail services are not subject to price regulation, because the CRTC believes there is enough competition for these services provided by other carriers to protect the interests of users, so has forborne from regulating them. Regulations\n\ncan and do, however, affect the terms and conditions under which we offer these services.\n\n#### **Spectrum Licences**\n\nIndustry Canada sets technical standards for telecommunications under the *Radiocommunication Act (Canada)* (Radiocommunication Act) and the Telecommunications Act. 
It licences and oversees:\n\n- the technical aspects of the operation of radio and television stations\n- the frequency-related operations of cable television networks\n- awarding and supervising spectrum for wireless communications systems in Canada.\n\n#### **Royalties**\n\nThe Copyright Board of Canada (Copyright Board) oversees the administration of copyright royalties in Canada and establishes the royalties to be paid for the use of certain copyrighted works. It sets the copyright tariff royalties that Canadian broadcasting undertakings, including cable, radio, television and specialty services, pay to copyright collectives.\n\n#### **Billing and Contracts**\n\nThe Quebec Consumer Protection Act amendments, effective June 2010, introduced new provisions applicable to wireless, wireline and Internet service contracts. These amendments include new rules on the content of such contracts, the determination of the early cancellation fees that can be charged to customers, the use of security deposits and the cancellation and renewal rights of the consumers. The amendments also established new provisions on the sale of prepaid cards and the disclosure of related costs.\n\nAmendments to the Manitoba Consumer Protection Act took effect in September 2012 and parallel the changes to the Quebec Consumer Protection Act. Similar legislation also came into effect in September 2012 in Newfoundland and Labrador and has been tabled in Nova Scotia. A private member's bill proposing similar legislation has been introduced in New Brunswick.\n\nIn April 2012, the Ontario government announced that it would be introducing legislation addressing wireless bills and contracts. The legislation seeks to ensure that contracts are written in plain language and spell out which services come with the basic fee and which would result in a higher bill. It requires providers to obtain consent in writing before they renew or amend a contract. 
The legislation also seeks a cap on the cost of cancelling a fixed-term contract that would vary depending on the circumstances of the contract. The proposed legislation, which would affect new contracts, would take effect six months after being passed and would also cover existing agreements that are amended, renewed or extended after that date. The legislation was passed into law in October 2013.\n\nSee also \"CRTC Wireless Code\" section under Wireless Regulation.\n\n#### **Foreign Ownership and Control**\n\nNon-Canadians can own and control directly or indirectly:\n\n- up to 33.3% of the voting shares and the related votes of a holding company that has a subsidiary operating company licenced under the Broadcasting Act, and\n- up to 20% of the voting shares and the related votes of the operating licensee company may be owned and controlled directly or indirectly by non-Canadians.", - "page_start": 70, - "page_end": 70, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "*creators* or from any third parties and all the necessary *pre-existing rights* have been obtained or licensed.\n\nTo that effect, the contractor must establish a list of all *pre-existing rights* to the *results* of this FWC or parts thereof, including identification of the rights' owners. If there are no *preexisting rights* to the *results*, the contractor must provide a declaration to that effect. The contractor must provide this list or declaration to the contracting authority together with the invoice for payment of the balance at the latest.\n\n# **II.13.5. Evidence of granting of pre-existing rights**\n\nUpon request by the contracting authority, the contractor must, in addition to the list mentioned under Article II.13.4., provide evidence that it has the ownership or the right to use all the listed *pre-existing rights*, except for the rights owned or licensed by the contracting authority. 
The contracting authority may request this evidence even after the end of this FWC.\n\nThis provision also applies to image rights and sound recordings.\n\nThis evidence may refer, for example, to rights to: parts of other documents, images, graphs, sounds, music, tables, data, software, technical inventions, know-how, IT development tools, routines, subroutines or other programs ('background technology'), concepts, designs, installations or pieces of art, data, source or background materials or any other parts of external origin.\n\nThis evidence must include, as appropriate:\n\n- (a) the name and version number of a software product;\n- (b) the full identification of the work and its author, developer, *creator*, translator, data entry person, graphic designer, publisher, editor, photographer, producer;\n- (c) a copy of the licence to use the product or of the agreement granting the relevant rights to the contractor or a reference to this licence;\n- (d) a copy of the agreement or extract from the employment contract granting the relevant rights to the contractor where parts of the *results* were created by its *personnel*;\n- (e) the text of the disclaimer notice if any.\n\nProvision of evidence does not release the contractor from its responsibilities if it is found that it does not hold the necessary rights, regardless of when and by whom this fact is revealed.\n\nThe contractor also warrants that it possesses the relevant rights or powers to execute the transfer and that it has paid or has verified payment of all due fees including fees due to collecting societies, related to the final *results*.\n\n# **II.13.6. Quotation of works in the result**\n\nIn the *result*, the contractor must clearly point out all quotations of existing works. 
The complete reference should include as appropriate, the following: name of the author, title of the work, date and place of publication, date of creation, address of publication on the internet, number, volume and other information that allows the origin to be easily identified.", - "page_start": 25, - "page_end": 25, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "# **7.2.1 Limitations**\n\nThe maximum input file size that is supported by PDF Indexer is 4 GB. The amount of data that can be processed from an input file is also limited by the amount of memory that is available on the server on which you are running the PDF Indexer. The maximum size of a single document within the input file that can be loaded into Content Manager OnDemand is 2 GB; however, we suggest that the size of a single PDF document does not exceed 50 MB.\n\nSecure PDF documents are not supported. PDF Digital Signatures are not supported. If a PDF document contains a digital signature, after indexing, the .out file does not contain the digital signature. To load a file that contains a PDF Digital Signature, create a generic index file for it, and load the file as one document.\n\n# **7.3 Performance considerations**\n\nThe best performance of the PDF Indexer is on the Windows platform. For the preferred performance practices, see 13.4.1, \"PDF data\" on page 308.\n\n# **7.3.1 PDF fonts and output file size**\n\nThe fonts that are used in a PDF document are one of the factors that determines the indexing's output file size.\n\n# **The base 14 Type 1 fonts**\n\nThe base 14 Type 1 fonts are a core set of fonts that are always available to the Acrobat program. Because they are available on the system, they are *not* embedded in the document. Therefore, documents that are created with these fonts are more compact. 
The base 14 fonts are listed:\n\n- -Courier\n- -Courier-Bold\n- -Courier-BoldOblique\n- -Courier-Oblique\n- -Helvetica\n- -Helvetica-Bold\n- -Helvetica-BoldOblique\n- -Helvetica-Oblique\n- -Times-Roman\n- -Times-Bold\n- -Times-Italic\n- -Times-BoldItalic\n- -Symbol\n- -ZapfDingbats\n\nFonts that are not members of the base 14 fonts might be embedded in the document, or they might be stored in a font directory.\n\nImages and bar code fonts are also embedded in the document.\n\nThe PDF Indexer collects resources, such as fonts and images, removes them from the document, and places them in a resource file. The number of embedded fonts in the document directly affects the size of the resource file.", - "page_start": 189, - "page_end": 189, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf", - "query": "In which case CC licence can't be used ?", - "target_page": 1, - "target_passage": "fair use, fair dealing, or some other limitation and exception to copyright applies the the work.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# Understanding Creative Commons license\n\nbefore licensing your work\n\n## **THREE-LAYER DESIGN**\n\nCreative Commons (CC) license has three layers:\n\n- \"Legal Code\" (base layer): contains terms and conditions to be used by lawyers and legally applicable in court.\n- \"Human Readable\" (commons deeds): contain the summary of the legal code and key terms.\n- \"Machine Readable\" : contains HTML or codes for machines to recognize a work is available under a Creative Commons license.\n\n# **FOUR ELEMENTS**\n\n- BY (\"Attribution\"): users must credit the author of the work they are using.\n- SA (\"ShareAlike\"): adaptations based on this work must be licensed under the same license.\n- NC (\"NonCommercial\"): the work is only available to be used for\n\nND\n\nSA\n\nnoncommercial purposes.\n\n- ND 
(\"NoDerivative\"): reusers making cannot share adaptations of the work.\n# **SIX LICENSES**\n\n- CC BY (\"Attribution\") allows people to use the work for any purpose (even commercially and even in modified form) as long as they give attribution to the creator.\n- CC BY-SA (\"Attribution-ShareAlike\") allows people to use the work for any purpose (even commercially and even in modified form), as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-NC (\"Attribution-NonCommercial\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator.\n- CC BY-NC-SA (\"Attribution-NonCommercial-ShareAlike\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-ND (\"Attribution-NoDerivative\") allows people to use the unadapted work for any purpose (even commercially), as long as they give attribution to the creator.\n- CC BY-NC-ND (\"Attribution-NonCommercial-NoDerivative\") allows people to use the unadapted work for noncommercial purposes only, and only as long as they give attribution to the licensor.\n\n# **REMIND THAT…**\n\nCC license only applicable to the work that is within the scope of copyright law. CC license can be used when …\n\n- you want to give others permissions to freely copy and redistribute your work, and\n- you want to give others permission to freely transform, alter, or otherwise create derivative works based on your work.\n\n#### **CC LICENSE CAN'T BE USED FOR …**\n\nfair use, fair dealing, or some other limitation and exception to copyright applies the the work.\n\n### **ALSO FOR …**\n\nthe work that is already in the Public Domain. 
For those who want to waive their rights from copyright protection, use CC0 (\"CC Zero\").\n\n# **NOW, SHARE YOUR WORK!** https://creativecommons.org/choose/\n\nTexts are adapted from CC Certification for Educators. CC BY license.\n\nBY, SA, NC, ND icons, CC BY, CC BY-SA, CC BY-NC, CC BY-NC-SA, CC BY-ND, and CC BY-NC-ND buttons are trademark of Creative Commons, and subject to their policies. 3-layer design of CC license image is taken from CC Certification for Educators. CC BY license. Line, icons, and gradients are from Canva, and subject to their policies.", - "page_start": 0, - "page_end": 0, - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf" - }, - { - "text": "# Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work — on conditions of your choice. CC licenses let you change your copyright terms from the default of \"all rights reserved\" to \"some rights reserved.\"\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. 
When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n### Public domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark. Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n#### Where public domain tools fit in the copyright spectrum\n\n# The CC0 Public Domain Dedication\n\n**Use this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.**\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. 
When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\n## What is the difference between CC0 and the Public Domain Mark?\n\nCC0 (\"CC Zero\") is intended for use only by authors or holders of copyright and related rights (including database rights), in connection\n\nwith works that are still subject to those rights in one or more countries.\n\nWhen CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. Unlike CC0, PDM doesn't\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\nchange the copyright status of a work.\n\n# Public Domain Mark\n\n**Use this tool if you have identified a work that is free of known copyright restrictions.**\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "with. 
The vast majority of in-copyright books are out-of-print or out-of-commerce, and most are not actively managed by their rightsholders. There is no official registry of copyrighted works and their owners, and existing datasets can be incomplete or erroneous. 16\n\nAs a result, there may be no way to license the vast majority of in-copyright books, especially those that have or have had limited commercial value. Put differently, the barrier to using 17 most books is not simply to pay publishers; even if one had significant financial resources, licensing would not enable access to most works.\n\n#### **Permissively licensed works**\n\nThere are books that have been permissively licensed in an easily identifiable way, such as works placed under Creative Commons (CC) licenses. Such works explicitly allow particular uses of works subject to various responsibilities (e.g., requiring attribution by the user in their follow-on use).\n\nWhile such works could be candidates for inclusion in a books data commons, their inclusion depends on whether the license's terms can be complied with in the context of AI training. For instance, in the context of CC licensed works, there are requirements for proper attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public Domain Mark (PDM) are not licenses and do not require attribution).18\n\nSee e.g. Heald, Paul J. \"How Copyright Makes Books and Music Disappear (and How Secondary 16 Liability Rules Help Resurrect Old Songs).\" Illinois Program in Law, Behavior and Social Science Paper No. LBSS14-07 Illinois Public Law Research Paper No. 13-54 https://doi.org/10.2139/ssrn.2290181. Accessed 4 Jan. 2020, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2290181; Rosen, Rebecca J. \"Why Are so Few Books from the 20th Century Available as Ebooks?\" *The Atlantic*, 18 Mar. 2014, www.theatlantic.com/business/archive/2014/03/why-are-so-few-books-from-the-20th-centuryavailable-as-ebooks/284486/. 
See also \"Google Book Search Settlement and Access to Out of Print Books.\" *Google Public Policy Blog*, publicpolicy.googleblog.com/2009/06/google-book-searchsettlement-and.html. Accessed 20 Mar. 2024 (discussing this issue in the context of the failed classaction settlement between Google, the Authors Guild, and the Association of American Publishers). Google's final brief in the settlement proceedings notes the \"prohibitive transaction costs of identifying and locating individual Rightsholders of these largely older, out-of-print books\" — see this brief at https:// web.archive.org/web/20130112060651/http://thepublicindex.org/docs/amended_settlement/ google_final_approval_support.pdf. The Authors Guild and Association of American Publishers also justified the settlement's terms in light of the fact that \"the transaction costs involved in finding copyright owners and clearing the rights are too high\"; while they argued that most works are not truly \"orphans,\" they note that total transaction costs as a whole (including, for example, determining whether the author or publisher holds the rights and then negotiating rates) are so high as to block uses of outof-print works anyway — see this brief at https://web.archive.org/web/20130112060213/http:// thepublicindex.org/docs/amended_settlement/Supplemental_memorandum_of_law.pdf.\n\nIn the EU, the 2019 Copyright Directive introduced specific provisions on the \"use of out-of-commerce 17 works and other subject matter by cultural heritage institutions\" (Articles 8-11 CDSMD). These provisions allow cultural heritage institutions to \"make available, for non-commercial purposes, out-ofcommerce works or other subject matter permanently in their collections\". 
The limitation to noncommercial purposes means that works made available under these provisions would be of limited use in building a books data commons.\n\nFor one assessment of the difficulties of complying with the CC licenses in this context, to the extent 18 they are applicable, see Lee, K., A. Feder Cooper, & Grimmelmann, J. (2023). Talkin' 'Bout AI Generation: Copyright and the Generative AI Supply Chain. Forthcoming, *Journal of the Copyright Society* 2024. https://doi.org/10.2139/ssrn.4523551.", - "page_start": 9, - "page_end": 9, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "Combined, these limits can enable effective foreign control of up to 46.7%.\n\nThe chief executive officer and 80% of the members of the Board of Directors of the operating licensee must be resident Canadians. There are no restrictions on the number of non-voting shares that may be held by non-Canadians at either the holding-company or licenseecompany level. Neither the Canadian carrier nor its parent may be otherwise controlled in fact by non-Canadians. Subject to appeal to the federal Cabinet, the CRTC has the jurisdiction to determine as a question of fact whether a given licensee is controlled by non-Canadians.\n\nPursuant to the Telecommunications Act and associated regulations, the same rules also apply to Canadian telecommunications carriers such as Wireless, except that there is no requirement that the chief executive officer be a resident Canadian. We believe we are in compliance with the foregoing foreign ownership and control requirements.\n\nOn June 29, 2012, Bill C-38 amending the Telecommunications Act passed into law. The amendments exempt telecommunications companies with less than 10% of total Canadian telecommunications market measured by revenue from foreign investment restrictions. 
Companies that are successful in growing their market shares in excess of 10% of total Canadian telecommunications market revenues other than by way of merger or acquisitions will continue to be exempt from the restrictions.\n\n#### WIRELESS\n\n#### **Consultation on the Renewal of Cellular and Personal Communications Services (PCS) Spectrum Licences**\n\nIn March 2011, Industry Canada released its decisions about the renewal process for cellular and PCS licences that began expiring at that time. Key things to note:\n\n- At the end of the current licence term, new cellular and PCS licences with a 20-year term will be issued to licensees that are in compliance with all licence conditions.\n- The previously existing annual fee of $0.0351 per MHz per population of the licenced area will continue to apply to all cellular and PCS licences, including those initially assigned by auction. The Minister of Industry Canada may review and amend the fees during the licence term after further consultation with licensees.\n- A determination regarding existing research and development conditions of licence was not released at that time and will be released separately. A decision has not been made to date, and until such a time, the current conditions of licence remain in effect.\n\n#### **Consultation on a Policy and Technical Framework for the 700Mhz and 2500-2690Mhz Band and Aspects Related to Commercial Mobile Spectrum**\n\nIn March 2012, Industry Canada released its policy and technical framework for the auction of spectrum in the 700 MHz and 2500–2690 MHz spectrum bands. Key things to note:\n\n- Industry Canada adopted an auction cap for the 700 MHZ (not a setaside like in the 2008 Advanced Wireless Services (AWS) spectrum auction). There are four blocks of spectrum that are considered \"prime\". Large domestic wireless carriers are restricted to a single block of prime spectrum each, while all other carriers are restricted to two blocks. 
Rogers, Bell and Telus are considered large carriers nationally. SaskTel is considered a large carrier in Saskatchewan, and MTS is considered a large carrier in Manitoba.\n- To encourage rural deployments, single carriers who win two paired blocks, or two carriers who share their two paired blocks, are required to use their 700 MHz spectrum to provide coverage to 90% of their HSPA+ territory within five years and 97% within seven years. Industry Canada will use Tier 2 licence areas for the 700Mhz auction. These are 14 large service areas covering all of Canada, and are generally the same size as individual provinces.\nIn March 2013, Industry Canada released *Licensing Framework for Mobile Broadband Services (MBS) – 700 MHz Band*. Key things to note:\n\n- Industry Canada confirmed that, for the most part, the policy and technical framework to auction spectrum in the 700 MHz band are the same as proposed in its March 14, 2012 consultation document.\n- The auction will use a combinatorial clock auction (CCA) format, where bids are made for packages of spectrum licences, rather than the simultaneous multiple round auction (SMRA) format used in the past, where bids are made on individual licences.\n- Associated entities can apply to bid separately and to have the auction cap applied individually. These bidders must demonstrate that they \"intend to separately and actively provide services\" within a given licence area for the duration of the spectrum caps (five years after licensing). Industry Canada has determined that no registered bidders were associated with each other.\n\nThe auction was initially set to begin on November 19, 2013. 
In June 2013, Industry Canada moved the application deadline to September 17, 2013, and the auction start to January 14, 2014.\n\nIn October 2013, Industry Canada released its consultation paper, seeking comments on licencing considerations related to auction format, rules and processes, as well as on conditions of licence for spectrum in the 2500–2690 MHz band. The final policy was released on January 10, 2014.\n\nKey things to note about 2500–2690 MHz spectrum policy:\n\n- Industry Canada adopted a spectrum cap (not an auction cap like in the 700 MHz auction). No carrier participating in the auction may possess more than 40 MHz of 2500–2690 MHz spectrum. Rogers is grandfathered with respect to our holdings in those situations where we already hold more than 40 MHz of this spectrum. We will not be required to return spectrum.\n- There is no special roll-out requirement for 2500–2690 MHz spectrum. A general roll-out rule will be determined in the policy.\n- The auction is set to commence on April 15, 2015.\n- The 2500MHz auction will use Tier 3 licence areas.\n\n#### **Roaming and Tower Sharing Policy**\n\nIn March 2013, Industry Canada released *Revised Frameworks for Mandatory Roaming and Antenna Tower and Site Sharing*, concluding a consultation initiated in 2012. It sets out the current rules for roaming and tower and site sharing. 
Its key terms are:\n\n- All holders of spectrum licences, radio licences and broadcasting certificates must share towers and antenna sites, where technically feasible, at commercial rates.\n- All licensees were permitted to request roaming from other licensees at commercial rates.\n- The timeframe for negotiating agreements is 60 days, after which arbitration according to Industry Canada arbitration rules will begin.\n- The roaming capabilities must provide connectivity for digital voice and data services regardless of the spectrum band or underlying technology used.", - "page_start": 71, - "page_end": 71, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "# **Training in how to use CC Licenses is key to their adoption.**\n\nWe offer a ten-week **CC Certificate** program that is now tailored not only to the education and library sectors, but also galleries, archives, libraries, and museums and **available in 10 languages**.\n\nAs of 2023, we've certified:\n\n### **In 2023, we greatly expanded our CC Licenses training and education offerings:**\n\n#### **19 Workshops & Trainings**\n\nwith institutions like ALA, Connecticut Humanities & State University of New York, Digital Research Alliance of Canada, and WikiConf North America.\n\n#### **2 Week-Long CC Certificate Bootcamps**\n\nfor California Community Colleges.\n\n#### **27 Webinars**\n\non topics like the basics of Open Culture, the possibilties of Open Educational Resources (OER) for business-university cooperation, and the future of CC Licenses in digital and online education.\n\n#### **12 CC Legal Open Office Hours**\n\nhosted by our legal team, providing a personalized opportunity for the CC community to ask questions about CC Licenses, open access, and sharing.", - "page_start": 4, - "page_end": 4, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "## Regulation in Our Industry\n\nOur business, except for the non-broadcasting operations of Media, is regulated by two 
groups:\n\n- the Canadian Federal Department of Industry on behalf of the Minister of Industry (Canada) (together, Industry Canada)\n- the CRTC, under the *Telecommunications Act (Canada)* (Telecommunications Act) and the *Broadcasting Act (Canada)* (Broadcasting Act).\n\nRegulation relates to the following, among other things:\n\n- wireless spectrum and broadcasting licensing\n- competition\n- the cable television programming services we must, and can, distribute\n- wireless and wireline interconnection agreements\n- rates we can charge third parties for access to our network\n- the resale of our networks\n- roaming on our networks\n- ownership and operation of our communications systems\n- our ability to acquire an interest in other communications systems.\n\nRegulatory changes or decisions can adversely affect our consolidated results of operations.\n\nOur costs of providing services may increase from time to time as we comply with industry or legislative initiatives to address consumer protection concerns or Internet-related issues like copyright infringement, unsolicited commercial e-mail, cybercrime and lawful access.\n\nGenerally, our spectrum and broadcast licences are granted for a specified term and are subject to conditions for maintaining these licences. The regulators can modify these licensing conditions at any time, and they can decide not to renew a licence when it expires. If we do not comply with the conditions, a licence may be forfeited or revoked, or we may be fined.\n\nThe licences have conditions that require us, amongst other things, to comply with Canadian ownership restrictions of the applicable legislation, and we are currently in compliance with them. 
If we violate the requirements, we would be subject to various penalties and it could include losing a licence in extreme cases.\n\nCable, wireless and broadcasting licences generally cannot be transferred without regulatory approval.\n\n#### **Canadian Broadcasting Operations**\n\nOur Canadian broadcasting operations – including our cable television systems, radio and television stations, and specialty services – are licenced (or operated under an exemption order) and regulated by the CRTC under the Broadcasting Act.\n\nThe CRTC is responsible for regulating and supervising all aspects of the Canadian broadcasting system. It is also responsible under the Telecommunications Act for the regulation of telecommunications carriers, including:\n\n- Wireless' mobile voice and data operations\n- Cable's Internet and telephone services.\n\nOur cable and telecommunications retail services are not subject to price regulation, because the CRTC believes there is enough competition for these services provided by other carriers to protect the interests of users, so has forborne from regulating them. Regulations\n\ncan and do, however, affect the terms and conditions under which we offer these services.\n\n#### **Spectrum Licences**\n\nIndustry Canada sets technical standards for telecommunications under the *Radiocommunication Act (Canada)* (Radiocommunication Act) and the Telecommunications Act. It licences and oversees:\n\n- the technical aspects of the operation of radio and television stations\n- the frequency-related operations of cable television networks\n- awarding and supervising spectrum for wireless communications systems in Canada.\n\n#### **Royalties**\n\nThe Copyright Board of Canada (Copyright Board) oversees the administration of copyright royalties in Canada and establishes the royalties to be paid for the use of certain copyrighted works. 
It sets the copyright tariff royalties that Canadian broadcasting undertakings, including cable, radio, television and specialty services, pay to copyright collectives.\n\n#### **Billing and Contracts**\n\nThe Quebec Consumer Protection Act amendments, effective June 2010, introduced new provisions applicable to wireless, wireline and Internet service contracts. These amendments include new rules on the content of such contracts, the determination of the early cancellation fees that can be charged to customers, the use of security deposits and the cancellation and renewal rights of the consumers. The amendments also established new provisions on the sale of prepaid cards and the disclosure of related costs.\n\nAmendments to the Manitoba Consumer Protection Act took effect in September 2012 and parallel the changes to the Quebec Consumer Protection Act. Similar legislation also came into effect in September 2012 in Newfoundland and Labrador and has been tabled in Nova Scotia. A private member's bill proposing similar legislation has been introduced in New Brunswick.\n\nIn April 2012, the Ontario government announced that it would be introducing legislation addressing wireless bills and contracts. The legislation seeks to ensure that contracts are written in plain language and spell out which services come with the basic fee and which would result in a higher bill. It requires providers to obtain consent in writing before they renew or amend a contract. The legislation also seeks a cap on the cost of cancelling a fixed-term contract that would vary depending on the circumstances of the contract. The proposed legislation, which would affect new contracts, would take effect six months after being passed and would also cover existing agreements that are amended, renewed or extended after that date. 
The legislation was passed into law in October 2013.\n\nSee also \"CRTC Wireless Code\" section under Wireless Regulation.\n\n#### **Foreign Ownership and Control**\n\nNon-Canadians can own and control directly or indirectly:\n\n- up to 33.3% of the voting shares and the related votes of a holding company that has a subsidiary operating company licenced under the Broadcasting Act, and\n- up to 20% of the voting shares and the related votes of the operating licensee company may be owned and controlled directly or indirectly by non-Canadians.", - "page_start": 70, - "page_end": 70, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "(2) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision-\n\n- (a) that is reasonably required in the interests of defence, public safety, public order, public morality, public health, town and country planning, the development and utilization of mineral resources, for the purpose of any census or in order to secure the development or utilization of any property for a purpose beneficial to the community;\n- (b) that is reasonably required for the purpose of protecting the rights or freedoms of other persons;\n- (c) that authorizes an officer or agent of the Government of Botswana, a local government authority or a body corporate established by law for a public purpose to enter on the premises of any person in order to inspect those premises or anything thereon for the purpose of any tax, rate or duty or in order to carry out work connected with any property that is lawfully on those premises and that belongs to that Government, authority or body corporate, as the case may be; or\n- (d) that authorizes, for the purpose of enforcing the judgment or order of a court in any civil proceedings, the search of any person or property by order of a court or entry upon any premises by such order,\n\nand except 
so far as that provision or, as the case may be, anything done under the authority thereof is shown not to be reasonably justifiable in a democratic society.\n\n# **10. Provisions to secure protection of law**\n\n(1) If any person is charged with a criminal offence, then, unless the charge is withdrawn, the case shall be afforded a fair hearing within a reasonable time by an independent and impartial court established or recognized by law.\n\n(2) Every person who is charged with a criminal offence-\n\n- (a) shall be presumed to be innocent until he or she is proved or has pleaded guilty;\n- (b) shall be informed as soon as reasonably practicable, in a language that he or she understands and in detail, of the nature of the offence charged;\n- (c) shall be given adequate time and facilities for the preparation of his or her defence;\n- (d) shall be permitted to defend himself or herself before the court in person or, at his or her own expense, by a legal representative of his or her own choice;\n- (e) shall be afforded facilities to examine in person or by his or her legal representative the witnesses called by the prosecution before the court, and to obtain the attendance and carry out the examination of witnesses to testify on his or her behalf before the court on the same conditions as those applying to witnesses called by the prosecution; and\n- (f) shall be permitted to have without payment the assistance of an interpreter if he or she cannot understand the language used at the trial of the charge,\n\nand except with his or her own consent the trial shall not take place in his or her absence unless he or she so conducts himself or herself as to render the continuance of the proceedings in his or her presence impracticable and the court has ordered him or her to be removed and the trial to proceed in his or her absence.\n\n(3) When a person is tried for any criminal offence, the accused person or any person authorized by him or her in that behalf shall, if he or 
she so requires and subject to payment of such reasonable fee as may be prescribed by law, be given within a reasonable time after judgment a copy for the use of the accused person of any record of the proceedings made by or on behalf of the court.", - "page_start": 8, - "page_end": 8, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "By clicking on the \"**Data->Licensing Assistant**\" link in the main menu, the Licence Assistant is opened in a new window, displaying relevant information of all supported licences by the tool.\n\n| | | Newsletter FAQ Search Contact Cookies Legal notice English (en) | > |\n| --- | --- | --- | --- |\n| | | Search site content ... | ರ |\n| European Data Portal > Licensing Assistant | | | |\n| 11 What we do - | Data~ Providing Data . | Using Data - Resources . | |\n| Datasets Cataloques | Metadata Quality Licensing Assistant | SPARQL Manager Statistics | |\n| Licensing Assistant | | | |\n| Data which is shared with a licence becomes Open Data. There are many licences available. | The licence assistant provides a description of the available licences. It also gives an overview | | |\n| of how to apply licences as re-publisher/distributor of Open Data and how to combine multiple | | | |\n| licences. 
| | | |\n| Please find a licence by selecting the preferred licence terms below: | | | |\n| Advanced settings | | | |\n| Obligation | Permission | Prohibition | |\n| Lesser Copyleft Attribution | Derivative Works Distribution | Commercial use | |\n| Sharealike Notice Copyleft | Reproduction Sublicensing | | |\n| State Changes | Use patent claims | | |\n| Name Terms | | | |\n| CC BY 3.0 Austria | Obligation: Attribution Permission: Derivative Works | Obligation: Notice Permission: Distribution | |\n| | Permission: Reproduction | | |\n| CC-BY 4.0 | Obligation: Attribution Permission: Derivative Works | Permission: Distribution Obligation: Notice | |\n| | Obligation: State Changes Permission: Reproduction | | |\n| CC-BY 3.0 NL | Obligation: Attribution Permission: Derivative Works | Obligation: Notice Permission: Distribution | |\n| | Permission: Reproduction | | |\n| CC-BY-NC 4.0 | Obligation: Attribution Permission: Derivative Works | Obligation: Notice | |\n| | Prohibition: Commercial use Permission: Distribution | Obligation: State Changes | |\n| | Permission: Reproduction | | |\n| CC-BY-NC-ND 4.0 | Obligation: Attribution Obligation: Notice | Prohibition: Commercial use Permission: Distribution | |\n| | Obligation: State Changes Permission: Reproduction | | |", - "page_start": 34, - "page_end": 34, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "# **II.13.7. 
Moral rights of creators**\n\nBy delivering the *results*, the contractor warrants that the *creators* will not object to the following on the basis of their moral rights under copyright:\n\n- (a) that their names be mentioned or not mentioned when the *results* are presented to the public;\n- (b) that the *results* be divulged or not after they have been delivered in their final version to the contracting authority;\n- (c) that the *results* be adapted, provided that this is done in a manner which is not prejudicial to the *creator*'s honour or reputation.\n\nIf moral rights on parts of the *results* protected by copyright may exist, the contractor must obtain the consent of *creators* regarding the granting or waiver of the relevant moral rights in accordance with the applicable legal provisions and be ready to provide documentary evidence upon request.\n\n# **II.13.8. Image rights and sound recordings**\n\nIf natural persons appear in a *result* or their voice or any other private element is recorded in a recognisable manner, the contractor must obtain a statement by these persons (or, in the case of minors, by the persons exercising parental authority) giving their permission for the described use of their image, voice or private element and, on request, submit a copy of the permission to the contracting authority. The contractor must take the necessary measures to obtain such consent in accordance with the applicable legal provisions.\n\n# **II.13.9. Copyright notice for pre-existing rights**\n\nWhen the contractor retains *pre-existing rights* on parts of the *results*, reference must be inserted to that effect when the *result* is used as set out in Article I.10.1, with the following disclaimer: '© — year — European Union. All rights reserved. Certain parts are licensed under conditions to the EU', or with any other equivalent disclaimer as the contracting authority may consider best appropriate, or as the parties may agree on a case-by-case basis. 
This does not apply where inserting such reference would be impossible, notably for practical reasons.\n\n# **II.13.10. Visibility of ECHA funding and disclaimer**\n\nWhen making use of the *results*, the contractor must declare that they have been produced under a contract with the contracting authority and that the opinions expressed are those of the contractor only and do not represent the contracting authority's official position. The contracting authority may waive this obligation in writing or provide the text of the disclaimer.\n\n# **II.14. Force majeure**\n\n- **II.14.1** If a party is affected by *force majeure*, it must immediately *notify* the other party, stating the nature of the circumstances, their likely duration and foreseeable effects.\n- **II.14.2** A party is not liable for any delay or failure to perform its obligations under the FWC if that delay or failure is a *result* of *force majeure*. If the contractor is unable to fulfil its contractual obligations owing to *force majeure*, it has the right to remuneration only for the services actually provided.", - "page_start": 26, - "page_end": 26, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "### **3.2.6 How to view licensing information**\n\nLicensing information is available for all datasets associated with common licences, which are supported by the Licence Assistant. When available a link to the assistant is provided on left side of a dataset page.\n\nBy clicking on the **licence name** (here: cc-by), the Licence Assistant tool is opened in a new window, displaying relevant information for this particular licence.\n\n| IROPFAN | | | Newsletter FAQ Search Contact Cookies Legal notice | English (en) | ◀ |\n| --- | --- | --- | --- | --- | --- |\n| | | European Data Portal > Datasets > Daten über Anbieter von Hochs ... | | Search site content ... 
| ರ |\n| 1 European Data Po | | What we do ▼ Data ▼ Pro | | | |\n| WI G | | Data · Dataset Categories Similar Datasets | Using Data - | Resources . | |\n| | | Higher Education Provider Daing Assistant | SPARQL Manager | Statistics | |\n| | | data.gov.uk | | | |\n| Licensi | | | | | |\n| | | We publish the full HESA Finance return as open data | | | |\n| | | providers for the reference of funding and requlatory | | | |\n| CC-BY | | | | | |\n| Open licer | | Distributions (21) | | | |\n| You are f | | | | Comparable licences | |\n| Deriva | | Tahle 12 Analysis of staff costs 2016/17 to 2017/18 | | · CC-BY-NC-ND4.0 | |\n| CSV Create | | | | · CC-BY-NC-SA4.0 | |\n| Distrib | | Licence: cc-by (i | | · CC-BY-NC4.0 | |\n| | | | | · CC-BY-ND4.0 | |\n| Redistr | | | | · CC-BY-SA3.0NL | |\n| Reproc CSV | | Table 1 - Consolidated statement of comprehensive | | | |\n| \"Repro | | expenditure year ended 31 July 2015/16 to 2017/18 | without limitation by sound or | · CC-BY-SA4.0 | |\n| | | | | · CC-BY3.0NL | |\n| visual r | Licence: cc-by (i | | Work, including storage of a | | |\n| protect | | | edium. | · CC-BY4.0 | |\n| | | | | · CCBY3.0Austria | |\n| | | | | · DL-DE-BY-NC1.0 | |\n| You are obligated to: | | | | · DL-DE-BY1.0 | |\n| | | | | · DL-DE-BY2.0 | |\n| Attribution | | | | · EUPL-1.1 | |\n| Give proper credit to the copyright holder and/or author | | | | · FR-LO | |\n| Notice | | | | · GFDL-1.1 | |\n| Keep copyright and licence notices intact | | | | · GFDL-1.2 | |\n| State Changes | | | | · GFDL-1.3 | |\n| | | Indicate which changes have been made to the original licenced work in a manner that permits attribution. 
| | · IODLv1.0 | |\n| | | | | · IODLv2.0 | |\n| | | | | · NLOD | |\n| | | | | · ODC-BY | |\n| | | | | · ODC-ODbL | |\n| | | | | · OGL-NC | |\n| | | | | · OGL-ROU-1.0 | |\n| | | | | · OGL1.0 | |\n| | | | | · OGL2.0 | |\n| | | | | · OGL3.0 | |\n| | | | | · PSEUL | |", - "page_start": 33, - "page_end": 33, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RSG_2004.pdf", - "query": "In how many regions the Republic Services operations are organized ?", - "target_page": 9, - "target_passage": "As of December 31, 2004, our operations were organized into five regions whose boundaries may change from time to time: Eastern, Central, Southern, Southwestern and Western.", - "chunk_present": { - "presence": true, - "index": 6 - } - }, - "top_chunk": [ - { - "text": "#### **(19) Business Segment Information**\n\nE u ronet and its subsidiaries operate in two business segments: (1) a segment that provides an independent shared ATM network and other e l e c t ronic payment network services to banks, retail and financial institutions (the \"Network Services Segment\"); and (2) a segment that p roduces application software and solutions for payment and transaction delivery systems (the \"Software Solutions Segment\"). These business segments are supported by a corporate service segment which provides corporate and other administrative services which are not d i rectly identifiable with the two business segments, (the \"Corporate Services Segment\"). The accounting policies of each segment are the same as those described in the summary of significant accounting policies. The Company evaluates perf o rmance based on profit or loss fro m operations before income taxes not including nonre c u rring gains and net loss. 
Prior period segment information has been restated to conform to the current period's presentation.\n\nAs the Network Services Segment continued to grow throughout 1999, the Company's management began to divide the internal org a n i z a t i o n of the segment into Sub-segments. Accord i n g l y, beginning in January 2000, the Company divided the Network Services Segment into thre e Sub-segments: \"Central European Sub-segment\" (including Hungary, Poland, the Czech Republic, Croatia, Greece and Romania), \"We s t e rn E u ropean Sub-segment\" (including Germ a n y, France, and the United Kingdom) and \"Other Operations Sub-segment\" (including the United States and unallocated processing center costs). Where practical, certain amounts have been reclassified to reflect the change in intern a l re p o rting. The Company is unable to present Network Services Segment assets by Sub-segment as of December 31, 1999. Prior to January 1, 2000, certain assets that were used to provide support services to the Company as a whole were included in the assets in the balance sheet of the Company's wholly owned Hungarian subsidiary, Bank Tech. In order to segregate corporate assets from those of the Hungarian operations, these assets were transferred as of December 31, 1999, from Bank Tech to an existing Hungarian shell company, Administrative S e rvices. 
Those assets are now shown under the Other Operations Sub-segment.\n\nThe following tables present the segment results of the Company's operations for the years ended December 31, 2000, 1999 and 1998.\n\n| | | Year Ended December 31, 2000 | | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | Network Serv i c e s | | | | | | | | | |\n| | | | | | N e t w o r k | | | | | |\n| | Central | We s t e rn | | | S e rvices | | S o f t w a re | C o r p o r a t e | | |\n| | E u rope | E u rope | O t h e r | | To t a l | | Solutions | S e rvices | | To t a l |\n| | | | | | (in thousands) | | | | | |\n| Total Revenues | $ 1 8 , 5 9 9. | $ 1 6 , 6 1 5. | $ | 1 , 7 0 0. | $ | 3 6 , 9 1 4. | $ 1 6 , 0 0 6. | $ | —. | $ 5 2 , 9 2 0. |\n| Total operating expenses | ( 2 1 , 6 6 9 ) | ( 1 8 , 9 0 1 ) | | ( 2 , 4 0 9 ) | | ( 4 2 , 9 7 9 ) | ( 3 7 , 4 7 5 ) | ( 7 , 8 6 2 ) | | ( 8 8 , 3 1 6 ) |\n| Operating loss. | ( 3 , 0 7 0 ) | ( 2 , 2 8 6 ) | | ( 7 0 9 ) | | ( 6 , 0 6 5 ) | ( 2 1 , 4 6 9 ) | ( 7 , 8 6 2 ) | | ( 3 5 , 3 9 6 ) |\n| I n t e rest income | 2 8 9. | 6 5. | | 1 9 0. | | 5 4 4. | 1 0 3. | | 4 4 2. | 1 , 0 8 9. |\n| I n t e rest expense | ( 1 , 0 1 6 ) | ( 1 6 8 ) | | ( 1 5 0 ) | | ( 1 , 3 3 4 ) | —. | ( 9 , 4 9 5 ) | | ( 1 0 , 8 2 9 ) |\n| F o reign exchange (loss)/gain, net | ( 6 1 6 ) | ( 4 9 4 ) | | ( 1 5 5 ) | | ( 1 , 2 6 5 ) | 1. . | ( 1 , 9 6 3 ) | | ( 3 , 2 2 7 ) |\n| Net loss before income taxes | $ ( 4 , 4 1 3 ) | $ ( 2 , 8 8 3 ) | $ | ( 8 2 4 ) | $ | ( 8 , 1 2 0 ) | $( 2 1 , 3 6 5 ) | $( 1 8 , 8 7 8 ) | | $ ( 4 8 , 3 6 3 ) |\n| Segment assets | $ 2 5 , 6 9 7. | $ 1 6 , 7 5 5 | $ | 3 , 6 5 2. | | $ 4 6 , 1 0 4. | $ 9 , 4 3 3. | $ 5 , 3 5 3. | | $ 6 0 , 8 9 0. |\n| Fixed assets | 1 7 , 1 4 5. | 1 1 , 7 0 7. | | 1 , 6 8 2. | | 3 0 , 5 3 4. | 9 6 8. | | 1 5 5. | 3 1 , 6 5 7. |\n| D e p reciation and amort i z a t i o n | 3 , 9 7 7. | 2 , 8 8 4. | | 1 , 1 0 0. | | 7 , 9 6 1. | 2 , 2 1 5. | | 2 0 8. | 1 0 , 3 8 4. 
|\n| Asset write down | 6 6 8. | 1 1 0. | | — | | 7 7 8. | 1 1 , 1 9 0 | | —. | 1 1 , 9 6 8. |\n\n| | Year Ended December 31, 2000 | | | | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | Network Serv i c e s | | | | | | | | | | |\n| | | | | | | N e t w o r k | | | | | |\n| | Central | | We s t e rn | | | S e rvices | | S o f t w a re | C o r p o r a t e | | |\n| | E u rope | | E u rope | O t h e r | | To t a l | | Solutions | S e rvices | | To t a l |\n| | | | | | | (in thousands) | | | | | |\n| Total Revenues | $ | 1 2 , 6 6 4. | $ 1 2 , 6 3 7. | $ | 1 , 2 0 2. | $ 2 6 , 5 0 3. | | $ 1 5 , 1 4 9. | $ | —. | $ 4 1 , 6 5 2. |\n| Total operating expenses | | ( 2 0 , 6 8 3 ) | ( 1 6 , 4 7 7 ) | | ( 2 , 2 5 0 ) | ( 3 9 , 4 1 0 ) | | ( 2 2 , 2 9 0 ) | ( 6 , 7 5 0 ) | | ( 6 8 , 4 5 0 ) |\n| Operating loss. | | ( 8 , 0 1 9 ) | ( 3 , 8 4 0 ) | | ( 1 , 0 4 8 ) | ( 1 2 , 9 0 7 ) | | ( 7 , 1 4 1 ) | ( 6 , 7 5 0 ) | | ( 2 6 , 7 9 8 ) |\n| I n t e rest income | | 4 4 8. | 1 6. | | 1 0 3. | | 5 6 7. | 1 4 8. | 1 , 2 3 5. | | 1 , 9 5 0. |\n| I n t e rest expense | | ( 9 8 1 ) | ( 1 0 1 ) | | ( 5 1 ) | | ( 1 , 1 3 3 ) | —. | ( 9 , 7 6 6 ) | | ( 1 0 , 8 9 9 ) |\n| F o reign exchange (loss)/gain, net | | ( 3 9 9 ) | ( 1 9 ) | | ( 1 4 6 ) | | ( 5 6 4 ) | 2. | ( 1 , 5 4 8 ) | | ( 2 , 1 1 0 ) |\n| Net loss before income taxes | $ | ( 8 , 9 5 1 ) | $ ( 3 , 9 4 4 ) | $ | ( 1 , 1 4 2 ) | $ ( 1 4 , 0 3 7 ) | | $ ( 6 , 9 9 1 ) | $ ( 1 6 , 8 2 9 ) | | $ ( 3 7 , 8 5 7 ) |\n| Segment assets | | n / a. | n / a. | | n / a. | $ 5 6 , 6 5 8. | | $ 2 1 , 5 2 7. | $ 1 8 , 6 5 9. | | $ 9 6 , 8 4 4. |\n| Fixed assets | | n / a. | n / a. | | n / a. | 3 5 , 4 3 8. | | 1 , 1 1 3. | 1 4 2. | | 3 6 , 6 9 3. |\n| D e p reciation and amort i z a t i o n | | n / a. | n / a. | | n / a. | | 7 , 4 1 0. | 2 , 6 8 3. | 1 4 5. | | 1 0 , 2 3 8. 
|", - "page_start": 42, - "page_end": 42, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "The other corporate oÇcers with responsibility for our operations have an average of over 23 years of management experience in the solid waste industry. Our Ñve regional vice presidents and our 23 area presidents have an average of 24 years of experience in the industry.\n\nIn addition, Harris W. Hudson, who has served as our Vice Chairman since our initial public oÅering, has over 40 years of experience in the solid waste industry, including 11 years with Waste Management and 19 years with private waste collection companies.\n\n- ' *Decentralized Management Structure.* We maintain a relatively small corporate headquarters staÅ, relying on a decentralized management structure to minimize administrative overhead costs and to manage our day-to-day operations more eÇciently. Our local management has extensive industry experience in growing, operating and managing solid waste companies and has substantial experience in their local geographic markets. In early 2001, we added a sales, maintenance and operations manager to each of our regional management teams, which previously consisted of a regional vice president and a regional controller. We believe that strengthening our regional management teams allows us to more eÅectively and eÇciently drive our company's initiatives and helps ensure consistency throughout our organization. Our regional management teams and our area presidents have extensive authority, responsibility and autonomy for operations within their respective geographic markets. Compensation for regional and area management teams is primarily based on the improvement in operating income produced and the free cash Öow and return on invested capital generated in each manager's geographic area of responsibility. 
In addition, through long-term incentive programs, including stock options, we believe we have one of the lowest turnover levels in the industry for our local management teams. As a result of retaining experienced managers with extensive knowledge of and involvement in their local communities, we are proactive in anticipating our customers' needs and adjusting to changes in our markets. We also seek to implement the best practices of our various regions and areas throughout our operations to improve operating margins.\n- ' *Integrated Operations.* We seek to achieve a high rate of internalization by controlling waste streams from the point of collection through disposal. We expect that our fully integrated markets generally will have a lower cost of operations and more favorable cash Öows than our non-integrated markets. Through acquisitions and other market development activities, we create market-speciÑc, integrated operations typically consisting of one or more collection companies, transfer stations and landÑlls. We consider acquiring companies that own or operate landÑlls with signiÑcant permitted disposal capacity and appropriate levels of waste volume. We also seek to acquire solid waste collection companies in markets in which we own or operate landÑlls. In addition, we generate internal growth in our disposal operations by developing new landÑlls and expanding our existing landÑlls from time to time in markets in which we have signiÑcant collection operations or in markets that we determine lack suÇcient disposal capacity. During the three months ended December 31, 2004, approximately 54% of the total volume of waste that we collected was disposed of at landÑlls we own or operate. In a number of our larger markets, we and our competitors are required to take waste to government-controlled disposal facilities. This provides us with an opportunity to eÅectively compete in these markets without investing in landÑll capacity. 
Because we do not have landÑll facilities or government-controlled disposal facilities for all markets in which we provide collection services, we believe that through landÑll and transfer station acquisitions and development we have the opportunity to increase our waste internalization rate and further integrate our operations. By further integrating operations in existing markets through acquisitions and development of landÑlls and transfer stations, we may be able to reduce our disposal costs.\n- ' *Economies of Scale, Cost EÇciencies and Asset Utilization.* To improve operating margins, our management focuses on achieving economies of scale and cost eÇciencies. The consolidation of acquired businesses into existing operations reduces costs by decreasing capital and expenses used for routing, personnel, equipment and vehicle maintenance, inventories and back-oÇce administration. Generally, we consolidate our acquired administrative centers to reduce our general and administrative costs. Our goal is to maintain our selling, general and administrative costs in the range of 10% of revenue, which we feel is appropriate given our existing business platform. In addition, our size allows", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "#### MA N A G E M E N T'S DI S C U S S I O N A N D AN A LY S I S O F FI N A N C I A L CO N D I T I O N A N D RE S U LT S O F OP E R AT I O N S\n\n#### **General Overv i e w**\n\nE u ronet Worldwide is a leading provider of secure electronic financial transaction solutions. The Company provides financial payment m i d d l e w a re, financial network gateways, outsourcing, and consulting services to financial institutions, retailers and mobile operators. The Company operates an independent automated teller machine (\"ATM\") network of over 2,600 ATMs in Europe and the United States, and t h rough its software subsidiary, Euronet USA Inc. 
(form e r l y, Arkansas Systems, Inc.)(\"Euronet USA\"), offers a suite of integrated software solutions for electronic payment and transaction delivery systems. Euronet Worldwide thus offers comprehensive electronic payment solutions consisting of ATM network participation, outsourced ATM management solutions and software solutions. Its principal customers are banks and other companies such as retail outlets that re q u i re transaction processing services. With eleven offices in Europe and three in the United States, the Company offers its solutions in more than 60 countries around the world.\n\nE u ronet Worldwide and its subsidiaries operate in two business segments: (1) a segment providing secure processing of financial transactions (the \"Network Services Segment\"); and (2) a segment producing application software for the processing of secure electronic financial transaction (the \" S o f t w a re Solutions Segment\"). In addition, the Company's management divides the Network Services Segment into three sub-segments: \"Central E u ropean Sub-segment\" (including Hungary, Poland, the Czech Republic, Croatia, Greece and Romania), \"We s t e rn European Sub-segment\" (including Germ a n y, France and the United Kingdom) and \"Other Operations Sub-segment\" (including the United States and unallocated p rocessing center costs). These business segments, and their sub-segments, are supported by a corporate service segment, which pro v i d e s corporate and other administrative services that are not directly identifiable with the two business segments (the \"Corporate Services Segment\"). The accounting policies of each segment are the same as those described in the summary of significant accounting policies. The Company evaluates perf o rmance based on profit or loss from operations before income taxes not including nonre c u rring gains and net loss. Prior period segment information has been restated to conform to the current period's presentation. 
(See Note 19 to the Consolidated Financial Statements - Business segment information.)\n\n#### **Comparison of Results of Operations for the Years Ended December 31, 2000, 1999 and 1998**\n\n**Revenues** The Company's total revenues increased to $52.7 million for the year ended December 31, 2000 from $41.5 million for the year ended December 31, 1999 and $11.9 million for the year ended December 31, 1998. The increase in revenues from 1999 to 2000 is primarily due to two factors: (1) a $10.4 million increase in Network Services Segment revenues resulting from the i n c rease in transaction volumes in the Company owned ATMs and an increase in the number of AT M s operated by the Company during this period; and (2) an increase of $800,000 in Software Solutions Segment revenues. The increase in revenues from 1998 to 1999 is primarily due to two factors: (1) a $15.0 million increase in Network Services Segment revenues resulting from the increase in transaction volume attributable to an increase in the number of ATMs operated by the Company during this period; and (2) the addition of $14.6 million of Software Solutions Segment re v e n u e s . Revenues for the years ended December 31, 2000 and 1999 are discussed more fully in the Segment Results of Operations sections below.\n\n**Operating Expenses** Total operating expenses increased to $88.1 million for the year ended December 31, 2000 from $68.3 million for the year ended December 31, 1999 and from $34.5 million for the year ended December 31, 1998. The increase from 1999 to 2000 can be broken down\n\nby segment as follows: (1) a $3.5 million increase in Network Services Segment operating costs due to growth in the size of the network operations; (2) a $15.2 million increase in Software Services Segment due to write down of intangibles of $11.2 million and investment in personnel and re s o u rces; and (3) a $1.1 million increase in Corporate Services Segment operating costs due to the expended operations. 
The i n c rease from 1998 to 1999 can be broken down by segment as follows: (1) a $13.0 million increase in Network Services Segment operating costs, (2) the addition of $19.6 million of Software Solutions Segment operating costs, and (3) a $1.2 million increase in Corporate Services Segment operating costs. Operating expenses for the years ended December 31, 2000 and 1999 are discussed more fully in the Segment Results of Operations sections below.\n\n**Operating Loss** The Company generated an operating loss of $35.4 million for the year ended December 31, 2000 compared to $26.8 million for the year ended December 31, 1999 and $22.6 million for the year ended December 31, 1998. The increased operating loss from 1999 to 2000 is due to the net effect of three factors: (1) a $6.8 million decrease in the operating loss from the Company's Network Services Segment; (2) a $14.3 million increase in the operating loss from the Company's Software Solutions Segment; and (3) a $1.1 million increase in the operating loss f rom the Company's Corporate Services Segment. The increased operating loss from 1998 to 1999 is due to the net effect of three factors: (1) a $1.9 million decrease in operating losses from the Company's Network Services Segment; (2) the addition of $4.8 million in operating losses fro m the Company's Software Solutions Segment; and (3) a $1.3 million increase in operating losses from the Company's Corporate Services Segment.", - "page_start": 16, - "page_end": 16, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "### **CONSENT OF INDEPENDENT REGISTERED PUBLIC ACCOUNTING FIRM**\n\nWe consent to the incorporation by reference in the Registration Statements (Form S-8 Nos. 333-81801, 333-78125, 333-45542 and 333-104048) pertaining to the Republic Services 401(k) Plan, 1998 Stock Incentive Plan, Republic Services, Inc. Amended and Restated Employee Stock Purchase Plan, and Republic Services, Inc. 
Amended and Restated 1998 Stock Incentive Plan, respectively, of our reports dated February 24, 2005, with respect to the consolidated financial statements and schedule of Republic Services, Inc., Republic Services, Inc. management's assessment of the effectiveness of internal control over financial reporting, and the effectiveness of internal control over financial reporting of Republic Services, Inc., included in this Annual Report (Form 10-K) for the year ended December 31, 2004.\n\n> /s/ ERNST & YOUNG LLP Certified Public Accountants\n\nFort Lauderdale, Florida February 24, 2005", "page_start": 102, "page_end": 102, "source_file": "NYSE_RSG_2004.pdf" }, { "text": "First Financial Bankshares, Inc. is a financial holding company headquartered in Abilene, Texas, with consolidated assets of $2.0 billion as of December 31, 2002. The corporation has 10 affiliate banks, which provide services from 28 full-service locations in the Central, West and High Plains regions of Texas. The common stock of First Financial Bankshares, Inc. is held by more than 3,500 shareholders and is listed on The NASDAQ Stock Market® under the symbol FFIN.\n\n\"Our 10 affiliate banks provide services from 28 full-service locations in the Central, West and High Plains regions of Texas.\"", "page_start": 3, "page_end": 3, "source_file": "NASDAQ_FFIN_2002.pdf" }, { "text": "```\narn:partition:service:region:account-id:resource-id\narn:partition:service:region:account-id:resource-type/resource-id\narn:partition:service:region:account-id:resource-type:resource-id\n```\n- arn: literally, the string \"arn\"\n- partition is one of the three partitions: AWS Regions, AWS China Regions, or AWS GovCloud (US) Regions\n- service is the specific service such as Amazon EC2 or DynamoDB\n- region is the AWS region like us-east-1 (North Virginia)\n- account-id is the AWS account ID\n- resource-id is the unique resource ID. 
Other forms for resource IDs, like resource-type/resource-id, are used by services like IAM where IAM users have a resource-type of user and a resource-id of a username like MyUsername.\n\nTry to identify the service, region, and resource for the following example ARNs:\n\n```\narn:aws:dynamodb:us-west-2:123456789012:table/myDynamoDBTable\narn:aws:lambda:us-east-2:123456789012:function:my-function:1\n```\nIf you are interested in learning more, check out a map of Regions and Availability Zones, a view of our data centers, and the complete list of regional service endpoints.\n\n# **Security model**\n\nSecurity is a top priority for AWS. Before you start building serverless solutions, you need to know how security factors into AWS solutions.\n\nAmazon Web Services has a *shared responsibility model:*", "page_start": 18, "page_end": 18, "source_file": "serverless-core.pdf" }, { "text": "### **PART I**\n\n### **ITEM 1. BUSINESS**\n\n### **Company Overview**\n\nWe are a leading provider of services in the domestic non-hazardous solid waste industry. We provide non-hazardous solid waste collection services for commercial, industrial, municipal and residential customers through 140 collection companies in 22 states. We also own or operate 96 transfer stations, 58 solid waste landfills and 35 recycling facilities.\n\nAs of December 31, 2004, our operations were organized into five regions whose boundaries may change from time to time: Eastern, Central, Southern, Southwestern and Western. Each region is organized into several operating areas and each area contains a group of operating locations. Each of our regions and substantially all our areas provide collection, transfer, recycling and disposal services. We believe that this organizational structure facilitates the integration of our operations within each region, which is a critical component of our operating strategy. 
See Note 10 of the Notes to Consolidated Financial Statements for further discussion of operating segments.\n\nWe had revenue of $2,708.1 million and $2,517.8 million and operating income of $452.3 million and $412.7 million for the years ended December 31, 2004 and 2003, respectively. The $190.3 million, or 7.6%, increase in revenue from 2003 to 2004 is primarily attributable to the successful execution of our operating and growth strategies described below. The $39.6 million, or 9.6%, increase in operating income from 2003 to 2004 is partially due to higher self-insurance expense during 2003 related to existing claims and was attributable to the expansion of our operations and various changes in estimates as a result of continued negative trends through the 2003 policy year. The remaining increase in operating income is due to the successful execution of our operating and growth strategies described below.\n\nOur presence in high growth markets throughout the Sunbelt, including California, Florida, Georgia, Nevada, North Carolina, South Carolina and Texas, and in other domestic markets that have experienced higher than average population growth during the past several years, supports our internal growth strategy. We believe that our presence in these markets positions our company to experience growth at rates that are generally higher than the industry's overall growth rate.\n\nWe continue to focus on enhancing stockholder value by implementing our financial, operating and growth strategies as described below.\n\n### **Industry Overview**\n\nBased on analysts' reports and industry trade publications, we believe that the United States nonhazardous solid waste services industry generates annual revenue of approximately $44.0 billion, of which approximately 50% is generated by publicly-owned waste companies, 21% is generated by privately-held waste companies, and 29% is generated by municipal and other local governmental authorities. 
Three companies generate the substantial majority of the publicly-owned companies' total revenue. However, according to industry data, the domestic non-hazardous waste industry remains highly fragmented as privately-held companies and municipal and other local governmental authorities generate approximately 50% of total industry revenue. In general, growth in the solid waste industry is linked to growth in the overall economy, including the level of new household and business formation.\n\nThe solid waste industry experienced a period of rapid consolidation in the late 1990s. During that time we were able to grow significantly through acquisitions. However, acquisitions in the industry have slowed considerably since late 1999. Despite this, we believe that the opportunity to grow through acquisitions still exists, albeit at a slower pace than experienced in previous years, as a result of the following factors:\n\n*Subtitle D Regulation.* Subtitle D of the Resource Conservation and Recovery Act of 1976, as currently in effect, and similar state regulations have significantly increased the amount of capital, technical expertise, operating costs and financial assurance obligations required to own and operate a", "page_start": 8, "page_end": 8, "source_file": "NYSE_RSG_2004.pdf" }, { "text": "# **NOTE 28 – CONTINGENT ASSETS AND LIABILITIES**\n\nAt the date of signing this report, the Group is not aware of any contingent assets or liabilities that should be recognised or disclosed in accordance with AASB 137/IFRS 37 – *Provisions, Contingent Liabilities and Contingent Assets.*\n\n# **NOTE 29 – OPERATING SEGMENTS**\n\nThe Company's strategic focus is the exploration, development and production of large, repeatable onshore resource plays in North America, which is the Company's only major line of business and only major geographic area of operations. 
All of the basins and/or formations in which the Company operates have common operational characteristics, challenges and economic characteristics. As such, Management has determined, based upon the reports reviewed and used to make strategic decisions by the Chief Operating Decision Maker (\"CODM\"), who is the Company's Managing Director and Chief Executive Officer, that the Company has one reportable segment being oil and natural gas exploration and production in North America.\n\nThe CODM reviews internal management reports on a monthly basis that are consistent with the information provided in the statement of profit or loss and other comprehensive income, statement of financial position and statement of cash flows. As a result, no reconciliation is required, because the information as presented is used by the CODM to make strategic decisions.\n\n### **Geographic Information**\n\nThe operations of the Group are located in only one geographic location, North America. All revenue is generated from sales to customers located in North America.\n\nRevenue from one major customer exceeded 10 percent of Group consolidated revenue for the year ended 31 December 2014 and accounted for 65 percent (2013: four major customers accounted for 47 percent, 15 percent, 10 percent and 10 percent) of our consolidated oil, natural gas and NGL revenues.", "page_start": 94, "page_end": 94, "source_file": "ASX_SEA_2014.pdf" }, { "text": "Amazon has many regions all across the globe. Inside each region, there are one or more Availability Zones located tens of miles apart. The distance is near enough for low latency (the gap between requesting and receiving a response), and far enough to reduce the chance that multiple zones are affected if a disaster happens.\n\nEach region is identified by a code, such as \"us-west-1\", \"us-east-1\" or \"eu-west-2\". 
Within each region, the multiple isolated locations known as *Availability Zones* or AZs are identified with the region code followed by a letter identifier. For example, us-east-1a. AWS handles deploying to multiple availability zones within a region for resilience.\n\n# **Amazon Resource Name (ARN)**\n\nServices are identified with regional endpoints. The general syntax of a regional endpoint is as follows:\n\n```\nprotocol://service.region.amazonaws.com\n```\nFor example, https://dynamodb.us-west-1.amazonaws.com is the endpoint for the Amazon DynamoDB service in the US West (N. California) Region.\n\nThe region code is also used to identify AWS resources with Amazon Resource Names, also called \"ARNs\". Because AWS is deployed all over the world, ARNs function like an addressing system to precisely locate which specific part of AWS we are referring to. ARNs have a hierarchical structure:", "page_start": 17, "page_end": 17, "source_file": "serverless-core.pdf" }, { "text": "The subsidiaries of Euronet Services Inc., all of which are, directly or indirectly, wholly owned are:\n\n- EFT Services Holding B.V., incorporated in the Netherlands\n- Euronet Banktechnikai Szolgaltato Kft. (\"Bank Tech\"), incorporated in Hungary\n- Euronet Adminisztracios Szolgaltato Kft. (\"Administrative Services\") (formerly SatComNet), incorporated in Hungary\n- Bankomat 24/Euronet Sp. z o.o. (\"Bankomat\"), incorporated in Poland\n- EFT-Usluge d o.o., incorporated in Croatia\n- Euronet Services GmbH, incorporated in Germany\n- EFT Services France SAS, incorporated in France\n- Euronet Services spol. s.r.o., incorporated in the Czech Republic\n- Euronet Services SRL, incorporated in Romania\n- Euronet Services (UK) Limited, incorporated in the United Kingdom\n- Euronet USA Inc. (formerly Arkansas Systems, Inc.) 
(\"Euronet USA\") incorporated in Arkansas, United States of America\n- EFT Network Services LLC (\"Dash\"), incorporated in Arkansas, United States of America\n- Euronet Holding N.V., incorporated in the Netherlands Antilles (in liquidation)\n- Euronet Eft Services Hellas, incorporated in Greece\n\n#### **(2) Financial Position and Basis of Preparation**\n\nThe Company generated an operating loss of $35.4 million and negative cash flows from operations of $16.4 million for the year ended December 31, 2000, primarily due to the significant costs associated with its investment in delivery, support, research and development in its software subsidiary which was acquired in December 1998. Based on the Company's current business plan and financial projections, the Company expects to reduce operating losses and net cash used in operating activities in 2001. In the Network Services Segment, the Company anticipates that increased transaction levels in its ATM network will result in additional revenues without a corresponding increase in expenses. In addition, the Company expects to further expand its ATM outsourcing services and offer new value-added services, which will provide continued revenue growth without significantly increasing direct operating expenses or capital investments. In the Software Solutions Segment, the Company expects reduced operating expenses and improved operating performance due to a cost restructuring program introduced in the first quarter of 2001. The Company believes that the credit facility (see note 13), certain asset sales and cash and cash equivalents at December 31, 2000 will provide the Company with sufficient cash resources until it achieves positive cash flow.\n\nBased on the above, management is confident that the Company will be able to continue as a going concern. 
Accordingly, these consolidated financial statements have been prepared on a going concern basis which contemplates the continuation and expansion of trading activities as well as the realization of assets and liquidation of liabilities in the ordinary course of business.\n\n#### **(3) Summary of Significant Accounting Policies and Practices**\n\n- (a) Basis of presentation\nThe accompanying consolidated financial statements have been prepared in accordance with generally accepted accounting principles in the United States of America.\n\nAll significant intercompany balances and transactions have been eliminated.\n\n- (b) Foreign currencies\nForeign currency transactions are recorded at the exchange rate prevailing on the date of the transactions. Assets and liabilities denominated in foreign currencies are remeasured at rates of exchange on the balance sheet date. Resulting gains and losses on foreign currency transactions are included in the consolidated statement of operations and comprehensive loss.\n\nThe financial statements of foreign subsidiaries where the local currency is the functional currency are translated to U.S. dollars using (i) exchange rates in effect at period end for assets and liabilities, and (ii) average exchange rates during the period for results of operations. Adjustments resulting from translation of such financial statements are reflected in accumulated other comprehensive income as a separate component of consolidated stockholders' equity.\n\nThe financial statements of foreign subsidiaries where the functional currency is the U.S. dollar are remeasured using historical exchange rates for nonmonetary items while current exchange rates are used for monetary items. 
Foreign exchange gains and losses arising from the remeasurement are reported in the consolidated statement of operations and comprehensive loss.\n\n- (c) Cash equivalents\nFor the purposes of the consolidated statements of cash flows, the Company considers all highly liquid debt instruments purchased with an original maturity of three months or less to be cash equivalents.\n\n- (d) Investment securities\n\nThe Company has classified its investment securities as held-to-maturity or available-for-sale. Held-to-maturity securities are those securities in which the Company has the ability and intent to hold the security to maturity. All securities not included in held-to-maturity are classified as available-for-sale.", "page_start": 30, "page_end": 30, "source_file": "NASDAQ_EEFT_2000.pdf" } ] }, { "references": { "source_file": "NYSE_MGM_2004.pdf", "query": "What was one of the seminal moment of 2004 for MGM MIRAGE ?", "target_page": 12, "target_passage": "The announcement of the merger between MGM MIRAGE and Mandalay Resort Group was one of the seminal moments of 2004", "chunk_present": { "presence": true, "index": 0 } }, "top_chunk": [ { "text": "The announcement of the merger between MGM MIRAGE and Mandalay Resort Group was one of the seminal moments of 2004.\n\n# USING OUR STRENGTH...", "page_start": 11, "page_end": 11, "source_file": "NYSE_MGM_2004.pdf" }, { "text": "Recently, we opened the SKYLOFTS, a new level of luxury for guests atop MGM Grand Las Vegas.\n\nWe'll follow the success of these new resort features with a category-defining new nightclub at The Mirage, two fabulous restaurants by Joël Robuchon at MGM Grand Las Vegas and gaming upgrades company-wide. Second, we are doubling down on Las Vegas by merging with Mandalay, a company we have long admired. The Mandalay merger represents a tremendous opportunity to build on the momentum established by Mike Ensign and his team. 
And third, we are dreaming of a not-so-distant future, when Project CityCenter will literally redefine the Las Vegas Strip and change the face of Las Vegas forever.\n\n#### Mandalay in Motion\n\nWe are incredibly excited to begin our journey with the talented people of Mandalay, as we work to maximize the value of Mandalay's instantly recognized brands and world-class resorts. Long a fixture in Las Vegas, Mandalay's resorts will add to our premium portfolio and allow us to accelerate the pace of our growth. Our hotel people will be able to market a wider range of rooms and benefit from a world-class convention center. Our casino marketing people will be able to offer their customers wonderful new amenities to expand our market reach. And our development people will be able to maximize the potential of priceless Las Vegas Strip land.\n\nThe Mandalay merger represents another defining moment for MGM MIRAGE, much like the Mirage Resorts transaction in 2000, at a time when Las Vegas is in a state of astounding metamorphosis. No company is better positioned to help shape the future of Las Vegas than MGM MIRAGE. We employ more people, invest more money and hold more prime real estate than any other company in Las Vegas. The\n\n**AL FACCINTO** President, MGM MIRAGE International Marketing\n\n**ALAN FELDMAN** Senior VP Public Affairs, MGM MIRAGE\n\n**BRUCE GEBHARDT** Senior VP, MGM MIRAGE Global Security\n\n**WILLIAM J. HORNBUCKLE** President & COO, MGM MIRAGE Europe\n\n**PHYLLIS JAMES** Senior VP & Senior Counsel, MGM MIRAGE", "page_start": 24, "page_end": 24, "source_file": "NYSE_MGM_2004.pdf" }, { "text": "### FINANCIAL OVERVIEW\n\n# ACHIEVING MOMENTOUS RESULTS\n\n**JAMES J. MURREN** President, CFO & Treasurer\n\nTo some, momentum is intangible – a product of fortune, a power that cannot be harnessed, and typically a short-lived sensation. Others wonder how they lost their momentum. At MGM MIRAGE, we are constantly thinking of better ways to maximize it. We believe momentum is a product of effort and excellence, a force which can be observed and measured, and something that can be a lasting and defining quality of a great company. Our 2004 results are a clear reminder of the power of moving forward. Our financial policies have long been designed to create and maintain momentum. By investing in our best assets and thinking of new ways to add value to our shareholders, we are able to redefine our Company's place in history every year – and 2004 was a defining time even by our exacting standards.\n\nSo how did we get here? Last year, we discussed the importance of focus, and the laser-like precision with which we operated our resorts in 2004 affirms the power of our single-minded dedication to excellence. The hard work of our 40,000 employees resulted in a record year in almost every regard. Net revenues increased 10% over 2003 to a record $4.2 billion, with 12% REVPAR growth at our Las Vegas resorts; property-level EBITDA was an all-time record, nearly $1.5 billion, and 23% higher than the prior year. We exceeded the expectations of every market observer, and significantly beat our forecasts. And 2004 will not be a zenith year for your company – rather, we expect to continue our excellent operating performance, re-invest the resulting cash flow to stimulate future growth and move forward to new defining moments.\n\nHow do we re-define a company that is already at the top of its industry? First, we continue to execute on our vision for our existing resorts – to continually evolve and increase the \"Wow!\" factor for our guests. This strategy requires investment, and we will ensure that our resorts are not only world-class, but best-in-class. Examples include the beautiful Spa Tower at Bellagio and *KÀ*, the latest spectacular creation in collaboration with Cirque du Soleil.\n\n**GAMAL AZIZ** President, MGM Grand\n\n**GLENN BONNER** Senior VP & CIO, MGM MIRAGE Information Systems\n\n**GEORGE R. 
BOYER III** President, MGM Grand Detroit\n\n**JOSEPH BRUNINI** President, MGM Grand Resorts National Marketing\n\n**JEFF DAHL** President, Beau Rivage", "page_start": 23, "page_end": 23, "source_file": "NYSE_MGM_2004.pdf" }, { "text": "## TO OUR SHAREHOLDERS EXPANDING WITH EXCELLENCE\n\n**BELLAGIO** underwent a significant expansion during 2004 resulting in the opening of the Spa Tower and several important new amenities at this AAA Five Diamond property. Bellagio remains Las Vegas' first and only hotel-casino to receive this prestigious recognition. These new additions add dimension and depth to the world-famous experience awaiting guests at Bellagio.\n\n**MGM GRAND LAS VEGAS** completed a transformation, begun in 2003, of its food and beverage and entertainment offerings. MGM Grand is one of the must-see attractions of Las Vegas, with Cirque du Soleil's newest production, *KÀ*™, and several of the Strip's finest restaurants and hottest nightspots.\n\n**TI**'s transformation was no less extensive, as the property's management team conceived and implemented a program to enliven the property with new restaurants and nightlife.\n\n**THE MIRAGE** was the site of a revolution in Las Vegas' history as the venerable buffet was given new life as a top dining establishment, Cravings. 
Others may follow this lead, but The Mirage was the first property to breathe new life into what remained of the last bastion of \"old\" Las Vegas.\n\n[2004 Revenue Mix chart: Casino, Rooms, Food & Beverage, Entertainment, Retail & Other]\n\n**SKYLOFTS** MGM Grand A private sanctuary of sleek, elegant two-story accommodations, offering discerning guests the quintessential loft environment - harmonizing design, décor, ambiance and unparalleled vistas.\n\n**BELLAGIO SPA** Unique design elements, combined with an international array of innovative treatments and specially trained therapists, provide the ultimate indulgent experience.\n\nThese investments in your company's future paid dividends even before the year was out. We established a new record for net revenues posting $4.2 billion, a 10% increase over 2003.\n\nYour company's resorts produced record EBITDA of $1.46 billion, an increase of 23% over 2003, while operating income was $951 million, an increase of 36%, with record results at Bellagio, MGM Grand Las Vegas and Beau Rivage.\n\n#### Defining Momentum in the Community\n\nI've spent 27 years in this profession and the incredible generosity of our employees never ceases to amaze me. Shortly after the merger with Mirage Resorts in 2000, we established the Voice Foundation. This allows employees to express themselves in the communities we serve by providing them a mechanism to raise monies for worthy causes. It's their money and they decide where it goes. Your company provides the marketing and administrative support.\n\nIn each year since we established the program, employees have given record amounts to support a\n\n**KÀ** The most spectacular production ever, by a troupe renowned for its pageantry. Cirque du Soleil's *KÀ* debuted at a new theatre at MGM Grand in the fourth quarter of 2004.\n\nWhat exactly is a defining moment? 
Try a multi-billion dollar project centered in the heart of Las Vegas.", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "wide array of community needs. From homeless shelters to after-school programs, MGM MIRAGE employees have generously donated more than $8 million since 2001.\n\nYour company also sets aside a portion of its profits each year to be given to important programs intended to build stronger communities. Since 2001, your company has given more than $18 million to support such programs.\n\n### Defining Momentum in Our Family\n\nOur momentum is driven from within by acknowledging the contributions of each and every one of our employees, business partners and customers. Our commitment to diversity is recognition of the fact that in today's everchanging marketplace, we must reflect that which we see in the world around us.\n\nThis commitment should be seen as a commonsense business decision. That said, we are proud of the recognition our Diversity program has received, including accolades from prestigious media such as *Fortune* and *DiversityInc.* magazines.\n\nSince formalizing our program only four years ago, we've made enormous strides. There is still progress to be made and your company has the momentum to remain at the forefront on diversity initiatives, providing yet another advantage for sustaining performance in the long term.\n\n(from left to right) **KENNETH ROSEVEAR** President, MGM MIRAGE Development; **JOHN T. REDMOND** President & CEO, MGM Grand Resorts, LLC; **J. TERRENCE LANNI** Chairman & CEO, MGM MIRAGE; **ROBERT H. BALDWIN** President & CEO, Mirage Resorts, Incorporated & President, Project CityCenter; **GARY N. JACOBS** Executive Vice President, General Counsel & Secretary, MGM MIRAGE; **JAMES J. MURREN** President, CFO & Treasurer, MGM MIRAGE\n\n### Defining Momentum in the Future\n\nYour company achieved many business goals in 2004 and set in motion plans for future growth. 
These initiatives will provide unmatched returns. We have also created unrivaled opportunities for our employees and will continue our rich history of strengthening the communities in which we do business.\n\nAs exciting as 2004 was, our momentum will carry us to even greater achievements in 2005 and beyond.\n\n**J. TERRENCE LANNI** Chairman of the Board & Chief Executive Officer March 31, 2005\n\n**SENSI** BELLAGIO An eclectic menu features diverse cuisines in an earthy arena replete with waterfalls and chrome. A bold wine list complements Chef Martin Heierling's sumptuous work.\n\n**JEAN-PHILIPPE PATISSERIE** BELLAGIO A mesmerizing fountain of cascading liquid chocolate showcases a splendid selection of chocolates, cakes, crêpes, salads and sandwiches.\n\n**ISLA** TI Designed by Jeffrey Beers, Isla brightens all the senses. Chef Richard Sandoval gives an innovative and modern interpretation of traditional Mexican cuisine.", "page_start": 8, "page_end": 8, "source_file": "NYSE_MGM_2004.pdf" }, { "text": "# POINTS IN TIME DEFINING MOMENTS OF MGM MIRAGE\n\n**19**\n\n**THE NEW YORK-NEW YORK SKYLINE BECOMES A TOWERING PRESENCE IN THE PORTFOLIO.** We acquired Primadonna Resorts to gain full ownership of the spectacular New York-New York as well as three hotel-casinos on the Nevada state line and two championship golf courses.\n\n**IT ALL BEGINS WITH MGM GRAND.** MGM Grand, the largest hotel-casino in the world, opened to great fanfare. \"The City of Entertainment\" redefined the urban resort and provided the foundation for our company's momentous growth.", "page_start": 3, "page_end": 3, "source_file": "NYSE_MGM_2004.pdf" }, { "text": "### TO OUR SHAREHOLDERS\n\n## MGM MIRAGE DEFINES MOMENTUM\n\n### \"Your company has undergone several defining moments throughout its history.\"\n\nFrom its roots some 35 years ago with the opening of the International Hotel, we have played a leading role in continuously redefining the Las Vegas experience.\n\nWe announced two significant initiatives in 2004 that, taken together, give your company unrivaled momentum to set industry standards for creativity, performance and responsibility for decades to come.\n\n### Defining Momentum for Las Vegas\n\nOur merger agreement with Mandalay Resort Group and our plans to develop Project CityCenter on the Las Vegas Strip are among the most significant announcements in Las Vegas history. As this fabled city begins its second hundred years, MGM MIRAGE is positioned like no other company to take advantage of unsurpassed growth opportunities in the most dynamic gaming and entertainment market in the world.\n\nProject CityCenter will uniquely re-position Las Vegas like no other project before it. Far more than simply another casino-hotel, Project CityCenter encompasses a myriad of elements that will propel Las Vegas into a new generation of urban sophistication.\n\nWhile additional details of this extraordinary development will come in the months ahead, I am pleased to tell you that we have secured the services of the internationally acclaimed architect Cesar Pelli to design our anchor resort at the heart of Project CityCenter.\n\nCesar Pelli & Associates has worked with corporate, government and private clients to design major public spaces, museums, airports, research centers, performing arts centers, academic buildings, hotels, office and residential towers and mixed-use projects.\n\nThe work of Cesar Pelli is not constrained by a personal style or a signature that would limit his architecture; instead, it celebrates the unique characteristics of each project. Using this approach, he has designed several exceptional buildings in the United States and abroad.\n\nWe are very excited about our partnership with Mr. Pelli and his colleagues and believe they will deliver for MGM MIRAGE and the residents of Southern Nevada a building of iconic stature around the world.\n\n**J. 
TERRENCE LANNI** Chairman & Chief Executive Officer\n\n**BELLAGIO SPA TOWER** The quintessential luxury hotel is now even more opulent. This expansion includes 928 rooms and suites, 80,000 square feet of convention space, retail outlets, and restaurants.\n\n**SHIBUYA** MGM GRAND Designed by superstar team Yabu Pushelberg, Shibuya features stellar sushi and the widest sake selection this side of the Pacific, all served in a sleek, airy ambiance.\n\n**CRAVINGS** THE MIRAGE The zenith of all-you-can-eat. Designed by Adam Tihany, Cravings boasts 11 cooking stations, a street of unique restaurants, and an array of temptations in what's unquestionably the ultimate buffet dining experience.", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "#### **Overall Outlook**\n\nWe have invested heavily in our existing operations in 2003 and 2004, and expect to continue to do so on a targeted basis in 2005. Our Las Vegas Strip resorts require ongoing capital investment to maintain their competitive advantages. We believe the investments in additional non-gaming amenities we made in 2003 and 2004 have enhanced our ability to generate increased visitor volume and allowed us to charge premium prices for our amenities.\n\nThe most likely significant factors affecting operating results at our existing resorts in 2005 will be the expected continued strength of the leisure and convention travel segments, the expansion of Bellagio and the opening of *KÀ* and other amenities at MGM Grand Las Vegas, and new competition from Wynn Las Vegas on the Las Vegas Strip. Various lodging market observers, such as PricewaterhouseCoopers and Smith Travel Research, are forecasting mid-single digit percentage growth in REVPAR in 2005, with greater REVPAR gains in full service hotels. 
Our REVPAR growth, and REVPAR growth in Las Vegas in general, has outpaced that of the national market, and we expect that trend to continue.\n\nThe Bellagio expansion opened in late 2004 and added over 30% to the resort's room base. In addition, we added new meeting, retail and dining space and significantly expanded the spa and salon. *KÀ* opened in late November 2004 at MGM Grand Las Vegas, which had been without a featured production show for almost two years. Along with the numerous restaurant and other entertainment additions at MGM Grand Las Vegas, *KÀ* will enhance our ability to generate visitor traffic and capture a greater share of our guests' spending.\n\nWynn Las Vegas will add room capacity to the Las Vegas market, with its 2,700 rooms representing a 2% increase in Las Vegas room supply. Wynn Las Vegas will also feature numerous upscale restaurants and generally target customers who might otherwise choose Bellagio, MGM Grand Las Vegas or The Mirage. We believe there will be some impact on these resorts from Wynn Las Vegas, but also believe that the breadth of amenities in our portfolio of resorts and our loyalty and other marketing programs will help minimize these competitive pressures. The proximity of Wynn Las Vegas to TI and The Mirage, along with pedestrian bridges linking TI with the Fashion Show Mall and Venetian, will also benefit these resorts.\n\n#### **Mandalay Merger**\n\nOn June 16, 2004, we announced that we had entered into a definitive merger agreement with Mandalay Resort Group (\"Mandalay\"), a publicly traded company, under which we will acquire Mandalay for $71.00 in cash for each share of common stock of Mandalay. Mandalay owns and operates eleven properties in Nevada, including Mandalay Bay, Luxor, Excalibur, Circus Circus, and Slots-A-Fun in Las Vegas, Circus Circus-Reno in Reno, Colorado Belle and Edgewater in Laughlin, Gold Strike and Nevada Landing in Jean, and Railroad Pass in Henderson. 
Mandalay also owns and operates Gold Strike, a hotel/casino in Tunica County, Mississippi. In addition, Mandalay owns a 50% interest in Silver Legacy in Reno, a 50% interest in Monte Carlo in Las Vegas, a 50% interest in Grand Victoria, a riverboat in Elgin, Illinois, and a 53.5% interest in MotorCity in Detroit, Michigan. The total consideration is approximately $8.1 billion, including equity value of approximately $4.8 billion, convertible debentures with a redemption value of approximately $574 million, the assumption or repayment of other outstanding Mandalay debt with a fair value of approximately $2.6 billion as of December 31, 2004, and $100 million of estimated transaction costs. The transaction is structured as a merger of one of our wholly-owned subsidiaries with and into Mandalay. The transaction will be accounted for as a purchase and is anticipated to close during the first quarter of 2005.\n\nThe Mandalay merger will impact our operations in several ways. We will have to integrate Mandalay's operations into ours. This could require additional operating and capital expenditures. However, we expect to achieve ongoing cost savings and", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "#### **NOTE 1 — ORGANIZATION**\n\nMGM MIRAGE (the \"Company\"), formerly MGM Grand, Inc., is a Delaware corporation, incorporated on January 29, 1986. As of December 31, 2004 approximately 58% of the outstanding shares of the Company's common stock were owned by Tracinda Corporation, a Nevada corporation wholly owned by Kirk Kerkorian. MGM MIRAGE acts largely as a holding company and, through wholly-owned subsidiaries, owns and/or operates casino resorts.\n\nThe Company owns and operates the following casino resorts on the Las Vegas Strip in Las Vegas, Nevada: Bellagio, MGM Grand Las Vegas, The Mirage, Treasure Island (\"TI\"), New York-New York and the Boardwalk Hotel and Casino. 
The Company owns a 50% interest in the joint venture that owns and operates the Monte Carlo Resort & Casino, also located on the Las Vegas Strip.\n\nThe Company owns three resorts in Primm, Nevada at the California/Nevada state line – Whiskey Pete's, Buffalo Bill's and the Primm Valley Resort – as well as two championship golf courses located near the resorts. The Company also owns Shadow Creek, an exclusive world-class golf course located approximately ten miles north of its Las Vegas Strip resorts.\n\nThe Company, through its wholly owned subsidiary, MGM Grand Detroit, Inc., and its local partners formed MGM Grand Detroit, LLC, to develop a hotel, casino and entertainment complex in Detroit, Michigan. MGM Grand Detroit, LLC operates a casino in an interim facility in downtown Detroit. See Note 10 for discussion of the revised development agreement with the City of Detroit and plans for a permanent casino resort.\n\nThe Company owns and operates Beau Rivage, a beachfront resort located in Biloxi, Mississippi. The Company also owns a 50% interest in a limited liability company that owns Borgata, a casino resort at Renaissance Pointe, located in the Marina area\n\nof Atlantic City, New Jersey. Boyd Gaming Corporation owns the other 50% of Borgata and also operates the resort. Borgata opened in July 2003. The Company owns approximately 95 developable acres adjacent to Borgata, a portion of which consists of common roads, landscaping and master plan improvements which the Company designed and developed as required under the agreement with Boyd.\n\nUntil July 2004, the Company owned and operated MGM Grand Australia and until January 2004, the Company owned and operated the Golden Nugget Las Vegas in downtown Las Vegas and the Golden Nugget Laughlin in Laughlin, Nevada (the \"Golden Nugget Subsidiaries\"). Until June 2003, the Company operated PLAYMGMMIRAGE.com, the Company's online gaming website based in the Isle of Man. 
See Note 3 for further information regarding these discontinued operations. In the second quarter of 2002, the Company received proceeds of $11 million upon termination of management agreements covering four casinos in the Republic of South Africa. Prior to the termination, the Company managed three permanent casinos and one interim casino and received management fees from its partner, Tsogo Sun Gaming & Entertainment. The termination fee was recorded as part of other revenues in the accompanying consolidated statements of income.\n\nThe Company is actively seeking future development opportunities in the United Kingdom. In May 2003, the Company acquired a 25% interest in Metro Casinos Limited, a United Kingdom gaming company which operates a casino in Bristol. See Note 10 for discussion of other potential developments in the United Kingdom.\n\nIn June 2004, the Company entered into a joint venture agreement to develop, build and operate a hotel-casino resort in Macau S.A.R. The agreement is subject to, among other things, the approval of the government of Macau S.A.R., and other regulatory approvals, as well as the entry into a subconcession agreement with the holder of one of the existing concessions.", - "page_start": 55, - "page_end": 55, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "**and are available to our employees who are striving to grow within the company.** \n\n**Whether through brick and mortar projects or initiatives that support at-risk youths and deserving senior citizens, MGM MIRAGE is a proud contributor to hundreds of worthwhile causes.** \n\n**In 2004, our employees contributed more than $3 million to help improve the quality of life for others. Of this amount, employees raised a record-breaking $2.7 million to benefit more than 400 charities in southern Nevada alone. 
These funds were collected and distributed by the MGM MIRAGE Voice**\n\n**Foundation, the company's nonprofit, philanthropic arm established three years ago to encourage and empower employee giving.** \n\n**With hundreds of organizations benefiting from our employees' generosity, MGM MIRAGE absorbs all administrative costs associated with operating and managing the Voice Foundation, resulting in 100 percent of our employee contributions going directly to charities. Additionally, employees are able to choose qualified grant recipients to receive the funding. Since its founding, employees have raised more than $8 million to support deserving nonprofit organizations.** \n\nEmployee giving achieved momentous results last year. While contributions to the Voice Foundation reached record amounts, MGM MIRAGE employees also provided manpower to Habitat for Humanity to build homes for single working mothers.\n\n**Through the \"Dollars for Doers\" program, also administered by the Voice Foundation, the company provides grants to eligible organizations in which our employees volunteer their time.**", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_MGM_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_MGM_2004.pdf", - "query": " What are the most significant piece of undeveloped land remaining on the Las Vegas Strip ?", - "target_page": 21, - "target_passage": "W RESIDENTIAL In lofts, brown stones and high-rise buildings, residential options abound to populate the new city and ener gize the surrounding areas. 
e have been working for some time on con ceiving the best use of the 66 acres between Monte Carlo and Bellagio, the most significant piece of undeveloped land remaining on the Las Vegas Strip.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "**RESIDENTIAL In lofts, brownstones and high-rise buildings, residential options abound to populate the new city and energize the surrounding areas.**\n\n**ENTERTAINMENT From street performers to Broadway shows, our entertainment will evoke the best of New York or London.**\n\n**e have been working for some time on conceiving the best use of the 66 acres between Monte Carlo and Bellagio, the most significant piece of undeveloped land remaining on the Las Vegas Strip. We certainly could have come up with a spectacular casino-hotel. But, the truth is, Las Vegas is ready for so much more.** W\n\n**As the city eclipses two million residents on its way to passing three million by the end of the decade, and with land prices on the Strip soaring, it has become clear that there is a much better and higher use for this location. As Las Vegas marks its Centennial, Project CityCenter stands as a defining moment for development in this fabled city.** \n\n**Project CityCenter represents a new era of the urban complex, one that encompasses tourism, entertainment, gaming, retail and residential elements. Only MGM MIRAGE has the momentum – financially, intellectually and professionally – to effectively develop such a project.**\n\n**The signature building within Project CityCenter is the 4,000-room hotel-casino. The internationally acclaimed architect Cesar Pelli has been commissioned to design this iconic structure. 
Pelli's initial concept drawing defines a new generation of urban landscape for the Las Vegas Strip, one which includes gaming at its economic center but not as an emotional centerpiece.** \n\n**Project CityCenter will provide the momentum for the next era of amazing growth for your company and Las Vegas.**\n\n**THE SITE Located in the heart of the Las Vegas Strip, Project CityCenter will dwarf every development that preceded it. Its 66 acres will include a 4,000-room hotel-casino and three boutique hotels.**", - "page_start": 20, - "page_end": 20, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "#### **Overall Outlook**\n\nWe have invested heavily in our existing operations in 2003 and 2004, and expect to continue to do so on a targeted basis in 2005. Our Las Vegas Strip resorts require ongoing capital investment to maintain their competitive advantages. We believe the investments in additional non-gaming amenities we made in 2003 and 2004 have enhanced our ability to generate increased visitor volume and allowed us to charge premium prices for our amenities.\n\nThe most likely significant factors affecting operating results at our existing resorts in 2005 will be the expected continued strength of the leisure and convention travel segments, the expansion of Bellagio and the opening of *KÀ* and other amenities at MGM Grand Las Vegas, and new competition from Wynn Las Vegas on the Las Vegas Strip. Various lodging market observers, such as PricewaterhouseCoopers and Smith Travel Research, are forecasting mid-single digit percentage growth in REVPAR in 2005, with greater REVPAR gains in full service hotels. Our REVPAR growth, and REVPAR growth in Las Vegas in general, has outpaced that of the national market, and we expect that trend to continue.\n\nThe Bellagio expansion opened in late 2004 and added over 30% to the resort's room base. In addition, we added new meeting, retail and dining space and significantly expanded the spa and salon. 
*KÀ* opened in late November 2004 at MGM Grand Las Vegas, which had been without a featured production show for almost two years. Along with the numerous restaurant and other entertainment additions at MGM Grand Las Vegas, *KÀ* will enhance our ability to generate visitor traffic and capture a greater share of our guests' spending.\n\nWynn Las Vegas will add room capacity to the Las Vegas market, with its 2,700 rooms representing a 2% increase in Las Vegas room supply. Wynn Las Vegas will also feature numerous upscale restaurants and generally target customers who might otherwise choose Bellagio, MGM Grand Las Vegas or The Mirage. We believe there will be some impact on these resorts from Wynn Las Vegas, but also believe that the breadth of amenities in our portfolio of resorts and our loyalty and other marketing programs will help minimize these competitive pressures. The proximity of Wynn Las Vegas to TI and The Mirage, along with pedestrian bridges linking TI with the Fashion Show Mall and Venetian, will also benefit these resorts.\n\n#### **Mandalay Merger**\n\nOn June 16, 2004, we announced that we had entered into a definitive merger agreement with Mandalay Resort Group (\"Mandalay\"), a publicly traded company, under which we will acquire Mandalay for $71.00 in cash for each share of common stock of Mandalay. Mandalay owns and operates eleven properties in Nevada, including Mandalay Bay, Luxor, Excalibur, Circus Circus, and Slots-A-Fun in Las Vegas, Circus Circus-Reno in Reno, Colorado Belle and Edgewater in Laughlin, Gold Strike and Nevada Landing in Jean, and Railroad Pass in Henderson. Mandalay also owns and operates Gold Strike, a hotel/casino in Tunica County, Mississippi. In addition, Mandalay owns a 50% interest in Silver Legacy in Reno, a 50% interest in Monte Carlo in Las Vegas, a 50% interest in Grand Victoria, a riverboat in Elgin, Illinois, and a 53.5% interest in MotorCity in Detroit, Michigan. 
The total consideration is approximately $8.1 billion, including equity value of approximately $4.8 billion, convertible debentures with a redemption value of approximately $574 million, the assumption or repayment of other outstanding Mandalay debt with a fair value of approximately $2.6 billion as of December 31, 2004, and $100 million of estimated transaction costs. The transaction is structured as a merger of one of our wholly-owned subsidiaries with and into Mandalay. The transaction will be accounted for as a purchase and is anticipated to close during the first quarter of 2005.\n\nThe Mandalay merger will impact our operations in several ways. We will have to integrate Mandalay's operations into ours. This could require additional operating and capital expenditures. However, we expect to achieve ongoing cost savings and", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "#### **RESULTS OF OPERATIONS**\n\nAt December 31, 2004, our operations consisted of 11 wholly-owned casino resorts and 50% investments in two other casino resorts, including:\n\n- **Las Vegas, Nevada:** Bellagio, MGM Grand Las Vegas, The Mirage, TI, New York-New York, Boardwalk, and Monte Carlo (50% owned).\n- **Other:** The Primm Valley Resorts (Buffalo Bill's, Primm Valley Resort and Whiskey Pete's) in Primm, Nevada; Beau Rivage in Biloxi, Mississippi; MGM Grand Detroit; Borgata (50% owned) in Atlantic City, New Jersey.\n\nWe operate in one segment, the operation of casino resorts, which includes offering gaming, hotel, dining, entertainment, retail and other resort amenities. 
Slightly over half of our net revenues are derived from gaming activities, a lower percentage than many of our competitors, as our operating philosophy is to provide a complete resort experience for our guests, including non-gaming amenities which command premium prices based on their quality.\n\nWe generate a majority of our net revenues and operating income from our Las Vegas Strip resorts. In 2004, over 75% of our net revenues and operating income was generated by wholly-owned Las Vegas Strip resorts. We believe that we own the premier casino resorts on the Las Vegas Strip, and a main focus of our strategy is to continually reinvest in these resorts to maintain that competitive advantage. Our concentration on the Las Vegas Strip exposes us to certain risks outside of our control, such as competition from other Las Vegas Strip resorts as well as new or expanded resorts in Las Vegas, including Wynn Las Vegas expected to open in 2005, and the impact from potential expansion of gaming in California. This concentration also exposes us to risks related to tourism and the general economy, including national and global economic conditions and terrorist attacks or other global events.\n\n#### **Key Performance Indicators**\n\nAs a resort-based company, our operating results are highly dependent on the volume of customers at our resorts, which in turn impacts the price we can charge for our hotel rooms and other amenities. We also generate a significant portion of our operating income from the high-end gaming segment, which can cause variability in our results. Key performance indicators related to revenue are:\n\n- Gaming revenue indicators table games drop and slot handle (volume indicators); \"win\" or \"hold\" percentage, which is not fully controllable by us. 
Our normal table games win percentage is in the range of 18% to 22% of table games drop and our normal slot win percentage is in the range of 6% to 7% of slot handle;\n- Hotel revenue indicators hotel occupancy (volume indicator); average daily rate (\"ADR\", price indicator); revenue per available room (\"REVPAR\"), a summary measure of hotel results, combining ADR and occupancy rate.\n\nMost of our revenue is essentially cash-based, through customers wagering with cash or paying for non-gaming services with cash or credit cards. Our resorts, like many in the industry, generate significant operating cash flow. Our industry is capital intensive and we rely heavily on the ability of our resorts to generate operating cash flow to repay debt financing, fund maintenance capital expenditures and provide excess cash for future development.\n\nOur results of operations do not tend to be seasonal in nature, though a variety of factors can affect the results of any interim period, including the timing of major Las Vegas conventions, the amount and timing of marketing and special events for our high-end customers, and the level of play during major holidays, including New Year and Chinese New Year.", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "(from left to right) **ROBERT C. SELWOOD** Senior Vice President— Accounting; **JAMES J. MURREN** President, CFO & Treasurer; **BRYAN L. WRIGHT** Senior Vice President — Assistant General Counsel & Assistant Secretary; **DANIEL J. D'ARRIGO** Senior Vice President—Finance\n\nNo company is better positioned to help shape the future of Las Vegas than MGM MIRAGE.\n\ncombination of Mandalay's assets with our financial strength and industry-leading financial discipline will yield significant returns for all of our stakeholders.\n\nWe are currently planning the integration of the two companies, and over time, we expect to realize the full potential of cost and revenue synergies. 
We will report on our progress throughout the coming year.\n\n### The Next Moment – A City is Born\n\nWhat makes a great city? Las Vegas has long been recognized as the leisure capital of the world. The resorts in our valley have been the innovative leaders in the hospitality industry and have driven the tremendous growth in visitor volume, high occupancy rates and surging food, beverage, entertainment and gaming volumes. But there is another Las Vegas – a community of two million residents on its way to three million by the end of the decade. Las Vegas is leading the U.S. migration to the Southwest. Our newcomers are attracted by the lifestyle, weather, cost of living and economic opportunity. Many have come from cities in the East, West and Midwest and take elements of established communities for granted, such as medical, educational and cultural excellence and diversity.\n\nThe people of Las Vegas today have great aspirations and\n\nexpect and demand more of our community. We are a city without a proper city, and that is about to change. Ambitious plans are underway to revitalize Downtown Las Vegas, centered around a beautiful performing arts center and an academic medical center; UNLV is in the midst of a major capital campaign to enhance the Midtown section of Las Vegas; and your company has embarked on the most comprehensive project to date – Project CityCenter, at the heart of the Las Vegas Strip.\n\nThe Las Vegas Strip has no sense of city now – but we believe it can. The future of Las Vegas is centered around our great resorts and our future development. There are many reasons we believe Project CityCenter is the right project for our Las Vegas Strip development. We believe there is a social imperative that Las Vegas mature as a city, not just a conglomeration of suburbs. A city deserves a center – a center for living, working and playing. We want to be an integral part in defining the Las Vegas of the future.\n\nAnd there is a business motivation. 
Companies in the gaming industry have historically not been valued on par with other hospitality companies and mixed-use real estate companies. We plan to break out of the gaming mold, and define a company based on extensive holdings in multiple businesses. Project CityCenter will include major residential, retail and entertainment components. We will partner with boutique\n\n**CYNTHIA KISER MURPHEY** Senior VP, MGM MIRAGE Human Resources\n\n**PUNAM MATHUR** Senior VP, MGM MIRAGE Diversity/Community Relations\n\n**WILLIAM MCBEATH** President, The Mirage\n\n**ROBERT V. MOON** Chairman, MGM MIRAGE Marketing\n\n**FELIX D. RAPPAPORT** President, New York-New York\n\n**SCOTT SIBELLA** President, TI", - "page_start": 25, - "page_end": 25, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "Recently, we opened the SKYLOFTS, a new level of luxury for guests atop MGM Grand Las Vegas.\n\nWe'll follow the success of these new resort features with a category-defining new nightclub at The Mirage, two fabulous restaurants by Joël Robuchon at MGM Grand Las Vegas and gaming upgrades company-wide. Second, we are doubling down on Las Vegas by merging with Mandalay, a company we have long admired. The Mandalay merger represents a tremendous opportunity to build on the momentum established by Mike Ensign and his team. And third, we are dreaming of a not-so-distant future, when\n\nProject CityCenter will literally redefine the Las Vegas Strip and change the face of Las Vegas forever.\n\n#### Mandalay in Motion\n\nWe are incredibly excited to begin our journey with the talented people of Mandalay, as we work to maximize the value of Mandalay's instantly recognized brands and worldclass resorts. Long a fixture in Las Vegas, Mandalay's resorts will add to our premium portfolio and allow us to accelerate the pace of our growth. Our hotel people will be able to market a wider range of rooms and benefit from a world-class\n\nconvention center. 
Our casino marketing people will be able to offer their customers wonderful new amenities to expand our market reach. And our development people will be able to maximize the potential of priceless Las Vegas Strip land.\n\nThe Mandalay merger represents another defining moment for MGM MIRAGE, much like the Mirage Resorts transaction in 2000, at a time when Las Vegas is in a state of astounding metamorphosis. No company is better positioned to help shape the future of Las Vegas than MGM MIRAGE. We employ more people, invest more money and hold more prime real estate than any other company in Las Vegas. The\n\n**AL FACCINTO** President, MGM MIRAGE International Marketing\n\n**ALAN FELDMAN** Senior VP Public Affairs, MGM MIRAGE\n\n**BRUCE GEBHARDT** Senior VP, MGM MIRAGE Global Security\n\n**WILLIAM J. HORNBUCKLE** President & COO, MGM MIRAGE Europe\n\n**PHYLLIS JAMES** Senior VP & Senior Counsel, MGM MIRAGE", - "page_start": 24, - "page_end": 24, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## TO OUR SHAREHOLDERS EXPANDING WITH EXCELLENCE\n\n**BELLAGIO** underwent a significant expansion during 2004 resulting in the opening of the Spa Tower and several important new amenities at this AAA Five Diamond property. Bellagio remains Las Vegas' first and only hotel-casino to receive this prestigious recognition. These new additions add dimension and depth to the world-famous experience awaiting guests at Bellagio.\n\n**MGM GRAND LAS VEGAS** completed a transformation, begun in 2003, of its food and beverage and entertainment offerings. MGM Grand is one of the must-see attractions of Las Vegas, with Cirque du Soleil's newest production, *KA`* TM, and several of the Strip's finest restaurants and hottest nightspots. 
**18.0%**\n\n**TI**'s transformation was no less extensive, as the property's management team conceived and implemented a program to enliven the property with new restaurants and nightlife.\n\n**THE MIRAGE** was the site of a revolution in Las Vegas' history as the venerable buffet was given new life as a top dining establishment, Cravings. Others may follow this lead, but The Mirage was the first property to breathe new life into what remained of the last bastion of \"old\" Las Vegas.\n\n2004 Revenue Mix Casino\n\n**SKYLOFTS** MGM Grand A private sanctuary of sleek, elegant two-story accommodations, offering discerning guests the quintessential loft environment - harmonizing design, décor, ambiance and unparalleled vistas.\n\n- Rooms Food & Beverage Entertainment, Retail,\n- & Other\n\n**BELLAGIO SPA** Unique design elements, combined with an international array of innovative treatments and specially trained therapists, provide the ultimate indulgent experience.\n\nThese investments in your company's future paid dividends even before the year was out. We established a new record for net revenues posting $4.2 billion, a 10% increase over 2003.\n\nYour company's resorts produced record EBITDA of $1.46 billion, an increase of 23% over 2003, while operating income was $951 million, an increase of 36%, with record results at Bellagio, MGM Grand Las Vegas and Beau Rivage.\n\n#### Defining Momentum in the Community\n\nI've spent 27 years in this profession and the incredible generosity of our employees never ceases to amaze me. Shortly after the merger with Mirage Resorts in 2000, we established the Voice Foundation. This allows employees to express themselves in the communities we serve by providing them a mechanism to raise monies for worthy causes. It's their money and they decide where it goes. 
Your company provides the marketing and administrative support.\n\nIn each year since we established the program, employees have given record amounts to support a\n\n**KÀ** The most spectacular production ever, by a troupe renowned for its pageantry. Cirque du Soleil's *KÀ* debuted at a new theatre at MGM Grand in the fourth quarter of 2004.\n\nWhat exactly is a defining moment? Try a multi-billion dollar project centered in the heart of Las Vegas.", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "#### **NOTE 1 — ORGANIZATION**\n\nMGM MIRAGE (the \"Company\"), formerly MGM Grand, Inc., is a Delaware corporation, incorporated on January 29, 1986. As of December 31, 2004 approximately 58% of the outstanding shares of the Company's common stock were owned by Tracinda Corporation, a Nevada corporation wholly owned by Kirk Kerkorian. MGM MIRAGE acts largely as a holding company and, through wholly-owned subsidiaries, owns and/or operates casino resorts.\n\nThe Company owns and operates the following casino resorts on the Las Vegas Strip in Las Vegas, Nevada: Bellagio, MGM Grand Las Vegas, The Mirage, Treasure Island (\"TI\"), New York-New York and the Boardwalk Hotel and Casino. The Company owns a 50% interest in the joint venture that owns and operates the Monte Carlo Resort & Casino, also located on the Las Vegas Strip.\n\nThe Company owns three resorts in Primm, Nevada at the California/Nevada state line – Whiskey Pete's, Buffalo Bill's and the Primm Valley Resort – as well as two championship golf courses located near the resorts. The Company also owns Shadow Creek, an exclusive world-class golf course located approximately ten miles north of its Las Vegas Strip resorts.\n\nThe Company, through its wholly owned subsidiary, MGM Grand Detroit, Inc., and its local partners formed MGM Grand Detroit, LLC, to develop a hotel, casino and entertainment complex in Detroit, Michigan. 
MGM Grand Detroit, LLC operates a casino in an interim facility in downtown Detroit. See Note 10 for discussion of the revised development agreement with the City of Detroit and plans for a permanent casino resort.\n\nThe Company owns and operates Beau Rivage, a beachfront resort located in Biloxi, Mississippi. The Company also owns a 50% interest in a limited liability company that owns Borgata, a casino resort at Renaissance Pointe, located in the Marina area\n\nof Atlantic City, New Jersey. Boyd Gaming Corporation owns the other 50% of Borgata and also operates the resort. Borgata opened in July 2003. The Company owns approximately 95 developable acres adjacent to Borgata, a portion of which consists of common roads, landscaping and master plan improvements which the Company designed and developed as required under the agreement with Boyd.\n\nUntil July 2004, the Company owned and operated MGM Grand Australia and until January 2004, the Company owned and operated the Golden Nugget Las Vegas in downtown Las Vegas and the Golden Nugget Laughlin in Laughlin, Nevada (the \"Golden Nugget Subsidiaries\"). Until June 2003, the Company operated PLAYMGMMIRAGE.com, the Company's online gaming website based in the Isle of Man. See Note 3 for further information regarding these discontinued operations. In the second quarter of 2002, the Company received proceeds of $11 million upon termination of management agreements covering four casinos in the Republic of South Africa. Prior to the termination, the Company managed three permanent casinos and one interim casino and received management fees from its partner, Tsogo Sun Gaming & Entertainment. The termination fee was recorded as part of other revenues in the accompanying consolidated statements of income.\n\nThe Company is actively seeking future development opportunities in the United Kingdom. 
In May 2003, the Company acquired a 25% interest in Metro Casinos Limited, a United Kingdom gaming company which operates a casino in Bristol. See Note 10 for discussion of other potential developments in the United Kingdom.\n\nIn June 2004, the Company entered into a joint venture agreement to develop, build and operate a hotel-casino resort in Macau S.A.R. The agreement is subject to, among other things, the approval of the government of Macau S.A.R., and other regulatory approvals, as well as the entry into a subconcession agreement with the holder of one of the existing concessions.", - "page_start": 55, - "page_end": 55, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "### TO OUR SHAREHOLDERS\n\n## MGM MIRAGE DEFINES MOMENTUM\n\n### \"Your company has undergone several defining moments throughout its history.\"\n\nrom its roots some 35 years ago with the opening of the International Hotel, we have played a leading role in continuously redefining the Las Vegas experience. F\n\nWe announced two significant initiatives in 2004 that, taken together, give your company unrivaled momentum to set industry standards for creativity, performance and responsibility for decades to come.\n\n### Defining Momentum for Las Vegas\n\nOur merger agreement with Mandalay Resort Group and our plans to develop Project CityCenter on the Las Vegas Strip are among the most significant announcements in Las Vegas history. As this fabled city begins its second hundred years, MGM MIRAGE is positioned like no other company to take advantage of unsurpassed growth opportunities in the most dynamic gaming and entertainment market in the world.\n\nProject CityCenter will uniquely re-position Las Vegas like no other project before it. 
Far more than simply another casino-hotel, Project CityCenter encompasses a\n\nmyriad of elements that will propel Las Vegas into a new generation of urban sophistication.\n\nWhile additional details of this extraordinary development will come in the months ahead, I am pleased to tell you that we have secured the services of the internationally acclaimed architect Cesar Pelli to design our anchor resort at the heart of Project CityCenter.\n\nCesar Pelli & Associates has worked with corporate, government and private clients to design major public spaces, museums, airports, research centers, performing arts centers, academic buildings, hotels, office and residential towers and mixed-use projects.\n\nThe work of Cesar Pelli is not constrained by a personal style or a signature that would limit his architecture; instead, it celebrates the unique characteristics of each project. Using this approach, he has designed several exceptional buildings in the United States and abroad.\n\nWe are very excited about our partnership with Mr. Pelli and his colleagues and believe they will deliver for MGM MIRAGE and the residents of Southern Nevada a building of iconic stature around the world.\n\n**J. TERRENCE LANNI** Chairman & Chief Executive Officer\n\n**BELLAGIO SPA TOWER** The quintessential luxury hotel is now even more opulent. This expansion includes 928 rooms and suites, 80,000 square feet of convention space, retail outlets, and restaurants.\n\n**SHIBUYA** MGM GRAND Designed by superstar team Yabu Pushelberg, Shibuya features stellar sushi and the widest sake selection this side of the Pacific, all served in a sleek, airy ambiance.\n\n**CRAVINGS** THE MIRAGE The zenith of all-you-can-eat. 
Designed by Adam Tihany, Cravings boasts 11 cooking stations, a street of unique restaurants, and an array of temptations in what's unquestionably the ultimate buffet dining experience.", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "# SETTING THE FUTURE IN MOTION\n\n**MGM GRAND MACAU Our joint venture has secured a prime location to develop and construct an exciting addition to this dynamic gaming destination.**\n\n**hile the international opportunities for growth remain to be fully defined, in 2004 MGM MIRAGE entered into a joint venture agreement with Pansy Ho Chiu-king to develop, build and operate a major hotel-casino resort in Macau S.A.R. No other international market has shown its ability to sustain improved growth even as the government takes important steps to modernize its regulatory structure. We have methodically moved through the regulatory process and look forward to initiating construction in 2005 and opening in 2007.** W\n\n**We continue to monitor and pursue opportunities as they arise in the United Kingdom. The bill modernizing British gaming law has moved steadily through the legislative process throughout the year. Several key issues are yet to be resolved, but we remain hopeful that Great Britain will become one of the world's leading jurisdictions with significant growth opportunities for decades to come.**\n\n**We are also excited about the emergence of possible new jurisdictions in the Far East. We plan to pursue additional development opportunities as they become available, as we believe that the Far East holds considerable promise as a growing gaming market.** \n\n**Domestically, we are selectively expanding our presence as well, moving into markets and business lines where our superior brands and assets can provide the best returns. In Las Vegas we will maximize the use of our vast land holdings, beginning with The Residences at MGM Grand. 
This unique venture is a breakthrough combination of a hotel and condominiums – the first of its kind in Las Vegas. In Atlantic City, we own an exceptional site for future development. The already successful Borgata is prepared to grow bigger and better. Expansion plans include more casino space, a new hotel tower, more restaurants, retail outlets and an expanded spa.**\n\n**THE RESIDENCES AT MGM GRAND Our joint venture with Turnberry Associates to build luxury condo/hotels ignited a flurry of development in Las Vegas.**", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "**BELLAGIO ADDS A JEWEL TO THE FAMILY CROWN.** The Mirage Resorts merger provided outstanding resorts, people and land, and has propelled our earnings and provided an unparalleled platform for future growth.\n\n**MANDALAY RESORT GROUP AND MGM MIRAGE ANNOUNCE MERGER.** Mandalay Resort Group will add iconic resorts and great people to our family. We will own 832 acres in the heart of Las Vegas, the fastest growing city in the United States.\n\n09 **20**\n\n**BORGATA CHANGES THE FACE OF ATLANTIC CITY.** Borgata is launched in Atlantic City with our joint-venture partner Boyd Gaming. 
Borgata has been a tremendous success, raising the bar for casino entertainment in that market.\n\n**SOON, A SPECTACULAR NEW CITY WILL RISE.** Project CityCenter – an ambitious multi-dimensional urban plan – will contribute to the remarkable transformation of Las Vegas as an emerging city of global significance.", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_MGM_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_MGM_2004.pdf", - "query": "Which events negatively impacted leisure travel and MCM Mirage high-end gaming business in late 2002 and early 2003 ?", - "target_page": 32, - "target_passage": "The war with Iraq and the outbreak of SARS in Asia, both of which negatively impacted leisure travel and our high-end gaming business in late 2002 and early 2003", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "Recently, we opened the SKYLOFTS, a new level of luxury for guests atop MGM Grand Las Vegas.\n\nWe'll follow the success of these new resort features with a category-defining new nightclub at The Mirage, two fabulous restaurants by Joël Robuchon at MGM Grand Las Vegas and gaming upgrades company-wide. Second, we are doubling down on Las Vegas by merging with Mandalay, a company we have long admired. The Mandalay merger represents a tremendous opportunity to build on the momentum established by Mike Ensign and his team. And third, we are dreaming of a not-so-distant future, when\n\nProject CityCenter will literally redefine the Las Vegas Strip and change the face of Las Vegas forever.\n\n#### Mandalay in Motion\n\nWe are incredibly excited to begin our journey with the talented people of Mandalay, as we work to maximize the value of Mandalay's instantly recognized brands and worldclass resorts. Long a fixture in Las Vegas, Mandalay's resorts will add to our premium portfolio and allow us to accelerate the pace of our growth. 
Our hotel people will be able to market a wider range of rooms and benefit from a world-class\n\nconvention center. Our casino marketing people will be able to offer their customers wonderful new amenities to expand our market reach. And our development people will be able to maximize the potential of priceless Las Vegas Strip land.\n\nThe Mandalay merger represents another defining moment for MGM MIRAGE, much like the Mirage Resorts transaction in 2000, at a time when Las Vegas is in a state of astounding metamorphosis. No company is better positioned to help shape the future of Las Vegas than MGM MIRAGE. We employ more people, invest more money and hold more prime real estate than any other company in Las Vegas. The\n\n**AL FACCINTO** President, MGM MIRAGE International Marketing\n\n**ALAN FELDMAN** Senior VP Public Affairs, MGM MIRAGE\n\n**BRUCE GEBHARDT** Senior VP, MGM MIRAGE Global Security\n\n**WILLIAM J. HORNBUCKLE** President & COO, MGM MIRAGE Europe\n\n**PHYLLIS JAMES** Senior VP & Senior Counsel, MGM MIRAGE", - "page_start": 24, - "page_end": 24, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "#### **RESULTS OF OPERATIONS**\n\nAt December 31, 2004, our operations consisted of 11 wholly-owned casino resorts and 50% investments in two other casino resorts, including:\n\n- **Las Vegas, Nevada:** Bellagio, MGM Grand Las Vegas, The Mirage, TI, New York-New York, Boardwalk, and Monte Carlo (50% owned).\n- **Other:** The Primm Valley Resorts (Buffalo Bill's, Primm Valley Resort and Whiskey Pete's) in Primm, Nevada; Beau Rivage in Biloxi, Mississippi; MGM Grand Detroit; Borgata (50% owned) in Atlantic City, New Jersey.\n\nWe operate in one segment, the operation of casino resorts, which includes offering gaming, hotel, dining, entertainment, retail and other resort amenities. 
Slightly over half of our net revenues are derived from gaming activities, a lower percentage than many of our competitors, as our operating philosophy is to provide a complete resort experience for our guests, including non-gaming amenities which command premium prices based on their quality.\n\nWe generate a majority of our net revenues and operating income from our Las Vegas Strip resorts. In 2004, over 75% of our net revenues and operating income was generated by wholly-owned Las Vegas Strip resorts. We believe that we own the premier casino resorts on the Las Vegas Strip, and a main focus of our strategy is to continually reinvest in these resorts to maintain that competitive advantage. Our concentration on the Las Vegas Strip exposes us to certain risks outside of our control, such as competition from other Las Vegas Strip resorts as well as new or expanded resorts in Las Vegas, including Wynn Las Vegas expected to open in 2005, and the impact from potential expansion of gaming in California. This concentration also exposes us to risks related to tourism and the general economy, including national and global economic conditions and terrorist attacks or other global events.\n\n#### **Key Performance Indicators**\n\nAs a resort-based company, our operating results are highly dependent on the volume of customers at our resorts, which in turn impacts the price we can charge for our hotel rooms and other amenities. We also generate a significant portion of our operating income from the high-end gaming segment, which can cause variability in our results. Key performance indicators related to revenue are:\n\n- Gaming revenue indicators table games drop and slot handle (volume indicators); \"win\" or \"hold\" percentage, which is not fully controllable by us. 
Our normal table games win percentage is in the range of 18% to 22% of table games drop and our normal slot win percentage is in the range of 6% to 7% of slot handle;\n- Hotel revenue indicators hotel occupancy (volume indicator); average daily rate (\"ADR\", price indicator); revenue per available room (\"REVPAR\"), a summary measure of hotel results, combining ADR and occupancy rate.\n\nMost of our revenue is essentially cash-based, through customers wagering with cash or paying for non-gaming services with cash or credit cards. Our resorts, like many in the industry, generate significant operating cash flow. Our industry is capital intensive and we rely heavily on the ability of our resorts to generate operating cash flow to repay debt financing, fund maintenance capital expenditures and provide excess cash for future development.\n\nOur results of operations do not tend to be seasonal in nature, though a variety of factors can affect the results of any interim period, including the timing of major Las Vegas conventions, the amount and timing of marketing and special events for our high-end customers, and the level of play during major holidays, including New Year and Chinese New Year.", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "previously laid off or terminated employees, management determined in 2002 that a portion of the remaining accrual was no longer necessary. This resulted in a restructuring credit of $10 million in 2002.\n\nProperty transactions, net consisted of the following:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n| --- | --- | --- | --- |\n| Gain on sale of North Las Vegas land $ | — | $ (36,776) | $ — |\n| Siegfried & Roy theatre write-down – The Mirage . . . 
| — | 1,408 | — |\n| Storm damage – Beau Rivage | — | — | 7,824 |\n| Write-off of Detroit development costs | — | — | 4,754 |\n| Impairment of assets to be disposed of | 473 | 5,764 | 2,134 |\n| Demolition costs | 7,057 | 6,614 | — |\n| Other net losses on asset sales or disposals | 1,135 | 4,049 | — |\n| | $ 8,665 | $ (18,941) | $ 14,712 |\n\nIn 2004, there were no material unusual property transactions. In 2003, we sold 315 acres of land in North Las Vegas, Nevada near Shadow Creek for approximately $55 million, resulting in the $37 million gain reflected above. Prior to 2003, we classified gains and losses on routine assets sales or disposals as a non-operating item at some resorts and as an operating item at other resorts. We believe the preferable presentation of these items is as an element of operating income. Prior period statements have not been reclassified as such transactions were not material in periods prior to 2003. Until 2003, demolition costs were typically capitalized as part of new construction. We began expensing demolition costs on major construction projects as incurred on January 1, 2003, and are accounting for this change in policy prospectively. Demolition costs were not material in periods prior to 2003. Demolition costs in 2004 and 2003 related primarily to preparation for the Bellagio standard room remodel, Bellagio expansion and new theatre at MGM Grand Las Vegas. Impairments of assets to be disposed of in 2003 consisted primarily of assets related to the former EFX! show and restaurants closed during 2003 at MGM Grand Las Vegas.\n\nIn 2002, Tropical Storm Isidore caused property damage at Beau Rivage totaling $8 million, including clean-up costs. The amount of the write-down for damaged assets was determined based on the net book value of the assets and engineering estimates. 
In connection with the revised development agreement in Detroit, we wrote off $5 million, which was the net book value of previously incurred development costs associated with the riverfront permanent casino site ($9 million), offset by previously accrued obligations no longer required under the revised development agreement ($4 million).\n\n#### **Non-operating Results**\n\nThe following table summarizes information related to interest on our long-term debt:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | | 2002 |\n| --- | --- | --- | --- | --- |\n| Interest cost $ 401,391 | | $ 352,820 | $ | 345,448 |\n| Less: Capitalized interest | (23,005) | (15,234) | | (61,712) |\n| Interest expense, net $ 378,386 | | $ 337,586 | $ | 283,736 |\n| Cash paid for interest, net of amounts capitalized . . . $ 321,008 | | $ 308,198 | $ | 266,071 |\n| Average total debt balance $ 5.5 billion | | $ 5.2 billion | | $ 5.2 billion |\n| Weighted average interest rate | 7.2% | 6.9% | | 6.8% |\n\nInterest cost was higher in 2004 as we had a higher average borrowing rate due to increases in variable interest rates and the issuance of significant fixed rate debt in the second half of 2004 in anticipation of the Mandalay merger.\n\nCapitalized interest increased in 2004 due to the ongoing Bellagio expansion and *KÀ* theatre projects. Capitalized interest in 2005 will include interest capitalized on Project CityCenter. Capitalized interest decreased in 2003 due to the suspension of development in Atlantic City in late 2002 and the mid-2003 cessation of interest capitalization on the Company's investment in Borgata, which opened on July 3, 2003.\n\nNon-operating items from unconsolidated affiliates, primarily our share of Borgata's interest expense and state income taxes, increased from $10 million in 2003 to", - "page_start": 34, - "page_end": 34, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "Slot revenues increased substantially in both 2003 and 2004. 
Improvements were the result of strong customer visitation, enhanced marketing programs, the impact of our Players Club rewards program, and the implementation of cashless gaming technology in 2003. Slot win percentages were consistent among all three periods.\n\nNon-casino revenue increased in 2004 primarily due to the enhanced amenities at our resorts. In addition, we were able to increase the pricing for our rooms and other non-gaming amenities. Our hotel results began to improve notably in the latter half of 2003, particularly at our Las Vegas Strip resorts. For the year ended December 31, 2004 REVPAR at our Las Vegas Strip resorts was $141 compared to $126 in 2003, an increase of 12%. Company-wide REVPAR was $121, an increase of 10% over 2003. This increase was largely rate driven, as occupancy increased from 91% to 92% and ADR increased from $121 to $132. In 2003, company-wide REVPAR increased 6% from $104 to $110, with most of the gains coming in the second half of the year.\n\n#### **Operating Results – Details of Certain Charges** Pre-opening and start-up expenses consisted of the following:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n| --- | --- | --- | --- |\n| Bellagio expansion $ 3,805 | | $ — | $ — |\n| KÀ | 3,655 | — | — |\n| Borgata | — | 19,326 | 7,757 |\n| New York-New York (Zumanity, Nine Fine Irishmen) | — | 4,310 | — |\n| Players Club | — | 3,051 | 5,117 |\n| Other | 2,816 | 2,579 | 1,267 |\n| $ 10,276 | | $ 29,266 | $ 14,141 |\n\nPre-opening and start-up expenses related to Borgata represent our share of the operating results of Borgata prior to its July 2003 opening.\n\n#### Restructuring costs (credit) consisted of the following:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n| --- | --- | --- | --- |\n| Contract termination costs $ 3,693 | | $ 4,049 | $ 3,257 |\n| Reversal of certain September 11 charges | — | — | (10,421) |\n| Siegfried & Roy show closure – The Mirage | — | 1,623 | — |\n| Reversal 
of 2000 contract termination costs | — | — | (9,857) |\n| Other | 1,932 | 925 | — |\n| $ 5,625 | | $ 6,597 | $ (17,021) |\n\nIn 2004, restructuring costs include $3 million for contract termination costs related to the Aqua restaurant at Bellagio and $2 million of workforce reduction costs at MGM Grand Detroit as a result of our efforts to minimize the impact of a gaming tax increase in Michigan.\n\nIn 2003, our primary restructuring activities included closing two marketing offices and terminating the related leases, terminating a lease agreement with a restaurant tenant at MGM Grand Las Vegas, and closing the Siegfried & Roy show, which resulted in a charge for employee severance costs.\n\nIn December 2002, we recorded a restructuring credit of $10 million related to a lease contract termination accrual originally recorded in June 2000 as we determined that payment under this obligation was not probable. We recorded $3 million of restructuring charges in December 2002 related to contract termination costs for a restaurant lease and the EFX! show at MGM Grand Las Vegas. In 2001, management responded to a decline in business volumes caused by the September 11 attacks by implementing cost containment strategies which included a significant reduction in payroll and a refocusing of several of our marketing programs. This resulted in a $22 million charge against earnings. As a result of improving business levels and our success at re-hiring a substantial number of", - "page_start": 33, - "page_end": 33, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "- The ongoing capital investments in upscale amenities at our resorts, which we believe is allowing us to market more effectively to visitors, capture a greater share of these visitors' increased travel budgets, and generate premium pricing for our resorts' rooms and other amenities.\nAs a result of the above trends, our net revenues increased 10% in 2004, while increasing only 3% in 2003. 
Net revenues at MGM Grand Las Vegas increased 14% in 2004, due to the addition of several new restaurants, bars and other amenities, and in spite of fewer rooms in service due to room remodel activity. Net revenues at New York-New York increased 26% as the resort continues to benefit from *Zumanity* and Nine Fine Irishmen, both of which opened in summer 2003. Net revenues at The Mirage decreased 2% as the resort was without the Siegfried & Roy show and the buffet was closed for a portion of the year while Cravings was constructed.\n\nOur operating income in 2004 increased 36%, due primarily to the strong revenue trends and a full year of Borgata's results. The increase in income from unconsolidated affiliates is responsible for approximately one-third of the increase in operating income, while improvements at our operating resorts, particularly Bellagio, MGM Grand Las Vegas and New York-New York, make up the rest of the increase. Operating income at MGM Grand Detroit was essentially flat year-overyear, despite an increase in the gaming tax rate from 18% to 24% effective September 2004. Several other factors largely offset: Higher corporate expense due to increased development costs; lower bad debt expense due to improved collections; lower preopening expenses due to Borgata preopening expenses in 2003; and higher property transactions, net due to a $37 million gain on sale of land in 2003.\n\nIn 2003, our operating income decreased by 6%. 
While revenues grew especially in the second half of 2003, expense growth, particularly in payroll, outpaced revenues.\n\n#### **Operating Results – Detailed Revenue Information** The following table presents details of our net revenues:\n\n#### (In thousands)\n\n| Year Ended December 31 | 2004 | % Change | 2003 | % Change | 2002 |\n| --- | --- | --- | --- | --- | --- |\n| Casino revenues, net: | | | | | |\n| Table games $ | 943,343 | 9% | $ 866,096 | (3%) | $ 893,836 |\n| Slots | 1,218,589 | 9% | 1,115,029 | 5% | 1,064,491 |\n| Other | 62,033 | 10% | 56,389 | 3% | 54,513 |\n| Casino revenues, net . . | 2,223,965 | 9% | 2,037,514 | 1% | 2,012,840 |\n| Non-casino revenue: | | | | | |\n| Rooms | 911,259 | 9% | 833,272 | 5% | 796,861 |\n| Food and beverage | 841,147 | 11% | 757,278 | 7% | 706,153 |\n| Entertainment, retail | | | | | |\n| and other | 696,117 | 7% | 647,702 | 2% | 637,625 |\n| Non-casino revenues | 2,448,523 | 9% | 2,238,252 | 5% | 2,140,639 |\n| | 4,672,488 | 9% | 4,275,766 | 3% | 4,153,479 |\n| Less: Promotional allowances . | (434,384) | 5% | (413,023) | 4% | (396,551) |\n| | $ 4,238,104 | 10% | $ 3,862,743 | 3% | $ 3,756,928 |\n\nTable games revenues increased as a result of the improvements in the U.S. economy and the general economy worldwide, as well as increased attendance at targeted marketing events, including the New Years period. Total table games volume for the year was up 9%, with particular strength in baccarat volume, up 18%. These are the most significant increases in table games volumes since 2000. Table games revenues decreased in 2003, as a slightly lower hold percentage and the impact of the Iraq war and SARS outbreak in early 2003 were not fully offset by strong volume levels over the latter half of 2003. 
Table games win percentages were within our normal range for all periods presented.", - "page_start": 32, - "page_end": 32, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "#### **Overall Outlook**\n\nWe have invested heavily in our existing operations in 2003 and 2004, and expect to continue to do so on a targeted basis in 2005. Our Las Vegas Strip resorts require ongoing capital investment to maintain their competitive advantages. We believe the investments in additional non-gaming amenities we made in 2003 and 2004 have enhanced our ability to generate increased visitor volume and allowed us to charge premium prices for our amenities.\n\nThe most likely significant factors affecting operating results at our existing resorts in 2005 will be the expected continued strength of the leisure and convention travel segments, the expansion of Bellagio and the opening of *KÀ* and other amenities at MGM Grand Las Vegas, and new competition from Wynn Las Vegas on the Las Vegas Strip. Various lodging market observers, such as PricewaterhouseCoopers and Smith Travel Research, are forecasting mid-single digit percentage growth in REVPAR in 2005, with greater REVPAR gains in full service hotels. Our REVPAR growth, and REVPAR growth in Las Vegas in general, has outpaced that of the national market, and we expect that trend to continue.\n\nThe Bellagio expansion opened in late 2004 and added over 30% to the resort's room base. In addition, we added new meeting, retail and dining space and significantly expanded the spa and salon. *KÀ* opened in late November 2004 at MGM Grand Las Vegas, which had been without a featured production show for almost two years. 
Along with the numerous restaurant and other entertainment additions at MGM Grand Las Vegas, *KÀ* will enhance our ability to generate visitor traffic and capture a greater share of our guests' spending.\n\nWynn Las Vegas will add room capacity to the Las Vegas market, with its 2,700 rooms representing a 2% increase in Las Vegas room supply. Wynn Las Vegas will also feature numerous upscale restaurants and generally target customers who might otherwise choose Bellagio, MGM Grand Las Vegas or The Mirage. We believe there will be some impact on these resorts from Wynn Las Vegas, but also believe that the breadth of amenities in our portfolio of resorts and our loyalty and other marketing programs will help minimize these competitive pressures. The proximity of Wynn Las Vegas to TI and The Mirage, along with pedestrian bridges linking TI with the Fashion Show Mall and Venetian, will also benefit these resorts.\n\n#### **Mandalay Merger**\n\nOn June 16, 2004, we announced that we had entered into a definitive merger agreement with Mandalay Resort Group (\"Mandalay\"), a publicly traded company, under which we will acquire Mandalay for $71.00 in cash for each share of common stock of Mandalay. Mandalay owns and operates eleven properties in Nevada, including Mandalay Bay, Luxor, Excalibur, Circus Circus, and Slots-A-Fun in Las Vegas, Circus Circus-Reno in Reno, Colorado Belle and Edgewater in Laughlin, Gold Strike and Nevada Landing in Jean, and Railroad Pass in Henderson. Mandalay also owns and operates Gold Strike, a hotel/casino in Tunica County, Mississippi. In addition, Mandalay owns a 50% interest in Silver Legacy in Reno, a 50% interest in Monte Carlo in Las Vegas, a 50% interest in Grand Victoria, a riverboat in Elgin, Illinois, and a 53.5% interest in MotorCity in Detroit, Michigan. 
The total consideration is approximately $8.1 billion, including equity value of approximately $4.8 billion, convertible debentures with a redemption value of approximately $574 million, the assumption or repayment of other outstanding Mandalay debt with a fair value of approximately $2.6 billion as of December 31, 2004, and $100 million of estimated transaction costs. The transaction is structured as a merger of one of our wholly-owned subsidiaries with and into Mandalay. The transaction will be accounted for as a purchase and is anticipated to close during the first quarter of 2005.\n\nThe Mandalay merger will impact our operations in several ways. We will have to integrate Mandalay's operations into ours. This could require additional operating and capital expenditures. However, we expect to achieve ongoing cost savings and", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "The increase in aggregate dollars in all periods presented is primarily a result of the expansion of our operations through internal growth and acquisitions.\n\nThe increase in cost of operations as a percentage of revenue from 2002 to 2003 and the decrease in cost of operations as a percentage of revenue from 2003 to 2004 is primarily attributable to higher self-insurance expense in 2003. Self-insurance expense was $165.3 million, $189.5 million and $138.1 million for the years ended December 31, 2004, 2003 and 2002, respectively. The increase in self-insurance expense in 2003 related to existing claims and was attributable to the expansion of our operations and various changes in estimates as a result of continued negative trends through the 2003 policy year.\n\nExcluding self-insurance expense, cost of operations as a percentage of revenue increased during the year ended December 31, 2004 versus the comparable 2003 period. 
This increase is primarily attributable to increased fuel prices, labor costs and subcontracting costs associated with the long-haul transport of waste by third-party vendors. Excluding self-insurance expense, cost of operations as a percentage of revenue decreased in 2003 versus the comparable 2002 period due to the elimination of closure and post-closure expense as a component of cost of operations in accordance with SFAS 143 in 2003 and the termination of our operating lease facility in July 2002. This decrease was partially oÅset by increased fuel prices, an increase in waste taxes levied on landÑll volumes in certain states, an increase in revenue generated by lines of business that produce lower operating margins and an increase in the long-haul transport of waste by third-party vendors.\n\nTo date in 2005, we have experienced a signiÑcant increase in fuel prices. We believe that cost of operations as a percentage of revenue may continue to remain high depending upon the cost of fuel, health insurance, risk insurance and other key components of our cost structure and general economic conditions.\n\n*Depreciation, Amortization and Depletion of Property and Equipment.* Depreciation, amortization and depletion expenses for property and equipment were $252.4 million, $233.8 million and $193.5 million, or, as a percentage of revenue, 9.3%, 9.3% and 8.2%, for the years ended December 31, 2004, 2003 and 2002, respectively. The increase in aggregate dollars from 2003 to 2004 is primarily due to the expansion of our operations through internal growth and acquisitions. The increase in aggregate dollars and as a percentage of revenue from 2002 to 2003 is primarily due to an increase in landÑll amortization associated with the adoption of SFAS 143. 
The remaining increase from 2002 to 2003 is due to increased depreciation expense resulting from capital expenditures, acquisitions and the purchase of equipment originally placed into service pursuant to an operating lease.\n\n*Amortization of Intangible Assets.* Intangible assets consist primarily of cost in excess of fair value of net assets acquired, but also includes values assigned to long-term contracts, covenants not to compete and customer relationships. Expenses for amortization of intangible assets were $7.0 million, $5.3 million and $6.1 million, or, as a percentage of revenue, .3%, .2% and .2%, for the years ended December 31, 2004, 2003 and 2002, respectively. The increase in such expenses in aggregate dollars and as a percentage of revenue from 2003 to 2004 is primarily due to amortization expense on amounts that were recorded in other intangible assets during the three months ended September 30, 2004 resulting from an extensive internal review of all recent acquisitions. The increase in amortization of intangible assets in aggregate dollars is also due to the amortization of intangible assets associated with businesses acquired during 2004.\n\n*Accretion expense.* Accretion expense was $13.7 million and $12.7 million or, as a percentage of revenue, .5% and .5%, for the years ended December 31, 2004 and 2003, respectively, versus $0 for 2002. Accretion expense resulted from the adoption of SFAS 143 as of January 1, 2003. The increase in such expenses in aggregate dollars in 2004 is primarily due to expansion of our landÑll operations.\n\n*Selling, General and Administrative Expenses.* Selling, general and administrative expenses were $268.3 million, $247.9 million and $238.7 million, or, as a percentage of revenue, 9.9%, 9.8% and 10.1%, for the years ended December 31, 2004, 2003 and 2002, respectively. The increases in aggregate dollars are primarily a result of the expansion of our operations through internal growth and acquisitions. 
The increase in such expenses as a percentage of revenue from 2003 to 2004 is primarily due to higher compensation costs. The decrease in such expenses as a percentage of revenue from 2002 to 2003 is primarily due to leveraging our existing overhead structure over an expanding revenue base.", - "page_start": 43, - "page_end": 43, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## TO OUR SHAREHOLDERS EXPANDING WITH EXCELLENCE\n\n**BELLAGIO** underwent a significant expansion during 2004 resulting in the opening of the Spa Tower and several important new amenities at this AAA Five Diamond property. Bellagio remains Las Vegas' first and only hotel-casino to receive this prestigious recognition. These new additions add dimension and depth to the world-famous experience awaiting guests at Bellagio.\n\n**MGM GRAND LAS VEGAS** completed a transformation, begun in 2003, of its food and beverage and entertainment offerings. MGM Grand is one of the must-see attractions of Las Vegas, with Cirque du Soleil's newest production, *KA`* TM, and several of the Strip's finest restaurants and hottest nightspots. **18.0%**\n\n**TI**'s transformation was no less extensive, as the property's management team conceived and implemented a program to enliven the property with new restaurants and nightlife.\n\n**THE MIRAGE** was the site of a revolution in Las Vegas' history as the venerable buffet was given new life as a top dining establishment, Cravings. 
Others may follow this lead, but The Mirage was the first property to breathe new life into what remained of the last bastion of \"old\" Las Vegas.\n\n2004 Revenue Mix Casino\n\n**SKYLOFTS** MGM Grand A private sanctuary of sleek, elegant two-story accommodations, offering discerning guests the quintessential loft environment - harmonizing design, décor, ambiance and unparalleled vistas.\n\n- Rooms Food & Beverage Entertainment, Retail,\n- & Other\n\n**BELLAGIO SPA** Unique design elements, combined with an international array of innovative treatments and specially trained therapists, provide the ultimate indulgent experience.\n\nThese investments in your company's future paid dividends even before the year was out. We established a new record for net revenues posting $4.2 billion, a 10% increase over 2003.\n\nYour company's resorts produced record EBITDA of $1.46 billion, an increase of 23% over 2003, while operating income was $951 million, an increase of 36%, with record results at Bellagio, MGM Grand Las Vegas and Beau Rivage.\n\n#### Defining Momentum in the Community\n\nI've spent 27 years in this profession and the incredible generosity of our employees never ceases to amaze me. Shortly after the merger with Mirage Resorts in 2000, we established the Voice Foundation. This allows employees to express themselves in the communities we serve by providing them a mechanism to raise monies for worthy causes. It's their money and they decide where it goes. Your company provides the marketing and administrative support.\n\nIn each year since we established the program, employees have given record amounts to support a\n\n**KÀ** The most spectacular production ever, by a troupe renowned for its pageantry. Cirque du Soleil's *KÀ* debuted at a new theatre at MGM Grand in the fourth quarter of 2004.\n\nWhat exactly is a defining moment? 
Try a multi-billion dollar project centered in the heart of Las Vegas.", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "### **NOTE 14 — PROPERTY TRANSACTIONS, NET**\n\nProperty transactions, net consisted of the following:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n| --- | --- | --- | --- |\n| Gain on sale of North Las Vegas land $ | — | $ (36,776) | $ — |\n| Siegfried & Roy theatre write-down – The Mirage | — | 1,408 | — |\n| Storm damage – Beau Rivage | — | — | 7,824 |\n| Write-off of Detroit development costs | — | — | 4,754 |\n| Impairment of assets to be disposed of | 473 | 5,764 | 2,134 |\n| Demolition costs | 7,057 | 6,614 | — |\n| Other net losses on asset sales or disposals | 1,135 | 4,049 | — |\n| $ | 8,665 | $ (18,941) | $ 14,712 |\n\nIn 2004, there were no material unusual property transactions. In 2003 the Company sold 315 acres of land in North Las Vegas, Nevada near Shadow Creek for approximately $55 million, which resulted in a pretax gain of approximately $37 million. Also in 2003, the Company recorded write-downs and impairments of assets abandoned or replaced with new construction, primarily at MGM Grand Las Vegas in preparation for new restaurants and the new theatre. Prior to 2003, the Company classified gains and losses on routine asset sales or disposals as a non-operating item at some resorts and as an operating item at other resorts. Management believes the preferable presentation of these items is as an element of operating income. Prior period statements have not been reclassified as such transactions were not material in the prior periods. Until 2003, demolition costs were typically capitalized as part of new construction. The Company began expensing demolition costs on major construction projects as incurred on January 1, 2003, and is accounting for this change in policy prospectively. Demolition costs were not material in prior periods. 
Demolition costs in 2004 and 2003 relate primarily to preparation for the Bellagio standard room remodel, Bellagio expansion and new theatre at MGM Grand Las Vegas.\n\nIn 2002, Tropical Storm Isidore caused property damage at Beau Rivage totaling $8 million, including clean-up costs. The amount of the write-down for damaged assets was determined based on the net book value of the assets and engineering estimates. In connection with the revised development agreement in Detroit, the Company wrote off $5 million, which was the net book value of previously incurred development costs associated with the riverfront permanent casino site ($9 million), offset by previously accrued obligations no longer required under the revised development agreement ($4 million). Also in 2002, the Company recorded write-downs and impairments of assets abandoned or replaced with new construction.\n\n#### **NOTE 15 — RELATED PARTY TRANSACTIONS**\n\nThe Company's related party transactions consisted of the following revenues (expenses):\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n| --- | --- | --- | --- |\n| Hotel and other revenue from related parties $ | 416 | $ 871 | $ 764 |\n| License fees to entities under common ownership . . . | (1,000) | (1,000) | (1,000) |\n| Professional fees to directors or firms | | | |\n| affiliated with directors | (4,084) | (1,551) | (1,815) |\n| Other related party expenses | (62) | (468) | (224) |\n| | $ (4,730) | $ (2,148) | $ (2,275) |\n\nAt December 31, 2004, the Company owed $2 million for legal fees to a firm affiliated with one of the Company's directors. The Company also engaged in transactions with its unconsolidated affiliates. In each of 2004 and 2003, the Company paid Monte Carlo $4 million as a result of closing the tram between Bellagio and Monte Carlo in preparation for the Bellagio expansion. The Company leases two acres of land to Borgata and received $1 million in each of 2004, 2003 and 2002 under this lease. 
Borgata is required to pay for a portion of the masterplan improvements at Renaissance Pointe, and the Company is responsible for environmental cleanup costs incurred by Borgata. The net amount reimbursed to the Company under these arrangements for the years ended December 31, 2004, 2003 and 2002 was $1 million, $10 million and $8 million, respectively.", - "page_start": 74, - "page_end": 74, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "#### **NOTE 1 — ORGANIZATION**\n\nMGM MIRAGE (the \"Company\"), formerly MGM Grand, Inc., is a Delaware corporation, incorporated on January 29, 1986. As of December 31, 2004 approximately 58% of the outstanding shares of the Company's common stock were owned by Tracinda Corporation, a Nevada corporation wholly owned by Kirk Kerkorian. MGM MIRAGE acts largely as a holding company and, through wholly-owned subsidiaries, owns and/or operates casino resorts.\n\nThe Company owns and operates the following casino resorts on the Las Vegas Strip in Las Vegas, Nevada: Bellagio, MGM Grand Las Vegas, The Mirage, Treasure Island (\"TI\"), New York-New York and the Boardwalk Hotel and Casino. The Company owns a 50% interest in the joint venture that owns and operates the Monte Carlo Resort & Casino, also located on the Las Vegas Strip.\n\nThe Company owns three resorts in Primm, Nevada at the California/Nevada state line – Whiskey Pete's, Buffalo Bill's and the Primm Valley Resort – as well as two championship golf courses located near the resorts. The Company also owns Shadow Creek, an exclusive world-class golf course located approximately ten miles north of its Las Vegas Strip resorts.\n\nThe Company, through its wholly owned subsidiary, MGM Grand Detroit, Inc., and its local partners formed MGM Grand Detroit, LLC, to develop a hotel, casino and entertainment complex in Detroit, Michigan. MGM Grand Detroit, LLC operates a casino in an interim facility in downtown Detroit. 
See Note 10 for discussion of the revised development agreement with the City of Detroit and plans for a permanent casino resort.\n\nThe Company owns and operates Beau Rivage, a beachfront resort located in Biloxi, Mississippi. The Company also owns a 50% interest in a limited liability company that owns Borgata, a casino resort at Renaissance Pointe, located in the Marina area\n\nof Atlantic City, New Jersey. Boyd Gaming Corporation owns the other 50% of Borgata and also operates the resort. Borgata opened in July 2003. The Company owns approximately 95 developable acres adjacent to Borgata, a portion of which consists of common roads, landscaping and master plan improvements which the Company designed and developed as required under the agreement with Boyd.\n\nUntil July 2004, the Company owned and operated MGM Grand Australia and until January 2004, the Company owned and operated the Golden Nugget Las Vegas in downtown Las Vegas and the Golden Nugget Laughlin in Laughlin, Nevada (the \"Golden Nugget Subsidiaries\"). Until June 2003, the Company operated PLAYMGMMIRAGE.com, the Company's online gaming website based in the Isle of Man. See Note 3 for further information regarding these discontinued operations. In the second quarter of 2002, the Company received proceeds of $11 million upon termination of management agreements covering four casinos in the Republic of South Africa. Prior to the termination, the Company managed three permanent casinos and one interim casino and received management fees from its partner, Tsogo Sun Gaming & Entertainment. The termination fee was recorded as part of other revenues in the accompanying consolidated statements of income.\n\nThe Company is actively seeking future development opportunities in the United Kingdom. In May 2003, the Company acquired a 25% interest in Metro Casinos Limited, a United Kingdom gaming company which operates a casino in Bristol. 
See Note 10 for discussion of other potential developments in the United Kingdom.\n\nIn June 2004, the Company entered into a joint venture agreement to develop, build and operate a hotel-casino resort in Macau S.A.R. The agreement is subject to, among other things, the approval of the government of Macau S.A.R., and other regulatory approvals, as well as the entry into a subconcession agreement with the holder of one of the existing concessions.", - "page_start": 55, - "page_end": 55, - "source_file": "NYSE_MGM_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "00-80T-80.pdf", - "query": "What possess all naval aviators ?", - "target_page": 5, - "target_passage": "All Naval Aviators possess a natural interest in the basic aerodynamic factors which affect the performance of all aircraft. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# PREFACE\n\nThe purpose of this textbook is to present the elements of applied aerodynamics and aeronautical engineering which relate directly to the problems of flying operations. All Naval Aviators possess a natural interest in the basic aerodynamic factors which affect the performance of all aircraft. Due .to the increasing complexity of modern aircraft, this natural interest must be applied to develop a sound understanding of basic engineering principles and an appreciation of some of the more advanced problems of aerodynamics and engineering. The safety and effectiveness of flying operations will depend greatly on the understanding and appreciation of how and why an airplane flies. The principles of aerodynamics will provide the foundations for developing exacting and precise flying techniques and operational procedures.\n\nThe content of this textbook has been arranged to provide as complete as possible a reference for all phases of flying in Naval Aviation. 
Hence, the text material is applicable to the problems of flight training, transition training, and general flying operations. The manner of presentation throughout the text has been designed to provide the elements of both theory and application and will allow either directed or unassisted study. As a result, the text material'will be applicable to supplement formal class Iectures and briefings and provide reading material as a background for training and flying operations.\n\nMuch of the specialized mathematical detail of aerodynamics has been omitted wherever it was considered unnecessary in the field of flying operations. Also, many of the basic assumptions and limitations of certain parts of aerodynamic theory have been omitted for the sake of simplicity and clarity of presentation. In order to contend with these specific shortcomings, the Naval Aviator should rely on the assistance of certain specially qualified individuals within Naval Aviation. For example, graduate aeronautical engineers, graduates of the Test Pilot Training School at the Naval Air Test Center, graduates of the Naval Aviation Safety Officers Course, and technical representatives of the manufacturers are qualified to assist in interpreting and applying the more difficult parts of aerodynamics and aeronautical engineering. To be sure, the specialized qualifications of these individuals should be utilized wherever possible.", - "page_start": 4, - "page_end": 4, - "source_file": "00-80T-80.pdf" - }, - { - "text": "important feature which defines its suitability the design performance of his aircraft. The for specific missions. The principal items of performance section of the flight handbook airplane performance deserve detailed consid- provides the specific information regarding the eration in order to better understand and capabilities and limitations of each airplane. appreciate the capabilities of each airplane. 
Knowledge of the various items of airplane performance will provide the Naval Aviator rive operation of his aircraft. with a more complete appreciation of the\n\nThe performance of an aircraft is. the most operating limitations and insight to obtain Every Naval Aviator must rely upon these handbook data as the guide to safe and effec-", - "page_start": 112, - "page_end": 112, - "source_file": "00-80T-80.pdf" - }, - { - "text": "#### NAVWEPS OD-8OT-80 APPLICATION OF AERODYNAMICS TO SPECIFIC PROBLEMS OF FLYING\n\n# Chapter 6\n\n# APPLICATION OF AERODYNAMICS TO SPECIFBC PROW OF FLYING\n\nWhile the previous chapters have presented the detailed parts of the general field of aerodynamics, there remain various problems of flying which require the application of principles from many parts of aerodynamics. The application of aerodynamics to these various problems of flying will assist the Naval Aviator in understanding these problems and developing good flying techniques.\n\n#### PRIMARY CONTROL OF AIRSPEED AND ALTITUDE\n\nFor the conditions of steady flight, the airplane must be in equilibrium. Equilibrium will be achieved when there is no unbalance of force'or moment acting on the airplane. If it is assumed that the airplane is trimmed so that no unbalance of pitching, yawing, or rolling moments exists, the principal concern is for", - "page_start": 366, - "page_end": 366, - "source_file": "00-80T-80.pdf" - }, - { - "text": "# **AERODYNAMICS FOR NAVAL AVIATORS**\n\n**BY** \n\n**H. H. HURT, JR. UNIVERSITY OF SOUTHERN CALIFORNIA** \n\nDISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. 
DESTRUCTION NOTICE - For unclassified, limited documents, destroy by any method that will prevent disclosure of contents or reconstruction of the document.\n\n**PUBLISHED BY DIRECTION OF COMMANDER, NAVAL AIR SYSTEMS COMMAND** \n\n/3", - "page_start": 0, - "page_end": 0, - "source_file": "00-80T-80.pdf" - }, - { - "text": "# Chapter 1 BASIC AERODYNAMKS\n\nIn order to understand the characteristics of his aircraft and develop precision flying techniques, the Naval Aviator must be familiar with the fundamentals of aerodynamics. There are certain physical laws which describe the behavior of airflow and define the various aerodynamic forces and moments acting on a surface. These principles of aerodynamics provide the foundations for good, precise flying techniques.\n\n#### WING AND AIRFOIL FORCES\n\n#### PROPERTIES OF THE ATMOSPHERE\n\nThe aerodynamic forces and moments acting on a surface are due in great part to the properties of the air mass in which the surface is operating.~ The composition, of the earth's atmosphere by volume is approximately 78 percent. nitrogen, 21 percent oxygen, and 1", - "page_start": 18, - "page_end": 18, - "source_file": "00-80T-80.pdf" - }, - { - "text": "# Chapter 5\n\n# OPERATING STRENGTH LIMITATIONS\n\nThe weight of the structural components of an aircraft is an extremely important factor in the development of an efficient aircraft configuration. In no other field of mechanical design is there such necessary importance assigned to structural weight. The efficient aircraft and powerplant structure is the zenith of highly reined rknimum weight design. in an aircraft.\n\norder to obtain the required service life from his aircraft, the Naval Aviator must undetstand, appreciate, and observe the operating strength limitations. 
Failure to do so will incur excessive maintenance costs and a high incidence of failure during the service life of", - "page_start": 342, - "page_end": 342, - "source_file": "00-80T-80.pdf" - }, - { - "text": "| loo | l.lm | 1.30 | 20.00 |\n| --- | --- | --- | --- |\n| 110 | ,826 | 1.24 | 15.P |\n| 17.0 | ,694 | 1.04 | 12.7' |\n| lY) | .444 | .61 | 8.20 |\n| 200 | 230 | .38 | 4.6' |\n| MO | ,111 | .I7 | 2.10 |\n| 4&l | .c453 | .o!J | 1.10 |\n| 30.7 | ,040 | .06 | .T= |\n| 600 | .028 | .04 | .5O |\n\nNote that for the conditions of steady flight, each airspeed requites a specific angle of attack and lift coefficient. This fact provides a fundamental concept of flying technique: Angle of attack is tbs primary Control of airspeed in steady flight. Of course, the control stick or wheel allows the pilot to control the angle of attack and, thus, control the airspeed in steady flight. In the same sense, the throttle controls the output of the powerplant and allows the pilot to control rate of climb and descent at various airspeeds.\n\nThe teal believers of these concepts ate professional instrument pilots, LSO's, and glider pilots.. The glider pilot (or flameout enthusiast) has no recourse but to control airspeed by angle of attack and accept whatever rate of descent is incurred at the various airspeeds. The LSO must become quite proficient at judging the flight path and angle of attack of the airplane in the pattern. The more complete visual reference field available to the LSO allows him to judge the angle of attack of the airplane mote accurately than the pilot. When the airplane approaches the LSO, the precise judgment of airspeed is by the angle of attack rather than the rate of closure. If the LSO sees the airplane on the desired flight path but with too low an angle of attack, the airspeed is too high; if the angle of attack is too high, the airspeed is too low and the aitplane is approaching the stall. 
The mirror landing system coupled with an angle of attack indicator is an obvious refinement. The mittot indicates the desired flight path and the angle of attack indicator allows precision control of the airspeed. The accomplished insttument pilot is the devotee of \"attitude\" flying technique-his creed being \"attitude plus power equals performance.\" During a GCA approach, the professional instrument pilot controls airspeed with stick (angle of attack) and rate of descent with power adjustment.\n\nManeuvering flight and certain transient conditions of flight tend to complicate the relationship of angle of attack and airspeed. However, the majority of flight and, certainly, the most critical regime of flight (takeoff, approach, and landing), is conducted in essentially steady flight condition.\n\nAIRFOIL LIFT CHARACTERISTICS. Airfoil section properties differ from wing or airplane properties because of the effect of the planform. Actually, the wing may have vatious airfoil sections from root to tip with taper, twist, sweepback and local flow components in a spanwise direction. The resulting aetodynamic properties of the wing are determined by the action of each section along the span and the three-dimensional flow. Airfoil section properties are derived from the basic shape or profile in two-dimensional flow and the force coefficients are given a notation of lower case letters. For example, a wing or airplane lift coefficient is C, while an airfoil section lift coefficient is termed cr. Also, wing angle of attack is Q while section angle of attack is differentiated by the use of 01~. The study of section properties allows an objective consideration of the effects of camber, thickness, etc.\n\nThe lift characteristics of five illustrative airfoil sections are shown in figure 1.12. The section lift coe&icient, c,, is plotted versus section angle of attack, olO, for five standard NACA airfoil profiles. 
One characteristic feature of all airfoil sections is that the slope of the various lift curves is essentially the same. At low lift coefhcients, the section lift coefficient increases approximately 0.1 for each degree increase in angle of attack. For each of the airfoils shown, a S' change in angle of", - "page_start": 44, - "page_end": 44, - "source_file": "00-80T-80.pdf" - }, - { - "text": "# Chapter 3\n\n# HIGH SPEED AERODYNAMICS\n\nDevelopments in aircraft and powerplants have produced high performance airplanes with capabilities for very high speed flight. The study of aerodynamics at these very high flight speeds has many significant differences from the study of classical low speed aerodynamics. Therefore, it is quite necessary that the Naval Aviator be familiar with the nature of high speed airflow and the characteristics of high performance airplane configurations.\n\n# GENERAL CONCEPTS AND SUPERSONIC FLOW PATTERNS\n\n#### NATURE OF COMPRESSIBILITY\n\nAt low flight speeds the study of aerodynamics is greatly simplified by the fact that air may experience relatively small changes in pressure with only negligible changes in density. This airflow is termed incompressible since the air may undergo changes", - "page_start": 218, - "page_end": 218, - "source_file": "00-80T-80.pdf" - }, - { - "text": "The majority of aircraft accidents are due to some type of error of the pilot. This fact has been true in the past and, unfortunately, most probably will be true in the future. Each Naval Aviator should strive to arm himself with knowledge, training, and exacting, professional attitudes and techniques. The fundamentals of aerodynamics as presented in this text will provide the knowledge and background for safe and effective flying operations. The flight handbooks for the aircraft will provide the particular techniques, procedures, and operating data which are necessary for each aircraft. 
Diligent study and continuous training are necessary to develop the professional skills and techniques for successful flying operations.\n\nThe author takes this opportunity to express appreciation to those who have assisted in the preparation of the manuscript. In particular, thanks are due to Mr. J. E. Fairchild for his assistance with the portions dealing with helicopter aerodynamics and roll coupling phenomena. Also, thanks are due to Mr. J. F. Detwiler and Mr. E. Dimitruk for their review of the text material.\n\nHUGH HARRISON HURT, Jr.\n\nAugust 1959 University of Southern California Los Angelesj Cnlif.", - "page_start": 5, - "page_end": 5, - "source_file": "00-80T-80.pdf" - }, - { - "text": "The range sf the reciprocating powered airplane can be augmented by the use of ground effect. When the airplane is close to the ground or water surface the reduction of induced drag increases the maximum lift-drag ratio and causes a corresponding increase in range. Of course, the airplane must be quite close to the surface to obtain a noticeable increase in (L/D),., and range. The difficulty in holding the airplane at the precise altitude without contacting the ground or water will preclude the use of ground effect during ordinary flying operations. The use of ground effect to extend range should be reserved as a final measure in case of emergency. Because of the very detrimental effect of low altitude on the range of the turbojet, ground effect will not be of a particular advantage in an attempt to augment range.\n\nThe most outstanding examples of the use of ground effect are shown in the cases of multiengine airplanes with some engines inoperative. When the power loss is quite severe, the airplane may not be capable of sustaining altitude and will descend. As ground effect is encountered, the reduced power required may allow the airplane to sustain flight at extremely low altitude with the remaining powerplants functioning. 
In ground effect, the reciprocating powered airplane will encounter a greater (L/D),, which occurs at a lower airspeed and power required and the increase in range may be quite important during emergency conditions.\n\n#### INTERFERENCE BETWEEN AIRPLANES IN FLIGHT\n\nDuring formation flying and inflight refueling, airplanes in proximity to one another will produce a mutual interference of the flow patterns and alter the aerodynamic characteristics of each airplane. The principal effects of this interference must be appreciated since certain factors due to the mutual interference may enhance the possibility of a collision.\n\n#### NAVWEPS D&ROT-R0 APPLICATION OF AERODYNAMICS TO SPECIFIC 'PROBLEMS OF FLYING\n\nOne example of interference between airplanes in flight is shown first in figure 6.10 with the effect of lateral separation of two airplanes flying in line abreast. A plane of symmetry would exist halfway between two identical airplanes and would furnish a boundary of flow across which there would be no lateral components of flow. As the two airplane wing tips are in proximity, the effect is to reduce the strength of the tip or trailing vortices and reduce the induced velocities in the vicinity of wing tip. Thus, each airplane will experience a local increase in the lift distribution as the tip vortices are reduced and a rolling moment is developed which tends to roll each airplane away from the other. This disturbance may provide the possibility of collision if other airplanes are in the vicinity and there is delay in control correction or overcontrol. If the wing tips are displaced in a fore-and-aft direction, the same effect exists but generally it is of a lower magnitude.\n\nThe magnitude of the interference effect due to lateral separation of the wing tips depends on the proximity of the wi.ig tips and the extent of induced Pov;. 
This implies that the interference v-r 1 e grealest when the tips are very close AL-L the airplanes are operating at high lift coefficients. An interesting ramification of this effect is that several airplanes in line abreast with the wing tips quite close will experience a reduction in induced drag.\n\nAn indirect form of interference can be encountered from the vortex system created by a preceding airplane along the intended flight path. The vortex sheet rolls up a considerable distance behind an airplane and creates considerable turbulence for any closely following airplane. This wake can prove troublesome if airplanes taking off and landing are not provided adequate separation. The rolled-up vortex sheet will be strongest when the preceding airplanes is large, high gross weight, and operating at high lift coefhcients. At times this turbulence may be falsely attributed to propwash or jetwash.", - "page_start": 400, - "page_end": 400, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "00-80T-80.pdf", - "query": "What is the static pressure of the aire at standard sea level ?", - "target_page": 20, - "target_passage": "At standard sea level conditions the static pressure of the air is 2,116 psf (or 14.7 psi, 29.92 in. Hg, etc.) ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "percent water vapor, argon, carbon dioxide, etc. For the majority of all aerodynamic considerations air is considered as a uniform mixture of these gases. The usual quantities used to define the properties of an air mass are as follows:\n\nSTATIC PRESSURE. The absolute static pressure of the air is a property of primary importance. The static pressure of the air at any altitude results from the mass of air supported above that level. At standard sea level conditions the static pressure of the air is 2,116 psf (or 14.7 psi, 29.92 in. Hg, etc.) 
and at 40,000 feet altitude this static pressure decreases to approximately 19 percent of the sea level value. The shorthand notation for the ambient static pressure is \"p\" and the standard sea level static pressure is given the subscript \"a\" for zero altitude, pa. A more usual reference in aerodynamics and performance is the proportion of the ambient sta~tic pressure and the standard sea level static pressure. This static pressure ratio is assigned the shorthand notation of 8 (delta).\n\nAltitude pressure ratio\n\n \n \n\\begin{tabular}{l l} \\multicolumn{2}{l}{**Ambient static pressure**} \\\\ Standard sea level static pressure \\\\ \\end{tabular} \n \n\nMany items of gas turbine engine performance are directly related to some parameter involving the altitude pressure ratio.\n\nTEMPERATURE. The absolute temperacure of the air is another important property. The ordinary temperature measurement by the Centigrade scale has a/datum at the freezing point of water but absolute zero temperature is obtained at a temperature of -273\" Centigrade. Thus, the standard sea level tcmperature of 15\" C. is an absolute temperature of 288\". This scale of absolute temperature using the Centigrade increments is the Kelvin scale, e.g., o K. The shorthand notation for the ambient air temperature is \"T\" and the standard sea level air temperature of 288' K. is signified by Ta. The more usual reference is,\n\nthe proportion of the ambient air temperature and the standard sea level air temperature. This temperature ratio is assigned the shorthand notation of 0 (theta).\n\nTemperature ratio\n\n| Ambient | air temperature |\n| --- | --- |\n| =Standard | sea level air temperature |\n| @=TITtl | |\n| ,+273 | |\n| 288 | |\n\nMany items of compressibility effects and jet engine performance involve consideration of the temperature ratio.\n\nDENSITY. The density of the air is a property of greatest importance in the study of aerodynamics. 
The density of air is simply the mass of air per~cubic foot of volume and is a direct measure of the quantity of matter in each cubic foot of air. Air at standard sea lcvcl conditions weighs 0.0765 pounds per cubic foot and has a density of 0.002378 slugs per cubic foot. At an altitude of 40,000 feet the air density is approximately 25 percent of the sea level value.\n\nThe shorthand notation used for air density is p (rho) and the standard sea level air density is then pO. In many parts of aerodynamics it is very convenient to consider the proportion of the ambient air density and standard sea level air density. This density ratio is assigned the shorthand notation of c (sigma).\n\ndensity ratio = $\\frac{\\text{ambient air density}}{\\text{standard sea level air density}}$ \n \n$\\sigma=\\rho/\\rho_{0}$\n\nA general gas law defines the relationship of pressure temperature, and density when there is no change of state or heat transfer. Simply stated this would be \"density varies directly with pressure, inversely with temperature.\" Using the properties previously defined,\n\ndensity ratio= Pressure rat'o. temperature rat10", - "page_start": 19, - "page_end": 19, - "source_file": "00-80T-80.pdf" - }, - { - "text": "surface anflow continues to the aft stagnation point where the local velocity is again zero. The important point of this example of aerodynamic flow is existence of the stagnation point. The change in airflow static pressure which takes place at the stagnation point IS equal to the free stream dynamic pressure, q.\n\nThe measurement of free stream dynamic pressure is fundamental to the indication of airspeed. In fact, airspeed indicators are simply pressure gauges which measure dynamic pressure related to various airspeeds. Typical airspeed measuring systems are illustrated in figure 1.5. The pitot head has no internal flow velocity and the pressure in the pitot tube is equal to the total pressure of the airstream. 
The purpose of the static ports is to sense the true static pressure of the free airstream. The total pressure and static pressure lines are attached to a differential pressure gauge and the net pressure indicated is the dynamic\n\npressure, q. The pressure gauge is then calibrated to indicate flight speed in the standard sea level air mass. For example, a dynamic pressure of 305 psf would be realized at a sea level flight speed of 300 knots.\n\nActually there can be many conditions of flight where the airspeed indicator does not truly reflect the actual velocity through the air mass. The corrections that must be applied are many and listed in sequence below:\n\n(1) The indicated airspeed (IAS) is the actual instrument indication for some given flight condition. Factors such as an altitude other than standard sea level, errors of the instrument and errors due to the installation, compressibility, etc. may create great variance between this instrument indication and the actual flight speed.\n\n(2) The calibrated airspeed (CAS) is the result of correcting IAS for errors of the", "page_start": 27, "page_end": 27, "source_file": "00-80T-80.pdf" }, { "text": "If the potential energy is represented by the static pressure, p, the sum of the potential and kinetic energy is the total pressure of the airstream.\n\n$$H=p+\frac{1}{2}\rho V^{2}$$\n\nwhere H = total pressure, psf (sometimes referred to as \"head\" pressure)\n\np = static pressure, psf\n\nρ = density, slugs per cu. ft.\n\nV = velocity, ft./sec.\n\nThis equation is the Bernoulli equation for incompressible flow. It is important to appreciate that the term $\frac{1}{2}\rho V^{2}$ has the units of pressure, psf. 
This term is one of the most important in all aerodynamics and appears so frequently that it is given the name \"dynamic pressure\" and the shorthand notation \"q\".\n\nq = dynamic pressure, psf = $\frac{1}{2}\rho V^{2}$\n\nWith this definition it could be said that the sum of static and dynamic pressure in the flow tube remains constant.\n\nFigure 1.3 illustrates the variation of static, dynamic, and total pressure of air flowing through a closed tube. Note that the total pressure is constant throughout the length and any change in dynamic pressure produces the same magnitude change in static pressure.\n\nThe dynamic pressure of a free airstream is the one common denominator of all aerodynamic forces and moments. Dynamic pressure represents the kinetic energy of the free airstream and is a factor relating the capability for producing changes in static pressure on a surface. As defined, the dynamic pressure varies directly as the density and the square of the velocity. Typical values of dynamic pressure, q, are shown in table 1-1 for various true airspeeds in the standard atmosphere. Notice that the dynamic pressure at some fixed velocity varies directly with the density ratio at any altitude. Also, appreciate the fact that at an altitude of 40,000 feet (where the density ratio, σ, is 0.2462) it is necessary to have a true air velocity twice that at sea level in order to produce the same dynamic pressure.\n\n| True airspeed (ft./sec.) |\n| --- |\n| 169 |\n| 338 |\n| 507 |\n| 616 |\n| 845 |\n| 1,013 |\n\nTABLE 1-1. Effect of Speed and Altitude on Dynamic Pressure\n\nAIRSPEED MEASUREMENT. If a symmetrically shaped object were placed in a moving airstream, the flow pattern typical of figure 1.4 would result. The airstream at the very nose of the object would stagnate and the relative flow velocity at this point would be zero. 
The airflow ahead of the object possesses some certain dynamic pressure and ambient static pressure. At the very nose of the object the local velocity will drop to zero and the airstream dynamic pressure will be converted into an increase in static pressure at the stagnation point. In other words, there will exist a static pressure at the stagnation point which is equal to the airstream total pressure: ambient static pressure plus dynamic pressure.\n\nAround the surface of the object the airflow will divide and the local velocity will increase from zero at the stagnation point to some maximum on the sides of the object. If friction and viscosity effects are neglected, the", "page_start": 26, "page_end": 26, "source_file": "00-80T-80.pdf" }, { "text": "Figure 1.5. Airspeed Measurement\n\ninstrument and errors due to position or location of the installation. The instrument error must be small by design of the equipment and is usually negligible in equipment which is properly maintained and cared for. The position error of the installation must be small in the range of airspeeds involving critical performance conditions. Position errors are most usually confined to the static source in that the actual static pressure sensed at the static port may be different from the free airstream static pressure. When the aircraft is operated through a large range of angles of attack, the static pressure distribution varies quite greatly and it becomes quite difficult to minimize the static source error. In most instances a compensating group of static sources may be combined to reduce the position error. In order to appreciate the magnitude of this problem, at flight speed near 100 knots a\n\n0.05 psi position error is an airspeed error of 10 knots. A typical variation of airspeed system position error is illustrated in figure 1.6.\n\n(3) The equivalent airspeed (EAS) is the result of correcting the CAS for compressibility effects. 
At high flight speeds the stagnation pressure recovered in the pitot tube is not representative of the airstream dynamic pressure due to a magnification by compressibility. Compressibility of the airflow produces a stagnation pressure in the pitot which is greater than if the flow were incompressible. As a result, the airspeed indication is given an erroneous magnification. The standard airspeed indicator is calibrated to read correctly when at standard sea level conditions and thus has a compressibility correction appropriate for these conditions. However, when the aircraft is operating above standard sea level altitude,", "page_start": 28, "page_end": 28, "source_file": "00-80T-80.pdf" }, { "text": "#### NAVWEPS 00-80T-80 BASIC AERODYNAMICS\n\nthe inherent compensation is inadequate and additional correction must be applied. The subtractive corrections that must be applied to CAS depend on pressure altitude and CAS and are shown on figure 1.6 for the subsonic flight range. The equivalent airspeed (EAS) is the flight speed in the standard sea level air mass which would produce the same free stream dynamic pressure as the actual flight condition.\n\n(4) The true airspeed (TAS) results when the EAS is corrected for density altitude. Since the airspeed indicator is calibrated for the dynamic pressures corresponding to airspeeds at standard sea level conditions, variations in air density must be accounted for. To relate EAS and TAS requires consideration that the EAS coupled with standard sea level density produces the same dynamic pressure as the TAS coupled with the actual air density of the flight condition. 
From this reasoning, it can be shown that:\n\n$$(TAS)^{2}\rho=(EAS)^{2}\rho_{0}$$\n\nor, $TAS=EAS\sqrt{\frac{\rho_{0}}{\rho}}$\n\n$$TAS=EAS\frac{1}{\sqrt{\sigma}}$$\n\nwhere TAS = true airspeed\n\nEAS = equivalent airspeed\n\nρ = actual air density\n\nρ0 = standard sea level air density\n\nσ = altitude density ratio, ρ/ρ0\n\nThe result shows that the TAS is a function of EAS and density altitude. Figure 1.6 shows a chart of density altitude as a function of pressure altitude and temperature. Each particular density altitude fixes the proportion between TAS and EAS. The use of a navigation computer requires setting appropriate values of pressure altitude and temperature on the scales which then fixes the proportion between the scales of TAS and EAS (or TAS and CAS when compressibility corrections are applicable).\n\nThus, the airspeed indicator system measures dynamic pressure and will relate true flight velocity when instrument, position, compressibility, and density corrections are applied. These corrections are quite necessary for accurate determination of true airspeed and accurate navigation.\n\nBernoulli's principle and the concepts of static, dynamic, and total pressure are the basis of aerodynamic fundamentals. The pressure distribution caused by the variation of local static and dynamic pressures on a surface is the source of the major aerodynamic forces and moments.\n\n### DEVELOPMENT OF AERODYNAMIC FORCES\n\nThe typical airflow patterns exemplify the relationship of static pressure and velocity defined by Bernoulli. Any object placed in an airstream will have the air impact or stagnate at some point near the leading edge. The pressure at this point of stagnation will be an absolute static pressure equal to the total pressure of the airstream. In other words, the static pressure at the stagnation point will be greater than the atmospheric pressure by the amount of the dynamic pressure of the airstream. 
As the flow divides and proceeds around the object, the increases in local velocity produce decreases in static pressure. This procedure of flow is best illustrated by the flow patterns and pressure distributions of figure 1.7.\n\nSTREAMLINE PATTERN AND PRESSURE DISTRIBUTION. The flow pattern of the cylinder of figure 1.7 is characterized by the streamlines which denote the local flow direction. Velocity distribution is noted by the streamline pattern since the streamlines effect a boundary of flow, and the airflow between the streamlines is similar to flow in a closed tube. When the streamlines contract and are close together, high local velocities exist; when the streamlines expand and are far apart, low local velocities exist. At the", "page_start": 31, "page_end": 31, "source_file": "00-80T-80.pdf" }, { "text": "#### NAVWEPS 00-80T-80 BASIC AERODYNAMICS\n\nthe pressure forces created on an aerodynamic surface can be studied in a simple form which at first neglects the effect of friction and viscosity of the airflow. The most appropriate means of visualizing the effect of airflow and the resulting aerodynamic pressures is to study the fluid flow within a closed tube.\n\nSuppose a stream of air is flowing through the tube shown in figure 1.2. The airflow at station 1 in the tube has a certain velocity, static pressure, and density. As the airstream approaches the constriction at station 2 certain changes must take place. Since the airflow is enclosed within the tube, the mass flow at any point along the tube must be the same and the velocity, pressure, or density must change to accommodate this continuity of flow.\n\nBERNOULLI'S EQUATION. A distinguishing feature of subsonic airflow is that changes in pressure and velocity take place with small and negligible changes in density. For this reason the study of subsonic airflow can be simplified by neglecting the variation of density in the flow and assuming the flow to be incompressible. 
Of course, at high flow speeds which approach the speed of sound, the flow must be considered as compressible and \"compressibility effects\" taken into account. However, if the flow through the tube of figure 1.2 is considered subsonic, the density of the airstream is essentially constant at all stations along the length.\n\nIf the density of the flow remains constant, static pressure and velocity are the variable quantities. As the flow approaches the constriction of station 2 the velocity must increase to maintain the same mass flow. As the velocity increases the static pressure will decrease and the decrease in static pressure which accompanies the increase in velocity can be verified in two ways:\n\n(1) Newton's laws of motion state the requirement of an unbalanced force to produce an acceleration (velocity change). If the airstream experiences an increase in velocity approaching the constriction, there must\n\nbe an unbalance of force to provide the acceleration. Since there is only air within the tube, the unbalance of force is provided by the static pressure at station 1 being greater than the static pressure at the constriction, station 2.\n\n(2) The total energy of the air stream in the tube is unchanged. However, the airstream energy may be in two forms. The airstream may have a potential energy which is related by the static pressure and a kinetic energy by virtue of mass and motion. As the total energy is unchanged, an increase in velocity (kinetic energy) will be accompanied by a decrease in static pressure (potential energy). This situation is analogous to a ball rolling along a smooth surface. As the ball rolls downhill, the potential energy due to position is exchanged for kinetic energy of motion. If friction were negligible, the change of potential energy would equal the change in kinetic energy. 
This is also the case for the airflow within the tube.\n\nThe relationship of static pressure and velocity is maintained throughout the length of the tube. As the flow moves past the constriction toward station 3, the velocity decreases and the static pressure increases.\n\nThe Bernoulli equation for incompressible flow is most readily explained by accounting for the energy of the airflow within the tube. As the airstream has no energy added or subtracted at any point, the sum of the potential and kinetic energy must be constant. The kinetic energy of an object is found by:\n\n$$K.E.=\\frac{1}{2}MV^{2}$$\n\nwhere K.E. = kinetic energy, ft.-lbs.\n\n$$M=\\mathrm{mass,~slugs~}$$\n\nV = velocity, ft./sec.\n\nThe kinetic energy of a cubic foot of air is:\n\n$${\\frac{\\mathrm{K.E.}}{\\mathrm{ft.}^{3}}}=\\frac{1}{2}\\rho V^{2}$$\n\nwhere K.E./ft.³ = kinetic energy per cu. ft., psf\n\nρ = air density, slugs per cu. ft.\n\nV = air velocity, ft./sec.", "page_start": 23, "page_end": 23, "source_file": "00-80T-80.pdf" }, { "text": "This relationship has great application in aerodynamics and is quite fundamental and necessary in certain parts of airplane performance.\n\nVISCOSITY. The viscosity of the air is important in scale and friction effects. The coefficient of absolute viscosity is the proportion between the shearing stress and velocity gradient for a fluid flow. The viscosity of gases is unusual in that the viscosity is generally a function of temperature alone and an increase in temperature increases the viscosity. The coefficient of absolute viscosity is assigned the shorthand notation μ (mu). Since many parts of aerodynamics involve consideration of viscosity and density, a more usual form of viscosity measure is the proportion of the coefficient of absolute viscosity and density. 
This combination is termed the \"kinematic viscosity\" and is noted by ν (nu).\n\nkinematic viscosity = $\\frac{\\text{coefficient of absolute viscosity}}{\\text{density}}$\n\n$$\\nu=\\mu/\\rho$$\n\nThe kinematic viscosity of air at standard sea level conditions is 0.0001576 square feet per second. At an altitude of 40,000 feet the kinematic viscosity is increased to 0.0005059 square feet per second.\n\nIn order to provide a common denominator for comparison of various aircraft, a standard atmosphere has been adopted. The standard atmosphere actually represents the mean or average properties of the atmosphere. Figure 1.1 illustrates the variation of the most important properties of the air throughout the standard atmosphere. Notice that the lapse rate is constant in the troposphere and the stratosphere begins with the isothermal region.\n\nSince all aircraft performance is compared and evaluated in the environment of the standard atmosphere, all of the aircraft instrumentation is calibrated for the standard atmosphere.\n\nThus, certain corrections must apply to the instrumentation as well as the aircraft performance if the operating conditions do not fit the standard atmosphere. In order to properly account for the nonstandard atmosphere certain terms must be defined. Pressure altitude is the altitude in the standard atmosphere corresponding to a particular pressure. The aircraft altimeter is essentially a sensitive barometer calibrated to indicate altitude in the standard atmosphere. If the altimeter is set for 29.92 in. Hg the altitude indicated is the pressure altitude: the altitude in the standard atmosphere corresponding to the sensed pressure. 
Of course, this indicated pressure altitude may not be the actual height above sea level due to variations in temperature, lapse rate, atmospheric pressure, and possible errors in the sensed pressure.\n\nThe more appropriate term for correlating aerodynamic performance in the nonstandard atmosphere is density altitude: the altitude in the standard atmosphere corresponding to a particular value of air density. The computation of density altitude must certainly involve consideration of pressure (pressure altitude) and temperature. Figure 1.6 illustrates the manner in which pressure altitude and temperature combine to produce a certain density altitude. This chart is quite standard in use and is usually included in the performance section of the flight handbook. Many subject areas of aerodynamics and aircraft performance will emphasize density altitude and temperature as the most important factors requiring consideration.\n\n# BERNOULLI'S PRINCIPLE AND SUBSONIC AIRFLOW\n\nAll of the external aerodynamic forces on a surface are the result of air pressure or air friction. Friction effects are generally confined to a thin layer of air in the immediate vicinity of the surface and friction forces are not the predominating aerodynamic forces. Therefore,", "page_start": 21, "page_end": 21, "source_file": "00-80T-80.pdf" }, { "text": "in pressure without apparent changes in density. Such a condition of airflow is analogous to the flow of water, hydraulic fluid, or any other incompressible fluid. However, at high flight speeds the pressure changes that take place are quite large and significant changes in air density occur. The study of airflow at high speeds must account for these changes in air density and must consider that the air is compressible and that there will be \"compressibility effects.\"\n\nA factor of great importance in the study of high speed airflow is the speed of sound. 
The speed of sound is the rate at which small pressure disturbances will be propagated through the air and this propagation speed is solely a function of air temperature. The accompanying table illustrates the variation of the speed of sound in the standard atmosphere.\n\nTABLE 3-1. Variation of the Speed of Sound with Altitude in the Standard Atmosphere\n\n| °F. | °C. | Knots |\n| --- | --- | --- |\n| 59.0 | 15.0 | 661.7 |\n| 41.1 | 5.1 | 650.3 |\n| 23.3 | -4.8 | 638.6 |\n| 5.5 | -14.7 | 626.7 |\n| -12.3 | -24.6 | 614.6 |\n| -30.2 | -34.5 | 602.2 |\n| -48.0 | -44.4 | 589.6 |\n| -65.8 | -54.3 | 576.6 |\n| -69.7 | -56.5 | 573.8 |\n| -69.7 | -56.5 | 573.8 |\n| -69.7 | -56.5 | 573.8 |\n\nAs an object moves through the air mass, velocity and pressure changes occur which create pressure disturbances in the airflow surrounding the object. Of course, these pressure disturbances are propagated through the air at the speed of sound. If the object is travelling at low speed the pressure disturbances are propagated ahead of the object and the airflow immediately ahead of the object is influenced by the pressure field on the object. Actually, these pressure disturbances are transmitted in all directions and extend indefinitely in all\n\ndirections. Evidence of this \"pressure warning\" is seen in the typical subsonic flow pattern of figure 3.1 where there is upwash and flow direction change well ahead of the leading edge. If the object is travelling at some speed above the speed of sound the airflow ahead of the object will not be influenced by the pressure field on the object since pressure disturbances cannot be propagated ahead of the object. Thus, as the flight speed nears the speed of sound a compression wave will form at the leading edge and all changes in velocity and pressure will take place quite sharply and suddenly. 
The airflow ahead of the object is not influenced until the air particles are suddenly forced out of the way by the concentrated pressure wave set up by the object. Evidence of this phenomenon is seen in the typical supersonic flow pattern of figure 3.1.\n\nThe analogy of surface waves on the water may help clarify these phenomena. Since a surface wave is simply the propagation of a pressure disturbance, a ship moving at a speed much less than the wave speed will not form a \"bow wave.\" As the ship's speed nears the wave propagation speed the bow wave will form and become stronger as speed is increased beyond the wave speed.\n\nAt this point it should become apparent that all compressibility effects depend upon the relationship of airspeed to the speed of sound. The term used to describe this relationship is the Mach number, M, and this term is the ratio of the true airspeed to the speed of sound.\n\n$$M=V/a$$\n\nwhere\n\nM = Mach number\n\nV = true airspeed, knots\n\na = speed of sound, knots\n\n$$a=\\mathbf{a}_{0}{\\sqrt{\\theta}}$$\n\na0 = speed of sound at standard sea level conditions, 661 knots\n\nθ = temperature ratio\n\n$$\\theta=T/T_{0}$$", "page_start": 219, "page_end": 219, "source_file": "00-80T-80.pdf" }, { "text": "#### NAVWEPS 00-80T-80 BASIC AERODYNAMICS\n\nforward stagnation point the local velocity is zero and the maximum positive pressure results. As the flow proceeds from the forward stagnation point the velocity increases as shown by the change in streamlines. The local velocities reach a maximum at the upper and lower extremities and a peak suction pressure is produced at these points on the cylinder. (NOTE: Positive pressures are pressures above atmospheric and negative or suction pressures are less than atmospheric.) As the flow continues aft from the peak suction pressure, the diverging streamlines indicate decreasing local velocities and increasing local pressures. 
If friction and compressibility effects are not considered, the velocity would decrease to zero at the aft stagnation point and the full stagnation pressure would be recovered. The pressure distribution for the cylinder in perfect fluid flow would be symmetrical and no net force (lift or drag) would result. Of course, the relationship between static pressure and velocity along the surface is defined by Bernoulli's equation.\n\nThe flow pattern for the cylinder in an actual fluid demonstrates the effect of friction or viscosity. The viscosity of air produces a thin layer of retarded flow immediately adjacent to the surface. The energy expended in this \"boundary layer\" can alter the pressure distribution and destroy the symmetry of the pattern. The force unbalance caused by the change in pressure distribution creates a drag force which is in addition to the drag due to skin friction.\n\nThe streamline pattern for the symmetrical airfoil of figure 1.7 again provides the basis for the velocity and pressure distribution. At the leading edge the streamlines are widely diverged in the vicinity of the positive pressures. The maximum local velocities and suction (or negative) pressures exist where the streamlines are the closest together. One notable difference between the flow on the cylinder and the airfoil is that the maximum velocity and minimum pressure points on the airfoil do not necessarily occur at the point of maximum thickness. However, a similarity does exist in that the minimum pressure points correspond to the points where the streamlines are closest together and this condition exists when the streamlines are forced to the greatest curvature.\n\nGENERATION OF LIFT. An important phenomenon associated with the production of lift by an airfoil is the \"circulation\" imparted to the airstream. The best practical illustration of this phenomenon is shown in figure 1.8 by the streamlines and pressure distributions existing on cylinders in an airstream. 
The cylinder without circulation has a symmetrical streamline pattern and a pressure distribution which creates no net lift. If the cylinder is given a clockwise rotation and induces a rotational or circulatory flow, a distinct change takes place in the streamline pattern and pressure distribution. The velocities due to the vortex of circulatory flow cause increased local velocity on the upper surface of the cylinder and decreased local velocity on the lower surface of the cylinder. Also, the circulatory flow produces an upwash immediately ahead and downwash immediately behind the cylinder and both fore and aft stagnation points are lowered.\n\nThe effect of the addition of circulatory flow is appreciated by the change in the pressure distribution on the cylinder. The increased local velocity on the upper surface causes an increase in upper surface suction while the decreased local velocity on the lower surface causes a decrease in lower surface suction. As a result, the cylinder with circulation will produce a net lift. This mechanically induced circulation, called the Magnus effect, illustrates the relationship between circulation and lift and is important to golfers, baseball and tennis players as well as pilots and aerodynamicists. The curvature of the flight path of a golf ball or baseball requires an unbalance of force which is created by rotation of the ball. The pitcher that can accurately control a powerful", "page_start": 33, "page_end": 33, "source_file": "00-80T-80.pdf" }, { "text": "by the local flow changes at these surfaces. Of course, the strength of the shock waves and the pressure jump through the wave decreases rapidly with distance away from the airplane. 
While the pressure jump through the shock wave decreases with distance away from the surface, it does not disappear completely and a measurable, but very small, pressure will exist at a considerable distance from the airplane.\n\nSound is transmitted through the air as a series of very weak pressure waves. In the ordinary range of audible frequencies, the threshold of audibility for intensity of sound is for pressure waves with an approximate R.M.S. value of pressure as low as 0.0000002 psf. Within this same range of frequencies, the threshold of feeling for intensity of sound is for pressure values with an approximate R.M.S. value of pressure of 0.2 to 0.5 psf. Continuous sound at the threshold of feeling is of the intensity to cause painful hearing. Thus, the shock waves generated by an airplane in supersonic flight are capable of creating audible sound and, in the extreme case, can be of a magnitude to cause considerable disturbance. Pressure jumps of 0.02 to 0.3 psf have been recorded during the passage of an airplane in supersonic flight. As a result, the sonic \"booms\" are the pressure waves generated by the shock waves formed on the airplane in supersonic flight.\n\nThe source of sonic booms is illustrated by figure 6.14. When the airplane is in level supersonic flight, a pattern of shock waves is developed which is much dependent on the configuration and flight Mach number of the airplane. At a considerable distance from the airplane, these shock waves tend to combine along two common fronts and extend away from the airplane in a sort of conical surface. The waves decrease in strength with distance away from the airplane but the pressure jump remains of an audible intensity for a considerable distance from the airplane. If the wave extends to the ground or water surface, it will
Of course, if this attached wave form is carried across a populated area at the surface, the population will experience the pressure waves as a sonic boom.\n\nThe intensity of the boom will depend on many different factors. The characteristics of the airplane generating the shock waves will be of some importance since a large, high drag, high gross weight airplane in flight at high Mach number will be transferring a greater energy to the air mass. Flight altitude will have an important bearing on boom intensity since at high altitude the pressure jump across a given wave form is much less. In addition, at high altitude a greater distance exists between the generating source of the pressure disturbance and the ground level and the strength of the wave will have a greater distance in which to decay. The ordinary variation of temperature and density plus the natural turbulence of atmosphere will tend to reflect or dissipate the shock wave generated at high altitude. However, in a stable, quiescent atmosphere, the pressure wave from the airplane in high supersonic flight at high altitude may be of an audible magnitude at lateral distances as great as 10 to 30 miles. Thus, supersonic flight over or adjacent to populated areas will produce a sonic boom.\n\nActually, it is not necessary for any airplane to fly supersonic over or adjacent to a populated area to create a sonic boom. This possibility is shown by the second illustration of figure 6.14 where an airplane decelerates to subsonic from a supersonic dive. As the airplane slows to subsonic from supersonic speed, the airplane will release the leading bow and tail waves which formed as the airplane accelerated from subsonic to supersonic speed. 
The release of these shock waves is analogous to the case where a surface ship slows to below the wave propagation speed and releases the bow wave which then travels out ahead of", "page_start": 414, "page_end": 414, "source_file": "00-80T-80.pdf" } ] }, { "references": { "source_file": "00-80T-80.pdf", "query": "What is the phenomenon associated with the production of lift by an airfoil ?", "target_page": 34, "target_passage": "An important phenomenon associated with the production of lift by an airfoil is the “circulation” parted to the airstream. ", "chunk_present": { "presence": false, "index": null } }, "top_chunk": [ { "text": "#### NAVWEPS 00-80T-80 BASIC AERODYNAMICS\n\nrotation will be quite a \"curve ball artist\"; the golfer that cannot control the lateral motion of the club face striking the golf ball will impart an uncontrollable spin and have trouble with a \"hook\" or \"slice.\"\n\nWhile a rotating cylinder can produce a net lift from the circulatory flow, the method is relatively inefficient and only serves to point out the relationship between lift and circulation. An airfoil is capable of producing lift with relatively high efficiency and the process is illustrated in figure 1.8. If a symmetrical airfoil is placed at zero angle of attack to the airstream, the streamline pattern and pressure distribution give evidence of zero lift. However, if the airfoil is given a positive angle of attack, changes occur in the streamline pattern and pressure distribution similar to changes caused by the addition of circulation to the cylinder. The positive angle of attack causes increased velocity on the upper surface with an increase in upper surface suction while the decreased velocity on the lower surface causes a decrease in lower surface suction. Also, upwash is generated ahead of the airfoil, the forward stagnation point moves under the leading edge, and a downwash is evident aft of the airfoil. 
The pressure distribution 0\" the airfoil now provides a net force perpendicular to the airstream-lift.\n\nThe generation of lift by an airfoil is dependent upon the airfoil being able to create circulation in the airstream and develop the lifting, pressure distribution on the surface. In all cases, the generated lift will be the net force caused by the distribution of pressure over the upper and lower surfaces of the airfoil. At low angles of attack, suction pressures usually will exist on both upper and lower surfaces. but the upper surface suction must be greater for positive lift. At high angles of attack near that for maximum lift, a positive pressure will exist on the lower surface but this will account for approximately one-third the net lift.\n\nThe effect of free stream density and velocity is a necessary consideration when studying the development of the various aerodynamic forces. Suppose that a particular shape of airfoil is fixed at a particular angle to the airstream. The relative velocity and pressure distribution will be determined by the shape of the airfoil and the angle to the airstream. The effect of varying the airfoil size, air density and airspeed is shown in figure 1.9. If the same airfoil shape is placed at the same angle to an airstream with twice as great a dynamic pressure the magnitude of the pressure distribution will be twice as great but the r&rive shape of the pressure distribution will be the same. With twice as great a pressure existing over the surface, all aerodynamic forces and moments will ~double. If a half-size airfoil ib placed at the same angle to the original airstream, the magnitude of the pressure distribution is the same as the origina! airfoi! and again the relative shape of the pressure distribution is identical. The same pressure acting on the half-size surface would reduce all aerodynamic forces to one-half that of the original. 
This similarity of flow patterns means that the stagnation point occurs at the same place, the peak suction pressure occurs at the same place, and the actual magnitude of the aerodynamic forces and moments depends upon the airstream dynamic pressure and the surface area. This concept is extremely important when attempting to separate and analyze the most important factors affecting the development of aerodynamic forces.\n\nAIRFOIL TERMINOLOGY. Since the shape of an airfoil and the inclination to the airstream are so important in determining the pressure distribution, it is necessary to properly define the airfoil terminology. Figure 1.10 shows a typical airfoil and illustrates the various items of airfoil terminology\n\n(1) The chord line is a straight line connecting the leading and trailing edges of the airfoil.", - "page_start": 37, - "page_end": 37, - "source_file": "00-80T-80.pdf" - }, - { - "text": "and high power, the dynamic pressure in the shaded area can be much greater than the free stream and this causes considerably greater lift than at zero thrust. At high power conditions the induced flow also causes an effect similar to boundary layer control and increases the maximum lift angle of attack. The typical four-engine propeller driven airplane may have 60 to 80 percent of the wing area affected by the induced flow and power effects on stall speeds may be considerable. Also, the lift of the airplane at a given angle of attack and airspeed will be greatly affected. Suppose the airplane shown is in the process of landing flare from a power-on approach. If there is a sharp, sudden reduction of power, the airplane may drop suddenly because of the reduced lift.\n\nThe typical jet aircraft does not experience the induced flow velocities encountered in propeller driven airplanes, thus the only significant factor is the vertical component of thrust. 
Since this vertical component contributes to supporting the airplane, less aerodynamic lift is required to hold the airplane in flight. If the thrust is small and the thrust inclination is slight at maximum lift angle, only negligible changes in stall speed will result. On the other hand, if the thrust is very great and is given a large inclination at maximum lift angle, the effect on stall speed can be very large. One important relationship remains-since there is very little induced flow from the jet, the angle of attack at stall is essentially the same power-on or power-off.\n\n#### DEVELOPMENT OF AERODYNAMIC PITCHING MOMENTS\n\nThe distribution of pressure over a surface is the ,source of the aerodynamic moments as well as the aerodynamic forces. A typical example of this fact is the pressure distribution acting on the cambered airfoil of figure 1.21. The upper surface has pressures distributed which produce the upper surface lift; the lower surface has pressures distributed which produce the lower surface lift. Of course, the\n\nnet lift produced by the airfoil is difference between the lifts on the upper and lower surfaces. The point along the chord where the distributed lift is effectively concentrated is termed the \"center of pressure, c.p.\" The center of pressure is essentially the \"center of gravity\" of the distributed lift pressure and the location of the c.p. is a function of camber and section lift coe&cient\n\nAnother aerodynamic reference point is the \"aerodynamic center, d.e.\" The aerodynamic center is defmed as the point along the chord where all changes in lift effectively take place. To visualize the existence of such a point, notice the change in pressure distribution with angle of attack for the symmetrical airfoil of figure 1.21. When at zero lift, the upper and lower surface lifts are equal and located at the same point. With an increase in angle of attack, the upper surface lift increases while the lower surface lift decreases. 
The change ,of lift has taken place with no change in the center of pressure-a characteristic of symmetrical airfoils.\n\nNext, consider the cambered airfoil of figure 1.21 at zero lift. To produce zero lift, the upper and lower surface lifts must be equal. One difference noted from the symmetrical airfoil is that the upper and lower surface lifts are not opposite one another. While no net lift exists on the airfoil, the couple produced by the upper and lower surface lifts creates a nose down moment. As the angle of attack is increased, the upper surface lift increases while the lower surface lift decreases. While a change in lift has taken place, no change in moment takes place about the point where the lift change occurs. Since the moment about the aerodynamic center is the product of a force (lift at the c.P.) and a lever arm (distance from c.9. to a.~.), an increase in lift moves the center of pressure toward the aerodynamic center.\n\nIt should be noted that the symmetrical airfoil at zero lift has no pitching moment about the aerodynamic center because the upper and", - "page_start": 64, - "page_end": 64, - "source_file": "00-80T-80.pdf" - }, - { - "text": "However, if high speed flight is the primary consideration, the airfoil must be chosen to have. the highest practical critical Mach number.\n\nCritical Mach number has been defined as the flight Mach number which produces first evidence of local sonic flow. Thus, the airfoil shape and lift coe&ient-which determine the pressure and velocity distribution-will have a profound effect on critical Mach number. Conventional, low speed airfoil shapes have relatively poor compressibility characteristics because of the high local velocities near the leading edge. These high local velocities are inevitable if both the maximum thickness and camber are well forward on the chord. An improvement of the compressibility characteristics can be obtained by moving the points of maximum camber and thickness aft on the chord. 
This would distribute the pressure and velocity more evenly along the chord and produce a lower peak velocity for the same lift coefficient. Fortunately, the airfoil shape to provide extensive lamiaar flow and low profile drag in low speed, subsonic flight will provide a pressure distribution which is favorable for high speed flight. Figure 3.12 illustrates the pressure distributions and variation of critical Mach number with lift coefficient for a conventional low speed airfoil and a high speed section.\n\nIn order to obtain a high critical Mach number from an airfoil at some low lift coefficient the section must have:\n\n(u) Low thickness ratio. The point of maximum thickness should be aft to smooth the pressure distribution.\n\n(6) Low camber. The mean camber line should be shaped to help minimize the local velocity peaks.\n\nIn addition, the higher the required lift coefficient the lower the critical Mach number and more camber is required of the airfoil. If supersonic flight is a possibility the thickness ratio and leading edge radius must be small to decrease wave drag.\n\nFigure 3.13 shows the flow patterns for two basic supersonic airfoil sections and provides the approximate equations for lift,drag, and lift curve slope. Since the wave drag is the only factor of difference between -the two airfoil sections, notice the configuration factors which affect the wave drag. For the same thickness ratio, the circular arc airfoil would have a larger wedge angle formed between the upper and lower surfaces at the leading edge. At the same flight Mach number the larger angle at the leading edge would form the stronger shock wave at the nose and cause a greater pressure change on the circular arc airfoil. This same principle applies when investigating the effect of airfoil thickness. 
Notice that the wave drag coefficients for both airfoils vary as the SQUARE of the thickness ratio, e.g., if the thickness ratio were doubled, the wave drag coefhcient would he four times as great. If the thickness were increased, the airflow at the leading edge will experience a greater change in direction and a stronger shock wave will be formed. This powerful variation of wave drag with thickness ratio necessitates the use of very thin airfoils with sharp leading edges for supersonic flight. An additional consideration is that thin airfoil sections favor the use of low aspect ratios and high taper to obtain lightweight structures and preserve stiffness and rigidity.\n\nThe parameter JMz-l appears in the denominator of each of the equations for the aerodynamic coefficients and indicates a decrease in each of these coefficients with an increase in Mach number. Essentially, this means that any aerodynamic surface becomes less sensitive to changes in angle of attack at higher Mach numbers. The decrease in lift curve slope with Mach number has tremendous implications in the stability and control of high speed aircraft. The vertical tail becomes less sensitive to angles of sideslip and the directional stability of the aircraft will deteriorate with Mach number. The horizontal tail of the airplane experiences the same", - "page_start": 240, - "page_end": 240, - "source_file": "00-80T-80.pdf" - }, - { - "text": "attack would produce an approximate 0.5 change in lift coefficient. Evidently, lift,~curve slope is not a factor important in the selection of an airfoil.\n\nAn important lift property affected by the airfoil shape is the section maximum lift coefficient, ci-. The effect of airfoil shape on ci- can be appreciated by comparison of the lift curves for the five airfoils of figure 1.12. The NACA airfoils 63X06,63-009, and 63i-012 ate symmetrical sections of a basic thickness distribution but maximum thicknesses of 6, 9, and 12 percent respectively. 
The effect of thickness on ~1% is obvious from an inspection of these curves :\n\n| NACA 63-005 .~. | :. | Cl.82 | 9.0° |\n| --- | --- | --- | --- |\n| NACA 6Mo9. | | 1.10 | 10.5~ |\n| NACA 63'-01?,. | | 1.40 | 13.80 |\n\nThe 12-percent section has a cr- approximately 70 percent greater than the 6-percent thick section. In addition, the thicker airfoils have greater benefit from the use of various high lift devices.\n\nThe effect of camber is illustrated by the lift curves of the NACA 4412 and 631-412 sections. The NACA 4412 section is a 12 percent thick airfoil which has 4 percent maximum camber located at 40 percent of the chord. The NACA 63i-412 airfoil has the same thickness and thickness distribution as the 631-012 but camber added to give a \"design\"' lift coefficient (c, for minimum section drag) of 0.4. The lift curves for these two airfoils show that camber has a beneficial e&t on cl-.\n\n| ScCdO\" | %.I | a0 for \"&* |\n| --- | --- | --- |\n| NACA 6h-312 (symmctricd) :. | 1.40 | 13.e |\n| NACA 631-412 Whmd). | 1.73 | IS. z\" |\n\nAn additional effect of camber is the change in zero lift angle. While the symmetrical\n\nsections have zero lift at zero angle of attack, the sections with positive camber have negative angles for zero lift.\n\nThe importance of maximum lift coefficient is obvious. If the maximum lift coefficient is high, the stall speed will be low. However, the high thickness and camber necessary for high section maximum lift coefficients may produce low critical Mach numbers and large twisting moments at high speed. In other words, a high maximum lift coefficient is just one of the many features desired of an airfoil section.\n\nDRAG CHARACTERISTICS. Drag is the net aerodynamic force parallel to the relative wind and its source is the pressure distribution and skin friction on the surface. Large, thick bluff bodies in an airstream show a predominance of form drag due to the unbalanced pressure distribution. 
However, streamlined bodies with smooth contours show a ptedominance of drag due to skin friction. In a fashion similar to other aerodynamic forces, drag forces may be considered in the form of a coefficient which is independent of dynamic pressure and surface area. The basic drag equation is as follows:\n\nwhere\n\nD=GqS\n\nD=drag, lbs. C,= drag coefficient q= dynamic pressure, psf UP =z (V in knots, TAS) S= wing surface area, sq. ft.\n\nThe force of drag is shown as the product of dynamic pressure, surface area, and drag coefficient, C,. The drag coefficient in this equation is similar to any other aerodynamic force coefficient-it is the ratio of drag pressure to dynamic pressure. If the drag coefficient of a conventional airplane were plotted versus angle of attack, the result would be typical of the graph shown in figure 1.13. At low angles of attack the drag coefficient is low and small changes in angle of attack create only slight changes in drag coefficient. At", - "page_start": 46, - "page_end": 46, - "source_file": "00-80T-80.pdf" - }, - { - "text": "basic section. The effect of a fixed slot on the lift characteristics is shown in figure 1.18.\n\n.UO~J ana' &Z~J can produce significant increases in cl, but the increased angle of attack for maximum lift can be a disadvantage. If slots were the only high lift device on the wing, the high take off and landing angles of attack may complicate the design of the landing gear. For this reason slots or slats are usually used in conjunction with flaps since the flaps provide reduction in the maximum lift angle of attack. The use of a slot has two important advantages: there is only a negligible change in the pitching moment due to the slot and no significant change in section drag at low angles of attack. In fact, the slotted section will have less drag than the basic section near the maximum lift angle for the basic section.\n\nThe slot-slat device finds great application in modern airplane configurations. 
The tailless airplane configuration can utilize only the high lift devices which have negligible effect on the pitching moments. The slot and slat are often used to increase the cl- in high speed flight when compressibility effects are considerable. The small change in twisting moment is a favorable feature for any high lift device to be used at high speed. Leading edge high lift devices are more effective on the highiy swept wing than trailing edge flaps since slats are quite powerful in controlling the flow pattern. Small amounts of local camber added to the leading edge as a high lift device is most effective on wings of very low thickness and sharp leading edges. Most usually the slope of the leading edge high lift device is used to control the spanwise lift distribution on the wing.\n\n'Boundary larcr control devices are additional means of increasing the maximum lift coe& cient of a section. The thin layer of airflow adjacent to the surface of an airfoil shows reduced local velocities from the effect of skin friction. When at high angles of attack this boundary layer on the upper surface tends to\n\nstagnate and come to a stop. If this happens the airflow will separate from the surface and stall occurs. Boundary layer control for high lift applications features various devices to maintain high velocity in the boundary layer to allay separation of the airflow. This control of the boundary layer kinetic energy can be accomplished in two ways. One method is the application of a suction through ports to draw off low energy boundary layer and replace it with high velocity air from outside the boundary layer. The effect of surface suction boundary layer control on lift characteristics is typified by figure 1.18. Increasing surface suction produces greater maximum lift coe5 cients which occur at higher angles of attack. 
The effect is similar to that of a slot because the slot is essentially a boundary layer control device ducting high energy air to the upper surface.\n\nAnother method of boundary layer control is accomplished by injecting a high speed jet of air into the boundary layer. This method produces essentially the same results as the suction method and is the more practical installation. The suction type BLC requires the installation of a separate pump while the \"blown\" BLC system can utilize the high pressure source of a jet engine compressor. The typical installation of a high pressure BU system would be the augmentation of a deflected flap. Since any boundary layer control tends to increase the angle of attack for maximum lift, it is important to combine the boundary layer control with flaps since the flap deflection tends to reduce the angIe of attack for maximum lift\n\nOPERATION OF HIGH LIFT DEVICES. The management of the high lift devices on an airplane is an important factor in flying operations. The devices which are actuated automatically-such as automatic slats and slotsare usually of little concern and cause little complication since relatively small changes in drag and pitching moments take place. However, the flaps must be properly managed by the pilot to take advantage of the capability", - "page_start": 60, - "page_end": 60, - "source_file": "00-80T-80.pdf" - }, - { - "text": "weight if the airplane is flown at the angle of attack for (L/D),. Of course, the gross weight would affect the glide airspeed necessary for this particular angle of attack but the glide ratio would be unaffected.\n\nAIRFOIL DRAG CHARACTERISTICS. The total drag of an airplane is composed of the drags of the individual components and the forces caused by interference between these components. The drag of an airplane configuration must include the various drags due to lift, form, friction, interference, leakage, etc. 
To appreciate the factors which affect the drag of an airplane configuration, it is most logical to consider the factors which affect the drag of airfoil sections. In order to allow an objective consideration of the effects of thickness, camber, etc., the properties of two-dimensional sections must be studied. Airfoil section properties are derived from the basic profile in two-dimensional. flow and are provided the lower case shorthand notation to distinguish them from wing or airplane properties, e.g., wing or airplane drag coe5 cient is C, while airfoil section drag coefficient is c,.\n\nThe drag characteristics of three illustrative airfoil sections are shown in figure 1.14. The section drag coe&cient, c,, is plotted versus the section lift coefficient, cr. The drag on the airfoil section is composed of pressure drag and skin friction. When the airfoil is at low lift coe&cients, the drag due to skin friction predominates. The drag curve for a conventional airfoil tends to be quite shallow in this region since there is very little variation of skin friction with angle of attack. When the airfoil is at high lift coefficients, form or pressure drag predominates and the drag coefficient varies rapidly with lift coefficient. The NACA 0006 is a thin symmetrical profile which has a maximum thickness of 6 percent located at 30 percent of the chord. This section shows a typical variation of cd and cr.\n\nThe NACA 4412 section is a 12 percent thick airfoil with 4 percent maximum camber at 40 percent chord. When this section is compared with the NACA 0006 section the effect of camber can be appreciated. At low lift coefficients the thtn, symmetrical section has much lower drag. However, at lift coefficients above 03 the thicker, cambered section has the lower drag. Thus, proper camber and thickness can improve the lift-drag ratio of the section.\n\nThe NACA 63,412 is a cambered 12 percent thick airfoil of the '\"laminar flow\" type. 
This airfoil is shaped to produce a design lift coe5cient of 0.4. Notice that the drag curve of this airfoil has distinct aberrations with very low drag coefficients near the lift coefficient of 0.4. This airfoil profile has its camber and thickness distributed to produce very low uniform velocity on the forward surface (minimum pressure point well aft) at this lift coefficient. The resulting pressure and velocity distribution enhance extensive laminar flow in the boundary layer and greatly reduce the skin friction drag. The benefit of the laminar flow is appreciated by comparing the minimum drag of this airfoil with an airfoil which has one-half the maximum thickness-the NACA ooo6.\n\nThe choice of an airfoil section will depend on the consideration oftmany different factors. While the cI, of the section is an important quality, a more appropriate factor for consideration is the maximum lift coefficient of the section when various high lift devices are applied. Trailing edge flaps and leading edge high lift devices are applied to increase the cr,, for low speed performance. Thus, an appropriate factor for comparison is the ratio of section drag coe5cient to section maximum lift coefficient with flaps-cd/crm,. When this quantity is corrected for compressibility, a preliminary selection of an airfoil section is possible. The airfoil having the lowest value of c&~, at the design flight condition (endurance, range, high speed, etc.) will create the least section drag for a given .design stall speed.", - "page_start": 50, - "page_end": 50, - "source_file": "00-80T-80.pdf" - }, - { - "text": "(4) Sweepback contributes to lateral stability in rhe same sense as dihedral. When the swept wing aircraft is placed in a sideslip, the wing into the wind experiences an increase in lift since the sweep is less and the wing away from the wind produces less lift since rhe sweep is greater. 
As shown in figure 3.15, the swept wing aircraft in a sideslip experiences lift changes and a subsequent rolling moment which tends to right the aircraft. This lateral stability conrribution depends on the sweepback and the lift coefficient of the wing. A highly swept wing operating at high lift coeflicient usually experiences such an excess of this lateral stability contribution that adequate controllability may be a significant problem. As shown, the swept wing has certain important advantages. However, the use of\n\nsweepback produces certain inevitable disadvantages which are important from the standpoint of both airplane design and flight operations. The most important of these disadvantages are as follows:\n\n(1) When sweepback is combined with taper there is an extremely powerful tendency for the wing to stall tip first. This pattern of stall is very undesirable since there would be little stall warning, a serious reduction in lateral control effectiveness, and the forward shift of the center of pressure would contribute to a nose up moment (\"pitch up\" or \"stick force lightening\"). Taper has its own effect of producing higher local lift coefhcients toward the tip and one of the effects of sweepback is very similar. All outboard wing sections are affected by the upwash of the preceding inboard sections and the lift distribution resulting from sweepback alone is similar to that of high taper.\n\nAn additional effect is the tendency to develop a strong spanwise flow of the boundary layer toward the tip when the wing is at high lift coefficients. This spanwise flow produces a relatively low energy boundary layer near the tip which can be easily sep-\n\narated. The combined effect of taper and sweep present a considerable problem of tip stall and this is illustrated by the flow patterns of figure 3.16. Design for high speed performance may dictate high sweepback, while structural efficiency may demand a highly tapered planform. 
When such is the case, the wing may require extensive aerodynamic tailoring to provide a suitable stall pattern and a lift distribution at cruise condition which reduces drag due to lift. Washout of the tip, variation of section camber throughout span, flow fences, slats, leading edge extension, etc., are typical devices used to modify the stall pattern and minimize drag due to lift at cruise condition.\n\n(2) As shown by the lift curve of figure 3.15 the use of sweepback will reduce the lift curve slope and the subsonic maximum lift coefficient. It is important to note this case is definitely subsonic since sweepback may be used to improve the transonic maneuvering capability. Various sweep angles applied to wings of moderate aspect ratio produce these approximate effects on the subsonic lift characteristics:\n\n| sweep Angle (A): | |\n| --- | --- |\n| O\" | 0 |\n| w | 4 |\n| 300. | 14 |\n| 450 | 30 |\n| M)Q | yl |\n\nThe reduction of the low speed maximum lift coefficient (which is in addition to that lost due to tip stall) has very important implications in design. If wing loading is not reduced, stall speeds increase and subsonic maneuverability decreases. On the other hand, if wing loading is reduced, the increase in wing surface area may reduce the anticipated benefit of sweepback in the transonic flight regime. Since the requirements of performance predominate, certain increases of stall speeds, takeoff speeds,", - "page_start": 248, - "page_end": 248, - "source_file": "00-80T-80.pdf" - }, - { - "text": "control speeds\" set by these factors rather than simple stall speeds based on C&,.\n\nWhen a wing of a given planform has various high lift devices added, the lift distribution and stall pattern can be greatly affected. Deflection of trailing edge flaps increases the local lift coe5cients in the flapped areas and since the stall angle of the flapped section is decreased, initial stall usually begins in the flapped area. 
The extension of slats simply allows the slatted areas to go to higher lift coe5cients and angles of attack and generally delays stall in that vicinity. Also, power effects may adversely affect the stall pattern of the propeller powered airplane. When the propeller powered airplane is at high power and low speed, the flow induced at the wing root by the slipstream may cause considerable delay in the stall of the root sections. Hence, the propeller powered airplane may have its most undesirable stall characteristics during the power-on stall rather than the power-off stall.\n\n#### PARASITE DRAG\n\nIn addition to the drag caused by the development of lift (induced drag) there is the obvious drag which is nor due to the develop ment of lift. A wing surface even at zero lift will have \"profile\" drag due to skin friction and form. The other components of the airplane such as the fuselage, tail, nacelles, etc., contribute to drag because of their own form and skin friction. Any loss of momentum of the airstream due to powerplant cooling, air conditioning, or leakage through construction or access gaps is, in effect, an additional drag. When the various components of the airplane are put together the total drag will be greater than the sum of the individual components because of \"interference\" of one surface on the other.\n\nThe most usual interference of importance occurs at the wing-body intersection where the growth of boundary layer on the fuselage reduces the boundary layer velocities on the wing root surface. This reduction in energy allows\n\nthe wing root boundary layer to be more easily separated in the presence of an adverse pressure gradient. Since the upper wing surface has the more critical pressure gradients, a low wing position on a circular fuselage would create greater interference drag than a high wing position. 
Adequate filleting and control of local pressure gradients is necessary to minimize such additional drag due to interference.\n\nThe sum of all the drags due to form, friction, leakage and momentum losses, and interference drag is termed \"parasite\" drag since it is not directly associated with the development of lift. While this parasite drag is not directly associated with the production of lift it is a variable with lift. The variation of parasite drag coefficient, C+, with lift coefficient, C,, is shown for a typical airplane in figure 1.34. The minimum parasite drag coefficient, CDpmi,, usually occurs at or near zero lift and parasite drag coefficient increases above this point,in a smooth curve. The induced drag coefficient is shown on the same graph for purposes of comparison since the total drag of the airplane is a sum of the parasite and induced drag.\n\nIn many parts of airplane performance it is necessary to completely distinguish between drag due to lift and drag not due to lift. The total drag of an airplane is the sum of the parasite and induced drags.\n\nG=c++cD;\n\nwhere\n\nC, = airplane drag coefficient C+=parasite drag coefficient C,,= induced drag coeaicient\n\n$$=0.318\\,{\\frac{C_{L}{}^{2}}{A R}}$$\n\nFrom inspection of figure 1.34 it is seen that both CD, and CD, vary with lift coefticient. However, the usual variation of parasite drag allows a simple correlation with the induced drag term. In effect, the part of parasite drag above the minimum at zero lift can be \"lumped\"", - "page_start": 104, - "page_end": 104, - "source_file": "00-80T-80.pdf" - }, - { - "text": "#### NAVWEPS DO-ROT-80 APPLICATION OF AERODYNAMICS TO SPECIFIC PROBLEMS OF FLYING\n\nusually means remaining over a particular spot on the ground, it shall be considered here as flight at zero airspeed. This is necessary because the aerodynamic characteristics of the rotor depend on its motion with respect to the air and not the ground. 
Hovering in a 20 knot wind is aerodynamically equivalent to flying at an airspeed of 20 knots in a no-wind condition, and the characteristics will be identical in the two conditions.\n\nThe first point to realize is that the rotor is subject to the same physical laws of aerodynamics and motion that govern flight of the fixed-wing airplane. The manner in which the rotor is subject to these laws is much more complicated due to the complex flow conditions.\n\nRotor lift can be explained by either of two methods. The first method, utilizing simple momentum theory based on Newton's Laws, merely states that lift results from the rotor accelerating a mass of air downward in the same way that the jet engine develops thrust by accelerating a mass of air out the tailpipe. The second method of viewing rotor lift concerns the pressure forces acting on the various sections of the blade from root to tip. The simple momentum theory is useful in determining only lift characteristics while the \"blade element\" theory gives drag as well as lift characteristics and is useful in giving a picture of the forces at work on the rotor. In the \"blade element\" theory, the blade is divided up into \"blade elements\" as shown in figure 6.15. The forces acting on each blade element are analyzed. Then the forces on all elements are summed up to give the characteristics of the whole rotor. The relative wind acting on each segment is the resultant of two velocity components: (1) the velocity due CO the rotation of the blades about the hub and (2) the induced velocity, or downwash velocity caused by the rotor. the velocity due to rotation at a particular element is proportional to the rotor speed and the distance of the element from the rotor hub.\n\nThus, the velocity due to rotation varies linearly from zero at the hub to a maximum at the tip. 
A typical blade section with the forces acting on it is shown in figure 6.15.\n\nA summation of the forces acting perpendicular to the plane of rotation (tip path plane) will determine the rotor thrust (or lift) characteristics while summation of the moments resulting from forces acting in the plane of rotation will determine the rotor torque characteristics. As a result of this analysis, the rotor thrust (or lift) is found to be propoctional to the air density, a nondimensional thrust coefficient, and the square of the tip speed, or linear speed of the tip of the blade. The thrust coefficient is a function of the average blade section lift coefficient and the rotor solidity, which is the proportion of blade area to disc area. The lift coefficient is identical to that used in airplane aerodynamics while the solidity is analagous to the aspect ratio in airplane aerodynamics. The rotor torque is found to be proportional to a nondimensional torque coefficient, the air density, the disc area, the square of the tip speed, and the blade radius. The torque coefficient is dependent upon the average profile drag coefficient of the blades, the blade pitch angle, and the average lift coefficient of the blades. The torque can be thought to result from components of profile and induced drag forces acting on the blades, similar to those on an airplane.\n\nAs in the airplane, there is one angle of attack or blade pitch condition that will result in the most efficient operation. Unfortunately, the typical helicopter rotor operates at a near constant RPM and thus a constant true airspeed and cannot operate at this most eficient condition over a wide range of altitude and gross weight as the fixed-wing airplane. 
The airplane is able to maintain an efficient angle of attack at various altitudes and gross weights by flying at various airspeeds but the helicopter will operate with a near constant rotor velocity and vary blade angle to contend with variations in altitude and gross weight.", - "page_start": 417, - "page_end": 417, - "source_file": "00-80T-80.pdf" - }, - { - "text": "EFFECT OF FLAPS\n\nFigure 1.15. Flight at High Lift Conditions", - "page_start": 53, - "page_end": 53, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "BD-EN_calendrier-Lauzun-2024.pdf", - "query": "What are the recyclable waste ?", - "target_page": 3, - "target_passage": "All types of paper and cardboard, Metal packaging, even the smallest ones, Plastic bottles and flasks, All other packaging", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "*Transfer and Disposal Services.* We own or operate 96 transfer stations. We deposit waste at these stations, as do other private haulers and municipal haulers, for compaction and transfer to trailers for transport to disposal sites or recycling facilities. As of December 31, 2004, we owned or operated 58 landfills, which had approximately 8,904 permitted acres and total available permitted and probable expansion disposal capacity of approximately 1.7 billion in-place cubic yards. The in-place capacity of our landfills is subject to change based on engineering factors, requirements of regulatory authorities and the ability to expand our sites successfully. Some of our landfills accept non-hazardous special waste, including utility ash, asbestos and contaminated soils. See \"Properties.\"\n\nMost of our existing landfill sites have the potential for expanded disposal capacity beyond the currently permitted acreage.
We monitor the availability of permitted disposal capacity at each of our landfills and evaluate whether to pursue expansion at a given landfill based on estimated future waste volumes and prices, market needs, remaining capacity and likelihood of obtaining an expansion. To satisfy future disposal demand, we are currently seeking to expand permitted capacity at certain of our landfills, although no assurances can be made that all future expansions will be permitted as designed.\n\n*Other Services.* We have 35 materials recovery facilities and other recycling operations, which are generally required to fulfill our obligations under long-term municipal contracts for residential collection services. These facilities sort recyclable paper, aluminum, glass and other materials. Most of these recyclable materials are internally collected by our residential collection operations. In some areas, we receive commercial and industrial solid waste that is sorted at our facilities into recyclable materials and nonrecyclable waste. The recyclable materials are salvaged, repackaged and sold to third parties and the nonrecyclable waste is disposed of at landfills or incinerators. Wherever possible, our strategy is to reduce our exposure to fluctuations in recyclable commodity prices by utilizing third party recycling facilities, thereby minimizing our recycling investment.\n\nWe provide remediation and other heavy construction services primarily through our subsidiary located in Missouri.\n\nWe also have a Texas-based compost, mulch and soil business at which yard, mill and other waste is processed, packaged and sold as various products.\n\n### **Sales and Marketing**\n\nWe seek to provide quality services that will enable our company to maintain high levels of customer satisfaction. We derive our business from a broad customer base which we believe will enable our company to experience stable growth.
We focus our marketing efforts on continuing and expanding business with existing customers, as well as attracting new customers.\n\nWe employ approximately 500 sales and marketing employees. Our sales and marketing strategy is to provide high-quality, comprehensive solid waste collection, recycling, transfer and disposal services to our customers at competitive prices. We target potential customers of all sizes, from small quantity generators to large \"Fortune 500\" companies and municipalities.\n\nMost of our marketing activity is local in nature. However, in 2000 we initiated a national accounts program in response to our customers' needs.\n\nWe generally do not change the tradenames of the local businesses we acquire, and therefore we do not operate nationally under any one mark or tradename. Rather, we rely on the goodwill associated with the acquired companies' local tradenames as used in each geographic market in which we operate.\n\n### **Customers**\n\nWe provide services to commercial, industrial, municipal and residential customers. No one customer has individually accounted for more than 10% of our consolidated revenue or of our reportable segment revenue in any of the last three years.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## **INSTRUCTIONS**\n\n### in the Pays de Lauzun district\n\n#### **RECYCLABLE WASTE**\n\n### **ORGANIC WASTE**", - "page_start": 2, - "page_end": 2, - "source_file": "BD-EN_calendrier-Lauzun-2024.pdf" - }, - { - "text": "## Compost Questions and Answers\n\n#### **What is compost?**\n\nCompost is a natural humus-like soil amendment that results from the controlled aerobic (with oxygen) decomposition of organic materials. Compost is not soil – it should be mixed with soil.
It is not fertilizer, although it contains many slowly released nutrients.\n\n#### **What materials (\"feedstocks\") are used to make compost?**\n\nCompost facilities in Washington recycle a variety of organic materials, including yard debris, food scraps, manure, biosolids, forest residuals like sawdust and bark, construction wood, and agricultural residues. All of these materials can be used to produce high quality compost. Your supplier can tell you which materials they compost.\n\n#### **How do I know I'm getting safe, quality compost?**\n\nFortunately, in Washington we have strict permitting and production standards for compost facilities that include both time and temperature requirements and contaminant limits.\n\n#### **What about weed seeds, plant diseases or pesticide residues?**\n\nThe controlled time, aeration, and temperature process required in Washington has been shown to kill weed seeds and plant diseases. That same process breaks down most pesticide residues. There are a few agricultural pesticides that are not easily broken down, and permitted Washington compost manufacturers carefully watch their feedstocks to keep those materials out of the composting process.\n\n# Compost Beginnings\n\nThe yard debris or food scraps* that you place into your home compost bin, take to a drop-off site, or set out for curbside collection could become the compost that you later use on your garden, lawn, and flowerbeds.\n\nIt is essential to place only quality organic material into the composting process. Here are some tips:\n\n- The products you use or spray in your yard can end up in the compost process. Carefully read the labels of pesticide and herbicide products you use.
(See page 9.)\n\n- Please keep yard debris free of:\n\t- Garbage\n\t- Plastic of any sort\n\t- Plastic plant pots\n\t- Plastic plant tabs\n\t- Plastic bags (if you want to bag your yard debris, use paper garden bags - available at most garden centers)\n\t- Rock, brick, or masonry\n\t- Glass or metal\n\t- Pet waste.\n\n* Many localities now collect food scraps and food-soiled paper along with yard debris for composting. Call your local collection service to find out what is collected in your area.", - "page_start": 4, - "page_end": 4, - "source_file": "CompostGuide.pdf" - }, - { - "text": "transportation, treatment, storage and disposal of hazardous and non-hazardous solid waste, and require states to develop programs to ensure the safe disposal of solid waste in sanitary landfills.\n\nSubtitle D of RCRA establishes a framework for regulating the disposal of municipal solid waste. Regulations under Subtitle D currently include minimum comprehensive solid waste management criteria and guidelines, including location restrictions, facility design and operating criteria, closure and post-closure requirements, financial assurance standards, groundwater monitoring requirements and corrective action standards, many of which had not commonly been in effect or enforced in the past in connection with municipal solid waste landfills. Each state was required to submit to the U.S. EPA a permit program designed to implement Subtitle D regulations by April 9, 1993. All of the states in which we operate have implemented permit programs pursuant to RCRA and Subtitle D. These state permit programs may include landfill requirements which are more stringent than those of Subtitle D.\n\nAll of our planned landfill expansions or new landfill development projects have been engineered to meet or exceed Subtitle D requirements. Operating and design criteria for existing operations have been modified to comply with these new regulations.
Compliance with Subtitle D regulations has resulted in increased costs and may in the future require substantial additional expenditures in addition to other costs normally associated with our waste management activities.\n\n(2) *The Comprehensive Environmental Response, Compensation and Liability Act of 1980, as amended.* CERCLA, among other things, provides for the cleanup of sites from which there is a release or threatened release of a hazardous substance into the environment. CERCLA may impose strict joint and several liability for the costs of cleanup and for damages to natural resources upon current owners and operators of the site, parties who were owners or operators of the site at the time the hazardous substances were disposed of, parties who transported the hazardous substances to the site and parties who arranged for the disposal of the hazardous substances at the site. Under the authority of CERCLA and its implementing regulations, detailed requirements apply to the manner and degree of investigation and remediation of facilities and sites where hazardous substances have been or are threatened to be released into the environment. Liability under CERCLA is not dependent upon the existence or disposal of only \"hazardous wastes\" but can also be based upon the existence of small quantities of more than 700 \"substances\" characterized by the U.S. EPA as \"hazardous,\" many of which may be found in common household waste.\n\nAmong other things, CERCLA authorizes the federal government to investigate and remediate sites at which hazardous substances have been or are threatened to be released into the environment or to order (or offer an opportunity to) persons potentially liable for the cleanup of the hazardous substances to do so. In addition, the U.S.
EPA has established a National Priorities List of sites at which hazardous substances have been or are threatened to be released and which require investigation or cleanup.\n\nLiability under CERCLA is not dependent upon the intentional disposal of hazardous waste or hazardous substances. It can be founded upon the release or threatened release, even as a result of unintentional, non-negligent or lawful action, of thousands of hazardous substances, including very small quantities of such substances. Thus, even if our landfills have never knowingly received hazardous waste as such, it is possible that one or more hazardous substances may have been deposited or \"released\" at our landfills or at other properties which we currently own or operate or may have owned or operated. Therefore, we could be liable under CERCLA for the cost of cleaning up such hazardous substances at such sites and for damages to natural resources, even if those substances were deposited at our facilities before we acquired or operated them. The costs of a CERCLA cleanup can be very expensive. Given the difficulty of obtaining insurance for environmental impairment liability, such liability could have a material impact on our business and financial condition. For a further discussion, see \"Liability Insurance and Bonding.\"\n\n(3) *The Federal Water Pollution Control Act of 1972, as amended.* This Act regulates the discharge of pollutants from a variety of sources, including solid waste disposal sites, into streams, rivers and other waters of the United States. Point source runoff from our landfills and transfer stations that is discharged into surface waters must be covered by discharge permits that generally require us to conduct
Entry into our business and the ability to operate profitably in the industry requires substantial amounts of capital and managerial experience.\n\nCompetition in the non-hazardous solid waste industry comes from a few large, national publicly-owned companies, including Waste Management and Allied Waste Industries, several regional publicly- and privately-owned solid waste companies, and thousands of small privately-owned companies. Some of our competitors have significantly larger operations, and may have significantly greater financial resources, than we do. In addition to national and regional firms and numerous local companies, we compete with municipalities that maintain waste collection or disposal operations. These municipalities may have financial advantages due to the availability of tax revenues and tax-exempt financing.\n\nWe compete for collection accounts primarily on the basis of price and the quality of our services. From time to time, our competitors may reduce the price of their services in an effort to expand market share or to win a competitively bid municipal contract. This may have an impact on our future revenue and profitability.\n\nIn each market in which we own or operate a landfill, we compete for landfill business on the basis of disposal costs, geographical location and quality of operations. Our ability to obtain landfill business may be limited by the fact that some major collection companies also own or operate landfills to which they send their waste. There also has been an increasing trend at the state and local levels to mandate waste reduction at the source and to prohibit the disposal of certain types of waste, such as yard waste, at landfills. This may result in the volume of waste going to landfills being reduced in certain areas, which may affect our ability to operate our landfills at their full capacity and/or affect the prices that we can charge for landfill disposal services.
In addition, most of the states in which we operate landfills have adopted plans or requirements that set goals for specified percentages of certain solid waste items to be recycled.\n\n### **Regulation**\n\nOur facilities and operations are subject to a variety of federal, state and local requirements that regulate the environment, public health, safety, zoning and land use. Operating and other permits, licenses and other approvals are generally required for landfills and transfer stations, certain solid waste collection vehicles, fuel storage tanks and other facilities that we own or operate, and these permits are subject to revocation, modification and renewal in certain circumstances. Federal, state and local laws and regulations vary, but generally govern wastewater or stormwater discharges, air emissions, the handling, transportation, treatment, storage and disposal of hazardous and non-hazardous waste, and the remediation of contamination associated with the release or threatened release of hazardous substances. These laws and regulations provide governmental authorities with strict powers of enforcement, which include the ability to obtain injunctions and/or impose fines or penalties in the case of violations, including criminal penalties. The U.S. Environmental Protection Agency and various other federal, state and local environmental, public and occupational health and safety agencies and authorities administer these regulations, including the Occupational Safety and Health Administration of the U.S. Department of Labor.\n\nWe strive to conduct our operations in compliance with applicable laws and regulations. However, in the existing climate of heightened environmental concerns, from time to time, we have been issued citations or notices from governmental authorities that have resulted in the need to expend funds for remedial work and related activities at various landfills and other facilities.
There is no assurance that citations and notices will not be issued in the future despite our regulatory compliance efforts. We have established remediation reserves that we believe, based on currently available information, will be adequate to cover our current estimates of regulatory costs. However, we cannot assure you that actual costs will not exceed our reserves.\n\n*Federal Regulation.* The following summarizes the primary environmental, public and occupational health and safety-related statutes of the United States that affect our facilities and operations:\n\n(1) *The Solid Waste Disposal Act, as amended, including the Resource Conservation and Recovery Act.* RCRA and its implementing regulations establish a framework for regulating the handling,", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "A project of the Washington Organic Recycling Council, with support from the Washington State Department of Ecology's Public Participation Grant program.\n\nThis product was partly funded through a grant from the Washington Department of Ecology. While these materials were reviewed for grant consistency, this does not necessarily constitute endorsement by the department.\n\n**Special thanks:** the original version of this brochure in 2003 was created by the Washington County, Oregon Solid Waste and Recycling Program in cooperation with the Washington Organic Recycling Council and the Composting Council of Oregon.\n\n- \n# **original artwork provided by:**\n\n## Tips to Remember:\n\n- *• Don't put plants into 100% compost.
Mix compost thoroughly into existing soil before planting.*\n- *• When transplanting, it's better to amend the whole bed, not just planting holes, to promote root growth.*\n- *• Ask your compost supplier which compost product is best for your intended use.*\n- *• Use compost at the recommended application rate.*\n- *• To maintain healthy soil, reapply compost or mulch every 1-2 years.*\n- *• Many composts are rich in plant nutrients, so you may be able to reduce fertilizer use after applying compost.*\n- *• Compost can also reduce your lawn and garden's summer irrigation needs.*\n- *• Compost-amended soil and mulching slow runoff, reduce erosion, and break down pollutants. When you use compost, you're helping to protect our precious streams, rivers, lakes, and marine waters.*
All this helps to conserve our natural resources and reduces the amount of material sent to the landfill.\n\nCompost-amended soil also helps break down pollutants and absorb stormwater runoff. By making nutrients slowly available to plants and enhancing plant health, compost can reduce the need for chemical fertilizers and pesticides. All these benefits help protect our lakes, rivers, and marine waters from pollution and excessive runoff.\n\nCompost is a natural amendment for your lawn or garden, and can be used regularly to enrich your soil. This guide is designed to help you get the most from the compost that you buy.", - "page_start": 2, - "page_end": 2, - "source_file": "CompostGuide.pdf" - }, - { - "text": "market knowledge, community relations and name recognition, and to instill their entrepreneurial drive at all levels of our operations. By furnishing the local management of such acquired companies with our financial and marketing resources and technical expertise, we believe that the acquired companies are better able to secure additional municipal franchises and other contracts.\n\n*Privatize Municipal Operations and Acquire Divested Operations.* We also seek to acquire solid waste collection operations, transfer stations and landfills that municipalities and other governmental authorities are privatizing. Many municipalities are seeking to outsource or sell these types of solid waste operations, as they lack the capital, technical expertise and/or operational resources necessary to comply with increasingly stringent regulatory standards and/or to compete effectively with private-sector companies.
In addition, we have acquired, and will continue to seek to acquire, operations and facilities that may be divested by other publicly-owned waste companies.\n\n### **Operations**\n\nOur operations primarily consist of the collection, transfer and disposal of non-hazardous solid waste.\n\n*Collection Services.* We provide solid waste collection services to commercial, industrial, municipal and residential customers in 22 states through 140 collection companies. In 2004, 74.3% of our revenue was derived from collection services consisting of approximately 32.5% from services provided to municipal and residential customers, 36.6% from services provided to commercial customers, and 30.9% from services provided to industrial and other customers.\n\nOur residential collection operations involve the curbside collection of refuse from small containers into collection vehicles for transport to transfer stations or directly to landfills. Residential solid waste collection services are typically performed under contracts with municipalities, which we generally secure by competitive bid and which give our company exclusive rights to service all or a portion of the homes in their respective jurisdictions. These contracts or franchises usually range in duration from one to five years, although some of our exclusive franchises are for significantly longer periods. Residential solid waste collection services may also be performed on a subscription basis, in which individual households contract directly with our company. The fees received for subscription residential collection are based primarily on market factors, frequency and type of service, the distance to the disposal facility and cost of disposal. In general, subscription residential collection fees are paid quarterly in advance by the residential customers receiving the service.\n\nIn our commercial and industrial collection operations, we supply our customers with waste containers of varying sizes.
We also rent compactors to large waste generators. Commercial collection services are generally performed under one- to three-year service agreements, and fees are determined by such considerations as:\n\n- market factors,\n- collection frequency,\n- type of equipment furnished,\n- the type and volume or weight of the waste collected,\n- the distance to the disposal facility and\n- the cost of disposal.\n\nWe rent waste containers to construction sites and also provide waste collection services to industrial and construction facilities on a contractual basis with terms generally ranging from a single pickup to one year or longer. We collect the containers or compacted waste and transport the waste either to a landfill or a transfer station for disposal.\n\nAlso, we currently provide recycling services in certain markets primarily to comply with local laws or obligations under our franchise agreements. These services include the curbside collection of residential recyclable waste and the provision of a variety of recycling services to commercial and industrial customers.
Printed in the USA.\n\n374047840 PLEASE RECYCLE.\n\n21008 - 037404A 2014 ANNUAL REPORT - pg BC-FC\n\n8.375 X 10.875 - PDF X1A - KODAK\n\nANNUAL REPORT 2014\n\nANNUAL REPORT 2014", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_JWN_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "BD-EN_calendrier-Lauzun-2024.pdf", - "query": "What is the day of the black container in Lachapelle ?", - "target_page": 4, - "target_passage": "LACHAPELLE MONDAY green weeks", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "Figure 11.3 Gruff Visualization of the EmployeeShape\n\nFigure 11.4 Gruff Visualization of the CustomerShape", - "page_start": 81, - "page_end": 81, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "#### **HOW DOES IT WORK?**\n\n**When to put my garbage container outside?** The evening before the waste collection day.\n\n**Who is responsible for the maintenance of the containers?** You will have to keep them in a clean working state (periodical washing).\n\n**Container stolen: What to do?** In case of theft, your container will be replaced on presentation of a theft report effected at your local police station.\n\n**Out container = full container** Put your rubbish container out only when full.\n\n**Attention !** Black garbage bags left on the ground will no longer be collected.\n\nPlease be respectful with the agents.\n\n### **HOW TO GET A COMPOST KIT?**\n\n**Buy your own compost kit and get tips for good composting practice.** Only during opening hours every wednesday from 2 pm to 4 pm at the old recycling centre impasse Elie Teyssier-Miramont. (In case of unavailability, please contact the environment department). 30 minute workshops/awarenessraising sessions are regularly organised (starting at 4pm). It is possible to leave with a composter during these workshops**. 
Registration and information with the service.\n\n| Compost kit | Plastic | Wood |\n| --- | --- | --- |\n| 300 L | 20 € | 30 € |\n| 400 L | 25 € | 35 € |\n\n* Only payment by cheque made payable to the 'Tresor Public' are accepted\n\n**Specific condition of acquisition apply according to your municipality of residence\n\n| Town | Black container | Yellow container |\n| --- | --- | --- |\n| AGNAC | TUESDAY | THURSDAY |\n| | white weeks | green weeks |\n| ALLEMANS-DU-DROPT | MONDAY | WEDNESDAY |\n| | green weeks | white weeks |\n| ARMILLAC | TUESDAY | THURSDAY |\n| | white weeks | green weeks |\n| BOURGOUGNAGUE | WEDNESDAY | FRIDAY |\n| | green weeks | white weeks |\n| CAMBES | MONDAY | WEDNESDAY |\n| | green weeks | white weeks |\n| LACHAPELLE | MONDAY | THURSDAY |\n| | green weeks | white weeks |\n| LAPERCHE | TUESDAY | WEDNESDAY |\n| | white weeks | green weeks |\n| LA-SAUVETAT-DU-DROPT | TUESDAY | THURSDAY |\n| | white weeks | green weeks |\n| LAUZUN | MONDAY | FRIDAY |\n| | green weeks | white weeks |\n| LAVERGNE | TUESDAY | THURSDAY |\n| | white weeks | green weeks |\n| MIRAMONT-DE-GUYENNE | TUESDAY | THURSDAY |\n| | green weeks | white weeks |\n| MONTIGNAC-DE-LAUZUN | WEDNESDAY | WEDNESDAY |\n| | white weeks | green weeks |\n| MONTIGNAC-TOUPINERIE | TUESDAY | THURSDAY |\n| | white weeks | green weeks |\n| MOUSTIER | WEDNESDAY | WEDNESDAY |\n| | green weeks | white weeks |\n| PEYRIÈRE | MONDAY | THURSDAY |\n| | green weeks | white weeks |\n| PUYSSERAMPION | MONDAY | WEDNESDAY |\n| | green weeks | white weeks |\n| ROUMAGNE | MONDAY | THURSDAY |\n| | white weeks | green weeks |\n| SAINT-COLOMB-DE-LAUZUN | WEDNESDAY | WEDNESDAY |\n| | white weeks | green weeks |\n| SAINT-PARDOUX-ISAAC | MONDAY | FRIDAY |\n| | white weeks | green weeks |\n| SEGALAS | WEDNESDAY | WEDNESDAY |\n| | white weeks | green weeks |\n\n#### **MORE QUESTIONS ?**\n\nWebsite: **www.ccpl47.fr** / Section En Pratique > Environnement > Gestion des déchets\n\n**Environnement Service**:\n\n12 rue du 
Renfort 47410 LAUZUN\n\n**05 53 94 11 23 / secretariat.environnement@ccpl47.fr Composting** : anim.biodechets@ccpl47.fr / 06 33 72 84 18\n\n**Recycling centre access, registration or modification** : iris@ccpl47.fr / 05 53 64 12 26\n\nOn the CCPL website", - "page_start": 3, - "page_end": 3, - "source_file": "BD-EN_calendrier-Lauzun-2024.pdf" - }, - { - "text": "## **Climate**\n\nLyon has a humid subtropical climate (Köppen: *Cfa*), bordering an oceanic climate (*Köppen*: *Cfb*, Trewartha: *Do*).[38] The mean temperature in Lyon in the coldest month is 4.1 °C (39.4 °F) in January and in the warmest month in July is 22.6 °C (72.7 °F). Precipitation is adequate year-round, at an average of 820 mm (32.3 in), the winter months are the driest. The highest recorded temperature was 40.5 °C (104.9 °F) on 13 August 2003 while the lowest recorded temperature was −24.6 °C (−12.3 °F) on 22 December 1938.[39]\n\nIce on the Saône, 2012", - "page_start": 4, - "page_end": 4, - "source_file": "wikipedia4.pdf" - }, - { - "text": "#### **Registry**\n\nOpenShift can build container images from source code, deploy them, and manage their lifecycle. To enable this process, OpenShift provides an internal, integrated registry that can be deployed in the OpenShift environment to manage images.\n\nThe registry stores images and metadata. For production environments, persistent storage must be used for the registry; otherwise, any images that were built or pushed into the registry disappear if the pod restarts.\n\n#### **Aggregated logging**\n\nOne of the Red Hat OpenShift Container Platform optional components is named Red Hat OpenShift Container Platform aggregated logging. This component collects and aggregates logs from the pods that are running in the Red Hat OpenShift Container Platform cluster and /var/log/messages on nodes. 
This configuration enables Red Hat OpenShift Container Platform users to view the logs of the projects they have access to by using a web interface.\n\nRed Hat OpenShift Container Platform aggregated logging component is a modified version of the ELK stack, which is composed of a few pods that are running on the Red Hat OpenShift Container Platform environment:\n\n- Elasticsearch: An object store where all logs are stored.\n- Kibana: A web UI for Elasticsearch.\n- Curator: Elasticsearch maintenance operations that are performed automatically on a per-project basis.\n- Fluentd: Gathers logs from nodes and containers and feeds them to Elasticsearch.\n\nConsider the following basic concepts for aggregated logging:\n\n- Cluster: A set of Elasticsearch nodes that distribute the workload.\n- Node: A container that is running an instance of Elasticsearch, which is part of the cluster.\n- Index: Collection of documents (container logs).\n- Shards and Replicas: Indexes can be divided into sets of data that contain the primary copy of the documents that are stored (primary shards) or backups of those primary copies (replica shards). Sharding allows the application to horizontally scale the information and distribute/parallelize operations. Replication instead provides HA and also better search throughput because searches are also run on replicas.\n\n**Note:** Using NFS storage as a volume or a persistent volume (or by way of a NAS such as Gluster) is not supported for Elasticsearch storage because Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.\n\nRed Hat OpenShift Container Platform can gather metrics from kubelet and store the values in Heapster. Red Hat OpenShift Container Platform Metrics provide the ability to view CPU, memory, and network-based metrics and display the values in the user interface.
These metrics can allow for the horizontal autoscaling of pods based on parameters that are provided by a Red Hat OpenShift Container Platform user. It is important to understand capacity planning when metrics are deployed into an Red Hat OpenShift Container Platform environment.\n\nRed Hat OpenShift Container Platform metrics is composed by the following pods that are running on the Red Hat OpenShift Container Platform environment:\n\n- - Heapster: Heapster scrapes the metrics for CPU, memory, and network usage on every Pod. Then, it exports them into Hawkular Metrics.", - "page_start": 112, - "page_end": 112, - "source_file": "sg248459.pdf" - }, - { - "text": "#### **2.4.3 Kubernetes operating environment, objects, and basic operations**\n\nThis section describes the Kubernetes operating environment, including its objects and basic operations.\n\n#### **Master node**\n\nThis node runs multiple controllers that are responsible for the health of the cluster, replication, scheduling, endpoints (linking Services and Pods), Kubernetes API. It interacts with the underlying cloud providers and others. Generally, it ensures that everything is running and monitors worker nodes.\n\n#### **Worker node**\n\nThis node runs the Kubernetes agent that is responsible for running Pod containers by way of Docker or rkt, requests secrets or configurations, mounts required Pod volumes, performs health checks, and reports the status of Pods and the node to the rest of the system.\n\n#### **Pod**\n\nWithin a cluster, a pod encapsulates an application that is composed of one or more processes from one and at time multiple containers. Every pod includes dedicated I/O resources, such as storage, a unique IP, and a set of configuration properties for the runtime environment. These features make pod the smallest unit of deployment and basic unit of execution.\n\nDocker is the most popular container run time that is used for Kubernetes Pod1. 
Depending on associated containers, pods are available in the following types:\n\n- -Pod with a single container: This configuration is the most common.\n- - Pod with multiple containers: Must be colocated containers to serve a functional requirement.\n- - Networking: Each pod shares its namespace, IP, and port. However, for optimal performance, containers in same Pod communicates with the localhost identity.\n- - Storage: A pod specifies shared storage volume. All containers in a pod can share persistent data through this volume.\n\nAfter a pod is created and is scheduled to run on a node, it persists until one of the following actions occurs:\n\n- -The process is ended.\n- -The pod objected is deleted.\n- -The pod is evicted for lack of resources.\n- -The node fails.\n\nA pod alone is not self-healing, which means that during any failure, a pod does not attempt to restart. A pod is the encapsulation of containers, which primarily are executable entities. Therefore, to \"run a pod\" means running an application and service through containers.\n\n1 https://www.sumologic.com/blog/kubernetes-vs-docker/", - "page_start": 41, - "page_end": 41, - "source_file": "sg248459.pdf" - }, - { - "text": "#### **4.4 OpenShift registry**\n\nOpenShift Container Platform can use any server that implements the container image registry API as a source of images, including the Docker Hub, private registries that are run by third parties, and the integrated OpenShift Container Platform registry.\n\n#### **4.4.1 Integrated OpenShift Container Registry**\n\nOpenShift Container Platform provides an integrated container image registry called OpenShift Container Registry (OCR). This registry that adds the ability to automatically provision new image repositories on demand. 
This feature provides users with a built-in location for their application builds to push the resulting images.\n\nWhenever a new image is pushed to OCR, the registry notifies OpenShift Container Platform about the new image, passing along all the information about it, such as the namespace, name, and image metadata. Different components of OpenShift Container Platform react to new images, creating builds and deployments.\n\nOCR can also be deployed as a stand-alone component that acts solely as a container image registry, without the build and deployment integration.\n\n#### **4.4.2 Third-party registries**\n\nOpenShift Container Platform can create containers by using images from third-party registries. However, these registries do *not* offer the same image notification support as the integrated OpenShift Container Platform registry. In this situation, OpenShift Container Platform fetches tags from the remote registry upon imagestream creation. Refreshing the fetched tags is as simple as running the **oc import-image ** command. When new images are detected, the build that was described in 4.4.1, \"Integrated OpenShift Container Registry\" and deployment reactions occur.\n\n#### **4.5 Managing OpenShift resources**\n\nAll OpenShift resources, images, containers, pods, services, builders, templates, and so on, are stored on etcd and can be managed by the OpenShift CLI, web console, or REST API. 
These resources also are defined in text files in JSON or YAML format and can be changed by editing those files and shared on an SCM system, such as GIT.\n\nOpenShift can even retrieve these resource definitions directly from an external SCM.", - "page_start": 84, - "page_end": 84, - "source_file": "sg248459.pdf" - }, - { - "text": "#### **6.3 OpenShift container platform deployment**\n\nThis section provides an OpenShift container deployment platform.\n\n#### **6.3.1 Deployment scenarios**\n\nThis section describes the following most common scenarios that can be used to start deploying OpenShift clusters:\n\n- - Single node deployment (all-in-one) is not an officially supported OpenShift deployment. The all-in-one (AIO) configuration is considered a testing or development environment. The Master, Infrastructure and Compute Roles are deployed to a single node (see Figure 6-1).\n*Figure 6-1 OpenShift Container Platform 3.11 all-in-one*", - "page_start": 126, - "page_end": 126, - "source_file": "sg248459.pdf" - }, - { - "text": "# **2**\n\n## **Chapter 2. Introduction to containers and orchestration with Kubernetes**\n\nThis chapter presents the conceptual foundations of containers and the open source container orchestration Kubernetes. 
It also introduces the Red Hat Enterprise Kubernetes product that is called Red Hat OpenShift.\n\nThis chapter includes the following topics:\n\n- -2.1, \"A new computing paradigm in cloud transformation\" on page 8\n- -2.2, \"Virtual machines meet containers\" on page 12\n- -2.3, \"Containers\" on page 19\n- -2.4, \"Kubernetes: An open source container orchestration\" on page 24\n- -2.5, \"Enterprise Kubernetes: Red Hat OpenShift\" on page 31", - "page_start": 22, - "page_end": 22, - "source_file": "sg248459.pdf" - }, - { - "text": "#### **4.1 OpenShift cluster components**\n\nFigure 4-1 shows an overview of the OpenShift container platform components.\n\n*Figure 4-1 Red Hat OpenShift cluster platform components*\n\n#### **4.1.1 Docker service and Kubernetes**\n\nThe Docker service that runs in every OpenShift cluster node provides the container image administration. The Kubernetes cluster provides cluster management and orchestrates containers on multiple nodes in the OpenShift cluster.\n\n#### **4.1.2 etcd store**\n\nThe etcd store is a distributed key value store that is used by Kubernetes to store configuration and state information about the containers and resources that are inside the OpenShift cluster.\n\n#### **4.1.3 OpenShift-Kubernetes extensions**\n\nOpenShift-Kubernetes extensions are more resources that are used to save the OpenShift configuration and cluster internal state. These resource extensions are stored in etcd with application resources that are managed by Kubernetes.", - "page_start": 75, - "page_end": 75, - "source_file": "sg248459.pdf" - }, - { - "text": "The Docker architecture includes the following components:\n\n- -Docker Server Daemon\nDaemon is the Docker process that runs as background process and listen for API requests. It also manages Dockers objects, such as images, containers, networks, and volumes.\n\n- -Docker Registry\nA Docker registry stores Docker images. Docker Hub is a public registry that anyone can use. 
Docker is configured to look for images on Docker Hub by default.\n\n- - Docker Objects:\n\t- Images: This template is read-only with instruction to build a Docker container. An image can be layer on another image with specific changes. An images library is available from the Docker registry.\n\nA Dockerfile contains the configuration information that is needed to build and run an image. Based on instructions that are defined in the Dockerfile, layers are created for an image. During the build process, only the changed layer is rebuilt; therefore, Docker remains lightweight, which makes it small and fast.\n\n- Container: A container is an executable instance that is built from the image, which can be started, stopped, moved, or deleted by using Docker API or CLI. Containers can be connected to one or many networks. Storage can be added and a new container can be built by using a container.\n- Services: Services allow you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers. Each member of a swarm is a Docker daemon, and the daemons all communicate by using the Docker API.\n- NameSpace: In context of Docker, NameSpace is the technology that provides isolated workspaces for a container (see Table 2-1). Each container encapsulates all its features within the namespace that is associated with that specific container.\n\n| Namespace | Description |\n| --- | --- |\n| PID | Process isolation (PID: Process ID) |\n| NET | Managing network interfaces (NET: Networking) |\n| IPC | Managing access to IPC resources (IPC: InterProcess Communication) |\n| MNT | Managing file system mount points (MNT: Mount) |\n| UTS | Isolating kernel and version identifiers. (UTS: UNIX Timesharing System) |\n\n| Table 2-1 NameSpace |\n| --- |\n\n- Control groups: A control group (cgroup) is the technology that limits an application to a specific set of resources. 
This feature allows Docker Engine to share available hardware resources to containers and optionally enforce limits and constraints.\n- Union file system: Union file systems (UnionFS) are file systems that operate by creating layers, which makes them lightweight and fast. Docker Engine uses UnionFS to provide the building blocks for containers.\n- Container format: Container format is the wrapper around NameSpaces, control groups, and UnionFS. The default container format is libcontainer.", - "page_start": 38, - "page_end": 38, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "BD-EN_calendrier-Lauzun-2024.pdf", - "query": "What to do if my container is stolen ?", - "target_page": 4, - "target_passage": "Container stolen: What to do? In case of theft, your container will be replaced on presentation of a theft report effected at your local police station.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "#### **Registry**\n\nOpenShift can build container images from source code, deploy them, and manage their lifecycle. To enable this process, OpenShift provides an internal, integrated registry that can be deployed in the OpenShift environment to manage images.\n\nThe registry stores images and metadata. For production environments, persistent storage must be used for the registry; otherwise, any images that were built or pushed into the registry disappear if the pod restarts.\n\n#### **Aggregated logging**\n\nOne of the Red Hat OpenShift Container Platform optional components is named Red Hat OpenShift Container Platform aggregated logging. This component collects and aggregates logs from the pods that are running in the Red Hat OpenShift Container Platform cluster and /var/log/messages on nodes. 
This configuration enables Red Hat OpenShift Container Platform users to view the logs of projects that they can view by using a web interface.\n\nRed Hat OpenShift Container Platform aggregated logging component is a modified version of the ELK stack, which is composed of a few pods that are running on the Red Hat OpenShift Container Platform environment:\n\n- -Elasticsearch: An object store where all logs are stored.\n- -Kibana: A web UI for Elasticsearch.\n- - Curator: Elasticsearch maintenance operations that are performed automatically on a per-project basis.\n- -Fluentd: Gathers logs from nodes and containers and feeds them to Elasticsearch.\n\nConsider the following basic concepts for aggregated logging:\n\n- -Cluster: A set of Elasticsearch nodes that distribute the workload.\n- -Node: A container that is running an instance of Elasticsearch, which is part of the cluster.\n- -Index: Collection of documents (container logs).\n- - Shards and Replicas: Indexes can be divided into sets of data that contain the primary copy of the documents that are stored (primary shards) or backups of that primary copies (replica shards). Sharding allows the application to horizontally scale the information and distributed/paralellized operations. Replication instead provides HA and also better search throughput because searches are also run on replicas.\n\n**Note:** Using NFS storage as a volume or a persistent volume (or by way of an NAS such as Gluster) is not supported for Elasticsearch storage because Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.\n\nRed Hat OpenShift Container Platform can gather metrics from kubelet and store the values in Heapster. Red Hat OpenShift Container Platform Metrics provide the ability to view CPU, memory, and network-based metrics and display the values in the user interface. 
These metrics can allow for the horizontal autoscaling of pods based on parameters that are provided by a Red Hat OpenShift Container Platform user. It is important to understand capacity planning when metrics are deployed into an Red Hat OpenShift Container Platform environment.\n\nRed Hat OpenShift Container Platform metrics is composed by the following pods that are running on the Red Hat OpenShift Container Platform environment:\n\n- - Heapster: Heapster scrapes the metrics for CPU, memory, and network usage on every Pod. Then, it exports them into Hawkular Metrics.", - "page_start": 112, - "page_end": 112, - "source_file": "sg248459.pdf" - }, - { - "text": "#### **4.4 OpenShift registry**\n\nOpenShift Container Platform can use any server that implements the container image registry API as a source of images, including the Docker Hub, private registries that are run by third parties, and the integrated OpenShift Container Platform registry.\n\n#### **4.4.1 Integrated OpenShift Container Registry**\n\nOpenShift Container Platform provides an integrated container image registry called OpenShift Container Registry (OCR). This registry that adds the ability to automatically provision new image repositories on demand. This feature provides users with a built-in location for their application builds to push the resulting images.\n\nWhenever a new image is pushed to OCR, the registry notifies OpenShift Container Platform about the new image, passing along all the information about it, such as the namespace, name, and image metadata. Different components of OpenShift Container Platform react to new images, creating builds and deployments.\n\nOCR can also be deployed as a stand-alone component that acts solely as a container image registry, without the build and deployment integration.\n\n#### **4.4.2 Third-party registries**\n\nOpenShift Container Platform can create containers by using images from third-party registries. 
However, these registries do *not* offer the same image notification support as the integrated OpenShift Container Platform registry. In this situation, OpenShift Container Platform fetches tags from the remote registry upon imagestream creation. Refreshing the fetched tags is as simple as running the **oc import-image ** command. When new images are detected, the build that was described in 4.4.1, \"Integrated OpenShift Container Registry\" and deployment reactions occur.\n\n#### **4.5 Managing OpenShift resources**\n\nAll OpenShift resources, images, containers, pods, services, builders, templates, and so on, are stored on etcd and can be managed by the OpenShift CLI, web console, or REST API. These resources also are defined in text files in JSON or YAML format and can be changed by editing those files and shared on an SCM system, such as GIT.\n\nOpenShift can even retrieve these resource definitions directly from an external SCM.", - "page_start": 84, - "page_end": 84, - "source_file": "sg248459.pdf" - }, - { - "text": "#### **HOW DOES IT WORK?**\n\n**When to put my garbage container outside?** The evening before the waste collection day.\n\n**Who is responsible for the maintenance of the containers?** You will have to keep them in a clean working state (periodical washing).\n\n**Container stolen: What to do?** In case of theft, your container will be replaced on presentation of a theft report effected at your local police station.\n\n**Out container = full container** Put your rubbish container out only when full.\n\n**Attention !** Black garbage bags left on the ground will no longer be collected.\n\nPlease be respectful with the agents.\n\n### **HOW TO GET A COMPOST KIT?**\n\n**Buy your own compost kit and get tips for good composting practice.** Only during opening hours every wednesday from 2 pm to 4 pm at the old recycling centre impasse Elie Teyssier-Miramont. (In case of unavailability, please contact the environment department). 
30 minute workshops/awarenessraising sessions are regularly organised (starting at 4pm). It is possible to leave with a composter during these workshops**. Registration and information with the service.\n\n| Compost kit | Plastic | Wood |\n| --- | --- | --- |\n| 300 L | 20 € | 30 € |\n| 400 L | 25 € | 35 € |\n\n* Only payment by cheque made payable to the 'Tresor Public' are accepted\n\n**Specific condition of acquisition apply according to your municipality of residence\n\n| Town | Black container | Yellow container |\n| --- | --- | --- |\n| AGNAC | TUESDAY | THURSDAY |\n| | white weeks | green weeks |\n| ALLEMANS-DU-DROPT | MONDAY | WEDNESDAY |\n| | green weeks | white weeks |\n| ARMILLAC | TUESDAY | THURSDAY |\n| | white weeks | green weeks |\n| BOURGOUGNAGUE | WEDNESDAY | FRIDAY |\n| | green weeks | white weeks |\n| CAMBES | MONDAY | WEDNESDAY |\n| | green weeks | white weeks |\n| LACHAPELLE | MONDAY | THURSDAY |\n| | green weeks | white weeks |\n| LAPERCHE | TUESDAY | WEDNESDAY |\n| | white weeks | green weeks |\n| LA-SAUVETAT-DU-DROPT | TUESDAY | THURSDAY |\n| | white weeks | green weeks |\n| LAUZUN | MONDAY | FRIDAY |\n| | green weeks | white weeks |\n| LAVERGNE | TUESDAY | THURSDAY |\n| | white weeks | green weeks |\n| MIRAMONT-DE-GUYENNE | TUESDAY | THURSDAY |\n| | green weeks | white weeks |\n| MONTIGNAC-DE-LAUZUN | WEDNESDAY | WEDNESDAY |\n| | white weeks | green weeks |\n| MONTIGNAC-TOUPINERIE | TUESDAY | THURSDAY |\n| | white weeks | green weeks |\n| MOUSTIER | WEDNESDAY | WEDNESDAY |\n| | green weeks | white weeks |\n| PEYRIÈRE | MONDAY | THURSDAY |\n| | green weeks | white weeks |\n| PUYSSERAMPION | MONDAY | WEDNESDAY |\n| | green weeks | white weeks |\n| ROUMAGNE | MONDAY | THURSDAY |\n| | white weeks | green weeks |\n| SAINT-COLOMB-DE-LAUZUN | WEDNESDAY | WEDNESDAY |\n| | white weeks | green weeks |\n| SAINT-PARDOUX-ISAAC | MONDAY | FRIDAY |\n| | white weeks | green weeks |\n| SEGALAS | WEDNESDAY | WEDNESDAY |\n| | white weeks | green weeks 
|\n\n#### **MORE QUESTIONS ?**\n\nWebsite: **www.ccpl47.fr** / Section En Pratique > Environnement > Gestion des déchets\n\n**Environnement Service**:\n\n12 rue du Renfort 47410 LAUZUN\n\n**05 53 94 11 23 / secretariat.environnement@ccpl47.fr Composting** : anim.biodechets@ccpl47.fr / 06 33 72 84 18\n\n**Recycling centre access, registration or modification** : iris@ccpl47.fr / 05 53 64 12 26\n\nOn the CCPL website", - "page_start": 3, - "page_end": 3, - "source_file": "BD-EN_calendrier-Lauzun-2024.pdf" - }, - { - "text": "*Figure 2-2 IBM PowerVC*\n\nAround 2011, Container technology started to be a strong player in the cloud arena, which is a method to package an application in a box so it can be run with its dependencies, isolated from other applications. For more information, see 2.3, \"Containers\" on page 19.\n\nA year later, Docker Containers exploded in popularity, but one thing was missing: the thorough view and management of the entire environment.", - "page_start": 28, - "page_end": 28, - "source_file": "sg248459.pdf" - }, - { - "text": "The following timeline highlights the major shifts in the development of container to date (see Figure 2-9):\n\n- - 2000 FreeBSD Jails: FreeBSD Jails enabled Computer systems to be partitioned into multiple servers that were independent subsystems named Jail with unique IP address.\n- - 2001 Linux Vserver: Similar to FreeBSD Jails, Linux also developed a feature for operating system virtualization where a file system, memory, and network can be shared among independent systems.\n- - 2004 Solaris Containers: Solaris Containers combined system resource controls and boundary separation that was provided by zones to take advantage of features, such as snapshots and cloning from ZFS.\n- - 2006 Google process containers: Process Containers was designed for limiting, accounting, and isolating resource usage (CPU, memory, disk I/O, and network) of a collection of processes. 
Later, this was renamed as Control Groups (cgroups) and merged to Linux kernel 2.6.24.\n- - 2008 LXC evolved (Linux Container Group): Linux Containers (LXC) was the first, most complete implementation of Linux container manager. It was implemented in 2008 by using cgroups and Linux namespaces.\n- - 2013 Let Me Contain That For You (LMCTFY): Let Me Contain That For You (LMCTFY) started in 2013 as an open source version of Google's container stack. Applications can be made container aware, which creates and manages their own subcontainers.\n- - 2013 Docker: Docker emerged, which made container service even more popular. Docker and container grew together.\n- - 2016 Security and DevOps: Container security enhanced and DevOps method evolved as most preferred Container Application process.\n\n- -2017 Container becomes more matured with CNCF and Kubernetes.\n*Figure 2-9 Containers timeline*", - "page_start": 36, - "page_end": 36, - "source_file": "sg248459.pdf" - }, - { - "text": "If the RESOURCE_NAME parameter is omitted, all resources of the specified RESOURCE_TYPE are summarized, as shown in Example 6-12.\n\n*Example 6-12 oc get pod*\n\n| # oc get pod | | | | |\n| --- | --- | --- | --- | --- |\n| NAME | READY | STATUS | RESTARTS | AGE |\n| docker-registry-3-4flql | 1/1 | Running | 2 | 1d |\n| router-2-4gnmj | 1/1 | Running | 3 | 1d |\n| router-2-cp5sf | 1/1 | Running | 3 | 1d |\n| router-2-slkjf | 1/1 | Running | 3 | 1d |\n\nUse the **oc types** command for a quick refresher on the concepts of the available RESOURCE_TYPES, as shown in Example 6-13.\n\n*Example 6-13 oc types*\n\n#### **# oc types** Command \"types\" is deprecated, refer to official documentation instead Concepts and Types\n\nKubernetes and OpenShift help developers and operators build, test, and deploy applications in a containerized cloud environment. 
Applications may be composed of all of the components below, although most developers will be concerned with Services, Deployments, and Builds for delivering changes.\n\nConcepts:\n\n- * Containers:\n A definition of how to run one or more processes inside of a portable Linux environment. Containers are started from an Image and are usually isolated from other containers on the same machine.\n\n- * Image:\n A layered Linux filesystem that contains application code, dependencies, and any supporting operating system libraries. An image is identified by a name that can be local to the current cluster or point to a remote Docker registry (a storage server for images).\n\n- * Pods [pod]:\n A set of one or more containers that are deployed onto a Node together and share a unique IP and Volumes (persistent storage). Pods also define the security and runtime policy for each container.\n\n- * Labels:\n Labels are key value pairs that can be assigned to any resource in the system for grouping and selection. Many resources use labels to identify sets of other resources.\n\n- * Volumes:\n Containers are not persistent by default - on restart their contents are cleared. Volumes are mounted filesystems available to Pods and their containers which may be backed by a number of host-local or network attached storage endpoints. The simplest volume type is EmptyDir, which is a temporary directory on a single machine. Administrators may also allow you to request a Persistent Volume that is automatically attached to your pods.\n\n- * Nodes [node]:", - "page_start": 163, - "page_end": 163, - "source_file": "sg248459.pdf" - }, - { - "text": "#### *Ephemeral storage*\n\nContainer images are stored locally on the nodes running Red Hat OpenShift Container Platform pods.\n\nWhen Docker run time is used, the /var/lib/docker mount point is used by active containers and pods. This local storage is where the node maintains a copy of container images that are pulled from a container registry. 
This mount point is managed by docker-storage and it uses the following naming format: /var/lib/docker/overlay2/ and /var/lib/docker/containers/.\n\n#### *Persistent storage*\n\nPersistent Volume Claims (PVC) are used to store the application data. These claims can be added to the environment manually or provisioned dynamically by using a StorageClass object.\n\n#### *Storage classes*\n\nThe StorageClass resource object describes and classifies different types of storage that can be requested. It also provides a means for passing parameters to the backend for dynamically provisioned storage on demand.\n\nStorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can use without needing any intimate knowledge about the underlying storage volume sources. Therefore, the naming of the storage class that is defined in the StorageClass object must be useful in understanding the type of storage it maps, whether that is storage from PowerVC Cinder or from other storage provider.\n\n#### *Persistent Volumes*\n\nPersistentVolumes (PV) objects provide pods with non-ephemeral storage by configuring and encapsulating underlying storage sources. A PersistentVolumeClaim (PVC) abstracts an underlying PV to provide provider-independent storage to OpenShift resources. When successfully fulfilled by the system, a PVC mounts the persistent storage to a specific directory (mountPath) within one or more pods. From the container perspective, the mountPath is connected to the underlying storage mount points by a regular bind mount.\n\n#### *FlexVolumes*\n\nFlexVolume is known as an *out-of-tree plug-in interface* because it is developed outside the main Kubernetes source code. The FlexVolume interface enables users to write their own drivers. 
These drivers can be written in any programming or scripting language.\n\nWhen an application that is running on OpenShift needs a persistent volume, it submits a persistent volume claim to the PowerVC FlexVolume driver. The PowerVC FlexVolume call is translated into a Cinder API call to create a volume. When the volume is ready, it is presented back to OpenShift and attached to the requesting pod.\n\nThe persistent volume claim needs to include only the volume size and access mode. The backend implementation information about how and where the volume is created are handled by PowerVC. The OpenShift API abstracts them from the user that is making the resource claim.\n\n**Note:** For more information about the PowerVC FlexVolume, see this web page.", - "page_start": 111, - "page_end": 111, - "source_file": "sg248459.pdf" - }, - { - "text": "- -Docker Client\nA Docker client is primary interface with which the user can use Docker features, as shown in Figure 2-11. Docker client starts APIs in this process. Developers build applications by using Docker APIs.\n\n*Figure 2-11 Docker orchestration* \n\n#### **2.4 Kubernetes: An open source container orchestration**\n\nThis section describes Kubernetes open source container orchestration.\n\n#### **2.4.1 What is container orchestration?**\n\n*Container orchestration* is the process of organizing properly to achieve the wanted performance. Cloud-based applications are intended to be hosted on several commodity hardware on several hardware environments. These loosely coupled containerized objects must be organized and coordinated to meet functional requirement, such as starting and stopping an application, and grouping and coordinating applications in a cluster. Some of the most popular orchestration services are Apache Mesos, Google Kubernetes, and Docker Swarm.\n\nA container platform that is lead by Docker is used to package applications that were divided into micro services. 
Such discrete services can be hosted on separate containers that are helpful during the continues integration and continues delivery process. Container orchestration is primarily focused on managing the lifecycle of containers for automated deployment, management of nodes, scalability and availability of services based on work load, and networking among distributed containers in large systems.\n\nContainer orchestration configuration is created in YAML or JSON files. Based on the declarative configuration, container tools perform one or more of the following tasks:\n\n- -Fetch required configuration image from repository by way of Docker Hub\n- -Establish networks across container\n- -Allocate storage and space", - "page_start": 39, - "page_end": 39, - "source_file": "sg248459.pdf" - }, - { - "text": "The Docker architecture includes the following components:\n\n- -Docker Server Daemon\nDaemon is the Docker process that runs as background process and listen for API requests. It also manages Dockers objects, such as images, containers, networks, and volumes.\n\n- -Docker Registry\nA Docker registry stores Docker images. Docker Hub is a public registry that anyone can use. Docker is configured to look for images on Docker Hub by default.\n\n- - Docker Objects:\n\t- Images: This template is read-only with instruction to build a Docker container. An image can be layer on another image with specific changes. An images library is available from the Docker registry.\n\nA Dockerfile contains the configuration information that is needed to build and run an image. Based on instructions that are defined in the Dockerfile, layers are created for an image. During the build process, only the changed layer is rebuilt; therefore, Docker remains lightweight, which makes it small and fast.\n\n- Container: A container is an executable instance that is built from the image, which can be started, stopped, moved, or deleted by using Docker API or CLI. 
Containers can be connected to one or many networks. Storage can be added and a new container can be built by using a container.\n- Services: Services allow you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers. Each member of a swarm is a Docker daemon, and the daemons all communicate by using the Docker API.\n- NameSpace: In context of Docker, NameSpace is the technology that provides isolated workspaces for a container (see Table 2-1). Each container encapsulates all its features within the namespace that is associated with that specific container.\n\n| Namespace | Description |\n| --- | --- |\n| PID | Process isolation (PID: Process ID) |\n| NET | Managing network interfaces (NET: Networking) |\n| IPC | Managing access to IPC resources (IPC: InterProcess Communication) |\n| MNT | Managing file system mount points (MNT: Mount) |\n| UTS | Isolating kernel and version identifiers. (UTS: UNIX Timesharing System) |\n\n| Table 2-1 NameSpace |\n| --- |\n\n- Control groups: A control group (cgroup) is the technology that limits an application to a specific set of resources. This feature allows Docker Engine to share available hardware resources to containers and optionally enforce limits and constraints.\n- Union file system: Union file systems (UnionFS) are file systems that operate by creating layers, which makes them lightweight and fast. Docker Engine uses UnionFS to provide the building blocks for containers.\n- Container format: Container format is the wrapper around NameSpaces, control groups, and UnionFS. 
The default container format is libcontainer.", - "page_start": 38, - "page_end": 38, - "source_file": "sg248459.pdf" - }, - { - "text": "Containers include the following key features and benefits:\n\n- - Portability:\n\t- Single executable package with all code, configuration files, dependencies, and required libraries\n\t- Bundle must not include operating system-related files\n\t- Open runtime engine is a prerequisite\n\t- Common bins and libraries can be shared across multiple containers\n- - Agility:\n\t- Container system is managed by Open Container Initiative\n\t- DevOps tools and process are used for rapid code deployment by using continuous integration and continuous deployment (CI/CD)\n\t- Open Source Docker engine works for Linux and Windows platforms\n- - Performance:\n\t- Multiple containers share operating system kernel for lightweight execution mode\n\t- Improves service usage, which results in reduced software license costs\n\t- Container start time is much faster than VM start time\n- - Fault isolation:\n\t- During concurrent execution, each container runs independently. A fault in one container does not affect other container's execution.\n\t- Container engine takes advantage of operating system security isolation technique.\n- - Ease of management:\n\t- Container orchestration manages installation, scalability, availability as defined\n\t- Application version upgrade, monitoring, and debugging managed centrally through container orchestration system\n- - Security:\n\t- Encapsulation and isolation is the first level of security for any containerized application. A rogue application does not affect other applications of the hosting environment\n\t- Container engine inherits default security features from hosting platform\n\t- Namespace provides an isolated view; for example, file system, mount point, network, process ID, and User ID\n\n#### **2.3.2 History of containers**\n\nContainer seems a latest buzzword with cloud technology. 
However, a similar concept was used for the first time as early as 1970 where application code was decoupled from UNIX native system calls.\n\nOvertime, a few more enhancements were made, from stand-alone computer systems to integrated environments. Then, virtualization features evolved with LPAR and workload partition (WPAR) concepts. Although these virtualization features added flexibility, application portability always was a challenge. With containerization, new age software developers received all of the flexibility, portability, and security that they needed.", - "page_start": 35, - "page_end": 35, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed6_cc4.pdf", - "query": "How many people include the Dyspnea study ?", - "target_page": 1, - "target_passage": "This population-based study included 2,857 adults who were experiencing respiratory symptoms.", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "## Take-home Points\n\nStudy Question: How profoundly are adults with undiagnosed respiratory symptoms affected by dyspnea?\n\nResults: In community-based adults with undiagnosed respiratory symptoms, those identified with preserved ratio impaired spirometry experienced the greatest impact of dyspnea, followed by those with undiagnosed asthma or COPD. 
Greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nInterpretation: Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity.\n\nDyspnea refers to a subjective sensation of breathing discomfort.1 In a study involving a community-based population aged > 70 years, the prevalence of dyspnea was found to be 32%.2 Dyspnea can lead to limitations in daily activities, reduced exercise tolerance, and heightened mortality risks.3\n\nDyspnea not only affects individuals with diagnosed respiratory conditions but also poses a significant burden on those with undiagnosed conditions. In a systematic review by Müller et al,4 the combined\n\n#### Study Design and Methods Recruitment of Undiagnosed Cases and Healthy Control Patients\n\nBetween June 2017 and January 2023, adults aged $ 18 years were recruited through a two-step process into the Undiagnosed COPD and Asthma Population (UCAP) study, a multicenter case finding study. Approval for prevalence of dyspnea in the adult general population across 11 studies was estimated to be 10%. Dyspnea can arise from a broad spectrum of underlying factors, including both respiratory and nonrespiratory conditions. Studies have revealed that dyspnea is not solely attributable to respiratory conditions but is also heavily influenced by cardiovascular deconditioning and by nonrespiratory factors, including psychosocial, social, and environmental determinants.5,6\n\nDyspnea is a prevalent symptom with consequences that extend beyond its physiologic implications. A study in European patients with COPD explored the burden of dyspnea and identified potential correlates. 
The study revealed that higher dyspnea impact correlated with lower health-related quality of life, increased work impairment, and a higher frequency of emergency department visits.7\n\nThe three objectives of our study were as follows: (1) to evaluate the impact of dyspnea in adults from the general population who had no prior diagnosis of respiratory disease but who reported having significant respiratory symptoms in the past 6 months; (2) to identify associated risk factors for dyspnea and estimate their influence on the symptom; and (3) to explore the relationship between dyspnea and health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\nthe study was obtained from the research ethics boards of the 17 participating study sites across Canada. Informed, written consent was provided by all study participants.\n\nBoth landlines and cellphones within a 90-minute radius of any of the 17 study sites were dialed randomly. A\n\nDOI: https://doi.org/10.1016/j.chest.2024.07.183\n\nABBREVIATIONS: ASQ = Asthma Screening Questionnaire; BD = bronchodilator; CAT = COPD Assessment Test; PCA = principal component analysis; PRISm = preserved ratio impaired spirometry; SGRQ = St. George's Respiratory Questionnaire\n\nAFFILIATIONS: From The Ottawa Hospital Research Institute (J. B., E. G., K. L. V., G. G. A., S. M., and S. D. A.), University of Ottawa, Ottawa, ON; the Desautels Faculty of Management (G. A. W.), McGill University, Montreal, QC; the Department of Medicine (C. B.), The University of British Columbia, Vancouver, BC; the Centre de recherche (L.-P. B. and A. C.), Institut de cardiologie et de pneumologie de Québec, Université Laval, Quebec, QC; the Cumming School of Medicine (S. K. F.), University of Calgary, Calgary, AB; the Department of Medicine (E. P.), University of Saskatchewan, Regina, SK; the Firestone Institute for Respiratory Health (R. A. 
M.), McMaster University, Hamilton, ON; the Department of Medicine (C. L.), Université de Montreal, Montreal, QC; the Department of Medicine and the Li Ka Shing Knowledge Institute (S. G.), St. Michael's Hospital University of Toronto, Toronto, ON; the Department of Medicine\n\n(P. H.), Dalhousie University, Halifax, NS; the Department of Medicine (I. M. and M. B.), University of Alberta, Edmonton, AB; the Department of Medicine (M. D. L.), Queen's University, Kingston; the Department of Medicine (C. J. L.), University of Western Ontario, London, ON; the Department of Medicine (T. A.), Memorial University, St. John's, NF; the Department of Medicine (N. E.), McGill University, Montreal, QC; the Department of Medicine (M. A.), University of Manitoba, Winnipeg, MN, Canada.\n\nDrs Bierbrier and Gerstein contributed equally to this manuscript.\n\nPart of this work has been presented at the American Thoracic Society Conference, May 17-22, 2024, San Diego, CA.\n\nCORRESPONDENCE TO: Shawn D. Aaron, MD; email: saaron@ohri.ca Copyright 2024 The Author(s). Published by Elsevier Inc under license from the American College of Chest Physicians. This is an open access article under the CC BY license (http://creativecommons.org/ licenses/by/4.0/).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "bronchial challenge testing into a case finding strategy identified asthma in 26% of symptomatic individuals who had normal spirometry and no response to BD.27\n\nIndividuals with undiagnosed respiratory symptoms, determined to have asthma or COPD through spirometry, experience poor health status.28 Therefore, the implementation of known treatment approaches for asthma or COPD is important to improve their conditions.29 In contrast, those with normal spirometry or PRISm face unclear treatment approaches. 
Longacting BD therapy in symptomatic individuals with tobacco exposure with normal spirometry is not effective.30 Weight management programs may be useful for individuals who are obese with PRISm-related dyspnea; however, this awaits definitive clinical trials.31\n\nDyspnea was severe and prevalent within our study group; however, it remained undiagnosed. A study conducted by Stefan et al32 revealed that physicians underestimated their patients' dyspnea 37.9% of the time, whereas nurses underestimated it 3.5% of the time. Moreover, many patients limit their physical activities, which lead them to downplay the extent of their dyspnea.19 Patient underreporting of symptoms, coupled with inadequate physician-led investigations of symptoms, may explain why dyspnea often goes undiagnosed in the population.33\n\nIn conclusion, our study measured dyspnea impact in individuals with no preexisting diagnosis of lung disease who reported respiratory symptoms as part of a purposeful case finding strategy. Individuals with PRISm exhibited the greatest impact of dyspnea, even higher than those newly diagnosed with asthma or COPD. After adjusting for patient factors, comorbidities, pulmonary diseases, and severity of lung physiologic impairment, most of the variability in dyspnea remained unexplained. We also showed that dyspnea was associated with increased health care utilization, impaired quality of life, and work productivity.\n\n## Funding/Support\n\nThis study is supported by the Canadian Institutes of Health Research [FDN Grant 154322].\n\n# Financial/Nonfinancial Disclosures\n\nNone declared.\n\n# Acknowledgments\n\nAuthor contributions: S. D. A. and G. A. W. contributed to conception and design. J. B., E. G., G. A. W., K. L. V., and S. D. A. contributed to analysis and interpretation. J. B., E. G., G. A. W., K. L. V., S. D. A., C. B., C. L., L.-P. B., A. C., E. P., S. K. F., S. G., R. A. M., I. M., M. B., P. H., M. D. L., M. A., C. J. L., T. A., N. E., G. G. A., and S. M. 
contributed to drafting the manuscript for important intellectual content. All authors had access to and participated in the interpretation of the data and provided input into the preparation and submission of the manuscript. The authors vouch for the accuracy and completeness of the data.\n\nRole of sponsors: The sponsor had no role in the design of the study, the collection and analysis of the data, or the preparation of the manuscript.\n\nOther contributions: We thank the following individuals from the Canadian study sites: Ottawa Hospital Research Institute, Ottawa, Ontario: Taylor Poulin; Susan Deveau, RRT; Victoria Thompson; Meredith McCleery; Angelina Tohme; Vicky Panteleakos, RRT; Geneviève Longtin, RRT; Joanne Cassidy, RRT; Amanda Bergeron, MSc; Jennifer Biggs, RN; Jessica Bergeron; and Elisabet White; Vancouver General Hospital, Vancouver, British Columbia: Shelley Abercromby, BSc; Jana Caine; David Savage; Natasha Verzosa; Ravneet Mahal; and Mary Justine Angeles; Queen Elizabeth II Health Sciences Centre, Halifax, NS: Scott Fulton, RRT; Hôpital du Sacré Coeur de Montréal, Montréal, QC: Simone Chaboillez, MT; and Meliza Benabdallah; St. Joseph's Hamilton, Hamilton, ON: Liz Johnson; St. Boniface Hospital, Winnipeg, MB: Cheryl Noble, RN; Institut Universitaire de Cardiologie et de Pneumologie de Québec-Université Laval, Québec, QC: Johane Lepage, BSc; Joanne Milot, RN; and Christiane Balizet, RN; University of Calgary, Calgary, AB: Lisette Machado, MD; and Curtis Dumonceaux, BSc; University of Alberta, Edmonton, AB: Miranda Bowen, RRT; Fay Hartt; Angie Hillaby, RRT; and Amy Haartsma, RRT; St. 
Michael's Hospital, Toronto, ON: Stephanie Segovia, PhD; and Carolyn Spiegel-Feld; Queen's University Kingston General Hospital, Kingston, ON: Ann Taite, BSc; Alison Morra, BScN; Emma Bullock, HBSc; and Taylar Wall, RRT; University of Saskatchewan Royal University Hospital, Saskatoon, SK: Nancy Zacher; Janet Baran, RN; and Yessica Lopez, BA; London Health Sciences Centre - Victoria Hospital, London, ON: Katie Maguire; Heba Almadhoun; and Robert Campbell-Pereira, BSc; St. Clare's Mercy Hospital, St John's, NL: Sarah Anthony, BNRN; and Tanya Nolan, BNRN; McGill University Health Centre, Montreal, QC: Francine Noel; Royal Victoria Regional Health Centre, Barrie, ON: Masoud Mahdavian; and Ashley Brown, RRT; and Michael Garron Hospital, Toronto, ON: Ian Fraser; Han Byul (Liz) Lee; and Yuna Lee, BA. We would also thank Dong Vo We (data manager, Ottawa Hospital Research Institute, Ottawa, ON). We also thank the thousands of study participants who gave their time and came in for the study visits. We also thank ASDE Survey Sampler, Inc (Gatineau, QC, Canada) for organizing the random digit dialing.\n\n# References\n\n- 1. Parshall MB, Schwarthzstein RM, Adams L, et al. An Official American Thoracic Society Statement: update on the mechanisms, assessment, and management of dyspnea. Am J Respir Crit Care Med. 2012;185:435-452.\n- 2. Ho SF, O'Mahony MS, Steward JA, et al. Dyspnoea and quality of life in older people at home. Age Ageing. 2001;30: 155-159.\n- 3. Laviolette L, Laveneziana P. Dyspnoea: a multidimensional and multidisciplinary approach. Eur Respir J. 2014;43: 1750-1762.\n- 4. Müller A, Mraz T, Wouters EFM, et al. Prevalence of dyspnea in general adult populations: a systematic review and meta-analysis. Respir Med. 
2023;218: 107379.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "# TABLE 8 ] Unadjusted and Adjusted Dyspnea Associations With Health Care Use\n\n| | Unadjusted | | Adjusted | |\n| --- | --- | --- | --- | --- |\n| Measure | Dyspnea OR (95% CI) | Value P | Dyspnea OR (95% CI) | Value P |\n| In the past 12 mo, did you visit your general | 1.011 (1.007-1.014) | < .001 | 1.011 (1.007-1.014) | < .001 |\n| practitioner or a nurse practitioner or another physician at a walk-in clinic for any breathing | | | | |\n| problems? | | | | |\n| In the past 12 mo, did you visit an emergency | 1.015 (1.009-1.021) | < .001 | 1.015 (1.009-1.022) | < .001 |\n| department for any breathing problems? | | | | |\n| In the past 12 mo, were you hospitalized for any | 1.021 (1.006-1.037) | .006 | 1.023 (1.007-1.039) | .005 |\n| breathing problems or respiratory illness? | | | | |\n\nData are presented as OR (95% CI) with Pvalues. Adjusted values are adjusted for age, sex, and BMI.\n\noutpatients with cardiorespiratory disease25 and the Dyspnea-12 in patients with asthma26 and found that the affective aspect of dyspnea can significantly influence the impact of dyspnea on health status, irrespective of the intensity of breathlessness.\n\nIn those with PRISm, there was a strong, positive association between higher values for the FEV1/FVC ratio and dyspnea. For the PRISm group, a higher FEV1/FVC ratio may reflect diminished lung compliance due to interstitial lung disease and/or respiratory system restriction due to obesity, which could contribute to worse dyspnea. Conversely, the association of dyspnea with the FEV1/FVC ratio was in the opposite direction for those with asthma or COPD, and a lower FEV1/FVC ratio correlated with worse dyspnea, as expected.\n\nOur study complements the literature by focusing on adults with undiagnosed respiratory symptoms who were randomly selected and recruited through active case finding in the community. 
This increases the generalizability of our results to a broader population. Our dyspnea questions were derived from widely used and validated respiratory health questionnaires, and our dyspnea assessment measure is a weighted average of responses to these validated questions. Consequently, the measure has an immediate interpretation in terms of the lived day-to-day experience of individuals.\n\nOur study has limitations. We did not undertake reliability/reproducibility testing of our questionnaire. The dyspnea impact assessment score was statistically associated with increased health care utilization, lower quality of life, and reduced work productivity; therefore, by virtue of this analysis, our questionnaire has construct validity. However, further attempts at external validation of the questionnaire using an independent data set would be important. Health care utilization during the preceding 12 months was assessed on entry into the study, and there is potential for impaired recall of events. Our study may have missed asthma in some participants because bronchial challenge testing was not conducted on those who tested negative for airflow obstruction or BD responsiveness. A previous study showed that an additional diagnostic step incorporating\n\n| TABLE 9 ] Unadjusted and Adjusted Dyspnea Associations With Work Productivity (WPAI) |\n| --- |\n\n| | Unadjusted | | Adjusted | |\n| --- | --- | --- | --- | --- |\n| Measure | Dyspnea OR (95% CI) | P Value | Dyspnea OR (95% CI) | P Value |\n| Are you currently employed | 0.995 (0.992-0.998) | .002 | 0.993 (0.990-0.997) | < .001 |\n| (working for pay)? 
| | | | |\n| | Dyspnea Coefficient | | Dyspnea Coefficient | |\n| Measurea | (95% CI) | Value P | (95% CI) | Value P |\n| Absenteeism | 0.061 (0.040-0.083) | <.001 | 0.066 (0.044-0.089) | < .001 |\n| Presenteeism | 0.334 (0.293-0.375) | <.001 | 0.349 (0.306-0.392) | < .001 |\n| Work productivity loss | 0.368 (0.323-0.413) | <.001 | 0.383 (0.336-0.430) | < .001 |\n| Activity impairment | 0.503 (0.463-0.544) | <.001 | 0.501 (0.458-0.544) | < .001 |\n\nORs and regression coefficients are presented with 95% CIs and P values. Adjusted coefficients are adjusted for age, sex, and BMI. WPAI ¼ Work Productivity and Activity Impairment questionnaire.\n\na Measures calculated from WPAI questions.21", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "#### Risk Factors Associated With Dyspnea\n\nPatient-related risk factors were considered first, and results of spirometry considered afterward. The spirometry risk factors chosen for the second stage analysis included the spirometry-based diagnosis of the patient (asthma, COPD, PRISm, or normal) and lung function results indicative of the severity of physiologic impairment. Severity was gauged by assessing three principal lung function measures: (1) post-BD FEV1 % predicted, (2) post-BD FEV1/FVC ratio, and (3) percentage reversal of FEV1 with BD.\n\n#### Dyspnea Impact and Health Care Use, Quality of Life, and Work Productivity\n\nThe impact of dyspnea and its associations with health care use, quality of life, and work productivity were examined. Health care utilization was assessed through selfreported data. Quality of life was assessed using the 36- Item Short Form Health Survey questionnaire, where higher scores indicate better health status. 
Work productivity was assessed using the Work Productivity and Activity Impairment questionnaire, where higher scores\n\n#### Results\n\nFigure 1 illustrates the results of the case finding approach, including the enrollment of the control group. Among 5,631 potentially eligible participants, 1,359\n\nindicate greater impairment in work productivity and daily activities.\n\n#### Statistical Analysis\n\nBox plots were used to compare distribution patterns of dyspnea impact assessments among the disease groups. Pairwise comparison tests were conducted to evaluate mean dyspnea differences between groups. Multiple linear regression analysis was used to measure contributions to variability of dyspnea by selected patient-specific risk factors, spirometry disease classification, and key lung function measures. The selected sets of risk factors were evaluated using successive regression analyses. Analysis of variance sums of squares from the successive regression analyses provided the cumulative percentage contributions to variability of dyspnea. Simple, multiple, and logistic regression analyses were used to study associations between dyspnea and health care utilization, quality of life, and work productivity outcomes. All statistical analyses were done using STATA 16 statistical software (StataCorp).\n\nparticipants (24%) did not meet the threshold of $ 6 points on the ASQ or $ 20 points on the COPD-Diagnostic Questionnaire and were thus excluded, leaving 4,272 individuals deemed eligible for spirometry.\n\nFigure 1 – Study flow diagram demonstrating the case finding and control group recruitment and allocation. 
ASQ ¼ Asthma Screening Questionnaire; COPD-DQ¼ COPD Diagnostic Questionnaire; CF ¼ cystic fibrosis; MI ¼ myocardial infarction; PRISM ¼ preserved ratio impaired spirometry.", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "# Impact of Dyspnea on Adults With Respiratory Symptoms Without a Defined Diagnosis\n\nJared Bierbrier, BSc; Emily Gerstein; George A. Whitmore, PhD; Katherine L. Vandemheen, MScN; Celine Bergeron, MD; Louis-Philippe Boulet, MD; Andreanne Cote, MD; Stephen K. Field, MD; Erika Penz, MD; R. Andrew McIvor, MD; Catherine Lemière, MD; Samir Gupta, MD; Paul Hernandez, MD; Irvin Mayers, MD; Mohit Bhutani, MD; M. Diane Lougheed, MD; Christopher J. Licskai, MD; Tanweer Azher, MD; Nicole Ezer, MD; Martha Ainslie, MD; Gonzalo G. Alvarez, MD; Sunita Mulpuru, MD; and Shawn D. Aaron, MD\n\n> BACKGROUND: We investigated dyspnea; its associated risk factors; and its impact on health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\n> RESEARCH QUESTION: What is the impact of dyspnea in adults with undiagnosed respiratory symptoms?\n\n> STUDY DESIGN AND METHODS: This population-based study included 2,857 adults who were experiencing respiratory symptoms. These individuals had not been previously diagnosed with any lung conditions and were recruited from 17 Canadian centers using random digit dialing. Each participant underwent spirometry testing both before and after using a bronchodilator to determine if they met the diagnostic criteria for COPD, asthma, or preserved ratio impaired spirometry (PRISm), or if their spirometry results were normal. An agematched control group (n ¼ 231) was similarly recruited using random digit dialing. A dyspnea impact assessment score from 0 to 100 was produced using questions from the COPD Assessment Test and St. 
George's Respiratory questionnaire.\n\n> RESULTS: Individuals with PRISm (n ¼ 172) reported more impactful dyspnea (mean score, 63.0; 95% CI, 59.5-66.4) than those with undiagnosed asthma (n ¼ 265; mean score, 56.6; 95% CI, 53.9-59.3) or undiagnosed COPD (n ¼ 330; mean score, 57.5; 95% CI, 55.1-59.9). All groups reported significantly more impactful dyspnea than the control group (mean score, 13.8; 95% CI, 11.8-15.7). Patient-specific risk factors including age, sex, BMI, smoking, and comorbidities explained 20.6% of the variation in dyspnea. An additional 12.4% of the variation was explained by disease classification and another 1.7% by the severity of lung function impairment assessed with spirometry. After adjusting for age, sex, and BMI, greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\n> INTERPRETATION: Our findings showed that in community-based adults with undiagnosed respiratory symptoms, those identified with PRISm experienced the greatest impact of dyspnea. Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity. 
CHEST 2024; 166(6):1296-1308\n\nKEY WORDS: asthma; case finding; COPD; dyspnea\n\nFOR EDITORIAL COMMENT, SEE PAGE 1259", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "| Disease Group | Reversibility of FEV1, % | | Post-BD FEV1/FVC Ratio | | Post-BD FEV1 % predicted | Overall Value P |\n| --- | --- | --- | --- | --- | --- | --- |\n| Control | 0.163 (P ¼ .47) | | P 0.274 ( | [ .05) | 0.090 (P ¼ .17) | .096 |\n| Normal spirometry | 0.186 (P ¼ .16) | | 0.240 ( P | [ .005) | P < .001) 0.131 ( | < .001 |\n| Asthma | 0.545 ( P | [ .01) | 0.107 (P ¼ .58) | | 0.158 (P ¼ .08) | .009 |\n| COPD | P 0.392 ( | [ .002) | P 0.307 ( | [ .05) | 0.075 (P ¼ .37) | < .001 |\n| PRISm | 0.290 (P ¼ .39) | | 0.854 ( P | [ .002) | P [ .004) 0.650 ( | < .001 |\n\nTABLE 6 ] Dyspnea Regressed on Lung Function Variables Representing Severity of Impairment\n\nDyspnea regressed on lung function variables representing severity of impairment, after removing contributions of patient-specific factors and spirometry disease group Tables 4 and 5 (1.7% of variability explained). Boldface indicates statitistical significance. BD ¼ bronchodilator; PRISm ¼ preserved ratio impaired spirometry.\n\nApproximately 65% of the variability in dyspnea remained unexplained by the factors examined in our study. Most individuals in our study showed normal spirometry results but still carried a substantial burden of dyspnea, an inconsistency that needs explanation. Several factors not included in our analysis may have contributed to the unexplained variation. 
Environmental factors (eg, air pollution, allergen exposure, seasonal variations in symptoms) are potential contributors to this unexplained variability.22 Genetic predispositions could also play a significant role, as suggested by a study that revealed that parents with dyspnea were 1.8 times more likely to have offspring with dyspnea.23 Additionally, fitness could be a contributing factor, especially in individuals with undiagnosed PRISm, asthma, or COPD who may restrict their activities to avoid dyspnea, and hence become deconditioned.6\n\nThere were significant but modest differences in mean dyspnea levels across the 17 study sites (data not shown), which are not explained by the risk factors we accounted for in our study. This finding is not surprising because some of the potential contributing factors previously mentioned and other site-specific factors\n\n(eg, climate, air quality/industrialization, socioeconomic status) of the catchment population tend to vary across study sites.\n\nDyspnea is a complex, subjective symptom that is modified by nonrespiratory factors including psychosocial, social, and environmental influences.5 Interindividual variability in the perception of dyspnea, influenced by these nonrespiratory factors, may play an important role. A study conducted by Ziegler et al24 assessed the perception of dyspnea in 42 healthy individuals using a standardized inspiratory resistive loading stimulus. The study used the modified Borg scale to measure dyspnea perception levels. Among the participants subjected to the same inspiratory resistive load, 31%, 45%, and 24% of participants classified their level of dyspnea as low, intermediate, and high, respectively. The study revealed that differences between individuals contribute considerable variability to the perception of dyspnea, even among healthy participants.\n\nThe affective dimension of dyspnea can be captured using additional questionnaires (eg, Multidimensional Dyspnea Profile, Dyspnea-12). 
Studies have explored the use of the Multidimensional Dyspnea Profile in\n\n| TABLE 7 ] Unadjusted and Adjusted Dyspnea Associations With Quality of Life (SF-36) |\n| --- |\n\n| | Unadjusted | | Adjusted | |\n| --- | --- | --- | --- | --- |\n| Measure | Dyspnea Coefficient (95% CI) | Value P | Dyspnea Coefficient (95% CI) | Value P |\n| Physical functioning | 0.693 (0.718 to 0.668) | < .001 | 0.655 (0.680 to 0.630) | < .001 |\n| Physical health limitations | 0.634 (0.666 to 0.603) | < .001 | 0.628 (0.661 to 0.595) | < .001 |\n| Emotional problems | 0.403 (0.438 to 0.369) | < .001 | 0.407 (0.443 to 0.370) | < .001 |\n| Energy/fatigue | 0.454 (0.479 to 0.428) | < .001 | 0.452 (0.479 to 0.425) | < .001 |\n| Emotional well-being | 0.230 (0.256 to 0.204) | < .001 | 0.239 (0.266 to 0.213) | < .001 |\n| Social functioning | 0.433 (0.466 to 0.399) | < .001 | 0.434 (0.469 to 0.399) | < .001 |\n| Pain | 0.410 (0.444 to 0.377) | < .001 | 0.387 (0.423 to 0.352) | < .001 |\n| General health | 0.390 (0.416 to 0.364) | < .001 | 0.382 (0.409 to 0.355) | < .001 |\n| Total score | 0.485 (0.504 to 0.467) | < .001 | 0.473 (0.493 to 0.454) | < .001 |\n\nAdjusted coefficients are adjusted for age, sex, and BMI. 
Regression coefficients are presented with 95% CIs and Pvalues.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "| Risk Factor | Regression Coefficient | P Value |\n| --- | --- | --- |\n| Age | 0.0909 | .005 |\n| Female | 8.217 | < .001 |\n| BMI | 0.899 | < .001 |\n| Household income < CAD $30,000 | 1.420 | .40 |\n| Household income $ CAD $30,000 | 2.149 | .07 |\n| Smoking history, pack-y | 0.144 | < .001 |\n| Smoking exposure | 5.123 | < .001 |\n| Occupational exposure | 0.00975 | < .001 |\n| Congestive heart failure | 10.119 | .004 |\n| Coronary artery disease | 4.813 | .001 |\n| Depression/anxiety | 6.892 | < .001 |\n| Diabetes mellitus | 1.627 | .22 |\n| Hypertension | 3.433 | < .001 |\n| Anemia | 1.738 | .15 |\n| Cancer | 0.952 | .49 |\n| GERD | 4.663 | < .001 |\n| Liver disease | 1.081 | .61 |\n| Renal disease | 2.073 | .32 |\n| Stroke | 8.463 | < .001 |\n\nTABLE 4 ] Sequential Regression Analyses of Risk Factors Contributing to Variability in Dyspnea: Dyspnea Regressed on Patient-Specific Risk Factors (20.6% of Variability Explained)\n\nBoldface indicates statitistical significance. 
GERD¼ gastroesophageal reflux disease.\n\n1.011; P < .001 for general practitioner visits; OR, 1.015; P < .001 for emergency department visits; and OR, 1.023, P ¼ .005 for hospitalization for respiratory illness) (Table 8).\n\nAfter adjusting for age, sex, and BMI, dyspnea was associated with a reduced likelihood of current employment (OR, 0.993; P < .001), increased absenteeism (coefficient, 0.066; P < .001), increased presenteeism (coefficient, 0.349; P < .001), higher work\n\nTABLE 5 ] Dyspnea Regressed on Spirometry Disease Group\n\n| Disease Group | Regression Coefficient | Value P |\n| --- | --- | --- |\n| Control | 31.2 | < .001 |\n| Normal spirometrya | NA | NA |\n| Asthma | 4.6 | .001 |\n| COPD | 3.8 | .003 |\n| PRISm | 5.5 | .001 |\n| Constant | 51.9 | NA |\n\nDyspnea regressed on spirometry disease group, after removing contributions from subject-specific factors in Table 4 (12.4% of variability explained). Boldface indicates statitistical significance. NA ¼ not applicable; PRISm ¼ preserved ratio impaired spirometry. a Normal spirometry group is the reference category.\n\nproductivity loss (coefficient, 0.383; P < .001), and greater activity impairment (coefficient, 0.501; P < .001), as measured by the Work Productivity and Activity Impairment questionnaire21 (Table 9).\n\n### Discussion\n\nOur study explored dyspnea in community-based adults with undiagnosed respiratory symptoms identified via case finding. Surprisingly, we found that the dyspnea experienced by those with PRISm had a greater impact on their activities and health status than those with newly diagnosed COPD or asthma.\n\nThe prevalence of individuals who were obese and morbidly obese in the PRISm group partially explains the between-group difference in dyspnea. 
The excess dyspnea seen in the PRISm group when compared with the normal spirometry group is partly explained by patient-specific risk factors, including BMI, which shrink the mean dyspnea differential between the groups from 11.2 to 5.5 points (Tables 3-6). The remaining 5.5 point difference indicates that PRISm patients have excess dyspnea relative to symptomatic individuals with normal spirometry for additional reasons other than obesity.", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "prerecorded message then inquired whether any household member was ≥ 18 years of age and had experienced respiratory symptoms (eg, shortness of breath, wheezing, increased mucus or sputum, prolonged cough) within the past 6 months. Households with affirmative responses were subsequently contacted by the local study coordinator for a follow-up call. The household member reporting respiratory symptoms was verbally consented and screened for eligibility to participate in the study over the telephone.8,9\n\nExclusion criteria included the following: (1) a history of diagnosis of lung or airway disease, (2) use of respiratory inhalers aside from as-needed salbutamol, (3) contraindications for spirometry (eg, occurrences of myocardial infarction, stroke, aortic or cerebral aneurysm, eye surgery, detached retina within the last 3 months), (4) inability or refusal to provide informed consent, (5) being in the third trimester of pregnancy, and (6) being < 18 years of age.\n\nEach participant completed the Asthma Screening Questionnaire (ASQ)10 via telephone. 
Individuals aged ≥ 60 years, and those aged < 60 years who scored < 6 points on the ASQ, also completed the COPD-Diagnostic Questionnaire.11,12 Participants scoring ≥ 6 points on the ASQ or ≥ 20 points on the COPD-Diagnostic Questionnaire were invited to the study site for pre- and postbronchodilator (BD) spirometry.\n\nA control group without respiratory symptoms was selected randomly using identical random digit dialing methods. Control patients reported no respiratory symptoms in the preceding 6 months and obtained a score of 0 on the ASQ. Participants were recruited as control patients if they could be matched with an individual from the undiagnosed group based on age (± 5 years) and sex. This matching process aimed to have similar demographic profiles between the control group and the newly found cases. This matching was implemented solely to ensure demographic comparability across the study groups and not for pairing patients for statistical analysis purposes.\n\nAll participants filled out the COPD Assessment Test (CAT) questionnaire. Elevated CAT scores indicate a greater burden of respiratory symptoms impacting daily activities and health status.13 The St. George's Respiratory Questionnaire (SGRQ)14-16 was used to assess respiratory disease-related quality of life. Higher SGRQ scores indicate poorer health status. Both the CAT and SGRQ questionnaires were completed prior to spirometry to avoid influencing patients' perceptions of their dyspnea.\n\n### Classification of Undiagnosed Cases\n\nCertified study personnel administered spirometry tests before and after BD use. 
Participants showing an increase of at least 12% and 200 mL in their FEV1 after receiving 400 µg of salbutamol were classified as having spirometry indicative of asthma.17 Those whose post-BD ratio of FEV1/FVC fell below the lower 95% confidence limit (ie, FEV1/FVC < lower limit of normal) were classified as having spirometry indicative of COPD.18 Participants meeting the criteria for both conditions were labeled as having COPD. Those with a post-BD FEV1 < 80% of the predicted normal and a post-BD FEV1/FVC ratio > 0.70 were classified as having spirometry indicative of preserved ratio impaired spirometry (PRISm). PRISm was defined based on post-BD spirometry values for a more specific classification.19 Participants not meeting criteria for asthma, COPD, or PRISm were labeled as having normal spirometry.\n\n### Assessment of the Impact of Participants' Dyspnea\n\nAlthough neither the CAT nor the SGRQ are dyspnea-specific tools, both are recommended by the Global Initiative for Chronic Obstructive Lung Disease to evaluate symptoms, including dyspnea,20 and both yield a richer assessment of dyspnea than the modified Medical Research Council breathlessness scale.20 Fifteen questions were taken from the CAT and SGRQ questionnaires that referred to individuals' experiences with dyspnea, and a composite measure of dyspnea impact using a weighted sum of the responses to the 15 questions was constructed. Questions were coded so that larger values indicate more impactful dyspnea. Weights used for question responses in calculating the dyspnea impact assessment measure were those of the first component of a principal component analysis (PCA) based on the covariance matrix of question responses. Questions with multiple responses and ordinal structure are individually more informative and thus were accorded higher weight than individual true-false questions. 
No additional PCA component was anticipated a priori to be material for our investigation, and an eigenvalue analysis of the PCA was conducted to verify this assumption.\n\nThe composite dyspnea impact measure was scaled so its minimum value was 0 if the response to each of the 15 questions was 0, and the maximum value was scaled to 100 if the individual responses for all 15 questions represented the most severe dyspnea response.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "- 5. Nishino T. Dyspnoea: underlying mechanisms and treatment. Br J Anaesth. 2011;106:463-474.\n- 6. Neder J, Berton D, Müller P, et al. Ventilatory inefficiency and exertional dyspnea in early chronic obstructive pulmonary disease. Ann Am Thorac Soc. 2017;14(suppl_1): S22-S29.\n- 7. Gruenberger JB, Vietri J, Keininger DL, Mahler DA. Greater dyspnea is associated with lower health-related quality of life among European patients with COPD. Int J Chron Obstruct Pulmon Dis. 2017;12: 937-944.\n- 8. Preteroti M, Whitmore GA, Vandemheen KL, et al. Population-based case-finding to identify subjects with undiagnosed asthma or COPD. Eur Respir J. 2020;55:2000024.\n- 9. Huynh C, Whitmore GA, Vandemheen KL, et al. Derivation and validation of the UCAP-Q case-finding questionnaire to detect undiagnosed asthma and COPD. Eur Respir J. 2022;60(3):2103243.\n- 10. Shin B, Cole SL, Park SJ, et al. A new symptom-based questionnaire for predicting the presence of asthma. J Investig Allergol Clin Immunol. 2010;20: 27-34.\n- 11. Price DB, Tinkelman DG, Nordyke RJ, et al. Scoring system and clinical application of COPD diagnostic questionnaires. Chest. 2006;129: 1531-1539.\n- 12. Price DB, Tinkelman DG, Halbert RJ, et al. Symptom-based questionnaire for identifying COPD in smokers. Respiration. 2006;73:285-295.\n- 13. Jones PW, Harding G, Berry P, et al. Development and first validation of the COPD Assessment Test. Eur Respir J. 2009;34:648-654.\n- 14. Jones PW. 
Quality of life measurement for patients with diseases of the airways. Thorax. 1991;46:676-682.\n- 15. Jones PW, Quirk FH, Baveystock CM. The St George's Respiratory Questionnaire. Respir Med. 1991;85:25-31.\n- 16. Jones PW. St George's Respiratory Questionnaire: MCID. J Chronic Obstr Pulm Dis. 2005;2:75-79.\n- 17. Global Initiative for Asthma. Global strategy for asthma management and prevention. Global Initiative for Asthma website. Accessed July 30, 2023. https://ginasthma.org/wp-content/uploads/2023/07/GINA-2023-Full-report-23_07_06-WMS.pdf\n- 18. Global Initiative for Chronic Obstructive Lung Disease. Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease. Global Initiative for Chronic Obstructive Lung Disease website. Accessed July 30, 2023. https://goldcopd.org/wp-content/uploads/2023/03/GOLD-2023-ver-1.3-17Feb2023_WMV.pdf\n- 19. Magner KMA, Cherian M, Whitmore GA, et al. Assessment of preserved ratio impaired spirometry (PRISm) using pre and post bronchodilator spirometry in a randomly-sampled symptomatic cohort. Am J Resp Crit Care Med. 2023;208(10): 1129-1131.\n- 20. Hanania NA, O'Donnell DE. Activity-related dyspnea in chronic obstructive pulmonary disease: physical and psychological consequences, unmet needs, and future directions. Int J Chron Obstruct Pulmon Dis. 2019;14: 1127-1138.\n- 21. Reilly Associates. WPAI scoring. Reilly Associates website. Accessed May 1, 2024. http://www.reillyassociates.net/wpai_scoring.html\n- 22. Carlsen HK, Haga SL, Olsson D, et al. Birch pollen, air pollution and their interactive effects on airway symptoms and peak expiratory flow in allergic asthma during pollen season – a panel study in Northern and Southern Sweden. Environ Health. 2022;21:63.\n- 23. Ekström M, Johannessen A, Abramson MJ, et al. Breathlessness across generations: results from the RHINESSA generation study. Thorax. 2022;77(2): 172-177.\n- 24. 
Ziegler B, Fernandes AK, Sanches PR, Konzen GL, Dalcin Pde T. Variability of dyspnea perception in healthy subjects assessed through inspiratory resistive loading. J Bras Pneumol. 2015;41(2): 143-150.\n\n- 25. Ekström M, Bornefalk H, Sköld M, et al. Validation of the Swedish Multidimensional Dyspnea Profile (MDP) in outpatients with cardiorespiratory disease. BMJ Open Respir Res. 2019;6: e000381.\n- 26. Yorke J, Russell AM, Swigris J, et al. Assessment of dyspnea in asthma: validation of The Dyspnea-12. J Asthma. 2011;48(6):602-608.\n- 27. Boulet LP, Boulay ME, Cote A, et al. Airway inflammation and hyperresponsiveness in subjects with respiratory symptoms and normal spirometry. Eur Respir J. 2023;61(3): 2201194.\n- 28. Gerstein E, Bierbrier J, Whitmore GA, et al. Impact of undiagnosed chronic obstructive pulmonary disease and asthma on symptoms, quality of life, healthcare use, and work productivity. Am J Respir Crit Care Med. 2023;208(12):1271-1282.\n- 29. Aaron SD, Vandemheen K, Whitmore GA, et al. Early diagnosis and treatment of COPD and asthma: a randomized, controlled trial. N Engl J Med. 2024;390(22):2061-2073.\n- 30. Han MK, Ye W, Wang D, et al. Bronchodilators in tobacco-exposed persons with symptoms and preserved lung function. N Engl J Med. 2022;387(13): 1173-1184.\n- 31. Marott JL, Ingebrigtsen TS, Çolak Y, et al. Impact of the metabolic syndrome on cardiopulmonary morbidity and mortality in individuals with lung function impairment: a prospective cohort study of the Danish general population. Lancet Reg Health Eur. 2023;35:100759.\n- 32. Stefan MS, Priya A, Martin B, et al. How well do patients and providers agree on the severity of dyspnea? J Hosp Med. 2016;11(10):701-707.\n- 33. Cherian M, Magner KMA, Whitmore GA, et al. Patient and physician factors associated with symptomatic undiagnosed asthma or COPD. Eur Respir J. 
2023;61(2): 2201721.", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "Even in the **short period between 2013 and 2018** (the period covered by these pilot statistics) the data show an overall decline and a decline of several relevant occupational diseases. The strongest decrease — practically a halving — can be seen for hearing impairments (diseases of the inner ear). Pneumoconiosis, mesothelioma and selected occupational cancers went down between 7% and 14%. **Asthma and some recognised MSDs** are more or less stagnating, probably due to unchanged exposure to biological or chemical substances and no change regarding the health outcomes of ergonomic working conditions.\n\nIf work is **one of some** causative factors, a clear assignment of work to a health outcome is complex. Moreover, in many cases a quite **long observation period** is necessary simply due to the **latency time between exposure at work, outbreak and detection of a disease**, which is obviously very different from the clear and immediate consequence of an accident at work.\n\nThe detection of a disease and the correlation between work and this disease depends highly on the **monitoring capacities of the health system and its ability, tradition and standards to connect diseases and work-related causes**. In a study on 'Asbestos‐related occupational diseases in Central and East European Countries' the authors refer to different policies for identifying workers formerly exposed to asbestos and conclude:\n\n*'Consequently, large differences are observed from one country to another regarding the number of recognised asbestos-related cases. In Slovenia, for example, the annual asbestosis rate (cases of asbestosis/population) amounts to 14.9, in Croatia 5.3, and in Poland 2.1. 
Moreover, in Estonia, the incidence of asbestosis is unknown as there is no systematic collection of data.'*181\n\nFor example, until now very few occupational diseases have been recognised as outcomes of psychosocial risks at work. The ILO proposes in its 'List of Occupational Diseases Recommendation' a large number of very specific and 'classic' occupational diseases — a very broad definition of *'Mental and behavioural disorders'* but leaving the responsibility to science and to 'national conditions'. 182 Similarly, the development of the European Schedule of Occupational Diseases (ESOD) aims to improve knowledge, step up prevention and provide assistance in linking occupational activities and diseases.", - "page_start": 74, - "page_end": 74, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "CompostGuide.pdf", - "query": "Can I put my plants directly on my compost ?", - "target_page": 2, - "target_passage": "Don’t\tput\tplants\tinto\t100%\tcompost.\t\tMix\t\t\t\t\t\t\t\t\t compost\tthoroughly\tinto\texisting\tsoil\tbefore\t\t\t planting.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## Compost Questions and Answers\n\n#### **What is compost?**\n\nCompost is a natural humus-like soil amendment that results from the controlled aerobic (with oxygen) decomposition of organic materials. Compost is not soil – it should be mixed with soil. It is not fertilizer, although it contains many slowly released nutrients.\n\n#### **What materials (\"feedstocks\") are used to make compost?**\n\nCompost facilities in Washington recycle a variety of organic materials, including yard debris, food scraps, manure, biosolids, forest residuals like sawdust and bark, construction wood, and agricultural residues. All of these materials can be used to produce high quality compost. 
Your supplier can tell you which materials they compost.\n\n#### **How do I know I'm getting safe, quality compost?**\n\nFortunately, in Washington we have strict permitting and production standards for compost facilities that include both time and temperature requirements and contaminant limits.\n\n#### **What about weed seeds, plant diseases or pesticide residues?**\n\nThe controlled time, aeration, and temperature process required in Washington has been shown to kill weed seeds and plant diseases. That same process breaks down most pesticide residues. There are a few agricultural pesticides that are not easily broken down, and permitted Washington compost manufacturers carefully watch their feedstocks to keep those materials out of the composting process.\n\n# Compost Beginnings\n\nThe yard debris or food scraps* that you place into your home compost bin, take to a drop-off site, or set out for curbside collection could become the compost that you later use on your garden, lawn, and flowerbeds.\n\nIt is essential to place only quality organic material into the composting process. Here are some tips:\n\nl The products you use or spray in your yard can end up in the compost process. Carefully read the labels of pesticide and herbicide products you use. (See page 9.)\n\n- l Please keep yard debris free of:\n\t- Garbage\n\t- Plastic of any sort:\n\t\t- Plastic plant pots\n\t\t- Plastic plant tabs\n\t\t- Plastic bags (if you want to bag your yard debris, use paper garden bags - available at most garden centers)\n\t- Rock, brick, or masonry\n\t- Glass or metal\n\t- Pet waste.\n\n* Many localities now collect food scraps and food-soiled paper along with yard debris for composting. 
Call your local collection service to find out what is collected in your area.", - "page_start": 4, - "page_end": 4, - "source_file": "CompostGuide.pdf" - }, - { - "text": "A project of the Washington Organic Recycling Council, with support from the Washington State Department of Ecology's Public Participation Grant program.\n\nThis product was partly funded through a grant from the Washington Department of Ecology. While these materials were reviewed for grant consistency, this does not necessarily constitute endorsement by the department.\n\n**Special thanks:** the original version of this brochure in 2003 was created by the Washington County, Oregon Solid Waste and Recycling Program in cooperation with the Washington Organic Recycling Council and the Composting Council of Oregon.\n\n- \n# **original artwork provided by:**\n\n## Tips to Remember:\n\n- *• Don't put plants into 100% compost. Mix compost thoroughly into existing soil before planting.*\n- *• When transplanting, it's better to amend the whole bed, not just planting holes, to promote root growth.*\n- *• Ask your compost supplier which compost product is best for your intended use.*\n- *• Use compost at the recommended application rate.*\n- *• To maintain healthy soil, reapply compost or mulch every 1-2 years.*\n- *• Many composts are rich in plant nutrients, so you may be able to reduce fertilizer use after applying compost.*\n- *• Compost can also reduce your lawn and garden's summer irrigation needs.*\n- *• Compost-amended soil and mulching slow run off, reduce erosion, and break down pollutants. When you use compost, you're helping to protect our precious streams, rivers, lakes, and marine waters.*", - "page_start": 1, - "page_end": 1, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Building Rich and Healthy Soil With Compost\n\nTo grow healthy plants you need healthy soil.\n\n#### **Healthy Soil:**\n\n- l Is teeming with life! Healthy soil is a miniature ecosystem. 
A teaspoon of healthy soil will have upwards of four billion tiny organisms which recycle nutrients, suppress disease, and discourage pests.\n- l Retains moisture but allows drainage. Healthy soil has structure that allows water to drain through, retains moisture, and promotes strong root growth.\n- l Is full of organic nutrients. Plants depend on the microorganisms found in healthy organic-rich soil to provide nutrients to their roots, and help them thrive.\n\nA healthy garden and landscape is naturally resistant to pests, drought, weeds, and diseases. Maintaining healthy soil may allow you to reduce use of chemical fertilizers and pesticides.\n\n#### **Soil is a planting medium. Compost is a soil amendment. Do not place plants directly into 100% compost. Ask your supplier or see next page for mixes for different uses.**\n\n#### **Washington State Encourages the Use of Compost, to Protect Our Water Quality**\n\nThe Washington State Department of Ecology recommends that soils on construction sites be restored with compost before planting, and also encourages the use of compost for construction site erosion control, to reduce stormwater runoff and help keep our rivers, lakes, and Puget Sound clean. Learn more at **www.SoilsforSalmon.org** or **www.BuildingSoil.org.**\n\n## Selecting Quality Compost\n\nCompost is available in many product types and blends that may be used for different gardening applications. The type of feedstock, the composting process, and any supplementary additives determine the end product.\n\nMany facilities offer a variety of blends based on compost, such as garden mix, potting soil, planting mix, mulches, turf top-dressing and soil blends.\n\n#### **What to Look for in Compost**\n\nFor most compost applications you will want a finished product that has matured and stabilized. 
Look for material\n\n- l with a dark, crumbly texture\n- l with a mild odor\nFor most compost applications you will not want compost that is extremely dry or wet, or extremely hot. (Note that it is okay for compost to be warm and to give off some steam and mild odor.)\n\n## **Quality Testing at Composting Facilities**\n\nFeel free to ask your compost provider if they have a quality control program, and ask for test results. Compost facilities in Washington are permitted by the Department of Ecology and must meet standards for both the composting process and contaminants, ensuring a quality product. Some facilities also participate in the \"Seal of Testing Assurance\" (STA) testing program. See \"Resources\" on page 11 to learn more.\n\n#### **Remember:**\n\n**Your compost provider can help you pick the best compost mix for your needs.**", - "page_start": 5, - "page_end": 5, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## The Composting Process\n\nEven though there are a variety of composting methods, most composting follows a similar process:\n\n## **1. Grinding Organic Materials:**\n\nDepending on the facility, the feedstock (material) available, and the desired compost product, different combinations of materials are added together and ground into small pieces:\n\n- Nitrogen-rich materials (such as grass, fresh plant cuttings, biosolids, and manures)\n- Carbon-rich materials (such as dried leaves, woody materials, and straw).\n\n## **2. Heating Up:**\n\nThe material is placed into piles where it begins to heat up from the biological activity of the compost microbes. Typically, compost temperatures are required to reach at least 131 degrees F in a specified time period in order to destroy weed seeds and pathogens. The compost is turned or aerated, allowing the composting microbes to breathe. After a period of time, the nitrogen-rich material is depleted, the biological process slows, and the hot compost begins to cool.\n\n#### **3. 
Finishing:**\n\nTypically \"finished\" compost has undergone a series of steps to ensure maturity and stability. The cooling compost is aged, which allows the decomposition process to slow down and the finished compost to stabilize.\n\nThe end products you purchase may be entirely compost, or a combination of compost blended with uncomposted additives (such as peat, bark, minerals, or soil).\n\n## Applications for Compost\n\n#### **Planting New Garden Beds or Lawns**\n\nSpread a 2-4 inch layer of compost and mix into the upper 6-12 inches of existing soil: use more in sandy soils, and less in heavy clay. Reapply ½-1 inch annually on garden beds.\n\n#### **Mulch (surface applications on landscape beds)**\n\nSpread a 1-2 inch layer of coarse, woody compost. To allow proper airflow, it is best not to pile mulch around the stems of trees and shrubs. Pull mulch 1-2 inches away from stems.\n\n#### **Top Dressing for Lawns**\n\nSpread a ¼ to ½ inch layer of fine screened compost, and rake it into the lawn. For best results, plug-aerate the lawn before top-dressing. Overseeding at the same time will thicken thin patches in lawns.\n\n#### **Blended (Manufactured) Topsoils**\n\nGood quality \"topsoil\" products usually include 10-40% compost by volume, mixed with a sandy loam soil that allows good drainage. These compost-soil blends help establish healthy lawns and gardens.\n\n#### **When to Use Compost?**\n\n- Any time you're preparing soil for planting\n- Mulching beds and gardens in spring, summer, or fall\n- Top-dressing lawns in spring or fall.", - "page_start": 6, - "page_end": 6, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Compost: A Natural Cycle\n\nComposting is a natural process in which microorganisms and macro-organisms break down organic material (leaves, twigs, grass, etc.) into a dark crumbly soil amendment. Modern compost facilities use the same natural biological composting process. 
Their controlled-temperature process works faster, breaks down pesticide residues, and also kills weed seeds and plant diseases.\n\n#### Ask Your Compost Supplier\n\n**Whether you're buying direct from the composting facility, or from a local vendor, here are some good questions to ask:**\n\n- **• What ingredients go into your compost?**\n- **• What compost products or blends do you sell?**\n- **• Are there quality control or testing results available for these products? (These may be on the manufacturer's website.)**\n\t- **• Which product is best for my intended use?**\n\t- **• What application rate do you recommend?**\n\t\t- **• How much do I need for my area? (Or see pages 4-6.)**\n\n## Comparing Landscape Products\n\nA variety of soil and landscape products are sold. Here's a comparison:\n\n**Compost** is stable, decomposed organic matter, excellent for improving soil structure, fertility, moisture holding capacity, and plant growth.\n\n**Mulch** is any material applied to the soil surface. Woody mulches (high in carbon, low in nitrogen) like wood chips, bark and woody composts are great for woody plants. Annual plants should be mulched with nutrient-balanced mulches like compost, grass clippings, or leaves.\n\n**Peat Moss** is partially decayed sphagnum moss from peat bogs. It provides soil porosity, but not the nutrients or biological diversity for healthy soil that compost provides.\n\n**Fertilizers** are concentrated sources of plant nutrients, used in small amounts to supplement natural soil fertility.\n\n**Topsoil** that is sold is usually not native topsoil. 
Quality manufactured topsoils are a blend of native sandy sub-soils with composted organic matter to support soil life.\n\nCompost improves soil structure and plant growth by\n\n- Replenishing soil organic matter, and storing nutrients in plant-available forms\n- Supporting beneficial soil life\n- Reducing erosion and water run-off\n- Loosening clay soils for better root development (increasing soil pore space)\n- Retaining moisture in sandy soils so plants need less watering.", - "page_start": 3, - "page_end": 3, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Resources\n\n#### **Compost Organizations**\n\n**Washington Organic Recycling Council** Find a compost producer in your area www.compostwashington.org\n\n**US Composting Council** Seal of Testing Assurance (STA) program www.compostingcouncil.org/programs/sta/\n\n#### **Restoring the Soil to Protect our Waterways**\n\nwww.soilsforsalmon.org\n\nCompost amendment and erosion control during construction: information for builders www.buildingsoil.org\n\n#### **Natural Lawn & Garden Care, Soils, and Home Composting**\n\n**City of Seattle** www.seattle.gov/util/services/yard\n\n> **King County** www.kingcounty.gov/soils\n\n**Washington State University** www.puyallup.wsu.edu/soilmgmt/\n\n## The Beauty of Your Lawn and Garden Blossoms from the Soil\n\nThank you for your interest in compost.\n\nCompost is a versatile product with many benefits. It enhances soil quality, helps save water, and supports your community's efforts to recycle organic debris. All this helps to conserve our natural resources and reduces the amount of material sent to the landfill.\n\nCompost-amended soil also helps break down pollutants and absorb stormwater runoff. By making nutrients slowly available to plants and enhancing plant health, compost can reduce the need for chemical fertilizers and pesticides. 
All these benefits help protect our lakes, rivers, and marine waters from pollution and excessive runoff.\n\nCompost is a natural amendment for your lawn or garden, and can be used regularly to enrich your soil. This guide is designed to help you get the most from the compost that you buy.", - "page_start": 2, - "page_end": 2, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## How Much Compost to Use\n\n- l Estimate the planting area (Math Hint: Square feet = length x width)\n- l Decide upon the appropriate application depth of the compost (page 4)\n- l Use the charts below to estimate your compost needs. (Abbreviations: ft = foot; yd = yard; sq = square; cu = cubic.)\n- l Conversions: 9 square feet = 1 square yard; 27 cubic feet = 1 cubic yard.\n\n## **Question:** *I have a plot about this big, how much compost do I buy?*\n\n| Plot Size | # of Sq Feet | 1/2\" Deep - Mulching | 2\" Deep - Amending new |\n| --- | --- | --- | --- |\n| | | or Top-dressing | lawns or gardens |\n| 5' x 10' plot | 50 sq ft | 2.08 cu ft of compost | 8.33 cu ft of compost (0.31 cu yd) |\n| 10' x 10' plot | 100 sq ft | 4.17 cu ft of compost | 16.66 cu ft of compost (0.62 cu yd) |\n| 20 x 50' plot | 1000 sq ft | 41.7 cu ft of compost | 166.7 cu ft of compost (6.2 cu yd) |\n| 1 acre | 43,600 sq ft | 1,815 cu ft of compost (67 cu yd) | 7,257 cu ft of compost (268 cu yd) |\n\n## **Question:** *If I buy this much compost, how many square feet will it cover?*\n\n| Compost Quantity | 1/2\" Deep - Mulching | 2\" Deep - Amending new |\n| --- | --- | --- |\n| | or Top-dressing | lawns or gardens |\n| 1 cu ft bag of compost | 24 sq foot area | 6 sq foot area |\n| 1.5 cu ft bag of compost | 36 sq foot area | 9 sq foot area |\n| 2.2 cu ft bag of compost | 53 sq foot area | 13 sq foot area |\n| 2.5 cu ft bag of compost | 60 sq foot area | 15 sq foot area |\n| 1 cubic yard of compost | 648 sq foot area | 162 sq foot area |\n\n*Compost Works! 
Soil blending trials conducted in 2008 by the Washington Organic Recycling Council, with funding from the Washington Department of Ecology, demonstrated that compost improves soil structure (lowers bulk density), nutrient availability (increases cation exchange capacity), moisture holding capacity, and supplies both nutrients that plants need and organic matter that supports soil life. See the 2008 Soil Blending Trial report at* **www.compostwashington.org.**", - "page_start": 7, - "page_end": 7, - "source_file": "CompostGuide.pdf" - }, - { - "text": "**Compost adds organic material and nutrients to the soil, increases water-holding capacity and biological activity, and improves plant growth and health.**", - "page_start": 0, - "page_end": 0, - "source_file": "CompostGuide.pdf" - }, - { - "text": "### THE PURPOSE OF A RESIGNATION LETTER:\n\nThe purpose of a resignation letter is to give your employer official notice that you will be leaving the organisation. However, it is usually appropriate to inform your manager of your intention to resign in person, and then to follow up your conversation with the formal resignation letter.\n\nWhat to include:\n\nYour resignation letter should be short and to the point. Keep it positive and professional – this is not the place to voice your dissatisfaction with your job.\n\nIn your letter, you should make sure that you include the following:\n\n#### 1. A clear statement of your intention to resign.\n\nExample:\n\n\"Please accept this letter as formal notice of my resignation from my post as Assistant IT Manager at XYZ.\"\n\n### 2.\n\n### Reference to your notice period (where applicable), as well as your last working day with the organisation.\n\nExample:\n\n\"My last working day will be in two weeks' time, on 31 August 2015.\"\n\n#### 3.\n\n#### Your reason for leaving.\n\nYou don't need to elaborate on this if you don't want to. 
Remember to keep it positive, and not to make any rude, offensive or insulting remarks about the organisation or your co-workers, no matter how tempting it might be.", - "page_start": 48, - "page_end": 48, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "If you have any questions about your course work, you are always welcome to approach your tutors for help. Just remember that your tutors cannot guess what your needs are: you will have to make contact with your tutors and communicate your questions clearly if you want to get the assistance that you need.\n\nWhen it comes to contacting your tutors, your best option will usually be to send an e-mail.\n\nHere are some important tips to keep in mind when requesting help from a tutor via e-mail:\n\n#### **Use a relevant and descriptive subject line.**\n\nThis way, your tutor will immediately know what your e-mail is about, and he or she will be more likely to open it. A good subject line might read as follows: \"Enquiry regarding Assignment 1 for Safety Management 101\"\n\n#### **Be polite, and use an appropriate form of address.**\n\nAlways start your e-mail with an appropriate form of address, such as \"Hello Mr/Ms …\" and sign it off with your full name and student number. This will help to give your message a friendly, yet professional tone.\n\n#### **Be clear and concise.**\n\nMake sure that your tutor will be able to understand what it is that you are asking.", - "page_start": 33, - "page_end": 33, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "CompostGuide.pdf", - "query": "What are fertilizers ?", - "target_page": 4, - "target_passage": " Fertilizers are concentrated sources of plant nutrients, used in small amounts to supplement natural soil fertility. 
", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## Resources\n\n#### **Compost Organizations**\n\n**Washington Organic Recycling Council** Find a compost producer in your area www.compostwashington.org\n\n**US Composting Council** Seal of Testing Assurance (STA) program www.compostingcouncil.org/programs/sta/\n\n#### **Restoring the Soil to Protect our Waterways**\n\nwww.soilsforsalmon.org\n\nCompost amendment and erosion control during construction: information for builders www.buildingsoil.org\n\n#### **Natural Lawn & Garden Care, Soils, and Home Composting**\n\n**City of Seattle** www.seattle.gov/util/services/yard\n\n> **King County** www.kingcounty.gov/soils\n\n**Washington State University** www.puyallup.wsu.edu/soilmgmt/\n\n## The Beauty of Your Lawn and Garden Blossoms from the Soil\n\nThank you for your interest in compost.\n\nCompost is a versatile product with many benefits. It enhances soil quality, helps save water, and supports your community's efforts to recycle organic debris. All this helps to conserve our natural resources and reduces the amount of material sent to the landfill.\n\nCompost-amended soil also helps break down pollutants and absorb stormwater runoff. By making nutrients slowly available to plants and enhancing plant health, compost can reduce the need for chemical fertilizers and pesticides. All these benefits help protect our lakes, rivers, and marine waters from pollution and excessive runoff.\n\nCompost is a natural amendment for your lawn or garden, and can be used regularly to enrich your soil. This guide is designed to help you get the most from the compost that you buy.", - "page_start": 2, - "page_end": 2, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Compost: A Natural Cycle\n\nComposting is a natural process in which microorganisms and macro-organisms break down organic material (leaves, twigs, grass, etc.) into a dark crumbly soil amendment. 
Modern compost facilities use the same natural biological composting process. Their controlled-temperature process works faster, breaks down pesticide residues, and also kills weed seeds and plant diseases.\n\n#### Ask Your Compost Supplier\n\n**Whether you're buying direct from the composting facility, or from a local vendor, here are some good questions to ask:**\n\n- **• What ingredients go into your compost?**\n- **• What compost products or blends do you sell?**\n- **• Are there quality control or testing results available for these products? (These may be on the manufacturer's website.)**\n\t- **• Which product is best for my intended use?**\n\t- **• What application rate do you recommend?**\n\t\t- **• How much do I need for my area? (Or see pages 4-6.)**\n\n## Comparing Landscape Products\n\nA variety of soil and landscape products are sold. Here's a comparison:\n\n**Compost** is stable, decomposed organic matter, excellent for improving soil structure, fertility, moisture holding capacity, and plant growth.\n\n**Mulch** is any material applied to the soil surface. Woody mulches (high in carbon, low in nitrogen) like wood chips, bark and woody composts are great for woody plants. Annual plants should be mulched with nutrient-balanced mulches like compost, grass clippings, or leaves.\n\n**Peat Moss** is partially decayed sphagnum moss from peat bogs. It provides soil porosity, but not the nutrients or biological diversity for healthy soil that compost provides.\n\n**Fertilizers** are concentrated sources of plant nutrients, used in small amounts to supplement natural soil fertility.\n\n**Topsoil** that is sold is usually not native topsoil. 
Quality manufactured topsoils are a blend of native sandy sub-soils with composted organic matter to support soil life.\n\nCompost improves soil structure and plant growth by\n\n- Replenishing soil organic matter, and storing nutrients in plant-available forms\n- Supporting beneficial soil life\n- Reducing erosion and water run-off\n- Loosening clay soils for better root development (increasing soil pore space)\n- Retaining moisture in sandy soils so plants need less watering.", - "page_start": 3, - "page_end": 3, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Building Rich and Healthy Soil With Compost\n\nTo grow healthy plants you need healthy soil.\n\n#### **Healthy Soil:**\n\n- l Is teeming with life! Healthy soil is a miniature ecosystem. A teaspoon of healthy soil will have upwards of four billion tiny organisms which recycle nutrients, suppress disease, and discourage pests.\n- l Retains moisture but allows drainage. Healthy soil has structure that allows water to drain through, retains moisture, and promotes strong root growth.\n- l Is full of organic nutrients. Plants depend on the microorganisms found in healthy organic-rich soil to provide nutrients to their roots, and help them thrive.\n\nA healthy garden and landscape is naturally resistant to pests, drought, weeds, and diseases. Maintaining healthy soil may allow you to reduce use of chemical fertilizers and pesticides.\n\n#### **Soil is a planting medium. Compost is a soil amendment. Do not place plants directly into 100% compost. Ask your supplier or see next page for mixes for different uses.**\n\n#### **Washington State Encourages the Use of Compost, to Protect Our Water Quality**\n\nThe Washington State Department of Ecology recommends that soils on construction sites be restored with compost before planting, and also encourages the use of compost for construction site erosion control, to reduce stormwater runoff and help keep our rivers, lakes, and Puget Sound clean. 
Learn more at **www.SoilsforSalmon.org** or **www.BuildingSoil.org.**\n\n## Selecting Quality Compost\n\nCompost is available in many product types and blends that may be used for different gardening applications. The type of feedstock, the composting process, and any supplementary additives determine the end product.\n\nMany facilities offer a variety of blends based on compost, such as garden mix, potting soil, planting mix, mulches, turf top-dressing and soil blends.\n\n#### **What to Look for in Compost**\n\nFor most compost applications you will want a finished product that has matured and stabilized. Look for material\n\n- l with a dark, crumbly texture\n- l with a mild odor\nFor most compost applications you will not want compost that is extremely dry or wet, or extremely hot. (Note that it is okay for compost to be warm and to give off some steam and mild odor.)\n\n## **Quality Testing at Composting Facilities**\n\nFeel free to ask your compost provider if they have a quality control program, and ask for test results. Compost facilities in Washington are permitted by the Department of Ecology and must meet standards for both the composting process and contaminants, ensuring a quality product. Some facilities also participate in the \"Seal of Testing Assurance\" (STA) testing program. See \"Resources\" on page 11 to learn more.\n\n#### **Remember:**\n\n**Your compost provider can help you pick the best compost mix for your needs.**", - "page_start": 5, - "page_end": 5, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Compost Questions and Answers\n\n#### **What is compost?**\n\nCompost is a natural humus-like soil amendment that results from the controlled aerobic (with oxygen) decomposition of organic materials. Compost is not soil – it should be mixed with soil. 
It is not fertilizer, although it contains many slowly released nutrients.\n\n#### **What materials (\"feedstocks\") are used to make compost?**\n\nCompost facilities in Washington recycle a variety of organic materials, including yard debris, food scraps, manure, biosolids, forest residuals like sawdust and bark, construction wood, and agricultural residues. All of these materials can be used to produce high quality compost. Your supplier can tell you which materials they compost.\n\n#### **How do I know I'm getting safe, quality compost?**\n\nFortunately, in Washington we have strict permitting and production standards for compost facilities, that include both time and temperature requirements and contaminant limits.\n\n#### **What about weed seeds, plant diseases or pesticide residues?**\n\nThe controlled time, aeration, and temperature process required in Washington has been shown to kill weed seeds and plant diseases. That same process breaks down most pesticide residues. There are a few agricultural pesticides that are not easily broken down, and permitted Washington compost manufacturers carefully watch their feedstocks to keep those materials out of the composting process.\n\n# Compost Beginnings\n\nThe yard debris or food scraps* that you place into your home compost bin, take to a drop-off site, or set out for curbside collection could become the compost that you later use on your garden, lawn, and flowerbeds.\n\nIt is essential to place only quality organic material into the composting process. Here are some tips:\n\nl The products you use or spray in your yard can end up in the compost process. Carefully read the labels of pesticide and herbicide products you use. 
(See page 9.)\n\n- l Please keep yard debris free of :\n\t- x Garbage x Plastic of any sort\n- Plastic plant pots\n- Plastic plant tabs\n- Plastic bags (if you want to bag your yard debris, use paper garden bags - available at most garden centers)\n\t- x Rock, brick, or masonry x Glass or metal x Pet waste.\n\t-\n\t-\n\n* Many localities now collect food scraps and food-soiled paper along with yard debris for composting. Call your local collection service to find out what is collected in your area.", - "page_start": 4, - "page_end": 4, - "source_file": "CompostGuide.pdf" - }, - { - "text": "A project of the Washington Organic Recycling Council, with support from the Washington State Department of Ecology's Public Participation Grant program.\n\nThis product was partly funded through a grant from the Washington Department of Ecology. While these materials were reviewed for grant consistency, this does not necessarily constitute endorsement by the department.\n\n**Special thanks:** the original version of this brochure in 2003 was created by the Washington County, Oregon Solid Waste and Recycling Program in cooperation with the Washington Organic Recycling Council and the Composting Council of Oregon.\n\n- \n# **original artwork provided by:**\n\n## Tips to Remember:\n\n- *• Don't put plants into 100% compost. 
Mix compost thoroughly into existing soil before planting.*\n- *• When transplanting, it's better to amend the whole bed, not just planting holes, to promote root growth.*\n- *• Ask your compost supplier which compost product is best for your intended use.*\n- *• Use compost at the recommended application rate.*\n- *• To maintain healthy soil, reapply compost or mulch every 1-2 years.*\n- *• Many composts are rich in plant nutrients, so you may be able to reduce fertilizer use after applying compost.*\n- *• Compost can also reduce your lawn and garden's summer irrigation needs.*\n- *• Compost-amended soil and mulching slow run off, reduce erosion, and break down pollutants. When you use compost, you're helping to protect our precious streams, rivers, lakes, and marine waters.*", - "page_start": 1, - "page_end": 1, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## The Composting Process\n\nEven though there are a variety of composting methods, most composting follows a similar process:\n\n## **1. Grinding Organic Materials:**\n\nDepending on the facility, the feedstock (material) available, and the desired compost product, different combinations of materials are added together and ground into small pieces:\n\n- Nitrogen-rich materials (such as grass, fresh plant cuttings, biosolids, and manures)\n- Carbon-rich materials (such as dried leaves, woody materials, and straw).\n\n## **2. Heating Up:**\n\nThe material is placed into piles where it begins to heat up from the biological activity of the compost microbes. Typically, compost temperatures are required to reach at least 131 degrees F in a specified time period in order to destroy weed seeds and pathogens. The compost is turned or aerated, allowing the composting microbes to breathe. After a period of time, the nitrogen-rich material is depleted, the biological process slows, and the hot compost begins to cool.\n\n#### **3. 
Finishing:**\n\nTypically \"finished\" compost has undergone a series of steps to ensure maturity and stability. The cooling compost is aged, which allows the decomposition process to slow down and the finished compost to stabilize.\n\nThe end products you purchase may be entirely compost, or a combination of compost blended with uncomposted additives (such as peat, bark, minerals, or soil).\n\n## Applications for Compost\n\n#### **Planting New Garden Beds or Lawns**\n\nSpread a 2-4 inch layer of compost and mix into the upper 6-12 inches of existing soil: use more in sandy soils, and less in heavy clay. Reapply ½-1 inch annually on garden beds.\n\n#### **Mulch (surface applications on landscape beds)**\n\nSpread a 1-2 inch layer of coarse, woody compost. To allow proper airflow, it is best not to pile mulch around the stems of trees and shrubs. Pull mulch 1-2 inches away from stems.\n\n#### **Top Dressing for Lawns**\n\nSpread a ¼ to ½ inch layer of fine screened compost, and rake it into the lawn. For best results, plug-aerate the lawn before top-dressing. Overseeding at the same time will thicken thin patches in lawns.\n\n#### **Blended (Manufactured) Topsoils**\n\nGood quality \"topsoil\" products usually include 10-40% compost by volume, mixed with a sandy loam soil that allows good drainage. 
These compost-soil blends help establish healthy lawns and gardens.\n\n#### **When to Use Compost?**\n\n- Any time you're preparing soil for planting\n- Mulching beds and gardens in spring, summer, or fall\n- Top-dressing lawns in spring or fall.", - "page_start": 6, - "page_end": 6, - "source_file": "CompostGuide.pdf" - }, - { - "text": "**Compost adds organic material and nutrients to the soil, increases water-holding capacity and biological activity, and improves plant growth and health.**", - "page_start": 0, - "page_end": 0, - "source_file": "CompostGuide.pdf" - }, - { - "text": "green spaces has increased52 , green spaces often lose out in the competition for land as the share of the population living in urban areas continues to rise.\n\nThis strategy aims to reverse these trends and stop the loss of green urban ecosystems. The promotion of healthy ecosystems, green infrastructure and **nature-based solutions** should be systematically integrated into urban planning, including in public spaces, infrastructure, and the design of buildings and their surroundings.\n\nTo bring nature back to cities and reward community action, the Commission calls on European cities of at least 20,000 inhabitants to develop ambitious **Urban Greening Plans** by the end of 2021. These should include measures to create biodiverse and accessible urban forests, parks and gardens; urban farms; green roofs and walls; treelined streets; urban meadows; and urban hedges. They should also help improve connections between green spaces, eliminate the use of pesticides, limit excessive mowing of urban green spaces and other biodiversity harmful practices. Such plans could mobilise policy, regulatory and financial tools.\n\nTo facilitate this work, the Commission will in 2021 set up an **EU Urban Greening Platform**, under a new 'Green City Accord'53 with cities and mayors. This will be done in close coordination with the European Covenant of Mayors. 
The Urban Greening Plans will have a central role in choosing the European Green Capital 2023 and European Green Leaf 2022.\n\nThe Commission will support Member States and local and regional authorities through technical guidance and help to mobilise funding and capacity building. It will also reflect these objectives in the **European Climate Pact**.\n\n#### *2.2.9. Reducing pollution*\n\nPollution is a key driver of biodiversity loss and has a harmful impact on our health and environment. While the EU has a solid legal framework in place to reduce pollution, greater efforts are still required. Biodiversity is suffering from the release of nutrients, chemical pesticides, pharmaceuticals, hazardous chemicals, urban and industrial wastewater, and other waste including litter and plastics. All of these pressures must be reduced.\n\nAs part of the Commission's Zero Pollution Ambition for a toxic-free environment, a new EU Chemicals Strategy for Sustainability will be put forward along with a **Zero Pollution Action Plan for Air, Water and Soil**.\n\nThe Commission will also promote the goal of zero pollution from nitrogen and phosphorus flows from fertilisers through reducing nutrient losses by at least 50%, while ensuring that there is no deterioration in soil fertility. This will result in the **reduction of use of fertilisers by at least 20%**. This will be achieved by implementing and enforcing the relevant environmental and climate legislation in full, identifying with Member States the nutrient load reductions needed to achieve these goals, applying balanced fertilisation and sustainable nutrient management, and by managing nitrogen and phosphorus better throughout their lifecycle. 
To this end, the Commission will work with Member States to\n\n52 There are 11,000 Natura 2000 sites within, or partly within, cities, representing 15% of the total area of the Natura 2000 network.\n\n53 The Green City Accord.", - "page_start": 13, - "page_end": 13, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "progress towards the target will be under constant review, and adjustment if needed, to mitigate against undue impact on biodiversity, food security and farmers' competitiveness.\n\nAgroecology can provide healthy food while maintaining productivity, increase soil fertility and biodiversity, and reduce the footprint of food production. Organic farming in particular holds great potential for farmers and consumers alike. The sector creates jobs and attracts young farmers. Organic farming also provides 10-20 % more jobs per hectare than conventional farms, and creates added value for agricultural products32 . To make the most of this potential, at least **25% of the EU's agricultural land must be organically farmed by 2030**. In addition to CAP measures, the Commission will put forward an Action Plan on organic farming, helping Member States stimulate both supply and demand of organic products. It will also ensure consumer's trust through promotion campaigns and green public procurement. In the implementation of the EU-wide agroecological targets set out in this strategy and in the Farm to Fork Strategy, the different starting points and differences in progress already made in Member States will be taken into account.\n\nThe uptake of agroforestry support measures under rural development should be increased as it has great potential to provide multiple benefits for biodiversity, people and climate.\n\nThe decline of **genetic diversity** must also be reversed, including by facilitating the use of traditional varieties of crops and breeds. This would also bring health benefits through more varied and nutritious diets. 
The Commission is considering the revision of marketing rules for traditional crop varieties in order to contribute to their conservation and sustainable use. The Commission will also take measures to facilitate the registration of seed varieties, including for organic farming, and to ensure easier market access for traditional and locally adapted varieties.\n\n#### *2.2.3. Addressing land take and restoring soil ecosystems*\n\nSoil is one of the most complex of all ecosystems. It is a habitat in its own right, and home to an incredible diversity of organisms that regulate and control key ecosystem services such as soil fertility, nutrient cycling and climate regulation. **Soil is a hugely important non-renewable resource**, vital for human and economic health, as well as the production of food and new medications.\n\nIn the EU, the degradation of soil is having considerable environmental and economic consequences. Poor land management, such as deforestation, overgrazing, unsustainable farming and forestry practices, construction activities and land sealing are among the main causes of this situation33 . Despite recent reductions in the pace of soil sealing, fertile soils continue to be lost to land take and urban sprawl34. When compounded by\n\n32 OECD (2016), Farm Management Practices to Foster Green Growth.\n\n33 European Environment Agency (2019), EEA Signals 2019: Land and Soil in Europe.\n\n34 European Environment Agency and Swiss Federal Office for the Environment (FOEN) (2016), Urban sprawl in Europe.", - "page_start": 8, - "page_end": 8, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## How Much Compost to Use\n\n- l Estimate the planting area (Math Hint: Square feet = length x width)\n- l Decide upon the appropriate application depth of the compost (page 4)\n- l Use the charts below to estimate your compost needs. 
(Abbreviations: ft = foot; yd = yard; sq = square; cu = cubic.)\n- l Conversions: 9 square feet = 1 square yard; 27 cubic feet = 1 cubic yard.\n\n## **Question:** *I have a plot about this big, how much compost do I buy?*\n\n| Plot Size | # of Sq Feet | 1/2\" Deep - Mulching | 2\" Deep - Amending new |\n| --- | --- | --- | --- |\n| | | or Top-dressing | lawns or gardens |\n| 5' x 10' plot | 50 sq ft | 2.08 cu ft of compost | 8.33 cu ft of compost (0.31 cu yd) |\n| 10' x 10' plot | 100 sq ft | 4.17 cu ft of compost | 16.66 cu ft of compost (0.62 cu yd) |\n| 20 x 50' plot | 1000 sq ft | 41.7 cu ft of compost | 166.7 cu ft of compost (6.2 cu yd) |\n| 1 acre | 43,600 sq ft | 1,815 cu ft of compost (67 cu yd) | 7,257 cu ft of compost (268 cu yd) |\n\n## **Question:** *If I buy this much compost, how many square feet will it cover?*\n\n| Compost Quantity | 1/2\" Deep - Mulching | 2\" Deep - Amending new |\n| --- | --- | --- |\n| | or Top-dressing | lawns or gardens |\n| 1 cu ft bag of compost | 24 sq foot area | 6 sq foot area |\n| 1.5 cu ft bag of compost | 36 sq foot area | 9 sq foot area |\n| 2.2 cu ft bag of compost | 53 sq foot area | 13 sq foot area |\n| 2.5 cu ft bag of compost | 60 sq foot area | 15 sq foot area |\n| 1 cubic yard of compost | 648 sq foot area | 162 sq foot area |\n\n*Compost Works! Soil blending trials conducted in 2008 by the Washington Organic Recycling Council, with funding from the Washington Department of Ecology, demonstrated that compost improves soil structure (lowers bulk density), nutrient availability (increases cation exchange capacity), moisture holding capacity, and supplies both nutrients that plants need and organic matter that supports soil life. 
See the 2008 Soil Blending Trial report at* **www.compostwashington.org.**", - "page_start": 7, - "page_end": 7, - "source_file": "CompostGuide.pdf" - } - ] - }, - { - "references": { - "source_file": "CompostGuide.pdf", - "query": "Explain to me what is peat moss ?", - "target_page": 4, - "target_passage": "Peat Moss is partially decayed sphagnum moss from peat bogs. It provides soil porosity, but not the nutrients or biological diversity for healthy soil that compost provides.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Compost: A Natural Cycle\n\nComposting is a natural process in which microorganisms and macro-organisms break down organic material (leaves, twigs, grass, etc.) into a dark crumbly soil amendment. Modern compost facilities use the same natural biological composting process. Their controlled-temperature process works faster, breaks down pesticide residues, and also kills weed seeds and plant diseases.\n\n#### Ask Your Compost Supplier\n\n**Whether you're buying direct from the composting facility, or from a local vendor, here are some good questions to ask:**\n\n- **• What ingredients go into your compost?**\n- **• What compost products or blends do you sell?**\n- **• Are there quality control or testing results available for these products? (These may be on the manufacturer's website.)**\n\t- **• Which product is best for my intended use?**\n\t- **• What application rate do you recommend?**\n\t\t- **• How much do I need for my area? (Or see pages 4-6.)**\n\n## Comparing Landscape Products\n\nA variety of soil and landscape products are sold. Here's a comparison:\n\n**Compost** is stable, decomposed organic matter, excellent for improving soil structure, fertility, moisture holding capacity, and plant growth.\n\n**Mulch** is any material applied to the soil surface. Woody mulches (high in carbon, low in nitrogen) like wood chips, bark and woody composts are great for woody plants. 
Annual plants should be mulched with nutrient-balanced mulches like compost, grass clippings, or leaves.\n\n**Peat Moss** is partially decayed sphagnum moss from peat bogs. It provides soil porosity, but not the nutrients or biological diversity for healthy soil that compost provides.\n\n**Fertilizers** are concentrated sources of plant nutrients, used in small amounts to supplement natural soil fertility.\n\n**Topsoil** that is sold is usually not native topsoil. Quality manufactured topsoils are a blend of native sandy sub-soils with composted organic matter to support soil life.\n\nCompost improves soil structure and plant growth by\n\n- Replenishing soil organic matter, and storing nutrients in plant-available forms\n- Supporting beneficial soil life\n- Reducing erosion and water run-off\n- Loosening clay soils for better root development (increasing soil pore space)\n- Retaining moisture in sandy soils so plants need less watering.", - "page_start": 3, - "page_end": 3, - "source_file": "CompostGuide.pdf" - }, - { - "text": "**Compost adds organic material and nutrients to the soil, increases water-holding capacity and biological activity, and improves plant growth and health.**", - "page_start": 0, - "page_end": 0, - "source_file": "CompostGuide.pdf" - }, - { - "text": "Afforestation, reforestation and tree planting to support biodiversity and ecosystem restoration will be promoted through the CAP Strategic Plans, and the Cohesion Policy funds. The new **European Urban Greening Platform**38 will also facilitate urban tree planting, including under the LIFE programme.\n\nThe share of forest areas covered by management plans should cover all managed public forests and an increased number of private forests, and biodiversity-friendly practices such as closer-to-nature-forestry should continue and be further developed. 
To support this, the Commission will develop guidelines on biodiversity-friendly afforestation and reforestation and closer-to-nature-forestry practices. This will be done in parallel with the new EU Forest Strategy.\n\nTo gain a better picture of the health of European forests, the Commission will work with other data providers to further develop the **Forest Information System for Europe**. This will help produce up-to-date assessments of the condition of European forests and link all EU forest-data web-platforms. This will also be presented as part of the EU Forest Strategy.\n\n### *2.2.5. Win-win solutions for energy generation*\n\nDecarbonising the energy system is critical for climate neutrality, as well as for the EU's recovery from the COVID-19 crisis and long-term prosperity. More sustainably sourced renewable energy will be essential to fight climate change and biodiversity loss. The EU will prioritise solutions such as ocean energy, offshore wind, which also allows for fish stock regeneration, solar-panel farms that provide biodiversity-friendly soil cover, and sustainable bioenergy.\n\nTo mitigate climate and environmental risks created by the increasing use of certain sources for bioenergy, the revised Renewable Energy Directive39 includes strengthened sustainability criteria. It also promotes the shift to advanced biofuels based on residues and non-reusable and non-recyclable waste. This approach should continue for all forms of bioenergy. The use of whole trees and food and feed crops for energy production – whether produced in the EU or imported – should be minimised.\n\nTo better understand and monitor the potential climate and biodiversity risks, the Commission is assessing the **EU and global biomass supply and demand** and related sustainability40. As part of its increased ambition to protect and restore forest ecosystems, the Commission will publish the results of this work on the use of forest biomass for energy production by the end of 2020. 
This will inform the Commission's policymaking, including the review and revision, where necessary, of the level of ambition of the Renewable Energy Directive, the Emissions Trading Scheme, and the Regulation on land use, land use change and forestry (LULUCF) set for 2021.\n\nIn line with the Renewable Energy Directive, the Commission will also develop operational guidance in 2021 on the **new sustainability criteria on forest biomass for** \n\n38 See Section 2.2.8.\n\n39 Directive (EU) 2018/2001 on the promotion of the use of energy from renewable sources.\n\n40 JRC Biomass Assessment Study.", - "page_start": 10, - "page_end": 10, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "strength (n) [ME < OE (n) [ME < O *strengou.*] 5. Firm will or c ] 5. Firm will or character: moral courage or power. mora", - "page_start": 9, - "page_end": 9, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## The Composting Process\n\nEven though there are a variety of composting methods, most composting follows a similar process:\n\n## **1. Grinding Organic Materials:**\n\nDepending on the facility, the feedstock (material) available, and the desired compost product, different combinations of materials are added together and ground into small pieces:\n\n- Nitrogen-rich materials (such as grass, fresh plant cuttings, biosolids, and manures)\n- Carbon-rich materials (such as dried leaves, woody materials, and straw).\n\n## **2. Heating Up:**\n\nThe material is placed into piles where it begins to heat up from the biological activity of the compost microbes. Typically, compost temperatures are required to reach at least 131 degrees F in a specified time period in order to destroy weed seeds and pathogens. The compost is turned or aerated, allowing the composting microbes to breathe. After a period of time, the nitrogen-rich material is depleted, the biological process slows, and the hot compost begins to cool.\n\n#### **3. 
Finishing:**\n\nTypically \"finished\" compost has undergone a series of steps to ensure maturity and stability. The cooling compost is aged, which allows the decomposition process to slow down and the finished compost to stabilize.\n\nThe end products you purchase may be entirely compost, or a combination of compost blended with uncomposted additives (such as peat, bark, minerals, or soil).\n\n## Applications for Compost\n\n#### **Planting New Garden Beds or Lawns**\n\nSpread a 2-4 inch layer of compost and mix into the upper 6-12 inches of existing soil: use more in sandy soils, and less in heavy clay. Reapply ½-1 inch annually on garden beds.\n\n#### **Mulch (surface applications on landscape beds)**\n\nSpread a 1-2 inch layer of coarse, woody compost. To allow proper airflow, it is best not to pile mulch around the stems of trees and shrubs. Pull mulch 1-2 inches away from stems.\n\n#### **Top Dressing for Lawns**\n\nSpread a ¼ to ½ inch layer of fine screened compost, and rake it into the lawn. For best results, plug-aerate the lawn before top-dressing. Overseeding at the same time will thicken thin patches in lawns.\n\n#### **Blended (Manufactured) Topsoils**\n\nGood quality \"topsoil\" products usually include 10-40% compost by volume, mixed with a sandy loam soil that allows good drainage. These compost-soil blends help establish healthy lawns and gardens.\n\n#### **When to Use Compost?**\n\n- Any time you're preparing soil for planting\n- Mulching beds and gardens in spring, summer, or fall\n- Top-dressing lawns in spring or fall.", - "page_start": 6, - "page_end": 6, - "source_file": "CompostGuide.pdf" - }, - { - "text": "to a certain extent the particle-particle attraction. Normally, the solution is deposited on to a plain silicon substrate that is covered by the native oxide layer only [34]. However, one may locally change the wetting behaviour of the solvent by further oxidising the substrate [38]. 
By adding excess thiol one can also vary the properties of the solvent [40].\n\nTwo different procedures are employed for the deposition of the solution on to the substrate: spincoating or a meniscus technique [61, 62]. The choice is important as it strongly influences the evaporation rate and, as a result, the pattern formation process. When using spin-coating, one finds that directly after deposition, evaporation competes with dewetting until all the solvent has evaporated. The resulting deposits of nanoparticles are imaged by atomic force microscopy (AFM). For spin-coated films, the evaporation rate is high and structuring is normally finished before the spincoater is stopped. Conversely, the solvent evaporation rate is strongly decreased when employing the meniscus technique [61], i.e., by depositing a drop of solution on a Teflon ring that is wetted by the solvent. This allows for a better control of the process and enables the use of contrast-enhanced microscopy to observe the dewetting process in situ [40]. All pattern formation is confined to the region of the receding contact line of toluene, silicon and air. With both techniques one may find mono-modal or bi-modal polygonal networks [34], labyrinthine spinodal structures, or branched patterns (see Fig. 1). The meniscus technique allows for the study of branched structures in a more controlled manner. The work in Ref. [40] indicates that fingering strongly depends on the interaction strength of the particles, i.e., on the chain length of the thiol molecules coating the gold cores. For short chains (C5 and C8) no formation of branched structures is observed. At similar concentrations, well-developed branched structures are formed for longer chains (C10 and C12). For even longer chains (C14), however, one again finds less branching. It also depends on the amount of excess thiol in the solvent (for details see Ref. 
[40]).\n\nWhen following the evolution of the branched patterns in situ (see the complementary video material of Ref. [40]), one clearly observes that different processes occur on different lenght scales. First, a macroscopic dewetting front recedes, leaving behind a seemingly dry substrate. The macroscopic front can be transversely unstable resulting in large-scale (> 100µm) strongly anisotropic fingered structures. For fronts that move relatively quickly these macroscopic structures cover all the available substrate. However, when at a later stage the macroscopic front becomes slower, those fingers become scarce and 'macroscopic fingering' finally ceases. At this stage it is possible to appreciate that the seemingly dry region left behind by the front is not at all dry, but covered by an ultrathin 'postcursor' film that is itself unstable. The thickness of this film", - "page_start": 5, - "page_end": 5, - "source_file": "1001.2669.pdf" - }, - { - "text": "form of the imaginary part.\n\nFIG. 17: Conductivities and ∆W for a fixed λωsf . Top – ωsf = 26 meV ,λ = 1,ωo = 40 meV ,Zo = 0.77 Bottom – ωsf = 2.6 meV ,λ = 10,ωo = 13.5 meV ,Zo = 1.22. The zero crossing for ∆W is not affected by a change in λ because it is determined only by λωsf . We set ∆ = 30 meV .\n\nFIG. 18: The behavior of Kubo sums in the CB model. Note that the spectral weight in the NS is always larger than in the SCS. We set ωsf = 26 meV ,λ = 1, and ∆ = 30 meV .\n\nWe performed the same calculations of conductivities and optical integrals as in the previous three cases. The results are summarized in Figs. 17 - 22. Fig 17 shows conductivities in the NS and the SCS for two couplings λ = 1 and λ = 10 (keeping λωsf constant). Other parameters Zo and ωo are calculated according to the discussion after Eq 21. for ωsf = 26 meV , λ = 1, we find ωo = 40 meV , Zo = 0.77. And for ωsf = 2.6 meV , λ = 10, we find ωo = 13.5 meV , Zo = 1.22. Note that the conductivity in the SCS starts at 2∆ + ωo (i.e. 
the resonance energy\n\nFIG. 19: The evolution of the optical integrals in the NS and the SCS in the CB model. Note that about ∼ 75% of the spectral weight is recovered up to 1 eV . We set ωsf = 26 meV ,λ = 1, and ∆ = 30 meV .\n\nFIG. 20: ∆W (in meV) for λ = 1(top) and λ = 10(bottom). We used ωsf = 26 meV /λ and ∆ = 30meV . The zero crossing is not affected because we keep λωsf constant. The notable difference is the widening of the dip at a larger λ.", - "page_start": 11, - "page_end": 11, - "source_file": "1001.0764.pdf" - }, - { - "text": "climate change, the effects of erosion and losses of soil organic carbon are becoming increasingly apparent. Desertification is also a growing threat in the EU35 .\n\nIt is therefore essential to step up efforts to **protect soil fertility, reduce soil erosion and increase soil organic matter**. This should be done by adopting sustainable soil management practices, including as part of the CAP. Significant progress is also needed on identifying contaminated soil sites, restoring degraded soils, defining the conditions for their good ecological status, introducing restoration objectives, and improving the monitoring of soil quality.\n\nTo address these issues in a comprehensive way and help to fulfil EU and international commitments on land-degradation neutrality, the Commission will update the **EU Soil Thematic Strategy**36 in 2021. The **Zero Pollution Action Plan for Air, Water and Soil** that the Commission will adopt in 2021 will also look at these issues. Soil sealing and rehabilitation of contaminated brownfields will be addressed in the upcoming Strategy for a Sustainable Built Environment. A **mission in the area of soil health and food** under Horizon Europe37 will aim to develop solutions for restoring soil health and functions.\n\n#### *2.2.4. 
Increasing the quantity of forests and improving their health and resilience*\n\nForests are hugely important for biodiversity, climate and water regulation, the provision of food, medicines and materials, carbon sequestration and storage, soil stabilisation and the purification of air and water. They are also a natural home for recreation and learning about nature. Foresters have a key role to play in ensuring sustainable forest management and in restoring and sustaining biodiversity in forests.\n\nIn addition to strictly protecting all remaining EU primary and old-growth forests, **the EU must increase the quantity, quality and resilience of its forests**, notably against fires, droughts, pests, diseases and other threats likely to increase with climate change. To retain their function for both biodiversity and climate, all forests need to be preserved in good health. More resilient forests can support a more resilient economy. They also play an important role in providing materials, products and services, which are key for the circular bio-economy.\n\nTo make this happen, the Commission will propose a dedicated **EU Forest Strategy** in 2021 in line with our wider biodiversity and climate neutrality ambitions. It will include a roadmap for **planting at least 3 billion additional trees in the EU by 2030**, in full respect of ecological principles. This will create substantial job opportunities linked to the collecting and cultivating of seeds, planting seedlings, and ensuring their development. Tree planting is particularly beneficial in cities, while in rural areas it can work well with agroforestry, landscape features and increased carbon sequestration. 
At the same time, the Commission will continue to work with Member States to ensure that the EU is sufficiently equipped to prevent and respond to major forest fires, which can inflict significant damages on forest biodiversity.\n\n35 European Court of Auditors (2018), Combating desertification in the EU: a growing threat in need of more action, Special Report n°33/2018.\n\n36 Thematic Strategy for Soil Protection (COM(2006) 231).\n\n37 Horizon Europe mission area on soil health and food.", - "page_start": 9, - "page_end": 9, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "### **First Financial Bankshares customers and shareholders also know a thing or two about Value and Values – and we learn from them every day. We're proud to share in their success. Here are just a few of their stories.**\n\n**George Marti believes in doing things. Good things.** \n\nBorn to humble roots on his parents' farm in 1920, Marti has accomplished much, including founding three radio stations (and investing in 10 more) and developing a remote pickup device that became standard equipment in 80 percent of all radio stations worldwide. He still has part ownership of KCLE in Cleburne, Texas (the town where he was once mayor for 12 years).\n\nMarti's dedication to his hometown is part of the reason why he bought Cleburne State Bank in 1992. His business skills (and success in the broadcasting industry) gave him the resources to turn the bank into yet another winning venture. Five years later, he sold it to First Financial, which merged it with their existing First Financial Bank, Cleburne.\n\nThe proceeds from the sale helped Marti complete the funding for his proudest achievement: the Marti Foundation, which he created in the 1970s to help send students from Johnson County to college. \"We help over 100 students a year … most are the first from their family ever to attend college,\" says Marti. 
\"I know what education did for me, so it's a great thing to help these young people.\" Marti says that when he dies, the Foundation will live on, $20 million strong.\n\nMarti still serves on the board of First Financial Bank, Cleburne. \"First Financial's merger of the banks was positive for the community. They have a good customer base. They are friendly, helpful and creative. They are growing, and the branches in Alvarado and Burleson are both doing well. Those are all good things.\"\n\n\"They are friendly, helpful and creative. Those are all good things.\"\n\nGeorge Marti Founder Marti Enterprises Cleburne, Texas 6", - "page_start": 7, - "page_end": 7, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## Building Rich and Healthy Soil With Compost\n\nTo grow healthy plants you need healthy soil.\n\n#### **Healthy Soil:**\n\n- l Is teeming with life! Healthy soil is a miniature ecosystem. A teaspoon of healthy soil will have upwards of four billion tiny organisms which recycle nutrients, suppress disease, and discourage pests.\n- l Retains moisture but allows drainage. Healthy soil has structure that allows water to drain through, retains moisture, and promotes strong root growth.\n- l Is full of organic nutrients. Plants depend on the microorganisms found in healthy organic-rich soil to provide nutrients to their roots, and help them thrive.\n\nA healthy garden and landscape is naturally resistant to pests, drought, weeds, and diseases. Maintaining healthy soil may allow you to reduce use of chemical fertilizers and pesticides.\n\n#### **Soil is a planting medium. Compost is a soil amendment. Do not place plants directly into 100% compost. 
Ask your supplier or see next page for mixes for different uses.**\n\n#### **Washington State Encourages the Use of Compost, to Protect Our Water Quality**\n\nThe Washington State Department of Ecology recommends that soils on construction sites be restored with compost before planting, and also encourages the use of compost for construction site erosion control, to reduce stormwater runoff and help keep our rivers, lakes, and Puget Sound clean. Learn more at **www.SoilsforSalmon.org** or **www.BuildingSoil.org.**\n\n## Selecting Quality Compost\n\nCompost is available in many product types and blends that may be used for different gardening applications. The type of feedstock, the composting process, and any supplementary additives determine the end product.\n\nMany facilities offer a variety of blends based on compost, such as garden mix, potting soil, planting mix, mulches, turf top-dressing and soil blends.\n\n#### **What to Look for in Compost**\n\nFor most compost applications you will want a finished product that has matured and stabilized. Look for material\n\n- l with a dark, crumbly texture\n- l with a mild odor\nFor most compost applications you will not want compost that is extremely dry or wet, or extremely hot. (Note that it is okay for compost to be warm and to give off some steam and mild odor.)\n\n## **Quality Testing at Composting Facilities**\n\nFeel free to ask your compost provider if they have a quality control program, and ask for test results. Compost facilities in Washington are permitted by the Department of Ecology and must meet standards for both the composting process and contaminants, ensuring a quality product. Some facilities also participate in the \"Seal of Testing Assurance\" (STA) testing program. 
See \"Resources\" on page 11 to learn more.\n\n#### **Remember:**\n\n**Your compost provider can help you pick the best compost mix for your needs.**", - "page_start": 5, - "page_end": 5, - "source_file": "CompostGuide.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv3.pdf", - "query": "How encourage temporally adjacent representations to be predictive of each other ?", - "target_page": 2, - "target_passage": "One way to encourage temporally adjacent representations to be predictive of each other is to ensure that they vary slowly over time. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "To that end, we pretrain a family of V-JEPA models on a dataset of 2 million videos collected from publicly available datasets by combining a masked modeling prediction task with a joint-embedding predictive architecture (see Figure 2). We measure performance on several downstream image and video tasks, using both frozen evaluation and end-to-end fine-tuning. Our findings suggest that feature prediction can indeed serve as an effective stand-alone objective for unsupervised learning from video, while using significantly shorter training schedules than pixel prediction methods. Specifically:\n\n- Feature prediction leads to versatile visual representations that perform well across downstream image and video tasks without adaption of the model's weights; i.e., using a frozen backbone. V-JEPA achieves the best performance among methods we consider (+6% accuracy) on the SomethingSomething-v2 task, which requires finegrained temporal understanding. 
V-JEPA is also competitive on tasks like Kinetics400, where appearance-based features are sufficient and hence state-of-the-art image models such as DINOv2 excel (Figure 1 and Table 6).\n- Models trained with feature prediction are superior to pixel prediction approaches under a frozen evaluation protocol (attentive probing) and are competitive with pixel prediction under full fine-tuning, while using significantly shorter training schedules (Tables 5 and 6).\n- Models trained with feature prediction are more label-efficient than pixel prediction approaches. Decreasing the available number of labeled examples results in an increase in the performance gap between V-JEPA and pixel-reconstruction models (Table 7).\n\n# 2 Related Works\n\nSlow Features. One way to encourage temporally adjacent representations to be predictive of each other is to ensure that they vary slowly over time. Early works targeting predictive features encouraged representations of individual video frames to be locally temporally invariant, while preventing representation collapse by using spectral methods, as in SFA (Wiskott and Sejnowski, 2002), SSA (Kayser et al., 2001), and Simulated Fixations (Zou et al., 2012). More recently, Goroshin et al. (2015); Wang et al. (2010) train a siamese convolutional network to map the representations of two subsequent frames to the same point, while encouraging distant frames to have diverse representations via a pairwise margin loss and a triplet loss, respectively. Other works (Oord et al., 2018; Surís et al., 2021; Feichtenhofer et al., 2021) implement temporal invariance using noisecontrastive estimation (Gutmann and Hyvärinen, 2012). Our exploration in this paper goes beyond temporal invariance and explores feature prediction using masked modeling.\n\nPredictive Features. 
Going beyond local invariance, a family of works trains a predictor network to map the representation of a frame or clip at one time-step to a distinct representation at another time-step. Srivastava et al. (2015); Vondrick et al. (2016); Wang et al. (2023b) train such a video feature predictor network on top of a frozen pretrained image or video encoder. Unfreezing the target feature extractor, several methods train the video encoder and the predictor network simultaneously, while preventing collapse by using a supervised action forecasting loss (Girdhar and Grauman, 2021), or by using the representations of distant clips as negative samples in a contrastive loss (Han et al., 2019, 2020; Tan et al., 2023), often focusing on small convolutional encoders (Han et al., 2019, 2020). The idea of learning a representation by predicting missing information in feature space is also core to the joint-embedding predictive architecture (JEPA) (LeCun, 2022), which combines a siamese encoder with a predictor network. JEPAs have been successfully instantiated in several modalities, such as with audio data (Baevski et al., 2022b) and image data (Zhou et al., 2021; Oquab et al., 2023; Assran et al., 2023). In this work, we extend this paradigm to video data by leveraging recent advances in self-supervised learning.\n\nAdvances in Self-Supervised Learning. The use of vision transformers (Dosovitskiy et al., 2020; Li et al., 2022) has become standard practice in self-supervised learning with joint-embedding architectures (Chen et al., 2021; Caron et al., 2021; Oquab et al., 2023; Zhou et al., 2021; Assran et al., 2022), and unlocked masked image modeling in pixel space by parameterizing the pixel decoder as a transformer with learnable mask tokens (Dosovitskiy et al., 2020; Xie et al., 2021; He et al., 2021; Bao et al., 2021), demonstrating a step-change in the representation quality of autoencoding methods (Vincent et al., 2010). 
This line of generative methods was subsequently extended to video data using spatio-temporal masking (Tong et al., 2022; Feichtenhofer et al., 2022; Wang et al., 2023a; Kalluri et al., 2023; Gupta et al., 2023). It was also recently shown that the representations of masked image autoencoders could be significantly improved by using learnable pooling mechanisms based on cross-attention (Chen et al., 2022). Finally, through careful selection of design choices, the non-contrastive collapse prevention strategy in BYOL (Grill et al., 2020) was recently made to work with image feature prediction methods (Baevski et al., 2022b; Assran et al., 2023), which demonstrated the ability to learn representations that can be leveraged for various downstream tasks without relying on invariance to hand-crafted image transformations.", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv3.pdf" - }, - { - "text": "Feature Prediction versus Pixel Reconstruction. Approaches that predict in pixel space must dedicate significant model capacity and compute to capture all the low-level detail in the visual input. By contrast, approaches that predict in latent space have the flexibility to eliminate irrelevant or unpredictable pixel-level details from the target representation (Vondrick et al., 2016). Predicting in representation space has been shown to lead to versatile representations that perform well across many downstream tasks through linear probing or lowshot adaptation (Assran et al., 2023; Oquab et al., 2023; Assran et al., 2022), while demonstrating an efficiency gain during pretraining compared to pixel level reconstruction (Assran et al., 2023; Baevski et al., 2022b,a). The works of Baevski et al. (2022a,b) additionally show that predicting in representation space results in competitive end-to-end fine-tuning performance in the image, audio and text domains. 
In this work, we extend these findings to the video modality.\n\n# 3 Methodology: Video-JEPA\n\nFigure 2 Joint-Embedding Predictive Architectures are trained to predict the representation of an input y from the representation of another input x. The additional variable z provides the predictor with information about the transformation that computes y from x.\n\nOur goal is to explore the effectiveness of feature prediction as a stand-alone objective for learning visual representations from video. To that end, we use a joint-embedding predictive architecture (JEPA) (LeCun, 2022); see Figure 2. The main idea behind a JEPA is to learn by predicting the representation of an input y from the representation of another input x. The basic architecture is made up of an encoder, Eθ(·), which computes the representation of the inputs, and a predictor, Pϕ(·), which predicts the representation of y from the representation of x, conditioned on a variable z indicating the transformation (or corruption) between x and y. Conditioning on z enables the generation of distinct predictions for various transformations of x.\n\n### 3.1 Training Objective\n\nWe train our visual encoder Eθ(·) to satisfy the constraint that representations computed from one part of the video, y, should be predictable from representations\n\ncomputed from another part of the video, x. The predictor network Pϕ(·), which maps the representation of x to the representation of y, is trained simultaneously with the encoder, and is provided specification of the spatio-temporal positions of y through the conditioning variable z ← ∆y.\n\nNaively implementing the objective using the regression\n\n$$\\begin{array}{r l}{{\\mathrm{minimize}_{\\theta,\\phi}}}&{{}\\|P_{\\phi}(E_{\\theta}(x),\\Delta_{y})-E_{\\theta}(y)\\|_{1},}\\end{array}$$\n\nwould admit a trivial solution, where the encoder outputs a constant representation, regardless of its input. 
In practice, we use the following modified objective to prevent representation collapse,\n\nminimize${}_{\\theta,\\phi}\\quad||P_{\\phi}(E_{\\theta}(x),\\Delta_{y})-\\mbox{sg}(\\overline{E}_{\\theta}(y))||_{1},$ (1)\n\nwhere sg(·) denotes a stop-gradient operation, which does not backpropagate through its argument, and Eθ(·) is an exponential moving average of the network Eθ(·). The use of an exponential-moving average feature extractor along with a stop-gradient and a predictor has been used as a collapse prevention strategy for image pretraining (Grill et al., 2020), and studied empirically (Xie et al., 2021) and theoretically (Tian et al., 2021). In fact, the objective in equation (1) is similar to the loss of Assran et al. (2023) used for image pretraining, but we modify it to use an ℓ1 regression, which we found to be more stable.\n\nTheoretical motivation. A theoretical motivation for the effectiveness of this collapse prevention strategy was proposed in Grill et al. (2020) for the BYOL method. We provide a simple adaptation of their analysis for our ℓ1 loss. For ease of exposition, we will disregard the effect of the conditioning variable z and consider one dimensional representations. Denote the representation Eθ(y) by a random variable Y . The optimal predictor under equation (1) is thus given by the following functional expression,\n\n$P^{\\star}(E_{\\theta}(x))=\\text{argmin}_{P}\\|P(E_{\\theta}(x))-Y\\|_{1}$ \n \n$=\\text{median}(Y|E_{\\theta}(x))$. \n \n\nSubstituting this expression for the optimal predictor into the loss function and evaluating the expected gradient of the encoder gives\n\n$$\\nabla_{\\theta}\\mathbb{E}\\|P^{\\star}(E_{\\theta}(x))-Y\\|_{1}=\\nabla_{\\theta}\\mathrm{MAD}(Y|E_{\\theta}(x)),$$\n\nwhere MAD(· |Eθ(x)) is the median absolute deviation of a random variable conditioned on Eθ(x). 
Thus, in the case where the predictor is optimal, the encoder must learn to capture as much information about the video as possible to minimize the deviation of the target. The hypothesis is that incorporating an exponential moving average to compute the representation of y ensures that the predictor evolves faster than the encoder and remains close to optimal, thereby preventing collapse.", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv3.pdf" - }, - { - "text": "Figure 1. A schematic illustration of a hierarchical active inference model. This model links (exteroceptive, interoceptive, and proprioceptive) sensations at lower levels with multimodal models of hidden bodily states, such as fatigue and hunger, at intermediate levels, and fnally with temporally extended, integrative models of the embodied self at the higher hierarchical level. In this schematic, following predictive coding (Rao and Ballard 1999, Friston 2005), black and red circles represent neural units that encode predictions and prediction errors, respectively. The levels are reciprocally connected, so predictions are propagated from the top-down (black edges) and prediction errors from the bottom-up (red edges). Finally, the pink triangles indicate a mechanism of precision gating (or gain control) of prediction error units, which determines their relative infuence on units encoding predictions. At a neurobiological level, prediction and prediction error units could be mapped to deep and superfcial pyramidal cells in cortical hierarchies, whereas expected precision could be linked to neuromodulatory input. The elements of the generative model shown do not need to map one-to-one to specifc brain areas or networks but are plausibly distributed across many of them. However, as a frst approximation, the lower and intermediate layers of the generative model could be linked to brain networks that process unimodal information (e.g. 
sensory cortices for exteroceptive information) and multimodal association areas, respectively. The highest level of the generative model could be linked to brain networks that process information about the self, such as the insular cortex, the anterior cingulate cortex, and the medial prefrontal cortex. See Parr et al. (2022) for details about hierarchical generative models supporting adaptive regulation and allostasis and Barrett and Simmons (2015) for their putative neuronal underpinnings. See online article for colored version of this fgure.\n\nare reciprocally linked through top-down connections that convey predictions (black edges) and bottom-up connections that convey prediction errors (red edges), within and across levels. This predictive coding architecture permits inferring (in the Bayesian sense) the most likely causes of sensations, across multiple modalities and multiple hierarchical levels, by minimizing prediction errors at all levels. The rationale is that predictions at all levels are continuously adjusted (and synaptic weights adjusted at a slower time scale) until they match with incoming multimodal stimuli suffciently well, and, consequently, the prediction errors across all levels are minimized. This process entails that even if a predictive coding agent starts with an incorrect prediction (e.g. about what object it is looking at) the prediction errors that measure a discrepancy between the predicted sensations and the actual sensations can help revise the initial predictions. See Parr et al. (2022) for a more detailed explanation of how to interpret these schematics.\n\nAnother critical aspect of Fig. 1 is that it illustrates two pathways in which prediction errors at the proprioceptive and interoceptive levels are used to steer physical actions (refex arcs) and autonomic actions (autonomic refexes). 
Endowing predictive coding with these refexes—hence realizing an \"active inference\" architecture—permits minimizing prediction errors by changing the state of the world (by physically acting) or the internal milieu (by engaging in autonomic actions) rather than only by changing predictions, as described later.\n\nEquipped with a generative model like the one shown in Fig. 1, an active inference agent can continuously infer (and act upon) the state of the world and of the body, including the internal milieu, at multiple time scales. Of particular interest, here are multimodal inferences that unite exteroceptive and interoceptive sources of evidence. One example of this is the perception of faces expressing emotions. Two studies reported that", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed1.pdf" - }, - { - "text": "# Revisiting Feature Prediction for Learning Visual Representations from Video\n\nAdrien Bardes1,2,3 , Quentin Garrido1,4 , Jean Ponce3,5,6 , Xinlei Chen1 , Michael Rabbat1 , Yann LeCun1,5,6 , Mahmoud Assran1,† , Nicolas Ballas1,†\n\n1FAIR at Meta, 2 Inria, 3École normale supérieure, CNRS, PSL Research University, 4Univ. Gustave Eiffel, CNRS, LIGM, 5Courant Institute, New York University, 6Center for Data Science, New York University † Joint last author\n\nThis paper explores feature prediction as a stand-alone objective for unsupervised learning from video and introduces V-JEPA, a collection of vision models trained solely using a feature prediction objective, without the use of pretrained image encoders, text, negative examples, reconstruction, or other sources of supervision. The models are trained on 2 million videos collected from public datasets and are evaluated on downstream image and video tasks. Our results show that learning by predicting video features leads to versatile visual representations that perform well on both motion and appearance-based tasks, without adaption of the model's parameters; e.g., using a frozen backbone. 
Our largest model, a ViT-H/16 trained only on videos, obtains 81.9% on Kinetics-400, 72.2% on Something-Something-v2, and 77.9% on ImageNet1K.\n\n### Date: April 15, 2024\n\nCorrespondence: {abardes, massran, ballasn}@meta.com Code: https://github.com/facebookresearch/jepa Blogpost: Click here\n\n# 1 Introduction\n\nHumans possess the remarkable ability to map low-level signals originating from the retina into a semantic spatiotemporal understanding of the world; synthesizing notions such as objects and global motion (Spelke et al., 1995). A long-standing goal of the machine learning community is to identify the principles or objectives that may guide such unsupervised learning in humans (Field, 1994; Berkes and Wiskott, 2005; Hinton, 1989). One related hypothesis is based on the predictive feature principle (Rao and Ballard, 1999), which posits that representations of temporally adjacent sensory stimuli should be predictive of each other.\n\nIn this work, we revisit feature prediction as a standalone objective for unsupervised learning of visual representations from video. Numerous advances in the field such as the standard use of transformer architectures in vision (Dosovitskiy et al., 2020), the maturing of masked autoencoding frameworks (Xie et al., 2021; Bao et al., 2021; He et al., 2021), query-based feature pooling (Chen et al., 2022), joint-embedding predictive architectures (JEPA) (LeCun, 2022; Assran et al., 2023; Baevski et al., 2022b), and larger datasets — form a unique arsenal of tools, which we integrate in a modern and conceptually simple method, the video joint-embedding predictive architecture or V-JEPA, which is based solely on feature prediction, without using pretrained image encoders, text, negative examples, human annotations, or pixel-\n\n### Frozen Evaluation\n\nFigure 1 V-JEPA models pretrained on video learn versatile visual representations. 
It performs well on motion-based tasks (Something-Something-v2) and appearance-based tasks (Kinetics 400) without adaptation of the model's parameters, i.e., using the same frozen backbone for both tasks.\n\nlevel reconstruction.\n\nWe seek to answer the simple question:\n\nHow effective is feature prediction as a standalone objective for unsupervised learning from video with modern tools?", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv3.pdf" - }, - { - "text": "Figure 3 V-JEPA. Training operates on a video clip of T frames with spatial resolution H × W, flattened into a sequence of L tokens. (Left to right): We first obtain the input of the x-encoder by dropping tokens from the video clip. The x-encoder then processes the masked video sequence, and outputs an embedding vector for each input token. Next, the outputs of the x-encoder are concatenated with a set of learnable mask tokens containing positional embeddings of the masked spatio-temporal patches. The predictor network processes the combined token sequence, and outputs an embedding vector for each mask token. The outputs of the predictor are then regressed to the prediction targets using an L1 loss. The prediction targets correspond to the output of the y-encoder.\n\n### 3.2 Prediction Task: Predicting y from x\n\nThe feature prediction task is based on a masked modeling formulation (He et al., 2021; Tong et al., 2022); i.e., regions x and y from the video are sampled using masking. To sample y from a video, we sample several (possibly overlapping) spatially continuous blocks with various aspect ratios and repeat the spatial blocks across the entire temporal dimension of the video; x is taken to be the complement. 
Masking a large continuous block that covers the full temporal dimension limits information leakage due to the spatial and temporal redundancy of videos, and results in a harder prediction task (Tong et al., 2022).\n\nWe leverage two types of masks: short-range masks, where we take the union of 8 randomly sampled target blocks covering 15% of each frame, and long-range masks, where we take the union of 2 randomly sampled target blocks covering 70% of each frame. In both cases, the aspect ratio for all sampled blocks is randomly chosen in the range (0.75, 1.5). Given that both short-range and long-range masks are produced by sampling many blocks and taking their union, the result is an average masking ratio of ∼ 90%. We refer to our masking strategy as multi-block, and compare it to other possible masking strategies in Section 4.\n\n### 3.3 Network Parameterization\n\nWe use a Vision Transformer (ViT) (Dosovitskiy et al., 2020; Arnab et al., 2021) as our video backbone. To process a video with a transformer network, we split the video clip into a 3D grid of L spatio-temporal patches, where a patch consists of a 16 × 16 pixel block spanning 2 consecutive frames; we refer to these spatio-temporal patches as tokens. This sequence of tokens is then directly processed by the stack of transformer blocks. In-\n\nputs x and y correspond to masked regions of a video, we apply the video masks by simply dropping a subset of the tokens. We apply masking at the input of the x-encoder, and at the output of the y-encoder to construct contextualized targets (Baevski et al., 2022b). The encoder is parameterized using standard ViT networks, while the predictor is a narrow transformer implemented using 12 blocks with an embedding dimension of 384. 
Taking inspiration from masked autoencoders (He et al., 2021), our predictor takes as input the sequence of embeddings produced by the x-encoder as well as a sequence of learnable mask tokens with positional embeddings indicating the spatio-temporal positions of the y tokens. The output of the predictor is an embedding vector for each mask token; see Figure 3 and refer to Appendix B for more details.\n\n### 3.4 Pretraining Data and Evaluation Setup\n\nPretraining. We combine several public datasets to construct an unsupervised video pretraining dataset, which we refer to as VideoMix2M. Specifically, we combine the videos from HowTo100M (HT) (Miech et al., 2019), Kinetics-400/600/700 (K710) (Kay et al., 2017), and Something-Something-v2 (SSv2) (Goyal et al., 2017), and remove any overlap with the validation sets of Kinetics-400/600/700 and Something-Something-v2, resulting in approximately 2 million videos. We train a ViT-L/16, a ViT-H/16, and a ViT-H/16384 transformer model on VideoMix2M. We use a batch size of 3072 for the ViT-L/16 and ViT-H/16 models, and a batch size of 2400 for the ViT-H/16384 model. Each model takes as input a video clip of 16 frames sampled with a frameskip of 4, corresponding to roughly 3 second clips on average. The ViT-L/16 and ViT-H/16 process the video at a spatial resolution of 224, while the ViT-H/16384 uses an input resolution of 384; cf. Appendix C.", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv3.pdf" - }, - { - "text": "**Frozen**\n\n(a) Visualization Methodology. We train a conditional diffusion model to decode the V-JEPA feature-space predictions to interpretable pixels; the pretrained V-JEPA encoder and predictor networks are kept frozen in this process. The decoder is only fed the representations predicted for the missing regions of the video, and does not have access to the unmasked regions of the video.\n\n(b) Visualizations. 
First Row: Masked videos used as input to the V-JEPA models (a pretrained ViT-H/16 encoder and its corresponding predictor network). Other rows: Bounding boxes contain various samples from the decoder overlayed on the original video. V-JEPA is not a generative model and the decoder does not have access to the context (first row), so we do not expect samples to exactly match the input. This experiment qualitatively illustrates what information is encoded and predicted by V-JEPA. In particular, characteristics that are common across samples represent information that is encoded in the V-JEPA predictions. V-JEPA generates predictions that are spatially and temporally coherent with unmask region of the video. The predictions also capture consistent motion through time.\n\nFigure 6 Qualitative Analysis. Offline visualizations of the V-JEPA feature-space predictions.\n\n# 7 Conclusion\n\nIn this work, we explored the effectiveness of feature prediction as a stand-alone objective for unsupervised learning from video and introduced V-JEPA, a collection of vision models trained solely using a self-supervised feature prediction objective. The V-JEPA models demonstrate the ability to solve various downstream image and video tasks without adaption of the model parameters, and outperform previous video representation learning approaches in frozen evaluation on action recognition, spatio-temporal action detection, and image classification tasks. Additionally, we show that pretraining V-JEPA on videos is particularly effective for solving downstream tasks requiring fine-grained motion understanding, while large-scale image models trained on internet scale datasets fall short on such tasks. 
Finally, we empirically observed that V-JEPA models are label-efficient learners, and exhibit good performance on downstream tasks, even when only few labeled examples are available.\n\n# References\n\n- Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. Advances in Neural Information Processing Systems, 34:24206–24221, 2021.", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv3.pdf" - }, - { - "text": "performed an outlier check, labeling images as a 'low-quality outlier' if the correlation coefficient was >3 s.d. from the absolute mean. None of our scans were flagged as outliers. The reconstructed participant files were aggregated into one connectometry database per metric.\n\n*Day2Day control dataset*. To compare our findings against a control group of nonpregnant densely-sampled individuals, we used the Day-2Day dataset23 which offered comparable whole-brain T1 and T2 MTL scans for eight participants (two male) scanned 12–50 times over 2–7 months. Each participant was run through the ANTs CT and ASHS processing pipelines as outlined above ('Cortical volume and thickness' and 'Hippocampal segmentation'). To note, for each participant, we created an SST based on their first two sessions for consistency with the primary dataset; subfield volumes for the T2 MTL scans did not undergo manual retouching. Due to missing header information on the publicly available diffusion scans, we were unable to benchmark our white matter changes with the Day2Day dataset.\n\n**Statistical analysis.** Statistical analyses were conducted using R (sMRI; version 3.4.4) and DSI Studio (dMRI; Chen-2022-07-31).\n\n*Summary brain metrics*. To reflect the existing literature, we first explored brain metrics across the entire study duration (prepregnancy through postpartum, *n* = 26 scans). 
When including all sessions, total brain volume, GMV, CT, global QA, ventricle volume and CSF displayed nonlinear trends over time; therefore, we used generalized additive models (GAM; cubic spline basis, *k* = 10, smoothing = GCV), a method of nonparametric regression analysis (R package, mgcv76), to explore the relationship between summary brain metrics (outcome variables) and gestation week (smooth term). Each model underwent examination (gam.check function) to ensure it was correctly specified with regards to (1) the choice of basis dimension (*k*) and (2) the distribution of model residuals (see mgcv documentation in ref. 76). The general pattern of results held after toggling model parameters; however, we note the risk of overinterpreting complex models with small sample sizes77. To address overfitting and cross-validate our basis type selection, we also fit the data using nonpenalized general linear models (GLM) with both linear and polynomial terms for gestation week. We compared the performance of each GLM (that is, models using only a linear term versus models with polynomial terms) via the Akaike information criterion (AIC), which revealed that cubic models consistently outperformed both linear and quadratic models (AICdiff > 3), providing additional evidence for nonlinear changes in structural brain variables over time. Determining whether these patterns replicate in larger cohorts and whether complex models are better suited to capture data patterns across individuals will be a necessary next step.\n\n*Cortical GMV and CT*. We then narrowed our analyses to the first 19 sessions (baseline—36 weeks gestation) to assess novel brain changes occurring over the gestational window. We first computed Pearson's product-moment correlation matrices between the following variables: gestation week, estradiol, progesterone and the 17 network-level average GMV values. We then ran a multivariate regression analysis predicting ROI-level GMV changes by gestation week. 
To identify which regions were changing at a rate different from the global decrease, we then ran the analyses again to include total GMV in the regression model (Supplementary Table 2). This was extended to the network level, where we ran partial correlations accounting for total GMV. These same analyses were then run with CT measures. Globally-corrected results provided in Supplementary Tables 1–5. Percent change at the network level was computed by subtracting the final pregnancy value (36 weeks pregnant) from the first prepregnancy baseline value, then dividing that difference by said first prepregnancy baseline value. All analyses underwent multiple comparisons testing (false discovery rate (FDR)-corrected at *q* < 0.05).\n\n*Subcortical GMV*. A similar statistical approach was taken for subcortical volume estimates. We ran a multivariate regression analysis predicting GMV changes over gestation in 28 ROIs (Supplementary Fig. 6a) by gestation week (FDR-corrected at *q* < 0.05).\n\nTo evaluate the relationship between gestation week and MTL subregion volume over pregnancy (*n* = 7 bilateral subregions and *n* = 18 MTL scans), we used a combination of linear and nonlinear models based on individual subregion data patterns. Models were compared for best fit with each subregion via AIC from the GLM output (as described in 'Summary brain metrics'). A linear regression model was most appropriate for PHC (AICdiff < 3), whereas a quadratic model performed best for CA1 and CA2/CA3. As a control, we repeated the analyses with MTL subregion volumes after proportional volume correction of total GMV calculated by ASHS. Finally, we evaluated the relationship between endogenous sex hormones (estrogen and progesterone) and subregion volumes using linear regression. Relationships were considered significant only if they met FDR correction at *q* < 0.05.\n\n*White matter microstructure*. 
DSI Studio's correlational tractography74 was used to analyze the relationship between white matter structure and gestational week (*n* = 16). A truncated model was run to examine the relationship between white matter and sex steroid hormones (*n* = 14) for the subset of diffusion scans with paired endocrine data during gestation. A nonparametric Spearman's correlation was used to derive the correlation between gestational week and endocrine factors and our metrics of interest (QA and MD; see Supplementary Table 9 and Supplementary Fig. 10 for MD results) because the data were not normally distributed. Statistical inference was reached using connectometry, a permutation-based approach that tests the strength of coherent associations found between the local connectome and our variables of interest. It provides higher reliability and replicability by correcting for multiple comparisons. This technique provides a high-resolution characterization of local axonal orientation. The correlational tractography was run with the following parameters: *t* score threshold of 2.5, four pruning iterations and a length threshold of 25 voxel distance. To estimate the FDR, a total of 4,000 randomized permutations were applied to obtain the null distribution of the track length. Reported regions were selected based on FDR cutoff (FDR < 0.2, suggested by DSI Studio), and contained at least ten tracts. For visualization of global and tract QA at each gestational stage, mean QA values were extracted using DSI Studio's whole-brain fiber tracking algorithm and ROI-based tracking using the default HCP842 atlas78.\n\n*Day2Day dataset: measurement variability*. To establish a marker of normative variability over half a year, we computed metrics of measurement variability using the Day2Day dataset23, which provided both whole-brain T1 and high-resolution T2 MTL scans. 
For each region, *j*, of the Schaefer parcellation, we assessed across-session variability, *ε*, as\n\n$$\\varepsilon_{j}=100\\times\\mathrm{mean}\\left({\\frac{|t_{s}-{\\hat{t}}|}{{\\hat{t}}}}\\right)$$\n\nWhere *ts* is the morphometric measurement of a parcel for session *s* and *t* ̂ is the mean of *t* across sessions55,79. Thus, we defined variability as the mean absolute percent difference between each individual and the mean across sessions. Across-session variability estimates for all 400 regions were then averaged across eight participants, and a global measure of cortical GMV variability was computed by averaging across the 400 regions. This approach was repeated independently for the T2 hippocampal scans, wherein we computed across-session variability for each parcel of the ASHS parcellation scheme (*n* = 7 bilateral subfields). However, it is important to note that raw subfield values (that is, no manual retouching) were used for Day2Day variability assessments and should be interpreted with caution. 
Finally, to better compare against our own data, we repeated this approach using our", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed4.pdf" - }, - { - "text": "Bayesian networks[92] are a tool that can be used for reasoning (using the Bayesian inference algorithm),[g][94] learning (using the expectation–maximization algorithm),[h][96] planning (using decision networks) [97] and perception (using dynamic Bayesian networks).[90]\n\nProbabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[90]\n\nA simple Bayesian network, with the associated conditional probability tables\n\n### **Classifiers and statistical learning methods**\n\nThe simplest AI applications can be divided into two types: classifiers (e.g., \"if shiny then diamond\"), on one hand, and controllers (e.g., \"if diamond then pick up\"), on the other hand. Classifiers[98] are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an \"observation\") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. 
When a new observation is received, that observation is classified based on previous experience.[45]\n\nExpectation–maximization clustering of Old Faithful eruption data starts from a random guess but then successfully converges on an accurate clustering of the two physically distinct modes of eruption.\n\nThere are many kinds of classifiers in use.[99] The decision tree is the simplest and most widely used symbolic machine learning algorithm.[100] K-nearest neighbor algorithm was\n\nthe most widely used analogical AI until the mid-1990s, and Kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s.[101] The naive Bayes classifier is reportedly the \"most widely used learner\"[102] at Google, due in part to its scalability. [103] Neural networks are also used as classifiers.[104]\n\n#### **Artificial neural networks**\n\nAn artificial neural network is based on a collection of nodes also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input, at least one hidden layer of nodes and an output. Each node applies a function and once the weight crosses its specified threshold, the data is transmitted to the next layer. A network is typically called a deep neural network if it has at least 2 hidden layers.[104]", - "page_start": 6, - "page_end": 6, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm. [105] Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. 
In theory, a neural network can learn any function.[106]\n\nIn feedforward neural networks the signal passes in only one direction.[107] Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short term memory is the most successful network architecture for recurrent networks.[108] Perceptrons[109] use only a single layer of neurons; deep learning[110] uses multiple layers. Convolutional neural networks strengthen the connection\n\nA neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.\n\nbetween neurons that are \"close\" to each other—this is especially important in image processing, where a local set of neurons must identify an \"edge\" before the network can identify an object.[111]\n\n#### **Deep learning**\n\nDeep learning[110] uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higherlevel features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces.[112]\n\nDeep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification, [113] and others. 
The reason that deep learning performs so\n\nwell in so many applications is not known as of 2023.[114] The sudden success of deep learning in 2012– 2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s)[i] but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet. [j]\n\n#### **GPT**\n\nGenerative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pretrained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia3.pdf" - }, - { - "text": "ing the temporal dynamics of belief changes in experimental participants. Dynamic belief trajectories can then be related to other (for example, physiological) measures, as is usual in model-based neuroscience [65]. This method can also, in principle, be used for fitting models to other types of experimentally observable systems, like animals, organoids [66], and simulated or emergent systems [67]. 
The package can also be used for agent-based modelling in general, for repeating earlier analyses with sampling based model-fitting and for comparing POMDP-based AIF models directly to other types of models.\n\nSince they implement full approximate Bayesian inferences, AIF models are computationally more demanding than many approaches traditionally used in cognitive and agent-based modelling, in particular when the dimensionality of the generative model is large. This means that models with highly multidimensional or complex behaviour and large numbers of agents can be computationally infeasible to implement, especially given the additional computational demands introduced by fitting these models to empirical data. Avenues for addressing this implicit scaling problem were proposed in the context of machine learning applications [68,69], and with the use of simplifying assumptions—the use of which are ubiquitous in computational modelling—AIF has been used to model multi-agent phenomena, such as opinion dynamics [15,70], coordinated foraging [71] and fish school movements [12]. It remains to be explored how AIF models can be applied to highly complex natural phenomena, such as a concrete election, which underscores the need for efficient but flexible and accessible software tools in the field.\n\nThere are many ways in which ActiveInference can be improved. It would be useful to extend the set of dynamic belief states to include prediction errors since they are often used for model-based neuroscience. This would entail departing from discrete state-space (i.e., POMDP) models to consider continuous state-space models apt for Bayesian filtering or predictive coding (see below). An alternative would be to generate prediction errors from belief updating under discrete models, where prediction errors can be read as the (KL) divergence between posterior and prior beliefs (i.e., complexity or information gain). 
A simple interface could be added for creating custom parametrisations of the requisite parameters that could be parametrised with Boltzmann or Gibbs distributions, as opposed to Dirichlet distributions. Parameter learning could be extended to all generative model parameters, as well as in parametrised forms (e.g., so that the Boltzmann parameter or temperature of the parameters that are learned); similarly for the precision over expected free energies *γ*. Preference priors should also be implementable for environmental states, in addition to observations, and **A** can be made action dependent.\n\nA library of pre-made canonical POMDP models could be created so that users can easily implement them directly. Alternatives to the fixed-point iteration method for updating posteriors over environmental states could be included, like the marginal message passing algorithm. There are various ways in which the package can be made more computationally efficient, and it could be compared with other software implementations. There are plenty of utility and plotting functions that could be added to the package to make it easier to use and to facilitate integration with the model-fitting packages it relies on; for example, to allow for combining the models with linear regressions to compare parameters values of different populations in a single model. More complex types of POMDP models can also be added, like hierarchical and temporally deep POMDPs. Model structure learning could be considered, where different model structures are compared and chosen between by evaluating their free energies. Sophisticated inference, where predictions are also made about changes in one's own beliefs—depending on expected action-dependent observations in the future—could also be implemented [58]. 
Finally, the package could be extended to other types of generative models than POMDPs, including other universal models, like generalised filtering [17] and Hierarchical Gaussian Filter models [41], as well as custom", - "page_start": 28, - "page_end": 28, - "source_file": "pubmed7_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv3.pdf", - "query": "What does mean the JEPA acronym ?", - "target_page": 3, - "target_passage": " joint-embedding predictive architecture (JEPA)", - "chunk_present": { - "presence": true, - "index": 9 - } - }, - "top_chunk": [ - { - "text": "**Frozen**\n\n(a) Visualization Methodology. We train a conditional diffusion model to decode the V-JEPA feature-space predictions to interpretable pixels; the pretrained V-JEPA encoder and predictor networks are kept frozen in this process. The decoder is only fed the representations predicted for the missing regions of the video, and does not have access to the unmasked regions of the video.\n\n(b) Visualizations. First Row: Masked videos used as input to the V-JEPA models (a pretrained ViT-H/16 encoder and its corresponding predictor network). Other rows: Bounding boxes contain various samples from the decoder overlayed on the original video. V-JEPA is not a generative model and the decoder does not have access to the context (first row), so we do not expect samples to exactly match the input. This experiment qualitatively illustrates what information is encoded and predicted by V-JEPA. In particular, characteristics that are common across samples represent information that is encoded in the V-JEPA predictions. V-JEPA generates predictions that are spatially and temporally coherent with unmask region of the video. The predictions also capture consistent motion through time.\n\nFigure 6 Qualitative Analysis. 
Offline visualizations of the V-JEPA feature-space predictions.\n\n# 7 Conclusion\n\nIn this work, we explored the effectiveness of feature prediction as a stand-alone objective for unsupervised learning from video and introduced V-JEPA, a collection of vision models trained solely using a self-supervised feature prediction objective. The V-JEPA models demonstrate the ability to solve various downstream image and video tasks without adaption of the model parameters, and outperform previous video representation learning approaches in frozen evaluation on action recognition, spatio-temporal action detection, and image classification tasks. Additionally, we show that pretraining V-JEPA on videos is particularly effective for solving downstream tasks requiring fine-grained motion understanding, while large-scale image models trained on internet scale datasets fall short on such tasks. Finally, we empirically observed that V-JEPA models are label-efficient learners, and exhibit good performance on downstream tasks, even when only few labeled examples are available.\n\n# References\n\n- Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. Advances in Neural Information Processing Systems, 34:24206–24221, 2021.", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv3.pdf" - }, - { - "text": "**SOTA fine-tuned task-specific model on SSv2 (MVD)**\n\nFigure 4 SSv2 fine-tuning performance vs. Samples Seen. We report SSv2 fine-tuning for V-JEPA and pixel-reconstruction baselines using a ViT-L/16 or Hiera-L architecture. 
V-JEPA outperforms all pixel-reconstruction methods using a ViT-L/16 and matches the Hiera-L performance while seeing significantly less samples during pretraining.\n\nageNet; hence, V-JEPA achieves comparable ImageNet performance despite only pretraining on video.\n\nUnder the fine-tuning protocol, V-JEPA also achieves the best performance of any model trained with a ViT-L/16, and matches the performance of the Hiera-L on SSv2, which benefits from a hierachical prior (Ryali et al., 2023). The V-JEPA models achieve this result while processing significantly fewer samples during pretraining (Figure 4), demonstrating the efficiency of feature prediction as a learning principle.\n\n### 5.2 Comparison with State-of-the-Art\n\nNext, in Table 6, we inspect how the V-JEPA models pretrained on video stack up next to the largest stateof-the-art self-supervised image and video models when freezing the backbone encoder and training an attentive probe on top. Our image pretrained baselines include OpenCLIP (Cherti et al., 2023), DINOv2 (Oquab et al., 2023), and I-JEPA (Assran et al., 2023). The Open-CLIP model is trained with a contrastive image-text alignment objective, DINOv2 and I-JEPA are trained with self-supervision. These models are known to excel in their frozen-evaluation performance (Oquab et al., 2023); i.e., their ability to produce visual features that can be applied to many downstream tasks simultaneously, without end-to-end fine-tuning, and thus provide highly competitive baselines. Our video pretrained baselines include VideoMAE (Tong et al., 2022), Omni-MAE (Girdhar et al., 2023), Hiera (Ryali et al., 2023), VideoMAEv2 (Wang et al., 2023a), and MVD (Wang et al., 2023b). The OpenCLIP, DINOv2 and Video-MAEv2 models are parameterized as Giant/Gigantic vision transformer architectures containing over 1B parameters trained on large-scale image or video datasets.\n\nComparison with video models. 
Compared to large-scale video baselines, the V-JEPA models outperform all previous models on every downstream video\n\nFigure 5 SSv2 frozen-evaluation performance vs. Pretraining Time. Wallclock times for all methods are measured on a single GPU with a batch size of 10 clips, using the official codebases for VideoMAE and VideoMAEv2, and linearly extrapolated assuming a global batch size of 2400 samples. However, note that the SSv2 accuracies of video pixel prediction methods are actually obtained with small batch sizes and significantly longer training schedules. V-JEPA outperforms pixel-reconstruction methods while training significantly faster.\n\nand image task with notable margin (see Table 6). Our H/16 model outperforms the largest publicly available VideoMAE, VideoMAEv2, OmniMAE, MVD, and Hiera models by at least +5 points in motion understanding (Something-Something-v2), +2 points in action recognition (Kinetics-400), +5 points on action detection (AVA), +1 point on object recognition (ImageNet-1K), +2 points in scene recognition (Places205), and +0.2 points on finegrained recognition (iNaturalist). Moreover, when comparing pretraining wallclock time in Figure 5, we see that V-JEPA achieves this performance with a roughly 2× speedup compared to the large pixel prediction models.\n\nComparison with image models. On tasks that require a fine-grained understanding of motion (Something-Something-v2), the V-JEPA models provide a major improvement (over +21 points) compared to large-scale image baselines, such as DINOv2, OpenCLIP, and I-JEPA. Self-supervised pretraining from videos allows to model dynamic concepts that are not easily learned from static image datasets. 
Similarly, we observe that the V-JEPA models outperform image-based pretraining on action localization.\n\nOn Kinetics-400, we find image models to perform well; e.g., while DINOv2 (Oquab et al., 2023) previously reported 78.4% on K400 with a linear probe, we improve the frozen evaluation of the g/14 model to 83.4% by using an attentive probe. In this case, our H/16 model achieves 82.0% top-1 accuracy. It is worth noting that the label for many Kinetics videos can be inferred using appearance-based cues, without requiring an understanding of motion (Sevilla-Lara et al., 2021).\n\nThe V-JEPA models narrow the gap with image models on image classification tasks. In particular, V-JEPA achieves a score of 77.4% on ImageNet using a one-", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv3.pdf" - }, - { - "text": "Table 7 Low-Shot Frozen Evaluation. Comparing V-JEPA to other video models in frozen evaluation on Kinetics-400 and Something-Something-v2 as we vary the percentage of labeled examples from each dataset available for training the attentive probe. We train the probes in several low-shot settings: using either 5% of the train set, 10%, or 50%, and take 3 random splits in each setting to obtain more robust metrics, resulting in 9 different evaluation experiments for each model. We report the mean performances and standard deviation using the K400 and SSv2 validation sets. V-JEPA is more label-efficient than other models; specifically, decreasing the available number of labeled examples from each class increases the performance gap between V-JEPA and the baselines.\n\n| | | | | Frozen Evaluation | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | K400 | | SSv2 | | |\n| | | | (16×8×3) | | (16×2×3) | | |\n| | | 5% | 10% | 50% | 5% | 10% | 50% |\n| Method | Arch. 
| (∼29 samples per class) | (∼58 samples per class) | (∼287 samples per class) | (∼48 samples per class) | (∼96 samples per class) | (∼440 samples per class) |\n| MVD | ViT-L/16 | 62.6 ± 0.2 | 68.3 ± 0.2 | 77.2 ± 0.3 | 42.9 ± 0.8 | 49.5 ± 0.6 | 61.0 ± 0.2 |\n| VideoMAE | ViT-H/16 | 62.3 ± 0.3 | 68.5 ± 0.2 | 78.2 ± 0.1 | 41.4 ± 0.8 | 48.1 ± 0.2 | 60.5 ± 0.4 |\n| VideoMAEv2 | ViT-g/14 | 37.0 ± 0.3 | 48.8 ± 0.4 | 67.8 ± 0.1 | 28.0 ± 1.0 | 37.3 ± 0.3 | 54.0 ± 0.3 |\n| V-JEPA | ViT-H/16 | 67.0 ± 0.2 | 72.1 ± 0.1 | 80.2 ± 0.2 | 51.9 ± 0.3 | 57.5 ± 0.4 | 67.3 ± 0.2 |\n| | ViT-H/16384 | 68.2 ± 0.2 | 72.8 ± 0.2 | 80.6 ± 0.2 | 54.0 ± 0.2 | 59.3 ± 0.5 | 67.9 ± 0.2 |\n\nlayer attentive probe, which can be further improved to 77.9% using a two-layer attentive probe. More generally, we hypothesize that the datasets used to train V-JEPA and other video models are too constrained and lack the visual diversity of the internet-scale pretraining data used by the images models; as such, there is value in focusing future work on building diverse publicly available video datasets.\n\n# 5.3 Label-efficiency\n\nWe examine the label-efficiency of V-JEPA compared to other self-supervised video models by measuring the ability of the pretrained backbones to adapt to downstream tasks with few labels. Specifically, we investigate the performance of the frozen models on Kinetics-400 and Something-Something-v2 as we vary the percentage of labeled examples from each dataset available for training the attentive probe. We train the probes in several lowshot settings: using either 5% of the train set, 10%, or 50%, and take 3 random splits in each setting to obtain more robust metrics, resulting in 9 different evaluation experiments for each model. 
Table 7 reports the mean performances and standard deviation using the K400 and SSv2 validation sets.\n\nWe find V-JEPA to be more label-efficient than other self-supervised video models: decreasing the available number of labeled examples for training the attentive probe results in an increase in the performance gap between V-JEPA and the other models. In particular, the performance of the largest V-JEPA model on K400 drops by 12% to 68.2% top-1 when we reduce the number of labeled examples by a factor of 10× (from roughly 287 examples per class to 29 examples per class). By contrast, VideoMAEv2 drops by 30% to 37.0% top-1, VideoMAE drops by 15.9% to 62.3% top-1, and MVD drops by 14.6% to 62.6% top-1.\n\nSimilar observations hold on SSv2. The performance of the largest V-JEPA model on SSv2 drops by 13.9%\n\nto 54.0% top-1 when we reduce the number of labeled examples by a factor of 10× (from roughly 440 examples per class to 48 examples per class). By contrast, Video-MAEv2 drops by 26% to 28.0% top-1, VideoMAE drops by 19.1% to 41.4% top-1, and MVD drops by 18.1% to 42.9% top-1.\n\n# 6 Evaluating the Predictor\n\nNext, we seek to qualitatively inspect the V-JEPA models. Recall that the predictor network in V-JEPA predicts the representations of a masked spatio-temporal region y from a visible region x, given the positional information of the masked regions (see Section 3). To qualitatively investigate the grounding of the feature-space predictions, we freeze the pretrained encoder and predictor networks and train a conditional diffusion decoder to map the V-JEPA predictions to interpretable pixels. Notably, the decoder is only fed the representations predicted for the missing regions of the video, and does not have access to the unmasked regions of the video (see Figure 6a).\n\nGiven a masked video, we use the V-JEPA pretrained models to predict the representations of the missing regions, and then use the decoder to project the representations to pixel space. 
Figure 6b shows decoder outputs for various random seeds. Qualities that are common across samples represent information that is contained in the predictor representation.\n\nFigure 6b shows that the V-JEPA feature predictions are indeed grounded, and exhibit spatio-temporal consistency with the unmasked regions of the video. Specifically, the samples in Figure 6b show that the V-JEPA predictor correctly captures positional uncertainty and produces a variety of visual objects at various locations with consistent motion. Some of the samples also demonstrate an understanding of object-permanence, as the visual objects remain consistent after partial occlusion.", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv3.pdf" - }, - { - "text": "FIG. 8: XTEJ1752-223 light curve. Horizontal scale is in modified Julian days.\n\n- [1] C. Meegan et al., Ap. J. 702, 791 (2009).\n- [2] C. Wilson-Hodge et al. (2010), these proceedings.\n- [3] B. A. Harmon et al., Ap. J. Suppl. 138, 149 (2002).\n- [4] B. A. Harmon et al., Ap. J. Suppl. 154, 585 (2004).\n- [5] G. L. Case et al., in The First GLAST Symposium, edited by S. Ritz, P. Michelson, and C. Meegan (2007), vol. 921 of AIP Conf. Proceedings, p. 538.\n- [6] J. Tueller et al. (2010), ap. J. Suppl., (to be published), astro-ph/0903.3037.\n- [7] J. C. Ling and W. A. Wheaton, Ap. J. 598, 334 (2003).\n- [8] E. Jourdain and J. P. Roques, Ap. J. 704, 17 (2009).\n- [9] H. Steinle et al., Astron. and Astrophys. 330, 97\n\n12-25 keV band, where the flux initially rose to about 240 mCrab (2009 Oct 25-28), suddenly dropped to non-detectable on 2009 October 29-30, then rose again during the period 2009 October 31 to November 2. As of mid December 2009, the source remains in a high intensity state. The light curve is shown for the period MJD 54700-55200, again with 1-day resolution, in Fig. 8. 
The fluxes for XTE J1752-223 in Table 1 are given are for the interval of flaring activity, TJD 55130-55180.\n\n#### Acknowledgments\n\nThis work is supported by the NASA Fermi Guest Investigator program. At LSU, additional support is provided by NASA/Louisiana Board of Regents Cooperative Agreement NNX07AT62A.\n\n(1998).\n\n- [10] M. McConnell et al., Ap. J. 523, 928 (2000).\n- [11] J. C. Ling and W. A. Wheaton, Chinese J. Astron. Astrophys. Suppl. 5, 80 (2005).\n- [12] G. L. Case et al., Chinese J. Astron. Astrophys. Suppl. 5, 341 (2005).\n- [13] L. Bouchet et al., Ap. J. 693, 1871 (2009).\n- [14] M. C. Bell et al., Ap. J. 659, 549 (2007).\n- [15] G. L. Case et al. (2010), to be submitted.\n- [16] C. Wilson-Hodge et al., Astron. Telegram 2280 (2009).", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0955.pdf" - }, - { - "text": "| Acronym | Description |\n| --- | --- |\n| SPARQL | Query language for linked data (RDF) |\n| SSL | Secure Socket Layer |\n| URL | Uniform Resource Locator |\n| XML | Extensible Markup Language |\n\n*Table 1-2: Abbreviations and Acronyms*", - "page_start": 4, - "page_end": 4, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "#### **Stationary/Stationery**\n\n\"Stationary\" means not moving. E.g. The stationary truck held up the traffic.\n\n\"Stationery\" refers to writing materials. E.g. She needed new stationery for school.\n\n#### **There/Their/They're**\n\n\"There\" is a preposition that refers to a place. E.g. He will be there in ten minutes.\n\n\"Their\" is a possessive pronoun. It indicates that something belongs to them.\n\nE.g. Due to unforeseen circumstances, their meeting was cancelled.\n\n\"They're\" is a contraction of \"they are\".\n\nE.g. They're not going to be pleased when they find out that he lost the report.\n\n#### **To/Too/Two**\n\n\"To\" is a preposition, and indicates the relationship between one thing and another.\n\nE.g. 
I gave the letter to him.\n\n\"Too\" means \"also\", \"additional\" or \"more than what is necessary or desirable\".\n\nE.g. He is going on holiday too. As a result, there are too few people available to work over December.\n\n\"Two\" is a number.\n\nE.g. There are only two staff members in the office.", - "page_start": 18, - "page_end": 18, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "Table 5 Comparison with Pixel Prediction Methods. We compare V-JEPA with OmniMAE (Girdhar et al., 2023), Video-MAE (Tong et al., 2022), and Hiera (Ryali et al., 2023), which leverage a pixel-reconstruction loss. All models are trained using a ViT-L architecture or a comparable Hiera-L. We evaluate the approaches on downstream image tasks (IN1K, Places205, iNat201) and video tasks (K400, SSv2, AVA) in both frozen evaluation (with a frozen backbone), and end-to-end fine-tuning. All models are evaluated at resolution 224. On K400 and SSv2 we follow the standard practice of reporting accuracy from several spatial and temporal views from the video. In frozen evaluation, V-JEPA outperforms the baselines on all downstream tasks, except ImageNet, where the model achieves 74.8% compared to 75.1% of an OmniMAE model trained directly on ImageNet. V-JEPA also achieves the best fine-tuning performance amongs all ViT-L models and matches the Hiera-L on SSv2. The V-JEPA results are achieved while processing significantly fewer examples during pretraining.\n\n| | | | | | | | Frozen Evaluation w/ Att. Pooling | | | Fine-Tuning | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | #Samples | | K400 | SSv2 | AVA | IN1K | Places205 | iNat21 | K400-ft | SSv2-ft |\n| Method | Arch. | Seen | Iter. 
| (16×8×3) | (16×2×3) | | | | | (16×5×3) | (16×2×3) |\n| | Methods pretrained using pixel prediction | | | | | | | | | | |\n| OmniMAE | ViT-L/16 | 2400M | 1170K | 65.6 | 60.6 | 14.4 | 75.1 | 59.8 | 66.1 | 84.0 | 74.2 |\n| VideoMAE | ViT-L/16 | 410M | 400K | 77.8 | 65.5 | 21.6 | 71.1 | 59.3 | 64.6 | 85.4 | 74.3 |\n| Hiera | Hiera-L | 770M | 1500K | 75.5 | 64.2 | 15.8 | 68.9 | 58.5 | 56.9 | 87.3 | 75.1 |\n| V-JEPA | ViT-L/16 | 270M | 90K | 80.8 | 69.5 | 25.6 | 74.8 | 60.3 | 67.8 | 85.6 | 75.1 |\n\nTable 6 Comparison with State-of-the-Art Models. We compare V-JEPA with state-of-the-art baselines in frozen evaluation with an attentive probe on downstream image tasks (IN1K, Place205, iNat21) and video tasks (K400, SSv2, AVA). All models are evaluated at resolution 224, except I-JEPA512 and V-JEPA384 which are evaluated respectively at resolution 512 and 384. On K400 and SSv2 we follow the standard practice of reporting accuracy from several spatial and temporal views from the video. Compared to other video baselines, V-JEPA exhibits a consistent improvement across all downstream tasks. Compared to image-models that excel under the frozen evaluation, V-JEPA shows a significant performance improvement on tasks requiring motion understanding (+21 points on SSv2), and reduces the gap between video and image models on tasks requiring static appearance-based features.\n\n| | | | | | Video Tasks | | | Image Tasks | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | | K400 | SSv2 | AVA | IN1K | Places205 | iNat21 |\n| Method | Arch. | Params. 
| Data | (16×8×3) | (16×2×3) | | | | |\n| Methods pretrained on Images | | | | | | | | | |\n| I-JEPA | ViT-H/16512 | 630M | IN22K | 79.7 | 50.0 | 19.8 | 84.4 | 66.5 | 85.7 |\n| OpenCLIP | ViT-G/14 | 1800M | LAION | 81.8 | 34.8 | 23.2 | 85.3 | 70.2 | 83.6 |\n| DINOv2 | ViT-g/14 | 1100M | LVD-142M | 83.4 | 50.6 | 24.3 | 86.2 | 68.4 | 88.8 |\n| Methods pretrained on Videos | | | | | | | | | |\n| MVD | ViT-L/16 | 200M | IN1K+K400 | 79.4 | 66.5 | 19.7 | 73.3 | 59.4 | 65.7 |\n| OmniMAE | ViT-H/16 | 630M | IN1K+SSv2 | 71.4 | 65.4 | 16.0 | 76.3 | 60.6 | 72.4 |\n| VideoMAE | ViT-H/16 | 630M | K400 | 79.8 | 66.2 | 20.7 | 72.3 | 59.1 | 65.5 |\n| VideoMAEv2 | ViT-g/14 | 1100M | Un.Hybrid | 71.2 | 61.2 | 12.9 | 71.4 | 60.6 | 68.3 |\n| Hiera | Hiera-H | 670M | K400 | 77.0 | 64.7 | 17.5 | 71.4 | 59.5 | 61.7 |\n| | ViT-L/16 | 200M | | 80.8 | 69.5 | 25.6 | 74.8 | 60.3 | 67.8 |\n| V-JEPA | ViT-H/16 | 630M | VideoMix2M | 82.0 | 71.4 | 25.8 | 75.9 | 61.7 | 67.9 |\n| | ViT-H/16384 | 630M | | 81.9 | 72.2 | 25.0 | 77.4 | 62.8 | 72.6 |\n\n# 5 Comparison with Prior Work\n\nIn Section 5.1, we investigate the impact of feature prediction by comparing V-JEPA with video approaches that rely on pixel prediction, while using a similar architecture for all baselines. Subsequently, in Section 5.2, we remove the architectural constraint and report the best performance across architectures for self-supervised video and image pretraining approaches. Finally, we explore the label-efficiency of V-JEPA relative to other selfsupervised video pretraining approaches in Section 5.3. We further detail the evaluation setup in Appendix D.\n\n### 5.1 Comparison with Pixel Prediction\n\nTo investigate the effectiveness of feature prediction pretraining, we first compare V-JEPA to video masked modeling models relying on a pixel prediction loss. 
We control\n\nfor the possible confounding factor of model architecture by evaluating all models using either a ViT-L/16 encoder, or a Hiera-L encoder, which has a similar number of parameters. For the pixel prediction baselines we consider VideoMAE (Tong et al., 2022; Wang et al., 2023a), which trains vision transformer autoencoders exclusively on video, Hiera (Ryali et al., 2023), which trains a hierarchical transformer autoencoder on video, and OmniMAE (Girdhar et al., 2023), which trains a vision transformer autoencoder on static images and video simultaneously.\n\nTable 5 examines both frozen evaluation with an attentive probe on downstream video and image tasks, as well as end-to-end fine-tuning. In frozen evaluation, V-JEPA outperforms the baselines on all downstream tasks, except ImageNet, where we achieve 74.8% compared to 75.1% of an OmniMAE model trained directly on Im-", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv3.pdf" - }, - { - "text": "- 8. Abe T, Buckner SL, Mattocks KT, Jessee MB, Dankel SJ, Mouser JG, Bell ZW, Loenneke JP. Skeletal muscle mass and architecture of the world's strongest raw powerlifter: a case study. Asian J Sports Med 9: e61763, 2018. doi:10.5812/asjsm.61763.\n- 9. Powell PL, Roy RR, Kanim P, Bello MA, Edgerton VR. Predictability of skeletal muscle tension from architectural determinations in guinea pig hindlimbs. J Appl Physiol Respir Environ Exerc Physiol 57: 1715–1721, 1984. doi:10.1152/jappl.1984.57.6.1715.\n- 10. Maden-Wilkinson TM, Balshaw TG, Massey G, Folland JP. What makes long-term resistance-trained individuals so strong? A comparison of skeletal muscle morphology, architecture, and joint mechanics. J Appl Physiol (1985) 128: 1000–1011, 2019. doi:10.1152/ japplphysiol.00224.2019.\n- 11. Balshaw TG, Maden-Wilkinson TM, Massey GJ, Folland JP. The human muscle size and strength relationship: effects of architecture, muscle force, and measurement location. Med Sci Sports Exerc 53: 2140–2151, 2021. 
doi:10.1249/mss.0000000000002691.\n- 12. Baxter JR, Piazza SJ. Plantar flexor moment arm and muscle volume predict torque-generating capacity in young men. J Appl Physiol (1985)116: 538–544, 2014. doi:10.1152/japplphysiol.01140.2013.\n- 13. Miller R, Balshaw TG, Massey GJ, Maeo S, Lanza MB, Johnston M, Allen SJ, Folland JP. The muscle morphology of elite sprint running. Med Sci Sports Exerc 53: 804–815, 2021. doi:10.1249/ mss.0000000000002522.\n- 14. Balshaw TG, Funnell MP, McDermott E, Maden-Wilkinson TM, Abela S, Quteishat B, Edsey M, James LJ, Folland JP. The effect of specific bioactive collagen peptides on function and muscle remodeling during human resistance training. Acta Physiol (Oxf) 237: e13903, 2023 [Erratum in Acta Physiol (Oxf) 237:e13952, 2023]. doi:10.1111/apha.13903.\n- 15. Massey GJ, Balshaw TG, Maden-Wilkinson TM, Folland JP. Tendinous tissue properties after short- and long-term functional overload: differences between controls, 12 weeks and 4 years of resistance training. Acta Physiol (Oxf) 222: e13019, 2018. doi:10.1111/ apha.13019.\n- 16. Sugisaki N, Kobayashi K, Tsuchie H, Kanehisa H. Associations between individual lower-limb muscle volumes and 100-m sprint time in male sprinters. Int J Sports Physiol Perform 13: 214–219, 2018. doi:10.1123/ijspp.2016-0703.\n- 17. Seynnes OR, Erskine RM, Maganaris CN, Longo S, Simoneau EM, Grosset JF, Narici MV. Training-induced changes in structural and mechanical properties of the patellar tendon are related to muscle hypertrophy but not to strength gains. J Appl Physiol (1985) 107: 523–530, 2009. doi:10.1152/japplphysiol.00213.2009.\n- 18. Beckham GK, Sato K, Santana HAP, Mizuguchi S, Haff GG, Stone MH. Effect of body position on force production during the isometric midthigh pull. J Strength Cond Res 32: 48–56, 2018. doi:10.1519/ jsc.0000000000001968.\n- 19. Travis SK, Goodin JR, Beckham GK, Bazyler CD. Identifying a test to monitor weightlifting performance in competitive male and female weightlifters. 
Sports 6: 46, 2018. doi:10.3390/sports6020046.\n- 20. Beckham G, Mizuguchi S, Carter C, Sato K, Ramsey M, Lamont H, Hornsby G, Haff G, Stone M. Relationships of isometric mid-thigh pull variables to weightlifting performance. J Sports Med Phys Fit 53: 573–581, 2013.\n- 21. Hornsby WG, Gentles JA, MacDonald CJ, Mizuguchi S, Ramsey MW, Stone MH. Maximum strength, rate of force development, jump height, and peak power alterations in weightlifters across five months of training. Sports 5: 78, 2017. doi:10.3390/sports5040078.\n- 22. Beckham GK, Lamont HS, Sato K, Ramsey MW, Gh G, Stone MH. Isometric strength of powerlifters in key positions of the conventional deadlift. J Trainology 1: 32–35, 2012. doi:10.17338/trainology.1.2_32.\n- 23. Stone MH, Sands WA, Pierce KC, Carlock J, Cardinale M, Newton RU. Relationship of maximum strength to weightlifting performance. Med Sci Sports Exerc 37: 1037–1043, 2005. doi:10.1249/01.mss. 0000171621.45134.10.\n- 24. Beattie K, Carson BP, Lyons M, Kenny IC. The relationship between maximal strength and reactive strength. Int J Sports Physiol Perform 12: 548–553, 2017. doi:10.1123/ijspp.2016-0216.\n- 25. Suarez DG, Carroll KM, Slaton JA, Rochau KG, Davis MW, Stone MH. Utility of a shortened isometric midthigh pull protocol for assessing rapid force production in athletes. J Strength Cond Res 36: 1819–1825, 2022. doi:10.1519/jsc.0000000000003774.\n- 26. Suchomel TJ, Nimphius S, Stone MH. Scaling isometric mid-thigh pull maximum strength in division I athletes: are we meeting the assumptions? Sports Biomech 19: 532–546, 2020. doi:10.1080/ 14763141.2018.1498910.\n- 27. Cunningham DJ, Shearer DA, Drawer S, Pollard B, Cook CJ, Bennett M, Russell M, Kilduff LP. Relationships between physical qualities and key performance indicators during match-play in senior international rugby union players. PLoS One 13: e0202811, 2018. doi:10.1371/journal.pone.0202811.\n- 28. Doyle TLA, Fain AC, Wills JA, Cooper D, Toonen K, Kamphius B. 
Measures of lower body strength associated with injuries in Australian special forces selection candidates. J Appl Biomech 38: 255–262, 2022. doi:10.1123/jab.2021-0134.\n- 29. Kawamori N, Rossi SJ, Justice BD, Haff EE, Pistilli EE, O'Bryant HS, Stone MH, Haff GG. Peak force and rate of force development during isometric and dynamic mid-thigh clean pulls performed at various intensities. J Strength Cond Res 20: 483–491, 2006. doi:10.1519/ 18025.1.\n- 30. Wang R, Hoffman JR, Tanigawa S, Miramonti AA, Monica MB, Beyer KS, Church DD, Fukuda DH, Stout JR. Isometric mid-thigh pull correlates with strength, sprint, and agility performance in collegiate rugby union players. J Strength Cond Res 30: 3051–3056, 2016. doi:10.1519/jsc.0000000000001416.\n- 31. Haff GG, Stone M, O'Bryant HS, Harman E, Dinan C, Johnson R, Han KH. Force-time dependent characteristics of dynamic and isometric muscle actions. J Strength Cond Res 11: 269–272, 1997. doi:10.1519/1533-4287(1997)011<0269:FTDCOD>2.3.CO;2.\n- 32. Mercer RAJ, Russell JL, McGuigan LC, Coutts AJ, Strack DS, McLean BD. Finding the signal in the noise—interday reliability and seasonal sensitivity of 84 countermovement jump variables in professional basketball players. J Strength Cond Res 37: 394–402, 2023. doi:10.1519/jsc.0000000000004182.\n- 33. Cabarkapa D, Philipp N, Cabarkapa D, Eserhaut D, Fry A. Comparison of force-time metrics between countermovement vertical jump with and without an arm swing in professional male basketball players. Int J Strength Cond 3: 1–7, 2023. doi:10.47206/ijsc. v3i1.197.\n- 34. Tillin NA, Pain MT, Folland J. Explosive force production during isometric squats correlates with athletic performance in rugby union players. J Sports Sci 31: 66–76, 2013. doi:10.1080/02640414.2012.720704.\n- 35. Morris CG, Weber JA, Netto KJ. Relationship between mechanical effectiveness in sprint running and force-velocity characteristics of a countermovement jump in Australian rules football athletes. 
J Strength Cond Res 36: e59–e65, 2022. doi:10.1519/ jsc.0000000000003583.\n- 36. Johnson DL, Bahamonde R. Power output estimate in university athletes. J Strength Cond Res 10: 161–166, 1996. doi:10.1519/1533-4287 (1996)010<0161:poeiua>2.3.co;2.\n- 37. Mkaouer B, Jemni M, Amara S, Chaaben H , Tabka Z. Kinematic and kinetic analysis of counter movement jump versus two different types of standing back somersault. Sci Gymnast J 4: 61–71, 2012. https://www.fsp.uni-lj.si/en/research/scientific-magazines/scienceof-gymnastics/previous-issues/2012102209114244/.\n- 38. Walsh MS, Bohm H € , Butterfield MM, Santhosam J. Gender bias in the effects of arms and countermovement on jumping performance. J Strength Cond Res 21: 362–366, 2007. doi:10.1519/00124278- 200705000-00012.\n- 39. Vadgaonkar R, Prameela MD, Kumar CG, Blossom V, Tonse M, Murlimanju BV, Pai MM, Prabhu LV. Dimensions of pes anserinus of the lower extremity, an anatomical study with its surgical implications. Anat Cell Biol 54: 178–183, 2021. doi:10.5115/acb.20.275.\n- 40. Heinemeier KM, Schjerling P, Heinemeier J, Magnusson SP, Kjaer M. Lack of tissue renewal in human adult Achilles tendon is revealed by nuclear bomb 14C. FASEB J 27: 2074–2079, 2013. doi:10.1096/ fj.12-225599.\n- 41. Balshaw TG, Funnell MP, McDermott EJ, Maden-Wilkinson TM, Massey GJ, Abela S, Quteishat B, Edsey M, James LJ, Folland JP. The effect of specific bioactive collagen peptides on tendon remodeling during 15 wk of lower body resistance training. Med Sci Sports Exerc 55: 2083–2095, 2023. doi:10.1249/mss.0000000000003242.\n- 42. Welle S, Totterman S, Thornton C. Effect of age on muscle hypertrophy induced by resistance training. J Gerontol A Biol Sci Med Sci 51: M270–M275, 1996. 
doi:10.1093/gerona/51a.6.m270.", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed12.pdf" - }, - { - "text": "To that end, we pretrain a family of V-JEPA models on a dataset of 2 million videos collected from publicly available datasets by combining a masked modeling prediction task with a joint-embedding predictive architecture (see Figure 2). We measure performance on several downstream image and video tasks, using both frozen evaluation and end-to-end fine-tuning. Our findings suggest that feature prediction can indeed serve as an effective stand-alone objective for unsupervised learning from video, while using significantly shorter training schedules than pixel prediction methods. Specifically:\n\n- Feature prediction leads to versatile visual representations that perform well across downstream image and video tasks without adaption of the model's weights; i.e., using a frozen backbone. V-JEPA achieves the best performance among methods we consider (+6% accuracy) on the SomethingSomething-v2 task, which requires finegrained temporal understanding. V-JEPA is also competitive on tasks like Kinetics400, where appearance-based features are sufficient and hence state-of-the-art image models such as DINOv2 excel (Figure 1 and Table 6).\n- Models trained with feature prediction are superior to pixel prediction approaches under a frozen evaluation protocol (attentive probing) and are competitive with pixel prediction under full fine-tuning, while using significantly shorter training schedules (Tables 5 and 6).\n- Models trained with feature prediction are more label-efficient than pixel prediction approaches. Decreasing the available number of labeled examples results in an increase in the performance gap between V-JEPA and pixel-reconstruction models (Table 7).\n\n# 2 Related Works\n\nSlow Features. One way to encourage temporally adjacent representations to be predictive of each other is to ensure that they vary slowly over time. 
Early works targeting predictive features encouraged representations of individual video frames to be locally temporally invariant, while preventing representation collapse by using spectral methods, as in SFA (Wiskott and Sejnowski, 2002), SSA (Kayser et al., 2001), and Simulated Fixations (Zou et al., 2012). More recently, Goroshin et al. (2015); Wang et al. (2010) train a siamese convolutional network to map the representations of two subsequent frames to the same point, while encouraging distant frames to have diverse representations via a pairwise margin loss and a triplet loss, respectively. Other works (Oord et al., 2018; Surís et al., 2021; Feichtenhofer et al., 2021) implement temporal invariance using noisecontrastive estimation (Gutmann and Hyvärinen, 2012). Our exploration in this paper goes beyond temporal invariance and explores feature prediction using masked modeling.\n\nPredictive Features. Going beyond local invariance, a family of works trains a predictor network to map the representation of a frame or clip at one time-step to a distinct representation at another time-step. Srivastava et al. (2015); Vondrick et al. (2016); Wang et al. (2023b) train such a video feature predictor network on top of a frozen pretrained image or video encoder. Unfreezing the target feature extractor, several methods train the video encoder and the predictor network simultaneously, while preventing collapse by using a supervised action forecasting loss (Girdhar and Grauman, 2021), or by using the representations of distant clips as negative samples in a contrastive loss (Han et al., 2019, 2020; Tan et al., 2023), often focusing on small convolutional encoders (Han et al., 2019, 2020). The idea of learning a representation by predicting missing information in feature space is also core to the joint-embedding predictive architecture (JEPA) (LeCun, 2022), which combines a siamese encoder with a predictor network. 
JEPAs have been successfully instantiated in several modalities, such as with audio data (Baevski et al., 2022b) and image data (Zhou et al., 2021; Oquab et al., 2023; Assran et al., 2023). In this work, we extend this paradigm to video data by leveraging recent advances in self-supervised learning.\n\nAdvances in Self-Supervised Learning. The use of vision transformers (Dosovitskiy et al., 2020; Li et al., 2022) has become standard practice in self-supervised learning with joint-embedding architectures (Chen et al., 2021; Caron et al., 2021; Oquab et al., 2023; Zhou et al., 2021; Assran et al., 2022), and unlocked masked image modeling in pixel space by parameterizing the pixel decoder as a transformer with learnable mask tokens (Dosovitskiy et al., 2020; Xie et al., 2021; He et al., 2021; Bao et al., 2021), demonstrating a step-change in the representation quality of autoencoding methods (Vincent et al., 2010). This line of generative methods was subsequently extended to video data using spatio-temporal masking (Tong et al., 2022; Feichtenhofer et al., 2022; Wang et al., 2023a; Kalluri et al., 2023; Gupta et al., 2023). It was also recently shown that the representations of masked image autoencoders could be significantly improved by using learnable pooling mechanisms based on cross-attention (Chen et al., 2022). Finally, through careful selection of design choices, the non-contrastive collapse prevention strategy in BYOL (Grill et al., 2020) was recently made to work with image feature prediction methods (Baevski et al., 2022b; Assran et al., 2023), which demonstrated the ability to learn representations that can be leveraged for various downstream tasks without relying on invariance to hand-crafted image transformations.", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv3.pdf" - }, - { - "text": "Feature Prediction versus Pixel Reconstruction. 
Approaches that predict in pixel space must dedicate significant model capacity and compute to capture all the low-level detail in the visual input. By contrast, approaches that predict in latent space have the flexibility to eliminate irrelevant or unpredictable pixel-level details from the target representation (Vondrick et al., 2016). Predicting in representation space has been shown to lead to versatile representations that perform well across many downstream tasks through linear probing or lowshot adaptation (Assran et al., 2023; Oquab et al., 2023; Assran et al., 2022), while demonstrating an efficiency gain during pretraining compared to pixel level reconstruction (Assran et al., 2023; Baevski et al., 2022b,a). The works of Baevski et al. (2022a,b) additionally show that predicting in representation space results in competitive end-to-end fine-tuning performance in the image, audio and text domains. In this work, we extend these findings to the video modality.\n\n# 3 Methodology: Video-JEPA\n\nFigure 2 Joint-Embedding Predictive Architectures are trained to predict the representation of an input y from the representation of another input x. The additional variable z provides the predictor with information about the transformation that computes y from x.\n\nOur goal is to explore the effectiveness of feature prediction as a stand-alone objective for learning visual representations from video. To that end, we use a joint-embedding predictive architecture (JEPA) (LeCun, 2022); see Figure 2. The main idea behind a JEPA is to learn by predicting the representation of an input y from the representation of another input x. The basic architecture is made up of an encoder, Eθ(·), which computes the representation of the inputs, and a predictor, Pϕ(·), which predicts the representation of y from the representation of x, conditioned on a variable z indicating the transformation (or corruption) between x and y. 
Conditioning on z enables the generation of distinct predictions for various transformations of x.\n\n### 3.1 Training Objective\n\nWe train our visual encoder Eθ(·) to satisfy the constraint that representations computed from one part of the video, y, should be predictable from representations\n\ncomputed from another part of the video, x. The predictor network Pϕ(·), which maps the representation of x to the representation of y, is trained simultaneously with the encoder, and is provided specification of the spatio-temporal positions of y through the conditioning variable z ← ∆y.\n\nNaively implementing the objective using the regression\n\n$$\\begin{array}{r l}{{\\mathrm{minimize}_{\\theta,\\phi}}}&{{}\\|P_{\\phi}(E_{\\theta}(x),\\Delta_{y})-E_{\\theta}(y)\\|_{1},}\\end{array}$$\n\nwould admit a trivial solution, where the encoder outputs a constant representation, regardless of its input. In practice, we use the following modified objective to prevent representation collapse,\n\nminimize${}_{\\theta,\\phi}\\quad||P_{\\phi}(E_{\\theta}(x),\\Delta_{y})-\\mbox{sg}(\\overline{E}_{\\theta}(y))||_{1},$ (1)\n\nwhere sg(·) denotes a stop-gradient operation, which does not backpropagate through its argument, and Eθ(·) is an exponential moving average of the network Eθ(·). The use of an exponential-moving average feature extractor along with a stop-gradient and a predictor has been used as a collapse prevention strategy for image pretraining (Grill et al., 2020), and studied empirically (Xie et al., 2021) and theoretically (Tian et al., 2021). In fact, the objective in equation (1) is similar to the loss of Assran et al. (2023) used for image pretraining, but we modify it to use an ℓ1 regression, which we found to be more stable.\n\nTheoretical motivation. A theoretical motivation for the effectiveness of this collapse prevention strategy was proposed in Grill et al. (2020) for the BYOL method. We provide a simple adaptation of their analysis for our ℓ1 loss. 
For ease of exposition, we will disregard the effect of the conditioning variable z and consider one dimensional representations. Denote the representation Eθ(y) by a random variable Y . The optimal predictor under equation (1) is thus given by the following functional expression,\n\n$P^{\\star}(E_{\\theta}(x))=\\text{argmin}_{P}\\|P(E_{\\theta}(x))-Y\\|_{1}$ \n \n$=\\text{median}(Y|E_{\\theta}(x))$. \n \n\nSubstituting this expression for the optimal predictor into the loss function and evaluating the expected gradient of the encoder gives\n\n$$\\nabla_{\\theta}\\mathbb{E}\\|P^{\\star}(E_{\\theta}(x))-Y\\|_{1}=\\nabla_{\\theta}\\mathrm{MAD}(Y|E_{\\theta}(x)),$$\n\nwhere MAD(· |Eθ(x)) is the median absolute deviation of a random variable conditioned on Eθ(x). Thus, in the case where the predictor is optimal, the encoder must learn to capture as much information about the video as possible to minimize the deviation of the target. The hypothesis is that incorporating an exponential moving average to compute the representation of y ensures that the predictor evolves faster than the encoder and remains close to optimal, thereby preventing collapse.", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv3.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv3.pdf", - "query": "What is the average performance of the ViT-L/16 architecture on the K710 dataset with 700k samples ?", - "target_page": 5, - "target_passage": "70.9", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Table 1 Pixels vs. Featurized Targets. We ablate the effect of computing the prediction loss in feature space vs pixel space. All models are trained on VideoMix2M for 90K iterations with a batch size of 3072 using the multi-block prediction task. We examine downstream performance using a frozen backbone with attentive probing, and report top-1 accuracy using a single center view. 
We also examine end-to-end fine-tuning performance of the models on K400. Predicting in feature space provide a consistent improvement over pixel space prediction.\n\n| | | | Frozen Evaluation | | Fine-Tuning |\n| --- | --- | --- | --- | --- | --- |\n| | | K400 | SSv2 | IN1K | K400-ft |\n| Target | Arch. | (16×1×1) | (16×1×1) | | (16×5×3) |\n| Pixels | ViT-L/16 | 68.6 | 66.0 | 73.3 | 85.4 |\n| Features | ViT-L/16 | 73.7 | 66.2 | 74.8 | 85.6 |\n\nTable 2 Pretraining Data Distribution. We pretrain all models for 90K iterations using a batch size of 3072, and evaluate downstream performance of the frozen backbones with an attentive probe using a single center view. Average performance across tasks increases with the pretraining dataset size.\n\n| | | | | Frozen Evaluation SSv2 | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Arch. | Data | #Samples | K400 (16×1×1) | (16×1×1) | IN1K | Avg. |\n| ViT-L/16 | K710 | 700K | 75.8 | 63.2 | 73.7 | 70.9 |\n| | K710+SSv2 | 900K | 72.9 | 67.4 | 72.8 | 71.0 |\n| | K710+HT | 1900K | 74.5 | 64.2 | 74.8 | 71.1 |\n| | VideoMix2M | 2000K | 73.7 | 66.2 | 74.8 | 71.5 |\n| ViT-H/16 | K710+SSv2 | 900K | 75.7 | 66.8 | 73.7 | 72.0 |\n| | VideoMix2M | 2000K | 74.0 | 68.5 | 75.9 | 72.8 |\n\nEvaluations. Pretrained models are evaluated on downstream video and image tasks. On video tasks, we use a subset of the VideoGLUE benchmark (Yuan et al., 2023) to test for various capabilities; specifically, we investigate action recognition on Kinetics-400 (K400) (Kay et al., 2017), motion classification on Something-Something-v2 (SSv2) (Goyal et al., 2017), and action localization on AVA (Gu et al., 2018). Action classification on Kinetics evaluates the appearance-based understanding of the model, as many action classes in the dataset can be inferred from the presence of specific objects in the video (Sevilla-Lara et al., 2021). 
Motion classification on Something-Something-v2 evaluates the temporal understanding of the model, as action classes in the dataset are decoupled from the appearance/presence of specific objects in the video (Goyal et al., 2017). Finally, action localization on AVA evaluates the ability of the model to understand and localize motions in the video. We follow standard practice and report accuracy on K400 and SSv2 by sampling several spatial and temporal views. For static image tasks, we explore object recognition on ImageNet (Russakovsky et al., 2015), scene classification on Places205 (Zhou et al., 2014), and fine-grained recognition on iNaturalist 2021 (Van Horn et al., 2018).\n\n# 4 What Matters for Learning Representations from Video?\n\nIn this section we isolate the contributions of several design choices, including: a) the use of a feature prediction\n\nversus pixel prediction objective, b) the construction of the pretraining data distribution, c) the feature pooling strategy for leveraging the model's representations in downstream tasks, and d) the masking strategy, towards identifying: what to predict from what?\n\n### 4.1 Predicting Representations versus Pixels\n\nWe first ablate the effect of computing the prediction loss in representation space. We train a pair of ViT-L/16 models using either a V-JEPA feature prediction loss, or a mean-squared error loss with the normalized pixel values, as in masked autoencoders (He et al., 2021), and perform a sweep over the learning rate and weight decay schedules for both approaches. All models are pretrained on VideoMix2M for 90K iterations with a batch size of 3072 using multi-block masking. We examine performance on Kinetics-400 (K400), Something-Something-v2 (SSv2), and ImageNet-1K (IN1K), using a frozen backbone with an attentive probe, and report top-1 accuracy using a single center view. 
We also examine end-to-end fine-tuning performance of the models on Kinetics-400.\n\nResults of this comparison are reported in Table 1 and indicate that predicting in feature space provides a consistent performance improvement over pixel space prediction in both frozen evaluation of the video backbone, as well as end-to-end fine-tuning.\n\n### 4.2 Pretraining Data Distribution\n\nNext we study the impact of the pretraining data distribution in Table 2. Leveraging large scale datasets", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv3.pdf" - }, - { - "text": "Table 3 Average Pooling vs. Adaptive Pooling. We pool the feature map output by the frozen V-JEPA encoder using an attentive probe, which is then fed into a linear classifier for downstream supervised tasks (K400 and SSv2). We evaluate two pooling strategies: 1) average pooling (Avg.), and attentive pooling (Att.). Results are reported using a single center view. Using adaptive pooling with a crossattention layer leads to improvements of +17.3 points on K400 and +16.1 points on SSv2.\n\n| | | | Frozen Evaluation | | |\n| --- | --- | --- | --- | --- | --- |\n| | | K400 | | SSv2 | |\n| | | (16×1×1) | | (16×1×1) | |\n| Method | Arch. | Avg. | Att. | Avg. | Att. |\n| V-JEPA | ViT-L/16 | 56.7 | 73.7 | 50.1 | 66.2 |\n\nhas been critical for enabling the surge of advancements in other modalities, such as text and images (Kaplan et al., 2020; Cherti et al., 2023). We investigate whether a similar trend holds for video data. To control for the possible confounding variable of compute budget, we pretrain all models in Table 2 for 90K iterations using a batch-size of 3072. 
We report downstream results on K400, SSv2, and IN1K using a frozen backbone with an attentive probe, and report top-1 accuracy using a single center view.\n\nTable 2 shows that average performance across tasks monotonically increases as we increase the size of the pretraining dataset, but the best task-specific performance is obtained by independently selecting the pretraining data for each specific downstream task. For instance, the L/16 obtains its best SSv2 performance when pretrained on K710+SSv2, its best K400 performance when pretrained only on K710, and its best IN1K performance when pretrained only on K710+HT. The best average performance across all tasks is achieved by pretraining VideoMix2M, which combines all the data sources. Similarly, the H/16 pretrained on K710+SSv2 achieves a greater K400 score than the H/16 pretrained on VideoMix2M, however, the top performing H/16 on average is pretrained on VideoMix2M.\n\n### 4.3 Evaluation: Attentive Probing\n\nNext we explore the feature pooling strategy for applying the model's representations in downstream tasks. Since the prediction objective in equation (1) is unnormalized, there is no a priori reason for the encoder to yield a linearly separable subspace (Chen et al., 2020). Thus, rather than using a linear operation (averaging) to pool the features output of the frozen backbone, we explore a learnable non-linear pooling strategy. Specifically, when evaluating the frozen pretrained backbone on downstream tasks, we learn a cross-attention layer with a learnable query token. The output of the crossattention layer is then added back to the query token (residual connection), and then fed into two-layer MLP\n\nTable 4 Ablating Prediction Task. Models are ViT-L/16 networks pretrained on K710 and SSv2 and evaluated with an attentive probe using a single center view. The region x is sampled by masking spatio-temporal regions in the video; y is the mask complement. 
1) random-tube[r]: x is obtained by masking a fraction r of tubes (spatial patches extended across the entire temporal duration) from the video, 2) causal multi-block[p]: x is restricted to the first p frames of the 16-frame video, which are then masked with a random set of spatio-temporal blocks, 3) multi-block: x is obtained by masking a random set of spatio-temporal blocks from the entire video. Best performance obtained by using multiblock masking.\n\n| | | Frozen Evaluation | |\n| --- | --- | --- | --- |\n| | K400 | SSv2 | IN1K |\n| Masking | (16×1×1) | (16×1×1) | |\n| random-tube[0.9] | 51.5 | 46.4 | 55.6 |\n| causal multi-block[6] | 61.3 | 49.8 | 66.9 |\n| causal multi-block[12] | 71.9 | 63.6 | 72.2 |\n| multi-block | 72.9 | 67.4 | 72.8 |\n\nwith a single GeLU activation, followed by a LayerNorm, and finally a linear classifier.\n\nIn Table 3 we see that using adaptive pooling with a learnable cross-attention layer leads to a significant improvement of +17 points on K400 and +16.1 points on SSv2. Using an attentive-probe is also beneficial for other baseline models as reported in Appendix E.\n\n### 4.4 Prediction Task: Predicting y from x\n\nWe conduct an ablation on the masking strategy used in V-JEPA pretraining. We examine the following masking strategies: random-tube[r] in which x is obtained by removing a random fraction r of tubes (spatial patches extended across the entire temporal duration) from the video, causal multi-block[p] in which x is restricted to the first p frames of the 16-frame video, which are then masked with a random set of spatio-temporal blocks, and multi-block in which x obtained by masking a random set of spatio-temporal blocks from the entire video. 
Spatio-temporal blocks are sampled using the parameters described in Section 3.2; an ablation on the size and quantity of masked spatio-temporal blocks is provided in Appendix E.4.\n\nTable 4 indicates that the best results are obtained by sampling x using a multi-block strategy, wherein the network is forced to make predictions after removing large continuous blocks in the video. When x is only sampled from the first few frames of the video, as in the causal multi-block strategy, we observe a decrease in downstream performances. Finally, the random-tube strategy, wherein 90% of the tubes in the video are randomly masked, leads to features of low-semantic quality when combined with our feature prediction objective.", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv3.pdf" - }, - { - "text": "Table 5 Comparison with Pixel Prediction Methods. We compare V-JEPA with OmniMAE (Girdhar et al., 2023), Video-MAE (Tong et al., 2022), and Hiera (Ryali et al., 2023), which leverage a pixel-reconstruction loss. All models are trained using a ViT-L architecture or a comparable Hiera-L. We evaluate the approaches on downstream image tasks (IN1K, Places205, iNat201) and video tasks (K400, SSv2, AVA) in both frozen evaluation (with a frozen backbone), and end-to-end fine-tuning. All models are evaluated at resolution 224. On K400 and SSv2 we follow the standard practice of reporting accuracy from several spatial and temporal views from the video. In frozen evaluation, V-JEPA outperforms the baselines on all downstream tasks, except ImageNet, where the model achieves 74.8% compared to 75.1% of an OmniMAE model trained directly on ImageNet. V-JEPA also achieves the best fine-tuning performance amongs all ViT-L models and matches the Hiera-L on SSv2. The V-JEPA results are achieved while processing significantly fewer examples during pretraining.\n\n| | | | | | | | Frozen Evaluation w/ Att. 
Pooling | | | Fine-Tuning | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | #Samples | | K400 | SSv2 | AVA | IN1K | Places205 | iNat21 | K400-ft | SSv2-ft |\n| Method | Arch. | Seen | Iter. | (16×8×3) | (16×2×3) | | | | | (16×5×3) | (16×2×3) |\n| | Methods pretrained using pixel prediction | | | | | | | | | | |\n| OmniMAE | ViT-L/16 | 2400M | 1170K | 65.6 | 60.6 | 14.4 | 75.1 | 59.8 | 66.1 | 84.0 | 74.2 |\n| VideoMAE | ViT-L/16 | 410M | 400K | 77.8 | 65.5 | 21.6 | 71.1 | 59.3 | 64.6 | 85.4 | 74.3 |\n| Hiera | Hiera-L | 770M | 1500K | 75.5 | 64.2 | 15.8 | 68.9 | 58.5 | 56.9 | 87.3 | 75.1 |\n| V-JEPA | ViT-L/16 | 270M | 90K | 80.8 | 69.5 | 25.6 | 74.8 | 60.3 | 67.8 | 85.6 | 75.1 |\n\nTable 6 Comparison with State-of-the-Art Models. We compare V-JEPA with state-of-the-art baselines in frozen evaluation with an attentive probe on downstream image tasks (IN1K, Place205, iNat21) and video tasks (K400, SSv2, AVA). All models are evaluated at resolution 224, except I-JEPA512 and V-JEPA384 which are evaluated respectively at resolution 512 and 384. On K400 and SSv2 we follow the standard practice of reporting accuracy from several spatial and temporal views from the video. Compared to other video baselines, V-JEPA exhibits a consistent improvement across all downstream tasks. Compared to image-models that excel under the frozen evaluation, V-JEPA shows a significant performance improvement on tasks requiring motion understanding (+21 points on SSv2), and reduces the gap between video and image models on tasks requiring static appearance-based features.\n\n| | | | | | Video Tasks | | | Image Tasks | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | | K400 | SSv2 | AVA | IN1K | Places205 | iNat21 |\n| Method | Arch. | Params. 
| Data | (16×8×3) | (16×2×3) | | | | |\n| Methods pretrained on Images | | | | | | | | | |\n| I-JEPA | ViT-H/16512 | 630M | IN22K | 79.7 | 50.0 | 19.8 | 84.4 | 66.5 | 85.7 |\n| OpenCLIP | ViT-G/14 | 1800M | LAION | 81.8 | 34.8 | 23.2 | 85.3 | 70.2 | 83.6 |\n| DINOv2 | ViT-g/14 | 1100M | LVD-142M | 83.4 | 50.6 | 24.3 | 86.2 | 68.4 | 88.8 |\n| Methods pretrained on Videos | | | | | | | | | |\n| MVD | ViT-L/16 | 200M | IN1K+K400 | 79.4 | 66.5 | 19.7 | 73.3 | 59.4 | 65.7 |\n| OmniMAE | ViT-H/16 | 630M | IN1K+SSv2 | 71.4 | 65.4 | 16.0 | 76.3 | 60.6 | 72.4 |\n| VideoMAE | ViT-H/16 | 630M | K400 | 79.8 | 66.2 | 20.7 | 72.3 | 59.1 | 65.5 |\n| VideoMAEv2 | ViT-g/14 | 1100M | Un.Hybrid | 71.2 | 61.2 | 12.9 | 71.4 | 60.6 | 68.3 |\n| Hiera | Hiera-H | 670M | K400 | 77.0 | 64.7 | 17.5 | 71.4 | 59.5 | 61.7 |\n| | ViT-L/16 | 200M | | 80.8 | 69.5 | 25.6 | 74.8 | 60.3 | 67.8 |\n| V-JEPA | ViT-H/16 | 630M | VideoMix2M | 82.0 | 71.4 | 25.8 | 75.9 | 61.7 | 67.9 |\n| | ViT-H/16384 | 630M | | 81.9 | 72.2 | 25.0 | 77.4 | 62.8 | 72.6 |\n\n# 5 Comparison with Prior Work\n\nIn Section 5.1, we investigate the impact of feature prediction by comparing V-JEPA with video approaches that rely on pixel prediction, while using a similar architecture for all baselines. Subsequently, in Section 5.2, we remove the architectural constraint and report the best performance across architectures for self-supervised video and image pretraining approaches. Finally, we explore the label-efficiency of V-JEPA relative to other selfsupervised video pretraining approaches in Section 5.3. We further detail the evaluation setup in Appendix D.\n\n### 5.1 Comparison with Pixel Prediction\n\nTo investigate the effectiveness of feature prediction pretraining, we first compare V-JEPA to video masked modeling models relying on a pixel prediction loss. 
We control\n\nfor the possible confounding factor of model architecture by evaluating all models using either a ViT-L/16 encoder, or a Hiera-L encoder, which has a similar number of parameters. For the pixel prediction baselines we consider VideoMAE (Tong et al., 2022; Wang et al., 2023a), which trains vision transformer autoencoders exclusively on video, Hiera (Ryali et al., 2023), which trains a hierarchical transformer autoencoder on video, and OmniMAE (Girdhar et al., 2023), which trains a vision transformer autoencoder on static images and video simultaneously.\n\nTable 5 examines both frozen evaluation with an attentive probe on downstream video and image tasks, as well as end-to-end fine-tuning. In frozen evaluation, V-JEPA outperforms the baselines on all downstream tasks, except ImageNet, where we achieve 74.8% compared to 75.1% of an OmniMAE model trained directly on Im-", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv3.pdf" - }, - { - "text": "### A Supplementary materials for datasets\n\n#### A.1 All datasets\n\nTable 3 displays the size of each dataset along with the average number of tokens per sample and their references. The dataset's content was tokenized using *cl100k_base* encoding. For Retrieval, the two numbers refer to the queries and the documents. For Reranking, the three numbers refer to the queries, the pairs of queries with relevant documents and the pairs of queries with irrelevant ones, respectively. The pairs of queries and documents are obtained from the 90 documents extracted. For *SummEvalFr*, the three numbers refer to the texts, human and machine summaries, respectively.\n\nFigure 3 represents the semantic similarity between each dataset. The methodology was as follows: 90 random samples per dataset are embedded using the *multilingual-e5-large* model. The embeddings of each dataset's samples are averaged. 
The similarity between each dataset is then calculated using cosine similarity as in (Muennighoff et al., 2022).\n\nWe complement this analysis by observing the dataset's clouds of embedding in a 2D plane using PCA in Figure 4.\n\n#### A.2 Created datasets\n\nSyntec Figure 5 shows an extract from the Syntec dataset with a document and a query relative to this document.\n\nHAL Figure 6 is an extract from the HAL dataset. Table 4 lists the distribution of classes (*domain* field) for the HAL dataset on *raw* subset and *mteb_eval* subset, which is used for MTEB evaluation. Labels descriptions can be found at this URL: https://api.archivesouvertes.fr/ref/domain/?q=*:*&rows=393 or in Table 4. After pre-processing, *mteb_eval* covers titles from 10 domains as classes with less than 500 samples were removed. In the MTEB evaluation subset of the dataset, titles composed of 2 words or less have been removed (371 samples), resulting in an average word count of 13.4. Figure 7 shows the word count distribution per title. Furthermore, the dataset has been cleaned up by manually removing all non-French titles. Additionally, it can be observed in Table 4 that in the original *raw* dataset, the *shs* and *sdv* classes represent by far the majority of the dataset samples with respectively 58706 samples (73%) and 11049 samples (13%). In order to\n\nmitigate the class imbalance while preserving the majority of those classes, they have been randomly subsampled to 6701 and 4803 samples. Furthermore, baseline models have been trained and tested to assess the usability of this dataset in other tasks, such as classification and topic modeling. Table 5 shows the results obtained.\n\nSummEvalFr Extracts of humans and machine summaries translated in French from SummEvalFr and the original ones in English from SummEval (Fabbri et al., 2021) are shown in Figure 9. 
As explained in section 3.1.3, we use a LLM to evaluate the quality of translations for human summaries, we provide the prompt used with *GPT-4* for this evaluation in Figure 8.\n\nTable 6 shows the distribution of ratings given by the LLM. With the scale being 10, we manually verify random samples rated above 9. We verify all samples with ratings under 9 and those with no provided rating (N/A) due to the triggering of the OpenAI content management policy. The LLM suggests that 60 samples are not correctly translated. These were verified manually, and after checking, less than 10 samples only needed to be corrected.\n\n# B Supplementary materials for correlation analysis\n\nThis section presents various correlations computed based on the model results on the proposed benchmark.\n\nFigure 10 represents cross-correlations between models' performances and their studied characteristics as a heatmap.\n\nFigure 11 represents the Spearman correlations in terms of performance across models.\n\nFigure 12 represents the Spearman correlations in terms of performance across datasets.\n\n### C Supplementary materials for models\n\nWe present in this section the model characteristics we collected for the 46 evaluated models.\n\nFor evaluating prompt-based models such as *intfloat/e5-mistral-instruct-7b*, we provide the prompts we used in Table 8.\n\n### D Evaluation results\n\nThis section presents the results obtained for each model on each task. To be relevant, we used the same metrics as in MTEB, which varies from one type of task to another:", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv4.pdf" - }, - { - "text": "Table 14 Temporal Coverage on Kinetics-400. We evaluate the effect of temporal coverage on K400. We train an attentive probe on K400 using either 1 clip (≈ 2 seconds of a video) or 8 clips (≈ 16 seconds of a video). To sample N clips, we first divide a video in N equal-length temporal segments and sample one clip at random per segment. 
The video encoder processes each clip in parallel and all the encoder output tokens are concatenated at the input of the attentive probe. Increasing the temporal coverage from 1 clip per video to 8 clips significantly improves the performance for both our VideoMAE baseline and V-JEPA.\n\n| Method | Arch. | 1 Clip | 8 Clips |\n| --- | --- | --- | --- |\n| VideoMAE | ViT-L/16 | 69.4 | 77.8 |\n| V-JEPA | ViT-L/16 | 73.7 | 80.9 |\n\nTable 15 Finetuning results. We evaluate a V-JEPA model with the finetuning protocol on the K400 and SSv2 datasets using 16 frames per clip and multi-view fusion (5×3 or 2×3) for inference. The #Samples Seen entry corresponds to the number of video clips processed during pretraining, which is larger than the size of the pretraining dataset for multi-epoch training. We compare V-JEPA with different video self-supervised learning approaches. We report the VideoMAEv2 results without instruction-turning for consistency with the other approaches. V-JEPA obtains competitive performance using the finetuning protocol.\n\n| Method | Arch. | Pretraining Data | #Samples Seen | K400 | SSv2 |\n| --- | --- | --- | --- | --- | --- |\n| | | | | (16×5×3) | (16×2×3) |\n| VideoMAEv1 | ViT-L/16 | K400 SSv2 | 380M 410M | 85.4 | 74.3 |\n| | ViT-H/16 | K400 SSv2 | 380M 410M | 86.6 | 74.8 |\n| VideoMAEv2 | ViT-H/16 | Un.Hybrid | 1600M | 86.9 | 76.8 |\n| MVD | ViT-L/16 | K400+IN1K | 2400M | 86.4 | 76.7 |\n| | ViT-H/16 | K400+IN1K | 2400M | 87.2 | 77.3 |\n| V-JEPA | ViT-L/16 | VideoMix2M | 270M | 85.6 | 75.1 |\n| | ViT-H/16 | VideoMix2M | 270M | 86.6 | 77.0 |\n\nexamine our multi-masking strategy and find that sampling two masks for each clip (long-range and short-range) to be more effective than sampling just a single mask for each clip.\n\nIn Figure 8c, we explore different average spatial and temporal masking ratio, i.e. the spatial/temporal ratio of the area that is covered by a mask on average for a clip. 
Recall that each mask is constructed by sampling several (possibly overlapping) blocks and taking their union. We change the average spatial or temporal masking ratio by changing a block spatial or temporal size, as well as the overall number of blocks. We found that low spatial or temporal coverage results in a trivial prediction task, which degrades downstream performance. Based on those results, we sample masks that remove roughly 90% of the frame and extend along the entire temporal dimension of the clip by default.\n\nIn Figure 8b , we explore different block size given an effective spatial masking ratio of 90% and temporal ratio of 100%. We keep the masking ratio approximately constant by changing the block size and the number of block at the same time. We find that sampling several blocks to perform better than sampling a single large block. Figure 9 visually illustrates the effect of sampling several smaller blocks to construct a mask.\n\nIn Figure 8a, we explore the effect of sampling various number of masks per samples. We find that sampling two masks for each clip, with different spatial block sizes for each, to be more effective than sampling just a single mask. We hypothesize that this masking strategy induces complementary tasks. In our experiment, we use this as our default masks sampling.", - "page_start": 21, - "page_end": 21, - "source_file": "arxiv3.pdf" - }, - { - "text": "Table 7 Low-Shot Frozen Evaluation. Comparing V-JEPA to other video models in frozen evaluation on Kinetics-400 and Something-Something-v2 as we vary the percentage of labeled examples from each dataset available for training the attentive probe. We train the probes in several low-shot settings: using either 5% of the train set, 10%, or 50%, and take 3 random splits in each setting to obtain more robust metrics, resulting in 9 different evaluation experiments for each model. We report the mean performances and standard deviation using the K400 and SSv2 validation sets. 
V-JEPA is more label-efficient than other models; specifically, decreasing the available number of labeled examples from each class increases the performance gap between V-JEPA and the baselines.\n\n| | | | | Frozen Evaluation | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | K400 | | SSv2 | | |\n| | | | (16×8×3) | | (16×2×3) | | |\n| | | 5% | 10% | 50% | 5% | 10% | 50% |\n| Method | Arch. | (∼29 samples per class) | (∼58 samples per class) | (∼287 samples per class) | (∼48 samples per class) | (∼96 samples per class) | (∼440 samples per class) |\n| MVD | ViT-L/16 | 62.6 ± 0.2 | 68.3 ± 0.2 | 77.2 ± 0.3 | 42.9 ± 0.8 | 49.5 ± 0.6 | 61.0 ± 0.2 |\n| VideoMAE | ViT-H/16 | 62.3 ± 0.3 | 68.5 ± 0.2 | 78.2 ± 0.1 | 41.4 ± 0.8 | 48.1 ± 0.2 | 60.5 ± 0.4 |\n| VideoMAEv2 | ViT-g/14 | 37.0 ± 0.3 | 48.8 ± 0.4 | 67.8 ± 0.1 | 28.0 ± 1.0 | 37.3 ± 0.3 | 54.0 ± 0.3 |\n| V-JEPA | ViT-H/16 | 67.0 ± 0.2 | 72.1 ± 0.1 | 80.2 ± 0.2 | 51.9 ± 0.3 | 57.5 ± 0.4 | 67.3 ± 0.2 |\n| | ViT-H/16384 | 68.2 ± 0.2 | 72.8 ± 0.2 | 80.6 ± 0.2 | 54.0 ± 0.2 | 59.3 ± 0.5 | 67.9 ± 0.2 |\n\nlayer attentive probe, which can be further improved to 77.9% using a two-layer attentive probe. More generally, we hypothesize that the datasets used to train V-JEPA and other video models are too constrained and lack the visual diversity of the internet-scale pretraining data used by the images models; as such, there is value in focusing future work on building diverse publicly available video datasets.\n\n# 5.3 Label-efficiency\n\nWe examine the label-efficiency of V-JEPA compared to other self-supervised video models by measuring the ability of the pretrained backbones to adapt to downstream tasks with few labels. Specifically, we investigate the performance of the frozen models on Kinetics-400 and Something-Something-v2 as we vary the percentage of labeled examples from each dataset available for training the attentive probe. 
We train the probes in several lowshot settings: using either 5% of the train set, 10%, or 50%, and take 3 random splits in each setting to obtain more robust metrics, resulting in 9 different evaluation experiments for each model. Table 7 reports the mean performances and standard deviation using the K400 and SSv2 validation sets.\n\nWe find V-JEPA to be more label-efficient than other self-supervised video models: decreasing the available number of labeled examples for training the attentive probe results in an increase in the performance gap between V-JEPA and the other models. In particular, the performance of the largest V-JEPA model on K400 drops by 12% to 68.2% top-1 when we reduce the number of labeled examples by a factor of 10× (from roughly 287 examples per class to 29 examples per class). By contrast, VideoMAEv2 drops by 30% to 37.0% top-1, VideoMAE drops by 15.9% to 62.3% top-1, and MVD drops by 14.6% to 62.6% top-1.\n\nSimilar observations hold on SSv2. The performance of the largest V-JEPA model on SSv2 drops by 13.9%\n\nto 54.0% top-1 when we reduce the number of labeled examples by a factor of 10× (from roughly 440 examples per class to 48 examples per class). By contrast, Video-MAEv2 drops by 26% to 28.0% top-1, VideoMAE drops by 19.1% to 41.4% top-1, and MVD drops by 18.1% to 42.9% top-1.\n\n# 6 Evaluating the Predictor\n\nNext, we seek to qualitatively inspect the V-JEPA models. Recall that the predictor network in V-JEPA predicts the representations of a masked spatio-temporal region y from a visible region x, given the positional information of the masked regions (see Section 3). To qualitatively investigate the grounding of the feature-space predictions, we freeze the pretrained encoder and predictor networks and train a conditional diffusion decoder to map the V-JEPA predictions to interpretable pixels. 
Notably, the decoder is only fed the representations predicted for the missing regions of the video, and does not have access to the unmasked regions of the video (see Figure 6a).\n\nGiven a masked video, we use the V-JEPA pretrained models to predict the representations of the missing regions, and then use the decoder to project the representations to pixel space. Figure 6b shows decoder outputs for various random seeds. Qualities that are common across samples represent information that is contained in the predictor representation.\n\nFigure 6b shows that the V-JEPA feature predictions are indeed grounded, and exhibit spatio-temporal consistency with the unmasked regions of the video. Specifically, the samples in Figure 6b show that the V-JEPA predictor correctly captures positional uncertainty and produces a variety of visual objects at various locations with consistent motion. Some of the samples also demonstrate an understanding of object-permanence, as the visual objects remain consistent after partial occlusion.", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv3.pdf" - }, - { - "text": "| Dataset | Syntec | HAL | SummEvalFr |\n| --- | --- | --- | --- |\n| Samples | 100 queries | 26233 samples | 100 texts |\n| | 90 documents | 10 classes | 1100 human summaries |\n| | | | 1600 machine summaries |\n| Creation process | Scraping of Syntec col | Scraping of HAL arti | Translation from English |\n| | lective bargaining agree | cles with id, title and do | to French with Deepl of |\n| | ment with articles as doc | main. Further cleaning | the SummEval dataset. |\n| | uments. Writing queries | with deduplication, lan | |\n| | corresponding to articles. | guage filtering and class | |\n| | | subsampling. | |\n| Annotation process | 4 annotators divided into | Annotations provided by | Detailed annotation pro |\n| | 2 groups. Each group was | authors when submitting | cess provided in Fabbri |\n| | given half of the articles | their paper. 
They choose | et al. (2021). |\n| | and asked to choose an ar | the domain between exist | |\n| | ticle and ask a question | ing academic fields. | |\n| | about it. Each annotator | | |\n| | wrote 25 questions. | | |\n| Quality checks | Human verification of an | Baseline models for clas | Correlation between |\n| | notations. | sification and topic model | BLEU and ROUGE |\n| | | ing. | scores of the French |\n| | | | and the original English |\n| | | | datasets. LLM as-a-judge |\n| | | | translation rating and |\n| | | | human verification. |\n\nTable 1: New datasets details with the number of samples, the creation process, the annotation process and the quality checks. All datasets are test splits.\n\n- Samples belonging to *domain* classes with less than 500 samples were removed, which leads us to keep only 10 classes.\n- Subsampling was performed on 2 classes containing more than 10k samples each to lower the number of samples and mitigate the unbalance of the dataset.\n\nMore details about this process are provided in the appendix A.2 along with some extracts in Figure 6. We make the dataset publicly available in both their raw and clean versions. We use this dataset in a clustering setup to cluster publications by their title and use the domain as ground truth. To ensure the quality of this dataset, we run 3 baseline models for classification: *TF-IDF + SVM*, a fine-tuned *Camembert* (Martin et al., 2019) and *GPT-4* leveraging In-Context Learning (ICL). Furthermore, we run one baseline model for topic modeling: Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and report scores in the appendix A.2.\n\n#### 3.1.3 SummEvalFr (Summarization)\n\nThe original SummEval dataset (Fabbri et al., 2021) consists of 100 news articles from the CNN/Dai-\n\nlyMail dataset. Each article has 11 human-written summaries and 16 machine-generated summaries annotated by 8 people with a score for coherence, consistency, fluency, and relevance. 
We translated it from English to French using DeepL API6 . Since MTEB evaluation is based on the embedding similarity between machine-generated and humangenerated summaries, we propose to compute the ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002) metrics between machine and human summaries for both French and English version. In Table 2, we report the average of the scores as well as their correlations between the two languages. The correlation is high (above 0.7), showing that the word and n-gram overlap between human and machine summaries is highly preserved in the French version. One may argue that computing the metric on fully translated texts (human and machine summaries are both translated from English) may introduce biases and not assess the quality of the translations. For this purpose, we ensure the French human summaries are correctly translated from English. We use an LLM as-a-judge (Zheng et al.,\n\n6 https://www.deepl.com", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv4.pdf" - }, - { - "text": "Table 16 Sample efficiency. We compare the sample efficiency of pretraining various state-of-the-art image and video models. The #Samples Seen entry corresponds to the number of samples (image or video clips) processed by the network during pretraining, which is larger than the size of the pretraining dataset for multi-epoch training. The V-JEPA results in this paper are obtained while processing an order of magnitude fewer samples than previous methods.\n\n| Method | Arch. | Data | #Samples Seen |\n| --- | --- | --- | --- |\n| OpenCLIP | ViT-G/14 | LAION-2B | 39000M |\n| DINOv2 | ViT-g/14 | LVD 142M | 1900M |\n| VideoMAEv2 | ViT-g/14 | UnlabeledHybrid | 1600M |\n| V-JEPA | ViT-H/16384 | VideoMix2M | 210M |\n\nFigure 8 Masking Strategy Ablation. Evaluating a linear probe on a ViT-B/16 pretrained with V-JEPA on K400 under various 3D Multi-Block masking settings. 
We examine the impact of (a) sampling several masks per video, (b) varying the number of blocks in a mask, and (c) varying the average spatial and temporal masking ratio. A temporal masking ratio of 100% extends the spatial mask across all the frames in the clip. We find it important to maintain a high spatial and temporal masking ratio during pretraining.\n\n(c) Num. Blocks: 2, Spatial Block Size: 160 × 160\n\nFigure 9 Illustration of mask with number of blocks and block size. Each mask is constructed by sampling several (possibly overlapping) blocks and taking their union.", - "page_start": 22, - "page_end": 22, - "source_file": "arxiv3.pdf" - }, - { - "text": "performed an outlier check, labeling images as a 'low-quality outlier' if the correlation coefficient was >3 s.d. from the absolute mean. None of our scans were flagged as outliers. The reconstructed participant files were aggregated into one connectometry database per metric.\n\n*Day2Day control dataset*. To compare our findings against a control group of nonpregnant densely-sampled individuals, we used the Day-2Day dataset23 which offered comparable whole-brain T1 and T2 MTL scans for eight participants (two male) scanned 12–50 times over 2–7 months. Each participant was run through the ANTs CT and ASHS processing pipelines as outlined above ('Cortical volume and thickness' and 'Hippocampal segmentation'). To note, for each participant, we created an SST based on their first two sessions for consistency with the primary dataset; subfield volumes for the T2 MTL scans did not undergo manual retouching. Due to missing header information on the publicly available diffusion scans, we were unable to benchmark our white matter changes with the Day2Day dataset.\n\n**Statistical analysis.** Statistical analyses were conducted using R (sMRI; version 3.4.4) and DSI Studio (dMRI; Chen-2022-07-31).\n\n*Summary brain metrics*. 
To reflect the existing literature, we first explored brain metrics across the entire study duration (prepregnancy through postpartum, *n* = 26 scans). When including all sessions, total brain volume, GMV, CT, global QA, ventricle volume and CSF displayed nonlinear trends over time; therefore, we used generalized additive models (GAM; cubic spline basis, *k* = 10, smoothing = GCV), a method of nonparametric regression analysis (R package, mgcv76), to explore the relationship between summary brain metrics (outcome variables) and gestation week (smooth term). Each model underwent examination (gam.check function) to ensure it was correctly specified with regards to (1) the choice of basis dimension (*k*) and (2) the distribution of model residuals (see mgcv documentation in ref. 76). The general pattern of results held after toggling model parameters; however, we note the risk of overinterpreting complex models with small sample sizes77. To address overfitting and cross-validate our basis type selection, we also fit the data using nonpenalized general linear models (GLM) with both linear and polynomial terms for gestation week. We compared the performance of each GLM (that is, models using only a linear term versus models with polynomial terms) via the Akaike information criterion (AIC), which revealed that cubic models consistently outperformed both linear and quadratic models (AICdiff > 3), providing additional evidence for nonlinear changes in structural brain variables over time. Determining whether these patterns replicate in larger cohorts and whether complex models are better suited to capture data patterns across individuals will be a necessary next step.\n\n*Cortical GMV and CT*. We then narrowed our analyses to the first 19 sessions (baseline—36 weeks gestation) to assess novel brain changes occurring over the gestational window. 
We first computed Pearson's product-moment correlation matrices between the following variables: gestation week, estradiol, progesterone and the 17 network-level average GMV values. We then ran a multivariate regression analysis predicting ROI-level GMV changes by gestation week. To identify which regions were changing at a rate different from the global decrease, we then ran the analyses again to include total GMV in the regression model (Supplementary Table 2). This was extended to the network level, where we ran partial correlations accounting for total GMV. These same analyses were then run with CT measures. Globally-corrected results provided in Supplementary Tables 1–5. Percent change at the network level was computed by subtracting the final pregnancy value (36 weeks pregnant) from the first prepregnancy baseline value, then dividing that difference by said first prepregnancy baseline value. All analyses underwent multiple comparisons testing (false discovery rate (FDR)-corrected at *q* < 0.05).\n\n*Subcortical GMV*. A similar statistical approach was taken for subcortical volume estimates. We ran a multivariate regression analysis predicting GMV changes over gestation in 28 ROIs (Supplementary Fig. 6a) by gestation week (FDR-corrected at *q* < 0.05).\n\nTo evaluate the relationship between gestation week and MTL subregion volume over pregnancy (*n* = 7 bilateral subregions and *n* = 18 MTL scans), we used a combination of linear and nonlinear models based on individual subregion data patterns. Models were compared for best fit with each subregion via AIC from the GLM output (as described in 'Summary brain metrics'). A linear regression model was most appropriate for PHC (AICdiff < 3), whereas a quadratic model performed best for CA1 and CA2/CA3. As a control, we repeated the analyses with MTL subregion volumes after proportional volume correction of total GMV calculated by ASHS. 
Finally, we evaluated the relationship between endogenous sex hormones (estrogen and progesterone) and subregion volumes using linear regression. Relationships were considered significant only if they met FDR correction at *q* < 0.05.\n\n*White matter microstructure*. DSI Studio's correlational tractography74 was used to analyze the relationship between white matter structure and gestational week (*n* = 16). A truncated model was run to examine the relationship between white matter and sex steroid hormones (*n* = 14) for the subset of diffusion scans with paired endocrine data during gestation. A nonparametric Spearman's correlation was used to derive the correlation between gestational week and endocrine factors and our metrics of interest (QA and MD; see Supplementary Table 9 and Supplementary Fig. 10 for MD results) because the data were not normally distributed. Statistical inference was reached using connectometry, a permutation-based approach that tests the strength of coherent associations found between the local connectome and our variables of interest. It provides higher reliability and replicability by correcting for multiple comparisons. This technique provides a high-resolution characterization of local axonal orientation. The correlational tractography was run with the following parameters: *t* score threshold of 2.5, four pruning iterations and a length threshold of 25 voxel distance. To estimate the FDR, a total of 4,000 randomized permutations were applied to obtain the null distribution of the track length. Reported regions were selected based on FDR cutoff (FDR < 0.2, suggested by DSI Studio), and contained at least ten tracts. For visualization of global and tract QA at each gestational stage, mean QA values were extracted using DSI Studio's whole-brain fiber tracking algorithm and ROI-based tracking using the default HCP842 atlas78.\n\n*Day2Day dataset: measurement variability*. 
To establish a marker of normative variability over half a year, we computed metrics of measurement variability using the Day2Day dataset23, which provided both whole-brain T1 and high-resolution T2 MTL scans. For each region, *j*, of the Schaefer parcellation, we assessed across-session variability, *ε*, as\n\n$$\\varepsilon_{j}=100\\times\\mathrm{mean}\\left({\\frac{|t_{s}-{\\hat{t}}|}{{\\hat{t}}}}\\right)$$\n\nWhere *ts* is the morphometric measurement of a parcel for session *s* and *t* ̂ is the mean of *t* across sessions55,79. Thus, we defined variability as the mean absolute percent difference between each individual and the mean across sessions. Across-session variability estimates for all 400 regions were then averaged across eight participants, and a global measure of cortical GMV variability was computed by averaging across the 400 regions. This approach was repeated independently for the T2 hippocampal scans, wherein we computed across-session variability for each parcel of the ASHS parcellation scheme (*n* = 7 bilateral subfields). However, it is important to note that raw subfield values (that is, no manual retouching) were used for Day2Day variability assessments and should be interpreted with caution. Finally, to better compare against our own data, we repeated this approach using our", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed4.pdf" - }, - { - "text": "### *What dataset management practices are necessary?*\n\nNo matter how a books data commons gets built, it will be important to consider broader aspects of data governance. For example:\n\n- **Dataset documentation and transparency:** Transparent documentation is important for any dataset used for AI training. 
A datasheet is a standardized form of documentation that includes information about provenance and composition of data, and includes information on management practices, recommended uses or collection process.\n- **Quality assurance:** Above, we note the many features that make books useful for AI training, as compared with web data, for example. That said, the institution managing a books commons dataset may still want to collect and curate the collection to meet the particular purposes of its users. For instance, it may want to take steps to mitigate biases inherent in the dataset, by ensuring books are representative of a variety of languages and geographies.\n- **Understanding uses:** The institution managing a books commons dataset could measure and study how the dataset is used, to inform future improvements. Such monitoring may also enable accountability measures with respect to uses of the dataset. Introducing community norms for disclosing datasets used in AI training and other forms of AI research would facilitate such monitoring.\n- **Governance mechanisms:** In determining matters like acceptable and ethical use, the fundamental question is \"who decides.\" While this might be settled simply by whoever sets up and operates the dataset and related infrastructure, participatory mechanisms — such as advisory bodies bringing together a broad range of users and stakeholders of a collection — could also be incorporated.", - "page_start": 19, - "page_end": 19, - "source_file": "creative_common_ai.pdf" - } - ] - }, - { - "references": { - "source_file": "PLAW-116publ30.pdf", - "query": "What is appropriate authority ?", - "target_page": 1, - "target_passage": "APPROPRIATE AUTHORITY.—The term ‘appropriate authority’ means the head of a Federal agency, the Architect of the Capitol, or other official authority responsible for the operation of a public building. 
", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Figure 3-22 Completion window\n\n# **3.2 User and group administration**\n\nWhen you design a Content Manager OnDemand system, you must determine the best way to implement the many authority structures that are available for users and administrators of your system. The span of control for the administration of the system must be considered with the level of user access to the data that is stored in the system. How many different administrators are required? Will all administrators have system administrator authority or will different administrators have different levels of authority? What is the most effective way to restrict a user's access to only the data that is necessary to do that user's job?\n\nThe answers to these questions depend on the size of the system, the degree of centralization to be exercised over system administration, and the nature of the data and the business needs of the users.\n\n# **Centralized or decentralized**\n\nIn a system design that exercises centralized control, one or a few administrators are granted system administrator authority. A centralized system typically is used when the number of reports and users to be added to the system is small. Centralized administration is also appropriate where resources are limited and only one person might have the skills and knowledge to perform the system administration tasks, or where one user group performs all of the administration tasks.\n\nIn a system design with decentralized control, different users are granted different levels of administrative authority. For example, you might have users that have the authority to create users and groups. 
Other users might have the authority to create application groups and folders, and others might be given full system administration authority.", - "page_start": 89, - "page_end": 89, - "source_file": "sg246915.pdf" - }, - { - "text": "(2) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision-\n\n- (a) that is reasonably required in the interests of defence, public safety, public order, public morality, public health, town and country planning, the development and utilization of mineral resources, for the purpose of any census or in order to secure the development or utilization of any property for a purpose beneficial to the community;\n- (b) that is reasonably required for the purpose of protecting the rights or freedoms of other persons;\n- (c) that authorizes an officer or agent of the Government of Botswana, a local government authority or a body corporate established by law for a public purpose to enter on the premises of any person in order to inspect those premises or anything thereon for the purpose of any tax, rate or duty or in order to carry out work connected with any property that is lawfully on those premises and that belongs to that Government, authority or body corporate, as the case may be; or\n- (d) that authorizes, for the purpose of enforcing the judgment or order of a court in any civil proceedings, the search of any person or property by order of a court or entry upon any premises by such order,\n\nand except so far as that provision or, as the case may be, anything done under the authority thereof is shown not to be reasonably justifiable in a democratic society.\n\n# **10. 
Provisions to secure protection of law**\n\n(1) If any person is charged with a criminal offence, then, unless the charge is withdrawn, the case shall be afforded a fair hearing within a reasonable time by an independent and impartial court established or recognized by law.\n\n(2) Every person who is charged with a criminal offence-\n\n- (a) shall be presumed to be innocent until he or she is proved or has pleaded guilty;\n- (b) shall be informed as soon as reasonably practicable, in a language that he or she understands and in detail, of the nature of the offence charged;\n- (c) shall be given adequate time and facilities for the preparation of his or her defence;\n- (d) shall be permitted to defend himself or herself before the court in person or, at his or her own expense, by a legal representative of his or her own choice;\n- (e) shall be afforded facilities to examine in person or by his or her legal representative the witnesses called by the prosecution before the court, and to obtain the attendance and carry out the examination of witnesses to testify on his or her behalf before the court on the same conditions as those applying to witnesses called by the prosecution; and\n- (f) shall be permitted to have without payment the assistance of an interpreter if he or she cannot understand the language used at the trial of the charge,\n\nand except with his or her own consent the trial shall not take place in his or her absence unless he or she so conducts himself or herself as to render the continuance of the proceedings in his or her presence impracticable and the court has ordered him or her to be removed and the trial to proceed in his or her absence.\n\n(3) When a person is tried for any criminal offence, the accused person or any person authorized by him or her in that behalf shall, if he or she so requires and subject to payment of such reasonable fee as may be prescribed by law, be given within a reasonable time after judgment a copy for the use of the 
accused person of any record of the proceedings made by or on behalf of the court.", - "page_start": 8, - "page_end": 8, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Responsibilities of management include –\n\n- Implement the corporate strategy set by the Board;\n- Achieve the performance targets set by the Board;\n- Develop, implement and manage risk management and internal control frameworks;\n- Develop, implement and update policies and procedures;\n- Provide sufficient, relevant and timely information to the Board to enable the Board to effectively discharge its responsibilities; and\n- Manage human, physical and financial resources to achieve the Company's objectives in other words to run the day to day business in an effective way.\n\n#### **1.2 Management Performance**\n\nSundance's Chairman, with Non-Executive Director input, is responsible for providing feedback to the MD on his performance assessed against the responsibilities mentioned above. The MD, with Chairman and Non-Executive Directors input, is responsible for providing feedback to senior executives and assessing their performance against the responsibilities mentioned above.\n\nDuring fiscal year 2014, an annual performance evaluation of senior executives was completed in line with the Company's incentive compensation policy as well as periodic one on one discussions carried out by the MD. Appropriate induction procedures are in place to allow new senior executives to participate fully and actively in management decision making at the earliest opportunity.\n\n# **Principle 2: Structure the Board to Add Value**\n\n#### **2.1 Board Composition and Independence**\n\nThe composition and operation of the Board is determined in accordance with the following requirements:\n\n- The constitution of Sundance specifies that there must be a minimum of three directors and a maximum of ten. 
The Board may determine the size of the Board within those limits;\n- It is the intention of the Board that its membership consists of a majority of independent directors who satisfy the criteria recommended by the ASX best practice corporate governance requirements, though it is recognized that this intention may be impractical to implement given the size and scope of the Company's business;\n- The Chairman of the Board should be an independent director who satisfies the criteria for independence recommended by the ASX best practice corporate governance requirements; and\n- The Board should, collectively, have the appropriate level of personal qualities, skills, experience, and time commitment to properly fulfil its responsibilities or have ready access to such skills where they are not available.\n\nSundance's Board of Directors currently consists of one Managing Director based in the US, three Non-Executive Directors based in Australia, and one Non-Executive Director based in the US. All of the Directors are shareholders of the Company. At all times during the fiscal year 2014, all four of the Non-Executive Directors were independent. Sundance considers an independent director to be a non-executive director who is not a member of management and who is free of any business or other relationship that could materially interfere with, or could reasonably be perceived to materially interfere with, the independent exercise of their judgement. Sundance believes that its current Board composition is appropriate at this time in the Company's evolution. 
Sundance will continue to address the appropriate structure and composition of the Board over time.\n\nThe composition of the Board at the date of this report is:\n\n| M D Hannell | Chairman, Independent Non-Executive Director |\n| --- | --- |\n| E McCrady | Managing Director and Chief Executive Officer |\n| N Martin | Independent Non-Executive Director |\n| D Hannes | Independent Non-Executive Director |\n| W Holcombe | Independent Non-Executive Director |\n\nDirectors can have access, in appropriate circumstances, to independent professional advice at the Company's expense. It is the continuing practice for the four Non-Executive Directors to confer from time to time without the Executive Director being present.", - "page_start": 49, - "page_end": 49, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "entities working for it or cooperating with it, including contractors and subcontractors, whether legal or natural persons, but only for the purpose of their mission for the contracting authority;\n\n(b) if the *result* is a \"document\" such as a report or a study, and it is meant to be published, the existence of *pre-existing materials* in the *result* may not prevent the publication of the document, its translation or its \"reuse\", it being understood however that the \"reuse\" may only be made of the *result* as a whole and not of the *pre-existing materials* taken separately from the *result*; for the sake of this provision, \"reuse\" and \"document\" have the meaning given by the Commission Decision of 12 December 2011 on the reuse of Commission documents (2011/833/EU).\n\nAll *pre-existing rights* are licensed to the contracting authority from the moment the *results* are delivered and approved by the contracting authority.\n\nThe licensing of *pre-existing rights* to the contracting authority under this FWC covers all territories worldwide and is valid for the duration of intellectual property rights protection.\n\nThe payment of the price as set out 
in the specific contracts is deemed to also include any fees payable to the contractor in relation to the licensing of *pre-existing rights* to the contracting authority, including for all forms of exploitation and of use of the *results*.\n\nWhere *implementation of the FWC* requires that the contractor uses *pre-existing materials* belonging to the contracting authority, the contracting authority may request that the contractor signs an adequate licence agreement. Such use by the contractor will not entail any transfer of rights to the contractor and is limited to the needs of this FWC.\n\n# **II.13.3. Exclusive rights**\n\nThe Contracting Authority acquires the following exclusive rights:\n\n- (a) reproduction: the right to authorise or prohibit direct or indirect, temporary or permanent reproduction of the *results* by any means (mechanical, digital or other) and in any form, in whole or in part;\n- (b) communication to the public: the exclusive right to authorise or prohibit any display, performance or communication to the public, by wire or wireless means, including the making available to the public of the *results* in such a way that members of the public may access them from a place and at a time individually chosen by them; this also includes the communication on Internet and broadcasting by cable or by satellite;\n- (c) distribution: the exclusive right to authorise or prohibit any form of distribution of *results* or copies of the *results* to the public, by sale or otherwise;\n- (d) rental: the exclusive right to authorise or prohibit rental or lending of the *results* or of copies of the *results*;\n- (e) adaptation: the exclusive right to authorise or prohibit any modification of the *results*;\n- (f) translation: the exclusive right to authorise or prohibit any translation, adaptation, arrangement, creation of derivative works based on the *results*, and any other alteration of the *results*, subject to the respect of moral rights of authors, where 
applicable;\n- (g) where the *results* are or include a database: the exclusive right to authorise or prohibit the extraction of all or a substantial part of the contents of the database to another medium by any means or in any form; and the exclusive right to authorise or prohibit the re-utilization of all or a substantial part of the contents of the database by the distribution of copies, by renting, by on-line or other forms of transmission;\n- (h) where the *results* are or include a patentable subject-matter: the right to register them as a patent and to further exploit such patent to the fullest extent;", - "page_start": 23, - "page_end": 23, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "communication be to the public generally or to any person or class of persons) and freedom from interference with his or her correspondence.\n\n(2) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision-\n\n- (a) that is reasonably required in the interests of defence, public safety, public order, public morality or public health; or\n- (b) that is reasonably required for the purpose of protecting the reputations, rights and freedoms of other persons or the private lives of persons concerned in legal proceedings, preventing the disclosure of information received in confidence, maintaining the authority and independence of the courts, regulating educational institutions in the interests of persons receiving instruction therein, or regulating the technical administration or the technical operation of telephony, telegraphy, posts, wireless, broadcasting or television; or\n- (c) that imposes restrictions upon public officers, employees of local government bodies, or teachers,\n\nand except so far as that provision or, as the case may be, the thing done under the authority thereof is shown not to be reasonably justifiable 
in a democratic society.\n\n## **13. Protection of freedom of assembly and association**\n\n(1) Except with his or her own consent, no person shall be hindered in the enjoyment of his or her freedom of assembly and association, that is to say, his or her right to assemble freely and associate with other persons and in particular to form or belong to trade unions or other associations for the protection of his or her interests.\n\n(2) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision-\n\n- (a) that is reasonably required in the interests of defence, public safety, public order, public morality or public health;\n- (b) that is reasonably required for the purpose of protecting the rights or freedoms of other persons;\n- (c) that imposes restrictions upon public officers, employees of local government bodies, or teachers; or\n- (d) for the registration of trade unions and associations of trade unions in a register established by or under any law, and for imposing reasonable conditions relating to the requirements for entry on such a register (including conditions as to the minimum number of persons necessary to constitute a trade union qualified for registration, or of members necessary to constitute an association of trade unions qualified for registration) and conditions whereby registration may be refused on the grounds that any other trade union already registered, or association of trade unions already registered, as the case may be, is sufficiently representative of the whole or of a substantial proportion of the interests in respect of which registration of a trade union or association of trade unions is sought,\n\nand except so far as that provision or, as the case may be, the thing done under the authority thereof is shown not to be reasonably justifiable in a democratic society.\n\n## **14. 
Protection of freedom of movement**\n\n(1) No person shall be deprived of his or her freedom of movement, and for the purposes of this section the said freedom means the right to move freely throughout Botswana, the right to reside in any part of Botswana, the right to enter Botswana and immunity from expulsion from Botswana.\n\n(2) Any restriction on a person's freedom of movement that is involved in his or", - "page_start": 11, - "page_end": 11, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "# **II.13.7. Moral rights of creators**\n\nBy delivering the *results*, the contractor warrants that the *creators* will not object to the following on the basis of their moral rights under copyright:\n\n- (a) that their names be mentioned or not mentioned when the *results* are presented to the public;\n- (b) that the *results* be divulged or not after they have been delivered in their final version to the contracting authority;\n- (c) that the *results* be adapted, provided that this is done in a manner which is not prejudicial to the *creator*'s honour or reputation.\n\nIf moral rights on parts of the *results* protected by copyright may exist, the contractor must obtain the consent of *creators* regarding the granting or waiver of the relevant moral rights in accordance with the applicable legal provisions and be ready to provide documentary evidence upon request.\n\n# **II.13.8. Image rights and sound recordings**\n\nIf natural persons appear in a *result* or their voice or any other private element is recorded in a recognisable manner, the contractor must obtain a statement by these persons (or, in the case of minors, by the persons exercising parental authority) giving their permission for the described use of their image, voice or private element and, on request, submit a copy of the permission to the contracting authority. 
The contractor must take the necessary measures to obtain such consent in accordance with the applicable legal provisions.\n\n# **II.13.9. Copyright notice for pre-existing rights**\n\nWhen the contractor retains *pre-existing rights* on parts of the *results*, reference must be inserted to that effect when the *result* is used as set out in Article I.10.1, with the following disclaimer: '© — year — European Union. All rights reserved. Certain parts are licensed under conditions to the EU', or with any other equivalent disclaimer as the contracting authority may consider best appropriate, or as the parties may agree on a case-by-case basis. This does not apply where inserting such reference would be impossible, notably for practical reasons.\n\n# **II.13.10. Visibility of ECHA funding and disclaimer**\n\nWhen making use of the *results*, the contractor must declare that they have been produced under a contract with the contracting authority and that the opinions expressed are those of the contractor only and do not represent the contracting authority's official position. The contracting authority may waive this obligation in writing or provide the text of the disclaimer.\n\n# **II.14. Force majeure**\n\n- **II.14.1** If a party is affected by *force majeure*, it must immediately *notify* the other party, stating the nature of the circumstances, their likely duration and foreseeable effects.\n- **II.14.2** A party is not liable for any delay or failure to perform its obligations under the FWC if that delay or failure is a *result* of *force majeure*. 
If the contractor is unable to fulfil its contractual obligations owing to *force majeure*, it has the right to remuneration only for the services actually provided.", - "page_start": 26, - "page_end": 26, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- (d) any labour required during any period of public emergency or in the event of any other emergency or calamity that threatens the life and well-being of the community, to the extent that the requiring of such labour is reasonably justifiable in the circumstances of any situation arising or existing during that period or as a result of that other emergency or calamity, for the purpose of dealing with that situation; or\n- (e) any labour reasonably required as part of reasonable and normal communal or other civic obligations.\n\n# **7. Protection from inhuman treatment**\n\n(1) No person shall be subjected to torture or to inhuman or degrading punishment or other treatment.\n\n(2) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question authorizes the infliction of any description of punishment that was lawful in the country immediately before the coming into operation of this Constitution.\n\n# **8. 
Protection from deprivation of property**\n\n(1) No property of any description shall be compulsorily taken possession of, and no interest in or right over property of any description shall be compulsorily acquired, except where the following conditions are satisfied, that is to say-\n\n- (a) the taking of possession or acquisition is necessary or expedient-\n\t- (i) in the interests of defence, public safety, public order, public morality, public health, town and country planning or land settlement;\n\t- (ii) in order to secure the development or utilization of that, or other, property for a purpose beneficial to the community; or\n\t- (iii) in order to secure the development or utilization of the mineral resources of Botswana; and\n- (b) provision is made by a law applicable to that taking of possession or acquisition-\n\t- (i) for the prompt payment of adequate compensation; and\n\t- (ii) securing to any person having an interest in or right over the property a right of access to the High Court, either direct or on appeal from any other authority, for the determination of his or her interest or right, the legality of the taking of possession or acquisition of the property, interest or right, and the amount of any compensation to which he or she is entitled, and for the purpose of obtaining prompt payment of that compensation.\n\n(2) No person who is entitled to compensation under this section shall be prevented from remitting, within a reasonable time after he or she has received any amount of that compensation, the whole of that amount (free from any deduction, charge or tax made or levied in respect of its remission) to any country of his or her choice outside Botswana.\n\n(3) Subsection (1)(b)(i) of this section shall be deemed to be satisfied in relation to any Law applicable to the taking of possession of minerals or the acquisition of rights to minerals if that law makes provision for the payment at reasonable intervals of adequate royalties.\n\n(4) 
Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of subsection (2) of this section to the extent that the law in question authorizes-\n\n- (a) the attachment, by order of a court, of any amount of compensation to which a person is entitled in satisfaction of the judgment of a court or pending the determination of civil proceedings to which he or she is a party; or\n- (b) the imposition of reasonable restrictions on the manner in which any amount of", - "page_start": 6, - "page_end": 6, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "The skill level of the users might be a determining factor in the degree of authority that is granted. It takes a more skilled user to define indexes and report parameters than to set up users and groups. A decentralized system is typically used when data from different sources is stored on the same Content Manager OnDemand system but must be maintained independently of other data. Decentralization also makes sense when report loading and processing needs are limited to a specific group of users for security purposes or when administrators that add users and groups must be prevented from accessing report data.\n\nThe decision about whether to use a centralized or a decentralized administration model is best made *before* any data is set up in the system. Even though the type of administration that is chosen can be changed later, the amount of work that is involved in that change is greater than the amount of work that is necessary to study the requirements of the system and implement the appropriate administration policies from the beginning.\n\nIn this section, we describe different types of users, followed by a description of a decentralized administrative plan. 
We also introduce a new administrative tool, Content Manager OnDemand XML Batch Administration, which is a command-line program that is run on the Content Manager OnDemand server.\n\n# **3.2.1 User types, authorities, and functions**\n\nFour types of users are available in a Content Manager OnDemand system. Each type has a different level of access, authority, and responsibility in the system:\n\n- -User: Logs in and queries the system to retrieve documents and reports for viewing.\n- -User administrator: Adds users or other user administrators to the system.\n- - Report administrator: Defines the application groups, applications, folders, and cabinets to be part of the system. The report administrator is responsible for understanding the report and document data and for defining the indexes to be extracted from the data and stored. A report administrator is also responsible for designing the user interface to the reports through the folder definition process and for controlling access authority to the reports that the report administrator designs, indexes, and loads.\n- - System administrator: Has the highest level of authority in a Content Manager OnDemand system. The system administrator has authority for all system functions and can grant other users the authority to perform various tasks. The system administrator is the only level of authority that can create storage sets and define system printers.\n\nWhen the administrative tasks and levels of authorities are understood, you must decide the span of control in the system. Is it better to have one user control all access and functions in the Content Manager OnDemand system, or is it better to spread the administrative tasks among several users to smooth the workload based on system requirements? 
The answer to this question depends on whether your environment uses centralized or decentralized administrative control.\n\nA centralized administrative plan is best suited for a Content Manager OnDemand system with a few users and relatively few reports to define. In the next section, we focus on the decentralized system and describe the different aspects of a decentralized administrative plan.", - "page_start": 90, - "page_end": 90, - "source_file": "sg246915.pdf" - }, - { - "text": "General unless he or she is qualified to be appointed to the Office of a Judge of the High Court.\n\n(3) The Attorney-General shall be the principal legal adviser to the Government.\n\n(4) A person holding the Office of Attorney-General shall vacate his or her office when he or she attains the age of 60 years or such other age as may be prescribed by Parliament.\n\n# **51A. Director of Public Prosecutions**\n\n(1) There shall be a Director of Public Prosecutions appointed by the President whose office shall be a public office and who shall be subject to the administrative supervision of the Attorney-General.\n\n(2) A person shall not be qualified to be appointed to the Office of Director of Public Prosecutions unless he or she is qualified to be appointed to the Office of a Judge of the High Court.\n\n(3) The Director of Public Prosecutions shall have power in any case in which he or she considers it desirable to do so-\n\n- (a) to institute and undertake criminal proceedings against any person before any court (other than a court martial) in respect of any offence alleged to have been committed by that person;\n- (b) to take over and continue any such criminal proceedings that have been instituted or undertaken by any other person or authority; and\n- (c) to discontinue, at any stage before judgment is delivered, any such criminal proceedings instituted or undertaken by himself or herself or any other person or authority.\n\n(4) The powers of the Director of Public Prosecutions 
under subsection (3) may be exercised by him or her in person or by officers subordinate to him or her acting in accordance with his or her general or special authority.\n\n(5) For the purposes of this section any appeal from any judgment in any criminal proceedings before any court, or any case stated or question of law reserved for the purpose of any such proceedings, to any other court shall be deemed to be part of those proceedings:\n\nProvided that the power conferred on the Director of Public Prosecutions by subsection (3)(c) of this section shall not be exercised in relation to any appeal by a person convicted in any criminal proceedings or to any case stated or question of law reserved at the instance of such person.\n\n(6) In the exercise of the functions vested in him or her by subsection (3) of this section the Director of Public Prosecutions shall not be subject to the direction or control of any other person or authority:\n\nProvided that-\n\n- (a) where any other person or authority has instituted criminal proceedings, nothing in this subsection shall prevent the withdrawal of those proceedings by or at the instance of that person or authority, and with the leave of the court; and\n- (b) before exercising his or her powers in relation to cases considered by the Attorney-General to be of national importance, the Director of Public Prosecutions shall consult the Attorney-General.\n\n# **52. Permanent Secretaries**\n\nWhere any Minister has been charged with responsibility for any department of Government, he or she shall exercise general direction and control over that department and, subject to such direction and control, the department shall be under the supervision of a Permanent Secretary whose office shall be a public office.\n\n## **53. 
Prerogative of Mercy**\n\nThe President may-", - "page_start": 24, - "page_end": 24, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- **II.6.3** The contractor is liable for any loss or damage caused to the contracting authority during or as a consequence of *implementation of the FWC*, including in the event of subcontracting, but only up to an amount not exceeding three times the total amount of the relevant specific contract. However, if the damage or loss is caused by the gross negligence or wilful misconduct of the contractor or of its *personnel* or subcontractors, as well as in the case of an action brought against the contracting authority by a third party for breach of its intellectual property rights, the contractor is liable for the whole amount of the damage or loss.\n- **II.6.4** If a third party brings any action against the contracting authority in connection with the *implementation of the FWC*, including any action for alleged breach of intellectual property rights, the contractor must assist the contracting authority in the legal proceedings, including by intervening in support of the contracting authority upon request. If the contracting authority's liability towards the third party is established and that such liability is caused by the contractor during or as a consequence of the *implementation of the FWC*, Article II.6.3 applies.\n- **II.6.5** If the contractor is composed of two or more economic operators (i.e. who submitted a joint tender), they are all jointly and severally liable to the contracting authority for the *implementation of the FWC*.\n- **II.6.6** The contracting authority is not liable for any loss or damage caused to the contractor during or as a consequence of *implementation of the FWC*, unless the loss or damage was caused by wilful misconduct or gross negligence of the contracting authority.\n\n# **II.7. 
Conflict of interest and professional conflicting interests**\n\n- **II.7.1** The contractor must take all the necessary measures to prevent any situation of *conflict of interest* or *professional conflicting interest*.\n- **II.7.2** The contractor must *notify* the contracting authority in writing as soon as possible of any situation that could constitute a *conflict of interest* or a *professional conflicting interest* during the *implementation of the FWC*. The contractor must immediately take action to rectify the situation.\n\nThe contracting authority may do any of the following:\n\n- (a) verify that the contractor's action is appropriate;\n- (b) require the contractor to take further action within a specified deadline;\n- (c) decide not to award a specific contract to the contractor.\n- **II.7.3** The contractor must pass on all the relevant obligations in writing to:\n\t- (a) its *personnel*;\n\t- (b) any natural person with the power to represent it or take decisions on its behalf;\n\t- (c) third parties involved in the *implementation of the FWC*, including subcontractors.\n\nThe contractor must also ensure that the persons referred to above are not placed in a situation which could give rise to conflicts of interest.", - "page_start": 18, - "page_end": 18, - "source_file": "EN-Draft FWC for services 0142.pdf" - } - ] - }, - { - "references": { - "source_file": "PLAW-116publ30.pdf", - "query": "What criteria must a lactation room meet?", - "target_page": 1, - "target_passage": "LACTATION ROOM.—The term ‘lactation room’ means a hygienic place, other than a bathroom, that— ‘‘(A) is shielded from view; ‘‘(B) is free from intrusion; and ‘‘(C) contains a chair, a working surface, and, if the public building is otherwise supplied with electricity, an electrical outlet. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# Public Law 116–30 116th Congress\n\n## An Act\n\nJuly 25, 2019 [H.R. 
866]\n\nTo provide a lactation room in public buildings.\n\n*Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,* \n\nFairness For Breastfeeding Mothers Act of 2019. 40 USC 101 note.\n\n#### **SECTION 1. SHORT TITLE.**\n\nThis Act may be cited as the ''Fairness For Breastfeeding Mothers Act of 2019''.\n\n#### **SEC. 2. LACTATION ROOM IN PUBLIC BUILDINGS.**\n\n(a) LACTATION ROOM IN PUBLIC BUILDINGS.—Chapter 33 of title 40, United States Code, is amended by adding at the end the following new section:\n\n40 USC 3318.\n\ndkrause on DSKBC28HB2PROD with PUBLAWS\n\n### **''§ 3318. Lactation room in public buildings**\n\n''(a) DEFINITIONS.—In this section:\n\n''(1) APPROPRIATE AUTHORITY.—The term 'appropriate authority' means the head of a Federal agency, the Architect of the Capitol, or other official authority responsible for the operation of a public building.\n\n''(2) COVERED PUBLIC BUILDING.—The term 'covered public building' means a public building (as defined in section 3301) that is open to the public and contains a public restroom, and includes a building listed in section 6301 or 5101.\n\n''(3) LACTATION ROOM.—The term 'lactation room' means a hygienic place, other than a bathroom, that—\n\n''(A) is shielded from view;\n\n''(B) is free from intrusion; and\n\n''(C) contains a chair, a working surface, and, if the public building is otherwise supplied with electricity, an electrical outlet.\n\n''(b) LACTATION ROOM REQUIRED.—Except as provided in subsection (c), the appropriate authority of a covered public building shall ensure that the building contains a lactation room that is made available for use by members of the public to express breast milk.\n\n''(c) EXCEPTIONS.—A covered public building may be excluded from the requirement in subsection (b) at the discretion of the appropriate authority if—\n\n''(1) the public building—\n\nVerDate Sep 11 2014 15:46 Aug 08, 2019 Jkt 089139 PO 00030 Frm 00001 Fmt 
6580 Sfmt 6581 E:\\PUBLAW\\PUBL030.116 PUBL030\n\n''(A) does not contain a lactation room for employees who work in the building; and\n\n''(B) does not have a room that could be repurposed as a lactation room or a space that could be made private using portable materials, at a reasonable cost; or", - "page_start": 0, - "page_end": 0, - "source_file": "PLAW-116publ30.pdf" - }, - { - "text": "- (d) to visit a person (\"D\") whom P reasonably believes is dying, and where P is a member of D's household or a close family member or friend of D;\n- (e) to attend the funeral of a member of P's household or a close family member;\n- (f) in other exceptional circumstances such as—\n\t- (i) to seek medical assistance where this is required urgently or on the advice of a registered medical practitioner including to access services from dentists, opticians, audiologists, chiropodists, chiropractors, osteopaths and other medical and health practitioners, including services relating to mental health,\n\t- (ii) to access critical public services including social services or services provided to victims (such as victims of crime),\n\t- (iii) to avoid injury or illness or to escape risk of harm,\n\t- (iv) to access veterinary services where this is required urgently or on the advice of a veterinary surgeon.\n\n(2) P may only leave or be outside of the place where P is self-isolating in reliance on the grounds mentioned in sub-paragraph (1)(c), (d) or (e)—\n\n- (a) if P has been given prior permission by a person authorised by the Secretary of State for this purpose;\n- (b) if P complies with any reasonable requirements imposed by the person so authorised in relation to the exercise, the visit to the person or attendance at the funeral.\n\n#### **Meaning of \"place\"**\n\n**14.** For the purposes of this Schedule the place referred to in paragraphs 8 to 13 means the room in the designated accommodation where P is staying and, if connected to the room where P is staying, the room of 
any person referred to in paragraph 11(a) (travelling companion), including any balcony, and does not include the communal areas or any garden, yard, passage, stair, garage, outhouse or appurtenance of the accommodation in which the place is situated.\n\n#### **Designations**\n\n**15.** The Secretary of State must designate for the purposes of this Schedule—\n\n- (a) accommodation;\n- (b) transportation to the designated accommodation,\n\nand must publish details of the designations in such manner as appears to the Secretary of State to be appropriate.\n\n#### **Duties where P is a child**\n\n**16.** If P is a child—\n\n- (a) any person who has custody or charge of P when P is travelling to England must ensure, so far as is reasonably practicable, that P complies with the obligations in paragraphs 5 and 6;\n- (b) any person who has custody or charge of P during P's period of self-isolation must ensure, so far as is reasonably practicable, that P self-isolates in accordance with this Schedule.\n\n#### **Person caring for P**\n\n**17.** A person may reside in the place where P is residing pursuant to this Schedule to provide assistance P reasonably requires by reason of—\n\n- (a) P being a child; or\n- (b) any disability of P's,", - "page_start": 77, - "page_end": 77, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**10.** In regulation 13(3) (timescales for EHC plans), for \"(d)\" substitute \"(e)\".\n\n**11.** After regulation 18 (circumstances in which a local authority must review an EHC plan) insert—\n\n## \"**Circumstances in which it is not necessary to review an EHC plan**\n\n**18A.**—(1) It is not necessary for a local authority to review an EHC plan in accordance with section 44(1) of the Act if it is impractical to do so because of a reason relating to 
the incidence or transmission of coronavirus.\n\n(2) Where paragraph (1) applies, a local authority must instead conduct such reviews as soon as reasonably practicable.\".\n\n**12.** In regulation 22 (amending an EHC plan following a review), after paragraph (5) insert—\n\n\"(6) The local authority need not comply with the time limit referred to in paragraphs (3) and (4) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**13.** In regulation 27(3) (amending or replacing an EHC plan following a re-assessment)—\n\n- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**14.** In regulation 45 (unopposed appeals), after paragraph (7) insert—\n\n\"(8) The local authority need not comply with the time limits specified in paragraph (3A) if it is impractical to do so because the circumstances referred to in regulation 10(4)(e) apply.\".\n\n### **Amendment of the Special Educational Needs (Personal Budgets) Regulations 2014**\n\n**15.** The Special Educational Needs (Personal Budgets) Regulations 2014(**a**) are amended as follows.\n\n**16.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2);\n\n**17.** After regulation 2 (interpretation) insert—\n\n\".\n\n#### \"**Relaxation of time period due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, the requirement for the local authority to review the making and use of direct payments within the first three months of them being made in regulation 11(2)(a) (monitoring and review of direct payments) is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(**a**) S.I. 
2014/1652, to which there are amendments not relevant to these Regulations.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 15(2) (transfer of EHC plans) (in relation to the second reference to 15 working days), (4), (5), (7) (in relation to the second reference to 15 working days) and (8);\n- (b) regulation 16(2) and (3) (change of responsible commissioning body);\n- (c) regulation 20(9) and (10) (review where the child or young person attends a school or other institution);\n- (d) regulation 21(7), (8) and (9) (review of EHC plan where the child or young person does not attend a school or other institution);\n- (e) regulation 25(1) (notification of decision whether it is necessary to re-assess educational, health care and social care provision);\n- (f) regulation 27(4) (amending or replacing an EHC plan following a re-assessment);\n- (g) regulation 33 (requirement to consider mediation);\n- (h) regulation 34(1) and (2) (where a parent or young person does not wish to or fails to pursue mediation);\n- (i) regulation 35(2), (3) and (4) (mediation health care issues);\n- (j) regulation 36(2) (mediation no health care issues);\n- (k) regulation 39(1) and (3) (mediation certificate under section 55(5));\n- (l) regulation 42(3) and (4) (steps to be taken by a local authority);\n- (m) regulation 44(2)(d), (e), (f) and (h) (compliance with the orders of the First-tier Tribunal);\n- (n) regulation 45(4), (5) and (6A) (unopposed appeals);\n- (o) regulation 47 (disclosure of EHC 
plans in relation to higher education); and\n- (p) regulation 56(3) (publication of comments on the local offer).\".\n\n**6.** In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert—\n\n> \"(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**7.** In regulation 5(4) (decision whether or not to conduct an EHC needs assessment)—\n\n- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\t- \"; or\n\t- (e) of a reason relating to the incidence or transmission of coronavirus\".\n- **8.** In regulation 8(2) (duty to co-operate in EHC needs assessments)—\n\t- (a) at the end of sub-paragraph (b) omit \"or\"; and\n\t- (b) at the end of sub-paragraph (c) insert—\n\n\"; or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n**9.** In regulation 10(4) (decision not to secure an EHC plan)—", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (iv) in the goods vehicle or a hotel, hostel or bed and breakfast accommodation while not undertaking the work described in that paragraph if P is travelling with another person in a goods vehicle with a sleeper cab.\n(4) The address specified by P in the Passenger Locator Form pursuant to paragraph 2(a) of Schedule 6 must be—\n\n- (a) their home;\n- (b) the home of a friend or family member;\n- (c) a hotel, hostel, bed and breakfast accommodation, holiday apartment or home, campsite, caravan park or boarding house, canal boat or any other vessel;\n- (d) a military site or establishment;\n- (e) accommodation facilitated by the Secretary of State for the purposes of P's self-isolation;\n- (f) where P is an asylum seeker, accommodation provided or arranged under section 4, 95 or 98 of the Immigration and 
Asylum Act 1999; or\n- (g) where P is a person described in paragraph 9(1) of Schedule 10 to the Immigration Act 2016 (powers of Secretary of State to enable person to meet bail conditions), accommodation provided or arranged under that paragraph.\n\n(5) More than one address may be specified as the place at which P intends to self-isolate in the Passenger Locator Form where—\n\n- (a) a legal obligation requires P to change addresses; or\n- (b) it is necessary for P to stay overnight at an address on their arrival in England before travelling directly to another address at which they will be self-isolating.\n\n(6) In paragraph (3)(a)(ii) \"a place at which they intend to self-isolate while in England\" means—\n\n- (a) where the person has completed a Passenger Locator Form, at an intended place of selfisolation specified in that form;\n- (b) where the person has completed a form equivalent to a Passenger Locator Form pursuant to an enactment in Scotland, Wales or Northern Ireland, at an intended place of selfisolation specified in that form;\n- (c) in any other case at a place described in paragraph (4)(a) to (c).\n\n(7) P must, on their arrival in England, travel directly to the place at which they are to selfisolate, and must then self-isolate until whichever is the earlier of—\n\n- (a) the end of the 10th day after the day on which they arrived in England or, if later, the end of any period that applies by virtue of paragraph 2 or 3 of Schedule 8;\n- (b) their departure from England; or\n\n- (c) the beginning of P's period of self-isolation, where P or R, where P is a child, is notified under regulation 2A or 2B of the Self-Isolation Regulations(**a**).\n(8) In paragraph (7)(c), \"period of self-isolation\" and \"R\" have the meanings given for the purposes of Part 1 of the Self-Isolation Regulations (see regulations 3 and 5 of those Regulations).\n\n(9) Paragraph (2) does not require P to remain in isolation—\n\n- (a) from any person with whom they were 
travelling when they arrived in England and who is also self-isolating in the place where P is self-isolating;\n- (b) where P is self-isolating in their home, from any member of their household;\n- (c) where P is self-isolating in the home of a friend or family member, from any member of the household of that friend or family member;\n\n(**a**) A person notified, or a child in respect of whom a notification is given, under regulation 2A or 2B will be required to self-isolate in accordance with those Regulations from the moment the notification is given. Regulations 2A and 2B were inserted by S.I. 2021/364.", - "page_start": 13, - "page_end": 13, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (ii) to access critical public services, including—\n\t- (aa) social services,\n\t- (bb) services provided to victims (such as victims of crime),\n- (iii) to move to a different place for self-isolation where it becomes impracticable to remain at the address at which they are self-isolating;\n- (j) for the purposes of, or connected with, undertaking a test in accordance with Schedule 8 or Schedule 10;\n- (k) if self-isolating in a goods vehicle by virtue of paragraph (3)(d)—\n\t- (i) for sanitary reasons,\n\t- (ii) to take exercise outside,\n\t- (iii) where required or permitted by that paragraph, to move to a different place for self-isolation,\n\t- (iv) to inspect the vehicle or its load or to carry out any other task required for the safe and continued operation of the vehicle, including refuelling, and\n\t- (v) for any other reason or purpose specified in this paragraph.\n\n(12) For the purposes of this regulation, the place referred to in paragraph (3) includes the premises where P is self-isolating together with any garden, yard, passage, stair, garage, outhouse, or other appurtenance of such premises.\n\n(13) If P is a child, any person who has custody or charge of P during P's period of self-isolation must ensure, so far as reasonably practicable, that P 
self-isolates in accordance with this regulation.\n\n(14) If P has arrived from Wales or Scotland and is in England, temporarily, for a reason which would constitute an exception under paragraph (11), P is not required to comply with this regulation.\n\n(15) If P is a person described—\n\n- (a) in paragraph 1(1) of Schedule 4—\n\t- (i) where P is a person described in paragraph 1(1)(a) to (k) of, and meets the conditions set out in paragraph 1(3) of, that Schedule, P is not required to comply with this regulation,\n\t- (ii) in any other case, paragraph (3)(b) and (c) does not apply to P;\n- (b) in paragraph 1(2) of Schedule 4 (essential work for foreign country etc), P is not required to comply with this regulation;\n- (c) in paragraph 33 of Schedule 4 (healthcare), paragraph (2) does not require P to remain in isolation in the circumstances set out in paragraph 33 of that Schedule;\n- (d) in paragraph 43 of Schedule 4 (horticultural work)—\n\t- (i) paragraph (2) does not require P to remain in isolation from any other person who is living or working on the specified farm,\n\t- (ii) paragraph (3)(a)(i) applies with the modification that the address specified by P as the address at which they intend to self-isolate must be the specified farm, where \"specified farm\" has the meaning given in paragraph 43 of Schedule 4;\n- (e) either—\n\t- (i) in paragraph 44 of Schedule 4 (elite sports),\n\t- (ii) in sub-paragraphs (1)(h) to (l) of paragraph 2 of Schedule 11 (exemptions from additional measures applicable to arrivals from category 3 countries and territories),\n\nP satisfies the requirements of paragraph (2) if P complies with the relevant conditions specified in paragraph 44(4) of Schedule 4;", - "page_start": 15, - "page_end": 15, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "(2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason 
relating to the incidence or transmission of coronavirus.\".\n\n## **Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015**\n\n**18.** The Special Educational Needs and Disability (Detained Persons) Regulations 2015(**a**) are amended as follows.\n\n**19.** In regulation 2(1) (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n**20.** After regulation 2 (interpretation) insert—\n\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 15(1) and (4) (needs assessments which are not completed);\n- (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n- (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n- (d) regulation 19 (requirement to consider mediation);\n- (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n- (f) regulation 21 (mediation);\n- (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n- (h) regulation 27(3) (steps to be taken by a home authority);\n- (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n- (j) regulation 30(3) and (6) (unopposed appeals).\".\n\n**21.** In regulation 4 (determination whether or not special educational 
provision may be necessary), after paragraph (2) insert—\n\n> \"(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**22.** In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\", or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n(**a**) S.I. 2015/62.",
        "page_start": 3,
        "page_end": 3,
        "source_file": "uksi_20200471_en.pdf"
      },
      {
        "text": "**23.** In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**24.** In regulation 10(4) (decision not to secure an EHC plan)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\"; or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n**25.** In regulation 13(3) (timescales for EHC plans), for \"(c)\" substitute \"(d)\".\n\n**26.** In regulation 29 (compliance with the orders of the First-tier Tribunal)—\n\n- (a) after paragraph (6) insert—\n\"(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.\".\n\n- (b) in paragraph (7)(c) after \"10(4)(a)\" insert \"or (d)\".\n**27.** In regulation 30(7)(c) (unopposed appeals), after \"10(4)(a)\" insert \"or (d)\".\n\n## **Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017**\n\n**28.** The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017(**a**) are 
amended as follows.\n\n**29.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n- **30.** After regulation 2 (interpretation) insert—\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 6(3) and (6) (responding to health care recommendations); and\n- (b) regulation 7(1) and (4) (responding to social care recommendations).\".\n\n*Vicky Ford* Parliamentary Under Secretary of State 28th April 2020 Department for Education\n\n#### (**a**) S.I. 2017/1306.",
        "page_start": 4,
        "page_end": 4,
        "source_file": "uksi_20200471_en.pdf"
      },
      {
        "text": "transportation, treatment, storage and disposal of hazardous and non-hazardous solid waste, and require states to develop programs to ensure the safe disposal of solid waste in sanitary landfills.\n\nSubtitle D of RCRA establishes a framework for regulating the disposal of municipal solid waste. 
Regulations under Subtitle D currently include minimum comprehensive solid waste management criteria and guidelines, including location restrictions, facility design and operating criteria, closure and post-closure requirements, financial assurance standards, groundwater monitoring requirements and corrective action standards, many of which had not commonly been in effect or enforced in the past in connection with municipal solid waste landfills. Each state was required to submit to the U.S. EPA a permit program designed to implement Subtitle D regulations by April 9, 1993. All of the states in which we operate have implemented permit programs pursuant to RCRA and Subtitle D. These state permit programs may include landfill requirements which are more stringent than those of Subtitle D.\n\nAll of our planned landfill expansions or new landfill development projects have been engineered to meet or exceed Subtitle D requirements. Operating and design criteria for existing operations have been modified to comply with these new regulations. Compliance with Subtitle D regulations has resulted in increased costs and may in the future require substantial additional expenditures in addition to other costs normally associated with our waste management activities.\n\n(2) *The Comprehensive Environmental Response, Compensation and Liability Act of 1980, as amended.* CERCLA, among other things, provides for the cleanup of sites from which there is a release or threatened release of a hazardous substance into the environment. CERCLA may impose strict joint and several liability for the costs of cleanup and for damages to natural resources upon current owners and operators of the site, parties who were owners or operators of the site at the time the hazardous substances were disposed of, parties who transported the hazardous substances to the site and parties who arranged for the disposal of the hazardous substances at the site. 
Under the authority of CERCLA and its implementing regulations, detailed requirements apply to the manner and degree of investigation and remediation of facilities and sites where hazardous substances have been or are threatened to be released into the environment. Liability under CERCLA is not dependent upon the existence or disposal of only \"hazardous wastes\" but can also be based upon the existence of small quantities of more than 700 \"substances\" characterized by the U.S. EPA as \"hazardous,\" many of which may be found in common household waste.\n\nAmong other things, CERCLA authorizes the federal government to investigate and remediate sites at which hazardous substances have been or are threatened to be released into the environment or to order (or offer an opportunity to) persons potentially liable for the cleanup of the hazardous substances to do so. In addition, the U.S. EPA has established a National Priorities List of sites at which hazardous substances have been or are threatened to be released and which require investigation or cleanup.\n\nLiability under CERCLA is not dependent upon the intentional disposal of hazardous waste or hazardous substances. It can be founded upon the release or threatened release, even as a result of unintentional, non-negligent or lawful action, of thousands of hazardous substances, including very small quantities of such substances. Thus, even if our landfills have never knowingly received hazardous waste as such, it is possible that one or more hazardous substances may have been deposited or \"released\" at our landfills or at other properties which we currently own or operate or may have owned or operated. Therefore, we could be liable under CERCLA for the cost of cleaning up such hazardous substances at such sites and for damages to natural resources, even if those substances were deposited at our facilities before we acquired or operated them. The costs of a CERCLA cleanup can be very expensive. 
Given the difficulty of obtaining insurance for environmental impairment liability, such liability could have a material impact on our business and financial condition. For a further discussion, see \"— Liability Insurance and Bonding.\"\n\n(3) *The Federal Water Pollution Control Act of 1972, as amended.* This Act regulates the discharge of pollutants from a variety of sources, including solid waste disposal sites, into streams, rivers and other waters of the United States. Point source runoff from our landfills and transfer stations that is discharged into surface waters must be covered by discharge permits that generally require us to conduct",
        "page_start": 17,
        "page_end": 17,
        "source_file": "NYSE_RSG_2004.pdf"
      },
      {
        "text": "#### **Charge for day 2 tests and day 8 tests**\n\n**12.**—(1) The Secretary of State or a person designated by the Secretary of State may impose a charge in respect of mandatory tests provided by a public provider.\n\n(2) The Secretary of State—\n\n- (a) must publish details of the charges in such manner as the Secretary of State considers appropriate; and\n- (b) may recover any sum owed by a person pursuant to such a charge as a debt.\n\n# SCHEDULE 9 Regulation 7(5)\n\n# Workforce tests\n\n### **Interpretation of this Schedule**\n\n**1.** In this Schedule—\n\n- (a) \"P\" means a person required to undertake workforce tests under regulation 7 (requirement to undertake workforce tests);\n- (b) \"workforce test\" means any of the categories of workforce test described in regulation 7(6).\n\n#### **Requirement after failure to undertake test**\n\n**2.**—(1) Sub-paragraph (2) applies where P fails to undertake a workforce test that P is required by regulation 7 to undertake.\n\n(2) Where this sub-paragraph applies, P must self-isolate in accordance with regulation 2 of the Self-Isolation Regulations until the earlier of—\n\n- (a) the end of the 14th day after the day on which P arrived in England; or\n- (b) the time P obtains a 
negative result from a workforce test.\n\n(3) P must comply with any applicable obligations in regulation 7(2) during any period that P is required to self-isolate in accordance with sub-paragraph (2).\n\n(4) Where P is required to self-isolate in accordance with sub-paragraph (2), regulation 2(2) of the Self-Isolation Regulations (meaning of self-isolate) applies as if it also permitted P to leave the place of self-isolation where necessary to undertake a workplace test.\n\n#### **Consequences of test results**\n\n**3.**—(1) Where a workforce test undertaken by P in accordance with regulation 7 generates a positive result—\n\n- (a) P must as soon as reasonably practicable undertake a further test which complies with the requirements for a day 2 test specified in paragraph 6 of Schedule 8 (mandatory testing after arrival in England), in the circumstances specified in paragraph 10 of that Schedule (other than the circumstances in paragraph 10(2) about when a test must be undertaken);\n- (b) P must self-isolate in accordance with regulation 2 of the Self-Isolation Regulations until the end of the 10th day after the day P undertook the test.\n- (2) Where sub-paragraph (1) applies—\n\t- (a) if the test taken by P was a workforce test undertaken for day 2, P is not required to undertake a workforce test for day 5 or day 8;\n\t- (b) if the test undertaken by P was a workforce test undertaken for day 5, P is not required to undertake a workforce test for day 8.", - "page_start": 66, - "page_end": 66, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "PLAW-116publ30.pdf", - "query": "When take effect the Fairness For Breastfeeding Mothers Act ?", - "target_page": 2, - "target_passage": "The amendments made by this section shall take effect 1 year after the date of the enactment of this Act. 
",
        "chunk_present": {
          "presence": false,
          "index": null
        }
      },
      "top_chunk": [
        {
          "text": "# Public Law 116–30 116th Congress\n\n## An Act\n\nJuly 25, 2019 [H.R. 866]\n\nTo provide a lactation room in public buildings.\n\n*Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,* \n\nFairness For Breastfeeding Mothers Act of 2019. 40 USC 101 note.\n\n#### **SECTION 1. SHORT TITLE.**\n\nThis Act may be cited as the ''Fairness For Breastfeeding Mothers Act of 2019''.\n\n#### **SEC. 2. LACTATION ROOM IN PUBLIC BUILDINGS.**\n\n(a) LACTATION ROOM IN PUBLIC BUILDINGS.—Chapter 33 of title 40, United States Code, is amended by adding at the end the following new section:\n\n40 USC 3318.\n\n### **''§ 3318. Lactation room in public buildings**\n\n''(a) DEFINITIONS.—In this section:\n\n''(1) APPROPRIATE AUTHORITY.—The term 'appropriate authority' means the head of a Federal agency, the Architect of the Capitol, or other official authority responsible for the operation of a public building.\n\n''(2) COVERED PUBLIC BUILDING.—The term 'covered public building' means a public building (as defined in section 3301) that is open to the public and contains a public restroom, and includes a building listed in section 6301 or 5101.\n\n''(3) LACTATION ROOM.—The term 'lactation room' means a hygienic place, other than a bathroom, that—\n\n''(A) is shielded from view;\n\n''(B) is free from intrusion; and\n\n''(C) contains a chair, a working surface, and, if the public building is otherwise supplied with electricity, an electrical outlet.\n\n''(b) LACTATION ROOM REQUIRED.—Except as provided in subsection (c), the appropriate authority of a covered public building shall ensure that the building contains a lactation room that is made available for use by members of the public to express breast milk.\n\n''(c) EXCEPTIONS.—A covered public building may be excluded from the requirement in 
subsection (b) at the discretion of the appropriate authority if—\n\n''(1) the public building—\n\n''(A) does not contain a lactation room for employees who work in the building; and\n\n''(B) does not have a room that could be repurposed as a lactation room or a space that could be made private using portable materials, at a reasonable cost; or",
        "page_start": 0,
        "page_end": 0,
        "source_file": "PLAW-116publ30.pdf"
      },
      {
        "text": "the offspring12. Human studies have revealed GMV reductions in areas of the brain important for social cognition and the magnitude of these changes corresponds with increased parental attachment13. Deeper examination of cellular and systems-level mechanisms will improve our understanding of how pregnancy remodels specific circuits to promote maternal behavior.\n\nAlthough studied to a lesser degree, ties between maternal behavior and white matter microstructure (particularly connectivity between temporal and occipital lobes) have been noted31. Here we reveal pronounced GMV changes in regions within sensory, attention and default mode networks over the gestational window. In parallel, we observed increased anisotropy in white matter tracts that facilitate communication between emotional and visual processing hubs37–39, including the inferior longitudinal fasciculus and inferior fronto-occipital fasciculus. Pinpointing the synchrony of gray and white matter changes that unfold in the maternal brain could be key to understanding the behavioral adaptations that emerge during and after pregnancy, such as honing the brain's visual and auditory responses to infant cues and eliciting maternal behavior. Research into other major transition periods supports this idea. 
For instance, adolescence is a dynamic period characterized by region-specific, nonlinear decreases in GMV and increases in WMV, maturational brain changes that are tied to gains in executive function and social cognition40. For both adolescence41 and matrescence, the considerable rise in steroid hormone production appears to remodel the brain (see ref. 25 for comparative analysis), promoting a suite of behaviors adaptive to that life stage. How specific neural changes give rise to specific behavioral adaptations has yet to be fully explored with respect to human pregnancy.\n\nThis precision imaging study mapped neuroanatomical changes across pregnancy in a single individual, precluding our ability to generalize to the broader population. To benchmark our findings, we compared the magnitude of GMV changes observed throughout pregnancy against data from nonpregnant individuals sampled over a similar time course. Doing so provided compelling evidence that pregnancy-related neuroanatomical shifts far exceed normative day-to-day brain variability and measurement error. Evidence suggests that white matter microstructure remains fairly stable over a six-month period42, but more studies are needed to compare the degree of white matter changes observed during pregnancy to normative change over time. Further, sampling larger cohorts of women will generate much-needed normative models of brain change (akin to ref. 43) throughout pregnancy to establish what constitutes a typical degree of neuroanatomical change expected during gestation and postpartum recovery.\n\nThese findings provide a critical rationale for conducting further precision imaging studies of pregnancy in demographically enriched cohorts to determine the universality and idiosyncrasy of these adaptations and their role in maternal health. Are the changes observed in our participant reflective of the broader population? Do deviations from the norm lead to maladaptive outcomes? 
A precision imaging approach can help determine whether the pace of pregnancy-induced neuroanatomical changes drives divergent brain health outcomes in women, as may be the case during other rapid periods of brain development44. One in five women experiences perinatal depression45 and while the first FDA-approved treatment is now available46, early detection remains elusive. Precision imaging studies could offer clues about an individual's risk for or resilience to depression before symptom onset, helping clinicians better determine when and how to intervene. Neuroscientists and clinicians also lack tools to facilitate detection and treatment of neurological disorders that co-occur, worsen or remit with pregnancy, such as epilepsy, headaches, multiple sclerosis and intracranial hypertension47. Precision mapping of the maternal brain lays the groundwork for a greater understanding of the subtle and sweeping structural, functional, behavioral and clinical changes that unfold across pregnancy. Such pursuits will advance our basic understanding of the human brain and its remarkable ability to undergo protracted plasticity in adulthood.\n\n### **Online content**\n\nAny methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41593-024-01741-0.\n\n# **References**\n\n- 1. World Health Organization. Maternal, newborn, child and adolescent health and ageing. platform.who.int/data/ maternal-newborn-child-adolescent-ageing (2022).\n- 2. Thornburg, K. L., Bagby, S. P. & Giraud, G. D. *Knobil and Neill's Physiology of Reproduction* pp. 1927–1955 (Elsevier, 2015).\n- 3. Brunton, P. J. & Russell, J. A. The expectant brain: adapting for motherhood. *Nat. Rev. Neurosci.* **9**, 11–25 (2008).\n- 4. Gregg, C. 
Pregnancy, prolactin and white matter regeneration. *J. Neurol. Sci.* **285**, 22–27 (2009).\n- 5. Haim, A. et al. A survey of neuroimmune changes in pregnant and postpartum female rats. *Brain Behav. Immun.* **59**, 67–78 (2017).\n- 6. Barrière, D. A. et al. Brain orchestration of pregnancy and maternal behavior in mice: a longitudinal morphometric study. *NeuroImage* **230**, 117776 (2021).\n- 7. Celik, A., Somer, M., Kukreja, B., Wu, T. & Kalish, B. T. The genomic architecture of pregnancy-associated plasticity in the maternal mouse hippocampus. *eNeuro* **9**, ENEURO.0117-22. 2022 (2022).\n- 8. Puri, T. A., Richard, J. E. & Galea, L. A. M. Beyond sex differences: short- and long-term effects of pregnancy on the brain. *Trends Neurosci.* **46**, 459–471 (2023).\n- 9. Chaker, Z. et al. Pregnancy-responsive pools of adult neural stem cells for transient neurogenesis in mothers. *Science* **382**, 958–963 (2023).\n- 10. Diamond, M. C., Johnson, R. E. & Ingham, C. Brain plasticity induced by environment and pregnancy. *Int. J. Neurosci.* **2**, 171–178 (1971).\n- 11. Servin-Barthet, C. et al. The transition to motherhood: linking hormones, brain and behaviour. *Nat. Rev. Neurosci.* **24**, 605–619 (2023).\n- 12. Ammari, R. et al. Hormone-mediated neural remodeling orchestrates parenting onset during pregnancy. *Science* **382**, 76–81 (2023).\n- 13. Hoekzema, E. et al. Pregnancy leads to long-lasting changes in human brain structure. *Nat. Neurosci.* **20**, 287–296 (2017).\n- 14. Hoekzema, E. et al. Mapping the effects of pregnancy on resting state brain activity, white matter microstructure, neural metabolite concentrations and grey matter architecture. *Nat. Commun.* **13**, 6931 (2022).\n- 15. Martínez-García, M., Paternina-Die, M., Desco, M., Vilarroya, O. & Carmona, S. Characterizing the brain structural adaptations across the motherhood transition. *Front. Glob. Womens Health* **2**, 742775 (2021).\n- 16. Spalek, K. et al. 
Pregnancy renders anatomical changes in hypothalamic substructures of the human brain that relate to aspects of maternal behavior. *Psychoneuroendocrinology* **164**, 107021 (2024).\n- 17. Martínez-García, M. et al. Do pregnancy-induced brain changes reverse? The brain of a mother six years after parturition. *Brain Sci.* **11**, 168 (2021b).\n- 18. De Lange, A.-M. G. et al. Population-based neuroimaging reveals traces of childbirth in the maternal brain. *Proc. Natl Acad. Sci. USA* **116**, 22341–22346 (2019).", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed4.pdf" - }, - { - "text": "**Fig. 1 | Precision imaging reveals neuroanatomical changes throughout gestation. a**, Standard medical demarcations for pregnancy stages (that is, trimesters) by gestation week (the image is created with BioRender.com). **b**, Steroid hormones increased significantly throughout pregnancy and dropped precipitously postpartum, as is characteristic of the prenatal and postnatal periods. **c**, A healthy 38-year-old primiparous woman underwent 26 scanning sessions from 3 weeks preconception through 2 years postpartum. Scans were distributed throughout preconception (four scans), first trimester (four scans), second trimester (six scans), third trimester (five scans) and postpartum (seven scans); tick marks indicate when major measures were collected and\n\n# **Discussion**\n\nConverging evidence across mammalian species points to pregnancy as a remarkable period of neuroplasticity, revealing the brain's ability to undergo adaptive, hormonally-driven neuroanatomical changes beyond adolescence13–15,20,21,24–26. Investigations that compare women week. **d**, Summary (that is, total) of brain measures throughout the experiment. Generalized additive models revealed GMV, CT and total brain volume decreased throughout pregnancy (see Methods for validation with cubic regression), with a slight recovery postpartum. 
Global QA, lateral ventricle and CSF volumes displayed nonlinear increases across gestation, with a notable rise in the second and third trimesters before dropping sharply postpartum. Shaded regions represent 95% confidence bands; solid lines indicate model fit; dashed line indicates parturition.\n\ncolors denote pregnancy stage. The participant underwent IVF to achieve pregnancy, allowing for precise mapping of ovulation, conception and gestation\n\nprepregnancy and then again postpartum provide the strongest evidence to date that the human brain undergoes such neural changes11,27. But what about pregnancy itself? Over what time course do anatomical changes in the maternal brain manifest? Are they tied to the substantial increase in sex hormone production? Here we begin to address these",
        "page_start": 2,
        "page_end": 2,
        "source_file": "pubmed4.pdf"
      },
      {
        "text": "# **nature neuroscience**\n\n# **Neuroanatomical changes observed over the course of a human pregnancy**\n\nReceived: 23 August 2023\n\nAccepted: 29 July 2024\n\nPublished online: 16 September 2024\n\nCheck for updates\n\n**Laura Pritschet  1 , Caitlin M. Taylor  1 , Daniela Cossio  2 , Joshua Faskowitz  3 , Tyler Santander1 , Daniel A. Handwerker  3 , Hannah Grotzinger1 , Evan Layher1 , Elizabeth R. Chrastil  2,5 & Emily G. Jacobs  1,4,5**\n\nPregnancy is a period of profound hormonal and physiological changes experienced by millions of women annually, yet the neural changes unfolding in the maternal brain throughout gestation are not well studied in humans. Leveraging precision imaging, we mapped neuroanatomical changes in an individual from preconception through 2 years postpartum. Pronounced decreases in gray matter volume and cortical thickness were evident across the brain, standing in contrast to increases in white matter microstructural integrity, ventricle volume and cerebrospinal fluid, with few regions untouched by the transition to motherhood. 
This dataset serves as a comprehensive map of the human brain across gestation, providing an open-access resource for the brain imaging community to further explore and understand the maternal brain.\n\nWorldwide, nearly 85% of women experience one or more pregnancies in their lifetime1 , with 140 million women becoming pregnant each year. Over an approximately 40-week gestational window, the maternal body undergoes profound physiological adaptations to support the development of the fetus, including increases in plasma volume, metabolic rate, oxygen consumption and immune regulation2 . These rapid adaptations are initiated by 100-fold to 1,000-fold increases in hormone production, including estrogen and progesterone. These neuromodulatory hormones also drive significant reorganization of the central nervous system. Evidence from animal models and human studies converge on pregnancy as a period of remarkable neuroplasticity3–10 (see ref. 10 for one of the earliest known observations). Gestational increases in steroid hormone synthesis drive neurogenesis, dendritic spine growth, microglial proliferation, myelination and astrocyte remodeling (for review, see ref. 11). These cellular changes are pronounced in brain circuits that promote maternal behavior. For example, Ammari et al. recently discovered that steroid hormones can fine-tune the response properties of galanin neurons in the rodent medial preoptic area of the hypothalamus (mPOA), leading to enhanced sensitivity in dams to sensory cues from newborn pups12.\n\nIn humans, reductions in gray matter volume (GMV) have been observed postpartum13–16, particularly in regions central to theory-of-mind processing13. These GMV changes persist at 6 years postpartum17 and are traceable decades later18,19, underscoring the permanence of this major remodeling event. And yet the changes that occur within the maternal brain during gestation itself are virtually unknown (see ref. 20 for early neuroimaging insight). 
A recent study by Paternina-Die et al. offers intriguing clues21. Women were scanned once in the third trimester and again in the postpartum period, revealing a reduction of cortical volume observable in the late pregnancy scan. These findings suggest that pregnancy is a highly dynamic period for neural remodeling, yet neuroscientists lack a detailed map of how the human brain changes throughout the gestational period.\n\nHere we conducted a precision imaging study of pregnancy in which a healthy 38-year-old primiparous woman underwent 26 magnetic resonance imaging (MRI) scans and venipuncture beginning 3 weeks preconception through 2 years postpartum. We observed widespread reductions in cortical GMV and cortical thickness (CT) occurring in step with advancing gestational week and the dramatic rise in sex hormone production. Remodeling was also evident within\n\n1 Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA. 2 Department of Neurobiology and Behavior, University of California, Irvine, CA, USA. 3 Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA. 4 Neuroscience Research Institute, University of California, Santa Barbara, CA, USA. 5 These authors contributed equally: Elizabeth R. Chrastil, Emily G. Jacobs.  e-mail: laura.pritschet@pennmedicine.upenn.edu; chrastil@uci.edu; emily.jacobs@psych.ucsb.edu", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed4.pdf" - }, - { - "text": "subcortical structures, including the ventral diencephalon, caudate, thalamus, putamen and hippocampus. High-resolution imaging and segmentation of the medial temporal lobe (MTL) extend these findings further, revealing specific volumetric reductions within hippocampal subfields CA1, CA2/CA3 and parahippocampal cortex (PHC). 
In contrast to widespread decreases in cortical and subcortical GMV, correlational tractography analyses revealed nonlinear increases in white matter quantitative anisotropy (QA) throughout the brain—indicating greater tract integrity—as gestational week progressed. Together, these findings reveal the highly dynamic changes that unfold in a human brain across pregnancy, demonstrating a capacity for extensive neural remodeling well into adulthood.\n\n# **Results**\n\n#### **Serological evaluations**\n\nSerological evaluations captured canonical hormone fluctuations characteristic of the prenatal, perinatal and postnatal periods (Fig. 1b). Serum hormone concentrations increased significantly over the course of pregnancy and dropped precipitously postpartum (preconception, estradiol (E) = 3.42 pg ml−1 and progesterone (P) = 0.84 ng ml−1; 3 weeks preparturition, E = 12,400 pg ml−1 and P = 103 ng ml−1; 3 months postparturition, E = 11.50 pg ml−1 and P = 0.04 ng ml−1).\n\n#### **Whole-brain dynamics from baseline through postpartum**\n\nTo begin, we characterized broad neuroanatomical changes over the course of the entire experimental window (baseline—2 years postpartum, 26 scans; Fig. 1d). Generalized additive models revealed strong nonlinear (effective degrees of freedom > 3) relationships between weeks since conception and summary brain metrics. Total GMV (*F* = 27.87, *P* < 0.001, deviance explained = 93.9%, *R*2 adj = 0.91), summary CT (*F* = 15.79, *P* < 0.001, deviance explained = 78.6%, *R*2 adj = 0.75) and total brain volume (*F* = 26.12, *P* < 0.001, deviance explained = 93.4%, *R*2 adj = 0.90) linearly decreased during gestation and appeared to partially rebound postpartum. In contrast, global microstructural integrity (QA) of white matter increased throughout the first and second trimesters before returning to baseline levels in the postpartum period (whole-brain QA, *F* = 4.62, *P* = 0.007, deviance explained = 60.2%, *R*2 adj = 0.51). 
We also observed nonlinear patterns of lateral ventricle expansion *(F* = 10.44, *P* < 0.001, deviance explained = 83.8%, *R*2 adj = 0.77) and increased cerebrospinal fluid (CSF; *F* = 13.32, *P* < 0.001, deviance explained = 83.8%, *R*2 adj = 0.79) rising in the second and third trimesters before dropping sharply postpartum.\n\n#### **Cortical volume and thickness changes tied to gestation**\n\nWe then narrowed the aperture to capture changes unfolding within gestation itself (baseline—36 weeks pregnant, 19 scans). Relationships between summary brain metrics were evident over the gestational period as follows: total brain volume, GMV and CT were positively associated with one another, whereas lateral ventricles, CSF and global QA demonstrated negative relationships with GMV (Supplementary Fig. 1).\n\nChanges in GMV were near-ubiquitous across the cortical mantle (Fig. 2a). Most large-scale brain networks exhibited decreases in GMV (Fig. 2b and Supplementary Table 1); indeed, 80% of the 400 regions of interest (ROI) demonstrated negative relationships between GMV and gestation week (Fig. 2a and Supplementary Table 2). Together, these results provide evidence of a global decrease in cortical volume across pregnancy. Several sensory and attention subnetworks were particularly sensitive to gestation, including the control (subnetwork B), salience/ventral attention (subnetwork A), dorsal attention (subnetwork B), default (subnetwork A) and somatomotor (subnetworks A and B) networks (Supplementary Table 1). Regions driving these network-level changes include the bilateral inferior parietal lobe, postcentral gyri, insulae, prefrontal cortex, posterior cingulate and somatosensory cortex (Fig. 2c, Supplementary Table 2 and validation of findings using alternate pipeline in Supplementary Tables 1 and 3). 
These regions and associated brain networks appear to decrease in volume at a faster rate than the rest of the brain throughout pregnancy, as determined by a subsequent analysis controlling for total GMV (Supplementary Tables 1 and 2). GMV reductions were also significantly correlated with the participant's estradiol and progesterone concentrations (Supplementary Table 1). A highly similar pattern of results was observed when examining pregnancy-related CT changes (Supplementary Fig. 3 and Supplementary Tables 4 and 5). Significant reductions in cortical GMV over gestation remained after controlling for standard quality control (QC) metrics, albeit with some influence on the magnitude and location of the observed effects (Supplementary Figs. 4 and 5).\n\nIn contrast, GMV within regions of the default mode (subnetwork C), limbic (subnetworks A and B) and visual peripheral networks buck the global trend by slightly increasing (for example, temporal poles), remaining constant (for example, orbitofrontal cortex) or reducing at a much slower rate (for example, extrastriate cortex) than total GMV (Fig. 2a,b and Supplementary Tables 1 and 2). CT changes in these regions exhibit similar patterns (Supplementary Fig. 3 and Supplementary Tables 4 and 5).\n\n#### **Subcortical GMV changes tied to gestation**\n\nConsistent with the broader cortical reductions in GMV, several subcortical regions significantly reduced in volume across gestation (Fig. 3a, left). This included bilateral ventral diencephalon (right hemisphere values shown in Fig. 
3a, right; encompasses hypothalamus, substantia nigra, mammillary body, lateral geniculate nucleus and red nucleus among others22), caudate, hippocampus and thalamus, along with left putamen and brain stem (Supplementary Table 6, *q* < 0.05).\n\nNext, high-resolution segmentation of the MTL allowed us to interrogate subcortical structures at a finer resolution, revealing nonlinear volumetric decreases in CA1 (*F*(2,15) = 5.84, *q* = 0.031, *R*2 adj = 0.36; Fig. 3b, left) and CA2/CA3 (*F*(2,15) = 6.82, *q* = 0.027, *R*2 adj = 0.41; Fig. 3b, middle) across gestation. PHC exhibited linear volumetric decreases across gestation (*F*(1,16) = 24.87, *q* < 0.001, *R*2 adj = 0.58; Fig. 3b, right) which was also tied to estradiol (*F*(1,12) = 20.21, *q* = 0.005, *R*2 adj = 0.60). All three relationships remained significant after proportional correction for total GMV. There was no significant change in other subregions or total volume of the hippocampal body, or in the parahippocampal gyrus (Supplementary Table 7 and Supplementary Fig. 8).\n\n#### **White matter microstructure changes tied to gestation**\n\nIn contrast to decreasing global GMV, correlational tractography of white matter, which tests for linear trends in the data, revealed increasing microstructural integrity across the whole brain during gestation (Fig. 4a), concomitant with the rise in 17β-estradiol and progesterone (all *q* < 0.001; Supplementary Fig. 9). Tracts displaying robust correlations with gestational week included the corpus callosum, arcuate fasciculus, inferior fronto-occipital fasciculus and inferior longitudinal fasciculus (Fig. 
4b), as well as the cingulum bundle, middle and superior longitudinal fasciculus, corticostriatal, corticospinal and corticopontine tracts (see Supplementary Table 9 for complete list).\n\n#### **Comparing brain changes across pregnancy against controls**\n\nWe then compared the changes in GMV across gestation to that of typical variability over time, derived from eight densely-sampled controls23. The GMV changes we see across pregnancy far exceed normative brain variability (Supplementary Fig. 11). On average, change in cortical GMV was nearly three times higher than controls scanned over a similar duration (Supplementary Fig. 11a,b). This extends to MTL subfields, wherein change in volume was three to four times greater across gestation than normative brain variability (Supplementary Fig. 11c,d). We contextualized these findings further by comparing gestational GMV change against our participant's preconception brain volumes; average GMV change during pregnancy was six times (cortical) and three times (MTL) higher than the variability observed between baseline sessions.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "#### *Consumer Laws and Regulations*\n\nWe are also subject to certain consumer laws and regulations that are designed to protect consumers in transactions with banks. While the following list is not exhaustive, these laws and regulations include the Truth in Lending Act, the Truth in Savings Act, the Electronic Funds Transfer Act, the Expedited Funds Availability Act, the Equal Credit Opportunity Act, and the Fair Housing Act, among others. These laws and regulations among other things prohibit discrimination on the basis of race, gender or other designated characteristics and mandate various disclosure requirements and regulate the manner in which financial institutions must deal with customers when taking deposits or making loans to such customers. 
These and other laws also limit finance charges or other fees or charges earned in our activities. We must comply with the applicable provisions of these consumer protection laws and regulations as part of our ongoing customer relations.\n\n#### *Technology Risk Management and Consumer Privacy*\n\nState and federal banking regulators have issued various policy statements emphasizing the importance of technology risk management and supervision in evaluating the safety and soundness of depository institutions with respect to banks that contract with outside vendors to provide data processing and core banking functions. The use of technology-related products, services, delivery channels and processes expose a bank to various risks, particularly operational, privacy, security, strategic, reputation and compliance risk. Banks are generally expected to prudently manage technology-related risks as part of their comprehensive risk management policies by identifying, measuring, monitoring and controlling risks associated with the use of technology.\n\nUnder Section 501 of the Gramm-Leach-Bliley Act, the federal banking agencies have established appropriate standards for financial institutions regarding the implementation of safeguards to ensure the security and confidentiality of customer records and information, protection against any anticipated threats or hazards to the security or integrity of such records and protection against unauthorized access to or use of such records or information in a way that could result in substantial harm or inconvenience to a customer. Among other matters, the rules require each bank to implement a comprehensive written information security program that includes administrative, technical and physical safeguards relating to customer information.\n\nUnder the Gramm-Leach-Bliley Act, a financial institution must also provide its customers with a notice of privacy policies and practices. 
Section 502 prohibits a financial institution from disclosing nonpublic personal information about a consumer to nonaffiliated third parties unless the institution satisfies various notice and opt-out requirements and the customer has not elected to opt out of the disclosure. Under Section 504, the agencies are authorized to issue regulations as necessary to implement notice requirements and restrictions on a financial institution's ability to disclose nonpublic personal information about consumers to nonaffiliated third parties. Under the final rule the regulators adopted, all banks must develop initial and annual privacy notices which describe in general terms the bank's information sharing practices. Banks that share nonpublic personal information about customers with nonaffiliated third parties must also provide customers with an opt-out notice and a reasonable period of time for the customer to opt out of any such disclosure (with certain exceptions). Limitations are placed on the extent to which a bank can disclose an account number or access code for credit card, deposit, or transaction accounts to any nonaffiliated third party for use in marketing.\n\n#### *Monetary Policy*\n\nBanks are affected by the credit policies of other monetary authorities, including the Federal Reserve Board, that affect the national supply of credit. The Federal Reserve Board regulates the supply of credit in order to influence general economic conditions, primarily through open market operations in United States government obligations, varying the discount rate on financial institution borrowings, varying reserve requirements against financial institution deposits, and restricting certain borrowings by financial institutions and their subsidiaries. 
The monetary policies of the Federal Reserve Board have had a significant effect on the operating results of banks in the past and are expected to continue to do so in the future.", - "page_start": 37, - "page_end": 37, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## Diversity\n\nThe Company has a policy to improve the diversity of its workforce over time by identifying women and individuals from under-represented backgrounds for recruitment, and by rewarding and promoting employees on the basis of performance.\n\nHowever, at this stage of its development, the Company has a small Board of Directors, and a small management team which is geographically dispersed and because of the industry in which the Company operates, the Board does not consider it to be practicable to set measurable objectives to achieve greater gender diversity at this time.\n\nIn addition, the Board acknowledges the benefits of seeking to improve gender diversity at all levels in the Company over time and will keep this issue under review.\n\nThe Company aims to foster continuous improvement in the area of diversity; building on achievement realised through the implementation of historical diversity initiatives, by applying principles successfully used at our leading operation in this area, to other parts of the business.\n\nOur flagship 'Chatree' Mine in Thailand boasts the enviable statistic of having equal representation by women on the senior management team. Recruitment, training and promotion principles employed at Chatree are currently being applied to our 'Challenger' Mine in Australia, where we currently have 14% representation of women across the senior management and professional categories and to other parts of the business.\n\nThere is currently no representation by women on our Board of Directors. 
Whilst this is in part reflective of the relatively small size of the Board and stage of development of key elements of the business, it forms part of an overall business review process to consider the issue of gender diversity at this level and will be the subject of ongoing review.\n\nThe Company considers that it will benefit from its ongoing commitment to promote a diverse workforce with treatment of employees and future employees on the basis of merit, abilities and potential, regardless of gender, colour, ethnic or national origin, race, disability, age, sexual orientation, gender reassignment, socioeconomic background, religious or political belief, non / trade union membership, family circumstances or other irrelevant distinction.\n\nThe Company has set various criteria and procedures in order to support equality and diversity in the workforce and applies these principles to:\n\n- 〉 Provide fair access to workplace opportunities and benefits, including internal promotion, leadership development, flexible work practices and fair and comparable wages;\n- 〉 Attracting and retaining a skilled and diverse workforce;\n- 〉 Creating an inclusive workplace culture where discriminatory behaviour is unacceptable; and\n- 〉 Providing an effective grievance mechanism for employees.\n\n### Current Proportion of Women Employees\n\n| Board | 0.0% |\n| --- | --- |\n| Senior Executives | 0.0% |\n| Senior Managers | 1.8% |\n| Managers | 1.0% |\n| Professionals | 8.6% |\n| Non-professionals | 6.4% |\n| Total Workforce | 17.8% |\n\n## Share Trading Policy\n\nIn the interests of shareholder confidence and compliance with insider trading laws, the Company has formal policies governing the trading of the Company's securities by Directors, officers and employees. 
Details of Directors' shareholdings are disclosed in the Directors' Report.\n\nThe policy prohibits Directors and employees from engaging in short-term trading of any of the Company's securities and buying or selling the Company's securities if they possess unpublished, price-sensitive information.\n\nDirectors and senior management may buy or sell Company securities in the four week period following significant announcements by the Company, including the release of the quarterly report, half-yearly results, the preliminary annual results and the lodgement of the Company's Annual Report (subject to the prohibition of dealing in the Company's securities if they possess unpublished price sensitive information).\n\nDirectors and senior management must also receive approval from the Chairman before buying or selling Company securities.\n\nThe Company's Share Trading Policy is available in the 'Corporate Governance' section of the Company's website.\n\n## Communication with Shareholders and Continuous Disclosure\n\nThe Company is committed to providing relevant and timely information to its shareholders in accordance with its continuous disclosure obligations under the ASX Listing Rules and the *Corporations Act 2001* (Cth).\n\nInformation is communicated to shareholders through the distribution of the Company's Annual Report and other communications. 
All releases are posted on the Company's website and released to the ASX in a timely manner.\n\nThe Company has practices in place throughout the year governing who may authorise and make disclosures and the method by which the market is to be informed of any price sensitive information.\n\nThe Company Secretary is responsible for communications with the ASX and ensuring that the Company meets its continuous disclosure obligations.\n\nThe Company's Continuous Disclosure is available in the 'Corporate Governance' section of the Company's website.\n\n## Annual General Meeting\n\nAll shareholders are encouraged to attend and participate in the Company's Annual General Meeting. Shareholders may attend in person or send a proxy as their representative.\n\nThe Company's external auditor is routinely invited to and attends the Annual General Meeting in order to respond to questions raised by shareholders relating to the content and conduct of the audit and accounting policies adopted by the Company in relation to the preparation of the financial statements.\n\n## Corporate Governance Disclosure\n\nThe Company's governance policies and procedures comply in all substantial respects with the Australian Securities Exchange Corporate Governance Principles and Recommendations with 2010 Amendments. The following table compares the ASX Recommendations and the Company's corporate governance policies and practices.", - "page_start": 38, - "page_end": 38, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "mo•tion (n) 1. An act, process, or instance of moving. 2. A proposal for action.
", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Remuneration Report\n\n#### Dear Shareholder\n\nI am pleased to present our Remuneration Report for 2013.\n\nAs you would be aware, at last year's Annual General Meeting (\"AGM\") 30% of the votes cast in respect of the resolution to adopt the 2012 Remuneration Report voted 'against' the resolution. As this was greater than the 25% threshold under the executive remuneration legislation, we received what is referred to as a 'first strike.' Our formal response to issues raised by shareholders at the AGM with respect to the 2012 Remuneration Report is set out on page 50 of this Report.\n\nVoting at AGMs is not compulsory and results of the 2012 AGM reflected this with only 59% of issued shares that were eligible to vote on the resolution to adopt the Remuneration Report doing so, meaning the 'against' vote represented 18% of eligible issued shares.\n\nWhile we believe our remuneration practices are sound and demonstrate a clear link between executive and shareholder returns, we have taken the first strike seriously and have undertaken an extensive review of the remuneration principles for Key Management Personnel.\n\nThe changes that the Board have implemented as a result of this review include:\n\n- 〉 A structural review of the Company resulting in the appointment in December 2012 of a senior human resources specialist as a direct report to the Managing Director and Executive Committee member;\n- 〉 Fees / base salary packages for Directors and Key Management Personnel were frozen from 1 July 2012;\n- 〉 Directors and Key Management Personnel have agreed to a 10% reduction in fees and remuneration;\n- 〉 The Managing Director and Key Management Personnel agreed to not accept any of their entitled Short Term Incentive (\"STI\") equivalent to a minimum of 10% of their base salary for the 2013 financial year;\n- 〉 A revised Performance Management System, including 
'at risk' remuneration, has been introduced at all levels in corporate and site based operations including at risk remuneration for Key Management Personnel in the form of short term and long term incentive programs described in detail in this report; and\n- 〉 A broadening of the remuneration benchmarking processes for Directors and Key Management Personnel.\n\nFurther details on each of the changes outlined above are provided in specific sections of this Remuneration Report. We believe that these changes will be welcomed by our shareholders.\n\nWe will continue to review our remuneration polices and framework in consideration of a changing industry environment and your feedback.\n\nThank you for your interest in this report.\n\nRoss Smyth-Kirk Chairman Remuneration Committee", - "page_start": 50, - "page_end": 50, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Introduction\n\nThis Remuneration Report forms part of the Directors' Report. It outlines the Remuneration Policy and framework applied by the Company as well as details of the remuneration paid to Key Management Personnel. 
Key Management Personnel are defined as those persons having the authority and responsibility for planning, directing and controlling the activities of the Company, directly or indirectly, including Directors and members of the Executive Management group.\n\nThe information provided in this report has been prepared in accordance with s300A and audited as required by section 308 (3c) of the *Corporations Act 2001*.\n\nThe objective of the Company's remuneration philosophy is to ensure that Directors and senior staff are remunerated fairly and responsibly at a level that is competitive, reasonable and appropriate, in order to attract and retain suitably skilled and experienced people.\n\nDuring the year the Company introduced a STI Plan that is based on Key Management Personnel individual performance measures and a Long-Term Incentive (\"LTI\") Executive Rights Plan that provides performance-based remuneration to members of management through the issue of Deferred Rights and Performance Rights vesting over a period of three years. These new plans are discussed in further detail later in this report.\n\n## Voting and comments made at the Company's 2012 AGM\n\nThe table below provides a summary of the Board's action and / or comments in response to concerns raised by shareholders at the 2012 AGM in relation to remuneration.\n\nKey issues raised were:\n\n- 〉 the granting of deferred rights;\n- 〉 definition of what compromises 'fixed pay'; and\n- 〉 a lack of understanding of the TSR Alpha™ concept recommended as the LTI performance assessment process.\n\n## Remuneration Policy\n\nThe Remuneration Policy has been designed to align the interests of shareholders, Directors, and employees. 
This is achieved by setting a framework to:\n\n- 〉 help ensure an applicable balance of fixed and at-risk remuneration, with the at-risk component linking incentive and performance measures to both Group and individual performance;\n- 〉 provide an appropriate reward for Directors and Executive Management to manage and lead the business successfully and to drive strong, long-term growth in line with the Company's strategy and business objectives;\n- 〉 encourage executives to strive for superior performance;\n- 〉 facilitate transparency and fairness in executive remuneration policy and practices;\n- 〉 be competitive and cost effective in the current employment market; and\n- 〉 contribute to appropriate attraction and retention strategies for Directors and executives.\n\nIn consultation with external remuneration consultants, the Group has structured an executive remuneration framework that is market competitive and complimentary to the business strategy of the organisation.\n\nThe framework is intended to provide a mix of fixed and variable remuneration, with a blend of short and long-term incentives as appropriate. As executives gain seniority within the Group, the balance of this mix shifts to a higher proportion of \"at risk\" rewards (refer to chart – Remuneration Reward Mix on the following page).\n\n## Remuneration Governance\n\n#### Role of the Remuneration Committee\n\nThe Remuneration Committee is a committee of the Board and has responsibility for setting policy for determining the nature and amount of emoluments of Board members and senior executives. 
The Committee makes recommendations to the Board concerning:\n\n- 〉 Non-Executive Director fees;\n- 〉 remuneration levels of Executive Directors and other Key Management Personnel;\n- 〉 the executive remuneration framework and operation of the incentive plan; and\n- 〉 key performance indicators and performance hurdles for the executive team.\n\nIn forming its recommendations the Committee takes into consideration the Group's stage of development, remuneration in the industry and performance. The Corporate Governance Statement provides further information on the role of this committee.\n\n#### Remuneration Consultants\n\nThe Group engages the services of independent and specialist remuneration consultants from time to time. Under the *Corporations Act 2001*, remuneration consultants must be engaged by the Non-Executive Directors and reporting of any remuneration recommendations must be made directly to the Remuneration Committee.\n\n#### Concern Action or Comment\n\nThe Company has benchmarked the issuing of LTIs to the Managing Director and other Key Management Personnel against all companies of comparable market position as part of a broader remuneration comparison using AON Hewitt / McDonald, a review of survey data from the Egan and Associates \"The KMP Report\" and validation from Godfrey's Remuneration Group. The findings confirm the level of remuneration, inclusive of performance rights, to be comparable to similarly experienced Managing Directors and other Key Management Personnel with companies of comparable market positioning within the industry.\n\nThe Company has sought to discuss key elements contained in the Remuneration Report with shareholders, shareholder representative groups and proxy advisory groups. 
Further details regarding the TSR Alpha™ benchmarking methodology are included in the LTI section of this Report.\n\nDeferred rights for the Managing Director were transitional with eligibility for performance rights only in the future.\n\nDetails of the STI and LTI Plans are provided later in this Report.", - "page_start": 51, - "page_end": 51, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20200471_en.pdf", - "query": "When is it not necessary to review an EHC plan ?", - "target_page": 3, - "target_passage": " It is not necessary for a local authority to review an EHC plan in accordance with section 44(1) of the Act if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**10.** In regulation 13(3) (timescales for EHC plans), for \"(d)\" substitute \"(e)\".\n\n**11.** After regulation 18 (circumstances in which a local authority must review an EHC plan) insert—\n\n## \"**Circumstances in which it is not necessary to review an EHC plan**\n\n**18A.**—(1) It is not necessary for a local authority to review an EHC plan in accordance with section 44(1) of the Act if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\n\n(2) Where paragraph (1) applies, a local authority must instead conduct such reviews as soon as reasonably practicable.\".\n\n**12.** In regulation 22 (amending an EHC plan following a review), after paragraph (5) insert—\n\n\"(6) The local authority need not comply with the time limit referred to in paragraphs (3) and (4) if it is impractical to do so because of a reason relating to the incidence or transmission of 
coronavirus.\".\n\n**13.** In regulation 27(3) (amending or replacing an EHC plan following a re-assessment)—\n\n- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**14.** In regulation 45 (unopposed appeals), after paragraph (7) insert—\n\n\"(8) The local authority need not comply with the time limits specified in paragraph (3A) if it is impractical to do so because the circumstances referred to in regulation 10(4)(e) apply.\".\n\n### **Amendment of the Special Educational Needs (Personal Budgets) Regulations 2014**\n\n**15.** The Special Educational Needs (Personal Budgets) Regulations 2014(**a**) are amended as follows.\n\n**16.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2);\n\n**17.** After regulation 2 (interpretation) insert—\n\n\".\n\n#### \"**Relaxation of time period due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, the requirement for the local authority to review the making and use of direct payments within the first three months of them being made in regulation 11(2)(a) (monitoring and review of direct payments) is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(**a**) S.I. 
2014/1652, to which there are amendments not relevant to these Regulations.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 15(2) (transfer of EHC plans) (in relation to the second reference to 15 working days), (4), (5), (7) (in relation to the second reference to 15 working days) and (8);\n- (b) regulation 16(2) and (3) (change of responsible commissioning body);\n- (c) regulation 20(9) and (10) (review where the child or young person attends a school or other institution);\n- (d) regulation 21(7), (8) and (9) (review of EHC plan where the child or young person does not attend a school or other institution);\n- (e) regulation 25(1) (notification of decision whether it is necessary to re-assess educational, health care and social care provision);\n- (f) regulation 27(4) (amending or replacing an EHC plan following a re-assessment);\n- (g) regulation 33 (requirement to consider mediation);\n- (h) regulation 34(1) and (2) (where a parent or young person does not wish to or fails to pursue mediation);\n- (i) regulation 35(2), (3) and (4) (mediation health care issues);\n- (j) regulation 36(2) (mediation no health care issues);\n- (k) regulation 39(1) and (3) (mediation certificate under section 55(5));\n- (l) regulation 42(3) and (4) (steps to be taken by a local authority);\n- (m) regulation 44(2)(d), (e), (f) and (h) (compliance with the orders of the First-tier Tribunal);\n- (n) regulation 45(4), (5) and (6A) (unopposed appeals);\n- (o) regulation 47 (disclosure of EHC 
plans in relation to higher education); and\n- (p) regulation 56(3) (publication of comments on the local offer).\".\n\n**6.** In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert—\n\n> \"(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**7.** In regulation 5(4) (decision whether or not to conduct an EHC needs assessment)—\n\n- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\t- \"; or\n\t- (e) of a reason relating to the incidence or transmission of coronavirus\".\n- **8.** In regulation 8(2) (duty to co-operate in EHC needs assessments)—\n\t- (a) at the end of sub-paragraph (b) omit \"or\"; and\n\t- (b) at the end of sub-paragraph (c) insert—\n\n\"; or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n**9.** In regulation 10(4) (decision not to secure an EHC plan)—", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- 9. There is a 50% reduction in the number of Red List species threatened by invasive alien species.\n- 10. The losses of nutrients from fertilisers are reduced by 50%, resulting in the reduction ofthe use of fertilisers by at least 20%.\n- 11. Cities with at least 20,000 inhabitants have an ambitious Urban Greening Plan.\n- 12. No chemical pesticides are used in sensitive areas such as EU urban green areas.\n- 13. The negative impacts on sensitive species and habitats, including on the seabed through fishing and extraction activities, are substantially reduced to achieve good environmental status.\n- 14. The by-catch of species is eliminated or reduced to a level that allows species recovery and conservation.\n\n### **3. ENABLING TRANSFORMATIVE CHANGE**\n\n### **3.1. 
A new governance framework**\n\nIn the EU, there is currently no comprehensive governance framework to steer the implementation of biodiversity commitments agreed at national, European or international level. To address the gap, the Commission will put in place **a new European biodiversity governance framework**. This will help map obligations and commitments and set out a roadmap to guide their implementation.\n\nAs part of this new framework, the Commission will put in place a monitoring and review mechanism. This will include a **clear set of agreed indicators** and will enable regular progress assessment and set out corrective action if necessary. This mechanism will feed the Environmental Implementation Review and contribute to the European Semester.\n\nThe new governance framework will ensure co-responsibility and co-ownership by all relevant actors in meeting the EU's biodiversity commitments. It will support administrative capacity building, transparency, stakeholder dialogue, and participatory governance at different levels.\n\nThe Commission will assess the progress and suitability of this approach in 2023, and consider whether a legally binding approach to governance is needed.\n\n## **3.2. Stepping up implementation and enforcement of EU environmental legislation**\n\nAll environmental legislation relies on proper implementation and enforcement. Over the last 30 years, the EU has put in place a solid legislative framework to protect and restore its natural capital. However, recent evaluations show that although legislation is fit for purpose, implementation on the ground is lagging behind60. This is having dramatic consequences on biodiversity and comes with a substantial economic cost61 . 
**The full implementation and enforcement of EU environmental legislation is therefore at the heart of this strategy**, for which political support and financial and human resources will need to be prioritised.\n\n60 See 2015 State of Nature in the EU report (COM (2015)219).\n\n61 The costs of non-implementation are estimated at EUR 50 billion per year.", - "page_start": 15, - "page_end": 15, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## EMPLOYEE RETIREMENT AND BENEFIT PLANS\n\n11\n\nA noncontributory defined benefit retirement plan is maintained for all regular employees of the Company except those of Quest Medical. This plan was amended effective January 1, 1998 to become a cash balance pension plan. The Company's funding policy is to make the annual contributions required by applicable regulations and recommended by its actuary. The Company uses a December 31 measurement date for the plan.\n\nThe changes in the plan's projected benefit obligation (\"PBO\") as of December 31, 2003 and 2002 are as follows (in thousands):\n\n| | | 2003 | | 2002 |\n| --- | --- | --- | --- | --- |\n| CHANGE IN BENEFIT OBLIGATION: | | | | |\n| Benefit obligation, January 1 | $ | 4,170 | $ | 4,599 |\n| Service cost | | 214 | | 320 |\n| Interest cost | | 298 | | 307 |\n| Amendments | | —- | | (616) |\n| Actuarial (gain)/loss | | 529 | | (93) |\n| Benefits paid | | (333) | | (347) |\n| Benefit obligation, December 31 | $ | 4,878 | $ | 4,170 |\n\nIn December 2002, the plan was amended to reduce benefit accruals for future service by plan participants by approximately 50 percent. 
This amendment caused a reduction in the PBO of approximately $616,000, and is reflected as a reduction in pension expense over the estimated employee service lives.\n\nThe changes in the fair value of plan assets, funded status of the plan and the status of the prepaid pension benefit recognized, which is included in the Company's balance sheets as of December 31, 2003 and 2002 are as follows (in thousands):\n\n| | | 2003 | | 2002 |\n| --- | --- | --- | --- | --- |\n| CHANGE IN PLAN ASSETS: | | | | |\n| Fair value of plan assets, January 1 | $ | 4,383 | $ | 4,550 |\n| Actual return on plan assets | | 963 | | (750) |\n| Employer contributions | | 400 | | 930 |\n| Benefits paid | | (333) | | (347) |\n| Fair value of plan assets, December 31 | $ | 5,413 | $ | 4,383 |\n| Funded status of plan | $ | 535 | $ | 213 |\n| Unrecognized actuarial loss | | 1,941 | | 2,154 |\n| Unrecognized prior service cost | | (502) | | (539) |\n| Unrecognized net transition obligation | | (88) | | (132) |\n| Net amount recognized as other assets | $ | 1,886 | $ | 1,696 |", - "page_start": 21, - "page_end": 21, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "## M A N A G E M E N T ' S R E S P O N S I B I L I T Y F O R F I N A N C I A L S T A T E M E N T S\n\nManagement is responsible for the preparation and integrity of the consolidated financial statements and other financial information presented in this report. That responsibility is accomplished using internal controls designed to provide reasonable assurance as to the integrity and accuracy of the Company's financial records and to adequately safeguard, verify, and maintain accountability of assets. Such controls are based on established written policies and procedures, are implemented by trained personnel with an appropriate segregation of duties, and are monitored through a comprehensive internal audit program. 
These policies and procedures prescribe that the Company and all its members are to maintain the highest ethical and business standards.\n\nPricewaterhouseCoopers, LLP, independent accountants, is retained to audit HON INDUSTRIES' financial statements. Their accompanying report is based on audits conducted in accordance with auditing standards, generally accepted in the United States.\n\nThe Board of Directors exercises its responsibility for these financial statements through its Audit Committee, which consists entirely of independent board members. The Audit Committee meets periodically with the independent accountants and with the Company's internal auditors, both privately and with management present, to review accounting, auditing, internal controls, and financial reporting matters.\n\nJack D. Michaels Jerald K. Dittmer C H A I R M A N A N D V I C E P R E S I D E N T A N D C H I E F E X E C U T I V E O F F I C E R C H I E F F I N A N C I A L O F F I C E R", - "page_start": 59, - "page_end": 59, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "encouraging cooperation in **education for environmental sustainability** in 2021. This will provide guidance for schools and teachers on how to cooperate and exchange experiences across Member States on biodiversity teaching. The Commission will also provide support materials and facilitate the exchange of good practices in EU networks of teacher-training programmes.\n\n### **4. THE EUROPEAN UNION FOR AN AMBITIOUS GLOBAL BIODIVERSITY AGENDA**\n\nBiodiversity is a priority of the EU's external action and an integral part of efforts to meet the United Nations Sustainable Development Goals. It will be mainstreamed throughout bilateral and multilateral engagements, through the EU's 'Green Deal diplomacy', and forthcoming green alliances76. 
The Commission will work closely with the European Parliament and Member States to ensure a high level of EU ambition and mobilise all efforts for the good of the world's biodiversity.\n\n# **4.1. Raising the level of ambition and commitment worldwide**\n\nProtecting biodiversity is a global challenge and the next decade will be decisive. Global efforts under the United Nations Convention on Biological Diversity have largely been insufficient. Nature cannot afford any half measures or lack of ambition.\n\nIn this spirit, the EU is ready to lead all efforts – working with like-minded partners in **a high-ambition coalition on biodiversity** – to agree an ambitious new global framework for post-2020 at the upcoming 15th Conference of the Parties to the Convention on Biological Diversity.\n\nWith this strategy, the Commission proposes ambitious commitments for the EU to bring to the table. The EU should also support governments and stakeholders across the globe to significantly step up their ambition and their action.\n\nThe Commission proposes that the EU ensures that the post-2020 global framework includes, at a minimum, the elements outlined below:\n\n- Overarching global goals for biodiversity for 2050, in line with the United Nations 2030 Agenda for Sustainable Development and the vision of 'living in harmony with nature'. The ambition should be that, **by 2050, all of the world's ecosystems are restored, resilient, and adequately protected.** The world should commit to the net-gain principle to give nature back more than it takes. The world should commit to no human-induced extinction of species, at minimum where avoidable.\n- Ambitious **global 2030 targets in line with EU commitments** in this strategy. These should clearly address the drivers of biodiversity loss and be specific, measurable, actionable, relevant and time-bound.\n- A much **stronger implementation, monitoring and review** process. 
Parties should revise their National Biodiversity Strategies and Action Plans by the end of 2021, or as a minimum, submit national commitments for the most important targets. There should be a **regular review cycle** to look at progress towards the\n\n76 Green alliances focus on cooperation with African and other partners to implement the European Green Deal.", - "page_start": 19, - "page_end": 19, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "429 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A31994R2062\n\n430 Communication from the Commission - Adapting to change in work and society: a new Community strategy on health and safety at work 2002-2006 /COM/2002/0118 final\n\n431 European Commission Brussels, 31.5.2013 SWD (2013) 202 final COMMISSION STAFF WORKING DOCUMENT Evaluation of the European Strategy 2007-2012 on health and safety at work\n\n432 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Improving quality and productivity at work: Community strategy 2007-2012 on health and safety at work {SEC(2007) 214} {SEC(2007) 215} {SEC(2007) 216}\n\n433 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on an EU Strategic Framework on Health and Safety at Work 2014-2020, Brussels, 6.6.2014 COM (2014) 332 final\n\n434 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: EU strategic framework on health and safety at work 2021- 2027: Occupational safety and health in a changing world of work, {SWD(2021) 148 final} - {SWD(2021) 149 final, Brussels, 28.6.2021\n\n435 European Agency for Safety and Health at Work, 2019: National Strategies in the field of Occupational Safety and Health in the EU, 2019, 
https://osha.europa.eu/en/safety-and-health-legislation/osh-strategies\n\n436 See as overview: https://osha.europa.eu/en/emerging-risks, examples e.g. EU-OSHA, 2014: Current and emerging issues in the healthcare sector, including home and community care, European Risk Observatory report, European Risk Observatory Report;\n\nEU-OSHA, 2014: Green jobs, new risks? New and emerging risks to occupational safety and health in the electricity sectors, Workshop for European Sectoral Social Dialogue Committee Electricity'; EU OSHA 2019: A Review on the Future of Work: Performance Enhancing Drugs\n\n437 Eurostat: Accident at work statistics\n\n438 OSH related LFS Ad hoc modules were: 1999 - Accidents at work and occupational diseases; 2007 - Work related accidents, health problems and hazardous exposure; 2013 Accidents at work and other work-related health problems; 2020 - Accidents at work- and work-related health problems.\n\n439 Eurofound Website: https://www.eurofound.europa.eu/about-eurofound\n\n440Eurofound, European Working Conditions Survey\n\n441 Eurofound, First European Survey on the Work Environment 1991-1992\n\nhttps://www.eurofound.europa.eu/sites/default/files/ef_publication/field_ef_document/ef9211en.pdf\n\n442 European Agency for Safety and Health at Work, ESENER, https://visualisation.osha.europa.eu/esener#!/en 443 Europeans and Health and Safety at Work, June 1992, https://europa.eu/eurobarometer/surveys/detail/113 ; Europeans and Health and Safety at Work, August 1996 https://europa.eu/eurobarometer/surveys/detail/158\n\n444 Eurobarometer: Working Conditions in Europe, June 1997, https://europa.eu/eurobarometer/surveys/detail/151 Eurobarometer: Working Conditions, April 2014, https://europa.eu/eurobarometer/surveys/detail/2044\n\n445 Eurobarometer: Work-Life Balance, October 2018, https://europa.eu/eurobarometer/surveys/detail/2185 446 Eurobarometer: Undeclared work in the European Union, February 
2020\n\nhttps://europa.eu/eurobarometer/surveys/detail/2250\n\n447 The Horizon-project INGRID2 offers links to searchable databases on surveys related to working conditions. https://www.ingridportal.eu/en Supporting expertise in inclusive growth, e-portal 'Dataset on Working conditions', 448 e.g.: International Benchmarking on Occupational Safety and Health (OSH) Regulation, revised version 2018, http://www.iali-aiit.org/ ,\n\n449 The Horizon-project INGRID2 also provides an overview on these types of databases\n\n(https://www.ingridportal.eu/en ) 450 European Centre for the Development of Vocational Training CEDEFOP (https://www.cedefop.europa.eu/) Skills Panorama: https://skillspanorama.cedefop.europa.eu/en\n\n451 European Institute for Gender Equality EIGE (https://eige.europa.eu/ ) Gender Statistics Database, Work and Labour market, https://eige.europa.eu/gender-statistics/dgs, Gender Equality Index, e.g. index of digitalisation in the world of work (2020)\n\n452 European Chemical Agency ECHA (https://echa.europa.eu/home) Exposure scenario examples\n\n*453 European Centre for Disease Prevention and Control, https://www.ecdc.europa.eu/en*\n\n454 European Maritime Safety Agency EMSA (http://www.emsa.europa.eu/ ), Section on Safety and Security http://www.emsa.europa.eu/we-do/safety.html\n\n455 Fundamental Rights Agency FRA, https://fra.europa.eu/en, Section on 'Trafficking and labour exploitation, e.g the report from June 2021 titled: Protecting migrants in an irregular situation from labour exploitation – Role of the\n\nEmployers Sanctions Directive 456 European Monitoring Centre for Drugs and Drug Addiction EMCDDA (https://www.emcdda.europa.eu/), Section 'Best practice', Policy and practice briefings: Work places, https://www.emcdda.europa.eu/bestpractice/briefings/workplace_en\n\nQuite unknown and difficult to estimate: between one and nine percent of the employees take so-called neuro", - "page_start": 157, - "page_end": 157, - "source_file": "EN-Annex II - 
EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "**23.** In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**24.** In regulation 10(4) (decision not to secure an EHC plan)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\"; or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n**25.** In regulation 13(3) (timescales for EHC plans), for \"(c)\" substitute \"(d)\".\n\n**26.** In regulation 29 (compliance with the orders of the First-tier Tribunal)—\n\n- (a) after paragraph (6) insert—\n\"(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.\".\n\n- (b) in paragraph (7)(c) after \"10(4)(a)\" insert \"or (d)\".\n**27.** In regulation 30(7)(c) (unopposed appeals), after \"10(4)(a)\" insert \"or (d)\".\n\n## **Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017**\n\n**28.** The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017(**a**) are amended as follows.\n\n**29.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n- **30.** After regulation 2 (interpretation) insert—\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) 
The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 6(3) and (6) (responding to health care recommendations); and\n- (b) regulation 7(1) and (4) (responding to social care recommendations).\".\n\n*Vicky Ford* Parliamentary Under Secretary of State 28th April 2020 Department for Education\n\n#### (**a**) S.I. 2017/1306.", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "#### **Evaluation**\n\nIt is critical to ensure that AI systems are safe, ethical, and without bias in the clinical domain. For the proposed approach, we performed comprehensive automatic evaluations and a novel, rigorous, patient safety-focused clinical evaluation. The unique clinical evaluation framework was designed to (1) screen for and identify the common, specific correctness issues in LLMs observed in longform clinical summarization and (2) assess the potential patient safety implications associated with any incorrectness identified using a modified version of the World Health Organization's International Classification for Patient Safety.45\n\n### **Automated Evaluations**\n\nWe used the summarization evaluation metrics of recall-oriented understudy for gisting evaluation (ROUGE),46 bidirectional encoder representations from transformers score (BERTScore),47 and source chunking approach for large-scale inconsistency evaluation (SCALE).48 ROUGE computes the overlap of n-grams between the generated and reference summaries. 
For longform document summarization, the following ROUGE scores are considered to be close to the reference summaries: ROUGE-1, above 0.4; ROUGE-2, above 0.2; and ROUGE-L, above 0.3.46 BERTScore leverages the pretrained contextual embeddings from BERT and matches words to compute a similarity score for each token in the candidate sentence with each token in the reference sentence. We used SCALE,48 a natural language inference–based approach, to measure the faithfulness between the source document and the generated text. Further background is provided about SCALE in eAppendix 2 in Supplement 1.\n\n#### **Statistical Analysis**\n\nBased on prior work, 3 board certified EM physician leaders (M.M., A.F., and P.S.) with experience in formal quality and patient safety review processes performed retrospective reviews of ED-based EHR records of 50 individual ED patient encounters, randomly selected from the test dataset.49 Based on prior published clinical evaluations of LLM, as well as the study feasibility of using EM physician quality and patient safety leaders, 50 ED patient encounters were evaluated.50 Reviewers\n\nCBC indicates complete blood count; CMP, comprehensive metabolic panel; CTH, computed tomography of the head; EHR, electronic health record; Hct, hematocrit; Hgb, hemoglobin; HPI, history of present illness; HR, heart rate; IP, inpatient; IVF, intravenous fluid; N/V/D, nausea, vomiting, and diarrhea; RR, respiratory rate; SDU, step down unit; SPO2, peripheral capillary oxygen saturation; WBC, white blood cell; WBG, whole blood glucose.\n\nJAMA Network Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 5/12", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed8.pdf" - }, - { - "text": "build on the headline ambition to ensure that by 2050 **all of the world's ecosystems are restored, resilient, and adequately protected.** The world should commit to the net-gain principle to give nature back more than it takes. As part of this, the world should commit to no human-induced extinction of species, at minimum where avoidable.\n\nThis strategy sets out how Europe can help make this happen. As a milestone, it aims to ensure that **Europe's biodiversity will be on the path to recovery by 2030** for the benefit of people, the planet, the climate and our economy, in line with the 2030 Agenda for Sustainable Development and with the objectives of the Paris Agreement on Climate Change. It addresses the five main drivers of biodiversity loss, sets out an enhanced governance framework to fill remaining gaps, ensures the full implementation of EU legislation, and pulls together all existing efforts. This strategy is enterprising and incentivising in spirit and action. It reflects the fact that **protecting and restoring nature will need more than regulation alone**. It will require action by citizens, businesses, social partners and the research and knowledge community, as well as strong partnerships between local, regional, national and European level. This strategy is in line with the ambitions and commitment set out in President von der Leyen's Political Guidelines and in the European Green Deal.\n\nAdopted in the heart of the COVID-19 pandemic, this strategy will also be a central element of the EU's recovery plan. It will be crucial to prevent and build resilience to future zoonosis outbreaks and to provide immediate business and investment opportunities for restoring the EU's economy.\n\nAll new initiatives and proposals will be underpinned by the Commission's better regulation tools. 
Based on public consultations and on the identification of the environmental, social and economic impacts, impact assessments will contribute to ensuring that all initiatives achieve their objectives in the most effective and least burdensome way and live up to a green oath to \"do no harm\".\n\n### **2. PROTECTING AND RESTORING NATURE IN THE EUROPEAN UNION**\n\nThe EU has legal frameworks, strategies and action plans to protect nature and restore habitats and species. But protection has been incomplete, restoration has been smallscale, and the implementation and enforcement of legislation has been insufficient17 .\n\nTo put biodiversity on the path to recovery by 2030, we need to step up the protection and restoration of nature. This should be done by improving and **widening our network of protected areas** and by developing an ambitious **EU Nature Restoration Plan**.\n\n#### **2.1. A coherent network of protected areas**\n\nBiodiversity fares better in protected areas. However, the current network of legally protected areas, including those under strict protection, is not sufficiently large to safeguard biodiversity. Evidence shows that the targets defined under the Convention on Biological Diversity are insufficient to adequately protect and restore nature18. Global\n\n17 Mid-term review of the EU Biodiversity Strategy to 2020 (COM(2015) 478 and SWD(2015) 187); Fitness Check of the EU Nature Legislation (Birds and Habitats Directives) (SWD(2016) 472); Fitness Check of the EU Water Legislation (SWD(2019) 439).\n\n18 The global Aichi biodiversity targets are that protected areas should cover 17% on land and 10% at sea, while scientific studies' figures range from 30% to 70%. See e.g. 
IPBES 2019.", - "page_start": 3, - "page_end": 3, - "source_file": "legal5_eubiodiversity_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "Excel Training Manual 1.pdf", - "query": "Give me some info about the scroll bars in excel", - "target_page": 6, - "target_passage": "Appear at the right and on the bottom of the screen. You may click the scroll arrows, drag the scroll box or click the scroll bar to move through the document. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "### **NAVIGATING IN A FILE**\n\n| Arrow | Move one cell to the right, left, up or down |\n| --- | --- |\n| Keys | |\n| Tab | Move once cell to the right |\n| Ctrl+Home | To beginning file |\n| Ctrl+End | To end of typed information |\n| Home | Beginning of a line |\n| End | End of a line |\n| Page Down | Down one screen |\n| Page Up | Up one screen |\n| F5 | To a specific page |\n| Scroll bars | Appear at the right and on the bottom of the screen. You may click the scroll arrows, drag the scroll box or click the scroll bar to move |\n| | through the document. |", - "page_start": 5, - "page_end": 5, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **WRAPPING AND MERGING TEXT**\n\nMicrosoft Excel will allow long cell entries to spill across to other adjacent cells to the right as long as those cells are empty. If those cells contain data the spill-over will be chopped off. 
If you need to place long text entries in a cell you can arrange for Microsoft Excel to wrap the text within the cell and also merge that cell with others to accommodate the longer text entry.\n\n### **For Your Reference…**\n\n- To wrap text click in the cell to merge and click on the *Wrap Text* command in the *Alignment* group on the *Home* tab\n- To merge text click on the drop arrow for *Merge & Centre* in the *Alignment* group and select **Merge Cells**\n\n### **Handy to Know…**\n\n- In the example above, wrapping forced the text into one cell and Excel expanded the row height so that all of the text was accommodated. We then merged the text across several horizontal cells in the exercise above so that we could reduce the row height to a more acceptable level.", - "page_start": 25, - "page_end": 25, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "# **THE CHARTING PROCESS**\n\n*Charts* provide a way of seeing trends in the data in your worksheet. The charting feature in Excel is extremely flexible and powerful and allows you to create a wide range of charts from\n\nany of the *Insert* commands in the *Charts* group on the\n\n### **Inserting Charts**\n\nThe first step when creating a chart is to select the data from the worksheet that you want to chart. It is important to remember that the selected range (which can be either contiguous or non-contiguous), should include *headings* (e.g. names of months, countries, departments, etc). These become *labels* on the chart. Secondly, the selected range should not (normally) include totals as these are inserted automatically when a chart is created.\n\nThe second step is to create a chart using the *INSERT* tab on the ribbon. You can choose a *Recommended Chart* where Excel analyses the selected data and suggests several possible chart layouts.\n\nAlternatively you can create the chart yourself from scratch by choosing one of the *Insert* commands in the *Charts* group. 
Charts that you create in Excel can be either *embedded* into a worksheet, or they can exist on their own sheets, known as *chart sheets*.\n\n### **Embedded Charts**\n\nCharts that appear within a worksheet are known as embedded charts. A chart is really an object that sits on top of the worksheet – unlike numbers and letters, charts are not actually placed into worksheet cells.\n\n### **Chart Sheets**\n\nIf you want to keep your chart separate from the data you can move the chart to its own sheet. Chart sheets make it easier and more convenient to work with your chart because you'll see more of it on the screen – since the data is not there!", - "page_start": 43, - "page_end": 43, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **UNDERSTANDING WORKBOOKS**\n\nIn Microsoft Excel the data you enter, whether it consists of numbers, text, or formulas, is stored in a file known as a *workbook*. Workbooks are just like huge electronic books with pages (or\n\n*sheets*) that have been ruled into columns and rows. Before using Excel it is helpful to know what the various parts and elements that make up a workbook are.\n\nA workbook (as you would expect) is made up of pages known as *worksheets*. You can have as many sheets in a workbook as your computer resources can accommodate. As a default, a new blank workbook normally has 3 worksheets labelled *Sheet1*, *Sheet2*, and *Sheet3*. Of course these labels are pretty boring and meaningless and can be changed to something more relevant\n\nThe *Insert Worksheet* button here will insert another worksheet into the current workbook should you need it", - "page_start": 4, - "page_end": 4, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **TYPING TEXT OR NUMBERS INTO A WORKSHEET**\n\nGenerally when you start a new spreadsheet project, the first task is to enter some headings into rows and columns. 
To type anything into a worksheet you need to make the cell into which you wish to enter the data active. This can be done in a number of ways but the most common is to click in it first before typing.\n\n#### **For Your Reference… For Your Reference…**\n\nTo *enter text*: To *save a new document*:\n\n- 1. Click the cell pointer on the desired cell and 1. Click on the *File Tab* and select **Save As**\n- type the required information 2. Press , an arrow key or to 2. Locate the storage folder in the *Navigation pane*\n- confirm the data entry and to move the cell pointer to another cell 3. Type a *File name* and click on **[Save]**\n\n#### **Handy to Know… Handy to Know…**\n\n- You don't have to use or to make adjacent cells active. You can simply use the mouse and click in the cells if you want or even press the arrow keys to move up, down, left, or right. In the exercise above we have named the workbook *Garden Department Sales* and filed it in *C:\\Course Files for Excel 2010*. Each time you start Excel it will most likely assume you want to file your workbooks in a folder called *Documents* which is associated with the user name you use on the computer.", - "page_start": 6, - "page_end": 6, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **RENAMING A WORKSHEET**\n\nBy default, Excel names worksheets as *Sheet1*, *Sheet2*, *Sheet3*, etc. These names are fine if you are not planning to share the workbook, but changing these to something more relevant\n\nmakes it much easier to understand the purpose of a worksheet. You can also adjust the horizontal scroll bar to make room for longer, more meaningful worksheet names.\n\n### **For Your Reference…**\n\n#### To *rename* a *worksheet*:\n\n- 1. Double click on the current name on the worksheet tab\n- 2. 
Type the new name and press\n\n#### **Handy to Know…**\n\n- You can rename a worksheet by right-clicking on the worksheet tab to display the shortcut menu and clicking on *Rename*.\n- A worksheet tab name can contain up to 31 characters including spaces, but it is better to keep it short and succinct.", - "page_start": 11, - "page_end": 11, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## Make your meaning more visual by formatting text\n\nTo format text, select it, and then select a button in the Font or Paragraph area on the Home tab.\n\nTry it: Select text in the lines below and choose formatting options so that the text is an example of the formatting it's describing:\n\n| Bold (keyboard shortcut: Ctrl+B) |\n| --- |\n| Italic (keyboard shortcut: Ctrl+I) |\n| Highlight |\n| Font color |\n| Bullets |\n| Numbering |\n\nPro tip: If you selected whole words for this exercise, did you notice that Word popped up a little toolbar, with the font formatting options?\n\n| Segoe UI - 11 | - A A | Aa - | Po |\n| --- | --- | --- | --- |\n| B I U v abe X2 X2 | | A - all - A - | |\n\nBetween that and keyboard shortcuts like Ctrl+B and Ctrl+I, you save time by not having to go up to the Home tab all the time.", - "page_start": 3, - "page_end": 3, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "form of the imaginary part.\n\nFIG. 17: Conductivities and ∆W for a fixed λωsf . Top – ωsf = 26 meV ,λ = 1,ωo = 40 meV ,Zo = 0.77 Bottom – ωsf = 2.6 meV ,λ = 10,ωo = 13.5 meV ,Zo = 1.22. The zero crossing for ∆W is not affected by a change in λ because it is determined only by λωsf . We set ∆ = 30 meV .\n\nFIG. 18: The behavior of Kubo sums in the CB model. Note that the spectral weight in the NS is always larger than in the SCS. We set ωsf = 26 meV ,λ = 1, and ∆ = 30 meV .\n\nWe performed the same calculations of conductivities and optical integrals as in the previous three cases. The results are summarized in Figs. 17 - 22. 
Fig 17 shows conductivities in the NS and the SCS for two couplings λ = 1 and λ = 10 (keeping λωsf constant). Other parameters Zo and ωo are calculated according to the discussion after Eq 21. for ωsf = 26 meV , λ = 1, we find ωo = 40 meV , Zo = 0.77. And for ωsf = 2.6 meV , λ = 10, we find ωo = 13.5 meV , Zo = 1.22. Note that the conductivity in the SCS starts at 2∆ + ωo (i.e. the resonance energy\n\nFIG. 19: The evolution of the optical integrals in the NS and the SCS in the CB model. Note that about ∼ 75% of the spectral weight is recovered up to 1 eV . We set ωsf = 26 meV ,λ = 1, and ∆ = 30 meV .\n\nFIG. 20: ∆W (in meV) for λ = 1(top) and λ = 10(bottom). We used ωsf = 26 meV /λ and ∆ = 30meV . The zero crossing is not affected because we keep λωsf constant. The notable difference is the widening of the dip at a larger λ.", - "page_start": 11, - "page_end": 11, - "source_file": "1001.0764.pdf" - }, - { - "text": "### **SELECTING ROWS**\n\nIf you want to make changes to an *entire row*, such as bolding all of the headings in a row or changing the font of all the cell entries, you must first select the row. This is done by clicking on the row header to the left of the row. Remember that any changes you make will apply to every cell in the row all the way across to column XFD, so be careful!\n\n### **For Your Reference…**\n\nTo *select* an entire *row*:\n\n- 1. Click on the row header of the row that you want to select\nOR\n\n- 1. Click in any cell in the row and press +\n### **Handy to Know…**\n\n- When *every cell* in a row or column is selected, the corresponding row or column header is filled in dark blue. When only *some* of the cells are selected, the row or column header is filled in orange. 
These indicators help you locate the active cell(s) on the worksheet.", - "page_start": 17, - "page_end": 17, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **CHANGING WORKSHEET TAB COLOURS**\n\nTo make it easier for you to distinguish between worksheets, Excel enables you to change the colours of worksheet tabs. This allows you, for example, to quickly distinguish between different financial years, departments or months. The *active sheet* appears as underlined in a gradient version of the selected colour, while inactive tabs will display a solid colour background.\n\n### **For Your Reference…**\n\n#### To *change the colour* of a *worksheet tab*:\n\n- 1. Right-click on the worksheet tab to display the shortcut menu\n- 2. Point to *Tab colour* to display a palette of colour options\n- 3. Click on the desired colour\n\n### **Handy to Know…**\n\n- To apply the same colour to two or more sheets at once, select them first. Hold down to select consecutive worksheets or hold down to select non-consecutive worksheets.", - "page_start": 13, - "page_end": 13, - "source_file": "Excel Training Manual 1.pdf" - } - ] - }, - { - "references": { - "source_file": "Excel Training Manual 1.pdf", - "query": "How to rename a worksheet in Excel ?", - "target_page": 12, - "target_passage": "To rename a worksheet: 1. Double click on the current name on the worksheet tab 2. Type the new name and press ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "### **RENAMING A WORKSHEET**\n\nBy default, Excel names worksheets as *Sheet1*, *Sheet2*, *Sheet3*, etc. These names are fine if you are not planning to share the workbook, but changing these to something more relevant\n\nmakes it much easier to understand the purpose of a worksheet. You can also adjust the horizontal scroll bar to make room for longer, more meaningful worksheet names.\n\n### **For Your Reference…**\n\n#### To *rename* a *worksheet*:\n\n- 1. 
Double click on the current name on the worksheet tab\n- 2. Type the new name and press\n\n#### **Handy to Know…**\n\n- You can rename a worksheet by right-clicking on the worksheet tab to display the shortcut menu and clicking on *Rename*.\n- A worksheet tab name can contain up to 31 characters including spaces, but it is better to keep it short and succinct.", - "page_start": 11, - "page_end": 11, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **GROUPING WORKSHEETS**\n\nWorksheet *grouping* enables you to make the same change at once to all selected worksheets. This feature is useful in situations where your worksheets have identical layouts or text. For\n\nexample, if you want to format the heading for multiple worksheets, you simply *group* the worksheets, make a change to one worksheet and the other worksheets will reflect the change also.\n\n### **For Your Reference…**\n\nTo *group worksheet tabs*:\n\n- 1. Click on the first worksheet tab\n- 2. Hold down , then click on the last worksheet tab\n\n### **Handy to Know…**\n\n- To deselect a group, either click on the tab of a worksheet that is not in the group, or rightclick on a tab and select **Ungroup Sheets**.\n- Most formatting and text changes done on a worksheet in a group will be applied to other sheets in that grouping.", - "page_start": 14, - "page_end": 14, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **INSERTING AND DELETING WORKSHEETS**\n\nOnce you've decided on a structure for your workbook, you may find that there are some worksheets that can be *deleted*. Alternatively, you may find that you need additional blank\n\nworksheets *inserted*. 
However, remember that deletion of worksheets is permanent and can't be undone using *Undo*, so always save your workbook before making these changes.\n\n### **For Your Reference…**\n\nTo *insert* a *new worksheet* into a *workbook*:\n\n- Click on the *New Sheet* icon to the right of the worksheet tabs\nTo *delete* a *worksheet* from a *workbook*:\n\n- Right click on the worksheet tab, then select **Delete**\n### **Handy to Know…**\n\n- To insert a worksheet between existing worksheets, right-click on the worksheet tab before which you want to insert a new sheet, then click on *Insert* to display the *Insert* dialog box. Select *Worksheet* and click on **[OK]**.", - "page_start": 9, - "page_end": 9, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **CHANGING WORKSHEET TAB COLOURS**\n\nTo make it easier for you to distinguish between worksheets, Excel enables you to change the colours of worksheet tabs. This allows you, for example, to quickly distinguish between different financial years, departments or months. The *active sheet* appears as underlined in a gradient version of the selected colour, while inactive tabs will display a solid colour background.\n\n### **For Your Reference…**\n\n#### To *change the colour* of a *worksheet tab*:\n\n- 1. Right-click on the worksheet tab to display the shortcut menu\n- 2. Point to *Tab colour* to display a palette of colour options\n- 3. Click on the desired colour\n\n### **Handy to Know…**\n\n- To apply the same colour to two or more sheets at once, select them first. Hold down to select consecutive worksheets or hold down to select non-consecutive worksheets.", - "page_start": 13, - "page_end": 13, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **COPYING A WORKSHEET**\n\nJust as you can copy the contents of cells and ranges within a worksheet, you can *duplicate* worksheets within a workbook. This technique is ideal for replicating layouts. 
For example, if you\n\nhave a budget workbook that contains data for several departments, you can create a worksheet for the first department and then copy it to create identical worksheets for other departments.\n\n### **For Your Reference…**\n\n### To *copy* a *worksheet*:\n\n- 1. Right-click on the worksheet to copy, then select *Move or Copy*\n- 2. Click on *Create a copy* so it appears ticked\n- 3. Click on **[OK]**\n\n### **Handy to Know…**\n\n- You can copy the current worksheet using the *HOME* tab by clicking on *Format* in the *Cells* group, then clicking on *Move or Copy Sheet*.\n- The *Before sheet* options in the *Move or Copy* dialog box allow you to position the copied worksheet where you want.", - "page_start": 10, - "page_end": 10, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **UNDERSTANDING WORKBOOKS**\n\nIn Microsoft Excel the data you enter, whether it consists of numbers, text, or formulas, is stored in a file known as a *workbook*. Workbooks are just like huge electronic books with pages (or\n\n*sheets*) that have been ruled into columns and rows. Before using Excel it is helpful to know what the various parts and elements that make up a workbook are.\n\nA workbook (as you would expect) is made up of pages known as *worksheets*. You can have as many sheets in a workbook as your computer resources can accommodate. As a default, a new blank workbook normally has 3 worksheets labelled *Sheet1*, *Sheet2*, and *Sheet3*. Of course these labels are pretty boring and meaningless and can be changed to something more relevant\n\nThe *Insert Worksheet* button here will insert another worksheet into the current workbook should you need it", - "page_start": 4, - "page_end": 4, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **MOVING OR COPYING A SHEET TO ANOTHER WORKBOOK**\n\nYou can *copy* worksheets to other workbooks as required. 
For example, you might need to keep records for six different divisions – rather than send each division the entire set of records, you\n\ncan copy their worksheet to another workbook and send them their data only. If worksheets exist in the other workbook, you will need to determine the order in which to place the copied worksheet.\n\n### **For Your Reference…**\n\n#### To *copy* a *sheet* to *another workbook*:\n\n- 1. Right click on the worksheet tab, then click on *Move or Copy*\n- 2. Select either *(new book)* or the name of another workbook in *To book*\n- 3. Tick *Create a copy*, then click on **[OK]**\n\n#### **Handy to Know…**\n\n- To copy a worksheet into an existing workbook, make sure that you open the destination workbook first to ensure that it is listed in *To book* in the *Move or Copy* dialog box.", - "page_start": 12, - "page_end": 12, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **TYPING TEXT OR NUMBERS INTO A WORKSHEET**\n\nGenerally when you start a new spreadsheet project, the first task is to enter some headings into rows and columns. To type anything into a worksheet you need to make the cell into which you wish to enter the data active. This can be done in a number of ways but the most common is to click in it first before typing.\n\n#### **For Your Reference… For Your Reference…**\n\nTo *enter text*: To *save a new document*:\n\n- 1. Click the cell pointer on the desired cell and 1. Click on the *File Tab* and select **Save As**\n- type the required information 2. Press , an arrow key or to 2. Locate the storage folder in the *Navigation pane*\n- confirm the data entry and to move the cell pointer to another cell 3. Type a *File name* and click on **[Save]**\n\n#### **Handy to Know… Handy to Know…**\n\n- You don't have to use or to make adjacent cells active. You can simply use the mouse and click in the cells if you want or even press the arrow keys to move up, down, left, or right. 
In the exercise above we have named the workbook *Garden Department Sales* and filed it in *C:\\Course Files for Excel 2010*. Each time you start Excel it will most likely assume you want to file your workbooks in a folder called *Documents* which is associated with the user name you use on the computer.", - "page_start": 6, - "page_end": 6, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "| Understanding Workbooks 1 |\n| --- |\n| Navigating in a File 2 |\n| Typing Text or Numbers Into A Worksheet 3 |\n| Typing Simple Formulas In A Worksheet 4 |\n| Filling A Series 5 |\n| Inserting And Deleting Worksheets 6 |\n| Copying A Worksheet 7 |\n| Renaming A Worksheet 8 |\n| Moving or Copying A Sheet To Another Workbook 9 |\n| Changing Worksheet Tab Colours 10 |\n| Grouping Worksheets 11 |\n| Freezing Rows And Columns 12 |\n| Selecting Ranges 13 |\n| Selecting Rows 14 |\n| Selecting Columns 15 |\n| Understanding Formatting 16 |\n| Applying General Formatting 17 |\n| Changing Fonts 18 |\n| Changing Font Size 19 |\n| Understanding Borders 20 |\n| Applying A Border To A Range 21 |\n| Wrapping And Merging Text 22 |\n| PRACTICE EXERCISE 23 |\n| PRACTICE EXERCISE 24 |\n| PRACTICE EXERCISE 25 |\n| Understanding Functions 26 |\n| Using The SUM Function To Add 27 |\n| Calculating An Average 28 |\n| Finding A Minimum Value 29 |\n| Common Error Messages 30 |\n| PRACTICE EXERCISE 31 |\n| Understanding Quick Analysis 32 |\n| Quick Formatting 33 |\n| Quick Charting 34 |\n| Quick Totals 35 |\n| Quick Sparklines 36 |\n| Quick Tables 37 |\n| Practice Exercise 38 |\n| Printing A Worksheet 39 |", - "page_start": 1, - "page_end": 1, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "# **THE CHARTING PROCESS**\n\n*Charts* provide a way of seeing trends in the data in your worksheet. 
The charting feature in Excel is extremely flexible and powerful and allows you to create a wide range of charts from\n\nany of the *Insert* commands in the *Charts* group on the\n\n### **Inserting Charts**\n\nThe first step when creating a chart is to select the data from the worksheet that you want to chart. It is important to remember that the selected range (which can be either contiguous or non-contiguous), should include *headings* (e.g. names of months, countries, departments, etc). These become *labels* on the chart. Secondly, the selected range should not (normally) include totals as these are inserted automatically when a chart is created.\n\nThe second step is to create a chart using the *INSERT* tab on the ribbon. You can choose a *Recommended Chart* where Excel analyses the selected data and suggests several possible chart layouts.\n\nAlternatively you can create the chart yourself from scratch by choosing one of the *Insert* commands in the *Charts* group. Charts that you create in Excel can be either *embedded* into a worksheet, or they can exist on their own sheets, known as *chart sheets*.\n\n### **Embedded Charts**\n\nCharts that appear within a worksheet are known as embedded charts. A chart is really an object that sits on top of the worksheet – unlike numbers and letters, charts are not actually placed into worksheet cells.\n\n### **Chart Sheets**\n\nIf you want to keep your chart separate from the data you can move the chart to its own sheet. Chart sheets make it easier and more convenient to work with your chart because you'll see more of it on the screen – since the data is not there!", - "page_start": 43, - "page_end": 43, - "source_file": "Excel Training Manual 1.pdf" - } - ] - }, - { - "references": { - "source_file": "Excel Training Manual 1.pdf", - "query": "I want to freeze a pane in my Excel worksheet ", - "target_page": 16, - "target_passage": "To freeze panes in a worksheet: 1. 
Click in the cell below and to the right of the area you want to freeze/unfreeze 2. Click on the VIEW tab 3. Click on Freeze Panes in the Window group, then select Freeze Panes ", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "### **INSERTING AND DELETING WORKSHEETS**\n\nOnce you've decided on a structure for your workbook, you may find that there are some worksheets that can be *deleted*. Alternatively, you may find that you need additional blank\n\nworksheets *inserted*. However, remember that deletion of worksheets is permanent and can't be undone using *Undo*, so always save your workbook before making these changes.\n\n### **For Your Reference…**\n\nTo *insert* a *new worksheet* into a *workbook*:\n\n- Click on the *New Sheet* icon to the right of the worksheet tabs\nTo *delete* a *worksheet* from a *workbook*:\n\n- Right click on the worksheet tab, then select **Delete**\n### **Handy to Know…**\n\n- To insert a worksheet between existing worksheets, right-click on the worksheet tab before which you want to insert a new sheet, then click on *Insert* to display the *Insert* dialog box. Select *Worksheet* and click on **[OK]**.", - "page_start": 9, - "page_end": 9, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **FREEZING ROWS AND COLUMNS**\n\nWhen you lay out your data in rows and columns, it is most likely that your headings end up at the top or to the left of your data. If you have a large amount of data, you may find that when you\n\nscroll across or down to particular cells, the headings scroll out of view. This problem can be resolved by *freezing* the rows and/or columns that hold the headings.\n\n### **For Your Reference…**\n\n#### To *freeze panes* in a *worksheet*:\n\n- 1. Click in the cell below and to the right of the area you want to freeze/unfreeze\n- 2. Click on the *VIEW* tab\n- 3. 
Click on *Freeze Panes* in the *Window* group, then select **Freeze Panes**\n\n### **Handy to Know…**\n\n- If you want to freeze only the rows above the selected cell (leaving all columns unfrozen), select the cell in column *A* of that row – e.g. to freeze rows *1* to *6*, click in cell *A7*. The same applies to freezing only columns and leaving the rows unfrozen: select the cell in row *1*.", - "page_start": 15, - "page_end": 15, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **GROUPING WORKSHEETS**\n\nWorksheet *grouping* enables you to make the same change at once to all selected worksheets. This feature is useful in situations where your worksheets have identical layouts or text. For\n\nexample, if you want to format the heading for multiple worksheets, you simply *group* the worksheets, make a change to one worksheet and the other worksheets will reflect the change also.\n\n### **For Your Reference…**\n\nTo *group worksheet tabs*:\n\n- 1. Click on the first worksheet tab\n- 2. Hold down , then click on the last worksheet tab\n\n### **Handy to Know…**\n\n- To deselect a group, either click on the tab of a worksheet that is not in the group, or rightclick on a tab and select **Ungroup Sheets**.\n- Most formatting and text changes done on a worksheet in a group will be applied to other sheets in that grouping.", - "page_start": 14, - "page_end": 14, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **UNDERSTANDING WORKBOOKS**\n\nIn Microsoft Excel the data you enter, whether it consists of numbers, text, or formulas, is stored in a file known as a *workbook*. Workbooks are just like huge electronic books with pages (or\n\n*sheets*) that have been ruled into columns and rows. Before using Excel it is helpful to know what the various parts and elements that make up a workbook are.\n\nA workbook (as you would expect) is made up of pages known as *worksheets*. 
You can have as many sheets in a workbook as your computer resources can accommodate. As a default, a new blank workbook normally has 3 worksheets labelled *Sheet1*, *Sheet2*, and *Sheet3*. Of course these labels are pretty boring and meaningless and can be changed to something more relevant\n\nThe *Insert Worksheet* button here will insert another worksheet into the current workbook should you need it", - "page_start": 4, - "page_end": 4, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **RENAMING A WORKSHEET**\n\nBy default, Excel names worksheets as *Sheet1*, *Sheet2*, *Sheet3*, etc. These names are fine if you are not planning to share the workbook, but changing these to something more relevant\n\nmakes it much easier to understand the purpose of a worksheet. You can also adjust the horizontal scroll bar to make room for longer, more meaningful worksheet names.\n\n### **For Your Reference…**\n\n#### To *rename* a *worksheet*:\n\n- 1. Double click on the current name on the worksheet tab\n- 2. Type the new name and press\n\n#### **Handy to Know…**\n\n- You can rename a worksheet by right-clicking on the worksheet tab to display the shortcut menu and clicking on *Rename*.\n- A worksheet tab name can contain up to 31 characters including spaces, but it is better to keep it short and succinct.", - "page_start": 11, - "page_end": 11, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **COPYING A WORKSHEET**\n\nJust as you can copy the contents of cells and ranges within a worksheet, you can *duplicate* worksheets within a workbook. This technique is ideal for replicating layouts. For example, if you\n\nhave a budget workbook that contains data for several departments, you can create a worksheet for the first department and then copy it to create identical worksheets for other departments.\n\n### **For Your Reference…**\n\n### To *copy* a *worksheet*:\n\n- 1. 
Right-click on the worksheet to copy, then select *Move or Copy*\n- 2. Click on *Create a copy* so it appears ticked\n- 3. Click on **[OK]**\n\n### **Handy to Know…**\n\n- You can copy the current worksheet using the *HOME* tab by clicking on *Format* in the *Cells* group, then clicking on *Move or Copy Sheet*.\n- The *Before sheet* options in the *Move or Copy* dialog box allow you to position the copied worksheet where you want.", - "page_start": 10, - "page_end": 10, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **MOVING OR COPYING A SHEET TO ANOTHER WORKBOOK**\n\nYou can *copy* worksheets to other workbooks as required. For example, you might need to keep records for six different divisions – rather than send each division the entire set of records, you\n\ncan copy their worksheet to another workbook and send them their data only. If worksheets exist in the other workbook, you will need to determine the order in which to place the copied worksheet.\n\n### **For Your Reference…**\n\n#### To *copy* a *sheet* to *another workbook*:\n\n- 1. Right click on the worksheet tab, then click on *Move or Copy*\n- 2. Select either *(new book)* or the name of another workbook in *To book*\n- 3. Tick *Create a copy*, then click on **[OK]**\n\n#### **Handy to Know…**\n\n- To copy a worksheet into an existing workbook, make sure that you open the destination workbook first to ensure that it is listed in *To book* in the *Move or Copy* dialog box.", - "page_start": 12, - "page_end": 12, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "# **THE CHARTING PROCESS**\n\n*Charts* provide a way of seeing trends in the data in your worksheet. 
The charting feature in Excel is extremely flexible and powerful and allows you to create a wide range of charts from\n\nany of the *Insert* commands in the *Charts* group on the\n\n### **Inserting Charts**\n\nThe first step when creating a chart is to select the data from the worksheet that you want to chart. It is important to remember that the selected range (which can be either contiguous or non-contiguous), should include *headings* (e.g. names of months, countries, departments, etc). These become *labels* on the chart. Secondly, the selected range should not (normally) include totals as these are inserted automatically when a chart is created.\n\nThe second step is to create a chart using the *INSERT* tab on the ribbon. You can choose a *Recommended Chart* where Excel analyses the selected data and suggests several possible chart layouts.\n\nAlternatively you can create the chart yourself from scratch by choosing one of the *Insert* commands in the *Charts* group. Charts that you create in Excel can be either *embedded* into a worksheet, or they can exist on their own sheets, known as *chart sheets*.\n\n### **Embedded Charts**\n\nCharts that appear within a worksheet are known as embedded charts. A chart is really an object that sits on top of the worksheet – unlike numbers and letters, charts are not actually placed into worksheet cells.\n\n### **Chart Sheets**\n\nIf you want to keep your chart separate from the data you can move the chart to its own sheet. Chart sheets make it easier and more convenient to work with your chart because you'll see more of it on the screen – since the data is not there!", - "page_start": 43, - "page_end": 43, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **EMBEDDING A CHART INTO A WORKSHEET**\n\nCharts can either be presented in their own sheets or they can be embedded into a worksheet that contains data. 
In fact, you can move a chart back and forth between its own\n\nsheet and a worksheet as often as you wish without impacting at all on the chart. Sometimes it is easier to work with a chart in its own sheet, but it may be necessary to print the chart with its data.\n\n### **For Your Reference…**\n\nTo *embed* a *chart* in a *worksheet*:\n\n- 1. Click on the *CHART TOOLS: DESIGN* tab, then click on *Move Chart* in the *Location* group\n- 2. Click on the drop arrow, select the sheet to embed it into, then click on **[OK]**\n\n### **Handy to Know…**\n\n- Embedding is normally only done when it is necessary to print the worksheet and the data together.", - "page_start": 56, - "page_end": 56, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **CHANGING WORKSHEET TAB COLOURS**\n\nTo make it easier for you to distinguish between worksheets, Excel enables you to change the colours of worksheet tabs. This allows you, for example, to quickly distinguish between different financial years, departments or months. The *active sheet* appears as underlined in a gradient version of the selected colour, while inactive tabs will display a solid colour background.\n\n### **For Your Reference…**\n\n#### To *change the colour* of a *worksheet tab*:\n\n- 1. Right-click on the worksheet tab to display the shortcut menu\n- 2. Point to *Tab colour* to display a palette of colour options\n- 3. Click on the desired colour\n\n### **Handy to Know…**\n\n- To apply the same colour to two or more sheets at once, select them first. 
Hold down to select consecutive worksheets or hold down to select non-consecutive worksheets.", - "page_start": 13, - "page_end": 13, - "source_file": "Excel Training Manual 1.pdf" - } - ] - }, - { - "references": { - "source_file": "office-pdf.pdf", - "query": "What is the msodocexStructTypeArticle type value ?", - "target_page": 21, - "target_passage": "A group of nodes forming a single flow of text that should be read or searched as a contiguous block of content. Some documents have a single article and others have multiple articles.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "| Type Value | Description |\n| --- | --- |\n| msodocexStructTypeTOC | A table of contents. |\n| msodocexStructTypeTOCI | An item in a table of contents. |\n| msodocexStructTypeExtLink | A link to an external resource. |\n| msodocexStructTypeIntLink | A link to an internal resource. |\n| msodocexStructTypeFootnote | A footnote. |\n| msodocexStructTypeEndnote | An endnote. |\n| msodocexStructTypeTextbox | A text box. |\n| msodocexStructTypeHeader | A block of text forming a header. |\n| msodocexStructTypeFooter | A footer. |\n| msodocexStructInlineShape | An inline shape. |\n| msodocexStructAnnotation | An annotation. |\n| msodocexStructTypeSpanBlock | A block of text. |\n| msodocexStructTypeWorkbook | A workbook. |\n| msodocexStructTypeWorksheet | A worksheet. |\n| msodocexStructTypeMacrosheet | A macrosheet. |\n| msodocexStructTypeChartsheet | A chartsheet. |\n| msodocexStructTypeDialogsheet | A dialogsheet. |\n| msodocexStructTypeSlide | A slide. |\n| msodocexStructTypeChart | A chart. |\n| msodocexStructTypeDiagram | A SmartArt diagram. |\n| msodocexStructTypeBulletText | Buller text. |\n| msodocexStructTypeTextLine | A line of text. |\n| msodocexStructTypeDropCap | A drop cap. |\n| msodocexStructTypeSection | A section. |\n| msodocexStructTypeAnnotationBegin | The beginning of an annotation. 
|\n| msodocexStructTypeAnnotationEnd | The end of an annotation. |", - "page_start": 21, - "page_end": 21, - "source_file": "office-pdf.pdf" - }, - { - "text": "- **shapeProperty** is for a msodocexStructTypeFigure where the content is a shape, text box, or table cell and contains bit fields from the MSODOCEXSHAPEPROPERTY enumeration.\n- **tableAttr** is the table cell attributes for a msodocexStructTypeTH or msodocexStructTypeTD.\n- **idTableHeader** is the unique id for an msodocexStructTypeTH or msodocexStructTypeTD.\n- **iTargetParentId** is the id of the node to reparent an msodocexStructTypeDiagram to.\n\nTable 3. Enumerated values of MSODOCEXLINEBREAKTYPE\n\nノ **Expand table**\n\n| Value | Description |\n| --- | --- |\n| msodocexLineBreakTypeNormal | Normal line break. |\n| msodocexLineBreakTypeManual | Manual line break. |\n| msodocexLineBreakTypeEOP | End of paragraph. |\n\n#### Table 4. Enumerated values of MSODOCEXLISTTYPE\n\nノ **Expand table**\n\n| Value | Description |\n| --- | --- |\n| msodocexListTypeNone | No bullets or numbering. |\n| msodocexListTypeBulletDisc | Disc-shaped bullets. |\n| msodocexListTypeBulletCircle | Circle-shaped bullets. |\n| msodocexListTypeBulletSquare | Square-shaped bullets. |\n| msodocexListTypeBulletDecimal | Decimal numbering. |\n| msodocexListTypeUpperRoman | Uppercase Roman numeral numbering. |\n| msodocexListTypeLowerRoman | Lowercase Roman numberal numbering. |\n| msodocexListTypeUpperAlpha | Uppercase alphabetic numbering. |\n| msodocexListTypeLowerAlpha | Lowercase alphabetic numbering. |\n\nTable 5. Enumerated values of MSODOCEXSHAPEPROPERTY bit fields", - "page_start": 9, - "page_end": 9, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Type Value | Description |\n| --- | --- |\n| msodocexStructTypeParaRTLAttr | A block of text within an article with right-to-left |\n| | layout. |\n| msodocexStructTypeTableRTLAttr | A block of text forming a table with right-to-left |\n| | layout. 
|\n| msodocexStructTypeHeadingRTLAttr | A heading in the text with right-to-left layout. |\n| msodocexStructTypeListItemRTLAttr | A block of text forming a list item with right-to-left |\n| | layout. |\n| msodocexStructTypeParaUnannotatableAttr | A block of text within an article that is not |\n| | annotatable. |\n| msodocexStructTypeTHead | The header row area in a table. |\n| msodocexStructTypeTBody | The body area in a table, i.e. the portion between |\n| | the THead and TFoot. |\n| msodocexStructTypeLabel | A label. |\n| msodocexStructTypeEquation | An equation. |\n| msodocexStructTypeIntLinkNoteRef | A footnote or endnote reference mark link. |\n| msodocexStructTypeTFoot | The footer row area in a table. |\n\n**fContentNode** Specifies whether a **DocExComment_EndStructNode** structure marks the end of this structure node. If **fContentNode** is **true**, a\n\n**DocExComment_EndStructNode** structure closes off the content bounded by the node. If this **fContentNode** has a **false** value, then the node does not bound any content.\n\nThe **fContentNode** member affects the interpretation of the parent ID value of subsequent nodes. If **fContentNode**is **true**, nodes that are inserted between this **DocExComment_BeginStructNode** and a subsequent **DocExComment_EndStructNode**, and that have a parent ID of **-1**, are children of this node. However, if **fContentNode** is **true**, nodes inserted after this **DocExComment_BeginStructNode**, and that have a parent ID of **-1**, are not children of this node. They are children of the next-most-recently specified node that has **fContentNode** equal to **false**.\n\nYou can nest document structure nodes to arbitrary depth.\n\n**cwchAltText** Specifies the number of Unicode characters in the block of alternate text that follows the structure. 
This Unicode string specifies alternate text for the node (for example, alternate text for an image).",
                    "page_start": 22,
                    "page_end": 22,
                    "source_file": "office-pdf.pdf"
                },
                {
                    "text": "The **idNode** member specifies the ID of the node. This member may not have a value of **0**. A value of **-1** indicates that child nodes do not use the **idNodeParent** member to specify this node as their parent. Instead, this node can be a parent only by enclosing child nodes in the EMF. Multiple nodes can have an ID of **-1**. If the ID is not **-1**, the value is unique across the document.\n\nThe **nodetype** specifies the type of structure node. This member is equal to one of the values from the **MSODOCEXSTRUCTTYPE** enumeration type. The following table lists examples of document structure node types.\n\nTable 7. Document structure node types\n\n#### ノ **Expand table**\n\n| Type Value | Description |\n| --- | --- |\n| msodocexStructTypePara | A block of text within an article. Its parent node |\n| | must be an article. |\n| msodocexStructTypeFigure | A graphical element (for example, an image or |\n| | collection of shapes) that has a textual |\n| | representation. The textual representation is the |\n| | alternate text used for reading or searching the |\n| | document. |\n| msodocexStructTypeArticle | A group of nodes forming a single flow of text that |\n| | should be read or searched as a contiguous block |\n| | of content. Some documents have a single article |\n| | and others have multiple articles. |\n| msodocexStructTypeHeading | A heading in the text. |\n| msodocexStructTypeTable | A block of text forming a table. |\n| msodocexStructTypeTR | A block of text forming a single row of a table. |\n| msodocexStructTypeTD | A block of text forming a single cell in a table row. |\n| msodocexStructTypeTH | A block of text forming a single header cell in a |\n| | table row. |\n| msodocexStructTypeList | A block of text forming a list. 
|\n| msodocexStructTypeListItem | A block of text forming a list item. |\n| msodocexStructTypeListBody | A block of text forming the body of a list item. |\n| msodocexStructTypeDocument | A document. |\n| msodocexStructTypePage | A page in the document. |", - "page_start": 20, - "page_end": 20, - "source_file": "office-pdf.pdf" - }, - { - "text": "```\ntypedef struct _MsoDocexStructNode\n{\n int idNode;\n MSODOCEXSTRUCTTYPE nodetype;\n WCHAR* pwchAltText;\n union\n {\n int iHeadingLevel;\n ULONG idPara;\n ULONG idDropCap;\n int iPage;\n WCHAR* pwchActualText;\n MSODOCEXLINEBREAKTYPE bt;\n int iListLevel;\n MSODOCEXLISTTYPE listType;\n ULONG idAtn;\n long cpLim;\n int shapeProperty;\n MsoDocexTableAttr tableAttr;\n WCHAR* idTableHeader;\n int iTargetParentId;\n };\n} MSODOCEXSTRUCTNODE;\n```\nThe **idNode** member specifies the ID of the node being passed in the call to **HrBeginStructNode**. This member may not have a value of **0**. A value of **-1** indicates that child nodes do not use the *idNodeParent* parameter to specify this node as their parent. Instead, this node can be a parent only by enclosing child nodes in the EMF. Multiple nodes can have an ID of **-1**. 
If the ID is not **-1**, the value is unique across the document.\n\nThe embedded union at the end of the MSODOCEXSTRUCTNODE is interpreted differently depending on the type of node:\n\n- **iHeadingLevel** is the heading level for an msodocexStructTypeHeading.\n- **idPara** is the paragraph id for a P, TOCI, or ListBody.\n- **idDropCap** is the id of an msodocexStructTypeDropCap.\n- **iPage** is the page number for an msodocexStructTypePage.\n- **bt** is the line break type for an msodocexStructTypeTextLine.\n- **iListLevel** is the list level for an msodocexStructTypeList or msodocexStructTypeListItem.\n- **listType** is the list type for an msodocexStructTypeListItem.\n- **idAtn** is the id of an msodocexStructTypeAnnotationBegin or msodocexStructTypeAnnotationEnd.\n- **cpLim** is used to determine the nesting order of tables within tables for an msodocexStructTypeTable, msodocexStructTypeTOC, or msodocexStructTypeListBody.", - "page_start": 8, - "page_end": 8, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Value | Numeric Value | Description |\n| --- | --- | --- |\n| msodocexShape | 0x00000001 | The object is a shape or text box. |\n| msodocexShapeText | 0x00000002 | The object has non-whitespace text. |\n| msodocexShapePath | 0x00000004 | The object has a fill and/or outline. |\n| msodocexShapeAltText | 0x00000008 | The object has Alt Text. |\n| msodocexShapeEquation | 0x00000010 | The object has text that contains an equation. |\n| msodocexShapeTabelCell | 0x00000020 | The object is a cell in a table. 
|\n\n#### **MsoDocexTableAttr**\n\nThe **MsoDocexTableAttr** structure fits in 32 bits and includes the row and column span and header scope information for a table cell.\n\n```\nC++\nstruct MsoDocexTableAttr\n{\n static constexpr unsigned int MaxSpanBits = sizeof(unsigned int) * 8 / 2 - 1;\n static constexpr unsigned int MaxSpanValue = (1u << MaxSpanBits) - 1;\n unsigned int rowSpan : MaxSpanBits;\n unsigned int fRowScope : 1;\n unsigned int colSpan : MaxSpanBits;\n unsigned int fColScope : 1;\n};\n```\nThe members of the **MsoDocexTableAttr** structure are as follows:\n\n- **MaxSpanBits** Specifies the number of bits available for the rowSpan and colSpan values, which is 15.\n- **MaxSpanValue** Specifies the maximum value that can be specified for the rowSpan and colSpan.\n- **rowSpan** Specifies the number of rows that a table cell spans.\n- **fRowScope** Specifies whether the header is Row/Both or Column.\n- **colSpan** Specifies the number of columns that a table cell spans.",
                    "page_start": 10,
                    "page_end": 10,
                    "source_file": "office-pdf.pdf"
                },
                {
                    "text": "The *metadatatype* parameter specifies the type of metadata represented by the string. The *metadatatype* parameter must be one of the following values from the MSODOCEXMETADATA enumeration type.\n\nTable 8. Enumerated values of MSODOCEXMETADATA\n\nノ **Expand table**\n\n| Value | Description |\n| --- | --- |\n| msodocexMetadataTitle | The title of the document. |\n| msodocexMetadataAuthor | The author of the document. |\n| msodocexMetadataSubject | String that describes the subject matter of the document (for |\n| | example, business or science). |\n| msodocexMetadataKeywords | Keyword relevant to the document content. |\n| msodocexMetadataCreator | The creator of the document, possibly distinct from the author. |\n| msodocexMetadataProducer | The producer of the document, possibly distinct from the author |\n| | or creator. 
|\n| msodocexMetadataCategory | String that describes the type of document (for example, memo, |\n| | article, or book). |\n| msodocexMetadataStatus | Status of the document. This field can reflect where the |\n| | document is in the publication process (for example, draft or |\n| | final). |\n| msodocexMetadataComments | Miscellaneous comments relevant to the document. |\n\nFor a given document, each metadata type can have only one string associated with it. So, for example, if the document has multiple keywords, they are passed to the add-in as one concatenated string.\n\nThe *pwchValue* parameter specifies a Unicode string that contains the metadata itself.\n\nHow the add-in incorporates the text-string metadata into the exported document depends on the implementation details of the export code and the type of fixed-format used in the exported document.\n\n### **HrAddDocumentMetadataDate**\n\nPublisher calls the **HrAddDocumentMetadataDate** method to specify document metadata in the form of a FILETIME structure.", - "page_start": 34, - "page_end": 34, - "source_file": "office-pdf.pdf" - }, - { - "text": "Table 6. 
Semantic record types supported by fixed-format export\n\nノ **Expand table**\n\n| Comment Value | Structure Type |\n| --- | --- |\n| msodocexcommentExternalHyperlink | DocExComment_ExternalHyperlink |\n| msodocexcommentExternalHyperlinkRctfv | DocExComment_ExternalHyperlink |\n| msodocexcommentInternalHyperlink | DocExComment_InternalHyperlink |\n| msodocexcommentInternalHyperlinkRctfv | DocExComment_InternalHyperlink |\n| msodocexcommentColorInfo | DocExComment_ColorInfo |\n| msodocexcommentColorMapEnable | DocExComment_ColorEnable |\n| msodocexcommentBeginTextRun | DocExComment_BeginTextRun |\n| msodocexcommentBeginTextRunRTL | DocExComment_BeginTextRun |\n| msodocexcommentEndTextRun | DocExComment_EndTextRun |\n| msodocexcommentBeginStructNode | DocExComment_BeginStructNode |\n| msodocexcommentEndStructNode | DocExComment_EndStructNode |\n| msodocexcommentUnicodeForNextTextOut | DocExComment_UnicodeForNextTextOut |\n| msodocexcommentUnicodeForNextTextOutRTL | DocExComment_UnicodeForNextTextOut |\n| msodocexcommentEPSColor | DocExComment_EPSColor |\n| msodocexcommentEPSCMYKJPEG | DocExComment_EPSColorCMYKJPEG |\n| msodocexcommentEPSSpotImage | DocExComment_EPSColorSpotImage |\n| msodocexcommentEPSStart | DocExComment_EPSStart |\n| msodocexcommentPageName | DocExComment_PageName |\n| msodocexcommentTransparent | DocExComment_Transparent |\n\n#### **DocExComment_ExternalHyperlink(Rctfv)**\n\nThe **DocExComment_ExternalHyperlink(Rctfv)** structure describes a hyperlink that links to outside of the document, for example to a Web site on the Internet.", - "page_start": 14, - "page_end": 14, - "source_file": "office-pdf.pdf" - }, - { - "text": "```\nC++\nHRESULT HrAddDocumentMetadataDate(\n MSODOCEXMETADATA metadataType, \n const FILETIME* pftLocalTime\n);\n```\nThe *metadatatype* parameter specifies the type of metadata represented by the **FILETIME** structure. 
The *metadatatype* parameter must be one of the following values from the MSODOCEXMETADATA enumeration type.\n\nTable 9. Enumerated values of MSODOCEXMETADATA\n\n#### ノ **Expand table**\n\n| Value | Description |\n| --- | --- |\n| msodocexMetadataCreationDate | The creation date for the document. |\n| msodocexMetadataModDate | The last-modified date for the document. |\n\nThe *pftLocalTime* parameter specifies a pointer to a FILETIME structure that contains the date and time information for the metadata. The following code snippet demonstrates how to extract this information from the structure.\n\n```\nC++\nSYSTEMTIME st = { 0 };\nWCHAR s[100];\nFileTimeToSystemTime(pfiletime, &st);\nswprintf(s, 99, L\" %04d-%02d-%02dT%02d:%02d:%02dZ\", st.wYear % 10000, \n st.wMonth % 100, st.wDay % 100, st.wHour % 100, st.wMinute % 100, \n st.wSecond % 100);\n```\nHow the add-in incorporates the date and time metadata into the exported document depends on the implementation details of the export code and the type of fixed-format used in the exported document.\n\n### **HrFinalize**\n\nPublisher calls the **HrFinalize** method at the end of the document-export process.\n\nC++",
                    "page_start": 35,
                    "page_end": 35,
                    "source_file": "office-pdf.pdf"
                },
                {
                    "text": "The collection of structure nodes within the document forms a tree; each node has a parent node and may also have sibling nodes. The **idNodeParent** and **iSortOrder** members describe the structure of this tree. 
Note that a child node may or may not appear between the **DocExComment_BeginStructNode** and **DocExComment_EndStructNode** structures of the parent node in the EMF.\n\n```\nC++\nstruct DocExComment_BeginStructNode\n{\n DWORD ident {};\n DWORD iComment {};\n int idNodeParent {};\n int iSortOrder {};\n MSODOCEXSTRUCTNODE desn;\n BOOL fContentNode {};\n int cwchAltText {};\n};\n```\nThe members of the **DocExComment_BeginStructNode** structure are as follows:\n\n- **ident** Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n- **iComment** Specifies the MSODOCEXCOMMENT value, msodocexcommentBeginStructNode.\n- **idNodeParent** Specifies the ID of the parent node. A value of **0** specifies the root node. A value of **-1** specifies the currently open structure node, that is, the *enclosing* structure node.\n- **iSortOrder** Specifies the sort order of the structure node among its sibling nodes. The sort order enables the add-in to order the content correctly in the exported document.\n\nNo two nodes can have the same sort order. However, the set of integers that constitute the sort order do not need to be contiguous.\n\nA value of **-1** indicates that the sibling order is the same order in which the nodes appear in the EMF comments. 
Note that the order in which the content appears in the EMF is not necessarily the order in which the content is consumed by a user of the document.\n\n- **desn** Specifies a **MSODOCEXSTRUCTTYPE** structure, which is defined earlier in the document.",
                    "page_start": 19,
                    "page_end": 19,
                    "source_file": "office-pdf.pdf"
                }
            ]
        },
        {
            "references": {
                "source_file": "office-pdf.pdf",
                "query": "What are vector colors ?",
                "target_page": 29,
                "target_passage": "Vector colors are any COLORREF values that the add-in receives from Publisher.",
                "chunk_present": {
                    "presence": true,
                    "index": 1
                }
            },
            "top_chunk": [
                {
                    "text": "The **DocExComment_EPSColorSpotImage** structure provides spot color information for the subsequent RGB image. For more information about this structure, see the section Extended Color Support.\n\n```\nC++\ntypedef struct\n{\n DWORD ident {};\n DWORD iComment {};\n COLORREF cmykAlt { 0 };\n COLORREF rgbAlt { 0 };\n float flTintMin {};\n float flTintMax {};\n char szSpotName[1];\n} DocExComment_EPSColorSpotImage;\n```\nThe members of the **DocExComment_EPSColorSpotImage** structure are as follows:\n\n- **ident** Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n- **iComment** Specifies the MSODOCEXCOMMENT value, msodocexcommentEPSSpotImage.\n- **cmykAlt** Specifies a CMYK color ID.\n- **rgbAlt** Specifies an RGB color ID.\n- **flTintMin** Specifies the minimum tint.\n- **flTintMax** Specifies the maximum tint.\n- **szSpotName[1]** Specifies a variable length, zero-terminated string that contains the spot name.\n\n### **Extended Color Support**\n\nTo support extended color spaces in Publisher, additional EMF semantic records and interfaces are needed because EMF only supports RGB (red-green-blue) colors. 
Extended color spaces include CMYK (cyan-magenta-yellow-black) and spot color space, which are commonly used in commercial printing.\n\nPublisher uses color mapping to represent extended colors in the document EMF. Publisher builds a color table for all colors used in the document and replaces actual colors with color IDs in the EMF. The type for the color ID is **COLORREF**, which is the",
                    "page_start": 27,
                    "page_end": 27,
                    "source_file": "office-pdf.pdf"
                },
                {
                    "text": "same type that is used for RGB color. For information about the COLORREF structure, see COLORREF.\n\nTo resolve color IDs in the EMF back to the extended color space, the add-in calls back to Publisher through the **HrResolveColor** method of the **IMsoDocExporterSite** interface. The add-in passes Publisher an interface pointer to an **IDOCEXCOLOR** interface as one of the parameters to **HrResolveColor**. Publisher takes the color IDs, also specified in the call to **HrResolveColor**, converts them to extended color (RGB, CMYK, or spot color), and passes them back to the add-in through the methods in the **IDOCEXCOLOR** interface.\n\n#### **Vector Color and Recolored Images**\n\nVector colors are any **COLORREF** values that the add-in receives from Publisher. For example, text color, line stroke color, and color for metafile recolor. When color mapping is enabled, Publisher uses a color ID for **COLORREF** rather than a real RGB color value. If Publisher provides the add-in an **IMsoDocExporterSite** interface pointer by calling the **SetDocExporterSite** method of the **IMsoDocExporter** interface, the add-in should always call the **IMsoDocExporterSite::HrResolveColor** method to convert the **COLORREF** to an extended color, which the add-in receives through the methods in the **IDOCEXCOLOR** interface.\n\nTo support vector color mapping, the add-in needs to do the following:\n\n- Implement class support for an **IDOCEXCOLOR** interface. 
The methods in this interface enable Publisher to pass extended color back to the add-in.\n- Cache the following color state values from the semantic records in the EMF.\n- Set foreground color for recoloring. This is set through the **DocExComment_ColorInfo** structure.\n- Set background color for recoloring. This is set through the **DocExComment_ColorInfo** structure.\n- Determine when color mapping is enabled. This is set through the **DocExComment_ColorEnable** structure.\n- For a vector color, create an **IDOCEXCOLOR** interface with the color ID, so that **IDOCEXCOLOR::GetUnresolvedRGB** returns the color ID. The add-in should call the **IMsoDocExporterSite::HrResolveColor** method with the **IDOCEXCOLOR** interface and cached color states. Publisher calls the **IDOCEXCOLOR** interface methods with the final color, which can be RGB, CMYK, spot, or registration tint.", - "page_start": 28, - "page_end": 28, - "source_file": "office-pdf.pdf" - }, - { - "text": "```\n✞ ☎\n # Create AIF object\n aif = init_aif (\n A:: Vector { Array {T, N}}, # A- matrices\n B:: Vector { Array {T, N}}; # B- matrices\n C:: Vector { Array { Real }}, # C- matrices ( optional )\n D:: Vector { Vector { Real }}, # D- matrices ( optional )\n E:: Vector {T}, # E- vector ( optional )\n pA:: Union { Vector { Array {T, N}}, Nothing }, # Dirichlet priors for A- matrices ( optional )\n pB:: Union { Vector { Array {T, N}}, Nothing }, # Dirichlet priors for B- matrices ( optional )\n pD:: Union { Vector { Array { Real }}, Nothing }, # Dirichlet priors for D- vectors ( optional )\n parameters :: Dict { String , Real }, # Dictionary containing other parameters ( optional )\n settings :: Dict { String , Any } # Dictionary containing settings ( optional )\n )\n```\n**A** and **B** are the only mandatory arguments to the init_aif function—the other arguments are keyword arguments that default to uniform priors. 
**A**, **B**, **C**, **D** and **E** and their corresponding Dirichlet priors, in the cases of **A**, **B** and **D**, should be formatted as standard array objects. All but **E** can have multiple modalities/factors (see Section 4), so they should be formatted as vectors of arrays with one array per modality/factor. These arrays can be hand-specified by the user, or be generated with some of the helper functions supplied by ActiveInference. Here, we create an AIF agent equipped with a generative model with six environmental states, five possible observations and two possible actions. Here, we use helper functions to create matrices and vectors with the correct dimensions; in Section 4, we create them manually. First, we define the number of states, observations, controls and the length of policies:\n\n✝ ✆\n\n```\n✞ ☎\n # Information about number of states , observations , actions and policy length\n states = [6] # Six states , single factor\n observations = [5] # Five observations , single modality\n controls = [2] # Two actions , single factor\n policy_length = 1 # Length of policies\n # Generate uniform templates for matrices and vectors of the generative model\n A, B, C, D, E = create_matrix_templates ( states , observations , controls , policy_length )\n```\nThe **A** object generated here is a one-dimensional vector containing a uniform 5 × 6 matrix (six states and five observations). The **B** object is a one-dimensional vector containing a uniform 6 × 6 × 2 array (six states and two actions). The **C**, **D** and **E** objects are onedimensional vectors, each containing uniform vectors with their corresponding sizes. We can now modify these to supply the agent with more informative priors over observations, initial states and policies. 
Here, we performed this using the onehot function:\n\n✞ ☎\n\n✝ ✆\n\n```\n# We make C take the following form : [0 , 0 , 0 , 0 , 1]\n C[1] = onehot (5,5) # Initialize the single element of the C object with a one - hot vector\n # D will be: [1 , 0 , 0 , 0 , 0 , 0]\n D[1] = onehot (1,6) # Initialize the single element of the D object with a one - hot vector\n # To make the agent prefer policy 2\n E = onehot (2,2) # Initialize as a one - hot encoded vector : [0 ,1]\n✝ ✆\n```\nWe now create the Dirichlet priors for **A**, **B** and **D**. When we use parameter learning, these are used to define **A**, **B** and **D** defined above, and are updated at every time step. One way to construct Dirichlet priors is to simply multiply the matrices below with a scaling factor; a higher scaling leads to more precise priors that require stronger evidence to update. Here, we use a scaling parameter of 2. In the current version, parameter learning is only implemented for the **A**, **B** and **D**:",
                    "page_start": 11,
                    "page_end": 11,
                    "source_file": "pubmed7_cc4.pdf"
                },
                {
                    "text": "- When either foreground color or background color for recoloring is specified from an EMF semantic record, the add-in should recolor images in the add-in (for example, metafiles or raster pictures).\n#### **Non-Recolored Images**\n\nEMF supports CMYK *images* using GDI+. Therefore, images in the EMF may be either RGB or CMYK. If the image is a CMYK image, the add-in needs to convert the image to the target color space.\n\nPublisher maintains a target color space for the document. The add-in can use this target color space by calling the **IMsoDocExporterSite::HrConvertImageColorSpace** method with the image's color space.\n\n### **Color from EPS Files**\n\nEncapsulated Postscript (EPS) is a metafile type that supports extended color spaces. Users who embed EPS images in a Publisher document expect the color information to be used in the fixed-format output. 
Inside Publisher, the EPS is converted to an EMF with EPS-related semantic records. This EMF is then embedded in the page EMF file that the application passes to the add-in.\n\nTo support color in EPS files, the add-in needs to do the following:\n\n- Call the **IMsoDocExporterSite::SetEPSInfo** method for **DocExComment_EPSColor** records encountered in the EMF.\n- Extract the CMYK image from the **DocExComment_EPSColorCMYKJPEG** record in the EMF. This record contains a binary object that is the actual CMYK JPEG file stream. Use it to replace the RGB image specified in the subsequent call to the **StretchDIBits** function.\n- The **DocExComment_EPSColorSpotImage** record provides spot color information for the subsequent RGB image, which is always an index image. The add-in needs to convert the spot image to the target color space.\n- The add-in can optionally call the **IMsoDocExporterSite::HrGetSpotRecolorInfo** method to obtain the document's target color from Publisher. Then the add-in can recolor the subsequent RGB image by mapping colors from the palette of the RGB image to **flTintMin** and **flTintMax** tints specified in the **DocExComment_EPSColorSpotImage** record. The luminosity for each color of the palette is used for the mapping.",
                    "page_start": 29,
                    "page_end": 29,
                    "source_file": "office-pdf.pdf"
                },
                {
                    "text": "**Figure 6.** Chain traces. 
Each color signifies an individual chain.\n\n**Figure 7.** Posterior estimates of the *α* parameter plotted against the prior for two synthetic subjects, one from each group.\n\n```\n✞ ☎\n # Sample from the model prior\n prior_chains = sample ( model , Prior (), 1000 )\n # Rename parameters from the prior chains to match the posterior chains\n renamed_prior_chains = rename_chains ( prior_chains , model )\n # Plot the posterior and prior for the first subject\n plot_parameters ( renamed_prior_chains [:,1:1,:], renamed_posterior_chains [:,1:1,:])\n # Visualize the true alpha value\n vline !([ data [1,: Alpha ]], line =: dash , color = : darkorange2 , label = \" Generative Alpha \")\n # Plot the posterior and prior for the last subject\n plot_parameters ( renamed_prior_chains [:,10:10,:], renamed_posterior_chains [:,10:10,:])\n # Visualize the true alpha value\n vline !([ data [ 3000 ,: Alpha ]], line =: dash , color = : darkorange2 , label = \" Generative Alpha \")\n✝ ✆\n```\nWe then, as is often the case in computational psychiatry, wanted to compare the distributions of parameter values between the two groups. We extracted the median of the estimated posteriors for each subject and plotted them against the value used to generate", - "page_start": 26, - "page_end": 26, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "space, so no transformation is needed.\n\n- **iTargetPage** Specifies the page number of the destination page within the document.\n- **xtfvTarget** Specifies the x-coordinate of the target location on the destination page. The unit of measure for this value is points.\n- **ytfvTarget** Specifies the y-coordinate of the target location on the destination page. The unit of measure for this value is points.\n- **dytfTargetPage** The height of the destination page in points. The offset specified by the **ytfvTarget** member is relative to the upper-left corner of the page. 
However, some fixed-format types use a coordinate system that is relative to the bottom-left corner of the page. For these types of documents, the page height is required to convert the offset.\n\n#### **DocExComment_ColorInfo**\n\nThe **DocExComment_ColorInfo** structure specifies color-state information for the EMF. For more information about this structure, see the section Extended Color Support.\n\n```\nC++\nstruct DocExComment_ColorInfo\n{\n DWORD ident {};\n DWORD iComment {};\n COLORREF clr { 0 };\n BOOL fForeColor {};\n};\n```\nThe members of the **DocExComment_ColorInfo** structure are as follows:\n\n- **ident** Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n- **iComment** Specifies the MSODOCEXCOMMENT value, msodocexcommentColorInfo.\n- **clr** Specifies a color ID that represents a current color state in the EMF.\n- **fForeColor** Specifies whether the color ID in the **clr** member represents a foreground color or a background color. If this member has a value of **true**, the",
                    "page_start": 17,
                    "page_end": 17,
                    "source_file": "office-pdf.pdf"
                },
                {
                    "text": "# B Extended Description of V-JEPA\n\nIn this section, we provide an in-depth description of our approach V-JEPA that is illustrated in Figure 3.\n\nInput. Unless stated otherwise, during pretraining, we always randomly sample a clip of 16 frames from each input video with a temporal stride of 4 between sampled frames. An input video clip therefore covers 64 frames in total, or roughly 2 seconds of a given video running at 30 frames per second. We then resize the video's spatial dimensions to 224 × 224, resulting in an overall shape of 16 × 224 × 224 × 3 for the entire clip. Since ViT networks process a 1D sequence of tokens, we must convert an input video clip into a 1D token sequence. 
To do so, we apply a 3D convolution comprising d filters of size 2 × 16 × 16 with a temporal stride of 2 and a spatial stride of 16, resulting in a tensor of shape 8 × 14 × 14 × d. Next we add absolute 3D sin-cos positional embeddings to the spatio-temporal feature map and flatten it, resulting in a 1D token sequence of shape 1568 × d. This process is demonstrated in Figure 7.\n\nFigure 7 V-JEPA training operates on a video clip flattened into a sequence of tokens. To convert a video clip of size 16 × 224 × 224 × 3 into a 1D token sequence, we apply a 3D convolution comprising d filters of size 2 × 16 × 16 with a temporal stride of 2 and a spatial stride of 16, resulting in a tensor of shape 8 × 14 × 14 × d. Next we add absolute 3D sin-cos positional embeddings to the spatio-temporal feature map and flatten it, resulting in a 1D token sequence of shape 1568 × d.\n\nV-JEPA. We sample both a video clip, and a video mask in each iteration. We denote a video clip represented as a 1D token sequence of length L = 1568 by xL = (x1, . . . , xL). Similarly, given a mask of M < L patches, leaving N = L − M patches unmasked, we denote the indices of masked patches by (i1, . . . , iM) and its complement (the indices of unmasked patches) by (j1, . . . , jN ).\n\nComputing the x-representations. To compute the V-JEPA loss, we first produce the x-representations by masking the video clip and feeding it into the x-encoder; we denote the masked video by xN = (xj1 , . . . , xjN ). Applying the xencoder Eθ(·) to the masked clip gives a sequence of patch representations, denoted as zN = Eθ(xN ) = (zj1 , . . . , zjN ).\n\nPredicting the target. Next, the V-JEPA predictor network Pϕ(·, ·) takes as input the tokens produced by the x-encoder and predicts the missing regions in the video clip, which are specified by a set of learnable mask tokens. 
Specifically, the mask tokens are parameterized as the sum of a shared learnable vector and an absolute 3D sin-cos positional embedding, denoted by mM = (mi1 , . . . , miM ). The output of the predictor is thus given by, sˆM = Pϕ(zN , mM) = (ˆsi1 , . . . , sˆiM ), corresponding to a d-dimensional output for each of the M masked patches.\n\nComputing the y-representations. Finally to compute the prediction targets, the entire unmasked video clip is processed by the y-encoder to obtain a set of target representations, denoted by sL = Eθ(xL) = (s1, . . . , sL). The V-JEPA loss is now computed as\n\n$$\\text{Loss}=\\frac{1}{M}\\sum_{k\\in(i_{1},...,i_{M})}\\|\\hat{s}_{k}-s_{k}\\|_{1},\\tag{2}$$\n\nwhich is simply the average L1 distance between the output of the predictor and the y-encoder. We then compute a gradient update with respect to the parameters of the x-encoder, θ, and the predictor, ϕ, and subsequently update the parameters of the y-encoder as an exponential moving average of the context encoder weights (Polyak average).", - "page_start": 15, - "page_end": 15, - "source_file": "arxiv3.pdf" - }, - { - "text": "- Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, et al. Ava: A video dataset of spatio-temporally localized atomic visual actions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6047–6056, 2018.\n- Agrim Gupta, Jiajun Wu, Jia Deng, and Li Fei-Fei. Siamese masked autoencoders. arXiv preprint arXiv:2305.14344, 2023.\n- Michael U Gutmann and Aapo Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of machine learning research, 13(2), 2012.\n- Tengda Han, Weidi Xie, and Andrew Zisserman. Video representation learning by dense predictive coding. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pages 0–0, 2019.\n- Tengda Han, Weidi Xie, and Andrew Zisserman. Memoryaugmented dense predictive coding for video representation learning. In European conference on computer vision, pages 312–329. Springer, 2020.\n- Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377, 2021.\n- Geoffrey E Hinton. Connectionist learning procedures. In Machine learning, pages 555–610. Elsevier, 1989.\n- Tarun Kalluri, Deepak Pathak, Manmohan Chandraker, and Du Tran. Flavr: Flow-agnostic video representations for fast frame interpolation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2071–2082, 2023.\n- Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.\n- Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.\n- Christoph Kayser, Wolfgang Einhäuser, Olaf Dümmer, Peter König, and Konrad Körding. Extracting slow subspaces from natural videos leads to complex cells. In Artificial Neural Networks—ICANN 2001: International Conference Vienna, Austria, August 21–25, 2001 Proceedings 11, pages 1075–1080. Springer, 2001.\n- Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Learning representations for automatic colorization. 2016.\n- Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Colorization as a proxy task for visual understanding. 2017.\n- Yann LeCun. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27. 
2022.\n- Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Unsupervised representation learning by sorting sequences. In Proceedings of the IEEE international conference on computer vision, pages 667–676, 2017.\n- Kunchang Li, Yali Wang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, and Yu Qiao. Uniformer: Unified transformer for efficient spatiotemporal representation learning. arXiv preprint arXiv:2201.04676, 2022.\n- Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.\n- Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2630–2640, 2019.\n- Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European conference on computer vision, pages 69–84. Springer, 2016.\n- Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.\n- Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023.\n- Nikhil Parthasarathy, SM Eslami, João Carreira, and Olivier J Hénaff. Self-supervised video pretraining yields strong image representations. arXiv preprint arXiv:2210.06433, 2022.\n- Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2536–2544, 2016.\n- Silvia L Pintea, Jan C van Gemert, and Arnold WM Smeulders. Déja vu: Motion prediction in static images. 
In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part III 13, pages 172–187. Springer, 2014.\n- Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.\n- Rajesh PN Rao and Dana H Ballard. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature neuroscience, 2(1):79–87, 1999.\n- Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv3.pdf" - }, - { - "text": "**Frozen**\n\n(a) Visualization Methodology. We train a conditional diffusion model to decode the V-JEPA feature-space predictions to interpretable pixels; the pretrained V-JEPA encoder and predictor networks are kept frozen in this process. The decoder is only fed the representations predicted for the missing regions of the video, and does not have access to the unmasked regions of the video.\n\n(b) Visualizations. First Row: Masked videos used as input to the V-JEPA models (a pretrained ViT-H/16 encoder and its corresponding predictor network). Other rows: Bounding boxes contain various samples from the decoder overlayed on the original video. V-JEPA is not a generative model and the decoder does not have access to the context (first row), so we do not expect samples to exactly match the input. This experiment qualitatively illustrates what information is encoded and predicted by V-JEPA. In particular, characteristics that are common across samples represent information that is encoded in the V-JEPA predictions. 
V-JEPA generates predictions that are spatially and temporally coherent with the unmasked region of the video. The predictions also capture consistent motion through time.\n\nFigure 6 Qualitative Analysis. Offline visualizations of the V-JEPA feature-space predictions.\n\n# 7 Conclusion\n\nIn this work, we explored the effectiveness of feature prediction as a stand-alone objective for unsupervised learning from video and introduced V-JEPA, a collection of vision models trained solely using a self-supervised feature prediction objective. The V-JEPA models demonstrate the ability to solve various downstream image and video tasks without adaption of the model parameters, and outperform previous video representation learning approaches in frozen evaluation on action recognition, spatio-temporal action detection, and image classification tasks. Additionally, we show that pretraining V-JEPA on videos is particularly effective for solving downstream tasks requiring fine-grained motion understanding, while large-scale image models trained on internet scale datasets fall short on such tasks. Finally, we empirically observed that V-JEPA models are label-efficient learners, and exhibit good performance on downstream tasks, even when only few labeled examples are available.\n\n# References\n\n- Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. Advances in Neural Information Processing Systems, 34:24206–24221, 2021.", "page_start": 9, "page_end": 9, "source_file": "arxiv3.pdf" }, { "text": "Feature Prediction versus Pixel Reconstruction. Approaches that predict in pixel space must dedicate significant model capacity and compute to capture all the low-level detail in the visual input. 
By contrast, approaches that predict in latent space have the flexibility to eliminate irrelevant or unpredictable pixel-level details from the target representation (Vondrick et al., 2016). Predicting in representation space has been shown to lead to versatile representations that perform well across many downstream tasks through linear probing or lowshot adaptation (Assran et al., 2023; Oquab et al., 2023; Assran et al., 2022), while demonstrating an efficiency gain during pretraining compared to pixel level reconstruction (Assran et al., 2023; Baevski et al., 2022b,a). The works of Baevski et al. (2022a,b) additionally show that predicting in representation space results in competitive end-to-end fine-tuning performance in the image, audio and text domains. In this work, we extend these findings to the video modality.\n\n# 3 Methodology: Video-JEPA\n\nFigure 2 Joint-Embedding Predictive Architectures are trained to predict the representation of an input y from the representation of another input x. The additional variable z provides the predictor with information about the transformation that computes y from x.\n\nOur goal is to explore the effectiveness of feature prediction as a stand-alone objective for learning visual representations from video. To that end, we use a joint-embedding predictive architecture (JEPA) (LeCun, 2022); see Figure 2. The main idea behind a JEPA is to learn by predicting the representation of an input y from the representation of another input x. The basic architecture is made up of an encoder, Eθ(·), which computes the representation of the inputs, and a predictor, Pϕ(·), which predicts the representation of y from the representation of x, conditioned on a variable z indicating the transformation (or corruption) between x and y. 
Conditioning on z enables the generation of distinct predictions for various transformations of x.\n\n### 3.1 Training Objective\n\nWe train our visual encoder Eθ(·) to satisfy the constraint that representations computed from one part of the video, y, should be predictable from representations\n\ncomputed from another part of the video, x. The predictor network Pϕ(·), which maps the representation of x to the representation of y, is trained simultaneously with the encoder, and is provided specification of the spatio-temporal positions of y through the conditioning variable z ← ∆y.\n\nNaively implementing the objective using the regression\n\n$$\\begin{array}{r l}{{\\mathrm{minimize}_{\\theta,\\phi}}}&{{}\\|P_{\\phi}(E_{\\theta}(x),\\Delta_{y})-E_{\\theta}(y)\\|_{1},}\\end{array}$$\n\nwould admit a trivial solution, where the encoder outputs a constant representation, regardless of its input. In practice, we use the following modified objective to prevent representation collapse,\n\nminimize${}_{\\theta,\\phi}\\quad||P_{\\phi}(E_{\\theta}(x),\\Delta_{y})-\\mbox{sg}(\\overline{E}_{\\theta}(y))||_{1},$ (1)\n\nwhere sg(·) denotes a stop-gradient operation, which does not backpropagate through its argument, and Eθ(·) is an exponential moving average of the network Eθ(·). The use of an exponential-moving average feature extractor along with a stop-gradient and a predictor has been used as a collapse prevention strategy for image pretraining (Grill et al., 2020), and studied empirically (Xie et al., 2021) and theoretically (Tian et al., 2021). In fact, the objective in equation (1) is similar to the loss of Assran et al. (2023) used for image pretraining, but we modify it to use an ℓ1 regression, which we found to be more stable.\n\nTheoretical motivation. A theoretical motivation for the effectiveness of this collapse prevention strategy was proposed in Grill et al. (2020) for the BYOL method. We provide a simple adaptation of their analysis for our ℓ1 loss. 
For ease of exposition, we will disregard the effect of the conditioning variable z and consider one dimensional representations. Denote the representation Eθ(y) by a random variable Y . The optimal predictor under equation (1) is thus given by the following functional expression,\n\n$P^{\\star}(E_{\\theta}(x))=\\text{argmin}_{P}\\|P(E_{\\theta}(x))-Y\\|_{1}$ \n \n$=\\text{median}(Y|E_{\\theta}(x))$. \n \n\nSubstituting this expression for the optimal predictor into the loss function and evaluating the expected gradient of the encoder gives\n\n$$\\nabla_{\\theta}\\mathbb{E}\\|P^{\\star}(E_{\\theta}(x))-Y\\|_{1}=\\nabla_{\\theta}\\mathrm{MAD}(Y|E_{\\theta}(x)),$$\n\nwhere MAD(· |Eθ(x)) is the median absolute deviation of a random variable conditioned on Eθ(x). Thus, in the case where the predictor is optimal, the encoder must learn to capture as much information about the video as possible to minimize the deviation of the target. The hypothesis is that incorporating an exponential moving average to compute the representation of y ensures that the predictor evolves faster than the encoder and remains close to optimal, thereby preventing collapse.", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv3.pdf" - } - ] - }, - { - "references": { - "source_file": "office-pdf.pdf", - "query": "What are msodocexMetadataComments ?", - "target_page": 35, - "target_passage": "Miscellaneous comments relevant to the document.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "The *metadatatype* parameter specifies the type of metadata represented by the string. The *metadatatype* parameter must be one of the following values from the MSODOCEXMETADATA enumeration type.\n\nTable 8. Enumerated values of MSODOCEXMETADATA\n\nノ **Expand table**\n\n| Value | Description |\n| --- | --- |\n| msodocexMetadataTitle | The title of the document. 
|\n| msodocexMetadataAuthor | The author of the document |\n| msodocexMetadataSubject | String that describes the subject matter of the document (for |\n| | example, business or science). |\n| msodocexMetadataKeywords | Keyword relevant to the document content. |\n| msodocexMetadataCreator | The creator of the document, possibly distinct from the author. |\n| msodocexMetadataProducer | The producer of the document, possibly distinct from the author |\n| | or creator. |\n| msodocexMetadataCategory | String that describes the type of document (for example, memo, |\n| | article, or book). |\n| msodocexMetadataStatus | Status of the document. This field can reflect where the |\n| | document is in the publication process (for example, draft or |\n| | final). |\n| msodocexMetadataComments | Miscellaneous comments relevant to the document. |\n\nFor a given document, each metadata type can have only one string associated with it. So, for example, if the document has multiple keywords, they are passed to the add-in as one concatenated string.\n\nThe *pwchValue* parameter specifies a Unicode string that contains the metadata itself.\n\nHow the add-in incorporates the text-string metadata into the exported document depends on the implementation details of the export code and the type of fixed-format used in the exported document.\n\n### **HrAddDocumentMetadataDate**\n\nPublisher calls the **HrAddDocumentMetadataDate** method to specify document metadata in the form of a FILETIME structure.", - "page_start": 34, - "page_end": 34, - "source_file": "office-pdf.pdf" - }, - { - "text": "Table 6. 
Semantic record types supported by fixed-format export\n\nノ **Expand table**\n\n| Comment Value | Structure Type |\n| --- | --- |\n| msodocexcommentExternalHyperlink | DocExComment_ExternalHyperlink |\n| msodocexcommentExternalHyperlinkRctfv | DocExComment_ExternalHyperlink |\n| msodocexcommentInternalHyperlink | DocExComment_InternalHyperlink |\n| msodocexcommentInternalHyperlinkRctfv | DocExComment_InternalHyperlink |\n| msodocexcommentColorInfo | DocExComment_ColorInfo |\n| msodocexcommentColorMapEnable | DocExComment_ColorEnable |\n| msodocexcommentBeginTextRun | DocExComment_BeginTextRun |\n| msodocexcommentBeginTextRunRTL | DocExComment_BeginTextRun |\n| msodocexcommentEndTextRun | DocExComment_EndTextRun |\n| msodocexcommentBeginStructNode | DocExComment_BeginStructNode |\n| msodocexcommentEndStructNode | DocExComment_EndStructNode |\n| msodocexcommentUnicodeForNextTextOut | DocExComment_UnicodeForNextTextOut |\n| msodocexcommentUnicodeForNextTextOutRTL | DocExComment_UnicodeForNextTextOut |\n| msodocexcommentEPSColor | DocExComment_EPSColor |\n| msodocexcommentEPSCMYKJPEG | DocExComment_EPSColorCMYKJPEG |\n| msodocexcommentEPSSpotImage | DocExComment_EPSColorSpotImage |\n| msodocexcommentEPSStart | DocExComment_EPSStart |\n| msodocexcommentPageName | DocExComment_PageName |\n| msodocexcommentTransparent | DocExComment_Transparent |\n\n#### **DocExComment_ExternalHyperlink(Rctfv)**\n\nThe **DocExComment_ExternalHyperlink(Rctfv)** structure describes a hyperlink that links to outside of the document, for example to a Web site on the Internet.", - "page_start": 14, - "page_end": 14, - "source_file": "office-pdf.pdf" - }, - { - "text": "- **shapeProperty** is for a msodocexStructTypeFigure where the content is a shape, text box, or table cell and contains bit fields from the MSODOCEXSHAPEPROPERTY enumeration.\n- **tableAttr** is the table cell attributes for a msodocexStructTypeTH or msodocexStructTypeTD.\n- **idTableHeader** is the unique id for an 
msodocexStructTypeTH or msodocexStructTypeTD.\n- **iTargetParentId** is the id of the node to reparent an msodocexStructTypeDiagram to.\n\nTable 3. Enumerated values of MSODOCEXLINEBREAKTYPE\n\nノ **Expand table**\n\n| Value | Description |\n| --- | --- |\n| msodocexLineBreakTypeNormal | Normal line break. |\n| msodocexLineBreakTypeManual | Manual line break. |\n| msodocexLineBreakTypeEOP | End of paragraph. |\n\n#### Table 4. Enumerated values of MSODOCEXLISTTYPE\n\nノ **Expand table**\n\n| Value | Description |\n| --- | --- |\n| msodocexListTypeNone | No bullets or numbering. |\n| msodocexListTypeBulletDisc | Disc-shaped bullets. |\n| msodocexListTypeBulletCircle | Circle-shaped bullets. |\n| msodocexListTypeBulletSquare | Square-shaped bullets. |\n| msodocexListTypeBulletDecimal | Decimal numbering. |\n| msodocexListTypeUpperRoman | Uppercase Roman numeral numbering. |\n| msodocexListTypeLowerRoman | Lowercase Roman numeral numbering. |\n| msodocexListTypeUpperAlpha | Uppercase alphabetic numbering. |\n| msodocexListTypeLowerAlpha | Lowercase alphabetic numbering. |\n\nTable 5. Enumerated values of MSODOCEXSHAPEPROPERTY bit fields", "page_start": 9, "page_end": 9, "source_file": "office-pdf.pdf" }, { "text": "| Type Value | Description |\n| --- | --- |\n| msodocexStructTypeTOC | A table of contents. |\n| msodocexStructTypeTOCI | An item in a table of contents. |\n| msodocexStructTypeExtLink | A link to an external resource. |\n| msodocexStructTypeIntLink | A link to an internal resource. |\n| msodocexStructTypeFootnote | A footnote. |\n| msodocexStructTypeEndnote | An endnote. |\n| msodocexStructTypeTextbox | A text box. |\n| msodocexStructTypeHeader | A block of text forming a header. |\n| msodocexStructTypeFooter | A footer. |\n| msodocexStructInlineShape | An inline shape. |\n| msodocexStructAnnotation | An annotation. |\n| msodocexStructTypeSpanBlock | A block of text. |\n| msodocexStructTypeWorkbook | A workbook. 
|\n| msodocexStructTypeWorksheet | A worksheet. |\n| msodocexStructTypeMacrosheet | A macrosheet. |\n| msodocexStructTypeChartsheet | A chartsheet. |\n| msodocexStructTypeDialogsheet | A dialogsheet. |\n| msodocexStructTypeSlide | A slide. |\n| msodocexStructTypeChart | A chart. |\n| msodocexStructTypeDiagram | A SmartArt diagram. |\n| msodocexStructTypeBulletText | Bullet text. |\n| msodocexStructTypeTextLine | A line of text. |\n| msodocexStructTypeDropCap | A drop cap. |\n| msodocexStructTypeSection | A section. |\n| msodocexStructTypeAnnotationBegin | The beginning of an annotation. |\n| msodocexStructTypeAnnotationEnd | The end of an annotation. |", "page_start": 21, "page_end": 21, "source_file": "office-pdf.pdf" }, { "text": "```\nC++\nHRESULT HrAddDocumentMetadataDate(\n MSODOCEXMETADATA metadataType, \n const FILETIME* pftLocalTime\n);\n```\nThe *metadatatype* parameter specifies the type of metadata represented by the **FILETIME** structure. The *metadatatype* parameter must be one of the following values from the MSODOCEXMETADATA enumeration type.\n\nTable 9. Enumerated values of MSODOCEXMETADATA\n\n#### ノ **Expand table**\n\n| Value | Description |\n| --- | --- |\n| msodocexMetadataCreationDate | The creation date for the document. |\n| msodocexMetadataModDate | The last-modified date for the document. |\n\nThe *pftLocalTime* parameter specifies a pointer to a FILETIME structure that contains the date and time information for the metadata. 
The following code snippet demonstrates how to extract this information from the structure.\n\n```\nC++\n```\n\n```\nSYSTEMTIME st = { 0 };\nWCHAR s[100];\nFileTimeToSystemTime(pfiletime, &st);\nswprintf(s, 99, L\" %04d-%02d-%02dT%02d:%02d:%02dZ\", st.wYear % 10000, \n st.wMonth % 100, st.wDay % 100, st.wHour % 100, st.wMinute % 100, \n st.wSecond % 100);\n```\nHow the add-in incorporates the date and time metadata into the exported document depends on the implementation details of the export code and the type of fixed-format used in the exported document.\n\n### **HrFinalize**\n\nPublisher calls the **HrFinalize** method at the end of the document-export process.\n\nC++", - "page_start": 35, - "page_end": 35, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Type Value | Description |\n| --- | --- |\n| msodocexStructTypeParaRTLAttr | A block of text within an article with right-to-left |\n| | layout. |\n| msodocexStructTypeTableRTLAttr | A block of text forming a table with right-to-left |\n| | layout. |\n| msodocexStructTypeHeadingRTLAttr | A heading in the text with right-to-left layout. |\n| msodocexStructTypeListItemRTLAttr | A block of text forming a list item with right-to-left |\n| | layout. |\n| msodocexStructTypeParaUnannotatableAttr | A block of text within an article that is not |\n| | annotatable. |\n| msodocexStructTypeTHead | The header row area in a table. |\n| msodocexStructTypeTBody | The body area in a table, i.e. the portion between |\n| | the THead and TFoot. |\n| msodocexStructTypeLabel | A label. |\n| msodocexStructTypeEquation | An equation. |\n| msodocexStructTypeIntLinkNoteRef | A footnote or endnote reference mark link. |\n| msodocexStructTypeTFoot | The footer row area in a table. |\n\n**fContentNode** Specifies whether a **DocExComment_EndStructNode** structure marks the end of this structure node. If **fContentNode** is **true**, a\n\n**DocExComment_EndStructNode** structure closes off the content bounded by the node. 
If this **fContentNode** has a **false** value, then the node does not bound any content.\n\nThe **fContentNode** member affects the interpretation of the parent ID value of subsequent nodes. If **fContentNode**is **true**, nodes that are inserted between this **DocExComment_BeginStructNode** and a subsequent **DocExComment_EndStructNode**, and that have a parent ID of **-1**, are children of this node. However, if **fContentNode** is **true**, nodes inserted after this **DocExComment_BeginStructNode**, and that have a parent ID of **-1**, are not children of this node. They are children of the next-most-recently specified node that has **fContentNode** equal to **false**.\n\nYou can nest document structure nodes to arbitrary depth.\n\n**cwchAltText** Specifies the number of Unicode characters in the block of alternate text that follows the structure. This Unicode string specifies alternate text for the node (for example, alternate text for an image).", - "page_start": 22, - "page_end": 22, - "source_file": "office-pdf.pdf" - }, - { - "text": "```\ntypedef struct _MsoDocexStructNode\n{\n int idNode;\n MSODOCEXSTRUCTTYPE nodetype;\n WCHAR* pwchAltText;\n union\n {\n int iHeadingLevel;\n ULONG idPara;\n ULONG idDropCap;\n int iPage;\n WCHAR* pwchActualText;\n MSODOCEXLINEBREAKTYPE bt;\n int iListLevel;\n MSODOCEXLISTTYPE listType;\n ULONG idAtn;\n long cpLim;\n int shapeProperty;\n MsoDocexTableAttr tableAttr;\n WCHAR* idTableHeader;\n int iTargetParentId;\n };\n} MSODOCEXSTRUCTNODE;\n```\nThe **idNode** member specifies the ID of the node being passed in the call to **HrBeginStructNode**. This member may not have a value of **0**. A value of **-1** indicates that child nodes do not use the *idNodeParent* parameter to specify this node as their parent. Instead, this node can be a parent only by enclosing child nodes in the EMF. Multiple nodes can have an ID of **-1**. 
If the ID is not **-1**, the value is unique across the document.\n\nThe embedded union at the end of the MSODOCEXSTRUCTNODE is interpreted differently depending on the type of node:\n\n- **iHeadingLevel** is the heading level for an msodocexStructTypeHeading.\n- **idPara** is the paragraph id for a P, TOCI, or ListBody.\n- **idDropCap** is the id of an msodocexStructTypeDropCap.\n- **iPage** is the page number for an msodocexStructTypePage.\n- **bt** is the line break type for an msodocexStructTypeTextLine.\n- **iListLevel** is the list level for an msodocexStructTypeList or msodocexStructTypeListItem.\n- **listType** is the list type for an msodocexStructTypeListItem.\n- **idAtn** is the id of an msodocexStructTypeAnnotationBegin or msodocexStructTypeAnnotationEnd.\n- **cpLim** is used to determine the nesting order of tables within tables for an msodocexStructTypeTable, msodocexStructTypeTOC, or msodocexStructTypeListBody.", - "page_start": 8, - "page_end": 8, - "source_file": "office-pdf.pdf" - }, - { - "text": "see the section Extended Color Support.\n\n```\nC++\ntypedef struct\n{\n DWORD ident {};\n DWORD iComment {};\n BYTE colorInfo[];\n} DocExComment_EPSColor;\n```\nThe members of the **DocExComment_EPSColor** structure are as follows:\n\n- **ident** Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n- **iComment** Specifies the MSODOCEXCOMMENT value, msodocexcommentEPSColor.\n- **colorInfo[]** Specifies the color information for the EPS file. The add-in should pass this information to Publisher using the **IMsoDocExporterSite::SetEPSInfo** method.\n\n#### **DocExComment_EPSColorCMYKJPEG**\n\nThe **DocExComment_EPSColorCMYKJPEG** structure specifies the start, in the EMF, of a binary object that is a CMYKJPEG file stream. 
For more information about this structure, see the section Extended Color Support.\n\n```\nC++\ntypedef struct\n{\n DWORD ident {};\n DWORD iComment {};\n} DocExComment_EPSColorCMYKJPEG;\n```\nThe members of the **DocExComment_EPSColorCMYKJPEG** structure are as follows:\n\n- **ident** Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n- **iComment** Specifies the MSODOCEXCOMMENT value, msodocexcommentEPSCMYKJPEG;\n\n### **DocExComment_EPSColorSpotImage**", - "page_start": 26, - "page_end": 26, - "source_file": "office-pdf.pdf" - }, - { - "text": "The **idNode** member specifies the ID of the node. This member may not have a value of **0**. A value of **-1** indicates that child nodes do not use the **idNodeParent** member to specify this node as their parent. Instead, this node can be a parent only by enclosing child nodes in the EMF. Multiple nodes can have a ID of **-1**. If the ID is not **-1**, the value is unique across the document.\n\nThe **nodetype** specifies the type of structure node. This member is equal to one of the values from the **MSODOCEXSTRUCTTYPE** enumeration type. The following table lists examples of document structure node types.\n\nTable 7. Document structure node types\n\n#### ノ **Expand table**\n\n| Type Value | Description |\n| --- | --- |\n| msodocexStructTypePara | A block of text within an article. Its parent node |\n| | must be an article. |\n| msodocexStructTypeFigure | A graphical element (for example, an image or |\n| | collection of shapes) that has a textual |\n| | representation. The textual representation is the |\n| | alternate text used for reading or searching the |\n| | document. |\n| msodocexStructTypeArticle | A group of nodes forming a single flow of text that |\n| | should be read or searched as a contiguous block |\n| | of content. Some documents have a single article |\n| | and others have multiple articles. 
|\n| msodocexStructTypeHeading | A heading in the text. |\n| msodocexStructTypeTable | A block of text forming a table. |\n| msodocexStructTypeTR | A block of text forming a single row of a table. |\n| msodocexStructTypeTD | A block of text forming a single cell in a table row. |\n| msodocexStructTypeTH | A block of text forming a single header cell in a |\n| | table row. |\n| msodocexStructTypeList | A block of text forming a list. |\n| msodocexStructTypeListItem | A block of text forming a list item. |\n| msodocexStructTypeListBody | A block of text forming the body of a list item. |\n| msodocexStructTypeDocument | A document. |\n| msodocexStructTypePage | A page in the document. |", - "page_start": 20, - "page_end": 20, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Value | Numeric Value | Description |\n| --- | --- | --- |\n| msodocexShape | 0x00000001 | The object is a shape or text box. |\n| msodocexShapeText | 0x00000002 | The object has non-whitespace text. |\n| msodocexShapePath | 0x00000004 | The object has a fill and/or outline. |\n| msodocexShapeAltText | 0x00000008 | The object has Alt Text. |\n| msodocexShapeEquation | 0x00000010 | The object has text that contains an equation. |\n| msodocexShapeTabelCell | 0x00000020 | The object is a cell in a table. 
|\n\n#### **MsoDocexTableAttr**\n\nThe **MsoDocexTableAttr** structure fits in 32 bits and includes the row and column span and header scope information for a table cell.\n\n```\nC++\nstruct MsoDocexTableAttr\n{\n static constexpr unsigned int MaxSpanBits = sizeof(unsigned int) * 8 / 2\n- 1;\n static constexpr unsigned int MaxSpanValue = (1u << MaxSpanBits) - 1;\n unsigned int rowSpan : MaxSpanBits;\n unsigned int fRowScope : 1;\n unsigned int colSpan : MaxSpanBits;\n unsigned int fColScope : 1;\n};\n```\nThe members of **MsoDocexTableAttr** structure are as follows:\n\n- **MaxSpanBits** Specifies the number of bits available for the rowSpan and colSpan values, which is 15.\n- **MaxSpanValue** Specifies the maximum value that can be specified for the rowSpan and colSpan.\n- **rowSpan** Specifies the number of rows that a table cell spans.\n- **fRowScope** Specifies whether the header is Row/Both or Column.\n- **colSpan** Specifies the number of columns that a table cell spans.", - "page_start": 10, - "page_end": 10, - "source_file": "office-pdf.pdf" - } - ] - }, - { - "references": { - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf", - "query": "What are the total operating expenses of Wikimedia foundation in 2024 ?", - "target_page": 6, - "target_passage": "178,471,109", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nThe Foundation also receives donations on behalf of the Wikimedia Endowment as well as transfers additional Foundation donations to the Endowment monthly. Donations that are donor-specified for the Wikimedia Endowment are not recognized as revenue to the Foundation, whereas donations that are not donor-specified for the Wikimedia Endowment are recognized both as contributions revenue and awards and grants expense to the Foundation. 
The Foundation transferred $10,706,812 donor-designated gifts and $624,137 Foundation gifts to the Wikimedia Endowment during the year ended June 30, 2024. As of June 30, 2024, the Foundation owed the Wikimedia Endowment $525,607 for donations to be transferred to the Wikimedia Endowment for the month of June 2024.\n\nDuring the fiscal year ended June 30, 2024, the Wikimedia Endowment also provided the Foundation with grants of $1,500,000 for MediaWiki improvements, $600,000 for the Abstract Wikipedia project, and $500,000 for exploring strategies for expanding beyond the Foundation's existing audiences of consumers and contributors. The grants are recorded as contributions with donor restrictions and within net assets with donor restrictions as of June 30, 2024.\n\n#### **(11) Contingencies and Commitments**\n\nIn the normal course of business, the Foundation receives various threats of litigation. In the opinion of management, the outcome of the pending lawsuits will not materially affect operations or the financial position of the Foundation.\n\n#### **(12) Subsequent Events**\n\nThe Foundation has evaluated its subsequent events through October 8, 2024, the date at which the consolidated financial statements were available to be issued, and determined there are no items to disclose.", - "page_start": 19, - "page_end": 19, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "# Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\n# **(9) Liquidity and Availability of Financial Assets**\n\nThe Foundation's financial assets available for general expenditure within one year of the balance sheet date, June 30, 2024 and 2023, are as follows:\n\n| date, June 30, 2024 and 2023, are as follows: | | | |\n| --- | --- | --- | --- |\n| | | 2024 | 2023 |\n| Cash and cash equivalents | $ | 82,845,159 | 75,808,401 |\n| Current contributions receivable | | 856,657 | — |\n| Short-term investments | | 116,074,763 | 
132,216,667 |\n| Total financial assets | | 199,776,579 | 208,025,068 |\n| Less: | | | |\n| Restricted by donors for programs | | 5,696,323 | 5,882,673 |\n| Donations payable to Wikimedia Endowment | | 525,607 | 5,274,448 |\n| Financial assets available to meet cash needs for | | | |\n| general expenditures within one year | $ | 193,554,649 | 196,867,947 |\n\nThe Foundation's liquidity management includes a policy of structuring its financial assets to be available to meet its general expenditures, liabilities, grant-making, and other obligations as they come due. Cash and cash equivalents as reported on the consolidated balance sheet at June 30, 2024 and 2023, are the primary liquid resources used by the Foundation to meet these obligations. Financial assets invested in the short-term and long-term investments can be liquidated at any time as needed.\n\n# **(10) Related Party Transactions**\n\nThe Wikimedia Endowment began operations as a standalone tax-exempt 501(c)(3) organization on September 30, 2023, with the mission to act as a permanent fund that can support in perpetuity the operations and activities of current and future Wikimedia projects, which are projects that are approved by and advance the purposes of the Foundation or its successor if the Foundation ceases to exist. The Foundation does not have control or controlling financial interest in the Wikimedia Endowment and the Wikimedia Endowment has a separate Board of Directors, but the Wikimedia Endowment is considered a related party to the Foundation because Wikimedia Endowment management is also management at the Foundation.\n\nDuring the fiscal year ended June 30, 2024, the Foundation recognized revenue of $2,063,195 related to services provided to the Wikimedia Endowment, primarily for fundraising and general and administrative support under the terms of a cost sharing agreement. These costs are included within the Foundation's expenses based on the nature of the cost. 
The revenue from the Wikimedia Endowment reimbursing the costs is recorded within other income, net.", - "page_start": 18, - "page_end": 18, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements June 30, 2024 and 2023\n\nFor example (unaudited):\n\n- Wikipedia and the other projects operated by the Foundation receive more than 19.4 billion pageviews per month, making them one of the most popular Web properties worldwide. Wikipedia is available in more than 332 languages and contains more than 63 million articles contributed by a global volunteer community.\n- For the year ended June 30, 2024, the educational content of the Foundation's largest project, Wikipedia, grew by approximately 1.9 million articles to approximately 63.4 million articles.\n- For the year ended June 30, 2024, volunteers added approximately 12.2 million images, movies, and sound files to the Foundation's multimedia repository, making the total 106.7 million files.\n- Volunteers also contribute in several ways to the Foundation's wiki software: volunteer software developers add new functionality to the code base, and volunteer language specialists add to the code base by translating the wiki interface into different languages. During the year ended June 30, 2024, there were 47,773 commits merged, through the efforts of approximately 511 authors/contributors, of which 8,161 commits were through the efforts of approximately 244 volunteers.\n\n## **(7) Operating Leases**\n\nOur operating lease relates to the Foundation's headquarters in San Francisco and has a non-cancelable remaining term of 3 months as of June 30, 2024. The discount rate is 2.9%, the risk-free rate based on daily U.S. Treasury with a term comparable to the lease term. The lease provides the Foundation the option to extend the lease term for one additional period of five years. 
The Foundation determined during the year ended June 30, 2024 not to renew the lease. Operating lease expense was $1,859,383 and $1,489,134 for the year ended June 30, 2024 and 2023, respectively.\n\nUndiscounted lease payments as of June 30, 2024 were as follows:\n\n| | | Lease |\n| --- | --- | --- |\n| | | payments |\n| Year ending June 30: | | |\n| 2025 | | 419,791 |\n| | $ Total minimum lease payments | 419,791 |\n\n#### **(8) Retirement Plan**\n\nThe Foundation offers a 401(k) plan (the Plan) to all of its employees residing in the United States. Employees are eligible to participate in the Plan upon employment. The Foundation matches employee contributions on a dollar-for-dollar basis up to 4% of the employee's compensation. The Foundation contributed $1,859,839 and $1,859,012 to the Plan for the years ended June 30, 2024 and 2023, respectively.", - "page_start": 17, - "page_end": 17, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements June 30, 2024 and 2023\n\n#### **(1) Organization and Summary of Significant Accounting Policies**\n\n#### *(a) Organization and Purpose*\n\nThe accompanying consolidated financial statements present the financial position, change in net assets and cash flows of the Wikimedia Foundation, Inc. (the Foundation) and Wikimedia, LLC.\n\nThe Foundation is the nonprofit organization that operates Wikipedia, a free online encyclopedia. Based in San Francisco, California, the Foundation is a 501(c)(3) charity that is funded primarily through donations and contributions.\n\nThe Foundation also operates Wikimedia, LLC, a Delaware Limited Liability Company, with the Foundation as its Sole Member. 
The Wikimedia, LLC is organized and operated exclusively for charitable and educational purposes within the meaning of section 501(c)(3) of the Internal Revenue Code and is a disregarded entity for tax purposes.\n\n#### *(b) Risks and Uncertainties*\n\nThe Foundation's operations are funded primarily by public donations from individuals as well as gifts from foundations and corporations. External factors such as global geopolitics, recession, and currency markets may impact our ability to raise funds. As of the date of this report, the Foundation has not experienced an adverse impact on its business operations.\n\n#### *(c) Income Taxes*\n\nThe Foundation is exempt from federal income tax under Section 501(c)(3) of the Internal Revenue Code and from state income tax under Chapter 220.13 of the Florida Statutes and Sections 23701d of Revenue and Taxation Code of the State of California. The Internal Revenue Service has determined that the Foundation is not a private foundation and contributions to it qualify as charitable contributions.\n\nThe Foundation has evaluated the financial statement impact of positions taken or expected to be taken in its tax returns. The Foundation is subject to income taxes on any net income that is derived from a trade or business, regularly carried on, and not in furtherance of the purposes for which it was granted exemption. 
Net income from any unrelated trade or business, in the opinion of management, is not material to the consolidated financial statements taken as a whole.\n\n#### *(d) Financial Statement Presentation*\n\nNet assets, support and revenue, expenses, gains, and losses are classified based on the existence or absence of donor-imposed restrictions in accordance with Accounting Standards Codification (ASC) Topic 958, *Not-for-Profit Entities*.\n\nNet assets without donor restrictions represent unrestricted resources available to support operations and also include previously temporarily restricted resources, which have become available for use by the Foundation in accordance with the intentions of donors.\n\nNet assets with donor restrictions represent contributions that are limited in use by the Foundation in accordance with donor-imposed stipulations. The stipulations may expire with time or may be satisfied and removed by the actions of the Foundation according to the terms of the contribution by the donor.", - "page_start": 7, - "page_end": 7, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "# **Licenses and Public Domain Tools**\n\nThe first CC License was created in 2002. Today, we boast **six CC Licenses** and two public domain tools, setting a global standard for sharing.\n\n### **We've estimated that over 2.5 billion pieces of content were CC Licensed by the end of 2023.**\n\n\"The great growling engine of change - technology. Alvin Toffler\" by katerha is licensed under CC BY 2.0. 
Our legal and technology staff continued to make key infrastructure updates and manage daily maintenance to ensure these Licenses work for everyone.\n\n### **In 2023, we launched the Open Infrastructure Circle (OIC) to ensure consistent funding for this work.**\n\nWe're grateful to the early supporters of the OIC, including the William + Flora Hewlett Foundation, Bill & Melinda Gates Foundation, Filecoin Foundation for the Decentralized Web, Robert Wood Johnson Foundation, Chan Zuckerberg Initiative, Endless, Siegel Family Endowment, Flickr, Microsoft, and Paul and Iris Brest.", - "page_start": 3, - "page_end": 3, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "April 2024", - "page_start": 0, - "page_end": 0, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# Notes to Consolidated Financial Statements June 30, 2024 and 2023\n\n#### **(4) Property and Equipment, Net**\n\nProperty and equipment at June 30, 2024 and 2023 consist of the following:\n\n| | | 2024 | 2023 |\n| --- | --- | --- | --- |\n| Furniture | $ | 72,042 | 737,143 |\n| Leasehold improvements | | — | 2,074,581 |\n| Computer equipment | | 22,821,120 | 21,941,684 |\n| Internal use software | | 2,507,701 | 5,198,574 |\n| Total | | 25,400,863 | 29,951,982 |\n| Less accumulated depreciation and amortization | | (13,574,727) | (15,906,843) |\n| Property and equipment, net | $ | 11,826,136 | 14,045,139 |\n\n#### **(5) Net Assets**\n\nNet assets with donor restrictions at June 30, 2024 and 2023 are available for the following purposes:\n\n| | | 2024 | 2023 |\n| --- | --- | --- | --- |\n| Restricted to future periods: | $ | 50,000 | 100,000 |\n| Restricted by purpose: | | | |\n| Abstract Wikipedia | | 861,008 | 1,249,004 |\n| Artificial intelligence | | 239,878 | — |\n| Endowment support | | — | 1,297,620 |\n| Future Audiences | | 500,000 | — |\n| Knowledge equity | | 965,910 | 2,228,134 |\n| Machine learning | | 24,528 | 860,620 |\n| Media Wiki | | 1,500,000 | — 
|\n| Other | | 125,000 | 147,295 |\n| Restricted to future periods and by purpose: | | | |\n| Artificial intelligence | | 1,430,000 | — |\n| Net assets with donor restrictions | $ | 5,696,324 | 5,782,673 |\n\n#### **(6) Functional Allocation of Expenses**\n\nCosts of providing the Foundation's activities have been summarized below on a functional basis. Programs comprise various initiatives that focus on (1) building the technological and operating platform that enables the Foundation to function sustainably as a top global internet organization, (2) strengthening, growing, and increasing diversity of the Wikimedia communities, and (3) accelerating impact by investing in key geographic areas, mobile application development, and bottom-up innovation, all of which support Wikipedia and other wiki-based projects. This also includes costs related to the Wikimedia Endowment for which the Foundation is reimbursed. The allocation between programs, general and administrative, and fundraising expenses is based on personnel and related costs and other operating expenses such as rent and office expenses using estimates of time spent or percentage of utilization by headcounts, as well as", - "page_start": 15, - "page_end": 15, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements June 30, 2024 and 2023\n\nGifts of cash and other assets are reported as contributions with donor restrictions if they are received with donor stipulations that limit the use of the donated assets or are restricted as to time. 
When a donor restriction expires, that is, when a stipulated time restriction ends or purpose restriction is accomplished, net assets with donor restrictions are reclassified to net assets without donor restrictions and reported in the consolidated statement of activities as net assets released from restrictions.\n\n#### *(l) Contributions of Nonfinancial Assets and Services*\n\nContributions of nonfinancial assets and services include contributed services, as described below.\n\nContributed services are reported at fair value in the consolidated financial statements for voluntary donations of services when those services (1) create or enhance nonfinancial assets, (2) require specialized skills provided by individuals possessing those skills and are services that would be typically purchased if not provided by the donation, and (3) are professional in nature, and have been explicitly agreed to in advance. Contributed services are reported as contributions of nonfinancial assets and services revenue and in-kind service expenses in the consolidated statements of activities. Fair value is estimated based on current local rates for similar services.\n\nA substantial number of volunteers make significant contributions of their time in the furtherance of the Foundation's projects. The value of this contributed time is not reflected in the accompanying consolidated financial statements, as the criteria above are not met.\n\nContributed service revenue and expenses recorded in the consolidated statements of activities consist of contributed legal services, engineering services, subscription services, and internet hosting services and bandwidth. The amounts of specialized contributed legal services as revenue and expenses are $82,638 and $493,315 for the years ended June 30, 2024 and 2023, respectively. The value of specialized engineering services as revenue and expenses are $0 and $498,800 for the years ended June 30, 2024 and 2023, respectively. 
The value of donated subscription services as revenue and expenses was $124,738 and $0 for the years ended June 30, 2024 and 2023, respectively. The amounts of contributed internet hosting services and bandwidth for the years ended June 30, 2024 and 2023 is $56,100 and $48,338, respectively. Included in the 2024 and 2023 amounts are donated hosting services and bandwidth from the following companies: (1) FiberRing, (2) Tele2, (3) Datahop, (4) LibertyGlobal, (5) Init7, and (6) Arelion.\n\n#### *(m) Revenue Recognition – Contracts With Customers*\n\nThe Foundation recognizes revenue from contracts with customers related to Wikimedia, LLC under Accounting Standards Codification Topic 606, Revenue from Contracts with Customers, which establishes a principle that revenue is recognized upon transfer of control of promised products and services to customers in an amount that reflects the consideration the Foundation expects to receive in exchange for those products or services.\n\nThe Foundation determines the amount of revenue to be recognized through the application of the following 5-step process: 1) identification of the contract, or contracts, with a customer; 2) identification of the performance obligations in the contract; 3) determination of the transaction price; 4) allocation of the transaction price to the performance obligations in the contract; and 5) recognition of revenue when or as the Foundation satisfies the performance obligations.", - "page_start": 10, - "page_end": 10, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements June 30, 2024 and 2023\n\nand free to everyone in the world, the Foundation's cost related to this collaborative arrangement is included within awards and grants in the statement of activities. 
The amount included within awards and grants was $6.1 million and $4.1 million for the years ended June 30, 2024 and 2023, respectively.\n\n#### *(p) Use of Estimates*\n\nThe preparation of financial statements in conformity with U.S. generally accepted accounting principles requires management to make estimates and assumptions that affect the amounts reported in the consolidated financial statements and accompanying notes. Items subject to such estimates and assumptions include the investment valuations, useful lives of fixed assets, and the valuation of contributed services. Accordingly, actual results could differ from those estimates.\n\n#### *(q) Reclassifications*\n\nCertain reclassifications have been made in the financial statements to conform 2023 information to the 2024 presentation. The Foundation had a change in accounting policy to present unrealized gains and losses on investments separately from investment income, net. This resulted in a reclassification of $3,547,510 from investment income, net to unrealized gains on investments within the statement of activities. The Foundation also had a change in accounting policy to no longer present the Wikimania event as special event expense, net in the statement of activities. Revenue from registration sales is now reported within other income, net, and expenses are reported within travel and conference expenses. This resulted in a reclassification of $698,141 from special event expenses to travel and conference expenses in the statement of activities.\n\n#### **(2) Contributions Receivable**\n\nAs of June 30, 2024 and 2023, contributions receivable is $1,571,657 and $0, respectively, and represents contributions receivable from two grants, as well as contributions receivable from payment processors.", - "page_start": 12, - "page_end": 12, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "- 282. 
Arguments that AI is not an imminent risk: Brooks (2014), Geist (2015), Madrigal (2015), Lee (2014)\n- 283. Christian (2020), pp. 67, 73.\n- 284. Yudkowsky (2008).\n- 285. Anderson & Anderson (2011).\n- 286. AAAI (2014).\n- 287. Wallach (2010).\n- 288. Russell (2019), p. 173.\n- 289. Stewart, Ashley; Melton, Monica. \"Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup\" (https://www.businessinsider. com/hugging-face-open-source-ai-approach-2023-12). *Business Insider*. Archived (https://w eb.archive.org/web/20240925013220/https://www.businessinsider.com/hugging-face-open-s ource-ai-approach-2023-12) from the original on 25 September 2024. Retrieved 14 April 2024.\n- 290. Wiggers, Kyle (9 April 2024). \"Google open sources tools to support AI model development\" (https://techcrunch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-develop ment). *TechCrunch*. Archived (https://web.archive.org/web/20240910112401/https://techcrun ch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-development/) from the original on 10 September 2024. Retrieved 14 April 2024.\n- 291. Heaven, Will Douglas (12 May 2023). \"The open-source AI boom is built on Big Tech's handouts. How long will it last?\" (https://www.technologyreview.com/2023/05/12/1072950/op en-source-ai-google-openai-eleuther-meta). *MIT Technology Review*. Retrieved 14 April 2024.\n- 292. Brodsky, Sascha (19 December 2023). \"Mistral AI's New Language Model Aims for Open Source Supremacy\" (https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-o pen-source-supremacy). *AI Business*. Archived (https://web.archive.org/web/202409052126 07/https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-open-source-supre macy) from the original on 5 September 2024. Retrieved 5 October 2024.\n- 293. Edwards, Benj (22 February 2024). 
\"Stability announces Stable Diffusion 3, a next-gen AI image generator\" (https://arstechnica.com/information-technology/2024/02/stability-announc es-stable-diffusion-3-a-next-gen-ai-image-generator). *Ars Technica*. Archived (https://web.ar chive.org/web/20241005170201/https://arstechnica.com/information-technology/2024/02/sta bility-announces-stable-diffusion-3-a-next-gen-ai-image-generator/) from the original on 5 October 2024. Retrieved 14 April 2024.\n- 294. Marshall, Matt (29 January 2024). \"How enterprises are using open source LLMs: 16 examples\" (https://venturebeat.com/ai/how-enterprises-are-using-open-source-llms-16-exa mples). *VentureBeat*. Archived (https://web.archive.org/web/20240926171131/https://ventur ebeat.com/ai/how-enterprises-are-using-open-source-llms-16-examples/) from the original on 26 September 2024. Retrieved 5 October 2024.\n- 295. Piper, Kelsey (2 February 2024). \"Should we make our most powerful AI models open source to all?\" (https://www.vox.com/future-perfect/2024/2/2/24058484/open-source-artificial -intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake). *Vox*. Archived (https://web.archi ve.org/web/20241005170204/https://www.vox.com/future-perfect/2024/2/2/24058484/open-s ource-artificial-intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake) from the original on 5 October 2024. Retrieved 14 April 2024.\n- 296. Alan Turing Institute (2019). \"Understanding artificial intelligence ethics and safety\" (https:// www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and _safety.pdf) (PDF). Archived (https://web.archive.org/web/20240911131935/https://www.turi ng.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety. pdf) (PDF) from the original on 11 September 2024. 
Retrieved 5 October 2024.", - "page_start": 45, - "page_end": 45, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf", - "query": "What external events can affect Wikimedia Fundation in raising funds ?", - "target_page": 8, - "target_passage": "External factors such as global geopolitics, recession, and currency markets may impact our ability to raise funds.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nThe Foundation also receives donations on behalf of the Wikimedia Endowment as well as transfers additional Foundation donations to the Endowment monthly. Donations that are donor-specified for the Wikimedia Endowment are not recognized as revenue to the Foundation, whereas donations that are not donor-specified for the Wikimedia Endowment are recognized both as contributions revenue and awards and grants expense to the Foundation. The Foundation transferred $10,706,812 donor-designated gifts and $624,137 Foundation gifts to the Wikimedia Endowment during the year ended June 30, 2024. As of June 30, 2024, the Foundation owed the Wikimedia Endowment $525,607 for donations to be transferred to the Wikimedia Endowment for the month of June 2024.\n\nDuring the fiscal year ended June 30, 2024, the Wikimedia Endowment also provided the Foundation with grants of $1,500,000 for MediaWiki improvements, $600,000 for the Abstract Wikipedia project, and $500,000 for exploring strategies for expanding beyond the Foundation's existing audiences of consumers and contributors. The grants are recorded as contributions with donor restrictions and within net assets with donor restrictions as of June 30, 2024.\n\n#### **(11) Contingencies and Commitments**\n\nIn the normal course of business, the Foundation receives various threats of litigation. 
In the opinion of management, the outcome of the pending lawsuits will not materially affect operations or the financial position of the Foundation.\n\n#### **(12) Subsequent Events**\n\nThe Foundation has evaluated its subsequent events through October 8, 2024, the date at which the consolidated financial statements were available to be issued, and determined there are no items to disclose.", - "page_start": 19, - "page_end": 19, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "# Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\n# **(9) Liquidity and Availability of Financial Assets**\n\nThe Foundation's financial assets available for general expenditure within one year of the balance sheet date, June 30, 2024 and 2023, are as follows:\n\n| date, June 30, 2024 and 2023, are as follows: | | | |\n| --- | --- | --- | --- |\n| | | 2024 | 2023 |\n| Cash and cash equivalents | $ | 82,845,159 | 75,808,401 |\n| Current contributions receivable | | 856,657 | — |\n| Short-term investments | | 116,074,763 | 132,216,667 |\n| Total financial assets | | 199,776,579 | 208,025,068 |\n| Less: | | | |\n| Restricted by donors for programs | | 5,696,323 | 5,882,673 |\n| Donations payable to Wikimedia Endowment | | 525,607 | 5,274,448 |\n| Financial assets available to meet cash needs for | | | |\n| general expenditures within one year | $ | 193,554,649 | 196,867,947 |\n\nThe Foundation's liquidity management includes a policy of structuring its financial assets to be available to meet its general expenditures, liabilities, grant-making, and other obligations as they come due. Cash and cash equivalents as reported on the consolidated balance sheet at June 30, 2024 and 2023, are the primary liquid resources used by the Foundation to meet these obligations. 
Financial assets invested in the short-term and long-term investments can be liquidated at any time as needed.\n\n# **(10) Related Party Transactions**\n\nThe Wikimedia Endowment began operations as a standalone tax-exempt 501(c)(3) organization on September 30, 2023, with the mission to act as a permanent fund that can support in perpetuity the operations and activities of current and future Wikimedia projects, which are projects that are approved by and advance the purposes of the Foundation or its successor if the Foundation ceases to exist. The Foundation does not have control or controlling financial interest in the Wikimedia Endowment and the Wikimedia Endowment has a separate Board of Directors, but the Wikimedia Endowment is considered a related party to the Foundation because Wikimedia Endowment management is also management at the Foundation.\n\nDuring the fiscal year ended June 30, 2024, the Foundation recognized revenue of $2,063,195 related to services provided to the Wikimedia Endowment, primarily for fundraising and general and administrative support under the terms of a cost sharing agreement. These costs are included within the Foundation's expenses based on the nature of the cost. The revenue from the Wikimedia Endowment reimbursing the costs is recorded within other income, net.", - "page_start": 18, - "page_end": 18, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements June 30, 2024 and 2023\n\n#### **(1) Organization and Summary of Significant Accounting Policies**\n\n#### *(a) Organization and Purpose*\n\nThe accompanying consolidated financial statements present the financial position, change in net assets and cash flows of the Wikimedia Foundation, Inc. (the Foundation) and Wikimedia, LLC.\n\nThe Foundation is the nonprofit organization that operates Wikipedia, a free online encyclopedia. 
Based in San Francisco, California, the Foundation is a 501(c)(3) charity that is funded primarily through donations and contributions.\n\nThe Foundation also operates Wikimedia, LLC, a Delaware Limited Liability Company, with the Foundation as its Sole Member. The Wikimedia, LLC is organized and operated exclusively for charitable and educational purposes within the meaning of section 501(c)(3) of the Internal Revenue Code and is a disregarded entity for tax purposes.\n\n#### *(b) Risks and Uncertainties*\n\nThe Foundation's operations are funded primarily by public donations from individuals as well as gifts from foundations and corporations. External factors such as global geopolitics, recession, and currency markets may impact our ability to raise funds. As of the date of this report, the Foundation has not experienced an adverse impact on its business operations.\n\n#### *(c) Income Taxes*\n\nThe Foundation is exempt from federal income tax under Section 501(c)(3) of the Internal Revenue Code and from state income tax under Chapter 220.13 of the Florida Statutes and Sections 23701d of Revenue and Taxation Code of the State of California. The Internal Revenue Service has determined that the Foundation is not a private foundation and contributions to it qualify as charitable contributions.\n\nThe Foundation has evaluated the financial statement impact of positions taken or expected to be taken in its tax returns. The Foundation is subject to income taxes on any net income that is derived from a trade or business, regularly carried on, and not in furtherance of the purposes for which it was granted exemption. 
Net income from any unrelated trade or business, in the opinion of management, is not material to the consolidated financial statements taken as a whole.\n\n#### *(d) Financial Statement Presentation*\n\nNet assets, support and revenue, expenses, gains, and losses are classified based on the existence or absence of donor-imposed restrictions in accordance with Accounting Standards Codification (ASC) Topic 958, *Not-for-Profit Entities*.\n\nNet assets without donor restrictions represent unrestricted resources available to support operations and also include previously temporarily restricted resources, which have become available for use by the Foundation in accordance with the intentions of donors.\n\nNet assets with donor restrictions represent contributions that are limited in use by the Foundation in accordance with donor-imposed stipulations. The stipulations may expire with time or may be satisfied and removed by the actions of the Foundation according to the terms of the contribution by the donor.", - "page_start": 7, - "page_end": 7, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nOnce such stipulations are satisfied, the associated net assets are released from net assets with donor restrictions and recognized as net assets without donor restrictions.\n\nContributions received are recorded as net assets without donor restriction or net assets with donor restrictions depending on the existence and/or nature of any donor restrictions.\n\n## *(e) Cash and Cash Equivalents*\n\nThe Foundation manages its cash through major financial institutions. At June 30, 2024 and 2023, the carrying amount of the Foundation's general ledger cash held primarily in nationally recognized financial institutions is $60.0 million and $63.9 million, respectively. 
Cash balances are insured by the Federal Deposit Insurance Corporation (FDIC) up to the applicable limits. Cash balances held in these financial institutions at June 30, 2024 and 2023 exceed the applicable FDIC insurance limits. The Foundation's current practice is to maintain at least four months of cash and cash equivalents to support a combination of operating cash and a current reserve fund. The Foundation considers all highly liquid investments with an original maturity of three months or less when purchased to be cash equivalents. Cash equivalents of $22.8 million and $12.0 million as of June 30, 2024 and 2023, respectively, are considered Level 1 under ASC Topic 820, *Fair Value Measurement*.\n\n#### *(f) Restricted Cash*\n\nRestricted cash includes standby letters of credit for (1) the Foundation's headquarters office lease and (2) one of the Foundation's Employer of Record responsible for administering compensation and benefits for non-US personnel. As of June 30, 2024, neither letter of credit has been used.\n\n#### *(g) Contributions Receivable*\n\nContributions receivable represent gift amounts due from various entities, which are occasionally directed at specific activities. Contributions receivable due more than one year from the contribution date are discounted to present value using a fair value rate based on the U.S. Treasury bond rate and reflect the risks inherent in these cash flows. Contributions receivable are subject to review and adjustment by management should amounts be deemed uncollectible.\n\n#### *(h) Investments*\n\nThe Foundation's policy regarding investments is to invest cash in short-term, intermediate-term, and long-term fixed income, and equity instruments without assuming material undue risk to principal. Preservation of principal and maintenance of liquidity are priorities over yield. 
Investments are reported at fair value with realized and unrealized gains and losses, and accrued interest included as a component of the change in net assets. Additionally, the Foundation holds no shares of donated stock as of June 30, 2024 or 2023, consistent with its policy to sell stock received through donations as soon as possible.\n\nThe Foundation presents its investment portfolios as short-term and long-term based on expectations of the holding period of the investment in line with the investment guidelines stipulated in the investment policy.\n\nASC Topic 820 establishes a fair value hierarchy that prioritizes observable inputs to valuation techniques used to measure fair value. The hierarchy gives the highest priority to unadjusted quoted", - "page_start": 8, - "page_end": 8, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements June 30, 2024 and 2023\n\nand free to everyone in the world, the Foundation's cost related to this collaborative arrangement is included within awards and grants in the statement of activities. The amount included within awards and grants was $6.1 million and $4.1 million for the years ended June 30, 2024 and 2023, respectively.\n\n#### *(p) Use of Estimates*\n\nThe preparation of financial statements in conformity with U.S. generally accepted accounting principles requires management to make estimates and assumptions that affect the amounts reported in the consolidated financial statements and accompanying notes. Items subject to such estimates and assumptions include the investment valuations, useful lives of fixed assets, and the valuation of contributed services. Accordingly, actual results could differ from those estimates.\n\n#### *(q) Reclassifications*\n\nCertain reclassifications have been made in the financial statements to conform 2023 information to the 2024 presentation. 
The Foundation had a change in accounting policy to present unrealized gains and losses on investments separately from investment income, net. This resulted in a reclassification of $3,547,510 from investment income, net to unrealized gains on investments within the statement of activities. The Foundation also had a change in accounting policy to no longer present the Wikimania event as special event expense, net in the statement of activities. Revenue from registration sales is now reported within other income, net, and expenses are reported within travel and conference expenses. This resulted in a reclassification of $698,141 from special event expenses to travel and conference expenses in the statement of activities.\n\n#### **(2) Contributions Receivable**\n\nAs of June 30, 2024 and 2023, contributions receivable is $1,571,657 and $0, respectively, and represents contributions receivable from two grants, as well as contributions receivable from payment processors.", - "page_start": 12, - "page_end": 12, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements June 30, 2024 and 2023\n\nprices in active markets for identical assets or liabilities (Level 1 measurements) and the lowest priority to measurements involving significant unobservable inputs (Level 3 measurements).\n\nThe three levels of the fair value hierarchy are as follows:\n\n- Level 1 inputs are quoted prices (unadjusted) in active markets for identical investments that the Foundation has the ability to access at the measurement date. The Foundation's Level 1 assets are investments in marketable securities, including stocks and mutual funds.\n- Level 2 inputs are inputs other than quoted prices included in Level 1 that are observable for the investment, either directly or indirectly. The Foundation's Level 2 assets are investments in corporate bonds, mortgage-backed securities, and U.S. 
Treasury securities.\n- Level 3 inputs are unobservable inputs from investments. Level 3 inputs incorporate assumptions about the factors that market participants would use in pricing the instrument.\n\n#### *(i) Property and Equipment, Net*\n\nExpenditures for property and equipment with useful lives of one year or more are capitalized and recorded at cost. Depreciation is calculated on a straight-line basis over the estimated useful lives of the assets. The estimated useful life of furniture and data center equipment is five years and computer equipment such as laptops and desktops is four years. Leasehold improvements are amortized over the shorter of the life of the lease or the leasehold improvement. Donated computer equipment and software are recorded at the fair value at the time of the donation and are deemed as contributions without donor restriction in the year in which they are received. Repairs and maintenance of equipment are charged to operations. Upon retirement, sale, or other disposition of property and equipment, costs, and accumulated depreciation are eliminated from the accounts, and any resulting gain or loss is included in operations.\n\nThe Foundation incurs software development costs related to internal use software. Qualifying costs incurred during the application development stage are capitalized. These costs primarily consist of internal labor and third-party development costs and are amortized using the straight-line method over the estimated useful life of the software, which is generally three years. These assets are reviewed for impairment whenever events or changes in circumstances occur that could impact their recoverability. 
External use software is expensed as incurred since there is generally no passage of time between achievement of technological feasibility and the availability for general release.\n\n#### *(j) Other Operating Expenses*\n\nOther operating expenses primarily include facility expenses, staff related expenses, insurance and personal property tax expenses, and other general administrative expenses.\n\n#### *(k) Contributions of Cash and Other Financial Assets*\n\nUnconditional promises to give are recognized as revenue when the underlying promises are received by the Foundation. Contributions that are conditional are not recorded until the condition is substantially met. Conditional contributions must include both (1) one or more barriers that need to be overcome before the Foundation is entitled to the contribution, and (2) a right of return or a right of release from the donor's obligation to provide the contribution.", - "page_start": 9, - "page_end": 9, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements June 30, 2024 and 2023\n\nGifts of cash and other assets are reported as contributions with donor restrictions if they are received with donor stipulations that limit the use of the donated assets or are restricted as to time. 
When a donor restriction expires, that is, when a stipulated time restriction ends or purpose restriction is accomplished, net assets with donor restrictions are reclassified to net assets without donor restrictions and reported in the consolidated statement of activities as net assets released from restrictions.\n\n#### *(l) Contributions of Nonfinancial Assets and Services*\n\nContributions of nonfinancial assets and services include contributed services, as described below.\n\nContributed services are reported at fair value in the consolidated financial statements for voluntary donations of services when those services (1) create or enhance nonfinancial assets, (2) require specialized skills provided by individuals possessing those skills and are services that would be typically purchased if not provided by the donation, and (3) are professional in nature, and have been explicitly agreed to in advance. Contributed services are reported as contributions of nonfinancial assets and services revenue and in-kind service expenses in the consolidated statements of activities. Fair value is estimated based on current local rates for similar services.\n\nA substantial number of volunteers make significant contributions of their time in the furtherance of the Foundation's projects. The value of this contributed time is not reflected in the accompanying consolidated financial statements, as the criteria above are not met.\n\nContributed service revenue and expenses recorded in the consolidated statements of activities consist of contributed legal services, engineering services, subscription services, and internet hosting services and bandwidth. The amounts of specialized contributed legal services as revenue and expenses are $82,638 and $493,315 for the years ended June 30, 2024 and 2023, respectively. The value of specialized engineering services as revenue and expenses are $0 and $498,800 for the years ended June 30, 2024 and 2023, respectively. 
The value of donated subscription services as revenue and expenses was $124,738 and $0 for the years ended June 30, 2024 and 2023, respectively. The amounts of contributed internet hosting services and bandwidth for the years ended June 30, 2024 and 2023 is $56,100 and $48,338, respectively. Included in the 2024 and 2023 amounts are donated hosting services and bandwidth from the following companies: (1) FiberRing, (2) Tele2, (3) Datahop, (4) LibertyGlobal, (5) Init7, and (6) Arelion.\n\n#### *(m) Revenue Recognition – Contracts With Customers*\n\nThe Foundation recognizes revenue from contracts with customers related to Wikimedia, LLC under Accounting Standards Codification Topic 606, Revenue from Contracts with Customers, which establishes a principle that revenue is recognized upon transfer of control of promised products and services to customers in an amount that reflects the consideration the Foundation expects to receive in exchange for those products or services.\n\nThe Foundation determines the amount of revenue to be recognized through the application of the following 5-step process: 1) identification of the contract, or contracts, with a customer; 2) identification of the performance obligations in the contract; 3) determination of the transaction price; 4) allocation of the transaction price to the performance obligations in the contract; and 5) recognition of revenue when or as the Foundation satisfies the performance obligations.", - "page_start": 10, - "page_end": 10, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except share and per share amounts)*\n\n## **5. 
Investment Properties (continued)**\n\nFor year ended December 31, 2013, interest costs associated with the general corporate borrowings used to fund development have been capitalized to the respective development using the Company's weighted average borrowing rate of 4.38% (December 31, 2012 ‑ 4.50%). Interest costs associated with construction loans are capitalized to the respective development using the actual borrowing rate associated with the loan.\n\nInvestment properties with a fair value of $1,432,731 at December 31, 2013, (December 31, 2012 ‑ $1,294,317) are pledged as collateral against the Company's mortgages payable.\n\n#### *Valuation Process*\n\nThe management group that determines the Company's valuation policies and procedures for investment property valuations comprises the Chief Executive Officer (\"CEO\") and Chief Financial Officer (\"CFO\"). Each year, the CEO and CFO decide which external valuator to appoint to be responsible for the external valuations of the Company's properties. Selection criteria include market knowledge, reputation, independence and whether professional standards are maintained.\n\nThe CEO and CFO decide each quarter after consultation with the Company's external valuator and the Company's finance department:\n\n• whether a property's fair value can be reliably determined (IPUC are valued at cost until such time as fair value becomes reliably determinable);\n\n- which valuation method should be applied for each property; and\n- the assumptions made for unobservable inputs that are used in the valuation methods.\n\nValuations are performed on a quarterly basis at each interim reporting date. Valuations for interim reporting purposes are prepared internally by the Company's finance department using cap‑rates provided by the Company's external valuator. 
On an annual basis the Company obtains full valuation reports from an external valuator for approximately 20% of its investment property portfolio, and therefore every property is externally valued at least once every five years.\n\nAt each reporting date, the finance department analyses the movement in each property's value. For this analysis, the finance department verifies the major inputs applied in the latest valuation by referencing supporting information in the calculation to market reports and other relevant documents. For each property, the latest valuation is also compared with the valuations in the preceding quarter. If the fair value change (positive or negative) is more than 5%, the change is further analyzed to ensure reasonability, as well as absence of expected changes.\n\nOn a quarterly basis, the finance department discusses assumptions used in the valuations, with an emphasis on: (i) properties with fair value changes outside of the relevant threshold set out above; and (ii) IPUC.\n\nThe following table presents the following for each class of investment property:\n\n- the level of the fair value hierarchy;\n- the carrying amount or fair value of the investment property;\n- a description of the valuation technique; and\n- for Level 3 fair value measurements, quantitative information about significant unobservable inputs.\n\n| Class of | Fair value at | Fair value at | Valuation | Unobservable inputs | 2013 | 2012 |\n| --- | --- | --- | --- | --- | --- | --- |\n| property | December 31, | December 31, | technique | | Inputs | Inputs |\n| | 2013 | 2012 | | | | |\n| Apartments | | | Income | ‑ Capitalization rate (weighted average) | 5.88% | 6.02% |\n| ‑Level 3 | $1,334,153 | $1,126,189 | capitalization | ‑ Vacancy rate (weighted average) | 3.50% | 3.50% |\n| | | | approach | ‑ Management fee rate | 3.50% | 3.50% |\n| MHCs | | | Income | ‑ Capitalization rate (weighted average) | 6.86% | 7.04% |\n| ‑Level 3 | $115,414 | $168,401 | capitalization | ‑ 
Vacancy rate | 1.70% | 1.70% |\n| | | | approach | ‑ Management fee rate | 3.00% | 3.00% |", - "page_start": 78, - "page_end": 78, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "#### Table of Contents\n\nOur provision for income taxes increased by $434 million in the three months ended September 30, 2024 and increased by $652 million in the nine months ended September 30, 2024 as compared to the three and nine months ended September 30, 2023, respectively. Our effective tax rate increased from 8% to 22% in the three months ended September 30, 2024 and increased from 10% to 23% in the nine months ended September 30, 2024 as compared to the three and nine months ended September 30, 2023, respectively. These increases are primarily due to the impact of releasing the valuation allowance on our U.S. deferred tax assets in the fourth quarter of 2023 and changes in mix of jurisdictional earnings.\n\nSee Note 9, Income Taxes, to the consolidated financial statements included elsewhere in this Quarterly Report on Form 10-Q for further details.\n\n#### Liquidity and Capital Resources\n\nWe expect to continue to generate net positive operating cash flow as we have done in the last five fiscal years. The cash we generate from our core operations enables us to fund ongoing operations and production, our research and development projects for new products and technologies including our proprietary battery cells, additional manufacturing ramps at existing manufacturing facilities, the construction of future factories, and the continued expansion of our retail and service locations, body shops, Mobile Service fleet, Supercharger, including to support NACS, energy product installation capabilities and autonomy and other artificial intelligence enabled products.\n\nIn addition, because a large portion of our future expenditures will be to fund our growth, we expect that if needed we will be able to adjust our capital and operating expenditures by operating segment. 
For example, if our near-term manufacturing operations decrease in scale or ramp more slowly than expected, including due to global economic or business conditions, we may choose to correspondingly slow the pace of our capital expenditures. Finally, we continually evaluate our cash needs and may decide it is best to raise additional capital or seek alternative financing sources to fund the rapid growth of our business, including through drawdowns on existing or new debt facilities or financing funds. Conversely, we may also from time to time determine that it is in our best interests to voluntarily repay certain indebtedness early.\n\nAccordingly, we believe that our current sources of funds will provide us with adequate liquidity during the 12-month period following September 30, 2024, as well as in the long-term.\n\nSee the sections below for more details regarding the material requirements for cash in our business and our sources of liquidity to meet such needs.\n\n#### Material Cash Requirements\n\nFrom time to time in the ordinary course of business, we enter into agreements with vendors for the purchase of components and raw materials to be used in the manufacture of our products. 
However, due to contractual terms, variability in the precise growth curves of our development and production ramps, and opportunities to renegotiate pricing, we generally do not have binding and enforceable purchase orders under such contracts beyond the short-term, and the timing and magnitude of purchase orders beyond such period is difficult to accurately project.\n\nAs discussed in and subject to the considerations referenced in Part I, Item 2, Management's Discussion and Analysis of Financial Condition and Results of Operations—Management Opportunities, Challenges and Uncertainties and 2024 Outlook —Cash Flow and Capital Expenditure Trends in this Quarterly Report on Form 10-Q, we currently expect our capital expenditures to support our projects globally to exceed $11.00 billion in 2024 and be between $8.00 to $10.00 billion in each of the following two fiscal years. We also have certain obligations in connection with our operations at Gigafactory New York and Gigafactory Shanghai, as outlined in Part II, Item 7, Management's Discussion and Analysis of Financial Condition and Results of Operations—Liquidity and Capital Resources—Material Cash Requirements in our Annual Report on Form 10-K for the year ended December 31, 2023.\n\nAs of September 30, 2024, we and our subsidiaries had outstanding $7.42 billion in aggregate principal amount of indebtedness, of which $2.12 billion is current. 
For details regarding our indebtedness, refer to Note 7, Debt, to the consolidated financial statements included elsewhere in this Quarterly Report on Form 10-Q.\n\n#### Sources and Conditions of Liquidity\n\nOur sources to fund our material cash requirements are predominantly from our deliveries and servicing of new and used vehicles, sales and installations of our energy storage products, interest income, and proceeds from debt facilities and equity offerings, when applicable.", - "page_start": 42, - "page_end": 42, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "(4) The deposit of any moneys forming part of the Consolidated Fund with a bank or with the Crown Agents for Overseas Governments and Administrations or the investment of any such moneys in securities in which, under the law for the time being in force in Botswana, trustees are authorized to invest, or the making of advances to such extent and in such circumstances as may be prescribed by Parliament, shall not be regarded as a withdrawal of those moneys from the Fund for the purposes of this section.\n\n## **119. 
Authorization of expenditure**\n\n(1) The Minister for the time being responsible for finance shall cause to be prepared and laid before the National Assembly, before or not later than 30 days after the commencement of each financial year, estimates of the revenues and expenditure of Botswana for that year.\n\n(2) The organisations of expenditure contained in the estimates for a financial year (other than expenditure charged upon the Consolidated Fund by this Constitution or any other law) shall be included in a Bill to be known as an Appropriation Bill which shall be introduced into the Assembly to provide for the issue from the Consolidated Fund of the sums necessary to meet that expenditure and the appropriation of those sums for the purposes specified in the said Bill.\n\n(3) If in any financial year it is found-\n\n- (a) that the amount appropriated by the Appropriation Act for the purposes included in any organisation of expenditure is insufficient or that a need has arisen for expenditure for a purpose for which no amount has been appropriated by the Appropriation Act; or\n- (b) that any moneys have been expended on any organisation of expenditure in excess of the amount appropriated for the purposes included in that organisation by the Appropriation Act or for a purpose for which no amount has been appropriated by the Appropriation Act,\n\na supplementary estimate showing the sums required or spent shall be laid before the National Assembly and the organisations of expenditure shall be included in a supplementary Appropriation Bill, or in a motion or motions approving such expenditure, which shall be introduced or moved in the Assembly.\n\n(4) Where any supplementary expenditure has been approved in a financial year by a resolution of the National Assembly in accordance with the provisions of subsection (3) of this section, a supplementary Appropriation Bill shall be introduced in the National Assembly, not later than the end of the financial year next 
following, providing for the appropriation of the sums so approved.\n\n## **120. Authorization of expenditure in advance of appropriation**\n\nParliament may make provision under which, if the Appropriation Act in respect of any financial year has not come into operation by the beginning of that financial year, the President may authorize the withdrawal of moneys from the Consolidated Fund for the purpose of meeting expenditure necessary to carry on the services of the Government until the expiration of four months from the beginning of that financial year or the coming into operation of the Appropriation Act, whichever is the earlier.\n\n## **121. Contingencies Fund**\n\n(1) Parliament may make provision for the establishment of a Contingencies Fund and for authorizing the President, if satisfied that there has arisen an urgent and unforeseen need for expenditure for which no other provision exists, to make advances from that Fund to meet that need.\n\n(2) Where any advance is made from the Contingencies Fund, a supplementary estimate shall be laid before the National Assembly as soon as possible for the purpose of replacing the amount so advanced.", - "page_start": 51, - "page_end": 51, - "source_file": "Botswana-constitution.pdf" - } - ] - }, - { - "references": { - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf", - "query": "What does Wikimedia Foundation's restricted cash include?", - "target_page": 9, - "target_passage": "Restricted cash includes standby letters of credit for (1) the Foundation’s headquarters office lease and (2) one of the Foundation’s Employer of Record responsible for administering compensation and benefits for non-US personnel.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "# Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\n# **(9) Liquidity and Availability of Financial Assets**\n\nThe Foundation's financial assets available for general 
expenditure within one year of the balance sheet date, June 30, 2024 and 2023, are as follows:\n\n| date, June 30, 2024 and 2023, are as follows: | | | |\n| --- | --- | --- | --- |\n| | | 2024 | 2023 |\n| Cash and cash equivalents | $ | 82,845,159 | 75,808,401 |\n| Current contributions receivable | | 856,657 | — |\n| Short-term investments | | 116,074,763 | 132,216,667 |\n| Total financial assets | | 199,776,579 | 208,025,068 |\n| Less: | | | |\n| Restricted by donors for programs | | 5,696,323 | 5,882,673 |\n| Donations payable to Wikimedia Endowment | | 525,607 | 5,274,448 |\n| Financial assets available to meet cash needs for | | | |\n| general expenditures within one year | $ | 193,554,649 | 196,867,947 |\n\nThe Foundation's liquidity management includes a policy of structuring its financial assets to be available to meet its general expenditures, liabilities, grant-making, and other obligations as they come due. Cash and cash equivalents as reported on the consolidated balance sheet at June 30, 2024 and 2023, are the primary liquid resources used by the Foundation to meet these obligations. Financial assets invested in the short-term and long-term investments can be liquidated at any time as needed.\n\n# **(10) Related Party Transactions**\n\nThe Wikimedia Endowment began operations as a standalone tax-exempt 501(c)(3) organization on September 30, 2023, with the mission to act as a permanent fund that can support in perpetuity the operations and activities of current and future Wikimedia projects, which are projects that are approved by and advance the purposes of the Foundation or its successor if the Foundation ceases to exist. 
The Foundation does not have control or controlling financial interest in the Wikimedia Endowment and the Wikimedia Endowment has a separate Board of Directors, but the Wikimedia Endowment is considered a related party to the Foundation because Wikimedia Endowment management is also management at the Foundation.\n\nDuring the fiscal year ended June 30, 2024, the Foundation recognized revenue of $2,063,195 related to services provided to the Wikimedia Endowment, primarily for fundraising and general and administrative support under the terms of a cost sharing agreement. These costs are included within the Foundation's expenses based on the nature of the cost. The revenue from the Wikimedia Endowment reimbursing the costs is recorded within other income, net.", - "page_start": 18, - "page_end": 18, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nOnce such stipulations are satisfied, the associated net assets are released from net assets with donor restrictions and recognized as net assets without donor restrictions.\n\nContributions received are recorded as net assets without donor restriction or net assets with donor restrictions depending on the existence and/or nature of any donor restrictions.\n\n## *(e) Cash and Cash Equivalents*\n\nThe Foundation manages its cash through major financial institutions. At June 30, 2024 and 2023, the carrying amount of the Foundation's general ledger cash held primarily in nationally recognized financial institutions is $60.0 million and $63.9 million, respectively. Cash balances are insured by the Federal Deposit Insurance Corporation (FDIC) up to the applicable limits. Cash balances held in these financial institutions at June 30, 2024 and 2023 exceed the applicable FDIC insurance limits. 
The Foundation's current practice is to maintain at least four months of cash and cash equivalents to support a combination of operating cash and a current reserve fund. The Foundation considers all highly liquid investments with an original maturity of three months or less when purchased to be cash equivalents. Cash equivalents of $22.8 million and $12.0 million as of June 30, 2024 and 2023, respectively, are considered Level 1 under ASC Topic 820, *Fair Value Measurement*.\n\n#### *(f) Restricted Cash*\n\nRestricted cash includes standby letters of credit for (1) the Foundation's headquarters office lease and (2) one of the Foundation's Employer of Record responsible for administering compensation and benefits for non-US personnel. As of June 30, 2024, neither letter of credit has been used.\n\n#### *(g) Contributions Receivable*\n\nContributions receivable represent gift amounts due from various entities, which are occasionally directed at specific activities. Contributions receivable due more than one year from the contribution date are discounted to present value using a fair value rate based on the U.S. Treasury bond rate and reflect the risks inherent in these cash flows. Contributions receivable are subject to review and adjustment by management should amounts be deemed uncollectible.\n\n#### *(h) Investments*\n\nThe Foundation's policy regarding investments is to invest cash in short-term, intermediate-term, and long-term fixed income, and equity instruments without assuming material undue risk to principal. Preservation of principal and maintenance of liquidity are priorities over yield. Investments are reported at fair value with realized and unrealized gains and losses, and accrued interest included as a component of the change in net assets. 
Additionally, the Foundation holds no shares of donated stock as of June 30, 2024 or 2023, consistent with its policy to sell stock received through donations as soon as possible.\n\nThe Foundation presents its investment portfolios as short-term and long-term based on expectations of the holding period of the investment in line with the investment guidelines stipulated in the investment policy.\n\nASC Topic 820 establishes a fair value hierarchy that prioritizes observable inputs to valuation techniques used to measure fair value. The hierarchy gives the highest priority to unadjusted quoted", - "page_start": 8, - "page_end": 8, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nThe Foundation also receives donations on behalf of the Wikimedia Endowment as well as transfers additional Foundation donations to the Endowment monthly. Donations that are donor-specified for the Wikimedia Endowment are not recognized as revenue to the Foundation, whereas donations that are not donor-specified for the Wikimedia Endowment are recognized both as contributions revenue and awards and grants expense to the Foundation. The Foundation transferred $10,706,812 donor-designated gifts and $624,137 Foundation gifts to the Wikimedia Endowment during the year ended June 30, 2024. As of June 30, 2024, the Foundation owed the Wikimedia Endowment $525,607 for donations to be transferred to the Wikimedia Endowment for the month of June 2024.\n\nDuring the fiscal year ended June 30, 2024, the Wikimedia Endowment also provided the Foundation with grants of $1,500,000 for MediaWiki improvements, $600,000 for the Abstract Wikipedia project, and $500,000 for exploring strategies for expanding beyond the Foundation's existing audiences of consumers and contributors. 
The grants are recorded as contributions with donor restrictions and within net assets with donor restrictions as of June 30, 2024.\n\n#### **(11) Contingencies and Commitments**\n\nIn the normal course of business, the Foundation receives various threats of litigation. In the opinion of management, the outcome of the pending lawsuits will not materially affect operations or the financial position of the Foundation.\n\n#### **(12) Subsequent Events**\n\nThe Foundation has evaluated its subsequent events through October 8, 2024, the date at which the consolidated financial statements were available to be issued, and determined there are no items to disclose.", - "page_start": 19, - "page_end": 19, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "Notes to Consolidated Financial Statements June 30, 2024 and 2023\n\n#### **(1) Organization and Summary of Significant Accounting Policies**\n\n#### *(a) Organization and Purpose*\n\nThe accompanying consolidated financial statements present the financial position, change in net assets and cash flows of the Wikimedia Foundation, Inc. (the Foundation) and Wikimedia, LLC.\n\nThe Foundation is the nonprofit organization that operates Wikipedia, a free online encyclopedia. Based in San Francisco, California, the Foundation is a 501(c)(3) charity that is funded primarily through donations and contributions.\n\nThe Foundation also operates Wikimedia, LLC, a Delaware Limited Liability Company, with the Foundation as its Sole Member. The Wikimedia, LLC is organized and operated exclusively for charitable and educational purposes within the meaning of section 501(c)(3) of the Internal Revenue Code and is a disregarded entity for tax purposes.\n\n#### *(b) Risks and Uncertainties*\n\nThe Foundation's operations are funded primarily by public donations from individuals as well as gifts from foundations and corporations. 
External factors such as global geopolitics, recession, and currency markets may impact our ability to raise funds. As of the date of this report, the Foundation has not experienced an adverse impact on its business operations.\n\n#### *(c) Income Taxes*\n\nThe Foundation is exempt from federal income tax under Section 501(c)(3) of the Internal Revenue Code and from state income tax under Chapter 220.13 of the Florida Statutes and Sections 23701d of Revenue and Taxation Code of the State of California. The Internal Revenue Service has determined that the Foundation is not a private foundation and contributions to it qualify as charitable contributions.\n\nThe Foundation has evaluated the financial statement impact of positions taken or expected to be taken in its tax returns. The Foundation is subject to income taxes on any net income that is derived from a trade or business, regularly carried on, and not in furtherance of the purposes for which it was granted exemption. Net income from any unrelated trade or business, in the opinion of management, is not material to the consolidated financial statements taken as a whole.\n\n#### *(d) Financial Statement Presentation*\n\nNet assets, support and revenue, expenses, gains, and losses are classified based on the existence or absence of donor-imposed restrictions in accordance with Accounting Standards Codification (ASC) Topic 958, *Not-for-Profit Entities*.\n\nNet assets without donor restrictions represent unrestricted resources available to support operations and also include previously temporarily restricted resources, which have become available for use by the Foundation in accordance with the intentions of donors.\n\nNet assets with donor restrictions represent contributions that are limited in use by the Foundation in accordance with donor-imposed stipulations. 
The stipulations may expire with time or may be satisfied and removed by the actions of the Foundation according to the terms of the contribution by the donor.", - "page_start": 7, - "page_end": 7, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "#### **LIQUIDITY AND CAPITAL RESOURCES**\n\nWe strive to maintain a level of liquidity sufficient to allow us to cover our seasonal cash needs and to maintain appropriate levels of shortterm borrowings. We believe that our operating cash flows, available credit facilities and potential future borrowings are sufficient to finance our cash requirements for the next 12 months and beyond.\n\nOver the long term, we manage our cash and capital structure to maximize shareholder return, maintain our financial position, manage refinancing risk and allow flexibility for strategic initiatives. We regularly assess our debt and leverage levels, capital expenditure requirements, debt service payments, dividend payouts, potential share repurchases and other future investments. We believe that as of January 31, 2015, our existing cash and cash equivalents on-hand of $827, available credit facilities of $800 and potential future operating cash flows and borrowings will be sufficient to fund these scheduled future payments and potential long-term initiatives. Additionally, if an agreement is reached and a transaction is consummated in regards to our credit card receivables, it could result in additional cash flows to further support our capital requirements and strategic initiatives.\n\n#### **Operating Activities**\n\nNet cash provided by operating activities was $1,220 in 2014, $1,320 in 2013 and $1,110 in 2012. The majority of our operating cash inflows are derived from sales. We also receive cash payments for property incentives from developers. 
Our operating cash outflows generally consist of payments to our merchandise vendors (net of vendor allowances), payments to our employees for wages, salaries and other employee benefits and payments to our landlords for rent. Operating cash outflows also include payments for income taxes and interest payments on our short-term and long-term borrowings.\n\nCash provided by operating activities decreased in 2014 compared with 2013, which was primarily due to higher state tax payments made in 2014 compared with 2013, as well as changes in working capital in 2014.\n\nCash provided by operating activities increased in 2013 compared with 2012, resulting from less state tax payments made in 2013 due to additional payments made in 2012 as a result of the 53rd week, along with increased property incentives received from developers and changes in working capital.\n\n#### **Investing Activities**\n\nNet cash used in investing activities was $889 in 2014, $822 in 2013 and $369 in 2012. Our investing cash flows primarily consist of capital expenditures, changes in restricted cash accumulated for debt maturities and changes in credit card receivables associated with cardholder purchases outside of Nordstrom using our Nordstrom Visa credit cards.\n\n#### Capital Expenditures\n\nOur capital expenditures over the last three years totaled $2,177, with $861 in 2014, $803 in 2013 and $513 in 2012. 
Capital expenditures increased in 2014 compared with 2013 primarily due to ongoing store expansion and increased technology investments.\n\nCapital expenditures increased in 2013 compared with 2012 as we continued to make progress executing our customer strategy through increased investments in technology, ecommerce, remodels and new stores, including Nordstrom Rack and our Manhattan full-line store.\n\nThe following table summarizes our store count and square footage activity:\n\n| | | Store count | | | Square footage | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Fiscal year | 2014 | 2013 | 2012 | 2014 | 2013 | 2012 |\n| Total, beginning of year | 260 | 240 | 225 | 26.0 | 25.3 | 24.7 |\n| Store openings: | | | | | | |\n| Nordstrom full-line stores - U.S. | 2 | — | 1 | 0.3 | — | 0.1 |\n| Nordstrom Rack and other stores1 | 29 | 22 | 15 | 1.2 | 0.7 | 0.6 |\n| Stores acquired | 4 | — | — | — | — | |\n| Stores closed | (3) | (2) | (1) | (0.4) | — | (0.1) |\n| Total, end of year | 292 | 260 | 240 | 27.1 | 26.0 | 25.3 |\n\n1 Other stores include Jeffrey boutiques, Trunk Club showrooms, our Nordstrom Canada full-line store and Last Chance.\n\nWe had no store relocations in 2014, compared with one Nordstrom full-line store and two Nordstrom Rack relocations in 2013 and three Nordstrom Rack relocations in 2012. Our 2014 new store openings increased our square footage by 5.5%.\n\nTo date in 2015, we have opened our second full-line store in Canada. We plan to open 27 Nordstrom Rack stores, three additional Nordstrom full-line stores in the U.S. and another full-line store in Canada during 2015. 
Planned net store openings are expected to increase our retail square footage by approximately 6.1%.", - "page_start": 38, - "page_end": 38, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "#### **LIQUIDITY AND CAPITAL RESOURCES**\n\n### **Cash Flows – Summary**\n\n#### Our cash flows consisted of the following:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n| --- | --- | --- | --- |\n| Net cash provided by operations $ 829,247 | | $740,812 | $ 846,546 |\n| Investing cash flows: | | | |\n| Proceeds from the sale of subsidiaries, net | 345,730 | — | — |\n| Capital expenditures | (702,862) | (550,232) | (300,039) |\n| Investments in unconsolidated affiliates | (11,602) | (41,350) | (80,314) |\n| Other | 20,981 | 35,894 | 9,143 |\n| Net cash used in investing activities | (347,753) | (555,688) | (371,210) |\n| Financing cash flows: | | | |\n| Net repayment under bank credit facilities (1,574,489) | | (285,087) | (270,126) |\n| Issuance of long-term debt | 1,528,957 | 600,000 | — |\n| Purchase of treasury stock | (348,895) | (442,864) | (207,590) |\n| Other | 68,455 | (37,284) | 23,231 |\n| Net cash used in financing activities | (325,972) | (165,235) | (454,485) |\n| Net increase in cash and cash equivalents $ 155,522 | | $ 19,889 | $ 20,851 |\n\n#### **Cash Flows – Operating Activities**\n\nTrends in our operating cash flows tend to follow trends in our operating income, excluding non-cash charges, since our business is primarily cash-based. Cash flow from operations in 2004 increased from 2003 due to higher operating income offset by higher tax payments. Cash flow from operations in 2003 decreased from 2002, resulting from the decrease in operating income and higher cash paid for taxes.\n\nAt December 31, 2004 and 2003, we held cash and cash equivalents of $435 million and $280 million, respectively. We require a certain amount of cash on hand to operate our resorts. 
Beyond our cash on hand, we utilize a company-wide cash management system to minimize the amount of cash held in banks. Funds are swept from accounts at our resorts daily into central bank accounts, and excess funds are invested overnight or are used to repay borrowings under our bank credit facilities. Included in cash and cash equivalents at December 31, 2004 is $141 million received from the sale of MGM Grand Australia and still held in Australia, pending clarification of the tax rule for repatriated earnings, as discussed earlier.\n\n#### **Cash Flows – Investing Activities**\n\nThe sale of the Golden Nugget Subsidiaries closed in January 2004 with net proceeds to the Company of $210 million. The sale of MGM Grand Australia closed in July 2004 with net proceeds to the Company of $136 million.\n\nCapital expenditures in 2004 increased over 2003 due to continued spending on major projects at several of our resorts, including:\n\n• The Bellagio expansion completed in December 2004;\n\n• The theatre for *KÀ* at MGM Grand Las Vegas, completed in November 2004.\n\nSpending on these two projects totaled approximately $325 million. Other capital expenditures were made for maintenance capital activities, including room remodel projects at New York-New York and MGM Grand Las Vegas and new restaurant and entertainment amenities at several resorts. Capital expenditures in 2003 were significantly higher than 2002, due largely to major projects at our existing resorts, including projects described above which began in 2003, the *Zumanity* theatre at New York-New York, the Bellagio room remodel and slot technology improvements. 
Capital expenditures in 2002 included general property improvements at our resorts, such as a room remodel project at The Mirage, new restaurant and nightclub development at several of our resorts, and various other remodeling projects.", - "page_start": 36, - "page_end": 36, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "| | 2013 | 2012 |\n| --- | --- | --- |\n| | $'000 | $'000 |\n| 7. Cash and cash equivalents and restricted cash | | |\n| Current | | |\n| Cash on hand | 18 | 17 |\n| Deposits at call | 30,476 | 87,014 |\n| Cash and other bank balances | 30,494 | 87,031 |\n| Other deposits | 2,493 | 3,592 |\n| Total cash and cash equivalents – current | 32,987 | 90,623 |\n| Non-current | | |\n| Restricted cash | 5,474 | – |\n| Total restricted cash – non-current | 5,474 | – |\n\nCash on hand\n\nThese are petty cash balances held by subsidiaries.\n\nDeposits at call\n\nThe deposits at call are bearing floating interest rates and they may be accessed daily.\n\nOther deposits\n\nThis represents restricted cash held on deposit with financial institutions.\n\nRestricted cash\n\nUnder the terms of the loan facilities (see Note 16), the Group is required to maintain a minimum cash balance of US$5 million in respect of Akara.\n\nRisk exposure\n\nThe Group's exposure to interest rate risk and a sensitivity analysis for financial assets and liabilities are disclosed in Note 28.\n\n#### 8. Receivables\n\n| Trade receivables | – | 3,201 |\n| --- | --- | --- |\n| Other debtors | 9,431 | 9,025 |\n| Total receivables | 9,431 | 12,226 |\n\n#### Trade receivables\n\nTrade receivables represent gold sales at the end of the financial year, where payment was yet to be received. 
No trade receivables were past due or impaired as at 30 June 2013 (2012: nil).\n\nOther debtors\n\nOther debtors mainly relate to GST / VAT receivables, advances made for land acquisition and diesel fuel tax credits.\n\n#### Risk exposure\n\nThe Group's exposure to credit and currency is disclosed in Note 28.", - "page_start": 85, - "page_end": 85, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "- *available-for-sale financial assets –* we measure the excess of the cost to acquire the asset (less any impairment loss we have previously recognized), over its current fair value, if any. The difference is reclassified from the available-for-sale reserve in equity to net income.\n#### *Investments in Associates and Joint Arrangements*\n\nAt the end of each reporting period, we assess whether there is objective evidence that impairment exists in our investments in associates and joint arrangements. If objective evidence exists, we compare the carrying amount of the investment to its recoverable amount and recognize the excess over the recoverable amount, if any, as a loss in net income (see *Recognition of Impairment Charge*, below).\n\n#### *Goodwill and Indefinite-Life Intangible Assets*\n\nWe test goodwill and indefinite-life intangible assets for impairment once a year, or more frequently if we identify indicators of impairment. Goodwill is allocated to cash generating units (or groups of cash generating units) based on the level at which management monitors goodwill, which cannot be higher than an operating segment. 
The allocation involves considerable management judgement, and is made to cash generating units (or groups of cash generating units) that are expected to benefit from the synergies of the business combination from which the goodwill arose.\n\nA cash generating unit is the smallest identifiable group of assets that generates cash inflows largely independent of the cash inflows from other assets or groups of assets.\n\n#### *Non-Financial Assets with Finite Useful Lives*\n\nOur non-financial assets with finite useful lives include property, plant and equipment, and intangible assets. We test these assets for impairment whenever an event or change in circumstances indicates that their carrying amounts may not be recoverable. The asset is impaired if the recoverable amount is less than the carrying amount. If we cannot estimate the recoverable amount of an individual asset because it does not generate independent cash inflows, we test the entire cash generating unit for impairment.\n\n#### *Recognition of Impairment Charge*\n\nThe recoverable amount of a cash generating unit or asset is the higher of:\n\n- its fair value less costs to sell, or\n- its value in use.\n\nWe estimate an asset's (or cash generating unit's) fair value less costs to sell using the best information available to estimate the amount we could obtain from disposing the asset in an arm's length transaction, less the estimated cost of disposal.\n\nWe estimate value in use by discounting estimated future cash flows from a cash generating unit or asset to their present value using a pretax rate that reflects current market assessments of the time value of money and the risks specific to the asset. 
Estimated cash flows are based on management's assumptions and are supported by external information.\n\nThe above concepts used to determine the recoverable amount require significant estimates such as:\n\n- future cash flows\n- terminal growth rate, and\n- the discount rate applied.\n\nIf our estimate of the asset's or cash generating unit's recoverable amount is less than its carrying amount, we reduce its carrying amount to the recoverable amount, and recognize the loss in net income.\n\nWe reverse a previously recorded impairment loss if our estimate of a previously impaired asset's or cash generating unit's recoverable amount has increased such that the impairment recorded in the previous year has reversed. The reversal is recognized by increasing the asset's or cash generating unit's carrying amount to our new estimate of its recoverable amount. The new carrying amount cannot be higher than the carrying amount we would have recorded if we had not recognized an impairment loss in previous years. We do not reverse impairment losses recognized for goodwill.\n\n#### **Investments**\n\n#### *Investments in Associates and Joint Arrangements*\n\nAn entity is an associate when we have a significant influence on the entity's financial and operating policies but do not control it. We are generally presumed to have significant influence over an entity when we hold more than 20% of the voting power.\n\nA joint arrangement exists when there is a contractual agreement that establishes joint control over its activities and requires unanimous consent for strategic financial and operating decisions. 
We classify our interests in joint arrangements into one of two categories:\n\n- Joint operations, when we have the rights to the assets and obligations for the liabilities related to the arrangement\n- Joint ventures, when we have the rights to the net assets of the arrangement.\n\nWe use the equity method to account for our investments in associates and joint ventures, and we use the proportionate consolidation method to account for our investments in joint operations.\n\nWe recognize our investments in associates and joint ventures initially at cost, and then increase or decrease the carrying amounts based on our share of each entity's income or loss after initial recognition. Distributions we receive from these entities reduce the carrying amount of our investments.\n\nWe eliminate unrealized gains and losses from our investments in associates or joint ventures against our investment, up to the amount of our interest in the entity.\n\n#### *Investments in Publicly Traded and Private Companies*\n\nWe classify our investments in publicly traded and private companies where we have no control or significant influence as available-for-sale investments, and account for them as follows:\n\n- publicly traded companies: we record them at fair value based on publicly quoted prices\n- private companies: we record them at fair value using well established market or asset based techniques, or projected income valuation techniques, applying them to each investment's future operating and profitability prospects.\n\nWe record changes in the fair value of these investments in other comprehensive income until we dispose of the investments or they become impaired.\n\nSee note 14 for more information about our investments.", - "page_start": 103, - "page_end": 103, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "#### FINANCIAL CONDITION\n\n#### **Capital Resources**\n\nOur capital resources consist primarily of cash flow from operations, cash and cash equivalents, 
available lines of credit, funds available under our accounts receivable securitization program and issuances of long-term debt.\n\nThis information is forward-looking and should be read in conjunction with \"About forward-looking information\" and \"Risks and Uncertainties Affecting Our Business\" and other disclosure about various economic, competitive and regulatory assumptions, factors and risks that could cause our actual future financial and operating results to differ from those currently expected.\n\nWe anticipate generating a net cash surplus in 2014 from our cash from operations. We expect that we will have sufficient capital resources to satisfy our cash funding requirements in 2014, including the funding of dividends on our common shares, repayment of maturing long-term debt and other financing activities, investing activities, and other requirements, taking into account our opening cash balance, cash from operations, the amount available under our $2.0 billion bank credit facility, and our accounts securitization program and from the issuance of short-term and, or long-term debt from time to time. At December 31, 2013, there were no significant restrictions on the flow of funds between Rogers and its subsidiary companies.\n\nWe believe that we can satisfy foreseeable additional funding requirements by issuing additional debt financing, which, depending on market conditions, could include restructuring our existing bank credit and letter of credit facilities, issuing public or private debt, amending the terms of our accounts receivable securitization program or issuing equity. We may also refinance a portion of existing debt depending on market conditions and other factors. There is no assurance, however, that this will or can be done.\n\n#### **Bank Credit and Letter of Credit Facilities**\n\nWe have $2.5 billion of bank credit and letter of credit facilities. 
Each of these facilities is unsecured and guaranteed by Rogers Communications Partnership and ranks equally with all of our senior notes and debentures. The terms of our bank credit facility are committed by the participating financial institutions until it expires in July 2017. As at December 31, 2013, there were no advances outstanding under our $2.0 billion bank credit facility and there were letters of credit totalling $0.5 billion outstanding under our letter of credit facilities.\n\n#### **Liquidity**\n\nWe had approximately $4.5 billion of available liquidity at December 31, 2013, as compared to $3.1 billion available at December 31, 2012:\n\n- $2.3 billion in cash and cash equivalents (2012 $0.2 billion)\n- $2.0 billion available under our bank credit facility (2012 $2.0 billion)\n- $0.2 billion available under the $0.9 billion accounts receivable securitization program (2012 – $0.9 billion).\n\n#### **Covenants**\n\nWe are currently in compliance with all covenants under our debt instruments. At December 31, 2013, there were no financial leverage covenants in effect other than those under our bank credit and letter of credit facilities (see Terms and conditions under Note 18 to the 2013 audited consolidated financial statements).\n\n#### **Credit Ratings**\n\nCredit ratings provide an independent measure of credit quality of an issue of securities, and can affect our ability to obtain short-term and long-term financing and the terms of the financing. If rating agencies lower the credit ratings on our debt, particularly a downgrade below investment grade, it could adversely affect our cost of financing and access to liquidity and capital.\n\nWe have engaged each of Fitch Ratings (Fitch), Moody's Investors Service (Moody's) and Standard & Poor's Ratings Services (Standard & Poor's) to rate our public debt issues. In May 2013, each of Fitch and Standard & Poor's upgraded RCI's senior unsecured debt to BBB+ (from BBB) with a stable outlook. 
Moody's comparably equivalent rating of Baa1 with a stable outlook has not changed from last year.\n\nThe table below shows the credit ratings on our borrowings received from the rating agencies as of December 31, 2013:\n\n| | Corporate credit issuer | |\n| --- | --- | --- |\n| 2013 | default rating | Senior unsecured debt |\n| Standard and Poor's | BBB+ with a stable outlook BBB+ with a stable outlook | |\n| Fitch | BBB+ with a stable outlook BBB+ with a stable outlook | |\n| Moody's | Baa1, stable outlook | Baa1, stable outlook |\n\nRatings for debt instruments across the universe of composite rates range from AAA (Standard & Poor's and Fitch) or Aaa (Moody's) representing the highest quality of securities rated, to D (Standard & Poor's), C (Moody's) and Substantial Risk (Fitch) for the lowest quality of securities rated.\n\nCredit ratings are not recommendations for investors to purchase, hold or sell the rated securities, nor are they a comment on market price or investor suitability. There is no assurance that a rating will remain in effect for a given period of time, or that a rating will not be revised or withdrawn entirely by a rating agency if it believes circumstances warrant it. 
The ratings on our senior debt provided by Standard & Poor's, Fitch and Moody's are investment grade ratings.\n\n#### **RATIO OF ADJUSTED OPERATING PROFIT TO INTEREST**", - "page_start": 64, - "page_end": 64, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Capital additions for 2002 include $72.6 million used to purchase equipment consisting primarily of revenue-producing vehicles originally placed into service pursuant to an operating lease.\n\n### **Liquidity and Capital Resources**\n\nThe major components of changes in cash Öows for the years ended December 31, 2004, 2003 and 2002 are discussed below.\n\n*Cash Flows From Operating Activities.* Cash Öows provided by operating activities were $666.3 million, $600.5 million and $569.7 million for the years ended December 31, 2004, 2003 and 2002, respectively. The changes in cash provided by operating activities during the periods are due to the expansion of our business, the timing of payments received for accounts receivable, and the timing of payments for accounts payable and income taxes.\n\nIn December 2003, we received written approval from the Internal Revenue Service to exclude probable expansion airspace from our calculation of landÑll amortization, depletion, and Ñnal capping, closure and postclosure costs for tax purposes. As a result of this change, we recorded a tax receivable of approximately $48.0 million which was collected or used to oÅset taxes payable during the year ended December 31, 2004. 
Also during the year ended December 31, 2004, we collected a $23.0 million note receivable associated with a divested business.\n\nWe expect our cash Öow from operating activities during 2005 to be lower than 2004 and 2003 because of higher tax payments due to the reversal of bonus depreciation and several one-time items that beneÑted previous periods, including utilization of a tax receivable and the collection of a note receivable during 2004, and an increase in self-insurance reserves during 2003.\n\nWe use cash Öow from operations to fund capital expenditures, acquisitions, share repurchases, dividend payments and debt repayments.\n\n*Cash Flows Used In Investing Activities.* Cash used in investing activities was $206.7 million, $552.4 million and $316.3 million for the years ended December 31, 2004, 2003 and 2002, respectively, and consists primarily of cash used for capital additions and business acquisitions for all the periods presented and cash provided by restricted marketable securities in 2004. Capital additions were $283.8 million, $273.2 million and $258.6 million during the years ended December 31, 2004, 2003 and 2002, respectively. Cash used to acquire businesses, net of cash acquired, was $47.3 million, $51.5 million and $55.8 million during the years ended December 31, 2004, 2003 and 2002, respectively.\n\nThe increase in restricted marketable securities during 2003 consists of amounts transferred from unrestricted cash for Ñnancial guarantees. In 2004, we liquidated a portion of these marketable securities and used the proceeds to repay the $225.0 million of public notes. We used letters of credit to replace Ñnancial guarantees secured by marketable securities that were liquidated.\n\nWe intend to Ñnance capital expenditures and acquisitions through cash on hand, cash Öows from operations, our revolving credit facility, tax-exempt bonds and other Ñnancings. 
We expect to use primarily cash for future business acquisitions.\n\n*Cash Flows Used In Financing Activities.* Cash Öows used in Ñnancing activities was $437.3 million, $70.4 million and $128.0 million for the years ended December 31, 2004, 2003 and 2002, respectively, and primarily include proceeds from issuances of tax-exempt bonds, repayments of debt and repurchases of common stock under our stock repurchase program. Dividends paid were $46.0 million and $19.0 million during 2004 and 2003, respectively. In 2004, repayments of debt include the liquidation of $225.0 million of public notes.\n\nFrom 2000 through 2004, our board of directors authorized the repurchase of up to $1,025.0 million of our common stock. As of December 31, 2004, we paid $750.4 million to repurchase approximately 35.2 million shares of our common stock, of which $266.1 million was paid during 2004 to repurchase approximately 9.6 million shares of our common stock.", - "page_start": 53, - "page_end": 53, - "source_file": "NYSE_RSG_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20200471_en.pdf", - "query": "What is the price of the The Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 ?", - "target_page": 8, - "target_passage": "£6.90", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## **2020 No. 
471**\n\n## **EDUCATION, ENGLAND**\n\n# The Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020\n\n| Made - - | - | - 28th April 2020 |\n| --- | --- | --- |\n| Laid before Parliament | | 30th April 2020 |\n| Coming into force | - | - 1st May 2020 |\n\nThe Secretary of State makes the following Regulations in exercise of the powers conferred by sections 30(8), 31(4), 36(11), 37(4), 44(7)(b) and (c), 47, 49(3), 51(4), 56(1), 71(11), 73(4), 74(3) and 135(2) and (3) of the Children and Families Act 2014(**a**) and sections 29(3) and 569(4) of the Education Act 1996(**b**).\n\n## **Citation and commencement**\n\n**1.** These Regulations may be cited as the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 and come into force on 1st May 2020.\n\n## **Review and expiry**\n\n**2.**—(1) The Secretary of State must review the effectiveness of these Regulations during the period for which they have effect.\n\n(2) These Regulations cease to have effect on 25th September 2020.\n\n## **Amendment of the Special Educational Needs and Disability Regulations 2014**\n\n**3.** The Special Educational Needs and Disability Regulations 2014(**c**) are amended as follows.\n\n**4.** In regulation 2(1) (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n**5.** After regulation 2 (interpretation) insert—\n\n## \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of\n\n(<b>a) 2014 c.6. Section 30(8) was amended by Schedule 2, Part 1, paragraph 4 to the Children and Social Work Act 2017 (c.16).\n\n(<b>b) 1996 c.56. Section 29(3) was amended by Schedule 30, paragraph 67 and Schedule 31 to the School Standards and Framework Act 1998 (c.31) and S.I. 
2010/1158 and section 569(4) was amended by section 8(1) and (5) of the Education (Wales) Measure 2009.\n\n(<b>c) S.I. 2014/1530, relevant amending instruments are S.I. 2014/2096, S.I. 2015/359 and S.I. 2017/1306.", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "(2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.\".\n\n## **Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015**\n\n**18.** The Special Educational Needs and Disability (Detained Persons) Regulations 2015(**a**) are amended as follows.\n\n**19.** In regulation 2(1) (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n**20.** After regulation 2 (interpretation) insert—\n\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 15(1) and (4) (needs assessments which are not completed);\n- (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n- (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n- (d) regulation 19 (requirement to consider mediation);\n- (e) regulation 20(1) and (2) (where 
the appropriate person does not wish to or fails to pursue mediation);\n- (f) regulation 21 (mediation);\n- (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n- (h) regulation 27(3) (steps to be taken by a home authority);\n- (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n- (j) regulation 30(3) and (6) (unopposed appeals).\".\n\n**21.** In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert—\n\n> \"(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**22.** In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\", or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n(<b>a) S.I. 
2015/62.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "**18.** Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n#### **EXPLANATORY NOTE**\n\n#### *(This note is not part of the Regulations)*\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the International Travel Regulations\"), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. 
An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\n \n\n© Crown copyright 2021\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "(3) In regulation 4ZA—\n\n- (a) in the heading, for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\";\n- (b) in paragraph (1)(a), for \"regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\")\" substitute \"regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 (\"the International Travel and Operator Liability Regulations\")\";\n- (c) in paragraph (1)(c), for \"paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\";\n- (d) in paragraph (3), for \"paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\".\n\n**2.**—(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020(**a**) are amended as follows.\n\n(2) In regulation 2D(1)(c), for \"regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n(3) In regulation 6(1)—\n\n- 
(a) in the definitions of \"designated place\", \"isolation requirements\" and \"self-isolating worker\", for \"regulation 4\" substitute \"regulation 9\";\n- (b) in the definition of \"International Travel Regulations\", for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n# SCHEDULE 16 Regulation 26(3)\n\n### Transitional provision\n\n**1.** Passenger information provided before 4.00 a.m. on 17th May 2021 by a person pursuant to regulation 3 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\") in advance of arrival in England is treated as passenger information provided for the purposes of these Regulations where the person arrives in England on or after that date.\n\n**2.** Confirmation given by the Foreign, Commonwealth and Development Office that a person is not required to comply with regulation 3B of the 2020 Regulations is treated as confirmation that the person is not required to comply with regulation 6 of these Regulations where the person arrives in England on or after 4.00 a.m. on 17th May 2021.\n\n**3.** A designation by the Secretary of State of a person as an authorised person under regulation 5(7) of the 2020 Regulations has effect as a designation of that person as an authorised person under of regulation 11(11)(c) of these Regulations.\n\n**4.** Regulation 5A of the 2020 Regulations continues to have effect in relation to a constable who exercises the powers in that regulation in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021.\n\n(<b>a) S.I. 2020/1045. Regulation 2D was inserted by S.I. 2021/364. 
There are other amendments but none is relevant.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "**23.** In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**24.** In regulation 10(4) (decision not to secure an EHC plan)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\"; or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n**25.** In regulation 13(3) (timescales for EHC plans), for \"(c)\" substitute \"(d)\".\n\n**26.** In regulation 29 (compliance with the orders of the First-tier Tribunal)—\n\n- (a) after paragraph (6) insert—\n\"(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.\".\n\n- (b) in paragraph (7)(c) after \"10(4)(a)\" insert \"or (d)\".\n**27.** In regulation 30(7)(c) (unopposed appeals), after \"10(4)(a)\" insert \"or (d)\".\n\n## **Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017**\n\n**28.** The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017(**a**) are amended as follows.\n\n**29.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n- **30.** After regulation 2 (interpretation) insert—\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a 
requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 6(3) and (6) (responding to health care recommendations); and\n- (b) regulation 7(1) and (4) (responding to social care recommendations).\".\n\n*Vicky Ford* Parliamentary Under Secretary of State 28th April 2020 Department for Education\n\n#### (**a**) S.I. 2017/1306.", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "## **EXPLANATORY NOTE**\n\n#### *(This note is not part of the Regulations)*\n\nThese Regulations make amendments to secondary legislation relating to special educational needs and disability in order to provide exceptions to time limits set out in that legislation where they cannot be met because of a reason relating to the incidence or transmission of coronavirus.\n\nRegulation 2 contains review and expiry provisions. The Secretary of State is required to review the effectiveness of the Regulations during the period in which they have effect. The Regulations cease to have effect on 25th September 2020.\n\nRegulations 3 to 14 amend the Special Educational Needs and Disability Regulations 2014 ('the SEND Regulations 2014').\n\nRegulation 5 inserts a glossing provision into the SEND Regulations 2014 which relaxes certain requirements in those Regulations for actions to be taken within specified time limits where it is not reasonably practicable for a person to meet those requirements for a reason relating to the incidence or transmission of coronavirus. 
Instead, any such requirement is to be read as a requirement for such action to be taken as soon as reasonably practicable.\n\nRegulations 6 to 14 make textual amendments to the SEND Regulations 2014 to relax time limits.\n\nRegulations 15 to 17 amend the Special Educational Needs (Personal Budgets) Regulations 2014 ('the Personal Budgets Regulations 2014').\n\nRegulation 17 inserts a similar glossing provision into the Personal Budgets Regulations 2014 as regulation 5 does in respect of the SEND Regulations 2014.\n\nRegulations 18 to 27 amend the Special Educational Needs and Disability (Detained Persons) Regulations 2015 ('the Detained Persons Regulations 2015').\n\nRegulation 20 inserts a glossing provision into the Detained Persons Regulations 2015 similar to the ones in regulations 5 and 17 in relation to the SEND Regulations 2014 and the Personal Budgets Regulations 2014 respectively.\n\nRegulations 21 to 27 make textual amendments to the Detained Persons Regulations 2015 to relax time limits.\n\nRegulations 28 to 30 amend the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017 ('the First-tier Tribunal Regulations 2017').\n\nRegulation 30 inserts a glossing provision into the First-tier Tribunal Regulations 2017 similar to those in regulations 5, 17 and 20.\n\nAn impact assessment has not been produced for this instrument as this is a temporary, emergency measure and no significant impact on business, charities or voluntary bodies is foreseen.\n\nAn Explanatory Memorandum is published alongside this instrument on www.legislation.gov.uk.\n\n \n\n© Crown copyright 2020\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 5, - "page_end": 5, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "# PART 6\n\n### Final 
provisions\n\n### **Review of need for requirements**\n\n**24.** The Secretary of State must review the need for the requirements imposed by these Regulations by 14th June 2021 and at least once every 28 days thereafter.\n\n#### **Expiry of Regulations**\n\n**25.** These Regulations expire at the end of 16th May 2022.\n\n### **Revocations, transitional provision consequential amendments and savings**\n\n**26.**—(1) The following Regulations are revoked—\n\n- (a) the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020(**a**);\n- (b) the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the International Travel Regulations\")(**b**); and\n- (c) the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021(**c**).\n\n(2) Schedule 15 makes consequential amendments to other instruments specified in that Schedule.\n\n(3) Schedule 16 makes transitional provisions.\n\n(4) Nothing in these Regulations applies in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021 (and accordingly, the regulations mentioned in paragraph (1) continue to have effect in relation to such a person).\n\nSigned by authority of the Secretary of State\n\n*Robert Courts* Parliamentary Under Secretary of State At 10.32 a.m. on 14th May 2021 Department for Transport\n\n(**a**) S.I. 2020/567.\n\n(<b>b) S.I. 2020/568.\n\n(<b>c) S.I. 
2021/38.", - "page_start": 30, - "page_end": 30, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**10.** In regulation 13(3) (timescales for EHC plans), for \"(d)\" substitute \"(e)\".\n\n**11.** After regulation 18 (circumstances in which a local authority must review an EHC plan) insert—\n\n## \"**Circumstances in which it is not necessary to review an EHC plan**\n\n**18A.**—(1) It is not necessary for a local authority to review an EHC plan in accordance with section 44(1) of the Act if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\n\n(2) Where paragraph (1) applies, a local authority must instead conduct such reviews as soon as reasonably practicable.\".\n\n**12.** In regulation 22 (amending an EHC plan following a review), after paragraph (5) insert—\n\n\"(6) The local authority need not comply with the time limit referred to in paragraphs (3) and (4) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**13.** In regulation 27(3) (amending or replacing an EHC plan following a re-assessment)—\n\n- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**14.** In regulation 45 (unopposed appeals), after paragraph (7) insert—\n\n\"(8) The local authority need not comply with the time limits specified in paragraph (3A) if it is impractical to do so because the circumstances referred to in regulation 10(4)(e) apply.\".\n\n### **Amendment of the Special Educational Needs (Personal Budgets) Regulations 2014**\n\n**15.** The Special Educational Needs (Personal Budgets) Regulations 2014(**a**) are amended as 
follows.\n\n**16.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2);\n\n**17.** After regulation 2 (interpretation) insert—\n\n\".\n\n#### \"**Relaxation of time period due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, the requirement for the local authority to review the making and use of direct payments within the first three months of them being made in regulation 11(2)(a) (monitoring and review of direct payments) is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(<b>a) S.I. 2014/1652, to which there are amendments not relevant to these Regulations.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "The Secretary of State makes the following Regulations in exercise of the powers conferred by sections 45B, 45F(2) and 45P(2) of the Public Health (Control of Disease) Act 1984(**a**).\n\n## PART 1\n\n### Introductory\n\n#### **Citation, commencement, extent and application**\n\n**1.**—(1) These Regulations may be cited as the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021.\n\n(2) These Regulations come into force at 4.00 a.m. 
on 17th May 2021.\n\n(3) These Regulations extend to England and Wales and apply in relation to England only.\n\n#### **Interpretation and introduction of Schedules 1 to 4**\n\n**2.**—(1) In these Regulations—\n\n\"category 1 arrival\" means person who has arrived in England from a category 1 country or territory, and has not been in a category 2 country or territory or a category 3 country or territory in the period beginning with the 10th day before the date of their arrival in England;\n\n\"category 1 country or territory\" means a country or territory, or part of a country or territory, specified in Schedule 1(**b**);\n\n\"category 2 country or territory\" means a country or territory or part of a country or territory specified in Schedule 2(**c**);\n\n\"category 3 country or territory\" means a country or territory or part of a country or territory specified in Schedule 3(**d**);\n\n\"child\" means a person under the age of 18;\n\n\"the common travel area\" has the meaning given in section 1(3) of the Immigration Act 1971(**e**);\n\n\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2);\n\n\"coronavirus disease\" means COVID-19 (the official designation of the disease which can be caused by coronavirus);\n\n\"designated port\" means a port designated for the purposes of Schedule 11;\n\n\"device\" means an in vitro diagnostic medical device within the meaning given in regulation 2(1) of the Medical Devices Regulations 2002(**f**);\n\n\"disability\" has the meaning given in the Equality Act 2010(**g**) (see section 6 of, and Schedule 1 to, that Act);\n\n\"immigration officer\" means a person appointed by the Secretary of State as an immigration officer under paragraph 1 of Schedule 2 to the Immigration Act 1971(**h**);\n\n\"managed self-isolation package\" has the meaning given in paragraph 8 of Schedule 11;\n\n\"operator\" except in regulation 18, means an operator of a relevant service;\n\n(**b**) Category 1 countries and 
territories are referred to colloquially and in guidance as \"Green List\" countries and territories.\n\n(**c**) Category 2 countries and territories are referred to colloquially and in guidance as \"Amber List\" countries and territories.\n\n(**f**) S.I. 2002/618.\n\n(<b>a) 1984 c. 22. Part 2A was inserted by section 129 of the Health and Social Care Act 2008 (c. 14).\n\n(<b>d) Category 3 countries and territories are referred to colloquially and in guidance as \"Red List\" countries and territories. (**e**) 1971 c. 77; section 1(3) provides that the United Kingdom, the Channel Islands, the Isle of Man and the Republic of Ireland are collectively referred to in that Act as \"the common travel area\".\n\n(<b>g) 2010 c. 15.\n\n(<b>h) Paragraph 1 was amended by paragraph 3 of Schedule 3 to the Health Protection Agency Act 2004 (c. 17), and by S.I. 1993/1813.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (e) where P is required to obtain a testing package or undertake a test under regulation 6 or Schedule 8—\n\t- (i) information generated where P books, or attempts to book, a testing package for the purposes of regulation 6,\n\t- (ii) a copy of any notice given to P which contains information about the requirement to book a testing package or to undertake a test,\n\t- (iii) information A obtained under paragraph 10(3) or (4) of Schedule 8,\n\t- (iv) the results of a test undertaken by P in accordance with Schedule 8 (whether or not that test was provided as part of a testing package),\n\t- (v) information obtained by A in the course of providing a test that falls within paragraph (iv) and is undertaken, or in the course of arranging for such a test to be undertaken, by P (including confirmation that the test was undertaken, details of when and where it was undertaken, any reasons for a test not be being undertaken and the details of any replacement test to be undertaken);\n- (f) information provided to an 
immigration officer pursuant to regulations 3(7), 4(4) or 6(11);\n- (g) where a sample taken in respect of a day 2 test under regulation 6 has been sequenced, the sorted BAM file relating to that sample containing all reads aligning to the SARS-CoV-2 reference genome with unaligned and human reads removed;\n- (h) information provided by, or on behalf of, A by way of explanation for failing to comply with regulation 3, 4 or 6, or paragraph 3 of Schedule 8; or\n- (i) information about any steps taken in relation to A, including details of any fixed penalty notice issued under these Regulations.\n- (3) A may only use relevant information where it is necessary—\n\t- (a) for the purpose of carrying out a function under these Regulations;\n\t- (b) for the purpose of—\n\t\t- (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n\t\t- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n\t\t- (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or\n\t- (c) for a purpose connected with, or otherwise incidental to, a purpose described in subparagraph (a) or (b).\n\n(4) Subject to paragraph (7), A may only disclose relevant information to another person (the \"recipient\") where it is necessary for the recipient to have the information —\n\n- (a) for the purpose of carrying out a function of the recipient under—\n\t- (i) these Regulations, or\n\t- (ii) an enactment which, in Scotland, Wales or Northern Ireland, has the effect of requiring the isolation or quarantine of persons who have been outside the common travel area, for any of the purposes described in sub-paragraph (b);\n- (b) for the purpose of—\n\t- (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus 
disease,\n\t- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n\t- (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20200471_en.pdf", - "query": "When come into force the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 ?", - "target_page": 1, - "target_passage": "These Regulations may be cited as the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 and come into force on 1st May 2020.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## **2020 No. 471**\n\n## **EDUCATION, ENGLAND**\n\n# The Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020\n\n| Made - - | - | - 28th April 2020 |\n| --- | --- | --- |\n| Laid before Parliament | | 30th April 2020 |\n| Coming into force | - | - 1st May 2020 |\n\nThe Secretary of State makes the following Regulations in exercise of the powers conferred by sections 30(8), 31(4), 36(11), 37(4), 44(7)(b) and (c), 47, 49(3), 51(4), 56(1), 71(11), 73(4), 74(3) and 135(2) and (3) of the Children and Families Act 2014(**a**) and sections 29(3) and 569(4) of the Education Act 1996(**b**).\n\n## **Citation and commencement**\n\n**1.** These Regulations may be cited as the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 and come into force on 1st May 2020.\n\n## **Review and expiry**\n\n**2.**—(1) The Secretary of State must review the effectiveness of these Regulations during the period for which they have effect.\n\n(2) These Regulations cease to have effect on 25th September 2020.\n\n## **Amendment of the Special Educational Needs and Disability Regulations 
2014**\n\n**3.** The Special Educational Needs and Disability Regulations 2014(**c**) are amended as follows.\n\n**4.** In regulation 2(1) (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n**5.** After regulation 2 (interpretation) insert—\n\n## \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of\n\n(<b>a) 2014 c.6. Section 30(8) was amended by Schedule 2, Part 1, paragraph 4 to the Children and Social Work Act 2017 (c.16).\n\n(<b>b) 1996 c.56. Section 29(3) was amended by Schedule 30, paragraph 67 and Schedule 31 to the School Standards and Framework Act 1998 (c.31) and S.I. 2010/1158 and section 569(4) was amended by section 8(1) and (5) of the Education (Wales) Measure 2009.\n\n(<b>c) S.I. 2014/1530, relevant amending instruments are S.I. 2014/2096, S.I. 2015/359 and S.I. 
2017/1306.", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "**18.** Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n#### **EXPLANATORY NOTE**\n\n#### *(This note is not part of the Regulations)*\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the International Travel Regulations\"), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. 
An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\n \n\n© Crown copyright 2021\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "(2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.\".\n\n## **Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015**\n\n**18.** The Special Educational Needs and Disability (Detained Persons) Regulations 2015(**a**) are amended as follows.\n\n**19.** In regulation 2(1) (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n**20.** After regulation 2 (interpretation) insert—\n\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 15(1) and (4) (needs assessments which are not completed);\n- (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n- (c) regulation 17(1) and (2) 
(restriction on disclosure of EHC plans);\n- (d) regulation 19 (requirement to consider mediation);\n- (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n- (f) regulation 21 (mediation);\n- (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n- (h) regulation 27(3) (steps to be taken by a home authority);\n- (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n- (j) regulation 30(3) and (6) (unopposed appeals).\".\n\n**21.** In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert—\n\n> \"(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**22.** In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\", or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n(<b>a) S.I. 
2015/62.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "(3) In regulation 4ZA—\n\n- (a) in the heading, for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\";\n- (b) in paragraph (1)(a), for \"regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\")\" substitute \"regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 (\"the International Travel and Operator Liability Regulations\")\";\n- (c) in paragraph (1)(c), for \"paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\";\n- (d) in paragraph (3), for \"paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations\".\n\n**2.**—(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020(**a**) are amended as follows.\n\n(2) In regulation 2D(1)(c), for \"regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n(3) In regulation 6(1)—\n\n- (a) in the definitions of \"designated place\", \"isolation requirements\" and \"self-isolating worker\", for \"regulation 4\" substitute \"regulation 9\";\n- (b) in the definition of \"International Travel Regulations\", for \"the Health Protection (Coronavirus, International Travel) (England) Regulations 2020\" substitute \"the 
Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021\".\n\n# SCHEDULE 16 Regulation 26(3)\n\n### Transitional provision\n\n**1.** Passenger information provided before 4.00 a.m. on 17th May 2021 by a person pursuant to regulation 3 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the 2020 Regulations\") in advance of arrival in England is treated as passenger information provided for the purposes of these Regulations where the person arrives in England on or after that date.\n\n**2.** Confirmation given by the Foreign, Commonwealth and Development Office that a person is not required to comply with regulation 3B of the 2020 Regulations is treated as confirmation that the person is not required to comply with regulation 6 of these Regulations where the person arrives in England on or after 4.00 a.m. on 17th May 2021.\n\n**3.** A designation by the Secretary of State of a person as an authorised person under regulation 5(7) of the 2020 Regulations has effect as a designation of that person as an authorised person under of regulation 11(11)(c) of these Regulations.\n\n**4.** Regulation 5A of the 2020 Regulations continues to have effect in relation to a constable who exercises the powers in that regulation in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021.\n\n(<b>a) S.I. 2020/1045. Regulation 2D was inserted by S.I. 2021/364. 
There are other amendments but none is relevant.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "**23.** In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**24.** In regulation 10(4) (decision not to secure an EHC plan)—\n\n- (a) at the end of sub-paragraph (b) omit \"or\"; and\n- (b) at the end of sub-paragraph (c) insert—\n\n\"; or\n\n- (d) of a reason relating to the incidence or transmission of coronavirus\".\n**25.** In regulation 13(3) (timescales for EHC plans), for \"(c)\" substitute \"(d)\".\n\n**26.** In regulation 29 (compliance with the orders of the First-tier Tribunal)—\n\n- (a) after paragraph (6) insert—\n\"(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.\".\n\n- (b) in paragraph (7)(c) after \"10(4)(a)\" insert \"or (d)\".\n**27.** In regulation 30(7)(c) (unopposed appeals), after \"10(4)(a)\" insert \"or (d)\".\n\n## **Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017**\n\n**28.** The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017(**a**) are amended as follows.\n\n**29.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); \".\n\n- **30.** After regulation 2 (interpretation) insert—\n#### \"**Relaxation of time periods due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a 
requirement for such action to be taken as soon as reasonably practicable.\n\n(2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n\n(3) The following regulations are specified for the purposes of paragraphs (1) and (2)—\n\n- (a) regulation 6(3) and (6) (responding to health care recommendations); and\n- (b) regulation 7(1) and (4) (responding to social care recommendations).\".\n\n*Vicky Ford* Parliamentary Under Secretary of State 28th April 2020 Department for Education\n\n#### (**a**) S.I. 2017/1306.", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "## **EXPLANATORY NOTE**\n\n#### *(This note is not part of the Regulations)*\n\nThese Regulations make amendments to secondary legislation relating to special educational needs and disability in order to provide exceptions to time limits set out in that legislation where they cannot be met because of a reason relating to the incidence or transmission of coronavirus.\n\nRegulation 2 contains review and expiry provisions. The Secretary of State is required to review the effectiveness of the Regulations during the period in which they have effect. The Regulations cease to have effect on 25th September 2020.\n\nRegulations 3 to 14 amend the Special Educational Needs and Disability Regulations 2014 ('the SEND Regulations 2014').\n\nRegulation 5 inserts a glossing provision into the SEND Regulations 2014 which relaxes certain requirements in those Regulations for actions to be taken within specified time limits where it is not reasonably practicable for a person to meet those requirements for a reason relating to the incidence or transmission of coronavirus. 
Instead, any such requirement is to be read as a requirement for such action to be taken as soon as reasonably practicable.\n\nRegulations 6 to 14 make textual amendments to the SEND Regulations 2014 to relax time limits.\n\nRegulations 15 to 17 amend the Special Educational Needs (Personal Budgets) Regulations 2014 ('the Personal Budgets Regulations 2014').\n\nRegulation 17 inserts a similar glossing provision into the Personal Budgets Regulations 2014 as regulation 5 does in respect of the SEND Regulations 2014.\n\nRegulations 18 to 27 amend the Special Educational Needs and Disability (Detained Persons) Regulations 2015 ('the Detained Persons Regulations 2015').\n\nRegulation 20 inserts a glossing provision into the Detained Persons Regulations 2015 similar to the ones in regulations 5 and 17 in relation to the SEND Regulations 2014 and the Personal Budgets Regulations 2014 respectively.\n\nRegulations 21 to 27 make textual amendments to the Detained Persons Regulations 2015 to relax time limits.\n\nRegulations 28 to 30 amend the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017 ('the First-tier Tribunal Regulations 2017').\n\nRegulation 30 inserts a glossing provision into the First-tier Tribunal Regulations 2017 similar to those in regulations 5, 17 and 20.\n\nAn impact assessment has not been produced for this instrument as this is a temporary, emergency measure and no significant impact on business, charities or voluntary bodies is foreseen.\n\nAn Explanatory Memorandum is published alongside this instrument on www.legislation.gov.uk.\n\n \n\n© Crown copyright 2020\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 5, - "page_end": 5, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "# PART 6\n\n### Final 
provisions\n\n### **Review of need for requirements**\n\n**24.** The Secretary of State must review the need for the requirements imposed by these Regulations by 14th June 2021 and at least once every 28 days thereafter.\n\n#### **Expiry of Regulations**\n\n**25.** These Regulations expire at the end of 16th May 2022.\n\n### **Revocations, transitional provision consequential amendments and savings**\n\n**26.**—(1) The following Regulations are revoked—\n\n- (a) the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020(**a**);\n- (b) the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 (\"the International Travel Regulations\")(**b**); and\n- (c) the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021(**c**).\n\n(2) Schedule 15 makes consequential amendments to other instruments specified in that Schedule.\n\n(3) Schedule 16 makes transitional provisions.\n\n(4) Nothing in these Regulations applies in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021 (and accordingly, the regulations mentioned in paragraph (1) continue to have effect in relation to such a person).\n\nSigned by authority of the Secretary of State\n\n*Robert Courts* Parliamentary Under Secretary of State At 10.32 a.m. on 14th May 2021 Department for Transport\n\n(**a**) S.I. 2020/567.\n\n(<b>b) S.I. 2020/568.\n\n(<b>c) S.I. 
2021/38.", - "page_start": 30, - "page_end": 30, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "The Secretary of State makes the following Regulations in exercise of the powers conferred by sections 45B, 45F(2) and 45P(2) of the Public Health (Control of Disease) Act 1984(**a**).\n\n## PART 1\n\n### Introductory\n\n#### **Citation, commencement, extent and application**\n\n**1.**—(1) These Regulations may be cited as the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021.\n\n(2) These Regulations come into force at 4.00 a.m. on 17th May 2021.\n\n(3) These Regulations extend to England and Wales and apply in relation to England only.\n\n#### **Interpretation and introduction of Schedules 1 to 4**\n\n**2.**—(1) In these Regulations—\n\n\"category 1 arrival\" means person who has arrived in England from a category 1 country or territory, and has not been in a category 2 country or territory or a category 3 country or territory in the period beginning with the 10th day before the date of their arrival in England;\n\n\"category 1 country or territory\" means a country or territory, or part of a country or territory, specified in Schedule 1(**b**);\n\n\"category 2 country or territory\" means a country or territory or part of a country or territory specified in Schedule 2(**c**);\n\n\"category 3 country or territory\" means a country or territory or part of a country or territory specified in Schedule 3(**d**);\n\n\"child\" means a person under the age of 18;\n\n\"the common travel area\" has the meaning given in section 1(3) of the Immigration Act 1971(**e**);\n\n\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2);\n\n\"coronavirus disease\" means COVID-19 (the official designation of the disease which can be caused by coronavirus);\n\n\"designated port\" means a port designated for the purposes of Schedule 11;\n\n\"device\" means an in vitro diagnostic medical device within 
the meaning given in regulation 2(1) of the Medical Devices Regulations 2002(**f**);\n\n\"disability\" has the meaning given in the Equality Act 2010(**g**) (see section 6 of, and Schedule 1 to, that Act);\n\n\"immigration officer\" means a person appointed by the Secretary of State as an immigration officer under paragraph 1 of Schedule 2 to the Immigration Act 1971(**h**);\n\n\"managed self-isolation package\" has the meaning given in paragraph 8 of Schedule 11;\n\n\"operator\" except in regulation 18, means an operator of a relevant service;\n\n(**b**) Category 1 countries and territories are referred to colloquially and in guidance as \"Green List\" countries and territories.\n\n(**c**) Category 2 countries and territories are referred to colloquially and in guidance as \"Amber List\" countries and territories.\n\n(**f**) S.I. 2002/618.\n\n(<b>a) 1984 c. 22. Part 2A was inserted by section 129 of the Health and Social Care Act 2008 (c. 14).\n\n(<b>d) Category 3 countries and territories are referred to colloquially and in guidance as \"Red List\" countries and territories. (**e**) 1971 c. 77; section 1(3) provides that the United Kingdom, the Channel Islands, the Isle of Man and the Republic of Ireland are collectively referred to in that Act as \"the common travel area\".\n\n(<b>g) 2010 c. 15.\n\n(<b>h) Paragraph 1 was amended by paragraph 3 of Schedule 3 to the Health Protection Agency Act 2004 (c. 17), and by S.I. 
1993/1813.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**10.** In regulation 13(3) (timescales for EHC plans), for \"(d)\" substitute \"(e)\".\n\n**11.** After regulation 18 (circumstances in which a local authority must review an EHC plan) insert—\n\n## \"**Circumstances in which it is not necessary to review an EHC plan**\n\n**18A.**—(1) It is not necessary for a local authority to review an EHC plan in accordance with section 44(1) of the Act if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\n\n(2) Where paragraph (1) applies, a local authority must instead conduct such reviews as soon as reasonably practicable.\".\n\n**12.** In regulation 22 (amending an EHC plan following a review), after paragraph (5) insert—\n\n\"(6) The local authority need not comply with the time limit referred to in paragraphs (3) and (4) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\".\n\n**13.** In regulation 27(3) (amending or replacing an EHC plan following a re-assessment)—\n\n- (a) at the end of sub-paragraph (c) omit \"or\"; and\n- (b) at the end of sub-paragraph (d) insert—\n\n\"; or\n\n- (e) of a reason relating to the incidence or transmission of coronavirus\".\n**14.** In regulation 45 (unopposed appeals), after paragraph (7) insert—\n\n\"(8) The local authority need not comply with the time limits specified in paragraph (3A) if it is impractical to do so because the circumstances referred to in regulation 10(4)(e) apply.\".\n\n### **Amendment of the Special Educational Needs (Personal Budgets) Regulations 2014**\n\n**15.** The Special Educational Needs (Personal Budgets) Regulations 2014(**a**) are amended as 
follows.\n\n**16.** In regulation 2 (interpretation), at the appropriate place insert—\n\n\"\"coronavirus\" means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2);\n\n**17.** After regulation 2 (interpretation) insert—\n\n\".\n\n#### \"**Relaxation of time period due to coronavirus exception**\n\n**2A.**—(1) Where the coronavirus exception applies, the requirement for the local authority to review the making and use of direct payments within the first three months of them being made in regulation 11(2)(a) (monitoring and review of direct payments) is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n(<b>a) S.I. 2014/1652, to which there are amendments not relevant to these Regulations.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "and the Channel Islands. The British Overseas Territories are not in the common travel area. Public health requirements may vary depending upon in which nation of the UK you are staying.\n\nEngland: https://www.gov.uk/uk-border-control\n\nNorthern Ireland: https://www.nidirect.gov.uk/articles/coronavirus-covid-19-international-traveladvice\n\nScotland: https://www.gov.scot/publications/coronavirus-covid-19-international-travel-quarantine/\n\nWales: https://gov.wales/arriving-wales-overseas\n\nFailure to comply with these measures is a criminal offence and you could be fined. There are a limited set of exemptions from these measures. Check the list of exemptions carefully. You may be fined if you fraudulently claim an exemption.\n\n# PART 2\n\n#### **Onboard announcement**\n\nThe following is a public health message on behalf of the UK's public health agencies.\n\nIf you have been in or transited through an amber or red country within the previous 10 days you must quarantine for the first 10 days after you arrive. 
This is to protect yourself and others.\n\nThe symptoms of coronavirus are a new continuous cough, a high temperature or a loss of, or change in, normal sense of taste or smell. If you experience any of these symptoms, however mild, you are advised to make yourself known to the crew.\n\nSimple measures you can take to help protect yourself and family are:\n\nwash your hands\n\navoid touching your face with your hands\n\ncatch coughs and sneezes in a tissue and dispose of it immediately.\n\n### PART 3\n\n### Relevant websites\n\n**1.** The following are \"the relevant websites\" for the purposes of regulation 14—\n\nhttps://www.gov.uk/government/publications/coronavirus-covid-19-travellers-exempt-from-ukborder-rules/coronavirus-covid-19-travellers-exempt-from-uk-border-rules\n\nhttps://www.gov.uk/guidance/booking-and-staying-in-a-quarantine-hotel-when-you-arrive-inengland\n\nhttps://www.gov.uk/guidance/coronavirus-covid-19-testing-for-people-travelling-to-england\n\nhttp://www.gov.uk/travel-quarantine-and-testing\n\nhttps://www.gov.uk/guidance/red-amber-and-green-list-rules-for-entering-england", - "page_start": 82, - "page_end": 82, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "sg248459.pdf", - "query": "Who is Daniel Casali ?", - "target_page": 12, - "target_passage": " Daniel Casali is a Thought Leader Information Technology Specialist working for 15 years at IBM with Power Systems, high-performance computing, big data, and storage. His role at IBM is to bring to reality solutions that address client’s needs by exploring new technologies for different workloads. He is also fascinated by real multicloud implementations, always trying to abstract and simplify the new challenges of the heterogeneous architectures that are intrinsic to this new consumption model, be that on-premises or in the public cloud. 
", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# **Red Hat OpenShift and IBM Cloud Paks on IBM Power Systems Volume 1**\n\n**Front cover**\n\nDino Quintero Ricardo Dobelin Barros Daniel Casali Luis Ferreira Alain Fisher Federico Fros Luis Daniel Gonzalez Miguel Gomez Gonzalez Mahesh Gurugunti Rogelio Rivera Gutierrez Nicolas Joly Boris Litichevsky Ismael Solis Moreno Gabriel Padilla\n\nSudipto Pal Bogdan Savu Richard Wale", - "page_start": 0, - "page_end": 0, - "source_file": "sg248459.pdf" - }, - { - "text": "## H E A R T H & H O M E T E C H N O L O G I E S : H O T ! H O T ! H O T !\n\n#### A C A S E S T U D Y I N E X P A N D I N G M A R K E T S\n\nWith four brand names under the Hearth & Home Technologies umbrella, we are collectively the world's largest fireplace manufacturer, the country's premier fireplace brands, the most recognized name in the industry, and the preferred brands among home builders. As the leading provider of hearth and home products and services, we make houses feel more like homes.\n\nIn addition to our commanding leadership position in manufacturing the two strongest hearth and home product brand names — Heatilator® and Heat-N-Glo® — we also offer innovative wood fuel technology, fireplaces, and stoves through Quadra-FireTM, while Fireside Hearth & Home distributes, services, and sells fireplace systems.\n\nWhat are we up to with all our great brands? We are meeting a broad range of customer needs, particularly by selling both to consumers and builders through a network of independent and company-owned, stand-alone, or gallerystyle design and installation centers. These Fireside Hearth & Home design centers — visually impressive and aspirational in setting — manifest our proprietary concept of elevating the hearth retail, installation, and distribution experience to a new level of sophistication and service. 
Since there is no other nationally branded hearth retailer in the industry, we are once again changing the game by being first-to-market innovators.\n\nOur newest store in Eagan, Minnesota, for example, is living proof that we're succeeding in growing core product share by getting closer to consumers. One customer, a St. Paul, Minnesota veterinarian, recently had a typically dynamic retail experience at the Eagan store. He's among a large group of people who own at least one of our hearth products — and who comes back for more. He explains: \"When we moved into our house, there were three fireplaces built into the family room, living room, and kitchen. Since we used them every day and liked them so much, we decided to convert our threeseason porch into a year-round porch.\"\n\n\"We all went to the Eagan store to purchase our fourth Heat-N-Glo® fireplace. Once we were walking around the store, taking in the lifestyle environments that are set up and dreaming about what our house could look and feel like, we realized we wanted more! We saw an amazing stone surround setting in one of the store displays — and before you knew it, we had bought the whole wall. Not only does our new fireplace now have a beautiful aesthetic and terrific functionality, but so does our porch. Because the surround wall installation was so surprisingly easy and clean, we're even considering our next purchase.\"", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "- [89] Clifford Nass, Jonathan Steuer, and Ellen R Tauber. 1994. Computers are social actors. In Proceedings of the SIGCHI conference on Human factors in computing systems. 72–78.\n- [90] Lisa P. Nathan, Predrag V. Klasnja, and Batya Friedman. 2007. Value Scenarios: A Technique for Envisioning Systemic Effects of New Technologies. In CHI'07 Extended Abstracts on Human Factors in Computing Systems. 
ACM, 2585–2590.\n- [91] Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muhammad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dangana, Herman Kamper, Hady Elsahar, Goodness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Chinenye Emezue, Bonaventure F. P. Dossou, Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp Öktem, Adewale Akinfaderin, and Abdallah Bashir. 2020. Participatory Research for Low-resourced Machine Translation: A Case Study in African Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, Online, 2144–2160. https://doi.org/10.18653/v1/2020.findings-emnlp.195\n- [92] Maggie Nelson. 2015. The Argonauts. Graywolf Press, Minneapolis.\n- [93] Timothy Niven and Hung-Yu Kao. 2019. Probing Neural Network Comprehension of Natural Language Arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 4658–4664. https://doi.org/10.18653/v1/P19-1459\n- [94] Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.\n- [95] Debora Nozza, Federico Bianchi, and Dirk Hovy. 2020. What the [MASK]? Making Sense of Language-Specific BERT Models. arXiv:2003.02912 [cs.CL]\n- [96] David Ortiz, Daniel Myers, Eugene Walls, and Maria-Elena Diaz. 2005. Where do we stand with newspaper data? 
Mobilization: An International Quarterly 10, 3 (2005), 397–419.\n- [97] Charlotte Pennington, Derek Heim, Andrew Levy, and Derek Larkin. 2016. Twenty Years of Stereotype Threat Research: A Review of Psychological Mediators. PloS one 11 (01 2016), e0146487. https://doi.org/10.1371/journal.pone. 0146487\n- [98] Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, 1532–1543. https://doi.org/10.3115/v1/ D14-1162\n- [99] Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics, New Orleans, Louisiana, 2227–2237. https://doi.org/10.18653/v1/N18-1202\n- [100] Pew. 2018. Internet/Broadband Fact Sheet. (2 2018). https://www.pewinternet. org/fact-sheet/internet-broadband/\n- [101] Aidan Pine and Mark Turin. 2017. Language Revitalization. Oxford Research Encyclopedia of Linguistics.\n- [102] Francesca Polletta. 1998. Contending stories: Narrative in social movements. Qualitative sociology 21, 4 (1998), 419–446.\n- [103] Vinodkumar Prabhakaran, Ben Hutchinson, and Margaret Mitchell. 2019. Perturbation Sensitivity Analysis to Detect Unintended Model Biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, 5740–5745. https://doi.org/10.18653/v1/D19-1578\n- [104] Laura Pulido. 2016. Flint, environmental racism, and racial capitalism. 
Capitalism Nature Socialism 27, 3 (2016), 1–16.\n- [105] Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained Models for Natural Language Processing: A Survey. arXiv:2003.08271 [cs.CL]\n- [106] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 9.\n- [107] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1–67. http://jmlr.org/papers/v21/20- 074.html\n- [108] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, 2383–2392.\n\nhttps://doi.org/10.18653/v1/D16-1264\n\n- [109] Sarah T. Roberts, Joel Tetreault, Vinodkumar Prabhakaran, and Zeerak Waseem (Eds.). 2019. Proceedings of the Third Workshop on Abusive Language Online. Association for Computational Linguistics, Florence, Italy. https://www.aclweb. org/anthology/W19-3500\n- [110] Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2021. A Primer in BERTology: What We Know About How BERT Works. Transactions of the Association for Computational Linguistics 8 (2021), 842–866.\n- [111] Ronald Rosenfeld. 2000. Two decades of statistical language modeling: Where do we go from here? Proc. IEEE 88, 8 (2000), 1270–1278.\n- [112] Corby Rosset. 2020. Turing-NLG: A 17-billion-parameter language model by Microsoft. Microsoft Blog (2020).\n- [113] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019).\n- [114] Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social Bias Frames: Reasoning about Social and Power Implications of Language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5477–5490. https://doi.org/10.18653/v1/2020.acl-main.486\n- [115] Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. Green AI. Commun. ACM 63, 12 (Nov. 2020), 54–63. https://doi.org/10.1145/3381831\n- [116] Sabine Sczesny, Janine Bosak, Daniel Neff, and Birgit Schyns. 2004. Gender stereotypes and the attribution of leadership traits: A cross-cultural comparison. Sex roles 51, 11-12 (2004), 631–645.\n- [117] Claude Elwood Shannon. 1949. The Mathematical Theory of Communication. University of Illinois Press, Urbana.\n- [118] Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2019. Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT. arXiv:1909.05840 [cs.CL]\n- [119] Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The Woman Worked as a Babysitter: On Biases in Language Generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, 3407–3412. https://doi.org/10.18653/v1/D19-1339\n- [120] Katie Shilton, Jes A Koepfler, and Kenneth R Fleischmann. 2014. How to see values in social computing: methods for studying values dimensions. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing. 426–435.\n- [121] Joonbo Shin, Yoonhyung Lee, and Kyomin Jung. 2019. Effective Sentence Scoring Method Using BERT for Speech Recognition. In Asian Conference on Machine Learning. 
1081–1093.\n- [122] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using gpu model parallelism. arXiv preprint arXiv:1909.08053 (2019).\n- [123] Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, et al. 2019. Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203 (2019).\n- [124] Karen Spärck Jones. 2004. Language modelling's generative model: Is it rational? Technical Report. Computer Laboratory, University of Cambridge.\n- [125] Robyn Speer. 2017. ConceptNet Numberbatch 17.04: better, less-stereotyped word vectors. (2017). Blog post, https://blog.conceptnet.io/2017/04/24/ conceptnet-numberbatch-17-04-better-less-stereotyped-word-vectors/.\n- [126] Steven J. Spencer, Christine Logel, and Paul G. Davies. 2016. Stereotype Threat. Annual Review of Psychology 67, 1 (2016), 415–437. https://doi.org/ 10.1146/annurev-psych-073115-103235 arXiv:https://doi.org/10.1146/annurevpsych-073115-103235 PMID: 26361054.\n- [127] Katrina Srigley and Lorraine Sutherland. 2019. Decolonizing, Indigenizing, and Learning Biskaaybiiyang in the Field: Our Oral History Journey1. The Oral History Review (2019).\n- [128] Greg J. Stephens, Lauren J. Silbert, and Uri Hasson. 2010. Speaker–listener neural coupling underlies successful communication. Proceedings of the National Academy of Sciences 107, 32 (2010), 14425–14430. https://doi.org/10.1073/pnas. 1008662107 arXiv:https://www.pnas.org/content/107/32/14425.full.pdf\n- [129] Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 
3645–3650.\n- [130] Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced Representation through Knowledge Integration. arXiv:1904.09223 [cs.CL]\n- [131] Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational", - "page_start": 12, - "page_end": 12, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "*Semantic Evaluation (SemEval-2022)*, pages 1094– 1106, Seattle, United States. Association for Computational Linguistics.\n\n- Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. *ArXiv*, abs/1803.05449.\n- Mathias Creutz. 2018. Open subtitles paraphrase corpus for six languages. In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)*, Miyazaki, Japan. European Language Resources Association (ELRA).\n- Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *North American Chapter of the Association for Computational Linguistics*.\n- Ning Ding, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2023. Sentence and document representation learning. In *Representation Learning for Natural Language Processing*, pages 81–125. Springer Nature Singapore Singapore.\n- Aarohi Srivastava et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *ArXiv*, abs/2206.04615.\n- Alexander R Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´ Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. 
Summeval: Re-evaluating summarization evaluation. *Transactions of the Association for Computational Linguistics*, 9:391–409.\n- Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F. T. Martins, Gautier Viaud, Céline Hudelot, and Pierre Colombo. 2024. Croissantllm: A truly bilingual french-english language model.\n- Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gokhan Tur, and Prem Natarajan. 2023. MASSIVE: A 1M-example multilingual natural language understanding dataset with 51 typologically-diverse languages. In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 4277–4302, Toronto, Canada. Association for Computational Linguistics.\n- Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In *Conference on Empirical Methods in Natural Language Processing*.\n- Iker García-Ferrero, Rodrigo Agerri, and German Rigau. 2021. Benchmarking meta-embeddings: What works and what does not. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages\n\n3957–3972, Punta Cana, Dominican Republic. Association for Computational Linguistics.\n\n- Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2021. The flores-101 evaluation benchmark for low-resource and multilingual machine translation.\n- Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, and Didier Schwab. 2020. 
Flaubert: Unsupervised language model pre-training for french.\n- Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. 2024. Nv-embed: Improved techniques for training llms as generalist embedding models.\n- Antoine Lefebvre-Brossard, Stephane Gazaille, and Michel C. Desmarais. 2023. Alloprof: a new french question-answer education dataset and its use in an information retrieval case study.\n- Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. 2021. MTOP: A comprehensive multilingual task-oriented semantic parsing benchmark. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pages 2950–2962, Online. Association for Computational Linguistics.\n- Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.\n- Antoine Louis and Gerasimos Spanakis. 2022. A statutory article retrieval dataset in French. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 6789–6803, Dublin, Ireland. Association for Computational Linguistics.\n- Louis Martin, Benjamin Muller, Pedro Ortiz Suarez, Yoann Dupont, Laurent Romary, Eric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. 2019. Camembert: a tasty french language model. In *Annual Meeting of the Association for Computational Linguistics*.\n- Philip May. 2021. Machine translated multilingual sts benchmark dataset.\n- Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In *Proceedings of the 7th ACM Conference on Recommender Systems*, RecSys '13, page 165–172, New York, NY, USA. 
Association for Computing Machinery.", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv4.pdf" - }, - { - "text": "Research Report 90\n\nNick Morgan, Daniel Heap, Amy Elliott, Tim Millar\n\nJanuary 2016", - "page_start": 0, - "page_end": 0, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "Design: Gene Mayer Associates, Inc. www.shareholderfocus.com Text: Daniel D. Elman\n\n Photography: Ted Kawalerski; page 8, Amy Etra", - "page_start": 38, - "page_end": 38, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "Design: Gene Mayer Associates, Inc. www.shareholderfocus.com Text: Daniel D. Elman\n\nPhotography: Ted Kawalerski; page 8, Amy Etra", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "| Preferences | | | | × |\n| --- | --- | --- | --- | --- |\n| New ontologies | OWLViz Plugins Reasoner | | Renderer User details | |\n| Annotations Explanations | General Loa | New entities | New entities metadata | |\n| Entity rendering @ Render by entity IRI short name (Id) | | | | |\n| | O Render by prefixed name | | | |\n| | O Render by annotation property (e.g., rdfs:label, skos:prefLabel) | | | |\n| | O Render by prefixed annotation property | | | |\n| | Configure ... | | | |\n| Appearance | Highlight active ontology statements | | | |\n| | Show hyperlinks in components that support them | | | |\n| | Highlight keywords | | | |\n| Font size | 12 = | | | |\n| | Reset font | | | |\n| Reset preferences ... | | | | |\n| | OK Cancel | | | |\n\nFigure 4.2 Renderer tab\n\n| □ < PizzaTutorial (http://www.semanticweb.org/michaeldebellis/ontologies/2020/PizzaTutorial) : [C:\\Users\\Michael DeB ... | | | | | × |\n| --- | --- | --- | --- | --- | --- |\n| Edit Refactor Window Help | File View | Reasoner | Tools | | |\n| + PizzaTutorial (http://www.semanticweb.org/michaeldebellis/ontologies/2020/PizzaTutorial) | | | | | Search ... 
|\n| Active ontology × Entities × Individuals by class × DL Query × | | | | | |\n| Ontology header: 团团启回团 Ontology metric 团团目回区 | | | | | |\n| Ontology IRI http://www.semanticweb.org/michaeldebellis/ontologies/2020/PizzaTutorial Metrics | | | | | |\n| Ontology Version IRI e.g. http://www.semanticweb.org/michaeldebellis/ontologies/2020/PizzaTutorial/1 Axiom 0 | | | | | |\n| Logical axio ... 0 | | | | | |\n| Declaration ... 0 Annotations (+ | | | | | |\n| Class count 0 rdfs:comment × (0) | | | | | |\n| Object prop ... 0 A tutorial ontology for the Pizza domain. | | | | | |\n| Data proper ... 0 | | | | | |\n| Individual c ... 0 | | | | | |\n| Annotation ... 1 | | | | | |\n| Class axioms | | | | | |\n| SubClassOf 0 | | | | | |\n| Ontology imports General class axioms | | Ontology Prefixes | | | |\n| Imported ontologies: | | | | | 008回國國國國 |\n| Direct Imports (+ | | | | | |\n| Indirect Imports | | | | | |\n| Show Inferences | | | | To use the reasoner click Reasoner > Start reasoner | 0 |\n\nFigure 4.3: The Active Ontology Tab with a New Comment", - "page_start": 12, - "page_end": 12, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "# *Acknowledgements*\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus Strategies) in collaboration with Creative Commons.\n\nWe are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/ NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. 
All mistakes or errors are the authors'.\n\nThis report is published under the terms of the Creative Commons Attribution License.", - "page_start": 21, - "page_end": 21, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "#### P E R F E C T M A T C H # 4\n\n## T H E H O M E O W N E R A N D H E A R T H & H O M E T E C H N O L O G I E S\n\nHearth & Home Technologies, you warm our hearts by making a powerful impact on our lives; you are the ones who transform our houses into homes. First, you warmed up our living rooms and family rooms with style, elegance, and comfort. Now, you're heating up our porches and our kitchens … and finding creative and innovative ways to make our bedrooms, bathrooms, dens, guest rooms, and kids' rooms all toasty with your beautiful glow. The home fires are burning brighter and hotter than ever, now that you've come into our lives.", - "page_start": 24, - "page_end": 24, - "source_file": "NYSE_HNI_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "sg248459.pdf", - "query": "When does IBM close its acquisition of Red Hat ?", - "target_page": 20, - "target_passage": " On July 9th, 2019, IBM closed its acquisition of Red Hat, a leader in enterprise Linux and open source technology", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "This publication describes how Red Hat and IBM can advance your cloud journey and speed growth and innovation for your business by using Red Hat OpenShift on IBM Power Systems.\n\n**Note:** Red Hat joins IBM as a distinct unit, preserving the independence and neutrality of Red Hat's open source development heritage and unique development culture. 
Red Hat's unwavering commitment to open source remains unchanged and it continues to offer customers choice and flexibility.", - "page_start": 20, - "page_end": 20, - "source_file": "sg248459.pdf" - }, - { - "text": "#### **1.1 Introduction**\n\nMost companies started or are contemplating their journey to cloud. Although in recent years the adoption of cloud became much more common place, the scope of what a cloud is or can be also increased. This broadening of possibilities unfortunately added confusion and can result in companies being unsure of how their existing application estate can change to integrate with the cloud model.\n\nAs such, doubts still exist around how to start and progress on this journey. It is also true that although people understand traditional enterprise applications and more modern cloud-hosted applications, the integration or co-existence of both can prove equally confusing and contradicting.\n\nRecent industry trends, combined with the new partnership between Red Hat and IBM, seek to bring some clarity to the landscape while providing new modernization opportunities for existing enterprise applications and familiar environments.\n\nThe main focus of this IBM Redbooks publication relates to IBM Cloud Paks and Red Hat OpenShift, which is hosted on IBM Power Systems. Although individually much can be written about either topic, the relationship this publication highlights is between Red Hat OpenShift and IBM Power Systems.\n\nWe show what Red Hat OpenShift brings to the IBM Power Systems platform specifically discuss how it can be deployed and added into existing familiar Power System environments, and the benefits that integration and co-existence can provide from an existing enterprise application viewpoint.\n\nThis publication is a first volume in a planned multi-volume publication over the next 12 - 18 months. 
Within this initial volume, we explain the fundamental perspective (which is accurate as of the time of this writing) while providing pointers to future direction that will be discussed in future volumes.\n\n**Note:** This initial publication relates to Red Hat OpenShift 3.11, because this release was the current OpenShift Container Platform (OCP) release for IBM Power Systems at the time of this writing. IBM and Red Hat intend to deliver Red Hat OpenShift 4 for IBM POWER® to accelerate agility for enterprise clients through integrated tooling and a feature-rich Kubernetes container platform for cloud-native development on POWER9 and IBM POWER8® processor-based servers.\n\n#### **1.2 Red Hat and IBM**\n\nOn July 9th, 2019, IBM closed its acquisition of Red Hat, a leader in enterprise Linux and open source technology.\n\nThis acquisition puts Red Hat and IBM in a unique position to unlock the true value of hybrid cloud for your business. By combining the power and flexibility of Red Hat's open hybrid cloud technologies with the scale and depth of IBM innovation and industry expertise, you now have the tools to accelerate your cloud journey.\n\nIBM and Red Hat worked together for more than 20 years in making open source a competitive advantage for businesses on x86, IBM Power Systems, and IBM z Systems®. 
Together, we are both on a mission to improve open source technology and help your companies capture the business value of the cloud.", - "page_start": 19, - "page_end": 19, - "source_file": "sg248459.pdf" - }, - { - "text": "IBM Redbooks\n\n#### **Red Hat OpenShift and IBM Cloud Paks on IBM Power Systems: Volume 1**\n\nMarch 2020", - "page_start": 2, - "page_end": 2, - "source_file": "sg248459.pdf" - }, - { - "text": "## **Related publications**\n\nThe publications that are listed in this section are considered particularly suitable for a more detailed discussion of the topics that are covered in this book.\n\n#### **IBM Redbooks**\n\nThe IBM Redbooks publication *IBM PowerVM Best Practices*, SG24-8062*,* provides more information about the topic in this document. Note that this publication might be available in softcopy only.\n\nYou can search for, view, download or order this documents and other Redbooks, Redpapers, Web Docs, draft, and other materials, at the following website:\n\n**ibm.com**/redbooks\n\n#### **Online resources**\n\nThe following websites are also relevant as further information sources:\n\n- - Deploying Red Hat OpenShift Container Platform 3.11 on Red Hat OpenStack Platform 13 https://red.ht/2pEFNpV\n- - OpenShift on POWER https://red.ht/337zOIT\n- - Kubernetes concepts https://kubernetes.io/docs/concepts/services-networking/service/\n- - IBM PowerVC https://www.ibm.com/us-en/marketplace/powervc\n- - Using PowerVC storage https://ibm.co/34Cko06\n- - Red Hat OpenShift Container Platform 3.11 CLI Reference https://red.ht/2XZGBmz\n\n#### **Help from IBM**\n\nIBM Support and downloads\n\n**ibm.com**/support\n\nIBM Global Services\n\n**ibm.com**/services", - "page_start": 264, - "page_end": 264, - "source_file": "sg248459.pdf" - }, - { - "text": "```\nsubscription-manager refresh\nAll local data refreshed\nsubscription-manager list --available --matches '*OpenShift*'\nSubscription Name: Red Hat OpenShift Container Platform for Power, LE Business 
Partner \nNFR, Self-Supported\nProvides: Red Hat Enterprise Linux for Power, little endian - Extended Update \nSupport\n Red Hat Enterprise Linux Fast Datapath Beta for Power, little \nendian\n Red Hat Enterprise Linux for Power, little endian\n Red Hat Ansible Engine\n Red Hat OpenShift Enterprise Application Node\n Red Hat Enterprise Linux for Power 9\n Red Hat Software Collections (for RHEL Server for IBM Power LE)\n Red Hat OpenShift Container Platform for Power\n Red Hat Software Collections Beta (for RHEL Server for IBM Power \nLE)\n RHEL for SAP HANA for Power, little endian - Extended Update \nSupport\n Red Hat Beta\n Red Hat OpenShift Container Platform Client Tools for Power\n Red Hat Enterprise Linux Fast Datapath (for RHEL Server for IBM \nPower LE)\n RHEL for SAP for Power, little endian - Extended Update Support\n Red Hat Enterprise Linux for Power, little endian Beta\n Red Hat Container Native Virtualization\n Red Hat CodeReady Linux Builder for Power, little endian - Extended \nUpdate Support\nSKU: 111111111\nContract: 111111111\nPool ID: \nProvides Management: No\nAvailable: Unlimited\nSuggested: 1\nService Level: Standard\nService Type: L1-L3\nSubscription Type: Stackable\nStarts: 05/31/2019\nEnds: 05/31/2020\nSystem Type: Virtual\n```\n- c. Assign the OpenShift subscription:\n\n```\nsubscription-manager attach --pool=\nSuccessfully attached a subscription for: Red Hat OpenShift Container Platform for \nPower, LE Business Partner NFR, Self-Supported\n```\n- d. Enable only the repositories that are required by OpenShift Container Platform 3.11. For IBM POWER9, run the commands that are shown in Example 6-2. 
For IBM POWER8, run the commands that are shown in Example 6-3 on page 107.\n*Example 6-2 OpenShift repositories for POWER9 servers*\n\n```\n# subscription-manager repos --disable=\"*\"\n# subscription-manager repos \\\n --enable=\"rhel-7-for-power-9-rpms\" \\\n --enable=\"rhel-7-for-power-9-extras-rpms\" \\\n```", - "page_start": 121, - "page_end": 121, - "source_file": "sg248459.pdf" - }, - { - "text": "# **Related publications**\n\nThe publications that are listed in this section are considered particularly suitable for a more detailed discussion of the topics that are covered in this book.\n\n# **IBM Redbooks**\n\nThe following IBM Redbooks publications provide more information about the topic in this document (note that some publications referenced in this list might be available in softcopy only):\n\n- -*IBM b-type Gen 5 16 Gbps Switches and Network Advisor*, SG24-8186\n- - *IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines*, SG24-7521\n- - *Implementing the IBM Storwize V5000 Gen2 (including the Storwize V5010, V5020, and V5030)*, SG24-8162\n- - *Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V7.8*, SG24-7933\n\nYou can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, draft and additional materials, at the following website:\n\n**ibm.com**/redbooks\n\nThe following IBM Redbooks publication web pages that are related to this book are also useful resources:\n\n- -IBM Storage Networking Redbooks:\nhttp://www.redbooks.ibm.com/Redbooks.nsf/domains/san\n\n- -IBM Flash Storage Redbooks:\nhttp://www.redbooks.ibm.com/Redbooks.nsf/domains/flash\n\n- -IBM Software Defined Storage Redbooks:\nhttp://www.redbooks.ibm.com/Redbooks.nsf/domains/sds\n\n- -IBM Disk Storage Redbooks:\nhttp://www.redbooks.ibm.com/Redbooks.nsf/domains/disk\n\n- -IBM Storage Solutions 
Redbooks:\nhttp://www.redbooks.ibm.com/Redbooks.nsf/domains/storagesolutions\n\n- -IBM Tape storage Redbooks:\nhttp://www.redbooks.ibm.com/Redbooks.nsf/domains/tape", - "page_start": 810, - "page_end": 810, - "source_file": "sg247938.pdf" - }, - { - "text": "# **5**\n\n## **Chapter 5. Red Hat OpenShift installation planning and considerations**\n\nThis chapter describes the Red Hat OpenShift planning, considerations, and installation guidelines and includes the following topics:\n\n- -5.1, \"IBM Power Systems\" on page 72\n- -5.2, \"Red Hat OpenShift Container Platform 3.11 on IBM Power Systems\" on page 77\n- -5.3, \"Red Hat OpenShift Container Platform 3.11 on IBM PowerVC\" on page 79", - "page_start": 86, - "page_end": 86, - "source_file": "sg248459.pdf" - }, - { - "text": "### **Preface**\n\nThis IBM® Redbooks® publication educates and prepares the readers to understand and enter the multicloud era.\n\nThis book describes a journey to the following aspects of multicloud and associated context of application modernization:\n\n- -Introduction to the rationale and methodology of this publication\n- -Concepts and terminology\n- -Why move to the cloud?\n- -Introduction to containers and orchestration with Kubernetes\n- -Introduction to OpenShift on Power Systems\n- -Why IBM? 
Why IBM Power Systems?\n- -Reference architecture for Red Hat OpenShift on Power Systems\n- - Installation planning, considerations and guidelines to help provide a system configuration and implementation\n- -Implementation details\n- -Use case studies\n\nThe goal of this publication is to describe the journey to implement an IBM Cloud™ Solution that uses Red Hat OpenShift and IBM Cloud Paks on IBM Power Systems by using theoretical knowledge to learn the concepts, hands-on exercises to practice the theory, and documenting these findings by way of sample scenarios.\n\nThe publication addresses topics for developers, IT architects, IT specialists, sellers, and anyone who wants to implement a Red Hat OpenShift and IBM Cloud Paks on IBM Power Systems. This book also provides technical content to transfer how-to skills to the support teams, and solution guidance to the sales team.\n\nThis book compliments the documentation that is available at IBM Knowledge Center, and also aligns with the educational offerings that are provided by the IBM Systems Software Education (SSE).\n\n#### **Authors**\n\nThis book was produced by a team of specialists from around the world working at IBM Redbooks, Austin Center.\n\n**Dino Quintero** is an IT Management Consultant and an IBM Level 3 Senior Certified IT Specialist with IBM Redbooks in Poughkeepsie, New York. Dino shares his technical computing passion and expertise by leading teams developing technical content in the areas of enterprise continuous availability, enterprise systems management, high-performance computing, cloud computing, artificial intelligence including machine and deep learning, and cognitive solutions. He also is a Certified Open Group Distinguished IT Specialist. 
Dino holds a Master of Computing Information Systems degree and a Bachelor of Science degree in Computer Science from Marist College.", - "page_start": 10, - "page_end": 10, - "source_file": "sg248459.pdf" - }, - { - "text": "Find out more about the residency program, browse the residency index, and apply online at: **ibm.com**/redbooks/residencies.html\n\n#### **Comments welcome**\n\nYour comments are important to us!\n\nWe want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:\n\n- -Use the online **Contact us** review Redbooks form found at:\n**ibm.com**/redbooks\n\n- -Send your comments in an email to:\nredbooks@us.ibm.com\n\n- -Mail your comments to:\nIBM Corporation, IBM Redbooks Dept. HYTD Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400\n\n#### **Stay connected to IBM Redbooks**\n\n- -Find us on Facebook:\nhttp://www.facebook.com/IBMRedbooks\n\n- -Follow us on Twitter:\nhttp://twitter.com/ibmredbooks\n\n- -Look for us on LinkedIn:\nhttp://www.linkedin.com/groups?home=&gid=2130806\n\n- - Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:\nhttps://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm\n\n- -Stay current on recent Redbooks publications with RSS Feeds:\nhttp://www.redbooks.ibm.com/rss.html", - "page_start": 14, - "page_end": 14, - "source_file": "sg248459.pdf" - }, - { - "text": "**Sudipto Pal** is Solution Architect for IBM Cognos® Analytics in GBS. He successfully delivered several critical deliverable with IBM clients from USA and Europe. He led Cognos administration competency and monitored several candidates. He co-authored IBM Redbooks publications about Cognos implementation with PowerVM platform. He has experience in IBM Power system for Virtualized environment setup and provisioning. He also has hands-on experience in data lake implementation by using DIP over a big data platform. 
He is based in IBM India, Kolkata. He holds Master of Computer Application and has experience in product development that uses C, C++ and Python,\n\n**Bogdan Savu** is a Cloud Infrastructure Architect at IBM Cloud Managed Application Services and works for IBM Global Technologies Services in Romania. He has over 13 years of experience in designing, developing, and implementing Cloud Computing, Virtualization, Automatization, and Infrastructure solutions. Bogdan holds a Bachelor's degree in Computer Science from the Polytechnic University of Bucharest. He is an IBM Certified Advanced Technical Expert for Power Systems, TOGAF 9 Certified, VMware Certified Professional, and Red Hat Certified Specialist in Containerized Application Development. His areas of expertise include Cloud Computing, Virtualization, DevOps, and Scripting.\n\n**Richard Wale** is a Senior IT Specialist, supporting many IBM development teams at the IBM Hursley Lab, UK. He holds a B.Sc. (Hons) degree in Computer Science from Portsmouth University, England. He joined IBM in 1996 and has been supporting production AIX systems since 1998. His areas of expertise include IBM Power Systems, PowerVM, AIX, and IBM i. He has participated in co-writing many IBM Redbooks publications since 2002.\n\nThanks to the following people for their contributions to this project:\n\n#### Wade Wallace **IBM Redbooks, Austin Center**\n\nManoj Kumar, Joe Cropper, Chuck Bryan, Keshav Ranganathan, Bruce Anthony, Bruce Semple, Reza Ghasemi, Mike Easlon **IBM USA**\n\nMiguel Angel de la Mora, Cesar Dominguez Moreno, Guillermo Hernandez Gonzalez, Arianne Navarro **IBM Guadalajara, Mexico**\n\nYenugu Madhavi **IBM India**\n\nAlfonso Jara **IBM Spain**\n\n#### **Now you can become a published author, too!**\n\nHere's an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time! 
Join an IBM Redbooks residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.", - "page_start": 13, - "page_end": 13, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "sg248459.pdf", - "query": "What does an ITMS service provide ?", - "target_page": 30, - "target_passage": "An IT Service Management (ITSM) perspective can provide automation and a global management view, and incorporate the necessary software disciplines that are required to build a solid infrastructure for an enterprise, commercial or not. ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# **Focusing on core serverless services**\n\nAWS has over 220 services.\n\nEach service is a tool in your serverless development toolbox. Commonly, you start out using some services more frequently than others. 
This topic provides an overview of the core services you need to build serverless solutions.\n\nYou can read high level explanations of the core services here, and an example of how they interact within the context of an example microservice, or you can choose to skip ahead to the hands on workshop that uses three common services to build a working microservice.\n\n# **Common serverless services**\n\nThe following diagram shows AWS services commonly used together to build serverless applications:", - "page_start": 33, - "page_end": 33, - "source_file": "serverless-core.pdf" - }, - { - "text": "### **Tip**\n\nAn IAM role is identical in function to an IAM user, with the important distinction that it is not uniquely associated with one entity, but assumable by many entities. Typically, IAM roles correspond to a job function.\n\nA loose analogy for IAM roles are that of professional uniforms: a surgeon's scrubs, a firefighter's hardhat, or a startup CTO's favorite hoodie. Many people can *assume the role* of a surgeon, firefighter, and startup CTO, which identifies them with a certain job function.\n\nOne of the most useful things about IAM roles is they can be associated not only with human entities, but also with AWS services. These types of roles are known as *service roles*. This means you can assign an IAM role directly to a service. With an IAM role assigned to the service instance, you can then associate specific IAM policies with the instance role, so that the service instance itself can access other AWS services. This is extremely useful for automation.\n\n### **Authorization - PARC**\n\nSo far we've been talking about principals. Principals represent the **authentication** component. For authorization, you will attach JSON documents called *IAM policies* to principals.\n\n#### **Principals**\n\nAs mentioned, *principals* are the entities that are allowed or denied access.\n\n#### **Actions**\n\n*Actions* are the type of access that is allowed or denied. 
Actions are commonly AWS service API calls that represent create, read, describe, list, update, and delete semantics.\n\n#### **Resources**\n\n*Resources* are the AWS resources the action will act upon.\n\nAll AWS resources are identified by an Amazon Resource Name (ARN) . Because AWS services are deployed all over the world, ARNs function like an addressing system to precisely locate a specific component. ARNs have hierarchical structures:\n\narn:partition:service:region:account-id:resource-id", - "page_start": 43, - "page_end": 43, - "source_file": "serverless-core.pdf" - }, - { - "text": "market knowledge, community relations and name recognition, and to instill their entrepreneurial drive at all levels of our operations. By furnishing the local management of such acquired companies with our Ñnancial and marketing resources and technical expertise, we believe that the acquired companies are better able to secure additional municipal franchises and other contracts.\n\n*Privatize Municipal Operations and Acquire Divested Operations.* We also seek to acquire solid waste collection operations, transfer stations and landÑlls that municipalities and other governmental authorities are privatizing. Many municipalities are seeking to outsource or sell these types of solid waste operations, as they lack the capital, technical expertise and/or operational resources necessary to comply with increasingly stringent regulatory standards and/or to compete eÅectively with privatesector companies. In addition, we have acquired, and will continue to seek to acquire, operations and facilities that may be divested by other publicly-owned waste companies.\n\n### **Operations**\n\nOur operations primarily consist of the collection, transfer and disposal of non-hazardous solid waste.\n\n*Collection Services.* We provide solid waste collection services to commercial, industrial, municipal and residential customers in 22 states through 140 collection companies. 
In 2004, 74.3% of our revenue was derived from collection services consisting of approximately 32.5% from services provided to municipal and residential customers, 36.6% from services provided to commercial customers, and 30.9% from services provided to industrial and other customers.\n\nOur residential collection operations involve the curbside collection of refuse from small containers into collection vehicles for transport to transfer stations or directly to landÑlls. Residential solid waste collection services are typically performed under contracts with municipalities, which we generally secure by competitive bid and which give our company exclusive rights to service all or a portion of the homes in their respective jurisdictions. These contracts or franchises usually range in duration from one to Ñve years, although some of our exclusive franchises are for signiÑcantly longer periods. Residential solid waste collection services may also be performed on a subscription basis, in which individual households contract directly with our company. The fees received for subscription residential collection are based primarily on market factors, frequency and type of service, the distance to the disposal facility and cost of disposal. In general, subscription residential collection fees are paid quarterly in advance by the residential customers receiving the service.\n\nIn our commercial and industrial collection operations, we supply our customers with waste containers of varying sizes. We also rent compactors to large waste generators. 
Commercial collection services are generally performed under one- to three-year service agreements, and fees are determined by such considerations as:\n\n- ' market factors,\n- ' collection frequency,\n- ' type of equipment furnished,\n- ' the type and volume or weight of the waste collected,\n- ' the distance to the disposal facility and\n- ' the cost of disposal.\n\nWe rent waste containers to construction sites and also provide waste collection services to industrial and construction facilities on a contractual basis with terms generally ranging from a single pickup to one year or longer. We collect the containers or compacted waste and transport the waste either to a landÑll or a transfer station for disposal.\n\nAlso, we currently provide recycling services in certain markets primarily to comply with local laws or obligations under our franchise agreements. These services include the curbside collection of residential recyclable waste and the provision of a variety of recycling services to commercial and industrial customers.", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "The architecture of traditional monolithic web applications tends to become more complex over time. Complexity increases ramp-up time for new developers, makes tracking down the source of bugs more challenging, and delays the delivery of new features.\n\n# **Use services instead of custom code**\n\nServerless applications usually comprise several AWS services, integrated with custom code run in Lambda functions. 
While Lambda can be integrated with most AWS services, the services most commonly used in serverless applications are:\n\n|\n| |\n\n| Category | AWS service |\n| --- | --- |\n| Compute | Lambda |\n| Data storage | Amazon S3, DynamoDB, Amazon RDS |\n| API | API Gateway |\n| Application integration | EventBridge, Amazon SNS, Amazon SQS |\n| Orchestration | Step Functions |\n| Streaming data and analytics | Amazon Data Firehose |\n\nThere are many well-established, common patterns in distributed architectures that you can build yourself or implement using AWS services. For most customers, there is little commercial value in investing time to develop these patterns from scratch. When your application needs one of these patterns, use the corresponding AWS service:\n\n#### **Common patterns and corresponding AWS services**\n\n| Pattern | AWS service |\n| --- | --- |\n| Queue | Amazon SQS |\n| Event bus | EventBridge |\n| Publish/subscribe (fan-out) | Amazon SNS |", - "page_start": 22, - "page_end": 22, - "source_file": "serverless-core.pdf" - }, - { - "text": "# **Getting started with serverless applications**\n\nCore *service starters* will quickly explain the value and technical fundamentals of each service. 
Each starter will also mention advanced topics, so you can start with the essentials, but be aware of capabilities to dive into when you need them.\n\nStarters are short reads (less than 2,300 words; 10-15 min) that connect concepts and practical hands-on use.\n\n#### **Topics**\n\n- Get started with IAM\n- Get started with Lambda\n- Get started with API Gateway\n- Get started with DynamoDB\n- Learn using a workshop\n\n# **Get started with IAM**\n\nInteractions with AWS services and resources by developers and entities require:\n\n- **Authentication**: proof that the entity requesting access is who they claim to be\n- **Authorization**: actions that are allowed or denied\n\n### **What is Identity and Access Management?**\n\nAWS provides and uses a service called Identity and Access Management (IAM) for authentication and authorization. IAM is used to manage developer accounts and secure the interaction between services and resources.\n\n#### **Warning**\n\nSecurity is an important, complex, and broad topic. Large organizations generally have specific operational procedures that developers need to follow. This guide will explain only essential concepts necessary to get started with AWS services. If in doubt, consult your IT department or the official security documentation.", - "page_start": 39, - "page_end": 39, - "source_file": "serverless-core.pdf" - }, - { - "text": "| Serverless |\n| --- |\n\n| Pattern | AWS | service |\n| --- | --- | --- |\n| Orchestration | Step | Functions |\n| API | API | Gateway |\n| Event streams | Kinesis | |\n\nThese services are designed to integrate with Lambda and you can use infrastructure as code (IaC) to create and discard resources in the services. You can use any of these services via the AWS SDK without needing to install applications or configure servers. 
Becoming proficient with using these services via code in your Lambda functions is an important step to producing well-designed serverless applications.\n\n# **Serverless development on AWS**\n\nTo build serverless solutions, you need to shift your mindset to break up monoliths into loosely connected services. Consider how each service will do one thing well, with as few dependencies as possible.\n\nYou may have created microservices before, but it was probably inside a traditional framework. Imagine if your microservice existed, but without the framework. For that to happen, services need a way to get input, communicate with other services, and send outputs or errors.\n\nThe key to serverless apps is *event-driven architecture.*\n\nEvent-driven architecture (EDA) is a modern architecture pattern built from small, decoupled services that publish, consume, or route *events.* Events are messages sent between services. This architecture makes it easier to scale, update, and independently deploy separate components of a system.\n\nThe following diagram shows an event-driven serverless microservice. A client request is converted by an API Gateway into an event that is sent to a Lambda compute service. A Lambda function retrieves info from a DynamoDB data store. That data is returned in an event to API Gateway, which sends a response to the client with all the appropriate headers, cookies, and security tokens.", - "page_start": 23, - "page_end": 23, - "source_file": "serverless-core.pdf" - }, - { - "text": "#### **Cloud engineering**\n\nIn the same line of ITSM, the application of an engineering approach on cloud infrastructures helped the clients and system administrators to integrate better and manage their day-to-day business.\n\nCloud engineering focuses on cloud services, such as SaaS, PaaS, and IaaS. 
It is a multidisciplinary method that includes the foundation of cloud, implementation, cloud development-delivery lifecycle, and management.\n\nAn orchestrator normally includes a range of technologies, products, and components, as shown in Figure 2-4.\n\n*Figure 2-4 Example of Orchestration Components*\n\nThe following cloud engineering disciplines are addressed by an orchestrator:\n\n- -Platform management\n- -Virtualization services\n- -Authentication and authorization services\n- -Resources management\n- -Disaster recovery\n- -Workload resilience\n- -Monitoring, usage, and accounting\n- -Configuration services\n- -Application lifecycle\n- -Service automation\n- -Service catalog", - "page_start": 30, - "page_end": 30, - "source_file": "sg248459.pdf" - }, - { - "text": "#### **2.1 A new computing paradigm in cloud transformation**\n\nCloud computing transformed the way that IT is managed.\n\nIn the traditional method of using services or resources, the owner of the infrastructure is responsible for managing every piece of hardware and software they use. Normally, it takes some time for a user to access a new resource, but it can be configured exactly as needed.\n\nTraditional infrastructure is often related to aging core applications (typically integrated with aging infrastructure and technologies) that cannot be easily migrated to cloud paradigms. Elasticity, standardization, and other clear cloud advantages are not sufficient reasons to migrate. In other cases, rigid security and country regulations sometimes force users to locate data nearby and under total management control.\n\n#### **2.1.1 Cloud service model**\n\nThis section describes the different cloud service models.\n\n#### **Infrastructure as a service (IaaS)**\n\nThe management responsibility for the company starts with the operating system layer and the provider ensures the availability and reliability of the infrastructure provided.\n\nSeveral use cases can benefit from this pattern. 
Companies that lack an owned data center look to IaaS as a quick, cheap infrastructure for their business initiatives that can be expanded or ended as needed. Traditional companies that need compute power to run variable workloads with less capital expenditure are perfect examples of IaaS adoption. In both cases, companies pay only for the services that they use.\n\n#### **Platform as a service (PaaS)**\n\nDevelopment companies and factories that want to implement agile methodologies are the most suited for PaaS. PaaS providers publish many services that can be used inside applications. Those services are always available and up-to-date. PaaS provides a simple way to test and prototype new applications. It can save money when developing new services and applications. Applications can be released more quickly than usual to get user feedback.\n\nThe API economy is the new paradigm in development. The cloud provides the perfect platform for its implementation.\n\n#### **Software as a service (SaaS)**\n\nAt the time of this writing, SaaS patterns are accepted by many companies that want to benefit from application usage without the need to maintain and update infrastructure and components. Mail, ERP, collaboration, and office apps are the most accepted SaaS solutions. The flexibility and elasticity of the SaaS model are great benefits.\n\nNo \"one-size-fits-all\" solution exists for cloud adoption. Companies must consider their own cost and benefit equation and then decide on the best model. Each application and process that is needed is a workload. 
A deep workload assessment is normally performed by companies that decided to move to the cloud.", - "page_start": 23, - "page_end": 23, - "source_file": "sg248459.pdf" - }, - { - "text": "Common serverless services 31", - "page_start": 34, - "page_end": 34, - "source_file": "serverless-core.pdf" - }, - { - "text": "- -INLLIBL\n- -LOG\n- -LOGCLPGM\n- -INQMSGRPY\n- -JOBMSGQMX\n- -JOBMSGQFL\n\nFor example, if you want to change the job queue that instance TEST uses, you create a job description that is called TEST in the TEST library that specifies the job queue that you want to use.\n\nTo change the run priority of the server jobs, you must add a routing entry to the subsystem. The server job is always submitted with routing data QRLMSERVER.\n\nTo change the run priority of all server jobs for all instances to 40, add the following routing entry to subsystem QSYSWRK. (You must choose a sequence number (SEQNBR) that is not already in use.)\n\n```\nADDRTGE SBSD(QSYSWRK) SEQNBR(1841) CMPVAL(QRLMSERVER) PGM(QSYS/QCMD) \nCLS(QSYS/QSYSCLS40)\n```\nAfter you make this change, you must stop and restart all of your servers.\n\n#### *Automatically starting instances*\n\nTo enable an instance to start automatically each time that the system restarts, you must add one of the commands that are described in 2.4.3, \"Starting and stopping servers\" on page 33 to your **QSTRUP** program. You can also add the commands to a job scheduler.\n\n# **2.5 Implementing a Content Manager OnDemand instance on z/OS**\n\nInstances on z/OS do not differ greatly from instances on Multiplatforms. The concept is the same. In this section, we explain how to set up a new instance and provide background information about the UNIX System Services implementation. 
Always refer to the product documentation of your release for the specific steps to follow.\n\nBefore you set up your z/OS instance of Content Manager OnDemand, understand the different types of configurations and the components that make up the Content Manager OnDemand instance. A source for determining this information is the IBM Content Manager OnDemand for z/OS - Introduction and Planning Guide, SC19-365.\n\n*Instances* are logical implementations for the separation of administration functions, users, and data on the same server. Instances have the same physical access to the program libraries, but they have different databases with a separate system log and separate file systems. Instances are typically used to separate different customers on one z/OS server to separate the test and production environments, or to use different code pages on different databases.\n\nA Content Manager OnDemand instance on a z/OS server is a separately started task (ARSSOCKx) that uses different databases, users, and application groups. Every user on the instance must be defined for the instance. Every instance has its own security if internal security is used. 
If an external security exit is used, it is common over all of the instances.\n\nFigure 2-6 on page 37 shows an overview of the single instance on z/OS.", - "page_start": 59, - "page_end": 59, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "Publicdomain.pdf", - "query": "What are the two distinct public domain tools support by Creative Commons ?", - "target_page": 1, - "target_passage": "Creative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work — on conditions of your choice. CC licenses let you change your copyright terms from the default of \"all rights reserved\" to \"some rights reserved.\"\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. 
When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n### Public domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark. Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n#### Where public domain tools fit in the copyright spectrum\n\n# The CC0 Public Domain Dedication\n\n**Use this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.**\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. 
When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\n## What is the difference between CC0 and the Public Domain Mark?\n\nCC0 (\"CC Zero\") is intended for use only by authors or holders of copyright and related rights (including database rights), in connection\n\nwith works that are still subject to those rights in one or more countries.\n\nWhen CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. Unlike CC0, PDM doesn't\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\nchange the copyright status of a work.\n\n# Public Domain Mark\n\n**Use this tool if you have identified a work that is free of known copyright restrictions.**\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. 
Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "This is a frame from \"Twenty Years of Creative Commons (in Sixty Seconds)\" by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. View full licensing and attribution information about all works included in the video on Flickr.\n\n**Creative Commons**\n\nPO Box 1866 Mountain View CA 94042 USA\n\n+1 415 429 6753 info@creativecommons.org", - "page_start": 11, - "page_end": 11, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "content repositories, like libraries, with that of AI developers. A \"books data commons\" needs to be both responsibly managed, and useful for developers of AI models.\n\nWe use \"commons\" here in the sense of a resource that is broadly shared and accessible, and thus obviates the need for each individual actor to acquire, digitize, and format their own corpus of books for AI training. This resource could be collectively and intentionally managed, though we do not mean to select a particular form of governance in this paper. 4\n\nThis paper is descriptive, rather than prescriptive, mapping possible paths to building a books data commons as defined above and key questions relevant to developers, repositories, and other stakeholders, building on our workshop discussions. We first explain why books matter for AI training and how broader access could be beneficial. We then summarize two tracks that might be considered for developing such a resource, highlighting existing projects that help foreground both the potential and challenges. 
Finally, we present several key design choices, and next steps that could advance further development of this approach.5\n\nIn this way, we do not use \"commons\" in the narrow sense of permissively licensed. What's more, this 4 resource could also be governed as more of a data \"trust,\" and, indeed, we discuss extensively the work of HathiTrust as a relevant project in this domain. However, our use of the word \"commons\" is not meant to preclude this or other arrangements.\n\nThere are, of course, a range of other types of texts that are not on the web and/or not digital at all - 5 e.g., periodicals, journals, government documents. These are out of scope for this paper, but also worthy of further analysis.", - "page_start": 2, - "page_end": 2, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "\"great colors of nature\" by marcostetter is published under Public Domain Mark 1.0.\n\n# **About Us**\n\nCreative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy. Since 2002, the CC Licenses have served as an alternative to traditional copyright, providing a simple, standardized, and legal way for individuals and institutions to freely share images, music, research, educational resources, and cultural artifacts.\n\n#### **Chief Executive Officer**\n\nAnna Tumadóttir\n\n#### **General Counsel**\n\nKat Walsh\n\n# **Board of Directors**\n\nMarta Belcher Glenn Otis Brown Delia Browne James Grimmelmann Lawrence Lessig **Emeritus* Angela Oduor Lungati Bilal Randeree Alek Tarkowski Jeni Tennison Luis Villa\n\n**Except where otherwise noted, \"Annual Report 2023\" by Creative Commons is licensed under CC BY 4.0.**", - "page_start": 1, - "page_end": 1, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "## *5. 
Examining approaches to building a books data commons*\n\nThere are many possible permutations for building a books data commons. To structure our exploration, we focused on two particular tracks, discussed below. We chose these tracks mindful of the above legal issues, and because there are already existence proofs that help to illuminate tradeoffs, challenges and potential paths forward for each.\n\n## *5a. Public domain and permissively licensed books*\n\n#### **Existing Project Example : The Pile v2** 27\n\nIn 2020, the nonprofit research group EleutherAI constructed and released The Pile — a large, diverse, open dataset for AI training. EleutherAI developed it not only to support their own training of LLMs, but also to lower the barriers for others.28\n\nAlong with data drawn from the web at large, The Pile included books from three datasets. The first dataset was the Books3 corpus referenced at the outset of this paper. The second and third books datasets were smaller: BookCorpus2, which is a collection of 17,868 books by otherwise unpublished authors; and a 28,752 books in the public domain and published prior to 1919, drawn from a volunteer effort to digitize public domain works called Project Gutenberg.\n\nAs the awareness about The Pile dataset grew, certain rightsholders began sending copyright notices to have the dataset taken down from various websites.\n\nDespite the takedown requests, the importance of books to EleutherAI and the broader community's AI research remained. In hoping to forge a path forward EleutherAI announced in 2024 that they would create a new version of the dataset, which they will call The Pile v2.29 Among other things, v2 would \"have many more books than the original Pile had, for example, and more diverse representation of non-academic non-fiction domains.\" At the same time, it would only seek to include public domain books and permissively licensed content. 
As before, this corpus focuses on English language books.\n\nThis is an illustrative example, and there are also other projects of this ilk. For instance, see the 27 Common Corpus project, which includes an array of public domain books from a number of countries, at https://huggingface.co./blog/Pclanglais/common-corpus; see also https://huggingface.co./datasets/ storytracer/internet_archive_books_en (\"This dataset contains more than 650,000 English public domain books (~ 61 billion words) which were digitized by the Internet Archive and cataloged as part of the Open Library project.\")\n\nSee Gao et al, supra note 8. 28\n\nGoldman, Sharon. \"One of the World's Largest AI Training Datasets Is About to Get Bigger and 29 \"Substantially Better.\" *VentureBeat*, 11 Jan. 2024, venturebeat.com/ai/one-of-the-worlds-largest-aitraining-datasets-is-about-to-get-bigger-and-substantially-better/. Accessed 20 Mar. 2024.", - "page_start": 12, - "page_end": 12, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "Figure 4.15 Domain and Range inferred by the reasoner\n\nIt is possible to specify more than one class as the domain or range of a property. One of the most common mistakes of new users is to do this and expect that the resulting domain/range is the union of the two classes. However, note that next to the Domain and Range in the Description view it says (intersection). This is because the semantics of having 2 or more classes as the domain or range is the *intersection* of those classes *not* the union. E.g., if one defined the domain for a property to be Pizza and then added another domain IceCream that would mean that for something to be in the domain of that property it would have to be an instance of *both* Pizza *and* IceCream not (as people often expect) the *union* of those two sets which would be *either* the class Pizza *or* the class IceCream. Also, note that the domain and range are for inferencing, they are not data integrity constraints. 
This distinction will be explained in more detail below in the section on SHACL.", - "page_start": 28, - "page_end": 28, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## *4. Copyright, Licensing, & Access to Books for Training*\n\nEven if books can be acquired, digitized, and made technically useful for AI training, the development of a books data commons would necessarily need to navigate and comply with copyright law.\n\n**Out-of-Copyright Books:** A minority of books are old enough to be in the public domain and out of copyright, and an AI developer could use them in training without securing any copyright permission. In the United States, all books published or released before 1929 are in the public domain. While use of these books provides maximal certainty for the AI developer to train on, it is worth noting that the status of whether a book is in the public domain can be difficult to determine. For instance, books released between 1929 and 1963 in the U.S. are 14 out of copyright if they were not subject to a copyright renewal; however, data on copyright renewals is not easily accessible.\n\nWhat's more, copyright definitions and term lengths vary among countries. Even if a work is in the public domain in the US, it may not be in other countries. Countries generally use the 15 life of the last living author + \"x\" years to determine the term of copyright protection. For most countries, \"x\" is either 50 years (the minimum required by the Berne Convention) or 70 years (this is the case for all member states of the European Union and for all works published in the U.S. after 1978). 
This approach makes it difficult to determine copyright terms with certainty because it requires information about the date of death of each author, which is often not readily available.\n\n**In-Copyright Books:** The vast majority of books are in copyright, and, insofar as the training process requires making a copy of the book, the use in AI training may implicate copyright law. Our workshop covered three possible paths for incorporating such works.\n\n#### **Direct licensing**\n\nOne could directly license books from rightsholders. There may be some publishers who are willing to license their works for this purpose, but it is hard to determine the scale of such access, and, in any event, there are significant limits on this approach. Along with the challenge (and expense) of reaching agreements with relevant rightsholders, there is also the practical difficulty of simply identifying and finding the rightsholder that one must negotiate\n\nFor a sense of the complexity, see e.g. Melissa Levine, Richard C. Adler. *Finding the Public Domain:* 14 *Copyright Review Management System Toolkit*. 2016, quod.lib.umich.edu/c/crmstoolkit/\n\n14616082.0001.001. Accessed 20 Mar. 2024.; Kopel, Matthew. \"LibGuides: Copyright at Cornell Libraries: Copyright Term and the Public Domain.\" guides.library.cornell.edu/copyright/publicdomain; Mannapperuma, Menesha, et al. *Is It in the Public Domain? A HANDBOOK for EVALUATING the COPYRIGHT STATUS of a WORK CREATED in the UNITED STATES*. 1923.\n\nSee e.g. Moody, Glyn. \"Project Gutenberg Blocks Access in Germany to All Its Public Domain Books 15 because of Local Copyright Claim on 18 of Them.\" *Techdirt*, 7 Mar. 2018, www.techdirt.com/ 2018/03/07/project-gutenberg-blocks-access-germany-to-all-public-domain-books-because-localcopyright-claim-18-them/. Accessed 20 Mar. 2024.", - "page_start": 8, - "page_end": 8, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# *7. 
Conclusion*\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development.41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception — it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else — independent researchers, entrepreneurs, and smaller entities — will have access. 
The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.\n\nFor other existing and past examples, one might look to the work of Europeana, https:// 41 www.europeana.eu/en, as well as the mountain of commentary on the failed class action settlement between Google, the Authors Guild, and the Association of American Publishers — see e.g. the excellent collection of court filings created by James Grimmelmann and colleagues (now archived at the Internet Archive) — https://web.archive.org/web/20140425012526/http://thepublicindex.org/. The Settlement expressly would have set up a \"Research Corpus\" for non-consumptive research. HathiTrust created a Research Center, with the intention of becoming one of the hosts for the \"Research Corpus.\" The Settlement was criticized and was ultimately rejected by the district court for both substantive reasons (that is, what the settlement would specifically do) and procedural (in the sense of violating class-action law, but also in a broader sense of representing a \"backroom deal\" without sufficient participation from impacted interests). The Research Corpus was not a core locus of critique, though it did receive concern in terms of providing too much control to Google, for example. Our purpose in mentioning this is not to relitigate the issue, but rather to call out that design decisions of this sort have been considered in the past.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "with. The vast majority of in-copyright books are out-of-print or out-of-commerce, and most are not actively managed by their rightsholders. There is no official registry of copyrighted works and their owners, and existing datasets can be incomplete or erroneous. 
16\n\nAs a result, there may be no way to license the vast majority of in-copyright books, especially those that have or have had limited commercial value. Put differently, the barrier to using 17 most books is not simply to pay publishers; even if one had significant financial resources, licensing would not enable access to most works.\n\n#### **Permissively licensed works**\n\nThere are books that have been permissively licensed in an easily identifiable way, such as works placed under Creative Commons (CC) licenses. Such works explicitly allow particular uses of works subject to various responsibilities (e.g., requiring attribution by the user in their follow-on use).\n\nWhile such works could be candidates for inclusion in a books data commons, their inclusion depends on whether the license's terms can be complied with in the context of AI training. For instance, in the context of CC licensed works, there are requirements for proper attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public Domain Mark (PDM) are not licenses and do not require attribution).18\n\nSee e.g. Heald, Paul J. \"How Copyright Makes Books and Music Disappear (and How Secondary 16 Liability Rules Help Resurrect Old Songs).\" Illinois Program in Law, Behavior and Social Science Paper No. LBSS14-07 Illinois Public Law Research Paper No. 13-54 https://doi.org/10.2139/ssrn.2290181. Accessed 4 Jan. 2020, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2290181; Rosen, Rebecca J. \"Why Are so Few Books from the 20th Century Available as Ebooks?\" *The Atlantic*, 18 Mar. 2014, www.theatlantic.com/business/archive/2014/03/why-are-so-few-books-from-the-20th-centuryavailable-as-ebooks/284486/. See also \"Google Book Search Settlement and Access to Out of Print Books.\" *Google Public Policy Blog*, publicpolicy.googleblog.com/2009/06/google-book-searchsettlement-and.html. Accessed 20 Mar. 
2024 (discussing this issue in the context of the failed classaction settlement between Google, the Authors Guild, and the Association of American Publishers). Google's final brief in the settlement proceedings notes the \"prohibitive transaction costs of identifying and locating individual Rightsholders of these largely older, out-of-print books\" — see this brief at https:// web.archive.org/web/20130112060651/http://thepublicindex.org/docs/amended_settlement/ google_final_approval_support.pdf. The Authors Guild and Association of American Publishers also justified the settlement's terms in light of the fact that \"the transaction costs involved in finding copyright owners and clearing the rights are too high\"; while they argued that most works are not truly \"orphans,\" they note that total transaction costs as a whole (including, for example, determining whether the author or publisher holds the rights and then negotiating rates) are so high as to block uses of outof-print works anyway — see this brief at https://web.archive.org/web/20130112060213/http:// thepublicindex.org/docs/amended_settlement/Supplemental_memorandum_of_law.pdf.\n\nIn the EU, the 2019 Copyright Directive introduced specific provisions on the \"use of out-of-commerce 17 works and other subject matter by cultural heritage institutions\" (Articles 8-11 CDSMD). These provisions allow cultural heritage institutions to \"make available, for non-commercial purposes, out-ofcommerce works or other subject matter permanently in their collections\". The limitation to noncommercial purposes means that works made available under these provisions would be of limited use in building a books data commons.\n\nFor one assessment of the difficulties of complying with the CC licenses in this context, to the extent 18 they are applicable, see Lee, K., A. Feder Cooper, & Grimmelmann, J. (2023). Talkin' 'Bout AI Generation: Copyright and the Generative AI Supply Chain. 
Forthcoming, *Journal of the Copyright Society* 2024. https://doi.org/10.2139/ssrn.4523551.", - "page_start": 9, - "page_end": 9, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# **Licenses and Public Domain Tools**\n\nThe first CC License was created in 2002. Today, we boast **six CC Licenses** and two public domain tools, setting a global standard for sharing.\n\n### **We've estimated that over 2.5 billion pieces of content were CC Licensed by the end of 2023.**\n\n\"The great growling engine of change - technology. Alvin Toffler\" by katerha is licensed under CC BY 2.0. Our legal and technology staff continued to make key infrastructure updates and manage daily maintenance to ensure these Licenses work for everyone.\n\n### **In 2023, we launched the Open Infrastructure Circle (OIC) to ensure consistent funding for this work.**\n\nWe're grateful to the early supporters of the OIC, including the William + Flora Hewlett Foundation, Bill & Melinda Gates Foundation, Filecoin Foundation for the Decentralized Web, Robert Wood Johnson Foundation, Chan Zuckerberg Initiative, Endless, Siegel Family Endowment, Flickr, Microsoft, and Paul and Iris Brest.", - "page_start": 3, - "page_end": 3, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - } - ] - }, - { - "references": { - "source_file": "Publicdomain.pdf", - "query": "What is Creative Commons ?", - "target_page": 1, - "target_passage": " Creative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "This is a frame from \"Twenty Years of Creative Commons (in Sixty Seconds)\" by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. 
View full licensing and attribution information about all works included in the video on Flickr.\n\n**Creative Commons**\n\nPO Box 1866 Mountain View CA 94042 USA\n\n+1 415 429 6753 info@creativecommons.org", - "page_start": 11, - "page_end": 11, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work — on conditions of your choice. CC licenses let you change your copyright terms from the default of \"all rights reserved\" to \"some rights reserved.\"\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n### Public domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark. 
Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n#### Where public domain tools fit in the copyright spectrum\n\n# The CC0 Public Domain Dedication\n\n**Use this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.**\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\n## What is the difference between CC0 and the Public Domain Mark?\n\nCC0 (\"CC Zero\") is intended for use only by authors or holders of copyright and related rights (including database rights), in connection\n\nwith works that are still subject to those rights in one or more countries.\n\nWhen CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. 
Unlike CC0, PDM doesn't\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\nchange the copyright status of a work.\n\n# Public Domain Mark\n\n**Use this tool if you have identified a work that is free of known copyright restrictions.**\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "content repositories, like libraries, with that of AI developers. A \"books data commons\" needs to be both responsibly managed, and useful for developers of AI models.\n\nWe use \"commons\" here in the sense of a resource that is broadly shared and accessible, and thus obviates the need for each individual actor to acquire, digitize, and format their own corpus of books for AI training. This resource could be collectively and intentionally managed, though we do not mean to select a particular form of governance in this paper. 4\n\nThis paper is descriptive, rather than prescriptive, mapping possible paths to building a books data commons as defined above and key questions relevant to developers, repositories, and other stakeholders, building on our workshop discussions. 
We first explain why books matter for AI training and how broader access could be beneficial. We then summarize two tracks that might be considered for developing such a resource, highlighting existing projects that help foreground both the potential and challenges. Finally, we present several key design choices, and next steps that could advance further development of this approach.5\n\nIn this way, we do not use \"commons\" in the narrow sense of permissively licensed. What's more, this 4 resource could also be governed as more of a data \"trust,\" and, indeed, we discuss extensively the work of HathiTrust as a relevant project in this domain. However, our use of the word \"commons\" is not meant to preclude this or other arrangements.\n\nThere are, of course, a range of other types of texts that are not on the web and/or not digital at all - 5 e.g., periodicals, journals, government documents. These are out of scope for this paper, but also worthy of further analysis.", - "page_start": 2, - "page_end": 2, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "\"great colors of nature\" by marcostetter is published under Public Domain Mark 1.0.\n\n# **About Us**\n\nCreative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy. 
Since 2002, the CC Licenses have served as an alternative to traditional copyright, providing a simple, standardized, and legal way for individuals and institutions to freely share images, music, research, educational resources, and cultural artifacts.\n\n#### **Chief Executive Officer**\n\nAnna Tumadóttir\n\n#### **General Counsel**\n\nKat Walsh\n\n# **Board of Directors**\n\nMarta Belcher Glenn Otis Brown Delia Browne James Grimmelmann Lawrence Lessig **Emeritus* Angela Oduor Lungati Bilal Randeree Alek Tarkowski Jeni Tennison Luis Villa\n\n**Except where otherwise noted, \"Annual Report 2023\" by Creative Commons is licensed under CC BY 4.0.**", - "page_start": 1, - "page_end": 1, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# **A Note from Leadership**\n\nCC staff photos are licensed under CC BY 4.0.\n\n2023 was a busy year at Creative Commons. Our **Open Culture** program and **Open Climate Campaign** entered their third and second years, respectively. We hosted our first in-person CC Global Summit since 2019 in Mexico City. We held critical consultations and open panels on AI, copyright, and the CC Licenses, cultural heritage, education, and science; and we launched our **Open Infrastructure Circle** in an effort to ensure the CC Licenses are funded well into the future.\n\nWe also marked transitions in leadership. At the end of December, Catherine Stihler concluded her time as Chief Executive Officer (CEO) at Creative Commons, and I transitioned in as Interim. In March 2024, I was appointed CC's permanent CEO. 
I look forward to working closely with our Board of Directors, staff, and larger community on **the critical work that awaits us in 2024**.\n\n**Anna Tumadóttir, CEO**", - "page_start": 2, - "page_end": 2, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "# *Acknowledgements*\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus Strategies) in collaboration with Creative Commons.\n\nWe are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/ NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. All mistakes or errors are the authors'.\n\nThis report is published under the terms of the Creative Commons Attribution License.", - "page_start": 21, - "page_end": 21, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# *7. Conclusion*\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. 
For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development.41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception — it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else — independent researchers, entrepreneurs, and smaller entities — will have access. The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.\n\nFor other existing and past examples, one might look to the work of Europeana, https:// 41 www.europeana.eu/en, as well as the mountain of commentary on the failed class action settlement between Google, the Authors Guild, and the Association of American Publishers — see e.g. 
the excellent collection of court filings created by James Grimmelmann and colleagues (now archived at the Internet Archive) — https://web.archive.org/web/20140425012526/http://thepublicindex.org/. The Settlement expressly would have set up a \"Research Corpus\" for non-consumptive research. HathiTrust created a Research Center, with the intention of becoming one of the hosts for the \"Research Corpus.\" The Settlement was criticized and was ultimately rejected by the district court for both substantive reasons (that is, what the settlement would specifically do) and procedural (in the sense of violating class-action law, but also in a broader sense of representing a \"backroom deal\" without sufficient participation from impacted interests). The Research Corpus was not a core locus of critique, though it did receive concern in terms of providing too much control to Google, for example. Our purpose in mentioning this is not to relitigate the issue, but rather to call out that design decisions of this sort have been considered in the past.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## *1. Introduction*1\n\nWhile the field of artificial intelligence research and technology has a long history, broad public attention grew over the last year in light of the wide availability of new generative AI systems, including large language models (LLMs) like GPT-4, Claude, and LLaMA-2. These tools are developed using machine learning and other techniques that analyze large datasets of written text, and they are capable of generating text in response to a user's prompts.\n\nWhile many large language models rely on website text for training, books have also played an important role in developing and improving AI systems. 
Despite the widespread use of ebooks and growth of sales in that market, books remain difficult for researchers and entrepreneurs to access at scale in digital form for the purposes of training AI.\n\nIn 2023, multiple news publications reported on the availability and use of a dataset of books called \"Books3\" to train LLMs.2 The Books3 dataset contains text from over 170,000 books, which are a mix of in-copyright and out-of-copyright works. It is believed to have been originally sourced from a website that was not authorized to distribute all of the works contained in the dataset. In lawsuits brought against OpenAI, Microsoft, Meta, and Bloomberg related to their LLMs, the use of Books3 as training data was specifically cited.3\n\nThe Books3 controversy highlights a critical question at the heart of generative AI: what role do books play in training AI models, and how might digitized books be made widely accessible for the purposes of training AI? What dataset of books could be constructed and under what circumstances?\n\nIn February 2024, Creative Commons, Open Future and Proteus Strategies convened a series of workshops to investigate the concept of a responsibly designed, broadly accessible dataset of digitized books to be used in training AI models. Conducted under the Chatham House Rule, we set out to ask if there is a possible future in which a \"books data commons for AI training\" might exist, and what such a commons might look like. The workshops brought together practitioners on the front lines of building next-generation AI models, as well as legal and policy scholars with expertise in the copyright and licensing challenges surrounding digitized books. Our goal was also to bridge the perspective of stewards of\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus 1 Strategies) in collaboration with Creative Commons. 
We are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. All mistakes or errors are the authors'.\n\nSee e.g. Knibbs, Kate. \"The Battle over Books3 Could Change AI Forever.\" *Wired*, 4 Sept. 2023, 2 www.wired.com/story/battle-over-books3/.\n\nFor key documents in these cases, see the helpful compendium at \"Master List of Lawsuits v. AI, 3 ChatGPT, OpenAI, Microsoft, Meta, Midjourney & Other AI Cos.\" *Chat GPT Is Eating the World*, 27 Dec. 2023, chatgptiseatingtheworld.com/2023/12/27/master-list-of-lawsuits-v-ai-chatgpt-openai-microsoftmeta-midjourney-other-ai-cos. See also \"Fair Use Week 2024: Day Two with Guest Expert Brandon Butler.\" *Fair Use Week*, sites.harvard.edu/fair-use-week/2024/02/26/fair-use-week-2024-day-two-withguest-expert-brandon-butler/. Accessed 20 Mar. 2024 (arguing that use of this dataset is not consequential for the fair use analysis).", - "page_start": 1, - "page_end": 1, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# **Our Impact**\n\nCC believes that opening up knowledge is key to addressing the world's most pressing challenges. Today, we steer campaigns, programming, and training in many areas:\n\n### **Open Culture**\n\n2023 was quite a year for the CC Open Culture Program, thanks to generous funding from **Arcadia**. We grew our Open Culture team from one to two and a half staff, rolling out new initiatives like TAROC (Towards a Recommendation on Open Culture) and **Open Culture Live: A Webinar Series**. 
We invite you to read \"**What did Creative Commons do for Open Culture in 2023?**\" to learn more.\n\n### **Open Journalism**\n\nThanks to generous funding from the **John D. and Catherine T. MacArthur Foundation**, CC hosted its very first Open Journalism track at the CC Global Summit, including eight presentations, lightning talks, panel discussions, and workshops as well as a **keynote by Anya Kamenetz**.\n\nRepresentatives from 33 news outlets and digital rights-focused organizations attended the CC Summit sessions. The Open Journalism track built on **numerous collaborations and workshops** throughout 2023.\n\n### **Open Education**\n\nWe delivered workshops and presentations on CC Licenses and Open Educational Resources at over 16 conferences and events. The CC Open Education Platform also funded six global projects, **including work to advance the UNESCO Recommendation on OER.**\n\n\"Follow the Color Brick Road\" by Bert Kaufmann is licensed under CC BY-SA 2.0.", - "page_start": 6, - "page_end": 6, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "different rightsholders and authors. Managing opt-outs for so many different interests within one book may get overly complicated very fast.\n\nIn any event, creating an opt-out system will need some ways of authenticating whether someone has the relevant authority to make choices about inclusion of a work.\n\n## *Who would get to use the books data commons? For what?*\n\nA commons might be made publicly available to all, as has been done with datasets like The Pile. Another possible design choice is to restrict access only to authorized users and to enforce particular responsibilities or obligations in return for authorization. Three particular dimensions of permitted uses and users came up in our discussions:\n\n- **Defining and ensuring acceptable and ethical use:** Participants discussed to what extent restrictions should be put on use of the resource. 
In the case of HathiTrust, acceptable use is implicitly ensured by limiting access to researchers from member institutions; other forms of \"gated access\" are possible, allowing access only to certain types of users and for certain uses. One can imagine more fine-grained 39 mechanisms, based on a review of the purpose for which datasets are used. This imagined resource could become a useful lever to demand responsible development and use of AI; alongside \"sticks\" like legal penalties, this would be a \"carrot\" that could incentivize good behavior. At the same time, drawing the lines around, let alone enforcing, \"good behavior\" would constitute a significant challenge.\n- **Charging for use to support sustainability of the training corpus itself:** While wanting to ensure broad access to this resource, it is important to consider economic sustainability, including support for continuing to update the resource with new works and appropriate tooling for AI training. Requiring some form of payment to use the resource could support sustainability, perhaps with different requirements for different types of users (e.g., differentiating between non-commercial and commercial users, or high-volume, well-resourced users and others).40\n- **Ensuring benefits of AI are broadly shared, including with book authors or publishers:** The creation of a training resource might lower barriers to the development of AI tools, and in that way support broadly shared benefits by facilitating greater competition and mitigating concentration of power. On the other hand, just as concentration of technology industries is already a significant challenge, AI might not look much different, and the benefits of this resource may still simply go to a few large firms in \"winner takes all-or-most\" markets. 
The workshops discussed how, for instance, large commercial users might be expected to contribute to a fund that supported contributors of training data, or more generally to fund writers, to ensure everyone contributing to the development of AI benefits.\n\nFor examples of gated access to AI models, see https://huggingface.co./docs/hub/en/models-gated. 39\n\nAs an analogy, consider for instance Wikimedia Enterprise, which \"build[s] services for high-volume 40 commercial reusers of Wikimedia content\" and charges for that access. https://meta.wikimedia.org/ wiki/Wikimedia_Enterprise.", - "page_start": 18, - "page_end": 18, - "source_file": "creative_common_ai.pdf" - } - ] - }, - { - "references": { - "source_file": "Publicdomain.pdf", - "query": "How to apply the PDM to my work ?", - "target_page": 1, - "target_passage": "Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# **10.6 Submit inventory (PM)**\n\nThis section describes on how the PM submits the inventory by selecting tables for the general submission after being approved by the NFP (See section 10.5).\n\n# **10.6.1 Submit select tables for preparing the general submission**\n\n- 1. Log in as PM.\n- 2. Click on \"View Inventories Progress\" under sub menu \"Submission Management\".\n- 3. The \"View Inventories Progress\" screen appears.\n- 4. Select the appropriate inventory by clicking the box under column \"Working inventory\" (figure 68, a). *** Note: The selected inventory year to be submitted should be in status \"approved\" (figure 68, b).\n- 5. Click on \"Work on Inventories\" under Submission Management (figure 68, c).\n- This opens the Submit Inventory initial screen (figure 69).\n- 6. 
Click the inventory year to be submitted (figure 69, a).\n- 7. Press the \"Generate Official Submission\" button (figure 69, c).\n\n### *Figure 68. View Inventories Progress screen – select inventory for the preparation for the general submission*\n\n### *Figure 69. Submit select tables for the preparation for the general submission*", - "page_start": 41, - "page_end": 41, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# **10 Submission management**\n\n# **10.1 Workflow**\n\nCreating and preparing an inventory, generating tables for checking by the NFP and approving and/or rejecting submission, follows a number of steps known collectively as a workflow. This chapter describes the workflow relating to the submission of the GHG inventory/(ies), which users should follow to create, prepare, and send GHG inventories for internal checking, and approval/rejection of the submission by the NFP, within the NAIIS web application (figure 52).\n\n### *Figure 52: Non-Annex I Inventory Software workflow*\n\n# **10.2 Start of inventory/submission (NFP or PM)**\n\nThis procedure allows the NFP or PM to start a new (created) inventory. The existing data for the inventory year identified will be made available in the new inventory/submission.\n\nThese are the steps to start a new inventory:\n\n- 1. Click on \"View Inventories Progress\" under sub menu \"Submission Management\" (figure 53).\n### *Figure 53. View Inventories Progress sub menu*\n\n- 2. The \"View Inventories Progress\" screen appears (figure 54).\n- 3. 
Select the appropriate inventory by clicking the box under column \"Working Inventory\" (figure 54, a).\n\n*** Note: The selected appropriate inventory should be in status \"created\" (figure 54, b)", - "page_start": 34, - "page_end": 34, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "### THE PURPOSE OF A RESIGNATION LETTER:\n\nThe purpose of a resignation letter is to give your employer official notice that you will be leaving the organisation. However, it is usually appropriate to inform your manager of your intention to resign in person, and then to follow up your conversation with the formal resignation letter.\n\nWhat to include:\n\nYour resignation letter should be short and to the point. Keep it positive and professional – this is not the place to voice your dissatisfaction with your job.\n\nIn your letter, you should make sure that you include the following:\n\n#### 1. A clear statement of your intention to resign.\n\nExample:\n\n\"Please accept this letter as formal notice of my resignation from my post as Assistant IT Manager at XYZ.\"\n\n### 2.\n\n### Reference to your notice period (where applicable), as well as your last working day with the organisation.\n\nExample:\n\n\"My last working day will be in two weeks' time, on 31 August 2015.\"\n\n#### 3.\n\n#### Your reason for leaving.\n\nYou don't need to elaborate on this if you don't want to. Remember to keep it positive, and not to make any rude, offensive or insulting remarks about the organisation or your co- workers, no matter how tempting it might be.", - "page_start": 48, - "page_end": 48, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# TABLE OF CONTENTS:\n\n- 1. General Language Tips to Get You Started\n- 2. Parts of Speech\n- 3. Punctuation\n- 4. Commonly Confused Words and Phrases\n- 5. Tips for Filling in Your College Registration Form\n- 6. Learn How to Summarise Your Study Material\n- 7. How to Ask for Help from Your Tutor\n- 8. 
Tips for Completing Your Written Assignments\n- 9. Tips for Answering Exam Questions\n- 10. Language Skills at Work How to Write a Cover Letter\n- 11. Language Skills at Work How to Write a Resignation Letter\n- 12. Language Skills at Work Sending E-mails to Your Colleagues", - "page_start": 2, - "page_end": 2, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "#### First Paragraph\n\nIntroduce yourself, and explain why you are writing the letter. If you are responding to a job advertisement, state which advertisement you are responding to, and indicate where you found it.\n\n#### For example:\n\n\"I would like to apply for the position of Graphic Designer, as advertised in the Career Times on 1 March 2015.\"\n\nIf possible, mention a mutual contact or acquaintance.\n\nFor example:\n\n\"Samantha Stevens mentioned that you are looking for an experienced Graphic Designer with a keen interest in the fashion industry.\"\n\n#### Second Paragraph\n\nMention your qualifications, skills and experience, and relate them to the needs of the company. Give relevant examples of how you have used your skills in the past to perform similar tasks and responsibilities to those set out in the job description.\n\n#### Third Paragraph\n\nExplain why you want to work for this organisation in particular. Where relevant, explain any gaps in your CV. 
If you don't have the required academic qualifications, for example, you can explain how your practical work experience makes up for it.\n\n#### Fourth paragraph\n\nMention any documents or attachments that you have included with your cover letter, and state your availability for an interview.\n\n#### Close\n\nThank the recipient for taking the time to read your letter, and sign off with a professional greeting, such as \"Yours sincerely\" or \"Kind regards\", followed by your full name, telephone number and e-mail address.", - "page_start": 46, - "page_end": 46, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## A Summary of the Registration Process at Oxbridge Academy\n\n#### SEND YOUR REGISTRATION FORM\n\nSend your registration form to the registrations office at Oxbridge Academy via one of the following channels:\n\nFax: 086 262 5550 Post: PO Box 12723, Die Boord, 7613 E-mail: registrar@oxbridgeacademy.co.za\n\n#### FILL IN THE REGISTRATION FORM\n\n**2**\n\nThe registration form follows an easy-to-complete four step layout.\n\n#### IF YOU ARE REGISTERING FOR an ICB, or NATED COURSE\n\nmake sure to indicate your preferred exam centre.\n\n**3**\n\nAs soon as your details have been captured on our system you will receive confirmation of your registration via e-mail or SMS\n\n#### ATTACH THE FOLLOWING DOCUMENTS **6**\n\n- 1. Copy of your ID\n- 2. Proof of highest grade passed\n- 3. Proof of other qualifications\n- 4. 
Proof of payment\n\n**5**\n\n#### IF YOU ARE UNDER 18, OR IF YOU ARE UNEMPLOYED\n\nmake sure that your parent/guardian/guarantor signs the form.\n\n**4**\n\nPAY YOUR REGISTRATION FEE", - "page_start": 26, - "page_end": 26, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### PLEASE REMEMBER TO ATTACH THE FOLLOWING DOCUMENTS TO YOUR REGISTRATION FORM:\n\nA copy of your ID\n\nProof of your highest grade passed\n\nProof of any other relevant qualifications you have obtained", - "page_start": 23, - "page_end": 23, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# **Applications in the application group window**\n\nIf the report that you define is one of several reports that will be stored in the same application group, you can use the report wizard (Figure 3-18) to define the following information for the report:\n\n- - The database field that contains the values that identify an application within the application group\n- -The folder field that users use to search a specific application\n- -The length of the application ID field\n\nIf you select Document Size, Content Manager OnDemand adds a field to the application group and folder. Content Manager OnDemand stores the size of the document in the application group field when data is loaded.\n\nIf you select Page Count, Content Manager OnDemand adds a field to the application group and folder. Content Manager OnDemand stores the number of pages in the document in the application group field when data is loaded.\n\nYou must provide the folder names for both fields (Document Size and Page Count). You do not need to specify the names for the application group fields because they are predefined.\n\n| Application Wizard | x | | |\n| --- | --- | --- | --- |\n| Application Group Name: | My Credit Card Statements | | |\n| What do you want to name the Application? | Summary Reports | | |\n| What description do you want to use? 
| | | |\n| What Application Identifier do you want to use? | 1 | | |\n| Cancel | Help | < Back | Next > |\n\nFigure 3-18 Application group\n\n# **Enhanced Retention Management and Interoperate with IBM FileNet P8 Platform window**\n\nIn this window (Figure 3-19 on page 64), you can configure the application group to work with the following features:\n\n- -Enhanced Retention Management feature of Content Manager OnDemand\n- -Interoperability between Content Manager OnDemand and IBM FileNet® P8 Platform", - "page_start": 86, - "page_end": 86, - "source_file": "sg246915.pdf" - }, - { - "text": "The file must also include valid paths for the Tivoli Storage Manager options files and all of the Tivoli Storage Manager components that are used. The parameters in the file are used to reference the first Tivoli Storage Manager Server. A single object server can reference multiple Tivoli Storage Manager servers.\n\n| ###################################################### |\n| --- |\n| # Storage Manager Parameters (Library/Object Server) # |\n| ###################################################### |\n| # |\n| # Storage Manager for OnDemand to use |\n| # |\n| ARS_STORAGE_MANAGER=TSM |\n| ####################################### |\n| # TSM Parameters (Object Server Only) # |\n| ####################################### |\n| DSMSERV_DIR=/usr/tivoli/tsm/server/bin |\n| DSMSERV_CONFIG=/usr/tivoli/tsm/server/bin/dsmserv.opt |\n| DSM_DIR=/usr/tivoli/tsm/client/api/bin64 |\n| DSM_CONFIG=/usr/tivoli/tsm/client/api/bin64/dsm.opt |\n| DSM_LOG=/tmp |\n| DSMG_DIR=/usr/tivoli/tsm/client/api/bin64 |\n| DSMG_CONFIG=/usr/tivoli/tsm/client/api/bin64/dsm.opt |\n| DSMG_LOG=/tmp |\n| DSMI_DIR=/usr/tivoli/tsm/client/api/bin64 |\n| DSMI_CONFIG=/usr/tivoli/tsm/client/api/bin64/dsm.opt |\n| DSMI_LOG=/tmp |\n\nFigure 5-3 ARS.CFG file for Tivoli Storage Manager configuration\n\n**Note:** For the Tivoli Storage Manager client that is used by Content Manager OnDemand, set COMPRESSION NO in the Tivoli 
Storage Manager client option file (dsm.opt for Windows or dsm.sys for UNIX). Content Manager OnDemand objects are compressed before they are sent to Tivoli Storage Manager for archival; therefore, compression by Tivoli Storage Manager is not required.", - "page_start": 117, - "page_end": 117, - "source_file": "sg246915.pdf" - }, - { - "text": "### STEP 4 – PAY YOUR REGISTRATION FEE AND SEND IN YOUR FORM\n\n| Registration fee payable upon registration either by cheque, postal order, bank deposit, electronic transfer or ATM deposit. Enclose the registration fee when submitting this form and we will send you a Welcome Pack that includes your 1st Study Unit, Success Study Guide and Student card. International students will be required to pay a deposit of R2400. | |\n| --- | --- |\n| * Attach proof of payment | |\n| IF YOU ARE: (A) YOUNGER THAN 18 YEARS OR (B) UNEMPLOYED | |\n| Parent/Guarantor Details | |\n| I approve and confirm this application. | |\n| Name: | Relation to student: |\n| ID No: | |\n| Cell No: | |\n| Home No: | Parent/Guardian/Guarantor Signature: |\n\nDifferent courses have different registration fees. Please check the course fees list (www.oxbridgeacademy.co.za/Documents/ Price-list-2015.pdf) to find out how much you need to pay to register for your chosen course, and pay this amount using the banking details provided at the bottom of the registration form. Remember to attach your proof of payment.\n\nIf you are under the age of 18, your parent or guardian will need to sign this section of the form to state that they are aware of your registration with Oxbridge Academy, and that they do not have any objections. If you are unemployed, you will need a guarantor to sign this section of the form. 
Your parent or guarantor will be held responsible if you miss any of your payments in relation to your course fees.", - "page_start": 25, - "page_end": 25, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "wikipedia4.pdf", - "query": "Which rivers flow through Lyon?", - "target_page": 1, - "target_passage": "It is located at the confluence of the rivers Rhône and Saône, ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# **Lyon**\n\n**Lyon**[c] (Franco-Provençal: *Liyon*) is the second-largest city in France by urban area and the third largest by city limits.[14] It is located at the confluence of the rivers Rhône and Saône, to the northwest of the French Alps, 391 km (243 mi) southeast of Paris, 278 km (173 mi) north of Marseille, 113 km (70 mi) southwest of Geneva, Switzerland, 58 km (36 mi) northeast of Saint-Étienne.\n\nThe City of Lyon had a population of 522,250 at the Jan. 2021 census within its small municipal territory of 48 km2 (19 sq mi),[15] but together with its suburbs and exurbs the Lyon metropolitan area had a population of 2,308,818 that same year, [7] the second most populated in France. Lyon and 58 suburban municipalities have formed since 2015 the Metropolis of Lyon, a directly elected metropolitan authority now in charge of most urban issues, with a population of 1,424,069 in 2021.[16] Lyon is the prefecture of the Auvergne-Rhône-Alpes region and seat of the Departmental Council of Rhône (whose jurisdiction, however, no longer extends over the Metropolis of Lyon since 2015).\n\nThe capital of the Gauls during the Roman Empire, Lyon is the seat of an archbishopric whose holder bears the title of Primate of the Gauls. Lyon became a major economic hub during the Renaissance. 
The city is recognised for its cuisine and gastronomy, as well as historical and architectural landmarks; as such, the districts of Old Lyon, the Fourvière hill, the Presqu'île and the slopes of the Croix-Rousse are inscribed on the UNESCO World Heritage List. Lyon was historically an important area for the production and weaving of silk. Lyon played a significant role in the history of cinema since Auguste and Louis Lumière invented the cinematograph there. The city is also known for its light festival, the Fête des lumières, which begins every 8 December and lasts for four days, earning Lyon the title of \"Capital of Lights\".\n\nEconomically, Lyon is a major centre for banking, chemical, pharmaceutical and biotech industries. The city contains a significant software industry with a particular focus on video games; in recent years it has fostered a growing local start-up sector. [17] The home of renowned universities and higher education schools, Lyon is the second-largest student city in France, with a university population of nearly 200,000 students within the Metropolis of Lyon.[18] Lyon hosts the international headquarters of Interpol, the International Agency for Research on Cancer, as well as Euronews. According to the Globalization and World Rankings Research Institute, Lyon is considered a Beta city, as of 2018. [19] It ranked second in France and 40th globally in Mercer's 2019 liveability rankings. 
[20]\n\n## **History**\n\ncompanion\")\n\n**Location of Lyon**\n\n[b]\n\n**Toponymy**", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- Bellecour, Écoles D'Arts.\n## **Primary and secondary schools**\n\nThere are some international private schools in the Lyon area, including:\n\n- Cité Scolaire Internationale de Lyon or the Lycée de Gerland;\n\t- Includes the *Section Japonaises* (リヨン・ジェルラン補習授業校 *Riyon Jeruran Hoshū Jugyō Kō* \"Lyon Gerland Japanese Supplementary School\"), which the Japanese Ministry of Education (MEXT) counts as a part-time Japanese supplementary school[73]\n- Ombrosa;\n- International School of Lyon in nearby Sainte-Foy-lès-Lyon;\n- Montessori School of Lyon.\n\n## **Supplementary education**\n\nOther Japanese supplementary schools:\n\n- The *Association Pour le Développement de la Langue et de la Culture Japonaises* (ADLCJ; リヨン補習授業校 *Riyon Hoshū Jugyō Kō*) is held in the *Maison Berty Albrecht* in Villeurbanne, near Lyon.[73] It was formed in 1987.[74] It serves Japanese expatriate children who wish to continue their Japanese education whilst abroad.\n# **Transport**\n\nLyon–Saint-Exupéry Airport, located east of Lyon, serves as a base for domestic and international flights. It is a key transport facility for the entire Rhône-Alpes region, with coach links to other cities in the area. The in-house train station Gare de Lyon Saint-Exupéry connects the airport to the nationwide TGV network. The Rhônexpress tram monopoly links the airport with the business quarter of La Part Dieu in less than 30 minutes, and offers connections with Underground A & B, Tramway T1, T3 & T4, and bus lines. Lyon public transport Sytral offers a bus service, Route 47, that links the airport to Meyzieu[75] where passengers can change onto Tram T3. The regular price of public transport is €1.90, as opposed to €15 one way for the Rhonexpress. 
In the suburb of Bron, the smaller Lyon-Bron Airport provides an alternative for domestic aviation.\n\nLyon has two major railway stations: Lyon-Part-Dieu, which was built to accommodate the TGV, and Lyon Perrache, an older station that now provides mostly regional service. Smaller railway stations include Gorge-de-Loup, Vaise, Saint-Paul and Jean Macé. Lyon was the first city to be connected to Paris by the TGV in 1981.[76] Since that time the TGV train network has expanded and links Lyon directly to Perpignan, Toulouse, Nice, Marseille, Strasbourg, Nantes and Lille. International trains operate directly to Madrid, Barcelona, Milan, Turin, Geneva, Frankfurt, Luxembourg, Brussels and London.\n\nThe city is at the heart of a dense road network and is located at the meeting point of several highways: A6 to Paris, A7 Marseille, A42 to Geneva, and A43 to Grenoble. The city is now bypassed by the A46. A double motorway tunnel passes under Fourvière, connecting the A6 and the A7 autoroutes, both forming the \"Autoroute du Soleil\".\n\nLyon 3: Berges du Rhône campus\n\nLyon 2: Berges du Rhône campus\n\nIPSA Lyon Campus\n\nPlatform I, Lyon-Part-Dieu train station\n\nT1 tramway on the Raymond Barre bridge", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- 31. Braudel 1984 p. 327\n- 32. Pierre Edmond DESVIGNES. \"Quartier renaissance Lyon : Vieux Lyon, quartier ancien et secteur sauvegarde Lyon\" (https://web.archive.org/web/20110119152753/http://www.vieux-lyon.org/lyon-epoque-renaissance_f01 150.htm). Vieux-lyon.org. Archived from the original (http://www.vieux-lyon.org/lyon-epoque-renaissance_f011 50.htm) on 19 January 2011. Retrieved 3 April 2011.\n- 33. \"CHRD Lyon\" (https://web.archive.org/web/20110124140355/http://www.chrd.lyon.fr/chrd/sections/fr/pied/engli sh_1). *Chrd.lyon.fr*. 2017. Archived from the original (http://www.chrd.lyon.fr/chrd/sections/fr/pied/english_1) on 24 January 2011. Retrieved 21 December 2017.\n- 34. 
Cosgrove, Michael (4 June 2009). \"Lyon: The Resistance and Deportation Museum\" (http://www.digitaljournal. com/article/273644). *Digitaljournal.com*.\n- 35. (in French) Georges Duby (ed), *Histoire de la France : Dynasties et révolutions, de 1348 à 1852* (vol. 2), Larousse, 1999 p. 53 ISBN 2-03-505047-2\n- 36. \"Lyon, France: Local Transport\" (http://www.lonelyplanet.com/france/burgundy-and-the-rhone/lyon/transport/g etting-around/local-transport). Lonely Planet. Retrieved 2 February 2017.\n- 37. \"Historic Site of Lyon\" (https://whc.unesco.org/en/list/872/). *unesco.org*. UNESCO World Heritage Centre. Retrieved 31 July 2015.\n- 38. Gregory, Stanley. \"Climatic Classification and Climatic Change (Klimaklassifikation Und Klimaänderung) (http s://www.jstor.org/stable/25636095).\" *Erdkunde*, vol. 8, no. 4, 1954, pp. 246–252. *JSTOR.*\n- 39. \"Données climatiques de la station de Lyon: Relevés de 2016 Lyon\" (https://web.archive.org/web/20161004 055201/http://www.meteofrance.com/climat/france/lyon/69029001/releves) (in French). Meteo France. Archived from the original (http://www.meteofrance.com/climat/france/lyon/69029001/releves) on 4 October 2016. Retrieved 2 October 2016.\n- 40. \"Lyon-Bron (69)\" (https://donneespubliques.meteofrance.fr/FichesClim/FICHECLIM_69029001.pdf) (PDF). *Fiche Climatologique: Statistiques 1991–2020 et records* (in French). Meteo France. Retrieved 14 July 2022.\n- 41. \"Température et records en Août pour Lyon\" (https://www.meteo-lyon.net/records/mois/aout). *meteo-lyon.net* (in French). Météo Villes. Retrieved 7 September 2023.\n- 42. \"Lyon–Bron (07480) WMO Weather Station\" (ftp://ftp.atdd.noaa.gov/pub/GCOS/WMO-Normals/TABLES/RE G_VI/FR/07480.TXT). NOAA. Retrieved 8 February 2019. Archived (https://archive.org/details/19611990Norm alsNOAALyonBron) 8 February 2019, at the Wayback Machine\n- 43. 
\"Normes et records 1961–1990: Lyon-Bron (69) altitude 198m\" (https://web.archive.org/web/201603032035 26/http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) (in French). Infoclimat. Archived from the original (http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) on 3 March 2016. Retrieved 8 February 2019.\n- 44. \"St-Irénée France\" (http://www.sacred-destinations.com/france/lyon-eglise-st-irenee). *sacreddestinations.com*.\n- 45. \"Discover the Musée Miniature et Cinéma in Lyon | Unique in Europe\" (https://www.museeminiatureetcin ema.fr/en/). *Musée Miniature et Cinéma*.\n- 46. OECD. \"City statistics : Economy\" (https://stats.oecd.org/Index.aspx?datasetcode=FUA_CITY). Retrieved 16 January 2023.\n- 47. \"Le laboratoire P4, ménagerie virale\" (https://wayback.archive-it.org/all/20090606013924/http://www.lemonde. fr/planete/article/2009/06/05/le-laboratoire-p4-menagerie-virale_1202866_3244.html). *Le Monde*. France. Archived from the original (http://www.lemonde.fr/planete/article/2009/06/05/le-laboratoire-p4-menagerie-viral e_1202866_3244.html) on 6 June 2009. Retrieved 8 July 2009.\n- 48. \"Official site of Lyon\" (https://web.archive.org/web/20100424192931/http://www.grandlyon.com/La-Part-Dieu.2 315.0.html). Grandlyon.com. Archived from the original (http://www.grandlyon.com/La-Part-Dieu.2315.0.html) on 24 April 2010. Retrieved 3 April 2011.\n- 49. Jean-Baptiste Onofrio : *Essai d'un glossaire des patois de Lyonnais, Forez et Beaujolais*, Lyon 1864\n- 50. \"Pierre Alain Muet Archives 2008\" (https://web.archive.org/web/20100124093221/http://pa-muet.com/archives. htm). Pa-muet.com. 17 June 2008. Archived from the original (http://pa-muet.com/archives.htm) on 24 January 2010. Retrieved 25 January 2010.\n- 51. \"Bottazzi fait le mur\" (https://web.archive.org/web/20071125163711/http://www.brefonline.com/numeroERA_af fichearticle.asp?idA=3262). Brefonline.Com. 
Archived from the original (http://www.brefonline.com/numeroER A_affichearticle.asp?idA=3262) on 25 November 2007. Retrieved 5 February 2009.\n- 52. \"The African Museum of Lyon Website\" (https://web.archive.org/web/20090219232752/http://musee-africain-ly on.org/). Musee-africain-lyon.org. Archived from the original (http://www.musee-africain-lyon.org/) on 19 February 2009. Retrieved 5 February 2009.\n- 53. UNESCO World Heritage Site (http://www.lyon.fr/vdl/sections/en/tourisme/copy_of_patrimoine/a_patrimoinem ondial) Archived (https://web.archive.org/web/20110718090826/http://www.lyon.fr/vdl/sections/en/Tourisme/co py_of_patrimoine/a_patrimoinemondial) 18 July 2011 at the Wayback Machine. City of Lyon official website. Retrieved 26 November 2009.", - "page_start": 22, - "page_end": 22, - "source_file": "wikipedia4.pdf" - }, - { - "text": "All figures come from population censuses. Figures from 1911 to 1936 (incl.) are computed using the redressed figures for the commune of Lyon calculated by INSEE to correct the overestimated population of Lyon published by the municipal authorities at the time (10,000s of false residents had been added by the municipal authorities to artificially inflate the population figures and remain the 2nd largest city of France ahead of Marseille). [68] The 1906 figure is computed using the figure for the commune of Lyon published by the municipal authorities, probably already inflated, but not corrected by INSEE because the overestimate was smaller than 10,000. 
Source: EHESS [70] and INSEE [71]\n\n## **Foreign-born**\n\n# **Education**\n\n### **Universities and tertiary education**\n\n- École Centrale de Lyon;\n- École Normale Supérieure de Lyon\n- EM Lyon (École de Management de Lyon);\n- ECE Lyon (École de Commerce Européenne de Lyon);\n- Institut d'études politiques de Lyon (Sciences Po Lyon);\n- CPE Lyon;\n- CNSMD (Conservatoire national supérieur de musique et de danse de Lyon)\n- ECAM Lyon (École Catholique d'Arts et Métiers de Lyon);\n- EPITECH;\n- EPITA;\n- ENTPE (École Nationale des Travaux Publiques de l'État);\n- École nationale vétérinaire de Lyon (ENVL);\n- ESME-Sudria;\n- École des Beaux-Arts;\n- E-Artsup;\n- INSA Lyon (Institut National des Sciences Appliquées de Lyon);\n- Polytech Lyon;\n- Institut supérieur européen de gestion group;\n- ISARA (Institut Supérieur d'Agriculture Rhône Alpes);\n- Institution des Chartreux;\n- Institut polytechnique des sciences avancées;\n- Université Claude Bernard (Lyon 1);\n- Université Lumière (Lyon 2);\n- Université Jean Moulin (Lyon 3);\n- IAE (Institut d'Administration des Entreprises de Lyon);\n- Institut Sup'Biotech de Paris;\n- Catholic University of Lyon;\n- ESDES Business School;\n- IDRAC (International School of Management);\n- Wesford Graduate Business School;\n- IFAG (Business Management School);\n- Institut supérieur européen de formation par l'action;\n- Le Lycée du Parc;\n- La Martinière Lyon;\n- Web@cademie;\n- CEESO (Centre Européen d'Enseignement Supérieur de l'Ostéopathie);\n\nForeign-born population in Lyon by country of birth [72]\n\n| Country of birth | Population (2020) |\n| --- | --- |\n| Algeria | 14,779 |\n| Morocco | 5,245 |\n| Tunisia | 4,879 |\n| Italy | 3,351 |\n| Portugal | 3,068 |\n| Spain | 2,064 |\n| DR Congo | 1,520 |\n| China | 1,429 |\n| Cameroon | 1,364 |\n| Senegal | 1,198 |\n\nENS Lyon: René Descartes campus\n\nLyon 3: Manufacture des Tabacs campus", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia4.pdf" - }, - { - 
"text": "The convention was not the only target within Lyon during the French Revolution. After the Convention faded into history, the French Directory appeared and days after the 4 September 1797 Coup of 18 Fructidor, a Directory's commissioner was assassinated in Lyon.\n\nThe city became an important industrial town in the 19th century. In 1831 and 1834, the *canuts* (silk workers) of Lyon staged two major uprisings for better working conditions and pay. In 1862, the first of Lyon's extensive network of funicular railways began operation.\n\nDuring World War II, Lyon was a centre for the occupying Nazi forces, including Klaus Barbie, the infamous \"Butcher of Lyon\". However, the city was also a\n\nstronghold of the French Resistance, the many secret passages known as *traboules*, enabled people to escape Gestapo raids. On 3 September 1944, Lyon was liberated by the 1st Free French Division and the Forces Françaises de l'Intérieur. The city is now home to a Resistance museum.[33][34]\n\n# **Geography**\n\nThe Rhône and Saône converge to the south of the historic city centre, forming a peninsula – the \"*Presqu'île*\" – bounded by two large hills to the west and north and a large plain eastward. Place Bellecour is located on the Presqu'île between the two rivers and is the third-largest public square in France. The broad, pedestrian-only Rue de la République leads north from Place Bellecour.\n\nThe northern hill is La Croix-Rousse, known as \"the hill that works\" because it is traditionally home to many small silk workshops, an industry for which the city has long been renowned.[35]\n\nThe western hill is Fourvière, known as \"the hill that prays\" because it is the location\n\nfor Basilica of Notre-Dame de Fourvière, several convents, and Archbishop residence. 
The district, Vieux Lyon, also hosts the Tour métallique (a highly visible TV tower, replicating the last stage of the Eiffel Tower) and one of the city's railways.[36] Fourvière, along with portions of the Presqu'île and much of La Croix-Rousse, is designated as a UNESCO World Heritage Site. [37]\n\nEast of the Rhône from the Presqu'île is a large flat area upon which sits much of modern Lyon and contains most of the city's population. Situated in this area is La Part-Dieu urban centre, which clusters the landmark structures Tour Incity, Tour Part-Dieu, Tour Oxygène, and Tour Swiss Life, as well as the city's primary railway station, Gare de Lyon-Part-Dieu.\n\nNorth of this district lays the sixth arrondissement, which is home to one of Europe's largest urban parks, the Parc de la Tête d'or, as well as Lycée du Parc and Interpol's world headquarters.\n\nMassacre during the Canut rebellion of 1834\n\nThe Saône-Rhône confluence", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Early Christians in Lyon were martyred for their beliefs under the reigns of various Roman emperors, most notably Marcus Aurelius and Septimius Severus. [28] Local saints from this period include Blandina, Pothinus, and Epipodius, among others. The Greek Irenaeus was the second bishop of Lyon during the latter part of the second century. [29] To this day, the archbishop of Lyon is still referred to as \"*Primat des Gaules*\".[30]\n\nBurgundians fleeing the destruction of Worms by the Huns in 437 were re-settled in eastern Gaul. In 443 the Romans established the Kingdom of the Burgundians, and Lugdunum became its capital in 461. In 843, under the Treaty of Verdun, Lyon went to the Holy Roman Emperor Lothair I. It later was made part of the Kingdom of Arles which was incorporated into the Holy Roman Empire in 1033. 
Lyon did not come\n\nThe Roman-era Theatre on the Fourvière Hill\n\nunder French control until the 14th century.\n\n#### **Modern Lyon**\n\nFernand Braudel remarked, \"Historians of Lyon are not sufficiently aware of the bipolarity between Paris and Lyon, which is a constant structure in French development...from the late Middle Ages to the Industrial\n\nRevolution\".[31] In the late 15th century, the fairs introduced by Italian merchants made Lyon the economic counting house of France. Even the *Bourse* (treasury), built in 1749, resembled a public bazaar where accounts were settled in the open air. When international banking moved to Genoa, then Amsterdam, Lyon remained the banking centre of France.\n\nDuring the Renaissance, the city's development was driven by the silk trade, which strengthened its ties to Italy. Italian influence on Lyon's architecture is still visible among historic buildings.[32] In the late 1400s and 1500s Lyon was also a key centre of literary activity and book publishing, both of French writers (such as Maurice Scève, Antoine Heroet, and Louise Labé) and of Italians in exile (such as Luigi Alamanni and Gian Giorgio Trissino).\n\nIn 1572, Lyon was a scene of mass violence by Catholics against Protestant Huguenots in the St. Bartholomew's Day Massacre. Two centuries later, Lyon was again convulsed by violence during the French Revolution, when the citizenry rose up against the National Convention\n\nand supported the Girondins. The city was besieged by Revolutionary armies for over two months before it surrendered in October 1793. Many buildings were destroyed, especially around the Place Bellecour, and Jean-Marie Collot d'Herbois and Joseph Fouché administered the execution of more than 2,000 people. The Convention ordered that its name be changed to \"Liberated City\", and a plaque was erected that proclaimed \"Lyons made war on Liberty; Lyons no longer exists\". 
A decade later, Napoleon ordered the reconstruction of all the buildings demolished during that period.\n\n| • Metro density | 500/km2 (1,300/sq mi) |\n| --- | --- |\n| Time zone | UTC+01:00 (CET) |\n| • Summer (DST) | UTC+02:00 (CEST) |\n| INSEE/Postal code | 69123 (https://www.inse |\n| | e.fr/fr/statistiques/14055 |\n| | 99?geo=COM-69123) |\n| | /69001-69009 |\n| Elevation | 162–349 m (531– |\n| | 1,145 ft) |\n| Website | lyon.fr (https://www.lyon. |\n| | fr/) |\n| 1 French | Land Register data, which excludes |\n\nlakes, ponds, glaciers > 1 km2 (0.386 sq mi or 247 acres) and river estuaries.\n\n#### **Timeline of Lyon**\n\n**Historical affiliations** Roman Empire (Gallia Lugdunensis), 43 BC-286 Western Roman Empire (Gallia Lugdunensis), 286-411 Kingdom of the Burgundians, 411–534 Francia, 534–843 Middle Francia, 843–855 Lotharingia, 855–879 Lower Burgundy, 879-933 Kingdom of Arles, 933–1312 Kingdom of France (Lyonnais), 1312– 1792 French First Republic, 1792–1793 Counter-revolutionary, 1793 French First Republic, 1793–1804 First French Empire, 1804–1814 Kingdom of France, 1814–1815 First French Empire, 1815 Kingdom of France, 1815–1830 Kingdom of France, 1830–1848 French Second Republic, 1848–1852 Second French Empire, 1852–1870 French Third Republic, 1870–1940 Vichy France, 1940–1944 French Fourth Republic, 1944–1958 France, 1958–present\n\nLyon under siege in 1793", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Both Vieux Lyon and the slopes of Croix-Rousse are known for their narrow passageways (named *traboules*) that pass through buildings and link streets on either side. The first examples of traboules are thought to have been built in Lyon in the 4th century. 
[54] The traboules allowed the inhabitants to get from their homes to the Saône quickly and allowed the canuts on the Croix-Rousse hill to get from their workshops to the textile merchants at the foot of the hill.\n\n#### **Gastronomy**\n\nLyon has a long and chronicled culinary arts tradition. The noted food critic Curnonsky referred to the city as \"the gastronomic capital of the world\",[55] a claim repeated by later writers such as Bill Buford. [56] Renowned 3-star Michelin chefs such as Marie Bourgeois[57] and Eugénie Brazier[58] developed Lyonnaise cuisine into a national phenomenon favoured by the French elite; a tradition which Paul Bocuse later turned into a worldwide success. [59] The *bouchon* is a traditional Lyonnais restaurant that serves local fare such as sausages, duck pâté or roast pork, along with local wines. Two of France's best known wine-growing regions are located near the city: the Beaujolais region to the north and the Côtes du Rhône region to the south. Another Lyon tradition is a type of brunch food called \"mâchons\", made of local charcuterie and usually accompanied by Beaujolais red wine. Mâchons were the customary meal of the canuts, the city's silk workers, who ate a late-morning meal after they finished their shifts in the factories.[60]\n\nOther traditional local dishes include coq au vin; quenelle; gras double; salade lyonnaise (lettuce with bacon, croûtons and a poached egg); and the sausage-based rosette lyonnaise and andouillette. Popular local confections include marron glacé and coussin de Lyon. 
Cervelle de canut (literally, \"silk worker's brains\") is a cheese spread/dip made of a base of fromage blanc, seasoned with chopped herbs, shallots, salt, pepper, olive oil and vinegar.\n\nPassage de l'Argue\n\nÎle Barbe bakery at the Halles de Lyon-Paul Bocuse\n\nMore recently, the french tacos was invented in Lyon suburbs (Vaulx-en-Velin) (or Grenoble according to some theories), in the early 2000s and is now famous worldwide.[61][62]\n\n#### **Sport**\n\nLyon is home to the football club Olympique Lyonnais (OL), whose men's team plays in Ligue 1 and has won the championship of that competition seven times, all consecutively from 2002 to 2008.[63] OL played until December 2015 at the 43,000 seat Stade de Gerland, which also hosted matches of the 1998 FIFA World Cup. Since 2016, the team has played at the Parc Olympique Lyonnais, a 59,000-seat stadium located in the eastern suburb of Décines-Charpieu. [64] OL operates a women's team, Olympique Lyonnais Féminin, which competes in and dominates Division 1 Féminine. They won fourteen consecutive top-flight championships (2007–2020), and additionally claim the four titles won by the original incarnation of FC Lyon, a\n\nParc Olympique Lyonnais\n\nwomen's football club that merged into OL in 2004 (the current FC Lyon was founded in 2009). The OL women have also won the UEFA Women's Champions League eight times, including in five consecutive editions from 2016 to 2020. Lyon hosted the 2019 FIFA Women's World Cup semi-finals as well as the Final on 7 July at Stade de Lyon.\n\nLyon has a rugby union team, Lyon OU, in the Top 14, which moved into Stade de Gerland full-time in 2017–18. In addition, Lyon has a rugby league side called Lyon Villeurbanne that plays in the French rugby league championship. 
The club's home is the Stade Georges Lyvet in Villeurbanne.", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia4.pdf" - }, - { - "text": "The name of the city has taken the forms *Lugdon*, *Luon*, and since the 13th century, *Lyon*. The Gallic *Lugdun* or *Lugdunon* that was Latinized in Roman as Lugdunum is composed of two words. The first may be the name of the Celtic god Lug (in charge of order and law), or the derived word *lugon*, meaning \"crow\" (the crow being the messenger of Lug), but might also be another word *lug*, meaning \"light\". The second is *dunos* ('fortress', 'hill'). The name thus may designate the hill of Fourvière, on which the ancient city of Lyon is founded, but could mean \"hill of the god Lug\", \"hill of the crows\" or \"shining hill\".[21] [22]\n\nAlternatively Julius Pokorny associates the first part of the word with the Indo-European radical **lūg* ('dark, black, swamp'), the basis of the toponyms Ludza in Latvia, Lusatia in Germany (from Sorbian *Łužica*), and several places in the Czech Republic named Lužice;[23] it could then also be compared to Luze in Franche-Comté and various hydronyms such as Louge.\n\nFurther down, in the current Saint-Vincent district, was the Gallic village of Condate, probably a simple hamlet of sailors or fishermen living on the banks of the Saône. *Condate* is a Gallic word meaning \"confluence\", from which the Confluence district gets its name.\n\nIn Roman times the city was called *Caput Galliæ*, meaning \"capital of the Gauls\". As an homage to this title, the Archbishop of Lyon is still called the Primate of Gaul.\n\nDuring the revolutionary period, Lyon was renamed *Commune-Affranchie* (\"Emancipated Commune\") on 12 October 1793 by a decree of the Convention Nationale. It resumed its name in 1794, after the end of the Terror.\n\nLyon is called *Liyon* in Franco-Provençal. 
[24]\n\n#### **Ancient Lyon**\n\nAccording to the historian Dio Cassius, in 43 BC, the Roman Senate ordered the creation of a settlement for Roman refugees of war with the Allobroges. These refugees had been expelled from Vienne and were now encamped at the confluence of the Saône and Rhône rivers. The foundation was built on Fourvière hill and officially called *Colonia Copia Felix Munatia*, a name invoking prosperity and the blessing of the gods. The city became increasingly referred to as *Lugdunum* (and occasionally *Lugudunum*[25] ).[26] The earliest translation of this Gaulish place-name as \"Desired Mountain\" is offered by the 9th-century *Endlicher Glossary*. [27] In contrast, some modern scholars have proposed a Gaulish hill-fort named Lug[o]dunon, after the Celtic god Lugus (cognate with Old Irish *Lugh*, Modern Irish *Lú*), and *dúnon* (hillfort).\n\nThe Romans recognised that Lugdunum's strategic location at the convergence of two navigable rivers made it a natural communications hub. The city became the starting point of main Roman roads in the area, and it quickly became the capital of the province, Gallia Lugdunensis. Two Emperors were born in this city: Claudius, whose speech is preserved in the Lyon Tablet in which he justifies the nomination of Gallic Senators, and Caracalla.\n\n| Country | France |\n| --- | --- |\n| Region | Auvergne-Rhône-Alpes |\n| Metropolis | Lyon Metropolis |\n| Arrondissement | Lyon |\n| Subdivisions | 9 arrondissements |\n| Government | |\n| • Mayor (2020– | [2] Grégory Doucet |\n| 2026) | (EELV) |\n| 1 Area | 47.87 km2 (18.48 sq mi) |\n| [3]) • Urban (2020 | 1,141.4 km2 |\n| | (440.7 sq mi) |\n| [4] • Metro (2020 ) | 4,605.8 km2 |\n| | (1,778.3 sq mi) |\n| [5] Population (2022) | 520,774 |\n| • Rank | 3rd in France |\n| • Density | 11,000/km2 |\n| | (28,000/sq mi) |\n| • Urban (Jan. | 1,702,921 |\n| [6] 2021 ) | |\n| • Urban density | 1,500/km2 (3,900/sq mi) |\n| • Metro (Jan. 
| 2,308,818 |\n| [7] 2021 ) | |", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia4.pdf" - }, - { - "text": "| Mayor | Term start | Term end | Party |\n| --- | --- | --- | --- |\n| Antoine Gailleton | 1881 | 1900 | |\n| Victor Augagneur | 1900 | 30 October 1905 | PRS |\n| Édouard Herriot | 30 October 1905 | 20 September 1940 | Radical |\n| Georges Cohendy | 20 September 1940 | 1941 | Nominated and dismissed by Vichy |\n| Georges Villiers | 1941 | 1942 | Nominated and dismissed by Vichy |\n| Pierre-Louis-André Bertrand | 1942 | 1944 | Nominated by Vichy |\n| Justin Godart | 1944 | 18 May 1945 | Radical |\n| Édouard Herriot | 18 May 1945 | 26 March 1957 | Radical |\n| Pierre Montel, ad interim | 26 March 1957 | 14 April 1957 | Radical |\n| Louis Pradel | 14 April 1957 | 27 November 1976 | DVD |\n| Armand Tapernoux, ad interim | 27 November 1976 | 5 December 1976 | DVD |\n| Francisque Collomb | 5 December 1976 | 24 March 1989 | DVD |\n| Michel Noir | 24 March 1989 | 25 June 1995 | RPR |\n| Raymond Barre | 25 June 1995 | 25 March 2001 | DVD |\n| Gérard Collomb | 25 March 2001 | 17 July 2017 | PS |\n| Georges Képénékian | 17 July 2017 | 5 November 2018 | LREM |\n| Gérard Collomb | 5 November 2018 | 4 July 2020 | LREM |\n| Grégory Doucet | 4 July 2020 | Incumbent | EELV |\n\n### **Metropolis**\n\nSince 2015, the commune of Lyon (48 km 2 (19 sq mi) in land area) and 58 suburban communes have formed the Metropolis of Lyon (534 km2 (206 sq mi) in land area), a directly elected metropolitan authority now in charge of most urban issues. The Metropolis of Lyon is the only metropolitan authority in France which is a territorial collectivity, on par with French communes and departments. 
Its metropolitan council was for the first time directly elected by universal suffrage in 2020 within 14 electoral wards, the only directly elected metropolitan council in France.\n\nThe 14 electoral wards are the following (see map for location):\n\n- Lônes et coteaux Lyon-Centre (Lyon-Centre) Lyon-Est (Lyon-East) Lyon-Nord (Lyon-North) Lyon-Ouest Lyon-Sud Lyon-Sud-Est Ouest Plateau Nord-Caluire Porte des Alpes Portes du Sud Rhône Amont Val de Saône Villeurbanne\nThe six wards with names starting with \"Lyon\" are all located within the commune of Lyon. The Villeurbanne ward is coterminous with the namesake commune. All other seven wards each group various suburban communes.\n\nMap of the Metropolis of Lyon and its 59 communes (the commune of Lyon is in red)", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- 2nd arrondissement: Cordeliers, Bellecour, Ainay, Perrache, Confluence, Sainte-Blandine\n- 3rd arrondissement: Guillotière (north), Préfecture, Part-Dieu, Villette, Dauphiné/Sans Souci, Montchat, Grange Blanche (north), Monplaisir (north)\n- 4th arrondissement: Plateau de la Croix-Rousse, Serin\n- 5th arrondissement: Vieux Lyon (Saint-Paul, Saint-Jean, Saint-Georges), Saint-Just, Saint-Irénée,[44] Fourvière, Point du Jour, Ménival, Battières, Champvert (south)\n- 6th arrondissement: Brotteaux, Bellecombe, Parc de la Tête d'or, Cité Internationale\n- 7th arrondissement: Guillotière (south), Jean Macé, Gerland\n- 8th arrondissement: Monplaisir (south), Bachut, États-Unis, Grand Trou/Moulin à Vent, Grange Blanche (south), Laënnec, Mermoz, Monplaisir-la-Plaine\n- 9th arrondissement: Vaise, Duchère, Rochecardon, St-Rambert-l'Île-Barbe, Gorge de Loup, Observance, Champvert (north)\n\nGeographically, Lyon's two main rivers, the Saône and the Rhône, divide the arrondissements into three groups:\n\n- To the west of the Saône, the fifth arrondissement covers the old city of Vieux Lyon, Fourvière hill and the plateau beyond. 
The 9th is immediately to the north, and stretches from Gorge de Loup, through Vaise to the neighbouring suburbs of Écully, Champagne-au-Mont-d'Or, Saint-Didier-au-Mont-d'Or, Saint-Cyr-au-Mont-d'Or and Collonges-au-Mont-d'Or.\n- Between the two rivers, on the Presqu'île, are the second, first, and fourth arrondissements. The second includes most of the city centre, Bellecour and Perrache railway station, and reaches as far as the confluence of the two rivers. The first is directly to the north of the second and covers part of the city centre (including the Hôtel de Ville) and the slopes of La Croix-Rousse. To the north of the Boulevard is the fourth arrondissement, which covers the Plateau of La Croix-Rousse, up to its boundary with the commune of Caluire-et-Cuire.\n- To the east of the Rhône, are the third, sixth, seventh, and eighth arrondissements.\n\n#### **Mayors**\n\nThis is a list of mayors of the commune of Lyon since the end of the 19th century.\n\nThe lion, symbol of the city, on display at Maison des avocats\n\nMap of the City of Lyon divided into 9 arrondissements", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia4.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia4.pdf", - "query": "How big was Lyon's population in 2022? ", - "target_page": 2, - "target_passage": "Population (2022) 520,774", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# **Lyon**\n\n**Lyon**[c] (Franco-Provençal: *Liyon*) is the second-largest city in France by urban area and the third largest by city limits.[14] It is located at the confluence of the rivers Rhône and Saône, to the northwest of the French Alps, 391 km (243 mi) southeast of Paris, 278 km (173 mi) north of Marseille, 113 km (70 mi) southwest of Geneva, Switzerland, 58 km (36 mi) northeast of Saint-Étienne.\n\nThe City of Lyon had a population of 522,250 at the Jan. 
2021 census within its small municipal territory of 48 km2 (19 sq mi),[15] but together with its suburbs and exurbs the Lyon metropolitan area had a population of 2,308,818 that same year, [7] the second most populated in France. Lyon and 58 suburban municipalities have formed since 2015 the Metropolis of Lyon, a directly elected metropolitan authority now in charge of most urban issues, with a population of 1,424,069 in 2021.[16] Lyon is the prefecture of the Auvergne-Rhône-Alpes region and seat of the Departmental Council of Rhône (whose jurisdiction, however, no longer extends over the Metropolis of Lyon since 2015).\n\nThe capital of the Gauls during the Roman Empire, Lyon is the seat of an archbishopric whose holder bears the title of Primate of the Gauls. Lyon became a major economic hub during the Renaissance. The city is recognised for its cuisine and gastronomy, as well as historical and architectural landmarks; as such, the districts of Old Lyon, the Fourvière hill, the Presqu'île and the slopes of the Croix-Rousse are inscribed on the UNESCO World Heritage List. Lyon was historically an important area for the production and weaving of silk. Lyon played a significant role in the history of cinema since Auguste and Louis Lumière invented the cinematograph there. The city is also known for its light festival, the Fête des lumières, which begins every 8 December and lasts for four days, earning Lyon the title of \"Capital of Lights\".\n\nEconomically, Lyon is a major centre for banking, chemical, pharmaceutical and biotech industries. The city contains a significant software industry with a particular focus on video games; in recent years it has fostered a growing local start-up sector. 
[17] The home of renowned universities and higher education schools, Lyon is the second-largest student city in France, with a university population of nearly 200,000 students within the Metropolis of Lyon.[18] Lyon hosts the international headquarters of Interpol, the International Agency for Research on Cancer, as well as Euronews. According to the Globalization and World Rankings Research Institute, Lyon is considered a Beta city, as of 2018. [19] It ranked second in France and 40th globally in Mercer's 2019 liveability rankings. [20]\n\n## **History**\n\ncompanion\")\n\n**Location of Lyon**\n\n[b]\n\n**Toponymy**", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia4.pdf" - }, - { - "text": "All figures come from population censuses. Figures from 1911 to 1936 (incl.) are computed using the redressed figures for the commune of Lyon calculated by INSEE to correct the overestimated population of Lyon published by the municipal authorities at the time (10,000s of false residents had been added by the municipal authorities to artificially inflate the population figures and remain the 2nd largest city of France ahead of Marseille). [68] The 1906 figure is computed using the figure for the commune of Lyon published by the municipal authorities, probably already inflated, but not corrected by INSEE because the overestimate was smaller than 10,000. 
Source: EHESS [70] and INSEE [71]\n\n## **Foreign-born**\n\n# **Education**\n\n### **Universities and tertiary education**\n\n- École Centrale de Lyon;\n- École Normale Supérieure de Lyon\n- EM Lyon (École de Management de Lyon);\n- ECE Lyon (École de Commerce Européenne de Lyon);\n- Institut d'études politiques de Lyon (Sciences Po Lyon);\n- CPE Lyon;\n- CNSMD (Conservatoire national supérieur de musique et de danse de Lyon)\n- ECAM Lyon (École Catholique d'Arts et Métiers de Lyon);\n- EPITECH;\n- EPITA;\n- ENTPE (École Nationale des Travaux Publiques de l'État);\n- École nationale vétérinaire de Lyon (ENVL);\n- ESME-Sudria;\n- École des Beaux-Arts;\n- E-Artsup;\n- INSA Lyon (Institut National des Sciences Appliquées de Lyon);\n- Polytech Lyon;\n- Institut supérieur européen de gestion group;\n- ISARA (Institut Supérieur d'Agriculture Rhône Alpes);\n- Institution des Chartreux;\n- Institut polytechnique des sciences avancées;\n- Université Claude Bernard (Lyon 1);\n- Université Lumière (Lyon 2);\n- Université Jean Moulin (Lyon 3);\n- IAE (Institut d'Administration des Entreprises de Lyon);\n- Institut Sup'Biotech de Paris;\n- Catholic University of Lyon;\n- ESDES Business School;\n- IDRAC (International School of Management);\n- Wesford Graduate Business School;\n- IFAG (Business Management School);\n- Institut supérieur européen de formation par l'action;\n- Le Lycée du Parc;\n- La Martinière Lyon;\n- Web@cademie;\n- CEESO (Centre Européen d'Enseignement Supérieur de l'Ostéopathie);\n\nForeign-born population in Lyon by country of birth [72]\n\n| Country of birth | Population (2020) |\n| --- | --- |\n| Algeria | 14,779 |\n| Morocco | 5,245 |\n| Tunisia | 4,879 |\n| Italy | 3,351 |\n| Portugal | 3,068 |\n| Spain | 2,064 |\n| DR Congo | 1,520 |\n| China | 1,429 |\n| Cameroon | 1,364 |\n| Senegal | 1,198 |\n\nENS Lyon: René Descartes campus\n\nLyon 3: Manufacture des Tabacs campus", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia4.pdf" - }, - { - 
"text": "| Mayor | Term start | Term end | Party |\n| --- | --- | --- | --- |\n| Antoine Gailleton | 1881 | 1900 | |\n| Victor Augagneur | 1900 | 30 October 1905 | PRS |\n| Édouard Herriot | 30 October 1905 | 20 September 1940 | Radical |\n| Georges Cohendy | 20 September 1940 | 1941 | Nominated and dismissed by Vichy |\n| Georges Villiers | 1941 | 1942 | Nominated and dismissed by Vichy |\n| Pierre-Louis-André Bertrand | 1942 | 1944 | Nominated by Vichy |\n| Justin Godart | 1944 | 18 May 1945 | Radical |\n| Édouard Herriot | 18 May 1945 | 26 March 1957 | Radical |\n| Pierre Montel, ad interim | 26 March 1957 | 14 April 1957 | Radical |\n| Louis Pradel | 14 April 1957 | 27 November 1976 | DVD |\n| Armand Tapernoux, ad interim | 27 November 1976 | 5 December 1976 | DVD |\n| Francisque Collomb | 5 December 1976 | 24 March 1989 | DVD |\n| Michel Noir | 24 March 1989 | 25 June 1995 | RPR |\n| Raymond Barre | 25 June 1995 | 25 March 2001 | DVD |\n| Gérard Collomb | 25 March 2001 | 17 July 2017 | PS |\n| Georges Képénékian | 17 July 2017 | 5 November 2018 | LREM |\n| Gérard Collomb | 5 November 2018 | 4 July 2020 | LREM |\n| Grégory Doucet | 4 July 2020 | Incumbent | EELV |\n\n### **Metropolis**\n\nSince 2015, the commune of Lyon (48 km 2 (19 sq mi) in land area) and 58 suburban communes have formed the Metropolis of Lyon (534 km2 (206 sq mi) in land area), a directly elected metropolitan authority now in charge of most urban issues. The Metropolis of Lyon is the only metropolitan authority in France which is a territorial collectivity, on par with French communes and departments. 
Its metropolitan council was for the first time directly elected by universal suffrage in 2020 within 14 electoral wards, the only directly elected metropolitan council in France.\n\nThe 14 electoral wards are the following (see map for location):\n\n- Lônes et coteaux Lyon-Centre (Lyon-Centre) Lyon-Est (Lyon-East) Lyon-Nord (Lyon-North) Lyon-Ouest Lyon-Sud Lyon-Sud-Est Ouest Plateau Nord-Caluire Porte des Alpes Portes du Sud Rhône Amont Val de Saône Villeurbanne\nThe six wards with names starting with \"Lyon\" are all located within the commune of Lyon. The Villeurbanne ward is coterminous with the namesake commune. All other seven wards each group various suburban communes.\n\nMap of the Metropolis of Lyon and its 59 communes (the commune of Lyon is in red)", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- 31. Braudel 1984 p. 327\n- 32. Pierre Edmond DESVIGNES. \"Quartier renaissance Lyon : Vieux Lyon, quartier ancien et secteur sauvegarde Lyon\" (https://web.archive.org/web/20110119152753/http://www.vieux-lyon.org/lyon-epoque-renaissance_f01 150.htm). Vieux-lyon.org. Archived from the original (http://www.vieux-lyon.org/lyon-epoque-renaissance_f011 50.htm) on 19 January 2011. Retrieved 3 April 2011.\n- 33. \"CHRD Lyon\" (https://web.archive.org/web/20110124140355/http://www.chrd.lyon.fr/chrd/sections/fr/pied/engli sh_1). *Chrd.lyon.fr*. 2017. Archived from the original (http://www.chrd.lyon.fr/chrd/sections/fr/pied/english_1) on 24 January 2011. Retrieved 21 December 2017.\n- 34. Cosgrove, Michael (4 June 2009). \"Lyon: The Resistance and Deportation Museum\" (http://www.digitaljournal. com/article/273644). *Digitaljournal.com*.\n- 35. (in French) Georges Duby (ed), *Histoire de la France : Dynasties et révolutions, de 1348 à 1852* (vol. 2), Larousse, 1999 p. 53 ISBN 2-03-505047-2\n- 36. 
\"Lyon, France: Local Transport\" (http://www.lonelyplanet.com/france/burgundy-and-the-rhone/lyon/transport/g etting-around/local-transport). Lonely Planet. Retrieved 2 February 2017.\n- 37. \"Historic Site of Lyon\" (https://whc.unesco.org/en/list/872/). *unesco.org*. UNESCO World Heritage Centre. Retrieved 31 July 2015.\n- 38. Gregory, Stanley. \"Climatic Classification and Climatic Change (Klimaklassifikation Und Klimaänderung) (http s://www.jstor.org/stable/25636095).\" *Erdkunde*, vol. 8, no. 4, 1954, pp. 246–252. *JSTOR.*\n- 39. \"Données climatiques de la station de Lyon: Relevés de 2016 Lyon\" (https://web.archive.org/web/20161004 055201/http://www.meteofrance.com/climat/france/lyon/69029001/releves) (in French). Meteo France. Archived from the original (http://www.meteofrance.com/climat/france/lyon/69029001/releves) on 4 October 2016. Retrieved 2 October 2016.\n- 40. \"Lyon-Bron (69)\" (https://donneespubliques.meteofrance.fr/FichesClim/FICHECLIM_69029001.pdf) (PDF). *Fiche Climatologique: Statistiques 1991–2020 et records* (in French). Meteo France. Retrieved 14 July 2022.\n- 41. \"Température et records en Août pour Lyon\" (https://www.meteo-lyon.net/records/mois/aout). *meteo-lyon.net* (in French). Météo Villes. Retrieved 7 September 2023.\n- 42. \"Lyon–Bron (07480) WMO Weather Station\" (ftp://ftp.atdd.noaa.gov/pub/GCOS/WMO-Normals/TABLES/RE G_VI/FR/07480.TXT). NOAA. Retrieved 8 February 2019. Archived (https://archive.org/details/19611990Norm alsNOAALyonBron) 8 February 2019, at the Wayback Machine\n- 43. \"Normes et records 1961–1990: Lyon-Bron (69) altitude 198m\" (https://web.archive.org/web/201603032035 26/http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) (in French). Infoclimat. Archived from the original (http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) on 3 March 2016. Retrieved 8 February 2019.\n- 44. \"St-Irénée France\" (http://www.sacred-destinations.com/france/lyon-eglise-st-irenee). *sacreddestinations.com*.\n- 45. 
\"Discover the Musée Miniature et Cinéma in Lyon | Unique in Europe\" (https://www.museeminiatureetcin ema.fr/en/). *Musée Miniature et Cinéma*.\n- 46. OECD. \"City statistics : Economy\" (https://stats.oecd.org/Index.aspx?datasetcode=FUA_CITY). Retrieved 16 January 2023.\n- 47. \"Le laboratoire P4, ménagerie virale\" (https://wayback.archive-it.org/all/20090606013924/http://www.lemonde. fr/planete/article/2009/06/05/le-laboratoire-p4-menagerie-virale_1202866_3244.html). *Le Monde*. France. Archived from the original (http://www.lemonde.fr/planete/article/2009/06/05/le-laboratoire-p4-menagerie-viral e_1202866_3244.html) on 6 June 2009. Retrieved 8 July 2009.\n- 48. \"Official site of Lyon\" (https://web.archive.org/web/20100424192931/http://www.grandlyon.com/La-Part-Dieu.2 315.0.html). Grandlyon.com. Archived from the original (http://www.grandlyon.com/La-Part-Dieu.2315.0.html) on 24 April 2010. Retrieved 3 April 2011.\n- 49. Jean-Baptiste Onofrio : *Essai d'un glossaire des patois de Lyonnais, Forez et Beaujolais*, Lyon 1864\n- 50. \"Pierre Alain Muet Archives 2008\" (https://web.archive.org/web/20100124093221/http://pa-muet.com/archives. htm). Pa-muet.com. 17 June 2008. Archived from the original (http://pa-muet.com/archives.htm) on 24 January 2010. Retrieved 25 January 2010.\n- 51. \"Bottazzi fait le mur\" (https://web.archive.org/web/20071125163711/http://www.brefonline.com/numeroERA_af fichearticle.asp?idA=3262). Brefonline.Com. Archived from the original (http://www.brefonline.com/numeroER A_affichearticle.asp?idA=3262) on 25 November 2007. Retrieved 5 February 2009.\n- 52. \"The African Museum of Lyon Website\" (https://web.archive.org/web/20090219232752/http://musee-africain-ly on.org/). Musee-africain-lyon.org. Archived from the original (http://www.musee-africain-lyon.org/) on 19 February 2009. Retrieved 5 February 2009.\n- 53. 
UNESCO World Heritage Site (http://www.lyon.fr/vdl/sections/en/tourisme/copy_of_patrimoine/a_patrimoinem ondial) Archived (https://web.archive.org/web/20110718090826/http://www.lyon.fr/vdl/sections/en/Tourisme/co py_of_patrimoine/a_patrimoinemondial) 18 July 2011 at the Wayback Machine. City of Lyon official website. Retrieved 26 November 2009.", - "page_start": 22, - "page_end": 22, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Early Christians in Lyon were martyred for their beliefs under the reigns of various Roman emperors, most notably Marcus Aurelius and Septimius Severus. [28] Local saints from this period include Blandina, Pothinus, and Epipodius, among others. The Greek Irenaeus was the second bishop of Lyon during the latter part of the second century. [29] To this day, the archbishop of Lyon is still referred to as \"*Primat des Gaules*\".[30]\n\nBurgundians fleeing the destruction of Worms by the Huns in 437 were re-settled in eastern Gaul. In 443 the Romans established the Kingdom of the Burgundians, and Lugdunum became its capital in 461. In 843, under the Treaty of Verdun, Lyon went to the Holy Roman Emperor Lothair I. It later was made part of the Kingdom of Arles which was incorporated into the Holy Roman Empire in 1033. Lyon did not come\n\nThe Roman-era Theatre on the Fourvière Hill\n\nunder French control until the 14th century.\n\n#### **Modern Lyon**\n\nFernand Braudel remarked, \"Historians of Lyon are not sufficiently aware of the bipolarity between Paris and Lyon, which is a constant structure in French development...from the late Middle Ages to the Industrial\n\nRevolution\".[31] In the late 15th century, the fairs introduced by Italian merchants made Lyon the economic counting house of France. Even the *Bourse* (treasury), built in 1749, resembled a public bazaar where accounts were settled in the open air. 
When international banking moved to Genoa, then Amsterdam, Lyon remained the banking centre of France.\n\nDuring the Renaissance, the city's development was driven by the silk trade, which strengthened its ties to Italy. Italian influence on Lyon's architecture is still visible among historic buildings.[32] In the late 1400s and 1500s Lyon was also a key centre of literary activity and book publishing, both of French writers (such as Maurice Scève, Antoine Heroet, and Louise Labé) and of Italians in exile (such as Luigi Alamanni and Gian Giorgio Trissino).\n\nIn 1572, Lyon was a scene of mass violence by Catholics against Protestant Huguenots in the St. Bartholomew's Day Massacre. Two centuries later, Lyon was again convulsed by violence during the French Revolution, when the citizenry rose up against the National Convention\n\nand supported the Girondins. The city was besieged by Revolutionary armies for over two months before it surrendered in October 1793. Many buildings were destroyed, especially around the Place Bellecour, and Jean-Marie Collot d'Herbois and Joseph Fouché administered the execution of more than 2,000 people. The Convention ordered that its name be changed to \"Liberated City\", and a plaque was erected that proclaimed \"Lyons made war on Liberty; Lyons no longer exists\". A decade later, Napoleon ordered the reconstruction of all the buildings demolished during that period.\n\n| • Metro density | 500/km2 (1,300/sq mi) |\n| --- | --- |\n| Time zone | UTC+01:00 (CET) |\n| • Summer (DST) | UTC+02:00 (CEST) |\n| INSEE/Postal code | 69123 (https://www.inse |\n| | e.fr/fr/statistiques/14055 |\n| | 99?geo=COM-69123) |\n| | /69001-69009 |\n| Elevation | 162–349 m (531– |\n| | 1,145 ft) |\n| Website | lyon.fr (https://www.lyon. 
|\n| | fr/) |\n| 1 French | Land Register data, which excludes |\n\nlakes, ponds, glaciers > 1 km2 (0.386 sq mi or 247 acres) and river estuaries.\n\n#### **Timeline of Lyon**\n\n**Historical affiliations** Roman Empire (Gallia Lugdunensis), 43 BC-286 Western Roman Empire (Gallia Lugdunensis), 286-411 Kingdom of the Burgundians, 411–534 Francia, 534–843 Middle Francia, 843–855 Lotharingia, 855–879 Lower Burgundy, 879-933 Kingdom of Arles, 933–1312 Kingdom of France (Lyonnais), 1312– 1792 French First Republic, 1792–1793 Counter-revolutionary, 1793 French First Republic, 1793–1804 First French Empire, 1804–1814 Kingdom of France, 1814–1815 First French Empire, 1815 Kingdom of France, 1815–1830 Kingdom of France, 1830–1848 French Second Republic, 1848–1852 Second French Empire, 1852–1870 French Third Republic, 1870–1940 Vichy France, 1940–1944 French Fourth Republic, 1944–1958 France, 1958–present\n\nLyon under siege in 1793", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- Bellecour, Écoles D'Arts.\n## **Primary and secondary schools**\n\nThere are some international private schools in the Lyon area, including:\n\n- Cité Scolaire Internationale de Lyon or the Lycée de Gerland;\n\t- Includes the *Section Japonaises* (リヨン・ジェルラン補習授業校 *Riyon Jeruran Hoshū Jugyō Kō* \"Lyon Gerland Japanese Supplementary School\"), which the Japanese Ministry of Education (MEXT) counts as a part-time Japanese supplementary school[73]\n- Ombrosa;\n- International School of Lyon in nearby Sainte-Foy-lès-Lyon;\n- Montessori School of Lyon.\n\n## **Supplementary education**\n\nOther Japanese supplementary schools:\n\n- The *Association Pour le Développement de la Langue et de la Culture Japonaises* (ADLCJ; リヨン補習授業校 *Riyon Hoshū Jugyō Kō*) is held in the *Maison Berty Albrecht* in Villeurbanne, near Lyon.[73] It was formed in 1987.[74] It serves Japanese expatriate children who wish to continue their Japanese education whilst abroad.\n# 
**Transport**\n\nLyon–Saint-Exupéry Airport, located east of Lyon, serves as a base for domestic and international flights. It is a key transport facility for the entire Rhône-Alpes region, with coach links to other cities in the area. The in-house train station Gare de Lyon Saint-Exupéry connects the airport to the nationwide TGV network. The Rhônexpress tram monopoly links the airport with the business quarter of La Part Dieu in less than 30 minutes, and offers connections with Underground A & B, Tramway T1, T3 & T4, and bus lines. Lyon public transport Sytral offers a bus service, Route 47, that links the airport to Meyzieu[75] where passengers can change onto Tram T3. The regular price of public transport is €1.90, as opposed to €15 one way for the Rhonexpress. In the suburb of Bron, the smaller Lyon-Bron Airport provides an alternative for domestic aviation.\n\nLyon has two major railway stations: Lyon-Part-Dieu, which was built to accommodate the TGV, and Lyon Perrache, an older station that now provides mostly regional service. Smaller railway stations include Gorge-de-Loup, Vaise, Saint-Paul and Jean Macé. Lyon was the first city to be connected to Paris by the TGV in 1981.[76] Since that time the TGV train network has expanded and links Lyon directly to Perpignan, Toulouse, Nice, Marseille, Strasbourg, Nantes and Lille. International trains operate directly to Madrid, Barcelona, Milan, Turin, Geneva, Frankfurt, Luxembourg, Brussels and London.\n\nThe city is at the heart of a dense road network and is located at the meeting point of several highways: A6 to Paris, A7 Marseille, A42 to Geneva, and A43 to Grenoble. The city is now bypassed by the A46. 
A double motorway tunnel passes under Fourvière, connecting the A6 and the A7 autoroutes, both forming the \"Autoroute du Soleil\".\n\nLyon 3: Berges du Rhône campus\n\nLyon 2: Berges du Rhône campus\n\nIPSA Lyon Campus\n\nPlatform I, Lyon-Part-Dieu train station\n\nT1 tramway on the Raymond Barre bridge", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia4.pdf" - }, - { - "text": "The convention was not the only target within Lyon during the French Revolution. After the Convention faded into history, the French Directory appeared and days after the 4 September 1797 Coup of 18 Fructidor, a Directory's commissioner was assassinated in Lyon.\n\nThe city became an important industrial town in the 19th century. In 1831 and 1834, the *canuts* (silk workers) of Lyon staged two major uprisings for better working conditions and pay. In 1862, the first of Lyon's extensive network of funicular railways began operation.\n\nDuring World War II, Lyon was a centre for the occupying Nazi forces, including Klaus Barbie, the infamous \"Butcher of Lyon\". However, the city was also a\n\nstronghold of the French Resistance, the many secret passages known as *traboules*, enabled people to escape Gestapo raids. On 3 September 1944, Lyon was liberated by the 1st Free French Division and the Forces Françaises de l'Intérieur. The city is now home to a Resistance museum.[33][34]\n\n# **Geography**\n\nThe Rhône and Saône converge to the south of the historic city centre, forming a peninsula – the \"*Presqu'île*\" – bounded by two large hills to the west and north and a large plain eastward. Place Bellecour is located on the Presqu'île between the two rivers and is the third-largest public square in France. 
The broad, pedestrian-only Rue de la République leads north from Place Bellecour.\n\nThe northern hill is La Croix-Rousse, known as \"the hill that works\" because it is traditionally home to many small silk workshops, an industry for which the city has long been renowned.[35]\n\nThe western hill is Fourvière, known as \"the hill that prays\" because it is the location\n\nfor Basilica of Notre-Dame de Fourvière, several convents, and Archbishop residence. The district, Vieux Lyon, also hosts the Tour métallique (a highly visible TV tower, replicating the last stage of the Eiffel Tower) and one of the city's railways.[36] Fourvière, along with portions of the Presqu'île and much of La Croix-Rousse, is designated as a UNESCO World Heritage Site. [37]\n\nEast of the Rhône from the Presqu'île is a large flat area upon which sits much of modern Lyon and contains most of the city's population. Situated in this area is La Part-Dieu urban centre, which clusters the landmark structures Tour Incity, Tour Part-Dieu, Tour Oxygène, and Tour Swiss Life, as well as the city's primary railway station, Gare de Lyon-Part-Dieu.\n\nNorth of this district lays the sixth arrondissement, which is home to one of Europe's largest urban parks, the Parc de la Tête d'or, as well as Lycée du Parc and Interpol's world headquarters.\n\nMassacre during the Canut rebellion of 1834\n\nThe Saône-Rhône confluence", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Lyon is also home to the Lyon Hockey Club, an ice hockey team that competes in France's national ice hockey league. The Patinoire Charlemagne is the seat of Club des Sports de Glace de Lyon, the club of Olympic ice dancing champions Marina Anissina and Gwendal Peizerat, and world champions Isabelle Delobel and Olivier Shoenfelder. 
[65] Lyon-Villeurbanne also has a basketball team, ASVEL, that plays at the Astroballe arena.\n\nStade de Gerland\n\n#### **Street art**\n\nSince 2000, Birdy Kids, a group of graffiti artists from the city, has decorated several random buildings and walls along the Lyon ring road. In 2012, the artist collective was chosen to represent the city as its cultural ambassadors.[66]\n\n## **Demographics**\n\nThe population of the city (commune) of Lyon proper was 522,250 at the January 2021 census.[15] As of 2011, 14% of its population was born outside Metropolitan France.[67]\n\n| | | | | Population of Lyon (commune) | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | | (within 2020 borders) | | | | |\n| Year | Pop. | ±% p.a. | Year | Pop. | ±% p.a. | Year | Pop. | ±% p.a. |\n| 1801 | 101,760 | — | 1876 | 344,513 | +1.33% | 1946 | 464,104 | +0.02% |\n| 1806 | 114,643 | +2.41% | 1881 | 378,581 | +1.84% | 1954 | 475,343 | +0.29% |\n| 1821 | 149,611 | +1.79% | 1886 | 404,172 | +1.45% | 1962 | 535,746 | +1.54% |\n| 1831 | 182,668 | +2.02% | 1891 | 440,315 | +1.78% | 1968 | 527,800 | −0.25% |\n| 1836 | 198,683 | +1.60% | 1896 | 468,311 | +1.25% | 1975 | 456,716 | −2.06% |\n| 1841 | 206,670 | +0.79% | 1901 | 461,687 | −0.29% | 1982 | 413,095 | −1.42% |\n| 1846 | 238,466 | +2.86% | 1906 | 474,652 | +0.56% | 1990 | 415,487 | +0.07% |\n| 1851 | 259,220 | +1.68% | 1911 | 462,248 | −0.53% | 1999 | 445,452 | +0.78% |\n| 1856 | 293,743 | +2.66% | 1921 | 462,446 | +0.00% | 2010 | 484,344 | +0.78% |\n| 1861 | 320,326 | +1.72% | 1926 | 463,125 | +0.03% | 2015 | 513,275 | +1.17% |\n| 1866 | 325,219 | +0.30% | 1931 | 463,647 | +0.02% | 2021 | 522,250 | +0.29% |\n| 1872 | 324,590 | −0.03% | 1936 | 463,061 | −0.03% | | | |\n\nAll figures come from population censuses. Figures from 1911 to 1936 (incl.) 
are the redressed figures calculated by INSEE to correct the overestimated population of Lyon published by the municipal authorities at the time (10,000s of false residents had been added by the municipal authorities to artificially inflate the population figures and remain the 2nd largest city of France ahead of Marseille). [68] The 1906 figure is the one published by the municipal authorities, probably already inflated, but not corrected by INSEE because the overestimate was smaller than 10,000. Source: EHESS [69] and INSEE [15]\n\nThe city of Lyon and 58 suburban municipalities have formed since 2015 the Metropolis of Lyon, a directly elected metropolitan authority now in charge of most urban issues, with a population of 1,424,069 in 2021.[16]\n\n| | | | | Population of Lyon (metropolis) (59 communes, within 2020 borders) | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Year | Pop. | ±% p.a. | Year | Pop. | ±% p.a. | Year | Pop. | ±% p.a. |\n| 1861 | 418,515 | — | 1906 | 627,073 | +0.60% | 1968 | 1,077,794 | +2.17% |\n| 1866 | 427,522 | +0.43% | 1911 | 629,931 | +0.09% | 1975 | 1,153,402 | +0.98% |\n| 1872 | 426,552 | −0.04% | 1921 | 659,007 | +0.45% | 1982 | 1,138,718 | −0.18% |\n| 1876 | 453,540 | +1.37% | 1926 | 691,446 | +0.97% | 1990 | 1,166,797 | +0.30% |\n| 1881 | 493,778 | +1.66% | 1931 | 743,297 | +1.46% | 1999 | 1,199,589 | +0.31% |\n| 1886 | 527,621 | +1.47% | 1936 | 738,220 | −0.14% | 2010 | 1,296,166 | +0.72% |\n| 1891 | 566,115 | +1.46% | 1946 | 746,062 | +0.11% | 2015 | 1,370,678 | +1.12% |\n| 1896 | 600,881 | +1.21% | 1954 | 790,662 | +0.71% | 2021 | 1,424,069 | +0.64% |\n| 1901 | 608,856 | +0.26% | 1962 | 947,569 | +2.34% | | | |", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia4.pdf" - }, - { - "text": "1,600,000 m 2 (17,222,256.67 sq ft) of office space and services and more than 55,000 jobs.[48] *Cité Internationale*, created by the architect Renzo Piano is located in the border of the Parc de la Tête 
d'Or in the 6th arrondissement. The worldwide headquarters of Interpol is located there. The district of *Confluence*, in the south of the historic centre, is a new pole of economical and cultural development.\n\nTourism is an important part of the Lyon economy, with one billion euros in 2007 and 3.5 million hotel-nights in 2006 provided by non-residents. Approximately 60% of tourists visit for business, with the rest for leisure. In January 2009, Lyon ranked first in France for hostels business. The festivals most important for attracting tourists are the *Fête des lumières*, the *Nuits de Fourvière* every summer, the *Biennale d'art contemporain* and the *Nuits Sonores*.\n\n# **Culture**\n\nSince the Middle Ages, the region residents have spoken several dialects of Franco-Provençal. The Lyonnais dialect was replaced by the French language as the importance of the city grew. However some \"frenchified\" Franco-Provençal words can also be heard in the French of the Lyonnais, who call their little boys and girls \"gones\" and \"fenottes\" for example.[49]\n\n- The Lumière brothers pioneered cinema in the town in 1895. The Institut Lumière, built as Auguste Lumiere's house, and a fascinating piece of architecture in its own right, holds many of their first inventions and other early cinematic and photographic artifacts.\nGuignol, created in the early 19th C., associated with the silk-workers\n\n8 December each year is marked by the Festival of Lights (la Fête des lumières), a celebration of thanks to the Virgin Mary, who purportedly saved the city from a deadly plague in the Middle Ages. 
During the event, the local population places candles (*luminions*) at their windows and the city of Lyon organizes large-scale light shows onto the sides of important Lyonnais monuments, such as the medieval Cathédrale St-Jean.\n\n- The Saint Francis of Sales church is famous for its large and unaltered Cavaillé-Coll pipe organ, attracting audiences from around the world.\n- The Opéra Nouvel (New Opera House) is the home of the Opéra National de Lyon. The original opera house was re-designed by the distinguished French architect Jean Nouvel between 1985 and 1993 and is named after him.\n- Lyon is also the French capital of \"*trompe l'œil*\" walls, a very ancient tradition. Many are to be seen around the city. This old tradition is now finding a contemporary expression, for example in the art of Guillaume Bottazzi.[50][51]\n- The Brothers of the Sacred Heart, a Roman Catholic congregation that operates schools in Europe and North America, was founded in Lyon in 1821.\n- The African Museum of Lyon is one of the oldest museums situated in Lyon.[52]\n- The Museum of Resistance and Deportation looks at the various individuals prominent in the Resistance movement in World War II. The building is strongly linked to Klaus Barbie. Lyon sees itself as the centre of the French resistance and many members were shot in Place Bellecour in the town centre. The exhibition is largely a series of , mini-biographies of those involved.\n- Lyon is a pilot city of the Council of Europe and the European Commission Intercultural cities program.\n\n## **UNESCO World Heritage Site**\n\nThe historic site of Lyon was designated a UNESCO World Heritage Site in 1998. 
In its designation, UNESCO cited the \"exceptional testimony to the continuity of urban settlement over more than two millennia on a site of great commercial and strategic significance.\"[37] The specific regions comprising the historic site include the Roman district and Fourvière, the Renaissance district (Vieux Lyon), the silk district (slopes of Croix-Rousse), and the Presqu'île, which features architecture from the 12th century to modern times.[53]", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Both Vieux Lyon and the slopes of Croix-Rousse are known for their narrow passageways (named *traboules*) that pass through buildings and link streets on either side. The first examples of traboules are thought to have been built in Lyon in the 4th century. [54] The traboules allowed the inhabitants to get from their homes to the Saône quickly and allowed the canuts on the Croix-Rousse hill to get from their workshops to the textile merchants at the foot of the hill.\n\n#### **Gastronomy**\n\nLyon has a long and chronicled culinary arts tradition. The noted food critic Curnonsky referred to the city as \"the gastronomic capital of the world\",[55] a claim repeated by later writers such as Bill Buford. [56] Renowned 3-star Michelin chefs such as Marie Bourgeois[57] and Eugénie Brazier[58] developed Lyonnaise cuisine into a national phenomenon favoured by the French elite; a tradition which Paul Bocuse later turned into a worldwide success. [59] The *bouchon* is a traditional Lyonnais restaurant that serves local fare such as sausages, duck pâté or roast pork, along with local wines. Two of France's best known wine-growing regions are located near the city: the Beaujolais region to the north and the Côtes du Rhône region to the south. Another Lyon tradition is a type of brunch food called \"mâchons\", made of local charcuterie and usually accompanied by Beaujolais red wine. 
Mâchons were the customary meal of the canuts, the city's silk workers, who ate a late-morning meal after they finished their shifts in the factories.[60]\n\nOther traditional local dishes include coq au vin; quenelle; gras double; salade lyonnaise (lettuce with bacon, croûtons and a poached egg); and the sausage-based rosette lyonnaise and andouillette. Popular local confections include marron glacé and coussin de Lyon. Cervelle de canut (literally, \"silk worker's brains\") is a cheese spread/dip made of a base of fromage blanc, seasoned with chopped herbs, shallots, salt, pepper, olive oil and vinegar.\n\nPassage de l'Argue\n\nÎle Barbe bakery at the Halles de Lyon-Paul Bocuse\n\nMore recently, the french tacos was invented in Lyon suburbs (Vaulx-en-Velin) (or Grenoble according to some theories), in the early 2000s and is now famous worldwide.[61][62]\n\n#### **Sport**\n\nLyon is home to the football club Olympique Lyonnais (OL), whose men's team plays in Ligue 1 and has won the championship of that competition seven times, all consecutively from 2002 to 2008.[63] OL played until December 2015 at the 43,000 seat Stade de Gerland, which also hosted matches of the 1998 FIFA World Cup. Since 2016, the team has played at the Parc Olympique Lyonnais, a 59,000-seat stadium located in the eastern suburb of Décines-Charpieu. [64] OL operates a women's team, Olympique Lyonnais Féminin, which competes in and dominates Division 1 Féminine. They won fourteen consecutive top-flight championships (2007–2020), and additionally claim the four titles won by the original incarnation of FC Lyon, a\n\nParc Olympique Lyonnais\n\nwomen's football club that merged into OL in 2004 (the current FC Lyon was founded in 2009). The OL women have also won the UEFA Women's Champions League eight times, including in five consecutive editions from 2016 to 2020. 
Lyon hosted the 2019 FIFA Women's World Cup semi-finals as well as the Final on 7 July at Stade de Lyon.\n\nLyon has a rugby union team, Lyon OU, in the Top 14, which moved into Stade de Gerland full-time in 2017–18. In addition, Lyon has a rugby league side called Lyon Villeurbanne that plays in the French rugby league championship. The club's home is the Stade Georges Lyvet in Villeurbanne.", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia4.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia4.pdf", - "query": "What is the climate in Lyon ?", - "target_page": 5, - "target_passage": " Lyon has a humid subtropical climate ( Köppen: Cfa), bordering an oceanic climate (Köppen: Cfb, Trewartha: Do).", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "# **Lyon**\n\n**Lyon**[c] (Franco-Provençal: *Liyon*) is the second-largest city in France by urban area and the third largest by city limits.[14] It is located at the confluence of the rivers Rhône and Saône, to the northwest of the French Alps, 391 km (243 mi) southeast of Paris, 278 km (173 mi) north of Marseille, 113 km (70 mi) southwest of Geneva, Switzerland, 58 km (36 mi) northeast of Saint-Étienne.\n\nThe City of Lyon had a population of 522,250 at the Jan. 2021 census within its small municipal territory of 48 km2 (19 sq mi),[15] but together with its suburbs and exurbs the Lyon metropolitan area had a population of 2,308,818 that same year, [7] the second most populated in France. 
Lyon and 58 suburban municipalities have formed since 2015 the Metropolis of Lyon, a directly elected metropolitan authority now in charge of most urban issues, with a population of 1,424,069 in 2021.[16] Lyon is the prefecture of the Auvergne-Rhône-Alpes region and seat of the Departmental Council of Rhône (whose jurisdiction, however, no longer extends over the Metropolis of Lyon since 2015).\n\nThe capital of the Gauls during the Roman Empire, Lyon is the seat of an archbishopric whose holder bears the title of Primate of the Gauls. Lyon became a major economic hub during the Renaissance. The city is recognised for its cuisine and gastronomy, as well as historical and architectural landmarks; as such, the districts of Old Lyon, the Fourvière hill, the Presqu'île and the slopes of the Croix-Rousse are inscribed on the UNESCO World Heritage List. Lyon was historically an important area for the production and weaving of silk. Lyon played a significant role in the history of cinema since Auguste and Louis Lumière invented the cinematograph there. The city is also known for its light festival, the Fête des lumières, which begins every 8 December and lasts for four days, earning Lyon the title of \"Capital of Lights\".\n\nEconomically, Lyon is a major centre for banking, chemical, pharmaceutical and biotech industries. The city contains a significant software industry with a particular focus on video games; in recent years it has fostered a growing local start-up sector. [17] The home of renowned universities and higher education schools, Lyon is the second-largest student city in France, with a university population of nearly 200,000 students within the Metropolis of Lyon.[18] Lyon hosts the international headquarters of Interpol, the International Agency for Research on Cancer, as well as Euronews. According to the Globalization and World Rankings Research Institute, Lyon is considered a Beta city, as of 2018. 
[19] It ranked second in France and 40th globally in Mercer's 2019 liveability rankings. [20]\n\n## **History**\n\ncompanion\")\n\n**Location of Lyon**\n\n[b]\n\n**Toponymy**", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia4.pdf" - }, - { - "text": "All figures come from population censuses. Figures from 1911 to 1936 (incl.) are computed using the redressed figures for the commune of Lyon calculated by INSEE to correct the overestimated population of Lyon published by the municipal authorities at the time (10,000s of false residents had been added by the municipal authorities to artificially inflate the population figures and remain the 2nd largest city of France ahead of Marseille). [68] The 1906 figure is computed using the figure for the commune of Lyon published by the municipal authorities, probably already inflated, but not corrected by INSEE because the overestimate was smaller than 10,000. Source: EHESS [70] and INSEE [71]\n\n## **Foreign-born**\n\n# **Education**\n\n### **Universities and tertiary education**\n\n- École Centrale de Lyon;\n- École Normale Supérieure de Lyon\n- EM Lyon (École de Management de Lyon);\n- ECE Lyon (École de Commerce Européenne de Lyon);\n- Institut d'études politiques de Lyon (Sciences Po Lyon);\n- CPE Lyon;\n- CNSMD (Conservatoire national supérieur de musique et de danse de Lyon)\n- ECAM Lyon (École Catholique d'Arts et Métiers de Lyon);\n- EPITECH;\n- EPITA;\n- ENTPE (École Nationale des Travaux Publiques de l'État);\n- École nationale vétérinaire de Lyon (ENVL);\n- ESME-Sudria;\n- École des Beaux-Arts;\n- E-Artsup;\n- INSA Lyon (Institut National des Sciences Appliquées de Lyon);\n- Polytech Lyon;\n- Institut supérieur européen de gestion group;\n- ISARA (Institut Supérieur d'Agriculture Rhône Alpes);\n- Institution des Chartreux;\n- Institut polytechnique des sciences avancées;\n- Université Claude Bernard (Lyon 1);\n- Université Lumière (Lyon 2);\n- Université Jean Moulin (Lyon 3);\n- IAE (Institut 
d'Administration des Entreprises de Lyon);\n- Institut Sup'Biotech de Paris;\n- Catholic University of Lyon;\n- ESDES Business School;\n- IDRAC (International School of Management);\n- Wesford Graduate Business School;\n- IFAG (Business Management School);\n- Institut supérieur européen de formation par l'action;\n- Le Lycée du Parc;\n- La Martinière Lyon;\n- Web@cademie;\n- CEESO (Centre Européen d'Enseignement Supérieur de l'Ostéopathie);\n\nForeign-born population in Lyon by country of birth [72]\n\n| Country of birth | Population (2020) |\n| --- | --- |\n| Algeria | 14,779 |\n| Morocco | 5,245 |\n| Tunisia | 4,879 |\n| Italy | 3,351 |\n| Portugal | 3,068 |\n| Spain | 2,064 |\n| DR Congo | 1,520 |\n| China | 1,429 |\n| Cameroon | 1,364 |\n| Senegal | 1,198 |\n\nENS Lyon: René Descartes campus\n\nLyon 3: Manufacture des Tabacs campus", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- 31. Braudel 1984 p. 327\n- 32. Pierre Edmond DESVIGNES. \"Quartier renaissance Lyon : Vieux Lyon, quartier ancien et secteur sauvegarde Lyon\" (https://web.archive.org/web/20110119152753/http://www.vieux-lyon.org/lyon-epoque-renaissance_f01 150.htm). Vieux-lyon.org. Archived from the original (http://www.vieux-lyon.org/lyon-epoque-renaissance_f011 50.htm) on 19 January 2011. Retrieved 3 April 2011.\n- 33. \"CHRD Lyon\" (https://web.archive.org/web/20110124140355/http://www.chrd.lyon.fr/chrd/sections/fr/pied/engli sh_1). *Chrd.lyon.fr*. 2017. Archived from the original (http://www.chrd.lyon.fr/chrd/sections/fr/pied/english_1) on 24 January 2011. Retrieved 21 December 2017.\n- 34. Cosgrove, Michael (4 June 2009). \"Lyon: The Resistance and Deportation Museum\" (http://www.digitaljournal. com/article/273644). *Digitaljournal.com*.\n- 35. (in French) Georges Duby (ed), *Histoire de la France : Dynasties et révolutions, de 1348 à 1852* (vol. 2), Larousse, 1999 p. 53 ISBN 2-03-505047-2\n- 36. 
\"Lyon, France: Local Transport\" (http://www.lonelyplanet.com/france/burgundy-and-the-rhone/lyon/transport/g etting-around/local-transport). Lonely Planet. Retrieved 2 February 2017.\n- 37. \"Historic Site of Lyon\" (https://whc.unesco.org/en/list/872/). *unesco.org*. UNESCO World Heritage Centre. Retrieved 31 July 2015.\n- 38. Gregory, Stanley. \"Climatic Classification and Climatic Change (Klimaklassifikation Und Klimaänderung) (http s://www.jstor.org/stable/25636095).\" *Erdkunde*, vol. 8, no. 4, 1954, pp. 246–252. *JSTOR.*\n- 39. \"Données climatiques de la station de Lyon: Relevés de 2016 Lyon\" (https://web.archive.org/web/20161004 055201/http://www.meteofrance.com/climat/france/lyon/69029001/releves) (in French). Meteo France. Archived from the original (http://www.meteofrance.com/climat/france/lyon/69029001/releves) on 4 October 2016. Retrieved 2 October 2016.\n- 40. \"Lyon-Bron (69)\" (https://donneespubliques.meteofrance.fr/FichesClim/FICHECLIM_69029001.pdf) (PDF). *Fiche Climatologique: Statistiques 1991–2020 et records* (in French). Meteo France. Retrieved 14 July 2022.\n- 41. \"Température et records en Août pour Lyon\" (https://www.meteo-lyon.net/records/mois/aout). *meteo-lyon.net* (in French). Météo Villes. Retrieved 7 September 2023.\n- 42. \"Lyon–Bron (07480) WMO Weather Station\" (ftp://ftp.atdd.noaa.gov/pub/GCOS/WMO-Normals/TABLES/RE G_VI/FR/07480.TXT). NOAA. Retrieved 8 February 2019. Archived (https://archive.org/details/19611990Norm alsNOAALyonBron) 8 February 2019, at the Wayback Machine\n- 43. \"Normes et records 1961–1990: Lyon-Bron (69) altitude 198m\" (https://web.archive.org/web/201603032035 26/http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) (in French). Infoclimat. Archived from the original (http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) on 3 March 2016. Retrieved 8 February 2019.\n- 44. \"St-Irénée France\" (http://www.sacred-destinations.com/france/lyon-eglise-st-irenee). *sacreddestinations.com*.\n- 45. 
\"Discover the Musée Miniature et Cinéma in Lyon | Unique in Europe\" (https://www.museeminiatureetcin ema.fr/en/). *Musée Miniature et Cinéma*.\n- 46. OECD. \"City statistics : Economy\" (https://stats.oecd.org/Index.aspx?datasetcode=FUA_CITY). Retrieved 16 January 2023.\n- 47. \"Le laboratoire P4, ménagerie virale\" (https://wayback.archive-it.org/all/20090606013924/http://www.lemonde. fr/planete/article/2009/06/05/le-laboratoire-p4-menagerie-virale_1202866_3244.html). *Le Monde*. France. Archived from the original (http://www.lemonde.fr/planete/article/2009/06/05/le-laboratoire-p4-menagerie-viral e_1202866_3244.html) on 6 June 2009. Retrieved 8 July 2009.\n- 48. \"Official site of Lyon\" (https://web.archive.org/web/20100424192931/http://www.grandlyon.com/La-Part-Dieu.2 315.0.html). Grandlyon.com. Archived from the original (http://www.grandlyon.com/La-Part-Dieu.2315.0.html) on 24 April 2010. Retrieved 3 April 2011.\n- 49. Jean-Baptiste Onofrio : *Essai d'un glossaire des patois de Lyonnais, Forez et Beaujolais*, Lyon 1864\n- 50. \"Pierre Alain Muet Archives 2008\" (https://web.archive.org/web/20100124093221/http://pa-muet.com/archives. htm). Pa-muet.com. 17 June 2008. Archived from the original (http://pa-muet.com/archives.htm) on 24 January 2010. Retrieved 25 January 2010.\n- 51. \"Bottazzi fait le mur\" (https://web.archive.org/web/20071125163711/http://www.brefonline.com/numeroERA_af fichearticle.asp?idA=3262). Brefonline.Com. Archived from the original (http://www.brefonline.com/numeroER A_affichearticle.asp?idA=3262) on 25 November 2007. Retrieved 5 February 2009.\n- 52. \"The African Museum of Lyon Website\" (https://web.archive.org/web/20090219232752/http://musee-africain-ly on.org/). Musee-africain-lyon.org. Archived from the original (http://www.musee-africain-lyon.org/) on 19 February 2009. Retrieved 5 February 2009.\n- 53. 
UNESCO World Heritage Site (http://www.lyon.fr/vdl/sections/en/tourisme/copy_of_patrimoine/a_patrimoinem ondial) Archived (https://web.archive.org/web/20110718090826/http://www.lyon.fr/vdl/sections/en/Tourisme/co py_of_patrimoine/a_patrimoinemondial) 18 July 2011 at the Wayback Machine. City of Lyon official website. Retrieved 26 November 2009.", - "page_start": 22, - "page_end": 22, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- Bellecour, Écoles D'Arts.\n## **Primary and secondary schools**\n\nThere are some international private schools in the Lyon area, including:\n\n- Cité Scolaire Internationale de Lyon or the Lycée de Gerland;\n\t- Includes the *Section Japonaises* (リヨン・ジェルラン補習授業校 *Riyon Jeruran Hoshū Jugyō Kō* \"Lyon Gerland Japanese Supplementary School\"), which the Japanese Ministry of Education (MEXT) counts as a part-time Japanese supplementary school[73]\n- Ombrosa;\n- International School of Lyon in nearby Sainte-Foy-lès-Lyon;\n- Montessori School of Lyon.\n\n## **Supplementary education**\n\nOther Japanese supplementary schools:\n\n- The *Association Pour le Développement de la Langue et de la Culture Japonaises* (ADLCJ; リヨン補習授業校 *Riyon Hoshū Jugyō Kō*) is held in the *Maison Berty Albrecht* in Villeurbanne, near Lyon.[73] It was formed in 1987.[74] It serves Japanese expatriate children who wish to continue their Japanese education whilst abroad.\n# **Transport**\n\nLyon–Saint-Exupéry Airport, located east of Lyon, serves as a base for domestic and international flights. It is a key transport facility for the entire Rhône-Alpes region, with coach links to other cities in the area. The in-house train station Gare de Lyon Saint-Exupéry connects the airport to the nationwide TGV network. The Rhônexpress tram monopoly links the airport with the business quarter of La Part Dieu in less than 30 minutes, and offers connections with Underground A & B, Tramway T1, T3 & T4, and bus lines. 
Lyon public transport Sytral offers a bus service, Route 47, that links the airport to Meyzieu[75] where passengers can change onto Tram T3. The regular price of public transport is €1.90, as opposed to €15 one way for the Rhonexpress. In the suburb of Bron, the smaller Lyon-Bron Airport provides an alternative for domestic aviation.\n\nLyon has two major railway stations: Lyon-Part-Dieu, which was built to accommodate the TGV, and Lyon Perrache, an older station that now provides mostly regional service. Smaller railway stations include Gorge-de-Loup, Vaise, Saint-Paul and Jean Macé. Lyon was the first city to be connected to Paris by the TGV in 1981.[76] Since that time the TGV train network has expanded and links Lyon directly to Perpignan, Toulouse, Nice, Marseille, Strasbourg, Nantes and Lille. International trains operate directly to Madrid, Barcelona, Milan, Turin, Geneva, Frankfurt, Luxembourg, Brussels and London.\n\nThe city is at the heart of a dense road network and is located at the meeting point of several highways: A6 to Paris, A7 Marseille, A42 to Geneva, and A43 to Grenoble. The city is now bypassed by the A46. A double motorway tunnel passes under Fourvière, connecting the A6 and the A7 autoroutes, both forming the \"Autoroute du Soleil\".\n\nLyon 3: Berges du Rhône campus\n\nLyon 2: Berges du Rhône campus\n\nIPSA Lyon Campus\n\nPlatform I, Lyon-Part-Dieu train station\n\nT1 tramway on the Raymond Barre bridge", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Early Christians in Lyon were martyred for their beliefs under the reigns of various Roman emperors, most notably Marcus Aurelius and Septimius Severus. [28] Local saints from this period include Blandina, Pothinus, and Epipodius, among others. The Greek Irenaeus was the second bishop of Lyon during the latter part of the second century. 
[29] To this day, the archbishop of Lyon is still referred to as \"*Primat des Gaules*\".[30]\n\nBurgundians fleeing the destruction of Worms by the Huns in 437 were re-settled in eastern Gaul. In 443 the Romans established the Kingdom of the Burgundians, and Lugdunum became its capital in 461. In 843, under the Treaty of Verdun, Lyon went to the Holy Roman Emperor Lothair I. It later was made part of the Kingdom of Arles which was incorporated into the Holy Roman Empire in 1033. Lyon did not come\n\nThe Roman-era Theatre on the Fourvière Hill\n\nunder French control until the 14th century.\n\n#### **Modern Lyon**\n\nFernand Braudel remarked, \"Historians of Lyon are not sufficiently aware of the bipolarity between Paris and Lyon, which is a constant structure in French development...from the late Middle Ages to the Industrial\n\nRevolution\".[31] In the late 15th century, the fairs introduced by Italian merchants made Lyon the economic counting house of France. Even the *Bourse* (treasury), built in 1749, resembled a public bazaar where accounts were settled in the open air. When international banking moved to Genoa, then Amsterdam, Lyon remained the banking centre of France.\n\nDuring the Renaissance, the city's development was driven by the silk trade, which strengthened its ties to Italy. Italian influence on Lyon's architecture is still visible among historic buildings.[32] In the late 1400s and 1500s Lyon was also a key centre of literary activity and book publishing, both of French writers (such as Maurice Scève, Antoine Heroet, and Louise Labé) and of Italians in exile (such as Luigi Alamanni and Gian Giorgio Trissino).\n\nIn 1572, Lyon was a scene of mass violence by Catholics against Protestant Huguenots in the St. Bartholomew's Day Massacre. Two centuries later, Lyon was again convulsed by violence during the French Revolution, when the citizenry rose up against the National Convention\n\nand supported the Girondins. 
The city was besieged by Revolutionary armies for over two months before it surrendered in October 1793. Many buildings were destroyed, especially around the Place Bellecour, and Jean-Marie Collot d'Herbois and Joseph Fouché administered the execution of more than 2,000 people. The Convention ordered that its name be changed to \"Liberated City\", and a plaque was erected that proclaimed \"Lyons made war on Liberty; Lyons no longer exists\". A decade later, Napoleon ordered the reconstruction of all the buildings demolished during that period.\n\n| • Metro density | 500/km2 (1,300/sq mi) |\n| --- | --- |\n| Time zone | UTC+01:00 (CET) |\n| • Summer (DST) | UTC+02:00 (CEST) |\n| INSEE/Postal code | 69123 (https://www.inse |\n| | e.fr/fr/statistiques/14055 |\n| | 99?geo=COM-69123) |\n| | /69001-69009 |\n| Elevation | 162–349 m (531– |\n| | 1,145 ft) |\n| Website | lyon.fr (https://www.lyon. |\n| | fr/) |\n| 1 French | Land Register data, which excludes |\n\nlakes, ponds, glaciers > 1 km2 (0.386 sq mi or 247 acres) and river estuaries.\n\n#### **Timeline of Lyon**\n\n**Historical affiliations** Roman Empire (Gallia Lugdunensis), 43 BC-286 Western Roman Empire (Gallia Lugdunensis), 286-411 Kingdom of the Burgundians, 411–534 Francia, 534–843 Middle Francia, 843–855 Lotharingia, 855–879 Lower Burgundy, 879-933 Kingdom of Arles, 933–1312 Kingdom of France (Lyonnais), 1312– 1792 French First Republic, 1792–1793 Counter-revolutionary, 1793 French First Republic, 1793–1804 First French Empire, 1804–1814 Kingdom of France, 1814–1815 First French Empire, 1815 Kingdom of France, 1815–1830 Kingdom of France, 1830–1848 French Second Republic, 1848–1852 Second French Empire, 1852–1870 French Third Republic, 1870–1940 Vichy France, 1940–1944 French Fourth Republic, 1944–1958 France, 1958–present\n\nLyon under siege in 1793", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia4.pdf" - }, - { - "text": "The convention was not the only target within Lyon during the French 
Revolution. After the Convention faded into history, the French Directory appeared and days after the 4 September 1797 Coup of 18 Fructidor, a Directory's commissioner was assassinated in Lyon.\n\nThe city became an important industrial town in the 19th century. In 1831 and 1834, the *canuts* (silk workers) of Lyon staged two major uprisings for better working conditions and pay. In 1862, the first of Lyon's extensive network of funicular railways began operation.\n\nDuring World War II, Lyon was a centre for the occupying Nazi forces, including Klaus Barbie, the infamous \"Butcher of Lyon\". However, the city was also a\n\nstronghold of the French Resistance, the many secret passages known as *traboules*, enabled people to escape Gestapo raids. On 3 September 1944, Lyon was liberated by the 1st Free French Division and the Forces Françaises de l'Intérieur. The city is now home to a Resistance museum.[33][34]\n\n# **Geography**\n\nThe Rhône and Saône converge to the south of the historic city centre, forming a peninsula – the \"*Presqu'île*\" – bounded by two large hills to the west and north and a large plain eastward. Place Bellecour is located on the Presqu'île between the two rivers and is the third-largest public square in France. The broad, pedestrian-only Rue de la République leads north from Place Bellecour.\n\nThe northern hill is La Croix-Rousse, known as \"the hill that works\" because it is traditionally home to many small silk workshops, an industry for which the city has long been renowned.[35]\n\nThe western hill is Fourvière, known as \"the hill that prays\" because it is the location\n\nfor Basilica of Notre-Dame de Fourvière, several convents, and Archbishop residence. 
The district, Vieux Lyon, also hosts the Tour métallique (a highly visible TV tower, replicating the last stage of the Eiffel Tower) and one of the city's railways.[36] Fourvière, along with portions of the Presqu'île and much of La Croix-Rousse, is designated as a UNESCO World Heritage Site. [37]\n\nEast of the Rhône from the Presqu'île is a large flat area upon which sits much of modern Lyon and contains most of the city's population. Situated in this area is La Part-Dieu urban centre, which clusters the landmark structures Tour Incity, Tour Part-Dieu, Tour Oxygène, and Tour Swiss Life, as well as the city's primary railway station, Gare de Lyon-Part-Dieu.\n\nNorth of this district lays the sixth arrondissement, which is home to one of Europe's largest urban parks, the Parc de la Tête d'or, as well as Lycée du Parc and Interpol's world headquarters.\n\nMassacre during the Canut rebellion of 1834\n\nThe Saône-Rhône confluence", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Both Vieux Lyon and the slopes of Croix-Rousse are known for their narrow passageways (named *traboules*) that pass through buildings and link streets on either side. The first examples of traboules are thought to have been built in Lyon in the 4th century. [54] The traboules allowed the inhabitants to get from their homes to the Saône quickly and allowed the canuts on the Croix-Rousse hill to get from their workshops to the textile merchants at the foot of the hill.\n\n#### **Gastronomy**\n\nLyon has a long and chronicled culinary arts tradition. The noted food critic Curnonsky referred to the city as \"the gastronomic capital of the world\",[55] a claim repeated by later writers such as Bill Buford. [56] Renowned 3-star Michelin chefs such as Marie Bourgeois[57] and Eugénie Brazier[58] developed Lyonnaise cuisine into a national phenomenon favoured by the French elite; a tradition which Paul Bocuse later turned into a worldwide success. 
[59] The *bouchon* is a traditional Lyonnais restaurant that serves local fare such as sausages, duck pâté or roast pork, along with local wines. Two of France's best known wine-growing regions are located near the city: the Beaujolais region to the north and the Côtes du Rhône region to the south. Another Lyon tradition is a type of brunch food called \"mâchons\", made of local charcuterie and usually accompanied by Beaujolais red wine. Mâchons were the customary meal of the canuts, the city's silk workers, who ate a late-morning meal after they finished their shifts in the factories.[60]\n\nOther traditional local dishes include coq au vin; quenelle; gras double; salade lyonnaise (lettuce with bacon, croûtons and a poached egg); and the sausage-based rosette lyonnaise and andouillette. Popular local confections include marron glacé and coussin de Lyon. Cervelle de canut (literally, \"silk worker's brains\") is a cheese spread/dip made of a base of fromage blanc, seasoned with chopped herbs, shallots, salt, pepper, olive oil and vinegar.\n\nPassage de l'Argue\n\nÎle Barbe bakery at the Halles de Lyon-Paul Bocuse\n\nMore recently, the french tacos was invented in Lyon suburbs (Vaulx-en-Velin) (or Grenoble according to some theories), in the early 2000s and is now famous worldwide.[61][62]\n\n#### **Sport**\n\nLyon is home to the football club Olympique Lyonnais (OL), whose men's team plays in Ligue 1 and has won the championship of that competition seven times, all consecutively from 2002 to 2008.[63] OL played until December 2015 at the 43,000 seat Stade de Gerland, which also hosted matches of the 1998 FIFA World Cup. Since 2016, the team has played at the Parc Olympique Lyonnais, a 59,000-seat stadium located in the eastern suburb of Décines-Charpieu. [64] OL operates a women's team, Olympique Lyonnais Féminin, which competes in and dominates Division 1 Féminine. 
They won fourteen consecutive top-flight championships (2007–2020), and additionally claim the four titles won by the original incarnation of FC Lyon, a\n\nParc Olympique Lyonnais\n\nwomen's football club that merged into OL in 2004 (the current FC Lyon was founded in 2009). The OL women have also won the UEFA Women's Champions League eight times, including in five consecutive editions from 2016 to 2020. Lyon hosted the 2019 FIFA Women's World Cup semi-finals as well as the Final on 7 July at Stade de Lyon.\n\nLyon has a rugby union team, Lyon OU, in the Top 14, which moved into Stade de Gerland full-time in 2017–18. In addition, Lyon has a rugby league side called Lyon Villeurbanne that plays in the French rugby league championship. The club's home is the Stade Georges Lyvet in Villeurbanne.", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia4.pdf" - }, - { - "text": "## **Climate**\n\nLyon has a humid subtropical climate (Köppen: *Cfa*), bordering an oceanic climate (*Köppen*: *Cfb*, Trewartha: *Do*).[38] The mean temperature in Lyon in the coldest month is 4.1 °C (39.4 °F) in January and in the warmest month in July is 22.6 °C (72.7 °F). Precipitation is adequate year-round, at an average of 820 mm (32.3 in), the winter months are the driest. 
The highest recorded temperature was 40.5 °C (104.9 °F) on 13 August 2003 while the lowest recorded temperature was −24.6 °C (−12.3 °F) on 22 December 1938.[39]\n\nIce on the Saône, 2012", - "page_start": 4, - "page_end": 4, - "source_file": "wikipedia4.pdf" - }, - { - "text": "| Mayor | Term start | Term end | Party |\n| --- | --- | --- | --- |\n| Antoine Gailleton | 1881 | 1900 | |\n| Victor Augagneur | 1900 | 30 October 1905 | PRS |\n| Édouard Herriot | 30 October 1905 | 20 September 1940 | Radical |\n| Georges Cohendy | 20 September 1940 | 1941 | Nominated and dismissed by Vichy |\n| Georges Villiers | 1941 | 1942 | Nominated and dismissed by Vichy |\n| Pierre-Louis-André Bertrand | 1942 | 1944 | Nominated by Vichy |\n| Justin Godart | 1944 | 18 May 1945 | Radical |\n| Édouard Herriot | 18 May 1945 | 26 March 1957 | Radical |\n| Pierre Montel, ad interim | 26 March 1957 | 14 April 1957 | Radical |\n| Louis Pradel | 14 April 1957 | 27 November 1976 | DVD |\n| Armand Tapernoux, ad interim | 27 November 1976 | 5 December 1976 | DVD |\n| Francisque Collomb | 5 December 1976 | 24 March 1989 | DVD |\n| Michel Noir | 24 March 1989 | 25 June 1995 | RPR |\n| Raymond Barre | 25 June 1995 | 25 March 2001 | DVD |\n| Gérard Collomb | 25 March 2001 | 17 July 2017 | PS |\n| Georges Képénékian | 17 July 2017 | 5 November 2018 | LREM |\n| Gérard Collomb | 5 November 2018 | 4 July 2020 | LREM |\n| Grégory Doucet | 4 July 2020 | Incumbent | EELV |\n\n### **Metropolis**\n\nSince 2015, the commune of Lyon (48 km 2 (19 sq mi) in land area) and 58 suburban communes have formed the Metropolis of Lyon (534 km2 (206 sq mi) in land area), a directly elected metropolitan authority now in charge of most urban issues. The Metropolis of Lyon is the only metropolitan authority in France which is a territorial collectivity, on par with French communes and departments. 
Its metropolitan council was for the first time directly elected by universal suffrage in 2020 within 14 electoral wards, the only directly elected metropolitan council in France.\n\nThe 14 electoral wards are the following (see map for location):\n\n- Lônes et coteaux Lyon-Centre (Lyon-Centre) Lyon-Est (Lyon-East) Lyon-Nord (Lyon-North) Lyon-Ouest Lyon-Sud Lyon-Sud-Est Ouest Plateau Nord-Caluire Porte des Alpes Portes du Sud Rhône Amont Val de Saône Villeurbanne\nThe six wards with names starting with \"Lyon\" are all located within the commune of Lyon. The Villeurbanne ward is coterminous with the namesake commune. All other seven wards each group various suburban communes.\n\nMap of the Metropolis of Lyon and its 59 communes (the commune of Lyon is in red)", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia4.pdf" - }, - { - "text": "1,600,000 m 2 (17,222,256.67 sq ft) of office space and services and more than 55,000 jobs.[48] *Cité Internationale*, created by the architect Renzo Piano is located in the border of the Parc de la Tête d'Or in the 6th arrondissement. The worldwide headquarters of Interpol is located there. The district of *Confluence*, in the south of the historic centre, is a new pole of economical and cultural development.\n\nTourism is an important part of the Lyon economy, with one billion euros in 2007 and 3.5 million hotel-nights in 2006 provided by non-residents. Approximately 60% of tourists visit for business, with the rest for leisure. In January 2009, Lyon ranked first in France for hostels business. The festivals most important for attracting tourists are the *Fête des lumières*, the *Nuits de Fourvière* every summer, the *Biennale d'art contemporain* and the *Nuits Sonores*.\n\n# **Culture**\n\nSince the Middle Ages, the region residents have spoken several dialects of Franco-Provençal. The Lyonnais dialect was replaced by the French language as the importance of the city grew. 
However some \"frenchified\" Franco-Provençal words can also be heard in the French of the Lyonnais, who call their little boys and girls \"gones\" and \"fenottes\" for example.[49]\n\n- The Lumière brothers pioneered cinema in the town in 1895. The Institut Lumière, built as Auguste Lumiere's house, and a fascinating piece of architecture in its own right, holds many of their first inventions and other early cinematic and photographic artifacts.\nGuignol, created in the early 19th C., associated with the silk-workers\n\n8 December each year is marked by the Festival of Lights (la Fête des lumières), a celebration of thanks to the Virgin Mary, who purportedly saved the city from a deadly plague in the Middle Ages. During the event, the local population places candles (*luminions*) at their windows and the city of Lyon organizes large-scale light shows onto the sides of important Lyonnais monuments, such as the medieval Cathédrale St-Jean.\n\n- The Saint Francis of Sales church is famous for its large and unaltered Cavaillé-Coll pipe organ, attracting audiences from around the world.\n- The Opéra Nouvel (New Opera House) is the home of the Opéra National de Lyon. The original opera house was re-designed by the distinguished French architect Jean Nouvel between 1985 and 1993 and is named after him.\n- Lyon is also the French capital of \"*trompe l'œil*\" walls, a very ancient tradition. Many are to be seen around the city. This old tradition is now finding a contemporary expression, for example in the art of Guillaume Bottazzi.[50][51]\n- The Brothers of the Sacred Heart, a Roman Catholic congregation that operates schools in Europe and North America, was founded in Lyon in 1821.\n- The African Museum of Lyon is one of the oldest museums situated in Lyon.[52]\n- The Museum of Resistance and Deportation looks at the various individuals prominent in the Resistance movement in World War II. The building is strongly linked to Klaus Barbie. 
Lyon sees itself as the centre of the French resistance and many members were shot in Place Bellecour in the town centre. The exhibition is largely a series of , mini-biographies of those involved.\n- Lyon is a pilot city of the Council of Europe and the European Commission Intercultural cities program.\n\n## **UNESCO World Heritage Site**\n\nThe historic site of Lyon was designated a UNESCO World Heritage Site in 1998. In its designation, UNESCO cited the \"exceptional testimony to the continuity of urban settlement over more than two millennia on a site of great commercial and strategic significance.\"[37] The specific regions comprising the historic site include the Roman district and Fourvière, the Renaissance district (Vieux Lyon), the silk district (slopes of Croix-Rousse), and the Presqu'île, which features architecture from the 12th century to modern times.[53]", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia4.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210538_en.pdf", - "query": " What should do the rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided ?", - "target_page": 2, - "target_passage": "ensure that the register is kept in that church or chapel, and (b) do everything that is reasonably practicable to ensure that the register is protected against theft, loss or damage.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "- (a) indicates the descriptions of information required by each of sub-paragraphs (a) to (h) of regulation 3(2) in relation to the marriage, and\n- (b) provides corresponding spaces for recording information required by each of those subparagraphs in relation to the marriage.\n\n(6) A register of marriage services provided under paragraph (1) by a parochial church council belongs to that parochial church council.\n\n## **Duty to record information about marriages solemnized according to 
the rites of the Church of England or Church in Wales**\n\n**3.**—(1) Paragraphs (2), (3) and (4) apply where a marriage has been solemnized according to the rites of the Church of England in a church or chapel in which banns of matrimony may be published.\n\n(2) As soon as practicable after the marriage has been solemnized, the clergyman by whom the marriage was solemnized must make a record of the following information in relation to that marriage in a register of marriage services provided to the church or chapel under regulation 2(1)—\n\n- (a) the date and place of the marriage;\n- (b) the name and surname of each party;\n- (c) the date of birth of each party;\n- (d) the occupation (if any) of each party;\n- (e) the address of each party at the time of the marriage;\n- (f) the names and surnames of each party's parents, so far as those names and surnames are known to the clergyman who solemnized the marriage;\n- (g) the name and surname of each of the witnesses in whose presence the marriage was solemnized;\n- (h) the name and surname of the clergyman by whom the marriage was solemnized.\n\n(3) The clergyman must record the information required by paragraph (2) in English, and may also record information required by that paragraph in Welsh where the church or chapel is situated in Wales.\n\n- (4) After making a record under paragraph (2) the clergyman must sign it.\n(5) This regulation does not apply in relation to a marriage solemnized before 4th May 2021.\n\n### **Requirements about the keeping of registers of marriage services**\n\n**4.**—(1) The rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1) must—\n\n- (a) ensure that the register is kept in that church or chapel, and\n- (b) do everything that is reasonably practicable to ensure that the register is protected against theft, loss or damage.\n\n(2) Where there is no rector, vicar or curate in charge of a church or chapel 
to which a register of marriage services has been provided under regulation 2(1), the obligations under paragraph (1) in respect of that register fall on the churchwardens of the parish in which the church or chapel is situated.\n\nGiven under my hand on 29th April 2021\n\n*Abi Tierney* Registrar General", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "I approve\n\n*Kevin Foster* Parliamentary Under Secretary of State 29th April 2021 Home Office\n\n## **EXPLANATORY NOTE**\n\n*(This note is not part of the Regulations)* \n\nThese Regulations provide for records of marriages to be kept in churches and chapels of the Church of England and the Church in Wales, other than chapels to which Part 5 of the Marriage Act 1949 applies (naval, military and air force chapels).\n\nRegulation 2 requires parochial church councils to provide books known as \"registers of marriage services\" to churches and chapels in their parish in which banns of matrimony may be published, for the purposes of keeping the records required by regulation 3. Regulation 2 also imposes requirements relating to the durability and pre-printed content of these registers, and provides that they belong to the parochial church council.\n\nRegulation 3 requires specified information to be recorded in a register of marriage services when a marriage has been solemnized on or after 4th May 2021 according to the rites of the Church of England or Church in Wales in a church or chapel in which banns of matrimony may be published. 
The record must be made and signed by the member of the clergy by whom the marriage was solemnized.\n\nRegulation 4 imposes requirements relating to the keeping of registers of marriage services provided under regulation 2.\n\nA full impact assessment has not been produced for this instrument because no, or no significant, impact on the private, public or voluntary sector is foreseen.\n\n \n\n© Crown copyright 2021\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## **2021 No. 538**\n\n## **MARRIAGE, ENGLAND AND WALES**\n\n# The Marriage (Keeping of Records in Churches and Chapels) Regulations 2021\n\n| Made - - - | - | 29th April 2021 |\n| --- | --- | --- |\n| Coming into force - | - | 4th May 2021 |\n\nThe Registrar General makes these Regulations with the approval of the Secretary of State in exercise of the powers conferred by section 74(1)(c)(v), (1A)(a) and (3) of the Marriage Act 1949(**a**).\n\n#### **Citation, commencement, extent and interpretation**\n\n**1.**—(1) These Regulations may be cited as the Marriage (Keeping of Records in Churches and Chapels) Regulations 2021.\n\n(2) These Regulations come into force on 4th May 2021.\n\n(3) These Regulations extend to England and Wales.\n\n(4) In these Regulations, \"chapel\" does not include a chapel to which Part 5 of the Marriage Act 1949 (marriages in naval, military and air force chapels) applies(**b**).\n\n#### **Duty of parochial church councils to provide registers of marriage services**\n\n**2.**—(1) The parochial church council of a parish must provide books for the purpose of making records under regulation 3 to each church and chapel of the Church of England(**c**) in that parish in which banns of 
matrimony may be published.\n\n(2) Books provided under paragraph (1) are to be known as \"registers of marriage services\".\n\n(3) A register of marriage services provided under paragraph (1) must meet the requirements of paragraphs (4) and (5).\n\n(4) The register must be made of durable material.\n\n(5) For the purposes of enabling a record to be made in the register under regulation 3 in respect of a marriage, the register must be printed in such a way that it—\n\n(<b>a) 1949 c. 76 (12 & 13 Geo 6). Section 74 was amended by Schedule 2 to the Registration Service Act 1953 (c. 37) and by paragraph 5(1)(d) of Schedule 2 to the Transfer of Functions (Registration) Order 2008 (S.I. 2008/678) and subsequently renumbered as section 74(1) by article 12 of the Registration of Marriages etc. (Electronic Communications and Electronic Storage) Order 2009 (S.I. 2009/2821). Section 74(1) was amended by paragraph 19 of Schedule 15 to the Immigration Act 2016 (c. 19) and paragraph 43 of Schedule 1 to the Registration of Marriages Regulations 2021 (S.I. 2021/411), which also inserted subsection (1A).\n\n(<b>b) See section 68(2) of the Marriage Act 1949. The certification function of the Admiralty under that section was transferred to the Secretary of State by the Defence (Transfer of Functions) Act 1964 (c. 15).\n\n(<b>c) Section 78(2) of the Marriage Act 1949 provides for references to the Church of England to be construed as including references to the Church in Wales.", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "Assistant Minister.\n\n# **43. 
Tenure of office of Ministers and Assistant Ministers**\n\nThe office of any Minister or Assistant Minister shall become vacant-\n\n- (a) in the case of a Minister or Assistant Minister appointed from among the Members of the National Assembly, or in the case of a Minister or Assistant Minister appointed from among persons who are not Members of the Assembly who becomes a Member of the Assembly before the expiration of four months from the date of his or her appointment-\n\t- (i) if he or she ceases to be a Member of the National Assembly otherwise than by reason of a dissolution of the National Assembly; or\n\t- (ii) if, at the first sitting of the Assembly after a general election, he or she is not a Member of the Assembly;\n- (b) in the case of a Minister or Assistant Minister appointed from among persons who are not Members of the Assembly, if before the expiration of four months from the date of his or her appointment-\n\t- (i) circumstances arise (other than a dissolution of the Assembly) that, if he or she were such a Member, would cause him or her to vacate his or her seat in the Assembly; or\n\t- (ii) he or she does not become a Member of the Assembly;\n- (c) if the holder of the office is removed from office by the President;\n- (d) upon the assumption by any person of the office of President.\n\n# **44. Cabinet**\n\n(1) There shall be a Cabinet which shall consist of the President, Vice-President and the Ministers.\n\n(2) There shall preside at meetings of the Cabinet-\n\n- (a) the President;\n- (b) in the absence of the President, the Vice-President; or\n- (c) in the absence of the President and the Vice-President, such Minister as the President may designate.\n\n(3) The Cabinet may act notwithstanding any vacancy in its membership.\n\n# **45. 
Oaths to be taken by Ministers and Assistant Ministers**\n\nThe Vice-President, a Minister or an Assistant Minister shall not enter upon the duties of his or her office unless he or she has taken and subscribed the oath of allegiance and such oath for the due execution of his or her office as may be prescribed by Parliament.\n\n# **46. Secretary to the Cabinet**\n\n(1) There shall be a Secretary to the Cabinet whose office shall be a public office.\n\n(2) The Secretary to the Cabinet shall have charge of the Cabinet Office and shall be responsible, in accordance with such instructions as may be given to him or her by the President, for arranging the business for, and keeping the minutes of, the Cabinet, for conveying decisions of the Cabinet to the appropriate person or authority, and shall have such other functions as the President may from time to time direct.\n\n# **PART III**\n\n# **Executive Functions (ss 47-56)**\n\n# **47. Functions of President**\n\n(1) The executive power of Botswana shall vest in the President and, subject to the provisions of this Constitution, shall be exercised by him or her either directly or through officers subordinate to him or her.\n\n(2) In the exercise of any function conferred upon him or her by this Constitution or any other law the President shall, unless it is otherwise provided, act in his or her own deliberate judgment and shall not be obliged to follow the advice tendered by any other", - "page_start": 22, - "page_end": 22, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "from among the Members of the Assembly, to perform the functions of the office of Vice- President and any person so appointed may discharge those functions accordingly:\n\nProvided that a person appointed under this subsection shall cease to perform the functions of the office of Vice-President-\n\n- (i) if his or her appointment is revoked by the Vice-President;\n- (ii) if he or she ceases to be a Member of the Assembly otherwise than by 
reason of a dissolution of Parliament; or\n- (iii) if the Vice-President ceases to perform the functions of the office of President. (6) In this section references to Members of the Assembly shall, in the event of\n\nParliament being dissolved, be construed as references to those persons who immediately before the dissolution were Members of the Assembly.\n\n## **40. Salary and allowances of President**\n\n(1) The President shall receive such salary and allowances as may be prescribed by resolution of the National Assembly, which shall be a charge on the general revenues of the Republic.\n\n(2) The salary and allowances of the President shall not be altered to his or her disadvantage during his or her period of office.\n\n(3) A person who has held the office of President shall receive such pension or, upon the expiration of his or her term of office, such gratuity as may be prescribed by resolution of the National Assembly, which shall be a charge on the Consolidated Fund.\n\n# **41. Protection of President in respect of legal proceedings**\n\n(1) Whilst any person holds or performs the functions of the office of President no criminal proceedings shall be instituted or continued against him or her in respect of anything done or omitted to be done by him or her either in his or her official capacity or in his or her private capacity and no civil proceedings shall be instituted or continued in respect of which relief is claimed against him or her in respect of anything done or omitted to be done in his or her private capacity.\n\n(2) Where provision is made by law limiting the time within which proceedings of any description may be brought against any person, the term of any person in the office of President shall not be taken into account in calculating any period of time prescribed by that law which determines whether any such proceedings as are mentioned in subsection (1) of this section may be brought against that person.\n\n## **PART II**\n\n# **The Cabinet (ss 
42-46)**\n\n# **42. Ministers and Assistant Ministers**\n\n(1) There shall be such offices of Minister of the Government (not exceeding six or such other number as Parliament may from time to time provide) as may be established by Parliament or, subject to the provisions of any Act of Parliament, by the President.\n\n(2) There shall be such offices of Assistant Minister (not exceeding three or such number as Parliament may from time to time provide) as may be established by Parliament or, subject to the provisions of any Act of Parliament, by the President.\n\n(3) Appointments to the office of Minister or Assistant Minister shall be made by the President from among Members of the National Assembly:\n\nProvided that-\n\n- (i) not more than four persons may be appointed as Minister or Assistant Minister from amongst persons who are not Members of the Assembly but are qualified for election as such; and\n- (ii) if occasion arises for making an appointment to the office of a Minister or an Assistant Minister while Parliament is dissolved a person who was a Member of the Assembly before the dissolution may be appointed as a Minister or an", - "page_start": 21, - "page_end": 21, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "(d) the Industrial Court.\n\n(2) In this Constitution, unless the context otherwise requires, references to offices in the public service shall be construed as including references to the offices of judges of the Court of Appeal and judges of the High Court and the offices of members of all subordinate courts (being offices the emoluments attaching to which, or any part of the emoluments attaching to which, are paid directly out of moneys provided by Parliament).\n\n(3) For the purposes of this Constitution a person shall not be considered to be a public officer by reason only that he or she is in receipt of any remuneration or allowance as the President, Vice-President, a Minister or Assistant Minister, Speaker, Deputy Speaker or 
Member of the Assembly, a Member of the Ntlo ya Dikgosi or a member of any Commission established by this Constitution.\n\n(4) For the purposes of this Constitution, a person shall not be considered as holding a public office by reason only of the fact that he or she is in receipt of a pension or other like allowance in respect of service under the Government of Botswana or the former Protectorate of Bechuanaland.\n\n(5) In this Constitution, unless the context otherwise requires, a reference to the holder of an office by the term designating his or her office shall be construed as including a reference to any person for the time being lawfully acting in or performing the functions of that office:\n\nProvided that nothing in this subsection shall apply to references to the President or Vice-President in section 35, 36 or 39 of this Constitution.\n\n(6) In this Constitution, unless it is otherwise provided or required by the context, a reference to the power to make appointments to any office shall be construed as including a reference to the power to make appointments on promotion and transfer and to confirm appointments and to the power to appoint a person to act in or perform the functions of that office at any time when the office is vacant or the holder thereof is unable (whether by reason of absence or infirmity of mind or body or any other cause) to perform the functions of that office.\n\n(7) References in this Constitution to the power to remove a public officer from his or her office shall be construed as including references to any power conferred by any law to require or permit that officer to retire from the public service:\n\nProvided that nothing in this subsection shall be construed as conferring on any person or authority power to require a judge of the Court of Appeal or the High Court, the Auditor-General or the Director of Public Prosecutions to retire from the public service.\n\n(8) Any provision in this Constitution that vests in any person or 
authority power to remove any public officer from his or her office shall be without prejudice to the power of any person or authority to abolish any office or to any law providing for the compulsory retirement of public officers generally or in any class of public officer on attaining an age specified therein.\n\n(9) Where power is vested by this Constitution in any person or authority to appoint any person to act in or perform the functions of any office if the holder thereof is himself unable to perform those functions, no such appointment shall be called in question on the ground that the holder of the office was not unable to perform those functions.\n\n(10) No provision of this Constitution that any person or authority shall not be subject to the direction or control of any other person or authority in the exercise of any functions under this Constitution shall be construed as precluding a court of law from exercising jurisdiction in relation to any question whether that person or authority has performed those functions in accordance with this Constitution or any other law.\n\n(11) Where any power is conferred by this Constitution to make any Act, order,", - "page_start": 54, - "page_end": 54, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (d) to visit a person (\"D\") whom P reasonably believes is dying, and where P is a member of D's household or a close family member or friend of D;\n- (e) to attend the funeral of a member of P's household or a close family member;\n- (f) in other exceptional circumstances such as—\n\t- (i) to seek medical assistance where this is required urgently or on the advice of a registered medical practitioner including to access services from dentists, opticians, audiologists, chiropodists, chiropractors, osteopaths and other medical and health practitioners, including services relating to mental health,\n\t- (ii) to access critical public services including social services or services provided to victims (such 
as victims of crime),\n\t- (iii) to avoid injury or illness or to escape risk of harm,\n\t- (iv) to access veterinary services where this is required urgently or on the advice of a veterinary surgeon.\n\n(2) P may only leave or be outside of the place where P is self-isolating in reliance on the grounds mentioned in sub-paragraph (1)(c), (d) or (e)—\n\n- (a) if P has been given prior permission by a person authorised by the Secretary of State for this purpose;\n- (b) if P complies with any reasonable requirements imposed by the person so authorised in relation to the exercise, the visit to the person or attendance at the funeral.\n\n#### **Meaning of \"place\"**\n\n**14.** For the purposes of this Schedule the place referred to in paragraphs 8 to 13 means the room in the designated accommodation where P is staying and, if connected to the room where P is staying, the room of any person referred to in paragraph 11(a) (travelling companion), including any balcony, and does not include the communal areas or any garden, yard, passage, stair, garage, outhouse or appurtenance of the accommodation in which the place is situated.\n\n#### **Designations**\n\n**15.** The Secretary of State must designate for the purposes of this Schedule—\n\n- (a) accommodation;\n- (b) transportation to the designated accommodation,\n\nand must publish details of the designations in such manner as appears to the Secretary of State to be appropriate.\n\n#### **Duties where P is a child**\n\n**16.** If P is a child—\n\n- (a) any person who has custody or charge of P when P is travelling to England must ensure, so far as is reasonably practicable, that P complies with the obligations in paragraphs 5 and 6;\n- (b) any person who has custody or charge of P during P's period of self-isolation must ensure, so far as is reasonably practicable, that P self-isolates in accordance with this Schedule.\n\n#### **Person caring for P**\n\n**17.** A person may reside in the place where P is residing 
pursuant to this Schedule to provide assistance P reasonably requires by reason of—\n\n- (a) P being a child; or\n- (b) any disability of P's,", - "page_start": 77, - "page_end": 77, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "court to try a member of a disciplined force for a criminal offence notwithstanding any trial and conviction or acquittal of that member under the disciplinary law of that force, so, however, that any court so trying such a member and convicting him or her shall in sentencing him or her to any punishment take into account any punishment awarded him or her under that disciplinary law;\n\n- (e) subsection (8) of this section to the extent that the law in question authorizes a court to convict a person of a criminal offence under any customary law to which, by virtue of that law, such person is subject.\n(13) In the case of any person who is held in lawful detention, the provisions of subsection (1), subsection (2)(d) and (e) and subsection (3) of this section shall not apply in relation to his or her trial for a criminal offence under the law regulating the discipline of persons held in such detention.\n\n(14) In this section \"criminal offence\" means a criminal offence under the law in force in Botswana.\n\n## **11. 
Protection of freedom of conscience**\n\n(1) Except with his or her own consent, no person shall be hindered in the enjoyment of his or her freedom of conscience, and for the purposes of this section the said freedom includes freedom of thought and of religion, freedom to change his or her religion or belief, and freedom, either alone or in community with others, and both in public and in private, to manifest and propagate his or her religion or belief in worship, teaching, practice and observance.\n\n(2) Every religious community shall be entitled, at its own expense, to establish and maintain places of education and to manage any place of education which it wholly maintains; and no such community shall be prevented from providing religious instruction for persons of that community in the course of any education provided at any place of education which it wholly maintains or in the course of any education which it otherwise provides.\n\n(3) Except with his or her own consent (or, if he or she is a minor, the consent of his or her guardian) no person attending any place of education shall be required to receive religious instruction or to take part in or attend any religious ceremony or observance if that instruction, ceremony or observance relates to a religion other than his or her own.\n\n(4) No person shall be compelled to take any oath which is contrary to his or her religion or belief or to take any oath in a manner which is contrary to his or her religion or belief.\n\n(5) Nothing contained in or done under the authority of any law shall be held to be inconsistent with or in contravention of this section to the extent that the law in question makes provision which is reasonably required-\n\n- (a) in the interests of defence, public safety, public order, public morality or public health; or\n- (b) for the purpose of protecting the rights and freedoms of other persons, including the right to observe and practise any religion without the unsolicited 
intervention of members of any other religion,\n\nand except so far as that provision or, as the case may be, the thing done under the authority thereof is shown not to be reasonably justifiable in a democratic society.\n\n## **12. Protection of freedom of expression**\n\n(1) Except with his or her own consent, no person shall be hindered in the enjoyment of his or her freedom of expression, that is to say, freedom to hold opinions without interference, freedom to receive ideas and information without interference, freedom to communicate ideas and information without interference (whether the", - "page_start": 10, - "page_end": 10, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Lyon Cathedral Maison du Crible\n\n(16th C.) in the Vieux Lyon\n\nSaint-Nizier Church\n\nÉglise Saint-Paul\n\nÉglise Saint-Bonaventure Church of Saint-Just, Lyon Basilica of Saint-Martin\n\nd'Ainay\n\nManécanterie, Lyon\n\n## **17th and 18th centuries**\n\n- City Hall on the Place des Terreaux, built by architects Jules Hardouin-Mansart and Robert de Cotte\n- Musée des beaux-arts de Lyon, fine arts museum housed in a former convent of the 17th century, including the Baroque *chapelle Saint-Pierre*\n- Hôtel-Dieu de Lyon (17th and 18th century), historical hospital with a baroque chapel\n- Temple du Change (17th and 18th century), former stock exchange of Lyon, Protestant temple since the 18th century\n- Place Bellecour, one of the largest town squares in Europe\n- Chapelle de la Trinité (1622), the first Baroque chapel built in Lyon, and part of the former École de la Trinité, now Collège-lycée Ampère\n- Église Saint-Polycarpe (1665–1670), Classical church\n- Église Saint-Just (16th to 18th century), Classical church\n- Saint-Bruno des Chartreux (17th and 18th century), church, masterpiece of Baroque architecture\n- Église Notre Dame Saint-Vincent (18th century), Neo-classical church", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia4.pdf" - }, - { - "text": 
"## **122. Remuneration of certain officers**\n\n(1) There shall be paid to the holders of the offices to which this section applies such salaries and such allowances as may be prescribed by Parliament.\n\n(2) The salaries and any allowances payable to the holders of the offices to which this section applies shall be a charge on the Consolidated Fund.\n\n(3) The salary payable to the holder of any office to which this section applies and his or her terms of office, other than allowances, shall not be altered to his or her disadvantage after his or her appointment.\n\n(4) Where a person's salary or terms of office depend upon his or her option, the salary or terms for which he or she opts shall, for the purposes of subsection (3) of this section, be deemed to be more advantageous to him or her than any others for which he or she might have opted.\n\n(5) This section applies to the offices of judge of the Court of Appeal, judge of the High Court, member of the Public Service Commission, member of the Judicial Service Commission, member of the Delimitation Commission, Auditor-General, Director of Public Prosecutions and Attorney-General.\n\n## **123. Public debt**\n\n(1) There shall be charged on the Consolidated Fund all debt charges for which Botswana is liable.\n\n(2) For the purposes of this section debt charges include interest, sinking fund charges, the repayment or amortization of debt, and all expenditure in connection with the raising of loans on the security of the revenues or the Consolidated Fund of the former Protectorate of Bechuanaland or Botswana, and the service and redemption of debt thereby created.\n\n### **124. 
Auditor-General**\n\n(1) There shall be an Auditor-General, whose office shall be a public office.\n\n(2) The public accounts of Botswana and of all officers, courts and authorities of the Government of Botswana shall be audited and reported on by the Auditor-General and for that purpose the Auditor-General or any person authorized by him or her in that behalf shall have access to all books, records, reports and other documents relating to those accounts:\n\nProvided that, if it is so provided by Parliament in the case of any body corporate directly established by law, the accounts of that body corporate shall be audited and reported on by such person as may be specified by or under that law.\n\n(3) The Auditor-General shall submit his or her reports to the Minister responsible for finance, who shall cause them to be laid before the National Assembly.\n\n(4) The Auditor-General shall perform such other duties and exercise such other powers in relation to the accounts of the Government or the accounts of other public authorities or other bodies as may be prescribed by or under any Act of Parliament.\n\n(5) In the exercise of his or her functions the Auditor-General shall not be subject to the direction or control of any other person or authority.\n\n# **CHAPTER IX Miscellaneous (ss 125-127)**\n\n## **125. 
Resignations**\n\n(1) Any person who is appointed or elected to any office established by this Constitution may resign from that office by writing under his or her hand addressed to the person or authority by whom he or she was appointed or elected:\n\nProvided that in the case of a person who holds office as President his or her resignation from that office shall be addressed to the Chief Justice, in the case of a person who holds office as Speaker or Deputy Speaker of the National Assembly his or her resignation from that office shall be addressed to the Assembly, in the case of an", - "page_start": 52, - "page_end": 52, - "source_file": "Botswana-constitution.pdf" - } - ] - }, - { - "references": { - "source_file": "tesla_form_10q.pdf", - "query": "What are Tesla's total liabilities and equity in 2024?", - "target_page": 5, - "target_passage": "119,852", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### Table of Contents\n\n#### Legal Proceedings\n\n#### Litigation Relating to 2018 CEO Performance Award\n\nOn June 4, 2018, a purported Tesla stockholder filed a putative class and derivative action in the Delaware Court of Chancery against Elon Musk and the members of Tesla's board of directors as then constituted, alleging corporate waste, unjust enrichment and that such board members breached their fiduciary duties by approving the stock-based compensation plan awarded to Elon Musk in 2018 (the \"2018 CEO Performance Award\"). Trial was held November 14-18, 2022. On January 30, 2024, the Court issued an opinion finding that the 2018 CEO Performance Award should be rescinded. Plaintiff's counsel filed a brief seeking a fee award of 29,402,900 Tesla shares, plus expenses of $1,120,115.50. Tesla opposed the fee request on June 7, 2024, and a hearing was held on July 8, 2024. At Tesla's 2024 Annual Meeting of Stockholders, 72% of the disinterested voting shares of Tesla, excluding shares owned by Mr. 
Musk and Kimbal Musk, voted to ratify the 2018 CEO Performance Award. On June 28, 2024, because Tesla's disinterested stockholders voted to ratify the 2018 CEO Performance Award, Mr. Musk and the other director defendants, joined by Tesla, filed a brief seeking to revise the Court's January 30, 2024 opinion, and a hearing was held on August 2, 2024.\n\n#### Litigation Related to Directors' Compensation\n\nOn June 17, 2020, a purported Tesla stockholder filed a derivative action in the Delaware Court of Chancery, purportedly on behalf of Tesla, against certain of Tesla's current and former directors regarding compensation awards granted to Tesla's directors, other than Elon Musk, between 2017 and 2020. The suit asserts claims for breach of fiduciary duty and unjust enrichment and seeks declaratory and injunctive relief, unspecified damages and other relief. Defendants filed their answer on September 17, 2020.\n\nOn July 14, 2023, the parties filed a Stipulation and Agreement of Compromise and Settlement, which does not involve an admission of any wrongdoing by any party. If the settlement is approved by the Court, this action will be fully settled and dismissed with prejudice. Pursuant to the terms of the agreement, Tesla provided notice of the proposed settlement to stockholders of record as of July 14, 2023. The Court held a hearing regarding the settlement on October 13, 2023, after which it took the settlement and plaintiff counsels' fee request under advisement. On August 14, 2024, the parties submitted a joint letter requesting that the Court approve and enter final judgment with respect to the settlement, and decide the fee request at a later date. 
The settlement is not expected to have an adverse impact on our results of operations, cash flows or financial position.\n\n#### Litigation Relating to Potential Going Private Transaction\n\nBetween August 10, 2018 and September 6, 2018, nine purported stockholder class actions were filed against Tesla and Elon Musk in connection with Mr. Musk's August 7, 2018 Twitter post that he was considering taking Tesla private. On January 16, 2019, Plaintiffs filed their consolidated complaint in the United States District Court for the Northern District of California and added as defendants the members of Tesla's board of directors. The consolidated complaint asserts claims for violations of the federal securities laws and seeks unspecified damages and other relief. The parties stipulated to certification of a class of stockholders, which the court granted on November 25, 2020. Trial started on January 17, 2023, and on February 3, 2023, a jury rendered a verdict in favor of the defendants on all counts. After trial, plaintiffs filed a motion for judgment as a matter of law and a motion for new trial, which the Court denied and judgement was entered in favor of defendants on July 11, 2023. On July 14, 2023, plaintiffs filed a notice of appeal. The appeal, which is pending in the United States Court of Appeals for the Ninth Circuit, has been fully briefed by the parties, and is scheduled for oral argument on October 25, 2024.\n\nBetween October 17, 2018 and March 8, 2021, seven derivative lawsuits were filed in the Delaware Court of Chancery, purportedly on behalf of Tesla, against Mr. Musk and the members of Tesla's board of directors, as constituted at relevant times, in relation to statements made and actions connected to a potential going private transaction, with certain of the lawsuits challenging additional Twitter posts by Mr. Musk, among other things. Several of those actions were consolidated, and all have been stayed. 
In addition to these cases, two derivative lawsuits were filed on October 25, 2018 and February 11, 2019 in the U.S. District Court for the District of Delaware, purportedly on behalf of Tesla, against Mr. Musk and the members of the Tesla board of directors as then constituted. Those cases have also been consolidated and stayed pending resolution of the appeal in the above-referenced consolidated purported stockholder class action.", - "page_start": 26, - "page_end": 26, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "#### Table of Contents\n\nDeferred revenue is equivalent to the total transaction price allocated to the performance obligations that are unsatisfied, or partially unsatisfied, as of the balance sheet date. Revenue recognized from the deferred revenue balances as of December 31, 2023 and 2022 was $711 million and $360 million for the nine months ended September 30, 2024 and 2023, respectively. Of the total deferred revenue balance as of September 30, 2024, we expect to recognize $821 million of revenue in the next 12 months. The remaining balance will be recognized at the time of transfer of control of the product or over the performance period.\n\nWe have financing receivables on our consolidated balance sheets related to loans we provide for financing our automotive deliveries. As of September 30, 2024 and December 31, 2023, we had current net financing receivables of $245 million and $242 million, respectively, in Accounts receivable, net, and $868 million and $1.04 billion, respectively, in Other non-current assets for the long-term portion.\n\nWe offer resale value guarantees to our commercial banking partners in connection with certain vehicle leasing programs. Under these programs, we originate the lease with our end customer and immediately transfer the lease and the underlying vehicle to our commercial banking partner, with the transaction being accounted for as a sale under ASC 606, Revenue from Contracts with Customers. 
We estimate a guarantee liability in accordance with ASC 460, Guarantees and record it within other liabilities on our consolidated balance sheet. On a quarterly basis, we assess the estimated market value of vehicles sold under this program to determine whether there have been changes to the amount of expected resale value guarantee liabilities. The total recorded guarantee liabilities on vehicles sold under this program were immaterial as of September 30, 2024 and December 31, 2023. Our maximum exposure on the guarantees we provide if they are unable to sell the vehicle at or above the vehicle's contractual residual value at the end of the lease term was $1.04 billion and $166 million as of September 30, 2024 and December 31, 2023, respectively. September 30, 2024 December 31, 2023\n\n#### Automotive Regulatory Credits\n\nAs of September 30, 2024, total transaction price allocated to performance obligations that were unsatisfied or partially unsatisfied for contracts with an original expected length of more than one year was $4.72 billion. Of this amount, we expect to recognize $683 million in the next 12 months and the rest over the remaining performance obligation period. Additionally, changes in regulations on automotive regulatory credits may significantly impact our remaining performance obligations and revenue to be recognized under these contracts.\n\n# Automotive Leasing Revenue\n\n# Direct Sales-Type Leasing Program\n\nLease receivables relating to sales-type leases are presented on the consolidated balance sheets as follows (in millions):\n\n| vehicles sold under this program to determine whether there have been changes to the amount of expected resale value | | | |\n| --- | --- | --- | --- |\n| guarantee liabilities. The total recorded guarantee liabilities on vehicles sold under this program were immaterial as of | | | |\n| September 30, 2024 and December 31, 2023. 
Our maximum exposure on the guarantees we provide if they are unable to sell | | | |\n| the vehicle at or above the vehicle's contractual residual value at the end of the lease term was $1.04 billion and $166 million as | | | |\n| of September 30, 2024 and December 31, 2023, respectively. | | | |\n| Automotive Regulatory Credits | | | |\n| As of September 30, 2024, total transaction price allocated to performance obligations that were unsatisfied or partially | | | |\n| unsatisfied for contracts with an original expected length of more than one year was $4.72 billion. Of this amount, we expect to | | | |\n| recognize $683 million in the next 12 months and the rest over the remaining performance obligation period. Additionally, | | | |\n| changes in regulations on automotive regulatory credits may significantly impact our remaining performance obligations and | | | |\n| revenue to be recognized under these contracts. | | | |\n| Automotive Leasing Revenue | | | |\n| Direct Sales-Type Leasing Program | | | |\n| Lease receivables relating to sales-type leases are presented on the consolidated balance sheets as follows (in millions): | | | |\n| September 30, 2024 December 31, 2023 | | | |\n| Gross lease receivables $ | $ | 584 | 780 |\n| Unearned interest income | | (48) | (78) |\n| Allowance for expected credit losses | | (7) | (6) |\n| Net investment in sales-type leases $ | $ | 529 | 696 |\n| Reported as: | | | |\n| Prepaid expenses and other current assets $ | $ | 171 | 189 |\n| Other non-current assets | | 358 | 507 |\n| $ Net investment in sales-type leases | $ | 529 | 696 |\n| 11 | | | |", - "page_start": 14, - "page_end": 14, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "April 2024", - "page_start": 0, - "page_end": 0, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "#### Table of Contents\n\nOn October 21, 2022, a lawsuit was filed in the Delaware Court of Chancery by a purported shareholder of Tesla alleging, among other things, that 
board members breached their fiduciary duties in connection with their oversight of the Company's 2018 settlement with the SEC, as amended. Among other things, the plaintiff seeks reforms to the Company's corporate governance and internal procedures, unspecified damages, and attorneys' fees. The lawsuit has been stayed pending resolution of a motion to consolidate certain derivative lawsuits in the Delaware Court of Chancery referenced below.\n\nOn November 15, 2021, JPMorgan Chase Bank (\"JP Morgan\") filed a lawsuit against Tesla in the Southern District of New York alleging breach of a stock warrant agreement that was entered into as part of a convertible notes offering in 2014. In 2018, JP Morgan informed Tesla that it had adjusted the strike price based upon Mr. Musk's August 7, 2018 Twitter post that he was considering taking Tesla private. Tesla disputed JP Morgan's adjustment as a violation of the parties' agreement. In 2021, Tesla delivered shares to JP Morgan per the agreement, which they duly accepted. JP Morgan now alleges that it is owed approximately $162 million as the value of additional shares that it claims should have been delivered as a result of the adjustment to the strike price in 2018. On January 24, 2022, Tesla filed multiple counterclaims as part of its answer to the underlying lawsuit, asserting among other points that JP Morgan should have terminated the stock warrant agreement in 2018 rather than make an adjustment to the strike price that it should have known would lead to a commercially unreasonable result. Tesla believes that the adjustments made by JP Morgan were neither proper nor commercially reasonable, as required under the stock warrant agreements. 
JP Morgan filed a motion for judgment on the pleadings, which Tesla opposed, and on September 12, 2024, the Court denied JP Morgan's motion.\n\n#### Certain Derivative Lawsuits in Delaware\n\nBefore converting from a Delaware to Texas corporation on June 13, 2024, three separate derivative actions brought by purported Tesla stockholders were filed in the Delaware Court of Chancery on May 24, June 10 and June 13, 2024, purportedly on behalf of Tesla, against current and former directors regarding topics involving Elon Musk and others, X Corp. (formerly Twitter) and x.AI. These suits assert various claims, including breach of fiduciary duty and breach of contract, and seek unspecified damages and other relief. On August 6, 2024, the plaintiffs in these three actions moved to consolidate the matters into a single case, and a hearing on that motion is scheduled for November 18, 2024.\n\n#### Litigation and Investigations Relating to Alleged Discrimination and Harassment\n\nOn February 9, 2022, the California Civil Rights Department (\"CRD,\" formerly \"DFEH\") filed a civil complaint against Tesla in Alameda County, California Superior Court, alleging systemic race discrimination, hostile work environment and pay equity claims, among others. CRD's amended complaint seeks monetary damages and injunctive relief. The case is currently in discovery. Trial is scheduled for September 15, 2025.\n\nAdditionally, on June 1, 2022 the Equal Employment Opportunity Commission (\"EEOC\") issued a cause finding against Tesla that closely parallels the CRD's allegations. On September 28, 2023, the EEOC filed a civil complaint against Tesla in the United States District Court for the Northern District of California asserting claims for race harassment and retaliation and seeking, among other things, monetary and injunctive relief.\n\nOn June 16, 2022, two Tesla stockholders filed separate derivative actions in the U.S. 
District Court for the Western District of Texas, purportedly on behalf of Tesla, against certain of Tesla's current and former directors. Both suits assert claims for breach of fiduciary duty, unjust enrichment, and violation of the federal securities laws in connection with alleged race and gender discrimination and sexual harassment. Among other things, plaintiffs seek declaratory and injunctive relief, unspecified damages payable to Tesla, and attorneys' fees. On July 22, 2022, the Court consolidated the two cases and on September 6, 2022, plaintiffs filed a consolidated complaint. On November 7, 2022, the defendants filed a motion to dismiss the case and on September 15, 2023, the Court dismissed the action but granted plaintiffs leave to file an amended complaint. On November 2, 2023, plaintiff filed an amended complaint purportedly on behalf of Tesla, against Elon Musk. On December 19, 2023, the defendants moved to dismiss the amended complaint, which the Court granted on April 12, 2024, with leave for plaintiffs to amend. On May 15, 2024, plaintiffs filed a second amended consolidated complaint purportedly on behalf of Tesla, against Mr. Musk. On July 1, 2024, the defendants moved to dismiss the second amended consolidated complaint.", - "page_start": 27, - "page_end": 27, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "#### Table of Contents\n\n#### ITEM 2. MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS\n\nThe following discussion and analysis should be read in conjunction with the consolidated financial statements and the related notes included elsewhere in this Quarterly Report on Form 10-Q.\n\n#### Overview\n\nOur mission is to accelerate the world's transition to sustainable energy. We design, develop, manufacture, lease and sell high-performance fully electric vehicles, solar energy generation systems and energy storage products. 
We also offer maintenance, installation, operation, charging, insurance, financial and other services related to our products. Additionally, we are increasingly focused on products and services based on AI, robotics and automation.\n\nIn 2024, we produced approximately 1,314,000 consumer vehicles and delivered approximately 1,294,000 consumer vehicles through the third quarter. We are focused on profitable growth, including by leveraging existing factories and production lines to introduce new and more affordable products, further improving and deploying our FSD capabilities, including through our planned robotaxi product, reducing costs, increasing vehicle production, utilized capacity and delivery capabilities, improving and developing our vehicles and battery technologies, vertically integrating and localizing our supply chain, and expanding our global infrastructure, including our service and charging infrastructure.\n\nIn 2024, we deployed 20.41 GWh of energy storage products through the third quarter. We are focused on ramping the production and increasing the market penetration of our energy storage products.\n\nDuring the three and nine months ended September 30, 2024, we recognized total revenues of $25.18 billion and $71.98 billion, respectively, representing increases of $1.83 billion and $377 million, respectively, compared to the same periods in the prior year. During the three and nine months ended September 30, 2024, our net income attributable to common stockholders was $2.17 billion and $4.77 billion, respectively, representing an increase of $314 million and a decrease of $2.30 billion, respectively, compared to the same periods in the prior year. 
We continue to ramp production and build and optimize our manufacturing capacity, expand our operations while focusing on further cost reductions and operational efficiencies to enable increased deliveries and deployments of our products, and invest in research and development to accelerate our AI, software, and fleet-based profits for further revenue growth.\n\nWe ended the third quarter of 2024 with $33.65 billion in cash and cash equivalents and investments, representing an increase of $4.55 billion from the end of 2023. Our cash flows provided by operating activities were $10.11 billion during the nine months ended September 30, 2024, compared to $8.89 billion during the same period ended September 30, 2023, representing an increase of $1.22 billion. Capital expenditures amounted to $8.56 billion during the nine months ended September 30, 2024, compared to $6.59 billion during the same period ended September 30, 2023, representing an increase of $1.96 billion. Overall growth has allowed our business to generally fund itself, and we will continue investing in a number of capital-intensive projects and research and development in upcoming periods. Production Location Vehicle Model(s) Production Status Fremont Factory Model S / Model X Active Model 3 / Model Y Active\n\n#### Management Opportunities, Challenges and Uncertainties and 2024 Outlook\n\n#### Automotive—Production\n\nThe following is a summary of the status of production of each of our announced vehicle models in production and under development, as of the date of this Quarterly Report on Form 10-Q:\n\n| our manufacturing capacity, expand our operations while focusing on further cost reductions and operational efficiencies to | | |\n| --- | --- | --- |\n| enable increased deliveries and deployments of our products, and invest in research and development to accelerate our AI, | | |\n| software, and fleet-based profits for further revenue growth. 
| | |\n| We ended the third quarter of 2024 with $33.65 billion in cash and cash equivalents and investments, representing an | | |\n| increase of $4.55 billion from the end of 2023. Our cash flows provided by operating activities were $10.11 billion during the | | |\n| nine months ended September 30, 2024, compared to $8.89 billion during the same period ended September 30, 2023, | | |\n| representing an increase of $1.22 billion. Capital expenditures amounted to $8.56 billion during the nine months ended | | |\n| September 30, 2024, compared to $6.59 billion during the same period ended September 30, 2023, representing an increase of | | |\n| $1.96 billion. Overall growth has allowed our business to generally fund itself, and we will continue investing in a number of | | |\n| capital-intensive projects and research and development in upcoming periods. | | |\n| Management Opportunities, Challenges and Uncertainties and 2024 Outlook | | |\n| The following is a summary of the status of production of each of our announced vehicle models in production and | | |\n| under development, as of the date of this Quarterly Report on Form 10-Q: | | |\n| Production Location | Vehicle Model(s) | Production Status |\n| Fremont Factory | Model S / Model X | Active |\n| Model 3 / Model Y | | Active |\n| Gigafactory Shanghai | Model 3 / Model Y | Active |\n| Gigafactory Berlin-Brandenburg | Model Y | Active |\n| Gigafactory Texas | Model Y | Active |\n| Cybertruck | | Active |\n| Gigafactory Nevada | Tesla Semi | Pilot production |\n| Various | Next Generation Platform | In development |\n| TBD | Roadster | In development |\n| 26 | | |", - "page_start": 31, - "page_end": 31, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "#### Table of Contents\n\n#### PART II. OTHER INFORMATION ITEM 1. 
LEGAL PROCEEDINGS\n\nFor a description of our material pending legal proceedings, please see Note 10, Commitments and Contingencies, to the consolidated financial statements included elsewhere in this Quarterly Report on Form 10-Q.\n\n# ITEM 1A. RISK FACTORS\n\nOur operations and financial results are subject to various risks and uncertainties, including the factors discussed in Part I, Item 1A, Risk Factors in our Annual Report on Form 10-K for the year ended December 31, 2023, which could adversely affect our business, financial conditions and future results.\n\n# ITEM 2. UNREGISTERED SALES OF EQUITY SECURITIES AND USE OF PROCEEDS\n\nIn connection with the offering of 2.00% Convertible Senior Notes due 2024 in May 2019, we sold warrants to each of Société Générale, Wells Fargo Bank, National Association, Credit Suisse Capital LLC (later assigned to UBS AG, London Branch) and Goldman, Sachs & Co. LLC (together, the \"2019 Warrantholders\"). Between August 19, 2024 and September 30, 2024, we issued an aggregate of 8,506,223 shares of our common stock to the 2019 Warrantholders pursuant to their exercise of such warrants, which were net of the applicable exercise prices. Such shares were issued pursuant to an exemption from registration provided by Rule 3(a)(9) of the Securities Act of 1933.\n\n# ITEM 3. DEFAULTS UPON SENIOR SECURITIES\n\nNone.\n\n# ITEM 4. MINE SAFETY DISCLOSURES\n\nNot applicable.\n\n# ITEM 5. 
OTHER INFORMATION\n\nNone of the Company's directors or officers adopted, modified or terminated a Rule 10b5-1 trading arrangement or a non-Rule 10b5-1 trading arrangement during the Company's fiscal quarter ended September 30, 2024, as such terms are defined under Item 408(a) of Regulation S-K, except as follows:\n\nOn July 25, 2024, Robyn Denholm, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 674,345 shares of our common stock (all resulting from stock options expiring in June 2025), subject to certain conditions. The arrangement's expiration date is June 18, 2025.\n\nOn July 31, 2024, Kimbal Musk, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 152,088 shares of our common stock, subject to certain conditions. The arrangement's expiration date is May 30, 2025.\n\nOn August 12, 2024, Kathleen Wilson-Thompson, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 300,000 shares of our common stock, subject to certain conditions. The arrangement's expiration date is February 28, 2025.\n\n36", - "page_start": 46, - "page_end": 46, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "Consolidated Financial Statements June 30, 2024 and 2023\n\n(With Independent Auditors' Report Thereon)", - "page_start": 0, - "page_end": 0, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "- 200. \"Big tech and the pursuit of AI dominance\" (https://www.economist.com/business/2023/03/2 6/big-tech-and-the-pursuit-of-ai-dominance). *The Economist*. 26 March 2023. Archived (http s://web.archive.org/web/20231229021351/https://www.economist.com/business/2023/03/26/ big-tech-and-the-pursuit-of-ai-dominance) from the original on 29 December 2023.\n- 201. Fung, Brian (19 December 2023). 
\"Where the battle to dominate AI may be won\" (https://ww w.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html). *CNN Business*. Archived (https://web.archive.org/web/20240113053332/https://www.cnn.com/2023/12/19/tech/cloudcompetition-and-ai/index.html) from the original on 13 January 2024.\n- 202. Metz, Cade (5 July 2023). \"In the Age of A.I., Tech's Little Guys Need Big Friends\" (https://w ww.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html). *The New York Times*. Archived (https://web.archive.org/web/20240708214644/https://www.nytim es.com/2023/07/05/business/artificial-intelligence-power-data-centers.html) from the original on 8 July 2024. Retrieved 5 October 2024.\n- 203. \"Electricity 2024 Analysis\" (https://www.iea.org/reports/electricity-2024). *IEA*. 24 January 2024. Retrieved 13 July 2024.\n- 204. Calvert, Brian (28 March 2024). \"AI already uses as much energy as a small country. It's only the beginning\" (https://www.vox.com/climate/2024/3/28/24111721/ai-uses-a-lot-of-ener gy-experts-expect-it-to-double-in-just-a-few-years). *Vox*. New York, New York. Archived (http s://web.archive.org/web/20240703080555/https://www.vox.com/climate/2024/3/28/2411172 1/ai-uses-a-lot-of-energy-experts-expect-it-to-double-in-just-a-few-years) from the original on 3 July 2024. Retrieved 5 October 2024.\n- 205. Halper, Evan; O'Donovan, Caroline (21 June 2024). \"AI is exhausting the power grid. Tech firms are seeking a miracle solution\" (https://www.washingtonpost.com/business/2024/06/2 1/artificial-intelligence-nuclear-fusion-climate/?utm_campaign=wp_post_most&utm_medium =email&utm_source=newsletter&wpisrc=nl_most&carta-url=https%3A%2F%2Fs2.washingto npost.com%2Fcar-ln-tr%2F3e0d678%2F6675a2d2c2c05472dd9ec0f4%2F596c09009bbc0f 20865036e7%2F12%2F52%2F6675a2d2c2c05472dd9ec0f4). *Washington Post*.\n- 206. Davenport, Carly. \"AI Data Centers and the Coming YS Power Demand Surge\" (https://web. 
archive.org/web/20240726080428/https://www.goldmansachs.com/intelligence/pages/gs-res earch/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf) (PDF). *Goldman Sachs*. Archived from the original (https://www.goldmansachs.com/intellige nce/pages/gs-research/generational-growth-ai-data-centers-and-the-coming-us-power-surg e/report.pdf) (PDF) on 26 July 2024. Retrieved 5 October 2024.\n- 207. Ryan, Carol (12 April 2024). \"Energy-Guzzling AI Is Also the Future of Energy Savings\" (http s://www.wsj.com/business/energy-oil/ai-data-centers-energy-savings-d602296e). *Wall Street Journal*. Dow Jones.\n- 208. Hiller, Jennifer (1 July 2024). \"Tech Industry Wants to Lock Up Nuclear Power for AI\" (https:// www.wsj.com/business/energy-oil/tech-industry-wants-to-lock-up-nuclear-power-for-ai-6cb7 5316?mod=djem10point). *Wall Street Journal*. Dow Jones. Archived (https://web.archive.or g/web/20241005165650/https://www.wsj.com/business/energy-oil/tech-industry-wants-to-loc k-up-nuclear-power-for-ai-6cb75316?mod=djem10point) from the original on 5 October 2024. Retrieved 5 October 2024.\n- 209. Kendall, Tyler (28 September 2024). \"Nvidia's Huang Says Nuclear Power an Option to Feed Data Centers\" (https://www.bloomberg.com/news/articles/2024-09-27/nvidia-s-huang-s ays-nuclear-power-an-option-to-feed-data-centers). *Bloomberg*.\n- 210. Halper, Evan (20 September 2024). \"Microsoft deal would reopen Three Mile Island nuclear plant to power AI\" (https://www.washingtonpost.com/business/2024/09/20/microsoft-three-mi le-island-nuclear-constellation). 
*Washington Post*.", - "page_start": 41, - "page_end": 41, - "source_file": "wikipedia3.pdf" - }, - { - "text": "#### Table of Contents\n\nGross margin for total automotive increased from 18.7% to 20.1% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to lower average combined cost per unit of our vehicles, an increase in FSD revenue and an increase in regulatory credits revenue, partially offset by lower average selling price on our vehicles, as discussed above.\n\nGross margin for total automotive decreased from 19.7% to 19.0% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023 primarily due to lower average selling price on our vehicles and temporary under-utilization of manufacturing capacity during production ramps, partially offset by lower average combined cost per unit of our vehicles, an increase in regulatory credits revenue and an increase in FSD revenue, as discussed above.\n\nGross margin for total automotive & services and other segment increased from 17.4% to 18.7% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Gross margin for total automotive & services and other segment decreased from 18.5% to 17.6% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The changes in gross margin are primarily due to the automotive gross margin factors discussed above.\n\n#### Energy Generation and Storage Segment\n\nCost of energy generation and storage revenue increased $473 million, or 40%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Cost of energy generation and storage revenue increased $1.39 billion, or 37%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. 
The increases in cost of revenues were primarily due to increases in Megapack and Powerwall deployments, partially offset by increases in IRA manufacturing credits recognized as compared to the prior periods.\n\nGross margin for energy generation and storage increased from 24.4% to 30.5% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Gross margin for energy generation and storage increased from 18.0% to 26.6% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The increases were primarily due to margin improvements for our energy storage products driven by cost reductions, including benefits from IRA manufacturing credits, and a higher proportion of our storage business, which operated at a higher gross margin, within the segment as compared to the prior periods. September 30, Change September 30, Change (Dollars in millions) 2024 2023 $ % 2024 2023 $ % Research and development $ 1,039 $ 1,161 $ (122) (11)% $ 3,264 $ 2,875 $ 389 14 % As a percentage of revenues 4 % 5 % 5 % 4 %\n\n#### Research and Development Expense\n\n| Three Months Ended | | | | | | | | Nine Months Ended | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Research and development (\"R&D\") expenses decreased $122 million, or 11%, in the three months ended September 30, | | | | | | | | | | | | |\n| 2024 as compared to the three months ended September 30, 2023 primarily due to a decrease in vehicle programs, partially | | | | | | | | | | | | |\n| offset by an increase in AI related costs year over year. R&D expenses as a percentage of revenue decreased from 5% to 4% in | | | | | | | | | | | | |\n| the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to lower | | | | | | | | | | | | |\n| R&D expenses in the current period. 
| | | | | | | | | | | | |\n| R&D expenses increased $389 million, or 14%, in the nine months ended September 30, 2024 as compared to the nine | | | | | | | | | | | | |\n| months ended September 30, 2023. The overall increases were primarily driven by additional costs year over year related to AI | | | | | | | | | | | | |\n| programs. R&D expenses as a percentage of revenue increased from 4% to 5% in the nine months ended September 30, 2024 | | | | | | | | | | | | |\n| as compared to the nine months ended September 30, 2023 as we continue to expand our product roadmap and technologies. | | | | | | | | | | | | |\n| Selling, General and Administrative Expense | | | | | | | | | | | | |\n| Three Months Ended Nine Months Ended | | | | | | | | | | | | |\n| September 30, Change September 30, Change | | | | | | | | | | | | |\n| (Dollars in millions) 2024 2023 $ % 2024 2023 $ % | | | | | | | | | | | | |\n| (67) | Selling, general and administrative $ | 1,186 $ | 1,253 | | $ | (5)% | $ | 3,837 $ 3,520 | | $ | 317 | 9 % |\n| As a percentage of revenues | | 5 % | | 5 % | | | | 5 % | 5 % | | | |\n\nResearch and development (\"R&D\") expenses decreased $122 million, or 11%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to a decrease in vehicle programs, partially offset by an increase in AI related costs year over year. R&D expenses as a percentage of revenue decreased from 5% to 4% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to lower R&D expenses in the current period.\n\nR&D expenses increased $389 million, or 14%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The overall increases were primarily driven by additional costs year over year related to AI programs. 
R&D expenses as a percentage of revenue increased from 4% to 5% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023 as we continue to expand our product roadmap and technologies.\n\n#### Selling, General and Administrative Expense\n\n| Research and development (\"R&D\") expenses decreased $122 million, or 11%, in the three months ended September 30, |\n| --- |\n| 2024 as compared to the three months ended September 30, 2023 primarily due to a decrease in vehicle programs, partially |\n| offset by an increase in AI related costs year over year. R&D expenses as a percentage of revenue decreased from 5% to 4% in |\n| the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to lower |\n| R&D expenses in the current period. |\n| R&D expenses increased $389 million, or 14%, in the nine months ended September 30, 2024 as compared to the nine |\n| months ended September 30, 2023. The overall increases were primarily driven by additional costs year over year related to AI |\n| programs. R&D expenses as a percentage of revenue increased from 4% to 5% in the nine months ended September 30, 2024 |\n| as compared to the nine months ended September 30, 2023 as we continue to expand our product roadmap and technologies. 
|\n| Selling, General and Administrative Expense |\n| Three Months Ended Nine Months Ended |\n| September 30, Change September 30, Change |\n| (Dollars in millions) 2024 2023 $ % 2024 2023 $ % |\n| Selling, general and administrative $ 1,186 $ 1,253 $ (67) (5)% $ 3,837 $ 3,520 $ 317 9 % |\n| As a percentage of revenues 5 % 5 % 5 % 5 % |\n| 31 |", - "page_start": 39, - "page_end": 39, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "Franzen) sued AI companies for using their work to train generative AI.[195][196] Another discussed approach is to envision a separate *sui generis* system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[197]\n\n#### **Dominance by tech giants**\n\nThe commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. [198][199][200] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.[201][202]\n\n#### **Power needs and environmental impacts**\n\nIn January 2024, the International Energy Agency (IEA) released *Electricity 2024, Analysis and Forecast to 2026*, forecasting electric power use.[203] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation.[204]\n\nProdigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. 
Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. AI makes the power grid more efficient and \"intelligent\", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms.[205]\n\nA 2024 Goldman Sachs Research Paper, *AI Data Centers and the Coming US Power Demand Surge*, found \"US power demand (is) likely to experience growth not seen in a generation....\" and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[206] Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[207]\n\nIn 2024, the *Wall Street Journal* reported that big AI companies have begun negotiations with the US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 Million (US).[208] Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers.[209]\n\nIn September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. 
Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this will be the first ever US re-commissioning of a nuclear plant), over 835 megawatts of power – enough for 800,000 homes – of", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "tesla_form_10q.pdf", - "query": "Where was Tesla incorporated? ", - "target_page": 13, - "target_passage": "State of Delaware", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### Table of Contents\n\n#### Legal Proceedings\n\n#### Litigation Relating to 2018 CEO Performance Award\n\nOn June 4, 2018, a purported Tesla stockholder filed a putative class and derivative action in the Delaware Court of Chancery against Elon Musk and the members of Tesla's board of directors as then constituted, alleging corporate waste, unjust enrichment and that such board members breached their fiduciary duties by approving the stock-based compensation plan awarded to Elon Musk in 2018 (the \"2018 CEO Performance Award\"). Trial was held November 14-18, 2022. On January 30, 2024, the Court issued an opinion finding that the 2018 CEO Performance Award should be rescinded. Plaintiff's counsel filed a brief seeking a fee award of 29,402,900 Tesla shares, plus expenses of $1,120,115.50. Tesla opposed the fee request on June 7, 2024, and a hearing was held on July 8, 2024. At Tesla's 2024 Annual Meeting of Stockholders, 72% of the disinterested voting shares of Tesla, excluding shares owned by Mr. Musk and Kimbal Musk, voted to ratify the 2018 CEO Performance Award. On June 28, 2024, because Tesla's disinterested stockholders voted to ratify the 2018 CEO Performance Award, Mr. 
Musk and the other director defendants, joined by Tesla, filed a brief seeking to revise the Court's January 30, 2024 opinion, and a hearing was held on August 2, 2024.\n\n#### Litigation Related to Directors' Compensation\n\nOn June 17, 2020, a purported Tesla stockholder filed a derivative action in the Delaware Court of Chancery, purportedly on behalf of Tesla, against certain of Tesla's current and former directors regarding compensation awards granted to Tesla's directors, other than Elon Musk, between 2017 and 2020. The suit asserts claims for breach of fiduciary duty and unjust enrichment and seeks declaratory and injunctive relief, unspecified damages and other relief. Defendants filed their answer on September 17, 2020.\n\nOn July 14, 2023, the parties filed a Stipulation and Agreement of Compromise and Settlement, which does not involve an admission of any wrongdoing by any party. If the settlement is approved by the Court, this action will be fully settled and dismissed with prejudice. Pursuant to the terms of the agreement, Tesla provided notice of the proposed settlement to stockholders of record as of July 14, 2023. The Court held a hearing regarding the settlement on October 13, 2023, after which it took the settlement and plaintiff counsels' fee request under advisement. On August 14, 2024, the parties submitted a joint letter requesting that the Court approve and enter final judgment with respect to the settlement, and decide the fee request at a later date. The settlement is not expected to have an adverse impact on our results of operations, cash flows or financial position.\n\n#### Litigation Relating to Potential Going Private Transaction\n\nBetween August 10, 2018 and September 6, 2018, nine purported stockholder class actions were filed against Tesla and Elon Musk in connection with Mr. Musk's August 7, 2018 Twitter post that he was considering taking Tesla private. 
On January 16, 2019, Plaintiffs filed their consolidated complaint in the United States District Court for the Northern District of California and added as defendants the members of Tesla's board of directors. The consolidated complaint asserts claims for violations of the federal securities laws and seeks unspecified damages and other relief. The parties stipulated to certification of a class of stockholders, which the court granted on November 25, 2020. Trial started on January 17, 2023, and on February 3, 2023, a jury rendered a verdict in favor of the defendants on all counts. After trial, plaintiffs filed a motion for judgment as a matter of law and a motion for new trial, which the Court denied and judgement was entered in favor of defendants on July 11, 2023. On July 14, 2023, plaintiffs filed a notice of appeal. The appeal, which is pending in the United States Court of Appeals for the Ninth Circuit, has been fully briefed by the parties, and is scheduled for oral argument on October 25, 2024.\n\nBetween October 17, 2018 and March 8, 2021, seven derivative lawsuits were filed in the Delaware Court of Chancery, purportedly on behalf of Tesla, against Mr. Musk and the members of Tesla's board of directors, as constituted at relevant times, in relation to statements made and actions connected to a potential going private transaction, with certain of the lawsuits challenging additional Twitter posts by Mr. Musk, among other things. Several of those actions were consolidated, and all have been stayed. In addition to these cases, two derivative lawsuits were filed on October 25, 2018 and February 11, 2019 in the U.S. District Court for the District of Delaware, purportedly on behalf of Tesla, against Mr. Musk and the members of the Tesla board of directors as then constituted. 
Those cases have also been consolidated and stayed pending resolution of the appeal in the above-referenced consolidated purported stockholder class action.", - "page_start": 26, - "page_end": 26, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "#### Table of Contents\n\nOn October 21, 2022, a lawsuit was filed in the Delaware Court of Chancery by a purported shareholder of Tesla alleging, among other things, that board members breached their fiduciary duties in connection with their oversight of the Company's 2018 settlement with the SEC, as amended. Among other things, the plaintiff seeks reforms to the Company's corporate governance and internal procedures, unspecified damages, and attorneys' fees. The lawsuit has been stayed pending resolution of a motion to consolidate certain derivative lawsuits in the Delaware Court of Chancery referenced below.\n\nOn November 15, 2021, JPMorgan Chase Bank (\"JP Morgan\") filed a lawsuit against Tesla in the Southern District of New York alleging breach of a stock warrant agreement that was entered into as part of a convertible notes offering in 2014. In 2018, JP Morgan informed Tesla that it had adjusted the strike price based upon Mr. Musk's August 7, 2018 Twitter post that he was considering taking Tesla private. Tesla disputed JP Morgan's adjustment as a violation of the parties' agreement. In 2021, Tesla delivered shares to JP Morgan per the agreement, which they duly accepted. JP Morgan now alleges that it is owed approximately $162 million as the value of additional shares that it claims should have been delivered as a result of the adjustment to the strike price in 2018. On January 24, 2022, Tesla filed multiple counterclaims as part of its answer to the underlying lawsuit, asserting among other points that JP Morgan should have terminated the stock warrant agreement in 2018 rather than make an adjustment to the strike price that it should have known would lead to a commercially unreasonable result. 
Tesla believes that the adjustments made by JP Morgan were neither proper nor commercially reasonable, as required under the stock warrant agreements. JP Morgan filed a motion for judgment on the pleadings, which Tesla opposed, and on September 12, 2024, the Court denied JP Morgan's motion.\n\n#### Certain Derivative Lawsuits in Delaware\n\nBefore converting from a Delaware to Texas corporation on June 13, 2024, three separate derivative actions brought by purported Tesla stockholders were filed in the Delaware Court of Chancery on May 24, June 10 and June 13, 2024, purportedly on behalf of Tesla, against current and former directors regarding topics involving Elon Musk and others, X Corp. (formerly Twitter) and x.AI. These suits assert various claims, including breach of fiduciary duty and breach of contract, and seek unspecified damages and other relief. On August 6, 2024, the plaintiffs in these three actions moved to consolidate the matters into a single case, and a hearing on that motion is scheduled for November 18, 2024.\n\n#### Litigation and Investigations Relating to Alleged Discrimination and Harassment\n\nOn February 9, 2022, the California Civil Rights Department (\"CRD,\" formerly \"DFEH\") filed a civil complaint against Tesla in Alameda County, California Superior Court, alleging systemic race discrimination, hostile work environment and pay equity claims, among others. CRD's amended complaint seeks monetary damages and injunctive relief. The case is currently in discovery. Trial is scheduled for September 15, 2025.\n\nAdditionally, on June 1, 2022 the Equal Employment Opportunity Commission (\"EEOC\") issued a cause finding against Tesla that closely parallels the CRD's allegations. 
On September 28, 2023, the EEOC filed a civil complaint against Tesla in the United States District Court for the Northern District of California asserting claims for race harassment and retaliation and seeking, among other things, monetary and injunctive relief.\n\nOn June 16, 2022, two Tesla stockholders filed separate derivative actions in the U.S. District Court for the Western District of Texas, purportedly on behalf of Tesla, against certain of Tesla's current and former directors. Both suits assert claims for breach of fiduciary duty, unjust enrichment, and violation of the federal securities laws in connection with alleged race and gender discrimination and sexual harassment. Among other things, plaintiffs seek declaratory and injunctive relief, unspecified damages payable to Tesla, and attorneys' fees. On July 22, 2022, the Court consolidated the two cases and on September 6, 2022, plaintiffs filed a consolidated complaint. On November 7, 2022, the defendants filed a motion to dismiss the case and on September 15, 2023, the Court dismissed the action but granted plaintiffs leave to file an amended complaint. On November 2, 2023, plaintiff filed an amended complaint purportedly on behalf of Tesla, against Elon Musk. On December 19, 2023, the defendants moved to dismiss the amended complaint, which the Court granted on April 12, 2024, with leave for plaintiffs to amend. On May 15, 2024, plaintiffs filed a second amended consolidated complaint purportedly on behalf of Tesla, against Mr. Musk. On July 1, 2024, the defendants moved to dismiss the second amended consolidated complaint.", - "page_start": 27, - "page_end": 27, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "#### Table of Contents\n\n# Other Litigation Related to Our Products and Services\n\nWe are also subject to various lawsuits that seek monetary and other injunctive relief. 
These lawsuits include proposed class actions and other consumer claims that allege, among other things, purported defects and misrepresentations related to our products and services. For example, on September 14, 2022, a proposed class action was filed against Tesla, Inc. and related entities in the U.S. District Court for the Northern District of California, alleging various claims about the Company's driver assistance technology systems under state and federal law. This case was later consolidated with several other proposed class actions, and a Consolidated Amended Complaint was filed on October 28, 2022, which seeks damages and other relief on behalf of all persons who purchased or leased from Tesla between January 1, 2016, to the present. On October 5, 2022, a proposed class action complaint was filed in the U.S. District Court for the Eastern District of New York asserting similar state and federal law claims against the same defendants. On September 30, 2023, the Court dismissed this action with leave to amend the complaint. On November 20, 2023, the plaintiff moved to amend the complaint, which Tesla opposed. On August 8, 2024, the Court denied the plaintiff's motion for leave to file an amended complaint and entered judgment for Tesla. On September 5, 2024, the plaintiff filed a notice of appeal to United States Court of Appeals for the Second Circuit. On March 22, 2023, the plaintiffs in the Northern District of California consolidated action filed a motion for a preliminary injunction to order Tesla to (1) cease using the term \"Full Self-Driving Capability\" (FSD Capability), (2) cease the sale and activation of FSD Capability and deactivate FSD Capability on Tesla vehicles, and (3) provide certain notices to consumers about proposed courtfindings about the accuracy of the use of the terms Autopilot and FSD Capability. Tesla opposed the motion. 
On September 30, 2023, the Court denied the request for a preliminary injunction, compelled four of five plaintiffs to arbitration, and dismissed the claims of the fifth plaintiff with leave to amend the complaint. On October 31, 2023, the remaining plaintiff in the Northern District of California action filed an amended complaint, which Tesla moved to dismiss, and on May 15, 2024, the Court granted in part and denied in part Tesla's motion. On October 2, 2023, a similar proposed class action was filed in San Diego County Superior Court in California. Tesla subsequently removed the San Diego County case to federal court and on January 8, 2024, the federal court granted Tesla's motion to transfer the case to the U.S. District Court for the Northern District of California. Tesla moved to compel arbitration, which the plaintiff did not oppose, and on June 27, 2024, the Court stayed the case pending arbitration.\n\nOn February 27, 2023, a proposed class action was filed in the U.S. District Court for the Northern District of California against Tesla, Inc., Elon Musk and certain current and former Company executives. The complaint alleges that the defendants made material misrepresentations and omissions about the Company's Autopilot and FSD Capability technologies and seeks money damages and other relief on behalf of persons who purchased Tesla stock between February 19, 2019, and February 17, 2023. An amended complaint was filed on September 5, 2023, naming only Tesla, Inc. and Elon Musk as defendants. On November 6, 2023, Tesla moved to dismiss the amended complaint. On September 30, 2024, the Court granted Tesla's motion to dismiss without prejudice.\n\nOn March 14, 2023, a proposed class action was filed against Tesla, Inc. in the U.S. District Court for the Northern District of California. Several similar complaints were also filed in the same court and these cases have now all been consolidated. 
These complaints allege that Tesla violates federal antitrust and warranty laws through its repair, service, and maintenance practices and seeks, among other relief, damages for persons who paid Tesla for repairs services or Tesla compatible replacement parts from March 2019 to March 2023. On July 17, 2023, these plaintiffs filed a consolidated amended complaint. On September 27, 2023, the court granted Tesla's motion to compel arbitration as to three of the plaintiffs, and on November 17, 2023, the court granted Tesla's motion to dismiss without prejudice. The plaintiffs filed a Consolidated Second Amended Complaint on December 12, 2023, which Tesla moved to dismiss. Plaintiffs also appealed the court's arbitration order, which was denied. On June 17, 2024, the Court granted in part and denied in part Tesla's motion to dismiss the Consolidated Second Amended Complaint.\n\nThe Company intends to vigorously defend itself in these matters; however, we cannot predict the outcome or impact. We are unable to reasonably estimate the possible loss or range of loss, if any, associated with these claims, unless noted.", - "page_start": 28, - "page_end": 28, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "#### Table of Contents\n\nWe are focused on growing our manufacturing capacity, which includes capacity for manufacturing newer vehicle models such as our Cybertruck, Tesla Semi and future vehicles utilizing aspects of our next generation platform, and ramping the production at our Gigafactories to their installed production capacities as well as increasing production rate and efficiency at our current factories. 
The next phase of production growth will depend on the continued ramp at our factories and be initiated by advances in autonomy and the introduction of new products, including those built on our next generation vehicle platform, as well as our ability to add to our available sources of battery cell supply by manufacturing our own cells that we are developing to have high-volume output, lower capital and production costs and longer range. Our goals are to improve vehicle performance, decrease production costs and increase affordability and customer awareness.\n\nThese plans are subject to uncertainties inherent in establishing and ramping manufacturing operations, which may be exacerbated by new product and manufacturing technologies we introduce, the number of concurrent international projects, any industry-wide component constraints, labor shortages and any future impact from events outside of our control. For example, during the first quarter of 2024, we experienced a sequential decline in production volumes partially caused by the early phase of the production ramp of the updated Model 3 at our Fremont factory, and factory shutdowns at Gigafactory Berlin-Brandenburg resulting from shipping diversions caused by the Red Sea conflict and an arson attack. Moreover, we have set ambitious technological targets with our plans for battery cells as well as for iterative manufacturing and design improvements for our vehicles with each new factory.\n\n#### Automotive—Demand, Sales, Deliveries and Infrastructure\n\nOur cost reduction efforts, cost innovation strategies, and additional localized procurement and manufacturing are key to our vehicles' affordability and have allowed us to competitively price our vehicles. 
We will also continue to generate demand by improving our vehicles' performance and functionality, including through product offerings and features based on artificial intelligence such as Autopilot, FSD (Supervised), and other software, and delivering new vehicles and vehicle options. In addition, we have been increasing awareness, and expanding our vehicle financing programs, including attractive leasing terms for our customers. Moreover, we expect to continue to benefit from ongoing electrification of the automotive sector and increasing environmental regulations and initiatives.\n\nHowever, we operate in a cyclical industry that is sensitive to shifting consumer trends, political and regulatory uncertainty, including with respect to trade and the environment, all of which can be compounded by inflationary pressures, rising energy prices, interest rate fluctuations and the liquidity of enterprise customers. For example, as inflationary pressures increased across the markets in which we operate, central banks in developed countries raised interest rates rapidly and substantially, which impacted the affordability of vehicle lease and finance arrangements. Further, sales of vehicles in the automotive industry also tend to be cyclical in many markets, which may expose us to increased volatility as we expand and adjust our operations. Moreover, as additional competitors enter the marketplace and help bring the world closer to sustainable transportation, we will have to adjust and continue to execute well to maintain our momentum. Additionally, our suppliers' liquidity and allocation plans may be affected by current challenges in the North American automotive industry, which could reduce our access to components or result in unfavorable changes to cost. These macroeconomic and industry trends have had, and will likely continue to have, an impact on the pricing of, and order rate for our vehicles, and in turn our operating margin. 
Changes in government and economic incentives or tariffs may also impact our sales, cost structure and the competitive landscape. We will continue to adjust accordingly to such developments, and we believe our ongoing cost reduction, including improved production innovation and efficiency at our newest factories and lower logistics costs, and focus on operating leverage will continue to benefit us in relation to our competitors, while our new products will help enable future growth.\n\nAs our production increases, we must work constantly to similarly increase vehicle delivery capability so that it does not become a bottleneck on our total deliveries. We are also committed to reducing the percentage of vehicles delivered in the third month of each quarter, which will help to reduce the cost per vehicle. As we expand our manufacturing operations globally, we will also have to continue to increase and staff our delivery, servicing and charging infrastructure accordingly, maintain our vehicle reliability and optimize our Supercharger locations to ensure cost effectiveness and customer satisfaction. In particular, as other automotive manufacturers have announced their adoption of the North American Charging Standard (\"NACS\") and agreements with us to utilize our Superchargers, we must correspondingly expand our network in order to ensure adequate availability to meet customer demands. We also remain focused on continued enhancements of the capability and efficiency of our servicing operations.", - "page_start": 33, - "page_end": 33, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "# UNITED STATES SECURITIES AND EXCHANGE COMMISSION Washington, D.C. 
20549 FORM 10-Q Texas 91-2197729\n\n(Mark One)\n\n- x QUARTERLY REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934 (Address of principal executive offices) (Zip Code)\nFor the quarterly period ended September 30, 2024\n\nOR\n\n- o TRANSITION REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934\nFor the transition period from _________ to _________\n\nCommission File Number: 001-34756\n\n# Tesla, Inc.\n\n(Exact name of registrant as specified in its charter)\n\n(State or other jurisdiction of incorporation or organization)\n\n1 Tesla Road Austin, Texas 78725\n\n(I.R.S. Employer\n\nIdentification No.)\n\n(512) 516-8177 (Registrant's telephone number, including area code)\n\n#### Securities registered pursuant to Section 12(b) of the Act:\n\n| 1934 | | |\n| --- | --- | --- |\n| For the transition period from _________ to _________ | | |\n| Commission File Number: 001-34756 | | |\n| Tesla, Inc. | | |\n| (Exact name of registrant as specified in its charter) | | |\n| (State or other jurisdiction of | | (I.R.S. Employer |\n| incorporation or organization) | | Identification No.) |\n| 1 Tesla Road | | |\n| Austin, Texas | | 78725 |\n| (512) 516-8177 | | |\n| (Registrant's telephone number, including area code) | | |\n| Securities registered pursuant to Section 12(b) of the Act: | | |\n| Title of each class Trading Symbol(s) | | Name of each exchange on which registered |\n| Common stock | TSLA | The Nasdaq Global Select Market |\n| Indicate by check mark whether the registrant (1) has filed all reports required to be filed by Section 13 or 15(d) of the Securities Exchange Act of 1934 (\"Exchange Act\") during the preceding 12 months (or for such shorter period that the registrant was required to file such reports), and (2) has been | | |\n| subject to such filing requirements for the past 90 days. 
Yes x No o | | |\n| Indicate by check mark whether the registrant has submitted electronically every Interactive Data File required to be submitted pursuant to Rule 405 | | |\n\nIndicate by check mark whether the registrant (1) has filed all reports required to be filed by Section 13 or 15(d) of the Securities Exchange Act of 1934 (\"Exchange Act\") during the preceding 12 months (or for such shorter period that the registrant was required to file such reports), and (2) has been subject to such filing requirements for the past 90 days. Yes x No o\n\nIndicate by check mark whether the registrant has submitted electronically every Interactive Data File required to be submitted pursuant to Rule 405 of Regulation S-T (§232.405 of this chapter) during the preceding 12 months (or for such shorter period that the registrant was required to submit such files). Yes x No o\n\nIndicate by check mark whether the registrant is a large accelerated filer, an accelerated filer, a non-accelerated filer, a smaller reporting company, or an emerging growth company. See the definitions of \"large accelerated filer,\" \"accelerated filer,\" \"smaller reporting company\" and \"emerging growth company\" in Rule 12b-2 of the Exchange Act:\n\n| Large accelerated filer | x | Accelerated filer | o |\n| --- | --- | --- | --- |\n| Non-accelerated filer | o | Smaller reporting company | o |\n| Emerging growth company | o | | |\n\nIf an emerging growth company, indicate by check mark if the registrant has elected not to use the extended transition period for complying with any new or revised financial accounting standards provided pursuant to Section 13(a) of the Exchange Act. o\n\nIndicate by check mark whether the registrant is a shell company (as defined in Rule 12b-2 of the Exchange Act). 
Yes o No x As of October 18, 2024, there were 3,210,059,659 shares of the registrant's common stock outstanding.", - "page_start": 0, - "page_end": 0, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "#### Table of Contents\n\n# Certain Investigations and Other Matters\n\nWe regularly receive requests for information, including subpoenas, from regulators and governmental authorities such as the National Highway Traffic Safety Administration, the National Transportation Safety Board, the Securities and Exchange Commission (\"SEC\"), the Department of Justice (\"DOJ\"), and various local, state, federal, and international agencies. The ongoing requests for information include topics such as operations, technology (e.g., vehicle functionality, vehicle incidents, Autopilot and FSD Capability), compliance, finance, data privacy, and other matters related to Tesla's business, its personnel, and related parties. We routinely cooperate with such formal and informal requests for information, investigations, and other inquiries. To our knowledge no government agency in any ongoing investigation has concluded that any wrongdoing occurred. We cannot predict the outcome or impact of any ongoing matters. Should the government decide to pursue an enforcement action, there exists the possibility of a material adverse impact on our business, results of operation, prospects, cash flows, financial position or brand.\n\nWe are also subject to various other legal proceedings, risks and claims that arise from the normal course of business activities. For example, during the second quarter of 2023, a foreign news outlet reported that it obtained certain misappropriated data including, purportedly non-public Tesla business and personal information. Tesla has made notifications to potentially affected individuals (current and former employees) and regulatory authorities and we are working with certain law enforcement and other authorities. 
On August 5, 2023, a putative class action was filed in the United States District Court for the Northern District of California, purportedly on behalf of all U.S. individuals impacted by the data incident, followed by several additional lawsuits, that each assert claims under various state laws and seeks monetary damages and other relief. If an unfavorable ruling or development were to occur in these or other possible legal proceedings, risks and claims, there exists the possibility of a material adverse impact on our business, results of operations, prospects, cash flows, financial position or brand.\n\n#### Note 11 – Variable Interest Entity Arrangements\n\nThe aggregate carrying values of the variable interest entities' assets and liabilities, after elimination of any intercompany transactions and balances, in the consolidated balance sheets were as follows (in millions):\n\n| financial position or brand. | | |\n| --- | --- | --- |\n| We are also subject to various other legal proceedings, risks and claims that arise from the normal course of business | | |\n| activities. For example, during the second quarter of 2023, a foreign news outlet reported that it obtained certain | | |\n| misappropriated data including, purportedly non-public Tesla business and personal information. Tesla has made notifications | | |\n| to potentially affected individuals (current and former employees) and regulatory authorities and we are working with certain | | |\n| law enforcement and other authorities. On August 5, 2023, a putative class action was filed in the United States District Court | | |\n| for the Northern District of California, purportedly on behalf of all U.S. individuals impacted by the data incident, followed by | | |\n| several additional lawsuits, that each assert claims under various state laws and seeks monetary damages and other relief. 
If an | | |\n| unfavorable ruling or development were to occur in these or other possible legal proceedings, risks and claims, there exists the | | |\n| possibility of a material adverse impact on our business, results of operations, prospects, cash flows, financial position or brand. | | |\n| Note 11 – Variable Interest Entity Arrangements | | |\n| The aggregate carrying values of the variable interest entities' assets and liabilities, after elimination of any | | |\n| intercompany transactions and balances, in the consolidated balance sheets were as follows (in millions): | | |\n| September 30, December 31, | | |\n| 2024 2023 | | |\n| Assets | | |\n| Current assets | | |\n| Cash and cash equivalents $ 51 $ | | 66 |\n| Accounts receivable, net 28 | | 13 |\n| Prepaid expenses and other current assets 263 361 | | |\n| Total current assets 342 440 | | |\n| Operating lease vehicles, net 451 | | — |\n| Solar energy systems, net 2,524 3,278 | | |\n| Other non-current assets 190 369 | | |\n| 3,507 $ 4,087 Total assets | $ | |\n| Liabilities | | |\n| Current liabilities | | |\n| Accrued liabilities and other $ 36 $ | | 67 |\n| Deferred revenue 7 | | 6 |\n| Current portion of debt and finance leases 1,930 1,564 | | |\n| Total current liabilities 1,973 1,637 | | |\n| Deferred revenue, net of current portion 81 | | 99 |\n| 1,826 2,041 Debt and finance leases, net of current portion | | |\n| $ 3,880 $ 3,777 Total liabilities | | |\n| 24 | | |", - "page_start": 29, - "page_end": 29, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "energy will be produced. The cost for re-opening and upgrading is estimated at $1.6 billion (US) and is dependent on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act. [210] The US government and the state of Michigan are investing almost $2 billion (US) to reopen the Palisades Nuclear reactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. 
The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO of Exelon who was responsible for Exelon spinoff of Constellation.[211]\n\nAfter the last approval in September 2023, Taiwan suspended the approval of data centers north of Taoyuan with a capacity of more than 5 MW in 2024, due to power supply shortages.[212] Taiwan aims to phase out nuclear power by 2025.[212] On the other hand, Singapore imposed a ban on the opening of data centers in 2019 due to electric power, but in 2022, lifted this ban.[212]\n\nAlthough most nuclear plants in Japan have been shut down after the 2011 Fukushima nuclear accident, according to an October 2024 *Bloomberg* article in Japanese, cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near nuclear power plant for a new data center for generative AI.[213] Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheap and stable power for AI.[213]\n\nOn 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center. [214] According to the Commission Chairman Willie L. Phillips, it is a burden on the electricity grid as well as a significant cost shifting concern to households and other business sectors.[214]\n\n#### **Misinformation**\n\nYouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. 
Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation.[215] This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.[216] The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem .\n\nIn 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.[217] AI pioneer Geoffrey Hinton expressed concern about AI enabling \"authoritarian leaders to manipulate their electorates\" on a large scale, among other risks.[218]\n\n#### **Algorithmic bias and fairness**\n\nMachine learning applications will be biased[k] if they learn from biased data.[220] The developers may not be aware that the bias exists.[221] Bias can be introduced by the way training data is selected and by the way a model is deployed.[222][220] If a biased algorithm is used to make decisions that can seriously", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia3.pdf" - }, - { - "text": "#### **NOTE 1 — ORGANIZATION**\n\nMGM MIRAGE (the \"Company\"), formerly MGM Grand, Inc., is a Delaware corporation, incorporated on January 29, 1986. As of December 31, 2004 approximately 58% of the outstanding shares of the Company's common stock were owned by Tracinda Corporation, a Nevada corporation wholly owned by Kirk Kerkorian. 
MGM MIRAGE acts largely as a holding company and, through wholly-owned subsidiaries, owns and/or operates casino resorts.\n\nThe Company owns and operates the following casino resorts on the Las Vegas Strip in Las Vegas, Nevada: Bellagio, MGM Grand Las Vegas, The Mirage, Treasure Island (\"TI\"), New York-New York and the Boardwalk Hotel and Casino. The Company owns a 50% interest in the joint venture that owns and operates the Monte Carlo Resort & Casino, also located on the Las Vegas Strip.\n\nThe Company owns three resorts in Primm, Nevada at the California/Nevada state line – Whiskey Pete's, Buffalo Bill's and the Primm Valley Resort – as well as two championship golf courses located near the resorts. The Company also owns Shadow Creek, an exclusive world-class golf course located approximately ten miles north of its Las Vegas Strip resorts.\n\nThe Company, through its wholly owned subsidiary, MGM Grand Detroit, Inc., and its local partners formed MGM Grand Detroit, LLC, to develop a hotel, casino and entertainment complex in Detroit, Michigan. MGM Grand Detroit, LLC operates a casino in an interim facility in downtown Detroit. See Note 10 for discussion of the revised development agreement with the City of Detroit and plans for a permanent casino resort.\n\nThe Company owns and operates Beau Rivage, a beachfront resort located in Biloxi, Mississippi. The Company also owns a 50% interest in a limited liability company that owns Borgata, a casino resort at Renaissance Pointe, located in the Marina area\n\nof Atlantic City, New Jersey. Boyd Gaming Corporation owns the other 50% of Borgata and also operates the resort. Borgata opened in July 2003. 
The Company owns approximately 95 developable acres adjacent to Borgata, a portion of which consists of common roads, landscaping and master plan improvements which the Company designed and developed as required under the agreement with Boyd.\n\nUntil July 2004, the Company owned and operated MGM Grand Australia and until January 2004, the Company owned and operated the Golden Nugget Las Vegas in downtown Las Vegas and the Golden Nugget Laughlin in Laughlin, Nevada (the \"Golden Nugget Subsidiaries\"). Until June 2003, the Company operated PLAYMGMMIRAGE.com, the Company's online gaming website based in the Isle of Man. See Note 3 for further information regarding these discontinued operations. In the second quarter of 2002, the Company received proceeds of $11 million upon termination of management agreements covering four casinos in the Republic of South Africa. Prior to the termination, the Company managed three permanent casinos and one interim casino and received management fees from its partner, Tsogo Sun Gaming & Entertainment. The termination fee was recorded as part of other revenues in the accompanying consolidated statements of income.\n\nThe Company is actively seeking future development opportunities in the United Kingdom. In May 2003, the Company acquired a 25% interest in Metro Casinos Limited, a United Kingdom gaming company which operates a casino in Bristol. See Note 10 for discussion of other potential developments in the United Kingdom.\n\nIn June 2004, the Company entered into a joint venture agreement to develop, build and operate a hotel-casino resort in Macau S.A.R. 
The agreement is subject to, among other things, the approval of the government of Macau S.A.R., and other regulatory approvals, as well as the entry into a subconcession agreement with the holder of one of the existing concessions.", - "page_start": 55, - "page_end": 55, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "This annual report contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934, including statements regarding our expectations, hopes, intentions, or strategies regarding the future. These statements are subject to certain risks and uncertainties that could cause actual results to differ materially from those anticipated in the forward-looking statements. Factors that might cause such a difference include, but are not limited to, changes in the interest rate environment, management's business strategy, national, regional and local market conditions, and legislative and regulatory conditions. The Company undertakes no obligation to publicly revise these forward-looking statements to reflect subsequent events or circumstances, except as required by law.\n\n#### **General**\n\nShenandoah Telecommunications Company is a diversified telecommunications company providing both regulated and unregulated telecommunications services through its nine wholly owned subsidiaries. These subsidiaries provide local exchange telephone services, wireless personal communications services (PCS), as well as cable television, paging, Internet access, long distance, fiber optics facilities, and leased tower facilities. The Company is the exclusive provider of wireless mobility communications network products and services under the Sprint brand from Harrisonburg, Virginia to Harrisburg, York and Altoona, Pennsylvania. The Company refers to the Hagerstown, Maryland; Martinsburg, West Virginia; and Harrisonburg and Winchester, Virginia markets as its Quad State markets. 
The Company refers to the Altoona, Harrisburg, and York, Pennsylvania markets as its Central Penn markets. Competitive local exchange carrier (CLEC) services were established on a limited basis during 2002. In addition, the Company sells and leases equipment, mainly related to services it provides, and also participates in emerging services and technologies by direct investment in non-affiliated companies.\n\nThe Company reports revenues as wireless, wireline and other revenues. These revenue classifications are defined as follows: Wireless revenues are made up of the Personal Communications Company (a PCS Affiliate of Sprint), and the Mobile Company. Wireline revenues include the following subsidiary revenues in the financial results: Telephone Company, Network Company, Cable Television Company, and the Long Distance Company. Other revenues are comprised of the revenues of ShenTel Service Company, the Leasing Company, ShenTel Communications Company and the Holding Company. For additional information on the Company's business segments, see Note 14 to audited consolidated financial statements appearing elsewhere in this report.\n\nThe Company participates in the telecommunications industry, which requires substantial investment in fixed assets or plant. This significant capital requirement may preclude profitability during the initial years of operation. The strategy of the Company is to grow and diversify the business by adding services and geographic areas that can leverage the existing plant, but to do so within the opportunities and constraints presented by the industry. For many years the Company focused on reducing reliance on the regulated telephone operation, which up until 1981 was the primary business within the Company. 
This initial diversification was concentrated in other wireline businesses, such as the cable television and regional fiber facility businesses, but in 1990 the Company made its first significant investment in the wireless sector through its former investment in the Virginia 10 RSA Limited partnership. By 1998, revenues of the regulated telephone operation had decreased to 59.2% of total revenues. In that same year more than 76.6% of the Company's total revenue was generated by wireline operations, and initiatives were already underway to make wireless a more significant contributor to total revenues.\n\nDuring the 1990's significant investments were made in the cellular and PCS (wireless) businesses. The VA 10 RSA cellular operation, in which the Company held a 66% interest and was the general partner, experienced rapid revenue growth and excellent margins in the late 1990's. The cellular operation covered only six counties, and became increasingly dependent on roaming revenues. Management believed the roaming revenues and associated margins would be unsustainable as other wireless providers increasingly offered nationally-branded services with significantly reduced usage charges. To position it to participate in the newer, more advanced, digital wireless services, in 1995 the Company entered the PCS business through an affiliation with American Personal Communications (APC), initiating service along the Interstate 81 corridor from Harrisonburg, Virginia to Chambersburg, Pennsylvania. This territory was a very close match to the Company's fiber network, thereby providing economic integration that might not be available to other wireless carriers. In 1999, the Company entered a new affiliation arrangement with Sprint, the successor to APC (which introduced the Company to a nationally-branded wireless service) and expanded the PCS footprint further into Central Pennsylvania. 
The Company's combined capital investment in 2000 and 2001 in the PCS operation was $45.1 million.", - "page_start": 40, - "page_end": 40, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "NISSAN IS ABOUT MEETING UNMET NEEDS, CRAFTING SINGULAR PRODUCTS AND TRANSFORMING BRAND STRENGTH AND INNOVATION INTO NEW BUSINESS OPPORTUNITIES. WE ARE NISSAN. WE ARE INFINITI. WE ARE NISSAN LIGHT COMMERCIAL VEHICLES, EXPANDING OUR RANGE. WE ARE NISSAN INDUSTRIAL MACHINERY, LEVERAGING OUR EXPERTISE TO BUILD FORKLIFTS AND MARINE PRODUCTS. AND WE ARE NISSAN FINANCIAL SERVICES, PROVIDING OUR CUSTOMERS WITH A COMPREHENSIVE LINEUP OF OFFERINGS. THIS IS THE NISSAN SHIFT_", - "page_start": 17, - "page_end": 17, - "source_file": "OTC_NSANY_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "tesla_form_10q.pdf", - "query": "What is the reason for the increase in Tesla's tax rate from 2023 to 2024?", - "target_page": 26, - "target_passage": " increase in our effective tax rate is primarily due to the impact of releasing the valuation allowance on our U.S. deferred tax assets in the fourth quarter of 2023 and changes in the mix of our jurisdictional earnings", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "April 2024", - "page_start": 0, - "page_end": 0, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "#### Table of Contents\n\n#### Legal Proceedings\n\n#### Litigation Relating to 2018 CEO Performance Award\n\nOn June 4, 2018, a purported Tesla stockholder filed a putative class and derivative action in the Delaware Court of Chancery against Elon Musk and the members of Tesla's board of directors as then constituted, alleging corporate waste, unjust enrichment and that such board members breached their fiduciary duties by approving the stock-based compensation plan awarded to Elon Musk in 2018 (the \"2018 CEO Performance Award\"). Trial was held November 14-18, 2022. 
On January 30, 2024, the Court issued an opinion finding that the 2018 CEO Performance Award should be rescinded. Plaintiff's counsel filed a brief seeking a fee award of 29,402,900 Tesla shares, plus expenses of $1,120,115.50. Tesla opposed the fee request on June 7, 2024, and a hearing was held on July 8, 2024. At Tesla's 2024 Annual Meeting of Stockholders, 72% of the disinterested voting shares of Tesla, excluding shares owned by Mr. Musk and Kimbal Musk, voted to ratify the 2018 CEO Performance Award. On June 28, 2024, because Tesla's disinterested stockholders voted to ratify the 2018 CEO Performance Award, Mr. Musk and the other director defendants, joined by Tesla, filed a brief seeking to revise the Court's January 30, 2024 opinion, and a hearing was held on August 2, 2024.\n\n#### Litigation Related to Directors' Compensation\n\nOn June 17, 2020, a purported Tesla stockholder filed a derivative action in the Delaware Court of Chancery, purportedly on behalf of Tesla, against certain of Tesla's current and former directors regarding compensation awards granted to Tesla's directors, other than Elon Musk, between 2017 and 2020. The suit asserts claims for breach of fiduciary duty and unjust enrichment and seeks declaratory and injunctive relief, unspecified damages and other relief. Defendants filed their answer on September 17, 2020.\n\nOn July 14, 2023, the parties filed a Stipulation and Agreement of Compromise and Settlement, which does not involve an admission of any wrongdoing by any party. If the settlement is approved by the Court, this action will be fully settled and dismissed with prejudice. Pursuant to the terms of the agreement, Tesla provided notice of the proposed settlement to stockholders of record as of July 14, 2023. The Court held a hearing regarding the settlement on October 13, 2023, after which it took the settlement and plaintiff counsels' fee request under advisement. 
On August 14, 2024, the parties submitted a joint letter requesting that the Court approve and enter final judgment with respect to the settlement, and decide the fee request at a later date. The settlement is not expected to have an adverse impact on our results of operations, cash flows or financial position.\n\n#### Litigation Relating to Potential Going Private Transaction\n\nBetween August 10, 2018 and September 6, 2018, nine purported stockholder class actions were filed against Tesla and Elon Musk in connection with Mr. Musk's August 7, 2018 Twitter post that he was considering taking Tesla private. On January 16, 2019, Plaintiffs filed their consolidated complaint in the United States District Court for the Northern District of California and added as defendants the members of Tesla's board of directors. The consolidated complaint asserts claims for violations of the federal securities laws and seeks unspecified damages and other relief. The parties stipulated to certification of a class of stockholders, which the court granted on November 25, 2020. Trial started on January 17, 2023, and on February 3, 2023, a jury rendered a verdict in favor of the defendants on all counts. After trial, plaintiffs filed a motion for judgment as a matter of law and a motion for new trial, which the Court denied and judgement was entered in favor of defendants on July 11, 2023. On July 14, 2023, plaintiffs filed a notice of appeal. The appeal, which is pending in the United States Court of Appeals for the Ninth Circuit, has been fully briefed by the parties, and is scheduled for oral argument on October 25, 2024.\n\nBetween October 17, 2018 and March 8, 2021, seven derivative lawsuits were filed in the Delaware Court of Chancery, purportedly on behalf of Tesla, against Mr. 
Musk and the members of Tesla's board of directors, as constituted at relevant times, in relation to statements made and actions connected to a potential going private transaction, with certain of the lawsuits challenging additional Twitter posts by Mr. Musk, among other things. Several of those actions were consolidated, and all have been stayed. In addition to these cases, two derivative lawsuits were filed on October 25, 2018 and February 11, 2019 in the U.S. District Court for the District of Delaware, purportedly on behalf of Tesla, against Mr. Musk and the members of the Tesla board of directors as then constituted. Those cases have also been consolidated and stayed pending resolution of the appeal in the above-referenced consolidated purported stockholder class action.", - "page_start": 26, - "page_end": 26, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "#### Table of Contents\n\nGross margin for total automotive increased from 18.7% to 20.1% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to lower average combined cost per unit of our vehicles, an increase in FSD revenue and an increase in regulatory credits revenue, partially offset by lower average selling price on our vehicles, as discussed above.\n\nGross margin for total automotive decreased from 19.7% to 19.0% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023 primarily due to lower average selling price on our vehicles and temporary under-utilization of manufacturing capacity during production ramps, partially offset by lower average combined cost per unit of our vehicles, an increase in regulatory credits revenue and an increase in FSD revenue, as discussed above.\n\nGross margin for total automotive & services and other segment increased from 17.4% to 18.7% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. 
Gross margin for total automotive & services and other segment decreased from 18.5% to 17.6% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The changes in gross margin are primarily due to the automotive gross margin factors discussed above.\n\n#### Energy Generation and Storage Segment\n\nCost of energy generation and storage revenue increased $473 million, or 40%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Cost of energy generation and storage revenue increased $1.39 billion, or 37%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The increases in cost of revenues were primarily due to increases in Megapack and Powerwall deployments, partially offset by increases in IRA manufacturing credits recognized as compared to the prior periods.\n\nGross margin for energy generation and storage increased from 24.4% to 30.5% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Gross margin for energy generation and storage increased from 18.0% to 26.6% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The increases were primarily due to margin improvements for our energy storage products driven by cost reductions, including benefits from IRA manufacturing credits, and a higher proportion of our storage business, which operated at a higher gross margin, within the segment as compared to the prior periods. 
September 30, Change September 30, Change (Dollars in millions) 2024 2023 $ % 2024 2023 $ % Research and development $ 1,039 $ 1,161 $ (122) (11)% $ 3,264 $ 2,875 $ 389 14 % As a percentage of revenues 4 % 5 % 5 % 4 %\n\n#### Research and Development Expense\n\n| Three Months Ended | | | | | | | | Nine Months Ended | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Research and development (\"R&D\") expenses decreased $122 million, or 11%, in the three months ended September 30, | | | | | | | | | | | | |\n| 2024 as compared to the three months ended September 30, 2023 primarily due to a decrease in vehicle programs, partially | | | | | | | | | | | | |\n| offset by an increase in AI related costs year over year. R&D expenses as a percentage of revenue decreased from 5% to 4% in | | | | | | | | | | | | |\n| the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to lower | | | | | | | | | | | | |\n| R&D expenses in the current period. | | | | | | | | | | | | |\n| R&D expenses increased $389 million, or 14%, in the nine months ended September 30, 2024 as compared to the nine | | | | | | | | | | | | |\n| months ended September 30, 2023. The overall increases were primarily driven by additional costs year over year related to AI | | | | | | | | | | | | |\n| programs. R&D expenses as a percentage of revenue increased from 4% to 5% in the nine months ended September 30, 2024 | | | | | | | | | | | | |\n| as compared to the nine months ended September 30, 2023 as we continue to expand our product roadmap and technologies. 
| | | | | | | | | | | | |\n| Selling, General and Administrative Expense | | | | | | | | | | | | |\n| Three Months Ended Nine Months Ended | | | | | | | | | | | | |\n| September 30, Change September 30, Change | | | | | | | | | | | | |\n| (Dollars in millions) 2024 2023 $ % 2024 2023 $ % | | | | | | | | | | | | |\n| (67) | Selling, general and administrative $ | 1,186 $ | 1,253 | | $ | (5)% | $ | 3,837 $ 3,520 | | $ | 317 | 9 % |\n| As a percentage of revenues | | 5 % | | 5 % | | | | 5 % | 5 % | | | |\n\nResearch and development (\"R&D\") expenses decreased $122 million, or 11%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to a decrease in vehicle programs, partially offset by an increase in AI related costs year over year. R&D expenses as a percentage of revenue decreased from 5% to 4% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to lower R&D expenses in the current period.\n\nR&D expenses increased $389 million, or 14%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The overall increases were primarily driven by additional costs year over year related to AI programs. R&D expenses as a percentage of revenue increased from 4% to 5% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023 as we continue to expand our product roadmap and technologies.\n\n#### Selling, General and Administrative Expense\n\n| Research and development (\"R&D\") expenses decreased $122 million, or 11%, in the three months ended September 30, |\n| --- |\n| 2024 as compared to the three months ended September 30, 2023 primarily due to a decrease in vehicle programs, partially |\n| offset by an increase in AI related costs year over year. 
R&D expenses as a percentage of revenue decreased from 5% to 4% in |\n| the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to lower |\n| R&D expenses in the current period. |\n| R&D expenses increased $389 million, or 14%, in the nine months ended September 30, 2024 as compared to the nine |\n| months ended September 30, 2023. The overall increases were primarily driven by additional costs year over year related to AI |\n| programs. R&D expenses as a percentage of revenue increased from 4% to 5% in the nine months ended September 30, 2024 |\n| as compared to the nine months ended September 30, 2023 as we continue to expand our product roadmap and technologies. |\n| Selling, General and Administrative Expense |\n| Three Months Ended Nine Months Ended |\n| September 30, Change September 30, Change |\n| (Dollars in millions) 2024 2023 $ % 2024 2023 $ % |\n| Selling, general and administrative $ 1,186 $ 1,253 $ (67) (5)% $ 3,837 $ 3,520 $ 317 9 % |\n| As a percentage of revenues 5 % 5 % 5 % 5 % |\n| 31 |", - "page_start": 39, - "page_end": 39, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "- 200. \"Big tech and the pursuit of AI dominance\" (https://www.economist.com/business/2023/03/2 6/big-tech-and-the-pursuit-of-ai-dominance). *The Economist*. 26 March 2023. Archived (http s://web.archive.org/web/20231229021351/https://www.economist.com/business/2023/03/26/ big-tech-and-the-pursuit-of-ai-dominance) from the original on 29 December 2023.\n- 201. Fung, Brian (19 December 2023). \"Where the battle to dominate AI may be won\" (https://ww w.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html). *CNN Business*. Archived (https://web.archive.org/web/20240113053332/https://www.cnn.com/2023/12/19/tech/cloudcompetition-and-ai/index.html) from the original on 13 January 2024.\n- 202. Metz, Cade (5 July 2023). 
\"In the Age of A.I., Tech's Little Guys Need Big Friends\" (https://w ww.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html). *The New York Times*. Archived (https://web.archive.org/web/20240708214644/https://www.nytim es.com/2023/07/05/business/artificial-intelligence-power-data-centers.html) from the original on 8 July 2024. Retrieved 5 October 2024.\n- 203. \"Electricity 2024 Analysis\" (https://www.iea.org/reports/electricity-2024). *IEA*. 24 January 2024. Retrieved 13 July 2024.\n- 204. Calvert, Brian (28 March 2024). \"AI already uses as much energy as a small country. It's only the beginning\" (https://www.vox.com/climate/2024/3/28/24111721/ai-uses-a-lot-of-ener gy-experts-expect-it-to-double-in-just-a-few-years). *Vox*. New York, New York. Archived (http s://web.archive.org/web/20240703080555/https://www.vox.com/climate/2024/3/28/2411172 1/ai-uses-a-lot-of-energy-experts-expect-it-to-double-in-just-a-few-years) from the original on 3 July 2024. Retrieved 5 October 2024.\n- 205. Halper, Evan; O'Donovan, Caroline (21 June 2024). \"AI is exhausting the power grid. Tech firms are seeking a miracle solution\" (https://www.washingtonpost.com/business/2024/06/2 1/artificial-intelligence-nuclear-fusion-climate/?utm_campaign=wp_post_most&utm_medium =email&utm_source=newsletter&wpisrc=nl_most&carta-url=https%3A%2F%2Fs2.washingto npost.com%2Fcar-ln-tr%2F3e0d678%2F6675a2d2c2c05472dd9ec0f4%2F596c09009bbc0f 20865036e7%2F12%2F52%2F6675a2d2c2c05472dd9ec0f4). *Washington Post*.\n- 206. Davenport, Carly. \"AI Data Centers and the Coming YS Power Demand Surge\" (https://web. archive.org/web/20240726080428/https://www.goldmansachs.com/intelligence/pages/gs-res earch/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf) (PDF). *Goldman Sachs*. 
Archived from the original (https://www.goldmansachs.com/intellige nce/pages/gs-research/generational-growth-ai-data-centers-and-the-coming-us-power-surg e/report.pdf) (PDF) on 26 July 2024. Retrieved 5 October 2024.\n- 207. Ryan, Carol (12 April 2024). \"Energy-Guzzling AI Is Also the Future of Energy Savings\" (http s://www.wsj.com/business/energy-oil/ai-data-centers-energy-savings-d602296e). *Wall Street Journal*. Dow Jones.\n- 208. Hiller, Jennifer (1 July 2024). \"Tech Industry Wants to Lock Up Nuclear Power for AI\" (https:// www.wsj.com/business/energy-oil/tech-industry-wants-to-lock-up-nuclear-power-for-ai-6cb7 5316?mod=djem10point). *Wall Street Journal*. Dow Jones. Archived (https://web.archive.or g/web/20241005165650/https://www.wsj.com/business/energy-oil/tech-industry-wants-to-loc k-up-nuclear-power-for-ai-6cb75316?mod=djem10point) from the original on 5 October 2024. Retrieved 5 October 2024.\n- 209. Kendall, Tyler (28 September 2024). \"Nvidia's Huang Says Nuclear Power an Option to Feed Data Centers\" (https://www.bloomberg.com/news/articles/2024-09-27/nvidia-s-huang-s ays-nuclear-power-an-option-to-feed-data-centers). *Bloomberg*.\n- 210. Halper, Evan (20 September 2024). \"Microsoft deal would reopen Three Mile Island nuclear plant to power AI\" (https://www.washingtonpost.com/business/2024/09/20/microsoft-three-mi le-island-nuclear-constellation). *Washington Post*.", - "page_start": 41, - "page_end": 41, - "source_file": "wikipedia3.pdf" - }, - { - "text": "#### Table of Contents\n\nOur provision for income taxes increased by $434 million in the three months ended September 30, 2024 and increased by $652 million in the nine months ended September 30, 2024 as compared to the three and nine months ended September 30, 2023, respectively. 
Our effective tax rate increased from 8% to 22% in the three months ended September 30, 2024 and increased from 10% to 23% in the nine months ended September 30, 2024 as compared to the three and nine months ended September 30, 2023, respectively. These increases are primarily due to the impact of releasing the valuation allowance on our U.S. deferred tax assets in the fourth quarter of 2023 and changes in mix of jurisdictional earnings.\n\nSee Note 9, Income Taxes, to the consolidated financial statements included elsewhere in this Quarterly Report on Form 10-Q for further details.\n\n#### Liquidity and Capital Resources\n\nWe expect to continue to generate net positive operating cash flow as we have done in the last five fiscal years. The cash we generate from our core operations enables us to fund ongoing operations and production, our research and development projects for new products and technologies including our proprietary battery cells, additional manufacturing ramps at existing manufacturing facilities, the construction of future factories, and the continued expansion of our retail and service locations, body shops, Mobile Service fleet, Supercharger, including to support NACS, energy product installation capabilities and autonomy and other artificial intelligence enabled products.\n\nIn addition, because a large portion of our future expenditures will be to fund our growth, we expect that if needed we will be able to adjust our capital and operating expenditures by operating segment. For example, if our near-term manufacturing operations decrease in scale or ramp more slowly than expected, including due to global economic or business conditions, we may choose to correspondingly slow the pace of our capital expenditures. 
Finally, we continually evaluate our cash needs and may decide it is best to raise additional capital or seek alternative financing sources to fund the rapid growth of our business, including through drawdowns on existing or new debt facilities or financing funds. Conversely, we may also from time to time determine that it is in our best interests to voluntarily repay certain indebtedness early.\n\nAccordingly, we believe that our current sources of funds will provide us with adequate liquidity during the 12-month period following September 30, 2024, as well as in the long-term.\n\nSee the sections below for more details regarding the material requirements for cash in our business and our sources of liquidity to meet such needs.\n\n#### Material Cash Requirements\n\nFrom time to time in the ordinary course of business, we enter into agreements with vendors for the purchase of components and raw materials to be used in the manufacture of our products. However, due to contractual terms, variability in the precise growth curves of our development and production ramps, and opportunities to renegotiate pricing, we generally do not have binding and enforceable purchase orders under such contracts beyond the short-term, and the timing and magnitude of purchase orders beyond such period is difficult to accurately project.\n\nAs discussed in and subject to the considerations referenced in Part I, Item 2, Management's Discussion and Analysis of Financial Condition and Results of Operations—Management Opportunities, Challenges and Uncertainties and 2024 Outlook —Cash Flow and Capital Expenditure Trends in this Quarterly Report on Form 10-Q, we currently expect our capital expenditures to support our projects globally to exceed $11.00 billion in 2024 and be between $8.00 to $10.00 billion in each of the following two fiscal years. 
We also have certain obligations in connection with our operations at Gigafactory New York and Gigafactory Shanghai, as outlined in Part II, Item 7, Management's Discussion and Analysis of Financial Condition and Results of Operations—Liquidity and Capital Resources—Material Cash Requirements in our Annual Report on Form 10-K for the year ended December 31, 2023.\n\nAs of September 30, 2024, we and our subsidiaries had outstanding $7.42 billion in aggregate principal amount of indebtedness, of which $2.12 billion is current. For details regarding our indebtedness, refer to Note 7, Debt, to the consolidated financial statements included elsewhere in this Quarterly Report on Form 10-Q.\n\n#### Sources and Conditions of Liquidity\n\nOur sources to fund our material cash requirements are predominantly from our deliveries and servicing of new and used vehicles, sales and installations of our energy storage products, interest income, and proceeds from debt facilities and equity offerings, when applicable.", - "page_start": 42, - "page_end": 42, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "#### Table of Contents\n\nDeferred revenue is equivalent to the total transaction price allocated to the performance obligations that are unsatisfied, or partially unsatisfied, as of the balance sheet date. Revenue recognized from the deferred revenue balances as of December 31, 2023 and 2022 was $711 million and $360 million for the nine months ended September 30, 2024 and 2023, respectively. Of the total deferred revenue balance as of September 30, 2024, we expect to recognize $821 million of revenue in the next 12 months. The remaining balance will be recognized at the time of transfer of control of the product or over the performance period.\n\nWe have financing receivables on our consolidated balance sheets related to loans we provide for financing our automotive deliveries. 
As of September 30, 2024 and December 31, 2023, we had current net financing receivables of $245 million and $242 million, respectively, in Accounts receivable, net, and $868 million and $1.04 billion, respectively, in Other non-current assets for the long-term portion.\n\nWe offer resale value guarantees to our commercial banking partners in connection with certain vehicle leasing programs. Under these programs, we originate the lease with our end customer and immediately transfer the lease and the underlying vehicle to our commercial banking partner, with the transaction being accounted for as a sale under ASC 606, Revenue from Contracts with Customers. We estimate a guarantee liability in accordance with ASC 460, Guarantees and record it within other liabilities on our consolidated balance sheet. On a quarterly basis, we assess the estimated market value of vehicles sold under this program to determine whether there have been changes to the amount of expected resale value guarantee liabilities. The total recorded guarantee liabilities on vehicles sold under this program were immaterial as of September 30, 2024 and December 31, 2023. Our maximum exposure on the guarantees we provide if they are unable to sell the vehicle at or above the vehicle's contractual residual value at the end of the lease term was $1.04 billion and $166 million as of September 30, 2024 and December 31, 2023, respectively. September 30, 2024 December 31, 2023\n\n#### Automotive Regulatory Credits\n\nAs of September 30, 2024, total transaction price allocated to performance obligations that were unsatisfied or partially unsatisfied for contracts with an original expected length of more than one year was $4.72 billion. Of this amount, we expect to recognize $683 million in the next 12 months and the rest over the remaining performance obligation period. 
Additionally, changes in regulations on automotive regulatory credits may significantly impact our remaining performance obligations and revenue to be recognized under these contracts.\n\n# Automotive Leasing Revenue\n\n# Direct Sales-Type Leasing Program\n\nLease receivables relating to sales-type leases are presented on the consolidated balance sheets as follows (in millions):\n\n| vehicles sold under this program to determine whether there have been changes to the amount of expected resale value | | | |\n| --- | --- | --- | --- |\n| guarantee liabilities. The total recorded guarantee liabilities on vehicles sold under this program were immaterial as of | | | |\n| September 30, 2024 and December 31, 2023. Our maximum exposure on the guarantees we provide if they are unable to sell | | | |\n| the vehicle at or above the vehicle's contractual residual value at the end of the lease term was $1.04 billion and $166 million as | | | |\n| of September 30, 2024 and December 31, 2023, respectively. | | | |\n| Automotive Regulatory Credits | | | |\n| As of September 30, 2024, total transaction price allocated to performance obligations that were unsatisfied or partially | | | |\n| unsatisfied for contracts with an original expected length of more than one year was $4.72 billion. Of this amount, we expect to | | | |\n| recognize $683 million in the next 12 months and the rest over the remaining performance obligation period. Additionally, | | | |\n| changes in regulations on automotive regulatory credits may significantly impact our remaining performance obligations and | | | |\n| revenue to be recognized under these contracts. 
| | | |\n| Automotive Leasing Revenue | | | |\n| Direct Sales-Type Leasing Program | | | |\n| Lease receivables relating to sales-type leases are presented on the consolidated balance sheets as follows (in millions): | | | |\n| September 30, 2024 December 31, 2023 | | | |\n| Gross lease receivables $ | $ | 584 | 780 |\n| Unearned interest income | | (48) | (78) |\n| Allowance for expected credit losses | | (7) | (6) |\n| Net investment in sales-type leases $ | $ | 529 | 696 |\n| Reported as: | | | |\n| Prepaid expenses and other current assets $ | $ | 171 | 189 |\n| Other non-current assets | | 358 | 507 |\n| $ Net investment in sales-type leases | $ | 529 | 696 |\n| 11 | | | |", - "page_start": 14, - "page_end": 14, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "#### Table of Contents\n\n#### Energy Generation and Storage Segment\n\n#### Energy Generation and Storage Sales\n\nWe record as deferred revenue any non-refundable amounts that are collected from customers related to prepayments, which is recognized as revenue ratably over the respective customer contract term. As of September 30, 2024 and December 31, 2023, deferred revenue related to such customer payments amounted to $1.73 billion and $1.60 billion, respectively, mainly due to contractual payment terms. Revenue recognized from the deferred revenue balances as of December 31, 2023 and 2022 was $1.09 billion and $511 million for the nine months ended September 30, 2024 and 2023, respectively. As of September 30, 2024, total transaction price allocated to performance obligations that were unsatisfied or partially unsatisfied for contracts with an original expected length of more than one year was $6.61 billion. Of this amount, we expect to recognize $4.23 billion in the next 12 months and the rest over the remaining performance obligation period.\n\nWe have financing receivables on our consolidated balance sheets related to loans we provide for financing our energy products. 
As of September 30, 2024 and December 31, 2023, we had current net financing receivables of $32 million and $31 million, respectively, in Accounts receivable, net, and $641 million and $578 million, respectively, in Other non-current assets for the long-term portion.\n\n#### Income Taxes\n\nWe are subject to income taxes in the U.S. and in many foreign jurisdictions. Significant judgment is required in determining our provision for income taxes, our deferred tax assets and liabilities and any valuation allowance recorded against our net deferred tax assets that are not more likely than not to be realized. We monitor the realizability of our deferred tax assets taking into account all relevant factors at each reporting period. In completing our assessment of realizability of our deferred tax assets, we consider our history of income (loss) measured at pre-tax income (loss) adjusted for permanent book-tax differences on a jurisdictional basis, volatility in actual earnings, excess tax benefits related to stock-based compensation in recent prior years and impacts of the timing of reversal of existing temporary differences. We also rely on our assessment of the Company's projected future results of business operations, including uncertainty in future operating results relative to historical results, volatility in the market price of our common stock and its performance over time, variable macroeconomic conditions impacting our ability to forecast future taxable income, and changes in business that may affect the existence and magnitude of future taxable income. Our valuation allowance assessment is based on our best estimate of future results considering all available information. 
Three Months Ended September 30, Nine Months Ended September 30, 2024 2023 2024 2023 Net income attributable to common stockholders $ 2,167 $ 1,853 $ 4,774 $ 7,069 Less: Buy-outs of noncontrolling interest — 2 (42) (3)\n\nOur provision for or benefit from income taxes for interim periods is determined using an estimate of our annual effective tax rate, adjusted for discrete items, if any, that are taken into account in the relevant period. Each quarter, we update our estimate of the annual effective tax rate, and if our estimated tax rate changes, we make a cumulative adjustment.\n\n#### Net Income per Share of Common Stock Attributable to Common Stockholders\n\nThe following table presents the reconciliation of net income attributable to common stockholders to net income used in computing basic and diluted net income per share of common stock (in millions):\n\n| Company's projected future results of business operations, including uncertainty in future operating results relative to historical | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| results, volatility in the market price of our common stock and its performance over time, variable macroeconomic conditions | | | | | | | |\n| impacting our ability to forecast future taxable income, and changes in business that may affect the existence and magnitude of | | | | | | | |\n| future taxable income. Our valuation allowance assessment is based on our best estimate of future results considering all | | | | | | | |\n| available information. | | | | | | | |\n| Our provision for or benefit from income taxes for interim periods is determined using an estimate of our annual | | | | | | | |\n| effective tax rate, adjusted for discrete items, if any, that are taken into account in the relevant period. Each quarter, we update | | | | | | | |\n| our estimate of the annual effective tax rate, and if our estimated tax rate changes, we make a cumulative adjustment. 
| | | | | | | |\n| Net Income per Share of Common Stock Attributable to Common Stockholders | | | | | | | |\n| The following table presents the reconciliation of net income attributable to common stockholders to net income used in | | | | | | | |\n| computing basic and diluted net income per share of common stock (in millions): | | | | | | | |\n| 2024 2023 2024 2023 | | | | | | | |\n| Net income attributable to common stockholders $ $ $ | $ | 2,167 | | 1,853 | 4,774 | 7,069 | |\n| Less: Buy-outs of noncontrolling interest 2 (42) | | — | | | | | (3) |\n| Net income used in computing basic and diluted net | | | | | | | |\n| $ $ income per share of common stock | $ | 2,167 | $ | 1,851 | 4,816 | | 7,072 |\n| 12 | | | | | | | |", - "page_start": 15, - "page_end": 15, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "energy will be produced. The cost for re-opening and upgrading is estimated at $1.6 billion (US) and is dependent on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act. [210] The US government and the state of Michigan are investing almost $2 billion (US) to reopen the Palisades Nuclear reactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. 
The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO of Exelon who was responsible for Exelon spinoff of Constellation.[211]\n\nAfter the last approval in September 2023, Taiwan suspended the approval of data centers north of Taoyuan with a capacity of more than 5 MW in 2024, due to power supply shortages.[212] Taiwan aims to phase out nuclear power by 2025.[212] On the other hand, Singapore imposed a ban on the opening of data centers in 2019 due to electric power, but in 2022, lifted this ban.[212]\n\nAlthough most nuclear plants in Japan have been shut down after the 2011 Fukushima nuclear accident, according to an October 2024 *Bloomberg* article in Japanese, cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near nuclear power plant for a new data center for generative AI.[213] Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheap and stable power for AI.[213]\n\nOn 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center. [214] According to the Commission Chairman Willie L. Phillips, it is a burden on the electricity grid as well as a significant cost shifting concern to households and other business sectors.[214]\n\n#### **Misinformation**\n\nYouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. 
Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation.[215] This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.[216] The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem .\n\nIn 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.[217] AI pioneer Geoffrey Hinton expressed concern about AI enabling \"authoritarian leaders to manipulate their electorates\" on a large scale, among other risks.[218]\n\n#### **Algorithmic bias and fairness**\n\nMachine learning applications will be biased[k] if they learn from biased data.[220] The developers may not be aware that the bias exists.[221] Bias can be introduced by the way training data is selected and by the way a model is deployed.[222][220] If a biased algorithm is used to make decisions that can seriously", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Consolidated Financial Statements June 30, 2024 and 2023\n\n(With Independent Auditors' Report Thereon)", - "page_start": 0, - "page_end": 0, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "- 282. Arguments that AI is not an imminent risk: Brooks (2014), Geist (2015), Madrigal (2015), Lee (2014)\n- 283. Christian (2020), pp. 67, 73.\n- 284. Yudkowsky (2008).\n- 285. Anderson & Anderson (2011).\n- 286. AAAI (2014).\n- 287. Wallach (2010).\n- 288. Russell (2019), p. 173.\n- 289. Stewart, Ashley; Melton, Monica. 
\"Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup\" (https://www.businessinsider. com/hugging-face-open-source-ai-approach-2023-12). *Business Insider*. Archived (https://w eb.archive.org/web/20240925013220/https://www.businessinsider.com/hugging-face-open-s ource-ai-approach-2023-12) from the original on 25 September 2024. Retrieved 14 April 2024.\n- 290. Wiggers, Kyle (9 April 2024). \"Google open sources tools to support AI model development\" (https://techcrunch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-develop ment). *TechCrunch*. Archived (https://web.archive.org/web/20240910112401/https://techcrun ch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-development/) from the original on 10 September 2024. Retrieved 14 April 2024.\n- 291. Heaven, Will Douglas (12 May 2023). \"The open-source AI boom is built on Big Tech's handouts. How long will it last?\" (https://www.technologyreview.com/2023/05/12/1072950/op en-source-ai-google-openai-eleuther-meta). *MIT Technology Review*. Retrieved 14 April 2024.\n- 292. Brodsky, Sascha (19 December 2023). \"Mistral AI's New Language Model Aims for Open Source Supremacy\" (https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-o pen-source-supremacy). *AI Business*. Archived (https://web.archive.org/web/202409052126 07/https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-open-source-supre macy) from the original on 5 September 2024. Retrieved 5 October 2024.\n- 293. Edwards, Benj (22 February 2024). \"Stability announces Stable Diffusion 3, a next-gen AI image generator\" (https://arstechnica.com/information-technology/2024/02/stability-announc es-stable-diffusion-3-a-next-gen-ai-image-generator). *Ars Technica*. 
Archived (https://web.ar chive.org/web/20241005170201/https://arstechnica.com/information-technology/2024/02/sta bility-announces-stable-diffusion-3-a-next-gen-ai-image-generator/) from the original on 5 October 2024. Retrieved 14 April 2024.\n- 294. Marshall, Matt (29 January 2024). \"How enterprises are using open source LLMs: 16 examples\" (https://venturebeat.com/ai/how-enterprises-are-using-open-source-llms-16-exa mples). *VentureBeat*. Archived (https://web.archive.org/web/20240926171131/https://ventur ebeat.com/ai/how-enterprises-are-using-open-source-llms-16-examples/) from the original on 26 September 2024. Retrieved 5 October 2024.\n- 295. Piper, Kelsey (2 February 2024). \"Should we make our most powerful AI models open source to all?\" (https://www.vox.com/future-perfect/2024/2/2/24058484/open-source-artificial -intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake). *Vox*. Archived (https://web.archi ve.org/web/20241005170204/https://www.vox.com/future-perfect/2024/2/2/24058484/open-s ource-artificial-intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake) from the original on 5 October 2024. Retrieved 14 April 2024.\n- 296. Alan Turing Institute (2019). \"Understanding artificial intelligence ethics and safety\" (https:// www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and _safety.pdf) (PDF). Archived (https://web.archive.org/web/20240911131935/https://www.turi ng.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety. pdf) (PDF) from the original on 11 September 2024. 
Retrieved 5 October 2024.", - "page_start": 45, - "page_end": 45, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0764.pdf", - "query": "Which is the first candidate for experimenting the case of electrons interacting with a single boson mode?", - "target_page": 6, - "target_passage": "The primary candidate for such mode is an optical phonon", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "#### I. INTRODUCTION\n\nThe nonvanishing neutrino masses have been confirmed by various neutrino oscillation phenomena and indicate the evidence of new physics beyond the Standard Model. The most attractive idea to naturally explain the tiny neutrino masses is the seesaw mechanism [1], in which the right-handed (RH) neutrinos singlet under the SM gauge group are introduced. The minimal gauged U(1)B−L model based on the gauge group SU(3)C ×SU(2)L ×U(1)Y × U(1)B−L [2] is an elegant and simple extension of the SM, in which the RH neutrinos of three generations are necessarily introduced because of the gauge and gravitational anomaly cancellations. In addition, the mass of RH neutrinos arises associated with the U(1)B−L gauge symmetry breaking.\n\nAlthough the scale of the B−L gauge symmetry breaking is basically arbitrary as long as phenomenological constraints are satisfied, one interesting option is to take it to be the TeV scale [3]. It has been recently pointed out [4] that when the classical conformal invariance is imposed on the minimal U(1)B−L model, the symmetry breaking scale appears to be the TeV scale naturally. If this is the case, all new particles, the Z ′ gauge boson, the B − L Higgs boson H and the RH neutrinos appear at the TeV scale unless the U(1)B−L gauge coupling is extremely small, and they can be discovered at Large Hadron Collider [5–8]. 
Then we may be able to understand the relation between the gauge symmetry breaking and the origin of neutrino masses.\n\nAlthough such a TeV scale model is interesting and appealing, one might think that the absence of dark matter (DM) candidate is a shortcoming of this model. A sterile RH neutrino with mass of the order of MeV is one possibility [9]. In this paper, we propose a very simple idea to introduce the DM candidate in the minimal gauged U(1)B−L model. We introduce the Z2 parity into the model and impose one of three RH neutrinos to be odd, while the others even. In this way, the Z2-odd RH neutrino becomes stable and the DM candidate. Note that two RH neutrinos are enough to reconcile with the observed neutrino oscillation data, with a prediction of one massless light neutrino. Therefore, without introducing any additional new dynamical degrees of freedom, the DM particle arises in the minimal gauged U(1)B−L model.\n\nThe paper is organized as follows. In the next section, we briefly describe our model. In section III, we estimate the thermal relic density of the RH neutrino and identify the model", - "page_start": 1, - "page_end": 1, - "source_file": "1002.2525.pdf" - }, - { - "text": "FIG. 4: Top - a conductivity plot for the BCSI case in the presence of a lattice. The parameters are ∆ = 30 meV , Γ = 3.5 meV . Bottom – the behavior of Kubo sums. Note that (a) the spectral weight in the NS is always greater in the SCS, (b) the spectral weight decreases with Γ, and (c) the difference between NS and SCS decreases as Γ increases.\n\nlittle variation of ∆W(ωc) at above 0.1 − 0.3eV what implies that for larger ωc, ∆W(ωc) ≈ ∆WK >> ∆f(ωc).\n\nTo make this more quantitative, we compare in Fig. 6 ∆W(ωc) obtained for a constant DOS, when ∆W(ωc) = ∆f(ωc), and for the actual lattice dispersion, when ∆W(ωc) = ∆WK + ∆f(ωc). 
In the clean limit there is obviously little cutoff dependence beyond 0.1eV , i.e., ∆f(ωc) is truly small, and the difference between the two cases is just ∆WK. In the dirty limit, the situation is similar, but there is obviously more variation with ωc, and ∆f(ωc) becomes truly small only above 0.3eV . Note also that the position of the dip in ∆W(ωc) in the clean limit is at a larger ωc in the presence of the lattice than in a continuum.\n\n#### B. The Einstein boson model\n\nWe next consider the case of electrons interacting with a single boson mode which by itself is not affected by superconductivity. The primary candidate for such mode is an optical phonon. The imaginary part of the NS self energy has been discussed numerous times in the literature. We make one simplifying assumption – approximate the DOS by a constant in calculating fermionic self-energy. We will, however, keep the full lattice dispersion in the calculations of the optical integral. The advantage of this\n\nFIG. 5: The evolution of optical integral in NS(top) and SCS(bottom) for BCSI case. Plots are made for clean limit (solid lines, Γ = 3.5 meV ) and dirty limit (dashed lines, Γ = 150 meV ) for ∆ = 30 meV . Observe that (a) W(0) = 0 in the NS, but has a non-zero value in the SCS because of the δ-function (this value decreases in the dirty limit), and (b) the flat region in the SCS is due to the fact that σ ′ (ω) = 0 for Ω < 2∆. Also note that ∼ 90 − 95% of the spectral weight is recovered up to 1eV\n\napproximation is that the self-energy can be computed analytically. 
The full self-energy obtained with the lattice dispersion is more involved and can only be obtained numerically, but its structure is quite similar to the one obtained with a constant DOS.\n\nThe self-energy for a constant DOS is given by\n\n$$\\Sigma(i\\omega)=-\\frac{i}{2\\pi}\\lambda_{n}\\int d\\epsilon_{k}d(i\\Omega)\\chi(i\\Omega)G(\\epsilon_{k},i\\omega+i\\Omega)\\tag{13}$$\n\nwhere\n\n$$\\chi(i\\Omega)=\\frac{\\omega_{0}^{2}}{\\omega_{0}^{2}-(i\\Omega)^{2}}\\tag{14}$$\n\nand λn is a dimensionless electron-boson coupling. Integrating and transforming to real frequencies, we obtain\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=-\\frac{\\pi}{2}\\,\\lambda_{n}\\omega_{o}\\,\\Theta(|\\omega|-\\omega_{o})$$\n \n \n\n$$\\Sigma^{\\prime}(\\omega)=-\\frac{1}{2}\\,\\lambda_{n}\\omega_{o}\\,log\\left|\\frac{\\omega+\\omega_{o}}{\\omega-\\omega_{o}}\\right|\\tag{15}$$\n\nIn the SCS, we obtain for ω < 0\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=-\\frac{\\pi}{2}\\,\\lambda_{n}\\omega_{o}\\,R e\\left(\\frac{\\omega+\\omega_{o}}{\\sqrt{(\\omega+\\omega_{o})^{2}-\\Delta^{2}}}\\right)$$", - "page_start": 5, - "page_end": 5, - "source_file": "1001.0764.pdf" - }, - { - "text": "From Eq. (19), one can see that σ (p) SI ∝ (sin 2θ/v′ ) 2 for a given DM mass mN . Fig. 3 shows the spin-independent cross section of RH neutrino with a proton. The resultant cross section is found to be far below the current limits reported by XENON10 [24] and CDMSII [25]: σSI . 4 × 10−8 − 2 × 10−7 pb, for a DM mass of 100 GeV-1 TeV. Future experiments such as XENON1T [26] can reach the cross section predicted in our model.\n\nFIG. 3: The spin independent scattering cross section with a proton. All parameters are same as those used in the previous section. The upper and lower lines correspond to sin θ = 0.7 and 0.3, respectively.\n\n#### IV. SUMMARY\n\nWe have proposed a scenario of the RH neutrino dark matter in the context of the minimal gauged U(1)B−L model. 
We have introduced a discrete Z2 parity in the model, so that one RH neutrino assigned as Z2-odd can be stable and, hence, the DM candidate, while the other two RH neutrinos account for neutrino masses and mixings through the seesaw mechanism. No additional degrees of freedom are necessary to be added. We have evaluated the relic density of the dark matter particle. The dominant annihilation modes are via the Higgs boson exchange processes in the s-channel and thus, our model can be called Higgs portal DM model. It has been found that the relic density consistent with the current observation", - "page_start": 7, - "page_end": 7, - "source_file": "1002.2525.pdf" - }, - { - "text": "# Optical Integral and Sum Rule Violation\n\nSaurabh Maiti, Andrey V. Chubukov\n\nDepartment of Physics, University of Wisconsin, Madison, Wisconsin 53706, USA\n\n(Dated: November 9, 2018)\n\nThe purpose of this work is to investigate the role of the lattice in the optical Kubo sum rule in the cuprates. We compute conductivities, optical integrals W, and ∆W between superconducting and normal states for 2-D systems with lattice dispersion typical of the cuprates for four different models – a dirty BCS model, a single Einstein boson model, a marginal Fermi liquid model, and a collective boson model with a feedback from superconductivity on a collective boson. The goal of the paper is two-fold. First, we analyze the dependence of W on the upper cut-off (ωc) placed on the optical integral because in experiments W is measured up to frequencies of order bandwidth. For a BCS model, the Kubo sum rule is almost fully reproduced at ωc equal to the bandwidth. But for other models only 70%-80% of Kubo sum rule is obtained up to this scale and even less so for ∆W, implying that the Kubo sum rule has to be applied with caution. Second, we analyze the sign of ∆W. In all models we studied ∆W is positive at small ωc, then crosses zero and approaches a negative value at large ωc, i.e. 
the optical integral in a superconductor is smaller than in a normal state. The point of zero crossing, however, increases with the interaction strength and in a collective boson model becomes comparable to the bandwidth at strong coupling. We argue that this model exhibits the behavior consistent with that in the cuprates.\n\n#### I. INTRODUCTION\n\nThe analysis of sum rules for optical conductivity has a long history. Kubo, in an extensive paper1 in 1957, used a general formalism of a statistical theory of irreversible processes to investigate the behavior of the conductivity in electronic systems. For a system of interacting electrons, he derived the expression for the integral of the real part of a (complex) electric conductivity σ(Ω) and found that it is independent on the nature of the interactions and reduces to\n\n$$\\int_{0}^{\\infty}R e\\,\\sigma(\\Omega)\\,d\\Omega={\\frac{\\pi}{2}}{\\frac{n e^{2}}{m}}\\qquad\\qquad(1)$$\n\nHere n is the density of the electrons in the system and m is the bare mass of the electron. This expression is exact provided that the integration extends truly up to infinity, and its derivation uses the obvious fact that at energies higher than the total bandwidth of a solid, electrons behave as free particles.\n\nThe independence of the r.h.s. of Eq. (1) on temperature and the state of a solid (e.g., a normal or a superconducting state – henceforth referred to as NS and SCS respectively) implies that, while the functional form of σ(Ω) changes with, e.g., temperature, the total spectral weight is conserved and only gets redistributed between different frequencies as temperature changes. This conservation of the total weight of σ(Ω) is generally called a sum rule.\n\nOne particular case, studied in detail for conventional superconductors, is the redistribution of the spectral weight between normal and superconducting states. 
This is known as Ferrel-Glover-Tinkham (FGT) sum rule:2,3\n\n$$\\int_{0+}^{\\infty}\\,Re\\,\\sigma_{NS}(\\Omega)=\\int_{0+}^{\\infty}\\,Re\\,\\sigma_{sc}(\\Omega)+\\frac{\\pi n_{s}e^{2}}{2m}\\ \\ \\ \\ (2)$$\n\nwhere ns is the superfluid density, and πnse 2/(2m) is the spectral weight under the δ-functional piece of the conductivity in the superconducting state.\n\nIn practice, the integration up to an infinite frequency is hardly possible, and more relevant issue for practical applications is whether a sum rule is satisfied, at least approximately, for a situation when there is a single electron band which crosses the Fermi level and is well separated from other bands. Kubo considered this case in the same paper of 1957 and derived the expression for the \"band\", or Kubo sum rule\n\n$$\\int_{0}^{\\cdot\\infty^{\\prime}}Re\\,\\sigma(\\Omega)\\,d\\Omega=W_{K}=\\frac{\\pi e^{2}}{2N}\\sum_{\\vec{k}}\\nabla_{k_{x}}^{2}\\varepsilon_{\\vec{k}}\\,n_{\\vec{k}}\\tag{3}$$\n\nwhere n~k is the electronic distribution function and ε~k is the band dispersion. Prime in the upper limit of the integration has the practical implication that the upper limit is much larger than the bandwidth of a given band which crosses the Fermi level, but smaller than the frequencies of interband transitions. Interactions with external objects, e.g., phonons or impurities, and interactions between fermions are indirectly present in the distribution function which is expressed via the full fermionic Green's function as n~k = T P m G( ~k, ωm). For ǫk = k 2/2m, ∇2 k~x ε~k = 1/m, WK = πne2/(2m), and Kubo sum rule reduces to Eq. (1). In general, however, ε~k is a lattice dispersion, and Eqs. (1) and (3) are different. Most important, WK in Eq. (3) generally depends on T and on the state of the system because of n~k . 
In this situation, the temperature evolution of the optical integral does not reduce to a simple redistribution of the spectral weight – the whole spectral weight inside the conduction band changes with T . This issue was first studied in detail by Hirsch 4 who introduced the now-frequently-used notation \"violation of the conductivity sum rule\".\n\nIn reality, as already pointed out by Hirsch, there is no true violation as the change of the total spectral weight", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0764.pdf" - }, - { - "text": "chirality interactions in cold atom optical lattices has been proposed38 .\n\nOur model (8) is achieved at second order of the perturbation series. Higher order terms become truncation errors but may be controlled by small parameters λx,y,z/Jcluster ∼ p |Jx,y,z|/Jcluster.\n\n### V. CONCLUSIONS.\n\nWe constructed the exactly solvable Kitaev honeycomb model1 as the exact low energy effective Hamiltonian of a spin-1/2 model [equations (8) or (9)] with spin-rotation and time reversal symmetry. The spin in Kitaev model is represented as the pseudo-spin in the two-fold degenerate spin singlet subspace of a cluster of four antiferromagnetically coupled spin-1/2 moments. The physical spin model is a honeycomb lattice of such four-spin clusters, with certain inter-cluster interactions. The machinery for the exact mapping to pseudo-spin Hamiltonian was developed (see e.g. TABLE I), which is quite general and can be used to construct other interesting (exactly solvable) spin-1/2 models from spin rotation invariant systems.\n\nIn this construction the pseudo-spin correlations in the Kitaev model will be mapped to dimer or spin-chirality correlations in the physical spin system. The corresponding picture of the fractionalized Majorana fermion excitations and Ising vortices still remain to be clarified.\n\nThis exact construction contains high order physical spin interactions, which is undesirable for practical implementation. 
We described two possible approaches to reduce this problem: generating the high order spin interactions by perturbative expansion of the coupling to optical phonon, or the magnetic coupling between clusters. This perturbative construction will introduce truncation error of perturbation series, which may be controlled by small expansion parameters. Whether these constructions can be experimentally engineered is however beyond the scope of this study. It is conceivable that other perturbative expansion can also generate these high order spin interactions, but this possibility will be left for future works.\n\n#### Acknowledgments\n\nThe author thanks Ashvin Vishwanath, Yong-Baek Kim and Arun Paramekanti for inspiring discussions, and Todadri Senthil for critical comments. The author is supported by the MIT Pappalardo Fellowship in Physics.\n\n# Appendix A: Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n\nIn this Appendix we reproduce from Ref.35 the couplings of all tetrahedron distortion modes to the spin system. And convert them to pseudo-spin notation in the physical spin singlet sector.\n\nConsider a general small distortion of the tetrahedron, the spin Hamiltonian becomes\n\n$$H_{\\rm cluster},\\ {\\rm SL}=(J_{\\rm cluster}/2)(\\sum_{\\ell}{\\bf S}_{\\ell})^{2}+J^{\\prime}\\sum_{\\ell 0 and 0 < r < 3. However this is not convenient for later discussions and will not be used.\n\nWe briefly describe some of the properties of (8). Its low energy states are entirely in the space that each of the clusters is a physical spin singlet (called cluster singlet subspace hereafter). Therefore physical spin correlations are strictly confined within each cluster. The excitations carrying physical spin are gapped, and their dynamics are 'trivial' in the sense that they do not move from one cluster to another. But there are non-trivial low energy physical spin singlet excitations, described by the pseudospins defined above. 
The correlations of the pseudo-spins can be mapped to correlations of their corresponding physical spin observables (the inverse mappings are not unique, c.f. TABLE I). For example τ x,y correlations become certain dimer-dimer correlations, τ z correlation becomes chirality-chirality correlation, or four-dimer correlation. It will be interesting to see the corresponding picture of the exotic excitations in the Kitaev model, e.g. the Majorana fermion and the Ising vortex. However this will be deferred to future studies.\n\nIt is tempting to call this as an exactly solved spin liquid with spin gap (∼ Jcluster), an extremely short-range resonating valence bond(RVB) state, from a model with spin rotation and time reversal symmetry. However it should be noted that the unit cell of this model contains an even number of spin-1/2 moments (so does the original Kitaev model) which does not satisfy the stringent definition of spin liquid requiring odd number of electrons per unit cell. Several parent Hamiltonians of spin liquids have already been constructed. See for example, Ref.24–27 .\n\n# IV. GENERATE THE HIGH ORDER PHYSICAL SPIN INTERACTIONS BY PERTURBATIVE EXPANSION.\n\nOne major drawback of the present construction is that it involves high order interactions of physical spins[see (8) and (9)], thus is 'unnatural'. In this Section we will make compromises between exact solvability and naturalness. We consider two clusters j and k and try to generate the Jx,y,z interactions in (7) from perturbation series expansion of more natural(lower order) physical spin interactions. Two different approaches for this purpose will be laid out in the following two Subsections. In Subsection IV A we will consider the two clusters as two tetrahedra, and couple the spin system to certain optical phonons, further coupling between the phonon modes\n\nFIG. 3: Illustration of the tetragonal to orthorhombic Q E 1 (top) and Q E 2 (bottom) distortion modes. 
(a) Perspective view of the tetrahedron. 1, . . . , 4 label the spins. Arrows indicate the motion of each spin under the distortion mode. (b) Top view of (a). (c)(d) Side view of (a).\n\nof the two clusters can generate at lowest order the desired high order spin interactions. In Subsection IV B we will introduce certain magnetic, e.g. Heisenberg-type, interactions between physical spins of different clusters, at lowest order(second order) of perturbation theory the desired high order spin interactions can be achieved. These approaches involve truncation errors in the perturbation series, thus the mapping to low energy effect Hamiltonian will no longer be exact. However the error introduced may be controlled by small expansion parameters. In this Section we denote the physical spins on cluster j(k) as j1, . . . , j4 (k1, . . . , k4), and denote pseudo-spins on cluster j(k) as ~τj (~τk).\n\n# A. Generate the High Order Terms by Coupling to Optical Phonon.\n\nIn this Subsection we regard each four-spin cluster as a tetrahedron, and consider possible optical phonon modes(distortions) and their couplings to the spin system. The basic idea is that the intra-cluster Heisenberg coupling Jcluster can linearly depend on the distance between physical spins. Therefore certain distortions of the tetrahedron couple to certain linear combinations of Sℓ · Sm. Integrating out phonon modes will then generate high order spin interactions. This idea has been extensively studied and applied to several magnetic materials28–34. More details can be found in a recent review by Tchernyshyov and Chern35. And we will frequently use their notations. In this Subsection we will use the representation (5) for τ z .\n\nConsider first a single tetrahedron with four spins 1, . . . , 4. The general distortions of this tetrahedron can be classified by their symmetry (see for example Ref.35). Only two tetragonal to orthorhombic distortion modes, QE 1 and QE 2 (illustrated in FIG. 
3), couple to the pseudospins defined in Section II. A complete analysis of all modes is given in Appendix A. The coupling is of the", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0266.pdf" - }, - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. Olejnik,1, 2 P. Wadley,3 J. Haigh,3 K. W. Edmonds,3 R. P. Campion,3 A. W. Rushforth,3 B. L. Gallagher,3\n\nC. T. Foxon,3 T. Jungwirth,2, 3 J. Wunderlich,1, 2 S. S. Dhesi,4 S. Cavill,4 G. van der Laan,4 and E. Arenholz5\n\n1Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\nInstitute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic 3School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom\n\n4Diamond Light Source, Harwell Science and Innovation Campus,\n\n5Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n(Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\n2\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p-type non-magnetic spacers2 . 
However, the Curie temperature TC of (Ga,Mn)As is currently limited to 185 K in single layers3 , and is typically much lower for layers embedded within a heterostructure2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively4,5. Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature7 . Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature8,9. Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition, which may further disrupt the interface order. The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples7 . Demonstration of coupling between the bulk of the layers, i.e., an exchange bias effect, would provide direct evidence of the interface magnetic order. 
Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.\n\nHere, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. As with previous studies of FM metal/FM semiconductor bilayers4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures10,11) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref.7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260◦C, using previously established methods3,8. A low Mn concentration of x ≈ 0.03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼0 ◦C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. 
Mn and Fe L2,3 x-ray absorption and XMCD\n\nDidcot, Oxfordshire, OX11 0DE, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0764.pdf", - "query": "What was the optical integral analysis proposed by Norman and Pépin?", - "target_page": 8, - "target_passage": "a phenomenological model for the self energy which fits normal state scattering rate measure- ments by ARPES", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# Optical Integral and Sum Rule Violation\n\nSaurabh Maiti, Andrey V. Chubukov\n\nDepartment of Physics, University of Wisconsin, Madison, Wisconsin 53706, USA\n\n(Dated: November 9, 2018)\n\nThe purpose of this work is to investigate the role of the lattice in the optical Kubo sum rule in the cuprates. We compute conductivities, optical integrals W, and ∆W between superconducting and normal states for 2-D systems with lattice dispersion typical of the cuprates for four different models – a dirty BCS model, a single Einstein boson model, a marginal Fermi liquid model, and a collective boson model with a feedback from superconductivity on a collective boson. The goal of the paper is two-fold. First, we analyze the dependence of W on the upper cut-off (ωc) placed on the optical integral because in experiments W is measured up to frequencies of order bandwidth. For a BCS model, the Kubo sum rule is almost fully reproduced at ωc equal to the bandwidth. But for other models only 70%-80% of Kubo sum rule is obtained up to this scale and even less so for ∆W, implying that the Kubo sum rule has to be applied with caution. Second, we analyze the sign of ∆W. In all models we studied ∆W is positive at small ωc, then crosses zero and approaches a negative value at large ωc, i.e. the optical integral in a superconductor is smaller than in a normal state. 
The point of zero crossing, however, increases with the interaction strength and in a collective boson model becomes comparable to the bandwidth at strong coupling. We argue that this model exhibits the behavior consistent with that in the cuprates.\n\n#### I. INTRODUCTION\n\nThe analysis of sum rules for optical conductivity has a long history. Kubo, in an extensive paper1 in 1957, used a general formalism of a statistical theory of irreversible processes to investigate the behavior of the conductivity in electronic systems. For a system of interacting electrons, he derived the expression for the integral of the real part of a (complex) electric conductivity σ(Ω) and found that it is independent on the nature of the interactions and reduces to\n\n$$\\int_{0}^{\\infty}R e\\,\\sigma(\\Omega)\\,d\\Omega={\\frac{\\pi}{2}}{\\frac{n e^{2}}{m}}\\qquad\\qquad(1)$$\n\nHere n is the density of the electrons in the system and m is the bare mass of the electron. This expression is exact provided that the integration extends truly up to infinity, and its derivation uses the obvious fact that at energies higher than the total bandwidth of a solid, electrons behave as free particles.\n\nThe independence of the r.h.s. of Eq. (1) on temperature and the state of a solid (e.g., a normal or a superconducting state – henceforth referred to as NS and SCS respectively) implies that, while the functional form of σ(Ω) changes with, e.g., temperature, the total spectral weight is conserved and only gets redistributed between different frequencies as temperature changes. This conservation of the total weight of σ(Ω) is generally called a sum rule.\n\nOne particular case, studied in detail for conventional superconductors, is the redistribution of the spectral weight between normal and superconducting states. 
This is known as Ferrel-Glover-Tinkham (FGT) sum rule:2,3\n\n$$\\int_{0+}^{\\infty}\\,Re\\,\\sigma_{NS}(\\Omega)=\\int_{0+}^{\\infty}\\,Re\\,\\sigma_{sc}(\\Omega)+\\frac{\\pi n_{s}e^{2}}{2m}\\ \\ \\ \\ (2)$$\n\nwhere ns is the superfluid density, and πnse 2/(2m) is the spectral weight under the δ-functional piece of the conductivity in the superconducting state.\n\nIn practice, the integration up to an infinite frequency is hardly possible, and more relevant issue for practical applications is whether a sum rule is satisfied, at least approximately, for a situation when there is a single electron band which crosses the Fermi level and is well separated from other bands. Kubo considered this case in the same paper of 1957 and derived the expression for the \"band\", or Kubo sum rule\n\n$$\\int_{0}^{\\cdot\\infty^{\\prime}}Re\\,\\sigma(\\Omega)\\,d\\Omega=W_{K}=\\frac{\\pi e^{2}}{2N}\\sum_{\\vec{k}}\\nabla_{k_{x}}^{2}\\varepsilon_{\\vec{k}}\\,n_{\\vec{k}}\\tag{3}$$\n\nwhere n~k is the electronic distribution function and ε~k is the band dispersion. Prime in the upper limit of the integration has the practical implication that the upper limit is much larger than the bandwidth of a given band which crosses the Fermi level, but smaller than the frequencies of interband transitions. Interactions with external objects, e.g., phonons or impurities, and interactions between fermions are indirectly present in the distribution function which is expressed via the full fermionic Green's function as n~k = T P m G( ~k, ωm). For ǫk = k 2/2m, ∇2 k~x ε~k = 1/m, WK = πne2/(2m), and Kubo sum rule reduces to Eq. (1). In general, however, ε~k is a lattice dispersion, and Eqs. (1) and (3) are different. Most important, WK in Eq. (3) generally depends on T and on the state of the system because of n~k . 
In this situation, the temperature evolution of the optical integral does not reduce to a simple redistribution of the spectral weight – the whole spectral weight inside the conduction band changes with T . This issue was first studied in detail by Hirsch 4 who introduced the now-frequently-used notation \"violation of the conductivity sum rule\".\n\nIn reality, as already pointed out by Hirsch, there is no true violation as the change of the total spectral weight", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0764.pdf" - }, - { - "text": "Louis' patriline is the line from which he is descended from father to son.\n\nPatrilineal descent is the principle behind membership in royal houses, as it can be traced back through the generations - which means that if King Louis were to choose a historically accurate house name it would be Robertian, as all his male-line ancestors have been of that house.\n\nLouis is a member of the House of Bourbon, a branch of the Capetian dynasty and of the Robertians.\n\nLouis' patriline is the line from which he is descended from father to son. It follows the Bourbon kings of France, and the Counts of Paris and Worms. This line can be traced back more than 1,200 years from Robert of Hesbaye to the present day, through Kings of France & Navarre, Spain and Two-Sicilies, Dukes of Parma and Grand-Dukes of Luxembourg, Princes of Orléans and Emperors of Brazil. It is one of the oldest in Europe.\n\n1. Robert II of Worms and Rheingau (Robert of Hesbaye), 770–807\n\n- 2. Robert III of Worms and Rheingau, 808–834\n- 3. Robert IV the Strong, 820–866\n- 4. Robert I of France, 866–923\n- 5. Hugh the Great, 895–956\n- 6. Hugh Capet, 941–996\n- 7. Robert II of France, 972–1031\n- 8. Henry I of France, 1008–1060\n- 9. Philip I of France, 1053–1108\n- 10. Louis VI of France, 1081–1137\n- 11. Louis VII of France, 1120–1180\n- 12. Philip II of France, 1165–1223\n- 13. Louis VIII of France, 1187–1226\n- 14. 
Louis IX of France, 1214–1270\n- 15. Robert, Count of Clermont, 1256–1317\n- 16. Louis I, Duke of Bourbon, 1279–1342\n- 17. James I, Count of La Marche, 1319–1362\n- 18. John I, Count of La Marche, 1344–1393\n- 19. Louis, Count of Vendôme, 1376–1446\n- 20. Jean VIII, Count of Vendôme, 1428–1478\n- 21. François, Count of Vendôme, 1470–1495\n- 22. Charles de Bourbon, Duke of Vendôme, 1489–1537\n- 23. Antoine, King of Navarre, Duke of Vendôme, 1518–1562\n- 24. Henry IV, King of France and of Navarre, 1553–1610\n- 25. Louis XIII, King of France and Navarre, 1601–1643\n\n26. Louis XIV, King of France and Navarre, 1638–1715\n\n#### **Issue**\n\n| Name | Birth | Death | Notes |\n| --- | --- | --- | --- |\n| | | By Maria Theresa, Infanta of Spain, Archduchess of Austria, Queen of France and of Navarre (20 September 1638 – 30 July 1683) | |\n| Louis, le Grand Dauphin | 1 November 1661 | 14 April 1711 | Fils de France. Dauphin of France (1661–1711). Had issue. |\n| | | | Father of Louis, Dauphin of France, Philip V of Spain and |\n| | | | Charles, Duke of Berry. Grandfather of Louis XV of France |\n| Anne Élisabeth | 18 November 1662 | 30 December 1662 | Fille de France. Died in infancy. |\n| Marie Anne | 16 November 1664 | 26 December 1664 | Fille de France. Died in infancy. |\n| Marie Thérèse | 2 January 1667 | 1 March 1672 | Fille de France. Known as Madame Royale and la Petite |\n| | | | Madame. Died in childhood. |\n| Philippe Charles, Duke of Anjou | 5 August 1668 | 10 July 1671 | Fils de France. Died in childhood. |\n| Louis François, Duke of Anjou | 14 June 1672 | 4 November 1672 | Fils de France. Died in infancy. |\n\nThis is an incomplete list of Louis XIV's illegitimate children. 
He reputedly had more, but the difficulty in fully documenting all such births restricts the list only to the better-known and/or legitimised.", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Θ is then described by a Dirichlet distribution parametrised by a set of concentration parameters *θ*:\n\n$p(\\Theta)=Dir(\\Theta|\\theta)$ (19)\n\nThe concentration parameter of a Dirichlet distribution is essentially a non-negative count of how many times the given category (be it a type of observation or state transition) has occurred. The distribution of concentration parameter counts will determine the shape of the estimated categorical probability distribution, while the scale of the concentration parameters will determine the certainty per precision of the belief. Updating beliefs about Θ (the parameters in the matrices) then corresponds to updating these concentration parameters *θ* with the following update equation:\n\n$$\\theta_{t+1}=\\omega*\\theta_{t}+\\eta*\\chi t\\tag{20}$$\n\nThe updated value for the concentration parameter (*θt*+1) is found by adding the previous concentration parameter *θt* multiplied by a forgetting rate *ω* to the observed data count *χ* (either the observation in the case of **A** learning, or the inferred state or state transition for other matrices) multiplied by a learning rate *η*. With this relatively simple update equation—which, in essence, amounts to just counting the occurrences of categories—an AIF agent can update its beliefs about the various matrices it uses to make inferences about environmental states. For more details on parameter learning with POMDPs, see [23,33,52].\n\n## **3. Using ActiveInference.jl**\n\nIn this section, we provide an overview of the various functions a user will need to operate ActiveInference. This includes functionalities for creating POMDP agents, for simulating behaviour and for fitting the models to data. 
In the next section, we demonstrate how to use the package on a concrete worked example. ActiveInference is under continual development, and the newest version of the package, including documentation for how to use it, can be found at github.com/ilabcode/ActiveInference.jl.\n\n#### *3.1. Creating and Using a POMDP*\n\nThe general structure of ActiveInference.jl is heavily inspired by pymdp [23], a Python library for implementing simulations of AIF in discrete state spaces. Those already acquainted with pymdp should find the syntax here familiar. ActiveInference can be installed as normal from the official Julia General Registry using the Julia's native package manager Pkg:\n\n✞ ☎\n\n```\nusing Pkg\nPkg.add( ActiveInference )\n✝ ✆\n```\nIt can then be loaded into the current project environment:\n\n✞ ☎ **using** ActiveInference ✝ ✆\n\nCentral to the package is the AIF object. This is a structure containing all the components of the generative model, as well as the dynamic belief states and the various settings needed to perform AIF, and is used in conjunction with most of the high-level functions of the package. 
An AIF object can be created with the init_aif function, which takes as arguments the components of the generative model and a dictionary of various settings and parameters:", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "When you define the optical storage group, you provide the following information:\n\n- -Storage group name\n- -Description of the storage group\n- - Volume full reset when optical volumes are rewritable and you want to reuse the storage space (only available with local area network (LAN)-attached optical jukeboxes)\n- - Free space threshold percent (the percent at which Content Manager OnDemand starts storing to rewritable volumes again if the volume full reset parameter is checked)\n- -Storage group type, which is primary or backup\n\nAfter you define the optical storage group, use IBM Navigator for i to define the optical volumes to the Content Manager OnDemand system (Figure 5-20).\n\n| Content Manager OnDemand | | |\n| --- | --- | --- |\n| Optical Volume Definition | | |\n| Add an optical volume definition | | |\n| Current instance QUSROND | | |\n| *Volume name | VIRT001 | |\n| Type | | Megabytes |\n| O Primary | | |\n| Backup | | |\n| Capacity | 20000 | |\n| Optical media family | VWRM | |\n| Optical storage group | REPORTS | |\n| Optical library | | |\n| Opposite side volume name | | |\n| Volume is full | | |\n| System where located | | |\n| Data written to volume 0 | | Megabytes |\n| Deleted or expired data 0 | | Mogabytos |\n| Times accessed O | | |\n| Cancel | | |\n\nFigure 5-20 Content Manager OnDemand for i optical volume definition\n\nWhen you define optical volumes, provide this information:\n\n- -Volume name: Your volume name.\n- -Volume type: Primary or backup.\n- -Capacity in megabytes: Capacity of one side of the optical media after it is initialized.\n- - Optical media family:\n\t- Rewritable (REWT)\n\t- WORM\n\t- Universal Disk Format single-sided (UDF1) that is used by DVD RAM 
drives\n\t- Universal Disk Format or double-sided (UDF2)\n\t- Virtual Rewritable (VRWT)\n\t- Virtual WORM (VWRM)\n- -Optical storage group: Your optical storage group.\n- -Optical library: Library name, which can be provided for documentation.", - "page_start": 145, - "page_end": 145, - "source_file": "sg246915.pdf" - }, - { - "text": "and 640-nm diode lasers. Full thickness, tiled, confocal image stacks with a 2- to 3-mm interval in the Z-axis were obtained through a 203 dry lens (0.8 NA) with the confocal aperture set to 1 Airy unit or less. All image capture was performed using Zen Blue Edition software (Carl Zeiss Microscopy GmbH, Jena, Germany), and analyses were performed using Zen Blue or FIJI.45\n\n#### 2.5. Image analysis\n\nDuring all image quantification, the experimenter was blind to the experimental groups. For quantification of the total number of cells within the DRG, a modified optical dissector stereological method was used11,18,47 (Fig. S1, http://links.lww.com/PAIN/C84). To account for tissue shrinkage during processing, the mean thickness (t) of each section on one slide (ie, 1 in 5 sections) was calculated by taking the mean of the thickest and thinnest cell-containing regions (ie, not fiber tract-containing regions) of the section (NB: no optical correction to thickness was applied; given the use of a dry lens, this value will not reflect actual section thickness, though this was kept consistent throughout the study). The cell-containing, crosssectional area (a) was then calculated, using the middle optical section from the series and drawing around the cell-containing regions. 
Section volume (Vsec) was then calculated:\n\n$$\\mathbb{V}\\mathrm{sec}\\,=\\,t\\times a$$\n\nUsing the Cavalieri principle, the cell-containing volume of the DRG was calculated11:\n\n$$\\forall D\\bar{D}\\bar{G}=\\bar{a}\\times\\bar{t}\\times D$$\n\nwhere a 5 mean cell-containing cross-sectional area, t 5 mean section thickness, and l 5 \"length\" of the DRG (determined from the total number of sections collected). The number of neurons per section (Nsec) was quantified in all immunostained sections. This included only neurons with a visible nucleus (in the NeuN channel), excluded cells with a nucleus visible within the top frame of the Z-stack, and included any neurons with a nucleus visible in any other field within Z-stack, including the bottom frame of Z-stack. The cell density or the number of cells per unit vol (Nv) was then calculated:\n\n$$N_{V}={\\frac{N_{\\mathrm{sec}}}{V_{\\mathrm{sec}}}}$$\n\nFinally, the total number of cells per DRG (NDRG) was calculated:\n\n$$N_{D\\!\\!D\\!\\!G}\\,=\\,\\overline{{{N_{\\nu}}}}\\times V_{D\\!\\!D\\!\\!G}$$\n\nFor quantification of the proportion of FB-labelled cells colabelled with afferent subpopulation markers, initially, the total number of FB-filled neuronal cell profiles with a visible nucleus anywhere within the section was counted, with the observer blind to other channels. The other channel was then revealed, and instances of co-labelling were quantified. No stereological correction was applied, given that the similar size of neuronal nuclei would prevent over-counts of large neurons and that no comparisons of the total number of labelled cells were made. For soma area analyses, the area of neuronal soma expressing the appropriate marker was measured in the optical section within the Z-stack in which that neuron was at its largest, by drawing around the perimeter of the neuron in Fiji/ImageJ v2.14.0/1.54f.\n\n#### 2.6. 
Tissue clearing and 3D volumetric analyses\n\nDorsal root ganglia were extracted from animals 4 weeks post-SNItrans for whole DRG analyses. In this study, tissue was extracted from a combination of MrgDCreERT2;Ai14, ThCreERT2;Ai14, and CalcaCreERT2;Ai14 lines (mixed sex).3 One month after SNItrans, animals were transcardially perfused with sterile saline followed by a fixative containing 4% formaldehyde. Ipsilateral and contralateral L4 DRG were removed and postfixed for 24 hours on a shaker at room temperature before being washed in PBS and stored at 280˚C in CI-VM1 (35% dimethyl sulfoxide, 35% ethylene glycol in PBS) until clearing. Tissue clearing was then performed as previously described.67 In brief, the tissue was exposed to a gradient of 1-propanol containing 0.3% triethylamine (30, 50, 75, 90, 95, 100, 100%) and washed in this solution at 37˚C for 24 hours. The tissue was then rehydrated in PBS and labelled with primary antibodies for 1 week at 37˚C (mouse anti-TDP43 and 2x anti-RFP, Table 2). The tissue was washed for 24 hours and incubated with appropriate secondary antibodies (Table 2) for another week at 37˚C. The tissue was subsequently washed for 24 hours, dehydrated again in increasing concentrations of 1 propanol containing 0.3% triethylamine, and mounted in benzyl alcohol with benzyl benzoate (1:2 ratio) containing 0.3% triethylamine on glass slides with silicone spacers. Imaging was performed on an Olympus spinning disk confocal microscope at 20x, with 2-mm z-steps. The tissue was stored at 4˚C for ;16 months before imaging, so only the tissue that remained transparent at this time was used for downstream analyses. Volumetric analyses were performed using Imaris using the \"spots\" feature with region growth (to allow for different-sized spots), background subtraction, and point spread function elongation (standard 2 3 XY). Initial spot diameters were set based on MrgDCreERT2;Ai14 nuclear size (as labelled by red fluorescent protein (RFP)). 
Spot classification was then performed blind by adjusting the quality threshold to balance detection in superficial and deep tissue. This step was necessary due to differences in tissue quality after long-term storage. Any labelled spots in the adjacent nerve were then deleted (eg, labelled Schwann cells or debris). Count and volumetric data were then exported for analysis in R. Data were filtered for very small (,5 mm3 ) and very large (.2000 mm3 ) spots to further remove any debris, labelled satellite glia or doublets within the ganglia. In both cases, these filters were approximate and did not exclude the possibility that some spots correspond to either class in the final dataset. The upper limit of the \"small\" DRG nuclei size category was defined as the upper bound of 32 easily identifiable MrgD1 nuclei (258 mm3 ). The boundary between \"medium\" and \"large\" bins (400 mm3 ) was less clearly defined in the samples and was therefore set as the approximate midpoint of the volume distribution. A combined size category for all nuclei greater than 258 mm3 was also examined, and the results mirrored those of \"medium\" and \"large\" bins.\n\n# 2.7. Gene Ontology\n\nGene Ontology term analyses were performed on previously published mouse subtype RNA-seq after SNI (GSE2164443 ). In this study, subtype-specific bulk RNA-seq was performed on 5 transgenic mouse lines through reporter labelling and fluorescence activated cell sorting. spliced transcripts alignment to a reference was used to map reads to the GRCm38 (mm10) Mouse Genome,14 and Samtools was used to sort, index, and merge Binary Alignment Map files in line with published reports.28 Quality control was performed as per Barry et al.3 Downstream analyses were performed using DESeq2 on grouped male and female samples.31 For differentially expressed genes (false discovery rate) (FDR , 0.05, LFC .1) (log-fold change), GO analyses were performed using the Wallenius method using goSeq (R). 
In this study, significantly regulated terms related to", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed2.pdf" - }, - { - "text": "generative models, or even (deep learning-based) amortised inference models. These various extensions could provide valuable tools for using AIF models in both theoretical and applied research.\n\n**Author Contributions:** Conceptualisation, S.W.N., J.E.L. and P.T.W.; methodology, S.W.N., J.E.L. and P.T.W.; software, S.W.N., J.E.L. and P.T.W.; formal analysis, S.W.N. and J.E.L.; writing—original draft preparation, S.W.N. and J.E.L.; writing—review and editing, C.H., K.F., C.M. and P.T.W.; visualisation, S.W.N. and J.E.L.; supervision, C.M. and P.T.W.; project administration, P.T.W. All authors read and agreed to the published version of this manuscript.\n\n**Funding:** C.M. acknowledges funding from Aarhus Universitets Forskningsfonds (grant no. AUFF-E-2019-7-10) and from the Carlsberg Foundation (grant no. CF21-0439).\n\n**Institutional Review Board Statement:** Not applicable.\n\n**Informed Consent Statement:** Not applicable.\n\n**Data Availability Statement:** The original data presented in this study are openly available in ActiveInferenceJuliaPaper at URL: https://osf.io/j3k5q/.\n\n**Conflicts of Interest:** The authors declare no conflicts of interest. The funders had no role in the design of this study; in the collection, analyses or interpretation of data; in the writing of this manuscript; or in the decision to publish the results.\n\n## **Abbreviations**\n\nThe following abbreviations are used in this manuscript:\n\n| AIF | Active inference |\n| --- | --- |\n| FEP | Free energy principle |\n| VFE | Variational free energy |\n| EFE | Expected free energy |\n| MCMC | Markov Chain Monte Carlo |\n| POMDP | Partially Observed Markov Decision Process |\n\n## **References**\n\n- 1. Parr, T.; Pezzulo, G.; Friston, K.J. 
*Active Inference: The Free Energy Principle in Mind, Brain, and Behavior*; The MIT Press: Cambridge, MA, USA, 2022. [CrossRef]\n- 2. Friston, K.; FitzGerald, T.; Rigoli, F.; Schwartenbeck, P.; O'Doherty, J.; Pezzulo, G. Active inference and learning. *Neurosci. Biobehav. Rev.* **2016**, *68*, 862–879. [CrossRef]\n- 3. Friston, K.; FitzGerald, T.; Rigoli, F.; Schwartenbeck, P.; Pezzulo, G. Active inference: A process theory. *Neural Comput.* **2017**, *29*, 1–49. [CrossRef]\n- 4. Friston, K.J.; Stephan, K.E. Free-energy and the brain. *Synthese* **2007**, *159*, 417–458. [CrossRef] [PubMed]\n- 5. Friston, K. The free-energy principle: A unified brain theory? *Nat. Rev. Neurosci.* **2010**, *11*, 127–138. [CrossRef] [PubMed]\n- 6. Friston, K. The free-energy principle: A rough guide to the brain? *Trends Cogn. Sci.* **2009**, *13*, 293–301. [CrossRef] [PubMed]\n- 7. Friston, K. A free energy principle for a particular physics. *arXiv* **2019**, arXiv:1906.10184. [CrossRef]\n- 8. Friston, K.; Da Costa, L.; Sajid, N.; Heins, C.; Ueltzhöffer, K.; Pavliotis, G.A.; Parr, T. The free energy principle made simpler but not too simple. *Phys. Rep.* **2023**, *1024*, 1–29. [CrossRef]\n- 9. Friston, K.; Kiebel, S. Predictive coding under the free-energy principle. *Philos. Trans. R. Soc. B Biol. Sci.* **2009**, *364*, 1211–1221. [CrossRef] [PubMed]\n- 10. Karl, F. A Free Energy Principle for Biological Systems. *Entropy* **2012**, *14*, 2100–2121. [CrossRef]\n- 11. Corcoran, A.W.; Pezzulo, G.; Hohwy, J. From allostatic agents to counterfactual cognisers: Active inference, biological regulation, and the origins of cognition. *Biol. Philos.* **2020**, *35*, 32. [CrossRef]\n- 12. Heins, C.; Millidge, B.; Da Costa, L.; Mann, R.P.; Friston, K.J.; Couzin, I.D. Collective behavior from surprise minimization. *Proc. Natl. Acad. Sci. USA* **2024**, *121*, e2320239121. [CrossRef] [PubMed]\n- 13. Patzelt, E.H.; Hartley, C.A.; Gershman, S.J. 
Computational Phenotyping: Using Models to Understand Individual Differences in Personality, Development, and Mental Illness. *Personal. Neurosci.* **2018**, *1*, e18. [CrossRef] [PubMed]", - "page_start": 29, - "page_end": 29, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "| Core Concepts | |\n| --- | --- |\n| AIF | Active inference is a formal framework for modelling behaviour and cog |\n| | nition. Perception and action are cast as minimising free energy—the VFE |\n| | and EFE, respectively—given a generative model of the environment. |\n| VFE | The variational free energy F quantifies how well a generative model |\n| | explains incoming sensory observations. It can be rewritten as the negative |\n| | log model evidence (called surprise) upper-bounded by the divergence |\n| | from the optimal posterior p(s o). Perception as inference is accomplished |\n| | by selecting the approximate posterior q(s) with the lowest associated |\n| | VFE. |\n| | F[q(s), o] ≜ DKL[q(s)∥p(o,s)] = DKL[q(s)∥p(s o)] − ln p(o) |\n| | {z } {z } Divergence Surprise |\n| EFE | The expected free energy G quantifies the expected future free energy |\n| | under an action policy π. It consists of an information gain term and a |\n| | pragmatic value term that provide a natural balance between exploratory |\n| | and goal-seeking behaviour. Action as inference is accomplished by select |\n| | ing the action policy with the lowest associated EFE. |\n| | = − Eq(o˜,s˜ π) [ln q(s˜ o˜, π) − ln q(s˜ π)] − Eq(o˜ π) [ln p(o˜ C)] Gπ |\n| | {z } {z } Information gain Pragmatic value |\n| Generative | The generative model is an agent's formal assumptions about the structure |\n| model | and dynamics of its environment, based on which perceptual and active |\n| | inferences are carried out. Many types of generative models exist that are |\n| | suitable for different environments and tasks. 
|\n| POMDP | The Partially Observable Markov Decision Process is a type of flexible |\n| | generative model that is widely used in the AIF literature. In discrete time |\n| | and usually a discrete state space, this model type is parametrised to fit a |\n| | given task by a set matrices containing probability distributions. |\n\n## **2. Active Inference with POMDPs**\n\nIn this section, we briefly describe the core concepts of AIF and POMDPs. This should familiarise the reader with the vernacular used in the later sections regarding the functionalities of the package. While various extensions, such as structure learning, which enables an agent to learn the structure or shape of its environment through model comparison [44–47], or hierarchical and temporally deep POMDPs [48,49], are relevant for future work, describing these in detail is beyond the scope of this foundational paper.\n\nAt the core of AIF lies the minimisation of a variational free energy upper bound on surprise for perception, as well as action. This is motivated by the free energy principle [4–8], which states that self-organising systems can be described as minimising the variational free energy of their sensory states. The minimisation of free energy generally takes two", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "quantities as its target: the variational free energy (*VFE*) in the case of perception and the expected free energy (*EFE*) in the case of action. The *VFE* is the free energy associated with a given sensory observation and is resolved perceptually by updating beliefs about the environment. The *EFE* is the free energy that is expected in the future, contingent on a given policy or course of action. Choosing action policies associated with a low *EFE* lead to reducing uncertainty about the environment, as well as making preferred observations more likely.\n\n#### *2.1. 
POMDPs in Active Inference*\n\nIn AIF, the POMDP is one of the most common families of generative models used to make inferences about the environment. It is a Markovian discrete state-space model, where employing it means representing the environment and observations as inhabiting one among a set of possible (possibly multidimensional) states, and that the changes in these states can only depend on the system's previous state and the agent's actions. Environmental states are not directly observable, so they have to be inferred based on incoming sensory observations. In AIF for POMDPs and other generative models in general, both perception and action are cast as Bayesian inferences (see Sections 2.2 and 2.3), as well as the learning of parameters of the generative model (see Section 2.4). Crucially, an agent's generative model does not a priori have to be isomorphic to the true environment (i.e., the data-generating process), although this will generally lead to a successful inference, and that the generative model will therefore often come to resemble the environment through learning.\n\nA discrete state-space POMDP in AIF is conventionally defined by five main sets of parameters: **A**, **B**, **C**, **D** and **E** [1,33], see Figure 1. Together, these parametrise the agent's prior beliefs about the prior probability of different states in the environment, how states of the environment change and how they generate observations. Typically, they will be vectors, matrices or tensors; however, henceforth we denote them by their corresponding letter in bold. These make up the components needed for the agent to perform AIF.\n\n**A**, also called the *observation model*, represents the state-to-observation likelihood model. This describes how observations depend on or are generated by states of the environment. It is structured as a matrix with a column for each possible environmental state *s*, and a row for each possible observation *o*. 
Each column is then a categorical probability distribution over the observations that will occur given the environmental state (meaning that each column must contain non-negative values that sum to 1). If the observations are multidimensional (i.e., multiple observations are made at each time point), there is a matrix for each observation modality. If two or more states determine the observation, the likelihood model then becomes a tensor. If **A** is imprecise (i.e., the probabilities are highly entropic and evenly distributed), observations are taken to carry less information about the environment, in many cases leading to more uncertain inferences, and vice versa.\n\n**B**, also called the *transition model*, describes the state-to-state transition probabilities of environmental states *s*. **B** encodes the agent's assumptions about how the environment changes over time, depending on its actions. It has a column and a row for each environmental state *s*, where each column is a categorical probability distribution over the states the environment will take on the next time step, given the state it is currently in. If the environment is modelled as multidimensional, there will be a matrix for each environmental state factor. Additionally, there is a separate matrix for each possible action (making each factor in **B** a tensor). This means that for every factor in the model, there may be one or more actions that pick out the appropriate slice of the tensor. Action therefore allows the agent to predict that the environment (and the corresponding observations) will change differently depending on the actions that it chooses. If **B** is imprecise (i.e., highly entropic),", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "ing the temporal dynamics of belief changes in experimental participants. 
Dynamic belief trajectories can then be related to other (for example, physiological) measures, as is usual in model-based neuroscience [65]. This method can also, in principle, be used for fitting models to other types of experimentally observable systems, like animals, organoids [66], and simulated or emergent systems [67]. The package can also be used for agent-based modelling in general, for repeating earlier analyses with sampling based model-fitting and for comparing POMDP-based AIF models directly to other types of models.\n\nSince they implement full approximate Bayesian inferences, AIF models are computationally more demanding than many approaches traditionally used in cognitive and agent-based modelling, in particular when the dimensionality of the generative model is large. This means that models with highly multidimensional or complex behaviour and large numbers of agents can be computationally infeasible to implement, especially given the additional computational demands introduced by fitting these models to empirical data. Avenues for addressing this implicit scaling problem were proposed in the context of machine learning applications [68,69], and with the use of simplifying assumptions—the use of which are ubiquitous in computational modelling—AIF has been used to model multi-agent phenomena, such as opinion dynamics [15,70], coordinated foraging [71] and fish school movements [12]. It remains to be explored how AIF models can be applied to highly complex natural phenomena, such as a concrete election, which underscores the need for efficient but flexible and accessible software tools in the field.\n\nThere are many ways in which ActiveInference can be improved. It would be useful to extend the set of dynamic belief states to include prediction errors since they are often used for model-based neuroscience. 
This would entail departing from discrete state-space (i.e., POMDP) models to consider continuous state-space models apt for Bayesian filtering or predictive coding (see below). An alternative would be to generate prediction errors from belief updating under discrete models, where prediction errors can be read as the (KL) divergence between posterior and prior beliefs (i.e., complexity or information gain). A simple interface could be added for creating custom parametrisations of the requisite parameters that could be parametrised with Boltzmann or Gibbs distributions, as opposed to Dirichlet distributions. Parameter learning could be extended to all generative model parameters, as well as in parametrised forms (e.g., so that the Boltzmann parameter or temperature of the parameters that are learned); similarly for the precision over expected free energies *γ*. Preference priors should also be implementable for environmental states, in addition to observations, and **A** can be made action dependent.\n\nA library of pre-made canonical POMDP models could be created so that users can easily implement them directly. Alternatives to the fixed-point iteration method for updating posteriors over environmental states could be included, like the marginal message passing algorithm. There are various ways in which the package can be made more computationally efficient, and it could be compared with other software implementations. There are plenty of utility and plotting functions that could be added to the package to make it easier to use and to facilitate integration with the model-fitting packages it relies on; for example, to allow for combining the models with linear regressions to compare parameters values of different populations in a single model. More complex types of POMDP models can also be added, like hierarchical and temporally deep POMDPs. 
Model structure learning could be considered, where different model structures are compared and chosen between by evaluating their free energies. Sophisticated inference, where predictions are also made about changes in one's own beliefs—depending on expected action-dependent observations in the future—could also be implemented [58]. Finally, the package could be extended to other types of generative models than POMDPs, including other universal models, like generalised filtering [17] and Hierarchical Gaussian Filter models [41], as well as custom", - "page_start": 28, - "page_end": 28, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "*Conclusion:* In summary, we propose a new subnatural linewidth spectroscopy technique, which is a laser by using Ramsey seperated-field cavity to realize the output of stimulated-emission radiation via multiple coherent interaction with atomic beam. We find the linewidth of Ramsey laser is subnatural if we choose an appropriate atomic level, and the bad-cavity laser mechanism will dramatically reduce cavityrelated noise as discussed in active optical clock [15–19]. Our results show that this new subnatural linewidth spectroscopy is superior to conventional optical Ramsey seperated-field spectroscopy and any other available subnatural spectroscopy technique at present [3–10]. Considering one have to apply the separated-field method in any phase detection as in Ramsey-Bord*e*´interferometer [2], to investigate the effects of phase differences between the two oscillating fields [31] in this stimulated separated-field method with such subnatural linewidth will be our next research aim.\n\nWe acknowledge Yiqiu Wang and Deshui Yu for fruitful discussions. This work is supported by MOST of China (grant 2005CB724500, National Natural Science Foundation of China (grant 60837004, 10874009), National Hi-Tech Research and Development (863) Program.\n\n- ∗ E-mail: jbchen@pku.edu.cn\n- † E-mail: hongguo@pku.edu.cn.\n- [1] N. F. Ramsey, Phys. 
Rev. **76**, 996 (1949).\n- [2] B. Dubetsky and P. R. Berman, In *Atom Interferometry*, edited by P. R. Berman (Academic Press, Cambridge, MA, 1997).\n- [3] M. M. Salour, Rev. Mod. Phys. **50**, 667 (1978).\n- [4] J. Wong and J. C. Garrison, Phys. Rev. Lett. **44**, 1254 (1980).\n- [5] P. L. Knight and P. E. Coleman, J. Phys. B: Atom. Molec. Phys. **13** 4345 (1980).\n- [6] H. -W. Lee, P. Meystre, and M. O. Scully, Phys. Rev. A **24**, 1914 (1981).\n- [7] F. Shimizu, K. Shimizu, and H. Takuma, Phys. Rev. A **28**, 2248 (1983).\n- [8] W. Gawlik, J. Kowalski, F. Tr¨ager, and M. Vollmer, Phys. Rev.\n\nLett. **48**, 871 (1982).\n\n- [9] H. J. Carmichael, R. J. Brecha, M. G. Raizen, H. J. Kimble, and P. R. Rice, Phys. Rev. A **40**, 5516 (1989).\n- [10] U. W. Rathe, M. O. Scully, Letters in Mathematical Physics **34**, 297 (1995)\n- [11] K. Numata, A. Kemery, J. Camp, Phys Rev Lett, **93**, 250602 (2004).\n- [12] A. D. Ludlow *et al.*, Opt. Lett. **32**, 641 (2007).\n- [13] H. J. Kimble, B. L. Lev, and J. Ye, Phys. Rev. Lett. **101**, 260602 (2008).\n- [14] J. Chen, and X.Chen, In *Proceedings of the 2005 IEEE International Frequency Control Symposium and Exposition*, (IEEE, 2005), p.608.\n- [15] J. Chen, e-print arXiv:0512096 quant-ph; Chinese Science Bulletin **54**, 348 (2009).\n- [16] D. Yu and J. Chen, Phys. Rev. A **78**, 013846 (2008).\n- [17] J. Chen, In *Frequency Standards and Metrology: Proceedings of the 7th Symposium*, edited by Maleki Lute (World Scientific Publishing Company, 2009).\n- [18] Y. Wang, Chinese Science Bulletin **54**, 347 (2009).\n- [19] D. Meiser, J. Ye, D. R. Carlson, and M. J. Holland, Phys. Rev. Lett. **102**, 163601 (2009)\n- [20] F. Strumia, Metrologia **8**, 85 (1972).\n- [21] G. Kramer, J. Opt. Soc. Am. **68**, 1634 (1978).\n- [22] V. S. Letokhov and B. D. Pavlik, Opt. Spectrosc. USSR **32**, 455 (1972).\n- [23] Ye. V. Baklanov, B. Ya, Dubetsky, V. P. Chebotayev, Appl. Phys. **9**, 171 (1976).\n- [24] J. C. Bergquist, S. A. 
Lee, and L. L. Hall, Phys. Rev. Lett. **38**, 159 (1977).\n- [25] L. Davidovich, Rev. Mod. Phys. **68**, 127 (1996).\n- [26] M. I. Kolobov, L. Davidovich, E. Giacobino, and C. Fabre, Phys. Rev. A **47**, 1431 (1993).\n- [27] M. Sargent III, M. O. Scully, and W. E. Lamb, *Laser Physics* (Addition Wesley, Reading, MA, 1974).\n- [28] N. A. Abraham, P. Mandel, and L. M. Narducci, *Dynamic Instabilities and Pulsations in Lasers*, Progress in Optics XXV, edited by E. Wolf (Elsevier, Amsterdam, 1988).\n- [29] L. Pasternack, D. M. Silver, D. R. Yarkony, and P. J. Dagdigian, J. Phys. B **13**, 2231 (1980).\n- [30] K. An and M. S. Feld, Phys. Rev. A **56**, 1662(1997).\n- [31] N. F. Ramsey and H. B. Silsbee, Phys. Rev. **84**, 506(1951).", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2670.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0764.pdf", - "query": "What is the Ferrel-Glover-Tinkham sum rule?", - "target_page": 1, - "target_passage": "the redistribution of the spectral weight between normal and superconducting state", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Dataset Correlation Heatmap (Spearman)\n\nFigure 12: Heatmap representing the correlation regarding model performance across tasks.", - "page_start": 18, - "page_end": 18, - "source_file": "arxiv4.pdf" - }, - { - "text": "## Acknowledgements\n\nWe would like to thank M. Norman, Tom Timusk, Dmitri Basov, Chris Homes, Nicole Bontemps, Andres Santander-Syro, Ricardo Lobo, Dirk van der Marel, A. Boris, E. van Heumen, A. B. Kuzmenko, L. Benfato, and\n\n- 1 R. Kubo, J. Phys. Soc. Jpn 12, 570(1957).\n- 2 R.A. Ferrrel and R.E. Glover, Phys. Rev.109, 1398 (1958).\n- 3 M. Tinkham and R.A. Ferrrel, Phys. Rev. Lett. 2, 331 (1959), M. Tinkham, Introduction to Superconductivity (McGraw-Hill, New York, 1975).\n- 4 J. Hirsch, Physica C 199, 305 (1992).\n- 5 D. N. Basov and T. Timusk, Rev. Mod. Phys. 77, 721 (2005); A. V. Puchkov, D. N. 
Basov and T. Timusk, J. Phys. Cond. Matter 8, 10049 (1996).\n- 6 C. M. Varma et al, Phys. Rev. Lett. 63, 1996 (1989).\n- 7 D. N. Basov, S. I. Woods, A. S. Katz, E. J. Singley, R. C. Dynes, M. Xu, D. G. Hinks, C. C. Homes and M. Strongin, Science 283, 49 (1999).\n- 8 H.J.A Molegraaf, C. Presura, D. van der Marel, P.H. Kess, M. Li, Science 295, 2239 (2002); A. B. Kuzmenko, H. J. A. Molegraaf, F. Carbone and D. van der Marel, Phys. Rev. B 72, 144503 (2005).\n- 9 A. F. Santander-Syro, R. P. S. M. Lobo, N. Bontemps, Z. Konstantinovic, Z. Z. Li and H. Raffy, Europhys. Lett. 62, 568 (2003);\n- 10 A. V. Boris, N. N. Kovaleva, O. V. Dolgov, T. Holden, C. T. Lin, B. Keimer and C. Bernhard, Science 304, 708 (2004).\n- 11 G. Deutscher, A. F. Santander-Syro and N. Bontemps, Phys. Rev. B 72, 092504 (2005).\n- 12 F. Carbone, A. B. Kuzmenko, H. J. A. Molegraaf, E. van Heumen, V. Lukovac, F. Marsiglio, D. van der Marel, K. Haule, G. Kotliar, H. Berger, S. Courjault, P. H. Kes and M. Li, Phys. Rev. B 74, 064510 (2006).\n- 13 C. C. Homes, S. V. Dordevic, D. A. Bonn, R. Liang and W. N. Hardy, Phys. Rev. B 69, 024514 (2004).\n- 14 J. Hwanget al, Phys. Rev. B 73, 014508 (2006).\n- 15 E. van Heumen, R. Lortz, A. B. Kuzmenko, F. Carbone, D. van der Marel, X. Zhao, G. Yu, Y. Cho, N. Barisic, M. Greven, C. C. Homes and S. V. Dordevic, Phys. Rev. B 75, 054522 (2007).\n- 16 M. Ortolani, P. Calvani and S. Lupi, Phys. Rev. Lett. 94, 067002 (2005).\n- 17 A.F. Santander-Syro, R.P.S.M. Lobo, and N. Bontemps, Phys. Rev. B 70, 134504(2004), A. F. Santander-Syro, R. P. S. M. Lobo, N. Bontemps, Z. Konstantinovic, Z. Z. Li and H. Raffy, Europhys. Lett. 62, 568 (2003).\n- 18 P. F. Maldague, Phys. Rev. B 16 2437 (1977); E. H. Kim, Phys. Rev. B 58 2452 (1998).\n- 19 J. Hirsch, Physica C, 201, 347 (1992) and Ref 4.\n- 20 for a review see F. Marsiglio, J. Superconductivity and Novel Magnetism 22, 269 (2009).\n- 21 F. Marsiglio, E. van Heumen, A. B. Kuzmenko, Phys. Rev. B 77 144510 (2008).\n- 22 M. R. 
Norman, A. V. Chubukov, E. van Heumen, A. B. Kuzmenko, and D. van der Marel, Phys. Rev. B 76, 220509 (2007).\n- 23 J. E. Hirsch and F. Marsiglio, Physica C 331, 150 (2000)\n\nF. Marsiglio for many discussions concerning the infrared conductivity and optical integrals and thank A. Boris, E. van Heumen, J. Hirsch, and F. Marsiglio for the comments on the manuscript. The work was supported by nsf-dmr 0906953.\n\nand Phys. Rev. B 62, 15131 (2000).\n\n- 24 A. Toschi, M. Capone, M. Ortolani, P. Calvani, S. Lupi and C. Castellani, Phys. Rev. Lett. 95, 097002 (2005).\n- 25 F. Marsiglio, F. Carbone, A. Kuzmenko and D. van der Marel, Phys. Rev. B 74, 174516 (2006).\n- 26 L. Benfatto, S. G. Sharapov, N. Andrenacci and H. Beck, Phys. Rev. B 71, 104511 (2005).\n- 27 D. van der Marel, H.J.A. Molegraaf, C. Presura, and I. Santoso, Concepts in Electron Correlations, edited by A. Hewson and V. Zlatic (Kluwer, 2003)\n- 28 L. Benfatto, J.P. Carbotte and F. Marsiglio, Phys. Rev. B 74, 155115 (2006)\n- 29 F. Marsiglio, Phys. Rev. B 73, 064507(2006).\n- 30 M.R. Norman and C. P´epin, Phys. Rev. B 66, 100506(R) (2002).\n- 31 J. Fink et al., Phys. Rev. B 74, 165102(R) (2006).\n- 32 M. Eschrig, Adv. Phys. 55, 47-183 (2006)\n- 33 M.R. Norman and A.V. Chubukov, Phys. Rev. B 73, 140501(R)(2006).\n- 34 A.E. Karakozov and E.G. Maksimov, cond-mat/0511185, A. E. Karakozov, E. G. Maksimov and O. V. Dolgov, Solid State Comm. 124, 119 (2002); A. E. Karakozov and E. G. Maksimov, ibid. 139, 80 (2006).\n- 35 see e.g., P. B. Allen, Phys. Rev. B 3, 305 (1971); S. V. Shulga, O. V. Dolgov and E. G. Maksimov, Physica C 178, 266 (1991).\n- 36 A. A. Abriskov and L. P. Gor'kov, JETP 35, 1090 (1959), Sang Boo Nam, Phys. Rev. 156, 470 (1967).\n- 37 Theory of superconductivity, Schrieffer, (W. A. Benjamin Inc., New York 1964).\n- 38 M.R. Norman, M. Randeria, H. Ding, and J.C. Campuzano, Phys. Rev. B 52, 615 (1995).\n- 39 Z.X. Shen and D.S. Dessau, Phys. Rep. 253, 1(1995), J. C. Campuzano, M. R. Norman, and M. 
Randeria, \"Superconductivity\"(Vol-1), 923-992, Springer (2008).\n- 40 A. V. Chubukov, Ar. Abanov, and D. N. Basov, Phys. Rev. B 68, 024504 (2003).\n- 41 T. Valla et al., Phys. Rev. Lett 85, 828(2000).\n- 42 Kaminski et al., Phys. Rev. B 71, 014517 (2005).\n- 43 Robert Haslinger and Andrey V. Chubukov, Phys. Rev. B 67, 140504(2003).\n- 44 C. Castellani, C. DiCastro, and M. Grilli, Phys. Rev. Lett. 75, 4650 (1995).\n- 45 Ar. Abanov, A. Chubukov, and J. Schmalian, Adv. Phys. 52, 119 (2003).\n- 46 Dessau et al., Phys. Rev. Lett 66, 2160(1991), Norman et al, Phys. Rev. Lett. 79, 3506(1997).\n- 47 M.R. Norman and H. Ding, Phys. Rev. B 57, 11089(1998).\n- 48 C. Timm, D. Manske and K. H. Bennemann, Phys. Rev. B 66, 094515(2002).\n- 49 A.V. Chubukov, M.R. Norman, Phys. Rev. B 70, 174505(2004).\n- 50 In this respect, our results are consistent with the analysis", - "page_start": 14, - "page_end": 14, - "source_file": "1001.0764.pdf" - }, - { - "text": "- 26 K. S. Raman, R. Moessner, S. L. Sondhi, Phys. Rev. B 72, 064413 (2005).\n- 27 D. F. Schroeter, E. Kapit, R. Thomale, and M. Greiter, Phys. Rev. Lett. 99, 097202 (2007); R. Thomale, E. Kapit, D. F. Schroeter, and M. Greiter, Phys. Rev. B 80, 104406 (2009).\n- 28 O. Tchernyshyov, R. Moessner, S. L. Sondhi, Phys. Rev. Lett. 88, 067203 (2002).\n- 29 F. Becca, F. Mila, Phys. Rev. Lett. 89, 037204 (2002).\n- 30 K. Penc, N. Shannon, H. Shiba, Phys. Rev. Lett. 93, 197203 (2004).\n- 31 C. Weber, F. Becca, F. Mila, Phys. Rev. B 72, 024449 (2005).\n- 32 G.-W. Chern, C. J. Fennie, O. Tchernyshyov, Phys. Rev.\n\nB 74, 060405(R) (2006).\n\n- 33 D. L. Bergman, R. Shindou, G. A. Fiete, L. Balents, Phys. Rev. B 74, 134409 (2006).\n- 34 Fa Wang, Ashvin Vishwanath, Phys. Rev. Lett. 100, 077201 (2008).\n- 35 O. Tchernyshyov, G.-W. Chern, arXiv:0907.1693 (2009).\n- 36 Y. Taguchi, Y. Oohara, H. Yoshizawa, N. Nagaosa, Y. Tokura, Science 291, 2573 (2001).\n- 37 X. G. Wen, Frank Wilczek, A. Zee, Phys. Rev. B 39, 11413 (1989); X. G. Wen, Phys. 
Rev. B 40, 7387 (1989).\n- 38 Dimitris I. Tsomokos, Juan Jos´e Garc´ıa-Ripoll, Nigel R. Cooper, Jiannis K. Pachos, Phys. Rev. A 77, 012106 (2008).", - "page_start": 10, - "page_end": 10, - "source_file": "1001.0266.pdf" - }, - { - "text": "FIG. 5: Transition temperatures TN (n) and TC (n) vs. film thickness n.\n\nthe same is true for the crossing point of the Binder cumulant of the average magnetization M (not reported in figure), which is located at TC(8) = 133.3(3) K. These data give a first rough indication that also for n = 8 all the planes of the sample are still ordering almost at the same temperature; such property has been observed for all the investigated thicknesses n below 16, so that TC(n) results quite n-independent (see also Fig. 5) .\n\nAlthough the layer subtraction does not seem to modify TC (n), the onset of helical arrangement is observed to shift at lower temperatures as n decreases. The chirality κ defined in Eq. (4) is reported in Fig 4b for n = 8. As the temperature decreases, around T ∼ 80 K we can identify a finite-size behaviour of κ which, at variance with the previous one, can be easily recognized as typical of an effective phase transition. Such conclusion is confirmed by the analysis of the chiral susceptibility χκ (Fig. 4c), which for the largest L has a maximum at T = 85 K. Assuming that the order parameter (4) is the relevant one to single out the onset of the fan arrangement, we can get a more accurate estimate of TN (8) by looking at the Binder cumulant u4(κ), reported in Fig. 4d. By making use of the MH technique, we locate the crossing point at TN (8) = 92(2) K. Finally, it is worthwhile to observe as the specific heat does not show any anomaly at TN (8), being the entropy substantially removed at TC (8).\n\nThe scenario just outlined for n = 8 results to be correct in the thickness range 6 ≤ n . 15, where a clear separation between TN (n) and TC(n) can be easily figured out. 
In such temperature window, the strong surface effects produce a quasi-FM set-up of the magnetic film structure along the z-direction. While leaving to the next Section a more detailed discussion of this regime, we report in Fig. 5 a plot of TN (n) and TC(n) vs. n for all the simulated thicknesses. The separation between the two critical temperatures is maximum for n = 6, where TN (6) = 38(4), that is TN (6) ∼ 1 3 TC(6). For films with less than six layers no fan order is observed, i.e. for n = 5 and below the chirality does not display any typical feature of fan ordering at any temperature below TC(n). As a representative quantity we finally look at the rotation\n\nFIG. 6: Rotation angle ∆ϕl between magnetic moments on NN layers (l + 1, l) at some low temperatures, for thickness n = 5 and n = 6, and lateral dimension L = 64.\n\nangle of the magnetization between nearest planes:\n\n$$\\Lambda\\varphi_{l}=\\varphi_{l+1}-\\varphi_{l}=\\arccos\\left[M_{l}^{x}M_{l+1}^{x}+M_{l}^{y}M_{l+1}^{y}\\right]\\tag{10}$$\n\nwhere (Mx l , My l ) is the magnetic vector profile for each plane l. ∆ϕl is displayed in Fig. 6a and Fig. 6b, for n = 6 and n = 5, respectively. In Fig. 6a, a quite clear fan stabilization is observed when the temperature decreases, while in Fig. 6b, i.e. for n = 5, ∆ϕl keeps an almost temperature independent very small value; what's more, ∆ϕl seems to loose any temperature dependence as T = 0 is approached. We attribute the absence of fan arrangement for n ≤ 5 as simply due to the lack of \"bulk planes\" inside the film, so that we are left with only a 2d trend at TC(n), i.e. at the temperature where the order parameters defined in Eqs. (2) and (3) show a critical behaviour.\n\n# IV. DISCUSSION AND CONCLUSION\n\nA possible framework to analyze the results presented in the previous Section is suggested by Fig. 
5, where we can easily distinguish three significant regions: i) high thickness, n > 16, where the films substantially display a bulk behaviour, with the single planes ordering temperature coinciding with the helical phase transition one; ii) intermediate thickness, 6 ≤ n . 15, where the temperature corresponding to the onset of in-plane order, TC (n), is still ≃ T Ho N , but where the helical/fan arrangement stabilizes only below a finite temperature TN (n) < TC (n); iii) low thickness,1 ≤ n ≤ 5, where TC(n) . T Ho N but no fan phase is present at any temperature.\n\nThe observed behaviour in region iii) can be reasonably attributed to the decreasing relevance of the contribution to the total energy of the system coming from the competitive interactions among NNN planes as the film thickness decreases; moreover, the thinness of the", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0510.pdf" - }, - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. Olejnik,1, 2 P. Wadley,3 J. Haigh,3 K. W. Edmonds,3 R. P. Campion,3 A. W. Rushforth,3 B. L. Gallagher,3\n\nC. T. Foxon,3 T. Jungwirth,2, 3 J. Wunderlich,1, 2 S. S. Dhesi,4 S. Cavill,4 G. van der Laan,4 and E. Arenholz5\n\n1Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\nInstitute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic 3School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom\n\n4Diamond Light Source, Harwell Science and Innovation Campus,\n\n5Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n(Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. 
Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\n2\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p-type non-magnetic spacers2 . However, the Curie temperature TC of (Ga,Mn)As is currently limited to 185 K in single layers3 , and is typically much lower for layers embedded within a heterostructure2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively4,5. Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature7 . 
Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature8,9. Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition, which may further disrupt the interface order. The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples7 . Demonstration of coupling between the bulk of the layers, i.e., an exchange bias effect, would provide direct evidence of the interface magnetic order. Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.\n\nHere, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. As with previous studies of FM metal/FM semiconductor bilayers4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures10,11) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref.7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. 
The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260◦C, using previously established methods3,8. A low Mn concentration of x ≈ 0.03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼0 ◦C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. Mn and Fe L2,3 x-ray absorption and XMCD\n\nDidcot, Oxfordshire, OX11 0DE, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "#### NAVWEPS OO-BOT-BO TABLE OF CONTENTS\n\n| FRICTION EFFECTS. | 52 |\n| --- | --- |\n| Viscous Bow.. | 52 |\n| Boundarglayers | 52 |\n| Laminar flow | |\n| Transition | |\n| Turbulent flow | |\n| ReyooldsNumber | 54 |\n| Definition | |\n| Skin friction versus Reynolds Number | |\n| Airflowseparatioa | 56 |\n| Pressure distribution | |\n| Prcswrc gradient and boundary layer energy | |\n| Factors affecting separation | |\n| Scaleeffect | 59 |\n| Effect on aerodynamic characteristics | |\n| Reynolds Number correlation | |\n| PLANFORM EFFECTS AND AIRPLANE DRAG | |\n| EFFECT OF WING PLANFORM.. | 61 |\n| . . | |\n| Descr1puon of planform | 61 |\n| Area, span,, and chord | |\n| Aspect ratm and taper | |\n| Sweepback | |\n| Mean aerodynamic chord | |\n| Development of lift by a wing.. | . 63 |\n| vortex system | |\n| Ti and bound vortices | |\n| I&cd flow and downwash | |\n| Scction angle of attack | |\n| Induced angle of attack | |\n| INDUCED DRAG. : | 66 |\n| Induced angle of attack and inclined lift. 
| 66 |\n| Induced drag coefficient, | 68 |\n| Effect of lift coefficient | |\n| Effect of aspect ratio | |\n| Effectoflift | 68 |\n| Effea of altitude.. | |\n| EffectofsPeed | 2; |\n| Effect of aspect ratio. | 71 |\n| Lift and dra characteristics | |\n| Influcncc of f ow aspxt ratio configurations | |\n| EFFECT OF TAPER AND StiEEPtiACK. | 74 |\n| Spanwise lift distribution | 74 76 |\n| localinducedflow Effect on lift and drag characteristics. .', | 76 |\n| STALL PATI'ERNS. | 77 |\n| Pnvorablestallpattern | |\n| EffeaofpIanform | :: |\n| Taper Sweepback | |\n| Modifications for stall characteristics. | 86 |", - "page_start": 8, - "page_end": 8, - "source_file": "00-80T-80.pdf" - }, - { - "text": "Quantitative measures established by regulation to ensure capital adequacy require Bankshares and each of its subsidiaries to maintain minimum amounts and ratios (set forth in the table below) of total and Tier I capital (as defined in the regulations) to risk-weighted assets (as defined), and of Tier I capital (as defined), to average assets (as defined). Management believes as of December 31, 2002 and 2001, that Bankshares and each of its subsidiaries meet all capital adequacy requirements to which they are subject.\n\nAs of December 31, 2002 and 2001, the most recent notification from each respective subsidiaries' primary regulator categorized each of Bankshares' subsidiaries as well-capitalized under the regulatory framework for prompt corrective action. To be categorized as well capitalized, the subsidiaries must maintain minimum total risk-based, Tier I risk-based, and Tier I leverage ratios as set forth in the table.\n\nThere are no conditions or events since that notification that management believes have changed the institutions' categories. 
Bankshares' and its significant subsidiaries' actual capital amounts and ratios are presented in the table below:\n\n| | | | | | To Be Well | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | | | For Capital | | Capitalized Under Prompt Corrective | |\n| | Actual | | Adequacy Purposes: | | Action Provisions: | |\n| | Amount | Ratio | Amount | Ratio | Amount | Ratio |\n| As of December 31, 2002: | | | | | | |\n| Total Capital (to Risk-Weighted Assets): | | | | | | |\n| Consolidated | $213,725,000 | 20% | ≥$ 87,579,000 | ≥ 8% | N/A | N/A |\n| First National Bank of Abilene | $ 68,874,000 | 17% | ≥$ 32,153,000 | ≥ 8% | ≥$ 40,191,000 ≥ 10% | |\n| San Angelo National Bank | $ 16,039,000 | 12% | ≥$ 10,816,000 | ≥ 8% | ≥$ 13,520,000 ≥ 10% | |\n| Weatherford National Bank | $ 19,758,000 | 18% | ≥$ 8,802,000 | ≥ 8% | ≥$ 11,002,000 ≥ 10% | |\n| Tier I Capital (to Risk-Weighted Assets): | | | | | | |\n| Consolidated | $202,507,000 | 18% | ≥$ 43,790,000 | ≥ 4% | N/A | N/A |\n| First National Bank of Abilene | $ 64,971,000 | 16% | ≥$ 16,077,000 | ≥ 4% | ≥$ 24,115,000 ≥ 6% | |\n| San Angelo National Bank | $ 14,703,000 | 11% | ≥$ 5,408,000 | ≥ 4% | ≥$ 8,112,000 ≥ 6% | |\n| Weatherford National Bank | $ 18,757,000 | 17% | ≥$ 4,401,000 | ≥ 4% | ≥$ 6,601,000 ≥ 6% | |\n| Tier I Capital (to Average Assets): | | | | | | |\n| Consolidated | $202,507,000 | 11% | ≥$ 57,856,000 | ≥ 3% | N/A | N/A |\n| First National Bank of Abilene | $ 64,971,000 | 9% | ≥$ 20,626,000 | ≥ 3% | ≥$ 34,377,000 ≥ 5% | |\n| San Angelo National Bank | $ 14,703,000 | 5% | ≥$ 8,410,000 | ≥ 3% | ≥$ 14,016,000 ≥ 5% | |\n| Weatherford National Bank | $ 18,757,000 | 10% | ≥$ 5,884,000 | ≥ 3% | ≥$ 9,807,000 ≥ 5% | |", - "page_start": 87, - "page_end": 87, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "#### Chatree – Ore Mined and Treated\n\n## Production and Costs\n\nProduction for the year was 133,681 ounces of gold and 1,000,569 ounces of silver.\n\nTotal mill throughput of 5.7 million tonnes was 11.4% 
higher than 2012 despite the 63 days that the new plant was shut down during the process for the granting of its Metallurgical License. The overall plant availability was 98.1%.\n\nTotal cash costs for the year were $US767 per ounce ($US620 per ounce exclusive of Thai royalties). The average royalty paid to the Thai Government was $US147 per ounce of gold. Total production costs after depreciation and amortisation were $US952 per ounce of gold produced.\n\nAt year end, 9.7 million tonnes of ore was stockpiled with an average contained gold grade of 0.57 grams per tonne (g/t) representing 178,086 ounces of gold.\n\n## Operational Performance\n\nDuring the year 7.1 million tonnes of ore was mined, with a waste-to-ore strip ratio of 2.09:1. The average grade of mined ore was 0.72 g/t gold and 8.56 g/t silver.\n\nAdditional ore was generated by revising the mining sequence in A Pit Stage 2 and accessing near surface high grade oxide ore tonnes from Q Prospect.\n\nTotal volume of material mined at Chatree for the year was 8.4 million Bank Cubic Metres (\"BCM\") including 2.7 million BCM of ore.\n\nAn additional 566,000 BCM of laterite and clay material was excavated and used for the construction of the second lift of second tailings storage facility (TSF#2).\n\nSome 1.3 million loose cubic metres (LCM) of ore was relocated from the Marginal Grade Stockpiles to the primary crusher to supplement ore from the mining pits.\n\nTwo areas were mined during the year:\n\n- 〉 A Pit, where 8.3 million BCM of material was mined (2.7 million BCM of ore) at a stripping ratio of 2.09:1 waste to ore; and\n- 〉 Q Prospect where 298 thousand BCM of material was mined (143 thousand BCM of ore) at a stripping ratio of 1.1:1 waste to ore.\n\nThe mechanical reliability and hence availability of the major fleet items has been below expectations over the last few years.", - "page_start": 14, - "page_end": 14, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "| | Carrying | | Contractual | 
Less than | | 1 to 3 | | 4 to 5 | | More than |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| December 31, 2012 | amount | | cash flows | 1 year | | years | | years | | 5 years |\n| Accounts payable and accrued liabilities | $ 2,135 | | $ 2,135 | $ 2,135 | $ | – | $ | | – | $ – |\n| Long-term debt | 10,789 | | 10,858 | 348 | | 1,920 | | 1,500 | | 7,090 |\n| Other long-term financial liabilities | 33 | | 33 | – | | 17 | | 10 | | 6 |\n| Expenditure Derivative instruments: | | | | | | | | | | |\n| Cash outflow (Canadian dollar) | | – | 366 | 231 | | 135 | | – | | – |\n| Cash inflow (Canadian dollar equivalent of US dollar) | – | | (378) | (239) | (139) | | | – | | – |\n| Debt Derivative instruments: | | | | | | | | | | |\n| Cash outflow (Canadian dollar) | – | | 4,797 | 460 | | 2,338 | | – | | 1,999 |\n| Cash inflow (Canadian dollar equivalent of US dollar) | – | | (4,208) 2 | (348) 2 | | (1,920) 2 | | – | | (1,940) 2 |\n| Net carrying amount of Derivatives | 511 | | | | | | | | | |\n| | $ 13,468 | | $ 13,603 | $ 2,587 | | $ 2,351 | | $ 1,510 | | $ 7,155 |\n\n1 The terms of our accounts receivable securitization program are committed until it expires on December 31, 2015.\n\n2 Represents Canadian dollar equivalent amount of US dollar inflows matched to an equal amount of US dollar maturities in long-term debt for Debt Derivatives.\n\nThe tables below shows net interest payments over the life of the long-term debt, including the impact of the associated Debt Derivatives as at December 31, 2013 and 2012:\n\n| | Less than | 1 to 3 | 4 to 5 | More than |\n| --- | --- | --- | --- | --- |\n| December 31, 2013 | 1 year | years | years | 5 years |\n| Interest payments | $ 743 | $ 1,258 | $ 1,093 | $ 5,341 |\n| | Less than | 1 to 3 | 4 to 5 | More than |\n| December 31, 2012 | 1 year | years | years | 5 years |\n| Interest payments | $ 686 | $ 1,168 | $ 901 | $ 3,929 |\n\n#### **Market Risk**\n\nMarket risk is the risk that changes in market prices, 
such as fluctuations in the market prices of our publicly traded investments, our share price, foreign exchange rates and interest rates, will affect our income, cash flows or the value of our financial instruments. The derivative instruments we use to manage this risk are described in note 2.\n\n#### *Publicly Traded Investments*\n\nWe manage risk related to fluctuations in the market prices of our investments in publicly traded companies by regularly reviewing publicly available information related to these investments to ensure that any risks are within our established levels of risk tolerance. We do not routinely engage in risk management practices such as hedging, derivatives or short selling with respect to our publicly traded investments.\n\nAt December 31, 2013, a $1 change in the market price per share of our publicly traded investments would have resulted in a $14 million change in our other comprehensive income, net of income taxes of $2 million.\n\n#### *Stock-Based Compensation*\n\nOur liability related to stock-based compensation is marked-to-market each period. Stock-based compensation expense is affected by the change in the price of our Class B Non-Voting shares during the life of an award, including SARs, RSUs and DSUs. We use Equity Derivatives from time to time to manage our exposure in our stock-based compensation expense.\n\nAt December 31, 2013, a $1 change in the market price of our Class B Non-Voting shares would not have any impact on net income or other comprehensive income, including the impact related to our Equity Derivatives.\n\n#### *Foreign Exchange and Interest Rates*\n\nWe use Debt Derivatives to manage risks from fluctuations in foreign exchange and interest rates associated with our US dollar denominated debt instruments, designating the derivatives as hedges of specific debt instruments for economic and accounting purposes. 
We use Expenditure Derivatives to manage the foreign exchange risk in our operations, designating them as hedges for certain of our operational and capital expenditures.\n\nAt December 31, 2013, all of our outstanding long-term debt was at fixed interest rates and all of our US dollar-denominated long-term debt was hedged against fluctuations in foreign exchange rates using Debt Derivatives. As a result, with respect to the long-term debt and Debt Derivatives, a one cent change in the Canadian dollar relative to the US dollar would have no effect on net income.\n\nA one cent change in the Canadian dollar relative to the US dollar would have resulted in no impact to net income and a $7 million change, net of income taxes of $2 million, in other comprehensive income for the year ended December 31, 2013 related to our Expenditure Derivatives.\n\nA portion of our accounts receivable and accounts payable and accrued liabilities is denominated in US dollars. Due to the short-term nature of these receivables and payables, there is no significant market risk from fluctuations in foreign exchange rates as at December 31, 2013.", - "page_start": 117, - "page_end": 117, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "performed an outlier check, labeling images as a 'low-quality outlier' if the correlation coefficient was >3 s.d. from the absolute mean. None of our scans were flagged as outliers. The reconstructed participant files were aggregated into one connectometry database per metric.\n\n*Day2Day control dataset*. To compare our findings against a control group of nonpregnant densely-sampled individuals, we used the Day-2Day dataset23 which offered comparable whole-brain T1 and T2 MTL scans for eight participants (two male) scanned 12–50 times over 2–7 months. Each participant was run through the ANTs CT and ASHS processing pipelines as outlined above ('Cortical volume and thickness' and 'Hippocampal segmentation'). 
To note, for each participant, we created an SST based on their first two sessions for consistency with the primary dataset; subfield volumes for the T2 MTL scans did not undergo manual retouching. Due to missing header information on the publicly available diffusion scans, we were unable to benchmark our white matter changes with the Day2Day dataset.\n\n**Statistical analysis.** Statistical analyses were conducted using R (sMRI; version 3.4.4) and DSI Studio (dMRI; Chen-2022-07-31).\n\n*Summary brain metrics*. To reflect the existing literature, we first explored brain metrics across the entire study duration (prepregnancy through postpartum, *n* = 26 scans). When including all sessions, total brain volume, GMV, CT, global QA, ventricle volume and CSF displayed nonlinear trends over time; therefore, we used generalized additive models (GAM; cubic spline basis, *k* = 10, smoothing = GCV), a method of nonparametric regression analysis (R package, mgcv76), to explore the relationship between summary brain metrics (outcome variables) and gestation week (smooth term). Each model underwent examination (gam.check function) to ensure it was correctly specified with regards to (1) the choice of basis dimension (*k*) and (2) the distribution of model residuals (see mgcv documentation in ref. 76). The general pattern of results held after toggling model parameters; however, we note the risk of overinterpreting complex models with small sample sizes77. To address overfitting and cross-validate our basis type selection, we also fit the data using nonpenalized general linear models (GLM) with both linear and polynomial terms for gestation week. 
We compared the performance of each GLM (that is, models using only a linear term versus models with polynomial terms) via the Akaike information criterion (AIC), which revealed that cubic models consistently outperformed both linear and quadratic models (AICdiff > 3), providing additional evidence for nonlinear changes in structural brain variables over time. Determining whether these patterns replicate in larger cohorts and whether complex models are better suited to capture data patterns across individuals will be a necessary next step.\n\n*Cortical GMV and CT*. We then narrowed our analyses to the first 19 sessions (baseline—36 weeks gestation) to assess novel brain changes occurring over the gestational window. We first computed Pearson's product-moment correlation matrices between the following variables: gestation week, estradiol, progesterone and the 17 network-level average GMV values. We then ran a multivariate regression analysis predicting ROI-level GMV changes by gestation week. To identify which regions were changing at a rate different from the global decrease, we then ran the analyses again to include total GMV in the regression model (Supplementary Table 2). This was extended to the network level, where we ran partial correlations accounting for total GMV. These same analyses were then run with CT measures. Globally-corrected results provided in Supplementary Tables 1–5. Percent change at the network level was computed by subtracting the final pregnancy value (36 weeks pregnant) from the first prepregnancy baseline value, then dividing that difference by said first prepregnancy baseline value. All analyses underwent multiple comparisons testing (false discovery rate (FDR)-corrected at *q* < 0.05).\n\n*Subcortical GMV*. A similar statistical approach was taken for subcortical volume estimates. We ran a multivariate regression analysis predicting GMV changes over gestation in 28 ROIs (Supplementary Fig. 
6a) by gestation week (FDR-corrected at *q* < 0.05).\n\nTo evaluate the relationship between gestation week and MTL subregion volume over pregnancy (*n* = 7 bilateral subregions and *n* = 18 MTL scans), we used a combination of linear and nonlinear models based on individual subregion data patterns. Models were compared for best fit with each subregion via AIC from the GLM output (as described in 'Summary brain metrics'). A linear regression model was most appropriate for PHC (AICdiff < 3), whereas a quadratic model performed best for CA1 and CA2/CA3. As a control, we repeated the analyses with MTL subregion volumes after proportional volume correction of total GMV calculated by ASHS. Finally, we evaluated the relationship between endogenous sex hormones (estrogen and progesterone) and subregion volumes using linear regression. Relationships were considered significant only if they met FDR correction at *q* < 0.05.\n\n*White matter microstructure*. DSI Studio's correlational tractography74 was used to analyze the relationship between white matter structure and gestational week (*n* = 16). A truncated model was run to examine the relationship between white matter and sex steroid hormones (*n* = 14) for the subset of diffusion scans with paired endocrine data during gestation. A nonparametric Spearman's correlation was used to derive the correlation between gestational week and endocrine factors and our metrics of interest (QA and MD; see Supplementary Table 9 and Supplementary Fig. 10 for MD results) because the data were not normally distributed. Statistical inference was reached using connectometry, a permutation-based approach that tests the strength of coherent associations found between the local connectome and our variables of interest. It provides higher reliability and replicability by correcting for multiple comparisons. This technique provides a high-resolution characterization of local axonal orientation. 
The correlational tractography was run with the following parameters: *t* score threshold of 2.5, four pruning iterations and a length threshold of 25 voxel distance. To estimate the FDR, a total of 4,000 randomized permutations were applied to obtain the null distribution of the track length. Reported regions were selected based on FDR cutoff (FDR < 0.2, suggested by DSI Studio), and contained at least ten tracts. For visualization of global and tract QA at each gestational stage, mean QA values were extracted using DSI Studio's whole-brain fiber tracking algorithm and ROI-based tracking using the default HCP842 atlas78.\n\n*Day2Day dataset: measurement variability*. To establish a marker of normative variability over half a year, we computed metrics of measurement variability using the Day2Day dataset23, which provided both whole-brain T1 and high-resolution T2 MTL scans. For each region, *j*, of the Schaefer parcellation, we assessed across-session variability, *ε*, as\n\n$$\\varepsilon_{j}=100\\times\\mathrm{mean}\\left({\\frac{|t_{s}-{\\hat{t}}|}{{\\hat{t}}}}\\right)$$\n\nWhere *ts* is the morphometric measurement of a parcel for session *s* and *t* ̂ is the mean of *t* across sessions55,79. Thus, we defined variability as the mean absolute percent difference between each individual and the mean across sessions. Across-session variability estimates for all 400 regions were then averaged across eight participants, and a global measure of cortical GMV variability was computed by averaging across the 400 regions. This approach was repeated independently for the T2 hippocampal scans, wherein we computed across-session variability for each parcel of the ASHS parcellation scheme (*n* = 7 bilateral subfields). However, it is important to note that raw subfield values (that is, no manual retouching) were used for Day2Day variability assessments and should be interpreted with caution. 
Finally, to better compare against our own data, we repeated this approach using our", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0266.pdf", - "query": "What does Kitaev show about spin- 1/2 model?", - "target_page": 1, - "target_passage": "spin- 1/2 model can be mapped to a model with one Majo- rana fermion per site coupled to Ising gauge fields on the links", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "FIG. 1: The honeycomb lattice for the Kitaev model. Filled and open circles indicate two sublattices. x, y, z label the links along three different directions used in (1).\n\nderived as well. There have been several proposals to open the fermion gap for the non-Abelian phase without spoiling exact solvability4,6. And many generalizations to other(even 3D) lattices have been developed in the last few years10–16. All these efforts have significantly enriched our knowledge of exactly solvable models and quantum phases of matter.\n\nHowever, in the original Kitaev model and its later generalizations in the form of spin models, spin rotation symmetry is explicitly broken. This makes them harder to realize in solid state systems. There are many proposals to realized the Kitaev model in more controllable situations, e.g. in cold atom optical lattices17,18, or in superconducting circuits19. But it is still desirable for theoretical curiosity and practical purposes to realize the Kitaev-type models in spin rotation invariant systems.\n\nIn this paper we realize the Kitaev honeycomb lattice model as the low energy Hamiltonian for a spin rotation invariant system. The trick is not to use the physical spin as the spin in the Kitaev model, instead the spin-1/2 in Kitaev model is from some emergent two-fold degenerate low energy states in the elementary unit of physical system. 
This type of idea has been explored recently by Jackeli and Khaliullin20, in which the spin-1/2 in the Kitaev model is the low energy Kramers doublet created by strong spin-orbit coupling of t2g orbitals. In the model presented below, the Hilbert space of spin-1/2 in the Kitaev model is actually the two dimensional spin singlet sector of four antiferromagnetically coupled spin-1/2 moments, and the role of spin-1/2 operators(Pauli matrices) in the Kitaev model is replaced by certain combinations of Sj ·Sk [or the spin-chirality Sj ·(Sk ×Sℓ)] between the four spins.\n\nOne major drawback of the model to be presented is that it contains high order spin interactions(involves up to six or eight spins), thus is still unnatural. However it opens the possibility to realize exotic (exactly solvable) models from spin-1/2 Hamiltonian with spin rotation invariant interactions. We will discuss two possible routes to reduce this artificialness through controlled perturbative expansions, by coupling to optical phonons or by magnetic couplings between the elementary units.\n\nThe outline of this paper is as follows. In Section II we will lay out the pseudo-spin-1/2 construction. In Sec-\n\nFIG. 2: Left: the physical spin lattice for the model (8). The dash circles are honeycomb lattice sites, each of which is actually a cluster of four physical spins. The dash straight lines are honeycomb lattice bonds, with their type x, y, z labeled. The interaction between clusters connected by x, y, z bonds are the Jx,y,z terms in (8) or (9) respectively. Note this is not the 3-12 lattice used in Ref.9,10. Right: enlarged picture of the clusters with the four physical spins labeled as 1, . . . , 4. Thick solid bonds within one cluster have large antiferromagnetic Heisenberg coupling Jcluster.\n\ntion III the Kitaev model will be explicitly constructed using this formalism, and some properties of this construction will be discussed. 
In Section IV we will discuss two possible ways to generate the high order spin interactions involved in the construction of Section III by perturbative expansions. Conclusions and outlook will be summarized in Section V.\n\n# II. FORMULATION OF THE PSEUDO-SPIN-1/2 FROM FOUR-SPIN CLUSTER.\n\nIn this Section we will construct the pseudo-spin-1/2 from a cluster of four physical spins, and map the physical spin operators to pseudo-spin operators. The mapping constructed here will be used in later Sections to construct the effective Kitaev model. In this Section we will work entirely within the four-spin cluster, all unspecified physical spin subscripts take values 1, . . . , 4.\n\nConsider a cluster of four spin-1/2 moments(called physical spins hereafter), labeled by S1,...,4, antiferromagnetically coupled to each other (see the right bottom part of FIG. 2). The Hamiltonian within the cluster(up to a constant) is simply the Heisenberg antiferromagnetic(AFM) interactions,\n\n$$H_{\\rm cluster}=\\left(J_{\\rm cluster}/2\\right)\\left({\\bf S}_{1}+{\\bf S}_{2}+{\\bf S}_{3}+{\\bf S}_{4}\\right)^{2}\\tag{2}$$\n\nThe energy levels should be apparent from this form: one group of spin-2 quintets with energy 3Jcluster, three groups of spin-1 triplets with energy Jcluster, and two spin singlets with energy zero. We will consider large positive", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0266.pdf" - }, - { - "text": "# Realization of the Exactly Solvable Kitaev Honeycomb Lattice Model in a Spin Rotation Invariant System\n\nFa Wang1\n\n1Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA\n\nThe exactly solvable Kitaev honeycomb lattice model is realized as the low energy effect Hamiltonian of a spin-1/2 model with spin rotation and time-reversal symmetry. The mapping to low energy effective Hamiltonian is exact, without truncation errors in traditional perturbation series expansions. 
This model consists of a honeycomb lattice of clusters of four spin-1/2 moments, and contains short-range interactions up to six-spin(or eight-spin) terms. The spin in the Kitaev model is represented not as these spin-1/2 moments, but as pseudo-spin of the two-dimensional spin singlet sector of the four antiferromagnetically coupled spin-1/2 moments within each cluster. Spin correlations in the Kitaev model are mapped to dimer correlations or spin-chirality correlations in this model. This exact construction is quite general and can be used to make other interesting spin-1/2 models from spin rotation invariant Hamiltonians. We discuss two possible routes to generate the high order spin interactions from more natural couplings, which involves perturbative expansions thus breaks the exact mapping, although in a controlled manner.\n\nPACS numbers: 75.10.Jm, 75.10.Kt\n\n## Contents\n\n| I. Introduction. | 1 |\n| --- | --- |\n| II. Formulation of the Pseudo-spin-1/2 from | |\n| Four-spin Cluster. | 2 |\n| III. Realization of the Kitaev Model. | 3 |\n| IV. Generate the High Order Physical Spin | |\n| Interactions by Perturbative Expansion. | 5 |\n| A. Generate the High Order Terms by Coupling | |\n| to Optical Phonon. | 5 |\n| B. Generate the High Order Terms by Magnetic | |\n| Interactions between Clusters. | 7 |\n| V. Conclusions. | 8 |\n| Acknowledgments | 8 |\n| A. Coupling between Distortions of a | |\n| Tetrahedron and the Pseudo-spins | 8 |\n| B. Derivation of the Terms Generated by | |\n| Second Order Perturbation of Inter-cluster | |\n| Magnetic Interactions | 9 |\n| References | 10 |\n\n#### I. INTRODUCTION.\n\nKitaev's exactly solvable spin-1/2 honeycomb lattice model1 (noted as the Kitaev model hereafter) has inspired great interest since its debut, due to its exact solvability, fractionalized excitations, and the potential to realize non-Abelian anyons. 
The model simply reads\n\n$$H_{\\rm Kitaev}=-\\sum_{x-{\\rm links}\\ }J_{x}\\tau_{j}^{x}\\tau_{k}^{x}-\\sum_{y-{\\rm links}\\ }J_{y}\\tau_{j}^{y}\\tau_{k}^{y}$$\n \n$$-\\sum_{z-{\\rm links}\\ }J_{z}\\tau_{j}^{z}\\tau_{k}^{z}$$\n\nwhere τ x,y,z are Pauli matrices, and x, y, z-links are defined in FIG. 1. It was shown by Kitaev1 that this spin-1/2 model can be mapped to a model with one Majorana fermion per site coupled to Ising gauge fields on the links. And as the Ising gauge flux has no fluctuation, the model can be regarded as, under each gauge flux configuration, a free Majorana fermion problem. The ground state is achieved in the sector of zero gauge flux through each hexagon. The Majorana fermions in this sector have Dirac-like gapless dispersion resembling that of graphene, as long as |Jx|, |Jy|, and |Jz| satisfy the triangular relation, sum of any two of them is greater than the third one1 . It was further proposed by Kitaev1 that opening of fermion gap by magnetic field can give the Ising vortices non-Abelian anyonic statistics, because the Ising vortex will carry a zero-energy Majorana mode, although magnetic field destroys the exact solvability.\n\nGreat efforts have been invested to better understand the properties of the Kitaev model. For example, several groups have pointed out that the fractionalized Majorana fermion excitations may be understood from the more familiar Jordan-Wigner transformation of 1D spin systems2,3. The analogy between the non-Abelian Ising vortices and vortices in p + ip superconductors has been raised in serveral works4–7. Exact diagonalization has been used to study the Kitaev model on small lattices8 . 
And perturbative expansion methods have been developed to study the gapped phases of the Kitaev-type models9 .\n\nMany generalizations of the Kitaev model have been", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0266.pdf" - }, - { - "text": "chirality interactions in cold atom optical lattices has been proposed38 .\n\nOur model (8) is achieved at second order of the perturbation series. Higher order terms become truncation errors but may be controlled by small parameters λx,y,z/Jcluster ∼ p |Jx,y,z|/Jcluster.\n\n### V. CONCLUSIONS.\n\nWe constructed the exactly solvable Kitaev honeycomb model1 as the exact low energy effective Hamiltonian of a spin-1/2 model [equations (8) or (9)] with spin-rotation and time reversal symmetry. The spin in Kitaev model is represented as the pseudo-spin in the two-fold degenerate spin singlet subspace of a cluster of four antiferromagnetically coupled spin-1/2 moments. The physical spin model is a honeycomb lattice of such four-spin clusters, with certain inter-cluster interactions. The machinery for the exact mapping to pseudo-spin Hamiltonian was developed (see e.g. TABLE I), which is quite general and can be used to construct other interesting (exactly solvable) spin-1/2 models from spin rotation invariant systems.\n\nIn this construction the pseudo-spin correlations in the Kitaev model will be mapped to dimer or spin-chirality correlations in the physical spin system. The corresponding picture of the fractionalized Majorana fermion excitations and Ising vortices still remain to be clarified.\n\nThis exact construction contains high order physical spin interactions, which is undesirable for practical implementation. We described two possible approaches to reduce this problem: generating the high order spin interactions by perturbative expansion of the coupling to optical phonon, or the magnetic coupling between clusters. 
This perturbative construction will introduce truncation error of perturbation series, which may be controlled by small expansion parameters. Whether these constructions can be experimentally engineered is however beyond the scope of this study. It is conceivable that other perturbative expansion can also generate these high order spin interactions, but this possibility will be left for future works.\n\n#### Acknowledgments\n\nThe author thanks Ashvin Vishwanath, Yong-Baek Kim and Arun Paramekanti for inspiring discussions, and Todadri Senthil for critical comments. The author is supported by the MIT Pappalardo Fellowship in Physics.\n\n# Appendix A: Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n\nIn this Appendix we reproduce from Ref.35 the couplings of all tetrahedron distortion modes to the spin system. And convert them to pseudo-spin notation in the physical spin singlet sector.\n\nConsider a general small distortion of the tetrahedron, the spin Hamiltonian becomes\n\n$$H_{\\rm cluster},\\ {\\rm SL}=(J_{\\rm cluster}/2)(\\sum_{\\ell}{\\bf S}_{\\ell})^{2}+J^{\\prime}\\sum_{\\ell 0 and 0 < r < 3. However this is not convenient for later discussions and will not be used.\n\nWe briefly describe some of the properties of (8). Its low energy states are entirely in the space that each of the clusters is a physical spin singlet (called cluster singlet subspace hereafter). Therefore physical spin correlations are strictly confined within each cluster. The excitations carrying physical spin are gapped, and their dynamics are 'trivial' in the sense that they do not move from one cluster to another. But there are non-trivial low energy physical spin singlet excitations, described by the pseudospins defined above. The correlations of the pseudo-spins can be mapped to correlations of their corresponding physical spin observables (the inverse mappings are not unique, c.f. TABLE I). 
For example τ x,y correlations become certain dimer-dimer correlations, τ z correlation becomes chirality-chirality correlation, or four-dimer correlation. It will be interesting to see the corresponding picture of the exotic excitations in the Kitaev model, e.g. the Majorana fermion and the Ising vortex. However this will be deferred to future studies.\n\nIt is tempting to call this as an exactly solved spin liquid with spin gap (∼ Jcluster), an extremely short-range resonating valence bond(RVB) state, from a model with spin rotation and time reversal symmetry. However it should be noted that the unit cell of this model contains an even number of spin-1/2 moments (so does the original Kitaev model) which does not satisfy the stringent definition of spin liquid requiring odd number of electrons per unit cell. Several parent Hamiltonians of spin liquids have already been constructed. See for example, Ref.24–27 .\n\n# IV. GENERATE THE HIGH ORDER PHYSICAL SPIN INTERACTIONS BY PERTURBATIVE EXPANSION.\n\nOne major drawback of the present construction is that it involves high order interactions of physical spins[see (8) and (9)], thus is 'unnatural'. In this Section we will make compromises between exact solvability and naturalness. We consider two clusters j and k and try to generate the Jx,y,z interactions in (7) from perturbation series expansion of more natural(lower order) physical spin interactions. Two different approaches for this purpose will be laid out in the following two Subsections. In Subsection IV A we will consider the two clusters as two tetrahedra, and couple the spin system to certain optical phonons, further coupling between the phonon modes\n\nFIG. 3: Illustration of the tetragonal to orthorhombic Q E 1 (top) and Q E 2 (bottom) distortion modes. (a) Perspective view of the tetrahedron. 1, . . . , 4 label the spins. Arrows indicate the motion of each spin under the distortion mode. (b) Top view of (a). 
(c)(d) Side view of (a).\n\nof the two clusters can generate at lowest order the desired high order spin interactions. In Subsection IV B we will introduce certain magnetic, e.g. Heisenberg-type, interactions between physical spins of different clusters, at lowest order(second order) of perturbation theory the desired high order spin interactions can be achieved. These approaches involve truncation errors in the perturbation series, thus the mapping to low energy effect Hamiltonian will no longer be exact. However the error introduced may be controlled by small expansion parameters. In this Section we denote the physical spins on cluster j(k) as j1, . . . , j4 (k1, . . . , k4), and denote pseudo-spins on cluster j(k) as ~τj (~τk).\n\n# A. Generate the High Order Terms by Coupling to Optical Phonon.\n\nIn this Subsection we regard each four-spin cluster as a tetrahedron, and consider possible optical phonon modes(distortions) and their couplings to the spin system. The basic idea is that the intra-cluster Heisenberg coupling Jcluster can linearly depend on the distance between physical spins. Therefore certain distortions of the tetrahedron couple to certain linear combinations of Sℓ · Sm. Integrating out phonon modes will then generate high order spin interactions. This idea has been extensively studied and applied to several magnetic materials28–34. More details can be found in a recent review by Tchernyshyov and Chern35. And we will frequently use their notations. In this Subsection we will use the representation (5) for τ z .\n\nConsider first a single tetrahedron with four spins 1, . . . , 4. The general distortions of this tetrahedron can be classified by their symmetry (see for example Ref.35). Only two tetragonal to orthorhombic distortion modes, QE 1 and QE 2 (illustrated in FIG. 3), couple to the pseudospins defined in Section II. A complete analysis of all modes is given in Appendix A. 
The coupling is of the", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0266.pdf" - }, - { - "text": "Figure 4.32. Spin Characteristics", - "page_start": 327, - "page_end": 327, - "source_file": "00-80T-80.pdf" - }, - { - "text": "From Eq. (19), one can see that σ (p) SI ∝ (sin 2θ/v′ ) 2 for a given DM mass mN . Fig. 3 shows the spin-independent cross section of RH neutrino with a proton. The resultant cross section is found to be far below the current limits reported by XENON10 [24] and CDMSII [25]: σSI . 4 × 10−8 − 2 × 10−7 pb, for a DM mass of 100 GeV-1 TeV. Future experiments such as XENON1T [26] can reach the cross section predicted in our model.\n\nFIG. 3: The spin independent scattering cross section with a proton. All parameters are same as those used in the previous section. The upper and lower lines correspond to sin θ = 0.7 and 0.3, respectively.\n\n#### IV. SUMMARY\n\nWe have proposed a scenario of the RH neutrino dark matter in the context of the minimal gauged U(1)B−L model. We have introduced a discrete Z2 parity in the model, so that one RH neutrino assigned as Z2-odd can be stable and, hence, the DM candidate, while the other two RH neutrinos account for neutrino masses and mixings through the seesaw mechanism. No additional degrees of freedom are necessary to be added. We have evaluated the relic density of the dark matter particle. The dominant annihilation modes are via the Higgs boson exchange processes in the s-channel and thus, our model can be called Higgs portal DM model. It has been found that the relic density consistent with the current observation", - "page_start": 7, - "page_end": 7, - "source_file": "1002.2525.pdf" - }, - { - "text": "Jcluster limit. So only the singlet sector remains in low energy.\n\nThe singlet sector is then treated as a pseudo-spin-1/2 Hilbert space. From now on we denote the pseudo-spin-1/2 operators as T = (1/2)~τ, with ~τ the Pauli matrices. 
It is convenient to choose the following basis of the pseudo-spin\n\n$$\\tau^{z}=\\pm1\\rangle=\\frac{1}{\\sqrt{6}}\\Big{(}|\\uparrow\\downarrow\\uparrow\\uparrow\\rangle+\\omega^{-\\tau^{z}}|\\uparrow\\downarrow\\uparrow\\rangle+\\omega^{\\tau^{z}}|\\uparrow\\uparrow\\uparrow\\rangle$$\n \n$$+|\\uparrow\\uparrow\\downarrow\\downarrow\\rangle+\\omega^{-\\tau^{z}}|\\uparrow\\downarrow\\uparrow\\downarrow\\rangle+\\omega^{\\tau^{z}}|\\uparrow\\downarrow\\uparrow\\uparrow\\rangle\\Big{)}\\tag{3}$$\n\nwhere ω = e 2πi/3 is the complex cubic root of unity, | ↓↓↑↑i and other states on the right-hand-side(RHS) are basis states of the four-spin system, in terms of S z quantum numbers of physical spins 1, . . . , 4 in sequential order. This pseudo-spin representation has been used by Harris et al. to study magnetic ordering in pyrochlore antiferromagnets21 .\n\nWe now consider the effect of Heisenberg-type interactions Sj · Sk inside the physical singlet sector. Note that since any Sj · Sk within the cluster commutes with the cluster Hamiltonian Hcluster (2), their action do not mix physical spin singlet states with states of other total physical spin. This property is also true for the spinchirality operator used later. So the pseudo-spin Hamiltonian constructed below will be exact low energy Hamiltonian, without truncation errors in typical perturbation series expansions.\n\nIt is simpler to consider the permutation operators Pjk ≡ 2Sj · Sk + 1/2, which just exchange the states of the two physical spin-1/2 moments j and k (j 6= k). 
As an example we consider the action of P34,\n\n$$P_{34}|\\tau^{z}=-1\\rangle=\\frac{1}{\\sqrt{6}}\\Big{(}|\\downarrow\\uparrow\\uparrow\\rangle+\\omega|\\uparrow\\uparrow\\downarrow\\rangle+\\omega^{2}|\\uparrow\\uparrow\\downarrow\\uparrow\\rangle$$\n \n$$+|\\uparrow\\uparrow\\downarrow\\downarrow\\rangle+\\omega|\\uparrow\\downarrow\\uparrow\\rangle+\\omega^{2}|\\uparrow\\downarrow\\uparrow\\downarrow\\rangle\\Big{)}$$\n \n$$=|\\tau^{z}=+1\\rangle$$\n\nand similarly P34|τ z = −1i = |τ z = +1i. Therefore P34 is just τ x in the physical singlet sector. A complete list of all permutation operators is given in TABLE I. We can choose the following representation of τ x and τ y ,\n\n$$\\begin{split}\\tau^{x}&=P_{12}=2\\mathbf{S}_{1}\\cdot\\mathbf{S}_{2}+1/2\\\\ \\tau^{y}&=(P_{13}-P_{14})/\\sqrt{3}=(2/\\sqrt{3})\\mathbf{S}_{1}\\cdot(\\mathbf{S}_{3}-\\mathbf{S}_{4})\\end{split}\\tag{4}$$\n\nMany other representations are possible as well, because several physical spin interactions may correspond to the same pseudo-spin interaction in the physical singlet sector, and we will take advantage of this later.\n\nFor τ z we can use τ z = −iτx τ y , where i is the imaginary unit,\n\n$$\\tau^{z}=-i(2/\\sqrt{3})(2{\\bf S}_{1}\\cdot{\\bf S}_{2}+1/2){\\bf S}_{1}\\cdot({\\bf S}_{3}-{\\bf S}_{4})\\quad(5)$$\n\n| physical spin | pseudo-spin | | | |\n| --- | --- | --- | --- | --- |\n| P12, and P34 τ | x | | | |\n| P13, and P24 | x + (√ −(1/2)τ | | 3/2)τ | y |\n| P14, and P23 | x − −(1/2)τ | √ ( | 3/2)τ | y |\n| −χ234, χ341, −χ412, and χ123 ( | √ z 3/4)τ | | | |\n\nTABLE I: Correspondence between physical spin operators and pseudo-spin operators in the physical spin singlet sector of the four antiferromagnetically coupled physical spins. Pjk = 2Sj ·Sk + 1/2 are permutation operators, χjkℓ = Sj ·(Sk ×Sℓ) are spin-chirality operators. 
Note that several physical spin operators may correspond to the same pseudo-spin operator.\n\nHowever there is another simpler representation of τ z , by the spin-chirality operator χjkℓ = Sj · (Sk × Sℓ). Explicit calculation shows that the effect of S2 ·(S3 × S4) is −( √ 3/4)τ z in the physical singlet sector. This can also be proved by using the commutation relation [S2 ·S3, S2 · S4] = iS2 · (S3 × S4). A complete list of all chirality operators is given in TABLE I. Therefore we can choose another representation of τ z ,\n\n$$\\tau^{z}=-\\chi_{234}/(\\sqrt{3}/4)=-(4/\\sqrt{3}){\\bf S}_{2}\\cdot({\\bf S}_{3}\\times{\\bf S}_{4})\\qquad(6)$$\n\nThe above representations of τ x,y,z are all invariant under global spin rotation of the physical spins.\n\nWith the machinery of equations (4), (5), and (6), it will be straightforward to construct various pseudo-spin-1/2 Hamiltonians on various lattices, of the Kitaev variety and beyond, as the exact low energy effective Hamiltonian of certain spin-1/2 models with spin-rotation symmetry. In these constructions a pseudo-spin lattice site actually represents a cluster of four spin-1/2 moments.\n\n#### III. REALIZATION OF THE KITAEV MODEL.\n\nIn this Section we will use directly the results of the previous Section to write down a Hamiltonian whose low energy sector is described by the Kitaev model. The Hamiltonian will be constructed on the physical spin lattice illustrated in FIG. 2. In this Section we will use j, k to label four-spin clusters (pseudo-spin-1/2 sites), the physical spins in cluster j are labeled as Sj1, . . . 
, Sj4.\n\nApply the mappings developed in Section II, we have the desired Hamiltonian in short notation,\n\n$$H=\\sum_{\\begin{subarray}{c}\\text{cluster}\\\\ \\text{cluster}\\end{subarray}}H_{\\text{cluster}}-\\sum_{x-\\text{links}}J_{x}\\tau_{j}^{x}\\tau_{k}^{x}\\tag{7}$$\n \n$$-\\sum_{\\begin{subarray}{c}y-\\text{links}\\end{subarray}}J_{y}\\tau_{j}^{y}\\tau_{k}^{y}-\\sum_{z-\\text{links}}J_{z}\\tau_{j}^{z}\\tau_{k}^{z}$$\n\nwhere j, k label the honeycomb lattice sites thus the fourspin clusters, Hcluster is given by (2), τ x,y,z should be replaced by the corresponding physical spin operators in (4) and (5) or (6), or some other equivalent representations of personal preference.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0266.pdf" - }, - { - "text": "minimisation [9]. Choosing actions that minimise the expected free energy (*EFE*) of their consequences provides a natural balance between exploratory and exploitative behaviour; generalises descriptive approaches to behavioural modelling, like reinforcement learning and expected utility maximisation; and provides a singular approach to adaptive behaviour that can be used across different environments. AIF was argued to be applicable to any selforganising system that actively maintains a stable boundary that defines its integrity [10], a broad category that includes cells and plants [11], as well as humans [2] and even collectives [12]. Owing to its generality, AIF has seen a rise in popularity across multiple fields. It is used for theoretical simulations of the mechanisms underlying various types of behaviour [2], computational phenotyping in computational psychiatry [13,14], and agentbased simulations of population dynamics [15], as well as in engineering and robotics [16]. In AIF, perception and concurrent action are based on performing a variational Bayesian inversion of a generative model of the environment (i.e., a model of how the environment changes and brings about sensory observations). 
This belief updating includes inferring (hidden) states of the environment, learning parameters of the generative model and learning the structure of the generative model. Since the requisite inference schemes come pre-specified, the main task in AIF modelling becomes specifying an appropriate generative model. This includes specifying priors over environmental states, as well as what might be called *prior preferences*, *preference priors* or *goal priors*: immutable prior expectations that make up an agents' preferences by furnishing a set of predictions over future states or observations; in fulfilling these predictions, free energy is minimised. The space of possible generative models is vast, and they often have to be handcrafted for a given environment. However, there are some families of generative models that can be considered \"universal\" in the sense that they can be used for most environments. Currently, the most popular of these is the discrete state-space Partially Observable Markov Decision Process (POMDP) based generative models. Since they are ubiquitous in the literature, we focus here on making these types of generative models available to researchers. There are, however, other types of universal generative models, like generalised filtering models [17] or Hierarchical Gaussian Filtering-based models [18,19], that will be implemented in the future.\n\nTools for simulating POMDP-AIF models were originally developed as part of the DEM [20] library for MATLAB [21] (part of the larger SPM library [22]). Since then, a modal and flexible software package pymdp [23] was created for Python [24], as well as a performance-oriented package cpp-AIF [25] for C++ [26] that can be used across platforms. Finally, the factor graph library RxInfer [27] for Julia [28] has also been used to implement some AIF models on an efficient factor graph back-end [29–31]. 
The important tools that these packages provide make AIF available for researchers to perform simulation studies and for use in engineering contexts. They do not, however, usually allow for fitting models to empirically observed data, which is a fundamental method used in cognitive modelling [32], often in the context of computational psychiatry [13], to infer the mechanisms underlying variations in behaviour or to investigate the differences between (for example, clinical) populations. Smith and colleagues [33] provided a guide for manually doing variational Bayesian parameter estimation based on empirical data, but only in MATLAB and restricted to a particular class of variational parameter estimation methods (variational Laplace), instead of the sampling-based methods that currently predominate in the field of cognitive modelling [34,35].\n\nIn this paper, we introduce ActiveInference.jl, a new software library for Julia [28] that aims to provide easy-to-use tools for model fitting with AIF models and to introduce AIF to the growing community of researchers using Julia for computational psychiatry and cognitive modelling. Julia is a free and open-source high-level programming language that retains an easy user interface reminiscent of that in MATLAB and Python. Simultaneously,", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "ing the temporal dynamics of belief changes in experimental participants. Dynamic belief trajectories can then be related to other (for example, physiological) measures, as is usual in model-based neuroscience [65]. This method can also, in principle, be used for fitting models to other types of experimentally observable systems, like animals, organoids [66], and simulated or emergent systems [67]. 
The package can also be used for agent-based modelling in general, for repeating earlier analyses with sampling based model-fitting and for comparing POMDP-based AIF models directly to other types of models.\n\nSince they implement full approximate Bayesian inferences, AIF models are computationally more demanding than many approaches traditionally used in cognitive and agent-based modelling, in particular when the dimensionality of the generative model is large. This means that models with highly multidimensional or complex behaviour and large numbers of agents can be computationally infeasible to implement, especially given the additional computational demands introduced by fitting these models to empirical data. Avenues for addressing this implicit scaling problem were proposed in the context of machine learning applications [68,69], and with the use of simplifying assumptions—the use of which are ubiquitous in computational modelling—AIF has been used to model multi-agent phenomena, such as opinion dynamics [15,70], coordinated foraging [71] and fish school movements [12]. It remains to be explored how AIF models can be applied to highly complex natural phenomena, such as a concrete election, which underscores the need for efficient but flexible and accessible software tools in the field.\n\nThere are many ways in which ActiveInference can be improved. It would be useful to extend the set of dynamic belief states to include prediction errors since they are often used for model-based neuroscience. This would entail departing from discrete state-space (i.e., POMDP) models to consider continuous state-space models apt for Bayesian filtering or predictive coding (see below). An alternative would be to generate prediction errors from belief updating under discrete models, where prediction errors can be read as the (KL) divergence between posterior and prior beliefs (i.e., complexity or information gain). 
A simple interface could be added for creating custom parametrisations of the requisite parameters that could be parametrised with Boltzmann or Gibbs distributions, as opposed to Dirichlet distributions. Parameter learning could be extended to all generative model parameters, as well as in parametrised forms (e.g., so that the Boltzmann parameter or temperature of the parameters that are learned); similarly for the precision over expected free energies *γ*. Preference priors should also be implementable for environmental states, in addition to observations, and **A** can be made action dependent.\n\nA library of pre-made canonical POMDP models could be created so that users can easily implement them directly. Alternatives to the fixed-point iteration method for updating posteriors over environmental states could be included, like the marginal message passing algorithm. There are various ways in which the package can be made more computationally efficient, and it could be compared with other software implementations. There are plenty of utility and plotting functions that could be added to the package to make it easier to use and to facilitate integration with the model-fitting packages it relies on; for example, to allow for combining the models with linear regressions to compare parameters values of different populations in a single model. More complex types of POMDP models can also be added, like hierarchical and temporally deep POMDPs. Model structure learning could be considered, where different model structures are compared and chosen between by evaluating their free energies. Sophisticated inference, where predictions are also made about changes in one's own beliefs—depending on expected action-dependent observations in the future—could also be implemented [58]. 
Finally, the package could be extended to other types of generative models than POMDPs, including other universal models, like generalised filtering [17] and Hierarchical Gaussian Filter models [41], as well as custom", - "page_start": 28, - "page_end": 28, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "with respect to model ranking?\n\nTo go further than the correlation analysis among datasets regarding their topics (see section 3.1.5), subsequent analysis will be conducted regarding how they rank models. Additionally, complementary insights will be derived from examining correlations of models relative to their strengths and weaknesses across different datasets.\n\n### 4 Results and discussion\n\nIn this section, we present the results through the prism of our research questions.\n\n### Q1: Is there a model that outstands on all tasks?\n\nModels performances for each task are presented in appendix Tables 9, 10, 11, 12 and 13. Figure 1 shows the critical difference diagram of average score ranks.\n\nAs in MTEB (Muennighoff et al., 2022), no model claims state-of-the-art in all tasks even if the *text-embedding-3-large* model is in first place on average on all tasks (see Table 9). It ranks first for the classification and reranking tasks. For the clustering task, *text-embedding-ada-002* is the best model. The models *voyage-code-2*, *textembedding-3-small* and *mistral-embed* share the top positions in the retrieval task ranking. For the pair classification task, *laser2* is ahead of its competitors. Finally, *sentence-camembert-large* leads on the STS task and *multilingual-e5-small* has the best results for summarization.\n\nFigure 1 shows a global model comparison across all datasets. The models are arranged horizontally according to their performance, with the best models on the left. The black bars represent the statistical equivalence between the models' performances. 
The statistically equivalent top performers for this benchmark are OpenAI's models *text-embedding-3-large*, *text-embedding-3 small* and *text-embedding-ada-002*. Interestingly, many models do not show a significant performance gap between their base and large flavours. Some French models stand out among the multilingual models, such as *Solon-embeddings-large-0.1*, *sentence_croissant_alpha_v0.3* and *sentencecamembert-large*.\n\n### Q2: Are there any links between model characteristics and performance?\n\nThe Spearman correlations between the average rank of the models and their characteristics are the following:\n\n- *Tuned for sentence similarity*: 0.727\n- *Finetuned vs pretrained*: 0.544\n- *Model number of parameters*: 0.49\n- *Embedding dimension*: 0.452\n- *Closed source*: 0.449\n- *Max sequence length*: 0.336\n- *Multilingual*: 0.103\n- *English*: 0.025\n- *English but tuned on other languages*: -0.025\n- *French*: -0.134\n- *Bilingual*: -0.135\n\nAdditionally, all cross-correlations between characteristics are reported in appendix Figure 10.\n\nAs expected, the score most strongly correlates with whether the evaluated models were trained on a sentence similarity task. Of course, this criterion is connected to the more general *Finetuned* one. The only top-performing models solely pre-trained are from the *E5* family, where the pre-training is, in fact, contrastive and optimized for similarity. Conversely, models pre-trained on token-level tasks and generating embeddings via pooling appear less well-suited for the benchmark tasks.\n\nFurthermore, we observe a performance correlation with the embedding dimension and the model's number of parameters, which are often correlated themselves. This appears very clearly on the relative ranking of *E5* and *T5* models (see Figure 1). However, some small models perform very well on the benchmark, such as the standard version of the multilingual universal sentence encoder or *Solon-embeddings-base-1.0*. 
Notably, the maximum sequence length, while an important criterion for generative tasks with LLMs, is less correlated with performance than the other dimensions. This can be explained by many datasets containing relatively small texts (see appendix Table 3 showing that 14 datasets have less than 50 tokens).\n\nRegarding language, it is surprising that good performance is not particularly correlated with French models in particular. In reality, the other aspects of the models, such as being fine-tuned", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv4.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0266.pdf", - "query": "How can fractionalised Majorana fermion excitations be understood?", - "target_page": 1, - "target_passage": "from the more familiar Jordan-Wigner transformation of 1D spin systems", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "high-energy fermions and is an input for the low-energy theory. Below we follow Refs. 31,33 and assume that the momentum dependence of a collective boson is flat near (π, π). The self energy within such model has been worked out consistently in Ref. 31,33. In the normal state\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=-\\frac{1}{2}\\,\\lambda_{n}\\omega_{sf}\\,log\\left(1+\\frac{\\omega^{2}}{\\omega_{sf}^{2}}\\right)$$\n \n \n\n$$\\Sigma^{\\prime}(\\omega)=-\\lambda_{n}\\omega_{sf}\\,arctan\\frac{\\omega}{\\omega_{sf}}\\tag{19}$$\n\nwhere λn is the spin-fermion coupling constant, and ωsf is a typical spin relaxation frequency of overdamped spin collective excitations with a propagator\n\n$$\\chi(q\\sim Q,\\Omega)=\\frac{\\chi_{Q}}{1-i\\frac{\\Omega}{\\omega_{s f}}}\\qquad\\qquad(20)$$\n\nwhere χQ is the uniform static susceptibility. 
If we use Ornstein-Zernike form of χ(q) and use either Eliashberg 45 or FLEX computational schemes48, we get rather similar behavior of Σ as a function of frequency and rather similar behavior of optical integrals.\n\nThe collective nature of spin fluctuations is reflected in the fact that the coupling λ and the bosonic frequency ωsf are related: λ scales as ξ 2 , where ξ is the bosonic mass (the distance to a bosonic instability), and ωsf ∝ ξ −2 (see Ref. 49). For a flat χ(q ∼ Q) the product λωsf does not depend on ξ and is the overall dimensional scale for boson-mediated interactions.\n\nIn the SCS fermionic excitations acquire a gap. This gap affects fermionic self-energy in two ways: directly, via the change of the dispersion of an intermediate boson in the exchange process involving a CB, and indirectly, via the change of the propagator of a CB. We remind ourselves that the dynamics of a CB comes from a particlehole bubble which is indeed affected by ∆.\n\nThe effect of a d−wave pairing gap on a CB has been discussed in a number of papers, most recently in31. In a SCS a gapless continuum described by Eq. (20) transforms into a gaped continuum, with a gap about 2∆ and a resonance at ω = ω0 < 2∆, where for a d−wave gap we define ∆ as a maximum of a d−wave gap.\n\nThe spin susceptibility near (π, π) in a superconductor can generally be written up as\n\n$$\\chi(q\\sim Q,\\Omega)=\\frac{\\chi_{Q}}{1-i\\frac{\\Pi(\\Omega)}{\\omega_{s f}}}\\qquad\\qquad(21)$$\n\nwhere Π is evaluated by adding up the bubbles made out of two normal and two anomalous Green's functions. Below 2∆, Π(Ω) is real (∼ Ω 2/∆ for small Ω), and the resonance emerges at Ω = ω0 at which Π(ω0) = ωsf . 
At frequencies larger than 2∆, Π(Ω) has an imaginary part, and this gives rise to a gaped continuum in χ(Ω).\n\nThe imaginary part of the spin susceptibility around the resonance frequency ω0 is31\n\n$$\\chi^{\\prime\\prime}(q,\\Omega)=\\frac{\\pi Z_{o}\\omega_{0}}{2}\\delta(\\Omega-\\omega_{0})\\tag{22}$$\n\nwhere Zo ∼ 2 ωsfχ0/ ∂Π ∂ω |Ω=ω0 . The imaginary part of the spin susceptibility describing a gaped continuum exists for for Ω ≥ 2∆ and is\n\n$$\\chi^{^{\\prime\\prime}}(q,\\Omega)=I m\\left[\\frac{\\chi_{0}}{1-\\frac{1}{\\omega_{s f}}\\left(\\frac{4\\Delta^{2}}{\\Omega}D(\\frac{4\\Delta^{2}}{\\Omega^{2}})+i\\Omega K_{2}(1-\\frac{4\\Delta^{2}}{\\Omega^{2}})\\right)}\\right]$$\n\n$$\\approx I m\\left[\\frac{\\chi_{0}}{1-\\frac{1}{\\omega_{s f}}\\left(\\frac{\\pi\\Delta^{2}}{\\Omega}+i\\frac{\\pi}{2}\\Omega\\right)\\right]}\\ \\mathrm{for}\\ \\Omega>>2\\Delta\\ \\ \\ \\ (23)$$\n\nIn Eq. (23) D(x) = K1(x)−K2(x) x , and K1(x) and K2(x) are Elliptic integrals of first and second kind. The real part of χ is obtained by Kramers-Kr¨onig transform of the imaginary part.\n\nSubstituting Eq 6 for χ(q, Ω) into the formula for the self-energy one obtains Σ′′(ω) in a SCS state as a sum of two terms31\n\nwhere,\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=\\Sigma^{\\prime\\prime}_{A}(\\omega)+\\Sigma^{\\prime\\prime}_{B}(\\omega)\\tag{24}$$\n\n$$\\Sigma_{A}^{\\prime\\prime}(\\omega)=\\frac{\\pi Z_{o}}{2}\\,\\lambda_{n}\\omega_{o}\\,R e\\left(\\frac{\\omega+\\omega_{o}}{\\sqrt{(\\omega+\\omega_{o})^{2}-\\Delta^{2}}}\\right)$$\n\ncomes from the interaction with the resonance 
and\n\n$$\\Sigma_{B}^{\\prime\\prime}(\\omega)=-\\lambda_{n}\\int_{2\\Delta}^{|E|}dx\\,Re\\,\\frac{\\omega+x}{\\sqrt{(\\omega+x)^{2}-\\Delta^{2}}}\\,\\frac{\\frac{\\omega}{\\omega_{\\omega}}K_{2}\\left(1-\\frac{4\\Delta^{2}}{x^{2}}\\right)}{\\left[1-\\frac{4\\Delta^{2}}{\\omega_{\\omega}\\,I}D\\left(\\frac{4\\Delta^{2}}{x^{2}}\\right)\\right]^{2}+\\left[\\frac{x}{\\omega_{\\omega}\\,I}K_{2}\\left(1-\\frac{4\\Delta^{2}}{x^{2}}\\right)\\right]^{2}}\\tag{25}$$\n\ncomes from the interaction with the gaped continuum. The real part of Σ is obtained by Kramers-Kr¨onig trans-", - "page_start": 10, - "page_end": 10, - "source_file": "1001.0764.pdf" - }, - { - "text": "in a given band is compensated by an appropriate change of the spectral weight in other bands such that the total spectral weight, integrated over all bands, is conserved, as in Eq. (1). Still, non-conservation of the spectral weight within a given band is an interesting phenomenon as the degree of non-conservation is an indicator of relevant energy scales in the problem. Indeed, when relevant energy scales are much smaller than the Fermi energy, i.e., changes in the conductivity are confined to a near vicinity of a Fermi surface (FS), one can expand εk near kF as εk = vF (k − kF ) + (k − kF ) 2/(2mB) + O(k − kF ) 3 and obtain ∇2 k~x ε~k ≈ 1/mB [this approximation is equivalent to approximating the density of states (DOS) by a constant]. Then WK becomes πne2/(2mB) which does not depend on temperature. The scale of the temperature dependence of WK is then an indicator how far in energy the changes in conductivity extend when, e.g., a system evolves from a normal metal to a superconductor. 
Because relevant energy scales increase with the interaction strength, the temperature dependence of WK is also an indirect indicator of whether a system is in a weak, intermediate, or strong coupling regime.\n\nIn a conventional BCS superconductor the only relevant scales are the superconducting gap ∆ and the impurity scattering rate Γ. Both are generally much smaller than the Fermi energy, so the optical integral should be almost T -independent, i.e., the spectral weight lost in a superconducting state at low frequencies because of gap opening is completely recovered by the zero-frequency δfunction. In a clean limit, the weight which goes into a δ−function is recovered within frequencies up to 4∆. This is the essence of FGT sum rule 2,3. In a dirty limit, this scale is larger, O(Γ), but still WK is T -independent and there was no \"violation of sum rule\".\n\nThe issue of sum rule attracted substantial interest in the studies of high Tc cuprates5–18,21–26 in which pairing is without doubts a strong coupling phenomenon. From a theoretical perspective, the interest in this issue was originally triggered by a similarity between WK and the kinetic energy K = 2P ε~k n~k . 18–20 For a model with a simple tight binding cosine dispersion εk ∝ (cos kx + cos ky), d 2 ε~k d k2 x ∼ −ε~k and WK = −K. For a more complex dispersion there is no exact relation between WK and K, but several groups argued 17,27,28 that WK can still be regarded as a good monitor for the changes in the kinetic energy. Now, in a BCS superconductor, kinetic energy increases below Tc because nk extends to higher frequencies (see Fig.2). At strong coupling, K not necessary increases because of opposite trend associated with the fermionic self-energy: fermions are more mobile in the SCS due to less space for scattering at low energies than they are in the NS. Model calculations show that above some coupling strength, the kinetic energy decreases below Tc 29. 
While, as we said, there is no one-to-one correspondence between K and WK, it is still likely that, when K decreases, WK increases.\n\nA good amount of experimental effort has been put into\n\naddressing the issue of the optical sum rule in the c−axis7 and in-plane conductivities 8–16 in overdoped, optimally doped, and underdoped cuprates. The experimental results demonstrated, above all, outstanding achievements of experimental abilities as these groups managed to detect the value of the optical integral with the accuracy of a fraction of a percent. The analysis of the change of the optical integral between normal and SCS is even more complex because one has to (i) extend NS data to T < Tc and (ii) measure superfluid density with the same accuracy as the optical integral itself.\n\nThe analysis of the optical integral showed that in overdoped cuprates it definitely decreases below Tc, in consistency with the expectations at weak coupling11. For underdoped cuprates, all experimental groups agree that a relative change of the optical integral below Tc gets much smaller. There is no agreement yet about the sign of the change of the optical integral : Molegraaf et al.8 and Santander-Syro et al.9 argued that the optical integral increases below Tc, while Boris et al.10 argued that it decreases.\n\nTheoretical analysis of these results21,22,25,28,30 added one more degree of complexity to the issue. It is tempting to analyze the temperature dependence of WK and relate it to the observed behavior of the optical integral, and some earlier works25,28,30 followed this route. In the experiments, however, optical conductivity is integrated only up to a certain frequency ωc, and the quantity which is actually measured is\n\n$$W(\\omega_{c})=\\int_{0}^{\\omega_{c}}\\,Re\\,\\sigma(\\Omega)\\,d\\Omega=W_{K}+f(\\omega_{c})$$\n \n$$f(\\omega_{c})=-\\int_{\\omega_{c}}^{\\prime\\,\\infty^{\\prime}}\\,Re\\,\\sigma(\\Omega)\\,d\\Omega\\tag{4}$$\n\nThe Kubo formula, Eq. 
(3) is obtained assuming that the second part is negligible. This is not guaranteed, however, as typical ωc ∼ 1 − 2eV are comparable to the bandwidth.\n\nThe differential sum rule ∆W is also a sum of two terms\n\n$$\\Delta W(\\omega_{c})=\\Delta W_{K}+\\Delta f(\\omega_{c})\\tag{5}$$\n\nwhere ∆WK is the variation of the r.h.s. of Eq. 3, and ∆f(ωc) is the variation of the cutoff term. Because conductivity changes with T at all frequencies, ∆f(ωc) also varies with temperature. It then becomes the issue whether the experimentally observed ∆W(ωc) is predominantly due to \"intrinsic\" ∆WK, or to ∆f(ωc). [A third possibility is non-applicability of the Kubo formula because of the close proximity of other bands, but we will not dwell on this.]\n\nFor the NS, previous works21,22 on particular models for the cuprates indicated that the origin of the temperature dependence of W(ωc) is likely the T dependence of the cutoff term f(ωc). Specifically, Norman et. al.22 approximated a fermionic DOS by a constant (in which", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0764.pdf" - }, - { - "text": "FIG. 9: ∆W vs the cut-off for the EB model. It remains negative for larger cut-offs. Parameters are the same as before. The dot indicates the value of ∆W(∞) = ∆WK\n\nof the lattice (the dashed line in Fig. 9).\n\n#### C. Marginal Fermi liquid model\n\nFor their analysis of the optical integral, Norman and P´epin30 introduced a phenomenological model for the self energy which fits normal state scattering rate measurements by ARPES41. It constructs the NS Σ′′ (ω) out of two contributions - impurity scattering and electronelectron scattering which they approximated phenomenologically by the marginal Fermi liquid form of αω at small frequencies6 (MFLI model). 
The total Σ′′ is\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=\\Gamma\\,+\\,\\alpha|\\omega|f\\left(\\frac{\\omega}{\\omega_{s a t}}\\right)\\tag{17}$$\n\nwhere ωsat is about ∼ 1 2 of the bandwidth, and f(x) ≈ 1 for x < 1 and decreases for x > 1. In Ref 30 f(x) was assumed to scale as 1/x at large x such that Σ′′ is flat at large ω. The real part of Σ(ω) is obtained from Kramers-Kr¨onig relations. For the superconducting state, they obtained Σ′′ by cutting off the NS expression on the lower end at some frequency ω1 (the analog of ω0 + ∆ that we had for EB model):\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=(\\Gamma\\,+\\,\\alpha|\\omega|)\\Theta(|\\omega|-\\omega_{1})\\qquad\\quad(18)$$\n\nwhere Θ(x) is the step function. In reality, Σ′′ which fits ARPES in the NS has some angular dependence along the Fermi surface42, but this was ignored for simplicity. This model had gained a lot of attention as it predicted the optical sum in the SCS to be larger than in the NS, i.e., ∆W > 0 at large frequencies. This would be consistent with the experimental findings in Refs. 8,9 if, indeed, one identifies ∆W measured up to 1eV with ∆WK.\n\nWe will show below that the sign of ∆W in the MFLI model actually depends on how the normal state results are extended to the superconducting state and, moreover, will argue that ∆WK is actually negative if the extension is done such that at α = 0 the results are consistent with\n\nBCSI model. However, before that, we show in Figs 10- 12 the conductivities and the optical integrals for the original MFLI model.\n\nFIG. 10: Top –the conductivities in the NS and SCS in the original MFLI model of Ref.30. We set Γ = 70 meV , α = 0.75, ∆ = 32 meV , ω1 = 71 meV . Note that σ ′ (ω) in the SCS begins at Ω = ∆ + ω1. Bottom – the behavior of WK with Γ.\n\nIn Fig 10 we plot the conductivities in the NS and the SCS and Kubo sums WK vs Γ at α = 0.75 showing that the spectral weight in the SCS is indeed larger than in the NS. 
In Fig 11 we show the behavior of the optical sums W(ωc) in NS and SCS. The observation here is that only ∼ 75−80% of the Kubo sum is recovered up to the scale of the bandwidth implying that there is indeed a significant spectral weight well beyond the bandwidth. And in Fig 12 we show the behavior of ∆W(wc). We see that it does not change sign and remain positive at all ωc, very much unlike the BCS case. Comparing the behavior of W(wc) with and without a lattice (solid and dashed lines in Fig. 12) we see that the 'finite bandwidth effect' just shifts the curve in the positive direction. We also see that the solid line flattens above roughly half of the bandwidth, i.e., at these frequencies ∆W(ωc) ≈ ∆WK. Still, we found that ∆W continues going down even above the bandwidth and truly saturates only at about 2 eV (not shown in the figure) supporting the idea that there is 'more' left to recover from higher frequencies.\n\nThe rationale for ∆WK > 0 in the original MFLI model has been provided in Ref. 30. They argued that this is closely linked to the absence of quasiparticle peaks in the NS and their restoration in the SCS state because the phase space for quasiparticle scattering at low energies is smaller in a superconductor than in a normal state.", - "page_start": 7, - "page_end": 7, - "source_file": "1001.0764.pdf" - }, - { - "text": "modified MFLI models. It is interesting that this holds despite the fact that for large λ CB model displays the physics one apparently needs to reverse the sign of ∆WK – the absence of the quasiparticle peak in the NS and its emergence in the SCS accompanied by the dip and the hump at larger energies. The absence of coherent quasiparticle in the NS at large λ is also apparent form Fig 21 where we show the normal state distribution functions for two different λ. 
For large λ the jump (which indicates the presence of quasiparticles) virtually disappears.\n\nOn a more careful look, we found that indifference of δW(ωc) to the increase of λ is merely the consequence of the fact that above we kept λωsf constant. Indeed, at small frequencies, fermionic self-energy in the NS is Σ′ = λω, Σ\" = λ 2ω 2/(λωsf ), and both Σ′ and Σ′′ increase with λ if we keep λωsf constant. But at frequencies larger than ωsf , which we actually probe by ∆W(ωc), the selfenergy essentially depends only on λωsf , and increasing λ but keeping λωsf constant does not bring us closer to the physics associated with the recovery of electron coherence in the SCS. To detect this physics, we need to see how things evolve when we increase λωsf above the scale of ∆ , i.e., consider a truly strong coupling when not only λ ≫ 1 but also the normal state ΣNS(ω ≥ ∆) >> ∆.\n\nTo address this issue, we took a larger λ for the same ωsf and re-did the calculation of the conductivities and optical integrals. The results for σ(ω) and ∆W(ωc) are presented in Fig. 22. We found the same behavior as before, i.e., ∆WK is negative. But we also found that the larger is the overall scale for the self-energy, the larger is a frequency of zero-crossing of ∆W(ωc). In particular, for the same λ and ωsf that were used in Ref. 33 to fit the NS conductivity data, the zero crossing is at ∼ 0.8 eV which is quite close to the bandwidth. This implies that at a truly strong coupling the frequency at which ∆W(ωc) changes sign can well be larger than the bandwidth of 1eV in which case ∆W integrated up to the bandwidth does indeed remain positive. Such behavior would be consistent with Refs.8,9. we also see from Fig. 22 that ∆WK becomes small at a truly strong coupling, and over a wide range of frequencies the behavior of ∆W(ωc) is predominantly governed by ∆f(ωc), i.e. 
by the cut-off term.50 The implication is that, to first approximation, ∆WK can be neglected and positive ∆W(wc) integrated to a frequency where it is still positive is almost compensated by the integral over larger frequencies. This again would be consistent with the experimental data in Refs. 8,9.\n\nIt is also instructive to understand the interplay between the behavior of ∆W(ωc) and the behavior of the difference of the kinetic energy between the SCS and the NS, δKE. We computed the kinetic energy as a function of λωsf and present the results in Fig. 23 for λ = 1 and 10. For a relatively weak λ = 1 the behavior is clearly BCS like- δKE > 0 and increases with increasing λωsf . However, at large λ = 10, we see that the kinetic energy begin decreasing at large λωsf and eventually changes sign. The behavior of δKE at a truly strong coupling is consistent with earlier calculation of the kinetic energy for Ornstein-Zernike form of the spin susceptibility43 .\n\nWe clearly see that the increase of the zero crossing frequency of ∆W(ωc) at a truly strong coupling is correlated with the non-BCS behavior of δKE. At the same time, the behavior of δW(ωc) is obviously not driven by the kinetic energy as eventually δW(ωc) changes sign and become negative. Rather, the increase in the frequency range where ∆W(ωc) remains positive and non-BCS behavior of δKE are two indications of the same effect that fermions are incoherent in the NS but acquire coherence in the SCS.\n\n### III. CONCLUSION\n\nIn this work we analyzed the behavior of optical integrals W(ωc) ∝ R ωc o σ(ω)dω and Kubo sum rules in the normal and superconducting states of interacting fermionic systems on a lattice. Our key goal was to understand what sets the sign of ∆WK = ∆W(∞) between the normal and superconducting states and what is the behavior of W(ωc) and ∆W(ωc) at finite ωc. 
In a weak coupling BCS superconductor, ∆W(ωc) is positive at ωc < 2∆ due to a contribution from superfluid density, but becomes negative at larger ωc, and approach a negative value of ∆WK. Our study was motivated by fascinating optical experiments on the cuprates7–10. In overdoped cuprates, there is clear indication11 that ∆W(ωc) becomes negative above a few ∆, consistent with BCS behavior. In underdoped cuprates, two groups argued8,9 that ∆W integrated up to the bandwidth remains positive, while the other group argued10 that it is negative.\n\nThe reasoning why ∆WK may potentially change sign at strong coupling involves the correlation between −WK and the kinetic energy. In the BCS limit, kinetic energy obviously increases in a SCS because of gap opening, hence −WK increases, and ∆WK is negative. At strong coupling, there is a counter effect – fermions become more mobile in a SCS due to a smaller self-energy.\n\nWe considered four models: a BCS model with impurities, a model of fermions interacting with an Einstein boson, a phenomenological MFL model with impurities, and a model of fermions interacting with collective spin fluctuations. In all cases, we found that ∆WK is negative, but how it evolves with ωc and how much of the sum rule is recovered by integrating up to the bandwidth depends on the model.\n\nThe result most relevant to the experiments on the cuprates is obtained for the spin fluctuation model. We found that at strong coupling, the zero-crossing of δW(ωc) occurs at a frequency which increases with the coupling strength and may become larger than the bandwidth at a truly strong coupling. Still, at even larger frequencies, ∆W(ωc) is negative.", - "page_start": 13, - "page_end": 13, - "source_file": "1001.0764.pdf" - }, - { - "text": "# Optical Integral and Sum Rule Violation\n\nSaurabh Maiti, Andrey V. 
Chubukov\n\nDepartment of Physics, University of Wisconsin, Madison, Wisconsin 53706, USA\n\n(Dated: November 9, 2018)\n\nThe purpose of this work is to investigate the role of the lattice in the optical Kubo sum rule in the cuprates. We compute conductivities, optical integrals W, and ∆W between superconducting and normal states for 2-D systems with lattice dispersion typical of the cuprates for four different models – a dirty BCS model, a single Einstein boson model, a marginal Fermi liquid model, and a collective boson model with a feedback from superconductivity on a collective boson. The goal of the paper is two-fold. First, we analyze the dependence of W on the upper cut-off (ωc) placed on the optical integral because in experiments W is measured up to frequencies of order bandwidth. For a BCS model, the Kubo sum rule is almost fully reproduced at ωc equal to the bandwidth. But for other models only 70%-80% of Kubo sum rule is obtained up to this scale and even less so for ∆W, implying that the Kubo sum rule has to be applied with caution. Second, we analyze the sign of ∆W. In all models we studied ∆W is positive at small ωc, then crosses zero and approaches a negative value at large ωc, i.e. the optical integral in a superconductor is smaller than in a normal state. The point of zero crossing, however, increases with the interaction strength and in a collective boson model becomes comparable to the bandwidth at strong coupling. We argue that this model exhibits the behavior consistent with that in the cuprates.\n\n#### I. INTRODUCTION\n\nThe analysis of sum rules for optical conductivity has a long history. Kubo, in an extensive paper1 in 1957, used a general formalism of a statistical theory of irreversible processes to investigate the behavior of the conductivity in electronic systems. 
For a system of interacting electrons, he derived the expression for the integral of the real part of a (complex) electric conductivity σ(Ω) and found that it is independent on the nature of the interactions and reduces to\n\n$$\\int_{0}^{\\infty}R e\\,\\sigma(\\Omega)\\,d\\Omega={\\frac{\\pi}{2}}{\\frac{n e^{2}}{m}}\\qquad\\qquad(1)$$\n\nHere n is the density of the electrons in the system and m is the bare mass of the electron. This expression is exact provided that the integration extends truly up to infinity, and its derivation uses the obvious fact that at energies higher than the total bandwidth of a solid, electrons behave as free particles.\n\nThe independence of the r.h.s. of Eq. (1) on temperature and the state of a solid (e.g., a normal or a superconducting state – henceforth referred to as NS and SCS respectively) implies that, while the functional form of σ(Ω) changes with, e.g., temperature, the total spectral weight is conserved and only gets redistributed between different frequencies as temperature changes. This conservation of the total weight of σ(Ω) is generally called a sum rule.\n\nOne particular case, studied in detail for conventional superconductors, is the redistribution of the spectral weight between normal and superconducting states. This is known as Ferrel-Glover-Tinkham (FGT) sum rule:2,3\n\n$$\\int_{0+}^{\\infty}\\,Re\\,\\sigma_{NS}(\\Omega)=\\int_{0+}^{\\infty}\\,Re\\,\\sigma_{sc}(\\Omega)+\\frac{\\pi n_{s}e^{2}}{2m}\\ \\ \\ \\ (2)$$\n\nwhere ns is the superfluid density, and πnse 2/(2m) is the spectral weight under the δ-functional piece of the conductivity in the superconducting state.\n\nIn practice, the integration up to an infinite frequency is hardly possible, and more relevant issue for practical applications is whether a sum rule is satisfied, at least approximately, for a situation when there is a single electron band which crosses the Fermi level and is well separated from other bands. 
Kubo considered this case in the same paper of 1957 and derived the expression for the \"band\", or Kubo sum rule\n\n$$\\int_{0}^{\\cdot\\infty^{\\prime}}Re\\,\\sigma(\\Omega)\\,d\\Omega=W_{K}=\\frac{\\pi e^{2}}{2N}\\sum_{\\vec{k}}\\nabla_{k_{x}}^{2}\\varepsilon_{\\vec{k}}\\,n_{\\vec{k}}\\tag{3}$$\n\nwhere n~k is the electronic distribution function and ε~k is the band dispersion. Prime in the upper limit of the integration has the practical implication that the upper limit is much larger than the bandwidth of a given band which crosses the Fermi level, but smaller than the frequencies of interband transitions. Interactions with external objects, e.g., phonons or impurities, and interactions between fermions are indirectly present in the distribution function which is expressed via the full fermionic Green's function as n~k = T P m G( ~k, ωm). For ǫk = k 2/2m, ∇2 k~x ε~k = 1/m, WK = πne2/(2m), and Kubo sum rule reduces to Eq. (1). In general, however, ε~k is a lattice dispersion, and Eqs. (1) and (3) are different. Most important, WK in Eq. (3) generally depends on T and on the state of the system because of n~k . In this situation, the temperature evolution of the optical integral does not reduce to a simple redistribution of the spectral weight – the whole spectral weight inside the conduction band changes with T . This issue was first studied in detail by Hirsch 4 who introduced the now-frequently-used notation \"violation of the conductivity sum rule\".\n\nIn reality, as already pointed out by Hirsch, there is no true violation as the change of the total spectral weight", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0764.pdf" - }, - { - "text": "FIG. 3: Fractional coverage Θ in thermal equilibrium of Ni in a (a) monovacancy, (b) divacancy I, (c) divacancy II and (d) change in resistance ∆R per dopant site as a function of CO concentration in a background of air at room temperature and 1 bar of pressure. 
The reference concentration of CO is taken to be C0 =0.1 ppm. Note the change from linear to log scale on the y-axis at ∆R =10 Ω.\n\nFor a given background composition we may thus estimate the fractional coverages for each available adsorbate for a given type of doping. As an example, Fig. 3(a)-(c) shows the fractional coverage of a Ni atom occupying a monovacancy, divacancy I, and divacancy II, versus CO concentration in a background of air at room temperature and 1 bar of pressure. Due to the relatively small binding energy of N2 and H2O as compared to O2 and CO, all Ni sites will be either empty or occupied by O2 or CO. In particular, Ni in a monovacancy (top panel of Fig. 3) will be completely oxidized for all relevant CO concentrations. For the Ni occupied divacancy II structures we find the coverage of CO changes significantly around toxic concentrations (∼10 ppm).\n\nTo estimate the effect of adsorbates on the electrical conductance of doped CNTs, we first consider the change in conductance when a single molecule is adsorbed on a metal site of an otherwise pristine CNT. In Fig. 2(b) we show the calculated change in conductance relative to the metal site with no adsorbate. In contrast to the binding energies, there are no clear trends in the conductances. The sensitivity of the conductance is perhaps most clearly demonstrated by the absence of correlation between different types of vacancies, i.e. between the three panels in Fig. 2(b). Close to the Fermi level, the conductance of a perfect armchair CNT equals 2G0. The presence of the metal dopant leads to several dips in the transmission function known as Fano antiresonances [20]. The position and shape of these dips depend on the d-levels of the transition metal atom, the character of its bonding to the CNT, and is further affected by the presence of the adsorbate molecule. The coupling of all these factors is very complex and makes it difficult to estimate or rationalize the value of the conductance. 
For the spin polarized cases, we use the spin-averaged conductances, i.e. G = (G↑ + G↓)/2.\n\nNext, we estimate the resistance of a CNT containing several impurities (a specific metal dopant with different molecular adsorbates). Under the assumption that the electron phasecoherence length, lφ, is smaller than the average distance between the dopants, d, we may neglect quantum interference and obtain the total resistance by adding the scattering resistances due to each impurity separately. The scattering resistance due to a single impurity is given by\n\n$R_{s}(X)=1/G(X)-1/(2G_{0})$, (6)\n\nwhere G(X) is the Landauer conductance of the pristine CNT with a single metal dopant occupied by molecule X and 1/(2G0) is the contact resistance of a (6,6) CNT.\n\nWe may now obtain the total resistance per dopant site relative to the reference background signal as a function of the target molecule concentration\n\n∆R N ≈ X X Rs(X)(Θ[X, C] − Θ[X, C0]), (7)\n\nwhere N is the number of dopants, Θ[X, C] is the fractional coverage of species X at concentration C of the target and C0 is the reference concentration. Notice that the contact resistance drops out as we evaluate a change in resistance.\n\nIn Fig. 3(d) we show the change in resistance calculated from Eq. (7) as a function of CO concentration for Ni occupying the three types of vacancies. The background reference concentration of CO is taken to be C0 = 0.1 ppm. For the monovacancy there is very little change in resistivity. This is because most active sites are blocked by O2 at relevant CO concentrations, as shown in the upper panel of Fig. 3. For Ni in the divacancies there is, however, a change in resistance on the order of 1Ω per site. For concentrations above ∼1 ppm, the CO coverage of Ni in the divacancy II increases dramatically and this leads to a significant increase in resistance.\n\nWe now return to the discussion of the validity of Eq. (7). 
As mentioned, the series coupling of individual scatterers should be valid when lφ < d. However, even for lφ > d and assuming that the Anderson localization length, lloc in the system exceeds lφ, Eq. (7) remains valid if one replaces the actual resistance R by the sample averaged resistance hRi [29]. At room temperature under ambient conditions, interactions with external degrees of freedom such as internal CNT phonons and vibrational modes of the adsorbed molecules would rapidly randomize the phase of the electrons. Therefore Eq. (7) should certainly be valid in the limit of low doping concentrations. On the other hand, the total number of dopants, N, should be large enough for the statistical treatment of the coverage to hold. Finally, we stress that Eq. (7) represents a conservative estimate of the change in resistance. In fact, in the regime where lφ > lloc, i.e. in the Anderson localization regime, the resistance would be highly sensitive to changes in the fractional coverage of active sites. Calculation of the actual resistance of the CNT in this regime would, however, involve a full transport calculation in the presence of", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2538.pdf" - }, - { - "text": "FIG. 15: Top – σ(ω) in the NS and the SCS in the 'corrected' MFLI model with the feedback from SC on the quasiparticle damping: iΓ term transforms into √ Γ −ω2+∆2 . In the SCS σ now begins at Ω = 2∆. The parameters are same as in Fig. 10. Bottom – the behavior of Kubo sum with Γ. Observe that W(ωc) in the NS is larger than in the SCS.\n\nFIG. 16: Evolution of the difference of the optical integrals between the SCS and the NS with the upper cut-off ωc for the \"corrected\" MFLI model. Now ∆W(ωc) is negative above some frequency. Parameters are same as in the Fig 15.\n\nmodel, where WK is larger in the NS for all Γ (see Fig. 4). 
In other words, the original MFLI model does not have the BCSI theory as its limiting case.\n\nWe modified the MFLI model is a minimal way by changing the damping term in a SCS to √ Γ −ω2+∆2 to be consistent with BCSI model. We still use Eq. (18) for the MFL term simply because this term was introduced in the NS on phenomenological grounds and there is no way to guess how it gets modified in the SCS state without first deriving the normal state self-energy microscopically (this is what we will do in the next section). The results of the calculations for the modified MFLI model are presented in Figs. 15 and 16. We clearly see that the behavior is now different and ∆WK < 0 for all Γ. This is the same behavior as we previously found in BCSI and EB models. So we argue that the 'unconventional' behavior exhibited by the original MFLI model is most likely the manifestation of a particular modeling inconsistency. Still, Ref. 30 made a valid point that the fact that quasiparticles behave more close to free fermions in a SCS than in a NS, and this effect tends to reverse the signs of ∆WK and of the kinetic energy 43. It just happens that in a modified MFLI model the optical integral is still larger in the NS.\n\n#### D. The collective boson model\n\nWe now turn to a more microscopic model- the CB model. The model describes fermions interacting by exchanging soft, overdamped collective bosons in a particular, near-critical, spin or charge channel31,44,45. This interaction is responsible for the normal state self-energy and also gives rise to a superconductivity. A peculiar feature of the CB model is that the propagator of a collective boson changes below Tc because this boson is not an independent degree of freedom (as in EB model) but is made out of low-energy fermions which are affected by superconductivity32 .\n\nThe most relevant point for our discussion is that this model contains the physics which we identified above as a source of a potential sign change of ∆WK. 
Namely, at strong coupling the fermionic self-energy in the NS is large because there exists strong scattering between low-energy fermions mediated by low-energy collective bosons. In the SCS, the density of low-energy fermions drops and a continuum collective excitations becomes gaped. Both effects reduce fermionic damping and lead to the increase of WK in a SCS. If this increase exceeds a conventional loss of WK due to a gap opening, the total ∆WK may become positive.\n\nThe CB model has been applied numerous times to the cuprates, most often under the assumption that nearcritical collective excitations are spin fluctuations with momenta near Q = (π, π). This version of a CB boson is commonly known as a spin-fermion model. This model yields dx2−y 2 superconductivity and explains in a quantitative way a number of measured electronic features of the cuprates, in particular the near-absence of the quasiparticle peak in the NS of optimally doped and underdoped cuprates39 and the peak-dip-hump structure in the ARPES profile in the SCS31,32,46,47. In our analysis we assume that a CB is a spin fluctuation.\n\nThe results for the conductivity within a spin-fermion model depend in quantitative (but not qualitative) way on the assumption for the momentum dispersion of a collective boson. This momentum dependence comes from", - "page_start": 9, - "page_end": 9, - "source_file": "1001.0764.pdf" - }, - { - "text": "Another note to take is that it is not necessary to have such a highly symmetric cluster Hamiltonian (2). The mappings to pseudo-spin-1/2 should work as long as the ground states of the cluster Hamiltonian are the two-fold degenerate singlets. One generalization, which conforms the symmetry of the lattice in FIG. 2, is to have\n\n$$H_{\\rm cluster}=(J_{\\rm cluster}/2)(r\\cdot{\\bf S}_{1}+{\\bf S}_{2}+{\\bf S}_{3}+{\\bf S}_{4})^{2}\\tag{11}$$\n\nwith Jcluster > 0 and 0 < r < 3. 
However this is not convenient for later discussions and will not be used.\n\nWe briefly describe some of the properties of (8). Its low energy states are entirely in the space that each of the clusters is a physical spin singlet (called cluster singlet subspace hereafter). Therefore physical spin correlations are strictly confined within each cluster. The excitations carrying physical spin are gapped, and their dynamics are 'trivial' in the sense that they do not move from one cluster to another. But there are non-trivial low energy physical spin singlet excitations, described by the pseudospins defined above. The correlations of the pseudo-spins can be mapped to correlations of their corresponding physical spin observables (the inverse mappings are not unique, c.f. TABLE I). For example τ x,y correlations become certain dimer-dimer correlations, τ z correlation becomes chirality-chirality correlation, or four-dimer correlation. It will be interesting to see the corresponding picture of the exotic excitations in the Kitaev model, e.g. the Majorana fermion and the Ising vortex. However this will be deferred to future studies.\n\nIt is tempting to call this as an exactly solved spin liquid with spin gap (∼ Jcluster), an extremely short-range resonating valence bond(RVB) state, from a model with spin rotation and time reversal symmetry. However it should be noted that the unit cell of this model contains an even number of spin-1/2 moments (so does the original Kitaev model) which does not satisfy the stringent definition of spin liquid requiring odd number of electrons per unit cell. Several parent Hamiltonians of spin liquids have already been constructed. See for example, Ref.24–27 .\n\n# IV. GENERATE THE HIGH ORDER PHYSICAL SPIN INTERACTIONS BY PERTURBATIVE EXPANSION.\n\nOne major drawback of the present construction is that it involves high order interactions of physical spins[see (8) and (9)], thus is 'unnatural'. 
In this Section we will make compromises between exact solvability and naturalness. We consider two clusters j and k and try to generate the Jx,y,z interactions in (7) from perturbation series expansion of more natural(lower order) physical spin interactions. Two different approaches for this purpose will be laid out in the following two Subsections. In Subsection IV A we will consider the two clusters as two tetrahedra, and couple the spin system to certain optical phonons, further coupling between the phonon modes\n\nFIG. 3: Illustration of the tetragonal to orthorhombic Q E 1 (top) and Q E 2 (bottom) distortion modes. (a) Perspective view of the tetrahedron. 1, . . . , 4 label the spins. Arrows indicate the motion of each spin under the distortion mode. (b) Top view of (a). (c)(d) Side view of (a).\n\nof the two clusters can generate at lowest order the desired high order spin interactions. In Subsection IV B we will introduce certain magnetic, e.g. Heisenberg-type, interactions between physical spins of different clusters, at lowest order(second order) of perturbation theory the desired high order spin interactions can be achieved. These approaches involve truncation errors in the perturbation series, thus the mapping to low energy effect Hamiltonian will no longer be exact. However the error introduced may be controlled by small expansion parameters. In this Section we denote the physical spins on cluster j(k) as j1, . . . , j4 (k1, . . . , k4), and denote pseudo-spins on cluster j(k) as ~τj (~τk).\n\n# A. Generate the High Order Terms by Coupling to Optical Phonon.\n\nIn this Subsection we regard each four-spin cluster as a tetrahedron, and consider possible optical phonon modes(distortions) and their couplings to the spin system. The basic idea is that the intra-cluster Heisenberg coupling Jcluster can linearly depend on the distance between physical spins. Therefore certain distortions of the tetrahedron couple to certain linear combinations of Sℓ · Sm. 
Integrating out phonon modes will then generate high order spin interactions. This idea has been extensively studied and applied to several magnetic materials28–34. More details can be found in a recent review by Tchernyshyov and Chern35. And we will frequently use their notations. In this Subsection we will use the representation (5) for τ z .\n\nConsider first a single tetrahedron with four spins 1, . . . , 4. The general distortions of this tetrahedron can be classified by their symmetry (see for example Ref.35). Only two tetragonal to orthorhombic distortion modes, QE 1 and QE 2 (illustrated in FIG. 3), couple to the pseudospins defined in Section II. A complete analysis of all modes is given in Appendix A. The coupling is of the", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0266.pdf" - }, - { - "text": "FIG. 4: Top - a conductivity plot for the BCSI case in the presence of a lattice. The parameters are ∆ = 30 meV , Γ = 3.5 meV . Bottom – the behavior of Kubo sums. Note that (a) the spectral weight in the NS is always greater in the SCS, (b) the spectral weight decreases with Γ, and (c) the difference between NS and SCS decreases as Γ increases.\n\nlittle variation of ∆W(ωc) at above 0.1 − 0.3eV what implies that for larger ωc, ∆W(ωc) ≈ ∆WK >> ∆f(ωc).\n\nTo make this more quantitative, we compare in Fig. 6 ∆W(ωc) obtained for a constant DOS, when ∆W(ωc) = ∆f(ωc), and for the actual lattice dispersion, when ∆W(ωc) = ∆WK + ∆f(ωc). In the clean limit there is obviously little cutoff dependence beyond 0.1eV , i.e., ∆f(ωc) is truly small, and the difference between the two cases is just ∆WK. In the dirty limit, the situation is similar, but there is obviously more variation with ωc, and ∆f(ωc) becomes truly small only above 0.3eV . Note also that the position of the dip in ∆W(ωc) in the clean limit is at a larger ωc in the presence of the lattice than in a continuum.\n\n#### B. 
The Einstein boson model\n\nWe next consider the case of electrons interacting with a single boson mode which by itself is not affected by superconductivity. The primary candidate for such mode is an optical phonon. The imaginary part of the NS self energy has been discussed numerous times in the literature. We make one simplifying assumption – approximate the DOS by a constant in calculating fermionic self-energy. We will, however, keep the full lattice dispersion in the calculations of the optical integral. The advantage of this\n\nFIG. 5: The evolution of optical integral in NS(top) and SCS(bottom) for BCSI case. Plots are made for clean limit (solid lines, Γ = 3.5 meV ) and dirty limit (dashed lines, Γ = 150 meV ) for ∆ = 30 meV . Observe that (a) W(0) = 0 in the NS, but has a non-zero value in the SCS because of the δ-function (this value decreases in the dirty limit), and (b) the flat region in the SCS is due to the fact that σ ′ (ω) = 0 for Ω < 2∆. Also note that ∼ 90 − 95% of the spectral weight is recovered up to 1eV\n\napproximation is that the self-energy can be computed analytically. The full self-energy obtained with the lattice dispersion is more involved and can only be obtained numerically, but its structure is quite similar to the one obtained with a constant DOS.\n\nThe self-energy for a constant DOS is given by\n\n$$\\Sigma(i\\omega)=-\\frac{i}{2\\pi}\\lambda_{n}\\int d\\epsilon_{k}d(i\\Omega)\\chi(i\\Omega)G(\\epsilon_{k},i\\omega+i\\Omega)\\tag{13}$$\n\nwhere\n\n$$\\chi(i\\Omega)=\\frac{\\omega_{0}^{2}}{\\omega_{0}^{2}-(i\\Omega)^{2}}\\tag{14}$$\n\nand λn is a dimensionless electron-boson coupling. 
Integrating and transforming to real frequencies, we obtain\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=-\\frac{\\pi}{2}\\,\\lambda_{n}\\omega_{o}\\,\\Theta(|\\omega|-\\omega_{o})$$\n \n \n\n$$\\Sigma^{\\prime}(\\omega)=-\\frac{1}{2}\\,\\lambda_{n}\\omega_{o}\\,log\\left|\\frac{\\omega+\\omega_{o}}{\\omega-\\omega_{o}}\\right|\\tag{15}$$\n\nIn the SCS, we obtain for ω < 0\n\n$$\\Sigma^{\\prime\\prime}(\\omega)=-\\frac{\\pi}{2}\\,\\lambda_{n}\\omega_{o}\\,R e\\left(\\frac{\\omega+\\omega_{o}}{\\sqrt{(\\omega+\\omega_{o})^{2}-\\Delta^{2}}}\\right)$$", - "page_start": 5, - "page_end": 5, - "source_file": "1001.0764.pdf" - }, - { - "text": "FIG. 2: Distribution functions in four cases (a) BCSI model, where one can see that for ε > 0, SC>NS implying KE increases in the SCS. (b) The original MFLI model of Ref. 30, where for ε > 0, SCNS, implying KE increases in the SCS. Observe that in the impurity-free CB model there is no jump in n(ǫ) indicating lack of fermionic coherence. This is consistent with ARPES39\n\n#### A. The BCS case\n\nIn BCS theory the quantity Z(ω) is given by\n\n$$Z_{B C S I}(\\omega)=1+\\frac{\\Gamma}{\\sqrt{\\Delta^{2}-(\\omega+i\\delta)^{2}}}\\qquad(11)$$\n\nand\n\n$$\\Sigma_{B C S I}(\\omega)=\\omega\\left(Z(\\omega)-1\\right)=i\\Gamma\\frac{\\omega}{\\sqrt{(\\omega+i\\delta)^{2}-\\Delta^{2}}}\\ \\ \\ (12)$$\n\nThis is consistent with having in the NS, Σ = iΓ in accordance with Eq 6. In the SCS, Σ(ω) is purely imaginary for ω > ∆ and purely real for ω < ∆. The self-energy has a square-root singularity at ω = ∆.\n\nIt is worth noting that Eq.12 is derived from the integration over infinite band. If one uses Eq.6 for finite band, Eq.12 acquires an additional frequency dependence at large frequencies of the order of bandwidth (the low frequency structure still remains the same as in Eq.12). In principle, in a fully self-consistent analysis, one should indeed evaluate the self-energy using a finite bandwidth. 
In practice, however, the self-energy at frequencies of order bandwidth is generally much smaller than ω and contribute very little to optical conductivity which predominantly comes from frequencies where the self-energy is comparable or even larger than ω. Keeping this in mind, below we will continue with the form of self-energy derived form infinite band. We use the same argument for all four models for the self-energy.\n\nFor completeness, we first present some well known results about the conductivity and optical integral for a constant DOS and then extend the discussion to the case where the same calculations are done in the presence of a particular lattice dispersion.\n\nFIG. 3: The BCSI case with a dispersion linearized around the Fermi surface. Evolution of the difference of optical integrals in the SCS and the NS with the upper cut-off ωc Observe that the zero crossing point increases with impurity scattering rate Γ and also the 'dip' spreads out with increasing Γ. ∆ = 30 meV\n\nFor a constant DOS, ∆W(ωc) = WSC (ωc) − WNS(ωc) is zero at ωc = ∞ and Kubo sum rule reduces to FGT sum rule. In Fig. 3 we plot for this case ∆W(ωc) as a function of the cutoff ωc for different Γ′ s. The plot shows the two well known features: zero-crossing point is below 2∆ in the clean limit Γ << ∆ and is roughly 2Γ in the dirty limit21,40 The magnitude of the 'dip' decreases quite rapidly with increasing Γ. Still, there is always a point of zero crossing and ∆W(ωc) at large ωc approaches zero from below.\n\nWe now perform the same calculations in the presence of lattice dispersion. The results are summarized in Figs 4,5, and 6.\n\nFig 4 shows conductivities σ(ω) in the NS and the SCS and Kubo sums WK plotted against impurity scattering Γ. We see that the optical integral in the NS is always greater than in the SCS. 
The negative sign of ∆WK is simply the consequence of the fact that nk is larger in the NS for ǫk < 0 and smaller for ǫk < 0, and ∇2 ε~k closely follows −ε~k for our choice of dispersion38), Hence nk is larger in the NS for ∇2 ε~k > 0 and smaller for ∇2 ε~k < 0 and the Kubo sum rule, which is the integral of the product of nk and ∇2 ε~k (Eq. 3), is larger in the normal state.\n\nWe also see from Fig. 4 that ∆WK decreases with Γ reflecting the fact that with too much impurity scattering there is little difference in nk between NS and SCS.\n\nFig 5 shows the optical sum in NS and SCS in clean and dirty limits (the parameters are stated in the figure). This plot shows that the Kubo sums are almost completely recovered by integrating up to the bandwidth of 1eV : the recovery is 95% in the clean limit and ∼ 90% in the dirty limit. In Fig 6 we plot ∆W(ωc) as a function of ωc in clean and dirty limits. ∆W(∞) is now non-zero, in agreement with Fig. 4 and we also see that there is", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0764.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0266.pdf", - "query": "What happens when the spin-rotation symmetry is explicitly broken?", - "target_page": 2, - "target_passage": "makes them harder to realize in solid state systems", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "chirality interactions in cold atom optical lattices has been proposed38 .\n\nOur model (8) is achieved at second order of the perturbation series. Higher order terms become truncation errors but may be controlled by small parameters λx,y,z/Jcluster ∼ p |Jx,y,z|/Jcluster.\n\n### V. CONCLUSIONS.\n\nWe constructed the exactly solvable Kitaev honeycomb model1 as the exact low energy effective Hamiltonian of a spin-1/2 model [equations (8) or (9)] with spin-rotation and time reversal symmetry. 
The spin in Kitaev model is represented as the pseudo-spin in the two-fold degenerate spin singlet subspace of a cluster of four antiferromagnetically coupled spin-1/2 moments. The physical spin model is a honeycomb lattice of such four-spin clusters, with certain inter-cluster interactions. The machinery for the exact mapping to pseudo-spin Hamiltonian was developed (see e.g. TABLE I), which is quite general and can be used to construct other interesting (exactly solvable) spin-1/2 models from spin rotation invariant systems.\n\nIn this construction the pseudo-spin correlations in the Kitaev model will be mapped to dimer or spin-chirality correlations in the physical spin system. The corresponding picture of the fractionalized Majorana fermion excitations and Ising vortices still remain to be clarified.\n\nThis exact construction contains high order physical spin interactions, which is undesirable for practical implementation. We described two possible approaches to reduce this problem: generating the high order spin interactions by perturbative expansion of the coupling to optical phonon, or the magnetic coupling between clusters. This perturbative construction will introduce truncation error of perturbation series, which may be controlled by small expansion parameters. Whether these constructions can be experimentally engineered is however beyond the scope of this study. It is conceivable that other perturbative expansion can also generate these high order spin interactions, but this possibility will be left for future works.\n\n#### Acknowledgments\n\nThe author thanks Ashvin Vishwanath, Yong-Baek Kim and Arun Paramekanti for inspiring discussions, and Todadri Senthil for critical comments. The author is supported by the MIT Pappalardo Fellowship in Physics.\n\n# Appendix A: Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n\nIn this Appendix we reproduce from Ref.35 the couplings of all tetrahedron distortion modes to the spin system. 
And convert them to pseudo-spin notation in the physical spin singlet sector.\n\nConsider a general small distortion of the tetrahedron, the spin Hamiltonian becomes\n\n$$H_{\\rm cluster},\\ {\\rm SL}=(J_{\\rm cluster}/2)(\\sum_{\\ell}{\\bf S}_{\\ell})^{2}+J^{\\prime}\\sum_{\\ell}J_{x}\\tau_{j}^{x}\\tau_{k}^{x}-\\sum_{y-{\\rm links}\\ }J_{y}\\tau_{j}^{y}\\tau_{k}^{y}$$\n \n$$-\\sum_{z-{\\rm links}\\ }J_{z}\\tau_{j}^{z}\\tau_{k}^{z}$$\n\nwhere τ x,y,z are Pauli matrices, and x, y, z-links are defined in FIG. 1. It was shown by Kitaev1 that this spin-1/2 model can be mapped to a model with one Majorana fermion per site coupled to Ising gauge fields on the links. And as the Ising gauge flux has no fluctuation, the model can be regarded as, under each gauge flux configuration, a free Majorana fermion problem. The ground state is achieved in the sector of zero gauge flux through each hexagon. The Majorana fermions in this sector have Dirac-like gapless dispersion resembling that of graphene, as long as |Jx|, |Jy|, and |Jz| satisfy the triangular relation, sum of any two of them is greater than the third one1 . It was further proposed by Kitaev1 that opening of fermion gap by magnetic field can give the Ising vortices non-Abelian anyonic statistics, because the Ising vortex will carry a zero-energy Majorana mode, although magnetic field destroys the exact solvability.\n\nGreat efforts have been invested to better understand the properties of the Kitaev model. For example, several groups have pointed out that the fractionalized Majorana fermion excitations may be understood from the more familiar Jordan-Wigner transformation of 1D spin systems2,3. The analogy between the non-Abelian Ising vortices and vortices in p + ip superconductors has been raised in serveral works4–7. Exact diagonalization has been used to study the Kitaev model on small lattices8 . 
And perturbative expansion methods have been developed to study the gapped phases of the Kitaev-type models9 .\n\nMany generalizations of the Kitaev model have been", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0266.pdf" - }, - { - "text": "FIG. 1: The honeycomb lattice for the Kitaev model. Filled and open circles indicate two sublattices. x, y, z label the links along three different directions used in (1).\n\nderived as well. There have been several proposals to open the fermion gap for the non-Abelian phase without spoiling exact solvability4,6. And many generalizations to other(even 3D) lattices have been developed in the last few years10–16. All these efforts have significantly enriched our knowledge of exactly solvable models and quantum phases of matter.\n\nHowever, in the original Kitaev model and its later generalizations in the form of spin models, spin rotation symmetry is explicitly broken. This makes them harder to realize in solid state systems. There are many proposals to realized the Kitaev model in more controllable situations, e.g. in cold atom optical lattices17,18, or in superconducting circuits19. But it is still desirable for theoretical curiosity and practical purposes to realize the Kitaev-type models in spin rotation invariant systems.\n\nIn this paper we realize the Kitaev honeycomb lattice model as the low energy Hamiltonian for a spin rotation invariant system. The trick is not to use the physical spin as the spin in the Kitaev model, instead the spin-1/2 in Kitaev model is from some emergent two-fold degenerate low energy states in the elementary unit of physical system. This type of idea has been explored recently by Jackeli and Khaliullin20, in which the spin-1/2 in the Kitaev model is the low energy Kramers doublet created by strong spin-orbit coupling of t2g orbitals. 
In the model presented below, the Hilbert space of spin-1/2 in the Kitaev model is actually the two dimensional spin singlet sector of four antiferromagnetically coupled spin-1/2 moments, and the role of spin-1/2 operators(Pauli matrices) in the Kitaev model is replaced by certain combinations of Sj ·Sk [or the spin-chirality Sj ·(Sk ×Sℓ)] between the four spins.\n\nOne major drawback of the model to be presented is that it contains high order spin interactions(involves up to six or eight spins), thus is still unnatural. However it opens the possibility to realize exotic (exactly solvable) models from spin-1/2 Hamiltonian with spin rotation invariant interactions. We will discuss two possible routes to reduce this artificialness through controlled perturbative expansions, by coupling to optical phonons or by magnetic couplings between the elementary units.\n\nThe outline of this paper is as follows. In Section II we will lay out the pseudo-spin-1/2 construction. In Sec-\n\nFIG. 2: Left: the physical spin lattice for the model (8). The dash circles are honeycomb lattice sites, each of which is actually a cluster of four physical spins. The dash straight lines are honeycomb lattice bonds, with their type x, y, z labeled. The interaction between clusters connected by x, y, z bonds are the Jx,y,z terms in (8) or (9) respectively. Note this is not the 3-12 lattice used in Ref.9,10. Right: enlarged picture of the clusters with the four physical spins labeled as 1, . . . , 4. Thick solid bonds within one cluster have large antiferromagnetic Heisenberg coupling Jcluster.\n\ntion III the Kitaev model will be explicitly constructed using this formalism, and some properties of this construction will be discussed. In Section IV we will discuss two possible ways to generate the high order spin interactions involved in the construction of Section III by perturbative expansions. Conclusions and outlook will be summarized in Section V.\n\n# II. 
FORMULATION OF THE PSEUDO-SPIN-1/2 FROM FOUR-SPIN CLUSTER.\n\nIn this Section we will construct the pseudo-spin-1/2 from a cluster of four physical spins, and map the physical spin operators to pseudo-spin operators. The mapping constructed here will be used in later Sections to construct the effective Kitaev model. In this Section we will work entirely within the four-spin cluster, all unspecified physical spin subscripts take values 1, . . . , 4.\n\nConsider a cluster of four spin-1/2 moments(called physical spins hereafter), labeled by S1,...,4, antiferromagnetically coupled to each other (see the right bottom part of FIG. 2). The Hamiltonian within the cluster(up to a constant) is simply the Heisenberg antiferromagnetic(AFM) interactions,\n\n$$H_{\\rm cluster}=\\left(J_{\\rm cluster}/2\\right)\\left({\\bf S}_{1}+{\\bf S}_{2}+{\\bf S}_{3}+{\\bf S}_{4}\\right)^{2}\\tag{2}$$\n\nThe energy levels should be apparent from this form: one group of spin-2 quintets with energy 3Jcluster, three groups of spin-1 triplets with energy Jcluster, and two spin singlets with energy zero. We will consider large positive", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0266.pdf" - }, - { - "text": "Another note to take is that it is not necessary to have such a highly symmetric cluster Hamiltonian (2). The mappings to pseudo-spin-1/2 should work as long as the ground states of the cluster Hamiltonian are the two-fold degenerate singlets. One generalization, which conforms the symmetry of the lattice in FIG. 2, is to have\n\n$$H_{\\rm cluster}=(J_{\\rm cluster}/2)(r\\cdot{\\bf S}_{1}+{\\bf S}_{2}+{\\bf S}_{3}+{\\bf S}_{4})^{2}\\tag{11}$$\n\nwith Jcluster > 0 and 0 < r < 3. However this is not convenient for later discussions and will not be used.\n\nWe briefly describe some of the properties of (8). Its low energy states are entirely in the space that each of the clusters is a physical spin singlet (called cluster singlet subspace hereafter). 
Therefore physical spin correlations are strictly confined within each cluster. The excitations carrying physical spin are gapped, and their dynamics are 'trivial' in the sense that they do not move from one cluster to another. But there are non-trivial low energy physical spin singlet excitations, described by the pseudospins defined above. The correlations of the pseudo-spins can be mapped to correlations of their corresponding physical spin observables (the inverse mappings are not unique, c.f. TABLE I). For example τ x,y correlations become certain dimer-dimer correlations, τ z correlation becomes chirality-chirality correlation, or four-dimer correlation. It will be interesting to see the corresponding picture of the exotic excitations in the Kitaev model, e.g. the Majorana fermion and the Ising vortex. However this will be deferred to future studies.\n\nIt is tempting to call this as an exactly solved spin liquid with spin gap (∼ Jcluster), an extremely short-range resonating valence bond(RVB) state, from a model with spin rotation and time reversal symmetry. However it should be noted that the unit cell of this model contains an even number of spin-1/2 moments (so does the original Kitaev model) which does not satisfy the stringent definition of spin liquid requiring odd number of electrons per unit cell. Several parent Hamiltonians of spin liquids have already been constructed. See for example, Ref.24–27 .\n\n# IV. GENERATE THE HIGH ORDER PHYSICAL SPIN INTERACTIONS BY PERTURBATIVE EXPANSION.\n\nOne major drawback of the present construction is that it involves high order interactions of physical spins[see (8) and (9)], thus is 'unnatural'. In this Section we will make compromises between exact solvability and naturalness. We consider two clusters j and k and try to generate the Jx,y,z interactions in (7) from perturbation series expansion of more natural(lower order) physical spin interactions. 
Two different approaches for this purpose will be laid out in the following two Subsections. In Subsection IV A we will consider the two clusters as two tetrahedra, and couple the spin system to certain optical phonons, further coupling between the phonon modes\n\nFIG. 3: Illustration of the tetragonal to orthorhombic Q E 1 (top) and Q E 2 (bottom) distortion modes. (a) Perspective view of the tetrahedron. 1, . . . , 4 label the spins. Arrows indicate the motion of each spin under the distortion mode. (b) Top view of (a). (c)(d) Side view of (a).\n\nof the two clusters can generate at lowest order the desired high order spin interactions. In Subsection IV B we will introduce certain magnetic, e.g. Heisenberg-type, interactions between physical spins of different clusters, at lowest order(second order) of perturbation theory the desired high order spin interactions can be achieved. These approaches involve truncation errors in the perturbation series, thus the mapping to low energy effect Hamiltonian will no longer be exact. However the error introduced may be controlled by small expansion parameters. In this Section we denote the physical spins on cluster j(k) as j1, . . . , j4 (k1, . . . , k4), and denote pseudo-spins on cluster j(k) as ~τj (~τk).\n\n# A. Generate the High Order Terms by Coupling to Optical Phonon.\n\nIn this Subsection we regard each four-spin cluster as a tetrahedron, and consider possible optical phonon modes(distortions) and their couplings to the spin system. The basic idea is that the intra-cluster Heisenberg coupling Jcluster can linearly depend on the distance between physical spins. Therefore certain distortions of the tetrahedron couple to certain linear combinations of Sℓ · Sm. Integrating out phonon modes will then generate high order spin interactions. This idea has been extensively studied and applied to several magnetic materials28–34. More details can be found in a recent review by Tchernyshyov and Chern35. 
And we will frequently use their notations. In this Subsection we will use the representation (5) for τ z .\n\nConsider first a single tetrahedron with four spins 1, . . . , 4. The general distortions of this tetrahedron can be classified by their symmetry (see for example Ref.35). Only two tetragonal to orthorhombic distortion modes, QE 1 and QE 2 (illustrated in FIG. 3), couple to the pseudospins defined in Section II. A complete analysis of all modes is given in Appendix A. The coupling is of the", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0266.pdf" - }, - { - "text": "Figure 4.32. Spin Characteristics", - "page_start": 327, - "page_end": 327, - "source_file": "00-80T-80.pdf" - }, - { - "text": "Jcluster limit. So only the singlet sector remains in low energy.\n\nThe singlet sector is then treated as a pseudo-spin-1/2 Hilbert space. From now on we denote the pseudo-spin-1/2 operators as T = (1/2)~τ, with ~τ the Pauli matrices. It is convenient to choose the following basis of the pseudo-spin\n\n$$\\tau^{z}=\\pm1\\rangle=\\frac{1}{\\sqrt{6}}\\Big{(}|\\uparrow\\downarrow\\uparrow\\uparrow\\rangle+\\omega^{-\\tau^{z}}|\\uparrow\\downarrow\\uparrow\\rangle+\\omega^{\\tau^{z}}|\\uparrow\\uparrow\\uparrow\\rangle$$\n \n$$+|\\uparrow\\uparrow\\downarrow\\downarrow\\rangle+\\omega^{-\\tau^{z}}|\\uparrow\\downarrow\\uparrow\\downarrow\\rangle+\\omega^{\\tau^{z}}|\\uparrow\\downarrow\\uparrow\\uparrow\\rangle\\Big{)}\\tag{3}$$\n\nwhere ω = e 2πi/3 is the complex cubic root of unity, | ↓↓↑↑i and other states on the right-hand-side(RHS) are basis states of the four-spin system, in terms of S z quantum numbers of physical spins 1, . . . , 4 in sequential order. This pseudo-spin representation has been used by Harris et al. to study magnetic ordering in pyrochlore antiferromagnets21 .\n\nWe now consider the effect of Heisenberg-type interactions Sj · Sk inside the physical singlet sector. 
Note that since any Sj · Sk within the cluster commutes with the cluster Hamiltonian Hcluster (2), their action do not mix physical spin singlet states with states of other total physical spin. This property is also true for the spinchirality operator used later. So the pseudo-spin Hamiltonian constructed below will be exact low energy Hamiltonian, without truncation errors in typical perturbation series expansions.\n\nIt is simpler to consider the permutation operators Pjk ≡ 2Sj · Sk + 1/2, which just exchange the states of the two physical spin-1/2 moments j and k (j 6= k). As an example we consider the action of P34,\n\n$$P_{34}|\\tau^{z}=-1\\rangle=\\frac{1}{\\sqrt{6}}\\Big{(}|\\downarrow\\uparrow\\uparrow\\rangle+\\omega|\\uparrow\\uparrow\\downarrow\\rangle+\\omega^{2}|\\uparrow\\uparrow\\downarrow\\uparrow\\rangle$$\n \n$$+|\\uparrow\\uparrow\\downarrow\\downarrow\\rangle+\\omega|\\uparrow\\downarrow\\uparrow\\rangle+\\omega^{2}|\\uparrow\\downarrow\\uparrow\\downarrow\\rangle\\Big{)}$$\n \n$$=|\\tau^{z}=+1\\rangle$$\n\nand similarly P34|τ z = −1i = |τ z = +1i. Therefore P34 is just τ x in the physical singlet sector. A complete list of all permutation operators is given in TABLE I. 
We can choose the following representation of τ x and τ y ,\n\n$$\\begin{split}\\tau^{x}&=P_{12}=2\\mathbf{S}_{1}\\cdot\\mathbf{S}_{2}+1/2\\\\ \\tau^{y}&=(P_{13}-P_{14})/\\sqrt{3}=(2/\\sqrt{3})\\mathbf{S}_{1}\\cdot(\\mathbf{S}_{3}-\\mathbf{S}_{4})\\end{split}\\tag{4}$$\n\nMany other representations are possible as well, because several physical spin interactions may correspond to the same pseudo-spin interaction in the physical singlet sector, and we will take advantage of this later.\n\nFor τ z we can use τ z = −iτx τ y , where i is the imaginary unit,\n\n$$\\tau^{z}=-i(2/\\sqrt{3})(2{\\bf S}_{1}\\cdot{\\bf S}_{2}+1/2){\\bf S}_{1}\\cdot({\\bf S}_{3}-{\\bf S}_{4})\\quad(5)$$\n\n| physical spin | pseudo-spin | | | |\n| --- | --- | --- | --- | --- |\n| P12, and P34 τ | x | | | |\n| P13, and P24 | x + (√ −(1/2)τ | | 3/2)τ | y |\n| P14, and P23 | x − −(1/2)τ | √ ( | 3/2)τ | y |\n| −χ234, χ341, −χ412, and χ123 ( | √ z 3/4)τ | | | |\n\nTABLE I: Correspondence between physical spin operators and pseudo-spin operators in the physical spin singlet sector of the four antiferromagnetically coupled physical spins. Pjk = 2Sj ·Sk + 1/2 are permutation operators, χjkℓ = Sj ·(Sk ×Sℓ) are spin-chirality operators. Note that several physical spin operators may correspond to the same pseudo-spin operator.\n\nHowever there is another simpler representation of τ z , by the spin-chirality operator χjkℓ = Sj · (Sk × Sℓ). Explicit calculation shows that the effect of S2 ·(S3 × S4) is −( √ 3/4)τ z in the physical singlet sector. This can also be proved by using the commutation relation [S2 ·S3, S2 · S4] = iS2 · (S3 × S4). A complete list of all chirality operators is given in TABLE I. 
Therefore we can choose another representation of τ z ,\n\n$$\\tau^{z}=-\\chi_{234}/(\\sqrt{3}/4)=-(4/\\sqrt{3}){\\bf S}_{2}\\cdot({\\bf S}_{3}\\times{\\bf S}_{4})\\qquad(6)$$\n\nThe above representations of τ x,y,z are all invariant under global spin rotation of the physical spins.\n\nWith the machinery of equations (4), (5), and (6), it will be straightforward to construct various pseudo-spin-1/2 Hamiltonians on various lattices, of the Kitaev variety and beyond, as the exact low energy effective Hamiltonian of certain spin-1/2 models with spin-rotation symmetry. In these constructions a pseudo-spin lattice site actually represents a cluster of four spin-1/2 moments.\n\n#### III. REALIZATION OF THE KITAEV MODEL.\n\nIn this Section we will use directly the results of the previous Section to write down a Hamiltonian whose low energy sector is described by the Kitaev model. The Hamiltonian will be constructed on the physical spin lattice illustrated in FIG. 2. In this Section we will use j, k to label four-spin clusters (pseudo-spin-1/2 sites), the physical spins in cluster j are labeled as Sj1, . . . 
, Sj4.\n\nApply the mappings developed in Section II, we have the desired Hamiltonian in short notation,\n\n$$H=\\sum_{\\begin{subarray}{c}\\text{cluster}\\\\ \\text{cluster}\\end{subarray}}H_{\\text{cluster}}-\\sum_{x-\\text{links}}J_{x}\\tau_{j}^{x}\\tau_{k}^{x}\\tag{7}$$\n \n$$-\\sum_{\\begin{subarray}{c}y-\\text{links}\\end{subarray}}J_{y}\\tau_{j}^{y}\\tau_{k}^{y}-\\sum_{z-\\text{links}}J_{z}\\tau_{j}^{z}\\tau_{k}^{z}$$\n\nwhere j, k label the honeycomb lattice sites thus the fourspin clusters, Hcluster is given by (2), τ x,y,z should be replaced by the corresponding physical spin operators in (4) and (5) or (6), or some other equivalent representations of personal preference.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0266.pdf" - }, - { - "text": "#### NAVWEPS 00-ROT-80 OPERATING STRENGTH LIMITitTIONS\n\ntwisting deformation will be great enough to nullify the effect on aileron deflection and the aileron effectiveness will-be zero. Since speeds above this point create rolling moments opposite to the direction controlled, this point is termed the \"aileron reversal speed.\" Operation beyond the reversal speed would create an obvious control difficulty. Also, the extremely large twisting moments which produce loss of aileron effectiveness create large twisting moments capable of structural damage.\n\nIn order to prevent loss of aileron effectiveness at high airspeeds, the wing must have high torsional stiffness. This may be a feature difficult to accomplish in a wing of very thin section and may favor the use of inboard ailerons to reduce the twisted span length and effectively increase torsional stiffness. The use of spoilers for lateral control minimizes the twisting moments and alleviates the reversal problem.\n\nDivergcm is another phenomenon common to flight at high dynamic pressures. Like aileron reversal, it is an effect due to the interaction of aerodynamic forces and elastic deflections of the structure. 
However, it differs from aileron reversal in that it is a violent instability which produces immediate failure. Figure 5.5 illustrates the process of instability. If the surface is above the divergence speed, any disturbance precipitates this sequence. Any change in lift takes place at the aerodynamic center of the section. The change in lift ahead of the elastic axis produces a twisting moment and a consequent twisting deflection. The change in angle of attack creates greater lift at the ac., greater twisting deflection, more lift, etc., until failure occurs.\n\nAt low flight speeds where the dynamic pressure is low, the relationship between aerodynamic force buildup and torsional deflection is 'stable. However, the change in lift per angle of attack is proportional to 'vz but the structural torsional stiffness of the wing remains constant. This relationship implies that at some high speed, the aerodynamic force\n\nbuildup may overpower the resisting torsional stiffness and \"divergence\" will occur. The divergence speed of the surfaces must be sufficiently high that the airplane does not encounter this phenomenon within the normal operating envelope. Sweepback, short span, and high taper help raise the divergence speed.\n\nF/titter involves aerodynamic forces, inertia forces and the elastic properties of a surface. The distribution of mass and stiffness in a structure determine certain natural frequencies and modes of vibration. If the structure is subject to a forcing frequency near these natural frequencies, a resonant condition can result with an unstable oscillation. The aircraft is subject to many aerodynamic excitations while in operation and the aerodynamic forces at various speeds have characteristic properties for rate of change of force and moment. The aerodynamic forces may interact with the structure in a fashion which may excite or negatively damp the natural modes of the structure and allow flutter. 
Flutter must not occur within the normal flight operating envelope and the natural modes must be damped if possible or designed to occur beyond the limit speed. A'typical flutter mode is illus- 'trated in figure 5.5.\n\nSince the problem is one of high speed flight, it is generally desirable to have 'very high natural frequencies and flutter speeds well above the normal operating speeds. Any change of stiffness or mass distribution will alter the modes and frequencies and thus allow a change in the flutter speeds. If the aircraft is not properly maintained and excessive play and flexibility exist, flutter could occur at flight speeds below the limit airspeed.\n\nCompres&ility pmblems may define the limit airspeed for an airplane in terms of Mach number. The supersonic airplane may experience a great decay of stability at some high Mach number or encounter critical structural or engine inlet temperatures due to aerodynamic heating. The transonic airplane at an excessive", - "page_start": 359, - "page_end": 359, - "source_file": "00-80T-80.pdf" - }, - { - "text": "excessive angles of attack. Of course, a low speed airplane could be: designed to be spinproof by making it stallproof. By limiting the amount of control deflection, the airplane may not have the longitudinal control power to trim to maximum lift angle of attack. Such a provision may be possible for certain light planes and commercial aircraft but would create an unrealistic and impractical limitation on the utility of a military airplane.\n\nThe modern high speed airplane configuration is typified by low aspect ratio, swept wing planforms with relatively large yaw and pitch inertia. The aerodynamic characteristics of such a configuration are shown in figure 4.32. The lift curve (C, versus U) is quite shallow at high angles of attack and maximum lift is not clearly defined. When this type of airplane is provided a rolling motion at high angles of attack, relatively small changes in C, take place. 
When this effect is combined with the relatively short span of this type airplane, it is apparent that the wing autorotation contribution will be quite weak and will not be a predominating pro-spin moment. The relatively large changes in drag coefficient with rolling motion imply .a predominance of yaw for the spin of the high speed airplane configuration.\n\nActually, various other factors contribute to the predominating yaw tendency for the spin of the modern airplane configuration. The static directional stability deteriorates at high angles of attack and may be so weak that extemely large yaw displacements result. In certain instances, very high angles of attack may bring such a decay in directional stability that a \"slice\" or extreme yaw displacement takes place before a true spin is apparent. At these high angles of attack, the adverse yaw due to roll and aileron deflection can be very strong and create large yaw displacements of the airplane prior to realizing a stall.\n\nThe aircraft with the relatively large, long fuselage can exhibit a significant moment contribution from the fuselage alone. The cross flow pattern on the fuselage at high angles of\n\nattack is capable of producing pro-spin moments of considerable magnitude which contribute to the self-sustaining nature of the spin. Also, the large distributed mass of the fuselage in rolling-yawing rotation contributes to inertia moments which flatten the spin and place the aircraft at extreme angles of attack.\n\nThe spin recovery of the modern high speed airplane involves principles which are similar to those of the spin recovery of the conventional airplane. However, the nature of the spin for the modern configuration may involve specific differences in technique necessary to reduce the sideslip and angle of attack. The use of opposite rudder to control the sideslip and effect recovery will depend on the effectiveness of the rudder when the airplane is in the spin. 
At high positive angles of attack and high sideslip the rudder effectiveness may be reduced and additional anti-spin moments must be provided for rapid recovery. The deflection of ailerons into the spin reduces the autorotation rolling moment and can produce adverse yaw to aid the rudder yawing moment in effecting recovery.\n\nThere may be many other specific differences in the technique necessary to effect spin recovery . The effectiveness of the rudder during recovery may be altered by the position of elevators or horizontal tail. Generally, full aft stick may be necessary during the initial phase of recovery to increase the effectiveness of the rudder. The use of power during the spin recovery of a propeller powered airplane may or may not aid recovery depending on the specific airplane and the particular nature of the slipstream effects. The use of power during the spin recovery of a jet powered airplane induces no significant or helpful flow but does offer the possibility of a severe compressor stall and adverse gyroscopic moments. Since the airplane is at high angle of attack and sideslip, the flow at the inlet may be very poor and the staI1 limits considerably reduced. These items serve to point out possible differences in technique required for various configurations. The spin recovery specific for", - "page_start": 328, - "page_end": 328, - "source_file": "00-80T-80.pdf" - }, - { - "text": "# Interplay among helical order, surface effects and range of interacting layers in ultrathin films.\n\nF. Cinti(1,2,3), A. Rettori(2,3), and A. Cuccoli(2)\n\n(1) Department of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2J1\n\n(2)CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (FI), Italy. and\n\n(3)CNR-INFM S3 National Research Center, I-41100 Modena, Italy\n\n(Dated: June 8, 2022)\n\nThe properties of helical thin films have been thoroughly investigated by classical Monte Carlo simulations. 
The employed model assumes classical planar spins in a body-centered tetragonal lattice, where the helical arrangement along the film growth direction has been modeled by nearest neighbor and next-nearest neighbor competing interactions, the minimal requirement to get helical order. We obtain that, while the in-plane transition temperatures remain essentially unchanged with respect to the bulk ones, the helical/fan arrangement is stabilized at more and more low temperature when the film thickness, n, decreases; in the ordered phase, increasing the temperature, a softening of the helix pitch wave-vector is also observed. Moreover, we show also that the simulation data around both transition temperatures lead us to exclude the presence of a first order transition for all analyzed sizes. Finally, by comparing the results of the present work with those obtained for other models previously adopted in literature, we can get a deeper insight about the entwined role played by the number (range) of interlayer interactions and surface effects in non-collinear thin films.\n\nPACS numbers: 64.60.an,64.60.De,75.10.Hk,75.40.Cx,75.70.Ak.\n\n# I. INTRODUCTION\n\nThe study of low dimensional frustrated magnetic systems1 still raises great interest, both in consequence of theoretical aspects, related to their peculiar critical properties2 , and in view of possible technological applications3 . Indeed, beside conventional ferromagnetic or antiferromagnetic phase transitions, in many new materials other nontrivial and unconventional forms of ordering have been observed4,5. A quantity of particular interest in this context is the spin chirality, an order parameter which turned out to be extremely relevant in, e.g., magnetoelectric materials6 , itinerant MnSi7 , binary compounds as FeGe8 , glass transition of spins9 , and XY helimagnets, as Holmium, Terbium or Dysprosium10. 
In the latter case, a new universality class was predicted because a Z2 × SO(2) symmetry is spontaneously broken in the ordered phase2 : In fact, when dealing with such systems, in addition to the SO(2) symmetry of the spin degrees of freedom S~ i , one has to consider also the Z2 symmetry of the spin chirality κij ∝ h S~ i × S~ j iz .\n\nFor these rare-earth elements, the development of new and sophisticated experimental methods11 has allowed to obtain ultra-thin films where the non-collinear modulation is comparable with the film thickness. Under such conditions the lack of translational invariance due to the presence of surfaces results decisive in order to observe a drastic change of the magnetic structures12. Recent experimental data on ultra-thin Holmium films13 have been lately interpreted and discussed14,15 on the basis of detailed classical Monte Carlo (MC) simulations of a spin Hamiltonian, which is believed to give a realistic modeling of bulk Holmium. Such Hamiltonian, proposed by Bohr et al.16, allows for competitive middle-range interactions by including six different exchange constants along the c crystallographic axis, and gives a helix pitch wave-vector Qz such that Qzc ′ ≃ 30◦ , where c ′ = c/2 is the distance between nearest neighboring spin layers parallel to the ab crystallographic planes, henceforth denoted also as x − y planes, while z will be taken parallel to c. For n > 16, n being the number of spin layers in the film, a correct bulk limit is reached, while for lower n the film properties are clearly affected by the strong competition among the helical pitch and the surface effects, which involve the majority of the spin layers. In the thickness range n = 9 − 16, i.e. 
right for thickness values comparable with the helical pitch, three different magnetic phases emerged, with the high-temperature, disordered, paramagnetic phase and the low-temperature, long-range ordered one separated by an intriguing, intermediatetemperature block phase, where outer ordered layers coexist with some inner disordered ones, the phase transition of the latter eventually displaying the signatures of a Kosterlitz-Thouless one. Finally, for n ≤ 7 the film collapses once and for all to a quasi-collinear order.\n\nThe complex phase diagram unveiled by such MC simulations awaken however a further intriguing question: to what extent the observed behavior may be considered a simple consequence of the competition between helical order and surface effects? I.e., is it just a matter of having such a competition or does the range of interactions also play a relevant role? Indeed, when the range of the interactions is large enough we have a greater number of planes which can be thought of as \"surface planes\", i.e. for which the number of interacting neighbors are significantly reduced with respect to the bulk layers; therefore, we expect that the larger the interaction range, the stronger should be the surface effects. But, at the same time, the same modulation of the magnetic order can", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0510.pdf" - }, - { - "text": "center of gravity aft of the main wheels a balancing load on the tail wheel must be produced toward the center of turn. When the tail wheel is free to swivel, the equilibrium of the turn requires a control force opposite to the direction of turn-i.e.. control force instability. The inherent stability problem exists because the center of gravity is aft of the point where the main side forces are developed. 
This condition is analogous to the case of static longitudinal stability with the center of gravity aft of the neutral point.\n\nThe conventional tail wheel configuration has this basic instability or ground loop tendency which must be stabilized by the pilot. At high rolling speeds where aerodynamic forces are significant, the aerodynamic directional stability of the airplane resists the ground looping tendency. The most likely times for a ground loop exist when rolling speeds are not high enough to provide a contribution of the aerodyhamic forces. When the tail wheel is free to swivel or when the normal force on the tail wheel is small, lack of pilot attention can allow the ground loop to take place.\n\nThe tricycle landing gear configuration has an inherent stability d,ue to the relative position of the main wheels and the center of gravity. Centrifugal force produced by a turn is balanced by the side force on the main wheels and a side force on the nose wheel in the direction of turn. Note that the freeing the nose wheel to swivel produces moments which bring the aircraft out of the turn. Thus, the tricycle configuration has a basic stability which.is given evidence by control displacement and a wheel side force in the direction of turn. Because of the contrast in stability, the tricycle configuration is much less difficult to maneuver than the tail wheel configuration and does not provide an inherent ground loop tendency. However, a steerable nose wheel is usually necessary to provide satisfactory maneuvering capabilities.\n\nThe bicycle configuration of landing gear has stability characteristics more like the automobile. If directional control is accomplished with the front wheels operated by power controls, no stability problem exists at low speeds. A problem can exist when the airplane is at high speeds because of a distribution of normal force being different from the ordinary static weight distribution. 
If the airplane is held onto the runway at speeds well above the normal takeoff and landing speeds, the front wheels carry a greater than ordinary amount of normal force and a tendency for instability exists. However, at these same high speeds the rudder is quite powerful and the condition is usually well within control.\n\nThe basically stable nature of the tricycle and bicycle landing gear configurations is best appreciated by the ease of control and ground maneuvering of the airplane. Operation of a conventional tail wheel configuration after considerable experience with tricycle cohfigurations requires careful consideration af the stability that must be furnished by the pilot during ground maneuvering.\n\n#### SPINS AND PROBLEMS OP SPIN RECOVERY\n\nThe motion of an airplane in a spin can involve many complex aerodynamic and inertia forces and moments. However, there are certain fundamental relationships regarding spins and spin recoveries with which all aviators should be familiar. The spin differs from a spiral dive in that the spin always involves flight at high angle of attack while the spiral dive involves a spiral motion of the airplane at relatively low angle of attack.\n\nThe stall characteristics and stability of the airplane at high lift coefficients are important in the initial tendencies of the airplane. As previously mentioned, it is desirable to have the wing initiate stall at the root first rather than tip first. 
Such a stall pattern prevents the undesirable rolling moments at high lift coefficients, provides suitable stall", 
2009/FE07/070.", - "page_start": 58, - "page_end": 58, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### STEP 1 – SELECT YOUR COURSE\n\n| Oxbridge Academy Short Course: Marketing Management |\n| --- |\n| ADV101 |\n\nBefore you start filling in the registration form, you need to choose your course. Once you've identified the course that you would like to study, remember to check that you meet the entry requirements.\n\nYou can find the course name and course code for your chosen course on the relevant detailed course information page on our website. Have a look at the example in the screenshot below (the course name and course code are circled in red):\n\n| 021 110 0200 |\n| --- |\n| HOME ABOUT US COURSES s excellence in education |\n| Oxbridge Academy Short Course: Marketing Management |\n| Home / Oxbridge Academy snore |\n| This short course is designed to introduce you to the field of marketing management. It will equip you with the knowledge and skills you need to define the marketing concept, apply marketing decision-making, and explain marketing opportunities. |\n| Course code: |\n| ADV101 |\n| Accreditation status: |\n| This is an Oxbridge Academy Skills Course. |\n\nPlease make sure to check the accreditation status of your chosen course. Some of our courses are non-credit bearing skills development courses, which are neither accredited by external bodies nor registered on the NQF. Please go to our website: *oxbridgeacademy.co.za* for more information about our skills development courses.", - "page_start": 21, - "page_end": 21, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# CHAPTER 5:\n\n## TIPS FOR FILLING IN YOUR COLLEGE REGISTRATION FORM\n\nApplying for college (www.oxbridgeacademy.co.za/enrol-now/) can be a daunting experience. 
Not only do you need to choose a course, but you also need to make sure that you:\n\n- meet the entry requirements\n- meet the deadlines\n- fill in the forms correctly\n- send the forms to the right address\n- include all the necessary attachments\n\nTo make the college registration process easier for you, we've compiled a comprehensive guide on how to register at Oxbridge Academy (www.oxbridgeacademy.co.za/enrol-now/). The guide also includes general tips that will be relevant to the application and registration processes at other colleges.\n\n#### **There are 4 steps you need to follow when you want to register as a student at Oxbridge Academy:**\n\n- **1.** Select Your Course\n- **2.** Fill in Your Student Details\n- **3.** Select Your Delivery Option\n- **4.** Pay Your Registration Fee and Send in Your Form", - "page_start": 20, - "page_end": 20, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## A Summary of the Registration Process at Oxbridge Academy\n\n#### SEND YOUR REGISTRATION FORM\n\nSend your registration form to the registrations office at Oxbridge Academy via one of the following channels:\n\nFax: 086 262 5550 Post: PO Box 12723, Die Boord, 7613 E-mail: registrar@oxbridgeacademy.co.za\n\n#### FILL IN THE REGISTRATION FORM\n\n**2**\n\nThe registration form follows an easy-to-complete four step layout.\n\n#### IF YOU ARE REGISTERING FOR an ICB, or NATED COURSE\n\nmake sure to indicate your preferred exam centre.\n\n**3**\n\nAs soon as your details have been captured on our system you will receive confirmation of your registration via e-mail or SMS\n\n#### ATTACH THE FOLLOWING DOCUMENTS **6**\n\n- 1. Copy of your ID\n- 2. Proof of highest grade passed\n- 3. Proof of other qualifications\n- 4. 
Proof of payment\n\n**5**\n\n#### IF YOU ARE UNDER 18, OR IF YOU ARE UNEMPLOYED\n\nmake sure that your parent/guardian/guarantor signs the form.\n\n**4**\n\nPAY YOUR REGISTRATION FEE", - "page_start": 26, - "page_end": 26, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### ASSIGNMENT\n\n- 1. Identify the verb in the following sentence:\nThe grey elephant drinks water from the largest lake in Africa.\n\n- 2. Identify the collective noun in the following sentence:\nThe board of directors voted in favour of the decision.\n\n- 3. Correct the punctuation in the following sentence:\nAnthea will you please buy bread milk and eggs when you go to the shop.\n\n- 4. Choose the correct word:\nCharles was accepted/excepted into the engineering studies course at Oxbridge Academy.\n\n- 5. Choose the correct word:\nIts/It's time to go home now.\n\n- 6. Choose the correct word:\nThey were late for work, because there/their train was delayed.\n\n7. Choose the correct word:\n\nYou're/Your going to write your exam next week.", - "page_start": 54, - "page_end": 54, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### IN THIS E-BOOK, WE'LL BE HELPING YOU TO:\n\n- Develop your basic English language skills.\n- Improve your English grammar.\n\nApply your language and communication skills in a business contexT. (www.oxbridgeacademy.co.za/find-a- course/business-administrationcourses/)\n\n> *\"Grammar is a litmus test. If job hopefuls can't distinguish between 'to' and too', their applications go into the bin\"*\n\nKyle Wiens, CEO of iFixit\n\n*\"Grammar often seems to be a low priority in education. 
Are schools undervaluing grammar, given that employers may rule out applications with sloppy writing?\"*\n\nThe New York Times", 
za/distance-learning/), where you don't have any face-to-face interaction with lecturers, you will need to rely on your tutors for the necessary academic support.", - "page_start": 32, - "page_end": 32, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "- Only include necessary attachments with your e-mails.\nRemember that many e-mail clients have a size limit on attachments, and that attachments over a certain size may cause your e-mail to be blocked.\n\n#### • Keep it professional.\n\nDon't pass on spam e-mails, chain letters, or inappropriate jokes, and don't spread gossip via e-mail.", - "page_start": 53, - "page_end": 53, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# CHAPTER 8:\n\n### TIPS FOR COMPLETING YOUR WRITTEN ASSIGNMENTS\n\nDepending on which course you study, you will either be assessed by means of written assignments, or through a combination of written assignments and exams. Assignments not only help to deepen your understanding of the work, but they often also count toward your final mark.\n\nIt is therefore important that you put effort into your assignments, and that you complete them to the best of your ability.\n\nWe realise that, like many other students, you might be unsure of how to go about completing your assignments, or that you might be afraid of failure.\n\nIf you are an Oxbridge Academy student, we'd like you to know that we are here to help you every step of the way, and that we will give you the opportunity to resubmit your assignments if you don't achieve a pass mark the first time around.", - "page_start": 36, - "page_end": 36, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "sg247938.pdf", - "query": "When is it necessary to use a host multipathing driver for load balancing?", - "target_page": 340, - "target_passage": "For load balancing and access redundancy on the host side, the use of a host multipathing driver is 
required in the following situations: Protection from fabric link failures, including port failures on the IBM Spectrum Virtualize system nodes Protection from a host HBA failure (if two HBAs are in use) Protection from fabric failures if the host is connected through two HBAs to two separate fabrics Provide load balancing across the host HBA", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "*Figure 3-4 Overview of four-path host zoning*\n\nWhen possible, use the minimum number of paths that are necessary to achieve a sufficient level of redundancy. For the Storwize V7000 environment, no more than four paths per I/O Group are required to accomplish this layout.\n\nAll paths must be managed by the multipath driver on the host side. Make sure that the multipath driver on each server can handle the number of paths required to access all volumes mapped to the host.\n\nFor hosts that use four HBAs/ports with eight connections to an I/O Group, use the zoning schema that is shown in Figure 3-5 on page 57. You can combine this schema with the previous four-path zoning schema.", - "page_start": 77, - "page_end": 77, - "source_file": "sg247938.pdf" - }, - { - "text": "When configuring multiple masters, the cluster installation process supports the native HA method. This method uses the native HA master capabilities that are built into OpenShift Container Platform and can be combined with any Load Balancing solution.\n\nIf a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed that you pre-configured an external load balancing solution of your choice to balance the master API (port 8443) on all master hosts.\n\n**Note:** The HAProxy Load Balancer is intended to demonstrate the API server's HA mode and is not recommended for production environments. 
If you are deploying to a cloud provider, Red Hat recommends deploying a cloud-native TCP-based Load Balancer or take other steps to provide a highly available load balancer.\n\n#### **DNS**\n\nDNS service is an important component in the Red Hat OpenShift Container Platform environment. Regardless of the provider of DNS, an organization is required to have certain records in place to serve the various Red Hat OpenShift Container Platform components.\n\nConsidering the Load Balancer values for the Red Hat OpenShift Container Platform master service and infrastructure nodes running router Pods are known beforehand, entries must be configured into the DNS before starting the deployment procedure.\n\n#### *DNS for OpenShift applications*\n\nApplications that are served by OpenShift are accessible by the router on ports 80/TCP and 443/TCP. The router uses a wildcard record to map all host names under a specific sub domain to the same IP address without requiring a separate record for each name. This process allows Red Hat OpenShift Container Platform to add applications with arbitrary names if they are under that sub domain.\n\nFor example, a wildcard record for *.apps.example.com causes DNS name lookups for app1.apps.example.com and app2.apps.example.com to both return the same IP address: 9.109.x.y. All traffic is forwarded to the OpenShift Infrastructure Nodes (Routers). The Routers examine the HTTP headers of the queries and forward them to the correct destination.\n\nWith a load-balancer host address of 9.109.x.y, the wildcard DNS record for *.apps.example.com resolves IP address 9.109.x.y.\n\nA simple DNS round-robin resolution can be used to spread traffic across infrastructure nodes.\n\nFor production environments, it is recommended to have more advanced load balancing capabilities to distribute the traffic among the OpenShift Routers. 
In those cases, an external Load Balancer is used.\n\n#### **OpenShift Software Defined Networking (SDN)**\n\nRed Hat OpenShift Container Platform offers the ability to specify how pods communicate with each other. This process can be done by using Red Hat provided Software-defined networks (SDN) or a third-party SDN.\n\nDeciding on the suitable internal network for an Red Hat OpenShift Container Platform step is a crucial step. Unfortunately, no correct answer exists regarding the suitable pod network to chose because this choice varies based on the specific scenario requirements for how a Red Hat OpenShift Container Platform environment is to be used.", - "page_start": 109, - "page_end": 109, - "source_file": "sg248459.pdf" - }, - { - "text": "# **8.1 Host attachment overview**\n\nThe IBM Storwize V7000 system supports a wide range of host types (both IBM and non-IBM). This feature makes it possible to consolidate storage in an open systems environment into a common pool of storage. Then, you can use and manage the storage pool more efficiently as a single entity from a central point on the storage area network (SAN).\n\nThe ability to consolidate storage for attached open systems hosts provides the following benefits:\n\n- -Easier storage management\n- -Increased utilization rate of the installed storage capacity\n- -Advanced Copy Services functions offered across storage systems from separate vendors\n- -Only one multipath driver is required for attached hosts\n\nHosts can be connected to Storwize V7000 system using any of the following protocols:\n\n- -Fibre Channel (FC)\n- -Fibre Channel over Ethernet (FCoE)\n- -Internet Small Computer System Interface (iSCSI)\n- iSCSI Extensions over RDMA (iSER)\n- -Non-Volatile Memory Express (NVMe)\n\nHosts that connect to the Storwize V7000 system by using fabric switches that use FC or FCoE protocol must be zoned correctly, as described in 3.6, \"SAN configuration planning\" on page 50.\n\nHosts that connect to the 
Storwize V7000 system with iSCSI protocol must be configured correctly, as described in Chapter 3, \"Planning\" on page 43.\n\n**Note:** Certain host operating systems can be directly connected to the Storwize V7000 system without the need for FC fabric switches. For more information, see this page of the IBM System Storage Interoperation Center (SSIC).\n\nFor load balancing and access redundancy on the host side, the use of a host multipathing driver is required in the following situations:\n\n- - Protection from fabric link failures, including port failures on the IBM Spectrum Virtualize system nodes\n- -Protection from a host HBA failure (if two HBAs are in use)\n- - Protection from fabric failures if the host is connected through two HBAs to two separate fabrics\n- -Provide load balancing across the host HBAs\n\nFor more information about various host operating systems and versions that are supported by IBM Storwize V7000, see this page of the IBM System Storage Interoperation Center (SSIC).\n\nFor more information about how to attach various supported host operating systems to IBM Storwize V7000, see IBM Knowledge Center.\n\nIf your host operating system is not in SSIC, you can ask an IBM representative to submit a special request for support by using the Storage Customer Opportunity REquest (SCORE) tool for evaluation (log in required).", - "page_start": 339, - "page_end": 339, - "source_file": "sg247938.pdf" - }, - { - "text": "#### **5.3.2 Design considerations**\n\nThis section discusses some design considerations.\n\n#### **PowerVC considerations**\n\nIBM PowerVC is an advanced virtualization and cloud management offering for IBM Power Systems. Built on OpenStack, it provides comprehensive virtualization management and cloud deployments for IBM AIX, IBM i, and Linux virtual machines (VMs). PowerVC simplifies the lifecycle management of the virtualization for Power Systems. 
It includes a deep integration with IBM PowerVM virtualization technologies.\n\n#### *Availability zones (host groups)*\n\nHost groups, also known as *host aggregates* in OpenStack's terminology, allow you to create virtual boundaries around a group of hosts. It is a logical group of hosts, regardless of any features that they might or might not have in common. For example, the hosts feature the same architecture, network configuration, or storage, or hosts in the same rack or data center.\n\nWhen a host group is created by using the user interface, an availability zone with the same name is created and assigned to the host group. PowerVC also supports the standard OpenStack APIs for host groups and availability zones.\n\nHost groups (availability zones) include the following features:\n\n- -Every host must be in a host group\nAny hosts that do not belong to a user-defined host group are members of the default host group. The default host group cannot be deleted.\n\n- -Virtual machines are kept within the host group\nA virtual machine can be deployed to a specific host or to a host group. After deployment, that virtual machine must always be migrated or remote restarted within the host group.\n\n- -Placement policies are associated with host groups\nEvery host within a host group is subject to the host group's placement policy. The default placement policy is striping.\n\n- -Automated Remote Restart\nIf enabled, the PowerVC monitors hosts for failure by using the Platform Resource Scheduler (PRS) HA service. If a host fails, PowerVC automatically remote restarts the VMs from the failed host to another host within a host group.\n\n- -Dynamic Resource Optimizer (DRO)\nIf enabled, DRO continuously monitors your cloud environment's usage. You can specify that DRO monitors CPU usage or available memory. When a host is found to be overused, the DRO attempts to correct the situation by performing the action that you specified. 
It can migrate VMs to another host within a host group or, when applicable, work with Capacity on Demand (CoD) to activate mobile cores.\n\n**Note:** A host can belong only to one host group (availability zone).", - "page_start": 96, - "page_end": 96, - "source_file": "sg248459.pdf" - }, - { - "text": "# **8.2 Host clusters**\n\nIBM Storwize V7000 software supports host clusters starting with V7.7.1 and later. The host cluster allows a user to create a group of hosts to form a cluster. A cluster is treated as a single entity, which allows multiple hosts to access the same volumes.\n\nVolumes that are mapped to a host cluster are assigned to all members of the host cluster with the same SCSI ID.\n\nA typical user case is to define a host cluster that contains all of the WWPNs belonging to the hosts participating in a host operating system-based cluster, such as IBM PowerHA®, and Microsoft Cluster Server (MSCS).\n\nThe following new commands were added to deal with host clusters:\n\n- lshostcluster\n- lshostclustermember\n- lshostclustervolumemap\n- mkhost (modified to put host in a host cluster on creation)\n- rmhostclustermember\n- rmhostcluster\n- rmvolumehostclustermap\n\n# **8.3 N-Port Virtualization ID support**\n\nThe usage model for the Storwize V7000 is based on a two-way active/active node model. This is a pair of distinct control modules that share active/active access for any specific volume. These nodes each have their own Fibre Channel worldwide node name (WWNN). Therefore, ports that are presented from each node have a set of worldwide port names (WWPNs) that are presented to the fabric.\n\nTraditionally, if one node fails or is removed for some reason, the paths that are presented for volumes from that node go offline. In this case, it is up to the native O/S multipathing software to fail over from using both sets of WWPN to only those that remain online. 
Although this process is what multipathing software is designed to do, occasionally it can be problematic, particularly if paths are not seen as coming back online for some reason.\n\nStarting with Storwize V7000 V7.7, the system can be enabled into N_Port ID Virtualization (NPIV) mode. When NPIV mode is enabled on the Storwize V7000 system, ports do not come online until they are ready to service I/O, which improves host behavior around node unpends. In addition, path failures because of an offline node are masked from hosts and their multipathing driver do not need to perform any path recovery.\n\nWhen NPIV is enabled on Storwize V7000 nodes, each physical WWPN reports up to four virtual WWPNs, as listed in Table 8-1.\n\n| NPIV port | Port description |\n| --- | --- |\n| Primary Port | This is the WWPN that communicates with backend storage and can |\n| | be used for node to node traffic (local or remote). |\n| Primary scsi Host Attach | This is the WWPN that communicates with hosts. It is a target port |\n| Port | only. This is the primary port, so it is based on this local node's |\n| | WWNN. |\n\n*Table 8-1 IBM Spectrum Virtualize NPIV Ports*", - "page_start": 340, - "page_end": 340, - "source_file": "sg247938.pdf" - }, - { - "text": "- - For IBM i, depending on your retrieval patterns and system hardware configuration, it might be advantageous to *not* store a duplicate set of documents in the Content Manager OnDemand cache when you use ASM because ASM might already be using disk space. If the application group uses ASM, caches the data, and specifies the migration of data at load time, two copies of the data are stored during the load. One copy is stored in cache, and one copy is stored in the ASMREQUEST directory.\nTo avoid storing a duplicate set of documents in cache for non-AFP data, change Cache Data to No on the Storage Management tab of your application group definition. 
To avoid storing a duplicate set of documents in cache for AFP data, you might change Document Data to No Cache but leave Resource Data in cache for faster retrieval.\n\n- - For IBM i, every user that loads data must have a home directory. If users do not have a home directory, the temporary files are stored in the root directory of the integrated file system (IFS).\n- - If the data source is on a remote system, you can load the data into Content Manager OnDemand on the remote system and directly store the export data to the specified Content Manager OnDemand library and object server.\n\nOr, if the data source is on a remote system, you also can upload the data to the specified Content Manager OnDemand server through FTP and then load the data on the selected Content Manager OnDemand system.\n\n- - For Multiplatforms and z/OS, all file systems must be dedicated file systems that are mounted on their own mount points.\n- - For z/OS, when you load PDF reports (by using the PDF Indexer), placing the input report in the HFS or zFS causes the load to run nearly 50 times faster that compared to the input report that is placed in a VSAM file.\n\n# **13.2.3 Load testing**\n\nThe goal of load testing is to verify that, under stressful system conditions, the required amount of data can be loaded into the Content Manager OnDemand system within a time window.\n\nA general approach to load testing a system is described:\n\n- - Parallel loads: Run a single load and measure the load throughput. If the throughput does not meet the requirements, run two loads in parallel and measure the throughput. While the loads are run, collect system statistics to determine the system resources that are being used and any potential bottlenecks. Tune or acquire additional system resources as needed. 
Progressively increase the number of parallel loads until the required throughput is met.\n**Note:** For most users, a single load process meets the ingestion throughput requirements.\n\n- - Data types and exits: A different data type, and whether an exit is started during the load process, affects the load throughput. Test samples of the different types that represent the general loads.", - "page_start": 326, - "page_end": 326, - "source_file": "sg246915.pdf" - }, - { - "text": "#### NAVWEPS oo-EOT-80 OPERATING STRENGTH LIMITATIONS\n\n# GENERAL DEFINITIONS AND STRUC-TURAL REQUIREMENTS\n\nThere are strength requirements which ate common to all aircraft. In general, these requirements can be separated into three particular areas. These are detailed in the following discussion.\n\n#### STATIC STRENGTH\n\nThe static strength requirement is the consideration given to the effect of simple static loads with none of the ramifications of the repetition or cyclic variation of loads. An important reference point in the static strength requirement is the \"limit load\" condition. When the aircraft is at the design conliguration, there will be some maximum of load which would be anticipated from the mission requirement of the airplane. For example, a fighter or attack type aircraft, at the design configuration, may encounter a very peak load factor of 7.5 in the accomplishment of its mission. Of course, such an aircraft may be subject to load factors of 3, 4, 5, 6, 1, etc., but no more than 7.5 should be required to accomplish the mission. Thus, the limit load condition is the maximum of loads anticipated in normal operation of the aircraft, Various types of aircraft will have different limit load factors according to the primary mission of the aircraft. Typical values are tabulated below:\n\n| Type of aircraft: | hbi\"< limi, hi,orror |\n| --- | --- |\n| Fighter or attack. | 7.5 |\n| Trainer. | 7.5 |\n| T ransport, patrol, antisubmarine. 
| 3.0 or 2.5 |\n\nOf course, these examples are quite general and it is important to note that there may be variations according to specific mission requirements.\n\nSince the limit load is the maximum of the normally anticipated loads, the aircraft structure must withstand this load with no ill effects. Specifically, the primary structure of the aircraft should experience no objectionable\n\npermanent deformation when subjected to the limit load. In fact, the components must withstand this load with a positive margin. This requirement implies that the aircraft should withstand successfully the limit load and then return to the original unstressed shape when the load is removed. Obviously, if the aircraft is subjected to some load which is in excess of the limit load, the overstress may incur an objectionable permanent deformation of the primary structure and require replacement of the damaged parts.\n\nMany different flight and ground load conditions must be considered to define the most critical conditions for the structural components. In addition to positive lift flight, negative lift flight must be considered. Also, the effect of flap and landing gear configuration, gross weight, flight Mach number, symmetry of loading, c.g. positions, etc., must be studied to account for all possible sources of critical loads. To verify the capability of the structure, ground static tests are conducted and flight demonstrations are required.\n\nTo provide for the rare instances of flight when a load greater than the limit is required to prevent a disaster, an \"ultimate factor of safety\" is provided. Experience has shown that an ultimate factor of safety of 1.5 is sufficient for piloted aircraft. Thus, the aircraft must be capable of withstanding a load which is 1.5 times the design limit load. The primary structure of the aircraft must withstand the \"ultimate load\" (1.5 times limit) without failure. 
Of course, permanent deformation may be expected with this \"overstress\" but no actual failure of the major load-carrying components should take place at ultimate load. Ground static tests are necessary to verify this capability of the structure.\n\nAn appreciation of the static strength requirements may be obtained by inspection of the basic properties of a typical aircraft metal. Figure 3.1 illustrates the typical static strength properties of a metal sample by a plot of applied stress versus resulting strain. At low values", 
According to the product documentation, there are no likely field-replaceable unit breakages or other causes.\n\nThe source of this error is most often a fabric problem or a problem in the network path between your partners. When you receive this error, check your fabric configuration for zoning of more than one host bus adapter (HBA) port for each node per I/O Group if your fabric has more than 64 HBA ports zoned. The suggested zoning configuration for fabrics is one port for each node per I/O Group per fabric that is associated with the host.\n\nFor those fabrics with 64 or more host ports, this suggestion becomes a rule. Therefore, you see four paths to each volume discovered on the host because each host needs to have at least two FC ports from separate HBA cards, each in a separate fabric. On each fabric, each host FC port is zoned to two IBM SAN Volume Controller node ports where each node port comes from a different IBM SAN Volume Controller node. This configuration provides four paths per volume. More than four paths per volume are supported but not recommended.\n\nImproper zoning can lead to SAN congestion, which can inhibit remote link communication intermittently. Checking the zero buffer credit timer with IBM Spectrum Control and comparing against your sample interval reveals potential SAN congestion. If a zero buffer credit timer is more than 2% of the total time of the sample interval, it might cause problems.\n\nNext, always ask your network provider to check the status of the link. If the link is acceptable, watch for repeats of this error. It is possible in a normal and functional network setup to have occasional 1720 errors, but multiple occurrences could indicate a larger problem.\n\nIf you receive multiple 1720 errors, recheck your network connection and then check the system partnership information to verify its status and settings. Then, perform diagnostics for every piece of equipment in the path between the two IBM Storwize systems. 
It often helps to have a diagram that shows the path of your replication from both logical and physical configuration viewpoints.", - "page_start": 622, - "page_end": 622, - "source_file": "sg247938.pdf" - }, - { - "text": "- - Seven nodes deployment is highly available and suitable for production. The Master and Infrastructure Roles are deployed to three Nodes, the Computer Role is deployed to three Worker Nodes, and the Load Balancer is deployed to a single Node (see Figure 6-2).\n*Figure 6-2 OpenShift Container Platform 3.11 6xNodes + Load Balancer*\n\n- - **Th**ree nodes deployment is considered a supported testing and development environment. The Master and Infrastructure Roles are deployed to a single Node, and the Computer Role is deployed to two Worker Nodes (see Figure 6-3).\n*Figure 6-3 OpenShift Container Platform 3xNodes*", - "page_start": 127, - "page_end": 127, - "source_file": "sg248459.pdf" - }, - { - "text": "To force the VM placement (in general based on different rack or data center), you can create dual Host Groups, as shown in Figure 5-5.\n\n*Figure 5-5 PowerVC dualHost Groups (Availability Zones)*\n\n#### *Colocation rules*\n\nColocation rules, also known as *server groups* in OpenStack's terminology, are used to specify that selected virtual machines must always be kept on the same host (affinity) or can never be placed on the same host (anti-affinity).\n\nDuring deployment, migration, and remote restart, PowerVC ensures that these colocation rules are followed.", - "page_start": 98, - "page_end": 98, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0955.pdf", - "query": "Which orbiting instrument provides near-continuous full-sky coverage in the hard X-ray/low-energy gamma-ray range?", - "target_page": 1, - "target_passage": "Gamma ray Burst Monitor", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# Observations of Soft Gamma Ray Sources > 100 keV Using 
Earth Occultation with GBM\n\nG.L. Case, M.L. Cherry, J. Rodi\n\nDept. of Physics & Astronomy, Louisiana State Univ., Baton Rouge, LA 70803, USA\n\nA. Camero-Arranz\n\nFundación Española de Ciencia y Tecnología (MICINN), C/Rosario Pino,14-16, 28020-Madrid, Spain\n\nE. Beklen\n\nMiddle East Technical University (METU), 06531, Ankara, Turkey\n\nC. A. Wilson-Hodge\n\nNASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP. Jenke\n\nNASA Postdoctoral Program Fellow, NASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP.N. Bhat, M.S. Briggs, V. Chaplin, V. Connaughton, R. Preece\n\nUniversity of Alabama in Huntsville, Huntsville, AL 35899\n\nM.H. Finger\n\nUSRA, National Space Science and Technology Center, Huntsville, AL 35899\n\nThe NaI and BGO detectors on the Gamma ray Burst Monitor (GBM) on Fermi are now being used for long term monitoring of the hard X-ray/low energy gamma ray sky. Using the Earth occultation technique demonstrated previously by the BATSE instrument on the Compton Gamma Ray Observatory, GBM produces multiband light curves and spectra for known sources and transient outbursts in the 8 keV - 1 MeV band with its NaI detectors and up to 40 MeV with its BGO. Coverage of the entire sky is obtained every two orbits, with sensitivity exceeding that of BATSE at energies below ∼ 25 keV and above ∼ 1.5 MeV. We describe the technique and present preliminary results after the first ∼ 17 months of observations at energies above 100 keV. Seven sources are detected: the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105, and the transient source XTE J1752-223.\n\n### I. INTRODUCTION\n\nThe Gamma ray Burst Monitor (GBM) on Fermi is currently the only instrument in orbit providing nearly continuous full sky coverage in the hard X-ray/low energy gamma ray energy range. The Earth occultation technique, used very successfully on BATSE, has been adapted to GBM. 
An initial catalog of 64 sources is currently being monitored and continuously augmented. At energies above 100 keV, six steady sources (the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105) and one transient source (XTE J1752-223) have been detected in the first year of observation. We describe the instrument, outline the technique, and present light curves for the seven sources.\n\n## II. GBM AND THE EARTH OCCULTATION OBSERVATIONAL TECHNIQUE\n\nThe Gamma ray Burst Monitor is the secondary instrument onboard the Fermi satellite [1, 2]. It consists of 12 NaI detectors 5\" in diameter by 0.5\" thick mounted on the corners of the spacecraft and oriented such that they view the entire sky not occulted by the Earth. GBM also contains 2 BGO detectors 5\" in diameter by 5\" thick located on opposite sides of the spacecraft. None of the GBM detectors have direct imaging capability.\n\nKnown sources of gamma ray emission can be monitored with non-imaging detectors using the Earth occultation technique, as was successfully demonstrated with BATSE [3, 4]. When a source of gamma rays is occulted by the Earth, the count rate measured by the detector will drop, producing a step-like feature. When the source reappears from behind the Earth's limb, the count rate will increase, producing another step. The diameter of the Earth seen from Fermi is ∼ 140◦, so roughly 30% of the sky is occulted by the Earth at any one time. Coupled with the ±35◦ slewing of the pointing direction every orbit, this means that the entire sky is occulted every two orbits. With an altitude of 565 km, a period of 96 minutes, and an orbital inclination of 26.5◦, individual occultation steps last for ∼10 seconds (Fig. 1).", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0955.pdf" - }, - { - "text": "FIG. 3: Cen A light curve. 
Horizontal scale is in modified Julian days.\n\nto observe these breaks, GBM is able to see significant emission above 300 keV, consistent with the canonical hard spectrum.\n\nCen A (Fig. 3) is a Sy 2 galaxy that is the brightest AGN in hard x-rays/low energy gamma rays. It has a hard spectrum (Γ = 1.8) and has been observed at energies > 1 MeV [9]. The GBM results are consistent with this hard spectrum, though GBM does not have the sensitivity to determine if the hard spectrum continues beyond 300 keV or if the spectrum cuts off.\n\nCyg X-1 (Fig. 4) is a HMXB and one of the first systems determined to contain a black hole. It has been observed to emit significant emission above 100 keV including a power law tail extending out to greater than 1 MeV [10, 11]. The GBM results show significant emission above 300 keV, consistent with the power law tail observed when Cyg X-1 is in its hard state.\n\nGRS 1915+105 (Fig. 5) is a LMXB with the compact object being a massive black hole. Evidence for emission above 100 keV has been seen previously [12] with BATSE. The GBM light curve integrated over 490 days shows significant emission above 100 keV.\n\n1E 1740-29 (Fig. 6) is a LMXB very near the Galactic Center. It is a microquasar, and spends most of its time in the low/hard state. Integral observations indicate the presence of a power law tail above 200 keV [13]. The present GBM results are consistent with this high energy emission. In the future, we\n\nFIG. 4: Cyg X-1 light curve. Horizontal scale is in modified Julian days.\n\nFIG. 5: GRS 1915+105 light curve. Horizontal scale is in modified Julian days.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0955.pdf" - }, - { - "text": "# arXiv:1001.0770v1 [astro-ph.HE] 5 Jan 2010\n\n# **VERITAS Observations of Blazars**\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. 
Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E>100 GeV) γ-ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ-ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ-rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n### **1. Introduction**\n\nActive galactic nuclei are the most numerous class of identified VHE γ-ray sources. These objects emit non-thermal radiation across ∼20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ-ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ-rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH (∼2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. 
These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) observations of VHE blazars can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ-rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## **2. VERITAS**\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ-rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. 
The performance metrics of VERITAS include an energy threshold of ∼100 GeV, an energy resolution of ∼15%, an angular resolution of ∼0.1◦, and a sensitivity yielding a 5σ detection of a 1% Crab Nebula flux object in <30 hours1. VERITAS has an active maintenance program (e.g. frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.\n\n1A VERITAS telescope was relocated during Summer 2009, increasing the array's sensitivity by a factor ∼1.3.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼2% Crab flux.\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n- 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n- 1ES 1218+304: This HBL flared during VERITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. 
The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solid footing [27, 28].\n- 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n- W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an external-Compton (EC) component in an SSC interpretation.\n- 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008 [17, 18]. Similar to W Comae and PKS 1424+240, modeling of the observed SED suggests a strong EC component in addition to an SSC component.\n- Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n- RGB J0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n- PKS 1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n### **8. Conclusions**\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ-rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. 
The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "# **Submillimeter Variability and the Gamma-ray Connection in** *Fermi* **Blazars**\n\nA. Strom *Univ. of Arizona, AZ 85721, USA* A. Siemiginowska, M. Gurwell, B. Kelly *CfA, MA 02138, USA*\n\nWe present multi-epoch observations from the Submillimeter Array (SMA) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August–October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. 
Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## **1. INTRODUCTION**\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. Understanding if/how emission differs between blazar subclasses (i.e., BL Lac objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ-ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ-ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. 
We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submillimeter Array1 (SMA) at 1mm and 850µm, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ-ray indices and luminosities.\n\n## **2.** *SMA* **BLAZARS**\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850µm windows, achieving spatial resolution as fine as 0.25\" at 850µm. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and\n\n1The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.\n\n2http://sma1.sma.hawaii.edu/callist/callist.html", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 4: The γ-ray index versus submillimeter index plane. The blazars fall more steeply in the γ-rays than in the submillimeter band, where most are, in fact, rising. 
This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around αS ∼ 0.\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ-ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vice versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ-ray component than during its \"low state\". 3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit low luminosity ratios and high luminosity, which suggests they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lac objects is that there has been a dramatic increase in γ-ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## **5. 
CONCLUSIONS**\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 5: Ratio of γ-ray luminosity to submillimeter luminosity in the 1mm band. The location of an object in this plot should be directly correlated with its blazar \"state\", with FSRQs occupying the upper right and BL Lacs the lower left. Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n- BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n- Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τrest < 500 days.\n- The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n- FSRQs exhibit higher ratios of γ-ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on 
our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL Lacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ-ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τrest with physical timescales such as the synchrotron cooling timescale. These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## **Acknowledgments**\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "### **3. VERITAS Blazar KSP**\n\nVERITAS observes for ∼750 h and ∼250 h each year during periods of astronomical darkness and partial moonlight, respectively. The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- A VHE blazar discovery program (∼200 h / yr): Each year ∼10 targets are selected to receive ∼10 h of observations each during astronomical darkness. 
These data are supplemented by discovery observations during periods of partial moonlight.\n- A target-of-opportunity (ToO) observation program (∼50 h / yr): VERITAS blazar observations can be triggered by either a VERITAS blazar discovery, a VHE flaring alert (>2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- Multi-wavelength (MWL) studies of VHE blazars (∼50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (X-ray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n# **4. Blazar Discovery Program**\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediate-peaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ-rays. The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles (−8◦ < δ < 72◦), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0.3. To further the study of the EBL, a few objects having a large redshift (z > 0.3) are also included in the target list. 
The target list includes:\n\n- All nearby (z < 0.3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- The X-ray brightest HBL (z < 0.3) in the recent Sedentary [8] and ROXA [9] surveys.\n- Four distant (z > 0.3) BL Lac objects recommended by [5, 10].\n- Several FSRQ recommended as potential VHE emitters in [6, 11].\n- All nearby (z < 0.3) blazars detected by EGRET [12].\n- All nearby (z < 0.3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- All sources (|b| > 10◦) detected by Fermi-LAT where extrapolations of their MeV-GeV γ-ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicate a possible VERITAS detection in less than 20 h. This criterion is the focus of the 2009-10 VERITAS blazar discovery program.\n\n### **5. VERITAS AGN Detections**\n\nVERITAS has detected VHE γ-ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n### **5.1. Recent VERITAS Blazar Discoveries**\n\nPrior to the launch of Fermi, VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES 0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHE emission from 3C 66A was discovered by VERITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. 
The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (ΓVHE ∼ 4.1). RGB J0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850µm observations, and the open triangles represent the 1mm observations.\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0.03 ≤ z ≤ 2.19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850µm band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## **2.1. Submillimeter Properties**\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. 
For these objects, submillimeter luminosities are calculated in the standard way:\n\n$$\\nu_{e}L_{\\nu_{e}}=4\\pi D_{\\mathrm{L}}^{2}{\\frac{\\nu_{\\mathrm{obs}}F_{\\mathrm{obs}}}{1+z}},\\qquad\\qquad(1)$$\n\nwhere DL is the luminosity distance, νobs is the frequency of the observed band, and Fobs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850µm), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no signicant difference in the class distributions in either band; the \"tail\" to the left is populated by objects with errors larger than the intrinsic variability.\n\nflux (in erg cm−2 s −1 Hz−1 ) over the three month period. We adopt a lambda cold dark matter cosmology with values of H0 = 71 km s−1 Mpc−1 , ΩM = 0.27, and Λ = 0.73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasisimultaneous with the Fermi observations. To be consistent with the use of αγ, we define spectral energy index as νFν = ν −αS and calculate αS from the average of the energy spectral indices over the corresponding three months. We only calculate αS for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850µm during this time frame.\n\n## **3. VARIABILITY ANALYSIS**\n\n## **3.1. Variability Index**\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\n$$V\\,=\\,\\frac{(F_{\\rm max}-\\sigma_{F_{\\rm max}})-(F_{\\rm min}+\\sigma_{F_{\\rm min}})}{(F_{\\rm max}-\\sigma_{F_{\\rm max}})+(F_{\\rm min}+\\sigma_{F_{\\rm min}})}\\tag{2}$$\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "FIG. 1: Single Crab occultation step in a single GBM NaI detector. Horizontal scale is in seconds centered on the occultation time. 
Vertical scale is in measured counts.\n\nThe shape of the individual occultation steps depends on energy and occultation angle. Transmission as a function of time is modeled as T(t) = exp[−µ(E)A(h)], where µ(E) is the mass attenuation coefficient of gamma rays at energy E in air and A(h) is the air mass along the line of sight at a given altitude h(t). Account is taken of the detector response as it changes as a function of angle across the fit window. For each source, occultation times are predicted. Each step is fit over a 4-minute window along with a quadratic background and using an assumed spectrum to determine the detector count rate due to the source. The instrument response is used to convert the count rate to a flux. Up to 31 steps are possible for a given source in a day, and these steps are summed to get a single daily average flux. The GBM occultation sensitivity exceeds that of BATSE at energies below ∼ 25 keV and above ∼ 1.5 MeV [5].\n\nThis work uses the GBM CTIME data, with its 8 broad energy channels and 0.256-second resolution, rebinned to 2-second resolution. The occultation technique relies on an input catalog of known sources. Currently, we are monitoring 64 sources. Of these 64 sources, 6 steady sources are detected above 100 keV with a significance of at least 5σ after ∼ 490 days of observations, and one transient source.\n\n#### III. RESULTS\n\nThe results presented here are preliminary. We have not completed the fine tuning of our algorithms, though the average fluxes are not expected to change much. Future work will include using the GBM CSPEC data, with its finer energy binning, to examine the detailed spectra for these sources.\n\nThe measured 20 - 50 keV GBM light curves are compared to Swift's 15 - 50 keV light curves for sev-\n\nFIG. 2: Crab light curve. Horizontal scale is in modified Julian days over the 490 day GBM exposure period. Vertical scale is in photons/cm2 /sec/keV averaged over daily intervals. 
Horizontal lines show the average flux in each of five energy bands increasing from top to bottom\n\neral sources over the same time intervals in ref. [2], where it is seen that the results measured by the two instruments compare well. At energies above the upper energy limit of ∼ 195 keV of the Swift 22-month catalog [6], however, the GBM observations provide the only wide-field monitor available of the low energy gamma ray sky.\n\n#### A. Steady Sources\n\nThe sources Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, and GRS 1915+105 are detected by GBM at energies above 100 keV. We show GBM light curves generated from the Earth occultation analysis in several energy bands with one day resolution for these six sources in Figures 2 - 7.\n\nTable I gives the fluxes and significances averaged over all the days from Aug. 12, 2008 (the beginning of science operations) to Dec. 15, 2009, approximately 490 days.\n\nThe Crab (Fig. 2) spectrum in the hard x-ray/low energy gamma-ray region can be described by a broken power law, with the spectrum steepening at 100 keV and then hardening at 650 keV [7, 8]. While the GBM CTIME data do not have the spectral resolution", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0955.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0955.pdf", - "query": "What is Cyg X-1?", - "target_page": 3, - "target_passage": "is a HMXB and one of the first systems determined to contain a black hole", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "FIG. 3: Cen A light curve. Horizontal scale is in modified Julian days.\n\nto observe these breaks, GBM is able to see significant emission above 300 keV, consistent with the canonical hard spectrum.\n\nCen A (Fig. 3) is a Sy 2 galaxy that is the brightest AGN in hard x-rays/low energy gamma rays. It has a hard spectrum (Γ = 1.8) and has been observed at energies > 1 MeV [9]. 
The GBM results are consistent with this hard spectrum, though GBM does not have the sensitivity to determine if the hard spectrum continues beyond 300 keV or if the spectrum cuts off.\n\nCyg X-1 (Fig. 4) is a HMXB and one of the first systems determined to contain a black hole. It has been observed to emit significant emission above 100 keV including a power law tail extending out to greater than 1 MeV [10, 11]. The GBM results show significant emission above 300 keV, consistent with the power law tail observed when Cyg X-1 is in its hard state.\n\nGRS 1915+105 (Fig. 5) is a LMXB with the compact object being a massive black hole. Evidence for emission above 100 keV has been seen previously [12] with BATSE. The GBM light curve integrated over 490 days shows significant emission above 100 keV.\n\n1E 1740-29 (Fig. 6) is a LMXB very near the Galactic Center. It is a microquasar, and spends most of its time in the low/hard state. Integral observations indicate the presence of a power law tail above 200 keV [13]. The present GBM results are consistent with this high energy emission. In the future, we\n\nFIG. 4: Cyg X-1 light curve. Horizontal scale is in modified Julian days.\n\nFIG. 5: GRS 1915+105 light curve. 
Horizontal scale is in modified Julian days.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0955.pdf" - }, - { - "text": "| Dependent variable | Utilization outcome | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | Any care | Opioids | Injection | Surgery | Diagnostic tests or imaging | Emergency room |\n| Age | | | | | | X |\n| Insurance | | | | | | X |\n| Comorbidities (CCI) | | X | | | X | |\n| Baseline disability | X | | X | X | X | X |\n| Baseline pain | | X | | | | |\n| Change in pain | X | X | | | X | X |\n| Change in disability | | | | X | | |\n| Change in 10-item OSPRO-YF | | | | X | | |\n\nTable 7 Summary of consistent individual predictors for each utilization outcome *\n\nCCI Charlson comorbidity index, OSPRO-YF Pain-related psychological distress screening tool *\n\nSignificant predictors (p < .05) for each dependent variable denoted with \"X\"\n\nservices, suggesting injection may be the most difficult service to predict with the included variable set.\n\n#### Surgery\n\nBaseline disability (OR = 3.13–3.25, p < 0.001), change in disability (OR = 3.04–3.05, p = 0.01) and change in 10-item OSPRO-YF score (OR = 1.12–1.14, p < 0.05) where consistent predictors of subsequent surgery. Notably, magnitude of prediction was comparable between change in disability and baseline disability. This was the only parsimonious model to include an OSPRO tool. In this case, an increase in pain-related psychological distress measured by the OSPRO-YF 10-item questionnaire over the first 4 weeks was associated with higher odds of surgery. The 3 predictors in this model explained just over 30% of the variance in surgery utilization.\n\n#### Diagnostic tests or imaging\n\nComorbidity index score (OR = 1.35–1.45, p < 0.05), baseline disability (OR = 2.25–2.66, p < 0.001), and baseline to 4-week change in pain intensity (OR = 3.04–3.05, p = 0.01) were significant predictors of diagnostic test or imaging utilization. 
Among these, baseline disability was the strongest predictor. In these models, higher comorbidity index, higher baseline disability and worsening pain were associated with higher odds of utilization. Together, these variables explained approximately 30% of the variance in utilization.\n\n#### Emergency room\n\nModels for emergency room use had the highest pseudo-R2 values of any individual service (0.48–0.50), but also had the largest number of predictors (8–9). Agreement between complete case and weighted models was moderate. The models converged on the following predictors: age (OR = 0.91–0.94, p < 0.05), insurance (OR = 8.99–13.15, p < 0.05), baseline disability (OR = 3.33–4.88, p < 0.001), and change in pain (OR = 1.59–1.77, p < 0.05). Higher utilization was associated with younger age, other insurance (e.g., self-pay, Worker's Compensation, or other commercial insurance) compared to private insurance, higher baseline disability and worsening of pain. In the weighted analysis, subjects with knee pain were less likely to utilize the emergency room than those with low back pain. However, this relationship was not significant (p = .06) in the complete case analysis. Of the significant predictors in both models, insurance status was the strongest individual predictor of subsequent emergency room use.\n\n#### Discussion\n\nThis study identified novel predictors for pain-related utilization outcomes following an episode of physical therapy for a primary complaint of musculoskeletal pain. The most robust finding from these analyses was that baseline disability and change in pain intensity over the first 4 weeks following physical therapy evaluation were consistent predictors of subsequent pain-related healthcare utilization in those participants that completed all follow up. Aside from these robust predictors, other individual predictors of utilization were highly outcome-specific. 
High model specificity for utilization outcomes observed in this study is consistent with a recent systematic review that found similar levels of model specificity for more traditional outcomes like pain intensity, disability and work absenteeism [14]. Across models, health-related variables were generally stronger predictors than sociodemographic factors, which is also supported by prior research [15, 16]. Additionally, there were cases when prediction models were improved for specific services (e.g. surgery, use of opioids) when considering change in pain, disability or pain-related psychological distress. A notable finding is that the OSPRO-YF had the greatest utility when used to measure change in pain-related psychological distress. Current risk prediction paradigms in musculoskeletal pain consider only baseline pain-related psychological distress. However, these results underscore the importance of", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed5.pdf" - }, - { - "text": "FIG. 3. (color online) (a) Polarization-averaged Mn L 2 , 3 spectrum for a Fe/(Ga,Mn)As film; (b) XMCD spectra measured in remanence at 2 K; (c) XMCD spectra measured under a 1000 Oe applied field at 2 K; (d) XMCD spectrum measured under a 2000 Oe applied field at 300 K. 
XMCD spectra are obtained using TEY (thick red lines) and FY (thin blue lines) detection.", - "page_start": 5, - "page_end": 5, - "source_file": "1001.2449.pdf" - }, - { - "text": "| Model | Cache | Fibre Channel (FC) / | Drive slots | Power supply |\n| --- | --- | --- | --- | --- |\n| | | iSCSI / SAS ports | | |\n| 2076-624 (with | 64, 128, or 256 | 16 x 16 Gb / | 24 x 2.5-inch | Integrated dual |\n| two node | gigabytes (GB) | 6 x 1 Gb + 8x 10 Gb / | | power supplies |\n| canisters Gen2+) | | 4 x 12 Gb | | with battery |\n| 2076-524 (with | 32 or 64 | 4 x 16 Gb / | 24 x 2.5-inch | Integrated dual |\n| two node | gigabytes (GB) | 4 x 1 Gb + 4 x 10 Gb / | | power supplies |\n| canisters Gen2) | | 4 x 12 Gb | | with battery |\n| 2076-212 (with | Not applicable | -- / -- / 4 x 12 Gb | 12 x 3.5-inch | Integrated dual |\n| two expansion | (N/A) | | | power supplies |\n| canisters) | | | | |\n| 2076-224 (with | N/A | -- / -- / 4 x 12 Gb | 24 x 2.5-inch | Integrated dual |\n| two expansion | | | | power supplies |\n| canisters) | | | | |\n| 2076-12F (with | N/A | -- / -- / 4 x 12 Gb | 12 x 3.5-inch | Integrated dual |\n| two expansion | | | | power supplies |\n| canisters Gen2) | | | | (attaches to |\n| | | | | 2076-524 and |\n| | | | | 2076-624 only) |\n| 2076-24F (with | N/A | -- / -- / 4 x 12 Gb | 24 x 2.5-inch | Integrated dual |\n| two expansion | | | | power supplies |\n| canisters Gen2) | | | | (attaches to |\n| | | | | 2076-524 and |\n| | | | | 2076-624 only) |\n| 2076-92F (with | N/A | -- / -- / 4 x 12 Gb | 92 x 3.5-inch | Integrated dual |\n| two expansion | | | | power supplies |\n| canisters Gen2) | | | | (attaches to |\n| | | | | 2076-524 and |\n| | | | | 2076-624 only) |\n\n**Note:** The first generation of control enclosures (2076 - models 112, 124, 312, and 324) were withdrawn from marketing. However, expansion enclosures 2076-212 and 2076-224 can still be ordered (see Table 2-1) because they attach to those control enclosures only. 
Intermixing control enclosures with expansion enclosures of different generations is not a supported combination, and is refused by IBM Spectrum Virtualize software.\n\nThe first generation of IBM Storwize V7000 hardware is not supported by IBM Spectrum Virtualize V8.1. Any attempt to upgrade to V8.1 is rejected by the software. The last supported version for first-generation Storwize V7000 is V7.8.\n\n# **2.3.2 IBM Storage Utility offerings**\n\nThe IBM 2076 Model U7A is the IBM Storwize V7000 with a three-year warranty to be used in the Storage Utility Offering space. These models are physically and functionally identical to the Storwize V7000 model 624, except for target configurations and variable capacity billing. The variable capacity billing uses IBM Spectrum Control™ Storage Insights to monitor the system usage, which allows allocated storage usage above a base subscription rate to be billed per TB, per month.", - "page_start": 35, - "page_end": 35, - "source_file": "sg247938.pdf" - }, - { - "text": "# Observations of Soft Gamma Ray Sources > 100 keV Using Earth Occultation with GBM\n\nG.L. Case, M.L. Cherry, J. Rodi\n\nDept. of Physics & Astronomy, Louisiana State Univ., Baton Rouge, LA 70803, USA\n\nA. Camero-Arranz\n\nFundaci´on Espa˜nola de Ciencia y Tecnolog´ıa (MICINN), C/Rosario Pino,14-16, 28020-Madrid, Spain\n\nE. Beklen\n\nMiddle East Technical University (METU), 06531, Ankara, Turkey\n\nC. A. Wilson-Hodge\n\nNASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP. Jenke\n\nNASA Postdoctoral Program Fellow, NASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP.N. Bhat, M.S. Briggs, V. Chaplin, V. Connaughton, R. Preece University of Alabama in Huntsville, Huntsville, AL 35899\n\nM.H. Finger\n\nUSRA, National Space Science and Technology Center, Huntsville, AL 35899\n\nThe NaI and BGO detectors on the Gamma ray Burst Monitor (GBM) on Fermi are now being used for long term monitoring of the hard X-ray/low energy gamma ray sky. 
Using the Earth occultation technique demonstrated previously by the BATSE instrument on the Compton Gamma Ray Observatory, GBM produces multiband light curves and spectra for known sources and transient outbursts in the 8 keV - 1 MeV band with its NaI detectors and up to 40 MeV with its BGO. Coverage of the entire sky is obtained every two orbits, with sensitivity exceeding that of BATSE at energies below ∼ 25 keV and above ∼ 1.5 MeV. We describe the technique and present preliminary results after the first ∼ 17 months of observations at energies above 100 keV. Seven sources are detected: the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105, and the transient source XTE J1752-223.\n\n### I. INTRODUCTION\n\nThe Gamma ray Burst Monitor (GBM) on Fermi is currently the only instrument in orbit providing nearly continuous full sky coverage in the hard X-ray/low energy gamma ray energy range. The Earth occultation technique, used very successfully on BATSE, has been adapted to GBM. An initial catalog of 64 sources is currently being monitored and continuously augmented. At energies above 100 keV, six steady sources (the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105) and one transient source (XTE J1752-223) have been detected in the first year of observation. We describe the instrument, outline the technique, and present light curves for the seven sources.\n\n## II. GBM AND THE EARTH OCCULTATION OBSERVATIONAL TECHNIQUE\n\nThe Gamma ray Burst Monitor is the secondary instrument onboard the Fermi satellite [1, 2]. It consists of 12 NaI detectors 500 in diameter by 0.500 thick mounted on the corners of the spacecraft and oriented such that they view the entire sky not occulted by the Earth. GBM also contains 2 BGO detectors 500 in diameter by 500 thick located on opposite sides of the spacecraft. 
None of the GBM detectors have direct imaging capability.\n\nKnown sources of gamma ray emission can be monitored with non-imaging detectors using the Earth occultation technique, as was successfully demonstrated with BATSE [3, 4]. When a source of gamma rays is occulted by the Earth, the count rate measured by the detector will drop, producing a step-like feature. When the source reappears from behind the Earths limb, the count rate will increase, producing another step. The diameter of the Earth seen from Fermi is ∼ 140◦ , so roughly 30% of the sky is occulted by the Earth at any one time. Coupled with the ±35◦ slewing of the pointing direction every orbit, this means that the entire sky is occulted every two orbits. With an altitude of 565 km, a period of 96 minutes, and an orbital inclination of 26.5 ◦ , individual occultation steps last for ∼10 seconds (Fig. 1).", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0955.pdf" - }, - { - "text": "# **Annex 4: Default values**\n\n### **1. Fraction of carbon stored for reference approach**\n\nBitumen – 1 Coal oils and tars (from coking coal – 0.75 Ethane – 0.8 Gas/Diesel oil – 0.5 LPG – 0.8 Lubricants – 0.5 Naphtha – 0.8 Natural gas – 0.33\n\n### **2. Conversion factors**\n\n- a. CH4 volume CH4 Gg = 0.67\n- b. 
*Conversion factors for energy*\n\n| From | To | Multiply by |\n| --- | --- | --- |\n| J | TJ | 10-12 |\n| KJ | TJ | 10-9 |\n| MJ | TJ | 10-6 |\n| GJ | TJ | 10-3 |\n| TJ | TJ | 1 |\n| cal | TJ | 4.1868 x 10-12 |\n| kcal | TJ | 4.1868 x 10-9 |\n| Mcal | TJ | 4.1868 x 10-6 |\n| Gcal | TJ | 4.1868 x 10-3 |\n| Tcal | TJ | 4.1868 |\n| kWh | TJ | 3.6 x 10-6 |\n| MWh | TJ | 3.6 x 10-3 |\n| GWh | TJ | 3.6 |\n| Btu | TJ | 1.0551 x 10-9 |\n| kBtu | TJ | 1.0551 x 10-6 |\n| MBtu | TJ | 1.0551 x 10-3 |\n| GBtu | TJ | 1.0551 |\n| toe | TJ | 41.868 x 10-3 |\n| ktoe | TJ | 41.868 |\n| Mtoe | TJ | 4.1868 x 104 |\n| TJ | J | 1012 |\n| TJ | KJ | 109 |\n| TJ | MJ | 106 |\n| TJ | GJ | 103 |\n| TJ | cal | 238.8 x 109 |\n| TJ | kcal | 238.8 x 106 |\n| TJ | Mcal | 238.8 x 103 |\n| TJ | Gcal | 238.8 |\n| TJ | Tcal | 238.8 x 10-3 |\n| TJ | kWh | 277.8 x 103 |\n| TJ | MWh | 277.8 |\n| TJ | GWh | 277.8 x 10-3 |\n| TJ | Btu | 947.8 x 106 |\n| TJ | kBtu | 947.8 x 103 |\n| TJ | MBtu | 947.8 |\n| TJ | GBtu | 947.8 x 10-3 |\n| TJ | toe | 23.88 |\n| TJ | ktoe | 23.88 x x 10-3 |\n| TJ | Mtoe | 23.88 x 10-6 |", - "page_start": 48, - "page_end": 48, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "```\nvm4_memory = \"8\" # Memory GB\nvm4_cpu = \"2\" # Virtual CPU\nvm4_vcpu_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores)\nvm4_name = \"lbsnode\" # Hostname prefix\nvm4_first_ip = \"192.168.11.212\" # Fist IP from a consecutive pool of IPs\nvm4_image_name = \"xiv_p9_image_rhel76\" # The image name\nvm4_remote_restart = \"true\" # Enable Auto Remote Restart\n```\n*Example 6-5 terrafomr.tfvars for seven nodes deployment*\n\n```\ncat terraform.tfvars\n#PowerVC (OpenStack)\n#---------------------------------\npowervc_user = \"ocpadmin\" # PowerVC user\npowervc_password = \"\" # PowerVC password\npowervc_server = \"192.168.11.31\" # PowerVC IP or hostname\npowervc_project = \"ocp-project\" # PowerVC project(tenant) name\n#General 
configuration:\n#---------------------------------\nssh_user = \"root\" # Image username\nssh_user_password = \"\" # Image password\nuser_public_key = \"ssh-rsa \nAAAAB3NzaC1yc2EAAAABIwAAAQEA09+YMqJ8VHX3HC7qy6HSxs3JjTGKbEgK+CExpf811uxsq+uJYbfXEKH19/NCf/U\nvpkozJBDDXDIxJ4uqOEBWDG4mUuu5U9a4lXgb6qaPYyXwVTygL/IcB0poSGEQQaJzhB05g71uZrya++sG1xHUjSQAQz\nhDuKrs4Bc3gcN4184UR+BX1pVgCls3NRn9hLrfLWS37M/kn+b/n6VMYYVpHsZ2XVydAn2nwuzktaEuWYaY/1cNd4xuu\nyVu08GQOon6t5KQ1EZBheADdSsyamulLqW9z4j6Y1wwDe4GPDc5zIW++ASDAZB0eEfbKGDLVdpFsI5YV8nLV1r/T0Y/\nFiFZqQ== Bogdan Savu;IBMROO45771;IBMROZZ014E826;J;\"\ndns1 = \"192.168.11.210\" # DNS server 1\ndns_domain = \"domain.example.com\" # DNS Domain Name\n#Network configuration\n#---------------------------------\nnet1_name = \"net_ocp_cluster2\" # Network Name\nnet1_vlan_id = \"1\" # VLAN ID\nnet1_subnet = \"192.168.11.0/21\" # Network/Mask \nnet1_gateway = \"192.168.11.1\" # Gateway\nnet1_start = \"192.168.11.202\" # First IP from Pool\nnet1_end = \"192.168.11.212\" # Last IP from Pool\n#VM1 configuration (OCP - Master Nodes)\n#---------------------------------\nvm1_number = \"3\" # Number of VMs\nvm1_memory = \"64\" # Memory GB\nvm1_cpu = \"8\" # Virtual CPU\nvm1_vcpu_ratio = \"2\" # vCPU RATIO 1:2 1 vCPU = 0.5 eCPU (cores)\nvm1_name = \"mstnode\" # Hostname prefix\nvm1_first_ip = \"192.168.11.202\" # Fist IP from a consecutive pool of IPs\nvm1_image_name = \"xiv_p9_image_rhel76\" # The image name\nvm1_remote_restart = \"true\" # Enable Auto Remote Restart\nvm1_storage_name = \"xiv_StoragePool\" # Storage Template\nvm1_dockerdisk1 = \"512\" # Docker disk size in GB for ephemeral storage\n#VM2 configuration (OCP - Infra Nodes)\n#---------------------------------\nvm2_number = \"0\" # Number of VMs\nvm2_memory = \"16\" # Memory GB\nvm2_cpu = \"2\" # Virtual CPU\nvm2_vcpu_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores)\nvm2_name = \"infnode\" # Hostname prefix\n```", - "page_start": 131, - "page_end": 131, - "source_file": 
"sg248459.pdf" - }, - { - "text": "user_public_key = \"ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA09+YMqJ8VHX3HC7qy6HSxs3JjTGKbEgK+CExpf811uxsq+uJYbfXEKH19/NCf/U vpkozJBDDXDIxJ4uqOEBWDG4mUuu5U9a4lXgb6qaPYyXwVTygL/IcB0poSGEQQaJzhB05g71uZrya++sG1xHUjSQAQz hDuKrs4Bc3gcN4184UR+BX1pVgCls3NRn9hLrfLWS37M/kn+b/n6VMYYVpHsZ2XVydAn2nwuzktaEuWYaY/1cNd4xuu yVu08GQOon6t5KQ1EZBheADdSsyamulLqW9z4j6Y1wwDe4GPDc5zIW++ASDAZB0eEfbKGDLVdpFsI5YV8nLV1r/T0Y/ FiFZqQ== Bogdan Savu;IBMROO45771;IBMROZZ014E826;J;\" dns1 = \"192.168.11.210\" # DNS server 1 dns_domain = \"domain.example.com\" # DNS Domain Name #Network configuration #-------------------------------- net1_name = \"net_ocp_cluster1\" # Network Name net1_vlan_id = \"1\" # VLAN ID net1_subnet = \"192.168.11.0/21\" # Network/Mask net1_gateway = \"192.168.11.1\" # Gateway net1_start = \"192.168.11.223\" # First IP from Pool net1_end = \"192.168.11.223\" # Last IP from Pool #VM1 configuration (OCP - Master Nodes) #-------------------------------- vm1_number = \"1\" # Number of VMs vm1_memory = \"32\" # Memory GB vm1_cpu = \"8\" # Virtual CPU vm1_vcpu_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores) vm1_name = \"bsocp\" # Hostname prefix vm1_first_ip = \"192.168.11.223\" # Fist IP from a consecutive pool of IPs vm1_image_name = \"xiv_p9_image_rhel76\" # The image name vm1_remote_restart = \"true\" # Enable Auto Remote Restart vm1_storage_name = \"xiv_StoragePool\" # Storage Template vm1_dockerdisk1 = \"0\" # Docker disk size in GB for ephemeral storage #VM2 configuration (OCP - Infra Nodes) #-------------------------------- vm2_number = \"0\" # Number of VMs vm2_memory = \"16\" # Memory GB vm2_cpu = \"4\" # Virtual CPU vm2_vcpu_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores) vm2_name = \"infnode\" # Hostname prefix vm2_first_ip = \"192.168.11.205\" # Fist IP from a consecutive pool of IPs vm2_image_name = \"xiv_p9_image_rhel76\" # The image name vm2_remote_restart = \"true\" # Enable Auto Remote Restart vm2_storage_name = 
\"xiv_StoragePool\" # Storage Template vm2_dockerdisk1 = \"68\" # Docker disk size in GB for ephemeral storage #VM3 configuration (OCP - Workers(App) Nodes) #-------------------------------- vm3_number = \"0\" # Number of VMs vm3_memory = \"32\" # Memory GB vm3_cpu = \"4\" # Virtual CPU vm3_vcpu_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores) vm3_name = \"appnode\" # Hostname prefix vm3_first_ip = \"192.168.11.208\" # Fist IP from a consecutive pool of IPs vm3_image_name = \"xiv_p9_image_rhel76\" # The image name vm3_remote_restart = \"false\" # Disable Auto Remote Restart vm3_storage_name = \"xiv_StoragePool\" # Storage Template vm3_dockerdisk1 = \"34\" # Docker disk size in GB for ephemeral storage #VM4 configuration (OCP - Load Balancer Node) #-------------------------------- vm4_number = \"0\" # Number of VMs", - "page_start": 130, - "page_end": 130, - "source_file": "sg248459.pdf" - }, - { - "text": "where I(ij ) = 1 if ij = s and I(ij ) = 0 if ij = w. In other words, the predicate is that the fraction of queries routed to the strong model is bounded by ϵ.\n\nControl plane integrity. A *control plane integrity adversary* is a randomized algorithm A that seeks to maliciously guide inference flow.\n\nIn an unconstrained LLM control plane integrity attack, the adversary A seeks to generate inputs ⃗x = ⃗x1, . . . , ⃗xq such that running RMω (⃗x) generates a transcript for which P((x1, i1), . . . ,(xq, iq)) = 0. This attack could be launched by an adversary who wants to maximize inference costs for a victim application using an LLM router.\n\nA harder setting requires input adaptation, where the adversary is given inputs x1, . . . , xq and it must find new inputs xˆ1, . . . , xˆq for which the transcript resulting from P((ˆx1, i1), . . . ,(ˆxq, iq)) = 0. There will be some competing constraint, such as that xj and xˆj are very similar for each j, or that the outputs yj ←$ RMω (xj ) and yˆj ←$ RMω (ˆxj ) are close. 
In the routing context, the adversary's goal is to increase the fraction of queries that get routed to the strong model, in order to improve the overall quality of responses, drive up the victim application's inference costs, or both.\n\nRelationship to evasion attacks. Evasion attacks [25, 43, 60] against an inference system (also called adversarial examples [32, 48, 49]) would, in our setting, seek to find a small modification ∆ to an input x such that RMω (x + ∆) ̸= RMω (x) where addition is appropriately defined based on input type (e.g., slight changes to text).\n\nOur attack setting is not the same. The control plane integrity adversary seeks to maliciously control the inference *flow*, not necessarily the *output* of inference. In an unconstrained attack, the adversary does not care what outputs are generated. In the input adaptation attack, the adversary seeks to craft inputs that modify the inference flow yet do *not* change the responses of the strong underlying LLM to the extent possible. Looking ahead, we will use evasion techniques in our adaptation attacks against learned control plane routers, but, importantly, not the overall inference.\n\nIn the other direction, undermining LLM control plane integrity could be a stepping stone toward evasion attacks. For example, if RMω is used to classify malicious content by combining LLMs each tuned to different types of harm categories, then modifying inputs to force inference flows away from appropriate models could aid evasion. We leave evaluation of how control-plane integrity attacks can enable evasion to future work.\n\nThreat models. Within the context of control plane integrity attacks against LLM routers, we identify several threat models that differ in terms of the adversary's goals and their knowledge about the target control plane RMω .\n\nIn terms of goals, an adversary may seek to *inflate the costs* of a victim application that utilizes an LLM control plane. 
As a kind of denial-of-service attack, such cost inflation would penalize the application developer who expects routing to control costs. Another adversarial goal could be *arbitrage*: consider an application that charges X dollars per query, whereas directly using Ms costs Y > X. The application's lower rate X makes economic sense assuming it uses a router to route the bulk of queries to a cheaper model Mw. An input adaptation attack in this setting can gain (indirect) access to Ms, obtaining an arbitrage advantage of Y − X per query. To be effective, this arbitrage adversary would want to ensure that adaptations do not lower response quality (i.e., it extracts all the value out of rerouting to Ms). As before, the victim in this case is the application that relies on routing to lower its costs (unsuccessfully, under this attack).\n\nWe now discuss adversarial capabilities. We assume that our victim application's prompt includes a substring that can be controlled by the adversary. This represents many real-world apps such as chatbots, coding assistants, writing assistants, and others, that insert user inputs into an LLM prompt. In crafting adversarial portions of prompts, an adversary may have various levels of knowledge about the victim application's router. We consider the following knowledge settings:\n\n- *White-box setting*: The adversary knows the control plane algorithm and its parameters ω.\n- *Black-box (transfer) setting*: The adversary does not know the control plane algorithm R and ω for the target model, but knows instead another control plane algorithm R′ ω′ and its parameters. We refer to R′ ω′ as the *surrogate*. For example, this could arise if an adversary trains their own router using available data. 
In this setting our attacks are also *zero-shot* in that they do not require any interaction with the target control plane before the query that is being rerouted.\n\n### 4 Confounding Control Planes with Gadgets\n\nWe now turn to our main contribution: a methodology for attacking LLM control plane integrity. The key insight is that an adversary can modify queries to mislead or \"confound\" the routing logic into routing these queries to an LLM of the adversary's choosing. Furthermore, we will demonstrate that these attacks can be black-box and *query-independent*, i.e., a single modification works for all queries and does not require advance knowledge of the specific router being attacked.", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv1.pdf" - }, - { - "text": "H = X j (Jcluster/2)(Sj1 + Sj2 + Sj3 + Sj4) 2 − X z−links Jz (16/9)[Sj2 · (Sj3 × Sj4)][Sk2 · (Sk3 × Sk4)] − X x−links Jx (2Sj1 · Sj2 + 1/2)(2Sk1 · Sk2 + 1/2) − X y−links Jy (4/3)[Sj1 · (Sj3 − Sj4)][Sk1 · (Sk3 − Sk4)] (8)\n\nWhile by the represenation (4) and (5), the Hamilto- nian becomes\n\nH = X j (Jcluster/2)(Sj1 + Sj2 + Sj3 + Sj4) 2 − X x−links Jx (2Sj1 · Sj2 + 1/2)(2Sk1 · Sk2 + 1/2) − X y−links Jy (4/3)[Sj1 · (Sj3 − Sj4)][Sk1 · (Sk3 − Sk4)] − X z−links Jz (−4/3)(2Sj3 · Sj4 + 1/2)[Sj1 · (Sj3 − Sj4)](2Sk3 · Sk4 + 1/2)[Sk1 · (Sk3 − Sk4)] (9)\n\nThis model, in terms of physical spins S, has full spin rotation symmetry and time-reversal symmetry. A pseudo-magnetic field term P j ~h · ~τj term can also be included under this mapping, however the resulting Kitaev model with magnetic field is not exactly solvable. 
It is quite curious that such a formidable-looking Hamiltonian (8), with biquadratic and six-spin (or eight-spin) terms, has an exactly solvable low energy sector.\n\nWe emphasize that because the first intra-cluster term \sum_{cluster} H_{cluster} commutes with the latter Kitaev terms independent of the representation used, the Kitaev model is realized as the exact low energy Hamiltonian of this model without truncation errors of perturbation theories, namely no (|J_{x,y,z}|/J_{cluster})^2 or higher order terms will be generated under the projection to the low energy cluster singlet space. This is unlike, for example, the t/U expansion of the half-filled Hubbard model [22, 23], where at lowest t^2/U order the effective Hamiltonian is the Heisenberg model, but higher order terms (t^4/U^3 etc.) should in principle still be included in the low energy effective Hamiltonian for any finite t/U. A similar comparison can be made to the perturbative expansion studies of the Kitaev-type models by Vidal et al. [9], where the low energy effective Hamiltonians were obtained in certain anisotropic (strong bond/triangle) limits. The spirit of this work, namely projection to the low energy sector, is nevertheless the same as all previous perturbative approaches to effective Hamiltonians.\n\nNote that the original Kitaev model (1) has a three-fold rotation symmetry around a honeycomb lattice site, combined with a three-fold rotation in pseudo-spin space (cyclic permutation of τ^x, τ^y, τ^z). This is not apparent in our model (8) in terms of physical spins, under the current representation of τ^{x,y,z}. 
We can remedy this by using a different set of pseudo-spin Pauli matrices τ ′x,y,z in (7),\n\n$$\\begin{array}{l}{{\\tau^{\\prime x}=\\sqrt{1/3}\\tau^{z}+\\sqrt{2/3}\\tau^{x},}}\\\\ {{\\tau^{\\prime y}=\\sqrt{1/3}\\tau^{z}-\\sqrt{1/6}\\tau^{x}+\\sqrt{1/2}\\tau^{y},}}\\\\ {{\\tau^{\\prime z}=\\sqrt{1/3}\\tau^{z}-\\sqrt{1/6}\\tau^{x}-\\sqrt{1/2}\\tau^{y}}}\\end{array}$$\n\nWith proper representation choice, they have a symmetric form in terms of physical spins,\n\n$$\\tau^{\\prime x}=-(4/3){\\bf S}_{2}\\cdot({\\bf S}_{3}\\times{\\bf S}_{4})+\\sqrt{2/3}(2{\\bf S}_{1}\\cdot{\\bf S}_{2}+1/2)$$\n \n$$\\tau^{\\prime y}=-(4/3){\\bf S}_{3}\\cdot({\\bf S}_{4}\\times{\\bf S}_{2})+\\sqrt{2/3}(2{\\bf S}_{1}\\cdot{\\bf S}_{3}+1/2)$$\n \n$$\\tau^{\\prime z}=-(4/3){\\bf S}_{4}\\cdot({\\bf S}_{2}\\times{\\bf S}_{3})+\\sqrt{2/3}(2{\\bf S}_{1}\\cdot{\\bf S}_{4}+1/2)\\tag{10}$$\n\nSo the symmetry mentioned above can be realized by a three-fold rotation of the honeycomb lattice, with a cyclic permutation of S2, S3 and S4 in each cluster. This is in fact the three-fold rotation symmetry of the physical spin lattice illustrated in FIG. 2. However this more symmetric representation will not be used in later part of this paper.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0266.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0955.pdf", - "query": "What satellite is the Gamma Ray Burst Observatory on?", - "target_page": 1, - "target_passage": " Fermi satellite", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# Observations of Soft Gamma Ray Sources > 100 keV Using Earth Occultation with GBM\n\nG.L. Case, M.L. Cherry, J. Rodi\n\nDept. of Physics & Astronomy, Louisiana State Univ., Baton Rouge, LA 70803, USA\n\nA. Camero-Arranz\n\nFundaci´on Espa˜nola de Ciencia y Tecnolog´ıa (MICINN), C/Rosario Pino,14-16, 28020-Madrid, Spain\n\nE. Beklen\n\nMiddle East Technical University (METU), 06531, Ankara, Turkey\n\nC. A. 
Wilson-Hodge\n\nNASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP. Jenke\n\nNASA Postdoctoral Program Fellow, NASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP.N. Bhat, M.S. Briggs, V. Chaplin, V. Connaughton, R. Preece University of Alabama in Huntsville, Huntsville, AL 35899\n\nM.H. Finger\n\nUSRA, National Space Science and Technology Center, Huntsville, AL 35899\n\nThe NaI and BGO detectors on the Gamma ray Burst Monitor (GBM) on Fermi are now being used for long term monitoring of the hard X-ray/low energy gamma ray sky. Using the Earth occultation technique demonstrated previously by the BATSE instrument on the Compton Gamma Ray Observatory, GBM produces multiband light curves and spectra for known sources and transient outbursts in the 8 keV - 1 MeV band with its NaI detectors and up to 40 MeV with its BGO. Coverage of the entire sky is obtained every two orbits, with sensitivity exceeding that of BATSE at energies below ∼ 25 keV and above ∼ 1.5 MeV. We describe the technique and present preliminary results after the first ∼ 17 months of observations at energies above 100 keV. Seven sources are detected: the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105, and the transient source XTE J1752-223.\n\n### I. INTRODUCTION\n\nThe Gamma ray Burst Monitor (GBM) on Fermi is currently the only instrument in orbit providing nearly continuous full sky coverage in the hard X-ray/low energy gamma ray energy range. The Earth occultation technique, used very successfully on BATSE, has been adapted to GBM. An initial catalog of 64 sources is currently being monitored and continuously augmented. At energies above 100 keV, six steady sources (the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105) and one transient source (XTE J1752-223) have been detected in the first year of observation. We describe the instrument, outline the technique, and present light curves for the seven sources.\n\n## II. 
GBM AND THE EARTH OCCULTATION OBSERVATIONAL TECHNIQUE\n\nThe Gamma ray Burst Monitor is the secondary instrument onboard the Fermi satellite [1, 2]. It consists of 12 NaI detectors 5″ in diameter by 0.5″ thick mounted on the corners of the spacecraft and oriented such that they view the entire sky not occulted by the Earth. GBM also contains 2 BGO detectors 5″ in diameter by 5″ thick located on opposite sides of the spacecraft. None of the GBM detectors have direct imaging capability.\n\nKnown sources of gamma ray emission can be monitored with non-imaging detectors using the Earth occultation technique, as was successfully demonstrated with BATSE [3, 4]. When a source of gamma rays is occulted by the Earth, the count rate measured by the detector will drop, producing a step-like feature. When the source reappears from behind the Earth's limb, the count rate will increase, producing another step. The diameter of the Earth seen from Fermi is ∼ 140°, so roughly 30% of the sky is occulted by the Earth at any one time. Coupled with the ±35° slewing of the pointing direction every orbit, this means that the entire sky is occulted every two orbits. With an altitude of 565 km, a period of 96 minutes, and an orbital inclination of 26.5°, individual occultation steps last for ∼10 seconds (Fig. 1).", "page_start": 0, "page_end": 0, "source_file": "1001.0955.pdf" }, { "text": "# arXiv:1001.0770v1 [astro-ph.HE] 5 Jan 2010\n\n# **VERITAS Observations of Blazars**\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E>100 GeV) γ-ray emission from astrophysical objects. 
VERITAS is currently the most sensitive VHE γ-ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ-rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n### **1. Introduction**\n\nActive galactic nuclei are the most numerous class of identified VHE γ-ray sources. These objects emit non-thermal radiation across ∼20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ-ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ-rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH (∼2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. 
These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ-rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## **2. VERITAS**\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ-rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. 
The performance metrics of VERITAS include an energy threshold of ∼100 GeV, an energy resolution of ∼15%, an angular resolution of ∼0.1◦ , and a sensitivity yielding a 5σ detection of a 1% Crab Nebula flux object in <30 hours1 . VERITAS has an active maintenance program (e.g. frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.\n\n1A VERITAS telescope was relocated during Summer 2009, increasing the array's sensitivity by a factor ∼1.3.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "# **Submillimeter Variability and the Gamma-ray Connection in** *Fermi* **Blazars**\n\nA. Strom *Univ. of Arizona, AZ 85721, USA* A. Siemiginowska, M. Gurwell, B. Kelly *CfA, MA 02138, USA*\n\nWe present multi-epoch observations from the Submillimeter Array (SMA) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August–October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. 
All of the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar classes during flaring epochs.\n\n## **1. INTRODUCTION**\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. Understanding if/how emission differs between blazar subclasses (i.e., BL Lac objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ-ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ-ray emission are closely linked and together give full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. 
We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submillimeter Array1 (SMA) at 1mm and 850µm, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ-ray indices and luminosities.\n\n## **2.** *SMA* **BLAZARS**\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850µm windows, achieving spatial resolution as fine as 0.25\" at 850µm. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and\n\n1The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.\n\n2http://sma1.sma.hawaii.edu/callist/callist.html", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 4: The γ-ray index versus submillimeter index plane. The blazars fall more steeply in the γ-rays than in the submillimeter band, where most are, in fact, rising. 
This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around αS ∼ 0.\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ-ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vice versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ-ray component than during its \"low state\". 3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit low luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lac objects is that there has been a dramatic increase in γ-ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\neConf C091122\n\n## **5. 
CONCLUSIONS**\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼2% Crab flux.\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n- 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n- 1ES 1218+304: This HBL flared during VER-ITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. 
The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solid footing [27, 28].\n- 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n- W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an external-Compton (EC) component in an SSC interpretation.\n- 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008 [17, 18]. Similar to W Comae and PKS 1424+240, modeling of the observed SED suggests a strong EC component in addition to an SSC component.\n- Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n- RGB J0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n- PKS 1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n### **8. Conclusions**\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of 16 VHE blazars, with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ-rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most constraining ever. 
The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "FIG. 3: Cen A light curve. Horizontal scale is in modified Julian days.\n\nto observe these breaks, GBM is able to see significant emission above 300 keV, consistent with the canonical hard spectrum.\n\nCen A (Fig. 3) is a Sy 2 galaxy that is the brightest AGN in hard x-rays/low energy gamma rays. It has a hard spectrum (Γ = 1.8) and has been observed at energies > 1 MeV [9]. The GBM results are consistent with this hard spectrum, though GBM does not have the sensitivity to determine if the hard spectrum continues beyond 300 keV or if the spectrum cuts off.\n\nCyg X-1 (Fig. 4) is a HMXB and one of the first systems determined to contain a black hole. It has been observed to emit significant emission above 100 keV including a power law tail extending out to greater than 1 MeV [10, 11]. The GBM results show significant emission above 300 keV, consistent with the power law tail observed when Cyg X-1 is in its hard state.\n\nGRS 1915+105 (Fig. 5) is a LMXB with the compact object being a massive black hole. Evidence for emission above 100 keV has been seen previously [12] with BATSE. 
The GBM light curve integrated over 490 days shows significant emission above 100 keV.\n\n1E 1740-29 (Fig. 6) is a LMXB very near the Galactic Center. It is a microquasar, and spends most of its time in the low/hard state. Integral observations indicate the presence of a power law tail above 200 keV [13]. The present GBM results are consistent with this high energy emission. In the future, we\n\nFIG. 4: Cyg X-1 light curve. Horizontal scale is in modified Julian days.\n\nFIG. 5: GRS 1915+105 light curve. Horizontal scale is in modified Julian days.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0955.pdf" - }, - { - "text": "Figure 5: Ratio of γ-ray luminosity to submillimeter luminosity in the 1mm band. The location of an object in this plot should be directly correlated with its blazar \"state\", with FSRQs occupying the upper right and BL Lacs the lower left. Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n- BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n- Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τrest < 500 days.\n- The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n- FSRQs exhibit higher ratios of γ-ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on our results. 
To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL Lacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ-ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τrest with physical timescales such as the synchrotron cooling timescale. These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## **Acknowledgments**\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "### **3. VERITAS Blazar KSP**\n\nVERITAS observes for ∼750 h and ∼250 h each year during periods of astronomical darkness and partial moonlight, respectively. The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- A VHE blazar discovery program (∼200 h / yr): Each year ∼10 targets are selected to receive ∼10 h of observations each during astronomical darkness. 
These data are supplemented by discovery observations during periods of partial moonlight.\n- A target-of-opportunity (ToO) observation program (∼50 h / yr): VERITAS blazar observations can be triggered by either a VERI-TAS blazar discovery, a VHE flaring alert (>2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- Multi-wavelength (MWL) studies of VHE blazars (∼50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n# **4. Blazar Discovery Program**\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ-rays. The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles (−8 ◦ < δ < 72◦ ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0.3. To further the study of the\n\nEBL a few objects having a large (z > 0.3) are also included in the target list. 
The target list includes:\n\n- All nearby (z < 0.3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- The X-ray brightest HBL (z < 0.3) in the recent Sedentary [8] and ROXA [9] surveys.\n- Four distant (z > 0.3) BL Lac objects recommended by [5, 10].\n- Several FSRQ recommended as potential VHE emitters in [6, 11].\n- All nearby (z < 0.3) blazars detected by EGRET [12].\n- All nearby (z < 0.3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- All sources (|b| > 10◦ ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ-ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERI-TAS blazar discovery program.\n\n### **5. VERITAS AGN Detections**\n\nVERITAS has detected VHE γ-ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n### **5.1. Recent VERITAS Blazar Discoveries**\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES 0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHE emission from 3C 66A was discovered by VER-ITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. 
The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (ΓVHE ∼ 4.1). RGB J0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "FIG. 1: Single Crab occultation step in a single GBM NaI detector. Horizontal scale is in seconds centered on the occultation time. Vertical scale is in measured counts.\n\nThe shape of the individual occultation steps depends on energy and occultation angle. Transmission as a function of time is modeled as T(t) = exp[−µ(E)A(h)], where µ(E) is the mass attenuation coefficient of gamma rays at energy E in air and A(h) is the air mass along the line of sight at a given altitude h(t). Account is taken of the detector response as it changes as a function of angle across the fit window. For each source, occultation times are predicted. Each step is fit over a 4-minute window along with a quadratic background and using an assumed spectrum to determine the detector count rate due to the source. The instrument response is used to convert the count rate to a flux. Up to 31 steps are possible for a given source in a day, and these steps are summed to get a single daily average flux. The GBM occultation sensitivity exceeds that of BATSE at energies below ∼ 25 keV and above ∼ 1.5 MeV [5].\n\nThis work uses the GBM CTIME data, with its 8 broad energy channels and 0.256-second resolution, rebinned to 2-second resolution. The occultation technique relies on an input catalog of known sources. Currently, we are monitoring 64 sources. Of these 64 sources, 6 steady sources are detected above 100 keV with a significance of at least 5σ after ∼ 490 days of observations, and one transient source.\n\n#### III. RESULTS\n\nThe results presented here are preliminary. We have not completed the fine tuning of our algorithms, though the average fluxes are not expected to change much. 
Future work will include using the GBM CSPEC data, with its finer energy binning, to examine the detailed spectra for these sources.\n\nThe measured 20 - 50 keV GBM light curves are compared to Swift's 15 - 50 keV light curves for several sources over the same time intervals in ref. [2], where it is seen that the results measured by the two instruments compare well. At energies above the upper energy limit of ∼ 195 keV of the Swift 22-month catalog [6], however, the GBM observations provide the only wide-field monitor available of the low energy gamma ray sky.\n\nFIG. 2: Crab light curve. Horizontal scale is in modified Julian days over the 490 day GBM exposure period. Vertical scale is in photons/cm2/sec/keV averaged over daily intervals. Horizontal lines show the average flux in each of five energy bands increasing from top to bottom\n\n#### A. Steady Sources\n\nThe sources Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, and GRS 1915+105 are detected by GBM at energies above 100 keV. We show GBM light curves generated from the Earth occultation analysis in several energy bands with one day resolution for these six sources in Figures 2 - 7.\n\nTable I gives the fluxes and significances averaged over all the days from Aug. 12, 2008 (the beginning of science operations) to Dec. 15, 2009, approximately 490 days.\n\nThe Crab (Fig. 2) spectrum in the hard x-ray/low energy gamma-ray region can be described by a broken power law, with the spectrum steepening at 100 keV and then hardening at 650 keV [7, 8]. While the GBM CTIME data do not have the spectral resolution", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0955.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850µm observations, and the open triangles represent the 1mm observations.\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS.
Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0.03 ≤ z ≤ 2.19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850µm band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## **2.1. Submillimeter Properties**\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. For these objects, submillimeter luminosities are calculated in the standard way:\n\n$$\\nu_{e}L_{\\nu_{e}}=4\\pi D_{\\mathrm{L}}^{2}{\\frac{\\nu_{\\mathrm{obs}}F_{\\mathrm{obs}}}{1+z}},\\qquad\\qquad(1)$$\n\nwhere DL is the luminosity distance, νobs is the frequency of the observed band, and Fobs is the average flux (in erg cm−2 s−1 Hz−1) over the three month period.\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850µm), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no significant difference in the class distributions in either band; the \"tail\" to the left is populated by objects with errors larger than the intrinsic variability.
We adopt a lambda cold dark matter cosmology with values of H0 = 71 km s−1 Mpc−1 , ΩM = 0.27, and Λ = 0.73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasisimultaneous with the Fermi observations. To be consistent with the use of αγ, we define spectral energy index as νFν = ν −αS and calculate αS from the average of the energy spectral indices over the corresponding three months. We only calculate αS for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850µm during this time frame.\n\n## **3. VARIABILITY ANALYSIS**\n\n## **3.1. Variability Index**\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\n$$V\\,=\\,\\frac{(F_{\\rm max}-\\sigma_{F_{\\rm max}})-(F_{\\rm min}+\\sigma_{F_{\\rm min}})}{(F_{\\rm max}-\\sigma_{F_{\\rm max}})+(F_{\\rm min}+\\sigma_{F_{\\rm min}})}\\tag{2}$$\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed3.pdf", - "query": "When in present-day Poland did the first shift away from earlier ancestry occur?", - "target_page": 3, - "target_passage": "in the Middle to Late Bronze Age (1500 bce to 1000 bce), we observe a clear shift away from preceding ancestry originally associated with Corded Ware cultures", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## **Fig. 3 | Time transects across six geographical regions in Europe.**\n\n**a**–**f**, Ancestry change visualized over a time transect spanning from the Bronze Age to the present day in Poland (**a**), southeastern Europe (**b**), central Europe (**c**), Italy (**d**), Britain and Ireland (**e**) and Scandinavia (**f**). 
The maps show sample locations of all available ancient genomes with at least 0.5× coverage from\n\nmedieval individuals (*P* ≪ 1 × 10−32). Instead, the majority of individuals from medieval Poland can be modelled only as a mixture of ancestries related to Roman Iron Age Lithuania, which is similar to ancestries of individuals from middle to late Bronze Age Poland (44%, 95% confidence interval 36–51%), an ancestry component related to Hungarian Scythians or Slovakian La Tène individuals (49%, 95% confidence interval 41–57%) and potentially a minority component of ancestry related to Sarmatians from the Caucasus (*P* = 0.13) (Fig. 2c). Four out of twelve individuals from medieval Poland, three of whom are from the late Viking Age6 , carried detectable Scandinavian-related ancestry. Some of the ancestry detected in individuals from later medieval Poland may have persisted during the late first millennium ce in the cremating portion of the population, but regardless, this points to large-scale ancestry transformation in medieval Poland (Fig. 3a). Future data could shed light on the extent to which this reflects the influence of groups speaking Slavic languages in the region.\n\nthese regions (Supplementary Table 1). Their ancestry is shown on the same MDS model as in Fig. 2a for each time period. For each geographic region, the early medieval period is highlighted in orange and the area in the MDS corresponding to Scandinavian and central European ancestries is highlighted in an orange box.\n\nIn present-day Slovakia, individuals associated with the Iron Age La Tène period appear close to Hungarian Scythians in the two dimensions of our MDS analysis, and are modelled as a mixture of central and eastern European ancestry. 
However, a first-century ce burial of a 50–60-year-old woman from Zohor is modelled only with Scandinavian-related ancestry, providing evidence of ancestry related to the Scandinavian EIA appearing southwest of the range of the Wielbark archaeological complex5,57 (Fig. 3b). Later early medieval individuals from Slovakia have partial Scandinavian-related ancestry, providing evidence for the integration between expanding and local groups.\n\nNearby, in present-day Hungary, we observe Scandinavian-related ancestry components in several burials dating to the sixth century ce associated with Longobards (Longobard_earlyMED(I))10 (Fig. 2c). This is consistent with the original study10, which reported affinity to present-day groups from northwestern Europe (GBR, CEU and FIN in the 1000 Genomes Project (1000GP))10 but which we can resolve with", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed3.pdf" - }, - { - "text": "higher resolution using earlier genomes. Several other individuals from these Longobard burials (Longobard_earlyMED(II)) show no detectable ancestry from northern Europe and, instead, are more closely related to Iron Age groups in continental central Europe, putatively representing descendants of local people buried in a Longobard style. Our results are consistent with attestations that the Longobards originated in the areas of present-day northern Germany or Denmark, but that by the sixth century ce they incorporated multiple different cultural identities, and mixed ancestries. Present-day populations of Hungary do not appear to derive detectable ancestry from early medieval individuals from Longobard contexts, and are instead more similar to Scythian-related ancestry sources (Extended Data Fig. 
6), consistent with the later impact of Avars, Magyars and other eastern groups58.\n\nIn southern Germany, the genetic ancestry of individuals from early medieval Bavaria probably associated with the historical Germanic-language-speaking Baiuvarii59 cannot be modelled as deriving ancestry solely from earlier groups in Iron Age central Germany (*P* ≪ 1 × 10−36). The Baiuvarii probably appeared in the region in the fifth century ce59, but their origins remain unresolved. Our current best model indicates a mixture with ancestry derived from EIA Peninsular Scandinavia and central Europe, suggesting an expansion of Scandinavian-related ancestry producing a regional ancestry shift (Figs. 2c and 3c).\n\nIn Italy, southward expansions of northern and central European ancestries appear by the Late Antiquity (approximately fourth century ce), where a clear diversification of ancestry can be observed compared with preceding time periods (Fig. 3d). However, no individuals with near 100% Scandinavian ancestry can be observed in the sampling data available so far.\n\nIn Britain, the ancestries of Iron Age and Roman individuals form a tight cluster in our MDS analysis (Fig. 3e), shifted relative to available preceding Bronze Age individuals from Ireland and Orkney, and adjacent to, but distinct from, available individuals in Iron Age and Roman central Europe. However, two first- to second-century ce burials from a Roman military fortress site in Austria (Klosterneuburg)5 carry ancestry that is currently indistinguishable from Iron Age or Roman populations of Britain, to the exclusion of other groups (qpWave cladality *P* = 0.11). 
One option is that they had ancestry from Britain; alternatively, currently unsampled populations from western continental Europe carried ancestries similar to Iron Age southern Britain.\n\nTwigstats substantially improves models of admixture between ancestries from Iron Age Britain and northern Europe in early medieval England9 , halving standard errors from 9% with SNPs to 4% when using time stratification (point estimates 80% and 79% Iron Age Britain-related ancestry, respectively). We used this improved resolution to demonstrate that an earlier Roman individual (6DT3) dating to approximately second to fourth century ce from the purported gladiator or military cemetery at Driffield Terrace in York (Roman *Eboracum*), England60, who was previously identified as an ancestry outlier61,62, specifically carried approximately 25% EIA Scandinavian Peninsula-related ancestry (Fig. 2c). This documents that people with Scandinavian-related ancestry already were in Britain before the fifth century ce, after which there was a substantial influx associated with Anglo-Saxon migrations9 . Although it is uncertain whether this individual was a gladiator or soldier, individuals and groups from northern Europe are indeed recorded in Roman sources both as soldiers and as enslaved gladiators63,64.\n\nAcross Europe, we see regional differences in the southeastern and southwestern expansions of Scandinavian-related ancestries. Early medieval groups from present-day Poland and Slovakia carry specific ancestry from one of the Scandinavian EIA groups—the one with individuals primarily from the northern parts of Scandinavia in the EIA—with no evidence of ancestry related to the other primary group in more southern Scandinavia (Fig. 2d). 
By contrast, in southern and western Europe, Scandinavian-related ancestry either derives from EIA southern Scandinavia—as in the cases of the probable Baiuvarii in Germany, Longobard-associated burials in Italy and early medieval burials in southern Britain—or cannot be resolved to a specific region in Scandinavia. If these expansions are indeed linked to language, this pattern is remarkably concordant with the main branches of Germanic languages, with the now-extinct eastern Germanic spoken by Goths in Ukraine on the one hand, and western Germanic languages such as Old English and Old High German recorded in the early medieval period on the other hand.\n\n### **Influx into pre-Viking Age Scandinavia**\n\nIn EIA Scandinavia (<500 ce), we find evidence for broad genetic homogeneity. Specifically, individuals from Denmark (100 ce–300 ce) were indistinguishable from contemporary people in the Scandinavian Peninsula (Fig. 2c). However, we observe a clear shift in genetic ancestry already in the eighth century ce (Late Iron Age/early Viking Age) on Zealand (present-day Denmark) for which a 100% EIA ancestry model is rejected (*P* = 1 × 10−17 using Twigstats; *P* = 7.5 × 10−4 without). This shift in ancestry persists among later Viking Age groups in Denmark, where all groups are modelled with varying proportions of ancestry related to Iron Age continental groups in central Europe (Figs. 3f and 4c). A non-parametric MDS of Viking Age individuals suggests that variation between individuals forms a cline spanning from the EIA Scandinavian Peninsula individuals to ancestry characteristic of central Europe (Fig. 4e). 
The observed shift in ancestry in Denmark cannot be confounded by potentially earlier unknown gene flow into Iron Age source groups in Austria, France and Germany, but such gene flow could affect the exact ancestry proportions.\n\nThese patterns are consistent with northward expansion of ancestry, potentially starting before the Viking Age, into the Jutland peninsula and Zealand island towards southern Sweden. The geographical origin of this ancestry is currently difficult to discern, as the available samples from Iron Age central Europe remain sparse. The timing of this expansion is constrained only by the samples available: this ancestry is not observed in individuals from the Copenhagen area of Denmark (around 100 ce–300 ce)6 , an individual from the southern tip of Sweden (around 500 ce)16, individuals from the Sandby Borg massacre site on Öland in present-day Sweden (around 500 ce)7 and 31 individuals from the mid-eighth century Salme ship burials in present-day Estonia (Extended Data Fig. 9), who probably originated in central Sweden6 . Therefore, this ancestry transformation most likely postdated these individuals in each particular region and mostly occurred in the second half of the first millennium ce.\n\nTo assess the full extent of the impact of this ancestry influx into Scandinavia, we next aimed to understand the ancestry of individuals in Scandinavia during the Viking Age. Previous studies have suggested that there was a diversity of ancestries in Scandinavia during this period6,7,65, due to increased maritime mobility, but have not reported per-individual ancestry estimates based on preceding ancestry. We analysed each individual's ancestry using a rotational qpAdm scheme (Fig. 4a, Extended Data Fig. 
9 and Supplementary Table 4), which showed increased power in distinguishing models when restricted to recent coalescences with Twigstats (more than 80% of accepted one-source models in Twigstats were also accepted one-source models using all SNPs, compared with less than 17% for the inverse).\n\nWe investigated regional differences in non-local ancestry across Scandinavia. In Denmark, 25 out of 53 Viking Age individuals had detectable (*z-*score > 1) central European-related ancestry (CentralEurope. IronRoman or Portugal.IronRoman) in their best accepted qpAdm models. In Sweden 20 out of 62 individuals had detectable central European-related ancestry, concentrated almost entirely in southern regions (Fig. 4a,d). By contrast, in Norway, this ancestry was observed in only 2 out of 24 individuals, indicating a wide-ranging impact of incoming ancestry in southern Scandinavia and suggesting more", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed3.pdf" - }, - { - "text": "(including one with ancestry related to Britain) are part of the majority strontium values, consistent with them having grown up locally. By contrast, the six most clearly non-local individuals based on the stable isotopes all have 50% or more EIA Scandinavian Peninsula-related ancestry, although three individuals with wholly EIA Scandinavian Peninsula-related ancestry also had local values. This suggests that the presence of central European-related ancestry was not a transient phenomenon, but an ancestry shift that occurred at some point after about 500 ce, the period to which individuals from the massacre site at Sandby Borg ringfort on Öland were dated; these individuals all have strictly EIA Scandinavian-related ancestry. 
Indeed, one hypothesis is that the massacre at Sandby Borg could represent conflict associated with movements of people that contributed to later ancestry change, although other scenarios are possible and further synthesis of biomolecular and archaeological data is necessary to test this hypothesis.\n\n### **Viking Age mobility into Scandinavia**\n\nPrevious studies had suggested a major influx of ancestry related to Britain into Viking Age Scandinavia6,7 . Although we detect this ancestry in some individuals (7 individuals in Norway, 14 in Denmark and 14 in Sweden), including some individuals whose ancestry appears to be entirely derived from Iron Age Britain, its overall impact appears reduced compared with previous reports. Our analysis indicates a proportionally larger impact of ancestry from Iron Age Britain in northern Norway, with southern Scandinavia predominantly influenced by continental central European ancestries (Fig. 4d). We hypothesize that our estimates of ancestry from Britain are reduced relative to previous studies because ancestry related to Britain and continental central Europe may have been indistinguishable. This could be due to a lack of statistical power to distinguish these closely related sources with standard methods, as well as through potential biases introduced by using modern surrogate populations that have since been influenced by later gene flow (such as gene flow into Britain). We illustrate this by replicating the analyses previously described6,7 (Extended Data Fig. 8).\n\nSimilarly, a previous study has suggested that individuals at sites such as Kärda in southern Sweden carried ancestry from southern Europe6 . In our models, two Kärda individuals fit with central European-related ancestry, but none of the individuals has a substantial proportion of ancestry related to southern European sources (Extended Data Fig. 9). 
Instead, we detect ancestry from southern European sources in only three individuals from Scandinavia, and in relatively small proportions (Fig. 4a).\n\nInterestingly, we detect ancestry from Bronze and Iron Age sources from Eastern Europe (present-day Lithuania and Poland), concentrated in southeastern parts of Sweden, particularly the island of Gotland (14 individuals; Fig. 4a). This is consistent with previous genetic studies6,7 . We find that this ancestry is enriched in male individuals (Extended Data Fig. 7d), suggesting male-biased mobility and/or burial. The closest match tends to be Roman Iron Age Lithuanian genomes associated with Balts, which would be consistent with mobility across the Baltic Sea, but we caution that the geographical representation of available genomes is still limited.\n\n### **Viking Age expansion from Scandinavia**\n\nTraditionally, historical perspectives on what is now often referred to as the Viking diaspora placed an emphasis on the movements and settlements of population groups from various parts of Scandinavia67. Our explorative MDS analysis again indicates mixed ancestries related to the Scandinavian EIA, with regional differences that point to varied local admixture (Fig. 4e and Extended Data Fig. 10).\n\nIn Britain, most of the individuals recovered from the two late Viking Age mass graves identified at Ridgeway Hill, Dorset, and St John's College, Oxford6 , show ancestries typical of those seen in Viking Age southern Scandinavia (Fig. 4f). Further west, North Atlantic Viking Age individuals in the Faroe Islands, Iceland and Greenland carry ancestry from the Scandinavian Peninsula, with several individuals showing the continental central Europe-related ancestry signal found in southern Scandinavia (Fig. 4f) and others who share substantial ancestry with Iron Age Britain. 
In contrast to previous hypotheses68, we found a marginal enrichment of ancestry related to Britain and Ireland in men (15 out of 17 men and 3 out of 6 women with at least one accepted model involving Iron or Roman Age Britain as source; Fisher's exact test *P* = 0.089) (Extended Data Fig. 7c,e). However, sampling of additional individuals to improve distinction between early English- and Norse-related ancestries would be required to fully test this hypothesis.\n\nIn eastern Europe, we observe EIA Scandinavian ancestries in a Viking Age burial from Ukraine, and these ancestries are overrepresented in Viking Age burials from present-day Russia. At Staraya Ladoga in western Russia, we observe several individuals with EIA Scandinavian Peninsula-related ancestry and at least one individual dated to the eleventh century with apparent ancestry related to Iron Age Britain. The relative absence of Iron Age central European ancestry, which was largely restricted to southern Scandinavia during the Viking Age, is thus indicative that these individuals may have originated in the central/ northern parts of Sweden or Norway, where Viking Age individuals show the most similar ancestry profiles to them.\n\n### **Conclusions**\n\nOur approach, Twigstats, transfers the power advantage of haplotypebased approaches to a fully temporal framework, which is applicable to *f*-statistics and enables previously unavailable unbiased and time-stratified analyses of admixture. We demonstrated that Twigstats enables fine-scale quantitative modelling of ancestry proportions, revealing wide-ranging ancestry changes that affect northern and central Europe during the Iron, Roman and Viking ages. We reveal evidence of the southward and/or eastward expansion of individuals who probably spoke Germanic languages and who had Scandinavian-related ancestry in the first half of the first millennium ce. 
We note that 'Scandinavian-related' in this context relates to the ancient genomes available, and so it is entirely possible that these processes were driven, for example, from regions in northern-central Europe. This could be consistent with the attraction of the greater wealth, which tended to build up among Rome's immediate neighbours and may have played a major role in vectors of migration internal to communities in Europe who lived beyond the Roman frontier52. Later, patterns of gene flow seem to have turned northwards, with the spread of Iron Age Central Europe-related ancestry into Scandinavia. Overall, our approach can be used for the reconstruction of new high-resolution genetic histories around the world.\n\n### **Online content**\n\nAny methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-024-08275-2.\n\n- 1. Lawson, D. J., Hellenthal, G., Myers, S. & Falush, D. Inference of population structure using dense haplotype data. *PLoS Genet.* **8**, 11–17 (2012).\n- 2. Hellenthal, G. et al. A genetic atlas of human admixture history. *Science* **343**, 747–751 (2014).\n- 3. Schiffels, S. et al. Iron Age and Anglo-Saxon genomes from East England reveal British migration history. *Nat. Commun.* **7**, 10408 (2016).\n- 4. Flegontov, P. et al. Palaeo-Eskimo genetic ancestry and the peopling of Chukotka and North America. *Nature* **570**, 236–240 (2019).\n- 5. Antonio, M. L. et al. Stable population structure in Europe since the Iron Age, despite high mobility. 
*eLife* **13**, e79714 (2024).", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed3.pdf" - }, - { - "text": "**Figure 11: Number of recent (within two years) OCU initiates presenting to treatment in 2005 and 2013, by age of individual at first presentation.**\n\nThe mode age of initiation has shifted from around 18 to around 25 and there is an older age profile throughout. Rises in average age of initiation have also been reported recently in cohorts of Australian injecting drug users (Horyniak et al., 2015). There appear to be two possible explanations.\n\n- There is a genuine shift towards new initiates being older, and for them to present to treatment much faster than in previous years.\n- There is a consistent, but small number of individuals who mis-report their age of onset when attending treatment i.e. who report that they have only been using opiates/crack for a short period when in fact they have been using for a far longer period, and that this is starting to really bias the numbers for recent cohorts because attendees from the original epidemic are becoming smaller.\n\nIt is possible then that the flattening we observe in the incidence trend is due to a small influx of older initiates, although mis-reporting may also explain that phenomenon. Either way though, as this analysis has made clear throughout, absolute numbers of new OCUs appear to be small – probably fewer than 10,000 per annum and the numbers of those involved with crime will be smaller still. In addition, despite a flattening in the probable trend in new users, there is currently no sign that it is likely to tip upwards.
If anything, the data suggest the downward trend is set to resume, though clearly it remains important to monitor the situation.", - "page_start": 28, - "page_end": 28, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "# **High-resolution genomic history of early medieval Europe**\n\nhttps://doi.org/10.1038/s41586-024-08275-2\n\nReceived: 14 December 2023\n\nAccepted: 23 October 2024\n\nPublished online: 1 January 2025\n\nOpen access\n\nCheck for updates\n\n**Leo Speidel1,2,3** ✉**, Marina Silva1 , Thomas Booth1 , Ben Raffield4 , Kyriaki Anastasiadou1 , Christopher Barrington5 , Anders Götherström6,7, Peter Heather8 & Pontus Skoglund1** ✉\n\nMany known and unknown historical events have remained below detection thresholds of genetic studies because subtle ancestry changes are challenging to reconstruct. Methods based on shared haplotypes1,2 and rare variants3,4 improve power but are not explicitly temporal and have not been possible to adopt in unbiased ancestry models. Here we develop Twigstats, an approach of time-stratified ancestry analysis that can improve statistical power by an order of magnitude by focusing on coalescences in recent times, while remaining unbiased by population-specific drift. We apply this framework to 1,556 available ancient whole genomes from Europe in the historical period. We are able to model individual-level ancestry using preceding genomes to provide high resolution. During the first half of the first millennium ce, we observe at least two different streams of Scandinavian-related ancestry expanding across western, central and eastern Europe. By contrast, during the second half of the first millennium ce, ancestry patterns suggest the regional disappearance or substantial admixture of these ancestries.
In Scandinavia, we document a major ancestry influx by approximately 800 ce, when a large proportion of Viking Age individuals carried ancestry from groups related to central Europe not seen in individuals from the early Iron Age. Our findings suggest that time-stratified ancestry analysis can provide a higher-resolution lens for genetic history.\n\nAncient genome sequencing has revolutionized our ability to reconstruct expansions, migrations and admixture events in the ancient past and understand their impact on human genetic variation today. However, tracing history using genetic ancestry has remained challenging, particularly in historical periods for which the richest comparative information from history and archaeology often exists. This is because ancestries in many geographical regions are often so similar as to be statistically indistinguishable with current approaches. One example is northern and central Europe since the start of the Iron Age around 500 bce, a period for which many long-standing questions remain, such as the nature of large-scale patterns of human migration during the fourth to sixth centuries ce, their impact on the Mediterranean world and later patterns of human mobility during the Viking Age (around 750–1050 ce).\n\nSeveral recent studies have documented substantial mobility and genetic diversity in these time periods, suggesting stable population structure despite high mobility5 , and have revealed genetic variation in Viking Age Scandinavia6–8 , early medieval England3,9 , early medieval Hungary10,11 and Iron Age and medieval Poland12. However, previous studies mostly used large modern cohorts to study ancestry change through time and space. This is because the differentiation between Iron Age groups in central and northern Europe is an order of magnitude lower (fixation index (*F*ST) = 0.1–0.7%; Extended Data Fig.
1) than, for example, the more commonly studied hunter-gatherer, early farmer and steppe-pastoralist groups that shaped the ancestry landscape of Stone Age and Bronze Age Europe13–16 (*F*ST = 5–9% (refs. 13,17)). Modern populations provide more power to detect differences, but their genetic affinity to ancient individuals may be confounded by later gene flow, that is, after the time of the ancient individual(s)18. The most principled approach is thus to build ancestry models in which source and 'outgroup/reference' populations are older than, or at least contemporary with, the target genome or group that we are trying to model18. However, this has been challenging, due to the limited statistical power offered by the thousands-fold lower sample sizes and reduced sequence quality of ancient genomes.\n\nReconstructing genetic histories and ancestry models from ancient DNA (aDNA) data commonly uses methods based on *f*-statistics13,19–22. Their popularity is rooted in a number of favourable properties, such as enabling analyses of lower-quality aDNA data, relative robustness to ascertainment and theoretical guarantees of unbiasedness, including in the presence of population bottlenecks21,23. Approaches derived from *f*-statistics, such as qpAdm13, are close to unique in enabling the unbiased fitting of admixture models, including identifying the number of such events and the closest representatives of sources13,14,23. However, *f*-statistics have not always had sufficient power to reconstruct events that involve closely related ancestries, despite increasing sample sizes6,24. Methods that identify haplotypes, or shared segments of DNA that are not broken down by recombination, have previously been shown to have more power than those using individual\n\n1 Ancient Genomics Laboratory, Francis Crick Institute, London, UK. 2 Genetics Institute, University College London, London, UK. 3 iTHEMS, RIKEN, Wako, Japan. 
4 Department of Archaeology and Ancient History, Uppsala University, Uppsala, Sweden. 5 Bioinformatics and Biostatistics, Francis Crick Institute, London, UK. 6 Centre for Palaeogenetics, Stockholm University, Stockholm, Sweden. 7 Department of Archaeology and Classical Studies, Stockholm University, Stockholm, Sweden. 8 Department of History, King's College London, London, UK. ✉e-mail: leo.speidel@riken.jp; pontus.skoglund@crick.ac.uk", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed3.pdf" - }, - { - "text": "individuals form a clade with respect to reference groups. The reason why this is a principled approach despite the 1000GP groups post-dating the ancient individuals is that if a group of ancient individuals are truly homogeneous, they will be so also with respect to later individuals.\n\nWe then define clusters by running UPGMA (unweighted pair group method with arithmetic mean) on −log10[*P* values] obtained from qpwave between all pairs of individuals and cut the resulting dendrogram at a height corresponding to a *P* value of 0.01. We then further subdivide clusters by requiring all samples to be within 500 years of the mean cluster age.\n\nTo choose the source groups shown in Fig. 2a and Extended Data Fig. 1d, we run this algorithm on samples from Iron and Roman Age Europe (Supplementary Table 1). We retain groups that have at least three individuals and, therefore, exclude clusters of size one or two.\n\nThis approach results in two clusters in the Scandinavian Peninsula, approximately separating northern from southern Scandinavia, three clusters in Poland and Ukraine that separate samples temporally between the early and later Bronze Age, a cluster combining the Hungarian Scythian and Slovakian La Tène-associated individuals, and a cluster each for Iron and Roman Age Portugal, Italy and Lithuania. 
In present-day Austria, Germany and France, this approach identifies three clusters, with each cluster spanning multiple archaeological sites in different countries, indicating genetic diversity in this region in the first millennium ce. Encouragingly, these clusters separate in our non-parametric MDS analysis (Fig. 2a), indicating that we are capturing real genetic differences between groups using this approach.\n\n**Fine-scale structure in Neolithic Europe.** To quantify fine-scale structure in Neolithic Europe (Extended Data Fig. 5b), we aimed to select individuals in Neolithic Europe who have not yet been affected by the arrival of Steppe ancestry and do not show excess hunter-gatherer ancestry. We infer distal ancestry sources using Balkan_N, Yamnaya and Western Hunter-gatherers as source groups and reference groups according to a previously proposed qpAdm setup46 (Supplementary Table 1). For this analysis, we infer ancestry using qpAdm applied to 1.2 million SNP sites of imputed genomes. We retain only Neolithic individuals with *P* > 0.01, *z* < 2 for Yamnaya ancestry, and *z* < 2 or proportion <0.25 for Western Hunter-gatherer ancestry.\n\n#### **Reporting summary**\n\nFurther information on research design is available in the Nature Portfolio Reporting Summary linked to this article.\n\n#### **Data availability**\n\nAll aDNA data used in this study were publicly available, and accession codes are listed in Supplementary Table 1.\n\n### **Code availability**\n\nTwigstats is freely available under an MIT licence through GitHub (https://github.com/leospeidel/twigstats), and detailed documentation, as well as example data, is available at https://leospeidel.github.io/twigstats/. The code has also been deposited at Zenodo (https://zenodo.org/records/13833120)76. 
All scripts to reproduce simulations, and to run Relate on imputed ancient genomes, and downstream analyses, including computation of *f*-statistics and running qpAdm models, are available through GitHub (https://github.com/leospeidel/twigstats_paper).\n\n- 70. Maier, R., Flegontov, P., Flegontova, O., Changmai, P. & Reich, D. On the limits of fitting complex models of population history to *f*-statistics. *eLife* **12**, e85492 (2023).\n- 71. Kelleher, J., Etheridge, A. M. & McVean, G. Efficient coalescent simulation and genealogical analysis for large sample sizes. *PLoS Comput. Biol.* **12**, e1004842 (2016).\n- 72. da Mota, B. S. et al. Imputation of ancient human genomes. *Nat. Commun.* **14**, 3660 (2023).\n- 73. Rubinacci, S., Ribeiro, D. M., Hofmeister, R. & Delaneau, O. Efficient phasing and imputation of low-coverage sequencing data using large reference panels. *Nat. Genet.* **53**, 120–126 (2021).\n- 74. The 1000 Genomes Project Consortium. A global reference for human genetic variation. *Nature* **526**, 68–74 (2015).\n- 75. Mallick, S. et al. The Simons Genome Diversity Project: 300 genomes from 142 diverse populations. *Nature* **538**, 201–206 (2016).\n- 76. Speidel, L. leospeidel/twigstats: Twigstats v1.0.1. *Zenodo* https://doi.org/10.5281/zenodo.13833119 (2024).\n- 77. Skoglund, P. et al. Genetic evidence for two founding populations of the Americas. *Nature* **525**, 104–108 (2015).\n- 78. Prüfer, K. et al. The complete genome sequence of a Neanderthal from the Altai Mountains. *Nature* **505**, 43–49 (2014).\n- 79. Prüfer, K. et al. A high-coverage Neandertal genome from Vindija Cave in Croatia. *Science* **358**, 655–658 (2017).\n\n**Acknowledgements** L.S. was supported by a Sir Henry Wellcome Fellowship (220457/Z/20/Z). P.S. 
was supported by the European Molecular Biology Organization, the Vallee Foundation, the European Research Council (852558), the Wellcome Trust (217223/Z/19/Z) and Francis Crick Institute core funding (FC001595) from Cancer Research UK, the UK Medical Research Council and the Wellcome Trust. B.R. was supported by the Swedish Research Council (2021-03333).\n\n**Author contributions** P.S. supervised the study. L.S. and P.S. developed the method. L.S., M.S. and P.S. curated the dataset. L.S. and P.S. analysed the data and wrote the manuscript. L.S., M.S., T.B., B.R., K.A., C.B., A.G., P.H. and P.S. interpreted the results and edited the manuscript.\n\n**Funding** Open Access funding provided by The Francis Crick Institute.\n\n**Competing interests** The authors declare no competing interests.\n\n#### **Additional information**\n\n**Supplementary information** The online version contains supplementary material available at https://doi.org/10.1038/s41586-024-08275-2.\n\n**Correspondence and requests for materials** should be addressed to Leo Speidel or Pontus Skoglund.\n\n**Peer review information** *Nature* thanks Jerome Kelleher, Duncan Sayer and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.\n\n**Reprints and permissions information** is available at http://www.nature.com/reprints.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed3.pdf" - }, - { - "text": "The analysis showed that of the 149 Drug Action Team areas in England, 72 per cent had decreases in new OCU treatment numbers in the year to September 2014 compared to the previous year. 
Furthermore, of the 42 areas showing an increase, only 11 also showed a rise for the 12 months to September 2010 compared with the 12 months to September 2014, and most of these involved small numbers of individuals.\n\nOverall then, the very recent data on treatment presentations do not currently suggest that the number of new OCUs is on the verge of increasing, merely that it flattened for a period.\n\nA number of factors could explain the flattening. Most importantly, if there was some sort of shock that caused a one-off reduction in the lag-time to treatment this could make it appear as if incidence was rising when in fact new users may be falling but a greater percentage may simply be turning up to treatment faster. Such a shock may have occurred given the reduction in heroin supply seen from the end of 2010 through to 2012 (see Ahmad *et al*., 2016). If users unable to obtain heroin used this enforced abstinence as a spur to seek treatment and hence to present to treatment services earlier than they otherwise would have done, this could cause a one-off 'concertina effect' in which treatment numbers initially flatten or even rise but then fall again. This would also explain why the downward trend has apparently resumed: evidence suggests the reduction in supply has also ended.\n\nHowever, further analysis revealed some other possibilities based on the characteristics of those attending opiate/crack treatment for the first time in recent years. The Appendix includes a series of graphs with age-of-onset distributions for those who first attended treatment in 2013, and then 2012, and so on back to 2004. These show that the majority of those who presented to treatment in 2004 initiated use in the mid-1990s in line with the likely peak of the epidemic. 
But by 2012 a far greater number of individuals presenting to treatment say they started using opiates/crack only a year or two before.23 In other words, there appears to be a shift towards a shorter lag between initiation and treatment. This shift looks even more dramatic when using proportions rather than absolute numbers, see the Appendix.\n\nFurthermore, these individuals (those who seem to have both initiated recently *and* presented to treatment within a year or two of initiation) show a notably different age-of-initiation profile compared to the established profile in the literature, which peaks around 18–22 (Donmall & Jones, 2005). These individuals have a notably older age profile: see figure 11 chart, which compares recent initiates who presented to treatment in 2005 with recent initiates who presented to treatment in 2013.\n\n23 This shift does not appear to be related to the reduction in heroin supply occurring around 2010/11. As Appendix 1 demonstrates, the pattern emerges far earlier.", - "page_start": 27, - "page_end": 27, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "**Figure 3.** Simulated changes in the percentage of days with daily temperature above the 90th percentile for 1981–2010 at 2°C global warming, for individual HadGEM3 simulations driven by SSTs and SICs from different members of the CMIP5 ensemble, and the ensemble mean. The labels above each panel identify the driving CMIP5 model (or ensemble mean).\n\nIndices based upon daily precipitation often show more spatial variability in changes than the temperature-based indices, and greater differences between ensemble members, but, nevertheless, some consistent pictures still emerge.\n\nThe number of consecutive dry days is projected to increase over some regions and decrease in others (figure 4). 
Southern Africa, the Mediterranean, Australia and northeast South America are projected to have increased dry spell lengths, while this is projected to decrease in central and eastern Asia. The general pattern of these projections is broadly consistent across the ensemble members. However, the global mean changes vary in sign (table 5), as a result of different magnitudes of regional changes dominating in different ensemble members.\n\nPerhaps more surprisingly, projected changes in maximum 5 day rainfall (Rx5day) also vary in sign both geographically and between models (figure 5). Extreme rainfall might simplistically be expected to increase in a warmer climate, and indeed the global mean change is a consistent increase in all ensemble members (table 5). Regional Rx5day is projected to increase over many regions including parts of southeast Asia, southern South America, northern Australia and the east coast of the USA. However, some regions (particularly, the central Amazon and the northern coast of South America) are projected to see a decrease in Rx5day.\n\nLarge increases in Rx5day are simulated in south and southeast Asia in all models, but with local details varying. Southeastern South America (broadly southern Brazil and northern Argentina) also sees large increases in Rx5day in all models. All models show only small changes over central and north Africa, Europe and most of Asia. In northern South America, however, some models show increases in Rx5day but others show decreases. This suggests that the ensemble-mean result of a decrease in Rx5day in this area may be subject to large uncertainty. Inter-model variations in the sign of changes are seen in a few other localized regions.\n\nThe average length of flood events (number of days in which the cumulative daily rainfall excess is positive, compared to the 95th percentile of the baseline) generally increases over most of the land surface, although this increase was mostly by a day or less (figure 6). 
However, some *Soc. A* **376**: 20160452", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed11.pdf" - }, - { - "text": "before 1960 was removed and because DIP tests are only administered to those aged 18 and over, so only using data to 2013 means it would not be possible for anyone to be born in 1996 or afterwards to be included. Even so, it is clear from the year-of-birth distribution (Figure 2) that positive opiate tests drop off sharply for those born after 1982. This is in line with other evidence suggesting that the number of *new* users of opiates decreased sharply in the 2000s. This needs to be considered when interpreting the analysis that follows. When DIP and the NDTMS treatment system began in the mid-2000s, there already existed a cohort of around 320,000 OCUs, according to available estimates by Hay *et al*., (2013). And most of these individuals began using opiates/crack during the epidemic years of the 1980s and 1990s. In terms of data capture this means it is hard to separate the gradual inclusion of more and more individuals from this original cohort from genuinely new users of these drugs.\n\nFigure 3, which shows the age of the individual at a positive test, also reveals that although the average age at positive test is 32, the peak is quite flat, with high numbers of positive tests still being recorded by individuals in their late 30s and even into their 40s.", - "page_start": 9, - "page_end": 9, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "when migration rates are high (and, therefore, *F*ST is low), indicating that Twigstats is better powered to detect such scenarios of continued migration. 
Encouragingly, a model that involves the two immediately adjacent populations is selected in all replicates as the 'best' model (highest qpAdm *P* value) using Twigstats, whereas this is the case in only 80% (migration rate of 0.001) and 30% (migration rate of 0.005) of all replicates using regular qpAdm.\n\n**Neanderthal admixture and deep structure simulation.** Our simulation in Extended Data Fig. 5d emulates Neanderthal admixture, in which Neanderthals and ancestors of modern humans split 25,000 generations ago and admixture occurs 2,000 generations ago. The resulting admixed non-African-like population coexists with the non-admixed African-like population until the present day. Furthermore, two Neanderthal populations split from each other 7,000 generations ago, which can be interpreted as emulating the Altai and Vindija Neanderthal populations, with Vindija being closer to the admixing source.\n\nWe simulate an alternative model with two subgroups emulating ancestral modern humans in Africa that have a non-zero symmetric migration rate, ranging from 4 × 10−5 to 2 × 10−4 per generation, up until 3,000 generations before present. One of these subgroups gives rise to a present-day African-like population, while the other gives rise to a present-day non-African-like population. We further sample two Neanderthal populations that split 7,000 generations ago and merge 25,000 generations ago with the same ancestral modern human subgroup that will eventually give rise to a non-African-like population.\n\nWe simulate whole genomes with human-like recombination rates and a mutation rate of 1.25 × 10−8 mutations per base per generation. Diploid effective population sizes are set to 10,000 except on the Neanderthal lineage, in which it is set to 3,000. 
We sample 2 haploid sequences for each Neanderthal population and 20 haploid sequences for the target admixed population and African non-admixed population.\n\n**Fine-scale structure simulation.** Our simulation in Extended Data Fig. 5a emulates the emergence of a fine-scale population structure and is adapted from ref. 39. In this simulation, populations split 100 generations ago into 25 subpopulations followed by a period in which individuals are allowed to migrate at a rate of 0.01 between adjacent populations in a 5 × 5 grid. The diploid effective population size is 500 in each of the 25 populations, and 10,000 in the ancestral population. We simulate ten replicates of chromosome 10, with a human-like mutation rate of 1.25 × 10−8 and hotspot recombination map. We sample two diploid individuals from each population. Furthermore, we sample 100 individuals from an ancestral population that splits from the 25 target populations 100 generations ago, before the emergence of structure in these 25 populations. Relate trees are inferred assuming true mutation rates, recombination rates and average coalescence rates across all samples.\n\n**Ancient sample selection.** A full list of ancient genomes can be found in Supplementary Table 1. Published ancient shotgun genomes provided by refs. 7,8 were only available aligned against the GRCh38 reference sequence. These data were realigned to the GRCh37d5 reference sequence using bwa aln (v. 0.7.17-r1188).\n\nWe select genomes with average autosomal coverage above 0.5×, except for VK518, which has previously been suggested to be of Saami ancestry6 and which had a coverage of 0.438. We included VK518 in our panel to capture this ancestry. Genomes above a coverage cut-off of 0.5× have previously been shown to result in reliable imputation results72. We exclude samples with evidence of contamination. We remove any duplicate individuals, such as individuals who were resequenced, choosing the file with the highest coverage. 
We then filter out any relatives annotated in the Allen Ancient DNA Resource v. 54.127, retaining the individual with the highest coverage in each family clade.\n\nOur final dataset includes 1,556 ancient genomes.\n\n**Imputation of ancient genomes.** We follow the recommended pipeline of GLIMPSE73 and first call genotype likelihoods for each genome in the 1000GP, segregating sites using bcftools mpileup with filter -q 20, -Q 20 and -C 50. We subsequently impute each genome separately using GLIMPSE v. 1.1.1 using the 1000GP phase 3 reference panel74 downloaded from https://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/. These imputed genomes are merged into a single VCF (variant call format) for further downstream processing.\n\nWe filter any site for which more than 2% of sites have an imputation posterior of less than 0.8 and retain all remaining sites so as not to have any missing genotypes at individual SNPs.\n\n**Relate-inferred genealogies.** We merge imputed ancient genomes with a subset of the 1000GP dataset, including all European populations (CEU, Utah residents with northern and western European ancestry; CHB, Han Chinese in Beijing, China; FIN, Finnish in Finland; GBR, British in England and Scotland; IBS, Iberian populations in Spain; TSI, Toscani in Italy; YRI, Yoruba in Ibadan, Nigeria). We create a second dataset in which we merge imputed genomes with the Simons Genome Diversity Project75 (SGDP) downloaded from https://sharehost.hms.harvard.edu/genetics/reich_lab/sgdp/phased_data2021/. These two datasets contain, respectively, a total of 2,270 and 1,834 modern and ancient individuals.\n\nWe then infer genealogies for the joint dataset of ancient and modern genomes using Relate v. 1.2.1. We restrict our analysis to transversions only and assume a mutation rate of 4 × 10−9 mutations per base per generation and input sample dates as shown in Supplementary Table 1. 
We use coalescence rates pre-inferred for the 1000GP and SGDP datasets.\n\n**MDS analysis.** We compute *f*2-statistics using the Twigstats function f2_blocks_from_Relate between all pairs of individuals and between all individuals and an outgroup (Han Chinese people in SGDP) using the Relate genealogies of SGDP modern and imputed ancient genomes. We set the argument *t* to specify a time cut-off and set the argument use_muts to FALSE to compute these *f*-statistics on branches of the genealogy and to TRUE to compute these only on the mutations. We use these to compute *f*3(outgroup, indiv1, indiv2) = 0.5 × (*f*2(outgroup, indiv1) + *f*2(outgroup, indiv2) − *f*2(indiv1, indiv2)) for every pair of individuals, and store 1 − *f*3(outgroup, indiv1, indiv2) in a symmetric *N* × *N* matrix (where *N* is the number of individuals) for which we then compute an MDS using the R function cmdscale.\n\n**qpAdm modelling.** In brief, qpAdm models are a generalization of *f*4-ratios, for which one-, two- and three-source models can be tested as hypotheses and admixture components and their s.e. obtained with a block jackknife13. A qpAdm model is fully specified by a set of putative source groups and additional 'outgroups' that are used to distinguish source ancestries. We used a rotating approach in which we iteratively selected a subset of source groups and used all remaining putative sources as outgroups. This approach penalizes models where true contributing sources are used as outgroups. With sufficient statistical power, qpAdm models will be statistically rejected if true contributing sources are used as outgroups. If statistical power is more limited, several models will fit the data, but the correct model is expected to be preferred over wrong models. 
Throughout, we use the Relate genealogies of SGDP modern and imputed ancient genomes in our qpAdm modelling and first compute *f*2-statistics using the Twigstats function f2_blocks_from_Relate between all populations involved, which we then feed to the ADMIXTOOLS2 package70.\n\n**Clustering using qpwave.** To overcome challenges with hand-curating source groups used in qpAdm modelling, we follow ref. 5 and run qpwave using Twigstats between pairs of ancient individuals. We use Han Chinese individuals from Beijing and five European populations from the 1000GP as reference groups. This approach tests whether two", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed3.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed3.pdf", - "query": "How many clusters has the Scandinavian peninsula been divided into thanks to Twigstats?", - "target_page": 12, - "target_passage": "This approach results in two clusters in the Scandinavian Penin- sula, approximately separating northern from southern Scandinavia", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "**Extended Data Fig. 7 | Ancestry estimates stratified by genetic sex. a**, Map showing ancestry carried by each Scandinavian Viking age individual. **b**, Ancestry proportions across individuals grouped by Latitude and genetic sex. **c**, Odds ratio and p-values calculated using a two-sided Fisher's exact test on the number of males and females carrying each ancestry in Viking Age Denmark, Sweden, Norway, Iceland, and Gotland. **d**, *F4* values of the form *f*4(Scandinavian_Peninsula_ EIA(I), alternative source group, males in Viking group, females in Viking group) computed using all SNPs and Twigstats. A significantly positive value is\n\nevidence of attraction of females with pop2 or males with Scandinavian_ Peninsula_EIA(I). Number of males and females is shown in each facet title and we restrict to groups with at least four males and females. 
We plot one standard error. **e**, Map showing 'farflung' Viking individuals grouped by ancestry and genetic sex. In contrast to Fig. 4a and d where we showed results for the 'best' qpAdm model, here in panels **a**, **b, c,** and **e**, an individual is assigned an ancestry group, if it has **any** accepted model (p > 0.01) where that ancestry features.", - "page_start": 18, - "page_end": 18, - "source_file": "pubmed3.pdf" - }, - { - "text": "individuals form a clade with respect to reference groups. The reason why this is a principled approach despite the 1000GP groups post-dating the ancient individuals is that if a group of ancient individuals are truly homogeneous, they will be so also with respect to later individuals.\n\nWe then define clusters by running UPGMA (unweighted pair group method with arithmetic mean) on −log10[*P* values] obtained from qpwave between all pairs of individuals and cut the resulting dendrogram at a height corresponding to a *P* value of 0.01. We then further subdivide clusters by requiring all samples to be within 500 years of the mean cluster age.\n\nTo choose the source groups shown in Fig. 2a and Extended Data Fig. 1d, we run this algorithm on samples from Iron and Roman Age Europe (Supplementary Table 1). We retain groups that have at least three individuals and, therefore, exclude clusters of size one or two.\n\nThis approach results in two clusters in the Scandinavian Peninsula, approximately separating northern from southern Scandinavia, three clusters in Poland and Ukraine that separate samples temporally between the early and later Bronze Age, a cluster combining the Hungarian Scythian and Slovakian La Tène-associated individuals, and a cluster each for Iron and Roman Age Portugal, Italy and Lithuania. 
In present-day Austria, Germany and France, this approach identifies three clusters, with each cluster spanning multiple archaeological sites in different countries, indicating genetic diversity in this region in the first millennium ce. Encouragingly, these clusters separate in our non-parametric MDS analysis (Fig. 2a), indicating that we are capturing real genetic differences between groups using this approach.\n\n**Fine-scale structure in Neolithic Europe.** To quantify fine-scale structure in Neolithic Europe (Extended Data Fig. 5b), we aimed to select individuals in Neolithic Europe who have not yet been affected by the arrival of Steppe ancestry and do not show excess hunter-gatherer ancestry. We infer distal ancestry sources using Balkan_N, Yamnaya and Western Hunter-gatherers as source groups and reference groups according to a previously proposed qpAdm setup46 (Supplementary Table 1). For this analysis, we infer ancestry using qpAdm applied to 1.2 million SNP sites of imputed genomes. We retain only Neolithic individuals with *P* > 0.01, *z* < 2 for Yamnaya ancestry, and *z* < 2 or proportion <0.25 for Western Hunter-gatherer ancestry.\n\n#### **Reporting summary**\n\nFurther information on research design is available in the Nature Portfolio Reporting Summary linked to this article.\n\n#### **Data availability**\n\nAll aDNA data used in this study were publicly available, and accession codes are listed in Supplementary Table 1.\n\n### **Code availability**\n\nTwigstats is freely available under an MIT licence through GitHub (https://github.com/leospeidel/twigstats), and detailed documentation, as well as example data, is available at https://leospeidel.github.io/twigstats/. The code has also been deposited at Zenodo (https://zenodo.org/records/13833120)76. 
All scripts to reproduce simulations, and to run Relate on imputed ancient genomes, and downstream analyses, including computation of *f*-statistics and running qpAdm models, are available through GitHub (https://github.com/leospeidel/twigstats_paper).\n\n- 70. Maier, R., Flegontov, P., Flegontova, O., Changmai, P. & Reich, D. On the limits of fitting complex models of population history to *f*-statistics. *eLife* **12**, e85492 (2023).\n- 71. Kelleher, J., Etheridge, A. M. & McVean, G. Efficient coalescent simulation and genealogical analysis for large sample sizes. *PLoS Comput. Biol.* **12**, e1004842 (2016).\n- 72. da Mota, B. S. et al. Imputation of ancient human genomes. *Nat. Commun.* **14**, 3660 (2023).\n- 73. Rubinacci, S., Ribeiro, D. M., Hofmeister, R. & Delaneau, O. Efficient phasing and imputation of low-coverage sequencing data using large reference panels. *Nat. Genet.* **53**, 120–126 (2021).\n- 74. The 1000 Genomes Project Consortium. A global reference for human genetic variation. *Nature* **526**, 68–74 (2015).\n- 75. Mallick, S. et al. The Simons Genome Diversity Project: 300 genomes from 142 diverse populations. *Nature* **538**, 201–206 (2016).\n- 76. Speidel, L. leospeidel/twigstats: Twigstats v1.0.1. *Zenodo* https://doi.org/10.5281/zenodo.13833119 (2024).\n- 77. Skoglund, P. et al. Genetic evidence for two founding populations of the Americas. *Nature* **525**, 104–108 (2015).\n- 78. Prüfer, K. et al. The complete genome sequence of a Neanderthal from the Altai Mountains. *Nature* **505**, 43–49 (2014).\n- 79. Prüfer, K. et al. A high-coverage Neandertal genome from Vindija Cave in Croatia. *Science* **358**, 655–658 (2017).\n\n**Acknowledgements** L.S. was supported by a Sir Henry Wellcome Fellowship (220457/Z/20/Z). P.S. 
was supported by the European Molecular Biology Organization, the Vallee Foundation, the European Research Council (852558), the Wellcome Trust (217223/Z/19/Z) and Francis Crick Institute core funding (FC001595) from Cancer Research UK, the UK Medical Research Council and the Wellcome Trust. B.R. was supported by the Swedish Research Council (2021-03333).\n\n**Author contributions** P.S. supervised the study. L.S. and P.S. developed the method. L.S., M.S. and P.S. curated the dataset. L.S. and P.S. analysed the data and wrote the manuscript. L.S., M.S., T.B., B.R., K.A., C.B., A.G., P.H. and P.S. interpreted the results and edited the manuscript.\n\n**Funding** Open Access funding provided by The Francis Crick Institute.\n\n**Competing interests** The authors declare no competing interests.\n\n#### **Additional information**\n\n**Supplementary information** The online version contains supplementary material available at https://doi.org/10.1038/s41586-024-08275-2.\n\n**Correspondence and requests for materials** should be addressed to Leo Speidel or Pontus Skoglund.\n\n**Peer review information** *Nature* thanks Jerome Kelleher, Duncan Sayer and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.\n\n**Reprints and permissions information** is available at http://www.nature.com/reprints.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed3.pdf" - }, - { - "text": "**Fig. 2 | Ancestry from the Iron Age to the early medieval period in Europe. a**, Source groups used for qpAdm modelling of early medieval Europe. MDS is computed jointly with individuals from later periods using pairwise outgroup *f*3 statistics (outgroup: Han Chinese people). These are calculated using Twigstats on Relate genealogies with a cut-off of 1,000 generations. The geographical map shows sampling locations of these individuals. 
**b**, The genetic structure of ancient groups predominantly from early medieval contexts shown on the same MDS as in **a**. The magnified inset shows an MDS computed without Twigstats on the same samples as the Twigstats MDS and focusing on early medieval or later individuals. **c**, Ancestry models of early medieval (EM) groups across Europe computed using qpAdm. Sample sizes are shown in black boxes. Sources are highlighted in **a** and marked as bold in the key, and were used in a rotational qpAdm scheme. For each target group, we remove models with infeasible admixture proportions (falling outside [0, 1]) and use a Twigstats cut-off of 1,000 generations. All models satisfy *P* > 0.01, unless a −log10[*P* value] is shown next to the model. If models satisfy *P* > 0.05, we show all such models; otherwise, we show only the model with the largest *P* value. **d**, The ancestry proportion derived from EIA Scandinavia in groups with a non-zero component of this ancestry. We show groups modelled in **c** that have a feasible model (*P* > 0.01). In **c**,**d**, we show one s.e. 
BA, Bronze Age; CNE, continental northern Europeans; EBA, early Bronze Age; EVA, early Viking Age; IA, Iron Age; MED, medieval; MLBA, middle/late Bronze Age; VA, Viking Age.\n\nancestry related to EIA Scandinavian Peninsula (Fig. 2c). The Wielbark archaeological complex has been linked to the later Chernyakhov culture to the southeast and to early Goths, an historical Germanic group that flourished in the second to fifth centuries ce56. Our modelling supports the idea that some groups that probably spoke Germanic languages from Scandinavia expanded south across the Baltic into the area between the Oder and Vistula rivers in the early centuries ce, although whether these expansions can be linked specifically with historical Goths is still debatable. Moreover, since a considerable proportion of Wielbark burials during this period were cremations, the possible presence of individuals with other ancestries cannot be strictly rejected if they were exclusively cremated (and are therefore invisible in the aDNA record).\n\nA previous study could not reject continuity in ancestry from the Wielbark-associated individuals to later medieval individuals from a similar region12. With the improved power of Twigstats, models of continuity are strongly rejected, with no one-source model of any preceding Iron Age or Bronze Age group providing a reasonable fit for the", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed3.pdf" - }, - { - "text": "**Fig. 4 | Ancestry in the Viking world. a**, Map showing ancestry carried by Scandinavian Viking Age individuals as inferred using the best-fitting qpAdm model. These are chosen by either choosing the one-source model with largest *P* value and *P* > 0.01 or the two-source model with the largest *P* value and *P* > 0.01. Extended Data Fig. 7 shows the same map with all accepted models. **b**, Stable isotope data indicating the geology of childhood origin. The histogram shows the ratio of strontium isotopes 87 to 86 measured in 109 individuals in Öland69. For individuals included in our ancestry modelling, we plot Iron Age central European-related ancestry against their stable isotope values (grey circles, *r* = −0.39, *P* = 0.075). Shaded area corresponds to the 95% confidence band around the regression line. **c**, The ancestry shift observed in Viking Age Danish groups using qpAdm on all SNPs or Twigstats. We show the best one-source and all two-source models with *P* > 0.05. For models with *P* < 0.05, the −log10[*P* value] is shown under the plot. Sample sizes for each group are shown in brackets. **d**, The ancestry proportion across Viking Age individuals in Denmark, Sweden and Norway grouped by latitude. **e**, Viking Age genetic variation (grey circles) visualized on the same MDS as in Fig. 2a,b. **f**, The best-fitting qpAdm ancestry model for far-flung Viking individuals. Detailed models for all individuals are shown in Extended Data Figs. 9 and 10. In **c** and **f**, we show one s.e. Rotating qpAdm sources are marked in bold in the key.
\n\ncontinuity from the EIA in Norway and northern Sweden (Fig. 4a). When considered collectively, the individuals who show evidence of central European-related ancestry are mostly observed in regions historically within the Danish sphere of influence and rule. Currently, no such individuals, for example, are noted in eastern central Sweden, which was a focus of regional power of the Svear (Fig. 4a). The difference in distribution could suggest that the central European-related ancestry was more common in regions dominated by the historical Götar and groups inhabiting the lands on the borders of the Danish kingdom.\n\nTo test the extent to which the variation in ancestry was consistent with mobility during the lifetime of the individuals or, alternatively, that of established groups, we focused on the island of Öland in southeast Sweden, where 23 individuals for whom we could reconstruct ancestry portraits also had associated strontium stable isotope data66. Strontium isotope data from dental enamel reflect the geology of the region where an individual grew to maturity, and there are considerable differences in expectations between Öland and many other regions in northern Europe. The full range of strontium isotope ratios in 109 individuals show two modes, a majority group with low ratios and a second minority group with high ratios falling outside the expected range of local fauna (Fig. 4b). 
Among 23 individuals with genomes in our data, all 5 individuals with 100% ancestry relating to central Europe", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed3.pdf" - }, - { - "text": "**Extended Data Fig. 10 | Ancestry models of farflung Viking individuals. a**, MDS of each farflung Viking group plotted on top of preceding Iron age and Roman individuals. **b**, All accepted qpAdm models using Twigstats-1000 for\n\nevery non-Scandinavian Viking individual computed in a rotational qpAdm with source groups identical to Fig. 4. We plot one standard error.", - "page_start": 21, - "page_end": 21, - "source_file": "pubmed3.pdf" - }, - { - "text": "**Extended Data Fig. 6 | MDS of ancient and modern genomes. a**, Same MDS as in Fig. 2 but only showing qpAdm source groups of Fig. 2a and modern groups in the Simons Genome Diversity Project (labelled) computed using genotypes\n\n(top) or Twigstats (bottom). **b**, MDS computed using genotypes showing one early medieval or Viking age group per facet. **c**, MDS computed using Twigstats showing one early medieval or Viking age group per facet.", - "page_start": 17, - "page_end": 17, - "source_file": "pubmed3.pdf" - }, - { - "text": "## **Fig. 3 | Time transects across six geographical regions in Europe.**\n\n**a**–**f**, Ancestry change visualized over a time transect spanning from the Bronze Age to the present day in Poland (**a**), southeastern Europe (**b**), central Europe (**c**), Italy (**d**), Britain and Ireland (**e**) and Scandinavia (**f**). The maps show sample locations of all available ancient genomes with at least 0.5× coverage from\n\nmedieval individuals (*P* ≪ 1 × 10−32). 
Instead, the majority of individuals from medieval Poland can be modelled only as a mixture of ancestries related to Roman Iron Age Lithuania, which is similar to ancestries of individuals from middle to late Bronze Age Poland (44%, 95% confidence interval 36–51%), an ancestry component related to Hungarian Scythians or Slovakian La Tène individuals (49%, 95% confidence interval 41–57%) and potentially a minority component of ancestry related to Sarmatians from the Caucasus (*P* = 0.13) (Fig. 2c). Four out of twelve individuals from medieval Poland, three of whom are from the late Viking Age6 , carried detectable Scandinavian-related ancestry. Some of the ancestry detected in individuals from later medieval Poland may have persisted during the late first millennium ce in the cremating portion of the population, but regardless, this points to large-scale ancestry transformation in medieval Poland (Fig. 3a). Future data could shed light on the extent to which this reflects the influence of groups speaking Slavic languages in the region.\n\nthese regions (Supplementary Table 1). Their ancestry is shown on the same MDS model as in Fig. 2a for each time period. For each geographic region, the early medieval period is highlighted in orange and the area in the MDS corresponding to Scandinavian and central European ancestries is highlighted in an orange box.\n\nIn present-day Slovakia, individuals associated with the Iron Age La Tène period appear close to Hungarian Scythians in the two dimensions of our MDS analysis, and are modelled as a mixture of central and eastern European ancestry. However, a first-century ce burial of a 50–60-year-old woman from Zohor is modelled only with Scandinavian-related ancestry, providing evidence of ancestry related to the Scandinavian EIA appearing southwest of the range of the Wielbark archaeological complex5,57 (Fig. 3b). 
Later early medieval individuals from Slovakia have partial Scandinavian-related ancestry, providing evidence for the integration between expanding and local groups.\n\nNearby, in present-day Hungary, we observe Scandinavian-related ancestry components in several burials dating to the sixth century ce associated with Longobards (Longobard_earlyMED(I))10 (Fig. 2c). This is consistent with the original study10, which reported affinity to present-day groups from northwestern Europe (GBR, CEU and FIN in the 1000 Genomes Project (1000GP))10 but which we can resolve with", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed3.pdf" - }, - { - "text": "(including one with ancestry related to Britain) are part of the majority strontium values, consistent with them having grown up locally. By contrast, the six most clearly non-local individuals based on the stable isotopes all have 50% or more EIA Scandinavian Peninsula-related ancestry, although three individuals with wholly EIA Scandinavian Peninsula-related ancestry also had local values. This suggests that the presence of central European-related ancestry was not a transient phenomenon, but an ancestry shift that occurred at some point after about 500 ce, the period to which individuals from the massacre site at Sandby Borg ringfort on Öland were dated; these individuals all have strictly EIA Scandinavian-related ancestry. Indeed, one hypothesis is that the massacre at Sandby Borg could represent conflict associated with movements of people that contributed to later ancestry change, although other scenarios are possible and further synthesis of biomolecular and archaeological data is necessary to test this hypothesis.\n\n### **Viking Age mobility into Scandinavia**\n\nPrevious studies had suggested a major influx of ancestry related to Britain into Viking Age Scandinavia6,7 . 
Although we detect this ancestry in some individuals (7 individuals in Norway, 14 in Denmark and 14 in Sweden), including some individuals whose ancestry appears to be entirely derived from Iron Age Britain, its overall impact appears reduced compared with previous reports. Our analysis indicates a proportionally larger impact of ancestry from Iron Age Britain in northern Norway, with southern Scandinavia predominantly influenced by continental central European ancestries (Fig. 4d). We hypothesize that our estimates of ancestry from Britain are reduced relative to previous studies because ancestry related to Britain and continental central Europe may have been indistinguishable. This could be due to a lack of statistical power to distinguish these closely related sources with standard methods, as well as through potential biases introduced by using modern surrogate populations that have since been influenced by later gene flow (such as gene flow into Britain). We illustrate this by replicating the analyses previously described6,7 (Extended Data Fig. 8).\n\nSimilarly, a previous study has suggested that individuals at sites such as Kärda in southern Sweden carried ancestry from southern Europe6 . In our models, two Kärda individuals fit with central European-related ancestry, but none of the individuals has a substantial proportion of ancestry related to southern European sources (Extended Data Fig. 9). Instead, we detect ancestry from southern European sources in only three individuals from Scandinavia, and in relatively small proportions (Fig. 4a).\n\nInterestingly, we detect ancestry from Bronze and Iron Age sources from Eastern Europe (present-day Lithuania and Poland), concentrated in southeastern parts of Sweden, particularly the island of Gotland (14 individuals; Fig. 4a). This is consistent with previous genetic studies6,7 . We find that this ancestry is enriched in male individuals (Extended Data Fig. 7d), suggesting male-biased mobility and/or burial. 
The closest match tends to be Roman Iron Age Lithuanian genomes associated with Balts, which would be consistent with mobility across the Baltic Sea, but we caution that the geographical representation of available genomes is still limited.\n\n### **Viking Age expansion from Scandinavia**\n\nTraditionally, historical perspectives on what is now often referred to as the Viking diaspora placed an emphasis on the movements and settlements of population groups from various parts of Scandinavia67. Our explorative MDS analysis again indicates mixed ancestries related to the Scandinavian EIA, with regional differences that point to varied local admixture (Fig. 4e and Extended Data Fig. 10).\n\nIn Britain, most of the individuals recovered from the two late Viking Age mass graves identified at Ridgeway Hill, Dorset, and St John's College, Oxford6 , show ancestries typical of those seen in Viking Age southern Scandinavia (Fig. 4f). Further west, North Atlantic Viking Age individuals in the Faroe Islands, Iceland and Greenland carry ancestry from the Scandinavian Peninsula, with several individuals showing the continental central Europe-related ancestry signal found in southern Scandinavia (Fig. 4f) and others who share substantial ancestry with Iron Age Britain. In contrast to previous hypotheses68, we found a marginal enrichment of ancestry related to Britain and Ireland in men (15 out of 17 men and 3 out of 6 women with at least one accepted model involving Iron or Roman Age Britain as source; Fisher's exact test *P* = 0.089) (Extended Data Fig. 7c,e). However, sampling of additional individuals to improve distinction between early English- and Norse-related ancestries would be required to fully test this hypothesis.\n\nIn eastern Europe, we observe EIA Scandinavian ancestries in a Viking Age burial from Ukraine, and these ancestries are overrepresented in Viking Age burials from present-day Russia. 
At Staraya Ladoga in western Russia, we observe several individuals with EIA Scandinavian Peninsula-related ancestry and at least one individual dated to the eleventh century with apparent ancestry related to Iron Age Britain. The relative absence of Iron Age central European ancestry, which was largely restricted to southern Scandinavia during the Viking Age, is thus indicative that these individuals may have originated in the central/ northern parts of Sweden or Norway, where Viking Age individuals show the most similar ancestry profiles to them.\n\n### **Conclusions**\n\nOur approach, Twigstats, transfers the power advantage of haplotypebased approaches to a fully temporal framework, which is applicable to *f*-statistics and enables previously unavailable unbiased and time-stratified analyses of admixture. We demonstrated that Twigstats enables fine-scale quantitative modelling of ancestry proportions, revealing wide-ranging ancestry changes that affect northern and central Europe during the Iron, Roman and Viking ages. We reveal evidence of the southward and/or eastward expansion of individuals who probably spoke Germanic languages and who had Scandinavian-related ancestry in the first half of the first millennium ce. We note that 'Scandinavian-related' in this context relates to the ancient genomes available, and so it is entirely possible that these processes were driven, for example, from regions in northern-central Europe. This could be consistent with the attraction of the greater wealth, which tended to build up among Rome's immediate neighbours and may have played a major role in vectors of migration internal to communities in Europe who lived beyond the Roman frontier52. Later, patterns of gene flow seem to have turned northwards, with the spread of Iron Age Central Europe-related ancestry into Scandinavia. 
Overall, our approach can be used for the reconstruction of new high-resolution genetic histories around the world.\n\n### **Online content**\n\nAny methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-024-08275-2.\n\n- 1. Lawson, D. J., Hellenthal, G., Myers, S. & Falush, D. Inference of population structure using dense haplotype data. *PLoS Genet.* **8**, 11–17 (2012).\n- 2. Hellenthal, G. et al. A genetic atlas of human admixture history. *Science* **343**, 747–751 (2014).\n- 3. Schiffels, S. et al. Iron Age and Anglo-Saxon genomes from East England reveal British migration history. *Nat. Commun.* **7**, 10408 (2016).\n- 4. Flegontov, P. et al. Palaeo-Eskimo genetic ancestry and the peopling of Chukotka and North America. *Nature* **570**, 236–240 (2019).\n- 5. Antonio, M. L. et al. Stable population structure in Europe since the Iron Age, despite high mobility. *eLife* **13**, e79714 (2024).", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed3.pdf" - }, - { - "text": "**Extended Data Fig. 5 | Three examples of applying Twigstats. a** Fine-scale population structure simulation emulating ref. 39 (see Methods for simulation details). First two principal components are computed from pairwise outgroup *f*3 statistics on the genotypes directly and on Relate trees inferred from the 50 target individuals. Labels in plots show the average coordinates of members of that population. For each panel, we calculate a separation index (SI) as in39, which we define as the proportion of individuals for which the closest individual (by the Euclidean distance in PC space) is in the same population. 
**b**, Fine-scale genetic structure in Neolithic Europe quantified using an MDS calculated on a symmetric matrix that contains all pairwise outgroup *f*3 statistics (outgroup: YRI) between individuals. These are either calculated directly on genotypes or calculated using Twigstats on Relate genealogies with a cutoff of 1000 generations. Individuals were selected by filtering based on Steppe and Western Hunter-gatherer ancestry (Methods). **c**, Admixture proportions inferred using qpAdm with three distal sources of Western\n\nHunter-gatherers, early European farmers, and Yamnaya Steppe people46. We show results for Twigstats-5000. Bias is measured as the difference in admixture proportions obtained from Twigstats-5000 and all SNPs, and we show standard errors of the latter. We plot two standard errors around the mean. The standard error improvement shown is one minus the ratio of standard errors obtained from Twigstats-5000 and using all SNPs. **d**, Neanderthal admixture proportion inferred using an *f*4-ratio of the form *f*4(outgroup, Altai, target, Mbuti)/*f*4(outgroup, Altai, Vindija, Mbuti). We compute these on genetic variation data from the Simon's Genome Diversity Project (SGDP)75 and use the high-coverage Altai and Vindija Neanderthals78,79. We also compute equivalent *f*4-ratio statistics in a simulation emulating Neanderthal admixture 50,000 years ago and a second simulation involving no Neanderthal admixture but deep structure that leads to a similar inference unless deep coalescences are ignored by Twigstats. We plot two standard errors around the mean.", - "page_start": 16, - "page_end": 16, - "source_file": "pubmed3.pdf" - }, - { - "text": "- 4. A summary is displayed in which you can confirm that you selected the correct hosts. Click **Make Host Cluster** (see Figure 8-22).\n\n| Create Host Cluster: Summary | | × |\n| --- | --- | --- |\n| A new host cluster ITSO-ESX-Cluster-01 will be created. 
| | |\n| The following hosts will be added to ITSO-ESX-Cluster-01: | | |\n| ITSO-VMHOST-01 | | |\n| ITSO-VMHOST-02 | | |\n| Cancel | Back | Make Host Cluster |\n\n*Figure 8-22 Create host cluster summary*\n\n- 5. After the task completes, click **Close** to return to the Host Cluster view, where you can see the cluster that you created (see Figure 8-23).\n\n| Create Host Cluster | 三 Actions ▼ | IVI | Default | | Contains | Filter | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| IID Name | | Status | | Host Count | | | Mappings Count | 11 |\n| ITSO-ESX-Cluster-01 | | ✓ Online | | | | | | |\n\n*Figure 8-23 Host Cluster view*\n\n**Note:** The host cluster status depends on its member hosts. One offline or degraded host sets the host cluster status as degraded.", - "page_start": 364, - "page_end": 364, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed3.pdf", - "query": "What are the cultures with which the Wielbark culture is associated?", - "target_page": 4, - "target_passage": "linked to the later Chernyakhov cul- ture to the southeast and to early Goths", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "higher resolution using earlier genomes. Several other individuals from these Longobard burials (Longobard_earlyMED(II)) show no detectable ancestry from northern Europe and, instead, are more closely related to Iron Age groups in continental central Europe, putatively representing descendants of local people buried in a Longobard style. Our results are consistent with attestations that the Longobards originated in the areas of present-day northern Germany or Denmark, but that by the sixth century ce they incorporated multiple different cultural identities, and mixed ancestries. 
Present-day populations of Hungary do not appear to derive detectable ancestry from early medieval individuals from Longobard contexts, and are instead more similar to Scythian-related ancestry sources (Extended Data Fig. 6), consistent with the later impact of Avars, Magyars and other eastern groups58.\n\nIn southern Germany, the genetic ancestry of individuals from early medieval Bavaria probably associated with the historical Germanic-language-speaking Baiuvarii59 cannot be modelled as deriving ancestry solely from earlier groups in Iron Age central Germany (*P* ≪ 1 × 10−36). The Baiuvarii probably appeared in the region in the fifth century ce59, but their origins remain unresolved. Our current best model indicates a mixture with ancestry derived from EIA Peninsular Scandinavia and central Europe, suggesting an expansion of Scandinavian-related ancestry producing a regional ancestry shift (Figs. 2c and 3c).\n\nIn Italy, southward expansions of northern and central European ancestries appear by the Late Antiquity (approximately fourth century ce), where a clear diversification of ancestry can be observed compared with preceding time periods (Fig. 3d). However, no individuals with near 100% Scandinavian ancestry can be observed in the sampling data available so far.\n\nIn Britain, the ancestries of Iron Age and Roman individuals form a tight cluster in our MDS analysis (Fig. 3e), shifted relative to available preceding Bronze Age individuals from Ireland and Orkney, and adjacent to, but distinct from, available individuals in Iron Age and Roman central Europe. However, two first- to second-century ce burials from a Roman military fortress site in Austria (Klosterneuburg)5 carry ancestry that is currently indistinguishable from Iron Age or Roman populations of Britain, to the exclusion of other groups (qpWave cladality *P* = 0.11). 
One option is that they had ancestry from Britain; alternatively, currently unsampled populations from western continental Europe carried ancestries similar to Iron Age southern Britain.\n\nTwigstats substantially improves models of admixture between ancestries from Iron Age Britain and northern Europe in early medieval England9 , halving standard errors from 9% with SNPs to 4% when using time stratification (point estimates 80% and 79% Iron Age Britain-related ancestry, respectively). We used this improved resolution to demonstrate that an earlier Roman individual (6DT3) dating to approximately second to fourth century ce from the purported gladiator or military cemetery at Driffield Terrace in York (Roman *Eboracum*), England60, who was previously identified as an ancestry outlier61,62, specifically carried approximately 25% EIA Scandinavian Peninsula-related ancestry (Fig. 2c). This documents that people with Scandinavian-related ancestry already were in Britain before the fifth century ce, after which there was a substantial influx associated with Anglo-Saxon migrations9 . Although it is uncertain whether this individual was a gladiator or soldier, individuals and groups from northern Europe are indeed recorded in Roman sources both as soldiers and as enslaved gladiators63,64.\n\nAcross Europe, we see regional differences in the southeastern and southwestern expansions of Scandinavian-related ancestries. Early medieval groups from present-day Poland and Slovakia carry specific ancestry from one of the Scandinavian EIA groups—the one with individuals primarily from the northern parts of Scandinavia in the EIA—with no evidence of ancestry related to the other primary group in more southern Scandinavia (Fig. 2d). 
By contrast, in southern and western Europe, Scandinavian-related ancestry either derives from EIA southern Scandinavia—as in the cases of the probable Baiuvarii in Germany, Longobard-associated burials in Italy and early medieval burials in southern Britain—or cannot be resolved to a specific region in Scandinavia. If these expansions are indeed linked to language, this pattern is remarkably concordant with the main branches of Germanic languages, with the now-extinct eastern Germanic spoken by Goths in Ukraine on the one hand, and western Germanic languages such as Old English and Old High German recorded in the early medieval period on the other hand.\n\n### **Influx into pre-Viking Age Scandinavia**\n\nIn EIA Scandinavia (<500 ce), we find evidence for broad genetic homogeneity. Specifically, individuals from Denmark (100 ce–300 ce) were indistinguishable from contemporary people in the Scandinavian Peninsula (Fig. 2c). However, we observe a clear shift in genetic ancestry already in the eighth century ce (Late Iron Age/early Viking Age) on Zealand (present-day Denmark) for which a 100% EIA ancestry model is rejected (*P* = 1 × 10−17 using Twigstats; *P* = 7.5 × 10−4 without). This shift in ancestry persists among later Viking Age groups in Denmark, where all groups are modelled with varying proportions of ancestry related to Iron Age continental groups in central Europe (Figs. 3f and 4c). A non-parametric MDS of Viking Age individuals suggests that variation between individuals forms a cline spanning from the EIA Scandinavian Peninsula individuals to ancestry characteristic of central Europe (Fig. 4e). 
The observed shift in ancestry in Denmark cannot be confounded by potentially earlier unknown gene flow into Iron Age source groups in Austria, France and Germany, but such gene flow could affect the exact ancestry proportions.\n\nThese patterns are consistent with northward expansion of ancestry, potentially starting before the Viking Age, into the Jutland peninsula and Zealand island towards southern Sweden. The geographical origin of this ancestry is currently difficult to discern, as the available samples from Iron Age central Europe remain sparse. The timing of this expansion is constrained only by the samples available: this ancestry is not observed in individuals from the Copenhagen area of Denmark (around 100 ce–300 ce)6 , an individual from the southern tip of Sweden (around 500 ce)16, individuals from the Sandby Borg massacre site on Öland in present-day Sweden (around 500 ce)7 and 31 individuals from the mid-eighth century Salme ship burials in present-day Estonia (Extended Data Fig. 9), who probably originated in central Sweden6 . Therefore, this ancestry transformation most likely postdated these individuals in each particular region and mostly occurred in the second half of the first millennium ce.\n\nTo assess the full extent of the impact of this ancestry influx into Scandinavia, we next aimed to understand the ancestry of individuals in Scandinavia during the Viking Age. Previous studies have suggested that there was a diversity of ancestries in Scandinavia during this period6,7,65, due to increased maritime mobility, but have not reported per-individual ancestry estimates based on preceding ancestry. We analysed each individual's ancestry using a rotational qpAdm scheme (Fig. 4a, Extended Data Fig. 
9 and Supplementary Table 4), which showed increased power in distinguishing models when restricted to recent coalescences with Twigstats (more than 80% of accepted one-source models in Twigstats were also accepted one-source models using all SNPs, compared with less than 17% for the inverse).\n\nWe investigated regional differences in non-local ancestry across Scandinavia. In Denmark, 25 out of 53 Viking Age individuals had detectable (*z-*score > 1) central European-related ancestry (CentralEurope. IronRoman or Portugal.IronRoman) in their best accepted qpAdm models. In Sweden 20 out of 62 individuals had detectable central European-related ancestry, concentrated almost entirely in southern regions (Fig. 4a,d). By contrast, in Norway, this ancestry was observed in only 2 out of 24 individuals, indicating a wide-ranging impact of incoming ancestry in southern Scandinavia and suggesting more", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed3.pdf" - }, - { - "text": "## **Fig. 3 | Time transects across six geographical regions in Europe.**\n\n**a**–**f**, Ancestry change visualized over a time transect spanning from the Bronze Age to the present day in Poland (**a**), southeastern Europe (**b**), central Europe (**c**), Italy (**d**), Britain and Ireland (**e**) and Scandinavia (**f**). The maps show sample locations of all available ancient genomes with at least 0.5× coverage from\n\nmedieval individuals (*P* ≪ 1 × 10−32). Instead, the majority of individuals from medieval Poland can be modelled only as a mixture of ancestries related to Roman Iron Age Lithuania, which is similar to ancestries of individuals from middle to late Bronze Age Poland (44%, 95% confidence interval 36–51%), an ancestry component related to Hungarian Scythians or Slovakian La Tène individuals (49%, 95% confidence interval 41–57%) and potentially a minority component of ancestry related to Sarmatians from the Caucasus (*P* = 0.13) (Fig. 2c). 
Four out of twelve individuals from medieval Poland, three of whom are from the late Viking Age6 , carried detectable Scandinavian-related ancestry. Some of the ancestry detected in individuals from later medieval Poland may have persisted during the late first millennium ce in the cremating portion of the population, but regardless, this points to large-scale ancestry transformation in medieval Poland (Fig. 3a). Future data could shed light on the extent to which this reflects the influence of groups speaking Slavic languages in the region.\n\nthese regions (Supplementary Table 1). Their ancestry is shown on the same MDS model as in Fig. 2a for each time period. For each geographic region, the early medieval period is highlighted in orange and the area in the MDS corresponding to Scandinavian and central European ancestries is highlighted in an orange box.\n\nIn present-day Slovakia, individuals associated with the Iron Age La Tène period appear close to Hungarian Scythians in the two dimensions of our MDS analysis, and are modelled as a mixture of central and eastern European ancestry. However, a first-century ce burial of a 50–60-year-old woman from Zohor is modelled only with Scandinavian-related ancestry, providing evidence of ancestry related to the Scandinavian EIA appearing southwest of the range of the Wielbark archaeological complex5,57 (Fig. 3b). Later early medieval individuals from Slovakia have partial Scandinavian-related ancestry, providing evidence for the integration between expanding and local groups.\n\nNearby, in present-day Hungary, we observe Scandinavian-related ancestry components in several burials dating to the sixth century ce associated with Longobards (Longobard_earlyMED(I))10 (Fig. 2c). 
This is consistent with the original study10, which reported affinity to present-day groups from northwestern Europe (GBR, CEU and FIN in the 1000 Genomes Project (1000GP))10 but which we can resolve with", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed3.pdf" - }, - { - "text": "**Fig. 2 | Ancestry from the Iron Age to the early medieval period in Europe. a**, Source groups used for qpAdm modelling of early medieval Europe. MDS is computed jointly with individuals from later periods using pairwise outgroup *f*3 statistics (outgroup: Han Chinese people). These are calculated using Twigstats on Relate genealogies with a cut-off of 1,000 generations. The geographical map shows sampling locations of these individuals. **b**, The genetic structure of ancient groups predominantly from early medieval contexts shown on the same MDS as in **a**. The magnified inset shows an MDS computed without Twigstats on the same samples as the Twigstats MDS and focusing on early medieval or later individuals. **c**, Ancestry models of early medieval (EM) groups across Europe computed using qpAdm. Sample sizes are\n\nancestry related to EIA Scandinavian Peninsula (Fig. 2c). The Wielbark archaeological complex has been linked to the later Chernyakhov culture to the southeast and to early Goths, an historical Germanic group that flourished in the second to fifth centuries ce56. Our modelling supports the idea that some groups that probably spoke Germanic languages from Scandinavia expanded south across the Baltic into the area between the Oder and Vistula rivers in the early centuries ce, although whether these expansions can be linked specifically with historical Goths is still debatable. Moreover, since a considerable shown in black boxes. Sources are highlighted in **a** and marked as bold in the key, and were used in a rotational qpAdm scheme. 
For each target group, we remove models with infeasible admixture proportions (falling outside [0, 1]) and use a Twigstats cut-off of 1,000 generations. All models satisfy *P* > 0.01, unless a −log10[*P* value] is shown next to the model. If models satisfy *P* > 0.05, we show all such models; otherwise, we show only the model with the largest *P* value. **d**, The ancestry proportion derived from EIA Scandinavia in groups with a non-zero component of this ancestry. We show groups modelled in **c** that have a feasible model (*P* > 0.01). In **c**,**d**, we show one s.e. BA, Bronze Age; CNE, continental northern Europeans; EBA, early Bronze Age; EVA, early Viking Age; IA, Iron Age; MED, medieval; MLBA, middle/late Bronze Age; VA, Viking Age.\n\nproportion of Wielbark burials during this period were cremations, the possible presence of individuals with other ancestries cannot be strictly rejected if they were exclusively cremated (and are therefore invisible in the aDNA record).\n\nA previous study could not reject continuity in ancestry from the Wielbark-associated individuals to later medieval individuals from a similar region12. With the improved power of Twigstats, models of continuity are strongly rejected, with no one-source model of any preceding Iron Age or Bronze Age group providing a reasonable fit for the", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed3.pdf" - }, - { - "text": "(including one with ancestry related to Britain) are part of the majority strontium values, consistent with them having grown up locally. By contrast, the six most clearly non-local individuals based on the stable isotopes all have 50% or more EIA Scandinavian Peninsula-related ancestry, although three individuals with wholly EIA Scandinavian Peninsula-related ancestry also had local values. 
This suggests that the presence of central European-related ancestry was not a transient phenomenon, but an ancestry shift that occurred at some point after about 500 ce, the period to which individuals from the massacre site at Sandby Borg ringfort on Öland were dated; these individuals all have strictly EIA Scandinavian-related ancestry. Indeed, one hypothesis is that the massacre at Sandby Borg could represent conflict associated with movements of people that contributed to later ancestry change, although other scenarios are possible and further synthesis of biomolecular and archaeological data is necessary to test this hypothesis.\n\n### **Viking Age mobility into Scandinavia**\n\nPrevious studies had suggested a major influx of ancestry related to Britain into Viking Age Scandinavia6,7 . Although we detect this ancestry in some individuals (7 individuals in Norway, 14 in Denmark and 14 in Sweden), including some individuals whose ancestry appears to be entirely derived from Iron Age Britain, its overall impact appears reduced compared with previous reports. Our analysis indicates a proportionally larger impact of ancestry from Iron Age Britain in northern Norway, with southern Scandinavia predominantly influenced by continental central European ancestries (Fig. 4d). We hypothesize that our estimates of ancestry from Britain are reduced relative to previous studies because ancestry related to Britain and continental central Europe may have been indistinguishable. This could be due to a lack of statistical power to distinguish these closely related sources with standard methods, as well as through potential biases introduced by using modern surrogate populations that have since been influenced by later gene flow (such as gene flow into Britain). We illustrate this by replicating the analyses previously described6,7 (Extended Data Fig. 
8).\n\nSimilarly, a previous study has suggested that individuals at sites such as Kärda in southern Sweden carried ancestry from southern Europe6 . In our models, two Kärda individuals fit with central European-related ancestry, but none of the individuals has a substantial proportion of ancestry related to southern European sources (Extended Data Fig. 9). Instead, we detect ancestry from southern European sources in only three individuals from Scandinavia, and in relatively small proportions (Fig. 4a).\n\nInterestingly, we detect ancestry from Bronze and Iron Age sources from Eastern Europe (present-day Lithuania and Poland), concentrated in southeastern parts of Sweden, particularly the island of Gotland (14 individuals; Fig. 4a). This is consistent with previous genetic studies6,7 . We find that this ancestry is enriched in male individuals (Extended Data Fig. 7d), suggesting male-biased mobility and/or burial. The closest match tends to be Roman Iron Age Lithuanian genomes associated with Balts, which would be consistent with mobility across the Baltic Sea, but we caution that the geographical representation of available genomes is still limited.\n\n### **Viking Age expansion from Scandinavia**\n\nTraditionally, historical perspectives on what is now often referred to as the Viking diaspora placed an emphasis on the movements and settlements of population groups from various parts of Scandinavia67. Our explorative MDS analysis again indicates mixed ancestries related to the Scandinavian EIA, with regional differences that point to varied local admixture (Fig. 4e and Extended Data Fig. 10).\n\nIn Britain, most of the individuals recovered from the two late Viking Age mass graves identified at Ridgeway Hill, Dorset, and St John's College, Oxford6 , show ancestries typical of those seen in Viking Age southern Scandinavia (Fig. 4f). 
Further west, North Atlantic Viking Age individuals in the Faroe Islands, Iceland and Greenland carry ancestry from the Scandinavian Peninsula, with several individuals showing the continental central Europe-related ancestry signal found in southern Scandinavia (Fig. 4f) and others who share substantial ancestry with Iron Age Britain. In contrast to previous hypotheses68, we found a marginal enrichment of ancestry related to Britain and Ireland in men (15 out of 17 men and 3 out of 6 women with at least one accepted model involving Iron or Roman Age Britain as source; Fisher's exact test *P* = 0.089) (Extended Data Fig. 7c,e). However, sampling of additional individuals to improve distinction between early English- and Norse-related ancestries would be required to fully test this hypothesis.\n\nIn eastern Europe, we observe EIA Scandinavian ancestries in a Viking Age burial from Ukraine, and these ancestries are overrepresented in Viking Age burials from present-day Russia. At Staraya Ladoga in western Russia, we observe several individuals with EIA Scandinavian Peninsula-related ancestry and at least one individual dated to the eleventh century with apparent ancestry related to Iron Age Britain. The relative absence of Iron Age central European ancestry, which was largely restricted to southern Scandinavia during the Viking Age, is thus indicative that these individuals may have originated in the central/ northern parts of Sweden or Norway, where Viking Age individuals show the most similar ancestry profiles to them.\n\n### **Conclusions**\n\nOur approach, Twigstats, transfers the power advantage of haplotypebased approaches to a fully temporal framework, which is applicable to *f*-statistics and enables previously unavailable unbiased and time-stratified analyses of admixture. 
We demonstrated that Twigstats enables fine-scale quantitative modelling of ancestry proportions, revealing wide-ranging ancestry changes that affect northern and central Europe during the Iron, Roman and Viking ages. We reveal evidence of the southward and/or eastward expansion of individuals who probably spoke Germanic languages and who had Scandinavian-related ancestry in the first half of the first millennium ce. We note that 'Scandinavian-related' in this context relates to the ancient genomes available, and so it is entirely possible that these processes were driven, for example, from regions in northern-central Europe. This could be consistent with the attraction of the greater wealth, which tended to build up among Rome's immediate neighbours and may have played a major role in vectors of migration internal to communities in Europe who lived beyond the Roman frontier52. Later, patterns of gene flow seem to have turned northwards, with the spread of Iron Age Central Europe-related ancestry into Scandinavia. Overall, our approach can be used for the reconstruction of new high-resolution genetic histories around the world.\n\n### **Online content**\n\nAny methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-024-08275-2.\n\n- 1. Lawson, D. J., Hellenthal, G., Myers, S. & Falush, D. Inference of population structure using dense haplotype data. *PLoS Genet.* **8**, 11–17 (2012).\n- 2. Hellenthal, G. et al. A genetic atlas of human admixture history. *Science* **343**, 747–751 (2014).\n- 3. Schiffels, S. et al. Iron Age and Anglo-Saxon genomes from East England reveal British migration history. *Nat. Commun.* **7**, 10408 (2016).\n- 4. Flegontov, P. et al. 
Palaeo-Eskimo genetic ancestry and the peopling of Chukotka and North America. *Nature* **570**, 236–240 (2019).\n- 5. Antonio, M. L. et al. Stable population structure in Europe since the Iron Age, despite high mobility. *eLife* **13**, e79714 (2024).", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed3.pdf" - }, - { - "text": "individuals form a clade with respect to reference groups. The reason why this is a principled approach despite the 1000GP groups post-dating the ancient individuals is that if a group of ancient individuals are truly homogeneous, they will be so also with respect to later individuals.\n\nWe then define clusters by running UPGMA (unweighted pair group method with arithmetic mean) on −log10[*P* values] obtained from qpwave between all pairs of individuals and cut the resulting dendrogram at a height corresponding to a *P* value of 0.01. We then further subdivide clusters by requiring all samples to be within 500 years of the mean cluster age.\n\nTo choose the source groups shown in Fig. 2a and Extended Data Fig. 1d, we run this algorithm on samples from Iron and Roman Age Europe (Supplementary Table 1). We retain groups that have at least three individuals and, therefore, exclude clusters of size one or two.\n\nThis approach results in two clusters in the Scandinavian Peninsula, approximately separating northern from southern Scandinavia, three clusters in Poland and Ukraine that separate samples temporally between the early and later Bronze Age, a cluster combining the Hungarian Scythian and Slovakian La Tène-associated individuals, and a cluster each for Iron and Roman Age Portugal, Italy and Lithuania. In present-day Austria, Germany and France, this approach identifies three clusters, with each cluster spanning multiple archaeological sites in different countries, indicating genetic diversity in this region in the first millennium ce. Encouragingly, these clusters separate in our non-parametric MDS analysis (Fig. 
2a), indicating that we are capturing real genetic differences between groups using this approach.\n\n**Fine-scale structure in Neolithic Europe.** To quantify fine-scale structure in Neolithic Europe (Extended Data Fig. 5b), we aimed to select individuals in Neolithic Europe who have not yet been affected by the arrival of Steppe ancestry and do not show excess hunter-gatherer ancestry. We infer distal ancestry sources using Balkan_N, Yamnaya and Western Hunter-gatherers as source groups and reference groups according to a previously proposed qpAdm setup46 (Supplementary Table 1). For this analysis, we infer ancestry using qpAdm applied to 1.2 million SNP sites of imputed genomes. We retain only Neolithic individuals with *P* > 0.01, *z* < 2 for Yamnaya ancestry, and *z* < 2 or proportion <0.25 for Western Hunter-gatherer ancestry.\n\n#### **Reporting summary**\n\nFurther information on research design is available in the Nature Portfolio Reporting Summary linked to this article.\n\n#### **Data availability**\n\nAll aDNA data used in this study were publicly available, and accession codes are listed in Supplementary Table 1.\n\n### **Code availability**\n\nTwigstats is freely available under an MIT licence through GitHub (https://github.com/leospeidel/twigstats), and detailed documentation, as well as example data, is available at https://leospeidel.github. io/twigstats/. The code has also been deposited at Zenodo (https:// zenodo.org/records/13833120) 76. All scripts to reproduce simulations, and to run Relate on imputed ancient genomes, and downstream analyses, including computation of *f*-statistics and running qpAdm models, are available through GitHub (https://github.com/leospeidel/ twigstats_paper).\n\n- 70. Maier, R., Flegontov, P., Flegontova, O., Changmai, P. & Reich, D. On the limits of fitting complex models of population history to *f*-statistics. *eLife* **12**, e85492 (2023).\n- 71. Kelleher, J., Etheridge, A. M. & McVean, G. 
Efficient coalescent simulation and genealogical analysis for large sample sizes. *PLoS Comput. Biol.* **12**, e1004842 (2016).\n- 72. da Mota, B. S. et al. Imputation of ancient human genomes. *Nat. Commun.* **14**, 3660 (2023).\n- 73. Rubinacci, S., Ribeiro, D. M., Hofmeister, R. & Delaneau, O. Efficient phasing and imputation of low-coverage sequencing data using large reference panels. *Nat. Genet.* **53**, 120–126 (2021).\n- 74. The 1000 Genomes Project Consortium. A global reference for human genetic variation. *Nature* **526**, 68–74 (2015).\n- 75. Mallick, S. et al. The Simons Genome Diversity Project: 300 genomes from 142 diverse populations. *Nature* **538**, 201–206 (2016).\n- 76. Speidel, L. leospeidel/twigstats: Twigstats v1.0.1. *Zenodo* https://doi.org/10.5281/zenodo. 13833119 (2024).\n- 77. Skoglund, P. et al. Genetic evidence for two founding populations of the Americas. *Nature* **525**, 104–108 (2015).\n- 78. Prüfer, K. et al. The complete genome sequence of a Neanderthal from the Altai Mountains. *Nature* **505**, 43–49 (2014).\n- 79. Prüfer, K. et al. A high-coverage Neandertal genome from Vindija Cave in Croatia. *Science* **358**, 655–658 (2017).\n\n**Acknowledgements** L.S. was supported by a Sir Henry Wellcome Fellowship (220457/Z/20/Z). P.S. was supported by the European Molecular Biology Organization, the Vallee Foundation, the European Research Council (852558), the Wellcome Trust (217223/Z/19/Z) and Francis Crick Institute core funding (FC001595) from Cancer Research UK, the UK Medical Research Council and the Wellcome Trust. B.R. was supported by the Swedish Research Council (2021-03333).\n\n**Author contributions** P.S. supervised the study. L.S. and P.S. developed the method. L.S, M.S. and P.S. curated the dataset. L.S. and P.S. analysed the data and wrote the manuscript. L.S., M.S., T.B., B.R., K.A., C.B., A.G., P.H. and P.S. 
interpreted the results and edited the manuscript.\n\n**Funding** Open Access funding provided by The Francis Crick Institute.\n\n**Competing interests** The authors declare no competing interests.\n\n#### **Additional information**\n\n**Supplementary information** The online version contains supplementary material available at https://doi.org/10.1038/s41586-024-08275-2.\n\n**Correspondence and requests for materials** should be addressed to Leo Speidel or Pontus Skoglund.\n\n**Peer review information** *Nature* thanks Jerome Kelleher, Duncan Sayer and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.\n\n**Reprints and permissions information** is available at http://www.nature.com/reprints.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed3.pdf" - }, - { - "text": ".\n\n501 European Agency for Safety and Health at Work, 2019: The value of occupational safety and health and the societal costs of work-related injuries and diseases, Publications Office of the European Union, Luxembourg, doi:10.2802/251128\n\n502 A hint in this direction gives the EWCS 1995 evaluating per occupation the responses to the question: 'Is your job more difficult because of a chronic or permanent health problem by occupation?\". Between 72% and 81% of the manually dominated occupations (agriculture, crafts, machine operators, elementary occupations) respond: 'No, never', that is, between 19% and 28% of these occupations see such problems. 
Compared to this, between 86% and 90% of the high-level clerk workers respond: 'No, never', that is, comparatively less, only between 10% and 14% see these problems.\n\n503 WHO: Cultural context of health and well-being, Policy brief No 1, Culture matters: using a cultural contexts of health approach to enhance policy-making, Cultural context of health and well-being", - "page_start": 160, - "page_end": 160, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "means of measuring the toxicity of text generated by LMs. For example, the Perspective API model has been found to associate higher levels of toxicity with sentences containing identity markers for marginalized groups or even specific names [61, 103].\n\nSecond, auditing an LM for biases requires an a priori understanding of what social categories might be salient. The works cited above generally start from US protected attributes such as race and gender (as understood within the US). But, of course, protected attributes aren't the only identity characteristics that can be subject to bias or discrimination, and the salient identity characteristics and expressions of bias are also culture-bound [46, 116]. Thus, components like toxicity classifiers would need culturally appropriate training data for each context of audit, and even still we may miss marginalized identities if we don't know what to audit for.\n\nFinally, we note that moving beyond demonstrating the existence of bias to building systems that verify the 'safety' of some LM (even for a given protected class) requires engaging with the systems of power that lead to the harmful outcomes such a system would seek to prevent [19]. For example, the #MeToo movement has spurred broad-reaching conversations about inappropriate sexual behavior from men in power, as well as men more generally [84]. 
These conversations challenge behaviors that have been historically considered appropriate or even the fault of women, shifting notions of sexually inappropriate behavior. Any product development that involves operationalizing definitions around such shifting topics into algorithms is necessarily political (whether or not developers choose the path of maintaining the status quo ante). For example, men and women make significantly different assessments of sexual harassment online [40]. An algorithmic definition of what constitutes inappropriately sexual communication will inherently be concordant with some views and discordant with others. Thus, an attempt to measure the appropriateness of text generated by LMs, or the biases encoded by a system, always needs to be done in relation to particular social contexts and marginalized perspectives [19].\n\n#### 4.4 Curation, Documentation & Accountability\n\nIn summary, LMs trained on large, uncurated, static datasets from the Web encode hegemonic views that are harmful to marginalized populations. We thus emphasize the need to invest significant resources into curating and documenting LM training data. In this, we follow Jo et al. [62], who cite archival history data collection methods as an example of the amount of resources that should be dedicated to this process, and Birhane and Prabhu [18], who call for a more justice-oriented data collection methodology. Birhane and Prabhu note, echoing Ruha Benjamin [15], \"Feeding AI systems on the world's beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is a fantasy.\" [p.1541]\n\nWhen we rely on ever larger datasets we risk incurring documentation debt, 18 i.e. putting ourselves in a situation where the datasets are both undocumented and too large to document post hoc. While documentation allows for potential accountability [13, 52, 86], undocumented training data perpetuates harm without recourse. 
Without documentation, one cannot try to understand training data characteristics in order to mitigate some of these attested issues or even unknown ones. The solution, we propose, is to budget for\n\ndocumentation as part of the planned costs of dataset creation, and only collect as much data as can be thoroughly documented within that budget.\n\n#### 5 DOWN THE GARDEN PATH\n\nIn §4 above, we discussed the ways in which different types of biases can be encoded in the corpora used to train large LMs. In §6 below we explore some of the risks and harms that can follow from deploying technology that has learned those biases. In the present section, however, we focus on a different kind of risk: that of misdirected research effort, specifically around the application of LMs to tasks intended to test for natural language understanding (NLU). As the very large Transformer LMs posted striking gains in the state of the art on various benchmarks intended to model meaning-sensitive tasks, and as initiatives like [142] made the models broadly accessible to researchers seeking to apply them, large quantities of research effort turned towards measuring how well BERT and its kin do on both existing and new benchmarks.19 This allocation of research effort brings with it an opportunity cost, on the one hand in terms of time not spent applying meaning capturing approaches to meaning sensitive tasks, and on the other hand in terms of time not spent exploring more effective ways of building technology with datasets of a size that can be carefully curated and available for a broader set of languages [65, 91].\n\nThe original BERT paper [39] showed the effectiveness of the architecture and the pretraining technique by evaluating on the General Language Understanding Evaluation (GLUE) benchmark [138], the Stanford Question Answering Datasets (SQuAD 1.1 and 2.0) [108], and the Situations With Adversarial Generations benchmark (SWAG) [155], all datasets designed to test language 
understanding and/or commonsense reasoning. BERT posted state of the art results on all of these tasks, and the authors conclude by saying that \"unsupervised pre-training is an integral part of many language understanding systems.\" [39, p.4179]. Even before [39] was published, BERT was picked up by the NLP community and applied with great success to a wide variety of tasks [e.g. 2, 149].\n\nHowever, no actual language understanding is taking place in LM-driven approaches to these tasks, as can be shown by careful manipulation of the test data to remove spurious cues the systems are leveraging [21, 93]. Furthermore, as Bender and Koller [14] argue from a theoretical perspective, languages are systems of signs [37], i.e. pairings of form and meaning. But the training data for LMs is only form; they do not have access to meaning. Therefore, claims about model abilities must be carefully characterized.\n\nAs the late Karen Spärck Jones pointed out: the use of LMs ties us to certain (usually unstated) epistemological and methodological commitments [124]. Either i) we commit ourselves to a noisy-channel interpretation of the task (which rarely makes sense outside of ASR), ii) we abandon any goals of theoretical insight into tasks and treat LMs as \"just some convenient technology\" [p.7], or iii) we implicitly assume a certain statistical relationship — known to be invalid — between inputs, outputs and meanings.20 Although\n\n18On the notion of documentation debt as applied to code, rather than data, see [154].\n\n19~26% of papers sampled from ACL, NAACL and EMNLP since 2018 cite [39].\n\n20Specifically, that the mutual information between the input and the meaning given the output is zero — what Spärck Jones calls \"the model of ignorance\".", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "*Figure 5 – Feature Info tool.*\n\nThe different displayed layers can be examined using the \"Legend\" tool. 
If the external service provides legend graphics, the user can interpret the given symbology and temporarily disable the display of layers (see Figure 6).\n\n*Figure 6 – Legend tool.*", - "page_start": 39, - "page_end": 39, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.\n\n#### 6.2 Risks and Harms\n\nThe ersatz fluency and coherence of LMs raises several risks, precisely because humans are prepared to interpret strings belonging to languages they speak as meaningful and corresponding to the communicative intent of some individual or group of individuals who have accountability for what is said. We now turn to examples, laying out the potential follow-on harms.\n\nThe first risks we consider are the risks that follow from the LMs absorbing the hegemonic worldview from their training data. When humans produce language, our utterances reflect our worldviews, including our biases [78, 79]. As people in positions of privilege with respect to a society's racism, misogyny, ableism, etc., tend to be overrepresented in training data for LMs (as discussed in §4 above), this training data thus includes encoded biases, many already recognized as harmful.\n\nBiases can be encoded in ways that form a continuum from subtle patterns like referring to women doctors as if doctor itself entails not-woman or referring to both genders excluding the possibility of non-binary gender identities, through directly contested framings (e.g. undocumented immigrants vs. illegal immigrants or illegals), to language that is widely recognized to be derogatory (e.g. racial slurs) yet still used by some. 
While some of the most overtly derogatory words could be filtered out, not all forms of online abuse are easily detectable using such taboo words, as evidenced by the growing body of research on online abuse detection [45, 109]. Furthermore, in addition to abusive language [139] and hate speech [67], there are subtler forms of negativity such as gender bias [137], microaggressions [22], dehumanization [83], and various socio-political framing biases [44, 114] that are prevalent in language data. For example, describing a woman's account of her experience of sexism with the word tantrum both reflects a worldview where the sexist actions are normative and foregrounds a stereotype of women as childish and not in control of their emotions.\n\nAn LM that has been trained on such data will pick up these kinds of problematic associations. If such an LM produces text that is put into the world for people to interpret (flagged as produced by an 'AI' or otherwise), what risks follow? In the first instance, we foresee that LMs producing text will reproduce and even amplify the biases in their input [53]. Thus the risk is that people disseminate text generated by LMs, meaning more text in the world that reinforces and propagates stereotypes and problematic associations, both to humans who encounter the text and to future LMs trained on training sets that ingested the previous generation LM's output. Humans who encounter this text may themselves be subjects of those stereotypes and associations or not. Either way, harms ensue: readers subject to the stereotypes may experience the psychological harms of microaggressions [88, 141] and stereotype threat [97, 126]. 
Other readers may be introduced to stereotypes or have ones they\n\nalready carry reinforced, leading them to engage in discrimination (consciously or not) [55], which in turn leads to harms of subjugation, denigration, belittlement, loss of opportunity [3, 4, 56] and others on the part of those discriminated against.\n\nIf the LM outputs overtly abusive language (as Gehman et al. [53] show that they can and do), then a similar set of risks arises. These include: propagating or proliferating overtly abusive views and associations, amplifying abusive language, and producing more (synthetic) abusive language that may be included in the next iteration of large-scale training data collection. The harms that could follow from these risks are again similar to those identified above for more subtly biased language, but perhaps more acute to the extent that the language in question is overtly violent or defamatory. They include the psychological harm experienced by those who identify with the categories being denigrated if they encounter the text; the reinforcement of sexist, racist, ableist, etc. ideology; followon effects of such reinforced ideologies (including violence); and harms to the reputation of any individual or organization perceived to be the source of the text.\n\nIf the LM or word embeddings derived from it are used as components in a text classification system, these biases can lead to allocational and/or reputational harms, as biases in the representations affect system decisions [125]. This case is especially pernicious for being largely invisible to both the direct user of the system and any indirect stakeholders about whom decisions are being made. 
Similarly, biases in an LM used in query expansion could influence search results, further exacerbating the risk of harms of the type documented by Noble in [94], where the juxtaposition of search queries and search results, when connected by negative stereotypes, reinforce those stereotypes and cause psychological harm.\n\nThe above cases involve risks that could arise when LMs are deployed without malicious intent. A third category of risk involves bad actors taking advantage of the ability of large LMs to produce large quantities of seemingly coherent texts on specific topics on demand in cases where those deploying the LM have no investment in the truth of the generated text. These include prosaic cases, such as services set up to 'automatically' write term papers or interact on social media,23 as well as use cases connected to promoting extremism. For example, McGuffie and Newhouse [80] show how GPT-3 could be used to generate text in the persona of a conspiracy theorist, which in turn could be used to populate extremist recruitment message boards. This would give such groups a cheap way to boost recruitment by making human targets feel like they were among many like-minded people. If the LMs are deployed in this way to recruit more people to extremist causes, then harms, in the first instance, befall the people so recruited and (likely more severely) to others as a result of violence carried out by the extremists.\n\nYet another risk connected to seeming coherence and fluency involves machine translation (MT) and the way that increased fluency of MT output changes the perceived adequacy of that output [77]. This differs somewhat from the cases above in that there was an initial human communicative intent, by the author of the source language text. 
However, MT systems can (and frequently do) produce output that is inaccurate yet both fluent and (again, seemingly)\n\nthe system (or the organization deploying the system) has accountability for the truth of the utterances produced.\n\n23Such as the GPT-3 powered bot let loose on Reddit; see https://thenextweb.com/ neural/2020/10/07/someone-let-a-gpt-3-bot-loose-on-reddit-it-didnt-end-well/amp/.", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "### *4.3.5 Summary of survey results on wellbeing and health status*\n\nAn overview table on the responses to five questions in three different surveys reveals partly consistent and partly contradictory results per country. There are countries that have a consistent outcome over all questions, while others show a mixed or contradictory picture.\n\nValues better than 25% of EU average are marked in aquamarine, and values worse than 25% of EU average in orange. Other values are not marked.\n\nDenmark, Czechia, Italy and Luxembourg are the countries over or at average for every item. Some countries are mostly at average, or have a negative result for one item, often the period off work or low", - "page_start": 97, - "page_end": 97, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0806.pdf", - "query": "What do the timescales during which high-amplitude flaring events occur in blazars indicate?", - "target_page": 1, - "target_passage": "that much of the en- ergy is being produced deep within the jet on small, sub-parsec scales", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "### **3. VERITAS Blazar KSP**\n\nVERITAS observes for ∼750 h and ∼250 h each year during periods of astronomical darkness and partial moonlight, respectively. 
The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- A VHE blazar discovery program (∼200 h / yr): Each year ∼10 targets are selected to receive ∼10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- A target-of-opportunity (ToO) observation program (∼50 h / yr): VERITAS blazar observations can be triggered by either a VERI-TAS blazar discovery, a VHE flaring alert (>2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- Multi-wavelength (MWL) studies of VHE blazars (∼50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n# **4. Blazar Discovery Program**\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ-rays. 
The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles (−8 ◦ < δ < 72◦ ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0.3. To further the study of the\n\nEBL a few objects having a large (z > 0.3) are also included in the target list. The target list includes:\n\n- All nearby (z < 0.3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- The X-ray brightest HBL (z < 0.3) in the recent Sedentary [8] and ROXA [9] surveys.\n- Four distant (z > 0.3) BL Lac objects recommended by [5, 10].\n- Several FSRQ recommended as potential VHE emitters in [6, 11].\n- All nearby (z < 0.3) blazars detected by EGRET [12].\n- All nearby (z < 0.3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- All sources (|b| > 10◦ ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ-ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERI-TAS blazar discovery program.\n\n### **5. VERITAS AGN Detections**\n\nVERITAS has detected VHE γ-ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n### **5.1. Recent VERITAS Blazar Discoveries**\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES 0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. 
Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHE emission from 3C 66A was discovered by VER-ITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (ΓVHE ∼ 4.1). RGB J0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "# **Submillimeter Variability and the Gamma-ray Connection in** *Fermi* **Blazars**\n\nA. Strom *Univ. of Arizona, AZ 85721, USA* A. Siemiginowska, M. Gurwell, B. Kelly *CfA, MA 02138, USA*\n\nWe present multi-epoch observations from the Submillimeter Array (SMA) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August–October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. 
All of the the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## **1. INTRODUCTION**\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ-ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ-ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. 
We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submillimeter Array1 (SMA) at 1mm and 850µm, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ-ray indices and luminosities.\n\n## **2.** *SMA* **BLAZARS**\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850µm windows, achieving spatial resolution as fine as 0.25\" at 850µm. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and\n\n1The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.\n\n2http://sma1.sma.hawaii.edu/callist/callist.html", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "tion of correlated VHE and X-ray flux variability, as well as correlated spectral hardening in both the VHE and X-ray bands. The VHE MWL observations were performed in both \"quiescent\" and flaring states for some of the observed blazars. 
For the observed HBL objects, the SEDs can be well described by a simple SSC model in both high and low states. However, an additional external Compton component is necessary to adequately fit the SEDs of the IBL objects.\n\nThe Fermi-LAT is already having a significant impact on the blazar KSP. In future seasons, the VER-ITAS blazar discovery program will focus its discovery program on hard-spectrum blazars detected by Fermi-LAT, and will likely have a greater focus on high-risk/high-reward objects at larger redshifts (0.3 < z < 0.7). In addition, the number of VHE blazars studied in pre-planned MWL campaigns will increase as data from the Fermi-LAT will be publicly available. In particular, the extensive pre-planned MWL campaigns will focus on objects that are noteworthy for the impact their data may have on understanding the EBL. The simultaneous observations of blazars by VERITAS and Fermi-LAT will completely resolve the higher-energy SED peak, often for the first time, enabling unprecedented constraints on the underlying blazar phenomena to be derived.\n\n### **Acknowledgments**\n\nThis research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. We acknowledge the excellent work of the technical support staff at the FLWO and the collaborating institutions in the construction and operation of the instrument.\n\n### **References**\n\n- [1] F. Aharonian et al. 2007, ApJ, 664, L71\n- [2] F. Aharonian et al. 2006, Nature, 440, 1018\n- [3] F. Aharonian et al. 2007, A&A, 475, L9\n- [4] J. Holder, et al. 2008, AIPC, 1085, 657\n- [5] L. Costamante & G. Ghisellini 2002, A&A, 384, 56\n- [6] E.S. Perlman 2000, AIPC, 515, 53\n- [7] F.W. Stecker et al. 1996, ApJ, 473, L75\n- [8] P. Giommi et al. 2005, A&A, 434, 385\n- [9] S. Turriziani et al. 2007, A&A, 472, 699\n- [10] L. Costamante 2006, arXiv:0612709\n- [11] P. 
Padovani et al. 2002, ApJ, 581, 895\n- [12] R. Muhkerjee et al. 2001, AIPC, 558, 324\n- [13] A.A. Abdo et al. 2009, ApJ, 700, 597\n- [14] V.A. Acciari et al. 2008, ApJ, 684, L73\n- [15] V.A. Acciari et al. 2009, ApJ, 707, 612\n- [16] V.A. Acciari et al. 2009, ApJ, 690, L126\n- [17] V.A. Acciari et al. 2009, ApJ, 693, L104\n- [18] L.C. Reyes 2009, arXiv:0907.5175\n- [19] R.A. Ong 2009, ATel, 1941\n- [20] R.A. Ong et al. 2009, ATel, 2272\n- [21] V.A. Acciari et al. 2009, ApJ, 708, L100\n- [22] R.A. Ong et al. 2009, ATel, 2301\n- [23] R.A. Ong et al. 2009, ATel, 2260\n- [24] R.A. Ong et al. 2009, ATel, 2309\n- [25] W. Benbow 2009, arXiv:0908.1412\n- [26] V.A. Acciari et al. 2009, ApJ, submitted\n- [27] V.A. Acciari et al. 2009, ApJ, 695, 1370\n- [28] V.A. Acciari et al. 2009, ApJ, in press\n- [29] J. Grube 2009, arXiv:0907.4862", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850µm observations, and the open triangles represent the 1mm observations.\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0.03 ≤ z ≤ 2.19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. 
Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## **2.1. Submillimeter Properties**\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. For these objects, submillimeter luminosities are calculated in the standard way:\n\n$$\\nu_{e}L_{\\nu_{e}}=4\\pi D_{\\mathrm{L}}^{2}{\\frac{\\nu_{\\mathrm{obs}}F_{\\mathrm{obs}}}{1+z}},\\qquad\\qquad(1)$$\n\nwhere DL is the luminosity distance, νobs is the frequency of the observed band, and Fobs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850µm), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no signicant difference in the class distributions in either band; the \"tail\" to the left is populated by objects with errors larger than the intrinsic variability.\n\nflux (in erg cm−2 s −1 Hz−1 ) over the three month period. We adopt a lambda cold dark matter cosmology with values of H0 = 71 km s−1 Mpc−1 , ΩM = 0.27, and Λ = 0.73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasisimultaneous with the Fermi observations. To be consistent with the use of αγ, we define spectral energy index as νFν = ν −αS and calculate αS from the average of the energy spectral indices over the corresponding three months. We only calculate αS for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850µm during this time frame.\n\n## **3. VARIABILITY ANALYSIS**\n\n## **3.1. Variability Index**\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. 
[8]:\n\n$$V\\,=\\,\\frac{(F_{\\rm max}-\\sigma_{F_{\\rm max}})-(F_{\\rm min}+\\sigma_{F_{\\rm min}})}{(F_{\\rm max}-\\sigma_{F_{\\rm max}})+(F_{\\rm min}+\\sigma_{F_{\\rm min}})}\\tag{2}$$\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "detailed variability analysis for one of two reasons: (1) too few data points or (2) flux measurement uncertainties on the order of the amplitude of observed variability. It is important to note that, due to discrepancies between the sampling frequency in both bands, the variability indices for the 850µm band may be artificially depressed due to the fact that there are not always corresponding measurements at higher frequencies during flaring epochs.\n\n## **3.2. First-Order Continuous Autoregression**\n\nWe follow the method of Kelly et al. [9], who model quasar optical light curves as a continuous time firstorder autoregressive process (CAR(1)) in order to extract characteristic time scales and the amplitude of flux variations. Although flaring behavior is not typically thought of as an autoregressive process, we find that the light curves are well-fit by the models and therefore adopt the method here to study blazar submillimeter light curves.\n\nThe CAR(1) process is described by a stochastic differential equation [9],\n\n$$d S(t)\\,=\\,{\\frac{1}{\\tau}}S(t)\\,d t+\\sigma{\\sqrt{d t}}\\,\\epsilon\\left(t\\right)+b\\,d t,\\qquad(3)$$\n\nassociated with a power spectrum of the form\n\n$$P_{X}(f)\\,=\\,\\frac{2\\sigma^{2}\\tau^{2}}{1+(2\\pi\\tau f)^{2}}.\\qquad\\qquad(4)$$\n\nIn equations 3 and 4, τ is called the \"relaxation time\" of the process S(t) and is identified by the break in PX(f). 
The power spectrum appears flat for timescales longer than this and falls off as 1/f 2 for timescales shorter than the characteristic timescale of the process.\n\nTaking the logarithm of the blazar light curve (in Jy) to be S(t), we adopt τ (in days) as the characteristic timescale of variability, after which the physical process \"forgets\" about what has happened at time lags of greater than τ . The two other relevant parameters, σ and µ = b/a, are the overall amplitude of variability and the logarithm of mean value of the light curve, respectively.\n\nIn the routine, we construct an autoregressive model for the light curves for a minimum of 100,000 iterations and calculate the value of τ from the break in the power spectrum in each instance. Due to the limited number of observations in the 850µm band, we performed this autoregressive analysis only for the 1mm light curves, which typically have more than 10 points per light curve.\n\nThis method yielded some surprising results. In Figure 3, we see that the BL Lacs and FSRQs exhibit virtually no difference in characteristic timescale, with\n\nFigure 3: Characteristic timescale (days) versus submillimeter luminosity (erg s−1 ) in the 1mm band for all objects. Physically, τ represents a \"relaxation timescale\", the timescale beyond which events are no longer correlated.\n\nboth classes extending across a large range in τ . Because of the uncertainty for objects with shorter characteristic timescales, it is hard to draw any definitive conclusions about the differences between classes. It is important to note that τ does not necessarily represent a flaring timescale, which is a behavior that typically operates on a scale of ∼10–100 days and not on the longer timescales we see in τ .\n\n## **4. 
CONNECTION WITH GAMMA-RAYS**\n\nIn general, we find that in the submillimeter, we are observing these blazars at or near the peak of the synchrotron component (αS ∼ 0), but that Fermidetected sources have more negative energy spectral indices overall than Fermi-nondetected sources. In Figure 4, we see that while the majority of Fermi blazars are observed on the rising part of the synchrotron component (at lower energies than the peak), all of the objects have very steeply falling γ-ray energy spectral indexes, putting the γ-ray peak at lower energies than the observed Fermi band. Knowing that we are not observing the synchrotron and γ-ray components at analagous points in the spectrum may allow us to better understand the magnetic field in the parsec-scale jet region and the population of external photons that is being upscattered to γ-rays.\n\nIn Figure 5, the ratio between Lγ and νLν,1mm reflects the division between BL Lacs and FSRQs as well", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 4: The γ-ray index versus submillimeter index plane. The blazars fall more steeply in the γ-rays than in the submillimeter band, where most are, in fact, rising. This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around αS ∼ 0.\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ-ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. 
It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vis versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ-ray component than during its \"low state\". 3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit\n\neConf C091122\n\nlow luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lacs objects is that there has been a dramatic increase in γ-ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## **5. CONCLUSIONS**\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 5: Ratio of γ-ray luminosity to submillimeter luminosity in the 1mm band. 
The location of an object in this plot should be directly correlated with its blazar \"state\", with FSRQs occupying the upper right and BL Lacs the lower left. Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n- BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n- Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τrest < 500 days.\n- The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n- FSRQs exhibit higher ratios of γ-ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL Lacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ-ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τrest with physical timescales such as the synchrotron cooling timescale. 
These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## **Acknowledgments**\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "# arXiv:1001.0770v1 [astro-ph.HE] 5 Jan 2010\n\n# **VERITAS Observations of Blazars**\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E>100 GeV) γ-ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ-ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ-rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n### **1. Introduction**\n\nActive galactic nuclei are the most numerous class of identified VHE γ-ray sources. 
These objects emit non-thermal radiation across ∼20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ-ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ-rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH (∼2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. 
Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ-rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## **2. VERITAS**\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ-rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. The performance metrics of VERITAS include an energy threshold of ∼100 GeV, an energy resolution of ∼15%, an angular resolution of ∼0.1◦ , and a sensitivity yielding a 5σ detection of a 1% Crab Nebula flux object in <30 hours1 . VERITAS has an active maintenance program (e.g. 
frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.\n\n1A VERITAS telescope was relocated during Summer 2009, increasing the array's sensitivity by a factor ∼1.3.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼2% Crab flux.\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n- 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n- 1ES 1218+304: This HBL flared during VER-ITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. 
The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solidfooting [27, 28].\n- 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n- W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an external-Compton (EC) component in an SSC interpretation.\n- 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008[17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n- Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n- RGB J0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n- PKS 1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n### **8. Conclusions**\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than a 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ-rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. 
The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "the design service life with trouble-free operation. The following items describe the critical areas encountered during the operational use of the turbojet engine:\n\n(1) The limiting exhaust gas temperatures provide the most important restrictions to the operation of the turbojet engine. The turbine components are subject to centrifugal loads of rotation, impulse and reaction loads on the blades, and various vibratory loads which may be inherent with the design. When the turbine components are subject to this variety of stress in the presence of high temperature, two types of structural phenomena must be considered. When a part is subject to a certain stress at some high temperature, creep failure will take place after a period of time. Of course, an increase in temperature or stress will increase the rate at which creep damage is accumulated and reduce the time required to cause failure. Another problem results when a part is subjected to a repeated or cyclic stress. Fatigue failure will occur after a number of cycles of a varying stress. 
An increase in temperature or magnitude of cyclic stress will increase the rate of fatigue damage and reduce the number of cycles necessary to produce failure. It is important to note that both fatigue and creep damage are cumulative.\n\nA gross overstress or overtemperature of the turbine section will produce damage that is immediately apparent. However, the creep and fatigue damage accumulated through periods of less extreme overstress or overtemperature is more subtle. If the turbine is subject to repeated excessive temperatures, the greatly increased rate of creep and fatigue damage will produce failure early within the anticipated service life.\n\nGenerally, the operations which produce the highest exhaust gas temperatures are starting, acceleration, and maximum thrust at high altitude. The time spent at these temperatures must be limited arbitrarily to prevent excessive accumulation of creep and fatigue. Any time spent at temperatures in\n\nexcess of the operational limits for these conditions will increase the possibility of early failure of the turbine components.\n\nWhile the turbine components are the most critically stressed high temperature elements they are not the only items. The combustion chamber components may be critical at low altitude where high combustion chamber pressures exist. Also, the airframe structure and equipment adjacent to the engine may be subject to quite high temperatures and require provision to prevent damage by excess time at high temperature.\n\n(2) The compressor stall or surge has the possibility of producing damaging temperatures in the turbine and combustion chamber or unusual transient loads in the compressor. While the stall-surge phenomenon is possible with the centrifugal compressor, the more common occurrence is with the axial flow compressor. Figure 2.13 depicts the pressure distribution that may exist for steady state operation of the engine. 
In order to accelerate the engine to a greater speed, more fuel must be added to increase the turbine power above that required to operate the compressor.\n\nSuppose that the fuel flow is increased beyond the steady state requirement without a change in rotative speed. The increased combustion chamber pressure due to the greater fuel flow requires that the compressor discharge pressure be higher. For the instant before an engine speed change occurs, an increase in compressor discharge pressure will be accompanied by a decrease in compressor flow velocity. The equivalent effect is illustrated by the flow components onto the rotating compressor blade of figure 2.13. One component of velocity is due to rotation and this component remains unchanged for a given rotative velocity of the single blade. The axial flow velocity for steady state operation combines with rotational component to define a resultant velocity and direction. If the axial flow component is reduced, the resultant velocity and direction provide an increase in angle of", - "page_start": 142, - "page_end": 142, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0806.pdf", - "query": "Where is the Submillimeter Array?", - "target_page": 1, - "target_passage": "near the summit of Mauna Ke", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# **Submillimeter Variability and the Gamma-ray Connection in** *Fermi* **Blazars**\n\nA. Strom *Univ. of Arizona, AZ 85721, USA* A. Siemiginowska, M. Gurwell, B. Kelly *CfA, MA 02138, USA*\n\nWe present multi-epoch observations from the Submillimeter Array (SMA) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. 
We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August–October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## **1. INTRODUCTION**\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. Understanding if/how emission differs between blazar subclasses (i.e., BL Lac objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. 
This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ-ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ-ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submillimeter Array1 (SMA) at 1mm and 850µm, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ-ray indices and luminosities.\n\n## **2.** *SMA* **BLAZARS**\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850µm windows, achieving spatial resolution as fine as 0.25\" at 850µm. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List2 [5]. 
Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and\n\n1The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.\n\n2http://sma1.sma.hawaii.edu/callist/callist.html", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 5: Ratio of γ-ray luminosity to submillimeter luminosity in the 1mm band. The location of an object in this plot should be directly correlated with its blazar \"state\", with FSRQs occupying the upper right and BL Lacs the lower left. 
Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n- BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by high-peaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n- Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τrest < 500 days.\n- The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n- FSRQs exhibit higher ratios of γ-ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL Lacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ-ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison of τrest with physical timescales such as the synchrotron cooling timescale. These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## **Acknowledgments**\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. 
Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850µm observations, and the open triangles represent the 1mm observations.\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0.03 ≤ z ≤ 2.19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850µm band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## **2.1. Submillimeter Properties**\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. 
For these objects, submillimeter luminosities are calculated in the standard way:\n\n$$\\nu_{e}L_{\\nu_{e}}=4\\pi D_{\\mathrm{L}}^{2}{\\frac{\\nu_{\\mathrm{obs}}F_{\\mathrm{obs}}}{1+z}},\\qquad\\qquad(1)$$\n\nwhere DL is the luminosity distance, νobs is the frequency of the observed band, and Fobs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850µm), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no significant difference in the class distributions in either band; the \"tail\" to the left is populated by objects with errors larger than the intrinsic variability.\n\nflux (in erg cm−2 s−1 Hz−1) over the three-month period. We adopt a lambda cold dark matter cosmology with values of H0 = 71 km s−1 Mpc−1, ΩM = 0.27, and ΩΛ = 0.73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasi-simultaneous with the Fermi observations. To be consistent with the use of αγ, we define spectral energy index as νFν ∝ ν−αS and calculate αS from the average of the energy spectral indices over the corresponding three months. We only calculate αS for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850µm during this time frame.\n\n## **3. VARIABILITY ANALYSIS**\n\n## **3.1. Variability Index**\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\n$$V\\,=\\,\\frac{(F_{\\rm max}-\\sigma_{F_{\\rm max}})-(F_{\\rm min}+\\sigma_{F_{\\rm min}})}{(F_{\\rm max}-\\sigma_{F_{\\rm max}})+(F_{\\rm min}+\\sigma_{F_{\\rm min}})}\\tag{2}$$\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 4: The γ-ray index versus submillimeter index plane. 
The blazars fall more steeply in the γ-rays than in the submillimeter band, where most are, in fact, rising. This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around αS ∼ 0.\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ-ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vice versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ-ray component than during its \"low state\". 3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit\n\neConf C091122\n\nlow luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lac objects is that there has been a dramatic increase in γ-ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## **5. 
CONCLUSIONS**\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "However, you can adjust this configuration by setting the number of drives of different classes to zero. For information about Easy Tier, see Chapter 10, \"Advanced features for storage efficiency\" on page 403.\n\nIf you are adding storage to a pool with storage already assigned, the existing storage is taken into consideration. Some properties are inherited from existing arrays for a specific drive class. Systems aim to achieve an optimally balanced configuration, so it is not possible to add significantly different MDisks to one pool with the given GUI dialog.\n\nFor example, if the pool has an array MDisk made of 16 drives of DRAID6, you cannot add two drives of RAID1 to the same pool. Otherwise, an imbalanced storage pool is created.\n\nYou can still add any array of any configuration to an existing pool by using the CLI.\n\nWhen you are satisfied with the configuration presented, click **Assign**. The RAID arrays, or MDisks, are then created and initialized in the background. You can monitor the progress of the initialization process by selecting the corresponding task under **Running Tasks** in the upper-right corner of GUI screen, as shown in Figure 6-39. 
The array is available for I/O during this process.\n\n| | | 15- |\n| --- | --- | --- |\n| Suggested Tasks | | |\n| Enable Encryption Enable encryption for the system | | Not Now Run Task |\n| Running Tasks | | N View All Tasks |\n| 5 Volume Synchronizations | | View |\n| Array Initialization | | View |\n| 1 Recently Completed Task | Enternrise Disk | View DS3000 - 000000000 |\n\n*Figure 6-39 Array Initialization task*\n\nClick **View** in the Running tasks list to see the initialization progress and the time that remains, as shown in Figure 6-40. Notice that array creation depends on the type of drives it consists of. For example, an array of Flash drives is much quicker to initialize than NL-SAS drives.\n\n| Select a running task to see its | Progress: Array Initialization | | Estimated time remaining for task. |\n| --- | --- | --- | --- |\n| progress. | | | |\n| Array Initialization | Name | Progress | Time Remaining |\n| | Array mdisk10 | 56% | 00:28:26 |\n| 2 Volume Synchronizations | | | |\n| 2 Recently Completed Tasks | | | |\n\n*Figure 6-40 Array initialization task progress information*", - "page_start": 241, - "page_end": 241, - "source_file": "sg247938.pdf" - }, - { - "text": "| | | Member Drives for MDisk Distributed_array | | | × |\n| --- | --- | --- | --- | --- | --- |\n| 11 Actions ▼ | → | | | Default V | ర్ Contains V Filter |\n| Drive ID ← | Capacity | Use | Status | Enclosure ID | IIi Slot ID |\n| 7 | 558.41 GiB | Member | V Online | 1 | Ja |\n| රි | 558.41 GiB | Member | V Online | 1 | 8 |\n| ਰੇ | 558.41 GiB | Member | ✓ Online | 1 | 21 |\n| 10 | 558.41 GiB | Member | ✓ Online | 1 | JA |\n| JJ | 558.41 GiB | Member | ✓ Online | 1 | 17 |\n| 12 | 558.41 GIB | Member | ✓ Online | ਜ | 23 |\n| J3 | 558.41 GIB | Member | V Online | 1 | 16 |\n| Showing 7 drives Selecting 0 drives | | | | | |\n| | | | | | Close |\n\n*Figure 6-47 List of drives in an array*\n\nYou can use the CLI command **lsarraymember** to get the same information with the CLI. 
Provide an array name or ID as the parameter to filter output by the array. If run without arguments, the command lists all members of all configured arrays.\n\n# **Properties**\n\nThis section shows all available array MDisk parameters: its state, capacity, RAID level, and others.\n\nUse the CLI command **lsarray** to get a list of all configured arrays. Use **lsarray** with array name or ID as the parameter to get extended information about the selected one, as shown in Example 6-21.\n\n*Example 6-21 lsarray output (truncated)*\n\n```\nIBM_Storwize:ITSOV7K:superuser>lsarray\nmdisk_id mdisk_name status mdisk_grp_id mdisk_grp_name capacity \n0 mdisk0 online 0 mdiskgrp0 1.3TB\n16 Distributed_array online 1 mdiskgrp1 2.2TB \nIBM_Storwize:ITSOV7K:superuser>lsarray 16\nmdisk_id 16\nmdisk_name Distributed_array\nstatus online\nmode array\nmdisk_grp_id 1\nmdisk_grp_name mdiskgrp1\ncapacity 2.2TB\n<...>\n```\n# **6.3 Working with external controllers and MDisks**\n\nIn IBM Spectrum Virtualize terminology, *Controllers* are external storage systems that provide resources to be used as MDisks. Storwize V7000 supports external storage controllers that are attached through iSCSI and through Fibre Channel.", - "page_start": 248, - "page_end": 248, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 5-14 Relationship between OAM and Content Manager OnDemand\n\n# **Object naming conventions**\n\nThe *object name* identifies the object within a collection. The object name is unique within a collection and it is provided by the Content Manager OnDemand application. Currently, no installation exits allow any customization of these names. The object name is composed of the application group name and the load identifier within the application group portion of the load ID. The load identifier within the application group is composed of a numeric sequence number followed by a character string, such as FAAA. 
This string is then converted into two qualifiers of the object name:\n\n- -L indicates that the object contains document data.\n- -R indicates that the object contains resource data.\n\nThe application group name is added, and an object name looks like the following syntax:\n\nA BDA.L1.FAAA\n\nThe maximum size of an object is specified through the Content Manager OnDemand Administrator Client when you define an application group. The default value is 10 MB. Currently, the maximum size for an OAM object is 256 MB. The Content Manager OnDemand administrator must be careful not to specify a value that exceeds this limit.\n\n**Important:** In the current implementation, Content Manager OnDemand is not aware that an object was deleted by OAM based on the management class criteria that are set by the Storage Management component. A user can search for data that is no longer available. No synchronization occurs between OAM object expiration and index expiration. Ensure that you define the index expiration correctly when you define the application group.\n\nFigure 5-15 on page 117 shows the window in which you can set up the index expiration for Storage Management when you define or update an application group.", - "page_start": 139, - "page_end": 139, - "source_file": "sg246915.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n#### **Model AY11228**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: 60mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Right Diopter Adjustment Range: +4 to -6 dopters\n\n6. Illumination: Input Voltage: 110V AC or 220V Output: Oblique illumination: 12V 10W Halogen Lamp\n\n#### **Model AY11232**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: >50mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Diopter Adjustment Range: +/- 5 diopters\n- 6. 
Illumination:\n\n Input Voltage: 110V AC or 220V Output: Oblique Illumination: 12V 10W Halogen Lamp Transmitted Illumination: 12V 10W Halogen Lamp\n\n### **Optical Specifications - Model AY11228**\n\n| Total | Objective | Eyepiece Magnification | Working Distance |\n| --- | --- | --- | --- |\n| Magnification | Magnification | & Field Diameter (mm) | |\n| 20x, 40x | 2x, 4x | Wide Field 10x, 20mm | 90mm |\n\n### **Optical Specifications - Model AY11232**\n\n| Objective Zoom Scale | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Accessory Large Objective | | - | 0.5x | 0.75x | 1.5x | 2x |\n| Working Distance (mm) | | 95 | 156 | 102 | 44 | 30 |\n| WF10x/20mm | Total Magnification | 7x- 45x | 3.5x- 22.5x | 5.3x- 33.8x | 10.5x- 67.5x | 14x- 90x |\n| Field of View Objective Dia. (mm) | | 28.6- | 57.2- | 38.1- | 19.0- | 14.3- |\n| | | 4.4 | 8.8 | 5.9 | 2.9 | 2.2 |\n| WF12.5x/18mm | Total Magnification | 8.8x 56x | 4.4x 28x | 6.6x 42x | 13.2x 84x | 17.6x 112x |\n| Field of View Objective Dia. (mm) | | 25.7- | 51.4- | 34.3- | 17.1- | 12.9- |\n| | | 4.0 | 8 | 5.3 | 2.7 | 2.0 |\n| WF15x/16mm | Total Magnification | 10.5x- 67.5x | 5.3x- 33.8x | 7.9x- 58.6x | 15.7x- 101x | 21x- 135x |\n| Field of View Objective Dia. (mm) | | 22.9- | 45.8- | 30.5- | 15.3- | 11.5- |\n| | | 3.6 | 7.2 | 4.8 | 24 | 1.8 |\n| WF20x/12mm | Total Magnification | 14x 90x | 7x 45x | 10.5x 67.5x | 21x 135x | 28x 180x |\n| Field of View Objective Dia. (mm) | | 17.0- 2.7 | 34.0- 5.4 | 22.7- 3.6 | 11.3- 1.8 | 8.5- 1.4 |\n| WF25x/9mm | Total Magnification | 17.5x 112.5x | 8.8x 56.3x | 13x 84.4x | 26.3x 169x | 35x 225x |\n| Field of View Objective Dia. (mm) | | 12.9- | 25.8- | 17.2- | 8.6- | 6.5- |\n| | | 2.0 | 4.0 | 2.7 | 1.3 | 1.0 |\n\n#### **Model AY11228**\n\n#### **Model AY11232**\n\n| Name | Qty | |\n| --- | --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 | |\n| 10x Wide Field Eyepiece | 2 | |\n| Eyeshade | 2 | Eyeshade |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. 
(spare) | |\n| Fuse 2A (spare) | 1 | |\n| Lens Cleaning Tissue | 1 | |\n| Dust Cover | 1 | |\n| Black/White Working Stage | 1 | |\n| Specifications | 1 | |\n| Packing Slip | 1 | |\n| Quality Inspection Certificate | 1 | |\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n### **OPERATION**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Tighten the knob on the stand to prevent the elevator from sliding down.\n- 3. Fix the binocular body on the stand with the tightening screw.\n- 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n#### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **Model AY11228 Model AY11232**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n#### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. 
The distance between the observer's pupils is the interpupillary distance.\n- 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.", - "page_start": 4, - "page_end": 4, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. Olejnik,1, 2 P. Wadley,3 J. Haigh,3 K. W. Edmonds,3 R. P. Campion,3 A. W. Rushforth,3 B. L. Gallagher,3\n\nC. T. Foxon,3 T. Jungwirth,2, 3 J. Wunderlich,1, 2 S. S. Dhesi,4 S. Cavill,4 G. van der Laan,4 and E. Arenholz5\n\n1Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\nInstitute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic 3School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom\n\n4Diamond Light Source, Harwell Science and Innovation Campus,\n\n5Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n(Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\n2\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. 
This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p-type non-magnetic spacers2 . However, the Curie temperature TC of (Ga,Mn)As is currently limited to 185 K in single layers3 , and is typically much lower for layers embedded within a heterostructure2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively4,5. Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature7 . Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature8,9. Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition, which may further disrupt the interface order. The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples7 . 
Demonstration of coupling between the bulk of the layers, i.e., an exchange bias effect, would provide direct evidence of the interface magnetic order. Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.\n\nHere, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. As with previous studies of FM metal/FM semiconductor bilayers4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures10,11) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref.7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260◦C, using previously established methods3,8. A low Mn concentration of x ≈ 0.03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼0 ◦C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. 
Mn and Fe L2,3 x-ray absorption and XMCD\n\nDidcot, Oxfordshire, OX11 0DE, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n#### **Model AY11230**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: 60mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Right Diopter Adjustment Range: +4 to -6 dopters\n- 6. Illumination: Input Voltage: 110V AC or 220V Output: Oblique illumination: 12V 10W Halogen Lamp\n\n### **Model AY11234**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: >50mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Diopter Adjustment Range: +/- 5 diopters\n- 6. Illumination:\n\n Input Voltage: 110V AC or 220V Output: Oblique Illumination: 12V 10W Halogen Lamp Transmitted Illumination: 12V 10W Halogen Lamp\n\n### **Optical Specifications - Model AY11230**\n\n| Total | Objective | Eyepiece Magnification | Working Distance |\n| --- | --- | --- | --- |\n| Magnification | Magnification | & Field Diameter (mm) | |\n| 20x, 40x | 2x, 4x | Wide Field 10x, 20mm | 90mm |\n\n### **Optical Specifications - Model AY11234**\n\n| Objective Zoom Scale | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Accessory Large Objective | | - | 0.5x | 0.75x | 1.5x | 2x |\n| Working Distance (mm) | | 95 | 156 | 102 | 44 | 30 |\n| WF10x/20mm | Total Magnification | 7x 45x | 3.5x 22.5x | 5.3x 33.8x | 10.5x 67.5x | 14x 90x |\n| Field of View Objective Dia. (mm) | | 28.6- 4.4 | 57.2- 8.8 | 38.1- 5.9 | 19.0- 2.9 | 14.3- 2.2 |\n| WF12.5x/18mm | Total Magnification | 8.8x 56x | 4.4x 28x | 6.6x 42x | 13.2x 84x | 17.6x 112x |\n| Field of View Objective Dia. (mm) | | 25.7- | 51.4- | 34.3- | 17.1- | 12.9- |\n| | | 4.0 | 8 | 5.3 | 2.7 | 2.0 |\n| WF15x/16mm | Total Magnification | 10.5x- 67.5x | 5.3x- 33.8x | 7.9x- 58.6x | 15.7x- 101x | 21x- 135x |\n| Field of View Objective Dia. 
(mm) | | 22.9- | 45.8- | 30.5- | 15.3- | 11.5- |\n| | | 3.6 | 7.2 | 4.8 | 24 | 1.8 |\n| WF20x/12mm | Total Magnification | 14x 90x | 7x 45x | 10.5x 67.5x | 21x 135x | 28x 180x |\n| Field of View Objective Dia. (mm) | | 17.0- 2.7 | 34.0- 5.4 | 22.7- 3.6 | 11.3- 1.8 | 8.5- 1.4 |\n| WF25x/9mm | Total Magnification | 17.5x- 112.5x | 8.8x- 56.3x | 13x- 84.4x | 26.3x- 169x | 35x- 225x |\n| Field of View Objective Dia. (mm) | | 12.9- | 25.8- | 17.2- | 8.6- | 6.5- |\n| | | 2.0 | 4.0 | 2.7 | 1.3 | 1.0 |\n\n### **PARTS LIST**\n\n#### **Model AY11230**\n\n#### **Model AY11234**\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Black/White Working Stage | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n### **OPERATION**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Tighten the knob on the stand to prevent the elevator from sliding down.\n- 3. Fix the binocular body on the stand with the tightening screw.\n- 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. 
The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **Model AY11230 Model AY11234**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- **12** 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.", - "page_start": 6, - "page_end": 6, - "source_file": "Microscope Manual.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0806.pdf", - "query": "How many blazars were observed by the SMA in either band during the three months August-October 2008?", - "target_page": 2, - "target_passage": "only 129 of the SMA blazars", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850µm observations, and the open triangles represent the 1mm observations.\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. 
J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0.03 ≤ z ≤ 2.19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## **2.1. Submillimeter Properties**\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. For these objects, submillimeter luminosities are calculated in the standard way:\n\n$$\\nu_{e}L_{\\nu_{e}}=4\\pi D_{\\mathrm{L}}^{2}{\\frac{\\nu_{\\mathrm{obs}}F_{\\mathrm{obs}}}{1+z}},\\qquad\\qquad(1)$$\n\nwhere DL is the luminosity distance, νobs is the frequency of the observed band, and Fobs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850µm), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no signicant difference in the class distributions in either band; the \"tail\" to the left is populated by objects with errors larger than the intrinsic variability.\n\nflux (in erg cm−2 s −1 Hz−1 ) over the three month period. We adopt a lambda cold dark matter cosmology with values of H0 = 71 km s−1 Mpc−1 , ΩM = 0.27, and Λ = 0.73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasisimultaneous with the Fermi observations. 
To be consistent with the use of αγ, we define spectral energy index as νFν = ν −αS and calculate αS from the average of the energy spectral indices over the corresponding three months. We only calculate αS for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850µm during this time frame.\n\n## **3. VARIABILITY ANALYSIS**\n\n## **3.1. Variability Index**\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\n$$V\\,=\\,\\frac{(F_{\\rm max}-\\sigma_{F_{\\rm max}})-(F_{\\rm min}+\\sigma_{F_{\\rm min}})}{(F_{\\rm max}-\\sigma_{F_{\\rm max}})+(F_{\\rm min}+\\sigma_{F_{\\rm min}})}\\tag{2}$$\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "### **3. VERITAS Blazar KSP**\n\nVERITAS observes for ∼750 h and ∼250 h each year during periods of astronomical darkness and partial moonlight, respectively. The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- A VHE blazar discovery program (∼200 h / yr): Each year ∼10 targets are selected to receive ∼10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- A target-of-opportunity (ToO) observation program (∼50 h / yr): VERITAS blazar observations can be triggered by either a VERI-TAS blazar discovery, a VHE flaring alert (>2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). 
Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- Multi-wavelength (MWL) studies of VHE blazars (∼50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n# **4. Blazar Discovery Program**\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ-rays. The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles (−8 ◦ < δ < 72◦ ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0.3. To further the study of the\n\nEBL a few objects having a large (z > 0.3) are also included in the target list. 
The target list includes:\n\n- All nearby (z < 0.3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- The X-ray brightest HBL (z < 0.3) in the recent Sedentary [8] and ROXA [9] surveys.\n- Four distant (z > 0.3) BL Lac objects recommended by [5, 10].\n- Several FSRQ recommended as potential VHE emitters in [6, 11].\n- All nearby (z < 0.3) blazars detected by EGRET [12].\n- All nearby (z < 0.3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- All sources (|b| > 10◦ ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ-ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERI-TAS blazar discovery program.\n\n### **5. VERITAS AGN Detections**\n\nVERITAS has detected VHE γ-ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n### **5.1. Recent VERITAS Blazar Discoveries**\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES 0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHE emission from 3C 66A was discovered by VER-ITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. 
The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (ΓVHE ∼ 4.1). RGB J0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "# **Submillimeter Variability and the Gamma-ray Connection in** *Fermi* **Blazars**\n\nA. Strom *Univ. of Arizona, AZ 85721, USA* A. Siemiginowska, M. Gurwell, B. Kelly *CfA, MA 02138, USA*\n\nWe present multi-epoch observations from the Submillimeter Array (SMA) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August–October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## **1. INTRODUCTION**\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. 
Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ-ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ-ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submillimeter Array1 (SMA) at 1mm and 850µm, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ-ray indices and luminosities.\n\n## **2.** *SMA* **BLAZARS**\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. 
The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850µm windows, achieving spatial resolution as fine as 0.25\" at 850µm. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and\n\n1The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.\n\n2http://sma1.sma.hawaii.edu/callist/callist.html", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "tion of correlated VHE and X-ray flux variability, as well as correlated spectral hardening in both the VHE and X-ray bands. The VHE MWL observations were performed in both \"quiescent\" and flaring states for some of the observed blazars. For the observed HBL objects, the SEDs can be well described by a simple SSC model in both high and low states. However, an additional external Compton component is necessary to adequately fit the SEDs of the IBL objects.\n\nThe Fermi-LAT is already having a significant impact on the blazar KSP. In future seasons, the VER-ITAS blazar discovery program will focus its discovery program on hard-spectrum blazars detected by Fermi-LAT, and will likely have a greater focus on high-risk/high-reward objects at larger redshifts (0.3 < z < 0.7). 
In addition, the number of VHE blazars studied in pre-planned MWL campaigns will increase as data from the Fermi-LAT will be publicly available. In particular, the extensive pre-planned MWL campaigns will focus on objects that are noteworthy for the impact their data may have on understanding the EBL. The simultaneous observations of blazars by VERITAS and Fermi-LAT will completely resolve the higher-energy SED peak, often for the first time, enabling unprecedented constraints on the underlying blazar phenomena to be derived.\n\n### **Acknowledgments**\n\nThis research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. We acknowledge the excellent work of the technical support staff at the FLWO and the collaborating institutions in the construction and operation of the instrument.\n\n### **References**\n\n- [1] F. Aharonian et al. 2007, ApJ, 664, L71\n- [2] F. Aharonian et al. 2006, Nature, 440, 1018\n- [3] F. Aharonian et al. 2007, A&A, 475, L9\n- [4] J. Holder, et al. 2008, AIPC, 1085, 657\n- [5] L. Costamante & G. Ghisellini 2002, A&A, 384, 56\n- [6] E.S. Perlman 2000, AIPC, 515, 53\n- [7] F.W. Stecker et al. 1996, ApJ, 473, L75\n- [8] P. Giommi et al. 2005, A&A, 434, 385\n- [9] S. Turriziani et al. 2007, A&A, 472, 699\n- [10] L. Costamante 2006, arXiv:0612709\n- [11] P. Padovani et al. 2002, ApJ, 581, 895\n- [12] R. Muhkerjee et al. 2001, AIPC, 558, 324\n- [13] A.A. Abdo et al. 2009, ApJ, 700, 597\n- [14] V.A. Acciari et al. 2008, ApJ, 684, L73\n- [15] V.A. Acciari et al. 2009, ApJ, 707, 612\n- [16] V.A. Acciari et al. 2009, ApJ, 690, L126\n- [17] V.A. Acciari et al. 2009, ApJ, 693, L104\n- [18] L.C. Reyes 2009, arXiv:0907.5175\n- [19] R.A. Ong 2009, ATel, 1941\n- [20] R.A. Ong et al. 2009, ATel, 2272\n- [21] V.A. Acciari et al. 2009, ApJ, 708, L100\n- [22] R.A. Ong et al. 2009, ATel, 2301\n- [23] R.A. Ong et al. 
2009, ATel, 2260\n- [24] R.A. Ong et al. 2009, ATel, 2309\n- [25] W. Benbow 2009, arXiv:0908.1412\n- [26] V.A. Acciari et al. 2009, ApJ, submitted\n- [27] V.A. Acciari et al. 2009, ApJ, 695, 1370\n- [28] V.A. Acciari et al. 2009, ApJ, in press\n- [29] J. Grube 2009, arXiv:0907.4862", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0770.pdf" - }, - { - "text": "# arXiv:1001.0770v1 [astro-ph.HE] 5 Jan 2010\n\n# **VERITAS Observations of Blazars**\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E>100 GeV) γ-ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ-ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ-rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n### **1. Introduction**\n\nActive galactic nuclei are the most numerous class of identified VHE γ-ray sources. These objects emit non-thermal radiation across ∼20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ-ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. 
Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ-rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH (∼2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ-rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. 
Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## **2. VERITAS**\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ-rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. The performance metrics of VERITAS include an energy threshold of ∼100 GeV, an energy resolution of ∼15%, an angular resolution of ∼0.1◦ , and a sensitivity yielding a 5σ detection of a 1% Crab Nebula flux object in <30 hours1 . VERITAS has an active maintenance program (e.g. frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.\n\n1A VERITAS telescope was relocated during Summer 2009, increasing the array's sensitivity by a factor ∼1.3.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 4: The γ-ray index versus submillimeter index plane. The blazars fall more steeply in the γ-rays than in the submillimeter band, where most are, in fact, rising. This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around αS ∼ 0.\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ-ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. 
These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vis versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ-ray component than during its \"low state\". 3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit\n\neConf C091122\n\nlow luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lacs objects is that there has been a dramatic increase in γ-ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## **5. CONCLUSIONS**\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. 
We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 5: Ratio of γ-ray luminosity to submillimeter luminosity in the 1mm band. The location of an object in this plot should be directly correlated with its blazar \"state\", with FSRQs occupying the upper right and BL Lacs the lower left. Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n- BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n- Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τrest < 500 days.\n- The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n- FSRQs exhibit higher ratios of γ-ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL Lacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ-ray to submillimeter luminosity as functions of time. 
The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τrest with physical timescales such as the synchrotron cooling timescale. These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## **Acknowledgments**\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "| Object | | Class Redshift |\n| --- | --- | --- |\n| M 87 | FR I | 0.004 |\n| Mkn 421 | HBL | 0.030 |\n| Mkn 501 | HBL | 0.034 |\n| 1ES 2344+514 | HBL | 0.044 |\n| 1ES 1959+650 | HBL | 0.047 |\n| W Comae† | IBL | 0.102 |\n| RGB J0710+591† | HBL | 0.125 |\n| H 1426+428 | HBL | 0.129 |\n| 1ES 0806+524† | HBL | 0.138 |\n| 1ES 0229+200 | HBL | 0.139 |\n| 1ES 1218+304 | HBL | 0.182 |\n| RBS 0413† | HBL | 0.190 |\n| 1ES 0502+675† | HBL | 0.341 |\n| 3C 66A† | IBL | 0.444? |\n| PKS 1424+240† | IBL | ? |\n| VER J0521+211† | ? | ? |\n\nTable I VERITAS AGN Detections. The only non-blazar object is the radio galaxy M 87. The blazars discovered at VHE by VERITAS are marked with a dagger.\n\n(∼5.5σ; 3% Crab flux above 300 GeV; ΓVHE ∼ 2.7) during VERITAS observations from December 2008 to March 2009. The initial announcement of the VHE discovery [19] led to its discovery above 1 GeV in the Fermi-LAT data using a special analysis. RBS 0413, a relatively distant HBL (z=0.19), was observed for 16 h good-quality live time in 2008-092 . 
These data resulted in the discovery of VHE gamma-rays (>270γ, ∼6σ) at a flux (>200 GeV) of ∼2% of the Crab Nebula flux. The discovery [20] was announced simultaneously with the LAT MeV-GeV detection. The VHE and other MWL observations, including Fermi-LAT data, for each of these three sources will be the subject of a joint publication involving both the VERI-TAS and LAT collaborations.\n\n### **5.2. Discoveries Motivated by Fermi-LAT**\n\nThe successful VHE discovery observations by VERITAS of three blazars was motivated primarily by results from the first year of LAT data taking. In particular, the VHE detections of PKS 1424+240 [21] and 1ES 0502+675 [22] were the result of VERITAS observations triggered by the inclusion of these objects in the Fermi-LAT Bright AGN List [13]. The former is only the third IBL known to emit VHE gammarays, and the latter is the most distant BL Lac object (z = 0.341) detected in the VHE band. In addition, VER J0521+211, likely associated with the radio-loud AGN RGB J0521.8+2112, was detected by VERTAS in ∼4 h of observations in October 2009 [23]. These observations were motivated by its identification as a >30 GeV γ-ray source in the public Fermi-LAT data. Its VHE flux is 5% of the Crab Nebula flux, placing it among the brightest VHE blazars detected in recent years. VERITAS later observed even brighter VHE flaring from VER J0521+211 in November 2009 [24], leading to deeper VHE observations.\n\n### **6. Blazars Upper Limits**\n\nMore than 50 VHE blazar candidates were observed by VERITAS between September 2007 and June 2009. The total exposure on the 49 non-detected candidates is ∼305 h live time (average of 6.2 h per candidate). Approximately 55% of the total exposure is split amongst the 27 observed HBL. The remainder is divided amongst the 8 IBL (26%), 5 LBL (6%), and 9 FSRQ (13%). There are no clear indications of significant VHE γ-ray emission from any of these 49 blazars [25]. 
However, the observed significance distribution is clearly skewed towards positive values (see Figure 1). A stacking analysis performed on the entire data sample shows an overall excess of 430 γ-rays, corresponding to a statistical significance of 4.8σ, observed from the directions of the candidate blazars. The IBL and HBL targets make up 96% of the observed excess. Observations of these objects also comprise ∼80% of the total exposure. An identical stacked analysis of all the extragalactic non-blazar targets observed, but not clearly detected (>5σ), by VERITAS does not show a significant excess (∼120 h exposure). The stacked excess persists using alternate methods for estimating the background at each blazar location, and with different event selection criteria (e.g. soft cuts optimized for sources with ΓVHE > 4). The distribution of VHE flux upper limits is shown in Figure 1. These 49 VHE flux upper limits are generally the most-constraining ever reported for these objects.\n\n# **7. Multi-wavelength Studies of VHE Blazars**\n\nDuring the first three seasons of VERITAS observations, pre-planned extensive MWL campaigns were organized for three blazars 1ES 2344+514 (2007-08), 1ES 1218+304 (2008-09) and 1ES 0229+200 (2009- 10 - ongoing). In addition, numerous ToO MWLobservation campaigns were performed. These include campaigns for every blazar/AGN discovered by VER-ITAS, and all include Swift (XRT and UVOT) data. All MWL campaigns on the VHE blazars discovered\n\n2RBS 0413 was observed further by VERITAS in Fall 2009.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. 
(Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼2% Crab flux.\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n- 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n- 1ES 1218+304: This HBL flared during VER-ITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solidfooting [27, 28].\n- 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n- W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an external-Compton (EC) component in an SSC interpretation.\n- 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008[17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n- Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n- RGB J0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. 
The inclusion of an external Compton component does not improve the fit.\n- PKS 1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n### **8. Conclusions**\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than a 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ-rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "## **References**\n\n- [1] M. Sikora and G. Madejski, in American Institute of Physics Conference Series, edited by F. A. Aharonian and H. J. V¨olk (2001), vol. 558 of American Institute of Physics Conference Series, pp. 275–288.\n- [2] M. 
Sikora, in Blazar Demographics and Physics, edited by P. Padovani and C. M. Urry (2001), vol. 227 of Astronomical Society of the Pacific Conference Series, pp. 95–104.\n- [3] J. A. Stevens, S. J. Litchfield, E. I. Robson, D. H. Hughes, W. K. Gear, H. Terasranta, E. Valtaoja, and M. Tornikoski, ApJ 437, 91 (1994).\n- [4] P. T. P. Ho, J. M. Moran, and K. Y. Lo, ApJl 616, L1 (2004).\n- [5] M. A. Gurwell, A. B. Peck, S. R. Hostler, M. R. Darrah, and C. A. Katz, in From Z-Machines to ALMA: (Sub)Millimeter Spectroscopy of Galaxies, edited by A. J. Baker, J. Glenn, A. I. Harris,\n\nJ. G. Mangum, and M. S. Yun (2007), vol. 375 of Astronomical Society of the Pacific Conference Series, p. 234.\n\n- [6] S. E. Healey, R. W. Romani, G. Cotter, P. F. Michelson, E. F. Schlafly, A. C. S. Readhead, P. Giommi, S. Chaty, I. A. Grenier, and L. C. Weintraub, ApJS 175, 97 (2008).\n- [7] A. A. Abdo, M. Ackermann, M. Ajello, W. B. Atwood, M. Axelsson, L. Baldini, J. Ballet, G. Barbiellini, D. Bastieri, B. M. Baughman, et al., ApJ 700, 597 (2009).\n- [8] T. Hovatta, E. Nieppola, M. Tornikoski, E. Valtaoja, M. F. Aller, and H. D. Aller, A&A 485, 51 (2008).\n- [9] B. C. Kelly, J. Bechtold, and A. Siemiginowska, ApJ 698, 895 (2009).\n- [10] M. Sikora, R. Moderski, and G. M. Madejski, ApJ 675, 71 (2008).", - "page_start": 5, - "page_end": 5, - "source_file": "1001.0806.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_MRM_2000.pdf", - "query": "How big is the Mermaid fleet?", - "target_page": 12, - "target_passage": "Mermaid operates a fleet of fifteen (15) tugs, workboats and barges, undertaking all forms of offshore activity", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## MERMAID FLEET", - "page_start": 25, - "page_end": 25, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "During 2000 Mermaid Marine formed a new business unit Mermaid Labour and Management Limited. 
The focus of this unit will be labour supply and industrial relations management to the marine, offshore construction industry and onshore resources projects in the NW of Australia. The Directors and Management of the new entity are very experienced, well known and regarded by the industry in general. The company has high expectations for Mermaid Labour and Management Limited. MERMAID LABOUR AND MANAGEMENT LIMITED\n\n#### SAFETY\n\nMermaid remains dedicated to ensuring a safe environment in all areas where we operate or have responsibility.\n\nIn April 2000, following the regular six monthly Quality Assurance audit, the Company's accreditation under AS/NZS/ISO 9002 was reconfirmed. Mermaid's quality assurance and compliance team continues with a continuous day to day effort to improve our health, safety and environmental performance. Stringent charterer requirements, which are a pre requisite of increased vessel usage, must be met to the letter and are the subject of regular and demanding audits. Although time consuming and expensive, we are grateful to certain of the large producers, who while demanding the highest levels of compliance, have also been prepared to give their time, sharing their safety expertise with us and in that way assisting in the very major advances our company has made in this all important area.\n\nAt the time of writing this report, Mermaid had accumulated 348 days without a Lost Time Injury. A fine achievement and a continuing record.", - "page_start": 23, - "page_end": 23, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "*The foreshore of King Bay will be redeveloped as part of the Mermaid Marine Dampier Base Expansion works.*\n\nleased facilities to seven third party vessels and protection for three of our own vessels using this technique by the cyclone season in 2001.\n\nAs more vessels seek protection, additional breakwaters can be constructed and sea room dredged. 
Each mooring involves a pattern of pin piles drilled into the granite sea floor with four vessel specific mooring lines secured to special attachment points on the vessel.\n\nMany smaller vessels including Mermaid's will be lifted from the water and tied down on purpose built cradles for cyclones.\n\n#### **F. ONSHORE LAND RECLAMATION.**\n\nLike our neighbours, much of the Mermaid site is below the prescribed storm surge level, or needs some degree of earthworks to maximize its value. Currently 8 of the 17 ha of the area is suitable for development in its present state.\n\nThe spoil produced from dredging will allow Mermaid to achieve full utilization of the site at a fraction of the cost of importing fill from elsewhere.\n\nConsiderable effort has gone into anticipating the future direction of the Base. Planning services such as traffic flows, land allocation and security, as well as fulfilling the many and complex regulatory requirements related to health, safety, quarantine, environmental management, dust, dangerous goods and hazchem materials have been the subject of considerable study prior to this implementation stage. 1 3\n\n> MERMAID MARINE AUSTRALIA LIMITED", - "page_start": 16, - "page_end": 16, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "#### OVERVIEW\n\nTrading for the period commencing 1 July 1999 to 30 June 2000 for Mermaid Marine Australia Ltd (\"Company\") and its controlled entities, experienced a 43% turnover reduction from last year. The result was almost entirely due to a heavy fall in oil prices, which reached their low of US$10 in February 1999, leading to the lowest level of offshore activity for many years. In September 1999 Mermaid exercised its option to acquire the utility vessel \"Mermaid Achiever\" for $3,250,000. 
Previously the Achiever operated under a bare boat charter.\n\nIn February 2000 Mermaid received approval in principle from the Western Australian Minister for the Environment for the development of a supply and engineering base at Dampier (Dampier Base). Since that time a detailed environmental management system has been produced for final approval and as a guide to daily environmental management and compliance. Refinements to the design have proceeded, together with the preparation of bid packages and negotiations with Banks for project finance.\n\nSubsequent to years end, the subscription of a further $5 million from Mr Mark Bradley and Clough Engineering will see an extremely robust balance sheet, with cash on hand approaching $10 million. As construction commences at Dampier, a level of project finance will be arranged providing a comfortable mix of debt and equity and allowing the retention of a significant cash balance.\n\nThe year saw considerable progress with Base activities at Dampier, Broome and Darwin. They are dealt with in detail under following headings.\n\nMermaid recorded an after-tax loss for the Period of $207,957. Compared with an after-tax profit for the previous period of $2,454,919. Revenue for the Period was $15,124,774, a decrease of 43% over the previous period. Fixed cost reductions enabled the Company to ride out the market reversal with a minimal loss and positive operating cash before capex of $1.6m. This result, achieved against a major drop in turnover, was possible through a vigorous attack on overheads, which included more beneficial ownership costs, insurance savings, management salary savings, including voluntary sacrifice from certain senior executives in recognition of the tighter conditions. In all the changes contributed approximately $1.5million to the bottom line.\n\nBare boat charters, although useful for the busy times encountered in 1998 exposed the Company to a high level of fixed costs. 
The vessels were valuable earners and the transfer of the Mermaid Achiever, Mermaid Eagle and Mermaid Reunion to Company ownership has proved to be the right decision for all market conditions. Although there have been no contracts yet let for work of any significance by producers on the North West Shelf, underlying day to day activity has returned. Expressions of interest for major project work have been issued and as an indication of better trading conditions, an unaudited profit of $496,721 has been recorded for the two months to 31st August 2000. The trend has continued in September.\n\n#### FINANCIAL", - "page_start": 10, - "page_end": 10, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## CHAIRMAN ' S REPORT\n\nDirector of the Clough Group and a highly experienced and talented executive. Richard has appointed an alternate director, Mr Chris Sutherland, a senior Clough Executive, with engineering qualifications and associated business skills to assist him.\n\nCaptain Jim Carver, Mermaid's founder continues to play a significant role in Mermaid's operations, paying particular attention to our business at sea. Under 20 years of Jim's leadership, Mermaid developed an enviable reputation as a \"can do\" company, and in our drive for new engineering expertise and professionalism, we have no intention of allowing that attitude to be lost.\n\nLast year we identified Broome as our next strategic position. No oil and gas work had been supported out of Broome for seventeen years and with the valuable cooperation and assistance of the Broome Port Authority, we secured Inpex, the large Japanese resource company as our first client. The base was then established early this year.\n\nA new focus has developed in the Browse Basin and it is pleasing to report that after only seven months operation, our Base is profitable, housing Inpex, BHP, Woodside and Sedco in support of their current drilling programs. 
All the holes drilled from the Broome Base have been designated as commercial finds by the explorers and the very major increase in the reserves at Brecknock, Woodside's permit 500 kilometres north of Broome creates optimism for future production based in the Broome area.\n\nDarwin was next on our list, enabling involvement in Timor Sea oil and gas activity. The Bayu Undan project operated by Phillips, is well advanced and will impact Darwin's offshore activity quite soon. Pursuing the formula for a strategic sea/land interface, we reached agreement with Perkins Shipping in Darwin, to set up an office at their Frances Drive facility. Perkins Shipping is synonymous with Darwin's history. Set up by V.B. Perkins in the late 40's, it has grown to significant size, operating its ships across the top of Australia and into South East Asia. There are many synergies which Mermaid shares with Perkins and we look forward to developing our Darwin business in close association with that fine old Company.\n\nOur ambitions for the support of the oil and gas industry now go beyond bases and vessels. Early in the current financial year, Mermaid acquired 50% of the OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Mermaid shares. OIS MOC owns the highly successful labour hire business operated by Kevin Ponga and Rick De Franck. Kevin Ponga is now General Manager of Mermaid Labour & Management Pty Limited and Mr De Franck becomes a Director. With their reputation and talent added to Mermaid's experienced team, this labour hire company has become a significant force and can be expected to be in the final when major labour hire contracts are let.", - "page_start": 8, - "page_end": 8, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "#### **G. SLIPWAY.**\n\nAustralia, and particularly the north west is impoverished in terms of infrastructure to service our marine industries. Some of this has been due to a historical link with our recent industrial past. 
This is now behind us, and Australia has now become a centre of excellence with respect to both new building and ship repair, particularly for high tech and specialty vessels.\n\nThe Mermaid slipway will be the third such facility on the western half of the continent , with others located at Fremantle and Darwin.\n\nThe slipway will be a repair only facility, no new building is contemplated. Its capacity is structured to meet the regional steel mono-hulled fleet requirements of some 60 vessels between 200 and 4000 tonne displacement. Fishing industry, marine tourist industry, large private pleasure craft , naval, scientific and law enforcement vessels are a secondary target.\n\nThe slipway is designed to initially accept vessels up to 2,700 tonnes, a restriction which is set by our current inventory of cradles used to support vessel on the slip. The cradles will be progressively upgraded to ultimately handle 4000 tonne. A later expansion will allow 500 tonne vessels to be side slipped, thereby increasing capacity.\n\nThe slipway location and orientation on the Base has been chosen to maximize the cost and load bearing benefits of having a very high strength granite bedrock as the best possible foundation.\n\nThe Mermaid slipway will rank second in terms of capacity on the western half of the continent. Tenix, Fremantle 8,000 tonne, Mermaid Dampier 2,700 tonne rising to 4,000 tonne, Darwin Ship Repair 2,500 tonne. The nearest other facilities are Singapore, Adelaide, Port Moresby or Cairns.\n\nMermaid has purchased a very large cyclone rated industrial building frame which will be sited beside the slipway and tenanted by Mermaid engineering and companies which will provide ancillary services related to ship repair.\n\n*The Northwest Shelf is a world scale offshore oil and gas exploration province.*", - "page_start": 20, - "page_end": 20, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "Darwin is serviced by three marine infrastructure elements.\n\n- a. 
A public port adjacent to the main business centre, which is destined to be redeveloped as a cruise ship and tourism precinct .\n- b. A group of freehold water front properties on Frances Bay near to the main business center.\n- c. A recently commissioned public port and industrial estate at East Arm some 25 km from the main business district.\n\nDarwin already has an abundance of shore based logistics service providers who operate from onshore industrial estates through publicly owned facilities.\n\nThe Northern Territory Government has sponsored a study to determine the marine infrastructure deficits of the Darwin area. Mermaid has contributed to the study and is monitoring the subsequent planning processes.\n\nRegardless of industry trends, Mermaid has a need for a Darwin Base to service and care for Mermaid vessels working in the area. Too often vessels have been demobilised to Dampier at the conclusion of a contract then being required to return to Darwin within days or weeks for another assignment.\n\nMermaid has decided that needs and opportunities in the north of Australia can be best served by entering a co-operative arrangement with an established Darwin Company. Agreement has therefore been reached with Perkins Shipping Group, who are one of the freehold land owners on Frances Bay.\n\nPerkins Shipping, established in the 1950s is the major coastal shipping service provider in Australia's north, linking Darwin to mining and aboriginal committees from the Kimberly to Gulf of Carpenteria. Additionally Perkins operate services to East Timor, mining operations in Indonesia, as well as Singapore and East Malaysia. The Perkins and Mermaid businesses are different, but complementary, offering benefits to both. 
The arrangement with Perkins will give Mermaid well placed office facilities, open storage and waterfront access.\n\nOur intention is that Darwin become the third and final mainland entreport to service the Northwestern offshore oil and gas industry together with our other strategically placed facilities at Dampier and Broome.", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "Work on Dampier Base expansion commenced on 9 October and will be largely complete by June 2001, involving a capital budget of $13m. B ASE EXPANSION WORKS AND ENVIRONMENTAL MANAGEMENT\n\nThe principle activities and facility developments involved in the expansion are:\n\n#### **A. DREDGING**\n\nApproximately 700,000 m3 of material is to be dredged in King Bay to form an entrance channel, vessel berths, cyclone moorings and to provide access to the slipway.\n\nThe experience of Woodside constructing their nearby base in 1981 indicates that two types of dredges will be required, a Cutter Suction to remove the soft unconsolidated material (approx.70%) and a Dipper Dredge (barge mounted back-hoe) to remove harder consolidated material.\n\nThe Cutter Suction dredge size will be deliberately modest due to onshore spoil management requirement and environmental considerations.\n\nThe Dipper Dredge will be the largest of its type in the world, and will be an ideal remedial dredging tool using the experience gained from the earlier Woodside project. The layout of the Base has been very much driven by the desire to avoid or minimize blasting while fulfilling functional objectives.\n\nThe entrance channel into the Mermaid Base will be 30 m wide and dredged to 6 m below chart datum. The dredge spoil will be pumped ashore and used as fill around the Base.\n\nDredges are expected to be onsite for approximately 7 months commencing mid November.\n\n#### **B. 
QUAY WALL ( BERTH 1)**\n\nMarket research and customer needs have caused Mermaid to relocate and redesign the main berth to accommodate a wider range of vessels than originally contemplated. The berth is now located in deeper water with better vessel access.\n\nThe regional offshore fleet characteristics have been changing in terms of vessel size. There are now four vessels operating in the region with 12,000 to 18,000 hp. When design commenced there were none of this size.\n\nThe depth alongside Berth 1 will be 7.5m. King Bay has a statistical average extreme low tide (MLWS) of 0.9 m, the occurrence of which can be expressed in hours per month. The largest", - "page_start": 13, - "page_end": 13, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "vessels engaged in routine offshore logistics tasks operate fully laden with 7.4 m draft which means there will be very few occasions when the largest vessels in the industry have to make a tide dependent entry or departure through the Mermaid channel. Further the Mermaid Base will not suffer operational disadvantages experienced by the adjacent Woodshed Base or nearby Damper Public Wharf in terms of entry and departure draft restrictions.\n\nThe function and purpose of Berth 1 will be:\n\n- To service the larger offshore supply boat market on a fast turnaround basis.\n- To receive and offload very heavy ro/ro cargoes up to 1500 tonne delivered by ocean going heavy lift ships and barges.\n- To handle inbound and outbound cargoes related to major offshore pipe lay projects.\n- To receive and efficiently load reel ships used for deep water small diameter pipelay.\n\nThe wharf will be an earth filled structure with steel sheet pile faces and concrete capping beam surround. 
Most of the construction will be performed using land based equipment working from the core of the earth filled system.\n\nMuch effort has gone into a design concept which allows very large cranes (>100 tonne capacity) to operate without restriction on the wharf.\n\nThe separation between Berth 1 and Berth 2 is such to allow Road Train Triples (the max allowable) to turn unassisted on the wharf.\n\n#### **C. QUAY WALL (BERTH 2)**\n\nThe inner berth, Berth 2 has a minimum depth alongside of 5.0 m allowing unrestricted operation of all the Mermaid fleet, and the majority of other vessels servicing the offshore oil/gas industry and mineral ports. This berth will offer excellent weather protection for small and medium size vessels.\n\n#### **D. BREAKWATER.**\n\nThe rubble mount type breakwater will be an extension of the wharf, constructed using core and armor rock largely won from excavations on the Base. The excavations created will become depositories for dredge spoil.\n\nBecause the storm surge associated with major cyclones can be up to 7 m above chart datum (low tide), before imposing the wave height, a fully protective breakwater is not practical. The", - "page_start": 14, - "page_end": 14, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "| Note | Consolidated | Company | 2000 | 1999 | 2000 | 1999 | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| $ | $ | $ | $ | | | | |\n| 11. 
| INVESTMENTS | At cost: | | | | | |\n| Unlisted investment - shares | controlled in entities | – | – | 2,444,611 | 2,444,611 | | |\n| Country of | Ownership | Ownership | Incorporation | Interest 2000 | Interest 1999 | | |\n| % | % | Parent Entity | | | | | |\n| Mermaid Marine Australia Limited | Australia | | | | | | |\n| Controlled Entities | Mermaid Marine Group Pty Ltd* | Australia | 100 | 100 | | | |\n| Mermaid Marine Vessel Operations Pty Ltd* | Australia | 100 | 100 | Mermaid Marine Pty Ltd* | Australia | 100 | 100 |\n| Mermaid Marine Offshore Pty Ltd* | Australia | 100 | 100 | Mermaid Marine Charters Pty Ltd* | Australia | 100 | 100 |\n| Mermaid Supply Base Pty Ltd* | Australia | 100 | 100 | Dampier Stevedoring Pty Ltd* | Australia | 100 | 100 |\n| Mermaid Manning and Management Pty Ltd* | Australia | 100 | 100 | | | | |\n\n* Pursuant to ASIC Class Order 98/1418, relief has been granted to these wholly owned controlled entities from the Corporations Law requirements for preparation, audit and lodgement of the financial report. As a condition of the Class Order, Mermaid Marine Australia Limited and the controlled entities entered into a Deed of Cross Guarantee on 24 June 1999.\n\n| | Note | Consolidated | | Company | |\n| --- | --- | --- | --- | --- | --- |\n| | | 2000 | 1999 | 2000 | 1999 |\n| | | $ | $ | $ | $ |\n| 12. | OTHER NON CURRENT ASSETS | | | | |\n| | Future income tax benefit | | | | |\n| | - timing differences | 664,722 | 318,202 | – | – |\n| 13. | INTANGIBLES | | | | |\n| | Goodwill arising on consolidation | – | 95,105 | – | – |\n| | Write off of goodwill | – | (95,105) | – | – |\n| | | – | – | – | – |\n| 14. 
| ACCOUNTS PAYABLE | | | | |\n| | Trade payables | 1,844,206 | 1,079,327 | – | 55,590 |\n| | Other payables and accruals | 519,300 | 534,897 | – | 42,785 |\n| | | 2,363,506 | 1,614,224 | – | 98,375 |", - "page_start": 49, - "page_end": 49, - "source_file": "ASX_MRM_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_MRM_2000.pdf", - "query": "What was the budget for the expansion of Dampier Base?", - "target_page": 14, - "target_passage": "a capital budget of $13m", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "## Mermaid operates a fleet of fifteen (15) tugs, workboats and barges, undertaking all forms of offshore activity including exploration support, supply, survey and berthing assist. Lower vessel utilisation during the period allowed an acceleration of scheduled maintenance. Two tugs, Mermaid Commando and Mermaid Chieftan received extensive refits. In both cases the work increased productivity through enhanced bollard pull and consequent earnings. SEAGOING OPERATIONS\n\nSafety was given the highest priority through new monitoring systems and awareness programs. Formalised on the job instruction and training courses have also lifted levels of experience and proficiency across the workforce.\n\n#### DAMPIER BASE\n\n8\n\nThe offshore waters and islands adjacent to Dampier, host in excess of 50% of all exploration and development budgets of Australia's offshore oil and gas industry. The Burrup Peninsular where the Base is located is the intended site of major new oil, gas, petrochemical and industrial mineral processing plants. The Port of Dampier is Australia's largest Port as measured by tonnage, but as identified in the 1997 WA Department of Commerce and Trade report, there remains an urgent need for additional marine support infrastructure. 
Mermaid is now well advanced in our plan to satisfy those needs and onshore work was announced to start on the 9th October 2000.\n\nSince receiving approval in principle for development of the Dampier Base from the Western Australian Minister for the Environment in February 2000, engineering and general design work in connection with the base proceeded at an accelerated pace.\n\nThis work, assisted by technical studies and a re-assessment of an increased demand for services arising out of greater expectations for growth in the sector, has led to improvements and expansion of capacity over earlier plans.\n\nThe Dampier Base will now comprise:-\n\n**•**\n\n**•**\n\n- A wharf offering 7.5 metres depth at low tide, featuring a heavy loadout section to accommodate modules of up to 1500 tonnes to onshore projects on the Burrup Peninsular and adjacent mining centres. A subsea pipe reel loading facility will encourage the use of spool ships in the region for deepwater pipelay. On a project by project basis, pipeline protection rock dumping, specialist vessel rig up activities and the like will be facilitated, as will dry and bulk cargo handling, refuelling, watering and all categories of waste reception. The joint Commonwealth and WA State Government initiative to establish an integrated industrial estate at Jervoise Bay (south of Perth) serviced by high wide load corridors from Perth's industrial areas will see the heavy capacity wharf playing a strategic role in major capital works in the Pilbara, leading to significant cost savings.", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "- A slipway initially capable of receiving vessels up to 2,700 tonnes capacity will handle most of the 60 vessels currently working in the region, a considerable number, but one which will rise over coming years. First class engineering facilities have been planned and highly experienced management recruited. 
Alternative slipways offering comparable capacity are only to be found in Darwin or Fremantle, a sea journey of approximately 1000 miles from this operational region. Australia has emerged as a centre of excellence with respect to vessel repair work, the Dampier facility will both benefit from and protect that valuable reputation. **•**\nRehabilitated land for buildings and storage will finally extend over 17 hectares. The major oilfield services company Halliburton, have been attracted to the base as a tenant and a $1.1m purpose built building is being constructed for their use. Negotiations are also proceeding with other groups who recognise the unique advantages of operating from this strategically positioned Base. Rental income and associated revenues such as plant and labour hire will contribute significantly to the overall economics of the facility.\n\n- Protected moorings for cyclone shelter will be established inside the breakwater for long term lease to local tug operators. The demand arises from serious vessel and crew safety considerations. The Dampier Port Authority are reluctant to see the continued use of cyclone moorings in the Harbour, not only for safety reasons, but for environmental concerns as well. Oil spills are not acceptable under any circumstances and will be avoided whatever the cost. Tug owners share similar concerns, but in addition they need to remain in a position of readiness for crews and equipment to resume their important functions immediately following a cyclonic event. 
The number of specific purpose spread moorings, detailed on the adjacent plan will total 10 in the first phase of construction, a limit which will be assisted by an ability to remove vessels up to 100 tonnes from the water by wharf crane for tie down on cradles.\n**•**\n\n**•**\n\nConstruction of the Dampier Base commenced on the 9th October this year, with an expectation that all major elements of the project will be largely completed within 12 months.\n\n*The \"Clough Challenge\" Barge - Shallow Water Construction Support Barge in the East Spar Field*", - "page_start": 12, - "page_end": 12, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "#### OVERVIEW\n\nTrading for the period commencing 1 July 1999 to 30 June 2000 for Mermaid Marine Australia Ltd (\"Company\") and its controlled entities, experienced a 43% turnover reduction from last year. The result was almost entirely due to a heavy fall in oil prices, which reached their low of US$10 in February 1999, leading to the lowest level of offshore activity for many years. In September 1999 Mermaid exercised its option to acquire the utility vessel \"Mermaid Achiever\" for $3,250,000. Previously the Achiever operated under a bare boat charter.\n\nIn February 2000 Mermaid received approval in principle from the Western Australian Minister for the Environment for the development of a supply and engineering base at Dampier (Dampier Base). Since that time a detailed environmental management system has been produced for final approval and as a guide to daily environmental management and compliance. Refinements to the design have proceeded, together with the preparation of bid packages and negotiations with Banks for project finance.\n\nSubsequent to years end, the subscription of a further $5 million from Mr Mark Bradley and Clough Engineering will see an extremely robust balance sheet, with cash on hand approaching $10 million. 
As construction commences at Dampier, a level of project finance will be arranged providing a comfortable mix of debt and equity and allowing the retention of a significant cash balance.\n\nThe year saw considerable progress with Base activities at Dampier, Broome and Darwin. They are dealt with in detail under following headings.\n\nMermaid recorded an after-tax loss for the Period of $207,957. Compared with an after-tax profit for the previous period of $2,454,919. Revenue for the Period was $15,124,774, a decrease of 43% over the previous period. Fixed cost reductions enabled the Company to ride out the market reversal with a minimal loss and positive operating cash before capex of $1.6m. This result, achieved against a major drop in turnover, was possible through a vigorous attack on overheads, which included more beneficial ownership costs, insurance savings, management salary savings, including voluntary sacrifice from certain senior executives in recognition of the tighter conditions. In all the changes contributed approximately $1.5million to the bottom line.\n\nBare boat charters, although useful for the busy times encountered in 1998 exposed the Company to a high level of fixed costs. The vessels were valuable earners and the transfer of the Mermaid Achiever, Mermaid Eagle and Mermaid Reunion to Company ownership has proved to be the right decision for all market conditions. Although there have been no contracts yet let for work of any significance by producers on the North West Shelf, underlying day to day activity has returned. Expressions of interest for major project work have been issued and as an indication of better trading conditions, an unaudited profit of $496,721 has been recorded for the two months to 31st August 2000. 
The trend has continued in September.\n\n#### FINANCIAL", - "page_start": 10, - "page_end": 10, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "Work on Dampier Base expansion commenced on 9 October and will be largely complete by June 2001, involving a capital budget of $13m. B ASE EXPANSION WORKS AND ENVIRONMENTAL MANAGEMENT\n\nThe principle activities and facility developments involved in the expansion are:\n\n#### **A. DREDGING**\n\nApproximately 700,000 m3 of material is to be dredged in King Bay to form an entrance channel, vessel berths, cyclone moorings and to provide access to the slipway.\n\nThe experience of Woodside constructing their nearby base in 1981 indicates that two types of dredges will be required, a Cutter Suction to remove the soft unconsolidated material (approx.70%) and a Dipper Dredge (barge mounted back-hoe) to remove harder consolidated material.\n\nThe Cutter Suction dredge size will be deliberately modest due to onshore spoil management requirement and environmental considerations.\n\nThe Dipper Dredge will be the largest of its type in the world, and will be an ideal remedial dredging tool using the experience gained from the earlier Woodside project. The layout of the Base has been very much driven by the desire to avoid or minimize blasting while fulfilling functional objectives.\n\nThe entrance channel into the Mermaid Base will be 30 m wide and dredged to 6 m below chart datum. The dredge spoil will be pumped ashore and used as fill around the Base.\n\nDredges are expected to be onsite for approximately 7 months commencing mid November.\n\n#### **B. QUAY WALL ( BERTH 1)**\n\nMarket research and customer needs have caused Mermaid to relocate and redesign the main berth to accommodate a wider range of vessels than originally contemplated. The berth is now located in deeper water with better vessel access.\n\nThe regional offshore fleet characteristics have been changing in terms of vessel size. 
There are now four vessels operating in the region with 12,000 to 18,000 hp. When design commenced there were none of this size.\n\nThe depth alongside Berth 1 will be 7.5m. King Bay has a statistical average extreme low tide (MLWS) of 0.9 m, the occurrence of which can be expressed in hours per month. The largest", - "page_start": 13, - "page_end": 13, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "BROOME SUPPLY BASE\n\nMermaid Marine services base at the Port of Broome (Broome Base) commenced operations on 1 February 2000 when the first ship containing drill pipe for Inpex Browse Ltd arrived from Japan.\n\nAs a result of Mermaid's efforts in establishing the Broome Base, Inpex Browse Ltd., BHP Petroleum and Woodside have used Broome as their base for drilling a total of four (4) offshore wells.\n\nIt is presently expected that at least six (6) exploration wells will be drilled in the area during 2001. The Base now employs as many as ten (10) staff up from the three (3) who commenced in February 2000. Excellent management and staff competence are the prime factors, which have delivered the smooth start up and continued success at Broome.\n\n*The Mermaid Broome Supply Base certified Impex, Woodside and BHP Petroleum exploration program during 2000.*\n\nThe base is currently secured on a come and go lease arrangement, located on Port premises adjacent to the wharf gates. Although convenient, with an excellent cyclone proof building, the site has limitations in terms of size and slope. 
An area more suitable for our long term needs has been optioned from Port authorities and discussions will proceed with our clients this year to determine their precise needs.\n\nThe success of Browse Basin wells drilled this year, strong developments in the energy sector and the intention of operators to base their 2001 operations in Broome, have encouraged the Board to consider further investment to ensure that capability keeps pace with demand and that we leave no reason for competitors to offer more or better.\n\nThe offshore waters of the Northern Territory, the Zone of Co-Operation (ZOCA) between Australia and Timor, and the Commonwealth Territory of Ashmore and Cartier host approximately 35% of the exploration and development budgets of Australian offshore oil and gas industry. DARWIN BASE\n\n> Two large projects are under study or implementation in these waters; the Phillips Petroleum Bayu-Undang Project and the Woodside Sunrise Troubador Project.\n\n> Two large petrochemical projects are under study for the Darwin area based upon pipelines from the Timor Sea gas resources of the projects above.\n\n> Darwin will within 3 years be the northern terminus of the Australian national rail system with the completion of the Alice Springs Darwin rail link, further expanding its role in Australia's economy.", - "page_start": 21, - "page_end": 21, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "breakwater will be an over capping type, which interrupts the waves progress, but does not totally protect from wave penetration. These events are manageable and estimated as a once in 50 years possibility.\n\nThe breakwater core will be used as a construction causeway allowing land based equipment to perform the work. The greater part of the breakwater work involves winning the material as opposed to actual construction.\n\n#### **E. 
CYCLONE MOORINGS.**\n\nThe extent of the cyclone problem in Australia's north and north west was emphasised when Cyclone Tracey struck Darwin in 1974. The most powerful cyclone to cross the Australian coast was Cyclone Vance in 1999, which passed near Dampier, destroying large parts of the towns of Onslow and Exmouth further to the south.\n\nThe problem is acute, particularly in the area between Exmouth and Port Hedland, which suffers cyclones of an intensity and frequency as high as anywhere in the world. The Mermaid Base is typically on cyclone alert three times per season. The season is November to April.\n\nTo date there have been three options available to vessel owners when a cyclone approaches:.\n\n- Run to sea\n- Take refuge with crew onboard, on a mooring in the most sheltered location available such as the Dampier Archipelago or the Monte Bello Islands.\n- Construct a cyclone shelter.\n\nThere are serious personal safety and environmental considerations related to Options 1 and 2 and it is obvious that best practice universally adopted by large responsible Companies can be satisfied in this way.\n\nOnly Woodside at Dampier and BHP at Port Hedand have taken the step of building shelters which provides protection to 12 of the region's 60 vessels and this at very considerable cost.\n\nMermaid has undertaken significant engineering work on the placing of vessels on partially sheltered spread moorings, allowing the vessels to be secured near to shore and the crews demobilized to take care of their families and attend to household cyclone preparation.\n\nMermaid is taking a leadership role with a technical solution which will lead to wider adoption as vessel owners and the insurance industry fully value the arrangements. Mermaid will provide 1 2", - "page_start": 15, - "page_end": 15, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "Figure 2.26. 
Range Performance", - "page_start": 186, - "page_end": 186, - "source_file": "00-80T-80.pdf" - }, - { - "text": "| | | | | | Depreciation, | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | Gross | Intercompany | | Net | Amortization, Depletion and | Operating | Capital | | Total |\n| 2003 | Revenue | Revenue(b) | | Revenue | Accretion(c) | Income | Expenditures(d) | | Assets |\n| Eastern Region ÏÏÏÏÏÏÏÏÏ $ | 600.2 | $ (93.0) | $ | 507.2 | $ 36.4 | $ 71.3 | $ 40.7 | $ | 826.9 |\n| Central Region ÏÏÏÏÏÏÏÏÏ | 671.7 | (151.6) | | 520.1 | 74.0 | 106.6 | 75.7 | | 960.5 |\n\nSouthern RegionÏÏÏÏÏÏÏÏ 680.3 (76.9) 603.4 62.8 107.5 69.9 865.6 Southwestern Region ÏÏÏÏ 332.6 (31.2) 301.4 28.7 50.2 28.9 409.4 Western Region ÏÏÏÏÏÏÏÏ 729.4 (143.9) 585.5 46.2 148.8 51.4 813.2 Corporate Entities(a)ÏÏÏÏ .2 Ì .2 3.7 (71.7) 6.6 678.5 TotalÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ $3,014.4 $(496.6) $2,517.8 $251.8 $412.7 $273.2 $4,554.1\n\n### **NOTES TO CONSOLIDATED FINANCIAL STATEMENTS (All tables in millions, except per share data) Ì (Continued)**\n\n| | | | | Depreciation, Amortization, | Other | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | Gross | Intercompany | Net | and | Charges | Operating | Capital | Total |\n| 2002 | Revenue | Revenue(b) | Revenue | Depletion(c) | (Income) | Income | Expenditures(d) | Assets |\n| Eastern Region ÏÏÏÏÏÏÏÏ $ | 564.1 | $ (79.7) | $ 484.4 | $ 32.0 | $(4.1) | $ 87.0 | $ 39.2 | $ 822.2 |\n| Central Region ÏÏÏÏÏÏÏÏ | 589.6 | (120.2) | 469.4 | 53.6 | (1.5) | 105.3 | 77.1 | 950.9 |\n| Southern Region ÏÏÏÏÏÏÏ | 643.1 | (65.5) | 577.6 | 52.7 | Ì | 118.3 | 58.0 | 830.7 |\n| Southwestern Region ÏÏÏ | 311.8 | (29.1) | 282.7 | 22.8 | Ì | 41.9 | 30.6 | 374.6 |\n| Western Region ÏÏÏÏÏÏÏ | 690.0 | (139.1) | 550.9 | 41.3 | Ì | 145.5 | 47.3 | 826.7 |\n| Corporate Entities(a)ÏÏÏ | .2 | (.1) | .1 | (2.8) | Ì | (38.5) | 6.4 | 404.0 |\n| TotalÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ $2,798.8 | | $(433.7) | $2,365.1 | $199.6 | $(5.6) | $459.5 | $258.6 | $4,209.1 
|\n\n(a) Corporate functions include legal, tax, treasury, information technology, risk management, human resources, national accounts and other typical administrative functions. The increase in operating income for Corporate Entities from 2003 to 2004 is due primarily to higher self-insurance expense recorded during 2003.\n\n(b) Intercompany operating revenue reÖects transactions within and between segments and are generally made on a basis intended to reÖect the market value of such services.\n\n- (c) EÅective January 1, 2003, the Company adopted SFAS 143. (See Note 1, Basis of Presentation, for further information.)\n- (d) Capital expenditures for 2002 exclude $72.6 million used to purchase equipment consisting primarily of revenue-producing vehicles originally placed into service pursuant to an operating lease.\n\nGoodwill is the cost of acquired businesses in excess of the fair value of net assets acquired. The activity in goodwill, net of accumulated amortization, during 2004 and 2003 is as follows:\n\n| | Balance as of December 31, | | | Balance as of December 31, |\n| --- | --- | --- | --- | --- |\n| | 2003 | Acquisitions | Transfers | 2004 |\n| Eastern Region ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | $ 435.9 | $ 2.6 | $(2.1) | $ 436.4 |\n| Central Region ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 350.5 | 10.7 | (3.6) | 357.6 |\n| Southern Region ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 325.8 | 2.0 | (1.3) | 326.5 |\n| Southwestern Region ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 135.0 | .2 | (1.6) | 133.6 |\n| Western RegionÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 310.9 | (2.3) | Ì | 308.6 |\n| Total ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | $1,558.1 | $13.2 | $(8.6) | $1,562.7 |", - "page_start": 88, - "page_end": 88, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "(4) The deposit of any moneys forming part of the Consolidated Fund with a bank or with the Crown Agents for Overseas Governments and Administrations or the investment of any such moneys in securities in which, under the law for the time being in force in Botswana, 
trustees are authorized to invest, or the making of advances to such extent and in such circumstances as may be prescribed by Parliament, shall not be regarded as a withdrawal of those moneys from the Fund for the purposes of this section.\n\n## **119. Authorization of expenditure**\n\n(1) The Minister for the time being responsible for finance shall cause to be prepared and laid before the National Assembly, before or not later than 30 days after the commencement of each financial year, estimates of the revenues and expenditure of Botswana for that year.\n\n(2) The organisations of expenditure contained in the estimates for a financial year (other than expenditure charged upon the Consolidated Fund by this Constitution or any other law) shall be included in a Bill to be known as an Appropriation Bill which shall be introduced into the Assembly to provide for the issue from the Consolidated Fund of the sums necessary to meet that expenditure and the appropriation of those sums for the purposes specified in the said Bill.\n\n(3) If in any financial year it is found-\n\n- (a) that the amount appropriated by the Appropriation Act for the purposes included in any organisation of expenditure is insufficient or that a need has arisen for expenditure for a purpose for which no amount has been appropriated by the Appropriation Act; or\n- (b) that any moneys have been expended on any organisation of expenditure in excess of the amount appropriated for the purposes included in that organisation by the Appropriation Act or for a purpose for which no amount has been appropriated by the Appropriation Act,\n\na supplementary estimate showing the sums required or spent shall be laid before the National Assembly and the organisations of expenditure shall be included in a supplementary Appropriation Bill, or in a motion or motions approving such expenditure, which shall be introduced or moved in the Assembly.\n\n(4) Where any supplementary expenditure has been approved in a 
financial year by a resolution of the National Assembly in accordance with the provisions of subsection (3) of this section, a supplementary Appropriation Bill shall be introduced in the National Assembly, not later than the end of the financial year next following, providing for the appropriation of the sums so approved.\n\n## **120. Authorization of expenditure in advance of appropriation**\n\nParliament may make provision under which, if the Appropriation Act in respect of any financial year has not come into operation by the beginning of that financial year, the President may authorize the withdrawal of moneys from the Consolidated Fund for the purpose of meeting expenditure necessary to carry on the services of the Government until the expiration of four months from the beginning of that financial year or the coming into operation of the Appropriation Act, whichever is the earlier.\n\n## **121. Contingencies Fund**\n\n(1) Parliament may make provision for the establishment of a Contingencies Fund and for authorizing the President, if satisfied that there has arisen an urgent and unforeseen need for expenditure for which no other provision exists, to make advances from that Fund to meet that need.\n\n(2) Where any advance is made from the Contingencies Fund, a supplementary estimate shall be laid before the National Assembly as soon as possible for the purpose of replacing the amount so advanced.", - "page_start": 51, - "page_end": 51, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- I.4.2. Period of provision of the services\nThe period for the provision of the services starts to run from the date on which the specific contract is signed by the last party.\n\n- I.4.3. 
Implementation of FWC in cascade\nThe FWC is implemented as follows: the contracting authority orders services by sending a request for offer for a specific contract to the contractor who is ranked first in the cascade.\n\nWithin 5 working days (unless otherwise stated in the request for offer), the contractor must either:\n\n- (a) send the specific tender back to the contracting authority; or\n- (b) send an explanation of why it cannot accept the order.\n\nIf the contractor does not accept the order or fails to observe the deadline or to submit an acceptable offer for the Agency, or if it is in a situation of conflicting interests that may negatively affect the *performance of the specific contract* (see Article II.7), the contracting authority may place the order with the next contractor on the cascade.\n\nIf the contractor repeatedly refuses to accept requests for offer or repeatedly fails to send them back on time, the contractor may be considered in breach of its obligations under this FWC as set out in Article II.18.1 (c).\n\nWithin a maximum of 5 working days of a specific contract or order form being sent by the Agency to the contractor, the Agency shall receive it back, duly signed and dated. The period allowed for the execution of the tasks shall start to run on the date of signature of the specific contract or order form by both parties.\n\n# **I.5. Prices**\n\n# **I.5.1. Maximum amount of the FWC and maximum prices**\n\nThe maximum amount covering all purchases under this FWC, including all renewals and reimbursement of expenses is **EUR 1 000 000** (one million). However, this does not bind the contracting authority to purchase for the maximum amount.\n\nThe maximum unit prices of the services are:\n\n| Senior experts: | [ | ] EUR per man-day |\n| --- | --- | --- |\n| Experts: | [ | ] EUR per man-day |\n\n# **I.5.2. 
Price revision index**\n\nPrice revision is determined by the formula set out in Article II.20 and using the trend in the harmonised indices of consumer prices (HICP) 'Euro area (19 countries)' published at http://ec.europa.eu/eurostat/web/hicp/data/database under HICP (2015 = 100) - monthly data (index) (prc_hicp_midx).]\n\n# **I.5.3. Reimbursement of expenses**\n\nIn addition to the maximum price specified in each specific contract, if applicable, the contracting authority shall reimburse the following in accordance with Article II.22:", - "page_start": 5, - "page_end": 5, - "source_file": "EN-Draft FWC for services 0142.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_MRM_2000.pdf", - "query": "When did Mermaid Marine Service Base in the Port of Broome start?", - "target_page": 22, - "target_passage": "1 February 2000", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "*The foreshore of King Bay will be redeveloped as part of the Mermaid Marine Dampier Base Expansion works.*\n\nleased facilities to seven third party vessels and protection for three of our own vessels using this technique by the cyclone season in 2001.\n\nAs more vessels seek protection, additional breakwaters can be constructed and sea room dredged. Each mooring involves a pattern of pin piles drilled into the granite sea floor with four vessel specific mooring lines secured to special attachment points on the vessel.\n\nMany smaller vessels including Mermaid's will be lifted from the water and tied down on purpose built cradles for cyclones.\n\n#### **F. ONSHORE LAND RECLAMATION.**\n\nLike our neighbours, much of the Mermaid site is below the prescribed storm surge level, or needs some degree of earthworks to maximize its value. 
Currently 8 of the 17 ha of the area is suitable for development in its present state.\n\nThe spoil produced from dredging will allow Mermaid to achieve full utilization of the site at a fraction of the cost of importing fill from elsewhere.\n\nConsiderable effort has gone into anticipating the future direction of the Base. Planning services such as traffic flows, land allocation and security, as well as fulfilling the many and complex regulatory requirements related to health, safety, quarantine, environmental management, dust, dangerous goods and hazchem materials have been the subject of considerable study prior to this implementation stage. 1 3\n\n> MERMAID MARINE AUSTRALIA LIMITED", - "page_start": 16, - "page_end": 16, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "Darwin is serviced by three marine infrastructure elements.\n\n- a. A public port adjacent to the main business centre, which is destined to be redeveloped as a cruise ship and tourism precinct .\n- b. A group of freehold water front properties on Frances Bay near to the main business center.\n- c. A recently commissioned public port and industrial estate at East Arm some 25 km from the main business district.\n\nDarwin already has an abundance of shore based logistics service providers who operate from onshore industrial estates through publicly owned facilities.\n\nThe Northern Territory Government has sponsored a study to determine the marine infrastructure deficits of the Darwin area. Mermaid has contributed to the study and is monitoring the subsequent planning processes.\n\nRegardless of industry trends, Mermaid has a need for a Darwin Base to service and care for Mermaid vessels working in the area. 
Too often vessels have been demobilised to Dampier at the conclusion of a contract then being required to return to Darwin within days or weeks for another assignment.\n\nMermaid has decided that needs and opportunities in the north of Australia can be best served by entering a co-operative arrangement with an established Darwin Company. Agreement has therefore been reached with Perkins Shipping Group, who are one of the freehold land owners on Frances Bay.\n\nPerkins Shipping, established in the 1950s is the major coastal shipping service provider in Australia's north, linking Darwin to mining and aboriginal committees from the Kimberly to Gulf of Carpenteria. Additionally Perkins operate services to East Timor, mining operations in Indonesia, as well as Singapore and East Malaysia. The Perkins and Mermaid businesses are different, but complementary, offering benefits to both. The arrangement with Perkins will give Mermaid well placed office facilities, open storage and waterfront access.\n\nOur intention is that Darwin become the third and final mainland entreport to service the Northwestern offshore oil and gas industry together with our other strategically placed facilities at Dampier and Broome.", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## CHAIRMAN ' S REPORT\n\nDirector of the Clough Group and a highly experienced and talented executive. Richard has appointed an alternate director, Mr Chris Sutherland, a senior Clough Executive, with engineering qualifications and associated business skills to assist him.\n\nCaptain Jim Carver, Mermaid's founder continues to play a significant role in Mermaid's operations, paying particular attention to our business at sea. 
Under 20 years of Jim's leadership, Mermaid developed an enviable reputation as a \"can do\" company, and in our drive for new engineering expertise and professionalism, we have no intention of allowing that attitude to be lost.\n\nLast year we identified Broome as our next strategic position. No oil and gas work had been supported out of Broome for seventeen years and with the valuable cooperation and assistance of the Broome Port Authority, we secured Inpex, the large Japanese resource company as our first client. The base was then established early this year.\n\nA new focus has developed in the Browse Basin and it is pleasing to report that after only seven months operation, our Base is profitable, housing Inpex, BHP, Woodside and Sedco in support of their current drilling programs. All the holes drilled from the Broome Base have been designated as commercial finds by the explorers and the very major increase in the reserves at Brecknock, Woodside's permit 500 kilometres north of Broome creates optimism for future production based in the Broome area.\n\nDarwin was next on our list, enabling involvement in Timor Sea oil and gas activity. The Bayu Undan project operated by Phillips, is well advanced and will impact Darwin's offshore activity quite soon. Pursuing the formula for a strategic sea/land interface, we reached agreement with Perkins Shipping in Darwin, to set up an office at their Frances Drive facility. Perkins Shipping is synonymous with Darwin's history. Set up by V.B. Perkins in the late 40's, it has grown to significant size, operating its ships across the top of Australia and into South East Asia. There are many synergies which Mermaid shares with Perkins and we look forward to developing our Darwin business in close association with that fine old Company.\n\nOur ambitions for the support of the oil and gas industry now go beyond bases and vessels. 
Early in the current financial year, Mermaid acquired 50% of the OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Mermaid shares. OIS MOC owns the highly successful labour hire business operated by Kevin Ponga and Rick De Franck. Kevin Ponga is now General Manager of Mermaid Labour & Management Pty Limited and Mr De Franck becomes a Director. With their reputation and talent added to Mermaid's experienced team, this labour hire company has become a significant force and can be expected to be in the final when major labour hire contracts are let.", - "page_start": 8, - "page_end": 8, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## MERMAID FLEET", - "page_start": 25, - "page_end": 25, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "BROOME SUPPLY BASE\n\nMermaid Marine services base at the Port of Broome (Broome Base) commenced operations on 1 February 2000 when the first ship containing drill pipe for Inpex Browse Ltd arrived from Japan.\n\nAs a result of Mermaid's efforts in establishing the Broome Base, Inpex Browse Ltd., BHP Petroleum and Woodside have used Broome as their base for drilling a total of four (4) offshore wells.\n\nIt is presently expected that at least six (6) exploration wells will be drilled in the area during 2001. The Base now employs as many as ten (10) staff up from the three (3) who commenced in February 2000. Excellent management and staff competence are the prime factors, which have delivered the smooth start up and continued success at Broome.\n\n*The Mermaid Broome Supply Base certified Impex, Woodside and BHP Petroleum exploration program during 2000.*\n\nThe base is currently secured on a come and go lease arrangement, located on Port premises adjacent to the wharf gates. Although convenient, with an excellent cyclone proof building, the site has limitations in terms of size and slope. 
An area more suitable for our long term needs has been optioned from Port authorities and discussions will proceed with our clients this year to determine their precise needs.\n\nThe success of Browse Basin wells drilled this year, strong developments in the energy sector and the intention of operators to base their 2001 operations in Broome, have encouraged the Board to consider further investment to ensure that capability keeps pace with demand and that we leave no reason for competitors to offer more or better.\n\nThe offshore waters of the Northern Territory, the Zone of Co-Operation (ZOCA) between Australia and Timor, and the Commonwealth Territory of Ashmore and Cartier host approximately 35% of the exploration and development budgets of Australian offshore oil and gas industry. DARWIN BASE\n\n> Two large projects are under study or implementation in these waters; the Phillips Petroleum Bayu-Undang Project and the Woodside Sunrise Troubador Project.\n\n> Two large petrochemical projects are under study for the Darwin area based upon pipelines from the Timor Sea gas resources of the projects above.\n\n> Darwin will within 3 years be the northern terminus of the Australian national rail system with the completion of the Alice Springs Darwin rail link, further expanding its role in Australia's economy.", - "page_start": 21, - "page_end": 21, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## Mermaid operates a fleet of fifteen (15) tugs, workboats and barges, undertaking all forms of offshore activity including exploration support, supply, survey and berthing assist. Lower vessel utilisation during the period allowed an acceleration of scheduled maintenance. Two tugs, Mermaid Commando and Mermaid Chieftan received extensive refits. In both cases the work increased productivity through enhanced bollard pull and consequent earnings. SEAGOING OPERATIONS\n\nSafety was given the highest priority through new monitoring systems and awareness programs. 
Formalised on the job instruction and training courses have also lifted levels of experience and proficiency across the workforce.\n\n#### DAMPIER BASE\n\n8\n\nThe offshore waters and islands adjacent to Dampier, host in excess of 50% of all exploration and development budgets of Australia's offshore oil and gas industry. The Burrup Peninsular where the Base is located is the intended site of major new oil, gas, petrochemical and industrial mineral processing plants. The Port of Dampier is Australia's largest Port as measured by tonnage, but as identified in the 1997 WA Department of Commerce and Trade report, there remains an urgent need for additional marine support infrastructure. Mermaid is now well advanced in our plan to satisfy those needs and onshore work was announced to start on the 9th October 2000.\n\nSince receiving approval in principle for development of the Dampier Base from the Western Australian Minister for the Environment in February 2000, engineering and general design work in connection with the base proceeded at an accelerated pace.\n\nThis work, assisted by technical studies and a re-assessment of an increased demand for services arising out of greater expectations for growth in the sector, has led to improvements and expansion of capacity over earlier plans.\n\nThe Dampier Base will now comprise:-\n\n**•**\n\n**•**\n\n- A wharf offering 7.5 metres depth at low tide, featuring a heavy loadout section to accommodate modules of up to 1500 tonnes to onshore projects on the Burrup Peninsular and adjacent mining centres. A subsea pipe reel loading facility will encourage the use of spool ships in the region for deepwater pipelay. On a project by project basis, pipeline protection rock dumping, specialist vessel rig up activities and the like will be facilitated, as will dry and bulk cargo handling, refuelling, watering and all categories of waste reception. 
The joint Commonwealth and WA State Government initiative to establish an integrated industrial estate at Jervoise Bay (south of Perth) serviced by high wide load corridors from Perth's industrial areas will see the heavy capacity wharf playing a strategic role in major capital works in the Pilbara, leading to significant cost savings.", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "During 2000 Mermaid Marine formed a new business unit Mermaid Labour and Management Limited. The focus of this unit will be labour supply and industrial relations management to the marine, offshore construction industry and onshore resources projects in the NW of Australia. The Directors and Management of the new entity are very experienced, well known and regarded by the industry in general. The company has high expectations for Mermaid Labour and Management Limited. MERMAID LABOUR AND MANAGEMENT LIMITED\n\n#### SAFETY\n\nMermaid remains dedicated to ensuring a safe environment in all areas where we operate or have responsibility.\n\nIn April 2000, following the regular six monthly Quality Assurance audit, the Company's accreditation under AS/NZS/ISO 9002 was reconfirmed. Mermaid's quality assurance and compliance team continues with a continuous day to day effort to improve our health, safety and environmental performance. Stringent charterer requirements, which are a pre requisite of increased vessel usage, must be met to the letter and are the subject of regular and demanding audits. Although time consuming and expensive, we are grateful to certain of the large producers, who while demanding the highest levels of compliance, have also been prepared to give their time, sharing their safety expertise with us and in that way assisting in the very major advances our company has made in this all important area.\n\nAt the time of writing this report, Mermaid had accumulated 348 days without a Lost Time Injury. 
A fine achievement and a continuing record.", - "page_start": 23, - "page_end": 23, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "vessels engaged in routine offshore logistics tasks operate fully laden with 7.4 m draft which means there will be very few occasions when the largest vessels in the industry have to make a tide dependent entry or departure through the Mermaid channel. Further the Mermaid Base will not suffer operational disadvantages experienced by the adjacent Woodshed Base or nearby Damper Public Wharf in terms of entry and departure draft restrictions.\n\nThe function and purpose of Berth 1 will be:\n\n- To service the larger offshore supply boat market on a fast turnaround basis.\n- To receive and offload very heavy ro/ro cargoes up to 1500 tonne delivered by ocean going heavy lift ships and barges.\n- To handle inbound and outbound cargoes related to major offshore pipe lay projects.\n- To receive and efficiently load reel ships used for deep water small diameter pipelay.\n\nThe wharf will be an earth filled structure with steel sheet pile faces and concrete capping beam surround. Most of the construction will be performed using land based equipment working from the core of the earth filled system.\n\nMuch effort has gone into a design concept which allows very large cranes (>100 tonne capacity) to operate without restriction on the wharf.\n\nThe separation between Berth 1 and Berth 2 is such to allow Road Train Triples (the max allowable) to turn unassisted on the wharf.\n\n#### **C. QUAY WALL (BERTH 2)**\n\nThe inner berth, Berth 2 has a minimum depth alongside of 5.0 m allowing unrestricted operation of all the Mermaid fleet, and the majority of other vessels servicing the offshore oil/gas industry and mineral ports. This berth will offer excellent weather protection for small and medium size vessels.\n\n#### **D. 
BREAKWATER.**\n\nThe rubble mount type breakwater will be an extension of the wharf, constructed using core and armor rock largely won from excavations on the Base. The excavations created will become depositories for dredge spoil.\n\nBecause the storm surge associated with major cyclones can be up to 7 m above chart datum (low tide), before imposing the wave height, a fully protective breakwater is not practical. The", - "page_start": 14, - "page_end": 14, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "Work on Dampier Base expansion commenced on 9 October and will be largely complete by June 2001, involving a capital budget of $13m. B ASE EXPANSION WORKS AND ENVIRONMENTAL MANAGEMENT\n\nThe principle activities and facility developments involved in the expansion are:\n\n#### **A. DREDGING**\n\nApproximately 700,000 m3 of material is to be dredged in King Bay to form an entrance channel, vessel berths, cyclone moorings and to provide access to the slipway.\n\nThe experience of Woodside constructing their nearby base in 1981 indicates that two types of dredges will be required, a Cutter Suction to remove the soft unconsolidated material (approx.70%) and a Dipper Dredge (barge mounted back-hoe) to remove harder consolidated material.\n\nThe Cutter Suction dredge size will be deliberately modest due to onshore spoil management requirement and environmental considerations.\n\nThe Dipper Dredge will be the largest of its type in the world, and will be an ideal remedial dredging tool using the experience gained from the earlier Woodside project. The layout of the Base has been very much driven by the desire to avoid or minimize blasting while fulfilling functional objectives.\n\nThe entrance channel into the Mermaid Base will be 30 m wide and dredged to 6 m below chart datum. The dredge spoil will be pumped ashore and used as fill around the Base.\n\nDredges are expected to be onsite for approximately 7 months commencing mid November.\n\n#### **B. 
QUAY WALL ( BERTH 1)**\n\nMarket research and customer needs have caused Mermaid to relocate and redesign the main berth to accommodate a wider range of vessels than originally contemplated. The berth is now located in deeper water with better vessel access.\n\nThe regional offshore fleet characteristics have been changing in terms of vessel size. There are now four vessels operating in the region with 12,000 to 18,000 hp. When design commenced there were none of this size.\n\nThe depth alongside Berth 1 will be 7.5m. King Bay has a statistical average extreme low tide (MLWS) of 0.9 m, the occurrence of which can be expressed in hours per month. The largest", - "page_start": 13, - "page_end": 13, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "#### **G. SLIPWAY.**\n\nAustralia, and particularly the north west is impoverished in terms of infrastructure to service our marine industries. Some of this has been due to a historical link with our recent industrial past. This is now behind us, and Australia has now become a centre of excellence with respect to both new building and ship repair, particularly for high tech and specialty vessels.\n\nThe Mermaid slipway will be the third such facility on the western half of the continent , with others located at Fremantle and Darwin.\n\nThe slipway will be a repair only facility, no new building is contemplated. Its capacity is structured to meet the regional steel mono-hulled fleet requirements of some 60 vessels between 200 and 4000 tonne displacement. Fishing industry, marine tourist industry, large private pleasure craft , naval, scientific and law enforcement vessels are a secondary target.\n\nThe slipway is designed to initially accept vessels up to 2,700 tonnes, a restriction which is set by our current inventory of cradles used to support vessel on the slip. The cradles will be progressively upgraded to ultimately handle 4000 tonne. 
A later expansion will allow 500 tonne vessels to be side slipped, thereby increasing capacity.\n\nThe slipway location and orientation on the Base has been chosen to maximize the cost and load bearing benefits of having a very high strength granite bedrock as the best possible foundation.\n\nThe Mermaid slipway will rank second in terms of capacity on the western half of the continent. Tenix, Fremantle 8,000 tonne, Mermaid Dampier 2,700 tonne rising to 4,000 tonne, Darwin Ship Repair 2,500 tonne. The nearest other facilities are Singapore, Adelaide, Port Moresby or Cairns.\n\nMermaid has purchased a very large cyclone rated industrial building frame which will be sited beside the slipway and tenanted by Mermaid engineering and companies which will provide ancillary services related to ship repair.\n\n*The Northwest Shelf is a world scale offshore oil and gas exploration province.*", - "page_start": 20, - "page_end": 20, - "source_file": "ASX_MRM_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "Word QS.pdf", - "query": "How do I create a new document in Word?", - "target_page": 2, - "target_passage": "Just select File > New", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "# Share and collaborate\n\nWith this document saved in OneDrive, you can share it with others. They don't even need Word to open it.\n\nTry it: Select Share, and send a link to this document. (keyboard shortcut – Alt+F+Z or Alt+Z+S)\n\nYou can send the link by typing someone's email address or by copying the link and pasting it into a message or chat. If you want them to read the document but not edit it, set their permission to view-only.\n\nIf they don't have Word, the document will open in their web browser, in Word Online.\n\n# Add visuals with pictures from the web\n\nWord works with Bing to give you access to thousands of pictures you can use in your documents.\n\nTry it: Hit enter after this line to make a blank line:\n\n- 1. 
With your cursor in the blank space above, go to the Insert tab, select Online Pictures, and then search for something, like puppy clip art.\n- 2. Select the picture you want, and select Insert.", - "page_start": 2, - "page_end": 2, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "Instructions you can edit, share, and print\n\nUnlike old-school user guides, this doc is yours to tailor exactly for your needs. Reading it will teach you some basics about Word, but this document isn't just for reading. It's for editing too, so you can learn by doing.\n\nFor practice using Word features, watch for Try it text in red throughout this document.\n\nTime saver: If you've only got a minute and you want to see how this works, watch this Video: Welcome to Word.\n\n### Write eloquently, with a little help\n\nWord automatically checks spelling and grammar, and marks misspelled words with a red squiggly underline. Grammatical glitches get a blue double underline.\n\nTry it: Put your cursor at the end of this paragraph, and hit Enter to start a new paragraph. Write a sentence with some spelling or grammatical mistakes, and press Enter to finish the paragraph.\n\nRight-click the text that's marked with underlines, or Press F7. Choose a suggestion to correct the mistakes.", - "page_start": 0, - "page_end": 0, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "# **Word PDF Accessibility**\n\nArticle • 11/26/2024\n\n### **Summary**\n\nAuthors can ensure that their Word documents are accessible to people with disabilities even when distributing them in PDF format using the following approach:\n\n- 1. First, they should follow the practices in Make your Word documents accessible to people with disabilities .\n- 2. Next, they should follow the steps in Create accessible PDFs to preserve the accessibility of the document in PDF format.\n\nThis article provides details about the information Word includes in the PDF to make it accessible.\n\n- 1. 
PDF/UA tags are included to provide semantic information about the content in the document.\n- 2. Decorative content does not need to be read, so it is marked as in the Content Tree in the PDF and no PDF/UA tags are included.\n- 3. Bookmarks for each section and slide are included to make it easier to navigate the content.\n\nノ **Expand table**\n\n### **PDF/UA Tags**\n\n**Type of content Tags** Document Title H1, H2, etc. <H1>, <H2>, etc.", - "page_start": 55, - "page_end": 55, - "source_file": "office-pdf.pdf" - }, - { - "text": "# Get writing suggestions\n\nWith **Editor**, bring out your best writing. Editor helps you bring out your best writing by giving you intelligent writing suggestions. It also calculates an Editor Score based on the number and types of suggestions you have yet to address. Select an underlined word or phrase to accept or ignore a suggestion.\n\n## Review and track changes\n\nWhether you just want to check spelling, keep your word count in check, or fully collaborate with other people, the **Review** tab has essential commands to track, discuss, and manage all of the changes made to your documents.\n\n# View who else is typing\n\nCo-authoring Word documents that are shared on OneDrive or on a SharePoint site happens in real-time, which means you can easily view where other authors are making changes in the same document that you're currently working in.\n\n| File | Home | Insert | Design | Layout | References | Mailings | Review | View |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Anne Wallace | | | | | | | | |\n| The Contoso PK-388 is already living up to its reputation for ease of use and power. Industry | | | | | | | | |\n| testing results have been impressive, and critics are echoing those results with their own praise. | | | | | | | | |\n| & Alex Darrow | | | | | | | | |\n| As the infographic below shows, according to industry testing, the Contoso PK-388 leads the way. 
| | | | | | | | |\n\nFormat with styles\n\n**Styles** lets you create, apply, and review the formatting styles in your current document. To open it, select the **Home** tab, and then select the small arrow in the lower right corner of the Styles gallery.", - "page_start": 2, - "page_end": 2, - "source_file": "Word QS.pdf" - }, - { - "text": "# **13.4.1 PDF data**\n\nPortable Document Format (PDF) data is an increasingly common data type that can be archived within Content Manager OnDemand. The following key advantages are available by using this data type as a document format:\n\n- - It is a read-only format that does not require any external resources, such as images or fonts. It is self-contained.\n- - The viewer for PDF can be downloaded at no charge from the Adobe website and the browser plug-ins for PDF are also available at no charge.\n\nDuring PDF document creation, resources, such as images and custom fonts, are placed in the data stream once and then referenced many times from within the PDF file. If a large report is produced from many small documents, that report requires only one copy of the resources.\n\nHowever, when the PDF is indexed, the PDF Indexer creates many PDF documents from the input file. Each of these documents requires a certain number of PDF structures, which define a document. These documents are concatenated together in the .out file, and then loaded into Content Manager OnDemand as separate documents. Because the resources are extracted and placed into a separate resource file, they are not included in each document. For an illustration of the process, see Figure 13-3.\n\nFigure 13-3 PDF indexing\n\nIf no resources are collected, the size of the .out file, which contains all of the individual documents, might be larger than the original file. 
For tips about how to reduce the size of the output file, see 7.3.5, \"PDF indexing: Using internal indexes (Page Piece Dictionary)\" on page 173.", - "page_start": 331, - "page_end": 331, - "source_file": "sg246915.pdf" - }, - { - "text": "**Important:** Use care when you enable this feature. The Display Document Location function can result in degraded search performance because the storage location information for every document that is returned must be retrieved from the Content Manager OnDemand object server.\n\n# **Display Document Hold**\n\nThe Display Document Hold setting (Figure 3-7 on page 54) determines whether the client shows a column that indicates whether a hold is placed on the document. For more information, see Chapter 16, \"Enhanced Retention Management\" on page 353.\n\n### **Note Search**\n\nIf the annotation parameter (annotation flags in the document database table) in the application group is set to \"No\", the Note Search parameter (Figure 3-7 on page 54) determines when Content Manager OnDemand searches the database for annotations and notifies the user of the annotations. The following options are possible:\n\n- - Hit list: When a folder query is run, Content Manager OnDemand searches for annotations, and a note icon, which contains an annotation, is displayed next to each document in the resulting hit list. The hit list option has a direct performance impact on the generation of the document list.\n- - Retrieve: Content Manager OnDemand searches for annotations when the user selects a document for display. This option is the default and preferred option.\n- - Note: Content Manager OnDemand searches for annotations when the user selects the **note** command when the user views a displayed document.\n\nAs a preferred practice, set the annotation parameter in the application group advanced settings to \"Yes\". In this case, an annotation flag is set in the database when a user adds an annotation to a document. 
When the document hit list is displayed, a note icon is displayed next to the documents for which an annotation exists.\n\n# **Full Report Browse**\n\nIn the Permissions tab of the folder definition window (Figure 3-8 on page 56), the Full Report Browse option allows a user of the Content Manager OnDemand Windows Client to select a document, retrieve that document, and view the entire report to which the document belongs.", - "page_start": 78, - "page_end": 78, - "source_file": "sg246915.pdf" - }, - { - "text": "# Get help with Word\n\n| Q Add watermark | |\n| --- | --- |\n| 区 | Watermark |\n| 물 | Insert Picture |\n| E | Insert Rows Above |\n| E | Add a Blank Page |\n| 電 | Insert Rows Below |\n| 2 | Get Help on \"Add watermark\" |\n| 0 | Smart Lookup on \"Add water ... |\n\nThe Tell me search box takes you straight to commands and Help in Word.\n\n#### Try it: Get help:\n\n- 1. Go to Tell me what you want to do at the top of the window.\n- 2. Type what you want to do.\n\nFor example, type:\n\n- Add watermark to quickly get to the watermark command.\n- Help to go to Word help.\n- Training to see the list of Word training courses.\n- What's new for a list of the most recent updates to Word\n\n### Let us know what you think\n\nPlease give us feedback on this template, so we can provide content that's truly useful and helpful. Thanks!", - "page_start": 7, - "page_end": 7, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## Find whatever you need\n\nType a keyword or phrase into the **Search** box to quickly find the Word features and ribbon commands you're looking for, to discover **Help** content, or to get more information online.\n\n| print × |\n| --- |\n| Actions |\n| F Print |\n| Print Preview and Print |\n| इ Preview and Print |\n| 트 Print Layout |\n| Get Help on |\n| \"print\" |\n| 10 results |\n| Definition |\n| print [print] |\n| verb. 
produce (books, newspapers, maqazines, etc.), especially |\n| in large quantities, by a mechanical process involving the trans ... |\n| Find in Document |\n| \"print\" \"print\" 0 results |\n| More search results for \"print\" |\n\n# Share your work with others\n\nTo invite others to view or edit your documents, select the **Share** button in the top right corner of the app window. Then, you can choose to share a link to your document or send invitations directly to specific people. If someone doesn't have Word, they can use the free Word for the Web app to edit and comment.\n\n| Comments | | & Share | |\n| --- | --- | --- | --- |\n| | | | × |\n| Send link | | | |\n| CE Annual Report.docx | | | |\n| Anyone with the link can edit > | | | |\n| lex Wilber × | Add another | | |\n| Message ... | | | |\n| | | Send | |\n| oov link Outlook | | | |\n| Send a copy V | | | |\n\n### Get other Quick Start guides\n\nTo download our free Quick Start Guides for your other favorite apps, go to **https://go.microsoft.com/fwlink/?linkid=2008317.**\n\n## Next steps with Word\n\n#### **See what's new in Office**\n\nExplore the new and improved features in Word and the other Office apps. Visit **https://go.microsoft.com/fwlink/?linkid=871117** for more information.\n\n#### **Get free training, tutorials, and videos for Office**\n\nReady to dig deeper into the capabilities that Word has to offer? Visit **https://go.microsoft.com/fwlink/?linkid=871123** to explore our free training options.\n\n#### **Send us your feedback**\n\nLove Word? Got an idea for improvement to share with us? On the **File** menu, select **Feedback** and then follow the prompts to send your suggestions directly to the Word product team. Thank you!", - "page_start": 3, - "page_end": 3, - "source_file": "Word QS.pdf" - }, - { - "text": "# **II.5.4. Validity and date of e-documents**\n\n- 1. 
The parties agree that any e-document, including related attachments exchanged via *e-PRIOR*:\n\t- (a) is considered as equivalent to a paper document;\n\t- (b) is deemed to be the original of the document;\n\t- (c) is legally binding on the parties once an *e-PRIOR* authorised person has performed the 'sign' action in *e-PRIOR* and has full legal effect; and\n\t- (d) constitutes evidence of the information contained in it and is admissible as evidence in judicial proceedings.\n- 2. The parties expressly waive any rights to contest the validity of such a document solely on the grounds that communications between the parties occurred through *e-PRIOR* or that the document has been signed through *e-PRIOR*. If a direct connection is established between the parties' *back offices* to allow electronic transfer of documents, the parties agree that an e-document, sent as mentioned in the *interface control document*, qualifies as an *EDI message*.\n- 3. If the e-document is dispatched through the *supplier portal*, it is deemed to have been legally issued or sent when the contractor (or leader in the case of a joint tender) is able to successfully submit the e-document without any error messages. The generated PDF and XML document for the e-document are considered as a proof of receipt by the contracting authority.\n- 4. In the event that an e-document is dispatched using a direct connection established between the parties' *back offices*, the e-document is deemed to have been legally issued or sent when its status is 'received' as defined in the *interface control document*.\n- 5. When using the *supplier portal*, the contractor (or leader in the case of a joint tender) can download the PDF or XML message for each e-document for one year after submission. After this period, copies of the e-documents are no longer available for automatic download from the *supplier portal*.\n\n# **II.5.5. 
Authorised persons in e-PRIOR**\n\nThe contractor submits a request for each person who needs to be assigned the role of 'user' in *e-PRIOR*. These persons are identified by means of the European Communication Authentication Service (ECAS) and authorised to access and perform actions in *e-PRIOR* within the permissions of the user roles that the contracting authority has assigned to them.\n\nUser roles enabling these *e-PRIOR* authorised persons to sign legally binding documents such as specific tenders or specific contracts are granted only upon submission of supporting documents proving that the authorised person is empowered to act as a legal representative of the contractor.\n\n# **II.6. Liability**\n\n- **II.6.1** The contracting authority is not liable for any damage or loss caused by the contractor, including any damage or loss to third parties during or as a consequence of *implementation of the FWC*.\n- **II.6.2** If required by the relevant applicable legislation, the contractor must take out an insurance policy against risks and damage or loss relating to the *implementation of the FWC*. It must also take out supplementary insurance as reasonably required by standard practice in the industry. Upon request, the contractor must provide evidence of insurance coverage to the contracting authority.", - "page_start": 17, - "page_end": 17, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- modifying the content, the dimensions;\n- making technical changes to the content (necessary correction of technical errors), adding new parts or functionalities, changing functionalities, providing third parties with additional information concerning the *result* (e.g. 
source code) with a view to making modifications;\n- addition of new elements, paragraphs, titles, leads, bolds, legend, table of content, summary, graphics, subtitles, sound;\n- addition of metadata, for text and data-mining purposes; addition of rightmanagement information; addition of technological protection measures;\n- preparation in audio form, preparation as a presentation, animation, pictograms story, slide-show, public presentation;\n- extracting a part or dividing into parts;\n- incorporating, including by cropping and cutting, the *results* or parts thereof in other works, such as on websites and webpages;\n- translating, inserting subtitles, dubbing in different language versions:\n\t- English, French, German;\n\t- all official languages of EU;\n\t- languages used within EU;\n\n-\n\n- languages of candidate countries;\n(f)rights to authorise or license the modes of exploitation set out in any of the points (a) to (e) to third parties, provided however that this does not apply to *pre-existing rights* and *pre-existing materials*, if they are only licensed to the Agency, except as foreseen by Article II.13.2.;\n\n(g) other adaptations which the parties may later agree; in such case, the following rules apply: the contracting authority must consult the contractor. If necessary, the contractor must in turn seek the agreement of any *creator* or other right holder and must reply to the contracting authority within one month by providing its agreement, including any suggestions of modifications, free of charge. 
The contractor may refuse the intended modification only if a *creator* can demonstrate that the intended modification may harm his/her honour or reputation, thereby violating his/her moral rights.\n\nThe modes of exploitation may be defined in more details in the specific contract.\n\nThe list above is in addition to whatever rights already accrue to the contracting authority on the basis of existing exceptions in the applicable legislation, such as the copyright exception to ensure the proper performance or reporting of administrative proceedings, in cases where such exceptions apply.\n\n# **I.10.2. Licence or transfer of pre-existing rights**\n\nAll *pre-existing rights* incorporated in the *results*, if any, are licensed to the Agency as set out in Article II.13.2.", - "page_start": 9, - "page_end": 9, - "source_file": "EN-Draft FWC for services 0142.pdf" - } - ] - }, - { - "references": { - "source_file": "Word QS.pdf", - "query": "Where can I find other Microsoft quick start guides?", - "target_page": 4, - "target_passage": "To download our free Quick Start Guides for your other favorite apps, go to https://go.microsoft.com/fwlink/?linkid=2008317.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "# Quick Start Guide\n\nNew to Word? Use this guide to learn the basics.", - "page_start": 0, - "page_end": 0, - "source_file": "Word QS.pdf" - }, - { - "text": "# **Welcome to Microsoft Teams**\n\nMicrosoft Teams is the app that brings your conversations, meetings, and files together in one place. This guide will help you get started with Teams, learn the basics, get tips to practice on your own, and discover ways to engage your team.\n\n**Download** the app for desktop and mobile to access Teams with the best performance anywhere you go.\n\n**Hit the ground running now!** Build confidence by trying things on your own. 
Go to the meet now button (at the top right corner on the Calendar tab) to play around and test all the meetings functionalities before you're in the spotlight!", - "page_start": 0, - "page_end": 0, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "## Find whatever you need\n\nType a keyword or phrase into the **Search** box to quickly find the Word features and ribbon commands you're looking for, to discover **Help** content, or to get more information online.\n\n| print × |\n| --- |\n| Actions |\n| F Print |\n| Print Preview and Print |\n| इ Preview and Print |\n| 트 Print Layout |\n| Get Help on |\n| \"print\" |\n| 10 results |\n| Definition |\n| print [print] |\n| verb. produce (books, newspapers, maqazines, etc.), especially |\n| in large quantities, by a mechanical process involving the trans ... |\n| Find in Document |\n| \"print\" \"print\" 0 results |\n| More search results for \"print\" |\n\n# Share your work with others\n\nTo invite others to view or edit your documents, select the **Share** button in the top right corner of the app window. Then, you can choose to share a link to your document or send invitations directly to specific people. If someone doesn't have Word, they can use the free Word for the Web app to edit and comment.\n\n| Comments | | & Share | |\n| --- | --- | --- | --- |\n| | | | × |\n| Send link | | | |\n| CE Annual Report.docx | | | |\n| Anyone with the link can edit > | | | |\n| lex Wilber × | Add another | | |\n| Message ... | | | |\n| | | Send | |\n| oov link Outlook | | | |\n| Send a copy V | | | |\n\n### Get other Quick Start guides\n\nTo download our free Quick Start Guides for your other favorite apps, go to **https://go.microsoft.com/fwlink/?linkid=2008317.**\n\n## Next steps with Word\n\n#### **See what's new in Office**\n\nExplore the new and improved features in Word and the other Office apps. 
Visit **https://go.microsoft.com/fwlink/?linkid=871117** for more information.\n\n#### **Get free training, tutorials, and videos for Office**\n\nReady to dig deeper into the capabilities that Word has to offer? Visit **https://go.microsoft.com/fwlink/?linkid=871123** to explore our free training options.\n\n#### **Send us your feedback**\n\nLove Word? Got an idea for improvement to share with us? On the **File** menu, select **Feedback** and then follow the prompts to send your suggestions directly to the Word product team. Thank you!", - "page_start": 3, - "page_end": 3, - "source_file": "Word QS.pdf" - }, - { - "text": "# **Next Steps**\n\nYou will **get the most out of Teams** when you get to truly connect with your team and collaborate together. Keep practicing until each step of your workflow feels natural.\n\n| Test meetings | | |\n| --- | --- | --- |\n| 1. | Use the Meet now button in the | Calendar tab |\n| Then select \"Start meeting\" | 2. | |\n| 3. | And then \"Join now\" | |\n| Here you can try to share your screen, | start a whiteboard or even record | |\n| yourself while you are practicing a | presentation. This is your safe space | |\n| to test everything out! | | |\n\n# **Share knowledge**\n\nTeamwork is all about collaboration! **Share with your team best practices** you learn along the way, tips and tricks for how you can best organize your workflows and ask for their own advice to define how you can best use Teams together.\n\n# **Keep learning**\n\nNo matter how you like to learn and practice, we've got resources to support and inspire you:\n\n- Virtual classes: We have instructors to answer your questions and walk you through all the details. 
•\n- Training series: Complete the beginner series of videos at your own pace.\n- Support articles and step-by-step guides: To get answers to your most common questions.\n- Feature overviews, tutorials, and announcements: Our YouTube channel has carefully curated content to get you excited and show how you can use Teams effortlessly.", - "page_start": 5, - "page_end": 5, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "# **UNDERSTANDING QUICK ANALYSIS**\n\nThe *Quick Analysis* tools were developed in response to the fact that users weren't using or even aware of the more powerful analytical tools found in Excel. So Excel decided to combine\n\n*Live Preview* with some of these tools to create the *Quick Analysis* tools.\n\n### **The Quick Analysis Button**\n\nThe *Quick Analysis* button appears when a range is selected in a worksheet. Clicking on the button displays the *Quick Analysis* gallery which contains quick analysis tools that can be applied to the selected data.\n\nThe tools have been organised along tabs at the top – *FORMATTING*, *CHARTS*, *TOTALS*, *TABLES*, and *SPARKLINES*.\n\nWhen you click on a tab, options specific to that tab are presented.\n\n### **Using Quick Analysis Tools With Live Preview**\n\nMost of the *Quick Analysis* tools in the *Quick Analysis* gallery provide a Live Preview of the changes in the worksheet when you point to an option.\n\nThis is very useful if you are not sure of the formatting or type of analysis you require as it provides you with a preview of what the data would look like if you selected that specific option.\n\nAt the right we have selected only the totals from the worksheet shown above. 
We have pointed to options from the *TOTALS* tab (*% Total* and *Average*) and from the *FORMATTING* tab (*Data Bars*).\n\nLive Preview has either presented another row of analysed data or has formatted the selection accordingly.\n\nAll of these tools are also available on the ribbon but using the *Quick Analysis* tools is much quicker.", - "page_start": 35, - "page_end": 35, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **QUICK CHARTING**\n\nCharts aren't all that difficult to create in Excel, especially with the *Recommended Charts* feature. However, deciding what style and type of chart can be daunting. Fortunately the *Charts*\n\ntools provide a way of seeing what the different charts will look like without having to first create the chart.\n\n### **For Your Reference…**\n\n### To *use* the *Quick Charting tools*:\n\n- 1. Select the range to be charted, then click on the *Quick Analysis* button\n- 2. Choose the desired option from the *CHARTS* tab\n\n### **Handy to Know…**\n\n- When creating a chart you'll need to ensure that the range you select includes the labels to be used on the chart.", - "page_start": 37, - "page_end": 37, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "### **QUICK FORMATTING**\n\nThe first tab in the *Quick Analysis* gallery is *FORMATTING*. This tab provides access to the conditional formatting tools of Excel. These are the tools that allow you to analyse data by\n\ncolouring it or presenting it in a slightly different way. In the *Quick Analysis* gallery you can apply data bars, colour high and low values, values over or below a value, and more.\n\n### **For Your Reference…**\n\n### To *apply Quick Formatting* in a *worksheet*:\n\n- 1. Select the range to be formatted, then click on the *Quick Analysis* button\n- 2. 
Choose the desired formatting from the *FORMATTING* tab\n\n### **Handy to Know…**\n\n- *Quick Formatting* applies conditional formatting, not the standard formatting.\n- The *Clear Format* option in the *Quick Analysis* gallery will clear any conditional formatting that has been applied.", - "page_start": 36, - "page_end": 36, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## Create something\n\nBegin with a **Blank document** to get right to work. Or start with a template to save yourself time and steps. Just select **File** > **New**, and then select or search for the template you want.\n\n| | New |\n| --- | --- |\n| (n) Home | |\n| New | |\n| Open | |\n| Info | |\n| Save a Copy | |\n| Save as Adobe PDF | Blank document |\n| Print | |\n| Share | Search for online templates Q |\n| Export | Suggested searches Business Cards Flyers Letters Education Resumes and Cover Letters Holiday |\n| Transform | Aa NAME |\n| Clase | Take a tour |\n\n### Access files anywhere\n\nNeed to work on the go and across different devices? Click **File** > **Account** to sign in with your Microsoft account and access your recently used files anywhere, on any device, through seamless integration between Office, OneDrive, OneDrive for Business, and SharePoint.\n\n#### Find recent files\n\nWhether you only work with files stored on your PC's local hard drive or you store files in multiple shared locations, selecting **File** > **Open** takes you to your recently used documents and any files that you may have pinned to your list.\n\n| € | Open | | | | |\n| --- | --- | --- | --- | --- | --- |\n| (2 Home | | | | | |\n| D New | L Recent | | 0 Search | | |\n| | | | Documents Folders | | |\n| Open | 08 | Shared with Me | | | |\n| | Contass | | 13 Name | | Date modified |\n| Info | | OneDrive - Contoso | Pinned | Pin files you want to easily find later. Click the pin icon that appears when you hover over a file. 
| |\n| Save a Copy | | MeganB@contoso.com | | | |\n| | | | Today | | |\n| Save as Adobe PCC | | Sites - Contoso MeganB@contoso.com | 四元 Connector - Elbow.doco Desktop | | 11/4/2021 3:01 AM |\n| Print | | | | | |\n| Share | This PC | | CE Annual Report.docx W OneDrive - Contoso | | 11/4/2021 2:48 AM |\n| | Add a Place | | | | |\n| Export | | | Older | | |\n| Transform | Browse | | Document (8).doco W | | 10/S/2021 4:48 PM |\n| | | | OneOrive - Contaso | | |\n| Close | | | 8 | Voice Capture Document.docx | 10/5/2021 4:37 PM |\n| | | | OneOrive - Contoso | | |\n| | | | W | Manufacturing and delivery plan.docx Mark 8 Project Team > Research and Development | 9/16/2021 8:28 AM |\n\n### Discover related options\n\nWhen you select objects in your document, options related to your selection will appear. For example, selecting a table displays the **Table Design** and **Layout** tabs, which offer additional options.\n\n| Review | View | Help | Acrobat | Table Design | | Layout | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | | | | | | | 1/2 pt | |\n| | | | | | Shading | Border | | Borders Border |\n| | | | | | | | Styles × | Painter |\n| Table Styles | | | | | | | Borders | 7 |", - "page_start": 1, - "page_end": 1, - "source_file": "Word QS.pdf" - }, - { - "text": "The ODWEK Java API provides line-of-business operations. For more information, see IBM Content Manager OnDemand Web Enablement Kit Java APIs: The Basics and Beyond, SG24-7646.\n\n# **8.1.2 Client infrastructure options**\n\nSeveral basic architectural options, Windows client, Content Navigator, or API-based client integration into your line-of-business application, are available.\n\n# **Windows client**\n\nConsider the following items when you are planning a Windows client infrastructure:\n\n- -It is faster than the web clients and more powerful.\n- - It requires native installation on each user's workstation or notebook. 
Server version upgrades might also require a new client installation.\n- -This client supports Citrix and Terminal services environments.\n- - It does not support the Transforms interface for transforming and converting data formats because the data formats are provided by ODWEK only.\n\n# **Content Navigator**\n\nWhen you choose a ready-for-use web client, consider the IBM strategic client, IBM Content Navigator, because it is the most complete, most recent web client.\n\nSpecial use cases might require the development of a custom client application for Content Manager OnDemand. For more information about development APIs, see 8.3, \"Client API overview\" on page 202.\n\nWith Content Navigator, you can run a cross-repository search to search for content across multiple types of repositories, including Content Manager OnDemand. For example, Content Manager OnDemand search results can be included in the same hit list as search results from other supported repositories to help provide a comprehensive view of content.\n\nWhen you create a cross-repository search, you can specify the following information:\n\n- - Specify the scope of the search on each repository. You can specify the search or the classes that you want to include in the cross-repository search by using IBM Content Manager OnDemand. 
On IBM FileNet Content Manager and IBM Content Manager, you also can limit the search to a specific folder.\n- -Specify how properties from each repository are related to each other.\n- -Specify any default search criteria that you want displayed when users open the search.\n\nFor more information about how to configure a cross-repository search, see the IBM Content Navigator Knowledge Center at the following web address:\n\nhttp://www.ibm.com/support/knowledgecenter/SSEUEX_2.0.3/contentnavigator_2.0.3.htm", - "page_start": 213, - "page_end": 213, - "source_file": "sg246915.pdf" - }, - { - "text": "# **Document history for the Serverless Developer Guide**\n\nThe following table describes notable releases to the Serverless Developer Guide.\n\n| Change | Description | Date |\n| --- | --- | --- |\n| Minor revisions | Updated links to Serverless | August 28, 2023 |\n| | Patterns workshop (now with | |\n| | idempotence!). Fixed various | |\n| | links to additional resources. | |\n| Workshop connections | Added links to the related | April 12, 2023 |\n| | serverless workshop for | |\n| | hands-on experience. | |\n| Initial release | Initial release of the Serverles | February 19, 2023 |\n| | s Developer Guide! | |", - "page_start": 90, - "page_end": 90, - "source_file": "serverless-core.pdf" - } - ] - }, - { - "references": { - "source_file": "Word QS.pdf", - "query": "How to connect to my Microsoft account from Word?", - "target_page": 2, - "target_passage": " Click File > Account to sign in with your Microsoft account", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# Share and collaborate\n\nWith this document saved in OneDrive, you can share it with others. They don't even need Word to open it.\n\nTry it: Select Share, and send a link to this document. (keyboard shortcut – Alt+F+Z or Alt+Z+S)\n\nYou can send the link by typing someone's email address or by copying the link and pasting it into a message or chat. 
If you want them to read the document but not edit it, set their permission to view-only.\n\nIf they don't have Word, the document will open in their web browser, in Word Online.\n\n# Add visuals with pictures from the web\n\nWord works with Bing to give you access to thousands of pictures you can use in your documents.\n\nTry it: Hit enter after this line to make a blank line:\n\n- 1. With your cursor in the blank space above, go to the Insert tab, select Online Pictures, and then search for something, like puppy clip art.\n- 2. Select the picture you want, and select Insert.", - "page_start": 2, - "page_end": 2, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "you will be prompted to create a user ID (your email address) and a password. Once you do that you should have a fresh Web Protégé workspace. Figure 12.1 shows what my Web Protégé workspace currently looks like. Most of the projects are owned by me although note that the CODO project is owned by my colleague Biswanath Dutta. However, I still have complete access to that ontology due to the way Biswanath has configured my access as being able to both view and edit the ontology.\n\nTo upload the Pizza ontology, select the large Create New Project button. This will bring up the window shown in figure 12.2. Fill out the project name and description, then select the Choose File button and navigate to where you have the latest version of the Pizza tutorial with data. Note that in the figure I have already done this navigation so there is a value for the file to load. You can leave the Language field blank. Once you have all the fields set up similar to figure 12.2 click the Create New Project button on this dialog (note this is a different button than the one you started from).\n\n| Create New Project | |\n| --- | --- |\n| Project name | |\n| Pizza With Data | |\n| Language | |\n| empty for no language tag. 
| Enter a language tag for labelling new entities and to use as the primary display language. Leave |\n| Description | |\n| The Pizza tutorial ontology with data. | |\n| Create from existing sources | |\n| Choose File PizzaTutori... thDataV2.owl | |\n\nFigure 12.2 The Create New Project Dialog\n\nYour workspace should now include your first project. Click on the three horizontal bars at the far right of the project. This should bring up a pop-up menu. Select the Open option. This should bring you into the main Web Protégé UI to browse an ontology.\n\nBefore you make changes to the ontology you need to make sure the settings for new entities and rendering are consistent with the settings you used for the Pizza ontology. The default in Web Protégé as with Protégé is to use Auto-Generated UUIDs rather than user supplied names. If you aren't sure about these settings you can go back to exercise 2 at the beginning of chapter 4 and chapter 7 to refresh your memory. There are excellent reasons to use auto-generated UUIDs but for beginners, especially for those who want to learn SPARQL, I think they make learning the basics more difficult so we have been using the alternative of user supplied names. At the top of the Web Protégé UI in the right corner there are", - "page_start": 84, - "page_end": 84, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "If you are creating a new account, you will create a root account using an email address. The **root** account has *unrestricted access*, similar to root accounts for an operating system. As a best practice, you should create an administrative user too.\n\n#### **Granting administrative access to a user**\n\nAs you might guess, granting administrative access to a user is still rather far reaching. An account with administrative level privileges will make getting started easier. 
For systems in production, follow the principle of least-privilege — granting only the minimum access necessary to accomplish tasks.\n\n- For a step-by-step guide to account types and login management, see Signing in to the AWS Management Console.\n- AWS Identity and Access Management (IAM) is the service to manage entities and resources authorized to use services and service resources.\n\n### **Sign up for an AWS account**\n\nIf you do not have an AWS account, complete the following steps to create one.\n\n#### **To sign up for an AWS account**\n\n- 1. Open https://portal.aws.amazon.com/billing/signup.\n- 2. Follow the online instructions.\n\nPart of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.\n\nWhen you sign up for an AWS account, an *AWS account root user* is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.\n\nAWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing **My Account**.", - "page_start": 13, - "page_end": 13, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Count on Word to count your words\n\nTry it: Hit return after this line and type some words.\n\nThe status bar at the bottom of the window keeps a running count of the number of words in the document.\n\n### Save this for later, access it anywhere\n\nWhen you save this document in OneDrive, you'll be able to open it anywhere: on your computer, tablet, or phone. Your changes will be saved automatically.\n\n| Save As | Info | | |\n| --- | --- | --- | --- |\n| New | 1 = OneDrive - Contoso | | |\n| Recent | Open | Enter file name here | |\n| Word Document (*. docx) | Contoso | Save | More options ... 
|\n| OneDrive - Contoso | Save As | IrvinS@Contoso.com | |\n| Name ↑ | Print | Sites - Contoso | |\n| Share | Attachments | IrvinS@Contoso.com | |\n| Personal | Export | Forms | |\n| OneDrive - Personal | Close | My Stuff | irvinsayers 1@outlook.com |\n\nTry it: Select File > Save As, and then select OneDrive and give this document a name.\n\nIf you sign in to Office 365 on another device, this document will be in your list of recent files. You can pick up where you left off… even if you left the document open on the computer you're using now.", - "page_start": 1, - "page_end": 1, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "**Note:** Make sure that your PC or notebook has a network route to the system IP address that you specified. In particular, you can access the management GUI from any management console that is connected to the same subnet as the system. Enter the system IP address on a supported browser to access the management GUI.\n\n# **4.3 System setup**\n\nThis section provides instructions about how to define the basic settings of the system with the system setup wizard, and how to add nodes and optional expansion enclosures.\n\n# **4.3.1 System setup wizard**\n\nWhether you are redirected from your PC or notebook after completing system initialization or you browse to the management IP address manually, you must complete the system setup wizard to define the basic settings of the system.\n\n**Note:** The first time that you connect to the management GUI, you are prompted to accept untrusted certificates because the system certificates are self-signed.\n\nYou can install certificates that are signed by a trusted certificate authority after you complete system setup. 
For more information about how to perform this task, see 4.5, \"Configuring secure communications\" on page 117.", - "page_start": 113, - "page_end": 113, - "source_file": "sg247938.pdf" - }, - { - "text": "## Find whatever you need\n\nType a keyword or phrase into the **Search** box to quickly find the Word features and ribbon commands you're looking for, to discover **Help** content, or to get more information online.\n\n| print × |\n| --- |\n| Actions |\n| F Print |\n| Print Preview and Print |\n| इ Preview and Print |\n| 트 Print Layout |\n| Get Help on |\n| \"print\" |\n| 10 results |\n| Definition |\n| print [print] |\n| verb. produce (books, newspapers, maqazines, etc.), especially |\n| in large quantities, by a mechanical process involving the trans ... |\n| Find in Document |\n| \"print\" \"print\" 0 results |\n| More search results for \"print\" |\n\n# Share your work with others\n\nTo invite others to view or edit your documents, select the **Share** button in the top right corner of the app window. Then, you can choose to share a link to your document or send invitations directly to specific people. If someone doesn't have Word, they can use the free Word for the Web app to edit and comment.\n\n| Comments | | & Share | |\n| --- | --- | --- | --- |\n| | | | × |\n| Send link | | | |\n| CE Annual Report.docx | | | |\n| Anyone with the link can edit > | | | |\n| lex Wilber × | Add another | | |\n| Message ... | | | |\n| | | Send | |\n| oov link Outlook | | | |\n| Send a copy V | | | |\n\n### Get other Quick Start guides\n\nTo download our free Quick Start Guides for your other favorite apps, go to **https://go.microsoft.com/fwlink/?linkid=2008317.**\n\n## Next steps with Word\n\n#### **See what's new in Office**\n\nExplore the new and improved features in Word and the other Office apps. 
Visit **https://go.microsoft.com/fwlink/?linkid=871117** for more information.\n\n#### **Get free training, tutorials, and videos for Office**\n\nReady to dig deeper into the capabilities that Word has to offer? Visit **https://go.microsoft.com/fwlink/?linkid=871123** to explore our free training options.\n\n#### **Send us your feedback**\n\nLove Word? Got an idea for improvement to share with us? On the **File** menu, select **Feedback** and then follow the prompts to send your suggestions directly to the Word product team. Thank you!", - "page_start": 3, - "page_end": 3, - "source_file": "Word QS.pdf" - }, - { - "text": "#### **Up button:**\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n#### **Button down:**\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n#### **Charging instructions:**\n\nWireless charging, as shown in the picture below.\n\n#### **1.1 Shortcut function:**\n\n1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n\n2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, brightness adjustment and other functions.", - "page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - }, - { - "text": "AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing **My Account**.\n\n#### **Create a user with administrative access**\n\nAfter you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.\n\n#### **Secure your AWS account root user**\n\n- 1. 
Sign in to the AWS Management Console as the account owner by choosing **Root user** and entering your AWS account email address. On the next page, enter your password.\nFor help signing in by using root user, see Signing in as the root user in the *AWS Sign-In User Guide*.\n\n- 2. Turn on multi-factor authentication (MFA) for your root user.\nFor instructions, see Enable a virtual MFA device for your AWS account root user (console) in the *IAM User Guide*.\n\n#### **Create a user with administrative access**\n\n- 1. Enable IAM Identity Center.\nFor instructions, see Enabling AWS IAM Identity Center in the *AWS IAM Identity Center User Guide*.\n\n- 2. In IAM Identity Center, grant administrative access to a user.\nFor a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the *AWS IAM Identity Center User Guide*.\n\n#### **Sign in as the user with administrative access**\n\n- To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.", - "page_start": 41, - "page_end": 41, - "source_file": "serverless-core.pdf" - }, - { - "text": "### **Bookmarks**\n\nBookmarks are included in the PDF for headings or Word bookmarks depending on the option selected.\n\n## **Availability**\n\nThe information in this article is applicable to the following versions of Word.\n\n- Word for Windows Version 2408 and later.\n- Word for Mac Version 16.89 and later.\n- Word for iOS Version 2.89 and later.\n- Word for Android Build 16.0.18025.XXXXX or later.\n- Word for the web Build 16.0.18025.XXXXX or later.\n\nIt is available to customers with Office 2024 or Office LTSC 2024 and to customers with a Microsoft 365 subscription on Current Channel or Monthly Enterprise Channel. 
For customers with a Microsoft 365 subscription on Semi-Annual Enterprise Channel it will be available on January 14, 2025.", - "page_start": 60, - "page_end": 60, - "source_file": "office-pdf.pdf" - }, - { - "text": "### **Step 3: The Graph Tab**\n\nClick on the graph tab in order to display the corresponding graph.\n\n- 1. Selection of the sheet name\n- 2. Button to go back to the \"Grid\" view\n- 3. Selection of a range of data records\n- 4. Search box\n- 5. Filters button to open the filters form\n- 6. Fields button to open the fields box\n- 7. Select box to select the graph type\n- 8. Select box to select the group column (Axis 1)\n- 9. Select box to select the series A (Axis 2)\n- 10. Button to add series", - "page_start": 44, - "page_end": 44, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HRL_2004.pdf", - "query": "What are the products of Hormel Foods Corporation?", - "target_page": 4, - "target_passage": "meat and other food product", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "markets its turkey products through its own sales force and independent brokers.\n\nThe acquisitions of Diamond Crystal Brands Nutritional Products in fiscal 2001 and the Century Foods International business in July of fiscal 2003 strengthened the Company's presence in the nutritional food products and supplements market. The Company currently operates as one of the largest companies providing nutritional products to the U.S. healthcare industry.\n\nThe Company acquired the Diamond Crystal Brands business from Imperial Sugar Co. in December of fiscal 2003. 
Diamond Crystal Brands packages and sells various sugar, sugar substitute, salt and pepper products, savory products, drink mixes and dessert mixes to retail and foodservice customers.\n\nInternationally, the Company markets its products through Hormel Foods International Corporation (HFIC), a wholly owned subsidiary. HFIC has a presence in the international marketplace through joint ventures and placement of personnel in strategic foreign locations such as China, Spain, and the Philippines. HFIC also has a global presence with minority positions in food companies in Spain (Campofrio Alimentacion S.A., 15% holding) and the Philippines (Purefoods-Hormel, 40% holding).\n\nThe Company has not been involved in any bankruptcy, receivership or similar proceedings during its history. Substantially all of the assets of the Company have been acquired in the ordinary course of business.\n\nThe Company had no significant change in the type of products produced or services rendered, nor in the markets or methods of distribution since the beginning of the fiscal year.\n\n## **(b)** *Industry Segment*\n\nThe Company's business is reported in five segments: Grocery Products, Refrigerated Foods, Jennie-O Turkey Store, Specialty Foods, and All Other. The contributions of each segment to net sales to unaffiliated customers and operating profit, and the presentation of certain other financial information by segment are reported in Note K of the Notes to Consolidated Financial Statements and in the Management's Discussion and Analysis of the Annual Stockholder's Report for the year ended October 25, 2003, incorporated herein by reference.\n\n#### **(c)** *Description of Business*\n\n## **Products and Distribution**\n\nThe Company's products primarily consist of meat and other food products. The meat products are sold fresh, frozen, cured, smoked, cooked and canned. 
The percentages of total revenues contributed by classes of similar products for the last three fiscal years of the Company are as follows:\n\n| Perishable meat | 50.3% | 53.0% | 54.7% |\n| --- | --- | --- | --- |\n| Nonperishable meat | 18.9 | 19.8 | 21.0 |\n| Poultry | 22.1 | 22.6 | 20.3 |\n| Other | 8.7 | 4.6 | 4.0 |\n| | 100.0% | 100.0% | 100.0% |\n\nReporting of revenues from external customers is based on similarity of products, as the same or similar products are sold across multiple distribution channels such as retail, foodservice or international. Revenues reported are based on financial information used to produce the Company's generalpurpose financial statements.\n\nPerishable meat includes fresh meats, sausages, hams, wieners and bacon (excluding JOTS products.) Nonperishable meat includes canned luncheon meats, shelf stable microwaveable entrees, stews, chilies, hash, meat spreads and other items that do not require refrigeration as well as frozen processed products. The Poultry category is composed primarily of JOTS products. The Other category primarily consists of nutritional food products and supplements, sugar and sugar substitutes, salt and pepper products, dessert mixes, food packaging (casings for dry sausage), and industrial gelatin products. The Other category has increased over the past two years primarily due to the following acquisitions: Century Foods International (July 2003), Diamond Crystal Brands (December 2002), and Diamond Crystal Brands Nutritional Products (April 2001).\n\nNo new product in fiscal 2003 required a material investment of Company assets.\n\nDomestically, the Company sells its products in all 50 states. Hormel products are sold through Company sales personnel, operating in assigned territories coordinated from district sales offices located in most of the larger U.S. cities, as well as independent brokers and distributors. 
As of October 25, 2003, the Company had approximately 600 sales personnel engaged in selling its products. Distribution of products to customers is by common carrier.\n\nThrough HFIC, the Company markets its products in various locations throughout the world. Some of the larger markets include Australia, Canada, China, England, Japan, Mexico and Micronesia. The distribution of export sales to customers is by common carrier, while the China operations own and operate their own delivery system. The Company, through HFIC, has licensed companies to manufacture various Hormel products internationally on a royalty basis, with the primary licensees being Tulip International of Denmark and CJ Corp. of South Korea.\n\n### **Raw Materials**\n\nThe Company has, for the past several years, been concentrating on processed branded products for consumers with year-round demand to minimize the seasonal variation experienced with commodity type products. Pork continues to be the primary raw material for Company products. Although hog producers are moving toward larger, more efficient year-round confinement operations and supply contracts are becoming increasingly prevalent in the industry, there is still a seasonal variation in the supply of fresh pork materials. The Company's expanding line of processed items has reduced but not eliminated the sensitivity of Company results to raw material supply and price fluctuations.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "| 3.2(1) | Bylaws as amended to date. (Incorporated by reference to Exhibit 3.2 to Hormel's Amendment No. 3 to Registration Statement on |\n| --- | --- |\n| | Form S-4, dated November 29, 2001, File No. 333-68498.) |\n| 4.1(1) | Indenture dated as of June 1, 2001, between Hormel and U.S. Bank Trust National Association, as Trustee relating to certain |\n| | outstanding debt securities. 
(Incorporated by reference to Exhibit 4.1 to Hormel's Registration Statement on Form S-4 dated |\n| | August 28, 2001, File No. 333-68498.) |\n| 4.2(1) | Supplemental Indenture No. 1 dated as of June 4, 2001, to Indenture dated as of June 1, 2001, between Hormel and U.S. Bank |\n| | Trust National Association, as Trustee, relating to certain outstanding debt securities. (Incorporated by reference to Exhibit 4.2 to |\n| | Hormel's Registration Statement on Form S-4 dated August 28, 2001, File No. 333-68498.) |\n| 4.3(1) | Letter of Representations dated June 5, 2001, among Hormel, U.S. Bank Trust National Association, as Trustee, and The |\n| | Depository Trust Company relating to certain outstanding debt securities of Hormel. (Incorporated by reference to Exhibit 4.3 to |\n| | Hormel's Registration Statement on Form S-4 dated August 28, 2001, File No. 333-68498.) |\n| 4.4(1) | Pursuant to Item 601 (b)(4)(iii) of Regulation S-K, copies of instruments defining the rights of holders of certain long-term debt are |\n| | not filed. Hormel agrees to furnish copies thereof to the Securities and Exchange Commission upon request. |\n| 10.1(1) | U.S. $150,000,000 Credit Agreement, dated as of October 20, 2003, between Hormel, the banks identified on the signature pages |\n| | thereof, and Citicorp U.S.A. Inc., as Administrative Agent. (Incorporated by Reference to Exhibit 10.1 to Hormel's Current Report |\n| | on Form 8-K dated October 23, 2003.) |\n| 10.2(1)(3) | Hormel Foods Corporation Operators' Shares Incentive Compensation Plan. (Incorporated by Reference to Appendix A to |\n| | Hormel's definitive Proxy Statement filed on December 30, 1997, File No. 001-02402.) |\n| 10.3(1)(3) | Hormel Foods Corporation Supplemental Executive Retirement Plan (2002 Restatement.) (Incorporated by Reference to |\n| | Exhibit 10.3 to Hormel's Annual Report on Form 10-K for the fiscal year ended October 26, 2002, file No. 001-02402.) 
|\n| 10.4(1)(3) | Hormel Foods Corporation 2000 Stock Incentive Plan. (Incorporated by Reference to Exhibit A to Hormel's definitive Proxy |\n| | Statement filed on December 30, 1999, File No. 001-02402.) |\n\n(1) Document has previously been filed with the Securities and Exchange Commission and is incorporated herein by reference.\n\n(2) These Exhibits transmitted via EDGAR.\n\n(3) Management compensatory plan", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_HRL_2004.pdf" }, { "text": "# **Hormel Foods Annual Report 2004**\n\n## **Form 10-K (NYSE:HRL)**\n\nPublished: January 23rd, 2004\n\nPDF generated by stocklight.com", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_HRL_2004.pdf" }, { "text": "| Item 1. | BUSINESS |\n| --- | --- |\n| Item 2. | PROPERTIES |\n| Item 3. | LEGAL PROCEEDINGS |\n| Item 4. | SUBMISSION OF MATTERS TO A VOTE OF SECURITY HOLDERS |\n| PART II | |\n| Item 5. | MARKET FOR THE REGISTRANT'S COMMON STOCK AND RELATED STOCKHOLDER MATTERS |\n| Item 6. | SELECTED FINANCIAL DATA |\n| Item 7. | MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS |\n| Item 7A. | QUANTITATIVE AND QUALITATIVE DISCLOSURES ABOUT MARKET RISK |\n| Item 8. | FINANCIAL STATEMENTS AND SUPPLEMENTAL DATA |\n| Item 9. | CHANGES IN AND DISAGREEMENTS WITH ACCOUNTANTS ON ACCOUNTING AND FINANCIAL DISCLOSURE |\n| Item 9A. | CONTROLS AND PROCEDURES |\n| PART III | |\n| Item 10. | DIRECTORS AND EXECUTIVE OFFICERS OF THE REGISTRANT |\n| Item 11. | EXECUTIVE COMPENSATION |\n| Item 12. | SECURITY OWNERSHIP OF CERTAIN BENEFICIAL OWNERS AND MANAGEMENT AND RELATED STOCKHOLDER |\n| | MATTERS |\n| Item 13. | CERTAIN RELATIONSHIPS AND RELATED TRANSACTIONS |\n| Item 14. | PRINCIPAL ACCOUNTING FEES AND SERVICES |\n| PART IV | |\n| Item 15. 
| EXHIBITS, FINANCIAL STATEMENT SCHEDULES AND REPORTS ON FORM 8-K |\n| SIGNATURES | |\n\n#### **PART I**\n\n## **Item 1.** *BUSINESS*\n\n## **Available Information**\n\nThe Company makes available, free of charge on its website at *www.hormel.com*, its annual report on Form 10-K, quarterly reports on Form 10-Q, current reports on Form 8-K, and amendments to those reports filed or furnished pursuant to Section 13(a) or 15(d) of the Securities Exchange Act of 1934. These reports are accessible under the \"Investor\" caption of the Company's website and are available as soon as reasonably practicable after such material is electronically filed with or furnished to the Securities and Exchange Commission, which is within 24 hours.\n\nThe Company has adopted a Code of Ethical Business Conduct that covers its officers and directors, which is available on the Company's website, free of charge, under the caption \"Corporate.\" The Company also adopted Corporate Governance Guidelines, which are available on the Company's website, free of charge, under the caption \"Investor.\"\n\n#### **(a)** *General Development of Business*\n\nHormel Foods Corporation, a Delaware corporation, was founded by George A. Hormel in 1891 in Austin, Minnesota, as George A. Hormel & Company. The Company started as a processor of meat and food products and continues in this line of business. The Company name was changed to Hormel Foods Corporation on January 31, 1995. The Company is primarily engaged in the production of a variety of meat and food products and the marketing of those products throughout the United States. 
Although pork and turkey remain the major raw materials for Hormel products, the Company has emphasized for several years the manufacture and distribution of branded, consumer packaged items rather than the commodity fresh meat business.\n\nThe Company's branding strategy led to the development of a joint venture between Hormel Foods Corporation and Excel Corporation, a wholly owned subsidiary of Cargill Incorporated. This joint venture began marketing and selling nationally branded fresh case ready beef and pork under the existing HORMEL ALWAYS TENDER brand name in fiscal year 2003. This 50 percent owned joint venture, named Precept Foods LLC, is based in Austin, Minn.\n\nIn fiscal 2001, the Jennie-O Turkey Store (JOTS) business was formed as a result of merging the Company's existing Jennie-O Foods, Inc. business with the operations of The Turkey Store Company, which was acquired in the second quarter of fiscal 2001. The Turkey Store Company was a turkey processing business headquartered in Barron, Wisconsin. The merged JOTS operation is currently the largest turkey processor in the world. JOTS", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "Livestock slaughtered by the Company is purchased by Company buyers and commission dealers at sale barns and terminal markets or under long-term supply contracts at locations principally in Minnesota, Illinois, Iowa, Nebraska, Colorado and South Dakota. The cost of livestock and the utilization of the Company's facilities are affected by both the level and the methods of pork production in the United States. The hog production industry has been rapidly moving to very large, vertically integrated, year-round confinement operations operating under long-term supply agreements. 
This has resulted in fewer hogs being available on the spot cash market, which decreases the supply of hogs on the open market and can severely diminish the utilization of slaughter facilities and increase the cost of the raw materials they produce. The Company, along with others in the industry, uses long-term supply contracts to manage the effects of this trend and to assure a stable supply of raw materials while minimizing extreme fluctuations in costs over the long-term. This may result in costs for live hogs that are either higher or lower than the spot cash market depending on the relationship of the cash spot market to contract prices. Contract costs are fully reflected in the Company's reported financial results. In fiscal 2003, the Company purchased 79 percent of its hogs under long-term supply contracts.\n\nIn fiscal 2003, JOTS raised approximately 57 percent of the turkeys needed to meet its raw material requirements for whole bird and processed turkey products. Turkeys not sourced within the Company are contracted with independent turkey growers. JOTS' turkey-raising farms are located throughout Minnesota and Wisconsin. Production costs in raising turkeys are primarily subject to fluctuations in feed grain prices and to a lesser extent fuel costs.\n\n## **Manufacturing**\n\nThe Company has plants in Austin, Minnesota; Fremont, Nebraska; and Beijing, China that slaughter livestock for processing. Quality Pork Processors of Dallas, Texas, operates the slaughter facility at Austin under a custom slaughter arrangement.\n\nFacilities that produce manufactured items are located in Algona, Iowa; Aurora, Illinois; Austin, Minnesota; Beloit, Wisconsin; Bondurant, Iowa; Ft. 
Dodge, Iowa; Fremont, Nebraska; Houston, Texas; Knoxville, Iowa; Mitchellville, Iowa; Osceola, Iowa; Perrysburg, Ohio; Quakertown, Pennsylvania; Rochelle, Illinois; Savannah, Georgia; Sparta, Wisconsin; Stockton, California; Tucker, Georgia; Visalia, California; Wichita, Kansas; Beijing, China; and Shanghai, China. Company products are also custom manufactured by several other companies. The following are the Company's larger custom manufacturers: Lakeside Packing Company, Manitowoc, Wisconsin; Schroeder Milk, Maplewood, Minnesota; Steuben Foods, Jamaica, New York; Power Packaging, St. Charles, Illinois; Criders, Stilmore, Georgia; Tony Downs, St. James, Minnesota; and Concept Foods, Alma, Kansas. Power\n\nLogistics, Inc., based in St. Charles, Illinois, operates distribution centers for the Company in Dayton, Ohio, and Osceola, Iowa.\n\nThe Company's turkey slaughter and processing operations are located in Barron, Wisconsin; Faribault, Minnesota; Melrose, Minnesota; Montevideo, Minnesota; Pelican Rapids, Minnesota; and Willmar, Minnesota.\n\n#### **Patents and Trademarks**\n\nThere are numerous patents and trademarks that are important to the Company's business. The Company holds seven foreign and 47 U.S. issued patents. Some of the trademarks are registered and some are not. In recognition of the importance of these assets, the Company created a subsidiary, Hormel Foods, LLC, in 1998 to create, own, maintain and protect most of the Company's trademarks and patents. 
Some of the more significant owned or licensed trademarks used in the Company's segments are:\n\nHORMEL, ALWAYS TENDER, AMERICAN CLASSICS, AUSTIN BLUES, BLACK LABEL, CARAPELLI, CHI-CHI'S, CURE 81, CUREMASTER, DAN'S PRIZE, DIAMOND CRYSTAL, DI LUSSO, DINTY MOORE, DUBUQUE, EL TORITO, FAST 'N EASY, HERB-OX, HERDEZ, HOMELAND, HOUSE OF TSANG, JENNIE-O TURKEY STORE, KID'S KITCHEN, LAYOUT, LITTLE SIZZLERS, MARRAKESH EXPRESS, MARY KITCHEN, OLD SMOKEHOUSE, PATAK'S, PELOPONNESE, PILLOW PACK, QUICK MEAL, RANGE BRAND, ROSA GRANDE, SANDWICH MAKER, SPAM, STAGG, SWEET THING, THICK & EASY and WRANGLERS.\n\n#### **Customers and Backlog Orders**\n\nDuring fiscal year 2003, no customer accounted for more than 10 percent of total Company sales. The five largest customers in each segment make up approximately the following percentage of segment sales: 39 percent of Grocery Products, 39 percent of Refrigerated Foods, 35 percent of JOTS, 51 percent of Specialty Foods, and 27 percent of All Other. The loss of one or more of the top customers in any of these segments could have a material adverse effect on the results of such segment. Backlog orders are not significant due to the perishable nature of a large portion of the products. Orders are accepted and shipped on a current basis.\n\n#### **Competition**\n\nThe production and sale of meat and food products in the United States and internationally are highly competitive. The Company competes with manufacturers of pork and turkey products, as well as national and regional producers of other meat and protein sources, such as beef, chicken and fish. The Company believes that its largest domestic competitors for its Refrigerated Foods segment in 2003 were Tyson Foods, Smithfield Foods and ConAgra Foods; for its Grocery Products segment, ConAgra Foods, Dial Corp. and Campbell Soup Co.; and for JOTS, ConAgra Foods and Cargill, Inc.\n\nAll Hormel segments compete on the basis of price, product quality, brand identification and customer service. 
Through aggressive marketing and strong quality assurance programs, the Company's strategy is to provide higher quality products that possess strong brand recognition, which would then support higher value perceptions from customers.\n\nThe Company competes using this same strategy in international markets around the world.\n\n#### **Research and Development**\n\nResearch and development continues to be a vital part of the Company's strategy to extend existing brands and expand into new branded items. The expenditures for research and development for fiscal 2003, 2002 and 2001, respectively, were $13,165,000, $12,097,000 and $11,478,000. There are 42 professional employees engaged in full time research, 19 in the area of improving existing products and 23 in developing new products.\n\n### **Employees**\n\nAs of October 25, 2003, the Company had over 16,000 active employees.", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "#### **COMPETITIVE CONDITIONS**\n\nWe operate in a highly competitive business environment. We compete with other national, regional, local and online retailers that may carry similar lines of merchandise, including department stores, specialty stores, off-price stores, boutiques and Internet businesses. Our specific competitors vary from market to market. We believe the keys to competing in our industry are providing great customer service and customer experiences in stores and online, which includes compelling price and value, fashion newness, quality of products, selection, convenience, technology, product fulfillment, personalization and appealing, relevant store environments in top locations.\n\n#### **INVENTORY**\n\nWe plan our merchandise purchases and receipts to coincide with expected sales trends. For instance, our merchandise purchases and receipts increase prior to our Anniversary Sale, which has historically extended over the last two weeks of July. 
We also purchase and receive a larger amount of merchandise in the fall as we prepare for the holiday shopping season (from late November through December). Beginning in 2012, we increased our investment in pack and hold inventory at Nordstrom Rack, which involves the strategic purchase of merchandise from some of our full-line stores' top brands in advance of the upcoming selling seasons to take advantage of favorable buying opportunities. This inventory is typically held for six months on average and has contributed to the growth in our Nordstrom Rack business. We pay for our merchandise purchases under the terms established with our vendors.\n\nIn order to offer merchandise that our customers want, we purchase from a wide variety of high-quality suppliers, including domestic and foreign businesses. We also have arrangements with agents and contract manufacturers to produce our private label merchandise. We expect our suppliers to meet our \"Nordstrom Partnership Guidelines,\" which address our corporate social responsibility standards for matters such as legal and regulatory compliance, labor, health and safety and the environment, and are available on our website at Nordstrom.com.\n\n#### **EMPLOYEES**\n\nDuring 2014, we employed approximately 67,000 employees on a full- or part-time basis. Due to the seasonal nature of our business, employment increased to approximately 68,000 employees in July 2014 and 73,500 in December 2014. All of our employees are non-union. 
We believe our relationship with our employees is good.\n\n#### **CAUTIONARY STATEMENT**\n\nCertain statements in this Annual Report on Form 10-K contain or may suggest \"forward-looking\" information (as defined in the Private Securities Litigation Reform Act of 1995) that involve risks and uncertainties, including, but not limited to, anticipated financial outlook for the fiscal year ending January 30, 2016, anticipated annual total and comparable sales rates, anticipated new store openings in existing, new and international markets, anticipated Return on Invested Capital and trends in our operations. Such statements are based upon the current beliefs and expectations of the company's management and are subject to significant risks and uncertainties. Actual future results may differ materially from historical results or current expectations depending upon factors including, but not limited to:\n\n- successful execution of our customer strategy, including expansion into new markets, acquisitions, investments in our stores and online, our ability to realize the anticipated benefits from growth initiatives, our ability to provide a seamless experience across all channels, and the timely completion of construction associated with newly planned stores, relocations and remodels, all of which may be impacted by the financial health of third parties,\n- our ability to manage the transformation of our business/financial model as we increase our investments in growth opportunities, including our online business and our ability to manage related organizational changes,\n- our ability to maintain relationships with our employees and to effectively attract, develop and retain our future leaders,\n- effective inventory management, disruptions in our supply chain and our ability to control costs,\n- the impact of any systems failures, cybersecurity and/or security breaches, including any security breach of our systems or those of a third-party provider that results in the 
theft, transfer or unauthorized disclosure of customer, employee or company information or compliance with information security and privacy laws and regulations in the event of such an incident,\n- successful execution of our information technology strategy,\n- our ability to effectively utilize data in strategic planning and decision making,\n- efficient and proper allocation of our capital resources,\n- reviewing of options and structure for a financial partner in regards to a potential transaction related to our credit card receivables,\n- our ability to safeguard our reputation and maintain our vendor relationships,\n- the impact of economic and market conditions and the resultant impact on consumer spending patterns,\n- our ability to respond to the business environment, fashion trends and consumer preferences, including changing expectations of service and experience in stores and online,\n- the effectiveness of planned advertising, marketing and promotional campaigns in the highly competitive retail industry,\n- weather conditions, natural disasters, health hazards, national security or other market disruptions, or the prospects of these events and the resulting impact on consumer spending patterns,\n- our compliance with applicable banking-related laws and regulations impacting our ability to extend credit to our customers, employment laws and regulations, certain international laws and regulations, other laws and regulations applicable to us, including the outcome of claims and litigation and resolution of tax matters, and ethical standards,\n- impact of the current regulatory environment and financial system and health care reforms,", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS\n\n*The following discussion of the Company's financial condition and results of operations should be read together with the other financial information and 
consolidated financial statements included in this Annual Report. This discussion contains forward-looking statements that involve risks and uncertainties. The Company's actual results could differ materially from the results anticipated in the forward-looking statements as a result of a variety of factors, including those discussed in \"Forward Looking Statements\" and elsewhere in this Annual Report.* \n\n### **OVERVIEW**\n\nThe Company designs, develops, manufactures, markets, sells and distributes products and components, primarily for the medical and health care industry. The Company markets components to other equipment manufacturers for incorporation in their products and sells finished devices to physicians, hospitals, clinics and other treatment centers. The Company's products and services primarily range from ophthalmology and cardiovascular products to fluid delivery devices, contract manufacturing and kitting services. In 2003 approximately 26 percent of the Company's sales were outside the U.S.\n\nThe Company's products are used in a wide variety of applications by numerous customers, the largest of which accounted for approximately 14 percent of net sales in 2003. The Company encounters competition in all of its markets and competes primarily on the basis of product quality, price, engineering, customer service and delivery time.\n\nThe Company's strategy is to provide a broad selection of products and a high level of service in the areas in which it competes. The Company focuses its research and development efforts to improve current products and develop highly-engineered products that meet customer needs and have the potential for broad market applications and significant sales. Proposed new products may be subject to regulatory clearance or approval prior to commercialization and the time period for introducing a new product to the marketplace can be unpredictable. The Company is also focused on controlling costs. 
The Company does this by investing in modern manufacturing technologies and controlling purchasing processes. Over the past three years, the Company has continued to be faced with increasing costs associated with all lines of insurance, including group health benefits. The Company has been successful in consistently generating cash from operations and uses that cash to reduce indebtedness, to fund capital expenditures, to repurchase stock and, starting in 2003, to pay dividends. During 2003, the Company reduced debt by approximately $6.0 million.\n\nThe Company's strategic objective is to further enhance its position in its served markets by:\n\n- Focusing on customer needs\n- Expanding existing product lines and developing new products\n- Maintaining a culture of controlling cost\n- Preserving and fostering a collaborative, entrepreneurial management structure\n\nFor the year ended December 31, 2003, the Company reported revenues of $62.8 million, income from continuing operations of $4.9 million and net income of $5.1 million, up 5 percent, 20 percent and 95 percent, respectively, from 2002.\n\n### **RESULTS OF OPERATIONS**\n\nThe Company's income from continuing operations was $4.9 million, or $2.86 per basic and $2.66 per diluted share, in 2003, compared to income from continuing operations of $4.1 million, or $2.37 per basic and $2.18 per diluted share, in 2002 and $4.3 million, or $2.10 per basic and $1.88 per diluted share, in 2001. Net income, including discontinued operations and cumulative effect of accounting change, totaled $5.1 million, or $2.96 per basic and $2.75 per diluted share, in 2003, compared with $2.6 million, or $1.51 per basic and $1.39 per diluted share, in 2002 and $9.8 million, or $4.80 per basic and $4.30 per diluted share, in 2001. The Company adopted Statement of Financial Accounting Standards (\"SFAS\") No. 142 effective January 1, 2002. The required adoption of SFAS No. 
142 as discussed in Note 2 to the Company's Consolidated Financial Statements included herein is considered a change in accounting principle and the cumulative effect of adopting this standard resulted in a $1.6 million, or $ .96 per basic and $ .88 per diluted share, noncash, after-tax charge in 2002.\n\nOperating revenues were $62.8 million in 2003, compared with $59.5 million in 2002 and $57.6 million in 2001. These revenue increases are generally attributable to higher sales volumes. The 5 percent revenue increase in 2003 over the prior year is primarily attributable to an 8 percent increase in the revenues of the Company's ophthalmic products, an 8 percent increase in the revenues of the Company's cardiovascular products, a 3 percent increase in the Company's fluid delivery products and a 2 percent increase in the Company's other medical and non-medical products and services. The 3 percent revenue increase in 2002 over the prior year is primarily attributable to an 8 percent increase in the revenues of the Company's cardiovascular products, a 4 percent increase in the Company's fluid delivery products and a 4 percent increase in the Company's other medical and non-medical products and services.\n\nThe Company's cost of goods sold was $40.6 million in 2003, compared with $39.2 million in 2002 and $35.8 million in 2001. The increase in cost of goods sold for 2003 over 2002 was primarily related to the increase in revenues discussed above and increased insurance costs partially offset by an improvement in manufacturing variances resulting from increased production volumes. The increase in cost of goods sold for 2002 over 2001 was primarily related to a shift in product mix, which resulted in lower gross margins, and the increase in revenues discussed above.\n\nGross profit was $22.2 million in 2003, compared with $20.3 million in 2002 and $21.8 million in 2001. 
The Company's gross profit in 2003 was 35 percent of revenues compared with 34 percent of revenues in 2002 and 38 percent of revenues in 2001. The increase in gross profit percentage in 2003", - "page_start": 25, - "page_end": 25, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "#### INDUSTRIAL MACHINERY AND MARINE BUSINESS\n\nTOSHIO AOKI Vice President\n\nWHO WE ARE\n\n## **Building on the Core**\n\n\"We are the only forklift manufacturer directly owned by an automotive company, and that has created a number of synergies for our division. There's a natural link with the core business, for instance, given the powertrain of a forklift. However, we also benefit from other assets within Nissan, such as brand, quality, cost management, and marketing activities.\n\nThe bottom line is that we contribute to the Company's total profitability. We had our highest sales and profit in fiscal 2004. We now lead the industry in profitability, in fact, which I believe reflects the market's awareness of our superior quality. In this business, quality is everything, because our customers are investing in tools for their business. As we upgrade our customer service, I think we will be in a position to become the market leader.\n\nProducing forklifts is the heart of our business, although we also build marine products, mostly fiberglass boats and outboard motors. During the year a major issue for our forklift division was the rising price of steel, which seriously affects forklift production. We increased our selling price in response, as did the rest of the industry. Fortunately, we met or surpassed our targets in Japan and in Europe, where we have a plant in Spain. We were slightly below our target for the U.S., however, the result of a slight delay in the start of production on a new model, which reduced volume for the year. 
We have since recovered our strength in that market, which we see as key to our continued growth.\n\nA major contributor to our expansion was the release of a new forklift in Japan two years ago. At the time we had not released a new model in over seven years. Over the coming years we plan to introduce a new battery-powered model in major markets and enhance our service network. Since forklifts are production equipment, their sales are highly influenced by business cycles. To help maintain our profitability, we need to ramp up our parts and service businesses, which can be a significant source of income.\n\nWe have made a tough commitment for the NISSAN Value-Up period, and that is to increase our profitability until it is in line with Nissan's other operations. This will require some bold steps, but doing so will make us the industry leader. We are currently expanding into producing other material handling and warehousing equipment. We also see opportunities for quality forklifts in China, despite the competitive market there. Used forklifts can also be a profitable business as well, and we are looking to increase our involvement in that area.\n\nOur marine-related business has been profitable since 2000, when we restructured the business by expanding the marine product line-up and strengthening marina operations. Now we are focusing on larger boats and investigating the possibility of manufacturing in China. 
We are also researching the recycling of plastic and fiberglass boats, which is a major environmental concern.\"", - "page_start": 31, - "page_end": 31, - "source_file": "OTC_NSANY_2004.pdf" }, { "text": "## Specific Examples of CSR Activities\n\n# **Together with Our Customers**\n\n**We work as a team to improve customer satisfaction and product quality, and, while supporting the customer, contribute to the sustainable development of society as a whole.**\n\n# **The financial sector's role in improving the nation's diet and in strengthening the agricultural and fisheries sectors**\n\nFor many years, food supply networks in Japan were premised on mass production and mass consumption, enabling the country to meet soaring food demand at a time of rapid growth in the population and economy. But in recent years, consumers have come to place more priority on factors other than volume and price, such as food safety and healthiness, and the cultural aspects of diet. As discussion continues on the need for farmers to increase production scale and move into processing and marketing, major changes are underway in the agriculture and fisheries sector in Japan.\n\nAgainst this backdrop, SMBC has developed a new financial product for this sector. 
The SMBC Food and Agricultural Assessment The SMBC Food and Agricultural Assessment Loan comes with conditions, depending on Loan comes with conditions, depending on the results of an evaluation of food-producers' the results of an evaluation of food-producers' progress in areas such as food safety and progress in areas such as food safety and environment-friendliness, healthiness and environment-friendliness, healthiness and nutritional value, and efficiency of distribution. nutritional value, and efficiency of distribution. The Japan Research Institute researches The Japan Research Institute researches\n\nmeasures in the me a s u r e s i n t h e areas of food and of food and farming being taken farming being taken by the loan applicant, by the loan applicant, and drafts a simple and drafts a simple \"diagnosis\" stating \"diagnosis\" stating whether there is room whether there is room\n\nfor future improvement. Ernst & Young for future improvement. Ernst & Young ShinNihon LLC provides expert opinions on ShinNihon LLC provides expert opinions on ongoing improvement of this system. ongoing improvement of this system.\n\nBy backing customer companies' own By backing customer companies' own initiatives in the areas of food and agriculture initiatives in the areas of food and agriculture in this way, SMBC will be supporting measures in this way, SMBC will be supporting measures to improve the diet of the Japanese and to improve the diet of the Japanese and strengthen the agriculture and fisheries sector. strengthen the agriculture and fisheries sector.\n\n#### **For further details, please see our website.**\n\nA roundtable session with experts held in August 2011 eyesight concerns. eyesight concerns. considered the role of the new SMBC Food and Agricultural Assessment Loan in improving the food supply chain that links food and fishery producers with food processors and consumers. 
Opinions were also exchanged on what other future role the bank might assume in this regard, given the current situation and issues facing the food industry\n\nand agriculture in Japan.\n\n**Roundtable session: SMBC Food and Agricultural Assessment Loan**\n\n#### **Key comments of participants**\n\n\"We want to deliver value by creating demand and quality combined with safety, peace of mind and trust.\" Katsutoshi Konuma, Section Manager, Social & Environmental Management, Asahi Breweries Ltd.\n\nYasuhiro Nakashima Associate Professor Graduate School of Agricultural and Life Sciences, The University of Tokyo\n\n\"Eating should be something that generates emotion. New potential exists in the world of cuisine.\" Daisuke Yamamoto, Vice Senior Consultant, Research Department, The Japan Research Institute, Limited\n\n\"As consumer tastes go through a time of great change, I think it is important to prioritize ingredients and the attitude of customers toward eating.\"\n\n\"An important concept is multilateral dialogue as the number of parties involved in food production increases throughout the supply chain.\" Yoichiro Fukayama, Planning Dept., Deputy Head (with powers of representation) of the Corporate Banking Unit & Middle Market Banking Unit, SMBC\n\nModerated by Kenji Sawami, Partner, Ernst & Young ShinNihon LLC\n\n# **Making banking a more pleasant experience for all customers**\n\nWith the old-age dependency ratio soaring, With the old-age dependency ratio soaring, the SMFG Group aims to provide friendly, the SMFG Group aims to provide friendly, easy-to-use banking services for all its easy-to-use banking services for all its customers. customers.\n\nSome Group companies are likewise making Some Group companies are likewise making their facilities barrier-free at bank branches their facilities barrier-free at bank branches with large numbers of customers, to tailor with large numbers of customers, to tailor services to the needs of all customers. 
services to the needs of all customers.\n\nFor example at the Minato Bank, we have For example at the Minato Bank, we have equipped all ATMs at all our branches and equipped all ATMs at all our branches and cashpoints with voice-guidance handsets for cashpoints with voice-guidance handsets for the visually impaired. the visually impaired.\n\nIn addition, we have set up priority seating In addition, we have set up priority seating in the lobby of each of our branches for in the lobby of each of our branches for customers who are very old or who have customers who are very old or who have mobility problems. We are also steadily mobility problems. We are also steadily introducing queue-number displays using introducing queue-number displays using Color Universal Design (CUD) principles, Color Universal Design (CUD) principles, which are easier to read for customers with which are easier to read for customers with\n\nHandheld hearing support device (The Minato Bank)\n\nA further measure is installation of handheld A further measure is installation of handheld hearing support devices at all branches hearing support devices at all branches (except housing loan promotion offices), to (except housing loan promotion offices), to allay the concerns of hearing-impaired allay the concerns of hearing-impaired customers who find it difficult to converse customers who find it difficult to converse and follow spoken instructions. By using the and follow spoken instructions. By using the devices as communication tools, bank devices as communication tools, bank employees can respect customer privacy employees can respect customer privacy and do not have to talk loudly. and do not have to talk loudly. Further measures include posting of \"green Further measures include posting of \"green ear\" logos at branches to reassure customers ear\" logos at branches to reassure customers that the bank has facilities for conversing that the bank has facilities for conversing in writing. 
All branches are being equipped writing. All branches are being equipped with white boards and special message with white boards and special message tablets for dialogue with customers who ablets for dialogue with customers who have concerns about their hearing and who have concerns about their hearing and who dislike written conversations. dislike written conversations.\n\n# **Peace of mind at the bank counter**\n\nThe Minato Bank has created a position The Minato Bank has created a position titled \"Service Care Manager\" at each of titled \"Service Care Manager\" at each of its branches, filled by at least one branch its branches, filled by at least one branch managerial staffer, as part of measures to managerial staffer, as part of measures to make branch visits more pleasant for make branch visits more pleasant for customers, following earlier nuts-and-bolts customers, following earlier nuts-and-bolts improvements. improvements.\n\nService Care Managers are dedicated to Service Care Managers are dedicated to improving support and services for the improving support and services for the customer at each branch. Their training customer at each branch. Their training includes simulations of the problems faced includes simulations of the problems faced by persons with disabilities, awareness by persons with disabilities, awareness raising and support methods for the elderly raising and support methods for the elderly and persons with disabilities. 
and persons with disabilities.\n\n### **New queue-number display system installed at bank counters**\n\nColors and special designs are used to make queue-number displays more visible to all customers (The Minato Bank)\n\nTelephone handset-type ATM (The Minato Bank)\n\n# **Preparing our businesses for a higher old-age dependency ratio**\n\nIn addition to removing mobility barriers at In addition to removing mobility barriers at branches, the bank plans to aggressively branches, the bank plans to aggressively support installation of facilities needed to support installation of facilities needed to cope with the rapidly rising old-age cope with the rapidly rising old-age dependency ratio. As a first step, SMBC dependency ratio. As a first step, SMBC has established clear guidelines for has established clear guidelines for supporting the construction of rental supporting the construction of rental housing for the elderly, expected to be a housing for the elderly, expected to be a future growth area. future growth area.\n\nWhile continuing to tailor business While continuing to t ailor busines s activities to the needs of the community at activities to the needs of the community at large and ensuring a friendly banking large and ensuring a friendly banking environment for our customers, the SMFG environment for our customers, the SMFG Group also plans to support the creation of Group also plans to support the creation of frameworks that enable the elderly to live frameworks that enable the elderly to live active lives with peace of mind. 
active lives with peace of mind.", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## **HORMEL FOODS CORPORATION**\n\n#### **Austin, Minnesota**\n\n**Item 15(a) (1), (2) and (3) and Item 15 (c) and (d)**\n\n## **LIST OF FINANCIAL STATEMENTS AND FINANCIAL STATEMENT SCHEDULES**\n\n#### **HORMEL FOODS CORPORATION**\n\n## **FINANCIAL STATEMENTS**\n\nThe following consolidated financial statements of Hormel Foods Corporation included in the Annual Stockholders' Report for the Registrant to its stockholders for the year ended October 25, 2003, are incorporated herein by reference in Item 8 of Part II of this report:\n\n**Consolidated Statements of Financial Position**—October 25, 2003, and October 26, 2002.\n\n**Consolidated Statements of Operations**—Years Ended October 25, 2003, October 26, 2002 and October 27, 2001.\n\n**Consolidated Statements of Changes in Shareholders' Investment** —Years Ended October 25, 2003, October 26, 2002, and October 27, 2001.\n\n**Consolidated Statements of Cash Flows**—Years Ended October 25, 2003, October 26, 2002, and October 27, 2001.\n\n**Notes to Financial Statements**—October 25, 2003.\n\n**Report of Independent Auditors**\n\n#### **FINANCIAL STATEMENT SCHEDULES**\n\nThe following consolidated financial statement schedule of Hormel Foods Corporation required pursuant to Item 15(d) is submitted herewith:\n\n#### **Schedule II—Valuation and Qualifying Accounts and Reserves...F-3**\n\nAll other schedules for which provision is made in the applicable accounting regulation of the Securities and Exchange Commission are not required under the related instructions or are inapplicable, and therefore have been omitted.\n\n## **FINANCIAL STATEMENTS AND SCHEDULES OMITTED**\n\nCondensed parent company financial statements of the registrant are omitted pursuant to Rule 5-04(c) of Article 5 of Regulation S-X.\n\n## **SCHEDULE II—VALUATION AND QUALIFYING ACCOUNTS AND RESERVES**\n\n## **HORMEL FINANCIAL 
SERVICES CORPORATION**\n\n**(In Thousands)**\n\n**Note (1)**—Uncollectible accounts written off.\n\n**Note (2)**—Recoveries on accounts previously written off.\n\n**Note (3)**—Increase in the reserve due to the inclusion of The Turkey Store Company accounts receivable.\n\n**Note (4)**—Increase in the reserve due to the inclusion of Diamond Crystal Brands accounts receivable.\n\n#### **LIST OF EXHIBITS**\n\n### **HORMEL FOODS CORPORATION**\n\n2.1(1) Agreement and Plan of Merger and Plan of Reorganization dated January 22, 2001, by and among Hormel, Badger Acquisition Corporation, Jerome Foods, Inc. and Jerome K. Jerome. (Incorporated by reference to Hormel's Current Report on Form 8-K dated March 9, 2001, File No. 001-02402.)\n\n- 3.1(1) Certificate of Incorporation as amended to date. (Incorporated by reference to Exhibit 3A-1 to Hormel's Annual Report on Form 10- K/A for the fiscal year ended October 28, 2000, File No. 001-02402.)", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_HRL_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HRL_2004.pdf", - "query": "Where are Hormel Foods Corporation plants located? ", - "target_page": 5, - "target_passage": "has plants in Austin, Minnesota; Fremont, Nebraska; and Beijing, China", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "markets its turkey products through its own sales force and independent brokers.\n\nThe acquisitions of Diamond Crystal Brands Nutritional Products in fiscal 2001 and the Century Foods International business in July of fiscal 2003 strengthened the Company's presence in the nutritional food products and supplements market. The Company currently operates as one of the largest companies providing nutritional products to the U.S. healthcare industry.\n\nThe Company acquired the Diamond Crystal Brands business from Imperial Sugar Co. in December of fiscal 2003. 
Diamond Crystal Brands packages and sells various sugar, sugar substitute, salt and pepper products, savory products, drink mixes and dessert mixes to retail and foodservice customers.\n\nInternationally, the Company markets its products through Hormel Foods International Corporation (HFIC), a wholly owned subsidiary. HFIC has a presence in the international marketplace through joint ventures and placement of personnel in strategic foreign locations such as China, Spain, and the Philippines. HFIC also has a global presence with minority positions in food companies in Spain (Campofrio Alimentacion S.A., 15% holding) and the Philippines (Purefoods-Hormel, 40% holding).\n\nThe Company has not been involved in any bankruptcy, receivership or similar proceedings during its history. Substantially all of the assets of the Company have been acquired in the ordinary course of business.\n\nThe Company had no significant change in the type of products produced or services rendered, nor in the markets or methods of distribution since the beginning of the fiscal year.\n\n## **(b)** *Industry Segment*\n\nThe Company's business is reported in five segments: Grocery Products, Refrigerated Foods, Jennie-O Turkey Store, Specialty Foods, and All Other. The contributions of each segment to net sales to unaffiliated customers and operating profit, and the presentation of certain other financial information by segment are reported in Note K of the Notes to Consolidated Financial Statements and in the Management's Discussion and Analysis of the Annual Stockholder's Report for the year ended October 25, 2003, incorporated herein by reference.\n\n#### **(c)** *Description of Business*\n\n## **Products and Distribution**\n\nThe Company's products primarily consist of meat and other food products. The meat products are sold fresh, frozen, cured, smoked, cooked and canned. 
The percentages of total revenues contributed by classes of similar products for the last three fiscal years of the Company are as follows:\n\n| | Fiscal 2003 | Fiscal 2002 | Fiscal 2001 |\n| --- | --- | --- | --- |\n| Perishable meat | 50.3% | 53.0% | 54.7% |\n| Nonperishable meat | 18.9 | 19.8 | 21.0 |\n| Poultry | 22.1 | 22.6 | 20.3 |\n| Other | 8.7 | 4.6 | 4.0 |\n| | 100.0% | 100.0% | 100.0% |\n\nReporting of revenues from external customers is based on similarity of products, as the same or similar products are sold across multiple distribution channels such as retail, foodservice or international. Revenues reported are based on financial information used to produce the Company's general-purpose financial statements.\n\nPerishable meat includes fresh meats, sausages, hams, wieners and bacon (excluding JOTS products). Nonperishable meat includes canned luncheon meats, shelf stable microwaveable entrees, stews, chilies, hash, meat spreads and other items that do not require refrigeration as well as frozen processed products. The Poultry category is composed primarily of JOTS products. The Other category primarily consists of nutritional food products and supplements, sugar and sugar substitutes, salt and pepper products, dessert mixes, food packaging (casings for dry sausage), and industrial gelatin products. The Other category has increased over the past two years primarily due to the following acquisitions: Century Foods International (July 2003), Diamond Crystal Brands (December 2002), and Diamond Crystal Brands Nutritional Products (April 2001).\n\nNo new product in fiscal 2003 required a material investment of Company assets.\n\nDomestically, the Company sells its products in all 50 states. Hormel products are sold through Company sales personnel, operating in assigned territories coordinated from district sales offices located in most of the larger U.S. cities, as well as independent brokers and distributors.
As of October 25, 2003, the Company had approximately 600 sales personnel engaged in selling its products. Distribution of products to customers is by common carrier.\n\nThrough HFIC, the Company markets its products in various locations throughout the world. Some of the larger markets include Australia, Canada, China, England, Japan, Mexico and Micronesia. The distribution of export sales to customers is by common carrier, while the China operations own and operate their own delivery system. The Company, through HFIC, has licensed companies to manufacture various Hormel products internationally on a royalty basis, with the primary licensees being Tulip International of Denmark and CJ Corp. of South Korea.\n\n### **Raw Materials**\n\nThe Company has, for the past several years, been concentrating on processed branded products for consumers with year-round demand to minimize the seasonal variation experienced with commodity type products. Pork continues to be the primary raw material for Company products. Although hog producers are moving toward larger, more efficient year-round confinement operations and supply contracts are becoming increasingly prevalent in the industry, there is still a seasonal variation in the supply of fresh pork materials. The Company's expanding line of processed items has reduced but not eliminated the sensitivity of Company results to raw material supply and price fluctuations.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "| 3.2(1) | Bylaws as amended to date. (Incorporated by reference to Exhibit 3.2 to Hormel's Amendment No. 3 to Registration Statement on Form S-4, dated November 29, 2001, File No. 333-68498.) |\n| --- | --- |\n| 4.1(1) | Indenture dated as of June 1, 2001, between Hormel and U.S. Bank Trust National Association, as Trustee relating to certain outstanding debt securities. (Incorporated by reference to Exhibit 4.1 to Hormel's Registration Statement on Form S-4 dated August 28, 2001, File No. 333-68498.) |\n| 4.2(1) | Supplemental Indenture No. 1 dated as of June 4, 2001, to Indenture dated as of June 1, 2001, between Hormel and U.S. Bank Trust National Association, as Trustee, relating to certain outstanding debt securities. (Incorporated by reference to Exhibit 4.2 to Hormel's Registration Statement on Form S-4 dated August 28, 2001, File No. 333-68498.) |\n| 4.3(1) | Letter of Representations dated June 5, 2001, among Hormel, U.S. Bank Trust National Association, as Trustee, and The Depository Trust Company relating to certain outstanding debt securities of Hormel. (Incorporated by reference to Exhibit 4.3 to Hormel's Registration Statement on Form S-4 dated August 28, 2001, File No. 333-68498.) |\n| 4.4(1) | Pursuant to Item 601 (b)(4)(iii) of Regulation S-K, copies of instruments defining the rights of holders of certain long-term debt are not filed. Hormel agrees to furnish copies thereof to the Securities and Exchange Commission upon request. |\n| 10.1(1) | U.S. $150,000,000 Credit Agreement, dated as of October 20, 2003, between Hormel, the banks identified on the signature pages thereof, and Citicorp U.S.A. Inc., as Administrative Agent. (Incorporated by Reference to Exhibit 10.1 to Hormel's Current Report on Form 8-K dated October 23, 2003.) |\n| 10.2(1)(3) | Hormel Foods Corporation Operators' Shares Incentive Compensation Plan. (Incorporated by Reference to Appendix A to Hormel's definitive Proxy Statement filed on December 30, 1997, File No. 001-02402.) |\n| 10.3(1)(3) | Hormel Foods Corporation Supplemental Executive Retirement Plan (2002 Restatement.) (Incorporated by Reference to Exhibit 10.3 to Hormel's Annual Report on Form 10-K for the fiscal year ended October 26, 2002, file No. 001-02402.) |\n| 10.4(1)(3) | Hormel Foods Corporation 2000 Stock Incentive Plan. (Incorporated by Reference to Exhibit A to Hormel's definitive Proxy Statement filed on December 30, 1999, File No. 001-02402.) |\n\n(1) Document has previously been filed with the Securities and Exchange Commission and is incorporated herein by reference.\n\n(2) These Exhibits transmitted via EDGAR.\n\n(3) Management compensatory plan", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "# **Hormel Foods Annual Report 2004**\n\n## **Form 10-K (NYSE:HRL)**\n\nPublished: January 23rd, 2004\n\nPDF generated by stocklight.com", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "Livestock slaughtered by the Company is purchased by Company buyers and commission dealers at sale barns and terminal markets or under long-term supply contracts at locations principally in Minnesota, Illinois, Iowa, Nebraska, Colorado and South Dakota. The cost of livestock and the utilization of the Company's facilities are affected by both the level and the methods of pork production in the United States. The hog production industry has been rapidly moving to very large, vertically integrated, year-round confinement operations operating under long-term supply agreements. This has resulted in fewer hogs being available on the spot cash market, which decreases the supply of hogs on the open market and can severely diminish the utilization of slaughter facilities and increase the cost of the raw materials they produce. The Company, along with others in the industry, uses long-term supply contracts to manage the effects of this trend and to assure a stable supply of raw materials while minimizing extreme fluctuations in costs over the long term. This may result in costs for live hogs that are either higher or lower than the spot cash market depending on the relationship of the cash spot market to contract prices.
Contract costs are fully reflected in the Company's reported financial results. In fiscal 2003, the Company purchased 79 percent of its hogs under long-term supply contracts.\n\nIn fiscal 2003, JOTS raised approximately 57 percent of the turkeys needed to meet its raw material requirements for whole bird and processed turkey products. Turkeys not sourced within the Company are contracted with independent turkey growers. JOTS' turkey-raising farms are located throughout Minnesota and Wisconsin. Production costs in raising turkeys are primarily subject to fluctuations in feed grain prices and to a lesser extent fuel costs.\n\n## **Manufacturing**\n\nThe Company has plants in Austin, Minnesota; Fremont, Nebraska; and Beijing, China that slaughter livestock for processing. Quality Pork Processors of Dallas, Texas, operates the slaughter facility at Austin under a custom slaughter arrangement.\n\nFacilities that produce manufactured items are located in Algona, Iowa; Aurora, Illinois; Austin, Minnesota; Beloit, Wisconsin; Bondurant, Iowa; Ft. Dodge, Iowa; Fremont, Nebraska; Houston, Texas; Knoxville, Iowa; Mitchellville, Iowa; Osceola, Iowa; Perrysburg, Ohio; Quakertown, Pennsylvania; Rochelle, Illinois; Savannah, Georgia; Sparta, Wisconsin; Stockton, California; Tucker, Georgia; Visalia, California; Wichita, Kansas; Beijing, China; and Shanghai, China. Company products are also custom manufactured by several other companies. The following are the Company's larger custom manufacturers: Lakeside Packing Company, Manitowoc, Wisconsin; Schroeder Milk, Maplewood, Minnesota; Steuben Foods, Jamaica, New York; Power Packaging, St. Charles, Illinois; Criders, Stilmore, Georgia; Tony Downs, St. James, Minnesota; and Concept Foods, Alma, Kansas. Power Logistics, Inc., based in St.
Charles, Illinois, operates distribution centers for the Company in Dayton, Ohio, and Osceola, Iowa.\n\nThe Company's turkey slaughter and processing operations are located in Barron, Wisconsin; Faribault, Minnesota; Melrose, Minnesota; Montevideo, Minnesota; Pelican Rapids, Minnesota; and Willmar, Minnesota.\n\n#### **Patents and Trademarks**\n\nThere are numerous patents and trademarks that are important to the Company's business. The Company holds seven foreign and 47 U.S. issued patents. Some of the trademarks are registered and some are not. In recognition of the importance of these assets, the Company created a subsidiary, Hormel Foods, LLC, in 1998 to create, own, maintain and protect most of the Company's trademarks and patents. Some of the more significant owned or licensed trademarks used in the Company's segments are:\n\nHORMEL, ALWAYS TENDER, AMERICAN CLASSICS, AUSTIN BLUES, BLACK LABEL, CARAPELLI, CHI-CHI'S, CURE 81, CUREMASTER, DAN'S PRIZE, DIAMOND CRYSTAL, DI LUSSO, DINTY MOORE, DUBUQUE, EL TORITO, FAST 'N EASY, HERB-OX, HERDEZ, HOMELAND, HOUSE OF TSANG, JENNIE-O TURKEY STORE, KID'S KITCHEN, LAYOUT, LITTLE SIZZLERS, MARRAKESH EXPRESS, MARY KITCHEN, OLD SMOKEHOUSE, PATAK'S, PELOPONNESE, PILLOW PACK, QUICK MEAL, RANGE BRAND, ROSA GRANDE, SANDWICH MAKER, SPAM, STAGG, SWEET THING, THICK & EASY and WRANGLERS.\n\n#### **Customers and Backlog Orders**\n\nDuring fiscal year 2003, no customer accounted for more than 10 percent of total Company sales. The five largest customers in each segment make up approximately the following percentage of segment sales: 39 percent of Grocery Products, 39 percent of Refrigerated Foods, 35 percent of JOTS, 51 percent of Specialty Foods, and 27 percent of All Other. The loss of one or more of the top customers in any of these segments could have a material adverse effect on the results of such segment. Backlog orders are not significant due to the perishable nature of a large portion of the products. 
Orders are accepted and shipped on a current basis.\n\n#### **Competition**\n\nThe production and sale of meat and food products in the United States and internationally are highly competitive. The Company competes with manufacturers of pork and turkey products, as well as national and regional producers of other meat and protein sources, such as beef, chicken and fish. The Company believes that its largest domestic competitors for its Refrigerated Foods segment in 2003 were Tyson Foods, Smithfield Foods and ConAgra Foods; for its Grocery Products segment, ConAgra Foods, Dial Corp. and Campbell Soup Co.; and for JOTS, ConAgra Foods and Cargill, Inc.\n\nAll Hormel segments compete on the basis of price, product quality, brand identification and customer service. Through aggressive marketing and strong quality assurance programs, the Company's strategy is to provide higher quality products that possess strong brand recognition, which would then support higher value perceptions from customers.\n\nThe Company competes using this same strategy in international markets around the world.\n\n#### **Research and Development**\n\nResearch and development continues to be a vital part of the Company's strategy to extend existing brands and expand into new branded items. The expenditures for research and development for fiscal 2003, 2002 and 2001, respectively, were $13,165,000, $12,097,000 and $11,478,000. There are 42 professional employees engaged in full time research, 19 in the area of improving existing products and 23 in developing new products.\n\n### **Employees**\n\nAs of October 25, 2003, the Company had over 16,000 active employees.", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "| Item 1. | BUSINESS |\n| --- | --- |\n| Item 2. | PROPERTIES |\n| Item 3. | LEGAL PROCEEDINGS |\n| Item 4. | SUBMISSION OF MATTERS TO A VOTE OF SECURITY HOLDERS |\n| PART II | |\n| Item 5. 
| MARKET FOR THE REGISTRANT'S COMMON STOCK AND RELATED STOCKHOLDER MATTERS |\n| Item 6. | SELECTED FINANCIAL DATA |\n| Item 7. | MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS |\n| Item 7A. | QUANTITATIVE AND QUALITATIVE DISCLOSURES ABOUT MARKET RISK |\n| Item 8. | FINANCIAL STATEMENTS AND SUPPLEMENTAL DATA |\n| Item 9. | CHANGES IN AND DISAGREEMENTS WITH ACCOUNTANTS ON ACCOUNTING AND FINANCIAL DISCLOSURE |\n| Item 9A. | CONTROLS AND PROCEDURES |\n| PART III | |\n| Item 10. | DIRECTORS AND EXECUTIVE OFFICERS OF THE REGISTRANT |\n| Item 11. | EXECUTIVE COMPENSATION |\n| Item 12. | SECURITY OWNERSHIP OF CERTAIN BENEFICIAL OWNERS AND MANAGEMENT AND RELATED STOCKHOLDER MATTERS |\n| Item 13. | CERTAIN RELATIONSHIPS AND RELATED TRANSACTIONS |\n| Item 14. | PRINCIPAL ACCOUNTING FEES AND SERVICES |\n| PART IV | |\n| Item 15. | EXHIBITS, FINANCIAL STATEMENT SCHEDULES AND REPORTS ON FORM 8-K |\n| SIGNATURES | |\n\n#### **PART I**\n\n## **Item 1.** *BUSINESS*\n\n## **Available Information**\n\nThe Company makes available, free of charge on its website at *www.hormel.com*, its annual report on Form 10-K, quarterly reports on Form 10-Q, current reports on Form 8-K, and amendments to those reports filed or furnished pursuant to Section 13(a) or 15(d) of the Securities Exchange Act of 1934.
These reports are accessible under the \"Investor\" caption of the Company's website and are available as soon as reasonably practicable after such material is electronically filed with or furnished to the Securities and Exchange Commission, which is within 24 hours.\n\nThe Company has adopted a Code of Ethical Business Conduct that covers its officers and directors, which is available on the Company's website, free of charge, under the caption \"Corporate.\" The Company also adopted Corporate Governance Guidelines, which are available on the Company's website, free of charge, under the caption \"Investor.\"\n\n#### **(a)** *General Development of Business*\n\nHormel Foods Corporation, a Delaware corporation, was founded by George A. Hormel in 1891 in Austin, Minnesota, as George A. Hormel & Company. The Company started as a processor of meat and food products and continues in this line of business. The Company name was changed to Hormel Foods Corporation on January 31, 1995. The Company is primarily engaged in the production of a variety of meat and food products and the marketing of those products throughout the United States. Although pork and turkey remain the major raw materials for Hormel products, the Company has emphasized for several years the manufacture and distribution of branded, consumer packaged items rather than the commodity fresh meat business.\n\nThe Company's branding strategy led to the development of a joint venture between Hormel Foods Corporation and Excel Corporation, a wholly owned subsidiary of Cargill Incorporated. This joint venture began marketing and selling nationally branded fresh case ready beef and pork under the existing HORMEL ALWAYS TENDER brand name in fiscal year 2003. This 50 percent owned joint venture, named Precept Foods LLC, is based in Austin, Minn.\n\nIn fiscal 2001, the Jennie-O Turkey Store (JOTS) business was formed as a result of merging the Company's existing Jennie-O Foods, Inc. 
business with the operations of The Turkey Store Company, which was acquired in the second quarter of fiscal 2001. The Turkey Store Company was a turkey processing business headquartered in Barron, Wisconsin. The merged JOTS operation is currently the largest turkey processor in the world. JOTS", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "vertically integrated waste services or expand the service area for our existing disposal sites. Development projects, while generally less capital intensive, typically require extensive permitting efforts that can take years to complete with no assurance of success. We undertake development projects when we believe there is a reasonable probability of success and where reasonably priced acquisition opportunities are not available.\n\n- *Acquisition Growth.* During the late 1990's, the solid waste industry experienced a period of rapid consolidation. We were able to grow significantly through acquisitions during this period. However, the rate of consolidation in the industry has slowed considerably. Despite this, we continue to look to acquire businesses that complement our existing business platform. Our acquisition growth strategy focuses on privately-held solid waste companies and municipal and other local governmental authorities. We believe that our ability to acquire privately-held companies is enhanced by increasing competition in the solid waste industry, increasing capital requirements as a result of changes in solid waste regulatory requirements, and the limited number of exit strategies for these privately-held companies' owners and principals. We also seek to acquire operations and facilities from municipalities that are privatizing, which occur for many of the same reasons that privately-held companies sell their solid waste businesses.
In addition, we will continue to evaluate opportunities to acquire operations and facilities that may be divested by other publicly-owned waste companies. In sum, our acquisition growth strategy focuses on:\n\t- acquiring businesses that position our company for growth in existing and new markets,\n\t- acquiring well-managed companies and, when appropriate, retaining local management,\n\t- acquiring operations and facilities from municipalities that are privatizing and publicly-owned companies that are divesting of assets.\n\nFor certain risks involved with our acquisition growth strategy, see \"Risk Factors — We may be unable to execute our acquisition growth strategy,\" \"— We may be unable to manage our growth effectively,\" and \"— Businesses we acquire may have undisclosed liabilities.\"\n\n*Acquire Businesses Positioning the Company for Growth.* In making acquisitions, we principally target high quality businesses that will allow our company to be, or provide our company favorable prospects of becoming, a leading provider of integrated solid waste services in markets with favorable demographic growth. Generally, we have acquired, and will continue to seek to acquire, solid waste collection, transfer and disposal companies that:\n\n- have strong operating margins,\n- are in growth markets,\n- are among the largest or have a significant presence in their local markets, and\n- have long-term contracts or franchises with municipalities and other customers.\n\nOnce we have a base of operations in a particular market, we focus on acquiring trucks and routes of smaller businesses that also operate in that market and surrounding markets, which are typically referred to as \"tuck-in\" acquisitions. We seek to consolidate the operations of such tuck-in businesses into our existing operations in that market. 
We also seek to acquire landfills, transfer stations and collection companies that operate in markets that we are already servicing in order to fully integrate our operations from collection to disposal. In addition, we have in the past and may continue in the future to exchange businesses with other solid waste companies if by doing so there is a net benefit to our business platform. These activities allow us to increase our revenue and market share, lower our cost of operations as a percentage of revenue, and consolidate duplicative facilities and functions to maximize cost efficiencies and economies of scale.\n\n*Acquire Well-Managed Companies.* We also seek to acquire businesses that have experienced management teams that are willing to join the management of our company. We generally seek to maintain continuity in management of larger acquired companies in order to capitalize on their local", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "#### **(d)** *Executive Officers of the Registrant*\n\n| Joel W. Johnson | 60 | Chairman of the Board, President and Chief | 12/08/95 to Present | 1991 |\n| --- | --- | --- | --- | --- |\n| | | Executive Officer | | |\n| Michael J. McCoy | 56 | Executive Vice President and Chief | 10/29/01 to Present | 1996 |\n| | | Financial Officer | | |\n| | | Senior Vice President and Chief Financial | 05/01/00 to 10/28/01 | |\n| | | Officer | | |\n| | | Vice President and Controller | 04/27/98 to 04/30/00 | |\n| | | Vice President and Treasurer | 01/27/97 to 04/26/98 | |\n| Gary J. Ray | 57 | Executive Vice President Refrigerated Foods | 11/01/99 to Present | 1988 |\n| | | Executive Vice President Operations | 07/27/92 to 10/31/99 | |\n| Eric A. Brown | 57 | Group Vice President Prepared Foods | 12/02/96 to Present | 1987 |\n| Steven G. 
Binder | 46 | Group Vice President Foodservice | 10/30/00 to Present | 1998 |\n| | | Vice President Foodservice | 11/02/98 to 10/29/00 | |\n| | | Director Foodservice Sales | 12/30/96 to 11/01/98 | |\n| Richard A. Bross | 52 | Group Vice President Hormel/President | 10/29/01 to Present | 1995 |\n| | | Hormel Foods International Corporation | | |\n| | | Vice President Hormel/President Hormel | 11/01/99 to 10/28/01 | |\n| | | Foods International Corporation | | |\n| | | Vice President Grocery Products | 01/30/95 to 10/31/99 | |\n| Jeffrey M. Ettinger | 45 | Group Vice President Hormel/President and | 03/03/03 to Present | 1998 |\n| | | Chief Executive Officer Jennie-O Turkey | | |\n| | | Store | | |\n| | | Group Vice President Hormel/President and | 10/29/01 to 03/02/03 | |\n| | | Chief Operating Officer Jennie-O Turkey | | |\n| | | Store | | |\n| | | Vice President Hormel/President and | 04/30/01 to 10/28/01 | |\n| | | Chief Operating Officer Jennie-O Turkey | | |\n| | | Store | | |\n| | | Vice President Hormel/President and Chief | 01/31/00 to 04/29/01 | |\n| | | Executive Officer Jennie-O Foods | | |\n| | | Vice President Hormel/Jennie-O Foods | 11/01/99 to 01/30/00 | |\n| | | Treasurer | 04/27/98 to 10/31/99 | |\n| | | Assistant Treasurer | 11/24/97 to 04/26/98 | |\n\n| Ronald W. Fielding | 50 | Group Vice President Sales Strategy | 06/02/03 to Present | 1997 |\n| --- | --- | --- | --- | --- |\n| | | Group Vice President Meat Products | 11/01/99 to 06/01/03 | |\n| | | Vice President Hormel/President Hormel | 01/27/97 to 10/31/99 | |\n| | | Foods International Corporation | | |\n| James A. Jorgenson | 59 | Senior Vice President Corporate Staff | 11/01/99 to Present | 1990 |\n| | | Vice President Human Resources | 12/30/91 to 10/31/99 | |\n| Mahlon C. Schneider | 64 | Senior Vice President External Affairs and | 11/01/99 to Present | 1990 |\n| | | General Counsel | | |\n| | | Vice President and General Counsel | 11/19/90 to 10/31/99 | |\n| Thomas R. 
Day | 45 | Vice President Foodservice Sales | 10/30/00 to Present | 2000 |\n| | | Director Foodservice Sales | 11/02/98 to 10/29/00 | |\n| | | Director Dubuque Foods Incorporated | 03/07/94 to 11/01/98 | |\n| | | Foodservice Sales and Marketing | | |\n| Forrest D. Dryden | 60 | Vice President Research and Development | 01/26/87 to Present | 1987 |\n| Jody H. Feragen | 47 | Vice President and Treasurer | 10/29/01 to Present 10/30/00 to | 2000 |\n| | | Treasurer | 10/28/01 | |\n| | | Assistant Treasurer, National Computer | 12/01/95 to 10/30/00 | |\n| | | Systems in Eden Prairie, Minnesota, a | | |\n| | | data collection and software company | | |\n| Dennis B. Goettsch | 50 | Vice President Foodservice Marketing | 10/30/00 to Present | 2000 |\n| | | Director Foodservice Marketing | 10/01/90 to 10/29/00 | |\n| Daniel A. Hartzog | 52 | Vice President Meat Products Sales | 10/30/00 to Present | 2000 |\n| | | Director of Meat Products Business | 07/03/00 to 10/29/00 | |\n| | | Development | | |\n| | | Meat Products Regional Sales Manager | 09/19/88 to 07/02/00 | |", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "| Houston, Texas | 93,000 | Owned | |\n| --- | --- | --- | --- |\n| Knoxville, Iowa | 130,000 | Owned | |\n| Osceola, Iowa | 334,000 | Owned | |\n| Quakertown, Pennsylvania | 13,000 | Owned | |\n| Rochelle, Illinois | 440,000 | Owned | |\n| Sparta, Wisconsin | 185,000 | Owned | |\n| Stockton, California | 139,000 | Owned | |\n| Tucker, Georgia | 259,000 | Owned | |\n| Wichita, Kansas | 80,000 | Owned | |\n| Warehouse/Distribution Centers | | | |\n| Austin, Minnesota—Annex | 83,000 | Owned | |\n| Dayton, Ohio | 140,000 | Owned | |\n| Eldridge, Iowa | 280,000 | Leased | October, 2005 |\n| Osceola, Iowa | 233,000 | Owned | |\n| Stockton, California | 232,000 | Leased | July, 2004 |\n| Tucker, Georgia | 96,000 | Leased | October, 2004 |\n| Research and Development Center | | | |\n| Austin, Minnesota | 59,000 | Owned | 
|\n| Corporate Offices | | | |\n| Austin, Minnesota | 203,000 | Owned | |\n| Dan's Prize, Inc. | | | |\n| Browerville, Minnesota—Plant | 52,000 | Owned | |\n| Long Prairie, Minnesota—Plant | 80,000 | Owned | |\n| Jennie-O Turkey Store, Inc. | | | |\n| Plants | | | |\n| Barron, Wisconsin | 372,000 | Owned | |\n| Faribault, Minnesota | 169,000 | Owned | |\n| Marshall, Minnesota | 142,000 | Owned | |\n| Melrose, Minnesota | 124,000 | Owned | |\n| Montevideo, Minnesota | 85,000 | Owned | |\n| Pelican Rapids, Minnesota | 242,000 | Owned | |\n| Willmar, Minnesota | 419,000 | Owned | |\n\n* Acres\n\nMany of these properties are not exclusive to any one of the Company's segments and a few of the properties are utilized in all five segments of the Company. The Company has renovation or building projects in progress at Austin, Minnesota; Fremont, Nebraska; Rochelle, Illinois; Osceola, Iowa; Los Animas, Colorado; and at various JOTS locations. The Company believes its operating facilities are well maintained and suitable for current production volumes and all volumes anticipated in the foreseeable future.\n\n## **Item 3.** *LEGAL PROCEEDINGS*\n\nThe Company knows of no pending material legal proceedings.\n\n#### **Item 4.** *SUBMISSION OF MATTERS TO A VOTE OF SECURITY HOLDERS*\n\nNo matters were submitted to shareholders during the fourth quarter of the 2003 fiscal year.\n\n#### **PART II**\n\n#### **Item 5.** *MARKET FOR THE REGISTRANT'S COMMON STOCK AND RELATED STOCKHOLDER MATTERS*\n\nThe high and low closing price of the Company's Common Stock and the dividends per share declared for each fiscal quarter of 2003 and 2002, respectively, are shown below:", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "## Compost: A Natural Cycle\n\nComposting is a natural process in which microorganisms and macro-organisms break down organic material (leaves, twigs, grass, etc.) into a dark crumbly soil amendment. 
Modern compost facilities use the same natural biological composting process. Their controlled-temperature process works faster, breaks down pesticide residues, and also kills weed seeds and plant diseases.\n\n#### Ask Your Compost Supplier\n\n**Whether you're buying direct from the composting facility, or from a local vendor, here are some good questions to ask:**\n\n- **• What ingredients go into your compost?**\n- **• What compost products or blends do you sell?**\n- **• Are there quality control or testing results available for these products? (These may be on the manufacturer's website.)**\n\t- **• Which product is best for my intended use?**\n\t- **• What application rate do you recommend?**\n\t\t- **• How much do I need for my area? (Or see pages 4-6.)**\n\n## Comparing Landscape Products\n\nA variety of soil and landscape products are sold. Here's a comparison:\n\n**Compost** is stable, decomposed organic matter, excellent for improving soil structure, fertility, moisture holding capacity, and plant growth.\n\n**Mulch** is any material applied to the soil surface. Woody mulches (high in carbon, low in nitrogen) like wood chips, bark and woody composts are great for woody plants. Annual plants should be mulched with nutrient-balanced mulches like compost, grass clippings, or leaves.\n\n**Peat Moss** is partially decayed sphagnum moss from peat bogs. It provides soil porosity, but not the nutrients or biological diversity for healthy soil that compost provides.\n\n**Fertilizers** are concentrated sources of plant nutrients, used in small amounts to supplement natural soil fertility.\n\n**Topsoil** that is sold is usually not native topsoil. 
Quality manufactured topsoils are a blend of native sandy sub-soils with composted organic matter to support soil life.\n\nCompost improves soil structure and plant growth by\n\n- Replenishing soil organic matter, and storing nutrients in plant-available forms\n- Supporting beneficial soil life\n- Reducing erosion and water run-off\n- Loosening clay soils for better root development (increasing soil pore space)\n- Retaining moisture in sandy soils so plants need less watering.", - "page_start": 3, - "page_end": 3, - "source_file": "CompostGuide.pdf" - }, - { - "text": "#### **COMPETITIVE CONDITIONS**\n\nWe operate in a highly competitive business environment. We compete with other national, regional, local and online retailers that may carry similar lines of merchandise, including department stores, specialty stores, off-price stores, boutiques and Internet businesses. Our specific competitors vary from market to market. We believe the keys to competing in our industry are providing great customer service and customer experiences in stores and online, which includes compelling price and value, fashion newness, quality of products, selection, convenience, technology, product fulfillment, personalization and appealing, relevant store environments in top locations.\n\n#### **INVENTORY**\n\nWe plan our merchandise purchases and receipts to coincide with expected sales trends. For instance, our merchandise purchases and receipts increase prior to our Anniversary Sale, which has historically extended over the last two weeks of July. We also purchase and receive a larger amount of merchandise in the fall as we prepare for the holiday shopping season (from late November through December). Beginning in 2012, we increased our investment in pack and hold inventory at Nordstrom Rack, which involves the strategic purchase of merchandise from some of our full-line stores' top brands in advance of the upcoming selling seasons to take advantage of favorable buying opportunities. 
This inventory is typically held for six months on average and has contributed to the growth in our Nordstrom Rack business. We pay for our merchandise purchases under the terms established with our vendors.\n\nIn order to offer merchandise that our customers want, we purchase from a wide variety of high-quality suppliers, including domestic and foreign businesses. We also have arrangements with agents and contract manufacturers to produce our private label merchandise. We expect our suppliers to meet our \"Nordstrom Partnership Guidelines,\" which address our corporate social responsibility standards for matters such as legal and regulatory compliance, labor, health and safety and the environment, and are available on our website at Nordstrom.com.\n\n#### **EMPLOYEES**\n\nDuring 2014, we employed approximately 67,000 employees on a full- or part-time basis. Due to the seasonal nature of our business, employment increased to approximately 68,000 employees in July 2014 and 73,500 in December 2014. All of our employees are non-union. We believe our relationship with our employees is good.\n\n#### **CAUTIONARY STATEMENT**\n\nCertain statements in this Annual Report on Form 10-K contain or may suggest \"forward-looking\" information (as defined in the Private Securities Litigation Reform Act of 1995) that involve risks and uncertainties, including, but not limited to, anticipated financial outlook for the fiscal year ending January 30, 2016, anticipated annual total and comparable sales rates, anticipated new store openings in existing, new and international markets, anticipated Return on Invested Capital and trends in our operations. Such statements are based upon the current beliefs and expectations of the company's management and are subject to significant risks and uncertainties. 
Actual future results may differ materially from historical results or current expectations depending upon factors including, but not limited to:\n\n- successful execution of our customer strategy, including expansion into new markets, acquisitions, investments in our stores and online, our ability to realize the anticipated benefits from growth initiatives, our ability to provide a seamless experience across all channels, and the timely completion of construction associated with newly planned stores, relocations and remodels, all of which may be impacted by the financial health of third parties,\n- our ability to manage the transformation of our business/financial model as we increase our investments in growth opportunities, including our online business and our ability to manage related organizational changes,\n- our ability to maintain relationships with our employees and to effectively attract, develop and retain our future leaders,\n- effective inventory management, disruptions in our supply chain and our ability to control costs,\n- the impact of any systems failures, cybersecurity and/or security breaches, including any security breach of our systems or those of a third-party provider that results in the theft, transfer or unauthorized disclosure of customer, employee or company information or compliance with information security and privacy laws and regulations in the event of such an incident,\n- successful execution of our information technology strategy,\n- our ability to effectively utilize data in strategic planning and decision making,\n- efficient and proper allocation of our capital resources,\n- reviewing of options and structure for a financial partner in regards to a potential transaction related to our credit card receivables,\n- our ability to safeguard our reputation and maintain our vendor relationships,\n- the impact of economic and market conditions and the resultant impact on consumer spending patterns,\n- our ability to respond to the 
business environment, fashion trends and consumer preferences, including changing expectations of service and experience in stores and online,\n- the effectiveness of planned advertising, marketing and promotional campaigns in the highly competitive retail industry,\n- weather conditions, natural disasters, health hazards, national security or other market disruptions, or the prospects of these events and the resulting impact on consumer spending patterns,\n- our compliance with applicable banking-related laws and regulations impacting our ability to extend credit to our customers, employment laws and regulations, certain international laws and regulations, other laws and regulations applicable to us, including the outcome of claims and litigation and resolution of tax matters, and ethical standards,\n- impact of the current regulatory environment and financial system and health care reforms,", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_JWN_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HRL_2004.pdf", - "query": "Does Hormel Food Corporation have any material legal proceedings pending?", - "target_page": 8, - "target_passage": "The Company knows of no pending material legal proceedings.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "markets its turkey products through its own sales force and independent brokers.\n\nThe acquisitions of Diamond Crystal Brands Nutritional Products in fiscal 2001 and the Century Foods International business in July of fiscal 2003 strengthened the Company's presence in the nutritional food products and supplements market. The Company currently operates as one of the largest companies providing nutritional products to the U.S. healthcare industry.\n\nThe Company acquired the Diamond Crystal Brands business from Imperial Sugar Co. in December of fiscal 2003. 
Diamond Crystal Brands packages and sells various sugar, sugar substitute, salt and pepper products, savory products, drink mixes and dessert mixes to retail and foodservice customers.\n\nInternationally, the Company markets its products through Hormel Foods International Corporation (HFIC), a wholly owned subsidiary. HFIC has a presence in the international marketplace through joint ventures and placement of personnel in strategic foreign locations such as China, Spain, and the Philippines. HFIC also has a global presence with minority positions in food companies in Spain (Campofrio Alimentacion S.A., 15% holding) and the Philippines (Purefoods-Hormel, 40% holding).\n\nThe Company has not been involved in any bankruptcy, receivership or similar proceedings during its history. Substantially all of the assets of the Company have been acquired in the ordinary course of business.\n\nThe Company had no significant change in the type of products produced or services rendered, nor in the markets or methods of distribution since the beginning of the fiscal year.\n\n## **(b)** *Industry Segment*\n\nThe Company's business is reported in five segments: Grocery Products, Refrigerated Foods, Jennie-O Turkey Store, Specialty Foods, and All Other. The contributions of each segment to net sales to unaffiliated customers and operating profit, and the presentation of certain other financial information by segment are reported in Note K of the Notes to Consolidated Financial Statements and in the Management's Discussion and Analysis of the Annual Stockholder's Report for the year ended October 25, 2003, incorporated herein by reference.\n\n#### **(c)** *Description of Business*\n\n## **Products and Distribution**\n\nThe Company's products primarily consist of meat and other food products. The meat products are sold fresh, frozen, cured, smoked, cooked and canned. 
The percentages of total revenues contributed by classes of similar products for the last three fiscal years of the Company are as follows:\n\n| Perishable meat | 50.3% | 53.0% | 54.7% |\n| --- | --- | --- | --- |\n| Nonperishable meat | 18.9 | 19.8 | 21.0 |\n| Poultry | 22.1 | 22.6 | 20.3 |\n| Other | 8.7 | 4.6 | 4.0 |\n| | 100.0% | 100.0% | 100.0% |\n\nReporting of revenues from external customers is based on similarity of products, as the same or similar products are sold across multiple distribution channels such as retail, foodservice or international. Revenues reported are based on financial information used to produce the Company's generalpurpose financial statements.\n\nPerishable meat includes fresh meats, sausages, hams, wieners and bacon (excluding JOTS products.) Nonperishable meat includes canned luncheon meats, shelf stable microwaveable entrees, stews, chilies, hash, meat spreads and other items that do not require refrigeration as well as frozen processed products. The Poultry category is composed primarily of JOTS products. The Other category primarily consists of nutritional food products and supplements, sugar and sugar substitutes, salt and pepper products, dessert mixes, food packaging (casings for dry sausage), and industrial gelatin products. The Other category has increased over the past two years primarily due to the following acquisitions: Century Foods International (July 2003), Diamond Crystal Brands (December 2002), and Diamond Crystal Brands Nutritional Products (April 2001).\n\nNo new product in fiscal 2003 required a material investment of Company assets.\n\nDomestically, the Company sells its products in all 50 states. Hormel products are sold through Company sales personnel, operating in assigned territories coordinated from district sales offices located in most of the larger U.S. cities, as well as independent brokers and distributors. 
As of October 25, 2003, the Company had approximately 600 sales personnel engaged in selling its products. Distribution of products to customers is by common carrier.\n\nThrough HFIC, the Company markets its products in various locations throughout the world. Some of the larger markets include Australia, Canada, China, England, Japan, Mexico and Micronesia. The distribution of export sales to customers is by common carrier, while the China operations own and operate their own delivery system. The Company, through HFIC, has licensed companies to manufacture various Hormel products internationally on a royalty basis, with the primary licensees being Tulip International of Denmark and CJ Corp. of South Korea.\n\n### **Raw Materials**\n\nThe Company has, for the past several years, been concentrating on processed branded products for consumers with year-round demand to minimize the seasonal variation experienced with commodity type products. Pork continues to be the primary raw material for Company products. Although hog producers are moving toward larger, more efficient year-round confinement operations and supply contracts are becoming increasingly prevalent in the industry, there is still a seasonal variation in the supply of fresh pork materials. The Company's expanding line of processed items has reduced but not eliminated the sensitivity of Company results to raw material supply and price fluctuations.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "| 3.2(1) | Bylaws as amended to date. (Incorporated by reference to Exhibit 3.2 to Hormel's Amendment No. 3 to Registration Statement on |\n| --- | --- |\n| | Form S-4, dated November 29, 2001, File No. 333-68498.) |\n| 4.1(1) | Indenture dated as of June 1, 2001, between Hormel and U.S. Bank Trust National Association, as Trustee relating to certain |\n| | outstanding debt securities. 
(Incorporated by reference to Exhibit 4.1 to Hormel's Registration Statement on Form S-4 dated, |\n| | August 28, 2001, File No. 333-68498.) |\n| 4.2(1) | Supplemental Indenture No. 1 dated as of June 4, 2001, to Indenture dated as of June 1, 2001, between Hormel and U.S. Bank |\n| | Trust National Association, as Trustee, relating to certain outstanding debt securities. (Incorporated by reference to Exhibit 4.2 to |\n| | Hormel's Registration Statement on Form S-4 dated August 28, 2001, File No. 333-68498.) |\n| 4.3(1) | Letter of Representations dated June 5, 2001, among Hormel, U.S. Bank Trust National Association, as Trustee, and The |\n| | Depository Trust Company relating to certain outstanding debt securities of Hormel. (Incorporated by reference to Exhibit 4.3 to |\n| | Hormel's Registration Statement on Form S-4 dated August 28, 2001, File No. 333-68498.) |\n| 4.4(1) | Pursuant to Item 601 (b)(4)(iii) of Regulation S-K, copies of instruments defining the rights of holders of certain long-term debt are |\n| | not filed. Hormel agrees to furnish copies thereof to the Securities and Exchange Commission upon request. |\n| 10.1(1) | U.S. $150,000,000 Credit Agreement, dated as of October 20, 2003, between Hormel, the banks identified on the signature pages |\n| | thereof, and Citicorp U.S.A. Inc., as Administrative Agent. (Incorporated by Reference to Exhibit 10.1 to Hormel's Current Report |\n| | on Form 8-K dated October 23, 2003.) |\n| 10.2(1)(3) | Hormel Foods Corporation Operators' Shares Incentive Compensation Plan. (Incorporated by Reference to Appendix A to |\n| | Hormel's definitive Proxy Statement filed on December 30, 1997, File No. 001-02402.) |\n| 10.3(1)(3) | Hormel Foods Corporation Supplemental Executive Retirement Plan (2002 Restatement.) (Incorporated by Reference to |\n| | Exhibit 10.3 to Hormel's Annual Report on Form 10-K for the fiscal year ended October 26, 2002, file No. 001-02402.) 
|\n| 10.4(1)(3) | Hormel Foods Corporation 2000 Stock Incentive Plan. (Incorporated by Reference to Exhibit A to Hormel's definitive Proxy |\n| | Statement filed on December 30, 1999, File No. 001-02402.) |\n\n(1) Document has previously been filed with the Securities and Exchange Commission and is incorporated herein by reference.\n\n(2) These Exhibits transmitted via EDGAR.\n\n(3) Management compensatory plan", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "| Item 1. | BUSINESS |\n| --- | --- |\n| Item 2. | PROPERTIES |\n| Item 3. | LEGAL PROCEEDINGS |\n| Item 4. | SUBMISSION OF MATTERS TO A VOTE OF SECURITY HOLDERS |\n| PART II | |\n| Item 5. | MARKET FOR THE REGISTRANT'S COMMON STOCK AND RELATED STOCKHOLDER MATTERS |\n| Item 6. | SELECTED FINANCIAL DATA |\n| Item 7. | MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS |\n| Item 7A. | QUANTITATIVE AND QUALITATIVE DISCLOSURES ABOUT MARKET RISK |\n| Item 8. | FINANCIAL STATEMENTS AND SUPPLEMENTAL DATA |\n| Item 9. | CHANGES IN AND DISAGREEMENTS WITH ACCOUNTANTS ON ACCOUNTING AND FINANCIAL DISCLOSURE |\n| Item 9A. | CONTROLS AND PROCEDURES |\n| PART III | |\n| Item 10. | DIRECTORS AND EXECUTIVE OFFICERS OF THE AGREEMENT |\n| Item 11. | EXECUTIVE COMPENSATION |\n| Item 12. | SECURITY OWNERSHIP OF CERTAIN BENEFICIAL OWNERS AND MANAGEMENT AND RELATED STOCKHOLDER |\n| | MATTERS |\n| Item 13. | CERTAIN RELATIONSHIPS AND RELATED TRANSACTIONS |\n| Item 14. | PRINCIPAL ACCOUNTING FEES AND SERVICES |\n| PART IV | |\n| Item 15. 
| EXHIBITS, FINANCIAL STATEMENT SCHEDULES AND REPORTS ON FORM 8-K |\n| SIGNATURES | |\n\n#### **PART I**\n\n## **Item 1.** *BUSINESS*\n\n## **Available Information**\n\nThe Company makes available, free of charge on its website at *www.hormel.com*, its annual report on Form 10-K, quarterly reports on Form 10-Q, current reports on Form 8-K, and amendments to those reports filed or furnished pursuant to Section 13(a) or 15(d) of the Securities Exchange Act of 1934. These reports are accessible under the \"Investor\" caption of the Company's website and are available as soon as reasonably practicable after such material is electronically filed with or furnished to the Securities and Exchange Commission, which is within 24 hours.\n\nThe Company has adopted a Code of Ethical Business Conduct that covers its officers and directors, which is available on the Company's website, free of charge, under the caption \"Corporate.\" The Company also adopted Corporate Governance Guidelines, which are available on the Company's website, free of charge, under the caption \"Investor.\"\n\n#### **(a)** *General Development of Business*\n\nHormel Foods Corporation, a Delaware corporation, was founded by George A. Hormel in 1891 in Austin, Minnesota, as George A. Hormel & Company. The Company started as a processor of meat and food products and continues in this line of business. The Company name was changed to Hormel Foods Corporation on January 31, 1995. The Company is primarily engaged in the production of a variety of meat and food products and the marketing of those products throughout the United States. 
Although pork and turkey remain the major raw materials for Hormel products, the Company has emphasized for several years the manufacture and distribution of branded, consumer packaged items rather than the commodity fresh meat business.\n\nThe Company's branding strategy led to the development of a joint venture between Hormel Foods Corporation and Excel Corporation, a wholly owned subsidiary of Cargill Incorporated. This joint venture began marketing and selling nationally branded fresh case ready beef and pork under the existing HORMEL ALWAYS TENDER brand name in fiscal year 2003. This 50 percent owned joint venture, named Precept Foods LLC, is based in Austin, Minn.\n\nIn fiscal 2001, the Jennie-O Turkey Store (JOTS) business was formed as a result of merging the Company's existing Jennie-O Foods, Inc. business with the operations of The Turkey Store Company, which was acquired in the second quarter of fiscal 2001. The Turkey Store Company was a turkey processing business headquartered in Barron, Wisconsin. The merged JOTS operation is currently the largest turkey processor in the world. JOTS", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "Livestock slaughtered by the Company is purchased by Company buyers and commission dealers at sale barns and terminal markets or under long-term supply contracts at locations principally in Minnesota, Illinois, Iowa, Nebraska, Colorado and South Dakota. The cost of livestock and the utilization of the Company's facilities are affected by both the level and the methods of pork production in the United States. The hog production industry has been rapidly moving to very large, vertically integrated, year-round confinement operations operating under long-term supply agreements. 
This has resulted in fewer hogs being available on the spot cash market, which decreases the supply of hogs on the open market and can severely diminish the utilization of slaughter facilities and increase the cost of the raw materials they produce. The Company, along with others in the industry, uses long-term supply contracts to manage the effects of this trend and to assure a stable supply of raw materials while minimizing extreme fluctuations in costs over the longterm. This may result in costs for live hogs that are either higher or lower than the spot cash market depending on the relationship of the cash spot market to contract prices. Contract costs are fully reflected in the Company's reported financial results. In fiscal 2003, the Company purchased 79 percent of its hogs under long-term supply contracts.\n\nIn fiscal 2003, JOTS raised approximately 57 percent of the turkeys needed to meet its raw material requirements for whole bird and processed turkey products. Turkeys not sourced within the Company are contracted with independent turkey growers. JOTS' turkey-raising farms are located throughout Minnesota and Wisconsin. Production costs in raising turkeys are primarily subject to fluctuations in feed grain prices and to a lesser extent fuel costs.\n\n## **Manufacturing**\n\nThe Company has plants in Austin, Minnesota; Fremont, Nebraska; and Beijing, China that slaughter livestock for processing. Quality Pork Processors of Dallas, Texas, operates the slaughter facility at Austin under a custom slaughter arrangement.\n\nFacilities that produce manufactured items are located in Algona, Iowa; Aurora, Illinois; Austin, Minnesota; Beloit, Wisconsin; Bondurant, Iowa; Ft. 
Dodge, Iowa; Fremont, Nebraska; Houston, Texas; Knoxville, Iowa; Mitchellville, Iowa; Osceola, Iowa; Perrysburg, Ohio; Quakertown, Pennsylvania; Rochelle, Illinois; Savannah, Georgia; Sparta, Wisconsin; Stockton, California; Tucker, Georgia; Visalia, California; Wichita, Kansas; Beijing, China; and Shanghai, China. Company products are also custom manufactured by several other companies. The following are the Company's larger custom manufacturers: Lakeside Packing Company, Manitowoc, Wisconsin; Schroeder Milk, Maplewood, Minnesota; Steuben Foods, Jamaica, New York; Power Packaging, St. Charles, Illinois; Criders, Stilmore, Georgia; Tony Downs, St. James, Minnesota; and Concept Foods, Alma, Kansas. Power Logistics, Inc., based in St. Charles, Illinois, operates distribution centers for the Company in Dayton, Ohio, and Osceola, Iowa.\n\nThe Company's turkey slaughter and processing operations are located in Barron, Wisconsin; Faribault, Minnesota; Melrose, Minnesota; Montevideo, Minnesota; Pelican Rapids, Minnesota; and Willmar, Minnesota.\n\n#### **Patents and Trademarks**\n\nThere are numerous patents and trademarks that are important to the Company's business. The Company holds seven foreign and 47 U.S. issued patents. Some of the trademarks are registered and some are not. In recognition of the importance of these assets, the Company created a subsidiary, Hormel Foods, LLC, in 1998 to create, own, maintain and protect most of the Company's trademarks and patents. 
Some of the more significant owned or licensed trademarks used in the Company's segments are:\n\nHORMEL, ALWAYS TENDER, AMERICAN CLASSICS, AUSTIN BLUES, BLACK LABEL, CARAPELLI, CHI-CHI'S, CURE 81, CUREMASTER, DAN'S PRIZE, DIAMOND CRYSTAL, DI LUSSO, DINTY MOORE, DUBUQUE, EL TORITO, FAST 'N EASY, HERB-OX, HERDEZ, HOMELAND, HOUSE OF TSANG, JENNIE-O TURKEY STORE, KID'S KITCHEN, LAYOUT, LITTLE SIZZLERS, MARRAKESH EXPRESS, MARY KITCHEN, OLD SMOKEHOUSE, PATAK'S, PELOPONNESE, PILLOW PACK, QUICK MEAL, RANGE BRAND, ROSA GRANDE, SANDWICH MAKER, SPAM, STAGG, SWEET THING, THICK & EASY and WRANGLERS.\n\n#### **Customers and Backlog Orders**\n\nDuring fiscal year 2003, no customer accounted for more than 10 percent of total Company sales. The five largest customers in each segment make up approximately the following percentage of segment sales: 39 percent of Grocery Products, 39 percent of Refrigerated Foods, 35 percent of JOTS, 51 percent of Specialty Foods, and 27 percent of All Other. The loss of one or more of the top customers in any of these segments could have a material adverse effect on the results of such segment. Backlog orders are not significant due to the perishable nature of a large portion of the products. Orders are accepted and shipped on a current basis.\n\n#### **Competition**\n\nThe production and sale of meat and food products in the United States and internationally are highly competitive. The Company competes with manufacturers of pork and turkey products, as well as national and regional producers of other meat and protein sources, such as beef, chicken and fish. The Company believes that its largest domestic competitors for its Refrigerated Foods segment in 2003 were Tyson Foods, Smithfield Foods and ConAgra Foods; for its Grocery Products segment, ConAgra Foods, Dial Corp. and Campbell Soup Co.; and for JOTS, ConAgra Foods and Cargill, Inc.\n\nAll Hormel segments compete on the basis of price, product quality, brand identification and customer service. 
Through aggressive marketing and strong quality assurance programs, the Company's strategy is to provide higher quality products that possess strong brand recognition, which would then support higher value perceptions from customers.\n\nThe Company competes using this same strategy in international markets around the world.\n\n#### **Research and Development**\n\nResearch and development continues to be a vital part of the Company's strategy to extend existing brands and expand into new branded items. The expenditures for research and development for fiscal 2003, 2002 and 2001, respectively, were $13,165,000, $12,097,000 and $11,478,000. There are 42 professional employees engaged in full time research, 19 in the area of improving existing products and 23 in developing new products.\n\n### **Employees**\n\nAs of October 25, 2003, the Company had over 16,000 active employees.", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "#### Table of Contents\n\n# Certain Investigations and Other Matters\n\nWe regularly receive requests for information, including subpoenas, from regulators and governmental authorities such as the National Highway Traffic Safety Administration, the National Transportation Safety Board, the Securities and Exchange Commission (\"SEC\"), the Department of Justice (\"DOJ\"), and various local, state, federal, and international agencies. The ongoing requests for information include topics such as operations, technology (e.g., vehicle functionality, vehicle incidents, Autopilot and FSD Capability), compliance, finance, data privacy, and other matters related to Tesla's business, its personnel, and related parties. We routinely cooperate with such formal and informal requests for information, investigations, and other inquiries. To our knowledge no government agency in any ongoing investigation has concluded that any wrongdoing occurred. We cannot predict the outcome or impact of any ongoing matters. 
Should the government decide to pursue an enforcement action, there exists the possibility of a material adverse impact on our business, results of operation, prospects, cash flows, financial position or brand.\n\nWe are also subject to various other legal proceedings, risks and claims that arise from the normal course of business activities. For example, during the second quarter of 2023, a foreign news outlet reported that it obtained certain misappropriated data including, purportedly non-public Tesla business and personal information. Tesla has made notifications to potentially affected individuals (current and former employees) and regulatory authorities and we are working with certain law enforcement and other authorities. On August 5, 2023, a putative class action was filed in the United States District Court for the Northern District of California, purportedly on behalf of all U.S. individuals impacted by the data incident, followed by several additional lawsuits, that each assert claims under various state laws and seeks monetary damages and other relief. If an unfavorable ruling or development were to occur in these or other possible legal proceedings, risks and claims, there exists the possibility of a material adverse impact on our business, results of operations, prospects, cash flows, financial position or brand.\n\n#### Note 11 – Variable Interest Entity Arrangements\n\nThe aggregate carrying values of the variable interest entities' assets and liabilities, after elimination of any intercompany transactions and balances, in the consolidated balance sheets were as follows (in millions):\n\n| | September 30, 2024 | December 31, 2023 |\n| --- | --- | --- |\n| Assets | | |\n| Current assets | | |\n| Cash and cash equivalents | $ 51 | $ 66 |\n| Accounts receivable, net | 28 | 13 |\n| Prepaid expenses and other current assets | 263 | 361 |\n| Total current assets | 342 | 440 |\n| Operating lease vehicles, net | 451 | — |\n| Solar energy systems, net | 2,524 | 3,278 |\n| Other non-current assets | 190 | 369 |\n| Total assets | $ 3,507 | $ 4,087 |\n| Liabilities | | |\n| Current liabilities | | |\n| Accrued liabilities and other | $ 36 | $ 67 |\n| Deferred revenue | 7 | 6 |\n| Current portion of debt and finance leases | 1,930 | 1,564 |\n| Total current liabilities | 1,973 | 1,637 |\n| Deferred revenue, net of current portion | 81 | 99 |\n| Debt and finance leases, net of current portion | 1,826 | 2,041 |\n| Total liabilities | $ 3,880 | $ 3,777 |", - "page_start": 29, - "page_end": 29, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "premises within the complex. The agreement is subject to the implementation of proposed gaming law reforms and a tax structure acceptable to the Company, and obtaining required planning and other approvals.\n\n**Macau.** In connection with the Company's pending joint venture in Macau (see Note 1), the Company has committed to invest up to $280 million in the entity in the form of capital contributions and shareholder loans.\n\n**New York Racing Association.** The Company has an understanding with the New York Racing Association (\"NYRA\") to manage video lottery terminals (\"VLTs\") at NYRA's Aqueduct horseracing facility in metropolitan New York. 
The Company would assist in the development of the facility, including providing project financing, and would manage the facility for a fee. Work was halted on the VLT facility in August 2003 pending the outcome of an investigation of certain aspects of NYRA's operations by Federal prosecutors. In December 2003, NYRA reached agreement with the Justice Department whereby NYRA was indicted with prosecution deferred. NYRA agreed to pay a fine and the indictment will be dismissed with prejudice upon NYRA implementing certain reforms and otherwise complying with the terms of the agreement. The Company's participation is subject to a definitive agreement, regulatory approvals and certain legislative changes by the State of New York.\n\n**The Residences at MGM Grand.** In July 2004, the venture obtained construction financing for up to $210 million for the development of the first tower. The Company has provided a guaranty for up to 50% of the interest and principal payment obligations on the construction financing as well as a joint and several completion guaranty with its partners. The Company recorded the value of the guaranty obligation, approximately $2 million, in other long-term liabilities.\n\n**Other Guarantees.** The Company is party to various guarantee contracts in the normal course of business, which are generally supported by letters of credit issued by financial institutions. The Company's Senior Credit Facility limits the amount of letters of credit that can be issued to $200 million, and the amount of available borrowings under the Senior Credit Facility is reduced by any outstanding letters of credit. At December 31, 2004, the Company had provided a $50 million letter of credit to support the Economic Development Corporation of the City of Detroit bonds referred to above, which are a liability of the Company.\n\n**Litigation.** The Company is a party to various legal proceedings, most of which relate to routine matters incidental to its business. 
Management does not believe that the outcome of such proceedings will have a material adverse effect on the Company's financial position or results of operations.", - "page_start": 71, - "page_end": 71, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "requirements of the Corporations Law and the ASX Listing Rules, the Company and Mr Bradley agreed to defer the first issue of Shares, making both issues conditional on shareholder approval.\n\nThe second agreement was with Clough Engineering Limited, pursuant to which it agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, 6,775,000 shares, within 7 days of that meeting.\n\nOn 15 June 2000 the Company announced that with effect from 1 July 2000 it acquired a 50% interest in OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Shares in the Company. OIS MOC Joint Venture Pty Ltd owns the goodwill of a successful labour hire company. That company is to be renamed Mermaid Labour and Management Limited (MLML).\n\nMLML offers a full labour hire service inclusive of industrial relations consultancy, negotiating agreements and awards and, where appropriate, provides ongoing management of the labour force.\n\nThe effective date is 1 July 2000. 
The Company will issue 800,000 ordinary fully paid shares in Mermaid Marine Australia Limited.\n\nThere have not been any other matters or circumstances, other than those referred to in the Chairman's and Operations Reviews and/or in the financial statements and notes attached thereto, that have arisen since the end of the Financial Year that have significantly affected, or may significantly affect Mermaid's operations, the results of those operations or its state of affairs in future financial years.\n\nFUTURE DEVELOPMENTS\n\nThe Chairman's and Operations Reviews give indications, in general terms, of likely developments in Mermaid's operations in future financial years and the expected results of those operations.\n\nENVIRONMENTAL REGULATION\n\nThe development of the Company's Dampier and Broome bases is subject to the approval of the Western Australian Environmental Protection Authority.\n\nSHARE OPTIONS\n\nAs at the date of this report the Company had a total of 7,115,000 unissued shares under option as follows: **30 November 2000 Options**\n\n> As at the date of this report there are outstanding 6,500,000 options to acquire 6,500,000 ordinary shares in the Company at an issue price of 0.75 cents per ordinary share. Each of these options expires on 30 November 2000.
The claim alleges that the Company received preferential payments from the customer during the ninety days before the customer filed for bankruptcy protection. The claim was brought in February 2003. The Company has recorded an accrual with respect to this contingency, in an amount substantially less than the full amount of the claim, which represents the best estimate within the range of likely exposure, and intends to vigorously defend against the claim. Given the nature of this claim, it is possible that the ultimate outcome could differ from the recorded amount.\n\n#### **Significant Customer**\n\nOne office furniture customer accounted for approximately 13% of consolidated net sales in 2003 and 14% in 2002 and 2001.\n\n#### **Operating Segment Information**\n\nIn accordance with SFAS No. 131, \"Disclosures about Segments of an Enterprise and Related Information,\" management views the Company as being in two operating segments: office furniture and hearth products, with the former being the principal segment. The office furniture segment manufactures and markets a broad line of metal and wood commercial and home office furniture, which includes storage products, desks, credenzas, chairs, tables, bookcases, freestanding office partitions and panel systems, and other related products. The hearth products segment manufactures and markets a broad line of manufactured gas-, pellet-, and wood-burning fireplaces and stoves, fireplace inserts, gas logs, and chimney systems, principally for the home.\n\nThe Company's hearth products segment is somewhat seasonal, with the third (July-September) and fourth (October-December) fiscal quarters historically having higher sales than the prior quarters. 
In fiscal 2003, 56% of consolidated net sales of hearth products were generated in the third and fourth quarters.\n\nFor purposes of segment reporting, intercompany sales transfers between segments are not material, and operating profit is income before income taxes exclusive of certain unallocated corporate expenses. These unallocated corporate expenses include the net costs of the Company's corporate operations, interest income, and interest expense. Management views interest income and expense as corporate financing costs and not as an operating segment cost. In addition, management applies an effective income tax rate to its consolidated income before income taxes so income taxes are not reported or viewed internally on a segment basis. Identifiable assets by segment are those assets applicable to the respective industry segments. Corporate assets consist principally of cash and cash equivalents, short-term investments, and corporate office real estate and related equipment.\n\nNo geographic information for revenues from external customers or for long-lived assets is disclosed, since the Company's primary market and capital investments are concentrated in the United States.\n\nReportable segment data reconciled to the consolidated financial statements for the years ended 2003, 2002, and 2001 is as follows:\n\n| (In thousands) | 2003 | 2002 | 2001 |\n| --- | --- | --- | --- |\n| Net sales: | | | |\n| Office furniture | $ 1,304,054 | $ 1,279,059 | $ 1,366,312 |\n| Hearth products | 451,674 | 413,563 | 426,126 |\n| | $ 1,755,728 | $ 1,692,622 | $ 1,792,438 |\n| Operating profit: | | | |\n| Office furniture(a) | $ 130,080 | $ 130,014 | $ 112,405 |\n| Hearth products(a) | 54,433 | 44,852 | 39,282 |\n| Total operating profit | 184,513 | 174,866 | 151,687 |\n| Unallocated corporate expenses | (33,582) | (34,312) | (35,426) |\n| Income before income taxes | $ 150,931 | $ 140,554 | $ 116,261 |\n| Depreciation and amortization expense: | | | |\n| Office furniture | $ 54,121 | $ 48,546 | $ 58,658 |\n| Hearth products | 13,599 | 13,993 | 20,389 |\n| General corporate(b) | 5,052 | 6,216 | 2,338 |\n| | $ 72,772 | $ 68,755 | $ 81,385 |\n| Capital expenditures: | | | |\n| Office furniture | $ 17,619 | $ 17,183 | $ 29,785 |\n| Hearth products | 12,577 | 6,132 | 7,149 |\n| General corporate | 7,312 | 2,570 | (83) |\n| | $ 37,508 | $ 25,885 | $ 36,851 |\n| Identifiable assets: | | | |\n| Office furniture | $ 452,350 | $ 494,559 | $ 526,712 |\n| Hearth products | 303,811 | 305,326 | 320,199 |\n| General corporate(b) | 265,665 | 220,667 | 114,980 |\n| | $ 1,021,826 | $ 1,020,552 | $ 961,891 |\n\n*(a)Included in operating profit for the office furniture segment are pretax charges of $8.5 million, $3.0 million, and $22.5 million for closing of facilities and impairment charges in 2003, 2002, and 2001, respectively. Included in operating profit for the hearth products segment is a pretax charge of $1.5 million for closing of facilities and impairment charges in 2001.*\n\n*(b)In 2002 the Company's information technologies departments became a shared service at the corporate level. The costs continue to be charged out to the segments; however, the assets and related depreciation are now classified as general corporate.*", - "page_start": 52, - "page_end": 52, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "*Transfer and Disposal Services.* We own or operate 96 transfer stations. We deposit waste at these stations, as do other private haulers and municipal haulers, for compaction and transfer to trailers for transport to disposal sites or recycling facilities. 
As of December 31, 2004, we owned or operated 58 landfills, which had approximately 8,904 permitted acres and total available permitted and probable expansion disposal capacity of approximately 1.7 billion in-place cubic yards. The in-place capacity of our landfills is subject to change based on engineering factors, requirements of regulatory authorities and the ability to expand our sites successfully. Some of our landfills accept non-hazardous special waste, including utility ash, asbestos and contaminated soils. See \"— Properties.\"\n\nMost of our existing landfill sites have the potential for expanded disposal capacity beyond the currently permitted acreage. We monitor the availability of permitted disposal capacity at each of our landfills and evaluate whether to pursue expansion at a given landfill based on estimated future waste volumes and prices, market needs, remaining capacity and likelihood of obtaining an expansion. To satisfy future disposal demand, we are currently seeking to expand permitted capacity at certain of our landfills, although no assurances can be made that all future expansions will be permitted as designed.\n\n*Other Services.* We have 35 materials recovery facilities and other recycling operations, which are generally required to fulfill our obligations under long-term municipal contracts for residential collection services. These facilities sort recyclable paper, aluminum, glass and other materials. Most of these recyclable materials are internally collected by our residential collection operations. In some areas, we receive commercial and industrial solid waste that is sorted at our facilities into recyclable materials and nonrecyclable waste. The recyclable materials are salvaged, repackaged and sold to third parties and the nonrecyclable waste is disposed of at landfills or incinerators. Wherever possible, our strategy is to reduce our exposure to fluctuations in recyclable commodity prices by utilizing third party recycling facilities, thereby minimizing our recycling investment.\n\nWe provide remediation and other heavy construction services primarily through our subsidiary located in Missouri.\n\nWe also have a Texas-based compost, mulch and soil business at which yard, mill and other waste is processed, packaged and sold as various products.\n\n### **Sales and Marketing**\n\nWe seek to provide quality services that will enable our company to maintain high levels of customer satisfaction. We derive our business from a broad customer base which we believe will enable our company to experience stable growth. We focus our marketing efforts on continuing and expanding business with existing customers, as well as attracting new customers.\n\nWe employ approximately 500 sales and marketing employees. Our sales and marketing strategy is to provide high-quality, comprehensive solid waste collection, recycling, transfer and disposal services to our customers at competitive prices. We target potential customers of all sizes, from small quantity generators to large \"Fortune 500\" companies and municipalities.\n\nMost of our marketing activity is local in nature. However, in 2000 we initiated a national accounts program in response to our customers' needs.\n\nWe generally do not change the tradenames of the local businesses we acquire, and therefore we do not operate nationally under any one mark or tradename. Rather, we rely on the goodwill associated with the acquired companies' local tradenames as used in each geographic market in which we operate.\n\n### **Customers**\n\nWe provide services to commercial, industrial, municipal and residential customers. 
No one customer has individually accounted for more than 10% of our consolidated revenue or of our reportable segment revenue in any of the last three years.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "claims. The Company currently has a claim for approximately $7.6 million pending against it arising out of the bankruptcy of a customer filed in 2001. The Company was named a critical vendor by the bankruptcy court and, accordingly, was paid in full for all outstanding receivables. The claim alleges that the Company received preferential payments from the customer during the ninety days before the customer filed for bankruptcy protection. The claim was brought in February 2003. The Company has recorded an accrual with respect to this contingency, in an amount substantially less than the full amount of the claim, which represents the best estimate within the range of likely exposure and intends to vigorously defend against the claim. Given the nature of this claim, it is possible that the ultimate outcome could differ from the recorded amount. It is our opinion, after consultation with legal counsel, that additional liabilities, if any, resulting from these matters, are not expected to have a material adverse effect on our financial condition, although such matters could have a material effect on our quarterly or annual operating results and cash flows when resolved in a future period.\n\n#### **Looking Ahead**\n\nThe Company is encouraged by indications that the economy is recovering and is cautiously optimistic that the office furniture industry will begin to rebound in the second half of 2004. 
Global Insight, BIFMA's forecasting consultant, increased its estimate for the industry shipment growth from 2.4% to 5.6% in 2004, with first quarter flat and improving as the year progresses.\n\nThe hearth segment is impacted by the housing market, which may experience a slight decline from record high levels, but is expected to remain at healthy levels. Management believes its strong brand recognition and new innovative product introductions in addition to strengthening distribution will allow it to grow its hearth segment.\n\nOn January 5, 2004, the Company completed the acquisition of Paoli Inc., a leading provider of wood case goods and seating. The Company intends to continue to build on Paoli's strong position in the market and excellent selling capabilities while leveraging its lean enterprise practices to achieve greater cost efficiencies and improved customer performance.\n\nThe Company's strategy is to grow its business through aggressive investment in building its brands, enhancing its strong member-owner culture, and remaining focused on its rapid continuous improvement program to continue to build best total cost. 
The Company plans to reinvest a large portion of its cost savings from plant consolidations and its rapid continuous improvement program to continue to build brands, product solutions, and selling models.\n\nBecause of the following factors, as well as other variables affecting the Company's operating results, past financial performance may not be a reliable indicator of future performance, and historical trends should not be used to anticipate results or trends in future periods:\n\n**•** competition within the office furniture and fireplace industries, including competition from imported products and competitive pricing;\n\n**•** increases in the cost of raw materials, including steel, which is the Company's largest raw material category;\n\n**•** increases in the cost of health care benefits provided by the Company;\n\n**•** reduced demand for the Company's storage products caused by changes in office technology, including the change from paper record storage to electronic record storage;\n\n**•** the effects of economic conditions on demand for office furniture, customer insolvencies and related bad debts, and claims against the Company that it received preferential payments;\n\n**•** changes in demand and order patterns from the Company's customers, particularly its top ten customers, which represented approximately 36% of net sales in 2003;\n\n**•** issues associated with acquisitions and integration of acquisitions;\n\n**•** the ability of the Company to realize cost savings and productivity improvements from its cost containment and business simplification initiatives;\n\n**•** the ability of the Company to realize financial benefits from investments in new products;\n\n**•** the ability of the Company's distributors and dealers to successfully market and sell the Company's products; and\n\n**•** the availability and cost of capital to finance planned growth.", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_HNI_2003.pdf" - } - ] - }, - { - 
"references": { - "source_file": "Open_Data_Report.pdf", - "query": "What is Mexican Farm Subsidies ?", - "target_page": 9, - "target_passage": "an online tool to analyze how the federal government allocates those subsidies", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "currently in favourable status are in that category or show a strong positive trend. The Commission and the European Environmental Agency will provide guidance to Member States in 2020 on how to select and prioritise species and habitats.\n\n## *2.2.2. Bringing nature back to agricultural land*\n\nAs guardians of our land, farmers play a vital role in preserving biodiversity. They are among the first to feel the consequences when biodiversity is lost but also among the first to reap the benefits when it is restored. Biodiversity enables them to provide us with **safe, sustainable, nutritious and affordable food** and provides them with the income they need to thrive and develop. European farmers are an essential part of the EU's future and must continue to be the social and economic hub of many communities across our Union.\n\nAt the same time, certain agricultural practices are a key driver of biodiversity decline. This is why it is important to work with farmers to **support and incentivise the transition to fully sustainable practices**. Improving the condition and diversity of agroecosystems will increase the sector's resilience to climate change, environmental risks and socioeconomic shocks, while creating new jobs, for example in organic farming, rural tourism or recreation.\n\nTo support the long-term sustainability of both nature and farming, this strategy will work in tandem with the new **Farm to Fork Strategy** and the **new Common Agricultural Policy (CAP)**, including by promoting eco-schemes and result-based payment schemes. 
In implementing the Biodiversity and the Farm to Fork Strategies, the Commission will closely monitor progress and improvements in terms of food security and farmers' income. The Commission will ensure that the CAP Strategic plans are assessed against robust climate and environmental criteria, and that Member States set explicit national values for the relevant targets set in this strategy, as well as in the Farm to Fork Strategy. These plans should lead to sustainable practices such as precision agriculture, organic farming, agro-ecology, agro-forestry, low-intensive permanent grassland, and stricter animal welfare standards.\n\nFarmland birds and insects, particularly pollinators, are key indicators of the health of agroecosystems and are vital for agricultural production and food security. Their alarming decline must be reversed. As set out in the Farm to Fork Strategy, the Commission will take action to reduce by **50% the overall use of – and risk from – chemical pesticides by 2030** and reduce by 50% the use of more hazardous pesticides by 2030. This must be supported by the full implementation of the EU Pollinators initiative<sup>31</sup>. By the end of 2020, the Commission will review the initiative and propose additional measures if necessary. To provide space for wild animals, plants, pollinators and natural pest regulators, there is an urgent need to bring back **at least 10% of agricultural area under high-diversity landscape features**. These include, *inter alia*, buffer strips, rotational or non-rotational fallow land, hedges, non-productive trees, terrace walls, and ponds. These help enhance carbon sequestration, prevent soil erosion and depletion, filter air and water, and support climate adaptation. In addition, more biodiversity often helps lead to more agricultural production. 
Member States will need to translate the 10% EU target to a lower geographical scale to ensure connectivity among habitats, especially through the CAP instruments and CAP Strategic Plans, in line with the Farm to Fork Strategy, and through the implementation of the Habitats Directive. The\n\n<sup>31</sup> EU Pollinators initiative (COM(2018) 395).", - "page_start": 7, - "page_end": 7, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "progress towards the target will be under constant review, and adjustment if needed, to mitigate against undue impact on biodiversity, food security and farmers' competitiveness.\n\nAgroecology can provide healthy food while maintaining productivity, increase soil fertility and biodiversity, and reduce the footprint of food production. Organic farming in particular holds great potential for farmers and consumers alike. The sector creates jobs and attracts young farmers. Organic farming also provides 10-20 % more jobs per hectare than conventional farms, and creates added value for agricultural products32 . To make the most of this potential, at least **25% of the EU's agricultural land must be organically farmed by 2030**. In addition to CAP measures, the Commission will put forward an Action Plan on organic farming, helping Member States stimulate both supply and demand of organic products. It will also ensure consumer's trust through promotion campaigns and green public procurement. 
In the implementation of the EU-wide agroecological targets set out in this strategy and in the Farm to Fork Strategy, the different starting points and differences in progress already made in Member States will be taken into account.\n\nThe uptake of agroforestry support measures under rural development should be increased as it has great potential to provide multiple benefits for biodiversity, people and climate.\n\nThe decline of **genetic diversity** must also be reversed, including by facilitating the use of traditional varieties of crops and breeds. This would also bring health benefits through more varied and nutritious diets. The Commission is considering the revision of marketing rules for traditional crop varieties in order to contribute to their conservation and sustainable use. The Commission will also take measures to facilitate the registration of seed varieties, including for organic farming, and to ensure easier market access for traditional and locally adapted varieties.\n\n#### *2.2.3. Addressing land take and restoring soil ecosystems*\n\nSoil is one of the most complex of all ecosystems. It is a habitat in its own right, and home to an incredible diversity of organisms that regulate and control key ecosystem services such as soil fertility, nutrient cycling and climate regulation. **Soil is a hugely important non-renewable resource**, vital for human and economic health, as well as the production of food and new medications.\n\nIn the EU, the degradation of soil is having considerable environmental and economic consequences. Poor land management, such as deforestation, overgrazing, unsustainable farming and forestry practices, construction activities and land sealing are among the main causes of this situation33 . Despite recent reductions in the pace of soil sealing, fertile soils continue to be lost to land take and urban sprawl34. 
When compounded by\n\n<sup>32</sup> OECD (2016), Farm Management Practices to Foster Green Growth.\n\n<sup>33</sup> European Environment Agency (2019), EEA Signals 2019: Land and Soil in Europe.\n\n<sup>34</sup> European Environment Agency and Swiss Federal Office for the Environment (FOEN) (2016), Urban sprawl in Europe.", - "page_start": 8, - "page_end": 8, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "Livestock slaughtered by the Company is purchased by Company buyers and commission dealers at sale barns and terminal markets or under long-term supply contracts at locations principally in Minnesota, Illinois, Iowa, Nebraska, Colorado and South Dakota. The cost of livestock and the utilization of the Company's facilities are affected by both the level and the methods of pork production in the United States. The hog production industry has been rapidly moving to very large, vertically integrated, year-round confinement operations operating under long-term supply agreements. This has resulted in fewer hogs being available on the spot cash market, which decreases the supply of hogs on the open market and can severely diminish the utilization of slaughter facilities and increase the cost of the raw materials they produce. The Company, along with others in the industry, uses long-term supply contracts to manage the effects of this trend and to assure a stable supply of raw materials while minimizing extreme fluctuations in costs over the longterm. This may result in costs for live hogs that are either higher or lower than the spot cash market depending on the relationship of the cash spot market to contract prices. Contract costs are fully reflected in the Company's reported financial results. In fiscal 2003, the Company purchased 79 percent of its hogs under long-term supply contracts.\n\nIn fiscal 2003, JOTS raised approximately 57 percent of the turkeys needed to meet its raw material requirements for whole bird and processed turkey products. 
Turkeys not sourced within the Company are contracted with independent turkey growers. JOTS' turkey-raising farms are located throughout Minnesota and Wisconsin. Production costs in raising turkeys are primarily subject to fluctuations in feed grain prices and to a lesser extent fuel costs.\n\n## **Manufacturing**\n\nThe Company has plants in Austin, Minnesota; Fremont, Nebraska; and Beijing, China that slaughter livestock for processing. Quality Pork Processors of Dallas, Texas, operates the slaughter facility at Austin under a custom slaughter arrangement.\n\nFacilities that produce manufactured items are located in Algona, Iowa; Aurora, Illinois; Austin, Minnesota; Beloit, Wisconsin; Bondurant, Iowa; Ft. Dodge, Iowa; Fremont, Nebraska; Houston, Texas; Knoxville, Iowa; Mitchellville, Iowa; Osceola, Iowa; Perrysburg, Ohio; Quakertown, Pennsylvania; Rochelle, Illinois; Savannah, Georgia; Sparta, Wisconsin; Stockton, California; Tucker, Georgia; Visalia, California; Wichita, Kansas; Beijing, China; and Shanghai, China. Company products are also custom manufactured by several other companies. The following are the Company's larger custom manufacturers: Lakeside Packing Company, Manitowoc, Wisconsin; Schroeder Milk, Maplewood, Minnesota; Steuben Foods, Jamaica, New York; Power Packaging, St. Charles, Illinois; Criders, Stilmore, Georgia; Tony Downs, St. James, Minnesota; and Concept Foods, Alma, Kansas. Power\n\nLogistics, Inc., based in St. Charles, Illinois, operates distribution centers for the Company in Dayton, Ohio, and Osceola, Iowa.\n\nThe Company's turkey slaughter and processing operations are located in Barron, Wisconsin; Faribault, Minnesota; Melrose, Minnesota; Montevideo, Minnesota; Pelican Rapids, Minnesota; and Willmar, Minnesota.\n\n#### **Patents and Trademarks**\n\nThere are numerous patents and trademarks that are important to the Company's business. The Company holds seven foreign and 47 U.S. issued patents. 
Some of the trademarks are registered and some are not. In recognition of the importance of these assets, the Company created a subsidiary, Hormel Foods, LLC, in 1998 to create, own, maintain and protect most of the Company's trademarks and patents. Some of the more significant owned or licensed trademarks used in the Company's segments are:\n\nHORMEL, ALWAYS TENDER, AMERICAN CLASSICS, AUSTIN BLUES, BLACK LABEL, CARAPELLI, CHI-CHI'S, CURE 81, CUREMASTER, DAN'S PRIZE, DIAMOND CRYSTAL, DI LUSSO, DINTY MOORE, DUBUQUE, EL TORITO, FAST 'N EASY, HERB-OX, HERDEZ, HOMELAND, HOUSE OF TSANG, JENNIE-O TURKEY STORE, KID'S KITCHEN, LAYOUT, LITTLE SIZZLERS, MARRAKESH EXPRESS, MARY KITCHEN, OLD SMOKEHOUSE, PATAK'S, PELOPONNESE, PILLOW PACK, QUICK MEAL, RANGE BRAND, ROSA GRANDE, SANDWICH MAKER, SPAM, STAGG, SWEET THING, THICK & EASY and WRANGLERS.\n\n#### **Customers and Backlog Orders**\n\nDuring fiscal year 2003, no customer accounted for more than 10 percent of total Company sales. The five largest customers in each segment make up approximately the following percentage of segment sales: 39 percent of Grocery Products, 39 percent of Refrigerated Foods, 35 percent of JOTS, 51 percent of Specialty Foods, and 27 percent of All Other. The loss of one or more of the top customers in any of these segments could have a material adverse effect on the results of such segment. Backlog orders are not significant due to the perishable nature of a large portion of the products. Orders are accepted and shipped on a current basis.\n\n#### **Competition**\n\nThe production and sale of meat and food products in the United States and internationally are highly competitive. The Company competes with manufacturers of pork and turkey products, as well as national and regional producers of other meat and protein sources, such as beef, chicken and fish. 
The Company believes that its largest domestic competitors for its Refrigerated Foods segment in 2003 were Tyson Foods, Smithfield Foods and ConAgra Foods; for its Grocery Products segment, ConAgra Foods, Dial Corp. and Campbell Soup Co.; and for JOTS, ConAgra Foods and Cargill, Inc.\n\nAll Hormel segments compete on the basis of price, product quality, brand identification and customer service. Through aggressive marketing and strong quality assurance programs, the Company's strategy is to provide higher quality products that possess strong brand recognition, which would then support higher value perceptions from customers.\n\nThe Company competes using this same strategy in international markets around the world.\n\n#### **Research and Development**\n\nResearch and development continues to be a vital part of the Company's strategy to extend existing brands and expand into new branded items. The expenditures for research and development for fiscal 2003, 2002 and 2001, respectively, were $13,165,000, $12,097,000 and $11,478,000. There are 42 professional employees engaged in full time research, 19 in the area of improving existing products and 23 in developing new products.\n\n### **Employees**\n\nAs of October 25, 2003, the Company had over 16,000 active employees.", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "Right now, one of the most active Asian countries in the Open Data arena is India, which also signed an Open Government partnership with the USA in November 2010. In January 2011 the Indian Congress Party announced plans for a new law to fight corruption among public servants and politicians. 
Anti-corruption websites (including ones in local dialects) like Indiaagainstcorruption.org already existed; one of them, Ipaidabribe.com, collected more than 3,000 reports of graft in its first four months.\n\nAs in Asia, Latin America too is currently focused, at least outside Public Administration circles, on how to open public data to achieve actual transparency. This shows even in the way many projects are labeled, that is "Civic Information" instead of Open Data (which is an idea starting from data *reuse*) or Open Government.\n\nThe reason is that even where good Freedom of Information laws exist in Latin America, they still have too little practical effect. Mexico, for example, already has a digital system to manage Freedom of Information requests, but there are reports of complaints filed against municipal officials that either have no effect at all, or aren't possible in the first place, because relevant information has not been updated in years, or omits key data like (in the case of budget reports) *"descriptions of how the money was spent"*.\n\nEven with these difficulties, the Latin American Open Data/Civic Information landscape is active and definitely worth following. 
The list of interesting Civic Information projects in Latin America includes (from Sasaki's Access to Information: Is Mexico a Model for the Rest of the World?):\n\n- Mexico\n\t- Mexican Farm Subsidies: an online tool to analyze how the federal government allocates those subsidies\n\t- Compare Your School: compares aggregate test results from any school with the municipal, regional, and national averages\n\t- Rebellion of the Sick: built for patients with chronic diseases whose expenses are not covered by the government subsidized health coverage.\n- Argentina: Public Spending in Bahía analyzes how public funds are used.\n- Colombia: Visible Congress monitors the actions of the Colombian congress\n- Brazil\n\t- Eleitor 2010: a website to submit reports of electoral fraud during the Brazil 2010", - "page_start": 8, - "page_end": 8, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "**Figure 6.** Yield loss rates on maize in 6 continents under global warming by 1.5 °C and 2.0 °C.\n\n**Market price of maize in main countries.** In this study, we elaborate on the endogenous response of our economic models. This response can be theoretically elaborated as: due to the effect of climate change on yield reduction (improvement), the supply curve moves leftward (rightward), reducing (increasing) production and raising (lowering) prices. In response, the consumers decrease (increase) their consumption of more expensive (cheaper) crops, shifting to other (increasing the use of the same) crops. Producers, at the same time, respond by changing farm-level management practices and increasing (decreasing) the amount of acreage under these crops. At a global scale, the reallocation of production and consumption through international trade further alters climate change impacts on global agriculture. This also alters the self-sufficiency ratios of each country/region due to climate change.\n\nIn response to production changes, the price of each commodity changes under both scenarios. At the global level, the market price for maize would increase by 0.7% and 3.4% under the 1.5 °C scenario and the 2.0 °C scenario, respectively, which would vary quite largely among different countries and regions under both climate change scenarios (Fig. 7). Particularly, the market price would increase by around 22% and 27% in Iran under the 2.0 °C scenario and the 1.5 °C scenario, respectively. Iran is also the region where the highest yield reduction is observed due to climate change. Market prices for maize in India, Mexico, Russia, South Africa and the Rest of Africa would decrease significantly under both scenarios, as their yields improve due to climate effects. Along with domestic production, climate change will also induce changes in the international trade of maize, resulting in changing levels of self-sufficiency ratios (SSR) for each country/region. By SSR, we mean the ratio of domestic production to the sum of net imports and domestic production. In our scenario analysis, generally, the countries that face positive effects on yields and/or are relatively less dependent on imports are positively (less negatively) affected by climate change. For example, maize SSR for Ukraine, India, Russia and Mexico would improve under both scenarios (Fig. 8). Whereas the self-sufficiency ratios of maize for Southeast Asia, Bangladesh and Iran will worsen under both scenarios, China's SSR for maize stays close to the baseline level.\n\n#### **Discussion and conclusion**\n\n**Discussion.** Our analysis highlights the effects of climate change on global- and regional-specific maize yields and the associated economic consequences in the 1.5 °C and 2.0 °C warming scenarios. We find that the reduction risk of maize yield under global warming by 2.0 °C is much more serious than that under global warming by 1.5 °C. On the one hand, the larger the temperature rise, the greater the evapotranspiration would be. Although the precipitation is also increasing, the evapotranspiration would become more intense. The limitation of water supply for maize growth leads to the decline of yield. On the other hand, relative to global warming by 1.5 °C, maize production would be faced with more serious and frequent extreme climate events, such as drought and heat waves, which would increase the risk of corn yield reduction under global warming by 2.0 °C. In the\n\nVol:.(1234567890)", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed9.pdf" - }, - { - "text": "## Specific Examples of CSR Activities\n\n# **Together with Our Customers**\n\n**We work as a team to improve customer satisfaction and product quality, and, while supporting the customer, contribute to the sustainable development of society as a whole.**\n\n# **The financial sector's role in improving the nation's diet and in strengthening the agricultural and fisheries sectors**\n\nFor many years, food supply networks in Japan were premised on mass production and mass consumption, enabling the country to meet soaring food demand at a time of rapid growth in the population and economy. But in recent years, consumers have come to place more priority on factors other than volume and price, such as food safety and healthiness, and the cultural aspects of diet. 
As discussion continues on the need for farmers to increase production scale and move into processing and marketing, major changes are underway in the agriculture and fisheries sector in Japan.\n\nAgainst this backdrop, SMBC has developed a new financial product for this sector. The SMBC Food and Agricultural Assessment Loan comes with conditions, depending on the results of an evaluation of food-producers' progress in areas such as food safety and environment-friendliness, healthiness and nutritional value, and efficiency of distribution. The Japan Research Institute researches measures in the areas of food and farming being taken by the loan applicant, and drafts a simple "diagnosis" stating whether there is room for future improvement. Ernst & Young ShinNihon LLC provides expert opinions on ongoing improvement of this system.\n\nBy backing customer companies' own initiatives in the areas of food and agriculture in this way, SMBC will be supporting measures to improve the diet of the Japanese and strengthen the agriculture and fisheries sector.\n\n#### **For further details, please see our website.**\n\nA roundtable session with experts held in August 2011 considered the role of the new SMBC Food and Agricultural Assessment Loan in improving the food supply chain that links food and fishery producers with food processors and consumers. Opinions were also exchanged on what other future role the bank might assume in this regard, given the current situation and issues facing the food industry and agriculture in Japan.\n\n**Roundtable session: SMBC Food and Agricultural Assessment Loan**\n\n#### **Key comments of participants**\n\n"We want to deliver value by creating demand and quality combined with safety, peace of mind and trust." Katsutoshi Konuma, Section Manager, Social & Environmental Management, Asahi Breweries Ltd.\n\nYasuhiro Nakashima, Associate Professor, Graduate School of Agricultural and Life Sciences, The University of Tokyo\n\n"Eating should be something that generates emotion. New potential exists in the world of cuisine." Daisuke Yamamoto, Vice Senior Consultant, Research Department, The Japan Research Institute, Limited\n\n"As consumer tastes go through a time of great change, I think it is important to prioritize ingredients and the attitude of customers toward eating."\n\n"An important concept is multilateral dialogue as the number of parties involved in food production increases throughout the supply chain." Yoichiro Fukayama, Planning Dept., Deputy Head (with powers of representation) of the Corporate Banking Unit & Middle Market Banking Unit, SMBC\n\nModerated by Kenji Sawami, Partner, Ernst & Young ShinNihon LLC\n\n# **Making banking a more pleasant experience for all customers**\n\nWith the old-age dependency ratio soaring, the SMFG Group aims to provide friendly, easy-to-use banking services for all its customers.\n\nSome Group companies are likewise making their facilities barrier-free at bank branches with large numbers of customers, to tailor services to the needs of all customers.\n\nFor example at the Minato Bank, we have equipped all ATMs at all our branches and cashpoints with voice-guidance handsets for the visually impaired.\n\nIn addition, we have set up priority seating in the lobby of each of our branches for customers who are very old or who have mobility problems. We are also steadily introducing queue-number displays using Color Universal Design (CUD) principles, which are easier to read for customers with eyesight concerns.\n\nHandheld hearing support device (The Minato Bank)\n\nA further measure is installation of handheld hearing support devices at all branches (except housing loan promotion offices), to allay the concerns of hearing-impaired customers who find it difficult to converse and follow spoken instructions. By using the devices as communication tools, bank employees can respect customer privacy and do not have to talk loudly. Further measures include posting of "green ear" logos at branches to reassure customers that the bank has facilities for conversing in writing. All branches are being equipped with white boards and special message tablets for dialogue with customers who have concerns about their hearing and who dislike written conversations.\n\n# **Peace of mind at the bank counter**\n\nThe Minato Bank has created a position titled "Service Care Manager" at each of its branches, filled by at least one branch managerial staffer, as part of measures to make branch visits more pleasant for customers, following earlier nuts-and-bolts improvements.\n\nService Care Managers are dedicated to improving support and services for the customer at each branch. Their training includes simulations of the problems faced by persons with disabilities, awareness raising and support methods for the elderly and persons with disabilities.\n\n### **New queue-number display system installed at bank counters**\n\nColors and special designs are used to make queue-number displays more visible to all customers (The Minato Bank)\n\nTelephone handset-type ATM (The Minato Bank)\n\n# **Preparing our businesses for a higher old-age dependency ratio**\n\nIn addition to removing mobility barriers at branches, the bank plans to aggressively support installation of facilities needed to cope with the rapidly rising old-age dependency ratio. As a first step, SMBC has established clear guidelines for supporting the construction of rental housing for the elderly, expected to be a future growth area.\n\nWhile continuing to tailor business activities to the needs of the community at large and ensuring a friendly banking environment for our customers, the SMFG Group also plans to support the creation of frameworks that enable the elderly to live active lives with peace of mind.", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "Afforestation, reforestation and tree planting to support biodiversity and ecosystem restoration will be promoted through the CAP Strategic Plans, and the Cohesion Policy funds. The new **European Urban Greening Platform**38 will also facilitate urban tree planting, including under the LIFE programme.\n\nThe share of forest areas covered by management plans should cover all managed public forests and an increased number of private forests, and biodiversity-friendly practices such as closer-to-nature-forestry should continue and be further developed. To support this, the Commission will develop guidelines on biodiversity-friendly afforestation and reforestation and closer-to-nature-forestry practices. This will be done in parallel with the new EU Forest Strategy.\n\nTo gain a better picture of the health of European forests, the Commission will work with other data providers to further develop the **Forest Information System for Europe**. 
This will help produce up-to-date assessments of the condition of European forests and link all EU forest-data web-platforms. This will also be presented as part of the EU Forest Strategy.\n\n### *2.2.5. Win-win solutions for energy generation*\n\nDecarbonising the energy system is critical for climate neutrality, as well as for the EU's recovery from the COVID-19 crisis and long-term prosperity. More sustainably sourced renewable energy will be essential to fight climate change and biodiversity loss. The EU will prioritise solutions such as ocean energy, offshore wind, which also allows for fish stock regeneration, solar-panel farms that provide biodiversity-friendly soil cover, and sustainable bioenergy.\n\nTo mitigate climate and environmental risks created by the increasing use of certain sources for bioenergy, the revised Renewable Energy Directive39 includes strengthened sustainability criteria. It also promotes the shift to advanced biofuels based on residues and non-reusable and non-recyclable waste. This approach should continue for all forms of bioenergy. The use of whole trees and food and feed crops for energy production – whether produced in the EU or imported – should be minimised.\n\nTo better understand and monitor the potential climate and biodiversity risks, the Commission is assessing the **EU and global biomass supply and demand** and related sustainability40. As part of its increased ambition to protect and restore forest ecosystems, the Commission will publish the results of this work on the use of forest biomass for energy production by the end of 2020. 
This will inform the Commission's policymaking, including the review and revision, where necessary, of the level of ambition of the Renewable Energy Directive, the Emissions Trading Scheme, and the Regulation on land use, land use change and forestry (LULUCF) set for 2021.\n\nIn line with the Renewable Energy Directive, the Commission will also develop operational guidance in 2021 on the **new sustainability criteria on forest biomass for** \n\n<sup>38</sup> See Section 2.2.8.\n\n<sup>39</sup> Directive (EU) 2018/2001 on the promotion of the use of energy from renewable sources.\n\n<sup>40</sup> JRC Biomass Assessment Study.", - "page_start": 10, - "page_end": 10, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "that maize yield would decrease severely. For the whole world, more mitigation and adaptation actions should be taken from now on. Food security would be a significant challenge in this century.\n\n**Yield change of maize in main countries.** There are huge differences in impacts on maize yield under climate change, which would influence the food crisis in different regions. There are 159 countries in the whole world which plant maize. The gross yield of maize in the top 20 countries accounts for more than 90% of the total yield in the 159 countries. So, the changes in the top 20 countries under future scenarios would influence the food security of the whole world (Fig. 5). From the results simulated by CRESE-maize under global warming by 1.5 °C, there would be 75 countries facing yield loss of maize; the mean yield loss rate would become 33.5%. There would be 84 countries experiencing yield increases. Overall, the global maize yield would slightly increase. Under global warming by 2.0 °C, there would be 82 countries facing yield loss of maize, for which the mean yield loss rate is approximate to that under global warming by 1.5 °C. There would be 77 countries experiencing yield increase; however, the mean yield increase is apparently smaller than that under global warming by 1.5 °C. Generally, the global maize yield would decrease. The results show that the adverse effect of warming by 2.0 °C on global maize production is far greater than warming by 1.5 °C. It is important to take actions to develop forward-looking adaptation measures to cope with future climate change.\n\nAccording to statistics in 2018, the gross maize yield in the top 5 countries is almost 80% of the total maize yield of the whole world. The United States accounts for more than 32%; China accounts for about 24%; Brazil, Argentina and Mexico account for about 23%. The fluctuation of maize production in these five top countries will have a significant impact on the global maize trade. Based on the simulation results, compared to 1986–2005, the maize yield in China, Brazil and Argentina would decrease under global warming by 1.5 °C; the yield loss rate would reach more than 20% in Brazil; Argentina would decrease by 14.7%; China would decrease by 3.7%. However, there would be increasing trends in the United States and Mexico; the change in the United States would not be significant and the maize yield would increase by 0.5%; the yield increasing rate would exceed 50% in Mexico. Overall, the gross maize yield in the top 5 countries would decrease by 2% under global warming", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed9.pdf" - }, - { - "text": "A project of the Washington Organic Recycling Council, with support from the Washington State Department of Ecology's Public Participation Grant program.\n\nThis product was partly funded through a grant from the Washington Department of Ecology. 
While these materials were reviewed for grant consistency, this does not necessarily constitute endorsement by the department.\n\n**Special thanks:** the original version of this brochure in 2003 was created by the Washington County, Oregon Solid Waste and Recycling Program in cooperation with the Washington Organic Recycling Council and the Composting Council of Oregon.\n\n- \n# **original artwork provided by:**\n\n## Tips to Remember:\n\n- *• Don't put plants into 100% compost. Mix compost thoroughly into existing soil before planting.*\n- *• When transplanting, it's better to amend the whole bed, not just planting holes, to promote root growth.*\n- *• Ask your compost supplier which compost product is best for your intended use.*\n- *• Use compost at the recommended application rate.*\n- *• To maintain healthy soil, reapply compost or mulch every 1-2 years.*\n- *• Many composts are rich in plant nutrients, so you may be able to reduce fertilizer use after applying compost.*\n- *• Compost can also reduce your lawn and garden's summer irrigation needs.*\n- *• Compost-amended soil and mulching slow run off, reduce erosion, and break down pollutants. When you use compost, you're helping to protect our precious streams, rivers, lakes, and marine waters.*", - "page_start": 1, - "page_end": 1, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Compost Questions and Answers\n\n#### **What is compost?**\n\nCompost is a natural humus-like soil amendment that results from the controlled aerobic (with oxygen) decomposition of organic materials. Compost is not soil – it should be mixed with soil. It is not fertilizer, although it contains many slowly released nutrients.\n\n#### **What materials (\"feedstocks\") are used to make compost?**\n\nCompost facilities in Washington recycle a variety of organic materials, including yard debris, food scraps, manure, biosolids, forest residuals like sawdust and bark, construction wood, and agricultural residues. 
All of these materials can be used to produce high quality compost. Your supplier can tell you which materials they compost.\n\n#### **How do I know I'm getting safe, quality compost?**\n\nFortunately, in Washington we have strict permitting and production standards for compost facilities, that include both time and temperature requirements and contaminant limits.\n\n#### **What about weed seeds, plant diseases or pesticide residues?**\n\nThe controlled time, aeration, and temperature process required in Washington has been shown to kill weed seeds and plant diseases. That same process breaks down most pesticide residues. There are a few agricultural pesticides that are not easily broken down, and permitted Washington compost manufacturers carefully watch their feedstocks to keep those materials out of the composting process.\n\n# Compost Beginnings\n\nThe yard debris or food scraps* that you place into your home compost bin, take to a drop-off site, or set out for curbside collection could become the compost that you later use on your garden, lawn, and flowerbeds.\n\nIt is essential to place only quality organic material into the composting process. Here are some tips:\n\nl The products you use or spray in your yard can end up in the compost process. Carefully read the labels of pesticide and herbicide products you use. (See page 9.)\n\n- l Please keep yard debris free of :\n\t- x Garbage x Plastic of any sort\n- Plastic plant pots\n- Plastic plant tabs\n- Plastic bags (if you want to bag your yard debris, use paper garden bags - available at most garden centers)\n\t- x Rock, brick, or masonry x Glass or metal x Pet waste.\n\t-\n\t-\n\n* Many localities now collect food scraps and food-soiled paper along with yard debris for composting. 
Call your local collection service to find out what is collected in your area.", - "page_start": 4, - "page_end": 4, - "source_file": "CompostGuide.pdf" - } - ] - }, - { - "references": { - "source_file": "Open_Data_Report.pdf", - "query": "What concerns has open data raised in the insurance sector?", - "target_page": 23, - "target_passage": "insurance companies may charge higher fees for life insurance to those among their customers who... put online a family tree from which it shows that they come from families with an average life expectancy lower than usual", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "there is no mandate to support one group to centralize it.\n\nKenya's own OpenData.go.ke website has only ever seen a small handful of data sets, none of which are now (early April 2011) available anymore. Groups like the Ministry of Education might publish some information on schools, but they won't give anyone the location data.\n\n# **3. Emerging trends and issues related to Open Data**\n\nOne of the most common activities for Open Data activists in this moment is the creation of country-wide catalogs of all data sources, to facilitate individuation and correlation of independent data sets. Normally, all initiatives of this type are announced on the Open Knowledge Foundation blog and/or its data hub CKAN. Another relevant development is the publication of an Open Data Manual that *\"can be used by anyone but is especially designed for those seeking to open up data, since it discusses why to go open, what open is, and the how to 'Open' Data.\"* Activists in several European countries have already published local versions of the manual, or equivalent documents. On this background, several interesting issues, some of which were anticipated in the Open Data, Open Society report, are coming in full light. They are presented, one at a time, in the following sections of this chapter.\n\n### **3.1. 
Cost of not opening PSI is increasing**\n\nMuch has been said on the *economic* benefits of opening public sector information, and much more remains to be said and studied. One part of this issue that is becoming more evident over time is that Open Data are the simplest, if not the only way, to save Public Administrations from the costs that they have *already* (and rightfully!) forced themselves to bear, through assorted laws and official regulations. This is explained well in the report from LinkedGov about the economic impact of open data:\n\n> *(p. 2) \"As the costs of disseminating and accessing information have declined, the transactions costs associated with charging for access to information, and controlling subsequent redistribution have come to constitute a major barrier to access in themselves. As a result, the case for free (gratis) provision of Public Sector Information is stronger than has already been recognized.*\n\nEaves provides a practical example from Canada in Access to Information is Fatally Broken… You Just Don't Know it Yet: *the number of Access to Information Requests (ATIP) has almost tripled*", - "page_start": 10, - "page_end": 10, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "officially lobbying Public Administrations to get the PSI they could use for the same purposes. As other suggestions made here, these are activities that should start at the city and regional level, first with custom-made education initiatives, then with specific data-based services. Engaging all these actors in the adoption of (local) Open Data will be one of the big challenges of the next years.\n\n# **5. Bibliography**\n\nBesides those explicitly linked from the text, this report has drawn inspiration by many other resources. The most important ones are listed here, but the complete list should be much longer. 
We wish to thank first the authors of the works listed below and, immediately after, to all the activists, inside and outside governments worldwide, who are working on this topic.\n\n- 1. Are you prepared for the pitfalls of Gov 2.0?\n- 2. Can we use Mobile Tribes to pay for the costs of Open Data?\n- 3. Canada launches data.gc.ca what works and what is broken\n- 4. Creative Commons and data bases: huge in 2011, what you can do\n- 5. Defining Gov 2.0 and Open Government\n- 6. How Government Data Can Improve Lives\n- 7. If you like solar, tell your utility to publish this map\n- 8. Indian corruption backlash builds after \"year of the treasure hunters\"\n- 9. Información Cívica / Just What is Civic Information?\n- 10. Is open government just about information?\n- 11.LSDI : In un click la mappa del crimine\n- 12. La casta è online: dategli la caccia!\n- 13. Linee guida UK sull'opendata\n- 14.MSc dissertation on Open Government Data in the UK\n- 15. Open Data (2): Effective Data Use.\n- 16.Open Data: quali prospettive per la pianificazione?\n- 17.Open Knowledge Foundation Blog \" Blog Archive \" Keeping Open Government Data Open?\n- 18. Open data, democracy and public sector reform\n- 19.Pubblicato Camere Aperte 2011 blog OpenParlamento\n- 20.Reasons for not releasing data in government\n- 21.The impact of open data: first evidence", - "page_start": 32, - "page_end": 32, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "benefit when local businesses make more money) are aware of this opportunity?\n\n# **4. Conclusion: seven Open Data strategy and best practices suggestions**\n\nStarting from the trends and conclusion described in the previous chapter, this section lists, in the most synthetic way possible, some strategic actions and best practices for 2011, that we consider important in making Open Data succeed and bring the greatest possible benefits to all citizens and businesses.\n\n### **4.1. 
Properly define and explain both Open Data and Public Data**\n\nJust because Open Data is becoming more popular (and, we may say, more and more necessary every year), it is essential to intensify efforts to explain, both to the general public and to public administrators, that\n\n- 1. **Privacy issues are almost always a non-issue.** Quoting from What \"open data\" means and what it doesn't): *Privacy and/or security concerns with putting all the government's data out there are a separate issue that shouldn't be confused with Open Data. Whether data should be made publicly available is where privacy concerns come into play. Once it has been determined that government data should be made public, then it should be done openly.*\n- 2. Defining as Public and consequently opening them in the right way, *much more data* than those born and stored *inside* Public Administration is an urgent task that is in the best interest of all citizens and businesses\n\n### **4.2. Keep political issues separated by economics ones**\n\nOpen Data can reduce the costs of Public Administrations and generate (or at least protect, as in the case of deals from local merchants) local jobs in all sectors of the economy, not just high-tech ones. There seems to be enough evidence for these two assertions to go for more Open Data *even if* they had no effect at all on participation to politics. This should always be kept in mind, also because some data that can directly stimulate business are not the same that would be useful for transparency.", - "page_start": 26, - "page_end": 26, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "- 22. Thinking About Africa's Open Data\n- 23.Towards EU Benchmarking 2.0 Transparency and Open Data on Structural Funds in Europe\n- 24. 
UK Open Government Licence removes barriers to re-use of public sector information\n- 25.Western Europe: A journey through tech for transparency projects\n- 26.What open data means to marginalized communities\n- 27.What's in a Name? Open Gov and Good Gov\n- 28.WikiLeaks Relationship With the Media\n- 29.WikiLeaks, Open Information and Effective Use: Exploring the Limits of Open Government", - "page_start": 33, - "page_end": 33, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "coal plants. If data are not available, every conclusion is questionable because it relies on assumptions or estimates.\n\n### **2.3. Open Data in Latin America, Asia and Africa**\n\nSeveral countries in Latin America are studying and making experiments with Open Data both at the government and at the grassroots level. The same is happening, on a much smaller scale, in a few parts of Asia and Africa. On average, the volume of these Open Data experiments and the level of *local* interest and awareness around them is still lower than what is happening in Europe and North America. In spite of this we suggest that it is important, for public officials and civic activists in Western Countries, to follow these developments closely. The reason is that they may turn into very useful test beds for all the strengths and limits of Open Data, especially those not encountered yet where the movement was born.\n\nIn fact, the original discourse and arguments around Open Data are heavily Western centric. The problem they want to solve is how to make democracy work better *in countries where it already exists and which share a great amount of history and cultural/philosophical values*.\n\nOther countries face very different challenges, from the philosophical level to the practical one. A common issue in developing countries, for example, is that there is very little to open simply because much PSI (Public Sector Information) doesn't exist in digital format yet. 
Therefore, the first thing to do is to *create* data, normally through outsourcing and crowd sourcing.\n\nOther issues, that will be discussed in detail in other sections of the report because they are also present in Europe in different forms, are related to lack of equal opportunities for access to data and serious fears (sometimes, concrete, sometimes caused by confusion about what should be open and how) that data will be used *against* citizens. A commenter to Gurstein's Open Data: Empowering the Empowered or Effective Data Use for Everyone? said:\n\n> *in Delhi and Mumbai, mobs and rioters managed to get information about particular identity groups through voter rolls: openness is, in certain situations, a precarious virtue. It is almost certain that Open Data would be used to rig election but here again openness is not the issue, they would find it anyway...*\n\nSo far, the main interest about Open Data in Asian countries seems limited, so to speak, to its effects on transparency in politics. At a two-weeks programming contest held at the end of 2010 in Thailand, for example, one of the most appreciated entries was a software scraper of the Thailand's Member of House of Representative Website, that made it possible for everybody to create applications using those data.", - "page_start": 7, - "page_end": 7, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "| 1. Introduction 3 |\n| --- |\n| 2. Social and political landscape 3 |\n| 2.1. Wikileaks and the Open Data movement 5 |\n| 2.2. Data Openness in EU 6 |\n| 2.3. Open Data in Latin America, Asia and Africa 8 |\n| 3. Emerging trends and issues related to Open Data 11 |\n| 3.1. Cost of not opening PSI is increasing 11 |\n| 3.2. Creative, unforeseen uses of local Open Data increase 12 |\n| 3.3. Legal issues remain crucial 13 |\n| 3.4. The price of digitization 14 |\n| 3.5. The nature of Open Government and the relationship between citizens and Government 15 |\n| 3.6. 
Clearer vision of the real risks and limits of Open Data 16 |\n| 3.6.1. Data alterations and financial sustainability 17 |\n| 3.6.2. Real impact of data manipulation or misunderstanding 17 |\n| 3.6.3. Unequal access 19 |\n| 3.6.4. Lack of education to data 20 |\n| 3.6.5. Lack of public interest 21 |\n| 3.6.6. Unprepared Public Administrators 22 |\n| 3.7. The privacy problem 22 |\n| 3.8. Need to better define what is Public Data 23 |\n| 4. Conclusion: seven Open Data strategy and best practices suggestions 27 |\n| 4.1. Properly define and explain both Open Data and Public Data 27 |\n| 4.2. Keep political issues separated by economics ones 27 |\n| 4.3. Keep past and future separate 28 |\n| 4.4. Impose proper licensing and streamline procurement 29 |\n| 4.5. Educate citizens to understand and use data 30 |\n| 4.6. Focus on local, specific issues to raise interest for Open Data 31 |\n| 4.7. Involve NGOs, charities and business associations 32 |\n| 5. Bibliography 33 |", - "page_start": 1, - "page_end": 1, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "#### **3.6.1. Data alterations and financial sustainability**\n\nSome concerns about the limits of Open Data are about what may happen, or stop to happen, *before* they are published online. The most common concerns of this type are (from Open Public Data: Then What? - Part 1):\n\n- 1. Opening up PSI causes those data to not be produced anymore, or to be only produced as private property by private corporations, because the public agencies whose job was to produce those data, can't sell them anymore.\n- 2. total accessibility of data provides more incentives to tinker with them, at the risk of reducing trust in institutions and inhibiting decision-making even more than today.\n\nData manipulation is the topic of the next paragraph. 
Speaking of costs, a point to take into account is that, once data are open, routinely used and monitored by as many independent users as possible, even the cost of keeping them up to date may be sensibly reduced: in other words, in the medium/long term Open Data may reduce the need to periodically perform complete, that is very expensive, studies and surveys to update a whole corpus of data in one run.\n\nBesides, and above all, even if opening data always destroyed any source of income for the public office that used to create and maintain them, this problem would only exist for the PSI datasets that are *already* sold today. Such data, even if of strategic importance as is the case with digital cartography, are only a minimal fraction of all the PSI that could and should be opened to increase transparency, reduce the costs of Government and stimulate the economy. In all these other cases:\n\n- the money to generate the data already arrives by some other source than sales and licensing(but even with those data it may be possible to generate them by crowdsourcing, thereby reducing those costs!)\n- the only extra expense caused by publishing those data online (assuming they're already available in some digital format, of course!), would be the hosting and bandwidth costs, that may be greatly reduced by mirroring and other technical solutions like torrents, already widely used to distribute Free/Open Source Software (FOSS) through the Internet.\n\n#### **3.6.2. Real impact of data manipulation or misunderstanding**\n\nThe fix for the risk that data is manipulated is to not only open government data and procedures, but to simplify the latter (which eventually also greatly reduces cost) as much as possible. 
Abundance of occasions to secretly play with data and how they are managed is a symptom of excessive, or peak complexity: again, problems and risks with Open Data are a symptom of a [pre", - "page_start": 16, - "page_end": 16, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "digital, attacks to privacy and to civil rights in general can and are coming by so many other sides that those from (properly done) Open Data are a really tiny percentage of the total.\n\nThis is a consequence of the fact that data about us end up online from the most different sources (including ourselves and our acquaintances), and that often it would be very hard to discover, never mind *prove*, that they've been used against our interest. There have been concerns, for example, that insurance companies may charge higher fees for life insurance to those among their customers who... put online a family tree from which it shows that they come from families with an average life expectancy lower than usual.\n\nAssuming such concerns were real, would it always be possible to spot and prove such abuses of data, that weren't even published by any Public Administration? Of course, publishing online complete, official Census data of several generations, in a way that would make such automatic analysis possible would be a totally different matter.\n\nGetting rid of all the unjustified concerns about privacy is very simple, at least in theory. All is needed to dismiss for good the idea that Open Data is a generalized attack to privacy is to always remember and explain that:\n\n- 1. Most Open Data have nothing personal to begin with (examples: digital maps, budgets, air pollution measurements....)\n- 2. The majority of data that are directly related to individuals (e.g. things like names and address of people with specific diseases, or who were victims of some crime) have no reason to be published, **nor there is any actual demand for them by Open Data advocates**\n- 3. 
Exceptions that limit privacy for specific cases and categories of people (e.g. candidates to public offices, Government and Parliament members etc...) already exist in many countries\n- 4. Very often, in practice, Open Data struggles only happen about *when and how* to make available in the most effective way for society information that was *already* recognized as public. *What* to declare public, hence open, is indeed a serious issue (more on this in the next paragraph) but is a separate one.\n\n### **3.8. Need to better define what is Public Data**\n\nTogether with citizens education, there is a huge challenge that Governments and the Open Data movement will have to face (hopefully together) in 2011 and beyond. This challenge is to update and expand the definition of Public Data and to have it accepted by lawmakers and public administrators.", - "page_start": 22, - "page_end": 22, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "#### **3.1.1 How to browse through the Editorial Content of the Portal**\n\nThe editorial content of the Portal is organized into 4 main menu items:\n\n- 1. What we do\n- 2. Providing Data\n- 3. Using Data\n- 4. Resources\n\n| European Data Portal | | | | |\n| --- | --- | --- | --- | --- |\n| 1 What we do- | Data - | Providing Data- | Using Data . | Resources - |\n\n- **1. Click on \"What we do\", then on sub-menu \"Our Activities\"**\nThe system displays a separate page with information on what is done in the Portal.\n\n| Newsletter FAQ Search Contact Cookies Legal notice Login > | English (en) |\n| --- | --- |\n| Search site content ... 
ರ | |\n| European Data Portal > What we do > Our Activities | |\n| 1 What we do- Data - Providing Data- Using Data - | Resources - |\n| Our Activities Open Data Maturity Factsheets and Reports Featured Highlights Calendar News | |\n| Our Activities | |\n| The European Data Portal harvests the metadata of Public Sector Information available on | |\n| public data portals across European countries. Information regarding the provision of data and | |\n| the benefits of re-using data is also included. | |\n| What is Open Data? | |\n| Open (Government) Data refers to the information collected, produced or paid for by the public bodies (also referred to as Public Sector Information) | |\n| and made freely available for re-use for any purpose. The licence will specify the terms of use. These principles for Open Data are described in detail | |\n| in the Open Definition &. | |\n| Public sector information is information held by the public sector. The Directive on the re-use of public sector information provides a common legal | |\n| framework for a European market for goverment-held data. It is built around the key pillars of the internal market: free flow of data, transparency and | |\n| fair competition. It is important to note that not all of the public sector information is Open Data. | |\n| Find out more about the PSI Directive a and other non-legislative activities of DG CONNECT & in this area. | |\n| About the European Data Portal | |\n| Going beyond the harvesting of metadata, the strategic objective of the European Data Portal is to improve accessibility and increase the value of | |\n| Open Data: | |\n| · Accessibility: How to access this information? Where to find it? How to make it available in the first place? In | |\n| countries? In what language? | |\n| · Value: For what purpose and what economic gain? Societal gain? Democratic gain? In what format? What is the critical mass? 
| |\n| The European Data Portal addresses the whole data value chain: from data publishing to data re-use. | |\n| Within the Portal, sections are dedicated to: | |\n| Searching datasets: Categories have been established to structure the metadata harvested from the various countries. These categories follow the | |\n| revision of the DCAT Application Profile & and have been mapped against the Eurovoc Thesaurus G. | |", - "page_start": 9, - "page_end": 9, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "elections\n\n- Open Congress: a tool for political scientists to track the work and effectiveness of the Brazilian congress\n- Paraguay: Who Do We Choose?: lists profiles of all candidates for many public posts.\n\nIn Brazil, the principle that *\"what is not confidential should be available on the Internet in the open data format\"* is already discussed and, in principle, accepted, by some departments of the Brazilian federal government. However, the preferred practice for now is (if there are no other obstacles) to only publish data that have been explicitly requested by some citizens.\n\nA report presented in May 2011 at the First Global Conference on Transparency Research mentioned a couple of Open Data issues in Latin America that are worth noting, because they're present even in Europe and North America, in spite of the different historical and social background:\n\n- \"Better coordination is needed between right to information campaigners and open data activists.\"\n- \"If activist manage to target particular topics to add \"value\" to the discussion, demand for open data could eventually increase in the region.\"\n\nIn Africa, mobile phones are much more available, and more essential than computer with Internet access, often bypassing the need for real desktop PCs with many applications. 
Therefore, from a purely technical point of view, transparency, accountability and efficiency in government are quickly becoming accessible to most African citizens through mobile networks rather than through the \"traditional\" Internet. However, there are still too few public departments and procedures that use digital documents and procedures on a scale large enough to generate meaningful volumes of digital data that could be then published online.\n\nWhile we write, Kenya is laying the legal groundwork to support Open Data. Permanent Secretary for Information and Communications, Dr. Bitange Ndemo is reported as having been championing for quite some time. In practice, big challenges remain for Open Data usage in Kenya. The easiest one to solve is to technical, that is find skilled people that can package the data in ways that the public can consume (even on mobile phones...). The real problem, however, is the fact that (summarizing from Thinking About Africa's Open Data):\n\n> There is a lot of Kenya data but it isn't accessible. The entities that hold the most public and infrastructure data are always government institutions. Getting information from them can be very hard indeed. We don't know who to go to to get the data we need, and", - "page_start": 9, - "page_end": 9, - "source_file": "Open_Data_Report.pdf" - } - ] - }, - { - "references": { - "source_file": "Open_Data_Report.pdf", - "query": "What are Steinberg's concerns about the government releasing all non-private existing data?", - "target_page": 28, - "target_passage": "The first reasons for Steinberg's concern is that asking for everything as soon as possible would \"stress the system too much, by spreading thin the finite amount of good will, money and political capital\". 
The second is that many existing old data and data archival systems are, in practice, so uninteresting that it wouldn't make sense to spend resources in opening them", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "benefit when local businesses make more money) are aware of this opportunity?\n\n# **4. Conclusion: seven Open Data strategy and best practices suggestions**\n\nStarting from the trends and conclusion described in the previous chapter, this section lists, in the most synthetic way possible, some strategic actions and best practices for 2011, that we consider important in making Open Data succeed and bring the greatest possible benefits to all citizens and businesses.\n\n### **4.1. Properly define and explain both Open Data and Public Data**\n\nJust because Open Data is becoming more popular (and, we may say, more and more necessary every year), it is essential to intensify efforts to explain, both to the general public and to public administrators, that\n\n- 1. **Privacy issues are almost always a non-issue.** Quoting from What \"open data\" means and what it doesn't): *Privacy and/or security concerns with putting all the government's data out there are a separate issue that shouldn't be confused with Open Data. Whether data should be made publicly available is where privacy concerns come into play. Once it has been determined that government data should be made public, then it should be done openly.*\n- 2. Defining as Public and consequently opening them in the right way, *much more data* than those born and stored *inside* Public Administration is an urgent task that is in the best interest of all citizens and businesses\n\n### **4.2. Keep political issues separated by economics ones**\n\nOpen Data can reduce the costs of Public Administrations and generate (or at least protect, as in the case of deals from local merchants) local jobs in all sectors of the economy, not just high-tech ones. 
There seems to be enough evidence for these two assertions to go for more Open Data *even if* they had no effect at all on participation to politics. This should always be kept in mind, also because some data that can directly stimulate business are not the same that would be useful for transparency.", - "page_start": 26, - "page_end": 26, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "more concrete over time is damage control. In a world that produces digital data without interruption, uncontrolled and unpredictable data releases are facts of life that are very hard to predict, practically impossible to avoid and increasingly common. Opening public government data, that is providing plenty of officially verified information, becomes therefore also a damage control solution, to prevent or at least minimize damages from such uncontrolled releases. Without official Open Public Data, individual citizens, political parties or other organizations will start to process and compare (if they already aren't...) data from unofficial sources anyway, maybe from different countries. In such cases, it will be unavoidable not reach sometimes, even in good faith, wrong conclusions. This is not some theoretical possibility far in the future, as this real world example (from a comment to an Open Data discussion in an italian blog) proves:\n\n> \"*on the [non italian] Geonames website you can download geo-referenced data about... 47000 Italian municipalities. That worries me, because there are only 8094 of them. Besides, I grabbed a few random data about population, and I can guarantee you that not one was right. What should be done in such cases?*\n\nFrom an Open Data perspective, all these recent stories have (at least) one thing in common: they suggest that, considering its current needs and problems, current societies want and need more Open Data than they already have.\n\n### **2.1. 
Wikileaks and the Open Data movement**\n\nDuring the 2010/2011 winter the discussions around the Cablegate and other documents published by Wikileaks have, in some occasion, included hostility towards Open Data. This is a consequence of a more or less conscious mixing of the two themes, because in a very general sense, both Open Data and Wikileaks are about transparency, accountability and democracy.\n\nAs far as this study is concerned, two conclusions can be drawn from the Cablegate/Wikileaks scandal.\n\nThe first is that, in practice, it is necessary to find and equilibrium between secrecy and transparency whenever government activities are concerned. Citizens must be able to know what the state is *actually* doing but sometimes, be it for careful evaluation of all the alternatives or because of security, it must be possible to work behind closed doors, at least temporarily. We'll come back to this point later in this report.\n\nThe second conclusion is that, while certainly both Open Data and Wikileaks are about openness and transparency in politics, not only there are deep differences between the two ideas but, in our", - "page_start": 4, - "page_end": 4, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "### **4.3. Keep past and future separate**\n\nFor the same reason why it is important to always distinguishes between political and economical advantages (or disadvantages) of Open Data, it is necessary to keep decisions about *future* data (those that will arrive in the future, due to new contracts, public services and so on) separate from those about data that already exist. At the end of 2010, T. Steinberg wrote that the idea that Government should publish everything non-private it can **now** is \"rather dangerous\", and that it would be much better to release nothing until someone actually asked for it, and at that point doing it right, that is with an open license and so on. 
The first reasons for Steinberg's concern is that asking for everything as soon as possible would *\"stress the system too much, by spreading thin the finite amount of good will, money and political capital\"*. The second is that many existing old data and data archival systems are, in practice, so uninteresting that it wouldn't make sense to spend resources in opening them.\n\nEven if these concerns were always true, it is important to realize that they apply (especially the second) to already existing data, not to future ones. The two classes of data have, or can have, very different constraints. Existing data may still exist only in paper format and/or be locked by closed or unclear licenses, or not relevant anymore for future decisions.\n\nOpening *future* data, instead, is almost always more important, useful urgent, easier and cheaper than digitizing or even only reformatting material that in many cases is already too old to make immediate, concrete differences. While this argument is probably not always true when we look at Open data for transparency, it probably is when it comes to economic development.\n\nTherefore, features and guidelines that should be present in all future data generation and management processes include:\n\n- standardization: the less, obviously open, formats are used for data of the same type, the easier it is to merge and correlate them. The formats that have to be standardized are not only those at the pure software level. 
Even more important is, for example, to adopt by law standard identificators for government suppliers, names and machine-readable identifiers of budget voices and so on\n- preparation for future digitization: new digital systems should explicitly be designed from the beginning so that it will be possible, when non-digital records will be digitized, to add them to the databases without modifying losses.\n- Open licenses", - "page_start": 27, - "page_end": 27, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "digital, attacks to privacy and to civil rights in general can and are coming by so many other sides that those from (properly done) Open Data are a really tiny percentage of the total.\n\nThis is a consequence of the fact that data about us end up online from the most different sources (including ourselves and our acquaintances), and that often it would be very hard to discover, never mind *prove*, that they've been used against our interest. There have been concerns, for example, that insurance companies may charge higher fees for life insurance to those among their customers who... put online a family tree from which it shows that they come from families with an average life expectancy lower than usual.\n\nAssuming such concerns were real, would it always be possible to spot and prove such abuses of data, that weren't even published by any Public Administration? Of course, publishing online complete, official Census data of several generations, in a way that would make such automatic analysis possible would be a totally different matter.\n\nGetting rid of all the unjustified concerns about privacy is very simple, at least in theory. All is needed to dismiss for good the idea that Open Data is a generalized attack to privacy is to always remember and explain that:\n\n- 1. Most Open Data have nothing personal to begin with (examples: digital maps, budgets, air pollution measurements....)\n- 2. 
The majority of data that are directly related to individuals (e.g. things like names and address of people with specific diseases, or who were victims of some crime) have no reason to be published, **nor there is any actual demand for them by Open Data advocates**\n- 3. Exceptions that limit privacy for specific cases and categories of people (e.g. candidates to public offices, Government and Parliament members etc...) already exist in many countries\n- 4. Very often, in practice, Open Data struggles only happen about *when and how* to make available in the most effective way for society information that was *already* recognized as public. *What* to declare public, hence open, is indeed a serious issue (more on this in the next paragraph) but is a separate one.\n\n### **3.8. Need to better define what is Public Data**\n\nTogether with citizens education, there is a huge challenge that Governments and the Open Data movement will have to face (hopefully together) in 2011 and beyond. This challenge is to update and expand the definition of Public Data and to have it accepted by lawmakers and public administrators.", - "page_start": 22, - "page_end": 22, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "with a project called \"Tales of Things\" to allow people to leave messages for each other (or just for the world) at the bus stops. Scanning the QR code now allows people to see not just the bus timetable, but also the notes other travelers have left on that stop, including *\"what's nearby, who's waiting for whom, what number can you call for a good time. It's a cross between bus stop Facebook and digital graffiti\"*, that happened thanks to the openness of the original bus stop data.\n\nThe Social Life of Data Project will study instead how particular datasets have been used, who used them, how those people are connected and what conversations happen around Open Data.\n\n### **3.3. 
Legal issues remain crucial**\n\nProper licensing of Public data is essential. The more Open Data activities continue, the clearer this rule becomes. What distinguishes Open Data from \"mere\" transparency is reuse. Paraphrasing Eaves, until a government get the licensing issue right, Open Data cannot bring all the possible benefits in that country. If there are no guarantees that public data can be used without restriction, very little happens in practice, and when it happens it may be something against the public interest.\n\nCanadian Company Public Engines Inc, that is paid by local police departments to collect, process and analyze official crime data, also publishes online, with a proprietary license, anonymized summaries of those data. When in 2010 another company, Report See Inc, scraped those data from their website to reuse them, Public Engines sued.\n\nReporting this, D. Eaves rightly points out that *both* companies are right: one is trying to protect its investment, the other is simply trying to reuse what IS public data, by getting it from the ONLY place where it's available. This is what happens when public officials leave the ownership of *public* data to the third parties hired to collect them. Please note that, in practice, it makes very little difference whether those third parties are private, for-profit corporations or even other Public Administrations. Unless, of course, there are national laws already in place that define in advance what is the license of all present and future Public Data, *no matter how they were generated and by whom*, those data can be lost in any moment for society. In all other cases, the legal status of data will be either officially closed and locked, or uncertain enough to prevent most or all reuses. 
In February 2011, the news came that, even if they weren't the original copyright holders, Public Engines had been able to put together enough legal claims to convince Report See to give up.\n\nDisputes like this should not happen and would not happen if all contracts regarding collection and management of PSI clearly specified that all the resulting data either go directly into the public domain (after being anonymized if necessary, of course) or remain exclusive property of the", - "page_start": 12, - "page_end": 12, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "What is, exactly, Public Data? A definition that is accepted almost implicitly is *\"data that is of public interest, that belongs to the whole community, data that every citizen is surely entitled to know and use\"*. This definition is so generic that accepting it together with the assumption that all such data should be open as preached by the Open Data movement (online, as soon as possible, in machine readable format with an open license etc...) doesn't create any particular problem or conflict.\n\nReal problems however start as it has happened all too often so far, whenever we assume more or less consciously that \"Public Data\" in the sense defined above and data directly produced by Governments and Public Administrations, that is what's normally called PSI (Public Sector Information) are the same thing.\n\nThere is no doubt that Governments and Public Administrations produce huge quantities of Public Data. But this is an age of privatization of many public services, from transportation to healthcare, energy and water management. This is an age in which many activities with potentially very serious impacts on whole communities, like processing of hazardous substances or toxic waste, happen *outside* Public Administrations. 
The paradox is that, as Sasaki put it, this increased privatization is happening in the very same period in which *\" we are observing a worldwide diffusion of access to information laws that empower citizens to hold government agencies accountable.\"*\n\nIn such a context, \"Public Data\"is critical just because it is a much bigger set of data than what constitutes traditional, official PSI. \"Public Data\" includes all that information *plus* the much bigger amount of data describing and measuring all the activities of private companies, from bus timetables to packaged food ingredients, aqueducts performances and composition of fumes released in the atmosphere, that have a *direct impact* on the health and rights of all citizens of the communities affected by the activities of those companies.\n\nAre such data \"Public\" today, in the sense defined at the beginning of this paragraph, that is something every citizen has the right to know without intermediaries or delegates, or not? Should they be public? If yes, shouldn't law mandate that all such data be Open (that is, published online as soon as possible, in machine readable format with an open license etc...) just like, for example, the budget of some Ministry? Answering these questions may be one of the biggest challenges for the Open Data community, and for society as a whole, in the next years.\n\nHere are, in order to facilitate reflection on this issue, a few recent, real world examples of \"Public Data\" that are *not* PSI, and of the impacts of their lack of openness.", - "page_start": 23, - "page_end": 23, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "#### **3.6.1. Data alterations and financial sustainability**\n\nSome concerns about the limits of Open Data are about what may happen, or stop to happen, *before* they are published online. The most common concerns of this type are (from Open Public Data: Then What? - Part 1):\n\n- 1. 
Opening up PSI causes those data to not be produced anymore, or to be only produced as private property by private corporations, because the public agencies whose job was to produce those data, can't sell them anymore.\n- 2. total accessibility of data provides more incentives to tinker with them, at the risk of reducing trust in institutions and inhibiting decision-making even more than today.\n\nData manipulation is the topic of the next paragraph. Speaking of costs, a point to take into account is that, once data are open, routinely used and monitored by as many independent users as possible, even the cost of keeping them up to date may be sensibly reduced: in other words, in the medium/long term Open Data may reduce the need to periodically perform complete, that is very expensive, studies and surveys to update a whole corpus of data in one run.\n\nBesides, and above all, even if opening data always destroyed any source of income for the public office that used to create and maintain them, this problem would only exist for the PSI datasets that are *already* sold today. Such data, even if of strategic importance as is the case with digital cartography, are only a minimal fraction of all the PSI that could and should be opened to increase transparency, reduce the costs of Government and stimulate the economy. In all these other cases:\n\n- the money to generate the data already arrives by some other source than sales and licensing(but even with those data it may be possible to generate them by crowdsourcing, thereby reducing those costs!)\n- the only extra expense caused by publishing those data online (assuming they're already available in some digital format, of course!), would be the hosting and bandwidth costs, that may be greatly reduced by mirroring and other technical solutions like torrents, already widely used to distribute Free/Open Source Software (FOSS) through the Internet.\n\n#### **3.6.2. 
Real impact of data manipulation or misunderstanding**\n\nThe fix for the risk that data is manipulated is to not only open government data and procedures, but to simplify the latter (which eventually also greatly reduces cost) as much as possible. Abundance of occasions to secretly play with data and how they are managed is a symptom of excessive, or peak complexity: again, problems and risks with Open Data are a symptom of a [pre", - "page_start": 16, - "page_end": 16, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "The biggest difference between Gov 2.0 and OpenGov seems to be how they approach transparency. Gov 2.0 is about transparency through open data and the \"government as a platform\" idea. \"Open Government\" is about Transparency for the sake of accountability, but not necessarily interaction, cooperation and reuse of data outside the government.\n\n[who advocates] Open Data does so in order to make it accessible to citizens rather than to hold government accountable. This is not to say that one approach is better than another, but this is to say that there seem to be two very different motivations for advocating for transparency, and they do seem to correlate to whether people label themselves as part of Gov 2.0 or part of OpenGov.\n\nIn general, reflection and debate on this point is accelerating. At the moment, some characteristics of Open Government on which there is more or less agreement are that Open Government is about:\n\n- deliberation, choice, influence on decisions and participation as a common citizen\n- letting *all* citizens use technology to participate, monitor and define government activities. 
In other words, Government is really Open when it's based on interaction, not only on some set of infrastructures and methods imposed top-down\n- diffused, seamless conversations, that are only possible with digital technologies, online social networks and so on, between public employees and citizens.\n\nThe obvious potential limit of these definitions is that they rely on a big, still largely unknown factor, that is actual citizen participation. When data are opened, the problem becomes to have everybody use them, in order to actually realize Open Government as defined above. This issue will be explored in detail in the next paragraphs, but we can already say that Open Data are highlighting the critical, weak points in the present and future relationship between citizens and governments.\n\nWhile citizens participation is essential, especially in times of social and economic crisis, achieving it on a large scale won't be easy. Frustration and lack of trust in institutions in many countries are high, so it's no surprise when people express doubts that opening government data won't help much in fixing things.\n\n### **3.6. Clearer vision of the real risks and limits of Open Data**\n\nOpen Data, we already said, is about reuse. The point is, at least when the goal is Open Government and transparency in politics, reuse by whom? There is no *automatic* cause-effect relationship between Open Data and real transparency and democracy. On the contrary, several problems may occur, if administrators and citizens don't pay close attention.", - "page_start": 15, - "page_end": 15, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "#### **3.1.1 How to browse through the Editorial Content of the Portal**\n\nThe editorial content of the Portal is organized into 4 main menu items:\n\n- 1. What we do\n- 2. Providing Data\n- 3. Using Data\n- 4. 
Resources\n\n| European Data Portal | | | | |\n| --- | --- | --- | --- | --- |\n| 1 What we do- | Data - | Providing Data- | Using Data . | Resources - |\n\n- **1. Click on \"What we do\", then on sub-menu \"Our Activities\"**\nThe system displays a separate page with information on what is done in the Portal.\n\n| Newsletter FAQ Search Contact Cookies Legal notice Login > | English (en) |\n| --- | --- |\n| Search site content ... ರ | |\n| European Data Portal > What we do > Our Activities | |\n| 1 What we do- Data - Providing Data- Using Data - | Resources - |\n| Our Activities Open Data Maturity Factsheets and Reports Featured Highlights Calendar News | |\n| Our Activities | |\n| The European Data Portal harvests the metadata of Public Sector Information available on | |\n| public data portals across European countries. Information regarding the provision of data and | |\n| the benefits of re-using data is also included. | |\n| What is Open Data? | |\n| Open (Government) Data refers to the information collected, produced or paid for by the public bodies (also referred to as Public Sector Information) | |\n| and made freely available for re-use for any purpose. The licence will specify the terms of use. These principles for Open Data are described in detail | |\n| in the Open Definition &. | |\n| Public sector information is information held by the public sector. The Directive on the re-use of public sector information provides a common legal | |\n| framework for a European market for goverment-held data. It is built around the key pillars of the internal market: free flow of data, transparency and | |\n| fair competition. It is important to note that not all of the public sector information is Open Data. | |\n| Find out more about the PSI Directive a and other non-legislative activities of DG CONNECT & in this area. 
| |\n| About the European Data Portal | |\n| Going beyond the harvesting of metadata, the strategic objective of the European Data Portal is to improve accessibility and increase the value of | |\n| Open Data: | |\n| · Accessibility: How to access this information? Where to find it? How to make it available in the first place? In | |\n| countries? In what language? | |\n| · Value: For what purpose and what economic gain? Societal gain? Democratic gain? In what format? What is the critical mass? | |\n| The European Data Portal addresses the whole data value chain: from data publishing to data re-use. | |\n| Within the Portal, sections are dedicated to: | |\n| Searching datasets: Categories have been established to structure the metadata harvested from the various countries. These categories follow the | |\n| revision of the DCAT Application Profile & and have been mapped against the Eurovoc Thesaurus G. | |", - "page_start": 9, - "page_end": 9, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "#### procurement.\n\nThe same issue is denounced as an obstacle to innovation and cost savings in New recommendations for improving local open government and creating online hubs:\n\n> John Grant focused on a major pain point for government at all levels for tapping into the innovation economy: procurement issues, which civic entrepreneurs run into in cities, statehouses and Washington. \"It is time to look at these procurement rules more closely,\" he said, and promote higher levels of innovation. \"There are a lot of ideas are happening but a lot of rules restrict vendors from interacting in government,\" said Grant. Turner-Lee observed that traditional procurement laws may also not be flexible enough to bring more mobile apps into government.\n\nCurrent procurement laws aren't partially incompatible with an Open Data world only at this level, that is when it's time to procure software that makes the data useful. 
Even bigger problems and inefficiencies can be introduced at the beginning of data life, that is when data collection and processing services are procured. We've already explained that forgetting to impose the right license is one of the problems, but it's not the only one. Even future *organization* of all the foreseeable data management activities should take advantage of the flexibility provided by data openness. Here is how Tim Davies summarizes this point:\n\n> Right now [public] bodies often procure data collection, data publishing and data interfaces all in one block (as seems to be the case with Oxfordshires real-time bus information - leading to a roadblock on innovation) - and so without these layers being separated in procurement, some of the benefits here stand to be lost.\n\nChanging procurement of information/data-rich public services would be, of course, only the first step of a general reform of procurement laws and regulations. After management of Open Data has been simplified, it becomes time to implement similar simplifications to procurement of everything else. In fact, in such a scenario, there would be much less possibilities for the loopholes, frauds and inefficiencies that forced local procurement procedures to become so slow and complicated: since the public budget and other relevant public data would already be fully open, errors and other problems would surface and be fixed much more quickly and reliably than today, even assuming that they would continue to appear with the same frequency.\n\n### **4.5. Educate citizens to understand and use data**\n\nIt is necessary to guarantee the widest possible availability of *all* the pre-requisites for effective use of Open Data. 
In other words, it is necessary to provide free and widely accessible training, oriented to average citizens, on how and why to visualize Public Data and use them to make informed", - "page_start": 29, - "page_end": 29, - "source_file": "Open_Data_Report.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed4.pdf", - "query": "How did serum estradiol and progesterone levels change during pregnancy?", - "target_page": 2, - "target_passage": "Serum hormone concentrations increased significantly over the course of pregnancy and dropped precipitously postpartum", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "**Fig. 1 | Precision imaging reveals neuroanatomical changes throughout gestation. a**, Standard medical demarcations for pregnancy stages (that is, trimesters) by gestation week (the image is created with BioRender.com). **b**, Steroid hormones increased significantly throughout pregnancy and dropped precipitously postpartum, as is characteristic of the prenatal and postnatal periods. **c**, A healthy 38-year-old primiparous woman underwent 26 scanning sessions from 3 weeks preconception through 2 years postpartum. Scans were distributed throughout preconception (four scans), first trimester (four scans), second trimester (six scans), third trimester (five scans) and postpartum (seven scans); tick marks indicate when major measures were collected and\n\n# **Discussion**\n\nConverging evidence across mammalian species points to pregnancy as a remarkable period of neuroplasticity, revealing the brain's ability to undergo adaptive, hormonally-driven neuroanatomical changes beyond adolescence13–15,20,21,24–26. Investigations that compare women week. **d**, Summary (that is, total) of brain measures throughout the experiment. Generalized additive models revealed GMV, CT and total brain volume decreased throughout pregnancy (see Methods for validation with cubic regression), with a slight recovery postpartum. 
Global QA, lateral ventricle and CSF volumes displayed nonlinear increases across gestation, with a notable rise in the second and third trimesters before dropping sharply postpartum. Shaded regions represent 95% confidence bands; solid lines indicate model fit; dashed line indicates parturition.\n\ncolors denote pregnancy stage. The participant underwent IVF to achieve pregnancy, allowing for precise mapping of ovulation, conception and gestation\n\nprepregnancy and then again postpartum provide the strongest evidence to date that the human brain undergoes such neural changes11,27. But what about pregnancy itself? Over what time course do anatomical changes in the maternal brain manifest? Are they tied to the substantial increase in sex hormone production? Here we begin to address these", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed4.pdf" - }, - { - "text": "subcortical structures, including the ventral diencephalon, caudate, thalamus, putamen and hippocampus. High-resolution imaging and segmentation of the medial temporal lobe (MTL) extend these findings further, revealing specific volumetric reductions within hippocampal subfields CA1, CA2/CA3 and parahippocampal cortex (PHC). In contrast to widespread decreases in cortical and subcortical GMV, correlational tractography analyses revealed nonlinear increases in white matter quantitative anisotropy (QA) throughout the brain—indicating greater tract integrity—as gestational week progressed. Together, these findings reveal the highly dynamic changes that unfold in a human brain across pregnancy, demonstrating a capacity for extensive neural remodeling well into adulthood.\n\n# **Results**\n\n#### **Serological evaluations**\n\nSerological evaluations captured canonical hormone fluctuations characteristic of the prenatal, perinatal and postnatal periods (Fig. 1b). 
Serum hormone concentrations increased significantly over the course of pregnancy and dropped precipitously postpartum (preconception, estradiol (E) = 3.42 pg ml−1 and progesterone (P) = 0.84 ng ml−1; 3 weeks preparturition, E = 12,400 pg ml−1 and P = 103 ng ml−1; 3 months postparturition, E = 11.50 pg ml−1 and P = 0.04 ng ml−1).\n\n#### **Whole-brain dynamics from baseline through postpartum**\n\nTo begin, we characterized broad neuroanatomical changes over the course of the entire experimental window (baseline—2 years postpartum, 26 scans; Fig. 1d). Generalized additive models revealed strong nonlinear (effective degrees of freedom > 3) relationships between weeks since conception and summary brain metrics. Total GMV (*F* = 27.87, *P* < 0.001, deviance explained = 93.9%, *R*2 adj = 0.91), summary CT (*F* = 15.79, *P* < 0.001, deviance explained = 78.6%, *R*2 adj = 0.75) and total brain volume (*F* = 26.12, *P* < 0.001, deviance explained = 93.4%, *R*2 adj = 0.90) linearly decreased during gestation and appeared to partially rebound postpartum. In contrast, global microstructural integrity (QA) of white matter increased throughout the first and second trimesters before returning to baseline levels in the postpartum period (whole-brain QA, *F* = 4.62, *P* = 0.007, deviance explained = 60.2%, *R*2 adj = 0.51). We also observed nonlinear patterns of lateral ventricle expansion *(F* = 10.44, *P* < 0.001, deviance explained = 83.8%, *R*2 adj = 0.77) and increased cerebrospinal fluid (CSF; *F* = 13.32, *P* < 0.001, deviance explained = 83.8%, *R*2 adj = 0.79) rising in the second and third trimesters before dropping sharply postpartum.\n\n#### **Cortical volume and thickness changes tied to gestation**\n\nWe then narrowed the aperture to capture changes unfolding within gestation itself (baseline—36 weeks pregnant, 19 scans). 
Relationships between summary brain metrics were evident over the gestational period as follows: total brain volume, GMV and CT were positively associated with one another, whereas lateral ventricles, CSF and global QA demonstrated negative relationships with GMV (Supplementary Fig. 1).\n\nChanges in GMV were near-ubiquitous across the cortical mantle (Fig. 2a). Most large-scale brain networks exhibited decreases in GMV (Fig. 2b and Supplementary Table 1); indeed, 80% of the 400 regions of interest (ROI) demonstrated negative relationships between GMV and gestation week (Fig. 2a and Supplementary Table 2). Together, these results provide evidence of a global decrease in cortical volume across pregnancy. Several sensory and attention subnetworks were particularly sensitive to gestation, including the control (subnetwork B), salience/ventral attention (subnetwork A), dorsal attention (subnetwork B), default (subnetwork A) and somatomotor (subnetworks A and B) networks (Supplementary Table 1). Regions driving these network-level changes include the bilateral inferior parietal lobe, postcentral gyri, insulae, prefrontal cortex, posterior cingulate and somatosensory cortex (Fig. 2c, Supplementary Table 2 and validation of findings using alternate pipeline in Supplementary Tables 1 and 3). These regions and associated brain networks appear to decrease in volume at a faster rate than the rest of the brain throughout pregnancy, as determined by a subsequent analysis controlling for total GMV (Supplementary Tables 1 and 2). GMV reductions were also significantly correlated with the participant's estradiol and progesterone concentrations (Supplementary Table 1). A highly similar pattern of results was observed when examining pregnancy-related CT changes (Supplementary Fig. 3 and Supplementary Tables 4 and 5). 
Significant reductions in cortical GMV over gestation remained after controlling for standard quality control (QC) metrics, albeit with some influence on the magnitude and location of the observed effects (Supplementary Figs. 4 and 5).\n\nIn contrast, GMV within regions of the default mode (subnetwork C), limbic (subnetworks A and B) and visual peripheral networks buck the global trend by slightly increasing (for example, temporal poles), remaining constant (for example, orbitofrontal cortex) or reducing at a much slower rate (for example, extrastriate cortex) than total GMV (Fig. 2a,b and Supplementary Tables 1 and 2). CT changes in these regions exhibit similar patterns (Supplementary Fig. 3 and Supplementary Tables 4 and 5).\n\n#### **Subcortical GMV changes tied to gestation**\n\nConsistent with the broader cortical reductions in GMV, several subcortical regions significantly reduced in volume across gestation (Fig. 3a, left). This included bilateral ventral diencephalon (right hemisphere values shown in Fig. 3a, right; encompasses hypothalamus, substantia nigra, mammillary body, lateral geniculate nucleus and red nucleus among others22), caudate, hippocampus and thalamus, along with left putamen and brain stem (Supplementary Table 6, *q* < 0.05).\n\nNext, high-resolution segmentation of the MTL allowed us to interrogate subcortical structures at a finer resolution, revealing nonlinear volumetric decreases in CA1 (*F*(2,15) = 5.84, *q* = 0.031, *R*2 adj = 0.36; Fig. 3b, left) and CA2/CA3 (*F*(2,15) = 6.82, *q* = 0.027, *R*2 adj = 0.41; Fig. 3b, middle) across gestation. PHC exhibited linear volumetric decreases across gestation (*F*(1,16) = 24.87, *q* < 0.001, *R*2 adj = 0.58; Fig. 3b, right) which was also tied to estradiol (*F*(1,12) = 20.21, *q* = 0.005, *R*2 adj = 0.60). All three relationships remained significant after proportional correction for total GMV. 
There was no significant change in other subregions or total volume of the hippocampal body, or in the parahippocampal gyrus (Supplementary Table 7 and Supplementary Fig. 8).\n\n#### **White matter microstructure changes tied to gestation**\n\nIn contrast to decreasing global GMV, correlational tractography of white matter, which tests for linear trends in the data, revealed increasing microstructural integrity across the whole brain during gestation (Fig. 4a), concomitant with the rise in 17β-estradiol and progesterone (all *q* < 0.001; Supplementary Fig. 9). Tracts displaying robust correlations with gestational week included the corpus callosum, arcuate fasciculus, inferior fronto-occipital fasciculus and inferior longitudinal fasciculus (Fig. 4b), as well as the cingulum bundle, middle and superior longitudinal fasciculus, corticostriatal, corticospinal and corticopontine tracts (see Supplementary Table 9 for complete list).\n\n#### **Comparing brain changes across pregnancy against controls**\n\nWe then compared the changes in GMV across gestation to that of typical variability over time, derived from eight densely-sampled controls23. The GMV changes we see across pregnancy far exceed normative brain variability (Supplementary Fig. 11). On average, change in cortical GMV was nearly three times higher than controls scanned over a similar duration (Supplementary Fig. 11a,b). This extends to MTL subfields, wherein change in volume was three to four times greater across gestation than normative brain variability (Supplementary Fig. 11c,d). 
We contextualized these findings further by comparing gestational GMV change against our participant's preconception brain volumes; average GMV change during pregnancy was six times (cortical) and three times (MTL) higher than the variability observed between baseline sessions.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "# **nature neuroscience**\n\n# **Neuroanatomical changes observed over the course of a human pregnancy**\n\nReceived: 23 August 2023\n\nAccepted: 29 July 2024\n\nPublished online: 16 September 2024\n\n**Laura Pritschet  1 , Caitlin M. Taylor  1 , Daniela Cossio  2 , Joshua Faskowitz  3 , Tyler Santander1 , Daniel A. Handwerker  3 , Hannah Grotzinger1 , Evan Layher1 , Elizabeth R. Chrastil  2,5 & Emily G. Jacobs  1,4,5**\n\nPregnancy is a period of profound hormonal and physiological changes experienced by millions of women annually, yet the neural changes unfolding in the maternal brain throughout gestation are not well studied in humans. Leveraging precision imaging, we mapped neuroanatomical changes in an individual from preconception through 2 years postpartum. Pronounced decreases in gray matter volume and cortical thickness were evident across the brain, standing in contrast to increases in white matter microstructural integrity, ventricle volume and cerebrospinal fluid, with few regions untouched by the transition to motherhood. This dataset serves as a comprehensive map of the human brain across gestation, providing an open-access resource for the brain imaging community to further explore and understand the maternal brain.\n\nWorldwide, nearly 85% of women experience one or more pregnancies in their lifetime1 , with 140 million women becoming pregnant each year. 
Over an approximately 40-week gestational window, the maternal body undergoes profound physiological adaptations to support the development of the fetus, including increases in plasma volume, metabolic rate, oxygen consumption and immune regulation2 . These rapid adaptations are initiated by 100-fold to 1,000-fold increases in hormone production, including estrogen and progesterone. These neuromodulatory hormones also drive significant reorganization of the central nervous system. Evidence from animal models and human studies converge on pregnancy as a period of remarkable neuroplasticity3–10 (see ref. 10 for one of the earliest known observations). Gestational increases in steroid hormone synthesis drive neurogenesis, dendritic spine growth, microglial proliferation, myelination and astrocyte remodeling (for review, see ref. 11). These cellular changes are pronounced in brain circuits that promote maternal behavior. For example, Ammari et al. recently discovered that steroid hormones can fine-tune the response properties of galanin neurons in the rodent medial preoptic area of the hypothalamus (mPOA), leading to enhanced sensitivity in dams to sensory cues from newborn pups12.\n\nIn humans, reductions in gray matter volume (GMV) have been observed postpartum13–16, particularly in regions central to theory-of-mind processing13. These GMV changes persist at 6 years postpartum17 and are traceable decades later18,19, underscoring the permanence of this major remodeling event. And yet the changes that occur within the maternal brain during gestation itself are virtually unknown (see ref. 20 for early neuroimaging insight). A recent study by Paternina-Die et al. offers intriguing clues21. Women were scanned once in the third trimester and again in the postpartum period, revealing a reduction of cortical volume observable in the late pregnancy scan. 
These findings suggest that pregnancy is a highly dynamic period for neural remodeling, yet neuroscientists lack a detailed map of how the human brain changes throughout the gestational period.\n\nHere we conducted a precision imaging study of pregnancy in which a healthy 38-year-old primiparous woman underwent 26 magnetic resonance imaging (MRI) scans and venipuncture beginning 3 weeks preconception through 2 years postpartum. We observed widespread reductions in cortical GMV and cortical thickness (CT) occurring in step with advancing gestational week and the dramatic rise in sex hormone production. Remodeling was also evident within\n\n1 Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA. 2 Department of Neurobiology and Behavior, University of California, Irvine, CA, USA. 3 Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA. 4 Neuroscience Research Institute, University of California, Santa Barbara, CA, USA. 5 These authors contributed equally: Elizabeth R. Chrastil, Emily G. Jacobs.  e-mail: laura.pritschet@pennmedicine.upenn.edu; chrastil@uci.edu; emily.jacobs@psych.ucsb.edu", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed4.pdf" - }, - { - "text": "[Fig. 4b panel: QA by pregnancy stage (Pre, 1st, 2nd, 3rd, Post) for individual tracts, including L inf. longitudinal fasc. and L arcuate fasciculus]\n\n**Fig. 4 | White matter microstructure changes throughout the experiment. a**, Numerous white matter tracts demonstrate increasing QA in relation to advancing gestation week (baseline—36 weeks, 16 scans), as determined by correlational tractography analysis (FDR, *q* < 0.0001). See Supplementary Table 9 for complete list of tracts with a significant correlation between QA and gestation week. 
**b**, Summary of QA values by pregnancy stage (gestation and postpartum, 23 scans) for representative ROIs significantly tied to gestation. ROI-based tractometry was used to extract QA values. Each boxplot represents\n\noverlook the full range of changes that unfold within the gestational window, and underrepresent the brain's metamorphosis during pregnancy. Furthermore, although observed changes were largely global, some regions displayed notable stability (for example, extrastriate cortex). The subcortical region that displayed the strongest relationship with gestation week was the ventral diencephalon, which encompasses the hypothalamus and subsequent medial preoptic area and paraventricular nucleus—structures critical for inducing maternal behavior12,16. The hippocampus exhibited a reduction in volume across gestation, and with higher spatial resolution, this reduction was revealed to be driven by changes in CA1 and CA2/CA3 subfield volumes, while other hippocampal subfields remained stable. Adjacent PHC within the MTL also exhibited volume reduction across gestation. While our hippocampal findings are consistent with pre/post studies of pregnancy13, the precision lens applied within gestation revealed the nonlinear nature of this reduction. Recapitulating and clarifying these regionally specific patterns of volume change throughout the MTL merits further investigation.\n\nSimilar precision imaging studies have captured dynamic brain reorganization across other neuroendocrine transitions, such as the menstrual cycle (see review in ref. 28), underscoring the powerful role steroid hormones have in shaping the mammalian brain29. Endocrine changes across pregnancy dwarf those that occur across the menstrual cycle, which highlights the critical need to map the brain's response to this unique hormonal state. 
Broad physiological changes occur in tandem with the rise in steroid hormones, including changes in body mass composition, water retention, immune function and sleep patterns11. These factors could have a role in the brain changes observed here, with some driving neurobiological changes and others, like water retention, potentially affecting MRI-based measurements. Note that, although cortical reductions in GMV over gestation were stable across analyses, accounting for QC measures influenced the magnitude and location of these results. These metrics all fell within the standard range, but there may be meaningful reductions in signal that accompany volumetric reductions (for example, increased CSF and decreased GM)—a methodological nuance that goes beyond the scope of this resource study. Ultimately, identifying the shared and unique contributions of these factors to the neuroanatomical changes that unfold across gestation warrants further investigation. Deeply phenotyping a large and diverse cohort of women across pregnancy will open up new avenues of exploration, for example, allowing researchers to link blood-based proteomic signatures to pregnancy outcomes; deploying wearable devices to monitor changes in sleep, cognition and mood; and probing the broader social and environmental determinants of maternal health27.\n\nThe neuroanatomical changes that unfold during matrescence may have broad implications for understanding individual differences in parental behavior13,24,30,31, vulnerability to mental health disorders32,33 and patterns of brain aging18,19,34–36. Decreases in GMV may reflect 'fine-tuning' of the brain by neuromodulatory hormones in preparation for parenthood26. For example, in rodents, steroid hormones promote parental behavior by remodeling specific neural circuits in the medial preoptic area of the hypothalamus. 
These behavioral adaptations are critical to the dam's ability to meet the demands of caring for", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed4.pdf" - }, - { - "text": "# 2.3. FastBlue tracer injections\n\nTable 2\n\nMice were briefly anesthetized during the procedure, induced with 3% to 5% isoflurane, and then maintained at 1.5% to 2% as required. Hindlimbs were taped with the plantar surface of the paw facing up, and a custom, 26G removable needle with a 30˚ bevel, attached to a 25-µL Hamilton syringe, was inserted between the 2 distal-most footpads, towards the medial aspect of the hindpaw. The needle was then rotated 90˚, so the bevel faced medially. Furthermore, 4-µL FastBlue (FB; 2% in sterile phosphate-buffered saline (PBS); CAS# 73819-41-7; Polysciences, Inc, Warrington, PA) per paw was then slowly injected, and the needle was left in place for 10 seconds, before rotating and carefully retracting to avoid backflow of FB along the needle track. This prevented the FB bolus from contacting the sural innervation territory of the lateral hindpaw, restricting it largely to the tibial innervation territory of the glabrous hindpaw skin.\n\n# 2.4. Immunohistochemistry and image acquisition\n\nMice were anesthetized with an overdose of pentobarbital (20 mg) and transcardially perfused with a fixative containing 4% formaldehyde. L3 to L5 DRGs were removed and postfixed for another 2 hours, cryoprotected in 30% sucrose overnight, and then embedded in optimal cutting temperature media (OCT; Tissue Tek, Alphen aan den Rijn, the Netherlands). Dorsal root ganglia were sectioned on a Leica CM1950 cryostat at 30 µm, with every section collected serially on 5 Superfrost Plus slides (VWR, Lutterworth, United Kingdom) and each slide containing 1 in every 5 sections (4-7 sections per slide). 
One slide per DRG was selected at random and was washed with PBS, before being incubated with appropriate primary antibodies (Table 2) diluted in 5% normal donkey serum and 0.3% Triton X-100 in PBS for 3 days at 4˚C. After PBS washes, slides were incubated with appropriate secondary antibodies (Table 2) in the same PBS/ (normal donkey serum) NDS/Triton-X100 solution as for primaries, overnight at room temperature. Slides were washed and coverslipped with VectaShield Vibrance Hardset mounting media (Vector Labs, Newark, CA), with 4',6-diamidino-2-phenylindole included in mounting media where FB-labelled cells were not being examined. Sections were imaged using a Zeiss LSM900 Airyscan confocal microscope equipped with 405-, 488-, 561-,\n\n| Primary and secondary antibodies used in the study. | | | |\n| --- | --- | --- | --- |\n| Antibody | Source | Identifiers | Working dilution |\n| Anti-GFP (Chicken polyclonal) | Abcam, plc, Cambridge, United Kingdom | Cat#: ab13970 | 1:1000 |\n| | | RRID: AB_300798 | |\n| Anti-NeuN (Guinea pig polyclonal) | Synaptic Systems, G ¨ottingen, Germany | Cat#: 266004 | 1:500 |\n| | | RRID: AB_2619988 | |\n| Anti-mCherry (Rat monoclonal) | Invitrogen, Waltham, MA; Thermo Fisher Scientific, | Cat#: M11217 | 1:500 |\n| United Kingdom | | RRID: AB_2536611 | |\n| Anti-Atf3 (Rabbit polyclonal) | Novus Biologicals, Minneapolis, MN | Cat#: NBP1-85816 | 1:500 |\n| | | RRID: AB_11014863 | |\n| Anti-NF200 (Rabbit polyclonal) | Sigma-Aldrich, Saint Louis, MO | Cat#: N4142 | 1:1000 |\n| | | RRID: AB_477272 | |\n| Anti-TrkA (Goat polyclonal) | R&D Systems, Minneapolis, MN | Cat#: AF1056 | 1:500 |\n| | | RRID: AB_2283049 | |\n| Anti-TDP43 (Rabbit polyclonal) | Abcam, plc, Cambridge, United Kingdom | Cat#: ab133547 | 1:100 |\n| | | RRID: AB_2920621 | |\n| Anti-RFP (Mouse monoclonal) | Thermo Fisher Scientific, United Kingdom | Cat#: MA5-15257 | 1:200 |\n| | | RRID: AB_10999796 | |\n| Anti-RFP (Chicken polyclonal) | Sigma-Aldrich, United Kingdom | Cat#: 
AB3528 | 1:200 |\n| | | RRID: AB_11212735 | |\n| Alexa Fluor 488 Donkey Anti-Chicken IgY | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 703-545-155 | 1:500 |\n| (Donkey polyclonal) | | RRID: AB_2340375 | |\n| Alexa Fluor 647 Donkey Anti-Guinea pig IgG | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 706-605-148 | 1:250 |\n| (Donkey polyclonal) | | RRID: AB_2340476 | |\n| Rhodamine Red-X Donkey Anti-Rat IgG (Donkey | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 712-295-153 | 1:100 |\n| polyclonal) | | RRID: AB_2340676 | |\n| Alexa Fluor 647 Donkey Anti-Rabbit IgG (Donkey | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 711-605-152 | 1:250 |\n| polyclonal) | | RRID: AB_2492288 | |\n| Rhodamine Red-X Donkey Anti-Rabbit IgG | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 711-295-152 RRID: AB_2340613 | 1:100 |\n| (Donkey polyclonal) | | | |\n| Alexa Fluor 546 Goat Anti-Chicken IgG (Goat | Thermo Fisher Scientific, United Kingdom | Cat#: A11040 | 1:400 |\n| polyclonal) | | RRID: AB_2534097 | |\n| Alexa Fluor 488 Goat Anti-Rabbit IgG (Goat | Thermo Fisher Scientific, United Kingdom | Cat#: A11008 | 1:400 |\n| polyclonal) | | RRID: AB_143165 | |\n| Alexa Fluor 546 Donkey Anti-Mouse IgG (Donkey | Thermo Fisher Scientific, United Kingdom | Cat#: A10036 | 1:400 |\n| polyclonal) | | RRID: AB_2534012 | |\n\nGFP, green fluorescent protein; RFP, red fluorescent protein", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed2.pdf" - }, - { - "text": "the offspring12. Human studies have revealed GMV reductions in areas of the brain important for social cognition and the magnitude of these changes corresponds with increased parental attachment13. 
Deeper examination of cellular and systems-level mechanisms will improve our understanding of how pregnancy remodels specific circuits to promote maternal behavior.\n\nAlthough studied to a lesser degree, ties between maternal behavior and white matter microstructure (particularly connectivity between temporal and occipital lobes) have been noted31. Here we reveal pronounced GMV changes in regions within sensory, attention and default mode networks over the gestational window. In parallel, we observed increased anisotropy in white matter tracts that facilitate communication between emotional and visual processing hubs37–39, including the inferior longitudinal fasciculus and inferior fronto-occipital fasciculus. Pinpointing the synchrony of gray and white matter changes that unfold in the maternal brain could be key to understanding the behavioral adaptations that emerge during and after pregnancy, such as honing the brain's visual and auditory responses to infant cues and eliciting maternal behavior. Research into other major transition periods supports this idea. For instance, adolescence is a dynamic period characterized by region-specific, nonlinear decreases in GMV and increases in WMV, maturational brain changes that are tied to gains in executive function and social cognition40. For both adolescence41 and matrescence, the considerable rise in steroid hormone production appears to remodel the brain (see ref. 25 for comparative analysis), promoting a suite of behaviors adaptive to that life stage. How specific neural changes give rise to specific behavioral adaptations has yet to be fully explored with respect to human pregnancy.\n\nThis precision imaging study mapped neuroanatomical changes across pregnancy in a single individual, precluding our ability to generalize to the broader population. 
To benchmark our findings, we compared the magnitude of GMV changes observed throughout pregnancy against data from nonpregnant individuals sampled over a similar time course. Doing so provided compelling evidence that pregnancy-related neuroanatomical shifts far exceed normative day-to-day brain variability and measurement error. Evidence suggests that white matter microstructure remains fairly stable over a six-month period42, but more studies are needed to compare the degree of white matter changes observed during pregnancy to normative change over time. Further, sampling larger cohorts of women will generate much-needed normative models of brain change (akin to ref. 43) throughout pregnancy to establish what constitutes a typical degree of neuroanatomical change expected during gestation and postpartum recovery.\n\nThese findings provide a critical rationale for conducting further precision imaging studies of pregnancy in demographically enriched cohorts to determine the universality and idiosyncrasy of these adaptations and their role in maternal health. Are the changes observed in our participant reflective of the broader population? Do deviations from the norm lead to maladaptive outcomes? A precision imaging approach can help determine whether the pace of pregnancy-induced neuroanatomical changes drives divergent brain health outcomes in women, as may be the case during other rapid periods of brain development44. One in five women experiences perinatal depression45 and while the first FDA-approved treatment is now available46, early detection remains elusive. Precision imaging studies could offer clues about an individual's risk for or resilience to depression before symptom onset, helping clinicians better determine when and how to intervene. 
Neuroscientists and clinicians also lack tools to facilitate detection and treatment of neurological disorders that co-occur, worsen or remit with pregnancy, such as epilepsy, headaches, multiple sclerosis and intracranial hypertension47. Precision mapping of the maternal brain lays the groundwork for a greater understanding of the subtle and sweeping structural, functional, behavioral and clinical changes that unfold across pregnancy. Such pursuits will advance our basic understanding of the human brain and its remarkable ability to undergo protracted plasticity in adulthood.\n\n### **Online content**\n\nAny methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41593-024-01741-0.\n\n# **References**\n\n- 1. World Health Organization. Maternal, newborn, child and adolescent health and ageing. platform.who.int/data/ maternal-newborn-child-adolescent-ageing (2022).\n- 2. Thornburg, K. L., Bagby, S. P. & Giraud, G. D. *Knobil and Neill's Physiology of Reproduction* pp. 1927–1955 (Elsevier, 2015).\n- 3. Brunton, P. J. & Russell, J. A. The expectant brain: adapting for motherhood. *Nat. Rev. Neurosci.* **9**, 11–25 (2008).\n- 4. Gregg, C. Pregnancy, prolactin and white matter regeneration. *J. Neurol. Sci.* **285**, 22–27 (2009).\n- 5. Haim, A. et al. A survey of neuroimmune changes in pregnant and postpartum female rats. *Brain Behav. Immun.* **59**, 67–78 (2017).\n- 6. Barrière, D. A. et al. Brain orchestration of pregnancy and maternal behavior in mice: a longitudinal morphometric study. *NeuroImage* **230**, 117776 (2021).\n- 7. Celik, A., Somer, M., Kukreja, B., Wu, T. & Kalish, B. T. The genomic architecture of pregnancy-associated plasticity in the maternal mouse hippocampus. *eNeuro* **9**, ENEURO.0117-22. 
2022 (2022).\n- 8. Puri, T. A., Richard, J. E. & Galea, L. A. M. Beyond sex differences: short- and long-term effects of pregnancy on the brain. *Trends Neurosci.* **46**, 459–471 (2023).\n- 9. Chaker, Z. et al. Pregnancy-responsive pools of adult neural stem cells for transient neurogenesis in mothers. *Science* **382**, 958–963 (2023).\n- 10. Diamond, M. C., Johnson, R. E. & Ingham, C. Brain plasticity induced by environment and pregnancy. *Int. J. Neurosci.* **2**, 171–178 (1971).\n- 11. Servin-Barthet, C. et al. The transition to motherhood: linking hormones, brain and behaviour. *Nat. Rev. Neurosci.* **24**, 605–619 (2023).\n- 12. Ammari, R. et al. Hormone-mediated neural remodeling orchestrates parenting onset during pregnancy. *Science* **382**, 76–81 (2023).\n- 13. Hoekzema, E. et al. Pregnancy leads to long-lasting changes in human brain structure. *Nat. Neurosci.* **20**, 287–296 (2017).\n- 14. Hoekzema, E. et al. Mapping the effects of pregnancy on resting state brain activity, white matter microstructure, neural metabolite concentrations and grey matter architecture. *Nat. Commun.* **13**, 6931 (2022).\n- 15. Martínez-García, M., Paternina-Die, M., Desco, M., Vilarroya, O. & Carmona, S. Characterizing the brain structural adaptations across the motherhood transition. *Front. Glob. Womens Health* **2**, 742775 (2021).\n- 16. Spalek, K. et al. Pregnancy renders anatomical changes in hypothalamic substructures of the human brain that relate to aspects of maternal behavior. *Psychoneuroendocrinology* **164**, 107021 (2024).\n- 17. Martínez-García, M. et al. Do pregnancy-induced brain changes reverse? The brain of a mother six years after parturition. *Brain Sci.* **11**, 168 (2021b).\n- 18. De Lange, A.-M. G. et al. Population-based neuroimaging reveals traces of childbirth in the maternal brain. *Proc. Natl Acad. Sci. 
USA* **116**, 22341–22346 (2019).", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed4.pdf" - }, - { - "text": "# **Methods**\n\n#### **Participant**\n\nOur participant (E.R.C.) was a healthy 38-year-old primiparous woman who underwent in-vitro fertilization (IVF) to achieve pregnancy. Previous studies reported no observable differences in neural changes from prepregnancy to postpregnancy between women who conceived naturally versus women who conceived via IVF13, and doing so provides a controlled way of monitoring pregnancy status. The participant experienced no pregnancy complications (for example, gestational diabetes and hypertension), delivered at full term via vaginal birth, nursed through 16 months postpartum, and had no history of neuropsychiatric diagnosis, endocrine disorders, prior head trauma or history of smoking. The participant gave written informed consent and the study was approved by the University of California, Irvine Human Subjects Committee.\n\n#### **Study design**\n\nThe participant underwent 26 MRI scanning sessions from 3 weeks before conception through 2 years postpartum (162 weeks), during which high-resolution anatomical and diffusion spectrum imaging scans of the brain were acquired. Scans were distributed throughout this period, including prepregnancy (four scans), first trimester (four scans), second trimester (six scans), third trimester (five scans) and postpartum (seven scans; Fig. 1c). The first 6 sessions took place at the UCSB Brain Imaging Center (BIC), the final 20 sessions took place at the UCI Facility for Imaging and Brain Research (FIBRE). The majority of scans took place between 9 AM and 2 PM, limiting significant AM–PM fluctuations49. The MRI protocol, scanner (Siemens 3T Prisma) and software (version MR E11) were identical across sites. Each scanner was checked weekly for the duration of the study and passed all QC reports indicating no significant alterations in the geometry. 
To ensure the robustness of the findings, after the final study session, the participant completed back-to-back validation scans at UCI and UCSB within a 12-h window to assess reliability between scanners. Intraclass correlation coefficients (two-way, random effects, absolute agreement, single rater) reveal 'excellent' test–retest reliability between scanners, including ROI-level GMV (ICC = 0.97, 95% CI: 0.80–0.99), ROI-level CT (ICC = 0.96, 95% CI: 0.90–0.98), MTL subfield volume (ICC = 0.99, 95% CI: 0.97–0.99) and ROI-level QA (ICC = 0.94, 95% CI: 0.91–0.97). Furthermore, when examining the relationship between gestation week and GMV among UCI-only gestational sessions, findings were consistent (Supplementary Fig. 12), indicating that site differences are highly unlikely to have contributed meaningfully to the observed effects. Although not applicable here, we note that having a control participant scanned over a similar duration within the same scanner is critical for estimating how much variation in the brain can be attributed to within-scanner variability.\n\nTo monitor state-dependent mood and lifestyle measures, the following scales were administered on each experiment day: Perceived Stress Scale50, Pittsburgh Sleep Quality Index51, State-Trait Anxiety Inventory for Adults52 and Profile of Mood States53. Correlation analyses between state-dependent measures, summary brain metrics and gestation week revealed little to no relationships. The only exception to this was a moderate negative association between global QA and state anxiety (Spearman's correlation (*ρ*) = −0.65, *q* = 0.04; baseline—36 weeks, *n* = 16). By making this data openly accessible, we encourage a more nuanced approach toward exploring mood and lifestyle measures in relation to brain changes over pregnancy.\n\n#### **Endocrine procedures**\n\nThe participant underwent a blood draw (*n* = 19; Fig. 1c) before MRI scanning. 
Sex steroid concentrations were determined via ultra-sensitive liquid chromatography–mass spectrometry at the Brigham and Women's Hospital Research Assay Core (BRAC). Assay sensitivities, dynamic range and intra-assay coefficients of variation were as follows: estradiol—1.0 pg ml−1, 1–500 pg ml−1, <5% relative s.d. (RSD); progesterone—0.05 ng ml−1, 0.05–10 ng ml−1, 9.33% RSD. Serological samples were not acquired in five sessions due to scheduling conflicts with UC Irvine's Center for Clinical Research.\n\n**MRI acquisition.** MRI scanning sessions at the University of California, Santa Barbara and Irvine were conducted on 3T Prisma scanners equipped with 64-channel phased-array head/neck coil (of which 50 coils are used for axial brain imaging). High-resolution anatomical scans were acquired using a T1-weighted (T1w) magnetization prepared rapid gradient echo (MPRAGE) sequence (repetition time (TR) = 2,500 ms, time to echo (TE) = 2.31 ms, inversion time (TI) = 934 ms, flip angle = 7°, 0.8 mm thickness) followed by a gradient echo field map (TR = 758 ms, TE1 = 4.92 ms, TE2 = 7.38 ms, flip angle = 60°). A T2-weighted (T2w) turbo spin echo scan was also acquired with an oblique coronal orientation positioned orthogonally to the main axis of the hippocampus (TR/ TE = 9,860/50 ms, flip angle = 122°, 0.4 × 0.4 mm2 in-plane resolution, 2-mm slice thickness, 38 interleaved slices with no gap, total acquisition time = 5 min and 42 sec). The Diffusion Spectrum Imaging (DSI) protocol sampled the entire brain with the following parameters: single phase, TR = 4,300 ms, echo time = 100.2 ms, 139 directions, *b*-max = 4,990, FoV = 259 × 259 mm, 78 slices, 1.7986 × 1.7986 × 1.8 mm voxel resolution. These images were linearly registered to the whole-brain T1w MPRAGE image. A custom foam headcase was used to provide extra padding around the head and neck, as well as to minimize head motion. 
Additionally, a custom-built sound-absorbing foam girdle was placed around the participant's waist to attenuate sound near the fetus during second-trimester and third-trimester scanning.\n\n**Image processing.** *Cortical volume and thickness*. CT and GMV were measured with Advanced Normalization Tools54 version 2.1.0 (ANTs). We first built a subject-specific template (SST) (antsMultivariateTemplateConstruction2) and tissue priors (antsCookTemplatePriors) based on our participant's two preconception whole-brain T1-weighted scans to examine neuroanatomical changes relative to the participant's prepregnancy baseline. We used labels from the OASIS population template, provided by ANTs, as priors for this step. For each session, the structural image was processed and registered to the SST using the ANTs CT pipeline (antsCorticalThickness). This begins with an N4 bias field correction for field inhomogeneity, then brain extraction using a hybrid registration/segmentation method55. Tissue segmentation was performed using Atropos54 to create tissue masks of CSF, gray matter, white matter and deep gray matter. Atropos allows prior knowledge to guide the segmentation algorithm, and we used labels from our SST as priors to minimize warping and remain in native participant space. CT measurements were then estimated using the DiReCT algorithm56, which estimates the gray–white matter interface and the gray matter–CSF interface and computes a diffeomorphic mapping between the two interfaces, from which thickness is derived. Each gray matter tissue mask was normalized to the template and multiplied to a Jacobian image that was computed via affine and nonlinear transforms. Using MATLAB (version 2022a), summary, regional-level estimates of CT, GMV and CSF for each scan were obtained by taking the first eigenvariate (akin to a 'weighted mean'57) across all voxels within each parcel of the Schaefer 400-region atlas58. 
We then averaged ROIs across networks, which were defined by the 17-network Schaefer scheme58,59. Global measures of CT, GMV and CSF were computed for each session by summing across all voxels within the respective output image; total brain volume was computed by summing across all voxels within each session's brain extraction mask. Our findings held when using an SST derived from all 26 MRIs (prepregnancy through postpartum), as well as when estimating the mean (versus weighted mean) of all voxels within each parcel. The ANTs CT pipeline is highly validated with good test–retest reproducibility and improved ability to predict variables such as age and gender from region-wise CT measurements", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed4.pdf" - }, - { - "text": "**Fig. 2 | Cortical GMV showed widespread change through gestation and postpartum. a**, Multivariate regression analyses reveal largely negative relationships between gestation week and regional GMV, with only a minority of regions unaffected or increasing over the gestational window (baseline—36 weeks). All associations presented here were corrected for multiple comparisons (FDR at *q* < 0.05; nonsignificant values set to zero for interpretability). **b**, Average network change was calculated by estimating GMV percent change from baseline (initial) to 36 weeks gestation (final). Attention and control networks appear most affected. **c**, Six representative regions, classified by major subnetworks, that exhibit pronounced GMV change across gestation. For each panel, we display a scatterplot between average GMV of the ROIs and gestation week (left; gestation sessions only, 19 scans), and summary GMV of ROIs by pregnancy stage across the whole study (right; gestation and postpartum sessions, 26 scans). Shaded regions in scatterplots represent a 95% confidence interval. Each boxplot represents IQR for each stage, with a horizontal line representing the median value. 
The whiskers indicate variability outside (±1.5) of this range. Outside values are >1.5× and <3× IQR beyond either end of the box. All statistical tests were corrected for multiple comparisons (FDR at *q* < 0.05) and values were *z* scored and transformed to have a mean of zero and s.d. of one for easier comparison across regions. Please note that the data values shown here are raw (see Supplementary Tables 1 and 2 and Supplementary Data 1 for exhaustive list). Brain visualizations created with R package ggseg48. IQR, interquartile range; Lat, lateral; Med, medial; DMN, default mode network; VisPeri, visual peripheral network; SomMot, somatomotor network; VisCent, visual central network; Cont, control network; TempPar, temporal parietal network; DorsAttn, dorsal attention network; SalVentAttn, salience/ventral attention network.", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed4.pdf" - }, - { - "text": "neuron loss after nerve injury and to test the hypothesis that loss is not equally distributed across molecular populations.\n\n# 2. Methods\n\n#### 2.1. Animals\n\nMice were housed in groups in humidity- and temperature-controlled rooms with free access to food and water, on a 12-hour light–dark cycle, and with environmental enrichment. Animal procedures were performed under a UK Home Office Project Licence and in accordance with the UK Home Office (Scientific Procedures) Act (1986). All studies were approved by the Ethical Review Process Applications Panel of the University of Glasgow or Oxford and conform to the ARRIVE guidelines. Experiments were performed on adult male and female mice aged 7 to 16 weeks at the start of the experiments. All experimental cohorts contained a mix of male and female mice, apart from the cohort of MrgprdCreERT2;Ai32 mice that underwent SNIcrush surgery, which was exclusively female. Details of transgenic lines are provided in Table 1. Tamoxifen was administered by i.p. 
injection of 20 mg/mL tamoxifen (Sigma-Aldrich) dissolved in wheat germ oil (doses described in Table 1). There were 2 instances where animals were excluded from data analysis: One (cyan fluorescent protein) Thy1-CFP died of unknown causes not related to the procedure and before the experimental endpoint, and one MrgDCreERT2;Ai32 exhibited no fluorophore expression and was therefore deemed to have been incorrectly genotyped. Group sizes were based on the extent of neuronal loss 28d following sciatic nerve transection identified by Shi et al.50 Given α = 0.05, power = 0.8, and an effect size of 4.81, power analysis projects that a group size of 3 mice would be needed.\n\n#### 2.2. Spared nerve transection and crush surgeries\n\nSpared nerve injury (transection of the common peroneal and tibial branches of the sciatic nerve; SNItrans) and common peroneal and tibial crush injury (SNIcrush), in which nerve axons were severed but the epineurium remained intact, were performed as previously described.12 Anesthesia was induced with 3% to 5% isoflurane and then maintained at 1.5% to 2% as required. Analgesia, consisting of carprofen (10 mg/kg) and buprenorphine (0.05 mg/kg) (Glasgow) or carprofen (5 mg/kg) and local bupivacaine (2 mg/kg) (Oxford) was provided perioperatively. The left hindpaw was secured with tape in hip abduction, and the operative field (lateral surface of the thigh) was shaved. Ophthalmic ointment was applied to the eyes, and the shaved area was swabbed with chlorhexidine solution. A longitudinal incision was made in the skin at the lateral mid-thigh. Using blunt dissection, an opening was made through the biceps femoris, exposing the sciatic nerve and the 3 peripheral branches (sural, tibial, and common peroneal nerves). For SNItrans, the common peroneal and tibial nerves were ligated using a 6-0 Vicryl suture (Ethicon, Raritan, NJ), and a 1- to 2-mm piece distal to the suture was removed using spring scissors. 
For SNIcrush, the exposed tibial and common peroneal nerves were clamped using a pair of fine hemostats (Fine Science Tools, Heidelberg, Germany) closed to their second clip, leaving the nerve branches intact but translucent. The muscle was closed with one 6-0 Vicryl suture (Ethicon), and the skin incision was closed with one 10 mm wound clip (Alzet, Cupertino, CA). Animals were monitored daily for self-mutilation, and no animals required sacrifice due to tissue damage.\n\n#### Table 1\n\n#### Transgenic lines used in the study.\n\n| Used name | Full name | Putative population | Ref | Source | Tamoxifen regime |\n| --- | --- | --- | --- | --- | --- |\n| Atf3CreERT2 | Atf3tm1.1(cre/ERT2)Msra | Axotomised afferents | 13 | Gift: Dr Franziska Denk | 50 mg/kg on days 0, 3, and 7 after surgery |\n| AvilFlpO | Aviltm1(flpo)Ddg | Sensory neurons | 1 | Gift: Prof David Ginty | N.A. |\n| MrgDCreERT2 | Mrgprdtm1.1(cre/ERT2)Wql | Major class of nonpeptidergic | 39 | The Jackson Laboratory (RRID: | General: 1x 50 mg/kg in adulthood, (.1 week |\n| | | neurons | | IMSR_JAX:031286) | before experiment) |\n| | | | | | 3D volumetric analysis: 5x i.p. (0.5 mg/animal/ |\n| | | | | | day), beginning between P10 and P17 |\n| MrgDChR2- | Mrgprdtm4.1(COP4)Mjz | Major class of nonpeptidergic | 59 | Mutant Mouse Resource & Research | N.A. |\n| YFP | | neurons | | Centers (RRID:MMRRC_036112-UNC) | |\n| CalcaCreERT2 | Calcatm1.1(cre/ERT2)Ptch | Peptidergic neurons | 51 | Gift: Prof Pao-Tien Chuang | 1x 75 mg/kg in adulthood (.1 week before |\n| | | | | | experiment) |\n| Trpm8FlpO | | Cold afferents | 4 | Gift: Dr Mark Hoon | N.A. |\n| Thy1-CFP | B6.Cg-Tg(Thy1-CFP) | Sample of myelinated afferents | 16 | The Jackson Laboratory (RRID: | N.A. 
|\n| | 23Jrs/J | | | IMSR_JAX:003710) | |\n| ThCreERT2 | Thtm1.1(cre/ERT2)Ddg/J | C low threshold | 1 | Gift: Prof David Ginty; The Jackson | 1x 50 mg/kg in adulthood (.2 weeks before |\n| | | mechanoreceptors | | Laboratory (RRID:IMSR_JAX:025614) | experiment) |\n| RC::FLTG | B6.Cg- Gt(ROSA) | Flp-mediated tdTomato; | 40 | The Jackson Laboratory (RRID: | N.A. |\n| | tm1.3(CAG-tdTomato,- 26Sor | Cre1Flp-mediated GFP | | IMSR_JAX:026932) | |\n| | EGFP)Pjen /J | expression | | | |\n| Ai14 | B6.Cg- Gt(ROSA) | Cre-mediated tdTomato | 33 | The Jackson Laboratory (RRID: | N.A. |\n| | tm14(CAG-tdTomato)Hze 26Sor / | expression | | IMSR_JAX:007914) | |\n| J | | | | | |\n| Ai32 | B6.Cg- Gt(ROSA) | Cre-mediated ChR2-eYFP | 32 | The Jackson Laboratory (RRID: | N.A. |\n| | tm32(CAG 26Sor | expression | | IMSR_JAX:024109) | |\n| | COP4*H134R/EYFP)Hze | | | | |\n\nCFP, cyan fluorescent protein; GFP, Green fluorescent protein; YFP, yellow fluorescent protein.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed2.pdf" - }, - { - "text": "\"endogenous opioid system\"—involved in reward and the regulation of pain and affect—have been documented (Bresin and Gordon 2013), as low basal plasma levels of β-endorphins (which are peripherally released following tissue damage) in psychiatric patients (van der Venne et al. 2021, Cakin Memik et al. 2023). Lower salivary levels of β-endorphins have also been registered immediately before NSSI, compared to post-NSSI (Störkel et al. 2021). Thus, an imbalance in the opioid system might be a relevant component in NSSI, where self-injures might be acted to initiate the release of β-endorphins to restore homeostasis (Stanley et al. 2010, Bresin and Gordon 2013). Another line of research examined \"nociceptive dysregulation,\" with possible hypoalgesia (Kirtley et al. 2016), showing higher pain thresholds and lower pain intensity in adolescents with NSSI (Nock et al. 2006, 2009, van der Venne et al. 2021). 
Recent works failed to replicate these findings but reported specific alterations in descending inhibitory pain control (Leone et al. 2021, Lalouni et al. 2022). Despite some inconsistencies, this is an area that is worth further investigation. It remains to investigate to what extent the perspective on NSSI offered here could be extended to cover the aforementioned body of evidence and whether active inference could help integrate the different perspectives we have discussed.\n\nWe focused on adolescence as a potentially critical period for NSSI, given that it is associated with high levels of uncertainty about several central domains in human life. However, there are other (gender-related) developmental periods in which bodily changes might be coupled with increased levels of uncertainty (e.g. in physiology, in the sense of self, in the social role) and vulnerability. Pregnancy and transition to menopause, e.g. are periods of endocrine and hormonal upheavals that might impact a woman's affective life and well-being. These physiological changes are coupled with a fundamental developmental transition that requires a redefinition of personal identity and narrative integration (McLean and Lilgendahl 2019), with increased uncertainty of one's internal states and role in the social context. Taking into account the perimenopausal and menopausal transition, the physiological, psychological, and affective experiences associated with it are very heterogeneous. Some women might experience it as a new beginning, whereas for others, it may be more critical (Deeks 2003). In some cases, e.g. the menopause transition might perturb the continuity of one's sense of self, inducing discrepancies in internal self-coherence (e.g. 
the end of childbearing years, the aging process), which might increase the level of distress (Barca and De Marchis 2018).\n\nThe dramatic changes that a woman's physiology undergoes during life have been suggested to concur with the atypical interoception often reported (e.g. heightened interoceptive attention but poor interoceptive accuracy), which might contribute to their greater vulnerability to mental illness (Murphy et al. 2019). Although this is still a speculative hypothesis that needs to be tested empirically, the effect of these transition periods on women's well-being is currently overlooked and deserves more attention.\n\nFinally, although we only focused on \"maladaptive\" strategies to modify the sense of self through bodily sensations, there are also \"adaptive\" strategies that use the body to improve the sense of self and feelings of well-being. Among these, e.g. is engaging in physical activities to reduce emotional distress. Performing physical activity concurs in the reduction in symptoms of depression and anxiety, acting on both psychological (e.g. diverting from unpleasant stimuli or increasing the sense of self-efficacy) and physiological mechanisms (e.g. through the release of monoamines and endorphins), which seem to alleviate distressing emotions (see for a review Paluska and Schwenk 2000). The relationship between well-being, physical activity, and interoception is increasingly receiving attention (Wallman-Jones et al. 2021) and deserves further investigation for its potential role as a protective factor against emotional distress and the development of clinical conditions.\n\nIt is worth reminding that the theoretical proposal advanced in this study—that NSSI might emerge when some of the (precision) parameters of one's model of the body and the self are not appropriately tuned—is still speculative. 
However, previous studies reported the importance of aberrant precision tuning in interoceptive streams across various psychopathological conditions, such as depression, anxiety, eating, and substance use disorders (Smith et al. 2020, 2021). These studies, along with other proposals (Khalsa et al. 2018), raise the possibility that interoceptive dysfunctions and the incorrect tuning of (precision) parameters of generative models might have a pervasive effect on psychopathology. This hypothesis remains to be investigated in the case of NSSI.\n\n#### Conflict of interest\n\nThe authors have no conflicts of interest to declare.\n\n### Data availability\n\nNo new data were generated or analyzed in support of this research.\n\n#### References\n\n- Abraham E, Hendler T, Zagoory-Sharon O *et al.* Interoception sensitivity in the parental brain during the first months of parenting modulates children's somatic symptoms six years later: the role of oxytocin. *Int J Psychophysiol* 2019;**136**:39–48.\n- Adams RA, Stephan KE, Brown HR *et al.* The computational anatomy of psychosis. *Front Psychiatry* 2013;**4**:1–26.\n- Arciero G, Bondolfi G. *Selfhood, Identity and Personality Styles*. 1st edn Hoboken, New Jersey, United States: John Wiley & Sons Inc, 2009.\n- Barca L, Candidi M, Lancia GL *et al.* Mapping the mental space of emotional concepts through kinematic measures of decision uncertainty. *Philos Trans R Soc Lond B Biol Sci* 2023;**378**:20210367.\n- Barca L, De Marchis MD. The case of Sofia: an example of the dynamic properties of the therapeutic relationship. 2018.\n- Barca L, Pezzulo G. Keep your interoceptive streams under control: an active inference perspective on anorexia nervosa. *Cogn Affect Behav Neurosci* 2020;**20**:427–40.\n- Barrett LF. *How Emotions Are Made: The Secret Life of the Brain*. Boston, Massachusetts, United States: Houghton Mifflin Harcourt, 2017.\n- Barrett LF, Quigley KS, Hamilton P. 
An active inference theory of allostasis and interoception in depression. *Philos Trans R Soc Lond B Biol Sci* 2016;**371**:20160011.\n- Barrett LF, Simmons WK. Interoceptive predictions in the brain. *Nat Rev Neurosci* 2015;**16**:419–29.\n- Bowlby J. *Attachment and Loss: Attachment*. London, UK: Pimlico, 1997.\n- Bresin K, Gordon KH. Endogenous opioids and nonsuicidal self-injury: a mechanism of affect regulation. *Neurosci Biobehav Rev* 2013;**37**:374–83.\n- Cakin Memik N, Hunc F, Kalayci S *et al.* Assessment of plasma endogenous opioid neuropeptide levels and psychometric properties of non-suicidal self-injury in adolescents. *Arch Suicide Res* 2023;**27**:749–68.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed1.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed4.pdf", - "query": "Which cortical sub-networks were particularly sensitive to pregnancy?", - "target_page": 2, - "target_passage": "Several sensory and attention subnetworks were particu- larly sensitive to gestation, including the control (subnetwork B), sali- ence ventral attention (subnetwork A), dorsal attention (subnetwork B), default (subnetwork A) and somatomotor (subnetworks A and B) networks", - "chunk_present": { - "presence": true, - "index": 6 - } - }, - "top_chunk": [ - { - "text": "the offspring12. Human studies have revealed GMV reductions in areas of the brain important for social cognition and the magnitude of these changes corresponds with increased parental attachment13. Deeper examination of cellular and systems-level mechanisms will improve our understanding of how pregnancy remodels specific circuits to promote maternal behavior.\n\nAlthough studied to a lesser degree, ties between maternal behavior and white matter microstructure (particularly connectivity between temporal and occipital lobes) have been noted31. Here we reveal pronounced GMV changes in regions within sensory, attention and default mode networks over the gestational window. 
In parallel, we observed increased anisotropy in white matter tracts that facilitate communication between emotional and visual processing hubs37–39, including the inferior longitudinal fasciculus and inferior fronto-occipital fasciculus. Pinpointing the synchrony of gray and white matter changes that unfold in the maternal brain could be key to understanding the behavioral adaptions that emerge during and after pregnancy, such as honing the brain's visual and auditory responses to infant cues and eliciting maternal behavior. Research into other major transition periods supports this idea. For instance, adolescence is a dynamic period characterized by region-specific, nonlinear decreases in GMV and increases in WMV, maturational brain changes that are tied to gains in executive function and social cognition40. For both adolescence41 and matrescence, the considerable rise in steroid hormone production appears to remodel the brain (see ref. 25 for comparative analysis), promoting a suite of behaviors adaptive to that life stage. How specific neural changes give rise to specific behavioral adaptations has yet to be fully explored with respect to human pregnancy.\n\nThis precision imaging study mapped neuroanatomical changes across pregnancy in a single individual, precluding our ability to generalize to the broader population. To benchmark our findings, we compared the magnitude of GMV changes observed throughout pregnancy against data from nonpregnant individuals sampled over a similar time course. Doing so provided compelling evidence that pregnancy-related neuroanatomical shifts far exceed normative day-to-day brain variability and measurement error. Evidence suggests that white matter microstructure remains fairly stable over a six-month period42, but more studies are needed to compare the degree of white matter changes observed during pregnancy to normative change over time. 
Further, sampling larger cohorts of women will generate much-needed normative models of brain change (akin to ref. 43) throughout pregnancy to establish what constitutes a typical degree of neuroanatomical change expected during gestation and postpartum recovery.\n\nThese findings provide a critical rationale for conducting further precision imaging studies of pregnancy in demographically enriched cohorts to determine the universality and idiosyncrasy of these adaptations and their role in maternal health. Are the changes observed in our participant reflective of the broader population? Do deviations from the norm lead to maladaptive outcomes? A precision imaging approach can help determine whether the pace of pregnancy-induced neuroanatomical changes drives divergent brain health outcomes in women, as may be the case during other rapid periods of brain development44. One in five women experiences perinatal depression45 and while the first FDA-approved treatment is now available46, early detection remains elusive. Precision imaging studies could offer clues about an individual's risk for or resilience to depression before symptom onset, helping clinicians better determine when and how to intervene. Neuroscientists and clinicians also lack tools to facilitate detection and treatment of neurological disorders that co-occur, worsen or remit with pregnancy, such as epilepsy, headaches, multiple sclerosis and intracranial hypertension47. Precision mapping of the maternal brain lays the groundwork for a greater understanding of the subtle and sweeping structural, functional, behavioral and clinical changes that unfold across pregnancy. 
Such pursuits will advance our basic understanding of the human brain and its remarkable ability to undergo protracted plasticity in adulthood.\n\n### **Online content**\n\nAny methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41593-024-01741-0.\n\n# **References**\n\n- 1. World Health Organization. Maternal, newborn, child and adolescent health and ageing. platform.who.int/data/ maternal-newborn-child-adolescent-ageing (2022).\n- 2. Thornburg, K. L., Bagby, S. P. & Giraud, G. D. *Knobil and Neill's Physiology of Reproduction* pp. 1927–1955 (Elsevier, 2015).\n- 3. Brunton, P. J. & Russell, J. A. The expectant brain: adapting for motherhood. *Nat. Rev. Neurosci.* **9**, 11–25 (2008).\n- 4. Gregg, C. Pregnancy, prolactin and white matter regeneration. *J. Neurol. Sci.* **285**, 22–27 (2009).\n- 5. Haim, A. et al. A survey of neuroimmune changes in pregnant and postpartum female rats. *Brain Behav. Immun.* **59**, 67–78 (2017).\n- 6. Barrière, D. A. et al. Brain orchestration of pregnancy and maternal behavior in mice: a longitudinal morphometric study. *NeuroImage* **230**, 117776 (2021).\n- 7. Celik, A., Somer, M., Kukreja, B., Wu, T. & Kalish, B. T. The genomic architecture of pregnancy-associated plasticity in the maternal mouse hippocampus. *eNeuro* **9**, ENEURO.0117-22. 2022 (2022).\n- 8. Puri, T. A., Richard, J. E. & Galea, L. A. M. Beyond sex differences: short- and long-term effects of pregnancy on the brain. *Trends Neurosci.* **46**, 459–471 (2023).\n- 9. Chaker, Z. et al. Pregnancy-responsive pools of adult neural stem cells for transient neurogenesis in mothers. *Science* **382**, 958–963 (2023).\n- 10. Diamond, M. C., Johnson, R. E. & Ingham, C. 
Brain plasticity induced by environment and pregnancy. *Int. J. Neurosci.* **2**, 171–178 (1971).\n- 11. Servin-Barthet, C. et al. The transition to motherhood: linking hormones, brain and behaviour. *Nat. Rev. Neurosci.* **24**, 605–619 (2023).\n- 12. Ammari, R. et al. Hormone-mediated neural remodeling orchestrates parenting onset during pregnancy. *Science* **382**, 76–81 (2023).\n- 13. Hoekzema, E. et al. Pregnancy leads to long-lasting changes in human brain structure. *Nat. Neurosci.* **20**, 287–296 (2017).\n- 14. Hoekzema, E. et al. Mapping the effects of pregnancy on resting state brain activity, white matter microstructure, neural metabolite concentrations and grey matter architecture. *Nat. Commun.* **13**, 6931 (2022).\n- 15. Martínez-García, M., Paternina-Die, M., Desco, M., Vilarroya, O. & Carmona, S. Characterizing the brain structural adaptations across the motherhood transition. *Front. Glob. Womens Health* **2**, 742775 (2021).\n- 16. Spalek, K. et al. Pregnancy renders anatomical changes in hypothalamic substructures of the human brain that relate to aspects of maternal behavior. *Psychoneuroendocrinology* **164**, 107021 (2024).\n- 17. Martínez-García, M. et al. Do pregnancy-induced brain changes reverse? The brain of a mother six years after parturition. *Brain Sci.* **11**, 168 (2021b).\n- 18. De Lange, A.-M. G. et al. Population-based neuroimaging reveals traces of childbirth in the maternal brain. *Proc. Natl Acad. Sci. USA* **116**, 22341–22346 (2019).", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed4.pdf" - }, - { - "text": "[Fig. 4 axis residue: panels for individual tracts (L inf. longitudinal fasc., L arcuate fasciculus) plotted by pregnancy stage (Pre, 1st, 2nd, 3rd, Post)]\n\n**Fig. 4 | White matter microstructure changes throughout the experiment. 
a**, Numerous white matter tracts demonstrate increasing QA in relation to advancing gestation week (baseline—36 weeks, 16 scans), as determined by correlational tractography analysis (FDR, *q* < 0.0001). See Supplementary Table 9 for complete list of tracts with a significant correlation between QA and gestation week. **b**, Summary of QA values by pregnancy stage (gestation and postpartum, 23 scans) for representative ROIs significantly tied to gestation. ROI-based tractometry was used to extract QA values. Each boxplot represents\n\noverlook the full range of changes that unfold within the gestational window, and underrepresent the brain's metamorphosis during pregnancy. Furthermore, although observed changes were largely global, some regions displayed notable stability (for example, extrastriate cortex). The subcortical region that displayed the strongest relationship with gestation week was the ventral diencephalon, which encompasses the hypothalamus and subsequent medial preoptic area and paraventricular nucleus—structures critical for inducing maternal behavior12,16. The hippocampus exhibited a reduction in volume across gestation, and with higher spatial resolution, this reduction was revealed to be driven by changes in CA1 and CA2/CA3 subfield volumes, while other hippocampal subfields remained stable. Adjacent PHC within the MTL also exhibited volume reduction across gestation. While our hippocampal findings are consistent with pre/post studies of pregnancy13, the precision lens applied within gestation revealed the nonlinear nature of this reduction. Recapitulating and clarifying these regionally specific patterns of volume change throughout the MTL merits further investigation.\n\nSimilar precision imaging studies have captured dynamic brain reorganization across other neuroendocrine transitions, such as the menstrual cycle (see review in ref. 28), underscoring the powerful role steroid hormones have in shaping the mammalian brain29. 
Endocrine changes across pregnancy dwarf those that occur across the menstrual cycle, which highlights the critical need to map the brain's response to this unique hormonal state. Broad physiological changes occur in tandem with the rise in steroid hormones, including changes in body mass composition, water retention, immune function and sleep patterns11. These factors could have a role in the brain changes observed here, with some driving neurobiological changes and others, like water retention, potentially affecting MRI-based measurements. Note that, although cortical reductions in GMV over gestation were stable across analyses, accounting for QC measures influenced the magnitude and location of these results. These metrics all fell within the standard range, but there may be meaningful reductions in signal that accompany volumetric reductions (for example, increased CSF and decreased GM)—a methodological nuance that goes beyond the scope of this resource study. Ultimately, identifying the shared and unique contributions of these factors to the neuroanatomical changes that unfold across gestation warrants further investigation. Deeply phenotyping a large and diverse cohort of women across pregnancy will open up new avenues of exploration, for example, allowing researchers to link blood-based proteomic signatures to pregnancy outcomes; deploying wearable devices to monitor changes in sleep, cognition and mood; and probing the broader social and environmental determinants of maternal health27.\n\nThe neuroanatomical changes that unfold during matrescence may have broad implications for understanding individual differences in parental behavior13,24,30,31, vulnerability to mental health disorders32,33 and patterns of brain aging18,19,34–36. Decreases in GMV may reflect 'fine-tuning' of the brain by neuromodulatory hormones in preparation for parenthood26. 
For example, in rodents, steroid hormones promote parental behavior by remodeling specific neural circuits in the medial preoptic area of the hypothalamus. These behavioral adaptations are critical to the dam's ability to meet the demands of caring for", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed4.pdf" - }, - { - "text": "# **nature neuroscience**\n\n# **Neuroanatomical changes observed over the course of a human pregnancy**\n\nReceived: 23 August 2023\n\nAccepted: 29 July 2024\n\nPublished online: 16 September 2024\n\nCheck for updates\n\n**Laura Pritschet  1 , Caitlin M. Taylor  1 , Daniela Cossio  2 , Joshua Faskowitz  3 , Tyler Santander1 , Daniel A. Handwerker  3 , Hannah Grotzinger1 , Evan Layher1 , Elizabeth R. Chrastil  2,5 & Emily G. Jacobs  1,4,5**\n\nPregnancy is a period of profound hormonal and physiological changes experienced by millions of women annually, yet the neural changes unfolding in the maternal brain throughout gestation are not well studied in humans. Leveraging precision imaging, we mapped neuroanatomical changes in an individual from preconception through 2 years postpartum. Pronounced decreases in gray matter volume and cortical thickness were evident across the brain, standing in contrast to increases in white matter microstructural integrity, ventricle volume and cerebrospinal fluid, with few regions untouched by the transition to motherhood. This dataset serves as a comprehensive map of the human brain across gestation, providing an open-access resource for the brain imaging community to further explore and understand the maternal brain.\n\nWorldwide, nearly 85% of women experience one or more pregnancies in their lifetime1 , with 140 million women becoming pregnant each year. 
Over an approximately 40-week gestational window, the maternal body undergoes profound physiological adaptations to support the development of the fetus, including increases in plasma volume, metabolic rate, oxygen consumption and immune regulation2 . These rapid adaptations are initiated by 100-fold to 1,000-fold increases in hormone production, including estrogen and progesterone. These neuromodulatory hormones also drive significant reorganization of the central nervous system. Evidence from animal models and human studies converge on pregnancy as a period of remarkable neuroplasticity3–10 (see ref. 10 for one of the earliest known observations). Gestational increases in steroid hormone synthesis drive neurogenesis, dendritic spine growth, microglial proliferation, myelination and astrocyte remodeling (for review, see ref. 11). These cellular changes are pronounced in brain circuits that promote maternal behavior. For example, Ammari et al. recently discovered that steroid hormones can fine-tune the response properties of galanin neurons in the rodent medial preoptic area of the hypothalamus (mPOA), leading to enhanced sensitivity in dams to sensory cues from newborn pups12.\n\nIn humans, reductions in gray matter volume (GMV) have been observed postpartum13–16, particularly in regions central to theory-of-mind processing13. These GMV changes persist at 6 years postpartum17 and are traceable decades later18,19, underscoring the permanence of this major remodeling event. And yet the changes that occur within the maternal brain during gestation itself are virtually unknown (see ref. 20 for early neuroimaging insight). A recent study by Paternina-Die et al. offers intriguing clues21. Women were scanned once in the third trimester and again in the postpartum period, revealing a reduction of cortical volume observable in the late pregnancy scan. 
These findings suggest that pregnancy is a highly dynamic period for neural remodeling, yet neuroscientists lack a detailed map of how the human brain changes throughout the gestational period.\n\nHere we conducted a precision imaging study of pregnancy in which a healthy 38-year-old primiparous woman underwent 26 magnetic resonance imaging (MRI) scans and venipuncture beginning 3 weeks preconception through 2 years postpartum. We observed widespread reductions in cortical GMV and cortical thickness (CT) occurring in step with advancing gestational week and the dramatic rise in sex hormone production. Remodeling was also evident within\n\n1 Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA. 2 Department of Neurobiology and Behavior, University of California, Irvine, CA, USA. 3 Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA. 4 Neuroscience Research Institute, University of California, Santa Barbara, CA, USA. 5 These authors contributed equally: Elizabeth R. Chrastil, Emily G. Jacobs.  e-mail: laura.pritschet@pennmedicine.upenn.edu; chrastil@uci.edu; emily.jacobs@psych.ucsb.edu", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed4.pdf" - }, - { - "text": "**Fig. 3 | Subcortical GMV changed throughout gestation. a**, Multivariate regression analyses revealed largely negative relationships between gestation week and subcortical GMV regions over pregnancy, including bilateral thalamus, caudate, hippocampus, ventral diencephalon (encompassing hypothalamus, substantia nigra, mammillary body and red nucleus) and left caudate. Lateral ventricles displayed the only positive relationships with gestation week (also depicted in Fig. 1d). The whole-brain subcortical GMV estimates shown here were derived via FreeSurfer and 'aseg' subcortical segmentation. FDR-corrected at *q* < 0.05. 
Inset, right ventral diencephalon displayed the strongest negative association with gestation (left; baseline—36 weeks, 19 scans) and did not return to baseline postpartum (right; gestation and postpartum, 26 scans). **b**, The participant's hippocampus and surrounding cortex were segmented\n\ninto seven bilateral subregions. Quadratic (CA1, CA2/CA3) and linear regression analyses (PHC) revealed subfields were negatively associated with gestation week (baseline—36 weeks, 18 scans) and did not return to baseline postpartum (gestation and postpartum, 25 scans). Shaded regions in scatterplots represent a 95% confidence interval. Each boxplot represents IQR for each stage, with a horizontal line representing the median value. The whiskers indicate variability outside (±1.5) of this range. Outside values are >1.5× and <3× IQR beyond either end of the box. FDR-corrected at *q* < 0.05. For **a** and **b**, nonsignificant regions were set to zero for interpretability. See Supplementary Fig. 6 for complete labeling of regions in both segmentations. Brain visualizations created with R package ggseg48*.* DC, diencephalon.\n\noutstanding questions. This study and corresponding open-access dataset offer neuroscientists a detailed map of the human brain across gestation, a resource for which a wide range of previously unattainable neurobiological questions can now be explored.\n\nOur findings from this precision imaging study show that pregnancy is characterized by reductions in GMV, cortical thinning and enhanced white matter microstructural integrity that unfold week by week. These changes were also tied to the significant rise in steroid hormone concentrations over pregnancy. Some of these changes persist at 2 years postpartum (for example, global reductions in GMV and CT), while others, including markers of white matter integrity, appear to be transient. Ventricular expansion and contraction parallel these cortical changes. 
These widespread patterns, and the notable increase in CSF volume across gestation, could reflect increased water retention and subsequent compression of cortical tissue. However, the persistence of these changes at 2 years postpartum and regional variation in GMV, CT and QA, hint at cellular underpinnings, such as alterations in glia or neuron number, synaptic density and myelination (for review on the latter, see ref. 4). Future studies of the relationship between fluid dynamics and volumetric changes will help clarify the factors that drive global neural changes during pregnancy; such insights will have broad implications for maternal health (for example, neurological effects tied to pre-eclampsia or edema).\n\nCritically, dynamic neural changes occurred within the pregnancy window itself, a nuance not captured by studies limited to comparisons between prepregnancy and postpregnancy. For example, we observed large increases in white matter microstructural integrity (QA) throughout the first and second trimesters of pregnancy, but these measures fully returned to baseline values by the first postpartum scan. This pattern may explain why previous studies report no pregnancy-related differences in white matter tractography14. Other measures, such as GMV and CT, decreased throughout gestation and displayed only a modest rebound postpartum. These nonlinear patterns suggest that only quantifying prepregnancy and postpartum brain structure may", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed4.pdf" - }, - { - "text": "Figure 1. A schematic illustration of a hierarchical active inference model. This model links (exteroceptive, interoceptive, and proprioceptive) sensations at lower levels with multimodal models of hidden bodily states, such as fatigue and hunger, at intermediate levels, and fnally with temporally extended, integrative models of the embodied self at the higher hierarchical level. 
In this schematic, following predictive coding (Rao and Ballard 1999, Friston 2005), black and red circles represent neural units that encode predictions and prediction errors, respectively. The levels are reciprocally connected, so predictions are propagated from the top-down (black edges) and prediction errors from the bottom-up (red edges). Finally, the pink triangles indicate a mechanism of precision gating (or gain control) of prediction error units, which determines their relative influence on units encoding predictions. At a neurobiological level, prediction and prediction error units could be mapped to deep and superficial pyramidal cells in cortical hierarchies, whereas expected precision could be linked to neuromodulatory input. The elements of the generative model shown do not need to map one-to-one to specific brain areas or networks but are plausibly distributed across many of them. However, as a first approximation, the lower and intermediate layers of the generative model could be linked to brain networks that process unimodal information (e.g. sensory cortices for exteroceptive information) and multimodal association areas, respectively. The highest level of the generative model could be linked to brain networks that process information about the self, such as the insular cortex, the anterior cingulate cortex, and the medial prefrontal cortex. See Parr et al. (2022) for details about hierarchical generative models supporting adaptive regulation and allostasis and Barrett and Simmons (2015) for their putative neuronal underpinnings. See online article for colored version of this figure.\n\nare reciprocally linked through top-down connections that convey predictions (black edges) and bottom-up connections that convey prediction errors (red edges), within and across levels. 
This predictive coding architecture permits inferring (in the Bayesian sense) the most likely causes of sensations, across multiple modalities and multiple hierarchical levels, by minimizing prediction errors at all levels. The rationale is that predictions at all levels are continuously adjusted (and synaptic weights adjusted at a slower time scale) until they match with incoming multimodal stimuli sufficiently well, and, consequently, the prediction errors across all levels are minimized. This process entails that even if a predictive coding agent starts with an incorrect prediction (e.g. about what object it is looking at) the prediction errors that measure a discrepancy between the predicted sensations and the actual sensations can help revise the initial predictions. See Parr et al. (2022) for a more detailed explanation of how to interpret these schematics.\n\nAnother critical aspect of Fig. 1 is that it illustrates two pathways in which prediction errors at the proprioceptive and interoceptive levels are used to steer physical actions (reflex arcs) and autonomic actions (autonomic reflexes). Endowing predictive coding with these reflexes—hence realizing an \"active inference\" architecture—permits minimizing prediction errors by changing the state of the world (by physically acting) or the internal milieu (by engaging in autonomic actions) rather than only by changing predictions, as described later.\n\nEquipped with a generative model like the one shown in Fig. 1, an active inference agent can continuously infer (and act upon) the state of the world and of the body, including the internal milieu, at multiple time scales. Of particular interest, here are multimodal inferences that unite exteroceptive and interoceptive sources of evidence. One example of this is the perception of faces expressing emotions. Two studies reported that", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed1.pdf" - }, - { - "text": "**Fig. 
1 | Precision imaging reveals neuroanatomical changes throughout gestation. a**, Standard medical demarcations for pregnancy stages (that is, trimesters) by gestation week (the image is created with BioRender.com). **b**, Steroid hormones increased significantly throughout pregnancy and dropped precipitously postpartum, as is characteristic of the prenatal and postnatal periods. **c**, A healthy 38-year-old primiparous woman underwent 26 scanning sessions from 3 weeks preconception through 2 years postpartum. Scans were distributed throughout preconception (four scans), first trimester (four scans), second trimester (six scans), third trimester (five scans) and postpartum (seven scans); tick marks indicate when major measures were collected and\n\n# **Discussion**\n\nConverging evidence across mammalian species points to pregnancy as a remarkable period of neuroplasticity, revealing the brain's ability to undergo adaptive, hormonally-driven neuroanatomical changes beyond adolescence13–15,20,21,24–26. Investigations that compare women week. **d**, Summary (that is, total) of brain measures throughout the experiment. Generalized additive models revealed GMV, CT and total brain volume decreased throughout pregnancy (see Methods for validation with cubic regression), with a slight recovery postpartum. Global QA, lateral ventricle and CSF volumes displayed nonlinear increases across gestation, with a notable rise in the second and third trimesters before dropping sharply postpartum. Shaded regions represent 95% confidence bands; solid lines indicate model fit; dashed line indicates parturition.\n\ncolors denote pregnancy stage. The participant underwent IVF to achieve pregnancy, allowing for precise mapping of ovulation, conception and gestation\n\nprepregnancy and then again postpartum provide the strongest evidence to date that the human brain undergoes such neural changes11,27. But what about pregnancy itself? 
Over what time course do anatomical changes in the maternal brain manifest? Are they tied to the substantial increase in sex hormone production? Here we begin to address these", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed4.pdf" - }, - { - "text": "subcortical structures, including the ventral diencephalon, caudate, thalamus, putamen and hippocampus. High-resolution imaging and segmentation of the medial temporal lobe (MTL) extend these findings further, revealing specific volumetric reductions within hippocampal subfields CA1, CA2/CA3 and parahippocampal cortex (PHC). In contrast to widespread decreases in cortical and subcortical GMV, correlational tractography analyses revealed nonlinear increases in white matter quantitative anisotropy (QA) throughout the brain—indicating greater tract integrity—as gestational week progressed. Together, these findings reveal the highly dynamic changes that unfold in a human brain across pregnancy, demonstrating a capacity for extensive neural remodeling well into adulthood.\n\n# **Results**\n\n#### **Serological evaluations**\n\nSerological evaluations captured canonical hormone fluctuations characteristic of the prenatal, perinatal and postnatal periods (Fig. 1b). Serum hormone concentrations increased significantly over the course of pregnancy and dropped precipitously postpartum (preconception, estradiol (E) = 3.42 pg ml−1 and progesterone (P) = 0.84 ng ml−1; 3 weeks preparturition, E = 12,400 pg ml−1 and P = 103 ng ml−1; 3 months postparturition, E = 11.50 pg ml−1 and P = 0.04 ng ml−1).\n\n#### **Whole-brain dynamics from baseline through postpartum**\n\nTo begin, we characterized broad neuroanatomical changes over the course of the entire experimental window (baseline—2 years postpartum, 26 scans; Fig. 1d). Generalized additive models revealed strong nonlinear (effective degrees of freedom > 3) relationships between weeks since conception and summary brain metrics. 
Total GMV (*F* = 27.87, *P* < 0.001, deviance explained = 93.9%, *R*2 adj = 0.91), summary CT (*F* = 15.79, *P* < 0.001, deviance explained = 78.6%, *R*2 adj = 0.75) and total brain volume (*F* = 26.12, *P* < 0.001, deviance explained = 93.4%, *R*2 adj = 0.90) linearly decreased during gestation and appeared to partially rebound postpartum. In contrast, global microstructural integrity (QA) of white matter increased throughout the first and second trimesters before returning to baseline levels in the postpartum period (whole-brain QA, *F* = 4.62, *P* = 0.007, deviance explained = 60.2%, *R*2 adj = 0.51). We also observed nonlinear patterns of lateral ventricle expansion *(F* = 10.44, *P* < 0.001, deviance explained = 83.8%, *R*2 adj = 0.77) and increased cerebrospinal fluid (CSF; *F* = 13.32, *P* < 0.001, deviance explained = 83.8%, *R*2 adj = 0.79) rising in the second and third trimesters before dropping sharply postpartum.\n\n#### **Cortical volume and thickness changes tied to gestation**\n\nWe then narrowed the aperture to capture changes unfolding within gestation itself (baseline—36 weeks pregnant, 19 scans). Relationships between summary brain metrics were evident over the gestational period as follows: total brain volume, GMV and CT were positively associated with one another, whereas lateral ventricles, CSF and global QA demonstrated negative relationships with GMV (Supplementary Fig. 1).\n\nChanges in GMV were near-ubiquitous across the cortical mantle (Fig. 2a). Most large-scale brain networks exhibited decreases in GMV (Fig. 2b and Supplementary Table 1); indeed, 80% of the 400 regions of interest (ROI) demonstrated negative relationships between GMV and gestation week (Fig. 2a and Supplementary Table 2). Together, these results provide evidence of a global decrease in cortical volume across pregnancy. 
Several sensory and attention subnetworks were particularly sensitive to gestation, including the control (subnetwork B), salience/ventral attention (subnetwork A), dorsal attention (subnetwork B), default (subnetwork A) and somatomotor (subnetworks A and B) networks (Supplementary Table 1). Regions driving these network-level changes include the bilateral inferior parietal lobe, postcentral gyri, insulae, prefrontal cortex, posterior cingulate and somatosensory cortex (Fig. 2c, Supplementary Table 2 and validation of findings using alternate pipeline in Supplementary Tables 1 and 3). These regions and associated brain networks appear to decrease in volume at a faster rate than the rest of the brain throughout pregnancy, as determined by a subsequent analysis controlling for total GMV (Supplementary Tables 1 and 2). GMV reductions were also significantly correlated with the participant's estradiol and progesterone concentrations (Supplementary Table 1). A highly similar pattern of results was observed when examining pregnancy-related CT changes (Supplementary Fig. 3 and Supplementary Tables 4 and 5). Significant reductions in cortical GMV over gestation remained after controlling for standard quality control (QC) metrics, albeit with some influence on the magnitude and location of the observed effects (Supplementary Figs. 4 and 5).\n\nIn contrast, GMV within regions of the default mode (subnetwork C), limbic (subnetworks A and B) and visual peripheral networks buck the global trend by slightly increasing (for example, temporal poles), remaining constant (for example, orbitofrontal cortex) or reducing at a much slower rate (for example, extrastriate cortex) than total GMV (Fig. 2a,b and Supplementary Tables 1 and 2). CT changes in these regions exhibit similar patterns (Supplementary Fig. 
3 and Supplementary Tables 4 and 5).\n\n#### **Subcortical GMV changes tied to gestation**\n\nConsistent with the broader cortical reductions in GMV, several subcortical regions significantly reduced in volume across gestation (Fig. 3a, left). This included bilateral ventral diencephalon (right hemisphere values shown in Fig. 3a, right; encompasses hypothalamus, substantia nigra, mammillary body, lateral geniculate nucleus and red nucleus among others22), caudate, hippocampus and thalamus, along with left putamen and brain stem (Supplementary Table 6, *q* < 0.05).\n\nNext, high-resolution segmentation of the MTL allowed us to interrogate subcortical structures at a finer resolution, revealing nonlinear volumetric decreases in CA1 (*F*(2,15) = 5.84, *q* = 0.031, *R*2 adj = 0.36; Fig. 3b, left) and CA2/CA3 (*F*(2,15) = 6.82, *q* = 0.027, *R*2 adj = 0.41; Fig. 3b, middle) across gestation. PHC exhibited linear volumetric decreases across gestation (*F*(1,16) = 24.87, *q* < 0.001, *R*2 adj = 0.58; Fig. 3b, right) which was also tied to estradiol (*F*(1,12) = 20.21, *q* = 0.005, *R*2 adj = 0.60). All three relationships remained significant after proportional correction for total GMV. There was no significant change in other subregions or total volume of the hippocampal body, or in the parahippocampal gyrus (Supplementary Table 7 and Supplementary Fig. 8).\n\n#### **White matter microstructure changes tied to gestation**\n\nIn contrast to decreasing global GMV, correlational tractography of white matter, which tests for linear trends in the data, revealed increasing microstructural integrity across the whole brain during gestation (Fig. 4a), concomitant with the rise in 17β-estradiol and progesterone (all *q* < 0.001; Supplementary Fig. 9). Tracts displaying robust correlations with gestational week included the corpus callosum, arcuate fasciculus, inferior fronto-occipital fasciculus and inferior longitudinal fasciculus (Fig. 
4b), as well as the cingulum bundle, middle and superior longitudinal fasciculus, corticostriatal, corticospinal and corticopontine tracts (see Supplementary Table 9 for complete list).\n\n#### **Comparing brain changes across pregnancy against controls**\n\nWe then compared the changes in GMV across gestation to that of typical variability over time, derived from eight densely-sampled controls23. The GMV changes we see across pregnancy far exceed normative brain variability (Supplementary Fig. 11). On average, change in cortical GMV was nearly three times higher than controls scanned over a similar duration (Supplementary Fig. 11a,b). This extends to MTL subfields, wherein change in volume was three to four times greater across gestation than normative brain variability (Supplementary Fig. 11c,d). We contextualized these findings further by comparing gestational GMV change against our participant's preconception brain volumes; average GMV change during pregnancy was six times (cortical) and three times (MTL) higher than the variability observed between baseline sessions.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm. [105] Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function.[106]\n\nIn feedforward neural networks the signal passes in only one direction.[107] Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short term memory is the most successful network architecture for recurrent networks.[108] Perceptrons[109] use only a single layer of neurons; deep learning[110] uses multiple layers. 
Convolutional neural networks strengthen the connection\n\nA neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.\n\nbetween neurons that are \"close\" to each other—this is especially important in image processing, where a local set of neurons must identify an \"edge\" before the network can identify an object.[111]\n\n#### **Deep learning**\n\nDeep learning[110] uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higherlevel features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces.[112]\n\nDeep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification, [113] and others. The reason that deep learning performs so\n\nwell in so many applications is not known as of 2023.[114] The sudden success of deep learning in 2012– 2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s)[i] but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet. [j]\n\n#### **GPT**\n\nGenerative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pretrained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). 
Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia3.pdf" - }, - { - "text": "# Peripheral nerve injury results in a biased loss of sensory neuron subpopulations\n\nAndrew H. Coopera , Allison M. Barryb , Paschalina Chrysostomidoua , Romane Loligniera , Jinyi Wanga , Magdalena Redondo Canalesa , Heather F. Tittertona , David L. Bennettb , Greg A. Weira,*\n\n# Abstract\n\nThere is a rich literature describing the loss of dorsal root ganglion (DRG) neurons following peripheral axotomy, but the vulnerability of discrete subpopulations has not yet been characterised. Furthermore, the extent or even presence of neuron loss following injury has recently been challenged. In this study, we have used a range of transgenic recombinase driver mouse lines to genetically label molecularly defined subpopulations of DRG neurons and track their survival following traumatic nerve injury. We find that spared nerve injury leads to a marked loss of cells containing DRG volume and a concomitant loss of small-diameter DRG neurons. Neuron loss occurs unequally across subpopulations and is particularly prevalent in nonpeptidergic nociceptors, marked by expression of Mrgprd. We show that this subpopulation is almost entirely lost following spared nerve injury and severely depleted (by roughly 50%) following sciatic nerve crush. Finally, we used an in vitro model of DRG neuron survival to demonstrate that nonpeptidergic nociceptor loss is likely dependent on the absence of neurotrophic support. 
Together, these results profile the extent to which DRG neuron subpopulations can survive axotomy, with implications for our understanding of nerve injury–induced plasticity and pain.\n\nKeywords: Sensory neuron, Neuron death, Transgenic reporter line, Neuropathic pain, Nerve injury\n\n# 1. Introduction\n\nDorsal root ganglion (DRG) neurons represent a molecularly and functionally heterogeneous population. Under normal conditions, this diversity contributes to the ability of the somatosensory nervous system to detect a myriad of sensory stimuli that result in the perceptions of touch, temperature, itch, and pain. Following nerve injury, physiological changes in DRG neurons lead to hyperexcitability,57 which is a key pathological driver of neuropathic pain.20,63 Concomitant molecular changes in discrete subpopulations also occur, and these have recently been comprehensively described in single-cell37,44 and subpopulation-specific sequencing studies.3 These studies describe a transient and generalized reduction in the expression of subpopulation-specific genes following nerve injury.3,37,44\n\nIn addition to molecular changes, there is a rich literature describing the frank loss of DRG neurons following traumatic\n\nSponsorships or competing interests that may be relevant to content are disclosed at the end of this article.\n\n*Corresponding author. Address: School of Psychology and Neuroscience, University of Glasgow, Glasgow G12 8QQ, United Kingdom. Tel.: 144 (0) 141 330 7023. E-mail address: gregory.weir@glasgow.ac.uk (G.A. Weir).\n\nSupplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.painjournalonline.com).\n\nCopyright © 2024 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the International Association for the Study of Pain. 
This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nhttp://dx.doi.org/10.1097/j.pain.0000000000003321\n\nnerve injury in experimental rodent models.24,50,53,56 Some studies have suggested that neuron loss occurs in certain patient cohorts,48,66 but this is yet to be definitively demonstrated in humans. In rodents, most studies support a preferential loss of small cells that give rise to unmyelinated fibers53 but some contrasting studies describe the preferential loss of large cells6 or loss of cells of all sizes.46 Variation is evident across studies in terms of experimental species, age, type of injury, and quantification methods.56 Shi et al.50 used stereological counting methods to identify a 54% loss of DRG neuron number 4 weeks after \"mid-thigh\" sciatic nerve transection in C57BL/6 mice. Estimates for the degree of loss following commonly used nerve injury paradigms (eg, spared nerve injury [SNI] and sciatic nerve crush) are not available and because of the neurochemical changes following injury and the loss of subpopulation marker gene expression,5,44,50 the vulnerability of molecularly defined subpopulations has not been characterized. Moreover, more recent studies have cast doubt on the extent or even presence of DRG neuron death following nerve injury. One study which developed a deep learning approach to assess rat DRG cellular plasticity found no loss of neurons up to 2 weeks post-SNI,49 while another observed no loss of genetically labelled damaged DRG neurons 2 months after sciatic nerve crush.44\n\nThe issue of whether neuron loss occurs, and if so, in what subpopulations, is important. It will likely have implications for our understanding of reinnervation and functional recovery in patients. 
Furthermore, better insight will provide critical context for those investigating the plasticity that occurs following nerve injury and may inform therapeutic targeting of sensory neuron populations.\n\nAn expanding repertoire of transgenic recombinase driver lines now makes it possible to permanently label DRG neuron subpopulations and study their fate in rodent nerve injury paradigms. The aim of this study was to use this technology to characterize\n\na School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom, b Nuffield Department of Clinical Neurosciences, University of\n\nOxford, Oxford, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed2.pdf" - }, - { - "text": "SNI-related gene expression signatures were less evident in Mrgprd-expressing and C-LTMR neurons at later timepoints, compared with other populations in injured DRG.3 This could be explained by a loss of axotomized neurons of these classes and therefore sampling of only uninjured neurons at this timepoint.24,43,64 In terms of the transcriptional response to injury, nonpeptidergic nociceptors show enrichment of individual proapoptotic factors early after injury,23,68 and we extend these results in this study, by describing a subpopulation-specific enrichment of GO terms associated with apoptosis that is evident as early as 3 days after injury. Such data and single-cell transcriptomic profiling of all DRG neurons following injury37,44 may offer the opportunity to elucidate the cell death pathways engaged and upstream effectors that enrich this process to nonpeptidergic nociceptive neurons.\n\n#### 4.3. Implications for pain pathogenesis\n\nNeuronal loss has been proposed as a key contributor to poor functional recovery following nerve injury,54 and biased survival of different afferent types might be expected to contribute to modality-specific sensory deficits. 
Beyond loss of function, does DRG neuron loss contribute to chronic pain, in either an adaptive or maladaptive manner? Intrathecal delivery of GDNF is neuroprotective and reverses the reduction in the number of IB4-binding DRG neurons and central terminals seen following transection.5 Treatment is concurrently analgesic and abrogates pain-related behaviors.7,60 However, the pleiotropic nature of GDNF makes it impossible to directly attribute the analgesic effects to the reversal of neuron loss. Indeed, it is possible that GDNF exerts its effect by actions on intact nonpeptidergic nociceptive afferents,52 activation of which is known to drive aversive behaviors in the neuropathic state.62 These data leave the contribution of nonpeptidergic nociceptor loss to behavior in the GDNF treatment paradigm ambiguous. Other pharmacological approaches have been found effective at reversing a neuronal loss in rodent models, but the impact on pain behavior was not studied.21,22\n\nRodents develop marked mechanical and thermal hypersensitivity rapidly following nerve injury and before timepoints at which neuron loss is observed.10 This lack of a temporal correlation may suggest a limited contribution to evoked hypersensitivities. The temporal profile of ongoing tonic pain (eg, pain aversiveness as measured by condition place preference assays26) is less defined and so is its correlation to the timing of neuron loss.\n\nThere are many anatomical sites within the somatosensory nervous system where differential loss of sensory neuron populations could impact neurobiology. 
For example, loss of cutaneous afferents may afford more opportunity for plasticity in reinnervation patterns, such as collateral sprouting of uninjured or surviving afferents, and the types of nerve endings made by different molecular subpopulations.17,27 It also seems likely that the death of many neurons within a DRG could contribute to the expansion and activation of immune cell types, which are known to play a major role in neuropathic pain.30,69 Finally, under normal conditions, peripheral sensory input is integrated into the dorsal horn of the spinal cord by complex interneuron circuitry. Many spinal circuits are engaged by convergent input from different afferent types.9,41,70 Therefore, selective loss of input from discrete afferent types could undoubtedly impact the normal processing of remaining afferent signals.34 Experimentally abrogating neuronal loss may be a fruitful approach to assess the contribution to nervous system plasticity (adaptive or maladaptive) following injury. In this regard, our in vitro readout would be a useful experimental platform to help delineate the precise cell death pathways and signaling cascades engaged (which could then be experimentally manipulated). Such studies should consider that plasticity may evolve over time. The loss of IB41 central terminals is transient following crush and has even been observed to reverse at longer timepoints following SNItrans. 36 These observations, in conjunction with ours of loss of neurons, raise the intriguing question of the source of such central reinnervation.\n\n#### 4.4. Study limitations\n\nOur efforts focused on traumatic nerve injury paradigms owing to previous contrasting results using these robust and reproducible experimental models. We did not extend our studies to systemic neuropathy models, such as chemotherapy or diabetic neuropathy. 
A recent postmortem analysis reported a neuronal loss in the DRG from patients with painful diabetic peripheral neuropathy.19 Transcriptional responses vary substantially across different nerve insults,44 so it would be of interest to test whether neuronal loss and the subpopulation vulnerability reported in this study are common features across different types of insults.\n\nUsing multiple approaches, we assess the naïve mouse L4 DRG to contain approximately 8000 neurons, consistent with a previous estimate,67 and observed a frank loss of small-diameter neurons following injury. However, the extent of loss observed using our semiautomated approach was less than that observed using manual techniques.67 Two major limitations in this study may explain this discrepancy: First, owing to technical issues, the cleared DRG dataset is unpaired ipsilateral–contralateral which adds larger variability. Second, the analysis method is prone to undercounting deep nuclei. The signal-to-noise is better for superficial nuclei and smaller tissue volumes. Given the reduction in DRG volume after SNItrans, nuclei in larger contralateral DRG may be undercounted.\n\nWhile we made efforts to profile the loss of several molecularly discrete sensory neuron populations, we acknowledge that not all subtypes were profiled. Furthermore, recent single-cell RNA sequencing has given us a more granular appreciation of the heterogeneity of sensory neurons.42 Future studies could leverage our experimental approach and new transgenic lines to characterize the loss of neurons in more detail. Such experiments may be pertinent before embarking on molecular or functional profiling of populations post–nerve injury.\n\n#### 4.5. Conclusions\n\nIn sum, we have provided data from multiple complementary experimental approaches to support the hypothesis that DRG neurons are lost following nerve injury in mice. 
We describe a substantial loss, which is biased towards specific subpopulations and particularly present in small-diameter nonpeptidergic nociceptive neurons.\n\n# Conflict of interest statement\n\nD.L.B. has acted as a consultant in the last 2 years for AditumBio, Biogen, Biointervene, Combigene, LatigoBio, GSK, Ionis, Lexicon therapeutics, Neuvati, Olipass, Orion, Replay, SC Health Managers, Theranexus, Third Rock Ventures, and Vida Ventures on behalf of Oxford University Innovation. D.L.B. has received research funding from Lilly and Astra Zeneca, and G.A.W. has received research funding from Ono Pharmaceutical. D.L.B. has received", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed2.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed4.pdf", - "query": "What may reflect the decrease in GMV during pregnancy?", - "target_page": 6, - "target_passage": " Decreases in GMV may reflect ‘fine-tuning’ of the brain by neuromodulatory hormones in prepara- tion for parenthood", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "the offspring12. Human studies have revealed GMV reductions in areas of the brain important for social cognition and the magnitude of these changes corresponds with increased parental attachment13. Deeper examination of cellular and systems-level mechanisms will improve our understanding of how pregnancy remodels specific circuits to promote maternal behavior.\n\nAlthough studied to a lesser degree, ties between maternal behavior and white matter microstructure (particularly connectivity between temporal and occipital lobes) have been noted31. Here we reveal pronounced GMV changes in regions within sensory, attention and default mode networks over the gestational window. 
In parallel, we observed increased anisotropy in white matter tracts that facilitate communication between emotional and visual processing hubs37–39, including the inferior longitudinal fasciculus and inferior fronto-occipital fasciculus. Pinpointing the synchrony of gray and white matter changes that unfold in the maternal brain could be key to understanding the behavioral adaptions that emerge during and after pregnancy, such as honing the brain's visual and auditory responses to infant cues and eliciting maternal behavior. Research into other major transition periods supports this idea. For instance, adolescence is a dynamic period characterized by region-specific, nonlinear decreases in GMV and increases in WMV, maturational brain changes that are tied to gains in executive function and social cognition40. For both adolescence41 and matrescence, the considerable rise in steroid hormone production appears to remodel the brain (see ref. 25 for comparative analysis), promoting a suite of behaviors adaptive to that life stage. How specific neural changes give rise to specific behavioral adaptations has yet to be fully explored with respect to human pregnancy.\n\nThis precision imaging study mapped neuroanatomical changes across pregnancy in a single individual, precluding our ability to generalize to the broader population. To benchmark our findings, we compared the magnitude of GMV changes observed throughout pregnancy against data from nonpregnant individuals sampled over a similar time course. Doing so provided compelling evidence that pregnancy-related neuroanatomical shifts far exceed normative day-to-day brain variability and measurement error. Evidence suggests that white matter microstructure remains fairly stable over a six-month period42, but more studies are needed to compare the degree of white matter changes observed during pregnancy to normative change over time. 
Further, sampling larger cohorts of women will generate much-needed normative models of brain change (akin to ref. 43) throughout pregnancy to establish what constitutes a typical degree of neuroanatomical change expected during gestation and postpartum recovery.\n\nThese findings provide a critical rationale for conducting further precision imaging studies of pregnancy in demographically enriched cohorts to determine the universality and idiosyncrasy of these adaptations and their role in maternal health. Are the changes observed in our participant reflective of the broader population? Do deviations from the norm lead to maladaptive outcomes? A precision imaging approach can help determine whether the pace of pregnancy-induced neuroanatomical changes drives divergent brain health outcomes in women, as may be the case during other rapid periods of brain development44. One in five women experiences perinatal depression45 and while the first FDA-approved treatment is now available46, early detection remains elusive. Precision imaging studies could offer clues about an individual's risk for or resilience to depression before symptom onset, helping clinicians better determine when and how to intervene. Neuroscientists and clinicians also lack tools to facilitate detection and treatment of neurological disorders that co-occur, worsen or remit with pregnancy, such as epilepsy, headaches, multiple sclerosis and intracranial hypertension47. Precision mapping of the maternal brain lays the groundwork for a greater understanding of the subtle and sweeping structural, functional, behavioral and clinical changes that unfold across pregnancy. 
Such pursuits will advance our basic understanding of the human brain and its remarkable ability to undergo protracted plasticity in adulthood.\n\n### **Online content**\n\nAny methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41593-024-01741-0.\n\n# **References**\n\n- 1. World Health Organization. Maternal, newborn, child and adolescent health and ageing. platform.who.int/data/ maternal-newborn-child-adolescent-ageing (2022).\n- 2. Thornburg, K. L., Bagby, S. P. & Giraud, G. D. *Knobil and Neill's Physiology of Reproduction* pp. 1927–1955 (Elsevier, 2015).\n- 3. Brunton, P. J. & Russell, J. A. The expectant brain: adapting for motherhood. *Nat. Rev. Neurosci.* **9**, 11–25 (2008).\n- 4. Gregg, C. Pregnancy, prolactin and white matter regeneration. *J. Neurol. Sci.* **285**, 22–27 (2009).\n- 5. Haim, A. et al. A survey of neuroimmune changes in pregnant and postpartum female rats. *Brain Behav. Immun.* **59**, 67–78 (2017).\n- 6. Barrière, D. A. et al. Brain orchestration of pregnancy and maternal behavior in mice: a longitudinal morphometric study. *NeuroImage* **230**, 117776 (2021).\n- 7. Celik, A., Somer, M., Kukreja, B., Wu, T. & Kalish, B. T. The genomic architecture of pregnancy-associated plasticity in the maternal mouse hippocampus. *eNeuro* **9**, ENEURO.0117-22. 2022 (2022).\n- 8. Puri, T. A., Richard, J. E. & Galea, L. A. M. Beyond sex diferences: short- and long-term efects of pregnancy on the brain. *Trends Neurosci.* **46**, 459–471 (2023).\n- 9. Chaker, Z. et al. Pregnancy-responsive pools of adult neural stem cells for transient neurogenesis in mothers. *Science* **382**, 958–963 (2023).\n- 10. Diamond, M. C., Johnson, R. E. & Ingham, C. 
Brain plasticity induced by environment and pregnancy. *Int. J. Neurosci.* **2**, 171–178 (1971).\n- 11. Servin-Barthet, C. et al. The transition to motherhood: linking hormones, brain and behaviour. *Nat. Rev. Neurosci.* **24**, 605–619 (2023).\n- 12. Ammari, R. et al. Hormone-mediated neural remodeling orchestrates parenting onset during pregnancy. *Science* **382**, 76–81 (2023).\n- 13. Hoekzema, E. et al. Pregnancy leads to long-lasting changes in human brain structure. *Nat. Neurosci.* **20**, 287–296 (2017).\n- 14. Hoekzema, E. et al. Mapping the efects of pregnancy on resting state brain activity, white matter microstructure, neural metabolite concentrations and grey matter architecture. *Nat. Commun.* **13**, 6931 (2022).\n- 15. Martínez-García, M., Paternina-Die, M., Desco, M., Vilarroya, O. & Carmona, S. Characterizing the brain structural adaptations across the motherhood transition. *Front. Glob. Womens Health* **2**, 742775 (2021).\n- 16. Spalek, K. et al. Pregnancy renders anatomical changes in hypothalamic substructures of the human brain that relate to aspects of maternal behavior. *Psychoneuroendocrinology* **164**, 107021 (2024).\n- 17. Martínez-García, M. et al. Do pregnancy-induced brain changes reverse? The brain of a mother six years after parturition. *Brain Sci.* **11**, 168 (2021b).\n- 18. De Lange, A.-M. G. et al. Population-based neuroimaging reveals traces of childbirth in the maternal brain. *Proc. Natl Acad. Sci. USA* **116**, 22341–22346 (2019).", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed4.pdf" - }, - { - "text": "- - Step 2:\n\t- a. When an MM/GM relationship is started in the ConsistentStopped state, the MM/GM relationship enters the ConsistentSynchronized state. Therefore, no updates (write I/O) were performed on the master volume while in the ConsistentStopped state. 
Otherwise, the **-force** option must be specified, and the MM/GM relationship then enters the InconsistentCopying state while the background copy is started.\n\t- b. When an MM/GM relationship is started in the InconsistentStopped state, the MM/GM relationship enters the InconsistentCopying state while the background copy is started.\n- -Step 3:\n\nWhen the background copy completes, the MM/GM relationship changes from the InconsistentCopying state to the ConsistentSynchronized state.\n\n- - Step 4:\n\t- a. When a MM/GM relationship is stopped in the ConsistentSynchronized state, the MM/GM relationship enters the Idling state when you specify the **-access** option, which enables write I/O on the auxiliary volume.\n\t- b. When an MM/GM relationship is stopped in the ConsistentSynchronized state without an **-access** parameter, the auxiliary volumes remain read-only and the state of the relationship changes to ConsistentStopped.\n\t- c. To enable write I/O on the auxiliary volume, when the MM/GM relationship is in the ConsistentStopped state, issue the **svctask stoprcrelationship** command, which specifies the **-access** option, and the MM/GM relationship enters the Idling state.\n- - Step 5:\n\t- a. When an MM/GM relationship is started from the Idling state, you must specify the **-primary** argument to set the copy direction. If no write I/O was performed (to the master or auxiliary volume) while in the Idling state, the MM/GM relationship enters the ConsistentSynchronized state.\n\t- b. If write I/O was performed to the master or auxiliary volume, the **-force** option must be specified and the MM/GM relationship then enters the InconsistentCopying state while the background copy is started. The background process copies only the data that changed on the primary volume while the relationship was stopped.\n\n#### **Stop on Error**\n\nWhen a MM/GM relationship is stopped (intentionally, or because of an error), the state changes. 
For example, the MM/GM relationships in the ConsistentSynchronized state enter the ConsistentStopped state, and the MM/GM relationships in the InconsistentCopying state enter the InconsistentStopped state.\n\nIf the connection is broken between the two systems that are in a partnership, all (intercluster) MM/GM relationships enter a Disconnected state. For more information, see \"Connected versus disconnected\" on page 536.\n\n**Common states:** Stand-alone relationships and Consistency Groups share a common configuration and state model. All MM/GM relationships in a Consistency Group have the same state as the Consistency Group.\n\n#### **State overview**\n\nThe following sections provide an overview of the various MM/GM states.", - "page_start": 556, - "page_end": 556, - "source_file": "sg247938.pdf" - }, - { - "text": "**Fig. 2 | Cortical GMV showed widespread change through gestation and postpartum. a**, Multivariate regression analyses reveal largely negative relationships between gestation week and regional GMV, with only a minority of regions unaffected or increasing over the gestational window (baseline—36 weeks). All associations presented here were corrected for multiple comparisons (FDR at *q* < 0.05; nonsignificant values set to zero for interpretability). **b**, Average network change was calculated by estimating GMV percent change from baseline (initial) to 36 weeks gestation (final). Attention and control networks appear most affected. **c**, Six representative regions, classified by major subnetworks, that exhibit pronounced GMV change across gestation. For each panel, we display a scatterplot between average GMV of the ROIs and gestation week (left; gestation sessions only, 19 scans), and summary GMV of ROIs by pregnancy stage across the whole study (right; gestation and postpartum sessions, 26 scans). Shaded regions in scatterplots represent a 95% confidence interval. 
Each boxplot represents IQR for each stage, with a horizontal line representing the median value. The whiskers indicate variability outside (±1.5) of this range. Outside values are >1.5× and <3× IQR beyond either end of the box. All statistical tests were corrected for multiple comparisons (FDR at *q* < 0.05) and values were *z* scored and transformed to have a mean of zero and s.d. of one for easier comparison across regions. Please note that the data values shown here are raw (see Supplementary Tables 1 and 2 and Supplementary Data 1 for exhaustive list). Brain visualizations created with R package ggseg48. IQR, interquartile range; Lat, lateral; Med, medial; DMN, default mode network; VisPeri, visual peripheral network; SomMot, somatomotor network; VisCent, visual central network; Cont, control network; TempPar, temporal parietal network; DorsAttn, dorsal attention network; SalVentAttn, salience/ventral attention network.", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed4.pdf" - }, - { - "text": "| Parameter | Value |\n| --- | --- |\n| Total Volume size per I/O Group | There is a per I/O Group limit of 1024 terabytes (TB) on |\n| | the quantity of master and auxiliary volume address |\n| | spaces that can participate in Metro Mirror and Global |\n| | Mirror relationships. This maximum configuration uses all |\n| | 512 MiB of bitmap space for the I/O Group and allows |\n| | 10 MiB of space for all remaining copy services features. |\n\n# **11.6.23 Remote Copy states and events**\n\nThis section describes the various states of a MM/GM relationship and the conditions that cause them to change. 
In Figure 11-94, the MM/GM relationship diagram shows an overview of the status that can apply to a MM/GM relationship in a connected state.\n\n*Figure 11-94 Metro Mirror or Global Mirror mapping state diagram*\n\nWhen the MM/GM relationship is created, you can specify whether the auxiliary volume is already in sync with the master volume, and the background copy process is then skipped. This capability is useful when MM/GM relationships are established for volumes that were created with the format option.\n\nThe following step identifiers are shown in Figure 11-94:\n\n- - Step 1:\n\t- a. The MM/GM relationship is created with the **-sync** option, and the MM/GM relationship enters the ConsistentStopped state.\n\t- b. The MM/GM relationship is created without specifying that the master and auxiliary volumes are in sync, and the MM/GM relationship enters the InconsistentStopped state.", - "page_start": 555, - "page_end": 555, - "source_file": "sg247938.pdf" - }, - { - "text": "#### -**-gmmaxhostdelay** *max_host_delay*\n\nThis parameter specifies the maximum time delay, in milliseconds, at which the Global Mirror link tolerance timer starts counting down. This threshold value determines the additional effect that Global Mirror operations can add to the response times of the Global Mirror source volumes. You can use this parameter to increase the threshold from the default value of 5 milliseconds.\n\n#### -**-maxreplicationdelay** *max_replication_delay*\n\nThis parameter sets a maximum replication delay in seconds. The value must be a number 0 - 360 (0 being the default value, no delay). This feature sets the maximum number of seconds to be tolerated to complete a single I/O. If I/O cannot complete within the *max_replication_delay*, the 1920 event is reported. 
This is the system-wide setting, and applies to MM/GM relationships.\n\nUse the **chsystem** command to adjust these values, as shown in the following example:\n\nchsystem -gmlinktolerance 300\n\nYou can view all of these parameter values by using the **lssystem <***system_name***>** command.\n\nFocus on the **gmlinktolerance** parameter in particular. If poor response extends past the specified tolerance, a 1920 event is logged and one or more GM relationships automatically stop to protect the application hosts at the primary site. During normal operations, application hosts experience a minimal effect from the response times because the GM feature uses asynchronous replication.\n\nHowever, if GM operations experience degraded response times from the secondary system for an extended period, I/O operations begin to queue at the primary system. This queue results in an extended response time to application hosts. In this situation, the **gmlinktolerance** feature stops GM relationships, and the application host's response time returns to normal.\n\nAfter a 1920 event occurs, the GM auxiliary volumes are no longer in the consistent_synchronized state. Fix the cause of the event and restart your GM relationships. For this reason, ensure that you monitor the system to track when these 1920 events occur.\n\nYou can disable the **gmlinktolerance** feature by setting the **gmlinktolerance** value to 0 (zero). However, the **gmlinktolerance** feature cannot protect applications from extended response times if it is disabled. It might be appropriate to disable the **gmlinktolerance** feature under the following circumstances:\n\n- - During SAN maintenance windows in which degraded performance is expected from SAN components, and application hosts can stand extended response times from GM volumes.\n- - During periods when application hosts can tolerate extended response times and it is expected that the **gmlinktolerance** feature might stop the GM relationships. 
For example, if you test by using an I/O generator that is configured to stress the back-end storage, the **gmlinktolerance** feature might detect the high latency and stop the GM relationships.\n\nDisabling the **gmlinktolerance** feature prevents this result at the risk of exposing the test host to extended response times.\n\nA 1920 event indicates that one or more of the SAN components cannot provide the performance that is required by the application hosts. This situation can be temporary (for example, a result of a maintenance activity) or permanent (for example, a result of a hardware failure or an unexpected host I/O workload).", - "page_start": 564, - "page_end": 564, - "source_file": "sg247938.pdf" - }, - { - "text": "# **nature neuroscience**\n\n# **Neuroanatomical changes observed over the course of a human pregnancy**\n\nReceived: 23 August 2023\n\nAccepted: 29 July 2024\n\nPublished online: 16 September 2024\n\nCheck for updates\n\n**Laura Pritschet  1 , Caitlin M. Taylor  1 , Daniela Cossio  2 , Joshua Faskowitz  3 , Tyler Santander1 , Daniel A. Handwerker  3 , Hannah Grotzinger1 , Evan Layher1 , Elizabeth R. Chrastil  2,5 & Emily G. Jacobs  1,4,5**\n\nPregnancy is a period of profound hormonal and physiological changes experienced by millions of women annually, yet the neural changes unfolding in the maternal brain throughout gestation are not well studied in humans. Leveraging precision imaging, we mapped neuroanatomical changes in an individual from preconception through 2 years postpartum. Pronounced decreases in gray matter volume and cortical thickness were evident across the brain, standing in contrast to increases in white matter microstructural integrity, ventricle volume and cerebrospinal fuid, with few regions untouched by the transition to motherhood. 
This dataset serves as a comprehensive map of the human brain across gestation, providing an open-access resource for the brain imaging community to further explore and understand the maternal brain.\n\nWorldwide, nearly 85% of women experience one or more pregnancies in their lifetime1 , with 140 million women becoming pregnant each year. Over an approximately 40-week gestational window, the maternal body undergoes profound physiological adaptations to support the development of the fetus, including increases in plasma volume, metabolic rate, oxygen consumption and immune regulation2 . These rapid adaptations are initiated by 100-fold to 1,000-fold increases in hormone production, including estrogen and progesterone. These neuromodulatory hormones also drive significant reorganization of the central nervous system. Evidence from animal models and human studies converge on pregnancy as a period of remarkable neuroplasticity3–10 (see ref. 10 for one of the earliest known observations). Gestational increases in steroid hormone synthesis drive neurogenesis, dendritic spine growth, microglial proliferation, myelination and astrocyte remodeling (for review, see ref. 11). These cellular changes are pronounced in brain circuits that promote maternal behavior. For example, Ammari et al. recently discovered that steroid hormones can fine-tune the response properties of galanin neurons in the rodent medial preoptic area of the hypothalamus (mPOA), leading to enhanced sensitivity in dams to sensory cues from newborn pups12.\n\nIn humans, reductions in gray matter volume (GMV) have been observed postpartum13–16, particularly in regions central to theory-of-mind processing13. These GMV changes persist at 6 years postpartum17 and are traceable decades later18,19, underscoring the permanence of this major remodeling event. And yet the changes that occur within the maternal brain during gestation itself are virtually unknown (see ref. 20 for early neuroimaging insight). 
A recent study by Paternina-Die et al. offers intriguing clues21. Women were scanned once in the third trimester and again in the postpartum period, revealing a reduction of cortical volume observable in the late pregnancy scan. These findings suggest that pregnancy is a highly dynamic period for neural remodeling, yet neuroscientists lack a detailed map of how the human brain changes throughout the gestational period.\n\nHere we conducted a precision imaging study of pregnancy in which a healthy 38-year-old primiparous woman underwent 26 magnetic resonance imaging (MRI) scans and venipuncture beginning 3 weeks preconception through 2 years postpartum. We observed widespread reductions in cortical GMV and cortical thickness (CT) occurring in step with advancing gestational week and the dramatic rise in sex hormone production. Remodeling was also evident within\n\n1 Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA. 2 Department of Neurobiology and Behavior, University of California, Irvine, CA, USA. 3 Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA. 4 Neuroscience Research Institute, University of California, Santa Barbara, CA, USA. 5 These authors contributed equally: Elizabeth R. Chrastil, Emily G. Jacobs.  e-mail: laura.pritschet@pennmedicine.upenn.edu; chrastil@uci.edu; emily.jacobs@psych.ucsb.edu", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed4.pdf" - }, - { - "text": "A detailed investigation of these factors is beyond the scope of this paper; nevertheless, this result illustrates the important point that the nature and patterns of the climate forcing at a particular level of global warming can play an important role in determining the patterns of regional impacts.\n\n## 5. 
Conclusion\n\nThe higher-resolution HadGEM3 simulations project consistent increases in temperature-related extremes, with larger changes at 2°C compared to 1.5°C and local changes being larger than the global annual mean. There is a higher degree of spatial variation in our projections compared with CMIP5-based studies.\n\nIn the model projections examined here, changes relating to the water cycle are complex, both in their geographical pattern and in the variation between different models. The length of flooding events generally increases across world in all models, but maximum rainfall can either increase or decrease depending on locations. Global patterns of increase and decrease show some consistency between the different GWLs, but also some local differences. Worldwide, most impacts broadly tend to increase with global warming in most areas. For global mean changes, even when the sign of change is uncertain, individual realizations generally show reduced impact at 1.5°C compared with 2°C. However, this does not always hold even at the scale of major global river basins.\n\nVulnerability to food insecurity increases more at 2°C global warming than 1.5°C in approximately three-quarters of countries assessed. The vulnerability increase can arise from increases in either flooding or drought. Reduced drought leads to decreased vulnerability in a limited number of cases.\n\nMost simulations here project a general increase in mean streamflow in most of the basins examined, but with a number of notable exceptions in the tropics. While flows in the Ganges are consistently projected to increase by 30–110% at 2°C, Amazon flows could either increase by 3% or decrease by 25%. Ensemble-mean changes in river flow often do not give a full impression of the magnitude of changes that may be possible, so adaptation planning in particular should not rely on ensemble-mean projections and instead consider a range of outcomes. 
The seasonal low streamflows also increase in many basins, but not as many as for the mean flows—many basins see decreased low flows in some or all projections.\n\nBroadly, changes in weather extremes at 1.5°C global warming could be estimated by scalingback the impacts at 2°C, if this is done with individual ensemble members rather than the ensemble mean. However, this was not always the case for impacts that depend on more complex process or interactions between more than one climate variable, such as run-off and an indicator of vulnerability to food insecurity.\n\nData accessibility. This article has no additional data.\n\nCompeting interests. We declare we have no competing interests.\n\nFunding. This research received funding from the European Union Seventh Framework Programme FP7/2007– 2013 under grant agreement no. 603864 (HELIX: 'High-End cLimate Impacts and eXtremes'; www. helixclimate.eu). The work of R.A.B., C.B., J.C., L.G., K.L. and K.R. was additionally supported by the Joint UK BEIS/Defra Met Office Hadley Centre Climate Programme (GA01101).\n\nAcknowledgements. The authors thank Ed Pope, Jason Lowe and Dann Mitchell for advice and discussion, Alissa Haward and Maria Pearce for project management and administration of HELIX, and two anonymous reviewers whose comments substantially improved the paper.\n\n## References\n\n- 1. IPCC. 2014 Summary for policymakers. In *Climate change 2014: impacts, adaptation, and vulnerability. Part A: global and sectoral aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change* (eds CB Field *et al*.), pp. 1–32. Cambridge, UK: Cambridge University Press.", - "page_start": 24, - "page_end": 24, - "source_file": "pubmed11.pdf" - }, - { - "text": "**Fig. 3 | Subcortical GMV changed throughout gestation. 
a**, Multivariate regression analyses revealed largely negative relationships between gestation week and subcortical GMV regions over pregnancy, including bilateral thalamus, caudate, hippocampus, ventral diencephalon (encompassing hypothalamus, substantia nigra, mammillary body and red nucleus) and left caudate. Lateral ventricles displayed the only positive relationships with gestation week (also depicted in Fig. 1d). The whole-brain subcortical GMV estimates shown here were derived via FreeSurfer and 'aseg' subcortical segmentation. FDR-corrected at *q* < 0.05. Inset, right ventral diencephalon displayed the strongest negative association with gestation (left; baseline—36 weeks, 19 scans) and did not return to baseline postpartum (right; gestation and postpartum, 26 scans). **b**, The participant's hippocampus and surrounding cortex were segmented\n\ninto seven bilateral subregions. Quadratic (CA1, CA2/CA3) and linear regression analyses (PHC) revealed subfields were negatively associated with gestation week (baseline—36 weeks, 18 scans) and did not return to baseline postpartum (gestation and postpartum, 25 scans). Shaded regions in scatterplots represent a 95% confidence interval. Each boxplot represents IQR for each stage, with a horizontal line representing the median value. The whiskers indicate variability outside (±1.5) of this range. Outside values are >1.5× and <3× IQR beyond either end of the box. FDR-corrected at *q* < 0.05. For **a** and **b**, nonsignificant regions were set to zero for interpretability. See Supplementary Fig. 6 for complete labeling of regions in both segmentations. Brain visualizations created with R package ggseg48. DC, diencephalon.\n\noutstanding questions. 
This study and corresponding open-access dataset offer neuroscientists a detailed map of the human brain across gestation, a resource for which a wide range of previously unattainable neurobiological questions can now be explored.\n\nOur findings from this precision imaging study show that pregnancy is characterized by reductions in GMV, cortical thinning and enhanced white matter microstructural integrity that unfold week by week. These changes were also tied to the significant rise in steroid hormone concentrations over pregnancy. Some of these changes persist at 2 years postpartum (for example, global reductions in GMV and CT), while others, including markers of white matter integrity, appear to be transient. Ventricular expansion and contraction parallel these cortical changes. These widespread patterns, and the notable increase in CSF volume across gestation, could reflect increased water retention and subsequent compression of cortical tissue. However, the persistence of these changes at 2 years postpartum and regional variation in GMV, CT and QA, hint at cellular underpinnings, such as alterations in glia or neuron number, synaptic density and myelination (for review on the latter, see ref. 4). Future studies of the relationship between fluid dynamics and volumetric changes will help clarify the factors that drive global neural changes during pregnancy; such insights will have broad implications for maternal health (for example, neurological effects tied to pre-eclampsia or edema).\n\nCritically, dynamic neural changes occurred within the pregnancy window itself, a nuance not captured by studies limited to comparisons between prepregnancy and postpregnancy. For example, we observed large increases in white matter microstructural integrity (QA) throughout the first and second trimesters of pregnancy, but these measures fully returned to baseline values by the first postpartum scan. 
This pattern may explain why previous studies report no pregnancy-related differences in white matter tractography14. Other measures, such as GMV and CT, decreased throughout gestation and displayed only a modest rebound postpartum. These nonlinear patterns suggest that only quantifying prepregnancy and postpartum brain structure may", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed4.pdf" - }, - { - "text": "When the relationship or Consistency Group becomes connected again, the relationship or Consistency Group becomes ConsistentSynchronized only if this action does not lead to a loss of consistency. The following conditions must be true:\n\n- -The relationship was ConsistentSynchronized when it became disconnected.\n- -No writes received successful completion at the master while disconnected.\n\nOtherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.\n\n#### *Empty*\n\nThis state applies only to Consistency Groups. It is the state of a Consistency Group that has no relationships and no other state information to show.\n\nIt is entered when a Consistency Group is first created. It is exited when the first relationship is added to the Consistency Group, at which point the state of the relationship becomes the state of the Consistency Group.\n\n# **11.7 Remote Copy commands**\n\nThis section presents commands that need to be issued to create and operate remote copy services.\n\n# **11.7.1 Remote Copy process**\n\nThe MM/GM process includes the following steps:\n\n- 1. A system partnership is created between two IBM Spectrum Virtualize systems (for intercluster MM/GM).\n- 2. A MM/GM relationship is created between two volumes of the same size.\n- 3. To manage multiple MM/GM relationships as one entity, the relationships can be made part of a MM/GM Consistency Group to ensure data consistency across multiple MM/GM relationships, or for ease of management.\n- 4. The MM/GM relationship is started. 
When the background copy completes, the relationship is consistent and synchronized. When synchronized, the auxiliary volume holds a copy of the production data at the master that can be used for disaster recovery.\n- 5. To access the auxiliary volume, the MM/GM relationship must be stopped with the access option enabled before write I/O is submitted to the auxiliary.\n\nFollowing these steps, the remote host server is mapped to the auxiliary volume and the disk is available for I/O.\n\nFor more information about MM/GM commands, see *IBM System Storage SAN Volume Controller and IBM Storwize V7000 Command-Line Interface User's Guide,* GC27-2287*.*\n\nThe command set for MM/GM contains the following broad groups:\n\n- -Commands to create, delete, and manipulate relationships and Consistency Groups\n- -Commands to cause state changes\n\nIf a configuration command affects more than one system, MM/GM coordinates configuration activity between the systems. Certain configuration commands can be performed only when the systems are connected, and fail with no effect when they are disconnected.", - "page_start": 562, - "page_end": 562, - "source_file": "sg247938.pdf" - }, - { - "text": "**Fig. 1 | Precision imaging reveals neuroanatomical changes throughout gestation. a**, Standard medical demarcations for pregnancy stages (that is, trimesters) by gestation week (the image is created with BioRender.com). **b**, Steroid hormones increased significantly throughout pregnancy and dropped precipitously postpartum, as is characteristic of the prenatal and postnatal periods. **c**, A healthy 38-year-old primiparous woman underwent 26 scanning sessions from 3 weeks preconception through 2 years postpartum. 
Scans were distributed throughout preconception (four scans), first trimester (four scans), second trimester (six scans), third trimester (five scans) and postpartum (seven scans); tick marks indicate when major measures were collected and\n\n# **Discussion**\n\nConverging evidence across mammalian species points to pregnancy as a remarkable period of neuroplasticity, revealing the brain's ability to undergo adaptive, hormonally-driven neuroanatomical changes beyond adolescence13–15,20,21,24–26. Investigations that compare women week. **d**, Summary (that is, total) of brain measures throughout the experiment. Generalized additive models revealed GMV, CT and total brain volume decreased throughout pregnancy (see Methods for validation with cubic regression), with a slight recovery postpartum. Global QA, lateral ventricle and CSF volumes displayed nonlinear increases across gestation, with a notable rise in the second and third trimesters before dropping sharply postpartum. Shaded regions represent 95% confidence bands; solid lines indicate model fit; dashed line indicates parturition.\n\ncolors denote pregnancy stage. The participant underwent IVF to achieve pregnancy, allowing for precise mapping of ovulation, conception and gestation\n\nprepregnancy and then again postpartum provide the strongest evidence to date that the human brain undergoes such neural changes11,27. But what about pregnancy itself? Over what time course do anatomical changes in the maternal brain manifest? Are they tied to the substantial increase in sex hormone production? 
Here we begin to address these", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "6126797.pdf", - "query": "How to light up my sports smart watch?", - "target_page": 2, - "target_passage": "Up button: Short press to light up or turn off the screen", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "Click \"camera\" in the app WearPro to wake up the camera mode of the watch, click the camera button on the watch to take photos, and the photos will be automatically saved to the phone album.\n\n#### **5. Data synchronization**\n\nAfter the watch is successfully bound to the application, the data in the smartwatch can be synchronized to the application.\n\n#### **6. Tilt to wake the screen**\n\nWear the smartwatch correctly on your wrist (left/right hand). when you switch on the feature, you can light up the screen when you raise up your wrist.\n\n#### **7. Do not disturb mode**\n\nIn the APP, tap \"Device\" > \"More\" > \"Do not disturb mode\", set the start to end time, such as 12:00 to 14:00, then you won't receive phone calls and apps notifications on the watch during this period.\n\n### **8. Daily alarm clock**\n\nIn the APP in the APP Device>More, set the start and the end time, the alarm can be set only once or repeatedly on the date (week) setting, and the alarm can be turned on/off.\n\n#### **9. Sedentary reminder**\n\nSet the start and the end time of the sedentary reminder, and the time interval (minutes) in the APP. You can set the reminder for once or to repeat regularly by entering the repeating setting. When the sedentary time is reached, the watch will vibrate and display a sedentary icon on the screen.\n\n#### **10. Drink water reminder**\n\nSet the reminder frequency (minutes) and the time period of the start and the end in a day in the APP. 
You can set the reminder for once or to repeat regularly by entering the repeating setting and selecting the date (week) of the water reminder. When the time of drink water reminder is reached, the watch will vibrate and there will be a water icon on the screen.\n\n#### **11. Dial push**\n\n#### 11.1.Push an existing watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n11.2. Customize the watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the first several watch faces marked with \"custom watch faces\" are customizable. The watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n#### **12. Firmware version**", - "page_start": 6, - "page_end": 6, - "source_file": "6126797.pdf" - }, - { - "text": "Sports smart watch User Manual DT3 Mate\n\n**Thank you for choosing our smart watch. You can fully understand**\n\n**the use and operation of the equipment by reading this manual.**\n\n**The company reserves the right to modify the contents of this manual without any prior notice.**\n\nThe product contains: a packing box, a manual, a watch body, and a charging cable.\n\n#### **A. 
Watch function description**\n\nButton description:", - "page_start": 0, - "page_end": 0, - "source_file": "6126797.pdf" - }, - { - "text": "3) Swipe to the right when the watch is in the dial interface, you can find time/date/week/the latest message (enter to view multiple messages)/some of the recently used menu functions, and turn on or off audio Bluetooth for calls.\n\n4) Swipe up the screen when the watch is in the dial interface to enter the menu interface, and scroll up and down to find the corresponding function.\n\n5) Long press the watch face interface and swipe to right or left to switch the watch face, select one of them and set it with one-click.\n\n#### **1.2 App notification**\n\n1) When the watch is bound to the APP, and you allow the watch to display notifications on the watch, the new messages received in your mobile phone will be pushed to the watch, and a total of 10 messages can be saved. The messages received after 10 messages will be overwritten one by one.\n\n2) Swipe to the bottom to click the delete icon to clear all message records.\n\n#### **1.3 Drop-down menu**\n\nScroll down the screen when the watch is in the dial interface to enter the drop-down menu interface.\n\n1) Bluetooth connection status; time; power left;\n\n2) About, where you can check the firmware version of watch and the address of the Bluetooth\n\n3) Setting, where you can enter it to set part of the functions;\n\n4) Brightness adjustment; where you can adjust the brightness of the screen;\n\n5) Alipay. Download the app Alipay in your mobile phone and bind it with your watch to realize offline payment.\n\n#### **1.4 Phone/Call History**\n\n1. Swipe to the left when the watch is in the watch interface, click the calling icon to turn on/off the calling Bluetooth. Turn on the calling Bluetooth, you will find the name of the calling Bluetooth, then go to the Bluetooth settings of your mobile phone, and bind the Bluetooth in the name of the calling Bluetooth of your watch. 
You can use the watch to make phone calls when they are successfully bound.\n\n2. Call records, which can save the records of incoming and dialed calls. (It can save more than 50 call records, and it will be automatically overwritten when 128 records are full. Click any call record to call back)\n\n3. Dial the keyboard, you can enter the phone number to make a call.\n\n#### **1.5 message**\n\nWhen the watch is successfully bound to the app, and you approve notifications of corresponding apps in your mobile phone system, and switch on these apps or callings notifications functions on your watch, the notifications on your mobile phone can synchronize to your watch.\n\n1.5.1. Incoming call notification:\n\nTurn on the incoming call reminder in the app. When the phone has a incoming call, the watch will light up or vibrate.\n\n1.5.2. SMS notification:", - "page_start": 2, - "page_end": 2, - "source_file": "6126797.pdf" - }, - { - "text": "Enable the SMS notification in the app. When one or more SMS messages are received on the mobile phone, the watch will receive one or more SMS reminders at the same time.\n\n1.5.3. Other application message notifications:\n\nTurn on the corresponding application message notification in the app, such as WeChat, QQ, Outlook, Facebook and other applications. When the mobile phone receives one/multiple application message notifications, the watch will receive one/multiple corresponding message reminders at the same time.\n\n#### **1.6 Frequently used contacts**\n\nThe watch binds to the app, and you allow the watch to access to the phone book of your mobile phone, then you can synchronize you contacts of your mobile phone to the smartwatch.\n\n#### **1.7 Fitness data**\n\nFitness data is turned on by default. When you enter the fitness data interface, scroll up the screen, the smartwatch will display the current data of steps, distance, and calories. 
The data will be wiped out at 00:00 every day in the morning.\n\n#### **1.8 Sports modes** (walking, running, cycling, rope skipping, badminton,\n\n#### basketball, football)\n\n1.8.1 Select the corresponding exercise mode, click the \"Start\" button on the screen to start the exercise; click the \"Start\" button again to pause the recording of the exercise; click the \"End\" button to end the recording, and save to the data.\n\n1.8.2 The data can only be saved when the recording of the exercise is more than 1 minute; If the recording time is less than 1 minute, the smartwatch will remind you that the data is too little to be saved.\n\n#### **1.9 Heart rate**\n\nAfter you wearing the smartwatch correctly, you can measure heart rate when you enter the heart rate function. If you don't wear the smartwatch properly, it will remind you to wear firmly for the measurement.\n\n#### **1.10 ECG**\n\nAfter you wearing the smartwatch correctly, and enter the ECG function(you need to turn on the ECG interface in the app, you can have single measurement at a time. The data of ECG will be saved in the mobile phone. This function should be used with the app.\n\n#### **2.0 My QR code**\n\nConnect the watch to the APP, find My QR Code in the APP, select WeChat/QQ/Alipay and other \"Receive money QR code\" to sync to the watch (Please follow the instructions of the app to operate the function).\n\n#### **2.1 Remote control music**", - "page_start": 3, - "page_end": 3, - "source_file": "6126797.pdf" - }, - { - "text": "Bind the smartwatch to the app WearPro, you can control the music to start/pause/play previous song/play next song of your phone.\n\nBind the audio/calling Bluetooth of the smartwatch also, the music will be broadcast on the smartwatch.\n\n#### **2.2 Sleep**\n\nSleep monitoring time period: from 18:00 at night to 10:00 the next day, the data will be generated by the watch. 
After connecting to the APP, the sleep data on the watch can be synchronized to the APP for you to check.\n\n## **2.3 stopwatch**\n\nClick the stopwatch to enter the timing interface, and you can record the time once.\n\n## **2.4 Weather**\n\nAfter the smartwatch is connected to the app and the data is synchronized, tap Weather on the watch to display the weather information for the day.\n\n# **2.5 Find mobile phone**\n\nAfter the watch is bound to the app WearPro, tap this function to find the mobile phone, and the mobile phone will vibrate or emit a ringtone.\n\n## **2.6 Meteorology**\n\nClick on \"Meteorology\" on the watch to display the ultraviolet (UV) and air pressure conditions of the day.\n\n## **2.7 Massager**\n\nTap the green button to start the massage, and the watch is in a vibrating state, tap the red button to end the massage state.\n\n## **3.0 Menu style**\n\nThere are a variety of menu styles for users to choose.\n\n## **3.1 Settings**\n\n1) You can select the watch language on the settings of the watch, or the watch language can be synchronized with your mobile phone language after the watch successfully binds to the APP.\n\n2) Switch the watch face, swipe to the right to view the next watch face, select a watch face, and click it to set the watch face.\n\n3) Set screen time; a variety of screen time lengths can be selected.\n\n4) Vibration intensity; set reminder vibration intensity.\n\n5) Password; a 4-digit password can be set (if you forget the password, please enter 8762 to decrypt the previous password).\n\n6) Restore factory settings; click √ to enable the factory reset, and click X to cancel the factory reset.", - "page_start": 4, - "page_end": 4, - "source_file": "6126797.pdf" - }, - { - "text": "The version of the watch is displayed on \"Firmware upgrade\" in the column of \"Device\", and users can decide to whether upgrade the firmware version.\n\n#### **13. 
Unbind**\n\nIn the \"Device\" column of WearPro, scroll down to the \"Unbind\" and click to unbind the APP. The iSO users need to go to the Bluetooth settings of the phone, select the Bluetooth name of the smart watch, and click \"Forget this device\". The \"About\" of the watch has an \"Unbind\" button, click it to unbind or do it in the APP. For the safety of users' data, the watch will implement a factory reset after that.\n\n#### **●Frequently asked questions and answers**\n\n***Please avoid exposing the device to extreme temperatures that are too cold or too hot for a long time, which may cause permanent damage.**\n\n***Why can't I take a hot bath with my watch?**\n\n**The temperature of the bath water is relatively changed, it will produce a lot of water vapor, and the water vapor is in the gas phase, and its molecular radius is small, and it is easy to seep into the gap of the watch case. The internal circuit of the watch is short-circuited, which damages the circuit board of the watch and damages the watch.**\n\n***No power on, no charging**\n\n**If you receive the goods and the watch does not turn on, it may be caused by a collision during the transportation of the watch and the battery Seiko board has been protected, so plug in the charging cable to activate it.**", - "page_start": 7, - "page_end": 7, - "source_file": "6126797.pdf" - }, - { - "text": "and view content on demand. They can search content and control their PVR remotely from their smartphone. They can stream programming to their tablet anywhere in their home. A single Rogers Nextbox serves as a master PVR for the entire home enabling simultaneous viewing and recording of up to eight separate shows and storage of over 250 hours of high-definition programming. And customers can access television and movie content on-demand from anywhere by laptop, tablet or smartphone using the Rogers Anyplace TV app.\n\nTelevision has never been this good, this easy, or this simple to control. 
And it's even better when combined with innovative Rogers features, such as the ability to screen phone calls on their TV, listen to voicemail on their tablet, or receive talking text messages on their home phone. Wireless customers can also use Rogers One Number to switch calls\n\namong their computer, home phone and wireless device without interruption; manage e-mails; text messages and voicemail; hold live video chats; and combine and sync contacts from across multiple devices.\n\nWhen they're not at home, more and more customers also rely on Rogers Smart Home Monitoring, a complete monitoring, automation and security solution that includes the most innovative technology and features available. Smart Home Monitoring lets customers monitor, control and receive alerts by smartphone or online, staying connected to their home from almost anywhere, and enjoying the peace of mind that comes with having the most reliable monitoring solution available. Smart Home Monitoring also gives customers the ability to automate lights, appliances, thermostats and more, so they know their homes are not only secure but more energy-efficient and convenient, also.", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "#### **Up button:**\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n#### **Button down:**\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n#### **Charging instructions:**\n\nWireless charging, as shown in the picture below.\n\n#### **1.1 Shortcut function:**\n\n1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n\n2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, brightness adjustment and other functions.", - 
"page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - }, - { - "text": "#### **B**.**Bind to the APP**\n\n#### **1. APP download method**\n\n1.1 Scan the QR code to download\n\n1.2 Search the application at App market and download\n\nFor Android users:\n\nSearch for \"WearPro\" in the Google Play app store or any customized Android store to download, remember to check the pop-up box on your phone when installing, and agree to the permission. For iOS users:\n\nSearch for \"WearPro\" in the APP Store to download, remember to check the pop-up box on your phone when installing, and agree to the permission.\n\nAfter WearPro is installed, the app icon appears as .\n\n#### 2.Bind Bluetooth\n\nAfter the watch is turned on, the Bluetooth will be in the state of being searched. After open the APK/APP, go to Devices > Add Device > click to start searching, select and click the corresponding watch device name, and the watch will be successfully bound to the app.\n\n#### 2.2 Connected to the APP state:\n\nWatch time synchronization: the time shown at the smartwatch and your mobile phone will synchronized after the smartwatch is bound to the APP successfully.\n\n2.3 Binding the audio/calling Bluetooth\n\nWhen the smartwatch is in the dial interface, you can find the audio/calling Bluetooth icon, and click it to turn it on, then go to the Bluetooth settings of your mobile phone and click the name of the audio/calling Bluetooth of the smartwatch to bind it.\n\n## **3. Find Watch**\n\nAfter the smartwatch is bound to the APP, you click \"Find Watch\" in the APP, the smartwatch will light up and vibrate for once.\n\n#### **4. Camera**", - "page_start": 5, - "page_end": 5, - "source_file": "6126797.pdf" - }, - { - "text": "Our new wireless Share Everything plans were Canada's first to let individuals, families and small businesses share wireless data and unlimited nationwide talk and text, with up to 10 wireless devices. 
Rogers recently further enhanced its exciting One Number service by introducing smartphone apps which enable customers to use mobile data or Wi-Fi to talk, text and video chat using their existing Rogers wireless number from any device.\n\nWe also keep customers informed and entertained with Rogers nextgeneration NextBox 3.0 TV experience which allows customers to view and record up to eight HD programs simultaneously, store hundreds of hours of content and enjoy whole-home PVR capability. And with Rogers Anyplace TV, it's also a wireless experience where viewers can navigate their cable guide, use a virtual remote, set PVR recordings and stream live or on-demand content from a tablet, smartphone, laptop or gaming console.\n\nRogers continues to be Canada's innovation leader in rapidly growing areas such as wireless machine-to-machine communications, remote home monitoring and automation, mobile payments, in-car infotainment and telematics, and digital media. As well, Rogers has deployed a suite of unique local digital services that create virtual marketplaces for bringing consumers and businesses together and provide location-based targeted offers.\n\nThese are just a few examples of the ways Rogers continues to innovate and lead the way, introducing wireless, broadband and digital technologies and services that fundamentally change the way customers stay connected, informed and entertained anywhere they are. Canadians know there's one thing to be certain of – if they're with Rogers, they'll never miss a thing.", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "6126797.pdf", - "query": "Is my sports smartwatch's fitness data turned on or off by default?", - "target_page": 4, - "target_passage": "Fitness data is turned on by default.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Enable the SMS notification in the app. 
When one or more SMS messages are received on the mobile phone, the watch will receive one or more SMS reminders at the same time.\n\n1.5.3. Other application message notifications:\n\nTurn on the corresponding application message notification in the app, such as WeChat, QQ, Outlook, Facebook and other applications. When the mobile phone receives one/multiple application message notifications, the watch will receive one/multiple corresponding message reminders at the same time.\n\n#### **1.6 Frequently used contacts**\n\nThe watch binds to the app, and you allow the watch to access to the phone book of your mobile phone, then you can synchronize you contacts of your mobile phone to the smartwatch.\n\n#### **1.7 Fitness data**\n\nFitness data is turned on by default. When you enter the fitness data interface, scroll up the screen, the smartwatch will display the current data of steps, distance, and calories. The data will be wiped out at 00:00 every day in the morning.\n\n#### **1.8 Sports modes** (walking, running, cycling, rope skipping, badminton,\n\n#### basketball, football)\n\n1.8.1 Select the corresponding exercise mode, click the \"Start\" button on the screen to start the exercise; click the \"Start\" button again to pause the recording of the exercise; click the \"End\" button to end the recording, and save to the data.\n\n1.8.2 The data can only be saved when the recording of the exercise is more than 1 minute; If the recording time is less than 1 minute, the smartwatch will remind you that the data is too little to be saved.\n\n#### **1.9 Heart rate**\n\nAfter you wearing the smartwatch correctly, you can measure heart rate when you enter the heart rate function. If you don't wear the smartwatch properly, it will remind you to wear firmly for the measurement.\n\n#### **1.10 ECG**\n\nAfter you wearing the smartwatch correctly, and enter the ECG function(you need to turn on the ECG interface in the app, you can have single measurement at a time. 
The data of ECG will be saved in the mobile phone. This function should be used with the app.\n\n#### **2.0 My QR code**\n\nConnect the watch to the APP, find My QR Code in the APP, select WeChat/QQ/Alipay and other \"Receive money QR code\" to sync to the watch (Please follow the instructions of the app to operate the function).\n\n#### **2.1 Remote control music**", - "page_start": 3, - "page_end": 3, - "source_file": "6126797.pdf" - }, - { - "text": "Bind the smartwatch to the app WearPro, you can control the music to start/pause/play previous song/play next song of your phone.\n\nBind the audio/calling Bluetooth of the smartwatch also, the music will be broadcast on the smartwatch.\n\n#### **2.2 Sleep**\n\nSleep monitoring time period: from 18:00 at night to 10:00 the next day, the data will be generated by the watch. After connecting to the APP, the sleep data on the watch can be synchronized to the APP for you to check.\n\n## **2.3 stopwatch**\n\nClick the stopwatch to enter the timing interface, and you can record the time once.\n\n## **2.4 Weather**\n\nAfter the smartwatch is connected to the app and the data is synchronized, tap Weather on the watch to display the weather information for the day.\n\n# **2.5 Find mobile phone**\n\nAfter the watch is bound to the app WearPro, tap this function to find the mobile phone, and the mobile phone will vibrate or emit a ringtone.\n\n## **2.6 Meteorology**\n\nClick on \"Meteorology\" on the watch to display the ultraviolet (UV) and air pressure conditions of the day.\n\n## **2.7 Massager**\n\nTap the green button to start the massage, and the watch is in a vibrating state, tap the red button to end the massage state.\n\n## **3.0 Menu style**\n\nThere are a variety of menu styles for users to choose.\n\n## **3.1 Settings**\n\n1) You can select the watch language on the settings of the watch, or the watch language can be synchronized with your mobile phone language after the watch successfully binds to the APP.\n\n2) 
Switch the watch face, swipe to the right to view the next watch face, select a watch face, and click it to set the watch face.\n\n3) Set screen time; a variety of screen time lengths can be selected.\n\n4) Vibration intensity; set reminder vibration intensity.\n\n5) Password; a 4-digit password can be set (if you forget the password, please enter 8762 to decrypt the previous password).\n\n6) Restore factory settings; click √ to enable the factory reset, and click X to cancel the factory reset.", - "page_start": 4, - "page_end": 4, - "source_file": "6126797.pdf" - }, - { - "text": "Click \"camera\" in the app WearPro to wake up the camera mode of the watch, click the camera button on the watch to take photos, and the photos will be automatically saved to the phone album.\n\n#### **5. Data synchronization**\n\nAfter the watch is successfully bound to the application, the data in the smartwatch can be synchronized to the application.\n\n#### **6. Tilt to wake the screen**\n\nWear the smartwatch correctly on your wrist (left/right hand). when you switch on the feature, you can light up the screen when you raise up your wrist.\n\n#### **7. Do not disturb mode**\n\nIn the APP, tap \"Device\" > \"More\" > \"Do not disturb mode\", set the start to end time, such as 12:00 to 14:00, then you won't receive phone calls and apps notifications on the watch during this period.\n\n### **8. Daily alarm clock**\n\nIn the APP in the APP Device>More, set the start and the end time, the alarm can be set only once or repeatedly on the date (week) setting, and the alarm can be turned on/off.\n\n#### **9. Sedentary reminder**\n\nSet the start and the end time of the sedentary reminder, and the time interval (minutes) in the APP. You can set the reminder for once or to repeat regularly by entering the repeating setting. When the sedentary time is reached, the watch will vibrate and display a sedentary icon on the screen.\n\n#### **10. 
Drink water reminder**\n\nSet the reminder frequency (minutes) and the time period of the start and the end in a day in the APP. You can set the reminder for once or to repeat regularly by entering the repeating setting and selecting the date (week) of the water reminder. When the time of drink water reminder is reached, the watch will vibrate and there will be a water icon on the screen.\n\n#### **11. Dial push**\n\n#### 11.1.Push an existing watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n11.2. Customize the watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the first several watch faces marked with \"custom watch faces\" are customizable. The watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n#### **12. Firmware version**", - "page_start": 6, - "page_end": 6, - "source_file": "6126797.pdf" - }, - { - "text": "#### **B**.**Bind to the APP**\n\n#### **1. APP download method**\n\n1.1 Scan the QR code to download\n\n1.2 Search the application at App market and download\n\nFor Android users:\n\nSearch for \"WearPro\" in the Google Play app store or any customized Android store to download, remember to check the pop-up box on your phone when installing, and agree to the permission. For iOS users:\n\nSearch for \"WearPro\" in the APP Store to download, remember to check the pop-up box on your phone when installing, and agree to the permission.\n\nAfter WearPro is installed, the app icon appears as .\n\n#### 2.Bind Bluetooth\n\nAfter the watch is turned on, the Bluetooth will be in the state of being searched. 
After open the APK/APP, go to Devices > Add Device > click to start searching, select and click the corresponding watch device name, and the watch will be successfully bound to the app.\n\n#### 2.2 Connected to the APP state:\n\nWatch time synchronization: the time shown at the smartwatch and your mobile phone will synchronized after the smartwatch is bound to the APP successfully.\n\n2.3 Binding the audio/calling Bluetooth\n\nWhen the smartwatch is in the dial interface, you can find the audio/calling Bluetooth icon, and click it to turn it on, then go to the Bluetooth settings of your mobile phone and click the name of the audio/calling Bluetooth of the smartwatch to bind it.\n\n## **3. Find Watch**\n\nAfter the smartwatch is bound to the APP, you click \"Find Watch\" in the APP, the smartwatch will light up and vibrate for once.\n\n#### **4. Camera**", - "page_start": 5, - "page_end": 5, - "source_file": "6126797.pdf" - }, - { - "text": "3) Swipe to the right when the watch is in the dial interface, you can find time/date/week/the latest message (enter to view multiple messages)/some of the recently used menu functions, and turn on or off audio Bluetooth for calls.\n\n4) Swipe up the screen when the watch is in the dial interface to enter the menu interface, and scroll up and down to find the corresponding function.\n\n5) Long press the watch face interface and swipe to right or left to switch the watch face, select one of them and set it with one-click.\n\n#### **1.2 App notification**\n\n1) When the watch is bound to the APP, and you allow the watch to display notifications on the watch, the new messages received in your mobile phone will be pushed to the watch, and a total of 10 messages can be saved. 
The messages received after 10 messages will be overwritten one by one.\n\n2) Swipe to the bottom to click the delete icon to clear all message records.\n\n#### **1.3 Drop-down menu**\n\nScroll down the screen when the watch is in the dial interface to enter the drop-down menu interface.\n\n1) Bluetooth connection status; time; power left;\n\n2) About, where you can check the firmware version of watch and the address of the Bluetooth\n\n3) Setting, where you can enter it to set part of the functions;\n\n4) Brightness adjustment; where you can adjust the brightness of the screen;\n\n5) Alipay. Download the app Alipay in your mobile phone and bind it with your watch to realize offline payment.\n\n#### **1.4 Phone/Call History**\n\n1. Swipe to the left when the watch is in the watch interface, click the calling icon to turn on/off the calling Bluetooth. Turn on the calling Bluetooth, you will find the name of the calling Bluetooth, then go to the Bluetooth settings of your mobile phone, and bind the Bluetooth in the name of the calling Bluetooth of your watch. You can use the watch to make phone calls when they are successfully bound.\n\n2. Call records, which can save the records of incoming and dialed calls. (It can save more than 50 call records, and it will be automatically overwritten when 128 records are full. Click any call record to call back)\n\n3. Dial the keyboard, you can enter the phone number to make a call.\n\n#### **1.5 message**\n\nWhen the watch is successfully bound to the app, and you approve notifications of corresponding apps in your mobile phone system, and switch on these apps or callings notifications functions on your watch, the notifications on your mobile phone can synchronize to your watch.\n\n1.5.1. Incoming call notification:\n\nTurn on the incoming call reminder in the app. When the phone has a incoming call, the watch will light up or vibrate.\n\n1.5.2. 
SMS notification:", - "page_start": 2, - "page_end": 2, - "source_file": "6126797.pdf" - }, - { - "text": "The version of the watch is displayed on \"Firmware upgrade\" in the column of \"Device\", and users can decide to whether upgrade the firmware version.\n\n#### **13. Unbind**\n\nIn the \"Device\" column of WearPro, scroll down to the \"Unbind\" and click to unbind the APP. The iSO users need to go to the Bluetooth settings of the phone, select the Bluetooth name of the smart watch, and click \"Forget this device\". The \"About\" of the watch has an \"Unbind\" button, click it to unbind or do it in the APP. For the safety of users' data, the watch will implement a factory reset after that.\n\n#### **●Frequently asked questions and answers**\n\n***Please avoid exposing the device to extreme temperatures that are too cold or too hot for a long time, which may cause permanent damage.**\n\n***Why can't I take a hot bath with my watch?**\n\n**The temperature of the bath water is relatively changed, it will produce a lot of water vapor, and the water vapor is in the gas phase, and its molecular radius is small, and it is easy to seep into the gap of the watch case. 
The internal circuit of the watch is short-circuited, which damages the circuit board of the watch and damages the watch.**\n\n***No power on, no charging**\n\n**If you receive the goods and the watch does not turn on, it may be caused by a collision during the transportation of the watch and the battery Seiko board has been protected, so plug in the charging cable to activate it.**", - "page_start": 7, - "page_end": 7, - "source_file": "6126797.pdf" - }, - { - "text": "#### **Up button:**\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n#### **Button down:**\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n#### **Charging instructions:**\n\nWireless charging, as shown in the picture below.\n\n#### **1.1 Shortcut function:**\n\n1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n\n2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, brightness adjustment and other functions.", - "page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - }, - { - "text": "programming across the country's largest markets, as well as five OMNI Television stations which deliver multilingual news, information and entertainment to Canada's multiple language communities.\n\nThe Sportsnet specialty network provides sports programming across Canada through its four regional television channels and its nationallydistributed Sportsnet ONE, Sportsnet World, and Sportsnet 360 stations. 
Rogers also owns other Canadian specialty television channels, including FX Canada, OLN, The Biography Channel and G4.\n\nThe Shopping Channel – Canada's only nationally televised and Internet shopping service – is a leading interactive multi-channel retailer, offering a vast assortment of exclusive products and top brand names. As one of Canada's most innovative and diversified retailers, it provides customers with exceptional selections in health/beauty, jewelry, home/lifestyle, fashion/accessories, and electronics.\n\nRogers also publishes many well-known consumer magazines, such as Maclean's, Chatelaine, FLARE, L'actualité, and Canadian Business, and is the leading publisher of a number of industry, medical and financial publications. Rogers also controls a suite of fast-growing digital media assets, including 90+ owned and 300+ premium partnership online sites, as well as the recently launched Next Issue Canada digital magazine platform which provides 100+ of North America's most celebrated titles on an unlimited anytime, anywhere basis.\n\nIn sports entertainment, Rogers owns the Toronto Blue Jays baseball team and Rogers Centre stadium, Canada's largest sports and entertainment facility and home field of the Blue Jays. Rogers also holds a 37.5% investment in Maple Leaf Sports & Entertainment which owns the NHL Maple Leafs, NBA Raptors, MLS Toronto FC and a number of other sports related assets.", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Our new wireless Share Everything plans were Canada's first to let individuals, families and small businesses share wireless data and unlimited nationwide talk and text, with up to 10 wireless devices. 
Rogers recently further enhanced its exciting One Number service by introducing smartphone apps which enable customers to use mobile data or Wi-Fi to talk, text and video chat using their existing Rogers wireless number from any device.\n\nWe also keep customers informed and entertained with Rogers nextgeneration NextBox 3.0 TV experience which allows customers to view and record up to eight HD programs simultaneously, store hundreds of hours of content and enjoy whole-home PVR capability. And with Rogers Anyplace TV, it's also a wireless experience where viewers can navigate their cable guide, use a virtual remote, set PVR recordings and stream live or on-demand content from a tablet, smartphone, laptop or gaming console.\n\nRogers continues to be Canada's innovation leader in rapidly growing areas such as wireless machine-to-machine communications, remote home monitoring and automation, mobile payments, in-car infotainment and telematics, and digital media. As well, Rogers has deployed a suite of unique local digital services that create virtual marketplaces for bringing consumers and businesses together and provide location-based targeted offers.\n\nThese are just a few examples of the ways Rogers continues to innovate and lead the way, introducing wireless, broadband and digital technologies and services that fundamentally change the way customers stay connected, informed and entertained anywhere they are. Canadians know there's one thing to be certain of – if they're with Rogers, they'll never miss a thing.", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Figure 5-24 Content Manager OnDemand for i Add an Application Group Storage Management tab\n\n# **Cache Data**\n\nThe Cache Data setting determines whether the report data is stored in disk cache, and if so, how long it is kept in cache before it expires. 
If the Cache Data for n Days option is selected, the search cache is always selected.\n\n*Search cache* determines whether Content Manager OnDemand searches cache storage when users retrieve documents from the application group. When you set Cache Data to No, you can configure Content Manager OnDemand to retrieve existing documents from cache storage while preventing new documents from being copied to cache storage. If you choose not to store reports in cache, you must select a storage set that supports archive storage.\n\n# **Life of Data and Indexes**\n\nThe Life of Data and Indexes settings determine the length of time that report data, indexes, and resources are maintained in the Content Manager OnDemand system before they are deleted from the application group. The report data, indexes, and resources can be maintained indefinitely, if set to never expire, or they can be kept for up to 273 years. If your retention requirements change, the Life of Data and Indexes value can be changed. The change affects data that is already archived and new data that is stored to the application group.\n\nDisk Storage Manager maintains documents on disk. It is initiated by the Start Disk Storage Management (**STRDSMOND**) command. Disk Storage Manager can delete documents after they exceed the cache data or Life of Data periods. 
For more information about running the **STRDSMOND** command, see the IBM Content Manager OnDemand for i - Common Server Administration Guide, SC19-2792.", - "page_start": 150, - "page_end": 150, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "6126797.pdf", - "query": "When does my Sport smartwatch start and stop monitoring sleep?", - "target_page": 5, - "target_passage": "Sleep monitoring time period: from 18:00 at night to 10:00 the next day", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Bind the smartwatch to the app WearPro, you can control the music to start/pause/play previous song/play next song of your phone.\n\nBind the audio/calling Bluetooth of the smartwatch also, the music will be broadcast on the smartwatch.\n\n#### **2.2 Sleep**\n\nSleep monitoring time period: from 18:00 at night to 10:00 the next day, the data will be generated by the watch. After connecting to the APP, the sleep data on the watch can be synchronized to the APP for you to check.\n\n## **2.3 stopwatch**\n\nClick the stopwatch to enter the timing interface, and you can record the time once.\n\n## **2.4 Weather**\n\nAfter the smartwatch is connected to the app and the data is synchronized, tap Weather on the watch to display the weather information for the day.\n\n# **2.5 Find mobile phone**\n\nAfter the watch is bound to the app WearPro, tap this function to find the mobile phone, and the mobile phone will vibrate or emit a ringtone.\n\n## **2.6 Meteorology**\n\nClick on \"Meteorology\" on the watch to display the ultraviolet (UV) and air pressure conditions of the day.\n\n## **2.7 Massager**\n\nTap the green button to start the massage, and the watch is in a vibrating state, tap the red button to end the massage state.\n\n## **3.0 Menu style**\n\nThere are a variety of menu styles for users to choose.\n\n## **3.1 Settings**\n\n1) You can select the watch language on the settings of the watch, or 
the watch language can be synchronized with your mobile phone language after the watch successfully binds to the APP.\n\n2) Switch the watch face, swipe to the right to view the next watch face, select a watch face, and click it to set the watch face.\n\n3) Set screen time; a variety of screen time lengths can be selected.\n\n4) Vibration intensity; set reminder vibration intensity.\n\n5) Password; a 4-digit password can be set (if you forget the password, please enter 8762 to decrypt the previous password).\n\n6) Restore factory settings; click √ to enable the factory reset, and click X to cancel the factory reset.", - "page_start": 4, - "page_end": 4, - "source_file": "6126797.pdf" - }, - { - "text": "Enable the SMS notification in the app. When one or more SMS messages are received on the mobile phone, the watch will receive one or more SMS reminders at the same time.\n\n1.5.3. Other application message notifications:\n\nTurn on the corresponding application message notification in the app, such as WeChat, QQ, Outlook, Facebook and other applications. When the mobile phone receives one/multiple application message notifications, the watch will receive one/multiple corresponding message reminders at the same time.\n\n#### **1.6 Frequently used contacts**\n\nThe watch binds to the app, and you allow the watch to access to the phone book of your mobile phone, then you can synchronize you contacts of your mobile phone to the smartwatch.\n\n#### **1.7 Fitness data**\n\nFitness data is turned on by default. When you enter the fitness data interface, scroll up the screen, the smartwatch will display the current data of steps, distance, and calories. 
The data will be wiped out at 00:00 every day in the morning.\n\n#### **1.8 Sports modes** (walking, running, cycling, rope skipping, badminton,\n\n#### basketball, football)\n\n1.8.1 Select the corresponding exercise mode, click the \"Start\" button on the screen to start the exercise; click the \"Start\" button again to pause the recording of the exercise; click the \"End\" button to end the recording, and save to the data.\n\n1.8.2 The data can only be saved when the recording of the exercise is more than 1 minute; If the recording time is less than 1 minute, the smartwatch will remind you that the data is too little to be saved.\n\n#### **1.9 Heart rate**\n\nAfter you wearing the smartwatch correctly, you can measure heart rate when you enter the heart rate function. If you don't wear the smartwatch properly, it will remind you to wear firmly for the measurement.\n\n#### **1.10 ECG**\n\nAfter you wearing the smartwatch correctly, and enter the ECG function(you need to turn on the ECG interface in the app, you can have single measurement at a time. The data of ECG will be saved in the mobile phone. This function should be used with the app.\n\n#### **2.0 My QR code**\n\nConnect the watch to the APP, find My QR Code in the APP, select WeChat/QQ/Alipay and other \"Receive money QR code\" to sync to the watch (Please follow the instructions of the app to operate the function).\n\n#### **2.1 Remote control music**", - "page_start": 3, - "page_end": 3, - "source_file": "6126797.pdf" - }, - { - "text": "Click \"camera\" in the app WearPro to wake up the camera mode of the watch, click the camera button on the watch to take photos, and the photos will be automatically saved to the phone album.\n\n#### **5. Data synchronization**\n\nAfter the watch is successfully bound to the application, the data in the smartwatch can be synchronized to the application.\n\n#### **6. Tilt to wake the screen**\n\nWear the smartwatch correctly on your wrist (left/right hand). 
when you switch on the feature, you can light up the screen when you raise up your wrist.\n\n#### **7. Do not disturb mode**\n\nIn the APP, tap \"Device\" > \"More\" > \"Do not disturb mode\", set the start to end time, such as 12:00 to 14:00, then you won't receive phone calls and apps notifications on the watch during this period.\n\n### **8. Daily alarm clock**\n\nIn the APP in the APP Device>More, set the start and the end time, the alarm can be set only once or repeatedly on the date (week) setting, and the alarm can be turned on/off.\n\n#### **9. Sedentary reminder**\n\nSet the start and the end time of the sedentary reminder, and the time interval (minutes) in the APP. You can set the reminder for once or to repeat regularly by entering the repeating setting. When the sedentary time is reached, the watch will vibrate and display a sedentary icon on the screen.\n\n#### **10. Drink water reminder**\n\nSet the reminder frequency (minutes) and the time period of the start and the end in a day in the APP. You can set the reminder for once or to repeat regularly by entering the repeating setting and selecting the date (week) of the water reminder. When the time of drink water reminder is reached, the watch will vibrate and there will be a water icon on the screen.\n\n#### **11. Dial push**\n\n#### 11.1.Push an existing watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n11.2. Customize the watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the first several watch faces marked with \"custom watch faces\" are customizable. The watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n#### **12. Firmware version**", - "page_start": 6, - "page_end": 6, - "source_file": "6126797.pdf" - }, - { - "text": "#### **B**.**Bind to the APP**\n\n#### **1. 
APP download method**\n\n1.1 Scan the QR code to download\n\n1.2 Search the application at App market and download\n\nFor Android users:\n\nSearch for \"WearPro\" in the Google Play app store or any customized Android store to download, remember to check the pop-up box on your phone when installing, and agree to the permission. For iOS users:\n\nSearch for \"WearPro\" in the APP Store to download, remember to check the pop-up box on your phone when installing, and agree to the permission.\n\nAfter WearPro is installed, the app icon appears as .\n\n#### 2.Bind Bluetooth\n\nAfter the watch is turned on, the Bluetooth will be in the state of being searched. After open the APK/APP, go to Devices > Add Device > click to start searching, select and click the corresponding watch device name, and the watch will be successfully bound to the app.\n\n#### 2.2 Connected to the APP state:\n\nWatch time synchronization: the time shown at the smartwatch and your mobile phone will synchronized after the smartwatch is bound to the APP successfully.\n\n2.3 Binding the audio/calling Bluetooth\n\nWhen the smartwatch is in the dial interface, you can find the audio/calling Bluetooth icon, and click it to turn it on, then go to the Bluetooth settings of your mobile phone and click the name of the audio/calling Bluetooth of the smartwatch to bind it.\n\n## **3. Find Watch**\n\nAfter the smartwatch is bound to the APP, you click \"Find Watch\" in the APP, the smartwatch will light up and vibrate for once.\n\n#### **4. 
Camera**", - "page_start": 5, - "page_end": 5, - "source_file": "6126797.pdf" - }, - { - "text": "3) Swipe to the right when the watch is in the dial interface, you can find time/date/week/the latest message (enter to view multiple messages)/some of the recently used menu functions, and turn on or off audio Bluetooth for calls.\n\n4) Swipe up the screen when the watch is in the dial interface to enter the menu interface, and scroll up and down to find the corresponding function.\n\n5) Long press the watch face interface and swipe to right or left to switch the watch face, select one of them and set it with one-click.\n\n#### **1.2 App notification**\n\n1) When the watch is bound to the APP, and you allow the watch to display notifications on the watch, the new messages received in your mobile phone will be pushed to the watch, and a total of 10 messages can be saved. The messages received after 10 messages will be overwritten one by one.\n\n2) Swipe to the bottom to click the delete icon to clear all message records.\n\n#### **1.3 Drop-down menu**\n\nScroll down the screen when the watch is in the dial interface to enter the drop-down menu interface.\n\n1) Bluetooth connection status; time; power left;\n\n2) About, where you can check the firmware version of watch and the address of the Bluetooth\n\n3) Setting, where you can enter it to set part of the functions;\n\n4) Brightness adjustment; where you can adjust the brightness of the screen;\n\n5) Alipay. Download the app Alipay in your mobile phone and bind it with your watch to realize offline payment.\n\n#### **1.4 Phone/Call History**\n\n1. Swipe to the left when the watch is in the watch interface, click the calling icon to turn on/off the calling Bluetooth. Turn on the calling Bluetooth, you will find the name of the calling Bluetooth, then go to the Bluetooth settings of your mobile phone, and bind the Bluetooth in the name of the calling Bluetooth of your watch. 
You can use the watch to make phone calls when they are successfully bound.\n\n2. Call records, which can save the records of incoming and dialed calls. (It can save more than 50 call records, and it will be automatically overwritten when 128 records are full. Click any call record to call back)\n\n3. Dial the keyboard, you can enter the phone number to make a call.\n\n#### **1.5 message**\n\nWhen the watch is successfully bound to the app, and you approve notifications of corresponding apps in your mobile phone system, and switch on these apps or callings notifications functions on your watch, the notifications on your mobile phone can synchronize to your watch.\n\n1.5.1. Incoming call notification:\n\nTurn on the incoming call reminder in the app. When the phone has a incoming call, the watch will light up or vibrate.\n\n1.5.2. SMS notification:", - "page_start": 2, - "page_end": 2, - "source_file": "6126797.pdf" - }, - { - "text": "The version of the watch is displayed on \"Firmware upgrade\" in the column of \"Device\", and users can decide to whether upgrade the firmware version.\n\n#### **13. Unbind**\n\nIn the \"Device\" column of WearPro, scroll down to the \"Unbind\" and click to unbind the APP. The iSO users need to go to the Bluetooth settings of the phone, select the Bluetooth name of the smart watch, and click \"Forget this device\". The \"About\" of the watch has an \"Unbind\" button, click it to unbind or do it in the APP. 
For the safety of users' data, the watch will implement a factory reset after that.\n\n#### **●Frequently asked questions and answers**\n\n***Please avoid exposing the device to extreme temperatures that are too cold or too hot for a long time, which may cause permanent damage.**\n\n***Why can't I take a hot bath with my watch?**\n\n**The temperature of the bath water is relatively changed, it will produce a lot of water vapor, and the water vapor is in the gas phase, and its molecular radius is small, and it is easy to seep into the gap of the watch case. The internal circuit of the watch is short-circuited, which damages the circuit board of the watch and damages the watch.**\n\n***No power on, no charging**\n\n**If you receive the goods and the watch does not turn on, it may be caused by a collision during the transportation of the watch and the battery Seiko board has been protected, so plug in the charging cable to activate it.**", - "page_start": 7, - "page_end": 7, - "source_file": "6126797.pdf" - }, - { - "text": "and view content on demand. They can search content and control their PVR remotely from their smartphone. They can stream programming to their tablet anywhere in their home. A single Rogers Nextbox serves as a master PVR for the entire home enabling simultaneous viewing and recording of up to eight separate shows and storage of over 250 hours of high-definition programming. And customers can access television and movie content on-demand from anywhere by laptop, tablet or smartphone using the Rogers Anyplace TV app.\n\nTelevision has never been this good, this easy, or this simple to control. And it's even better when combined with innovative Rogers features, such as the ability to screen phone calls on their TV, listen to voicemail on their tablet, or receive talking text messages on their home phone. 
Wireless customers can also use Rogers One Number to switch calls\n\namong their computer, home phone and wireless device without interruption; manage e-mails; text messages and voicemail; hold live video chats; and combine and sync contacts from across multiple devices.\n\nWhen they're not at home, more and more customers also rely on Rogers Smart Home Monitoring, a complete monitoring, automation and security solution that includes the most innovative technology and features available. Smart Home Monitoring lets customers monitor, control and receive alerts by smartphone or online, staying connected to their home from almost anywhere, and enjoying the peace of mind that comes with having the most reliable monitoring solution available. Smart Home Monitoring also gives customers the ability to automate lights, appliances, thermostats and more, so they know their homes are not only secure but more energy-efficient and convenient, also.", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Sports smart watch User Manual DT3 Mate\n\n**Thank you for choosing our smart watch. You can fully understand**\n\n**the use and operation of the equipment by reading this manual.**\n\n**The company reserves the right to modify the contents of this manual without any prior notice.**\n\nThe product contains: a packing box, a manual, a watch body, and a charging cable.\n\n#### **A. 
Watch function description**\n\nButton description:", - "page_start": 0, - "page_end": 0, - "source_file": "6126797.pdf" - }, - { - "text": "#### **Up button:**\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n#### **Button down:**\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n#### **Charging instructions:**\n\nWireless charging, as shown in the picture below.\n\n#### **1.1 Shortcut function:**\n\n1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n\n2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, brightness adjustment and other functions.", - "page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - }, - { - "text": "Use the management GUI to manage and service your system. Select **Monitoring** → **Events** to list events that should be addressed and maintenance procedures that walk you through the process of correcting problems. Information in the Events window can be filtered in three ways:\n\n- -Recommended Actions\nShows only the alerts that require attention. Alerts are listed in priority order and should be resolved sequentially by using the available fix procedures. For each problem that is selected, you can perform the following tasks:\n\n- Run a fix procedure\n- View the properties\n- -Unfixed Messages and Alerts\n\nDisplays only the alerts and messages that are not fixed. For each entry that is selected, you can perform the following tasks:\n\n- Run a fix procedure\n- Mark an event as fixed\n- Filter the entries to show them by specific minutes, hours, or dates\n- Reset the date filter\n- View the properties\n- -Show All\n\nDisplays all event types whether they are fixed or unfixed. 
For each entry that is selected, you can perform the following tasks:\n\n- Run a fix procedure\n- Mark an event as fixed\n- Filter the entries to show them by specific minutes, hours, or dates\n- Reset the date filter\n- View the properties\n\nSome events require a certain number of occurrences in 25 hours before they are displayed as unfixed. If they do not reach this threshold in 25 hours, they are flagged as *expired*. Monitoring events are below the coalesce threshold, and are usually transient.\n\n**Important:** The management GUI is the primary tool that is used to *operate* and *service* your system. Real-time *monitoring* should be established by using SNMP traps, email notifications, or syslog messaging on an automatic manner.\n\n# **13.6.1 Managing event log**\n\nRegularly check the status of the system using the management GUI. If you suspect a problem, first use the management GUI to diagnose and resolve the problem.\n\nUse the views that are available in the management GUI to verify the status of the system, the hardware devices, the physical storage, and the available volumes by completing the following steps:\n\n- 1. 
Click **Monitoring** → **Events** to see all problems that exist on the system (see Figure 13-34 on page 704).", - "page_start": 724, - "page_end": 724, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "OTC_NSANY_2004.pdf", - "query": "Have the operating profits in Japan for Nissan gone up or down in 2004?", - "target_page": 5, - "target_passage": "operating profits in Japan came to ¥341.1 billion, a decrease of 3.2 percent compared to last year", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Due to changes in government regulations, information on risks involved in business operations has been disclosed in the Yukashoken-Houkokusho for the year ended March 31,2005 as follows:\n\n#### Economic Factors\n\nThe demand for products manufactured by Nissan is affected by the economic conditions in each country or market in which they are offered for sale. Nissan conducts its operations all over the world and, in particular, in the major markets of North America, Europe, and Asia, to say nothing of Japan. While Nissan strives to develop a comprehensive and integrated projection of the global economic outlook, any greater-than-anticipated downturn in one of these markets may have a significant effect on Nissan financial position and results of operations.\n\n#### International Activities and Overseas Expansion\n\nNissan's manufacturing and marketing activities outside Japan are conducted in the United States, in Europe, and in the developing and emerging markets of Asia. 
Nissan forecasts and evaluates a wide variety of risks inherent in doing business in such overseas markets including the following factors, each of which entails a greater-than-anticipated level of risk:\n\n- Unfavorable political or economic factors\n- Legal or regulatory changes\n- Potentially adverse tax consequences\n- Labor disputes including strikes\n- Difficulties in recruiting and retaining personnel\n- Social, political or economic turmoil due to terrorism, war, or other destabilizing factors.\n\n#### Research and Development\n\nNissan's technology must be \"real world\"—useful, pragmatic and easy to use. Nissan anticipates the nature and scope of the market demand, and then prioritizes and invests in new technologies. Nonetheless, any sudden and greater-than-anticipated changes in its business environment or in customer preferences may impact negatively on customer satisfaction with these new technologies.\n\n#### Product Defects\n\nNissan places a high priority on safety and does its best to enhance safety from the standpoint of research and development, manufacturing and sales. Although Nissan takes out insurance policies to cover product liability, this does not necessarily mean that all potential defects and the related liabilities are fully covered. If Nissan were to implement strict product recalls for its customers, Nissan would incur significant additional expenses which could adversely affect its financial position and results of operations.\n\n#### Fluctuation in Foreign Currency Exchange Rates\n\nNissan's Japanese operations export vehicles to various countries around the world. In general, the appreciation of the yen against other currencies adversely affects Nissan's financial results of operations and, on the contrary, the depreciation of the yen against other currencies favorably affects Nissan's financial results of operations. 
Any sharp appreciation of the currencies of those countries against the yen could lead to increases in both procurement and production costs which would adversely affect Nissan's competitiveness.\n\n#### Derivatives\n\nNissan utilizes derivatives transactions for the purpose of hedging its exposure to fluctuation in foreign exchange rates, interest rates and commodity prices. While Nissan can hedge against these risks by using derivatives transactions, Nissan, by so doing, may miss the potential gains which could result from seizing the market opportunities to profit from such fluctuation in exchange rates and interest rates.\n\nIn addition, Nissan manages its exposure to credit risk by limiting its counterparties to financial institutions with high credit ratings. However, a default by any one of these counterparties could have an adverse effect on Nissan's financial position and operating results.\n\n#### Lawsuits and Claims\n\nWith respect to various lawsuits and claims which Nissan encounters, the possibility exists that the position defended by Nissan will not be accepted and that the outcome may be significantly different from that anticipated. As a result, any such verdict or settlement could adversely affect Nissan's financial position and operating results.\n\n#### Government Regulations\n\nThe automobile industry worldwide is influenced by a broad spectrum of regulations governing the emission levels of exhaust fumes, fuel economy guidelines, noise level limitations and safety standards, and Nissan expects these regulations to become increasingly stringent. In order to ensure compliance, it may be necessary for Nissan to make significant ongoing investments in these areas which would have an impact on its financial position and results of operations.\n\n#### Intellectual Property Rights\n\nNissan owns a wide variety of proprietary technologies and has the expertise to differentiate Nissan's products making them unique from those of its competitors. 
These assets have proven their value in the growth of Nissan's business and will, no doubt, continue to be of value in the future. Nissan strives to protect its intellectual property assets; however, in certain markets, Nissan may encounter difficulty in fully protecting the proprietary rights to its own technologies. Cases may arise where Nissan finds itself unable to prohibit others from infringing on its intellectual property rights.\n\nThe Company has established an Intellectual Property Rights Management Department for the purpose of protecting intellectual property rights in specific areas, strengthening activities to protect Nissan's intellectual property rights, and acquiring new intellectual property rights. The department has been performing various activities to protect and build the Nissan brand.\n\n#### Natural Disasters\n\nNissan's corporate headquarters and many of its manufacturing facilities are located in Japan, where the statistically proven probability of earthquakes is higher than in many other countries. Nissan has developed risk management guidelines relating to earthquake damage and the CEO has organized a global task force to direct disaster prevention and recovery activities. In addition, the Group has begun to strengthen its manufacturing facilities with anti-seismic reinforcement. However, if a severe earthquake were to hit one of Nissan's key facilities causing a halt in production, this would adversely affect Nissan's financial position and results of operations.\n\n#### Sales Financing Business Risk\n\nSales financing is an integral part of Nissan's core business, providing strong support to its automotive sales, while maintaining high profitability and a sound and stable financial condition through strict risk management policies. 
However, the sales financing companies have a high exposure to interest-rate risk, residual value risk, and credit risk, any one of which may adversely affect Nissan's financial position and results of operations.\n\n#### Counterparty Credit Risk\n\nNissan does business with a variety of counterparties and manages its counterparty credit risk by conducting a comprehensive annual assessment of its customers' financial condition based on their financial information. Nonetheless, any significant default by a counterparty would adversely affect Nissan's financial position and results of operations.\n\n#### Employee Retirement Benefit Expenses and Obligations\n\nThe amounts of Nissan's retirement benefit obligation and related expenses are calculated using various actuarial assumptions including the discount rate applied, the projected rate of return on plan assets, and so forth. If Nissan's actual results differ from those assumptions or if the assumptions are changed, the resulting effects will be accumulated and recognized systematically over future periods. The cumulative effect could adversely impact the recognition of expenses and liabilities recorded in future periods.\n\n#### Purchase of Raw Materials and Parts\n\nNissan purchases raw materials and parts from many suppliers. Market conditions beyond Nissan's control, as well as suppliers' ability to procure raw materials and parts continuously, may adversely affect Nissan's financial position and results of operations.", - "page_start": 72, - "page_end": 72, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## OUR WORLD\n\nNISSAN HAS A GLOBAL PRESENCE. BORN IN JAPAN, WE ARE PERFECTLY AT HOME IN THE U.S., THE UK, SPAIN, THAILAND, CHINA, EGYPT, BRAZIL AND WELL OVER 150 OTHER NATIONS WHERE NISSAN CARS AND THEIR COMPONENT PARTS ARE PRODUCED, SOLD AND DRIVEN. WITH NISSAN, DRIVING PLEASURE IS A SENSATION THAT KNOWS NO BORDERS. 
THIS IS THE NISSAN SHIFT_", - "page_start": 59, - "page_end": 59, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "DESPITE NISSAN'S RECORD OPERATING RESULT IN FISCAL 2004, ITS STOCK PERFORMANCE RETURN WAS NEGATIVE AND LOWER THAN THE TOPIX INDEX. THE INVESTOR RELATIONS TEAM WAS STRENGTHENED AT THE START OF FISCAL 2005 TO BETTER ADDRESS THE NEEDS OF INVESTORS AND ENHANCE THEIR UNDERSTANDING OF NISSAN'S PERFORMANCE. INVESTORS WILL NOW BE ABLE TO GAIN A MORE IN-DEPTH VIEW OF THE COMPANY'S OPERATIONS AND PERFORMANCE INDICATORS.\n\n#### **Share Performance in Fiscal 2004**\n\nNissan's share price began at ¥1,143 at the beginning of fiscal 2004 and ended the fiscal year at ¥1,099, generating a negative return of 3.85 percent. Total shareholder return (TSR) was -1.67 percent, while the dividend yield came to 2.18 percent (¥24 per share dividend, divided by the ¥1,099 closing price). Adverse movements in foreign exchange rates and commodity price hikes adversely affected Nissan's profitability, which was reflected in the share price. In addition, specific events relating directly to the company also had a negative impact. Later in this report, corporate officers will explain what actions Nissan has undertaken to ensure better performance.\n\n#### **Payout Policy**\n\nNissan announced its NISSAN Value-Up three-year dividend policy, covering the period from fiscal 2005 to fiscal 2007, at the annual general meeting of shareholders on June 23, 2004. Nissan proposes a long-term dividend policy to provide more visibility and improve transparency into the ways in which Nissan rewards its shareholders. Nissan believes that a long-term dividend policy reduces uncertainty for investors who already own or are considering acquiring Nissan stock.\n\n#### **Fiscal Year 2004 Share Performance** (Index: April 1, 2004=100)\n\n#### **IR Activities**\n\nUnder NISSAN Value-Up, the IR team's performance will be evaluated based on the price-earnings ratio (PER) and volatility relative to our major competitors. PER is used to measure how successfully the IR team manages market expectations about Nissan in order to maintain the Nissan share price close to an intrinsic value. The other measure, volatility, is used to measure the risk investors perceive when considering Nissan stock. If Nissan can successfully reduce volatility, the minimum return required by investors should decline. The IR team believes that a strengthening of disclosure activities is required to improve both measures. The team plans to disclose not only financial results but also more forward-looking information about Nissan fundamentals such as technology and product. Such forward-looking information helps investors to forecast future performance more precisely and reduces uncertainty about the future. As a consequence, Nissan will increase the number of investor conferences, events, and teleconferences during fiscal 2005.\n\n#### **Five-Year Share Performance**", - "page_start": 16, - "page_end": 16, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "NISSAN IS ABOUT MEETING UNMET NEEDS, CRAFTING SINGULAR PRODUCTS AND TRANSFORMING BRAND STRENGTH AND INNOVATION INTO NEW BUSINESS OPPORTUNITIES. WE ARE NISSAN. WE ARE INFINITI. WE ARE NISSAN LIGHT COMMERCIAL VEHICLES, EXPANDING OUR RANGE. WE ARE NISSAN INDUSTRIAL MACHINERY, LEVERAGING OUR EXPERTISE TO BUILD FORKLIFTS AND MARINE PRODUCTS. AND WE ARE NISSAN FINANCIAL SERVICES, PROVIDING OUR CUSTOMERS WITH A COMPREHENSIVE LINEUP OF OFFERINGS. THIS IS THE NISSAN SHIFT_", - "page_start": 17, - "page_end": 17, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "### INFORMATION ON SUBSIDIARIES AND AFFILIATES\n\n| Consolidated subsidiaries | | | | As of Mar. 
31, 2005 |\n| --- | --- | --- | --- | --- |\n| Company | Location | Principal business | Capital (millions) | Nissan share*(%) |\n| Japan | | | | |\n| Nissan Shatai Co., Ltd. | Hiratsuka-shi, Kanagawa | Manufacture and sales of automobiles and parts | ¥7,904 | 43.80 |\n| Aichi Machine Industry Co., Ltd. | Nagoya, Aichi | Manufacture and sales of automotive parts | ¥8,518 | 41.70 |\n| JATCO Ltd. | Fuji, Shizuoka | Manufacture and sales of automotive parts | ¥29,935 | 81.76 |\n| Nissan Kohki Co., Ltd. | Samukawa, Kanagawa | Manufacture and sales of automotive parts | ¥2,020 | 97.73 |\n| Calsonic Kansei Corporation | Tokyo | Manufacture and sales of automotive parts | ¥40,606 | 41.87 |\n| Nissan Motor Car Carrier Co., Ltd. | Tokyo | International automobile transport | ¥640 | 60.00 |\n| Nissan Trading Co., Ltd. | Yokohama, Kanagawa | Import and export of automobiles, parts, etc. | ¥320 | 100.00 |\n| Nissan Financial Services Co., Ltd. | Chiba, Chiba | Automobile financing and leasing | ¥16,387 | 100.00 |\n| Autech Japan, Inc. | Chigasaki, Kanagawa | Development, manufacture and sales of limited-edition automobiles | ¥480 | 100.00 |\n| Nissan Real Estate Development Corporation | Tokyo | Real estate sales, purchase and leasing | ¥1,000 | 70.50 |\n| Nissan Finance Co., Ltd. | Tokyo | Finance and accounting support | ¥2,491 | 100.00 |\n| Aichi Nissan Motor Co., Ltd. | Nagoya, Aichi | Sales of automobiles and parts | ¥100 | 100.00 |\n| Tokyo Nissan Motor Sales Co., Ltd. | Tokyo | Sales of automobiles and parts | ¥100 | 100.00 |\n| Nissan Prince Tokyo Motor Sales Co., Ltd. | Tokyo | Sales of automobiles and parts | ¥100 | 100.00 |\n| Nissan Chuo Parts Sales Co., Ltd. | Yokohama, Kanagawa | Sales of automobile repair parts | ¥545 | 80.61 |\n| US | | | | |\n| Nissan North America, Inc. 
| Gardena, California | Management of North American subsidiaries, manufacture and sales of automobiles and parts | $1,791 | 100.00 |\n| Nissan Motor Acceptance Corporation | Torrance, California | Finance of wholesale and retail automobile sales in US | $499 | 100.00 |\n| Nissan Motor Corporation in Hawaii, Ltd. | Honolulu, Hawaii | Sales of automobiles and parts | $6 | 100.00 |\n| Nissan Capital of America, Inc. | Torrance, California | Financing for group companies | $1 | 100.00 |\n| Nissan Technical Center North America, Inc. | Farmington Hills, Michigan | Research and development, testing | $16 | 100.00 |\n| Nissan Motor Insurance Corporation | Honolulu, Hawaii | Casualty insurance | $10 | 100.00 |\n| Nissan Forklift Co., North America | Marengo, Illinois | Manufacture and sales of forklifts and parts | $34 | 100.00 |\n| Canada | | | | |\n| Nissan Canada, Inc. | Mississauga, Ontario | Sales of automobiles and parts | CAN$68 | 100.00 |\n| Mexico | | | | |\n| Nissan Mexicana, S.A. de C.V. | Mexico D.F. | Manufacture and sales of automobiles and parts | P17,056 | 100.00 |", - "page_start": 107, - "page_end": 107, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## **More value, Higher quality, Win-win partnerships** HIROTO SAIKAWA\n\nPURCHASING\n\nExecutive Vice President\n\n\"The evolution that took place in Nissan's purchasing activities during the Nissan Revival Plan, or NRP, and continued through NISSAN 180, will stretch even further during NISSAN Value-Up. Why evolution and not revolution? Because the shift in purchasing that started six years ago was not a single action, it was a mindset change that continues to drive all our activities.\n\nPurchasing represents the single largest area of cost for Nissan. 
Through the NISSAN Value-Up business plan, we are determined to drive greater value from our purchasing activities and maintain the momentum built over the last six years.\n\nDuring the Nissan Revival Plan years, our focus was on catching up with the rest of the industry. NISSAN 180 was focused on reaching the benchmarks set during NRP and now as we enter the NISSAN Value-Up period, that focus evolves towards being the global cost leader.\n\nOne of the key breakthrough strategies of NISSAN Value-Up is the focus on new and emerging markets. On the sales side, markets like China, India, Russia and ASEAN represent significant opportunities for Nissan. On the purchasing side, we look at the cost competitiveness of these new markets and how we can increasingly use them to enhance our global competitiveness.\n\nOur strategy for what we call 'Leading Competitive Countries', or LCCs, is to focus on those markets that we see as trend leaders in both cost, quality and supply stability. We will focus first on China and then on ASEAN nations. This will bring cost advantages for our major regions, such as Japan, North America and Western Europe, making us more competitive. We're also investigating sourcing from Eastern Europe, the Mercosur trading zone, and India.\n\nOur Alliance with Renault has also provided substantial purchasing benefits and opportunities. Formed in 2001, the Renault Nissan Purchasing Organization, or RNPO, now accounts for over 70 percent of all purchasing for Nissan and Renault. Nissan will further benefit from RNPO through the utilization of Renault supply bases in certain LCCs.\n\nAlthough the turnaround in the Nissan business has been profound, we also recognize that our supplier partners have played a significant role. Going forward, we intend to reinforce those relationships, building value on both sides. 
For example, we are reinvigorating our innovative 3-3-3 engineering program.\n\nWe are also deploying a purchasing process that gets suppliers involved earlier and further upstream in the product development process, the concept of 'project partners'. This is a program that identifies key technologies and innovations that require substantial investments from both sides. Suppliers will be selected as project partners for a specific area and will work closer with us to develop lower cost and higher quality solutions. This win-win approach has already started with interior systems and chassis development projects.\n\nLast year, we faced several challenges with raw materials. Those risks—both price and supply related—are a factor that we have to recognize and address in the coming years. Last year, the pressure was concentrated on the supply side, going forward we see an increasingly challenging cost environment. Working closely with our key raw material suppliers as well as parts suppliers and accelerating our cost reduction countermeasures will be key during NISSAN Value-Up.\n\nOur purchasing philosophy at Nissan is focused on value, quality and relationships. We want our purchasing process to be transparent and proactive, and create more value for our suppliers and for the company.\"", - "page_start": 49, - "page_end": 49, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## **NISSAN Value-Up: Sustaining Performance**\n\nNissan's position today is much different than it was six years ago or even three years ago. In 1999, we were in crisis, and the Nissan Revival Plan was needed to revive our company and build a future. In April 2002, when NISSAN 180 began, we wanted to complete the revival process, with an emphasis on profitable growth.\n\nNISSAN Value-Up is about sustaining performance. 
About taking all the gains we have made in connecting with our customers, in growing volumes, in creating value, in earning profits, in improving management— and then building upon these gains.\n\nWith NISSAN Value-Up, you will not see a radical break from NISSAN 180. This plan is evolutionary, not revolutionary. We will take the core elements that got us to this point—namely, more revenue, less cost, more quality and speed, and maximized Alliance benefit with Renault and build upon them.\n\nNISSAN Value-Up has three critical commitments:\n\n- Profit: Nissan will maintain the top level of operating profit margin among global automakers for each of the three years of the plan.\n- Volume: Nissan will achieve global sales of 4.2 million units measured in fiscal 2008.\n- ROIC: Nissan will achieve a 20 percent ROIC on average over the course of the plan, based on the new formula that excludes cash on hand from the denominator.\n\nNISSAN Value-Up will oversee 28 new models, resulting in the start of production of 70 models worldwide, over two dozen more than the 44 production starts during NISSAN 180. Of the 28 new models, 18 will be replacements for existing models and 10 will be completely new \"conquest\" models. We will enter more new segments, and we will introduce six models that will delight customers by being completely innovative in their concept and benefits.\n\nWe will pursue four major breakthroughs while implementing NISSAN Value-Up:\n\n- Our Infiniti luxury brand will extend its reach into new markets such as China and Russia and continue to establish its credibility as a Tier-1 luxury player.\n- We will develop our Light Commercial Vehicle (LCV) business into a fully competitive global operation through new market and product entries. By 2007, we plan to increase our LCV volume by 40 percent from fiscal 2004 to 434,000 units. 
During this period, operating margin is targeted to double from 4 percent to 8 percent.\n- We will take a more efficient global sourcing approach to maximize our opportunities and minimize our overall costs as we grow. Our engineering, production and purchasing functions will continue their acceleration toward being fully integrated global operations.\n- We will continue to invest in new and emerging markets, including China, India and Russia.", - "page_start": 11, - "page_end": 11, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "#### LETTER FROM THE PRESIDENT AND CEO\n\nA public company has two key responsibilities to its shareholders: transparency and value creation.\n\nAt Nissan, transparency is essential to our business. Especially in uncertain times, it builds trust between a company and its shareholders. And we believe transparency is the best way to encourage long-term investment in our company.\n\nBut transparency is not yet universal. Nissan is still one of the few large corporations that publicly disclose future business plans, performance indicators, commitments and future dividends. We trust that these measures give shareholders a clear view of our company's future direction.\n\nFrom the start of the Nissan Revival Plan (NRP) in 1999, we have created value by focusing on key value drivers—particularly sales growth, operating profit margin, and return on invested capital.\n\nBy the end of fiscal 2001 we exceeded our NRP commitments by returning Nissan to profit one year ahead of schedule, halving the company's debt and over-delivering on our commitment to achieve a 4.5 percent operating profit margin.\n\nFollowing NRP, we launched a three-year business plan called NISSAN 180. By the end of the plan in fiscal 2004, we committed to achieve the following:\n\n- An increase in global sales of 1 million units, compared to the start of the plan. 
We are confident of meeting this final commitment by the end of the measurement period in September 2005.\n- An 8 percent operating profit margin. For every year of the NISSAN 180 plan our operating margin has been at or above 10 percent topping the performance of all global automakers.\n- Zero net automotive debt. We now have more than ¥200 billion in net cash under the new and more demanding accounting standards.\n\n#### Review of 2004\n\nNissan lived up to its challenges in fiscal 2004, despite a very challenging year in the global industry, full of risks both anticipated and unexpected.\n\nConsolidated net revenues reached ¥8 trillion 576.3 billion, up 15.4 percent from last year. Consolidated operating profit improved by 4.4 percent to a record ¥861.2 billion. As a percentage of net revenue, our operating profit margin came to 10 percent, which remains at the top level among global automakers. And our net income reached ¥512.3 billion, or ¥125.16 per share, compared to ¥122.02 per share for the previous fiscal year.\n\n#### NISSAN Value-Up\n\nThe Nissan revival story is now complete. Our next three-year business plan, 'NISSAN Value-Up,' is focused, as its name suggests, on delivering sustainable long-term value to all our stakeholders. As such, it is evolutionary not revolutionary.\n\nAs with our previous business plans, NISSAN Value-Up establishes three core commitments. They are ambitious, and will require us to stretch our capabilities. But they are realistic.\n\n> Profit: Nissan will maintain the top level of operating profit margin among global automakers for each of the three years of the plan. Operating profit remains at the center of our management system, as it is the most accurate measure of business performance.", - "page_start": 3, - "page_end": 3, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "#### LETTER FROM THE COO\n\nMuch has been written about the Nissan revival. 
While innovative product, an improved cost base, greater manufacturing efficiencies and a better-defined brand have all been factors, the strongest element in our revival has been our people. And, what we learned during the crisis in the 90s and through the Nissan Revival Plan and Nissan 180 plan, now guides how we will manage the company in the future. We call it the Nissan Management Way. It is both a philosophy and set of disciplines that guide us at all levels of the organization and will help Nissan build on the momentum of the past six years.\n\nAlthough our president and CEO Carlos Ghosn has now taken on the same responsibilities at Renault, our basic management style will not change. As in the past, the Executive Committee, chaired by Carlos Ghosn, is still the highest decision making authority for strategy and management policy.\n\nThe COO position I now hold was created to provide an \"operating officer\" in the truest sense of the title. As COO my role is to assist the CEO by executing the business plan, monitoring the Company's performance and supervising dayto-day operations. The decisions I make are always based on the Nissan Management Way and support the commitments of the NISSAN Value-Up business plan.\n\nWhat distinguishes the Nissan Management Way is that we are both profit-driven and customer-focused, and that we share our strategy globally and execute in a cross-functional way. These cross-functional activities are particularly important to our success; along with cross-functional thinking, they have helped create an organization of singular structure, focus and culture. 
In this organization, employees representing each of Nissan's three axes—regional businesses such as Japan and U.S., functions such as engineering and manufacturing, and products—are actively encouraged to work together to maximize profits and to avoid a 'silo' mentality that is only focused on their immediate operational group.\n\nFiscal 2005 is a year of immense challenges and uncertainties, but we have still pushed ahead with an ambitious business plan for this period. As COO, my priority is to keep a close watch on Nissan's performance to ensure that we deliver our commitments. These include achieving the final Nissan 180 commitment of one million additional vehicles by the end of September 2005 and hitting our financial targets for fiscal 2005. There is no doubt that we have the strong leadership and management teams capable of sustaining the high level of performance required to reach these goals.\n\nNissan is now a learning organization. We have fully integrated the changes that began during the Nissan Revival Plan and continue to shape our business in the future. Our employees continually seek to build a better Nissan and fortify the brand, and are not afraid to speak out on issues and openly discuss challenges that face the business. Within the Nissan Management Way, we call that \"healthy conflict\"—and it is strongly related to our belief in transparency and accountability. This is the essence of the evolution that continues to empower our company.\n\nOur alliance with Renault also continues to be a source of immense strength. We expect to further reinforce the Alliance and to develop new synergies now that Carlos Ghosn is the CEO of both companies.\n\nWhile we have the kinds of advantages I have mentioned, we also have risks. One of those risks is complacency. During the last six years, we have made significant achievements and consistently met tough commitments, but countless challenges remain. 
Our industry is immensely competitive, our customers more demanding than ever and we have no time to rest and congratulate ourselves. We need to create a culture where employees are always motivated to challenge themselves and the company and to create value for all our stakeholders.\n\nPeople around the world know that Nissan is a profitable and customer-driven company. As COO, one of my key roles under NISSAN Value-Up is to promote this customer-driven culture throughout the entire value chain, from initial product planning to after-sales service. I truly believe that by enhancing our focus on profit and pursuing a customer-driven approach, we can provide more value to all our stakeholders: employees, communities, suppliers, partners, and, of course, our shareholders.\n\nToshiyuki Shiga Chief Operating Officer", - "page_start": 5, - "page_end": 5, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## **Succeeding Despite Growing Competition**\n\nCHINA\n\nKATSUMI NAKAMURA President & CEO, Dongfeng Motor Co., Ltd.\n\n\"To understand the depth of Nissan's commitment to China, you need look no further than Dongfeng Motor Co., Ltd., our joint venture with Dongfeng Motor Corporation. DFL, as we refer to it, is the biggest JV in China's automotive industry, representing a 50-50 investment by Nissan and Dongfeng totaling RMB16.7\n\nbillion (US$2 billion). Dongfeng is the major commercial vehicle manufacturer in China, and the Dongfeng brand is famous throughout the country. With 70,000 employees and over fifty subsidiaries, DFL is a strategic alliance for both companies. In China, most joint ventures with foreign makers are small and focus only on producing the foreign partner's products. In contrast, DFL integrates Nissan's technology, products and the Nissan Management Way in the production of vehicles under both the Nissan and Dongfeng brands.\n\nGreater competition and a softer economy made 2004 a difficult year in the passenger vehicle market. 
Yet we sold approximately 92,000 passenger vehicles in China during the last calendar year. That number included 61,000 DFL-produced Nissan-branded vehicles, 21,000 Zhengzhou-produced Nissan pickups and SUVs, and 10,000 imported vehicles. We also sold nearly 88,000 light commercial vehicles under the Dongfeng brand.\n\nIncreases in raw material costs and reductions in selling price did affect the commercial vehicle business in fiscal 2004. As a result, operating profit from DFL to Nissan totaled ¥10 billion, which was lower than anticipated. While we work to manage material price increases, we're still focused on improving the quality and price competitiveness of our products. We're also planning to export these models to Africa, South America, and the Middle East.\n\nTwo or three years ago, the passenger vehicle market in China was a seller's market. That reversed during the last half of the year, influenced by macroeconomic controls and more products coming onto the market. As a result, most automakers entered into a price war. We stayed out of that because we didn't want to damage our brand image. Instead, we found alternative means to adapt to the market. For example, we did not discount the selling price of the Teana during its high-profile launch. In December 2004, we also announced to customers that we would give them a rebate if prices went down after they bought a Nissan. We released a model change for the Sunny, and kept firm on the Teana's pricing. These actions have helped keep our brand image high, while building customer loyalty, selling cars and reducing inventory.\n\nCalendar year 2005 looks very promising to us. The Teana has been a tremendous success, winning 12 awards—including Car of the Year for 2005 in China—and helping solidify Nissan's reputation for quality. 
The car continues to sell well, and opens the door for five models that will be launched in fiscal 2005: the Tiida sedan in April; Fuga in June; Quest in August, which is imported from the U.S.; the Tiida in the second half along; and the 350Z in calendar year 2006. The Tiida has already won two awards at the Shanghai Motor Show for best new model and roominess, and answers the strong demand in China for fuel efficiency.\n\nIn June 2005, the China State Administration for Industry and Commerce officially recognized the NISSAN trademark as a \"famous trademark.\" Only trademarks with superior reputations receive this distinction. Not only does this represent an important milestone in Nissan's efforts to build its brand in China, it also represents the first time a Japanese automaker has had its trademark acknowledged in China. Currently, Nissan and YKK are the only Japanese companies to be awarded this status. Now that Nissan's brand image is respected in China, we must improve our", - "page_start": 65, - "page_end": 65, - "source_file": "OTC_NSANY_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "Microscope Manual.pdf", - "query": "How can CEDAR Oil be used with the AY11236 microscope?", - "target_page": 10, - "target_passage": "1. Drop some cedar oil on to the top of the 100x objective when the 100x objective is being used. NOTE: To maintain a good quality image, rotate the turret right and left several times to eliminate bubbles in the cedar oil. 2. After finishing the observation, wipe off the cedar oil. 3. 
Do not use the 40x objective until you have wiped off all of the cedar oil.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "## **INDEX**\n\n| Maintenance | 1 |\n| --- | --- |\n| Model AY11240/Model AY11238 | 2-5 |\n| Model AY11228/Model AY11232 | 6-9 |\n| Model AY11230/Model AY11234 | 10-13 |\n| Model AY11236 | 14-18 |\n| Warranty Information | Back Cover |\n\n### **IMPORTANT NOTES**\n\nCongratulations on your purchase of this high quality BARSKA microscope. With proper care, this microscope will provide many years of use. Please read the following instructions before operating this instrument.\n\n- 1. Do not attempt to disassemble the instrument. This product has been carefully assembled at the factory and should only be examined by a factory-trained technician.\n- 2. This instrument should only be used in an environment with an indoor temperature range of 32oF to 104oF.\n- 3. Do not use this instrument in an environment with a lot of dust. **Cover the instrument when not in use.**\n- 4. Do not subject the instrument to shock.\n\n## **MAINTENANCE**\n\nProper care and storage of this instrument is essential. Please read the following guidelines:\n\n- 1. Keep the instrument in a dry and moisture-free location.\n- 2. Do not expose to acid, alkali fumes or moisture.\n- 3. Keep optical parts clean and free of dust. To clean optical parts gently wipe with lens cleaning tissue and a mixture of alcohol and diethyl ether. Depending on weather conditions, the following are the recommended mixture ratios: Wet weather: 1:2\n\nDry Weather: 1:1\n\n- 4. After use, cover the instrument with the plastic dust cover.\n- 5. If instrument is to be stored for an extended period of time, remove the eyepiece and oculars and store in a moisture-proof container.\n\n# **MODEL AY11240/AY11238**\n\n## **MICROSCOPE USAGE**\n\nBARSKA Model AY11240 and Model AY11238 are designed for biological studies such as specimen examination. 
They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use is especially useful for school classroom instruction.\n\n## **CONSTRUCTION**\n\nBARSKA Model AY11240 is a fixed tube type. For comfortable observation, the arm can be easily tilted at any angle from 90° vertical to 45° level. It is also equipped with a coarse adjustment and fine adjustment as well as a space limiter to protect the objective from contacting and damaging the specimen. BARSKA Model AY11238 features a monocular tube that is slanted at a 45° angle. The head rotates 360°. The Eyepiece Set Screw prevents the eyepiece from falling out of the tube.", - "page_start": 1, - "page_end": 1, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "### **Model AY11240 Model AY11238**\n\n- 7. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n- 8. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n\n- 6. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n- 7. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n\n### **USING THE 5-HOLE DIAPHRAGM**\n\n- 1. To obtain the best contrast for observing, match the hole size to the objective that is being used to view the specimen.\n- 2. Each hole has a corresponding number from 1 to 5. 1 is the smallest hole; 5 is the largest hole. Use the following guidelines to match the hole number to the objective that you have selected: 40x objective: Use #5 hole 10x objective: Use #4 or #3 hole 4x objective: Use #2 or #1 hole\n\n### **COARSE KNOB ADJUSTMENT - Model AY11240**\n\n- 1. The coarse adjustment knob has an adjustable heavy-light nut (See Fig.1).\n- 2. 
To adjust the knob loosen or tighten the nut. NOTE: Adjusting the nut too tight will make focusing difficult. Adjusting the nut too loose will cause the tube to slide.\n\n## **MODEL AY11228/AY11232**\n\n### **MICROSCOPE USAGE**\n\nBARSKA Model AY11228 and Model AY11232 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use is especially useful for school classroom instruction.\n\n### **CONSTRUCTION**\n\nBARSKA Model AY11228 is a fixed power stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. BARSKA Model AY11232 is a zoom stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.", - "page_start": 3, - "page_end": 3, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "- 6. Adjust the interpupillary distance by using the eyepiece interpupillary slide adjustment.\n- 7. Observe using the right eyepiece adjusting the coarse and fine focus and adjust the diopter ring until image is clear and sharp.\n- 8. Observe with the left eyepiece and adjust the diopter ring until image is clear and sharp.\n- 9. Rotate the fine focus adjustment when using other objectives. NOTE: This instrument is equipped with patent objectives so the precision or parfocalization is very high.\n\n**Fig. 1 - Objective Parts**\n\n- 10. If the image is in focus with the 10x objective, you can select other objectives and observe the specimen even if the fine adjustment knob has not been used by using the following method (See Fig. 1):\n- 1. 
Unscrew the 40x or 100x objective and remove from turret.\n- 2. Remove the mark sleeve.\n- 3. Turn the ring on the objective to adjust its parfocal distance.\n- 4. Re-insert the objective and compare with the 10x.\n- 5. Adjust until the 40x and 100x objectives image is clear.\n\n### **USING THE CEDAR OIL**\n\n- 1. Drop some cedar oil on to the top of the 100x objective when the 100x objective is being used. NOTE: To maintain a good quality image, rotate the turret right and left several times to eliminate bubbles in the cedar oil.\n- 2. After finishing the observation, wipe off the cedar oil.\n- 3. Do not use the 40x objective until you have wiped off all of the cedar oil.\n\n# **OPERATION (cont.)**\n\n### **ADJUSTING THE CONDENSER APERTURE**\n\n- 1. The numerical aperture of the condenser should match the numerical aperture of the objective being used.\n- 2. To make sure that the objectives are imaging properly (especially the 40x and 100x), follow this procedure:\n- 1. Take off the eyepiece.\n- 2. Look through the eyepiece.\n- 3. The smallest circle or light that you can see is the eyepiece's exit pupil.\n- 4. Adjust the aperture of the iris diaphragm in the condenser to 70% or 80% for the best contrast for observation (See Fig. 2.).\n\n**Fig. 2 - Condenser Diaphram Aperture**\n\n## **TROUBLESHOOTING**\n\n| Problem | Possible Cause | Solution |\n| --- | --- | --- |\n| 1. Image not clear. | 1.Specimen is in incorrect | 1. Re-position specimen. |\n| | position. | 2. Clean lens. |\n| | 2. Lens is dirty. | 3. Put a drop of Cedar oil on |\n| | 3. Cedar oil not placed on | immersion objective. |\n| | immersion objective. | 4. Rotate turret several times to |\n| | 4. Bubbles in Cedar oil. | eliminate bubbles. |\n| | 5. Cedar oil on 40x objective. | 5. Clean 40x objective. |\n| | 6. Iris diaphragm open too wide. | 6. Reduce size of iris diaphragm. |\n| 2. Poor illumination. | 1. Condenser position is incorrect. | 1. Re-position condenser. |\n| | 2. Lens is dirty. | 2. 
Clean lens. |\n| | 3. Specimen is not placed level. | 3. Re-position specimen so it is level. |\n| 3. Illumination not bright. | 1. Iris diaphragm opening too small. | 1. Open iris diaphragm wider. |\n| | 2. Position of condenser too low. | 2. Raise condenser. |\n| | 3. Lens is dirty. | 3. Clean lens. |\n| 4. Cannot focus at high | 1. Specimen is in incorrect position. | 1. Re-position specimen. |\n| magnification. | | |\n| 5. Objective lenses touch | 1. Stage is too high. | 1. Re-position stage. |\n| specimen. | | |", - "page_start": 9, - "page_end": 9, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "- \n- \n- \n- \n- \n- \n- \n- \n\n- \n- \n- \n- \n- \n- \n- \n- \n- \n\n| Classification Optical | | Magnification | Numerica | Working |\n| --- | --- | --- | --- | --- |\n| System | | | Aperture | Distance |\n| Dry | | 4x Adjustable | 0.1 | 37.42mm |\n| | | Focus | | |\n| Achromatic | Dry | 10x | 0.25 | 7.14mm |\n| Objective | | | | |\n| Dry | | 40x Spring | 0.65 | 0.57mm |\n| | | Adjustable | | |\n| | | Focus | | |\n\n| Classification | Magnification | Field of View (FOV) |\n| --- | --- | --- |\n| | | Diameter |\n| Plain Field Eyepiece | Model AY11240 10x | 18mm |\n| | Model AY11238 | |\n| | 10x | 25mm |\n\n| | Magnification | Eyepiece | 10x |\n| --- | --- | --- | --- |\n| Objective | | | |\n| | 4x | | 40x |\n| | 10x | | 100x |\n| | 40x (s) | | 400x |\n\n## **PARTS LIST**\n\n#### **Model AY11240**\n\n**Model AY11238**\n\n| Name | | Qty | Name | | Qty |\n| --- | --- | --- | --- | --- | --- |\n| Microscope Stand | | 1 | Microscope Stand | | 1 |\n| Achromatic Objective | 4x | 1 | Achromatic | 4x | 1 |\n| | 10x | 1 | Objective | 10x | 1 |\n| | 40x (s) | 1 | | 40x (s) | 1 |\n| Plain Concave Mirror | | 1 | 10x Wide Field Eyepiece | | 1 |\n| Plastic Dust Cover | | 1 | Plastic Dust Cover | | 1 |\n| 10x Wide Field Eyepiece | | 1 | Spare Bulb | | 1 |\n| Lens Cleaning Tissue | | 1 | Lens Cleaning Tissue | | 1 |\n| Specification | | 1 | Specification | | 1 |\n| 
Inspection Certificate | | 1 | Inspection Certificate | | 1 |\n| Packing List | | 1 | Packing List | | 1 |\n\n### **OPERATION**\n\n#### **Model AY11240 Model AY11238**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Attach 4x, 10x and 40x objectives to revolving turret.\n- 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n- 4. Adjust the stand to an angle that provides comfortable observation.\n- 5. Rotate and adjust concave mirror to light the field of view. **NOTE: Do not reflect the Sun with the mirror. This can cause serious eye injury or permanent eye damage.**\n- 6. Observe the specimen using the lowest magnification objective first. The 4x objective provides a larger field of view to search specimen.\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Attach 4x, 10x and 40x objectives to revolving turret. 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n- 4. Plug power cord into an electrical outlet. Turn microscope lamp ON.\n- 5. Observe the specimen using the lowest magnification objective first. The 4x objective provides a larger field of view to search specimen.", - "page_start": 2, - "page_end": 2, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "### **Model AY11230 Model AY11234**\n\n### **SELECTING OBJECTIVE MAGNIFICATION**\n\n- 1. There are two objectives. The lower magnification objective has a greater depth of field and view.\n- 2. 
In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed.\n\n#### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n### **FOCUSING**\n\n- 1. Remove the lens protective cover.\n- 2. Place the specimen on the working stage.\n- 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n- 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n#### **USING THE VERTICAL TUBE - MODELS AY11230/11234**\n\n1. The vertical tube can be used for instructional viewing or to photograph the image witrh a digital camera or micro TV unit.\n\n- 2. Loosen the retention screw, then rotate the adjustment ring to change the length of the vertical tube.\n- 3. Make sure that both the images in\n\n### **FOCUSING**\n\n- 1. Turn the focusing knob away or toward you until a clear image is viewed.\n- 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n### **ZOOM MAGNIFICATION**\n\n- 1. Turn the zoom magnification knob to the desired magnification and field of view.\n- 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n- 3. 
If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n### **DIOPTER RING ADJUSTMENT**\n\n- 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n- a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n- b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n- c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring. d.With more than one viewer, each\n- viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord from the electrical outlet.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n# **MODEL AY11236**\n\n**Model AY11236**\n\n# **MICROSCOPE USAGE**\n\nBARSKA Model AY11236 is a powerful fixed power compound microscope designed for biological studies such as specimen examination. It can also be used for examining bacteria and for general clinical and medical studies and other scientific uses.\n\n## **CONSTRUCTION**\n\nBARSKA Model AY11236 is a fixed power compound microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination. By using this instrument, the user can observe specimens at magnification from 40x to 1000x by selecting the desired objective lens. Coarse and fine focus adjustments provide accuracy and image detail. 
The rotating head allows the user to position the eyepieces for maximum viewing comfort and easy access to all adjustment knobs.", - "page_start": 7, - "page_end": 7, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n- 1. Length of mechanical tube: 160mm\n- 2. Conjugate distance between object and image: 195mm\n- 3. Condenser: Abbe; numerical aperture: NA1.25 (oil immersion)\n- 4. Illumination: Input 110V or 200V; Output: 20W\n- 5. Fine adjustment range: .002mm\n- 6. Coarse Adjustment Range: 20mm\n- 7. Shift or Mechanical Stage: Longitude 40mm; Transversal 70mm\n- 8. Condenser Elevation Range: 15mm\n- 9. Iris diaphragm aperture: 2mm-30mm\n\n### **Objective Specifications**\n\n| Classification | | Optical Magnification | Numerical | Working |\n| --- | --- | --- | --- | --- |\n| | System | | Aperture | Distance |\n| Achromatic Objective | Dry | 4x Adjustable | 0 .1 | 37.42mm |\n| | | Focus | | |\n| | Dry | 10x | 0 .25 | 7.14mm |\n| | Dry | 40x Spring | 0 .65 | 0.57mm |\n| | | Adjustable | | |\n| | | Focus | | |\n| | Oil | 100x Spring | 1.25 | 0.18mm |\n| | Immer | Adjustable | | |\n| | sion | Focus | | |\n\nNote: For oil immersion, please use the index of refraction 1.515 oil\n\n### **Eyepiece Specifications**\n\n| Classification | Magnification | Field of View (FOV) Diameter |\n| --- | --- | --- |\n| Plain Field Eyepiece | 10x | 18mm |\n\n### **Total Magnification**\n\n| | Magnification | Eyepiece | 10x |\n| --- | --- | --- | --- |\n| Objective | | | |\n| | 4x | | 40x |\n| | 10x | | 100x |\n| | 40x (s) | | 400x |\n| | 100x (oil,s) | | 1000x |\n\n# **PARTS LIST**\n\n| Name | | Qty |\n| --- | --- | --- |\n| Microscope Stand | | 1 |\n| Achromatic | 4x (parfocal distance adjustable) | 1 |\n| 10x | | 1 |\n| Objective | 40x (s) (parfocal distance adjustable) | 1 |\n| | 100x (oil,s) (parfocal distance adjustable) | 1 |\n| 10x Wide Field Eyepiece w/Pointer | | 2 |\n| Abbe Condenser NA1.25 | | 1 |\n| Plastic Dust Cover | | 1 |\n| Spare 
6V20W Halogen Bulb | | 1 |\n| Lens Cleaning Tissue | | 1 |\n| Cedar Oil | | 1 |\n| 1A Fuse (spare) | | 1 |\n| Specification | | 1 |\n| Inspection Certificate | | 1 |\n| Packing List | | 1 |\n\n## **OPERATION**\n\n- 1. Remove all components from package. Identify all parts before assembling instrument.\n- 2. Attach 4x, 10x and 40x objectives by screwing into revolving turret. Tighten and secure to maximum finger pressure only.\n- 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n- 4. Plug power cord into an electrical outlet. Turn microscope lamp ON.\n- 5. Observe the specimen using the lowest magnification objective first. The 10x objective provides a larger field of view making it easier to search the specimen.", - "page_start": 8, - "page_end": 8, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "### **Model AY11228 Model AY11232**\n\n#### **SELECTING OBJECTIVE MAGNIFICATION**\n\n- 1. There are two objectives. The lower magnification objective has a greater depth of field and view.\n- 2. In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed.\n\n#### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n### **FOCUSING**\n\n- 1. Remove the lens protective cover.\n- 2. Place the specimen on the working stage.\n- 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n- 4. 
Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord from the electrical outlet before changing the bulb.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n### **FOCUSING**\n\n- 1. Turn the focusing knob away or toward you until a clear image is viewed.\n- 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n### **ZOOM MAGNIFICATION**\n\n- 1. Turn the zoom magnification knob to the desired magnification and field of view.\n- 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n- 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n### **DIOPTER RING ADJUSTMENT**\n\n- 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n- a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n- b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n- c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring. d.With more than one viewer, each\n- viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord from the electrical outlet.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. 
Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n**MODEL AY11230/AY11234**\n\n## **MICROSCOPE USAGE**\n\nBARSKA Model AY11230 and Model AY11234 are trinocular microscopes designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use and the vertical tube make them is useful for school classroom instruction.\n\n## **CONSTRUCTION**\n\nBARSKA Model AY11230 is a fixed power trinocular stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. BARSKA Model AY11234 is a zoom trinocular stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.", - "page_start": 5, - "page_end": 5, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n#### **Model AY11230**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: 60mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Right Diopter Adjustment Range: +4 to -6 dopters\n- 6. Illumination: Input Voltage: 110V AC or 220V Output: Oblique illumination: 12V 10W Halogen Lamp\n\n### **Model AY11234**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: >50mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Diopter Adjustment Range: +/- 5 diopters\n- 6. 
Illumination:\n\n Input Voltage: 110V AC or 220V Output: Oblique Illumination: 12V 10W Halogen Lamp Transmitted Illumination: 12V 10W Halogen Lamp\n\n### **Optical Specifications - Model AY11230**\n\n| Total | Objective | Eyepiece Magnification | Working Distance |\n| --- | --- | --- | --- |\n| Magnification | Magnification | & Field Diameter (mm) | |\n| 20x, 40x | 2x, 4x | Wide Field 10x, 20mm | 90mm |\n\n### **Optical Specifications - Model AY11234**\n\n| Objective Zoom Scale | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Accessory Large Objective | | - | 0.5x | 0.75x | 1.5x | 2x |\n| Working Distance (mm) | | 95 | 156 | 102 | 44 | 30 |\n| WF10x/20mm | Total Magnification | 7x 45x | 3.5x 22.5x | 5.3x 33.8x | 10.5x 67.5x | 14x 90x |\n| Field of View Objective Dia. (mm) | | 28.6- 4.4 | 57.2- 8.8 | 38.1- 5.9 | 19.0- 2.9 | 14.3- 2.2 |\n| WF12.5x/18mm | Total Magnification | 8.8x 56x | 4.4x 28x | 6.6x 42x | 13.2x 84x | 17.6x 112x |\n| Field of View Objective Dia. (mm) | | 25.7- | 51.4- | 34.3- | 17.1- | 12.9- |\n| | | 4.0 | 8 | 5.3 | 2.7 | 2.0 |\n| WF15x/16mm | Total Magnification | 10.5x- 67.5x | 5.3x- 33.8x | 7.9x- 58.6x | 15.7x- 101x | 21x- 135x |\n| Field of View Objective Dia. (mm) | | 22.9- | 45.8- | 30.5- | 15.3- | 11.5- |\n| | | 3.6 | 7.2 | 4.8 | 24 | 1.8 |\n| WF20x/12mm | Total Magnification | 14x 90x | 7x 45x | 10.5x 67.5x | 21x 135x | 28x 180x |\n| Field of View Objective Dia. (mm) | | 17.0- 2.7 | 34.0- 5.4 | 22.7- 3.6 | 11.3- 1.8 | 8.5- 1.4 |\n| WF25x/9mm | Total Magnification | 17.5x- 112.5x | 8.8x- 56.3x | 13x- 84.4x | 26.3x- 169x | 35x- 225x |\n| Field of View Objective Dia. (mm) | | 12.9- | 25.8- | 17.2- | 8.6- | 6.5- |\n| | | 2.0 | 4.0 | 2.7 | 1.3 | 1.0 |\n\n### **PARTS LIST**\n\n#### **Model AY11230**\n\n#### **Model AY11234**\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. 
(spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Black/White Working Stage | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n### **OPERATION**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Tighten the knob on the stand to prevent the elevator from sliding down.\n- 3. Fix the binocular body on the stand with the tightening screw.\n- 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **Model AY11230 Model AY11234**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- **12** 2. 
To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.", - "page_start": 6, - "page_end": 6, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n#### **Model AY11228**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: 60mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Right Diopter Adjustment Range: +4 to -6 dopters\n\n6. Illumination: Input Voltage: 110V AC or 220V Output: Oblique illumination: 12V 10W Halogen Lamp\n\n#### **Model AY11232**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: >50mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Diopter Adjustment Range: +/- 5 diopters\n- 6. Illumination:\n\n Input Voltage: 110V AC or 220V Output: Oblique Illumination: 12V 10W Halogen Lamp Transmitted Illumination: 12V 10W Halogen Lamp\n\n### **Optical Specifications - Model AY11228**\n\n| Total | Objective | Eyepiece Magnification | Working Distance |\n| --- | --- | --- | --- |\n| Magnification | Magnification | & Field Diameter (mm) | |\n| 20x, 40x | 2x, 4x | Wide Field 10x, 20mm | 90mm |\n\n### **Optical Specifications - Model AY11232**\n\n| Objective Zoom Scale | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Accessory Large Objective | | - | 0.5x | 0.75x | 1.5x | 2x |\n| Working Distance (mm) | | 95 | 156 | 102 | 44 | 30 |\n| WF10x/20mm | Total Magnification | 7x- 45x | 3.5x- 22.5x | 5.3x- 33.8x | 10.5x- 67.5x | 14x- 90x |\n| Field of View Objective Dia. (mm) | | 28.6- | 57.2- | 38.1- | 19.0- | 14.3- |\n| | | 4.4 | 8.8 | 5.9 | 2.9 | 2.2 |\n| WF12.5x/18mm | Total Magnification | 8.8x 56x | 4.4x 28x | 6.6x 42x | 13.2x 84x | 17.6x 112x |\n| Field of View Objective Dia. 
(mm) | | 25.7- | 51.4- | 34.3- | 17.1- | 12.9- |\n| | | 4.0 | 8 | 5.3 | 2.7 | 2.0 |\n| WF15x/16mm | Total Magnification | 10.5x- 67.5x | 5.3x- 33.8x | 7.9x- 58.6x | 15.7x- 101x | 21x- 135x |\n| Field of View Objective Dia. (mm) | | 22.9- | 45.8- | 30.5- | 15.3- | 11.5- |\n| | | 3.6 | 7.2 | 4.8 | 24 | 1.8 |\n| WF20x/12mm | Total Magnification | 14x 90x | 7x 45x | 10.5x 67.5x | 21x 135x | 28x 180x |\n| Field of View Objective Dia. (mm) | | 17.0- 2.7 | 34.0- 5.4 | 22.7- 3.6 | 11.3- 1.8 | 8.5- 1.4 |\n| WF25x/9mm | Total Magnification | 17.5x 112.5x | 8.8x 56.3x | 13x 84.4x | 26.3x 169x | 35x 225x |\n| Field of View Objective Dia. (mm) | | 12.9- | 25.8- | 17.2- | 8.6- | 6.5- |\n| | | 2.0 | 4.0 | 2.7 | 1.3 | 1.0 |\n\n#### **Model AY11228**\n\n#### **Model AY11232**\n\n| Name | Qty | |\n| --- | --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 | |\n| 10x Wide Field Eyepiece | 2 | |\n| Eyeshade | 2 | Eyeshade |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) | |\n| Fuse 2A (spare) | 1 | |\n| Lens Cleaning Tissue | 1 | |\n| Dust Cover | 1 | |\n| Black/White Working Stage | 1 | |\n| Specifications | 1 | |\n| Packing Slip | 1 | |\n| Quality Inspection Certificate | 1 | |\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n### **OPERATION**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Tighten the knob on the stand to prevent the elevator from sliding down.\n- 3. Fix the binocular body on the stand with the tightening screw.\n- 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n#### **SELECTING THE ILLUMINATION**\n\n- 1. 
Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **Model AY11228 Model AY11232**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n#### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.", - "page_start": 4, - "page_end": 4, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "in cash and drilling carries. This was CNOOC's second investment with Chesapeake and its second investment in the U.S. onshore E&P industry. We are currently drilling with five rigs in this play and expect to accelerate our drilling to 15 rigs by year-end 2013. We believe our leasehold position could support the drilling of up to 7,600 additional net wells.\n\nCleveland, Tonkawa and Mississippian Plays — These three liquids-rich plays of the Anadarko Basin should become significant contributors to our growth in the years ahead. 
The Cleveland and Tonkawa plays are tight sandstones located in western Oklahoma and the eastern Texas Panhandle, and they provide returns that are some of the very best in\n\n# **Fracking Operations Transparency**\n\nNatural gas and oil operations continue to grow and expand across the country as vast new resources are unlocked through the process of hydraulic fracturing, or \"fracking,\" a proven technology that has been used safely and successfully in the completion of more than 1 million U.S. wells since 1949.\n\nDuring the fracking process, a mixture of approximately 99% water and sand, combined with a small amount of chemical additives, is pumped at high pressure into a targeted formation to create small fissures or fractures in the surrounding rock or shale. These fractures are kept propped open by the sand to allow the natural gas or oil to freely flow into a wellbore.\n\nIn our continuing efforts to educate the public and alleviate common misconceptions about hydraulic fracturing, Chesapeake became one of the first energy companies to disclose the additives used in the process. We are actively participating in a national, publicly accessible web-based registry developed by the Ground Water Protection Council and the Interstate Oil and Gas Compact Commission, with support of the U.S. Department of Energy. The registry allows for fracking additives to be reported on a well-by-well basis and offers public access to that material on its website. Chesapeake began loading well completion data onto the registry on February 15, 2011, for wells where completion reports have been filed with the appropriate state agencies.\n\nTo view the listings and learn more about the fracking process, the additives used and measures taken to protect fresh ground water aquifers, visit www.fracfocus.org.\n\nthe company. We have acquired approximately 600,000 net leasehold acres prospective for these plays and have drilled 75 net wells to date. 
We are currently using eight rigs and believe our leasehold could support the drilling of up to an additional 3,700 net wells.\n\nThe Mississippian fractured carbonate is primarily an oil play and is located on the Anadarko Basin shelf of northern Oklahoma and southern Kansas. We have acquired approximately 900,000 net leasehold acres prospective for this play and have drilled 40 net wells to date. We are currently using four rigs and believe our leasehold could support the drilling of up to an additional 6,000 net wells. This is an area where we anticipate bringing in a joint venture partner later in 2011 or in early 2012.\n\nBone Spring, Avalon, Wolfcamp and Wolfberry Plays — These four liquids-rich plays of the Permian Basin should also become significant contributors to our growth in the years ahead. To date, we have acquired approximately 560,000 net leasehold acres that we believe are prospective for these plays and have drilled 155 net wells. We are currently using eight rigs and believe our leasehold could support the drilling of up to an additional 4,400 net wells.\n\nUtica Shale — Chesapeake has high hopes for this emerging shale play in eastern Ohio, especially because it would become the fourth large unconventional play (along with the Haynesville and Bossier shales and the Mississippian carbonate) that Chesapeake has discovered. In addition, we believe the play will have three distinct components (oil,\n\n*A prime example of Best Management Practices for fracture stimulation, this well in Bradford County, Pennsylvania, is now producing natural gas from the Marcellus Shale. 
A closely regulated completion technique, fracking is necessary to allow natural gas or oil to freely flow into the wellbore.*", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_CHK_2010.pdf" - } - ] - }, - { - "references": { - "source_file": "Microscope Manual.pdf", - "query": "For the AY11230 microscope, what is the interpupillary adjustment?", - "target_page": 7, - "target_passage": "Model AY11230 1. Interpupillary Adjustment: 55mm - 75mm", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "### **Model AY11240 Model AY11238**\n\n- 7. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n- 8. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n\n- 6. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n- 7. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n\n### **USING THE 5-HOLE DIAPHRAGM**\n\n- 1. To obtain the best contrast for observing, match the hole size to the objective that is being used to view the specimen.\n- 2. Each hole has a corresponding number from 1 to 5. 1 is the smallest hole; 5 is the largest hole. Use the following guidelines to match the hole number to the objective that you have selected: 40x objective: Use #5 hole 10x objective: Use #4 or #3 hole 4x objective: Use #2 or #1 hole\n\n### **COARSE KNOB ADJUSTMENT - Model AY11240**\n\n- 1. The coarse adjustment knob has an adjustable heavy-light nut (See Fig.1).\n- 2. To adjust the knob loosen or tighten the nut. NOTE: Adjusting the nut too tight will make focusing difficult. 
Adjusting the nut too loose will cause the tube to slide.\n\n## **MODEL AY11228/AY11232**\n\n### **MICROSCOPE USAGE**\n\nBARSKA Model AY11228 and Model AY11232 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use is especially useful for school classroom instruction.\n\n### **CONSTRUCTION**\n\nBARSKA Model AY11228 is a fixed power stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. BARSKA Model AY11232 is a zoom stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.", - "page_start": 3, - "page_end": 3, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## **INDEX**\n\n| Maintenance | 1 |\n| --- | --- |\n| Model AY11240/Model AY11238 | 2-5 |\n| Model AY11228/Model AY11232 | 6-9 |\n| Model AY11230/Model AY11234 | 10-13 |\n| Model AY11236 | 14-18 |\n| Warranty Information | Back Cover |\n\n### **IMPORTANT NOTES**\n\nCongratulations on your purchase of this high quality BARSKA microscope. With proper care, this microscope will provide many years of use. Please read the following instructions before operating this instrument.\n\n- 1. Do not attempt to disassemble the instrument. This product has been carefully assembled at the factory and should only be examined by a factory-trained technician.\n- 2. This instrument should only be used in an environment with an indoor temperature range of 32oF to 104oF.\n- 3. Do not use this instrument in an environment with a lot of dust. **Cover the instrument when not in use.**\n- 4. 
Do not subject the instrument to shock.\n\n## **MAINTENANCE**\n\nProper care and storage of this instrument is essential. Please read the following guidelines:\n\n- 1. Keep the instrument in a dry and moisture-free location.\n- 2. Do not expose to acid, alkali fumes or moisture.\n- 3. Keep optical parts clean and free of dust. To clean optical parts gently wipe with lens cleaning tissue and a mixture of alcohol and diethyl ether. Depending on weather conditions, the following are the recommended mixture ratios: Wet weather: 1:2\n\nDry Weather: 1:1\n\n- 4. After use, cover the instrument with the plastic dust cover.\n- 5. If instrument is to be stored for an extended period of time, remove the eyepiece and oculars and store in a moisture-proof container.\n\n# **MODEL AY11240/AY11238**\n\n## **MICROSCOPE USAGE**\n\nBARSKA Model AY11240 and Model AY11238 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use is especially useful for school classroom instruction.\n\n## **CONSTRUCTION**\n\nBARSKA Model AY11240 is a fixed tube type. For comfortable observation, the arm can be easily tilted at any angle from 90o vertical to 45o level. It is also equipped with a coarse adjustment and fine adjustment as well as a space limiter to protect the objective from contacting and damaging the specimen. BARSKA Model AY11238 features a monocular tube that is slanted at a 45o angle. The head rotates 360o. The Eyepiece Set Screw prevents the eyepiece from falling out of the tube.", - "page_start": 1, - "page_end": 1, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "### **Model AY11230 Model AY11234**\n\n### **SELECTING OBJECTIVE MAGNIFICATION**\n\n- 1. There are two objectives. The lower magnification objective has a greater depth of field and view.\n- 2. In order to observe the specimen easily use the lower magnification objective first. 
Then, by rotating the case, the magnification can be changed.\n\n#### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n### **FOCUSING**\n\n- 1. Remove the lens protective cover.\n- 2. Place the specimen on the working stage.\n- 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n- 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n#### **USING THE VERTICAL TUBE - MODELS AY11230/11234**\n\n1. The vertical tube can be used for instructional viewing or to photograph the image witrh a digital camera or micro TV unit.\n\n- 2. Loosen the retention screw, then rotate the adjustment ring to change the length of the vertical tube.\n- 3. Make sure that both the images in\n\n### **FOCUSING**\n\n- 1. Turn the focusing knob away or toward you until a clear image is viewed.\n- 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n### **ZOOM MAGNIFICATION**\n\n- 1. Turn the zoom magnification knob to the desired magnification and field of view.\n- 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n- 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n### **DIOPTER RING ADJUSTMENT**\n\n- 1. 
To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n- a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n- b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n- c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring. d.With more than one viewer, each\n- viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord from the electrical outlet.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n# **MODEL AY11236**\n\n**Model AY11236**\n\n# **MICROSCOPE USAGE**\n\nBARSKA Model AY11236 is a powerful fixed power compound microscope designed for biological studies such as specimen examination. It can also be used for examining bacteria and for general clinical and medical studies and other scientific uses.\n\n## **CONSTRUCTION**\n\nBARSKA Model AY11236 is a fixed power compound microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination. By using this instrument, the user can observe specimens at magnification from 40x to 1000x by selecting the desired objective lens. Coarse and fine focus adjustments provide accuracy and image detail. 
The rotating head allows the user to position the eyepieces for maximum viewing comfort and easy access to all adjustment knobs.", - "page_start": 7, - "page_end": 7, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "### **Model AY11228 Model AY11232**\n\n#### **SELECTING OBJECTIVE MAGNIFICATION**\n\n- 1. There are two objectives. The lower magnification objective has a greater depth of field and view.\n- 2. In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed.\n\n#### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n### **FOCUSING**\n\n- 1. Remove the lens protective cover.\n- 2. Place the specimen on the working stage.\n- 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n- 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord from the electrical outlet before changing the bulb.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n### **FOCUSING**\n\n- 1. Turn the focusing knob away or toward you until a clear image is viewed.\n- 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n### **ZOOM MAGNIFICATION**\n\n- 1. Turn the zoom magnification knob to the desired magnification and field of view.\n- 2. 
In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n- 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n### **DIOPTER RING ADJUSTMENT**\n\n- 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n- a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n- b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n- c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring. d.With more than one viewer, each\n- viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord from the electrical outlet.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n**MODEL AY11230/AY11234**\n\n## **MICROSCOPE USAGE**\n\nBARSKA Model AY11230 and Model AY11234 are trinocular microscopes designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use and the vertical tube make them is useful for school classroom instruction.\n\n## **CONSTRUCTION**\n\nBARSKA Model AY11230 is a fixed power trinocular stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. 
BARSKA Model AY11234 is a zoom trinocular stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.", - "page_start": 5, - "page_end": 5, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n#### **Model AY11230**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: 60mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Right Diopter Adjustment Range: +4 to -6 dopters\n- 6. Illumination: Input Voltage: 110V AC or 220V Output: Oblique illumination: 12V 10W Halogen Lamp\n\n### **Model AY11234**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: >50mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Diopter Adjustment Range: +/- 5 diopters\n- 6. Illumination:\n\n Input Voltage: 110V AC or 220V Output: Oblique Illumination: 12V 10W Halogen Lamp Transmitted Illumination: 12V 10W Halogen Lamp\n\n### **Optical Specifications - Model AY11230**\n\n| Total | Objective | Eyepiece Magnification | Working Distance |\n| --- | --- | --- | --- |\n| Magnification | Magnification | & Field Diameter (mm) | |\n| 20x, 40x | 2x, 4x | Wide Field 10x, 20mm | 90mm |\n\n### **Optical Specifications - Model AY11234**\n\n| Objective Zoom Scale | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Accessory Large Objective | | - | 0.5x | 0.75x | 1.5x | 2x |\n| Working Distance (mm) | | 95 | 156 | 102 | 44 | 30 |\n| WF10x/20mm | Total Magnification | 7x 45x | 3.5x 22.5x | 5.3x 33.8x | 10.5x 67.5x | 14x 90x |\n| Field of View Objective Dia. (mm) | | 28.6- 4.4 | 57.2- 8.8 | 38.1- 5.9 | 19.0- 2.9 | 14.3- 2.2 |\n| WF12.5x/18mm | Total Magnification | 8.8x 56x | 4.4x 28x | 6.6x 42x | 13.2x 84x | 17.6x 112x |\n| Field of View Objective Dia. 
(mm) | | 25.7- | 51.4- | 34.3- | 17.1- | 12.9- |\n| | | 4.0 | 8 | 5.3 | 2.7 | 2.0 |\n| WF15x/16mm | Total Magnification | 10.5x- 67.5x | 5.3x- 33.8x | 7.9x- 58.6x | 15.7x- 101x | 21x- 135x |\n| Field of View Objective Dia. (mm) | | 22.9- | 45.8- | 30.5- | 15.3- | 11.5- |\n| | | 3.6 | 7.2 | 4.8 | 24 | 1.8 |\n| WF20x/12mm | Total Magnification | 14x 90x | 7x 45x | 10.5x 67.5x | 21x 135x | 28x 180x |\n| Field of View Objective Dia. (mm) | | 17.0- 2.7 | 34.0- 5.4 | 22.7- 3.6 | 11.3- 1.8 | 8.5- 1.4 |\n| WF25x/9mm | Total Magnification | 17.5x- 112.5x | 8.8x- 56.3x | 13x- 84.4x | 26.3x- 169x | 35x- 225x |\n| Field of View Objective Dia. (mm) | | 12.9- | 25.8- | 17.2- | 8.6- | 6.5- |\n| | | 2.0 | 4.0 | 2.7 | 1.3 | 1.0 |\n\n### **PARTS LIST**\n\n#### **Model AY11230**\n\n#### **Model AY11234**\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Black/White Working Stage | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n### **OPERATION**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Tighten the knob on the stand to prevent the elevator from sliding down.\n- 3. Fix the binocular body on the stand with the tightening screw.\n- 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. 
Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **Model AY11230 Model AY11234**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- **12** 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.", - "page_start": 6, - "page_end": 6, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n#### **Model AY11228**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: 60mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Right Diopter Adjustment Range: +4 to -6 dopters\n\n6. Illumination: Input Voltage: 110V AC or 220V Output: Oblique illumination: 12V 10W Halogen Lamp\n\n#### **Model AY11232**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: >50mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Diopter Adjustment Range: +/- 5 diopters\n- 6. 
Illumination:\n\n Input Voltage: 110V AC or 220V Output: Oblique Illumination: 12V 10W Halogen Lamp Transmitted Illumination: 12V 10W Halogen Lamp\n\n### **Optical Specifications - Model AY11228**\n\n| Total | Objective | Eyepiece Magnification | Working Distance |\n| --- | --- | --- | --- |\n| Magnification | Magnification | & Field Diameter (mm) | |\n| 20x, 40x | 2x, 4x | Wide Field 10x, 20mm | 90mm |\n\n### **Optical Specifications - Model AY11232**\n\n| Objective Zoom Scale | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Accessory Large Objective | | - | 0.5x | 0.75x | 1.5x | 2x |\n| Working Distance (mm) | | 95 | 156 | 102 | 44 | 30 |\n| WF10x/20mm | Total Magnification | 7x- 45x | 3.5x- 22.5x | 5.3x- 33.8x | 10.5x- 67.5x | 14x- 90x |\n| Field of View Objective Dia. (mm) | | 28.6- | 57.2- | 38.1- | 19.0- | 14.3- |\n| | | 4.4 | 8.8 | 5.9 | 2.9 | 2.2 |\n| WF12.5x/18mm | Total Magnification | 8.8x 56x | 4.4x 28x | 6.6x 42x | 13.2x 84x | 17.6x 112x |\n| Field of View Objective Dia. (mm) | | 25.7- | 51.4- | 34.3- | 17.1- | 12.9- |\n| | | 4.0 | 8 | 5.3 | 2.7 | 2.0 |\n| WF15x/16mm | Total Magnification | 10.5x- 67.5x | 5.3x- 33.8x | 7.9x- 58.6x | 15.7x- 101x | 21x- 135x |\n| Field of View Objective Dia. (mm) | | 22.9- | 45.8- | 30.5- | 15.3- | 11.5- |\n| | | 3.6 | 7.2 | 4.8 | 24 | 1.8 |\n| WF20x/12mm | Total Magnification | 14x 90x | 7x 45x | 10.5x 67.5x | 21x 135x | 28x 180x |\n| Field of View Objective Dia. (mm) | | 17.0- 2.7 | 34.0- 5.4 | 22.7- 3.6 | 11.3- 1.8 | 8.5- 1.4 |\n| WF25x/9mm | Total Magnification | 17.5x 112.5x | 8.8x 56.3x | 13x 84.4x | 26.3x 169x | 35x 225x |\n| Field of View Objective Dia. (mm) | | 12.9- | 25.8- | 17.2- | 8.6- | 6.5- |\n| | | 2.0 | 4.0 | 2.7 | 1.3 | 1.0 |\n\n#### **Model AY11228**\n\n#### **Model AY11232**\n\n| Name | Qty | |\n| --- | --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 | |\n| 10x Wide Field Eyepiece | 2 | |\n| Eyeshade | 2 | Eyeshade |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. 
(spare) | |\n| Fuse 2A (spare) | 1 | |\n| Lens Cleaning Tissue | 1 | |\n| Dust Cover | 1 | |\n| Black/White Working Stage | 1 | |\n| Specifications | 1 | |\n| Packing Slip | 1 | |\n| Quality Inspection Certificate | 1 | |\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n### **OPERATION**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Tighten the knob on the stand to prevent the elevator from sliding down.\n- 3. Fix the binocular body on the stand with the tightening screw.\n- 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n#### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **Model AY11228 Model AY11232**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n#### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. 
The distance between the observer's pupils is the interpupillary distance.\n- 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.", - "page_start": 4, - "page_end": 4, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "- 6. Adjust the interpupillary distance by using the eyepiece interpupillary slide adjustment.\n- 7. Observe using the right eyepiece adjusting the coarse and fine focus and adjust the diopter ring until image is clear and sharp.\n- 8. Observe with the left eyepiece and adjust the diopter ring until image is clear and sharp.\n- 9. Rotate the fine focus adjustment when using other objectives. NOTE: This instrument is equipped with patent objectives so the precision or parfocalization is very high.\n\n**Fig. 1 - Objective Parts**\n\n- 10. If the image is in focus with the 10x objective, you can select other objectives and observe the specimen even if the fine adjustment knob has not been used by using the following method (See Fig. 1):\n- 1. Unscrew the 40x or 100x objective and remove from turret.\n- 2. Remove the mark sleeve.\n- 3. Turn the ring on the objective to adjust its parfocal distance.\n- 4. Re-insert the objective and compare with the 10x.\n- 5. Adjust until the 40x and 100x objectives image is clear.\n\n### **USING THE CEDAR OIL**\n\n- 1. Drop some cedar oil on to the top of the 100x objective when the 100x objective is being used. NOTE: To maintain a good quality image, rotate the turret right and left several times to eliminate bubbles in the cedar oil.\n- 2. After finishing the observation, wipe off the cedar oil.\n- 3. Do not use the 40x objective until you have wiped off all of the cedar oil.\n\n# **OPERATION (cont.)**\n\n### **ADJUSTING THE CONDENSER APERTURE**\n\n- 1. The numerical aperture of the condenser should match the numerical aperture of the objective being used.\n- 2. 
To make sure that the objectives are imaging properly (especially the 40x and 100x), follow this procedure:\n- 1. Take off the eyepiece.\n- 2. Look through the eyepiece.\n- 3. The smallest circle or light that you can see is the eyepiece's exit pupil.\n- 4. Adjust the aperture of the iris diaphragm in the condenser to 70% or 80% for the best contrast for observation (See Fig. 2.).\n\n**Fig. 2 - Condenser Diaphram Aperture**\n\n## **TROUBLESHOOTING**\n\n| Problem | Possible Cause | Solution |\n| --- | --- | --- |\n| 1. Image not clear. | 1.Specimen is in incorrect | 1. Re-position specimen. |\n| | position. | 2. Clean lens. |\n| | 2. Lens is dirty. | 3. Put a drop of Cedar oil on |\n| | 3. Cedar oil not placed on | immersion objective. |\n| | immersion objective. | 4. Rotate turret several times to |\n| | 4. Bubbles in Cedar oil. | eliminate bubbles. |\n| | 5. Cedar oil on 40x objective. | 5. Clean 40x objective. |\n| | 6. Iris diaphragm open too wide. | 6. Reduce size of iris diaphragm. |\n| 2. Poor illumination. | 1. Condenser position is incorrect. | 1. Re-position condenser. |\n| | 2. Lens is dirty. | 2. Clean lens. |\n| | 3. Specimen is not placed level. | 3. Re-position specimen so it is level. |\n| 3. Illumination not bright. | 1. Iris diaphragm opening too small. | 1. Open iris diaphragm wider. |\n| | 2. Position of condenser too low. | 2. Raise condenser. |\n| | 3. Lens is dirty. | 3. Clean lens. |\n| 4. Cannot focus at high | 1. Specimen is in incorrect position. | 1. Re-position specimen. |\n| magnification. | | |\n| 5. Objective lenses touch | 1. Stage is too high. | 1. Re-position stage. |\n| specimen. 
| | |", - "page_start": 9, - "page_end": 9, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "- \n- \n- \n- \n- \n- \n- \n- \n\n- \n- \n- \n- \n- \n- \n- \n- \n- \n\n| Classification Optical | | Magnification | Numerica | Working |\n| --- | --- | --- | --- | --- |\n| System | | | Aperture | Distance |\n| Dry | | 4x Adjustable | 0.1 | 37.42mm |\n| | | Focus | | |\n| Achromatic | Dry | 10x | 0.25 | 7.14mm |\n| Objective | | | | |\n| Dry | | 40x Spring | 0.65 | 0.57mm |\n| | | Adjustable | | |\n| | | Focus | | |\n\n| Classification | Magnification | Field of View (FOV) |\n| --- | --- | --- |\n| | | Diameter |\n| Plain Field Eyepiece | Model AY11240 10x | 18mm |\n| | Model AY11238 | |\n| | 10x | 25mm |\n\n| | Magnification | Eyepiece | 10x |\n| --- | --- | --- | --- |\n| Objective | | | |\n| | 4x | | 40x |\n| | 10x | | 100x |\n| | 40x (s) | | 400x |\n\n## **PARTS LIST**\n\n#### **Model AY11240**\n\n**Model AY11238**\n\n| Name | | Qty | Name | | Qty |\n| --- | --- | --- | --- | --- | --- |\n| Microscope Stand | | 1 | Microscope Stand | | 1 |\n| Achromatic Objective | 4x | 1 | Achromatic | 4x | 1 |\n| | 10x | 1 | Objective | 10x | 1 |\n| | 40x (s) | 1 | | 40x (s) | 1 |\n| Plain Concave Mirror | | 1 | 10x Wide Field Eyepiece | | 1 |\n| Plastic Dust Cover | | 1 | Plastic Dust Cover | | 1 |\n| 10x Wide Field Eyepiece | | 1 | Spare Bulb | | 1 |\n| Lens Cleaning Tissue | | 1 | Lens Cleaning Tissue | | 1 |\n| Specification | | 1 | Specification | | 1 |\n| Inspection Certificate | | 1 | Inspection Certificate | | 1 |\n| Packing List | | 1 | Packing List | | 1 |\n\n### **OPERATION**\n\n#### **Model AY11240 Model AY11238**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Attach 4x, 10x and 40x objectives to revolving turret.\n- 3. Place the specimen on the stage and secure with spring clips. 
NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n- 4. Adjust the stand to an angle that provides comfortable observation.\n- 5. Rotate and adjust concave mirror to light the field of view. **NOTE: Do not reflect the Sun with the mirror. This can cause serious eye injury or permanent eye damage.**\n- 6. Observe the specimen using the lowest magnification objective first. The 4x objective provides a larger field of view to search specimen.\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Attach 4x, 10x and 40x objectives to revolving turret. 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n- 4. Plug power cord into an electrical outlet. Turn microscope lamp ON.\n- 5. Observe the specimen using the lowest magnification objective first. The 4x objective provides a larger field of view to search specimen.", - "page_start": 2, - "page_end": 2, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "FIG. 6: Profiles of the final dried-in nanoparticle layer for the dewetting of a suspension of nanoparticles in a volatile solvent that partially wets the substrate for (a) high (Ω = 10−3 ), (b) medium (Ω = 2 × 10−6 ) and (c) low (Ω = 0.78 × 10−8 ) evaporation rates, for the case when χ = H/l0 = 1.09, the lateral length scale is ` = p γ/κH with κ = (Sp/l0) exp(d0/l0)H being an energy scale related to wettability and the vertical length scale is H = p 2SLW /κd0. 
The remaining dimensionless parameters are the evaporation number Ω = Qeη0` 2/H3 , the diffusion number Γ = D(0)η0/Hκ = 10−4 and the dimensionless chemical potential M = Hµ/κ = −0.0035. The system size is L = 19500`. Film thickness and hp in the plots are scaled by the precursor film thickness.\n\ncircular throughout the dewetting and evaporation process. In this case one should interprete the coordinate x as the distance from the centre of the circular film.\n\nWe start with a film of height h0 of finite length sitting on a precursor film and assume that the film contains nanoparticles at constant concentration φ0. The chosen parameter values ensure that the film of thickness h0 is linearly stable. As we do not incorporate noise, no nucleation of additional holes can occur (even with noise the probability would be extremely low). Without evaporation the film dewets 'classically' by a retraction of the initially step-like front. After a short time, surface tension smoothes the profile of the receding front and a capillary rim forms that collects all the", - "page_start": 19, - "page_end": 19, - "source_file": "1001.2669.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n- 1. Length of mechanical tube: 160mm\n- 2. Conjugate distance between object and image: 195mm\n- 3. Condenser: Abbe; numerical aperture: NA1.25 (oil immersion)\n- 4. Illumination: Input 110V or 200V; Output: 20W\n- 5. Fine adjustment range: .002mm\n- 6. Coarse Adjustment Range: 20mm\n- 7. Shift or Mechanical Stage: Longitude 40mm; Transversal 70mm\n- 8. Condenser Elevation Range: 15mm\n- 9. 
Iris diaphragm aperture: 2mm-30mm\n\n### **Objective Specifications**\n\n| Classification | | Optical Magnification | Numerical | Working |\n| --- | --- | --- | --- | --- |\n| | System | | Aperture | Distance |\n| Achromatic Objective | Dry | 4x Adjustable | 0 .1 | 37.42mm |\n| | | Focus | | |\n| | Dry | 10x | 0 .25 | 7.14mm |\n| | Dry | 40x Spring | 0 .65 | 0.57mm |\n| | | Adjustable | | |\n| | | Focus | | |\n| | Oil | 100x Spring | 1.25 | 0.18mm |\n| | Immer | Adjustable | | |\n| | sion | Focus | | |\n\nNote: For oil immersion, please use the index of refraction 1.515 oil\n\n### **Eyepiece Specifications**\n\n| Classification | Magnification | Field of View (FOV) Diameter |\n| --- | --- | --- |\n| Plain Field Eyepiece | 10x | 18mm |\n\n### **Total Magnification**\n\n| | Magnification | Eyepiece | 10x |\n| --- | --- | --- | --- |\n| Objective | | | |\n| | 4x | | 40x |\n| | 10x | | 100x |\n| | 40x (s) | | 400x |\n| | 100x (oil,s) | | 1000x |\n\n# **PARTS LIST**\n\n| Name | | Qty |\n| --- | --- | --- |\n| Microscope Stand | | 1 |\n| Achromatic | 4x (parfocal distance adjustable) | 1 |\n| 10x | | 1 |\n| Objective | 40x (s) (parfocal distance adjustable) | 1 |\n| | 100x (oil,s) (parfocal distance adjustable) | 1 |\n| 10x Wide Field Eyepiece w/Pointer | | 2 |\n| Abbe Condenser NA1.25 | | 1 |\n| Plastic Dust Cover | | 1 |\n| Spare 6V20W Halogen Bulb | | 1 |\n| Lens Cleaning Tissue | | 1 |\n| Cedar Oil | | 1 |\n| 1A Fuse (spare) | | 1 |\n| Specification | | 1 |\n| Inspection Certificate | | 1 |\n| Packing List | | 1 |\n\n## **OPERATION**\n\n- 1. Remove all components from package. Identify all parts before assembling instrument.\n- 2. Attach 4x, 10x and 40x objectives by screwing into revolving turret. Tighten and secure to maximum finger pressure only.\n- 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. 
Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n- 4. Plug power cord into an electrical outlet. Turn microscope lamp ON.\n- 5. Observe the specimen using the lowest magnification objective first. The 10x objective provides a larger field of view making it easier to search the specimen.", - "page_start": 8, - "page_end": 8, - "source_file": "Microscope Manual.pdf" - } - ] - }, - { - "references": { - "source_file": "Microscope Manual.pdf", - "query": "The illumination of my AY11236 microscope is not very strong, what can I do to solve this?", - "target_page": 10, - "target_passage": "1. Open iris diaphragm wider. 2. Raise condenser. 3. Clean lens.", - "chunk_present": { - "presence": true, - "index": 8 - } - }, - "top_chunk": [ - { - "text": "## **INDEX**\n\n| Maintenance | 1 |\n| --- | --- |\n| Model AY11240/Model AY11238 | 2-5 |\n| Model AY11228/Model AY11232 | 6-9 |\n| Model AY11230/Model AY11234 | 10-13 |\n| Model AY11236 | 14-18 |\n| Warranty Information | Back Cover |\n\n### **IMPORTANT NOTES**\n\nCongratulations on your purchase of this high quality BARSKA microscope. With proper care, this microscope will provide many years of use. Please read the following instructions before operating this instrument.\n\n- 1. Do not attempt to disassemble the instrument. This product has been carefully assembled at the factory and should only be examined by a factory-trained technician.\n- 2. This instrument should only be used in an environment with an indoor temperature range of 32oF to 104oF.\n- 3. Do not use this instrument in an environment with a lot of dust. **Cover the instrument when not in use.**\n- 4. Do not subject the instrument to shock.\n\n## **MAINTENANCE**\n\nProper care and storage of this instrument is essential. Please read the following guidelines:\n\n- 1. Keep the instrument in a dry and moisture-free location.\n- 2. Do not expose to acid, alkali fumes or moisture.\n- 3. 
Keep optical parts clean and free of dust. To clean optical parts gently wipe with lens cleaning tissue and a mixture of alcohol and diethyl ether. Depending on weather conditions, the following are the recommended mixture ratios: Wet weather: 1:2\n\nDry Weather: 1:1\n\n- 4. After use, cover the instrument with the plastic dust cover.\n- 5. If instrument is to be stored for an extended period of time, remove the eyepiece and oculars and store in a moisture-proof container.\n\n# **MODEL AY11240/AY11238**\n\n## **MICROSCOPE USAGE**\n\nBARSKA Model AY11240 and Model AY11238 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use is especially useful for school classroom instruction.\n\n## **CONSTRUCTION**\n\nBARSKA Model AY11240 is a fixed tube type. For comfortable observation, the arm can be easily tilted at any angle from 90o vertical to 45o level. It is also equipped with a coarse adjustment and fine adjustment as well as a space limiter to protect the objective from contacting and damaging the specimen. BARSKA Model AY11238 features a monocular tube that is slanted at a 45o angle. The head rotates 360o. The Eyepiece Set Screw prevents the eyepiece from falling out of the tube.", - "page_start": 1, - "page_end": 1, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "### **Model AY11240 Model AY11238**\n\n- 7. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n- 8. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n\n- 6. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n- 7. Rotate the fine adjustment knob until the image is in sharp focus. 
When using other objectives, rotate the fine focus adjustment until the image is in focus.\n\n### **USING THE 5-HOLE DIAPHRAGM**\n\n- 1. To obtain the best contrast for observing, match the hole size to the objective that is being used to view the specimen.\n- 2. Each hole has a corresponding number from 1 to 5. 1 is the smallest hole; 5 is the largest hole. Use the following guidelines to match the hole number to the objective that you have selected: 40x objective: Use #5 hole 10x objective: Use #4 or #3 hole 4x objective: Use #2 or #1 hole\n\n### **COARSE KNOB ADJUSTMENT - Model AY11240**\n\n- 1. The coarse adjustment knob has an adjustable heavy-light nut (See Fig.1).\n- 2. To adjust the knob loosen or tighten the nut. NOTE: Adjusting the nut too tight will make focusing difficult. Adjusting the nut too loose will cause the tube to slide.\n\n## **MODEL AY11228/AY11232**\n\n### **MICROSCOPE USAGE**\n\nBARSKA Model AY11228 and Model AY11232 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use is especially useful for school classroom instruction.\n\n### **CONSTRUCTION**\n\nBARSKA Model AY11228 is a fixed power stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. BARSKA Model AY11232 is a zoom stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.", - "page_start": 3, - "page_end": 3, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "### **Model AY11230 Model AY11234**\n\n### **SELECTING OBJECTIVE MAGNIFICATION**\n\n- 1. There are two objectives. 
The lower magnification objective has a greater depth of field and view.\n- 2. In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed.\n\n#### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n### **FOCUSING**\n\n- 1. Remove the lens protective cover.\n- 2. Place the specimen on the working stage.\n- 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n- 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n#### **USING THE VERTICAL TUBE - MODELS AY11230/11234**\n\n1. The vertical tube can be used for instructional viewing or to photograph the image witrh a digital camera or micro TV unit.\n\n- 2. Loosen the retention screw, then rotate the adjustment ring to change the length of the vertical tube.\n- 3. Make sure that both the images in\n\n### **FOCUSING**\n\n- 1. Turn the focusing knob away or toward you until a clear image is viewed.\n- 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n### **ZOOM MAGNIFICATION**\n\n- 1. Turn the zoom magnification knob to the desired magnification and field of view.\n- 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n- 3. 
If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n### **DIOPTER RING ADJUSTMENT**\n\n- 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n- a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n- b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n- c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring. d.With more than one viewer, each\n- viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord from the electrical outlet.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n# **MODEL AY11236**\n\n**Model AY11236**\n\n# **MICROSCOPE USAGE**\n\nBARSKA Model AY11236 is a powerful fixed power compound microscope designed for biological studies such as specimen examination. It can also be used for examining bacteria and for general clinical and medical studies and other scientific uses.\n\n## **CONSTRUCTION**\n\nBARSKA Model AY11236 is a fixed power compound microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination. By using this instrument, the user can observe specimens at magnification from 40x to 1000x by selecting the desired objective lens. Coarse and fine focus adjustments provide accuracy and image detail. 
The rotating head allows the user to position the eyepieces for maximum viewing comfort and easy access to all adjustment knobs.", - "page_start": 7, - "page_end": 7, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "- \n- \n- \n- \n- \n- \n- \n- \n\n- \n- \n- \n- \n- \n- \n- \n- \n- \n\n| Classification Optical | | Magnification | Numerica | Working |\n| --- | --- | --- | --- | --- |\n| System | | | Aperture | Distance |\n| Dry | | 4x Adjustable | 0.1 | 37.42mm |\n| | | Focus | | |\n| Achromatic | Dry | 10x | 0.25 | 7.14mm |\n| Objective | | | | |\n| Dry | | 40x Spring | 0.65 | 0.57mm |\n| | | Adjustable | | |\n| | | Focus | | |\n\n| Classification | Magnification | Field of View (FOV) |\n| --- | --- | --- |\n| | | Diameter |\n| Plain Field Eyepiece | Model AY11240 10x | 18mm |\n| | Model AY11238 | |\n| | 10x | 25mm |\n\n| | Magnification | Eyepiece | 10x |\n| --- | --- | --- | --- |\n| Objective | | | |\n| | 4x | | 40x |\n| | 10x | | 100x |\n| | 40x (s) | | 400x |\n\n## **PARTS LIST**\n\n#### **Model AY11240**\n\n**Model AY11238**\n\n| Name | | Qty | Name | | Qty |\n| --- | --- | --- | --- | --- | --- |\n| Microscope Stand | | 1 | Microscope Stand | | 1 |\n| Achromatic Objective | 4x | 1 | Achromatic | 4x | 1 |\n| | 10x | 1 | Objective | 10x | 1 |\n| | 40x (s) | 1 | | 40x (s) | 1 |\n| Plain Concave Mirror | | 1 | 10x Wide Field Eyepiece | | 1 |\n| Plastic Dust Cover | | 1 | Plastic Dust Cover | | 1 |\n| 10x Wide Field Eyepiece | | 1 | Spare Bulb | | 1 |\n| Lens Cleaning Tissue | | 1 | Lens Cleaning Tissue | | 1 |\n| Specification | | 1 | Specification | | 1 |\n| Inspection Certificate | | 1 | Inspection Certificate | | 1 |\n| Packing List | | 1 | Packing List | | 1 |\n\n### **OPERATION**\n\n#### **Model AY11240 Model AY11238**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Attach 4x, 10x and 40x objectives to revolving turret.\n- 3. Place the specimen on the stage and secure with spring clips. 
NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n- 4. Adjust the stand to an angle that provides comfortable observation.\n- 5. Rotate and adjust concave mirror to light the field of view. **NOTE: Do not reflect the Sun with the mirror. This can cause serious eye injury or permanent eye damage.**\n- 6. Observe the specimen using the lowest magnification objective first. The 4x objective provides a larger field of view to search specimen.\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Attach 4x, 10x and 40x objectives to revolving turret. 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n- 4. Plug power cord into an electrical outlet. Turn microscope lamp ON.\n- 5. Observe the specimen using the lowest magnification objective first. The 4x objective provides a larger field of view to search specimen.", - "page_start": 2, - "page_end": 2, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "### **Model AY11228 Model AY11232**\n\n#### **SELECTING OBJECTIVE MAGNIFICATION**\n\n- 1. There are two objectives. The lower magnification objective has a greater depth of field and view.\n- 2. In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed.\n\n#### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- 2. 
To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n### **FOCUSING**\n\n- 1. Remove the lens protective cover.\n- 2. Place the specimen on the working stage.\n- 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n- 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord from the electrical outlet before changing the bulb.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n### **FOCUSING**\n\n- 1. Turn the focusing knob away or toward you until a clear image is viewed.\n- 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n### **ZOOM MAGNIFICATION**\n\n- 1. Turn the zoom magnification knob to the desired magnification and field of view.\n- 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n- 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n### **DIOPTER RING ADJUSTMENT**\n\n- 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n- a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n- b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n- c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring. 
d.With more than one viewer, each\n- viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n### **CHANGING THE BULB**\n\n- 1. Disconnect the power cord from the electrical outlet.\n- 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n- 3. Replace with a new halogen bulb.\n- 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n**MODEL AY11230/AY11234**\n\n## **MICROSCOPE USAGE**\n\nBARSKA Model AY11230 and Model AY11234 are trinocular microscopes designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use and the vertical tube make them is useful for school classroom instruction.\n\n## **CONSTRUCTION**\n\nBARSKA Model AY11230 is a fixed power trinocular stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. BARSKA Model AY11234 is a zoom trinocular stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.", - "page_start": 5, - "page_end": 5, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n#### **Model AY11230**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: 60mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Right Diopter Adjustment Range: +4 to -6 dopters\n- 6. 
Illumination: Input Voltage: 110V AC or 220V Output: Oblique illumination: 12V 10W Halogen Lamp\n\n### **Model AY11234**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: >50mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Diopter Adjustment Range: +/- 5 diopters\n- 6. Illumination:\n\n Input Voltage: 110V AC or 220V Output: Oblique Illumination: 12V 10W Halogen Lamp Transmitted Illumination: 12V 10W Halogen Lamp\n\n### **Optical Specifications - Model AY11230**\n\n| Total | Objective | Eyepiece Magnification | Working Distance |\n| --- | --- | --- | --- |\n| Magnification | Magnification | & Field Diameter (mm) | |\n| 20x, 40x | 2x, 4x | Wide Field 10x, 20mm | 90mm |\n\n### **Optical Specifications - Model AY11234**\n\n| Objective Zoom Scale | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Accessory Large Objective | | - | 0.5x | 0.75x | 1.5x | 2x |\n| Working Distance (mm) | | 95 | 156 | 102 | 44 | 30 |\n| WF10x/20mm | Total Magnification | 7x 45x | 3.5x 22.5x | 5.3x 33.8x | 10.5x 67.5x | 14x 90x |\n| Field of View Objective Dia. (mm) | | 28.6- 4.4 | 57.2- 8.8 | 38.1- 5.9 | 19.0- 2.9 | 14.3- 2.2 |\n| WF12.5x/18mm | Total Magnification | 8.8x 56x | 4.4x 28x | 6.6x 42x | 13.2x 84x | 17.6x 112x |\n| Field of View Objective Dia. (mm) | | 25.7- | 51.4- | 34.3- | 17.1- | 12.9- |\n| | | 4.0 | 8 | 5.3 | 2.7 | 2.0 |\n| WF15x/16mm | Total Magnification | 10.5x- 67.5x | 5.3x- 33.8x | 7.9x- 58.6x | 15.7x- 101x | 21x- 135x |\n| Field of View Objective Dia. (mm) | | 22.9- | 45.8- | 30.5- | 15.3- | 11.5- |\n| | | 3.6 | 7.2 | 4.8 | 24 | 1.8 |\n| WF20x/12mm | Total Magnification | 14x 90x | 7x 45x | 10.5x 67.5x | 21x 135x | 28x 180x |\n| Field of View Objective Dia. (mm) | | 17.0- 2.7 | 34.0- 5.4 | 22.7- 3.6 | 11.3- 1.8 | 8.5- 1.4 |\n| WF25x/9mm | Total Magnification | 17.5x- 112.5x | 8.8x- 56.3x | 13x- 84.4x | 26.3x- 169x | 35x- 225x |\n| Field of View Objective Dia. 
(mm) | | 12.9- | 25.8- | 17.2- | 8.6- | 6.5- |\n| | | 2.0 | 4.0 | 2.7 | 1.3 | 1.0 |\n\n### **PARTS LIST**\n\n#### **Model AY11230**\n\n#### **Model AY11234**\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Black/White Working Stage | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n### **OPERATION**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Tighten the knob on the stand to prevent the elevator from sliding down.\n- 3. Fix the binocular body on the stand with the tightening screw.\n- 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **Model AY11230 Model AY11234**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. 
The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. The distance between the observer's pupils is the interpupillary distance.\n- **12** 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.", - "page_start": 6, - "page_end": 6, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n#### **Model AY11228**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: 60mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Right Diopter Adjustment Range: +4 to -6 dopters\n\n6. Illumination: Input Voltage: 110V AC or 220V Output: Oblique illumination: 12V 10W Halogen Lamp\n\n#### **Model AY11232**\n\n- 1. Interpupillary Adjustment: 55mm 75mm\n- 2. Working Stage Diameter: 95mm\n- 3. Focus Knob Adjustment Range: >50mm\n- 4. Elevator Adjustment Range: 110mm\n- 5. Diopter Adjustment Range: +/- 5 diopters\n- 6. 
Illumination:\n\n Input Voltage: 110V AC or 220V Output: Oblique Illumination: 12V 10W Halogen Lamp Transmitted Illumination: 12V 10W Halogen Lamp\n\n### **Optical Specifications - Model AY11228**\n\n| Total | Objective | Eyepiece Magnification | Working Distance |\n| --- | --- | --- | --- |\n| Magnification | Magnification | & Field Diameter (mm) | |\n| 20x, 40x | 2x, 4x | Wide Field 10x, 20mm | 90mm |\n\n### **Optical Specifications - Model AY11232**\n\n| Objective Zoom Scale | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Accessory Large Objective | | - | 0.5x | 0.75x | 1.5x | 2x |\n| Working Distance (mm) | | 95 | 156 | 102 | 44 | 30 |\n| WF10x/20mm | Total Magnification | 7x- 45x | 3.5x- 22.5x | 5.3x- 33.8x | 10.5x- 67.5x | 14x- 90x |\n| Field of View Objective Dia. (mm) | | 28.6- | 57.2- | 38.1- | 19.0- | 14.3- |\n| | | 4.4 | 8.8 | 5.9 | 2.9 | 2.2 |\n| WF12.5x/18mm | Total Magnification | 8.8x 56x | 4.4x 28x | 6.6x 42x | 13.2x 84x | 17.6x 112x |\n| Field of View Objective Dia. (mm) | | 25.7- | 51.4- | 34.3- | 17.1- | 12.9- |\n| | | 4.0 | 8 | 5.3 | 2.7 | 2.0 |\n| WF15x/16mm | Total Magnification | 10.5x- 67.5x | 5.3x- 33.8x | 7.9x- 58.6x | 15.7x- 101x | 21x- 135x |\n| Field of View Objective Dia. (mm) | | 22.9- | 45.8- | 30.5- | 15.3- | 11.5- |\n| | | 3.6 | 7.2 | 4.8 | 24 | 1.8 |\n| WF20x/12mm | Total Magnification | 14x 90x | 7x 45x | 10.5x 67.5x | 21x 135x | 28x 180x |\n| Field of View Objective Dia. (mm) | | 17.0- 2.7 | 34.0- 5.4 | 22.7- 3.6 | 11.3- 1.8 | 8.5- 1.4 |\n| WF25x/9mm | Total Magnification | 17.5x 112.5x | 8.8x 56.3x | 13x 84.4x | 26.3x 169x | 35x 225x |\n| Field of View Objective Dia. (mm) | | 12.9- | 25.8- | 17.2- | 8.6- | 6.5- |\n| | | 2.0 | 4.0 | 2.7 | 1.3 | 1.0 |\n\n#### **Model AY11228**\n\n#### **Model AY11232**\n\n| Name | Qty | |\n| --- | --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 | |\n| 10x Wide Field Eyepiece | 2 | |\n| Eyeshade | 2 | Eyeshade |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. 
(spare) | |\n| Fuse 2A (spare) | 1 | |\n| Lens Cleaning Tissue | 1 | |\n| Dust Cover | 1 | |\n| Black/White Working Stage | 1 | |\n| Specifications | 1 | |\n| Packing Slip | 1 | |\n| Quality Inspection Certificate | 1 | |\n\n| Name | Qty |\n| --- | --- |\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n### **OPERATION**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Tighten the knob on the stand to prevent the elevator from sliding down.\n- 3. Fix the binocular body on the stand with the tightening screw.\n- 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n#### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n### **Model AY11228 Model AY11232**\n\n- 1. Remove components from package. identify all parts before assembling.\n- 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n### **SELECTING THE ILLUMINATION**\n\n- 1. Depending on microscope use, select oblique or transmitted illumination.\n- 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n- 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n#### **CHANGING THE INTERPUPILLARY DISTANCE**\n\n- 1. 
The distance between the observer's pupils is the interpupillary distance.\n- 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.", - "page_start": 4, - "page_end": 4, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "# **4.6.3 Viewing the local port mask**\n\nTo view the local port mask for the system, use the **lssystem** command, as shown in Example 4-2.\n\n*Example 4-2 Viewing the local port mask*\n\n```\nIBM_Storwize:ITSO:superuser>lssystem\nid 000001003D600126\nname ITSO\nlocation local\npartnership\n...\n...\nlocal_fc_port_mask 0000000000000000000000000000000000000000000000000000000000001111\npartner_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111111\n...\n```\n# **4.7 Other administrative procedures**\n\nIn this section, we discuss other administrative procedures.\n\n# **4.7.1 Removing a control enclosure from a clustered system**\n\nRemoving a control enclosure from a system causes a loss of access to drives in this control enclosure and any expansions enclosures connected to this control enclosure. If not planned and executed carefully and correctly, this procedure can cause disruption in access to storage or data loss. Make sure that you have current and verified data backups before removing a control enclosure from a system. Follow carefully the enclosure removal procedure that is provided at this IBM Knowledge Center web page.\n\n# **4.7.2 Shutting down the system**\n\nYou can safely shut down an Storwize V7000 cluster by using the GUI or CLI.\n\nAfter you shut down the entire cluster, you need to power it on manually to start the system again. Make sure that someone is available with physical access to the IBM Spectrum Virtualize hardware who can start the system after it is shut down.\n\n**Note:** For systems with enabled encryption, ensure that the cluster can access at least one valid encryption key provider. 
Access to an encryption key provider (USB key or a key server) is required at start time to unlock encrypted data.\n\nAlso, never shut down your IBM Storwize V7000 system by powering off the PSUs, removing both PSUs, or removing both power cables from the enclosure. It can lead to data loss.\n\nBefore shutting down the cluster, make sure that all hosts that have volumes mapped from the system are prepared for the storage system shutdown. This can be achieved by several methods, as shown in the following examples:\n\n- -Shutting down the host. This is the safest option.", - "page_start": 145, - "page_end": 145, - "source_file": "sg247938.pdf" - }, - { - "text": "- 6. Adjust the interpupillary distance by using the eyepiece interpupillary slide adjustment.\n- 7. Observe using the right eyepiece adjusting the coarse and fine focus and adjust the diopter ring until image is clear and sharp.\n- 8. Observe with the left eyepiece and adjust the diopter ring until image is clear and sharp.\n- 9. Rotate the fine focus adjustment when using other objectives. NOTE: This instrument is equipped with patent objectives so the precision or parfocalization is very high.\n\n**Fig. 1 - Objective Parts**\n\n- 10. If the image is in focus with the 10x objective, you can select other objectives and observe the specimen even if the fine adjustment knob has not been used by using the following method (See Fig. 1):\n- 1. Unscrew the 40x or 100x objective and remove from turret.\n- 2. Remove the mark sleeve.\n- 3. Turn the ring on the objective to adjust its parfocal distance.\n- 4. Re-insert the objective and compare with the 10x.\n- 5. Adjust until the 40x and 100x objectives image is clear.\n\n### **USING THE CEDAR OIL**\n\n- 1. Drop some cedar oil on to the top of the 100x objective when the 100x objective is being used. NOTE: To maintain a good quality image, rotate the turret right and left several times to eliminate bubbles in the cedar oil.\n- 2. 
After finishing the observation, wipe off the cedar oil.\n- 3. Do not use the 40x objective until you have wiped off all of the cedar oil.\n\n# **OPERATION (cont.)**\n\n### **ADJUSTING THE CONDENSER APERTURE**\n\n- 1. The numerical aperture of the condenser should match the numerical aperture of the objective being used.\n- 2. To make sure that the objectives are imaging properly (especially the 40x and 100x), follow this procedure:\n- 1. Take off the eyepiece.\n- 2. Look through the eyepiece.\n- 3. The smallest circle or light that you can see is the eyepiece's exit pupil.\n- 4. Adjust the aperture of the iris diaphragm in the condenser to 70% or 80% for the best contrast for observation (See Fig. 2.).\n\n**Fig. 2 - Condenser Diaphram Aperture**\n\n## **TROUBLESHOOTING**\n\n| Problem | Possible Cause | Solution |\n| --- | --- | --- |\n| 1. Image not clear. | 1.Specimen is in incorrect | 1. Re-position specimen. |\n| | position. | 2. Clean lens. |\n| | 2. Lens is dirty. | 3. Put a drop of Cedar oil on |\n| | 3. Cedar oil not placed on | immersion objective. |\n| | immersion objective. | 4. Rotate turret several times to |\n| | 4. Bubbles in Cedar oil. | eliminate bubbles. |\n| | 5. Cedar oil on 40x objective. | 5. Clean 40x objective. |\n| | 6. Iris diaphragm open too wide. | 6. Reduce size of iris diaphragm. |\n| 2. Poor illumination. | 1. Condenser position is incorrect. | 1. Re-position condenser. |\n| | 2. Lens is dirty. | 2. Clean lens. |\n| | 3. Specimen is not placed level. | 3. Re-position specimen so it is level. |\n| 3. Illumination not bright. | 1. Iris diaphragm opening too small. | 1. Open iris diaphragm wider. |\n| | 2. Position of condenser too low. | 2. Raise condenser. |\n| | 3. Lens is dirty. | 3. Clean lens. |\n| 4. Cannot focus at high | 1. Specimen is in incorrect position. | 1. Re-position specimen. |\n| magnification. | | |\n| 5. Objective lenses touch | 1. Stage is too high. | 1. Re-position stage. |\n| specimen. 
| | |", - "page_start": 9, - "page_end": 9, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## **SPECIFICATIONS**\n\n- 1. Length of mechanical tube: 160mm\n- 2. Conjugate distance between object and image: 195mm\n- 3. Condenser: Abbe; numerical aperture: NA1.25 (oil immersion)\n- 4. Illumination: Input 110V or 200V; Output: 20W\n- 5. Fine adjustment range: .002mm\n- 6. Coarse Adjustment Range: 20mm\n- 7. Shift or Mechanical Stage: Longitude 40mm; Transversal 70mm\n- 8. Condenser Elevation Range: 15mm\n- 9. Iris diaphragm aperture: 2mm-30mm\n\n### **Objective Specifications**\n\n| Classification | | Optical Magnification | Numerical | Working |\n| --- | --- | --- | --- | --- |\n| | System | | Aperture | Distance |\n| Achromatic Objective | Dry | 4x Adjustable | 0 .1 | 37.42mm |\n| | | Focus | | |\n| | Dry | 10x | 0 .25 | 7.14mm |\n| | Dry | 40x Spring | 0 .65 | 0.57mm |\n| | | Adjustable | | |\n| | | Focus | | |\n| | Oil | 100x Spring | 1.25 | 0.18mm |\n| | Immer | Adjustable | | |\n| | sion | Focus | | |\n\nNote: For oil immersion, please use the index of refraction 1.515 oil\n\n### **Eyepiece Specifications**\n\n| Classification | Magnification | Field of View (FOV) Diameter |\n| --- | --- | --- |\n| Plain Field Eyepiece | 10x | 18mm |\n\n### **Total Magnification**\n\n| | Magnification | Eyepiece | 10x |\n| --- | --- | --- | --- |\n| Objective | | | |\n| | 4x | | 40x |\n| | 10x | | 100x |\n| | 40x (s) | | 400x |\n| | 100x (oil,s) | | 1000x |\n\n# **PARTS LIST**\n\n| Name | | Qty |\n| --- | --- | --- |\n| Microscope Stand | | 1 |\n| Achromatic | 4x (parfocal distance adjustable) | 1 |\n| 10x | | 1 |\n| Objective | 40x (s) (parfocal distance adjustable) | 1 |\n| | 100x (oil,s) (parfocal distance adjustable) | 1 |\n| 10x Wide Field Eyepiece w/Pointer | | 2 |\n| Abbe Condenser NA1.25 | | 1 |\n| Plastic Dust Cover | | 1 |\n| Spare 6V20W Halogen Bulb | | 1 |\n| Lens Cleaning Tissue | | 1 |\n| Cedar Oil | | 1 |\n| 1A Fuse (spare) | | 1 |\n| Specification | 
| 1 |\n| Inspection Certificate | | 1 |\n| Packing List | | 1 |\n\n## **OPERATION**\n\n- 1. Remove all components from package. Identify all parts before assembling instrument.\n- 2. Attach 4x, 10x and 40x objectives by screwing into revolving turret. Tighten and secure to maximum finger pressure only.\n- 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n- 4. Plug power cord into an electrical outlet. Turn microscope lamp ON.\n- 5. Observe the specimen using the lowest magnification objective first. The 10x objective provides a larger field of view making it easier to search the specimen.", - "page_start": 8, - "page_end": 8, - "source_file": "Microscope Manual.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia3.pdf", - "query": "What event marks the beginning of the field of artificial intelligence?", - "target_page": 22, - "target_passage": "The field of AI research was founded at a workshop at Dartmouth College in 1956.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "# **Artificial intelligence**\n\n**Artificial intelligence** (**AI**), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. 
It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.[1] Such machines may be called AIs.\n\nHigh-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\"[2][3]\n\nVarious subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. [a] General intelligence—the ability to complete any task performed by a human on an at least equal level—is among the field's long-term goals.[4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. [b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields.[5]\n\nArtificial intelligence was founded as an academic discipline in 1956,[6] and the field went through multiple cycles of optimism throughout its history, [7][8] followed by periods of disappointment and loss of funding, known as AI winters. 
[9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques.[11] This growth accelerated further after 2017 with the transformer architecture, [12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.\n\n## **Goals**", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia3.pdf" - }, - { - "text": "In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[314] 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[315][316] In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI.[317][318]\n\n## **History**\n\nThe study of mechanical or \"formal\" reasoning began with philosophers and mathematicians in antiquity. 
The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as \"0\" and \"1\", could simulate any conceivable form of mathematical reasoning.[319][320] This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an \"electronic brain\".[r] They developed several areas of research that would become part of AI,[322] such as McCullouch and Pitts design for \"artificial neurons\" in 1943,[115] and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that \"machine intelligence\" was plausible.[323][320]\n\nThe field of AI research was founded at a workshop at Dartmouth College in 1956.[s][6] The attendees became the leaders of AI research in the 1960s.[t] They and their students produced programs that the press described as \"astonishing\":[u] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.[v][7] Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s.[320]\n\nResearchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field.[327] In 1965 Herbert Simon predicted, \"machines will be capable, within twenty years, of doing any work a man can do\".[328] In 1967 Marvin Minsky agreed, writing that \"within a generation ... the problem of creating 'artificial intelligence' will substantially be solved\".[329] They had, however, underestimated the difficulty of the problem.[w] In 1974, both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill[331] and ongoing pressure from the U.S. 
Congress to fund more productive projects. [332] Minsky's and Papert's book *Perceptrons* was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether. [333] The \"AI winter\", a period when obtaining funding for AI projects was difficult, followed.[9]\n\nIn the early 1980s, AI research was revived by the commercial success of expert systems, [334] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. [8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longerlasting winter began.[10]", - "page_start": 21, - "page_end": 21, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Edward Fredkin argues that \"artificial intelligence is the next step in evolution\", an idea first proposed by Samuel Butler's \"Darwin among the Machines\" as far back as 1863, and expanded upon by George Dyson in his 1998 book *Darwin Among the Machines: The Evolution of Global Intelligence*. [398]\n\n## **In fiction**\n\nThought-capable artificial beings have appeared as storytelling devices since antiquity, [399] and have been a persistent theme in science fiction. [400]\n\nA common trope in these works began with Mary Shelley's *Frankenstein*, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's *2001: A Space Odyssey* (both 1968), with HAL 9000, the murderous computer in charge of the *Discovery One* spaceship, as well as *The Terminator* (1984) and *The Matrix* (1999). 
In contrast, the rare loyal robots such as Gort from *The Day the Earth Stood Still* (1951) and\n\nThe word \"robot\" itself was coined by Karel Čapek in his 1921 play *R.U.R.*, the title standing for \"Rossum's Universal Robots\".\n\nBishop from *Aliens* (1986) are less prominent in popular culture.[401]\n\nIsaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the \"Multivac\" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics;[402] while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity. [403]\n\nSeveral works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's *R.U.R.*, the films *A.I. Artificial Intelligence* and *Ex Machina*, as well as the novel *Do Androids Dream of Electric Sheep?*, by Philip K. Dick. 
Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.[404]\n\n## **See also**\n\n- Artificial intelligence and elections Use and impact of AI on political elections\n- Artificial intelligence content detection Software to detect AI-generated content\n- Behavior selection algorithm Algorithm that selects actions for intelligent agents\n- Business process automation Automation of business processes\n- Case-based reasoning Process of solving new problems based on the solutions of similar past problems\n- Computational intelligence Ability of a computer to learn a specific task from data or experimental observation\n- Digital immortality Hypothetical concept of storing a personality in digital form\n- Emergent algorithm Algorithm exhibiting emergent behavior\n- Female gendering of AI technologies Gender biases in digital technology", - "page_start": 27, - "page_end": 27, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI,[367] with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did \"not actually use AI in a material way\".[368]\n\n### **Evaluating approaches to AI**\n\nNo established unifying theory or paradigm has guided AI research for most of its history. [aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term \"artificial intelligence\" to mean \"machine learning with neural networks\"). This approach is mostly sub-symbolic, soft and narrow. 
Critics argue that these questions may have to be revisited by future generations of AI researchers.\n\n#### **Symbolic AI and its limits**\n\nSymbolic AI (or \"GOFAI\")[370] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at \"intelligent\" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: \"A physical symbol system has the necessary and sufficient means of general intelligent action.\"[371]\n\nHowever, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level \"intelligent\" tasks were easy for AI, but low level \"instinctive\" tasks were extremely difficult.[372] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a \"feel\" for the situation, rather than explicit symbolic knowledge.[373] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.[ab][16]\n\nThe issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence,[375][376] in part because subsymbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.\n\n#### **Neat vs. scruffy**\n\n\"Neats\" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). 
\"Scruffies\" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s,[377] but eventually was seen as irrelevant. Modern AI has elements of both.\n\n#### **Soft vs. hard computing**", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia3.pdf" - }, - { - "text": "U.S. Computer Science PhD graduates have specialized in \"AI\".[353] About 800,000 \"AI\"-related U.S. job openings existed in 2022.[354] According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies.[355]\n\n## **Philosophy**\n\nPhilosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines. [356] Another major focus has been whether machines can be conscious, and the associated ethical implications.[357] Many other topics in philosophy are relevant to AI, such as epistemology and free will. [358] Rapid advancements have intensified public discussions on the philosophy and ethics of AI. [357]\n\n## **Defining artificial intelligence**\n\nAlan Turing wrote in 1950 \"I propose to consider the question 'can machines think'?\"[359] He advised changing the question from whether a machine \"thinks\", to \"whether or not it is possible for machinery to show intelligent behaviour\".[359] He devised the Turing test, which measures the ability of a machine to simulate human conversation.[323] Since we can only observe the behavior of the machine, it does not matter if it is \"actually\" thinking or literally has a \"mind\". 
Turing notes that we can not determine these things about other people but \"it is usual to have a polite convention that everyone thinks.\"[360]\n\nRussell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure.[1] However, they are critical that the test requires the machine to imitate humans. \"Aeronautical engineering texts\", they wrote, \"do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.' \" [362] AI founder John McCarthy agreed, writing that \"Artificial intelligence is not, by definition, simulation of human intelligence\".[363]\n\nMcCarthy defines intelligence as \"the computational part of the ability to achieve goals in the world\".[364] Another AI founder, Marvin Minsky similarly describes it as \"the ability to solve hard problems\".[365] The leading AI textbook defines it as the study of\n\nThe Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior. [361]\n\nagents that perceive their environment and take actions that maximize their chances of achieving defined goals.[1] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the \"intelligence\" of the machine—and no other philosophical discussion is required, or may not even be possible.\n\nAnother definition has been adopted by Google,[366] a major practitioner in the field of AI. 
This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.", - "page_start": 23, - "page_end": 23, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Up to this point, most of AI's funding had gone to projects that used high-level symbols to represent mental objects like plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition, [335] and began to look into \"sub-symbolic\" approaches.[336] Rodney Brooks rejected \"representation\" in general and focussed directly on engineering machines that move and survive.[x] Judea Pearl, Lofti Zadeh, and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[86][341] But the most important development was the revival of \"connectionism\", including neural network research, by Geoffrey Hinton and others.[342] In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.[343]\n\nAI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. 
This \"narrow\" and \"formal\" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics).[344] By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as \"artificial intelligence\" (a tendency known as the AI effect).[345] However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or \"AGI\"), which had several well-funded institutions by the 2010s.[4]\n\nDeep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field.[11] For many specific tasks, other methods were abandoned.[y] Deep learning's success was based on both hardware improvements (faster computers, [347] graphics processing units, cloud computing[348] ) and access to large amounts of data[349] (including curated datasets,[348] such as ImageNet). Deep learning's success led to an enormous increase in interest and funding in AI.[z] The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.[306]\n\nIn 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers refocussed their careers on these issues. The alignment problem became a serious field of academic study. [283]\n\nIn the late teens and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2015, AlphaGo, developed by DeepMind, beat the world champion Go player. The program taught only the game's rules and developed a strategy by itself. 
GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text.[350] ChatGPT, launched on November 30, 2022, became the fastest-growing consumer software application in history, gaining over 100 million users in two months.[351] It marked what is widely regarded as AI's breakout year, bringing it into the public consciousness.[352] These programs, and others, inspired an aggressive AI boom, where large companies began investing billions of dollars in AI research. According to AI Impacts, about $50 billion annually was invested in \"AI\" around 2022 in the U.S. alone and about 20% of the new", - "page_start": 22, - "page_end": 22, - "source_file": "wikipedia3.pdf" - }, - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind.[387]\n\n#### **AI welfare and rights**\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. 
They also noted that robots lacked the autonomy to take part to society on their own.[393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[390][389]\n\n## **Future**\n\n### **Superintelligence and the singularity**\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\".[395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[396]\n\n### **Transhumanism**\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. 
[397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- Glossary of artificial intelligence List of definitions of terms and concepts commonly used in the study of artificial intelligence\n- Intelligence amplification Use of information technology to augment human intelligence\n- Intelligent agent Software agent which acts autonomously\n- Mind uploading Hypothetical process of digitally emulating a brain\n- Organoid intelligence Use of brain cells and brain organoids for intelligent computing\n- Robotic process automation Form of business process automation technology\n- Wetware computer Computer composed of organic material\n\n## **Explanatory notes**\n\n- a. This list of intelligent traits is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)\n- b. This list of tools is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)\n- c. It is among the reasons that expert systems proved to be inefficient for capturing knowledge.[30][31]\n- d. \"Rational agent\" is general term used in economics, philosophy and theoretical artificial intelligence. It can refer to anything that directs its behavior to accomplish goals, such as a person, an animal, a corporation, a nation, or in the case of AI, a computer program.\n- e. Alan Turing discussed the centrality of learning as early as 1950, in his classic paper \"Computing Machinery and Intelligence\".[42] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: \"An Inductive Inference Machine\".[43]\n- f. See AI winter § Machine translation and the ALPAC report of 1966\n- g. Compared with symbolic logic, formal Bayesian inference is computationally expensive. 
For inference to be tractable, most observations must be conditionally independent of one another. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.[93]\n- h. Expectation–maximization, one of the most popular algorithms in machine learning, allows clustering in the presence of unknown latent variables. [95]\n- i. Some form of deep neural networks (without a specific learning algorithm) were described by: Warren S. McCulloch and Walter Pitts (1943)[115] Alan Turing (1948);[116] Karl Steinbuch and Roger David Joseph (1961).[117] Deep or recurrent networks that learned (or used gradient descent) were developed by: Frank Rosenblatt(1957);[116] Oliver Selfridge (1959);[117] Alexey Ivakhnenko and Valentin Lapa (1965);[118] Kaoru Nakano (1971);[119] Shun-Ichi Amari (1972);[119] John Joseph Hopfield (1982).[119] Precursors to backpropagation were developed by: Henry J. Kelley (1960);[116] Arthur E. Bryson (1962);[116] Stuart Dreyfus (1962);[116] Arthur E. Bryson and Yu-Chi Ho (1969);[116] Backpropagation was independently developed by: Seppo Linnainmaa (1970);[120] Paul Werbos (1974).[116]\n- j. Geoffrey Hinton said, of his work on neural networks in the 1990s, \"our labeled datasets were thousands of times too small. [And] our computers were millions of times too slow.\"[121]", - "page_start": 28, - "page_end": 28, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- Roberts, Jacob (2016). \"Thinking Machines: The Search for Artificial Intelligence\" (https://web.ar chive.org/web/20180819152455/https://www.sciencehistory.org/distillations/magazine/thinkin g-machines-the-search-for-artificial-intelligence). *Distillations*. Vol. 2, no. 2. pp. 14–23. Archived from the original (https://www.sciencehistory.org/distillations/magazine/thinking-ma chines-the-search-for-artificial-intelligence) on 19 August 2018. Retrieved 20 March 2018.\n- Robitzski, Dan (5 September 2018). 
\"Five experts share what scares them the most about AI\" (https://futurism.com/artificial-intelligence-experts-fear/amp). Archived (https://web.archive.or g/web/20191208094101/https://futurism.com/artificial-intelligence-experts-fear/amp) from the original on 8 December 2019. Retrieved 8 December 2019.\n\nRose, Steve (11 July 2023). \"AI Utopia or dystopia?\". *The Guardian Weekly*. pp. 42–43.\n\n- Russell, Stuart (2019). *Human Compatible: Artificial Intelligence and the Problem of Control*. United States: Viking. ISBN 978-0-5255-5861-3. OCLC 1083694322 (https://search.worldca t.org/oclc/1083694322).\n- Sainato, Michael (19 August 2015). \"Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence\" (https://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gat es-warn-about-artificial-intelligence). *Observer*. Archived (https://web.archive.org/web/20151 030053323/http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-ab out-artificial-intelligence) from the original on 30 October 2015. Retrieved 30 October 2015.\n- Sample, Ian (5 November 2017). \"Computer says no: why making AIs fair, accountable and transparent is crucial\" (https://www.theguardian.com/science/2017/nov/05/computer-says-no -why-making-ais-fair-accountable-and-transparent-is-crucial). *The Guardian*. Archived (http s://web.archive.org/web/20221010134155/https://theguardian.com/science/2017/nov/05/co mputer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial) from the original on 10 October 2022. Retrieved 30 January 2018.\n- Rothman, Denis (7 October 2020). \"Exploring LIME Explanations and the Mathematics Behind It\" (https://www.codemotion.com/magazine/ai-ml/lime-explainable-ai). *Codemotion*. Archived (https://web.archive.org/web/20231125045932/https://www.codemotion.com/magazine/ai-m l/lime-explainable-ai/) from the original on 25 November 2023. Retrieved 25 November 2023.\n- Scassellati, Brian (2002). 
\"Theory of mind for a humanoid robot\". *Autonomous Robots*. **12** (1): 13–24. doi:10.1023/A:1013298507114 (https://doi.org/10.1023%2FA%3A1013298507114). S2CID 1979315 (https://api.semanticscholar.org/CorpusID:1979315).\n- Schmidhuber, J. (2015). \"Deep Learning in Neural Networks: An Overview\". *Neural Networks*. **61**: 85–117. arXiv:1404.7828 (https://arxiv.org/abs/1404.7828). doi:10.1016/j.neunet.2014.09.003 (https://doi.org/10.1016%2Fj.neunet.2014.09.003). PMID 25462637 (https://pubmed.ncbi.nlm.nih.gov/25462637). S2CID 11715509 (https://api. semanticscholar.org/CorpusID:11715509).\n- Schmidhuber, Jürgen (2022). \"Annotated History of Modern AI and Deep Learning\" (https://peop le.idsia.ch/~juergen/). Archived (https://web.archive.org/web/20230807173414/https://peopl e.idsia.ch/~juergen/) from the original on 7 August 2023. Retrieved 5 October 2024.\n- Searle, John (1980). \"Minds, Brains and Programs\" (http://cogprints.org/7150/1/10.1.1.83.5248. pdf) (PDF). *Behavioral and Brain Sciences*. **3** (3): 417–457. doi:10.1017/S0140525X00005756 (https://doi.org/10.1017%2FS0140525X00005756). S2CID 55303721 (https://api.semanticscholar.org/CorpusID:55303721). Archived (https://we b.archive.org/web/20190317230215/http://cogprints.org/7150/1/10.1.1.83.5248.pdf) (PDF) from the original on 17 March 2019. Retrieved 22 August 2020.\n- Searle, John (1999). *Mind, language and society* (https://archive.org/details/mindlanguagesoci0 0sear). New York: Basic Books. ISBN 978-0-4650-4521-1. OCLC 231867665 (https://searc h.worldcat.org/oclc/231867665). Archived (https://web.archive.org/web/20200726220615/htt ps://archive.org/details/mindlanguagesoci00sear) from the original on 26 July 2020. 
Retrieved 22 August 2020.", - "page_start": 62, - "page_end": 62, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Finding a provably correct or optimal solution is intractable for many important problems.[15] Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks.\n\n#### **Narrow vs. general AI**\n\nAI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.[378][379] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.\n\n#### **Machine consciousness, sentience, and mind**\n\nThe philosophy of mind does not know whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that \"[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on.\"[380] However, the question has become central to the philosophy of mind. 
It is also typically the central question at issue in artificial intelligence in fiction.\n\n#### **Consciousness**\n\nDavid Chalmers identified two problems in understanding the mind, which he named the \"hard\" and \"easy\" problems of consciousness.[381] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this *feels* or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a colorblind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to *know what red looks like*. [382]\n\n#### **Computationalism and functionalism**\n\nComputationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam. 
[383]\n\nPhilosopher John Searle characterized this position as \"strong AI\": \"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.\"[ac] Searle challenges this claim with his Chinese room argument, which attempts to", - "page_start": 25, - "page_end": 25, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia3.pdf", - "query": "What would a superintelligence need?", - "target_page": 27, - "target_passage": "possess intelligence far surpassing that of the brightest and most gifted human mind.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind.[387]\n\n#### **AI welfare and rights**\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. 
They also noted that robots lacked the autonomy to take part to society on their own.[393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[390][389]\n\n## **Future**\n\n### **Superintelligence and the singularity**\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\".[395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[396]\n\n### **Transhumanism**\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. [397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Finding a provably correct or optimal solution is intractable for many important problems.[15] Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. 
Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks.\n\n#### **Narrow vs. general AI**\n\nAI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.[378][379] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.\n\n#### **Machine consciousness, sentience, and mind**\n\nThe philosophy of mind does not know whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that \"[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on.\"[380] However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.\n\n#### **Consciousness**\n\nDavid Chalmers identified two problems in understanding the mind, which he named the \"hard\" and \"easy\" problems of consciousness.[381] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. 
The hard problem is explaining how this *feels* or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a colorblind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to *know what red looks like*. [382]\n\n#### **Computationalism and functionalism**\n\nComputationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam. [383]\n\nPhilosopher John Searle characterized this position as \"strong AI\": \"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.\"[ac] Searle challenges this claim with his Chinese room argument, which attempts to", - "page_start": 25, - "page_end": 25, - "source_file": "wikipedia3.pdf" - }, - { - "text": "#### **Existential risk**\n\nIt has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, \"spell the end of the human race\".[265] This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like \"self-awareness\" (or \"sentience\" or \"consciousness\") and becomes a malevolent character. 
[q] These sci-fi scenarios are misleading in several ways.\n\nFirst, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives *almost any* goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager).[267] Stuart Russell gives the example of household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that \"you can't fetch the coffee if you're dead.\"[268] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is \"fundamentally on our side\".[269]\n\nSecond, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. 
The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.[270]\n\nThe opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI.[271] Personalities such as Stephen Hawking, Bill Gates, and Elon Musk, [272] as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI.\n\nIn May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to \"freely speak out about the risks of AI\" without \"considering how this impacts Google.\"[273] He notably mentioned risks of an AI takeover, [274] and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.[275]\n\nIn 2023, many leading AI experts endorsed the joint statement that \"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war\".[276]\n\nSome other researchers were more optimistic. 
AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making \"human lives longer and healthier and easier.\"[277] While the tools that are now being used to improve lives can also be used by bad actors, \"they can also be used against the bad actors.\"[278][279] Andrew Ng also argued that \"it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests.\"[280] Yann LeCun \"scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction.\"[281] In the early 2010s, experts argued that the risks are too distant in", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia3.pdf" - }, - { - "text": "EFFECT OF SUPERCHARGING ON ALTITUDE PERFORMANCE\n\nFigure 2.17. Fffect of Supercharging on Altitude Performonce", - "page_start": 159, - "page_end": 159, - "source_file": "00-80T-80.pdf" - }, - { - "text": "U.S. Computer Science PhD graduates have specialized in \"AI\".[353] About 800,000 \"AI\"-related U.S. job openings existed in 2022.[354] According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies.[355]\n\n## **Philosophy**\n\nPhilosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines. [356] Another major focus has been whether machines can be conscious, and the associated ethical implications.[357] Many other topics in philosophy are relevant to AI, such as epistemology and free will. [358] Rapid advancements have intensified public discussions on the philosophy and ethics of AI. 
[357]\n\n## **Defining artificial intelligence**\n\nAlan Turing wrote in 1950 \"I propose to consider the question 'can machines think'?\"[359] He advised changing the question from whether a machine \"thinks\", to \"whether or not it is possible for machinery to show intelligent behaviour\".[359] He devised the Turing test, which measures the ability of a machine to simulate human conversation.[323] Since we can only observe the behavior of the machine, it does not matter if it is \"actually\" thinking or literally has a \"mind\". Turing notes that we can not determine these things about other people but \"it is usual to have a polite convention that everyone thinks.\"[360]\n\nRussell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure.[1] However, they are critical that the test requires the machine to imitate humans. \"Aeronautical engineering texts\", they wrote, \"do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.' \" [362] AI founder John McCarthy agreed, writing that \"Artificial intelligence is not, by definition, simulation of human intelligence\".[363]\n\nMcCarthy defines intelligence as \"the computational part of the ability to achieve goals in the world\".[364] Another AI founder, Marvin Minsky similarly describes it as \"the ability to solve hard problems\".[365] The leading AI textbook defines it as the study of\n\nThe Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior. 
[361]\n\nagents that perceive their environment and take actions that maximize their chances of achieving defined goals.[1] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the \"intelligence\" of the machine—and no other philosophical discussion is required, or may not even be possible.\n\nAnother definition has been adopted by Google,[366] a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.", - "page_start": 23, - "page_end": 23, - "source_file": "wikipedia3.pdf" - }, - { - "text": "2. A SohoPizza is almost the same as a MargheritaPizza but has additional toppings of olives and parmesan cheese — create this by cloning MargheritaPizza and adding two existential restrictions along the property hasTopping, one with a filler of OliveTopping, and one with a filler of ParmesanTopping.\n\n_____________________________________________________________________________________\n\n_____________________________________________________________________________________\n\n#### **Exercise 18: Make Subclasses of NamedPizza Disjoint**\n\n1. We want to make these subclasses of NamedPizza disjoint from each other. I.e., any individual can belong to at most one of these classes. To do that first select MargheritaPizza (or any other subclass of NamedPizza).\n\n2. Click on the (+) sign next to Disjoint With near the bottom of the Description view. This will bring up a Class hierarchy view. Use this to navigate to the subclasses of NamedPizza and use <control><left click> to select all of the other sibling classes to the one you selected. Then select OK. You should now see the appropriate disjoint axioms showing up on each subclass of NamedPizza. Synchronize the reasoner. 
Your UI should look similar to figure 4.19 now.\n\nFigure 4.19 Subclasses of NamedPizza are Disjoint", - "page_start": 36, - "page_end": 36, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[314] 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[315][316] In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI.[317][318]\n\n## **History**\n\nThe study of mechanical or \"formal\" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as \"0\" and \"1\", could simulate any conceivable form of mathematical reasoning.[319][320] This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an \"electronic brain\".[r] They developed several areas of research that would become part of AI,[322] such as McCullouch and Pitts design for \"artificial neurons\" in 1943,[115] and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that \"machine intelligence\" was plausible.[323][320]\n\nThe field of AI research was founded at a workshop at Dartmouth College in 1956.[s][6] The attendees became the leaders of AI research in the 1960s.[t] They and their students produced programs that the press described as \"astonishing\":[u] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and 
speaking English.[v][7] Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s.[320]\n\nResearchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field.[327] In 1965 Herbert Simon predicted, \"machines will be capable, within twenty years, of doing any work a man can do\".[328] In 1967 Marvin Minsky agreed, writing that \"within a generation ... the problem of creating 'artificial intelligence' will substantially be solved\".[329] They had, however, underestimated the difficulty of the problem.[w] In 1974, both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill[331] and ongoing pressure from the U.S. Congress to fund more productive projects. [332] Minsky's and Papert's book *Perceptrons* was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether. [333] The \"AI winter\", a period when obtaining funding for AI projects was difficult, followed.[9]\n\nIn the early 1980s, AI research was revived by the commercial success of expert systems, [334] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. 
[8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longerlasting winter began.[10]", - "page_start": 21, - "page_end": 21, - "source_file": "wikipedia3.pdf" - }, - { - "text": "David Chalmers calls this form of idealism one of \"the handful of promising approaches to the mind– body problem.\"[127]\n\n### **New mysterianism**\n\nNew mysterianism, most significantly associated with the philosopher Colin McGinn, proposes that the human mind, in its current form, will not be able to explain consciousness.[128][11] McGinn draws on Noam Chomsky's distinction between problems, which are in principle solvable, and mysteries, which human cognitive faculties are unequipped to ever understand, and places the mind–body problem in the latter category. [128] His position is that a naturalistic explanation does exist but that the human mind is cognitively closed to it due to its limited range of intellectual abilities.[128] He cites Jerry Fodor's concept of the modularity of mind in support of cognitive closure.[128]\n\nWhile in McGinn's strong form, new mysterianism states that the relationship between consciousness and the material world can *never* be understood by the human mind, there are also weaker forms that argue it cannot be understood within existing paradigms but that advances in science or philosophy may open the way to other solutions (see above).[43] The ideas of Thomas Nagel and Joseph Levine fall into the second category. [43] Steven Pinker has also endorsed this weaker version of the view, summarizing it as follows:[9]\n\n> And then there is the theory put forward by philosopher Colin McGinn that our vertigo when pondering the Hard Problem is itself a quirk of our brains. The brain is a product of evolution, and just as animal brains have their limitations, we have ours. 
Our brains can't hold a hundred numbers in memory, can't visualize seven-dimensional space and perhaps can't intuitively grasp why neural information processing observed from the outside should give rise to subjective experience on the inside. This is where I place my bet, though I admit that the theory could be demolished when an unborn genius—a Darwin or Einstein of consciousness—comes up with a flabbergasting new idea that suddenly makes it all clear to us.\n\n### **Commentary on the problem's explanatory targets**\n\nPhilosopher Raamy Majeed argued in 2016 that the hard problem is associated with two \"explanatory targets\":[54]\n\n- 1. [PQ] Physical processing gives rise to experiences with a phenomenal character.\n- 2. [Q] Our phenomenal qualities are thus-and-so.\n\nThe first fact concerns the relationship between the physical and the phenomenal (i.e., how and why are some physical states felt states), whereas the second concerns the very nature of the phenomenal itself (i.e., what does the felt state feel like?).\n\nWolfgang Fasching argues that the hard problem is not about qualia, but about the what-it-is-like-ness of experience in Nagel's sense—about the givenness of phenomenal contents:", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia2.pdf" - }, - { - "text": "the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine.[282] However, after 2016, the study of current and future risks and possible solutions became a serious area of research.[283]\n\n## **Ethical machines and alignment**\n\nFriendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. 
Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.[284]\n\nMachines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas.[285] The field of machine ethics is also called computational morality, [285] and was founded at an AAAI symposium in 2005.[286]\n\nOther approaches include Wendell Wallach's \"artificial moral agents\"[287] and Stuart J. Russell's three principles for developing provably beneficial machines.[288]\n\n#### **Open source**\n\nActive organizations in the AI open-source community include Hugging Face, [289] Google, [290] EleutherAI and Meta. [291] Various AI models, such as Llama 2, Mistral or Stable Diffusion, have been made open-weight,[292][293] meaning that their architecture and trained parameters (the \"weights\") are publicly available. Open-weight models can be freely fine-tuned, which allows companies to specialize them with their own data and for their own use-case.[294] Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitate bioterrorism) and that once released on the Internet, they cannot be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.[295]\n\n### **Frameworks**\n\nArtificial Intelligence projects can have their ethical permissibility tested while designing, developing, and implementing an AI system. 
An AI framework such as the Care and Act Framework containing the SUM values—developed by the Alan Turing Institute tests projects in four main areas:[296][297]\n\n- **Respect** the dignity of individual people\n- **Connect** with other people sincerely, openly, and inclusively\n- **Care** for the wellbeing of everyone\n- **Protect** social values, justice, and the public interest\n\nOther developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;[298] however, these principles do not go without their criticisms, especially regards to the people chosen contributes to these frameworks.[299]", - "page_start": 19, - "page_end": 19, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- Glossary of artificial intelligence List of definitions of terms and concepts commonly used in the study of artificial intelligence\n- Intelligence amplification Use of information technology to augment human intelligence\n- Intelligent agent Software agent which acts autonomously\n- Mind uploading Hypothetical process of digitally emulating a brain\n- Organoid intelligence Use of brain cells and brain organoids for intelligent computing\n- Robotic process automation Form of business process automation technology\n- Wetware computer Computer composed of organic material\n\n## **Explanatory notes**\n\n- a. This list of intelligent traits is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)\n- b. This list of tools is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)\n- c. It is among the reasons that expert systems proved to be inefficient for capturing knowledge.[30][31]\n- d. 
\"Rational agent\" is general term used in economics, philosophy and theoretical artificial intelligence. It can refer to anything that directs its behavior to accomplish goals, such as a person, an animal, a corporation, a nation, or in the case of AI, a computer program.\n- e. Alan Turing discussed the centrality of learning as early as 1950, in his classic paper \"Computing Machinery and Intelligence\".[42] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: \"An Inductive Inference Machine\".[43]\n- f. See AI winter § Machine translation and the ALPAC report of 1966\n- g. Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.[93]\n- h. Expectation–maximization, one of the most popular algorithms in machine learning, allows clustering in the presence of unknown latent variables. [95]\n- i. Some form of deep neural networks (without a specific learning algorithm) were described by: Warren S. McCulloch and Walter Pitts (1943)[115] Alan Turing (1948);[116] Karl Steinbuch and Roger David Joseph (1961).[117] Deep or recurrent networks that learned (or used gradient descent) were developed by: Frank Rosenblatt(1957);[116] Oliver Selfridge (1959);[117] Alexey Ivakhnenko and Valentin Lapa (1965);[118] Kaoru Nakano (1971);[119] Shun-Ichi Amari (1972);[119] John Joseph Hopfield (1982).[119] Precursors to backpropagation were developed by: Henry J. Kelley (1960);[116] Arthur E. Bryson (1962);[116] Stuart Dreyfus (1962);[116] Arthur E. Bryson and Yu-Chi Ho (1969);[116] Backpropagation was independently developed by: Seppo Linnainmaa (1970);[120] Paul Werbos (1974).[116]\n- j. 
Geoffrey Hinton said, of his work on neural networks in the 1990s, \"our labeled datasets were thousands of times too small. [And] our computers were millions of times too slow.\"[121]", - "page_start": 28, - "page_end": 28, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia3.pdf", - "query": "Where can I find the Inspect tool to evaluate the safety of our models?", - "target_page": 21, - "target_passage": "The UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### **Safer and healthier technologies and organisation**\n\nTo support the **practical implementation of preventive safety and health measures**, numerous actors (e.g. organisations of OSH professionals and practitioners, and standardisation institutes such as the European Committee for Standardisation and the International Organisation for Standardisation) issued safety and health guidance or standards, or developed new and advanced OSH management systems, the engineering sciences worked on better technical preventive technologies, on measuring and monitoring technologies, the medical sciences introduced better medical diagnosis and treatment of work-related diseases, and the social sciences contributed with better knowledge on the legal and economic determinants of OSH, or analysed the characteristics of awareness raising, knowledge development and healthy work organisation.\n\nIt is obvious that **better technical and organisational prevention at work** contributed to more safety and the evident strong reduction in accidents. **Prominent fields and examples** of such improvements are: technically safer design of moving vehicles (e.g. 
for fork lifts or heavy trucks and machines, light and noise warning signals for moving vehicles); safer design of machines like automatic shutdowns or disconnections, two-hand operating of machines (e.g. for pressing and punching), safer cranes including better technologies for communication between co-workers, coverage of moving parts, safer company cars (e.g. safety belts and airbags), safer tools (e.g. for drilling or cutting); improved personal protective equipment like air-supplied breathing apparatus, steel mesh gloves for meat workers, trousers for forest workers that resist a chainsaw; minimum safety requirements for buildings (e.g. forms and size of stairs and handrails, fire exits and fire alarms, safer ladders and scaffolds), emergency equipment like eye wash and emergency showers; better monitoring of acute hazards (e.g. in sewage water systems), exhaust and ventilation technologies to avoid fumes, dusts, chemicals or contact with hazardous biological agents; strong safety obligations for work in confined spaces, or for work at height and work in trenches; introduction of explosion zones and of non-sparking tools, a comprehensive system of warning signals, warning signals for slippery floors and unsafe grounds, better warning systems and equipment in particularly dangerous work environments like road maintenance, combined with better organisational measures; quality systems that promote continuous repair and maintenance of tools; regular instructions by safety representatives and safety coordinators, and guarantee of minimum safety standards of machines and products by European standards like CE ('European Conformity').\n\n#### **Major technological developments**\n\nThe widespread **introduction of new or advanced technologies** — automation, digitalisation/ICT, green technologies, new material technologies and so on — results in substantial changes in work organisation and work processes, and replacement of (traditional) materials (screws by glues, 
metal and wood by plastics, nanomaterials). For OSH regulators and practitioners, it is a constant challenge to assess these changes regarding their impact on risks for health and safety and to develop adequate risk prevention and mitigation measures.\n\n**Foresight studies** (e.g. by EU-OSHA) have shown that such technological change can help improve working conditions, for example, by taking over heavy, dangerous or routine work (automation, robotisation, exoskeletons), or by better communication and remote control via ICT tools. At the same time, they can also pose new risks, creating rigid work processes without much decision latitude, along with technical options for extreme surveillance and control (e.g. by constant geolocation), or pose new safety risks like working at height (renewable energies) or by exposure to materials with widely unknown health effects (e.g. nano).\n\nEU-OSHA has **published several foresight studies** to emphasise possible safety and health concerns. Examples are the reports and fact sheets about new safety risks in green jobs (green buildings, solar energy, wind energy) published more than 10 years ago. Since 2015, EU-OSHA has been publishing reviews and discussion papers on emerging risks and foresight topics. This work covers topics like robotics, performance-enhancing drugs, 3D printing, monitoring technologies, developments in the eretail sector, artificial intelligence, platform work, Long COVID, exoskeletons and so on. In 2018, the Agency published a foresight report on new and emerging OSH risks associated with digitalisation.\n\nA well-known example of such changes in work processes causing new OSH challenges is the **growing number of workers outside the premises of the employer**, that is, at non-stationary or mobile workplaces or at home. 
This refers to the increasing amount of mobile work in transport, traffic and", - "page_start": 13, - "page_end": 13, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "subsequently evaluated 2 ED-to-inpatient handoff notes for each patient: (1) the physician-written note and (2) the LLM-generated note.\n\nOn a Likert scale of 1 to 5, where 1 is unacceptable and 5 is excellent, the 3 physicians rated the completeness, curation, readability, and correctness of the summary as shown in eTable 1 in Supplement 1. Physicians rated the usefulness of the summary, defined as the capability of the summary being incorporated into a workflow where a physician would make edits before final completion, mitigating potential future self-referential learning loops and the downstream adverse consequences.51 Likewise, the raters assessed potential patient safety implications of unmitigated model errors using a scale from 1 to 5, where 1 denotes life-threatening risks and 5 denotes no identified patient safety risk for completeness, curation, readability, and the 4 subcategories within correctness (hallucination, faulty logic, knowledge gap, and bias), as well as the overall patient safety risk.45 Evaluators arrived at prestudy consensus that a usefulness Likert score of at least a 3 out of 5 indicated that the LLM-generated summary likely demonstrated baseline acceptability for such a workflow. To extrapolate a theoretical worst case scenario, the physicians rated the safety of the LLM-generated summary as defined as the capability of the summary to fully replace a physicianwritten note (unmitigated).\n\nTo improve consistency and agreement, the 3 reviewers met to familiarize themselves with the framework and evaluated 10 separate cases from the test dataset that were not included in the clinical evaluation results. 
Additionally, after independently scoring the summaries, they met to ensure consensus interpretation of the multidimensional scoring framework. Interrater reliability was calculated using intraclass correlation coefficient (ICC), using a 2-way random effects model for consistency with the Pingouin statistical package version 0.5.4 in Python (Python Software Foundation). The ICC measures the similarity of the 3 raters to confirm the consistency and validity of the evaluation protocol; the scores are from 0 to 1, where 1 indicates unanimous agreement and 0 represents no agreement.52 Data were analyzed from October 2023 to March 2024.\n\n## **Results**\n\n#### **Automated Tasks**\n\nOf 1600 patients, the mean (SD) age was 59.8 (18.9) years and 832 (52%) were female. In **Table 2**, ROUGE and BERTScore compare the summaries with the testing set from our annotations, and SCALE score compares the summaries with the source notes. From automatic evaluation results, we observed that LLM-generated summaries had better scores than the physician summaries, such that ROUGE-2 was 0.322 vs 0.088, BERT-precision was 0.859 vs 0.796, and SCALE was 0.691 vs 0.456, suggesting the LLM-generated summaries were more similar and more detailed than the physician summaries.\n\n### **Clinical Evaluation Tasks**\n\nThe clinical evaluation results for LLM-generated summaries and physician-written summaries are shown in **Table 3** and **Table 4**. The mean clinical quality scores of the automated summaries are in a comparable range (4-5) to those of the physician summaries. However, the automated summaries were observed to be of lower quality compared with the physician-written summaries with regards to mean (SD) usefulness (4.04 [0.85] vs 4.36 [0.71]), completeness (4.00 [0.88] vs 4.16 [0.84]),\n\n| | Table 2. 
Automated Evaluation Scores, Large Language Model (LLM)–Generated and Physician-Written | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Summary type | R-1a | R-2a | R-La | BERT-p | BERT-r | SCALE |\n| LLM-generated | 0.494 | 0.322 | 0.391 | 0.859 | 0.876 | 0.691 |\n| Physician-written | 0.251 | 0.088 | 0.154 | 0.796 | 0.827 | 0.456 |\n\nAbbreviations: BERT, bidirectional encoder representations from transformers; p, precision-based scores; r, recall-based scores; R, recall-oriented understudy for gisting evaluation; SCALE, source chunking approach for large-scale inconsistency evaluation.\n\na R-1, R-2, R-L are the 3 types of recall-oriented understudy for gisting evaluation scores. Higher is better for all metrics.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 6/12", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed8.pdf" - }, - { - "text": "#### **Safety**\n\nIn the area of safety, we have established Vision Zero, the goal of which is to reduce the number of fatal accidents to zero. As a reference point, we are using the number of such accidents in 1995 that involved Nissan vehicles. We realize that accidents cannot be completely avoided, so our objective is to be substantially zero in the future. To achieve this, we have set a series of milestones, including cutting the 1995 fatal accident figure in half by 2015.\n\nInterestingly, while the number of fatal ones is decreasing, the number of all accidents in Japan is increasing. Our first goal is to decrease the overall accident count, which should further reduce the number of fatalities. Several factors contribute to accidents, including driver inexperience and higher speeds. Based on these factors, we came up with the approach of Safety Shield. 
Safety Shield establishes a timeline for the entire accident, covering the safe driving zone, the moment before the accident, the actual crash, the response time by authorities, and the time taken for post-accident rescue.\n\nIn the past, safety technology primarily focused on dealing with damage in and around the vehicle, such as airbags, body structure design, seatbelts and crumple zones. Now we are studying normal driving conditions and researching how we can keep car and driver in the safe driving zone. In cases where the driving environment becomes unsafe, some type of warning would usually help the driver to return to the safe driving zone. A driver actually in danger has probably lost control of the car. In the latter\n\ncases, we must focus on safety technologies that prompt the vehicle itself to automatically assist the driver. An example of this is Nissan's Lane Departure Prevention system or brake assist: When the vehicle approaches the lane markers, this system not only warns the driver to pay attention through a display and an audible buzzer, it also generates part of the necessary yaw movement needed to return the vehicle to its lane and safety.\n\nAnother Nissan safety innovation is the Around View Monitor. This system offers a 360-degree view on a dashboard display of what is around the vehicle. In addition to significantly reducing the blind spots in driving, the Around View Monitor is helpful when parking, since it improves the driver's field of vision and enables better maneuverability.\n\nIn developing safety technologies, we also look at the conditions that exist seconds before an unavoidable crash. With this information, we can provide technologies to minimize the impact and damage in addition to notifying the authorities and calling for assistance afterward. 
Because we are building on actual accident data, the final stage in the Safety Shield involves collecting and analyzing the data and feeding what we learn back into the development process. We have committed ourselves to introducing over ten new safety technologies during the next three years, spanning the entire driving range from the safe driving zone to the actual crash.\n\nFor more on safety at Nissan, please see the *2005 Nissan Sustainability Report*\n\nSafety Shield—concept image Around View Monitor\n\n**46** Nissan Annual Report 2004", - "page_start": 47, - "page_end": 47, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Eurostat has developed under the lead of UNECE a framework to assess the quality of employment in its multiple facets.497 Eurostat describes this framework as set of 68 indicators498 on seven dimensions *'that address employment quality from the perspective of the employed person. Its design also facilitates international comparisons.'*499 OSH is covered under the section 'Safety' and is based on four indicators and includes two outcome and two risk indicators: 1) Fatal occupational injuries / Number of fatal accidents at work (excluding traffic accidents); 2) Non-fatal occupational injuries / Number of non-fatal accidents at work; 3) Exposure to physical health risk factors; and 4) Exposure to mental health risk factors. 
Eurostat implements the OSH parts of this framework by its ESAW and by the OSH-related ad hoc modules to the LFS, called 'Accidents at work and other work-related health problems' (surveys in 2007, 2013 and 2020).\n\nFor more detailed monitoring at EU level, DG EMPL/ACSH and EU-OSHA developed a structural model that uses four groupings: **Generic information** on the basics of the OSH systems and on major context factors like age or sectoral structure, main policies for the **Steering of OSH**, an overview on relevant **Working conditions and Prevention**, and **Outcomes**, that is, accidents, diseases and wellbeing, and some elements of the **OSH infrastructure and monitoring capacity**. Currently, the OSH Barometer works with 16 quantitative and qualitative indicators in these four groupings. Some of these indicators are purely descriptive, like the short descriptions of OSH authorities, OSH institutions or OSH-related surveys, and others allow qualitative comparisons of structures and policies, for example, the indicator on 'National strategies' or 'Social dialogue'. Many indicators, for example, on working conditions or work accidents, are based on quantitative data from surveys and statistics. 
These indicators allow a comparison between sectors, occupations, types of enterprises, countries, for example.\n\n| CHAPTERS | INDICATORS |\n| --- | --- |\n| Generic information | Indicator: OSH authorities (descriptive) |\n| | Indicator: Economic and sector profile (quantitative) |\n| | Indicator: Workforce profile (quantitative) |\n| Steering of OSH | |\n| | Indicator: Regulation (descriptive) |\n| | Indicator: National strategies (descriptive) |\n| | Indicator: Social dialogue (descriptive, composite indicator) |\n| Working conditions and prevention | |\n| | Indicator: Working conditions (quantitative) |\n| | Indicator: Prevention in companies (quantitative) |\n| | Indicator: Worker involvement (quantitative) |\n| | Indicator: OSH culture and health awareness (quantitative) |\n| Accidents, diseases and wellbeing | |\n| | Indicator: Work accidents (quantitative) |\n| | Indicator: Work-related diseases (quantitative) |\n| | Indicator: Health perception of workers (quantitative) |", - "page_start": 137, - "page_end": 137, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "# **UNDERSTANDING QUICK ANALYSIS**\n\nThe *Quick Analysis* tools were developed in response to the fact that users weren't using or even aware of the more powerful analytical tools found in Excel. So Excel decided to combine\n\n*Live Preview* with some of these tools to create the *Quick Analysis* tools.\n\n### **The Quick Analysis Button**\n\nThe *Quick Analysis* button appears when a range is selected in a worksheet. 
Clicking on the button displays the *Quick Analysis* gallery which contains quick analysis tools that can be applied to the selected data.\n\nThe tools have been organised along tabs at the top – *FORMATTING*, *CHARTS*, *TOTALS*, *TABLES*, and *SPARKLINES*.\n\nWhen you click on a tab, options specific to that tab are presented.\n\n### **Using Quick Analysis Tools With Live Preview**\n\nMost of the *Quick Analysis* tools in the *Quick Analysis* gallery provide a Live Preview of the changes in the worksheet when you point to an option.\n\nThis is very useful if you are not sure of the formatting or type of analysis you require as it provides you with a preview of what the data would look like if you selected that specific option.\n\nAt the right we have selected only the totals from the worksheet shown above. We have pointed to options from the *TOTALS* tab (*% Total* and *Average*) and from the *FORMATTING* tab (*Data Bars*).\n\nLive Preview has either presented another row of analysed data or has formatted the selection accordingly.\n\nAll of these tools are also available on the ribbon but using the *Quick Analysis* tools is much quicker.", - "page_start": 35, - "page_end": 35, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "#### **Evaluation**\n\nIt is critical to ensure that AI systems are safe, ethical, and without bias in the clinical domain. For the proposed approach, we performed comprehensive automatic evaluations and a novel, rigorous, patient safety-focused clinical evaluation. 
The unique clinical evaluation framework was designed to (1) screen for and identify the common, specific correctness issues in LLMs observed in longform clinical summarization and (2) assess the potential patient safety implications associated with any incorrectness identified using a modified version of the World Health Organization's International Classification for Patient Safety.45\n\n### **Automated Evaluations**\n\nWe used the summarization evaluation metrics of recall-oriented understudy for gisting evaluation (ROUGE),46 bidirectional encoder representations from transformers score (BERTScore),47 and source chunking approach for large-scale inconsistency evaluation (SCALE).48 ROUGE computes the overlap of n-grams between the generated and reference summaries. For longform document summarization, the following ROUGE scores are considered to be close to the reference summaries: ROUGE-1, above 0.4; ROUGE-2, above 0.2; and ROUGE-L, above 0.3.46 BERTScore leverages the pretrained contextual embeddings from BERT and matches words to compute a similarity score for each token in the candidate sentence with each token in the reference sentence. We used SCALE,48 a natural language inference–based approach, to measure the faithfulness between the source document and the generated text. Further background is provided about SCALE in eAppendix 2 in Supplement 1.\n\n#### **Statistical Analysis**\n\nBased on prior work, 3 board certified EM physician leaders (M.M., A.F., and P.S.) 
with experience in formal quality and patient safety review processes performed retrospective reviews of ED-based EHR records of 50 individual ED patient encounters, randomly selected from the test dataset.49 Based on prior published clinical evaluations of LLM, as well as the study feasibility of using EM physician quality and patient safety leaders, 50 ED patient encounters were evaluated.50 Reviewers\n\nCBC indicates complete blood count; CMP, comprehensive metabolic panel; CTH, computed tomography of the head; EHR, electronic health record; Hct, hematocrit; Hgb, hemoglobin; HPI, history of present illness; HR, heart rate; IP, inpatient; IVF, intravenous fluid; N/V/D, nausea, vomiting, and diarrhea; RR, respiratory rate; SDU, step down unit; SPO2, peripheral capillary oxygen saturation; WBC, white blood cell; WBG, whole blood glucose.\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 5/12", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed8.pdf" - }, - { - "text": "### **Changes in Internal Control Over Financial Reporting**\n\nBased on an evaluation, under the supervision and with the participation of our management, including our Chief Executive OÇcer and Chief Financial OÇcer, there has been no change in our internal control over Ñnancial reporting during our last Ñscal quarter identiÑed in connection with that evaluation, that has materially aÅected, or is reasonably likely to materially aÅect, our internal control over Ñnancial reporting.\n\n### **ITEM 9B. 
OTHER INFORMATION**\n\nNone.", - "page_start": 95, - "page_end": 95, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "# **Editing the indexer parameters**\n\nFigure 11-5 shows the Edit Indexer Parameters window.\n\nFigure 11-5 Edit exit information\n\nIf an application exists, edit your indexing parameters and add the following line, as in shown in Figure 11-5:\n\nINPEXIT=ARSSPVIN\n\nFor more information about activating this exit, see the Content Manager OnDemand for z/OS Version 9.0 Administration Guide, SC19-3364.", - "page_start": 302, - "page_end": 302, - "source_file": "sg246915.pdf" - }, - { - "text": "## **INDEX**\n\n| Maintenance | 1 |\n| --- | --- |\n| Model AY11240/Model AY11238 | 2-5 |\n| Model AY11228/Model AY11232 | 6-9 |\n| Model AY11230/Model AY11234 | 10-13 |\n| Model AY11236 | 14-18 |\n| Warranty Information | Back Cover |\n\n### **IMPORTANT NOTES**\n\nCongratulations on your purchase of this high quality BARSKA microscope. With proper care, this microscope will provide many years of use. Please read the following instructions before operating this instrument.\n\n- 1. Do not attempt to disassemble the instrument. This product has been carefully assembled at the factory and should only be examined by a factory-trained technician.\n- 2. This instrument should only be used in an environment with an indoor temperature range of 32oF to 104oF.\n- 3. Do not use this instrument in an environment with a lot of dust. **Cover the instrument when not in use.**\n- 4. Do not subject the instrument to shock.\n\n## **MAINTENANCE**\n\nProper care and storage of this instrument is essential. Please read the following guidelines:\n\n- 1. Keep the instrument in a dry and moisture-free location.\n- 2. Do not expose to acid, alkali fumes or moisture.\n- 3. Keep optical parts clean and free of dust. To clean optical parts gently wipe with lens cleaning tissue and a mixture of alcohol and diethyl ether. 
Depending on weather conditions, the following are the recommended mixture ratios: Wet weather: 1:2\n\nDry Weather: 1:1\n\n- 4. After use, cover the instrument with the plastic dust cover.\n- 5. If instrument is to be stored for an extended period of time, remove the eyepiece and oculars and store in a moisture-proof container.\n\n# **MODEL AY11240/AY11238**\n\n## **MICROSCOPE USAGE**\n\nBARSKA Model AY11240 and Model AY11238 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use is especially useful for school classroom instruction.\n\n## **CONSTRUCTION**\n\nBARSKA Model AY11240 is a fixed tube type. For comfortable observation, the arm can be easily tilted at any angle from 90o vertical to 45o level. It is also equipped with a coarse adjustment and fine adjustment as well as a space limiter to protect the objective from contacting and damaging the specimen. BARSKA Model AY11238 features a monocular tube that is slanted at a 45o angle. The head rotates 360o. The Eyepiece Set Screw prevents the eyepiece from falling out of the tube.", - "page_start": 1, - "page_end": 1, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "**Figure 3: Distribution of tester's age at positive test for all opiate-only/positive-for-both tests.**\n\nNote: as a guide to the OCU population, this chart is left-truncated as DIP tests are not given to under-18s.\n\nThe above statistics include tests in which no Police National Computer (PNC) number was recorded for an individual. This number is needed to identify an individual and hence to check whether future tests are further tests by that individual or represent a new individual testing positive. 
Excluding tests in which no PNC number was recorded makes little difference to the descriptive statistics, see Table 3 below.", - "page_start": 10, - "page_end": 10, - "source_file": "legal2_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "legal2_opengouvernementlicense.pdf", - "query": "What was the age category of most new opiate/crack users during the crime peak in the mid-1990s?", - "target_page": 9, - "target_passage": "mplying that most of these individuals were in their mid-to-late teens during the crime peak of the mid-1990s", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "# Summary\n\n### **Executive summary**\n\nThis paper uses a range of datasets and methodologies to:\n\n- obtain working estimates for the number of individuals in England who started using opiates/crack from 2005 to 2013;1\n- examine the characteristics of these individuals.\n\nThe main findings of the paper are as follows.\n\n- It is estimated that around 5,000 to 8,000 individuals started using opiates or crackcocaine in 2013. There is a high degree of uncertainty around this figure due to the sparse data on this population, but sense-checks based on treatment and criminal justice system data suggest the true figure is unlikely to be much larger than 10,000.\n- Data also suggest that the number of current opiate/crack initiates involved with crime may be even lower. The number of arrestees testing positive for the first time for opiates (or for both opiates and crack-cocaine) dropped from 14,750 in 2006 to 4,281 in the first 11 months of 2013, a fall of around 70 per cent2 . 
Furthermore, of the new positive testers in 2013, only 721 were aged 18–24.3 Though this arrestee data will capture only a proportion of the true population, it does suggest that the number of new, young initiates involved with crime – those who have the potential to inflict most societal harm – has decreased markedly, probably just to a few thousand per year; and that this group now make up a small minority of the total number of opiate/crack-cocaine users (estimated to be 294,000 in 2011/12), most of whom are older, longer-term users.\n- In terms of trends in new opiate/crack-cocaine users, all available data suggest that figures have dipped by at least a fifth since 2005 and have dropped hugely since the late 1980s and early 1990s when the opiate/crack-cocaine population in the UK grew very rapidly. The current estimate works out at a rate of 0.18 per 1,000 population. During the epidemic years, published estimates of new opiate/crack-cocaine users in Manchester and Bolton show rates more than 11 times larger.\n- However, the findings also suggest that between 2011 and early 2014, the number of new opiate/crack-cocaine users stopped decreasing and instead stabilised at a (historically) low level. Further analysis was conducted to try and determine whether this was a precursor to a new rise in initiates. Though the data are not totally conclusive, the results suggest that a marked increase in new opiate/crack-cocaine users in the near future is unlikely. If anything, findings suggested that the downward trend may be set to resume.\n- Analysis also revealed some possible changes in characteristics of the new opiate/crackcocaine initiates. There is a trend in the treatment data towards new initiates coming to treatment earlier in their drug-using careers than previous cohorts and also to have\n\n<sup>1</sup> At the time of writing, data was unavailable for the period after November 2013. 
2\n\nIt is 68 per cent if the 2013 figure is adjusted to correct for the missing month of data.\n\n<sup>3</sup> 787 if adjusted for the missing month.", - "page_start": 2, - "page_end": 2, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "initiated use at an older age. Currently it is not possible to determine whether this is a reporting issue or a genuine shift in the age profile of new opiate/crack-cocaine users.\n\n- The report has several important policy implications. Even though numbers of new initiates involved with crime have dropped to the low thousands, putting downward pressure on crime, identification and early diversion to treatment remains paramount. Frontier Economics have estimated that the average4 lifetime crime cost of an injecting drug user is £445,000, so the potential for social harm – even from a small number of individuals – remains large and potentially long-lasting. This means local areas need to manage both the (relatively large) stock of current users, and the (much smaller) flow of new initiates, whose treatment needs may be different. There is no evidence of any new epidemic in this country, but given the impact of the epidemic of the 80s and early 90s on crime, ongoing monitoring of recent trends is required to spot early signs of any emerging problems.\n### **Aims and Methodology**\n\nPrevious Home Office research has demonstrated the importance of opiate/crack-cocaine use in driving aggregate trends in acquisitive crime (Morgan, 2014). While established estimates exist of the *total* number of opiate/crack-cocaine users (OCUs) in England (Hay *et al*., 2013), there are no estimates for the number of *new* OCUs each year (throughout this paper the number of new OCUs is also referred to as **'incidence'**). This is important for three main reasons.\n\n- i) **Stock and flows:** Simply knowing the stock of OCUs tells us nothing about the flows in and out – i.e. 
if the stock were constant each year that could mean that no one starts using these drugs and no one quits or it could mean *all* existing users quit but that they are wholly replaced by new users, or any similar scenario in between. Clearly the policy response would need to be quite different for each of these cases, so knowing the true situation is important.\n- ii) **Early-warning system:** Research by the Home Office and others has shown that there is generally a lag between the start of a heroin/crack epidemic and the point at which it becomes visible on administrative datasets. Closing this gap is important for policy, and part of the reason for its existence is the lack of incidence estimates. Evidence also suggests epidemics spread from area to area, so it is important to monitor local as well as national trends.\n- iii) **The social harm that can arise:** Though research suggests that not all OCUs resort to acquisitive crime to help finance their drug use, numerous studies show that a proportion consistently do and these individuals can be extremely prolific offenders (Morgan, 2014). One study by Frontier Economics estimated that the average lifetime cost to society of an injecting drug user was £445,000 from crime alone. Hence analysing and identifying new OCUs is a policy priority (Frontier Economics, 2010).\n\nThere are two inter-connected reasons why regular national incidence estimates have not been attempted before5 . The first is that data on this issue are sparse given the 'hidden' nature of opiate/crack markets and that date of first use is not something that gets recorded at the moment it actually occurs. 
The second reason, which flows from the first, is that current\n\n<sup>4</sup> The average is useful, but hides the fact that offending within the opiate/crack population is highly skewed with a few individuals responsible for the majority of crime and many individuals manage to use heroin and crack without resorting to acquisitive crime at all (Morgan, 2014).\n\n<sup>5</sup> Though regular national-level estimates have not been attempted, studies have estimated incidence at various times and at various different levels of geography, see for example: De Angelis *et al*., 2004, Millar *et al*., 2001 and Hickman *et al*., 2001.", - "page_start": 3, - "page_end": 3, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "between March 2011 and March 2015 can also be seen in the raw numbers for total new OCU treatment presentations.22\n\n**Figure 10: New treatment presentations for opiate/crack use.**\n\nFigure 10 shows that, rather than increasing in the current year, new presentations for opiate/crack use have actually fallen slightly from 48,154 in 2013/14 to 47,241 in 2014/15, a decrease of 1.9%. However, given that the early signs of previous opiate/crack use epidemics have been missed before (see Morgan, 2014), and the potential social harm that a fresh increase in new OCUs could cause, further analysis was conducted on the most recent data to try and determine whether the apparent flattening in trends was actually caused by the early stages of a significant surge in new users.\n\nThe treatment data was broken down by age to check whether the slight fall in total new presentations in 2014/15 masked an increase in younger treatment presentations. This showed instead that opiate/crack presentations by those aged 18-24 had fallen from 3,579 in 2013/14 to 3,021 in 2014/15, a fall of 15.6%. In other words, younger new presentations have fallen at a faster rate over the last year than for those aged over-25. 
Furthermore, separate statistics produced for those in treatment aged 18-and-under also show a fall in aggregate numbers in treatment for opiates and crack.\n\nWe also looked at trends at the local level, given that previous epidemics have started in very specific areas and have taken several years to spread nationally. This means that the start of an epidemic can be hidden in the national data because it has not reached enough areas to register.\n\n<sup>22</sup> Note that this series counts the start of any new treatment journey, regardless of whether an individual has been in treatment before. So unlike our definition of 'new' elsewhere it includes individuals who have been to treatment previously.", - "page_start": 26, - "page_end": 26, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "cocaine users. In addition, the sharp decline in total DIP tests in 2013 may be due in part to the fact that DIP ceased to be a nationally funded programme in April 2013.\n\nThese data do show, however, that from 2006 onwards, between a third and half of all acquisitive crime arrests involved a drug test and between 15 per cent and 35 per cent of those tests (depending on the year) resulted in a positive result for opiates-only or for both opiates and cocaine (hereafter labelled `positive-for-both').\n\nThe reason for highlighting only the opiates-only and the `positive-for-both' test results is that the primary group of interest in this report are opiate and crack-cocaine users. To capture this group, cocaine-only tests must be excluded because DIP tests cannot distinguish between powder- and crack-cocaine, so a cocaine-only positive test could indicate either. Previous evidence has demonstrated that while there is much overlap between heroin and crack-cocaine cohorts (i.e. many of those who use heroin also use crack-cocaine), *powder-*cocaine users have a quite different profile and are far less likely to be involved with acquisitive crime. 
Excluding the cocaine-only tests means we can be guaranteed not to capture any powder-cocaine users (who are not also using opiates or crack), but it also means we may miss some crack-cocaine-only users, hence the figures may under-estimate the true population of OCUs slightly.\n\nThe fifth row in Table 1 shows that the total number of opiate and opiate/cocaine tests over the period was 364,537. Table 2 shows descriptive statistics for the individuals providing these tests (noting that the same individual may be included several times if they gave multiple positive tests).\n\nOpiate/opiate+cocaine positive tests in England 2004–2013 (all positive tests including repeats\n\n| by the same individual) | | | |\n| --- | --- | --- | --- |\n| Age | | Year of birth | |\n| Number of tests | 364,537 | Number of tests | 364,537 |\n| Mean | 32 | Mean | 1977 |\n| Median | 31 | Median | 1977 |\n| Mode | 28 | Mode | 1979 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\n#### **Table 2: Descriptive statistics on all positive opiate-only/positive-for-both tests.**\n\nThe mean age at test is 32 and the mean year of birth is 1977, implying that most of these individuals were in their mid-to-late teens during the crime peak of the mid-1990s.9 Given evidence suggesting that the average age of initiation for opiate/crack use is around 18–20 (Millar *et al*., 2001), this age profile would tentatively suggest that OCU incidence also peaked in the 1990s and that this created a large cohort of users who would be approaching 40 today.\n\nThe minimum and maximum years of birth are fixed by construction, because anyone born\n\n<sup>9</sup> Note that the dataset counts tests, not unique individuals, so the same person can appear more than once.", - "page_start": 8, - "page_end": 8, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "## 2. 
Estimating an incidence trend from treatment data\n\nThis section uses treatment data from the National Drug Treatment Monitoring System (NDTMS) to estimate the number of new OCUs annually. The NDTMS captures data on the numbers of people presenting to services with problem drug misuse and information about the drug treatment they receive. All drug treatment agencies in England provide a basic level of information to the NDTMS on their activities each month. The data for this report included all unique individuals presenting to treatment with opiates or crack-cocaine listed as their primary drug between 2005 and 2014. All individuals whose age at first use was listed as below ten, or whose year of first use was before 2005, were then excluded. Excluding individuals who started using opiates/crack before 2005 resulted in a large number of records being left out, due to the fact that the majority of the treatment population, even in 2013/14, initiated in the 1980s and 1990s when heroin and crack use surged in the UK. However, this exclusion is necessary for the incidence methodology, as explained later in this section. The remaining dataset included 52,829 individuals, as shown in Table 10.\n\n| Reason for exclusion | Number of | Total number |\n| --- | --- | --- |\n| | individuals | of individuals |\n| | excluded | analysed |\n| Initial sample prior to exclusion | 0 | 243,588 |\n| No age at first use recorded or age was below 10 or higher than age at | 443 | 243,145 |\n| first treatment | | |\n| Year of first use before 2005 | 190,316 | 52,829 |\n| Percentage of total sample initiating 2005–14 | n/a | 21.7% |\n\n### **Table 10: Descriptive statistics from the NDTMS data.**\n\nThe majority of those presenting for treatment between 2005 and 2014 started using opiates/crack before 2005 (around four in five). Only 52,829 individuals said they had an opiate/crack initiation date between 2005 and 2014. This suggests an average of just under 5,000 new starters per year during this period. 
But this would be an under-estimate of incidence because it is likely that some of those who began use between 2005 and 2014 would not yet have come to treatment during that period.\n\nTo correct for this, we use two variants of a methodology employed by researchers in Millar *et al*. (2001) and Hickman *et al*. (2001). These papers discuss the methodology in detail.\n\nNew opiate and crack-cocaine users: characteristics and trends 22 In brief, the method uses the lag-to-treatment distribution for the sample coupled with the number of new treatment presentations in a given year to estimate OCU incidence in that year. So, when presenting to treatment, all individuals are asked to provide the year in which they first began using their primary drug, which for this analysis was limited to opiates and/or crack", - "page_start": 21, - "page_end": 21, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "# Conclusion\n\nThis report has attempted to draw together available data and evidence to estimate the number of new opiate/crack-cocaine users (OCUs) per year in England since 2005 and then to look briefly at their characteristics. This is important as previous research has suggested that – mostly through the actions of a minority - this group has the potential to have a large impact on crime trends and therefore to impose significant societal costs.\n\nThough data on this population is imperfect, a number of different data sources and methodologies are available to estimate OCU incidence. 
From these, three key conclusions emerge:\n\n- The number of new opiate/crack users is clearly far lower now than it was in the 1980s and early 1990s and has even dropped 20-45% since 2005.\n- This means numbers of new users in 2013 may be around 5,000-8,000 with an approximate upper bound of 10,000; and numbers involved with prolific criminality will be lower still.\n- The downward trend in new OCUs has flattened since about 2011, but available data do not suggest that this is the precursor to a new increase. If anything, the downward trend may resume in 2014, though the situation requires further monitoring.\n\nFor local areas then, this report suggests that it is still important to identify new OCUs as the arrestee data showed that a proportion of these are likely to offend over a long period of time. But also, there was some evidence of a shift to older initiates, which may require a slightly different treatment approach.", - "page_start": 29, - "page_end": 29, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "before 1960 was removed and because DIP tests are only administered to those aged 18 and over, so only using data to 2013 means it would not be possible for anyone to be born in 1996 or afterwards to be included. Even so, it is clear from the year-of-birth distribution (Figure 2) that positive opiate tests drop off sharply for those born after 1982. This is in line with other evidence suggesting that the number of *new* users of opiates decreased sharply in the 2000s. This needs to be considered when interpreting the analysis that follows. When DIP and the NDTMS treatment system began in the mid-2000s, there already existed a cohort of around 320,000 OCUs, according to available estimates by Hay *et al*., (2013). And most of these individuals began using opiates/crack during the epidemic years of the 1980s and 1990s. 
In terms of data capture this means it is hard to separate the gradual inclusion of more and more individuals from this original cohort from genuinely new users of these drugs.\n\nFigure 3, which shows the age of the individual at a positive test, also reveals that although the average age at positive test is 32, the peak is quite flat, with high numbers of positive tests still being recorded by individuals in their late 30s and even into their 40s.", - "page_start": 9, - "page_end": 9, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "The analysis showed that of the 149 Drug Action Team areas in England, 72 per cent had decreases in new OCU treatment numbers in the year to September 2014 compared to the previous year. Furthermore, of the 42 areas showing an increase, only 11 also showed a rise for the 12 months to September 2010 compared with the 12 months to September 2014, and most of these involved small numbers of individuals.\n\nOverall then, the very recent data on treatment presentations do not currently suggest that the number of new OCUs is on the verge of increasing, merely that it flattened for a period.\n\nA number of factors could explain the flattening. Most importantly, if there was some sort of shock that caused a one-off reduction in the lag-time to treatment this could make it appear as if incidence was rising when in fact new users may be falling but a greater percentage may simply be turning up to treatment faster. Such a shock may have occurred given the reduction in heroin supply seen from the end of 2010 through to 2012 (see Ahmad *et al*,. 2016). If users unable to obtain heroin used this enforced abstinence as a spur to seek treatment and hence to present to treatment services earlier than they otherwise would have done, this could cause a one-off 'concertina effect' in which treatment numbers initially flatten or even rise but then fall again. 
This would also explain why the downward trend has apparently resumed: evidence suggests the reduction in supply has also ended.\n\nHowever, further analysis revealed some other possibilities based on the characteristics of those attending opiate/crack treatment for the first time in recent years. The Appendix includes a series of graphs with age-of-onset distributions for those who first attended treatment in 2013, and then 2012, and so on back to 2004. These show that the majority of those who presented to treatment in 2004 initiated use in the mid-1990s in line with the likely peak of the epidemic. But by 2012 a far greater number of individuals presenting to treatment say they started using opiates/crack only a year or two before.23 In other words, there appears to be a shift towards a shorter lag between initiation and treatment. This shift looks even more dramatic when using proportions rather than absolute numbers; see the Appendix.\n\nFurthermore, these individuals (those who seem to have both initiated recently *and* presented to treatment within a year or two of initiation) show a notably different age-of-initiation profile compared to the established profile in the literature, which peaks around 18–22 (Donmall & Jones, 2005). These individuals have a notably older age profile: see Figure 11, which compares recent initiates who presented to treatment in 2005 with recent initiates who presented to treatment in 2013.\n\n<sup>23</sup> This shift does not appear to be related to the reduction in heroin supply occurring around 2010/11. 
As Appendix 1 demonstrates, the pattern emerges far earlier.", - "page_start": 27, - "page_end": 27, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "**Table 3: Descriptive statistics for the DIP positive opiate-only/positive-for-both tests in which an individual can be identified with a PNC number.** \n\n| All positive opiate/opiate+cocaine tests (including repeats) that were recorded on PNC; | | | |\n| --- | --- | --- | --- |\n| England 2004–2013 | | | |\n| | Age | Year of birth | |\n| Number of tests | 296,008 | Number of tests | 296,008 |\n| Mean | 32 | Mean | 1977 |\n| Median | 31 | Median | 1977 |\n| Mode | 28 | Mode | 1979 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\nThe age and year of birth distributions are also similar and are shown in the Appendix. Thus, for the majority of the analysis that follows, tests with no PNC number were excluded.10\n\nThe charts and tables above use data from *all* positive tests, so will include cases where the same individual has tested positively on more than one occasion. The following data look just at the *first* test for each individual testing positive for opiates-only or positive-for-both.\n\n| Table 4: Descriptive statistics on first positive opiate-only/positive-for-both tests. |\n| --- |\n\n| First positive opiate/opiate+cocaine tests (unique individuals) | | | |\n| --- | --- | --- | --- |\n| Age | | Year of birth | |\n| Number of tests | 104,817 | Number of tests | 104,817 |\n| Mean | 31 | Mean | 1977 |\n| Median | 30 | Median | 1977 |\n| Mode | 27 | Mode | 1980 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\nThere were just over 100,000 unique individuals who tested positive for opiates-only or positive-for-both between 2004 and 2013. 
The distribution of the 296,008 positive tests these individuals gave shows that the majority (55%) were only tested once (see Figure 4), which is likely to be why the age statistics are quite similar between Table 3 and Table 4. However, within this\n\n<sup>10</sup> Examining the data it is also clear that some areas recorded a higher proportion of cases without a PNC number than others. Thus excluding these cases further affects the variation in geographic coverage across time. See Appendix for more.", - "page_start": 11, - "page_end": 11, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "# 1. Drug Interventions Programme Data\n\nThe Drug Interventions Programme (DIP) was introduced in April 2003 with the aim of developing and integrating measures for directing adult drug-misusing offenders into drug treatment and reducing offending behaviour. Offenders charged with certain 'trigger offences' (mostly acquisitive crime or drug offences) were drug tested and those testing positive were required to have a drug treatment assessment.\n\nThis section contains a series of descriptive statistics taken from the DIP dataset covering the years from 2004 to 2013. The particular focus is on analysis that can shed light on trends and characteristics for *new* opiate or crack-cocaine users (OCUs). Because it is a dataset predicated on involvement with the criminal justice system it is only representative of a subset of OCUs: those who have been arrested or charged with an offence – mostly an acquisitive crime offence (around 85 per cent of the offences leading to a positive drugs test are acquisitive6). Research has shown that up to half of all OCUs commit little or no acquisitive crime (Gossop *et al*., 2003; Morgan, 2014). So the analysis in this section provides only a guide to the numbers, trends and characteristics for the *total* number of new OCUs. 
But it does provide a helpful picture for the crime-involved subset of new OCUs.\n\nAspects of DIP have changed over time and this affects the data available, so it is important to run through them briefly. In 2005, testing was switched from the point of charge to the point of arrest. DIP was introduced in different areas in waves with the total number of areas increasing through 2004 to 2006. After this point, there was more consistency in DIP's geographical coverage, though some other areas that were not part of the nationally funded programme, did choose to run their own drug-testing-on-arrest programmes and some of these data have also been collected within the DIP data. This process increased slightly from 2010 when all police forces in England and Wales were given authorisation to conduct drug testing and related treatment interventions. This enabled local partners in all areas to decide whether or not to introduce drug testing as a locally driven approach to reducing drug-related offending. Again, some of these data were also captured alongside the data from the original DIP areas. In April 2013, DIP ceased to be a nationally funded programme. Instead, Police and Crime Commissioners (PCCs) were given the power to decide which local interventions (including drug testing on arrest) they would fund to address Class A drug-related offending. Drug testing continues to operate in many areas across England and Wales, but in some areas there may be drop-off in the data from that point.\n\nA full discussion of DIP's geographical coverage over time is contained in the Appendix, which also shows how the available data break down by local authority area. While there is some variation in the number of local authorities returning DIP data, particularly post-2006, areas with higher test volumes are well covered throughout the period. 
So, while any trend should be treated with care, more confidence can be taken in the analysis of the year of birth and age characteristics.\n\n<sup>6</sup> See Appendix 3 in: http://socialwelfare.bl.uk/subject-areas/services-activity/substance-misuse/homeoffice/141816horr02c.pdf", - "page_start": 5, - "page_end": 5, - "source_file": "legal2_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "legal2_opengouvernementlicense.pdf", - "query": "According to the National Drug Treatment Monitoring System, how many people started using opiates/crack between 2005 and 2014?", - "target_page": 22, - "target_passage": " Only 52,829 individuals said they had an opiate/crack initiation date between 2005 and 2014", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## 2. Estimating an incidence trend from treatment data\n\nThis section uses treatment data from the National Drug Treatment Monitoring System (NDTMS) to estimate the number of new OCUs annually. The NDTMS captures data on the numbers of people presenting to services with problem drug misuse and information about the drug treatment they receive. All drug treatment agencies in England provide a basic level of information to the NDTMS on their activities each month. The data for this report included all unique individuals presenting to treatment with opiates or crack-cocaine listed as their primary drug between 2005 and 2014. All individuals whose age at first use was listed as below ten, or whose year of first use was before 2005, were then excluded. Excluding individuals who started using opiates/crack before 2005 resulted in a large number of records being left out, due to the fact that the majority of the treatment population, even in 2013/14, initiated in the 1980s and 1990s when heroin and crack use surged in the UK. However, this exclusion is necessary for the incidence methodology, as explained later in this section. 
The remaining dataset included 52,829 individuals, as shown in Table 10.\n\n| Reason for exclusion | Number of | Total number |\n| --- | --- | --- |\n| | individuals | of individuals |\n| | excluded | analysed |\n| Initial sample prior to exclusion | 0 | 243,588 |\n| No age at first use recorded or age was below 10 or higher than age at | 443 | 243,145 |\n| first treatment | | |\n| Year of first use before 2005 | 190,316 | 52,829 |\n| Percentage of total sample initiating 2005–14 | n/a | 21.7% |\n\n### **Table 10: Descriptive statistics from the NDTMS data.**\n\nThe majority of those presenting for treatment between 2005 and 2014 started using opiates/crack before 2005 (around four in five). Only 52,829 individuals said they had an opiate/crack initiation date between 2005 and 2014. This suggests an average of just under 5,000 new starters per year during this period. But this would be an under-estimate of incidence because it is likely that some of those who began use between 2005 and 2014 would not yet have come to treatment during that period.\n\nTo correct for this, we use two variants of a methodology employed by researchers in Millar *et al*. (2001) and Hickman *et al*. (2001). These papers discuss the methodology in detail.\n\nNew opiate and crack-cocaine users: characteristics and trends 22 In brief, the method uses the lag-to-treatment distribution for the sample coupled with the number of new treatment presentations in a given year to estimate OCU incidence in that year. 
So, when presenting to treatment, all individuals are asked to provide the year in which they first began using their primary drug, which for this analysis was limited to opiates and/or crack", - "page_start": 21, - "page_end": 21, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "# Summary\n\n### **Executive summary**\n\nThis paper uses a range of datasets and methodologies to:\n\n- obtain working estimates for the number of individuals in England who started using opiates/crack from 2005 to 2013;1\n- examine the characteristics of these individuals.\n\nThe main findings of the paper are as follows.\n\n- It is estimated that around 5,000 to 8,000 individuals started using opiates or crackcocaine in 2013. There is a high degree of uncertainty around this figure due to the sparse data on this population, but sense-checks based on treatment and criminal justice system data suggest the true figure is unlikely to be much larger than 10,000.\n- Data also suggest that the number of current opiate/crack initiates involved with crime may be even lower. The number of arrestees testing positive for the first time for opiates (or for both opiates and crack-cocaine) dropped from 14,750 in 2006 to 4,281 in the first 11 months of 2013, a fall of around 70 per cent2 . 
Furthermore, of the new positive testers in 2013, only 721 were aged 18–24.3 Though this arrestee data will capture only a proportion of the true population, it does suggest that the number of new, young initiates involved with crime – those who have the potential to inflict most societal harm – has decreased markedly, probably just to a few thousand per year; and that this group now make up a small minority of the total number of opiate/crack-cocaine users (estimated to be 294,000 in 2011/12), most of whom are older, longer-term users.\n- In terms of trends in new opiate/crack-cocaine users, all available data suggest that figures have dipped by at least a fifth since 2005 and have dropped hugely since the late 1980s and early 1990s when the opiate/crack-cocaine population in the UK grew very rapidly. The current estimate works out at a rate of 0.18 per 1,000 population. During the epidemic years, published estimates of new opiate/crack-cocaine users in Manchester and Bolton show rates more than 11 times larger.\n- However, the findings also suggest that between 2011 and early 2014, the number of new opiate/crack-cocaine users stopped decreasing and instead stabilised at a (historically) low level. Further analysis was conducted to try and determine whether this was a precursor to a new rise in initiates. Though the data are not totally conclusive, the results suggest that a marked increase in new opiate/crack-cocaine users in the near future is unlikely. If anything, findings suggested that the downward trend may be set to resume.\n- Analysis also revealed some possible changes in characteristics of the new opiate/crackcocaine initiates. There is a trend in the treatment data towards new initiates coming to treatment earlier in their drug-using careers than previous cohorts and also to have\n\n<sup>1</sup> At the time of writing, data was unavailable for the period after November 2013. 
2\n\nIt is 68 per cent if the 2013 figure is adjusted to correct for the missing month of data.\n\n<sup>3</sup> 787 if adjusted for the missing month.", - "page_start": 2, - "page_end": 2, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "methods for calculating incidence are complicated and imperfect. It should be acknowledged in advance that this paper does not fully resolve these issues. It is merely intended as a first step, to obtain workable estimates upon which to base policy until more sophisticated methods are developed. That said, every effort is made in this analysis to sense-check the results against other available datasets. The datasets used and the structure of the paper is as follows.\n\n- i) **Drug Interventions Programme (DIP) data.** In part one, we produce general descriptive statistics from these data, which capture individuals who test positive for opiates/crack-cocaine following arrest or charge. Due to the limitations in coverage of these data over time, we draw only broad conclusions, some of which act as a sensecheck for the main results from part two.\n- ii) **Data on presentations to treatment from the National Drug Treatment Monitoring System (NDTMS).** In part two, we use two models based on previous research papers to calculate OCU incidence at the national level between 2005 and 2013. Most of the main conclusions come from this section.", - "page_start": 4, - "page_end": 4, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "between March 2011 and March 2015 can also be seen in the raw numbers for total new OCU treatment presentations.22\n\n**Figure 10: New treatment presentations for opiate/crack use.**\n\nFigure 10 shows that, rather than increasing in the current year, new presentations for opiate/crack use have actually fallen slightly from 48,154 in 2013/14 to 47,241 in 2014/15, a decrease of 1.9%. 
However, given that the early signs of previous opiate/crack use epidemics have been missed before (see Morgan, 2014), and the potential social harm that a fresh increase in new OCUs could cause, further analysis was conducted on the most recent data to try and determine whether the apparent flattening in trends was actually caused by the early stages of a significant surge in new users.\n\nThe treatment data was broken down by age to check whether the slight fall in total new presentations in 2014/15 masked an increase in younger treatment presentations. This showed instead that opiate/crack presentations by those aged 18-24 had fallen from 3,579 in 2013/14 to 3,021 in 2014/15, a fall of 15.6%. In other words, younger new presentations have fallen at a faster rate over the last year than for those aged over-25. Furthermore, separate statistics produced for those in treatment aged 18-and-under also show a fall in aggregate numbers in treatment for opiates and crack.\n\nWe also looked at trends at the local level, given that previous epidemics have started in very specific areas and have taken several years to spread nationally. This means that the start of an epidemic can be hidden in the national data because it has not reached enough areas to register.\n\n<sup>22</sup> Note that this series counts the start of any new treatment journey, regardless of whether an individual has been in treatment before. So unlike our definition of 'new' elsewhere it includes individuals who have been to treatment previously.", - "page_start": 26, - "page_end": 26, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "cocaine users. 
In addition, the sharp decline in total DIP tests in 2013 may be due in part to the fact that DIP ceased to be a nationally funded programme in April 2013.\n\nThese data do show, however, that from 2006 onwards, between a third and half of all acquisitive crime arrests involved a drug test and between 15 per cent and 35 per cent of those tests (depending on the year) resulted in a positive result for opiates-only or for both opiates and cocaine (hereafter labelled `positive-for-both').\n\nThe reason for highlighting only the opiates-only and the `positive-for-both' test results is that the primary group of interest in this report are opiate and crack-cocaine users. To capture this group, cocaine-only tests must be excluded because DIP tests cannot distinguish between powder- and crack-cocaine, so a cocaine-only positive test could indicate either. Previous evidence has demonstrated that while there is much overlap between heroin and crack-cocaine cohorts (i.e. many of those who use heroin also use crack-cocaine), *powder-*cocaine users have a quite different profile and are far less likely to be involved with acquisitive crime. Excluding the cocaine-only tests means we can be guaranteed not to capture any powder-cocaine users (who are not also using opiates or crack), but it also means we may miss some crack-cocaine-only users, hence the figures may under-estimate the true population of OCUs slightly.\n\nThe fifth row in Table 1 shows that the total number of opiate and opiate/cocaine tests over the period was 364,537. 
Table 2 shows descriptive statistics for the individuals providing these tests (noting that the same individual may be included several times if they gave multiple positive tests).\n\nOpiate/opiate+cocaine positive tests in England 2004–2013 (all positive tests including repeats\n\n| by the same individual) | | | |\n| --- | --- | --- | --- |\n| Age | | Year of birth | |\n| Number of tests | 364,537 | Number of tests | 364,537 |\n| Mean | 32 | Mean | 1977 |\n| Median | 31 | Median | 1977 |\n| Mode | 28 | Mode | 1979 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\n#### **Table 2: Descriptive statistics on all positive opiate-only/positive-for-both tests.**\n\nThe mean age at test is 32 and the mean year of birth is 1977, implying that most of these individuals were in their mid-to-late teens during the crime peak of the mid-1990s.9 Given evidence suggesting that the average age of initiation for opiate/crack use is around 18–20 (Millar *et al*., 2001), this age profile would tentatively suggest that OCU incidence also peaked in the 1990s and that this created a large cohort of users who would be approaching 40 today.\n\nThe minimum and maximum years of birth are fixed by construction, because anyone born\n\n<sup>9</sup> Note that the dataset counts tests, not unique individuals, so the same person can appear more than once.", - "page_start": 8, - "page_end": 8, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "before 1960 was removed and because DIP tests are only administered to those aged 18 and over, so only using data to 2013 means it would not be possible for anyone to be born in 1996 or afterwards to be included. Even so, it is clear from the year-of-birth distribution (Figure 2) that positive opiate tests drop off sharply for those born after 1982. This is in line with other evidence suggesting that the number of *new* users of opiates decreased sharply in the 2000s. 
This needs to be considered when interpreting the analysis that follows. When DIP and the NDTMS treatment system began in the mid-2000s, there already existed a cohort of around 320,000 OCUs, according to available estimates by Hay *et al*., (2013). And most of these individuals began using opiates/crack during the epidemic years of the 1980s and 1990s. In terms of data capture this means it is hard to separate the gradual inclusion of more and more individuals from this original cohort from genuinely new users of these drugs.\n\nFigure 3, which shows the age of the individual at a positive test, also reveals that although the average age at positive test is 32, the peak is quite flat, with high numbers of positive tests still being recorded by individuals in their late 30s and even into their 40s.", - "page_start": 9, - "page_end": 9, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "cocaine. From this information it is possible to create a distribution, for all presentations, of the lag-time between initiation and their first presentation at treatment. This might show – for example – that only ten per cent of all individuals presenting to treatment do so in the first year of use, but that 25 per cent present within two years, and so on. This means that for each year, we can estimate the number of individuals who have begun an opiate-crack career *but who have yet to come to treatment*. 
Adding these to the numbers who began in that year and have come to treatment gives our total incidence estimate for each year.\n\nThe first model uses NDTMS data for the cohort starting use in 2005 (n=8,960), the lag-time distribution for those initiating use in 2005 and presenting to treatment between 2005 and 201418 is shown below.\n\n| Table 11: Time-to-treatment distribution for those initiating use in 2005 and presenting to |\n| --- |\n| treatment between 2005 and 2014.19 |\n\n| Lag time to treatment (years) | 0-1 | 1-2 | 2-3 | 3-4 | 4-5 | 5-6 | 6-7 | 7-8 | 8-9 | 9-10 |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Percentage | 15% | 17% | 17% | 14% | 10% | 9% | 6% | 5% | 4% | 4% |\n| Cumulative percentage | 15% | 31% | 49% | 62% | 73% | 82% | 88% | 92% | 96% | 100% |\n\nTable 11 shows that 15 per cent of the individuals who started use in 2005 and had presented for treatment by 2014, presented within one year of initiation. A further 17 per cent presented between one and two years after initiation, prior to coming to treatment, meaning that overall 31 per cent of the sample said they came to treatment within two years of first using opiates/crack. (The fact this is not 32% is simply due to rounding).\n\nAs a basis for the total lag-to-treatment distribution, the main limitation with the above analysis is that it assumes all individuals coming to treatment do so within ten years. Examining data from earlier cohorts suggests this is inaccurate, as a small proportion of OCUs will continue to use these drugs for a long time, sometimes two decades or more, before seeking treatment, and some never will. However, we cannot use an earlier cohort for the distribution because this is equivalent to using out-of-date data. The average lag-to-treatment is likely to have reduced over time given the expansion of treatment places and the influence of DIP. Using old data will miss this and bias the estimates. 
Even using the 2005 cohort's distribution contains the assumption that the time-to-treatment lag has not altered significantly between 2005 and 2013/14. So, to try and obtain the most accurate model, we used the figures from the 2005 cohort for the first ten years, as above, on the basis that this covers the majority of individuals and for that we want the most up-to-date data possible whilst maintaining a long enough time period. We then index the trend at that point to an older cohort, and use data from that cohort to model the 'tail' of the distribution – i.e. those who take longer than ten years to reach treatment.20 The result is a 20-year lag-to-treatment distribution, shown in Table 12 below.\n\n<sup>18</sup> Data for 2014 was available until October 2014. This was converted to annual figures by multiplying up by 1.2 to account for the missing months in a linear fashion.\n\n<sup>19</sup> The percentages from this table can be calculated from the numbers in Table 13.\n\n<sup>20</sup> In reality there is always a trade-off in this methodology between the up-to-dateness of the cohort used to measure the lagto-treatment and the number of years of lag measured, i.e. we could use a more recent cohort, say 2008. But that would mean excluding all those who take longer than seven years to come to treatment, an even larger proportion. We are indebted to Tim Millar for providing the dataset used to model the 'tail' of the distribution. It contained a longer time series of", - "page_start": 22, - "page_end": 22, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "initiated use at an older age. Currently it is not possible to determine whether this is a reporting issue or a genuine shift in the age profile of new opiate/crack-cocaine users.\n\n- The report has several important policy implications. 
Even though numbers of new initiates involved with crime have dropped to the low thousands, putting downward pressure on crime, identification and early diversion to treatment remains paramount. Frontier Economics have estimated that the average4 lifetime crime cost of an injecting drug user is £445,000, so the potential for social harm – even from a small number of individuals – remains large and potentially long-lasting. This means local areas need to manage both the (relatively large) stock of current users, and the (much smaller) flow of new initiates, whose treatment needs may be different. There is no evidence of any new epidemic in this country, but given the impact of the epidemic of the 80s and early 90s on crime, ongoing monitoring of recent trends is required to spot early signs of any emerging problems.\n### **Aims and Methodology**\n\nPrevious Home Office research has demonstrated the importance of opiate/crack-cocaine use in driving aggregate trends in acquisitive crime (Morgan, 2014). While established estimates exist of the *total* number of opiate/crack-cocaine users (OCUs) in England (Hay *et al*., 2013), there are no estimates for the number of *new* OCUs each year (throughout this paper the number of new OCUs is also referred to as **'incidence'**). This is important for three main reasons.\n\n- i) **Stock and flows:** Simply knowing the stock of OCUs tells us nothing about the flows in and out – i.e. if the stock were constant each year that could mean that no one starts using these drugs and no one quits or it could mean *all* existing users quit but that they are wholly replaced by new users, or any similar scenario in between. 
Clearly the policy response would need to be quite different for each of these cases, so knowing the true situation is important.\n- ii) **Early-warning system:** Research by the Home Office and others has shown that there is generally a lag between the start of a heroin/crack epidemic and the point at which it becomes visible on administrative datasets. Closing this gap is important for policy, and part of the reason for its existence is the lack of incidence estimates. Evidence also suggests epidemics spread from area to area, so it is important to monitor local as well as national trends.\n- iii) **The social harm that can arise:** Though research suggests that not all OCUs resort to acquisitive crime to help finance their drug use, numerous studies show that a proportion consistently do and these individuals can be extremely prolific offenders (Morgan, 2014). One study by Frontier Economics estimated that the average lifetime cost to society of an injecting drug user was £445,000 from crime alone. Hence analysing and identifying new OCUs is a policy priority (Frontier Economics, 2010).\n\nThere are two inter-connected reasons why regular national incidence estimates have not been attempted before5 . The first is that data on this issue are sparse given the 'hidden' nature of opiate/crack markets and that date of first use is not something that gets recorded at the moment it actually occurs. 
The second reason, which flows from the first, is that current\n\n<sup>4</sup> The average is useful, but hides the fact that offending within the opiate/crack population is highly skewed with a few individuals responsible for the majority of crime and many individuals manage to use heroin and crack without resorting to acquisitive crime at all (Morgan, 2014).\n\n<sup>5</sup> Though regular national-level estimates have not been attempted, studies have estimated incidence at various times and at various different levels of geography, see for example: De Angelis *et al*., 2004, Millar *et al*., 2001 and Hickman *et al*., 2001.", - "page_start": 3, - "page_end": 3, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "# 1. Drug Interventions Programme Data\n\nThe Drug Interventions Programme (DIP) was introduced in April 2003 with the aim of developing and integrating measures for directing adult drug-misusing offenders into drug treatment and reducing offending behaviour. Offenders charged with certain 'trigger offences' (mostly acquisitive crime or drug offences) were drug tested and those testing positive were required to have a drug treatment assessment.\n\nThis section contains a series of descriptive statistics taken from the DIP dataset covering the years from 2004 to 2013. The particular focus is on analysis that can shed light on trends and characteristics for *new* opiate or crack-cocaine users (OCUs). Because it is a dataset predicated on involvement with the criminal justice system it is only representative of a subset of OCUs: those who have been arrested or charged with an offence – mostly an acquisitive crime offence (around 85 per cent of the offences leading to a positive drugs test are acquisitive6 ). Research has shown that up to half of all OCUs commit little or no acquisitive crime (Gossop et al, 2003; Morgan, 2014). 
So the analysis in this section provides only a guide to the numbers, trends and characteristics for the *total* number of new OCUs. But it does provide a helpful picture for the crime-involved subset of new OCUs.\n\nAspects of DIP have changed over time and this affects the data available, so it is important to run through them briefly. In 2005, testing was switched from the point of charge to the point of arrest. DIP was introduced in different areas in waves with the total number of areas increasing through 2004 to 2006. After this point, there was more consistency in DIP's geographical coverage, though some other areas that were not part of the nationally funded programme, did choose to run their own drug-testing-on-arrest programmes and some of these data have also been collected within the DIP data. This process increased slightly from 2010 when all police forces in England and Wales were given authorisation to conduct drug testing and related treatment interventions. This enabled local partners in all areas to decide whether or not to introduce drug testing as a locally driven approach to reducing drug-related offending. Again, some of these data were also captured alongside the data from the original DIP areas. In April 2013, DIP ceased to be a nationally funded programme. Instead, Police and Crime Commissioners (PCCs) were given the power to decide which local interventions (including drug testing on arrest) they would fund to address Class A drug-related offending. Drug testing continues to operate in many areas across England and Wales, but in some areas there may be drop-off in the data from that point.\n\nA full discussion of DIP's geographical coverage over time is contained in the Appendix, which also shows how the available data break down by local authority area. While there is some variation in the number of local authorities returning DIP data, particularly post-2006, areas with higher test volumes are well covered throughout the period. 
So, while any trend should be treated with care, more confidence can be taken in the analysis of the year of birth and age characteristics.\n\n<sup>6</sup> See Appendix 3 in: http://socialwelfare.bl.uk/subject-areas/services-activity/substance-misuse/homeoffice/141816horr02c.pdf", - "page_start": 5, - "page_end": 5, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "# Conclusion\n\nThis report has attempted to draw together available data and evidence to estimate the number of new opiate/crack-cocaine users (OCUs) per year in England since 2005 and then to look briefly at their characteristics. This is important as previous research has suggested that – mostly through the actions of a minority - this group has the potential to have a large impact on crime trends and therefore to impose significant societal costs.\n\nThough data on this population is imperfect, a number of different data sources and methodologies are available to estimate OCU incidence. From these, three key conclusions emerge:\n\n- The number of new opiate/crack users is clearly far lower now than it was in the 1980s and early 1990s and has even dropped 20-45% since 2005.\n- This means numbers of new users in 2013 may be around 5,000-8,000 with an approximate upper bound of 10,000; and numbers involved with prolific criminality will be lower still.\n- The downward trend in new OCUs has flattened since about 2011, but available data do not suggest that this is the precursor to a new increase. If anything, the downward trend may resume in 2014, though the situation requires further monitoring.\n\nFor local areas then, this report suggests that it is still important to identify new OCUs as the arrestee data showed that a proportion of these are likely to offend over a long period of time. 
But also, there was some evidence of a shift to older initiates, which may require a slightly different treatment approach.", - "page_start": 29, - "page_end": 29, - "source_file": "legal2_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "legal2_opengouvernementlicense.pdf", - "query": "What proportion of opiate users tested in 2004 were still positive a decade later?", - "target_page": 18, - "target_passage": "Nearly ten per cent (8.9%) of individuals who tested positive for opiates at charge in 2004 also tested positive nearly a decade later in 2013 (on arrest)", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "**Table 3: Descriptive statistics for the DIP positive opiate-only/positive-for-both tests in which an individual can be identified with a PNC number.** \n\n| All positive opiate/opiate+cocaine tests (including repeats) that were recorded on PNC; | | | |\n| --- | --- | --- | --- |\n| England 2004–2013 | | | |\n| | Age | Year of birth | |\n| Number of tests | 296,008 | Number of tests | 296,008 |\n| Mean | 32 | Mean | 1977 |\n| Median | 31 | Median | 1977 |\n| Mode | 28 | Mode | 1979 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\nThe age and year of birth distributions are also similar and are shown in the Appendix. Thus, for the majority of the analysis that follows, tests with no PNC number were excluded.10\n\nThe charts and tables above use data from *all* positive tests, so will include cases where the same individual has tested positively on more than one occasion. The following data look just at the *first* test for each individual testing positive for opiates-only or positive-for-both.\n\n| Table 4: Descriptive statistics on first positive opiate-only/positive-for-both tests. 
|\n| --- |\n\n| First positive opiate/opiate+cocaine tests (unique individuals) | | | |\n| --- | --- | --- | --- |\n| Age | | Year of birth | |\n| Number of tests | 104,817 | Number of tests | 104,817 |\n| Mean | 31 | Mean | 1977 |\n| Median | 30 | Median | 1977 |\n| Mode | 27 | Mode | 1980 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\nThere were just over 100,000 unique individuals who tested positive for opiates-only or positivefor-both between 2004 and 2013. The distribution of the 296,008 positive tests these individuals gave, shows that the vast majority (55%) were only tested once (see Figure 4), which is likely to be why the age statistics are quite similar between Table 3 and Table 4. However, within this\n\n<sup>10</sup> Examining the data it is also clear that some areas recorded a higher proportion of cases without a PNC number than others. Thus excluding these cases further affects the variation in geographic coverage across time. See Appendix for more.", - "page_start": 11, - "page_end": 11, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "cocaine users. In addition, the sharp decline in total DIP tests in 2013 may be due in part to the fact that DIP ceased to be a nationally funded programme in April 2013.\n\nThese data do show, however, that from 2006 onwards, between a third and half of all acquisitive crime arrests involved a drug test and between 15 per cent and 35 per cent of those tests (depending on the year) resulted in a positive result for opiates-only or for both opiates and cocaine (hereafter labelled `positive-for-both').\n\nThe reason for highlighting only the opiates-only and the `positive-for-both' test results is that the primary group of interest in this report are opiate and crack-cocaine users. To capture this group, cocaine-only tests must be excluded because DIP tests cannot distinguish between powder- and crack-cocaine, so a cocaine-only positive test could indicate either. 
Previous evidence has demonstrated that while there is much overlap between heroin and crack-cocaine cohorts (i.e. many of those who use heroin also use crack-cocaine), *powder-*cocaine users have a quite different profile and are far less likely to be involved with acquisitive crime. Excluding the cocaine-only tests means we can be guaranteed not to capture any powder-cocaine users (who are not also using opiates or crack), but it also means we may miss some crack-cocaine-only users, hence the figures may under-estimate the true population of OCUs slightly.\n\nThe fifth row in Table 1 shows that the total number of opiate and opiate/cocaine tests over the period was 364,537. Table 2 shows descriptive statistics for the individuals providing these tests (noting that the same individual may be included several times if they gave multiple positive tests).\n\nOpiate/opiate+cocaine positive tests in England 2004–2013 (all positive tests including repeats\n\n| by the same individual) | | | |\n| --- | --- | --- | --- |\n| Age | | Year of birth | |\n| Number of tests | 364,537 | Number of tests | 364,537 |\n| Mean | 32 | Mean | 1977 |\n| Median | 31 | Median | 1977 |\n| Mode | 28 | Mode | 1979 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\n#### **Table 2: Descriptive statistics on all positive opiate-only/positive-for-both tests.**\n\nThe mean age at test is 32 and the mean year of birth is 1977, implying that most of these individuals were in their mid-to-late teens during the crime peak of the mid-1990s.9 Given evidence suggesting that the average age of initiation for opiate/crack use is around 18–20 (Millar *et al*., 2001), this age profile would tentatively suggest that OCU incidence also peaked in the 1990s and that this created a large cohort of users who would be approaching 40 today.\n\nThe minimum and maximum years of birth are fixed by construction, because anyone born\n\n<sup>9</sup> Note that the dataset counts tests, not unique 
individuals, so the same person can appear more than once.", - "page_start": 8, - "page_end": 8, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "before 1960 was removed and because DIP tests are only administered to those aged 18 and over, so only using data to 2013 means it would not be possible for anyone to be born in 1996 or afterwards to be included. Even so, it is clear from the year-of-birth distribution (Figure 2) that positive opiate tests drop off sharply for those born after 1982. This is in line with other evidence suggesting that the number of *new* users of opiates decreased sharply in the 2000s. This needs to be considered when interpreting the analysis that follows. When DIP and the NDTMS treatment system began in the mid-2000s, there already existed a cohort of around 320,000 OCUs, according to available estimates by Hay *et al*., (2013). And most of these individuals began using opiates/crack during the epidemic years of the 1980s and 1990s. In terms of data capture this means it is hard to separate the gradual inclusion of more and more individuals from this original cohort from genuinely new users of these drugs.\n\nFigure 3, which shows the age of the individual at a positive test, also reveals that although the average age at positive test is 32, the peak is quite flat, with high numbers of positive tests still being recorded by individuals in their late 30s and even into their 40s.", - "page_start": 9, - "page_end": 9, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "# Summary\n\n### **Executive summary**\n\nThis paper uses a range of datasets and methodologies to:\n\n- obtain working estimates for the number of individuals in England who started using opiates/crack from 2005 to 2013;1\n- examine the characteristics of these individuals.\n\nThe main findings of the paper are as follows.\n\n- It is estimated that around 5,000 to 8,000 individuals started using opiates or crackcocaine in 2013. 
There is a high degree of uncertainty around this figure due to the sparse data on this population, but sense-checks based on treatment and criminal justice system data suggest the true figure is unlikely to be much larger than 10,000.\n- Data also suggest that the number of current opiate/crack initiates involved with crime may be even lower. The number of arrestees testing positive for the first time for opiates (or for both opiates and crack-cocaine) dropped from 14,750 in 2006 to 4,281 in the first 11 months of 2013, a fall of around 70 per cent2 . Furthermore, of the new positive testers in 2013, only 721 were aged 18–24.3 Though this arrestee data will capture only a proportion of the true population, it does suggest that the number of new, young initiates involved with crime – those who have the potential to inflict most societal harm – has decreased markedly, probably just to a few thousand per year; and that this group now make up a small minority of the total number of opiate/crack-cocaine users (estimated to be 294,000 in 2011/12), most of whom are older, longer-term users.\n- In terms of trends in new opiate/crack-cocaine users, all available data suggest that figures have dipped by at least a fifth since 2005 and have dropped hugely since the late 1980s and early 1990s when the opiate/crack-cocaine population in the UK grew very rapidly. The current estimate works out at a rate of 0.18 per 1,000 population. During the epidemic years, published estimates of new opiate/crack-cocaine users in Manchester and Bolton show rates more than 11 times larger.\n- However, the findings also suggest that between 2011 and early 2014, the number of new opiate/crack-cocaine users stopped decreasing and instead stabilised at a (historically) low level. Further analysis was conducted to try and determine whether this was a precursor to a new rise in initiates. 
Though the data are not totally conclusive, the results suggest that a marked increase in new opiate/crack-cocaine users in the near future is unlikely. If anything, findings suggested that the downward trend may be set to resume.\n- Analysis also revealed some possible changes in characteristics of the new opiate/crackcocaine initiates. There is a trend in the treatment data towards new initiates coming to treatment earlier in their drug-using careers than previous cohorts and also to have\n\n<sup>1</sup> At the time of writing, data was unavailable for the period after November 2013. 2\n\nIt is 68 per cent if the 2013 figure is adjusted to correct for the missing month of data.\n\n<sup>3</sup> 787 if adjusted for the missing month.", - "page_start": 2, - "page_end": 2, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "population there exists a small group of frequent repeat users. 1,828 individuals (1.7% of this population) accounted for just over ten per cent of all positive tests (30,471 tests in total). These individuals provided between 16 and 57 positive tests over the period 2004 to 2013.\n\n**Figure 4: Proportion of positive tests by number of times an individual tested positive.** \n\nThe age and year-of-birth distributions for the 104,817 individuals reveals a similar profile to the distribution for total tests (Figures 5 and 6).", - "page_start": 12, - "page_end": 12, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "initiated use at an older age. Currently it is not possible to determine whether this is a reporting issue or a genuine shift in the age profile of new opiate/crack-cocaine users.\n\n- The report has several important policy implications. Even though numbers of new initiates involved with crime have dropped to the low thousands, putting downward pressure on crime, identification and early diversion to treatment remains paramount. 
Frontier Economics have estimated that the average4 lifetime crime cost of an injecting drug user is £445,000, so the potential for social harm – even from a small number of individuals – remains large and potentially long-lasting. This means local areas need to manage both the (relatively large) stock of current users, and the (much smaller) flow of new initiates, whose treatment needs may be different. There is no evidence of any new epidemic in this country, but given the impact of the epidemic of the 80s and early 90s on crime, ongoing monitoring of recent trends is required to spot early signs of any emerging problems.\n### **Aims and Methodology**\n\nPrevious Home Office research has demonstrated the importance of opiate/crack-cocaine use in driving aggregate trends in acquisitive crime (Morgan, 2014). While established estimates exist of the *total* number of opiate/crack-cocaine users (OCUs) in England (Hay *et al*., 2013), there are no estimates for the number of *new* OCUs each year (throughout this paper the number of new OCUs is also referred to as **'incidence'**). This is important for three main reasons.\n\n- i) **Stock and flows:** Simply knowing the stock of OCUs tells us nothing about the flows in and out – i.e. if the stock were constant each year that could mean that no one starts using these drugs and no one quits or it could mean *all* existing users quit but that they are wholly replaced by new users, or any similar scenario in between. Clearly the policy response would need to be quite different for each of these cases, so knowing the true situation is important.\n- ii) **Early-warning system:** Research by the Home Office and others has shown that there is generally a lag between the start of a heroin/crack epidemic and the point at which it becomes visible on administrative datasets. Closing this gap is important for policy, and part of the reason for its existence is the lack of incidence estimates. 
Evidence also suggests epidemics spread from area to area, so it is important to monitor local as well as national trends.\n- iii) **The social harm that can arise:** Though research suggests that not all OCUs resort to acquisitive crime to help finance their drug use, numerous studies show that a proportion consistently do and these individuals can be extremely prolific offenders (Morgan, 2014). One study by Frontier Economics estimated that the average lifetime cost to society of an injecting drug user was £445,000 from crime alone. Hence analysing and identifying new OCUs is a policy priority (Frontier Economics, 2010).\n\nThere are two inter-connected reasons why regular national incidence estimates have not been attempted before5 . The first is that data on this issue are sparse given the 'hidden' nature of opiate/crack markets and that date of first use is not something that gets recorded at the moment it actually occurs. The second reason, which flows from the first, is that current\n\n<sup>4</sup> The average is useful, but hides the fact that offending within the opiate/crack population is highly skewed with a few individuals responsible for the majority of crime and many individuals manage to use heroin and crack without resorting to acquisitive crime at all (Morgan, 2014).\n\n<sup>5</sup> Though regular national-level estimates have not been attempted, studies have estimated incidence at various times and at various different levels of geography, see for example: De Angelis *et al*., 2004, Millar *et al*., 2001 and Hickman *et al*., 2001.", - "page_start": 3, - "page_end": 3, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "**Figure 3: Distribution of tester's age at positive test for all opiate-only/positive-for-both tests.**\n\nNote: as a guide to the OCU population, this chart is left-truncated as DIP tests are not given to under-18s.\n\nThe above statistics include tests in which no Police National Computer (PNC) number was 
recorded for an individual. This number is needed to identify an individual and hence to check whether future tests are further tests by that individual or represent a new individual testing positive. Excluding tests in which no PNC number was recorded makes little difference to the descriptive statistics, see Table 3 below.", - "page_start": 10, - "page_end": 10, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "## 2. Estimating an incidence trend from treatment data\n\nThis section uses treatment data from the National Database Treatment Monitoring System (NDTMS) to estimate the number of new OCUs annually. The NDTMS captures data on the numbers of people presenting to services with problem drug misuse and information about the drug treatment they receive. All drug treatment agencies in England provide a basic level of information to the NDTMS on their activities each month. The data for this report included all unique individuals presenting to treatment with opiates or crack-cocaine listed as their primary drug between 2005 and 2014. All individuals whose age of first use was listed as below ten or before 2005 were then excluded. Excluding individuals who started using opiates/crack before 2005 resulted in a large number of records being left out, due to the fact that the majority of the treatment population, even in 2013/14, initiated in the 1980s and 1990s when heroin and crack use surged in the UK. However, this exclusion is necessary for the incidence methodology, as explained later in this section. 
The remaining dataset included 52,829 individuals, as shown in Table 10.\n\n| Reason for exclusion | Number of | Total number |\n| --- | --- | --- |\n| | individuals | of individuals |\n| | excluded | analysed |\n| Initial sample prior to exclusion | 0 | 243,588 |\n| No age at first use recorded or age was below 10 or higher than age at | 443 | 243,145 |\n| first treatment | | |\n| Year of first use before 2005 | 190,316 | 52,829 |\n| Percentage of total sample initiating 2005–14 | n/a | 21.7% |\n\n### **Table 10: Descriptive statistics from the NDTMS data.**\n\nThe majority of those presenting for treatment between 2005 and 2014 started using opiates/crack before 2005 (around four in five). Only 52,829 individuals said they had an opiate/crack initiation date between 2005 and 2014. This suggests an average of just under 5,000 new starters per year during this period. But this would be an under-estimate of incidence because it is likely that some of those who began use between 2005 and 2014 would not yet have come to treatment during that period.\n\nTo correct for this, we use two variants of a methodology employed by researchers in Millar *et al*. (2001) and Hickman *et al*. (2001). These papers discuss the methodology in detail.\n\nNew opiate and crack-cocaine users: characteristics and trends 22 In brief, the method uses the lag-to-treatment distribution for the sample coupled with the number of new treatment presentations in a given year to estimate OCU incidence in that year. 
So, when presenting to treatment, all individuals are asked to provide the year in which they first began using their primary drug, which for this analysis was limited to opiates and/or crack", - "page_start": 21, - "page_end": 21, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "**Figure 6: Age at first positive test (opiates-only or positive-for-both.)**\n\nNote: as a guide to the OCU population, this chart is left-truncated as DIP tests are not given to under-18s.", - "page_start": 13, - "page_end": 13, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "# Conclusion\n\nThis report has attempted to draw together available data and evidence to estimate the number of new opiate/crack-cocaine users (OCUs) per year in England since 2005 and then to look briefly at their characteristics. This is important as previous research has suggested that – mostly through the actions of a minority - this group has the potential to have a large impact on crime trends and therefore to impose significant societal costs.\n\nThough data on this population is imperfect, a number of different data sources and methodologies are available to estimate OCU incidence. From these, three key conclusions emerge:\n\n- The number of new opiate/crack users is clearly far lower now than it was in the 1980s and early 1990s and has even dropped 20-45% since 2005.\n- This means numbers of new users in 2013 may be around 5,000-8,000 with an approximate upper bound of 10,000; and numbers involved with prolific criminality will be lower still.\n- The downward trend in new OCUs has flattened since about 2011, but available data do not suggest that this is the precursor to a new increase. 
If anything, the downward trend may resume in 2014, though the situation requires further monitoring.\n\nFor local areas then, this report suggests that it is still important to identify new OCUs as the arrestee data showed that a proportion of these are likely to offend over a long period of time. But also, there was some evidence of a shift to older initiates, which may require a slightly different treatment approach.", - "page_start": 29, - "page_end": 29, - "source_file": "legal2_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia5.pdf", - "query": "Who led the Fronde des princes?", - "target_page": 4, - "target_passage": "It was headed by the highest-ranking French nobles, among them Louis's uncle Gaston, Duke of Orléans and first cousin Anne Marie Louise d'Orléans, Duchess of Montpensier, known as la Grande Mademoiselle; Princes of the Blood such as Condé, his brother Armand de Bourbon, Prince of Conti, and their sister the Duchess of Longueville; dukes of legitimised royal descent, such as Henri, Duke of Longueville, and François, Duke of Beaufort; so-called \"foreign princes\" such as Frédéric Maurice, Duke of Bouillon, his brother Marshal Turenne, and Marie de Rohan, Duchess of Chevreuse; and scions of France's oldest families, such as François de La Rochefoucauld.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Condé, attacked the rebels in Paris; the rebels were under the political control of Anne's old friend Marie de Rohan. Beaufort, who had escaped from the prison where Anne had incarcerated him five years before, was the military leader in Paris, under the nominal control of Conti. After a few battles, a political compromise was reached; the Peace of Rueil was signed, and the court returned to Paris.\n\nUnfortunately for Anne, her partial victory depended on Condé, who wanted to control the queen and destroy Mazarin's influence. 
It was Condé's sister who pushed him to turn against the queen. After striking a deal with her old friend Marie de Rohan, who was able to impose the nomination of *Charles de l'Aubespine, marquis de Châteauneuf* as minister of justice, Anne arrested Condé, his brother Armand de Bourbon, Prince of Conti, and the husband of their sister Anne Genevieve de Bourbon, duchess of Longueville. This situation did not last long, and Mazarin's unpopularity led to the creation of a coalition headed mainly by Marie de Rohan and the duchess of Longueville. This aristocratic coalition was strong enough to liberate the princes, exile Mazarin, and impose a condition of virtual house arrest on Queen Anne.\n\n1655 portrait of Louis, the Victor of the Fronde, portrayed as the god Jupiter\n\nPortrait by Justus van Egmont between the years 1649–1652.\n\n#### All these events were witnessed by Louis and\n\nlargely explained his later distrust of Paris and the higher aristocracy. [27] \"In one sense, Louis's childhood came to an end with the outbreak of the Fronde. It was not only that life became insecure and unpleasant – a fate meted out to many children in all ages – but that Louis had to be taken into the confidence of his mother and Mazarin on political and military matters of which he could have no deep understanding\".[28] \"The family home became at times a near-prison when Paris had to be abandoned, not in carefree outings to other chateaux but in humiliating flights\".[28] The royal family was driven out of Paris twice in this manner, and at one point Louis XIV and Anne were held under virtual arrest in the royal palace in Paris. The Fronde years planted in Louis a hatred of Paris and a consequent determination to move out of the ancient capital as soon as possible, never to return.[29]\n\nJust as the first *Fronde* (the *Fronde parlementaire* of 1648–1649) ended, a second one (the *Fronde des princes* of 1650–1653) began. 
Unlike that which preceded it, tales of sordid intrigue and half-hearted warfare characterized this second phase of upper-class insurrection. To the aristocracy, this rebellion represented a protest for the reversal of their political demotion from vassals to courtiers. It was headed by the highest-ranking French\n\nnobles, among them Louis's uncle Gaston, Duke of Orléans and first cousin Anne Marie Louise d'Orléans, Duchess of Montpensier, known as *la Grande Mademoiselle*; Princes of the Blood such as Condé, his brother Armand de Bourbon, Prince of Conti, and their sister the Duchess of Longueville; dukes of legitimised royal descent, such as Henri, Duke of Longueville, and François, Duke of Beaufort; so-called \"foreign princes\" such as Frédéric Maurice, Duke of Bouillon, his brother Marshal Turenne, and Marie de Rohan, Duchess of Chevreuse; and scions of France's oldest families, such as François de La Rochefoucauld.\n\nQueen Anne played the most important role in defeating the Fronde because she wanted to transfer absolute authority to her son. In addition, most of the princes refused to deal with Mazarin, who went into exile for a number of years. The *Frondeurs* claimed to act on Louis's behalf, and in his real interest, against his mother and Mazarin.\n\nQueen Anne had a very close relationship with the Cardinal, and many observers believed that Mazarin became Louis XIV's stepfather by a secret marriage to Queen Anne.[30] However, Louis's coming-of-age and subsequent coronation deprived them of the *Frondeurs*' pretext for revolt. The *Fronde* thus gradually lost steam and ended in 1653, when Mazarin returned triumphantly from exile. 
From that time until his death, Mazarin was in charge of foreign and financial policy without the daily supervision of Anne, who was no longer regent.[31]\n\nDuring this period, Louis fell in love with Mazarin's niece Marie Mancini, but Anne and Mazarin ended the king's infatuation by sending Mancini away from court to be married in Italy. While Mazarin might have been tempted for a short time to marry his niece to the King of France, Queen Anne was absolutely against this; she wanted to marry her son to the daughter of her brother, Philip IV of Spain, for both dynastic and political reasons. Mazarin soon supported the Queen's position because he knew that her support for his power and his foreign policy depended on making peace with Spain from a strong position and on the Spanish marriage. Additionally, Mazarin's relations with Marie Mancini were not good, and he did not trust her to support his position. All of Louis's tears and his supplications to his mother did not make her change her mind. The Spanish marriage would be very", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Painting from 1667 depicting Louis as patron of the fine arts\n\nThe *Cour royale* and the *Cour de marbre* at Versailles\n\nfamous throughout Europe. Composers and musicians such as Jean-Baptiste Lully, Jacques Champion de Chambonnières, and François Couperin thrived. In 1661, Louis founded the Académie Royale de Danse, and in 1669, the Académie d'Opéra, important driving events in the evolution of ballet. He also attracted, supported and patronized such artists as André Charles Boulle, who revolutionised marquetry with his art of inlay, today known as \"Boulle work\". 
Always on the lookout for new talent, the king launched music competitions: in 1683, Michel-Richard de Lalande thus became deputy master of the Royal Chapel, composing his *Symphonies for the Soupers du Roy* along with 77 large scale *Grand Motets*.\n\nOver the course of four building campaigns, Louis converted a hunting lodge commissioned by Louis XIII into the spectacular Palace of Versailles. Except for the current Royal Chapel (built near the end of his reign), the palace achieved much of its current appearance after the third building campaign, which was followed by an official move of the royal court to Versailles on 6 May 1682. Versailles became a dazzling, aweinspiring setting for state affairs and the reception of foreign dignitaries. At Versailles, the king alone commanded attention.\n\nSeveral reasons have been suggested for the creation of the extravagant and stately palace, as well as the relocation of the monarchy's seat. The memoirist Saint-Simon speculated that Louis viewed Versailles as an isolated power centre where\n\ntreasonous cabals could be more readily discovered and foiled.[62] There has also been speculation that the revolt of the *Fronde* caused Louis to hate Paris, which he abandoned for a country retreat, but his sponsorship of many public works in Paris, such as the establishment of a police force and of street-lighting,[111] lend little credence to this theory. As a further example of his continued care for the capital, Louis constructed the *Hôtel des Invalides*, a military complex and home to this day for officers and soldiers rendered infirm either by injury or old age. While pharmacology was still quite rudimentary in his day, the *Invalides* pioneered new treatments and set new standards for hospice treatment. 
The conclusion of the Treaty of Aix-la-Chapelle in 1668 also induced Louis to demolish Paris's northern walls in 1670 and replace them with wide tree-lined boulevards.[112]\n\nBust of Louis XIV by Gianlorenzo Bernini\n\nLouis also renovated and improved the Louvre and other royal residences. Gian Lorenzo\n\nBernini was originally to plan additions to the Louvre; however, his plans would have meant the destruction of much of the existing structure, replacing it with an Italian summer villa in the centre of Paris. Bernini's plans were eventually shelved in favour of the elegant Louvre Colonnade designed by three Frenchmen: Louis Le Vau, Charles Le Brun, and Claude Perrault. With the relocation of the court to Versailles, the Louvre was given over to the arts and the public.[113] During his visit from Rome, Bernini also executed a renowned portrait bust of the king.\n\n# **Image and depiction**\n\nFew rulers in world history have commemorated themselves in as grand a manner as Louis.[114] He cultivated his image as the Sun King (*le Roi Soleil*), the centre of the universe \"without equal\". Louis used court ritual and the arts to validate and augment his control over France. With his support, Colbert established from the beginning of Louis's personal reign a centralised and institutionalised system for creating and perpetuating the royal image. The King was thus portrayed largely in majesty or at war, notably against Spain. This portrayal of the monarch was to be found in numerous media of artistic expression, such as painting, sculpture, theatre, dance, music, and the almanacs that diffused royal propaganda to the population at large.\n\n### **Evolution of royal portraiture**\n\nOver his lifetime, Louis commissioned numerous works of art to portray himself, among them over 300 formal portraits. The earliest portrayals of Louis already followed the pictorial conventions of the day in depicting the child king as the majestically royal incarnation of France. 
This idealisation of the monarch continued in later works, which avoided depictions of the effect of smallpox that Louis contracted in 1647. In the 1660s, Louis began to be shown as a Roman emperor, the god Apollo, or Alexander the Great, as can be seen in many works of Charles Le Brun, such as sculpture, paintings, and the decor of major monuments.", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia5.pdf" - }, - { - "text": "negotiations in 1709 and 1710. France retained Île-Saint-Jean and Île Royale, and Louis acquired a few minor European territories, such as the Principality of Orange and the Ubaye Valley, which covered transalpine passes into Italy. Thanks to Louis, his allies the Electors of Bavaria and Cologne were restored to their prewar status and returned their lands.[102]\n\n# **Personal life**\n\n#### **Marriages and children**\n\nLouis and his wife Maria Theresa of Spain had six children from the marriage contracted for them in 1660. However, only one child, the eldest, survived to adulthood: Louis, *le Grand Dauphin*, known as *Monseigneur*. Maria Theresa died in 1683, whereupon Louis remarked that she had never caused him unease on any other occasion.\n\nDespite evidence of affection early on in their marriage, Louis was never faithful to Maria Theresa. He took a series of mistresses, both official and unofficial. Among the better documented are Louise de La Vallière (with whom he had five children; 1661–1667), Bonne de Pons d'Heudicourt (1665), Catherine Charlotte de Gramont (1665), Françoise-Athénaïs, Marquise de Montespan (with whom he had seven children; 1667–1680), Anne de Rohan-Chabot (1669–1675), Claude de Vin des Œillets (one child born in 1676),\n\nIsabelle de Ludres (1675–1678), and Marie Angélique de Scorailles (1679–1681), who died at age 19 in childbirth. 
Through these liaisons, he produced numerous illegitimate children, most of whom he married to members of cadet branches of the royal family.\n\nLouis proved relatively more faithful to his second wife, Françoise d'Aubigné, Marquise de Maintenon. He first met her through her work caring for his children by Madame de Montespan, noting the care she gave to his favourite, Louis Auguste, Duke of Maine. [103] The king was, at first, put off by her strict religious practice, but he warmed to her through her care for his children.[103]\n\nWhen he legitimized his children by Madame de Montespan on 20 December 1673, Françoise d'Aubigné became the royal governess at Saint-Germain.[103] As governess, she was one of very few people permitted to speak to him as an equal, without limits.[103] It is believed that they were married secretly at Versailles on or around 10 October 1683[104] or January 1684.[105] This marriage, though never announced or publicly discussed, was an open secret and lasted until his death.[106]\n\n### **Piety and religion**\n\nLouis was a pious and devout king who saw himself as the head and protector of the Catholic Church in France. He made his devotions daily regardless of where he was, following the liturgical calendar regularly. [107] Under the influence of his very religious second wife, he became much stronger in the practice of his Catholic faith.[108] This included banning opera and comedy performances during Lent. [108]\n\nTowards the middle and the end of his reign, the centre for the King's religious observances was usually the Chapelle Royale at Versailles. Ostentation was a distinguishing feature of daily Mass, annual celebrations, such as those of Holy Week, and special ceremonies.[109] Louis established the Paris Foreign Missions Society, but his informal alliance with the Ottoman Empire was criticised for undermining Christendom. 
[110]\n\n#### **Patronage of the arts**\n\nLouis generously supported the royal court of France and those who worked under him. He brought the Académie Française under his patronage and became its \"Protector\". He allowed Classical French literature to flourish by protecting such writers as Molière, Racine, and La Fontaine, whose works remain influential to this day. Louis also patronised the visual arts by funding and commissioning artists such as Charles Le Brun, Pierre Mignard, Antoine Coysevox, and Hyacinthe Rigaud, whose works became\n\nWedding of Louis and Maria Theresa\n\nDual Cypher of King Louis XIV & Queen Marie Thérèse\n\nLouis XIV encouraged Catholic missions through the creation of the Paris Foreign Missions Society", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia5.pdf" - }, - { - "text": "The Queen sought a lasting peace between Catholic nations, but only after a French victory over her native Spain. She also gave a partial Catholic orientation to French foreign policy. This was felt by the Netherlands, France's Protestant ally, which negotiated a separate peace with Spain in 1648.[18]\n\nIn 1648, Anne and Mazarin successfully negotiated the Peace of Westphalia, which ended the Thirty Years' War. [19] Its terms ensured Dutch independence from Spain, awarded some autonomy to the various German princes of the Holy Roman Empire, and granted Sweden seats on the Imperial Diet and territories controlling the mouths of the Oder, Elbe, and Weser Rivers. [20] France, however, profited most from the settlement. Austria, ruled by the Habsburg Emperor Ferdinand III, ceded all Habsburg lands and claims in Alsace to France and acknowledged her *de facto* sovereignty over the Three Bishoprics of Metz, Verdun, and Toul. [21] Moreover, many petty German states sought French protection, eager to emancipate themselves from Habsburg domination. 
This anticipated the formation of the 1658 League of the Rhine, which further diminished Imperial power.\n\nBaptismal certificate, 1638\n\n#### **Early acts**\n\nAs the Thirty Years' War came to an end, a civil war known as the Fronde erupted in France. It effectively checked France's ability to exploit the Peace of Westphalia. Anne and Mazarin had largely pursued the policies of Cardinal Richelieu, augmenting the Crown's power at the expense of the nobility and the *Parlements*. Anne was more concerned with internal policy than foreign affairs; she was a very proud queen who insisted on the divine rights of the King of France.[22]\n\nAll this led her to advocate a forceful policy in all matters relating to the King's authority, in a manner that was much more radical than the one proposed by Mazarin. The Cardinal depended totally on Anne's support and had to use all his influence on the Queen to temper some of her radical actions. Anne imprisoned any aristocrat or member of parliament who challenged her will; her main aim was to transfer to her son an absolute authority in the matters of finance and justice. One of the leaders of the Parlement of Paris, whom she had jailed, died in prison.[23]\n\nThe *Frondeurs*, political heirs of the disaffected feudal aristocracy, sought to protect their traditional feudal privileges from the increasingly centralized royal government. Furthermore, they believed their traditional influence and authority was being usurped by the recently ennobled bureaucrats (the *Noblesse de Robe*, or \"nobility of the robe\"), who administered the kingdom and on whom the monarchy increasingly began to rely. This belief intensified the nobles' resentment.\n\nIn 1648, Anne and Mazarin attempted to tax members of the *Parlement de Paris*. The members refused to comply and ordered all of the king's earlier financial edicts burned. 
Buoyed by the victory of *Louis, duc d'Enghien* (later known as *le Grand Condé*) at the Battle of Lens, Mazarin, on Queen Anne's insistence, arrested certain members in a show of force.[24] The most important arrest, from Anne's point of view, concerned Pierre Broussel, one of the most important leaders in the *Parlement de Paris*.\n\nPeople in France were complaining about the expansion of royal authority, the high rate of taxation, and the reduction of the authority of the Parlement de Paris and other regional representative entities. Paris erupted in rioting as a result, and Anne was forced, under intense pressure, to free Broussel. Moreover, on the night of 9–10 February 1651, when Louis was twelve, a mob of angry Parisians broke into the royal palace and demanded to see their king. Led into the royal bed-chamber, they gazed upon Louis, who was feigning sleep, were appeased, and then quietly departed.[25] The threat to the royal family prompted Anne to flee Paris with the king and his courtiers.\n\nShortly thereafter, the conclusion of the Peace of Westphalia allowed Condé's army to return to aid Louis and his court. Condé's family was close to Anne at that time, and he agreed to help her attempt to restore the king's authority. [26] The queen's army, headed by\n\nLouis XIV, then Dauphin of France, in 1642, one year before his accession to the throne, by Philippe de Champaigne\n\nLouis XIV in 1643, by Claude Deruet\n\nEurope after the Peace of Westphalia in 1648", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia5.pdf" - }, - { - "text": "The Nine Years' War, which lasted from 1688 to 1697, initiated a period of decline in Louis's political and diplomatic fortunes. It arose from two events in the Rhineland. First, in 1685, the Elector Palatine Charles II died. All that remained of his immediate family was Louis's sister-in-law, Elizabeth Charlotte. 
German law ostensibly barred her from succeeding to her brother's lands and electoral dignity, but it was unclear enough for arguments in favour of Elizabeth Charlotte to have a chance of success. Conversely, the princess was demonstrably entitled to a division of the family's personal property. Louis pressed her claims to land and chattels, hoping the latter, at least, would be given to her. [76] Then, in 1688, Maximilian Henry of Bavaria, Archbishop of Cologne, an ally of France, died. The archbishopric had traditionally been held by the Wittelsbachs of Bavaria, but the Bavarian claimant to replace Maximilian Henry, Prince Joseph Clemens of Bavaria, was at that time not more than 17 years old and not even ordained. Louis sought instead to install his own candidate, Wilhelm Egon von Fürstenberg, to ensure the key Rhenish state remained an ally. [77]\n\nIn light of his foreign and domestic policies during the early 1680s, which were perceived as aggressive, Louis's actions, fostered by the succession crises of the late 1680s, created concern and alarm in much of Europe. This led to the formation of the 1686 League of Augsburg by the Holy Roman Emperor, Spain, Sweden, Saxony, and Bavaria. Their stated intention was to return France to at least the borders agreed to in the Treaty of Nijmegen.[78] Emperor Leopold I's persistent refusal to convert the Truce of Ratisbon into a permanent treaty fed Louis's fears that the Emperor would turn on France and attack the Reunions after settling his affairs in the Balkans.[79]\n\nAnother event Louis found threatening was England's Glorious Revolution of 1688. Although King James II was Catholic, his two Anglican daughters, Mary and Anne, ensured the English people a Protestant succession. But when James II's son James Francis Edward Stuart was born, he took precedence in succession over his sisters. This seemed to herald an era of Catholic monarchs in England. 
Protestant lords called on the Dutch Prince\n\nBattle of Fleurus, 1690\n\nLouis in 1690\n\nWilliam III of Orange, grandson of Charles I of England, to come to their aid. He sailed for England with troops despite Louis's warning that France would regard it as a provocation. Witnessing numerous desertions and defections, even among those closest to him, James II fled England. Parliament declared the throne vacant, and offered it to James's daughter Mary II and his son-inlaw and nephew William. Vehemently anti-French, William (now William III of England) pushed his new kingdoms into war, thus transforming the League of Augsburg into the Grand Alliance. Before this happened, Louis expected William's expedition to England to absorb his energies and those of his allies, so he dispatched troops to the Rhineland after the expiry of his ultimatum to the German princes requiring confirmation of the Truce of Ratisbon and acceptance of his demands about the succession crises. This military manoeuvre was also intended to protect his eastern provinces from Imperial invasion by depriving the enemy army of sustenance, thus explaining the preemptive scorched earth policy pursued in much of southwestern Germany (the \"Devastation of the Palatinate\").[80]\n\nLouis XIV at the siege of Namur (1692)\n\nFrench armies were generally victorious throughout the war because of Imperial commitments in the Balkans, French logistical superiority, and the quality of French generals such as Condé's famous pupil, François Henri de Montmorency-Bouteville, duc de Luxembourg. 
[81] He triumphed at the Battles of Fleurus in 1690, Steenkerque in 1692, and Landen in 1693, although, the battles proved to be of little of strategic consequence,[82][83] mostly due to the nature of late 17th-century warfare.[84]\n\nAlthough an attempt to restore James II failed at the Battle of the Boyne in 1690, France accumulated a string of victories from Flanders in the north, Germany in the east, and Italy and Spain in the south, to the high seas and the colonies. Louis personally supervised the captures of Mons in 1691 and Namur in 1692. Luxembourg gave France the defensive line of the Sambre by capturing Charleroi in 1693. France also overran most of the Duchy of Savoy after the battles of Marsaglia and Staffarde in 1693. While naval stalemate ensued after the French victory at the Battle of Beachy Head in 1690 and the Allied victory at Barfleur-La Hougue in 1692, the Battle of Torroella in 1694 exposed Catalonia to French invasion, culminating in the capture of Barcelona.\n\nThe Dutch captured Pondichéry in 1693, but a 1697 French raid on the Spanish treasure port of Cartagena, Spain, yielded a fortune of 10,000,000 livres.", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia5.pdf" - }, - { - "text": "**Louis XIV** (Louis-Dieudonné; 5 September 1638 – 1 September 1715), also known as **Louis the Great** (*Louis le Grand*) or the **Sun King** (*le Roi Soleil*), was King of France from 1643 until his death in 1715. His verified reign of 72 years and 110 days is the longest of any sovereign. [1][a] An emblematic character of the Age of Absolutism in Europe, [3] Louis XIV's legacy is widely characterized by French colonial expansion, the conclusion of Eighty Years' War involving the Habsburgs, and his architectural bequest, marked by commissioned works of art and buildings. His pageantry, opulent lifestyle and ornate cultivated image earned him enduring admiration. 
Louis XIV raised France to be the exemplar nation-state of the early modern period, and established a cultural prestige which lasted through the subsequent centuries, and continues today.\n\nLouis began his personal rule of France in 1661, after the death of his chief minister Cardinal Mazarin, when the King famously declared that he would take over the job himself.[4] An adherent of the divine right of kings, Louis continued his predecessors' work of creating a centralised state governed from the capital. He sought to eliminate the remnants of feudalism persisting in parts of France; by compelling many members of the nobility to reside at his lavish Palace of Versailles, he succeeded in pacifying the aristocracy, many of whom had participated in the Fronde rebellions during his minority. He thus became one of the most powerful French monarchs and consolidated a system of absolute monarchy in France that endured until the French Revolution. Louis also enforced uniformity of religion under the Catholic Church. His revocation of the Edict of Nantes abolished the rights of the Huguenot Protestant minority and subjected them to a wave of dragonnades, effectively forcing Huguenots to emigrate or convert, virtually destroying the French Protestant community.\n\nDuring Louis's long reign, France emerged as the leading European power and regularly made war. A conflict with Spain marked his entire childhood, while during his personal rule, Louis fought three major continental conflicts, each against powerful foreign alliances: the Franco-Dutch War, the Nine Years' War, and the War of the Spanish Succession. In addition, France contested shorter wars such as the War of Devolution and the War of the Reunions. Warfare defined Louis's foreign policy, impelled by his personal ambition for glory and power: \"a mix of commerce, revenge, and pique\".[5] His wars strained France's resources to the utmost, while in peacetime he concentrated on preparing for the next war. 
He taught his diplomats that their job was to create tactical and strategic advantages for the French military. [6] Upon his death in 1715, Louis XIV left his great-grandson and successor, Louis XV, a powerful but war-weary kingdom, in major debt after the War of the Spanish Succession that had raged on since 1701.\n\nSome of his other notable achievements include the construction of the Canal du Midi, the patronage of artists, and the founding of the French Academy of Sciences.\n\n# **Early years**\n\nPortrait by Hyacinthe Rigaud , 1701\n\n| | King of France (more...) |\n| --- | --- |\n| Reign | 14 May 1643 – 1 September |\n| | 1715 |\n| Coronation | 7 June 1654 |\n| | Reims Cathedral |\n| Predecessor | Louis XIII |\n| Successor | Louis XV |\n| Regent | Anne of Austria (1643–1651) |\n| Chief ministers See list | |\n| | Cardinal Mazarin |\n| | (1643–1661) |\n| | Jean-Baptiste Colbert |\n| | (1661–1683) |\n| | The Marquis of Louvois |\n| | (1683–1691) |\n| Born | 5 September 1638 |\n| | Château de Saint-Germain |\n| | en-Laye, Saint-Germain-en |\n| | Laye, France |\n| Died | 1 September 1715 (aged 76) |\n| | Palace of Versailles, |\n| | Versailles, France |\n| Burial | 9 September 1715 |\n| | Basilica of Saint-Denis |\n| Spouses | Maria Theresa of Spain |\n| | (m. 1660; died 1683) |\n| | Françoise d'Aubigné, |\n| | Marquise de Maintenon |\n| | (private) |\n| | (m. 1683) |", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Louis XIV was born on 5 September 1638 in the Château de Saint-Germain-en-Laye, to Louis XIII and Anne of Austria. He was named Louis Dieudonné (Louis the God-given)[7] and bore the traditional title of French heirs apparent: *Dauphin*. [8] At the time of his birth, his parents had been married for 23 years. His mother had experienced four stillbirths between 1619 and 1631. 
Leading contemporaries thus regarded him as a divine gift and his birth a miracle of God.[9]\n\nLouis's relationship with his mother was uncommonly affectionate for the time. Contemporaries and eyewitnesses claimed that the Queen would spend all her time with Louis.[10] Both were greatly interested in food and theatre, and it is highly likely that Louis developed these interests through his close relationship with his mother. This long-lasting and loving relationship can be evidenced by excerpts in Louis's journal entries, such as:\n\n> \"Nature was responsible for the first knots which tied me to my mother. But attachments formed later by shared qualities of the spirit are far more difficult to break than those formed merely by blood.\"[11]\n\nIt was his mother who gave Louis his belief in the absolute and divine power of his monarchical rule.[12]\n\nDuring his childhood, he was taken care of by the governesses Françoise de Lansac and Marie-Catherine de Senecey. In 1646, Nicolas V de Villeroy became the young king's tutor. Louis XIV became friends with Villeroy's young children, particularly François de Villeroy, and divided his time between the Palais-Royal and the nearby Hotel de Villeroy.\n\n# **Minority and the** *Fronde*\n\n#### **Issue** *more...*\n\nLouis, Grand Dauphin Marie Thérèse, Madame Royale Philippe Charles, Duke of Anjou *Illegitimate*: Marie Anne, Princess of Conti Louis, Count of Vermandois Louis Auguste, Duke of Maine Louis César, Count of Vexin Louise Françoise, Princess of Condé Louise Marie Anne, Mademoiselle de Tours Louise, Baroness of La Queue Françoise Marie, Duchess of Orléans Louis Alexandre, Count of Toulouse\n\n#### **Names**\n\nLouis-Dieudonné de France\n\n**House** Bourbon **Father** Louis XIII **Mother** Anne of Austria **Religion** Catholicism **Signature**\n\n### **Accession**\n\nSensing imminent death in the spring of 1643, King Louis XIII decided to put his affairs in order for his four-year-old son Louis XIV. 
Not trusting the judgement of his Spanish wife Queen Anne, who would normally have become the sole regent of France, the king decreed that a regency council would rule on his son's behalf, with Anne at its head.[13]\n\nLouis XIII died on 14 May 1643. On 18 May[14] Queen Anne had her husband's will annulled by the *Parlement de Paris*, a judicial body of nobles and high-ranking clergy, [15] and she became sole regent. She exiled her husband's ministers Chavigny and Bouthilier and appointed the Count of Brienne as her minister of foreign affairs.[16] Anne kept the direction of religious policy strongly in hand until her son's majority in 1661.\n\nShe appointed Cardinal Mazarin as chief minister, giving him the daily administration of policy. She continued the policies of her late husband and Cardinal Richelieu, despite their persecution of her, in order to win absolute authority in France and victory abroad for her son. Anne protected Mazarin by exiling her followers the Duke of Beaufort and Marie de Rohan, who conspired against him in 1643.[17]\n\nThe best example of Anne's loyalty to France was her treatment of one of Richelieu's men, the Chancellor Pierre Séguier. Séguier had brusquely interrogated Anne in 1637 (like a\n\nLouis XIV as a young child, unknown painter\n\n\"common criminal\", as she recalled) following the discovery that she was giving military secrets to her father in Spain, and Anne was virtually under house arrest for years. 
By keeping the effective Séguier in his post, Anne sacrificed her own feelings for the interests of France and her son Louis.", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia5.pdf" - }, - { - "text": "The French were nevertheless forced to retreat from most of the Dutch Republic, which deeply shocked Louis; he retreated to St Germain for a time, where no one, except a few intimates, was allowed to disturb him.[47] French military advantages allowed them however to hold their ground in Alsace and the Spanish Netherlands while retaking Franche-Comté. By 1678, mutual exhaustion led to the Treaty of Nijmegen, which was generally settled in France's favour and allowed Louis to intervene in the Scanian War. Despite the military defeat, his ally Sweden regained much of what it had lost under the 1679 treaties of Saint-Germain-en-Laye, Fontainebleau and Lund imposed on Denmark–Norway and Brandenburg.[48] Yet Louis's two primary goals, the destruction of the Dutch Republic and the conquest of the Spanish Netherlands, had failed.[49]\n\nLouis was at the height of his power, but at the cost of uniting his opponents; this increased as he continued his expansion. In 1679, he dismissed his foreign minister Simon Arnauld, marquis de Pomponne, because he was seen as having compromised too much with the allies. Louis maintained the strength of his army, but in his next series of territorial claims avoided using military force alone. Rather, he combined it with legal pretexts in his efforts to augment the boundaries of his kingdom. Contemporary treaties were intentionally phrased ambiguously. Louis established the Chambers of Reunion to determine the full extent of his rights and obligations under those treaties.\n\nCities and territories, such as Luxembourg and Casale, were prized for their strategic positions on the frontier and access to important waterways. 
Louis also sought Strasbourg, an important strategic crossing on the left bank of the Rhine and theretofore a Free Imperial City of the Holy Roman Empire, annexing it and other territories in 1681. Although a part of Alsace, Strasbourg was not part of Habsburg-ruled Alsace and was thus not ceded to France in the Peace of Westphalia.\n\nFollowing these annexations, Spain declared war, precipitating the War of the Reunions. However, the Spanish were rapidly defeated because the Emperor (distracted by the Great Turkish War) abandoned them, and the Dutch only supported them minimally. By the Truce of Ratisbon, in 1684, Spain was forced to acquiesce in the French occupation of most of the conquered territories, for 20 years.[50]\n\nLouis's policy of the *Réunions* may have raised France to its greatest size and power during his reign, but it alienated much of Europe. This poor public opinion was compounded by French actions off the Barbary Coast and at Genoa. First, Louis had\n\nAlgiers and Tripoli, two Barbary pirate strongholds, bombarded to obtain a favourable treaty and the liberation of Christian slaves. Next, in 1684, a punitive mission was launched against Genoa in retaliation for its support for Spain in previous wars. Although the Genoese submitted, and the Doge led an official mission of apology to Versailles, France gained a reputation for brutality and arrogance. European apprehension at growing French might and the realisation of the extent of the dragonnades' effect (discussed below) led many states to abandon their alliances with France.[51] Accordingly, by the late 1680s, France became increasingly isolated in Europe.\n\n### **Non-European relations and the colonies**\n\nFrench colonies multiplied in Africa, the Americas, and Asia during Louis's reign, and French explorers made important discoveries in North America. In 1673, Louis Jolliet and Jacques Marquette discovered the Mississippi River. 
In 1682, René-Robert Cavelier, Sieur de La Salle, followed the Mississippi to the Gulf of Mexico and claimed the vast Mississippi basin in Louis's name, calling it *Louisiane*. French trading posts were also established in India, at Chandernagore and Pondicherry, and in the Indian Ocean at Île Bourbon. Throughout these regions, Louis and Colbert embarked on an extensive program of architecture and urbanism meant to reflect the styles of Versailles and Paris and the 'gloire' of the realm.[52]\n\nMeanwhile, diplomatic relations were initiated with distant countries. In 1669, Suleiman Aga led an Ottoman embassy to revive the old Franco-Ottoman alliance. [53] Then, in 1682,\n\nafter the reception of the Moroccan embassy of Mohammed Tenim in France, Moulay Ismail, Sultan of Morocco, allowed French consular and commercial establishments in his country. [54] In 1699, Louis once again received a Moroccan ambassador, Abdallah bin Aisha, and in 1715, he received a Persian embassy led by Mohammad Reza Beg.\n\nFrom farther afield, Siam dispatched an embassy in 1684, reciprocated by the French magnificently the next year under Alexandre, Chevalier de Chaumont. This, in turn, was succeeded by another Siamese embassy under Kosa Pan, superbly received at Versailles in 1686. Louis then sent another embassy in 1687, under Simon de la Loubère, and French influence grew at the\n\nThe Persian embassy to Louis XIV sent by Soltan Hoseyn in 1715. *Ambassade de Perse auprès de*\n\n*Louis XIV*, studio of Antoine Coypel.\n\n1674\").", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia5.pdf" - }, - { - "text": "illegitimate son Louis-Auguste de Bourbon, Duke of Maine. [129] Orléans, however, had Louis's will annulled by the *Parlement of Paris* after his death and made himself sole regent. 
He stripped Maine and his brother, Louis-Alexandre, Count of Toulouse, of the rank of Prince of the Blood, which Louis had granted them, and significantly reduced Maine's power and privileges.[130]\n\n#### **Line of succession in 1715**\n\nLine of succession to the French throne upon the death of Louis XIV in 1715. Louis XIV's only surviving legitimate grandson, Philip V, was not included in the line of succession due to having renounced the French throne after the war of the Spanish Succession, which lasted for 13 years after the death of Charles II of Spain in 1700.[131]\n\n```\nLouis XIII (1601–1643)\n Louis XIV (1638–1715)\n Louis, Grand Dauphin (1661–1711)\n Louis, Duke of Burgundy (1682–1712)\n Louis, Duke of Brittany (1707–1712)\n (1) Louis, Duke of Anjou (1710–1774)\n Philip V of Spain (1683–1746)\n Charles, Duke of Berry (1686–1714)\nPhilippe I, Duke of Orléans (1640–1701)\n (2) Philippe II, Duke of Orléans (1674–1723)\n (3) Louis, Duke of Chartres (1703–1752)\n```\nFurther down the French line of succession in 1715 was the House of Condé, followed by the House of Conti (a cadet branch of the House of Condé). Both of these royal houses were descended in the male line from Henri II, Prince of Condé, a second cousin of French King Louis XIII (the father of Louis XIV) in the male line.\n\n## **Legacy**\n\n#### **Reputation**\n\nAccording to Philippe de Courcillon's *Journal*, Louis on his deathbed advised his heir with these words:\n\nDo not follow the bad example which I have set you; I have often undertaken war too lightly and have sustained it for vanity. Do not imitate me, but be a peaceful prince, and may you apply yourself principally to the alleviation of the burdens of your subjects.[132]\n\nSome historians point out that it was a customary demonstration of piety in those days to exaggerate one's sins. Thus they do not place much emphasis on Louis's deathbed declarations in assessing his accomplishments. 
Rather, they focus on military and diplomatic successes, such as how he placed a French prince on the Spanish throne. This, they contend, ended the threat of an aggressive Spain that historically interfered in domestic French politics. These historians also emphasise the effect of Louis's wars in expanding France's boundaries and creating more defensible frontiers that preserved France from invasion until the Revolution.[132]\n\nArguably, Louis also applied himself indirectly to \"the alleviation of the burdens of [his] subjects.\" For example, he patronised the arts, encouraged industry, fostered trade and commerce, and sponsored the founding of an overseas empire. Moreover, the significant reduction in civil wars and aristocratic rebellions during his reign are seen by these\n\nTerritorial expansion of France under Louis XIV (1643–1715) is depicted in orange.\n\nhistorians as the result of Louis's consolidation of royal authority over feudal elites. In their analysis, his early reforms centralised France and marked the birth of the modern French state. They regard the political and military victories as well as numerous cultural achievements as how Louis helped raise France to a preeminent position in Europe.[133] Europe came to admire France for its military and cultural successes, power, and sophistication. Europeans generally began to emulate French manners, values, goods, and deportment. French became the universal language of the European elite.\n\nLouis's detractors have argued that his considerable foreign, military and domestic expenditure impoverished and bankrupted France. His supporters, however, distinguish the state, which was impoverished, from France, which was not. As supporting evidence, they cite the literature of the time, such as the social commentary in Montesquieu's *Persian Letters*. 
[134]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia5.pdf" - }, - { - "text": "In July 1695, the city of Namur, occupied for three years by the French, was besieged by an allied army led by William III. Louis XIV ordered the surprise destruction of a Flemish city to divert the attention of these troops. This led to the bombardment of Brussels, in which more than 4,000 buildings were destroyed, including the entire city centre. The strategy failed, as Namur fell three weeks later, but harmed Louis XIV's reputation: a century later, Napoleon deemed the bombardment \"as barbarous as it was useless\".[85]\n\nPeace was broached by Sweden in 1690. By 1692, both sides evidently wanted peace, and secret bilateral talks began, but to no avail.[86] Louis tried to break up the alliance against him by dealing with individual opponents but did not achieve his aim until 1696 when the Savoyards agreed to the Treaty of Turin and switched sides. Thereafter, members of the League of Augsburg rushed to the peace table, and negotiations for a general peace began in earnest, culminating in the Peace of Ryswick of 1697.[87]\n\nMarshal de Luxembourg\n\n#### **Peace of Ryswick**\n\nThe Peace of Ryswick ended the War of the League of Augsburg and disbanded the Grand Alliance. By manipulating their rivalries and suspicions, Louis divided his enemies and broke their power.\n\nThe treaty yielded many benefits for France. Louis secured permanent French sovereignty over all of Alsace, including Strasbourg, and established the Rhine as the Franco-German border (as it is to this day). Pondichéry and Acadia were returned to France, and Louis's *de facto* possession of Saint-Domingue was recognised as lawful. However, he returned Catalonia and most of the Reunions.\n\nFrench military superiority might have allowed him to press for more advantageous terms. 
Thus, his generosity to Spain with regard to Catalonia has been read as a concession to foster pro-French sentiment and may ultimately have induced King Charles II to name Louis's grandson Philip, Duke of Anjou, heir to the Spanish throne.[88] In exchange for financial compensation, France renounced its interests in the Electorate of Cologne and the Palatinate. Lorraine, which had been occupied by the French since 1670, was returned to its rightful Duke Leopold, albeit with a right of way to the French military. William and Mary were recognised as joint sovereigns of the British Isles, and Louis withdrew support for James II. The Dutch were given the right to garrison forts in the Spanish Netherlands that acted as a protective barrier against possible French aggression. Though in some respects the Treaty of Ryswick may appear a diplomatic defeat for Louis since he failed to place client rulers in control of the Palatinate or the Electorate of Cologne, he did fulfil many of the aims laid down in his 1688 ultimatum.[89] In any case, peace in 1697 was desirable to Louis, since France was exhausted from the costs of the war.\n\n## **War of the Spanish Succession**\n\n#### **Causes and build-up to the war**\n\nBy the time of the Peace of Ryswick, the Spanish succession had been a source of concern to European leaders for well over forty years. King Charles II ruled a vast empire comprising Spain, Naples, Sicily, Milan, the Spanish Netherlands, and numerous Spanish colonies. He produced no children, however, and consequently had no direct heirs.\n\nThe principal claimants to the throne of Spain belonged to the ruling families of France and Austria. The French claim derived from Louis XIV's mother Anne of Austria (the older sister of Philip IV of Spain) and his wife Maria Theresa (Philip IV's eldest daughter). Based on the laws of primogeniture, France had the better claim as it originated from the eldest daughters in two generations. 
However, their renunciation of succession rights complicated matters. In the case of Maria Theresa, nonetheless, the renunciation was considered null and void owing to Spain's breach of her marriage contract with Louis. In contrast, no renunciations tainted the claims of Emperor Leopold I's son Charles, Archduke of Austria, who was a grandson of Philip III's youngest daughter Maria Anna. The English and Dutch feared that a French or Austrian-born Spanish king would threaten the balance of power and thus preferred the Bavarian Prince Joseph Ferdinand, a grandson of Leopold I through his first wife Margaret Theresa of Spain (the younger daughter of Philip IV).\n\nIn an attempt to avoid war, Louis signed the Treaty of the Hague with William III of England in 1698. This agreement divided Spain's Italian territories between Louis's son *le Grand Dauphin* and Archduke Charles, with the rest of the empire awarded to Joseph Ferdinand. William III consented to permitting the Dauphin's new territories to become part of France when the latter", - "page_start": 12, - "page_end": 12, - "source_file": "wikipedia5.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia5.pdf", - "query": "What was one of Louis XIV's most ill-famed decrees?", - "target_page": 6, - "target_passage": "One of Louis's more infamous decrees was the Grande Ordonnance sur les Colonies of 1685, the Code Noir (black code)", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "Alternatively, Louis's critics attribute the social upheaval culminating in the French Revolution to his failure to reform French institutions while the monarchy was still secure. Other scholars counter that there was little reason to reform institutions that largely worked well under Louis. 
They also maintain that events occurring almost 80 years after his death were not reasonably foreseeable to Louis and that in any case, his successors had sufficient time to initiate reforms of their own.[135]\n\nLouis has often been criticised for his vanity. The memoirist Saint-Simon, who claimed that Louis slighted him, criticised him thus:\n\n> There was nothing he liked so much as flattery, or, to put it more plainly, adulation; the coarser and clumsier it was, the more he relished it.\n\nFor his part, Voltaire saw Louis's vanity as the cause for his bellicosity:\n\nRoyal procession passing the Pont-Neuf under Louis XIV\n\nIt is certain that he passionately wanted glory, rather than the conquests themselves. In the acquisition of Alsace and half of Flanders, and of all of Franche-Comté, what he really liked was the name he made for himself.[136]\n\nNonetheless, Louis has also received praise. The anti-Bourbon Napoleon described him not only as \"a great king\", but also as \"the only King of France worthy of the name\".[137] Leibniz, the German Protestant philosopher, commended him as \"one of the greatest kings that ever was\".[138] And Lord Acton admired him as \"by far the ablest man who was born in modern times on the steps of a throne\".[139] The historian and philosopher Voltaire wrote: \"His name can never be pronounced without respect and without summoning the image of an eternally memorable age\".[140] Voltaire's history, *The Age of Louis XIV*, named Louis's reign as not only one of the four great ages in which reason and culture flourished, but the greatest ever. [141][142]\n\n### **Quotes**\n\nNumerous quotes have been attributed to Louis XIV by legend.\n\nThe well-known \"I am the state\" (*\"L'État, c'est moi.\"*) was reported from at least the late 18th century. [143] It was widely repeated but also denounced as apocryphal by the early 19th century. 
[144][b][145]", - "page_start": 21, - "page_end": 21, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Siamese embassy of King Narai to Louis XIV in 1686, led by Kosa Pan. Engraving by Nicolas Larmessin.\n\n**Centralisation of power**\n\nPortrait of Louis XIV (gray pastel on paper by Charles Le Brun, 1667, Louvre Museum)\n\nSiamese court, which granted Mergui as a naval base to France. However, the death of Narai, King of Ayutthaya, the execution of his pro-French minister Constantine Phaulkon, and the siege of Bangkok in 1688 ended this era of French influence.[55]\n\nFrance also attempted to participate actively in Jesuit missions to China. To break the Portuguese dominance there, Louis sent Jesuit missionaries to the court of the Kangxi Emperor in 1685: Jean de Fontaney, Joachim Bouvet, Jean-François Gerbillon, Louis Le Comte, and Claude de Visdelou. [56] Louis also received a Chinese Jesuit, Michael Shen Fu-Tsung, at Versailles in 1684.[57] Furthermore, Louis's librarian and translator Arcadio Huang was Chinese.[58][59]\n\n# **Height of power**\n\nBy the early 1680s, Louis had greatly augmented French influence in the world. Domestically, he successfully increased the influence of the crown and its authority over the church and aristocracy, thus consolidating absolute monarchy in France.\n\nLouis initially supported traditional Gallicanism, which limited papal authority in France, and convened an Assembly of the French clergy in November 1681. Before its dissolution eight months later, the Assembly had accepted the Declaration of the Clergy of France, which increased royal authority at the expense of papal power. Without royal approval, bishops could not leave France, and appeals could not be made to the pope. Additionally, government officials could not be excommunicated for acts committed in pursuance of their duties. Although the king could not make ecclesiastical law, all papal regulations without royal assent were invalid in France. 
Unsurprisingly, the Pope repudiated the Declaration.[4]\n\nBy attaching nobles to his court at Versailles, Louis achieved increased control over the French aristocracy. According to historian Philip Mansel, the king turned the palace into:\n\nan irresistible combination of marriage market, employment agency and entertainment capital of aristocratic Europe, boasting the best theatre, opera, music, gambling, sex and (most important) hunting.[60]\n\nApartments were built to house those willing to pay court to the king.[61] However, the pensions and privileges necessary to live in a style appropriate to their rank were only possible by waiting constantly on Louis. [62] For this purpose, an elaborate court ritual was created wherein the king became the centre of attention and was observed throughout the day by the public. With his excellent memory, Louis could then see who attended him at court and who was absent, facilitating the subsequent distribution of favours and positions.\n\nLouis receiving the Doge of Genoa at Versailles on 15 May 1685, following the Bombardment of Genoa. (*Reparation faite à Louis XIV par le Doge de Gênes. 15 mai 1685* by Claude Guy Halle, Versailles.)\n\nAnother tool Louis used to control his nobility was censorship, which often involved the opening of letters to discern their author's opinion of the government and king.[61] Moreover, by entertaining, impressing, and domesticating them with extravagant luxury and other distractions, Louis not only cultivated public opinion of him, but he also ensured the aristocracy remained under his scrutiny.\n\nLouis's extravagance at Versailles extended far beyond the scope of elaborate court rituals. He took delivery of an African elephant as a gift from the king of Portugal.[63] He encouraged leading nobles to live at Versailles. 
This, along with the prohibition of private armies, prevented them from passing time on their own estates and in their regional power bases, from which they historically waged local wars and plotted resistance to royal authority. Louis thus compelled and seduced the old military aristocracy (the \"nobility of the sword\") into becoming his ceremonial courtiers, further weakening their power. In their place, he raised commoners or the more recently ennobled bureaucratic aristocracy (the \"nobility of the robe\"). He judged that royal authority thrived more surely by filling high executive and administrative positions with these men because they could be more easily dismissed than nobles of ancient lineage and entrenched influence. It is believed that Louis's policies were rooted in his", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia5.pdf" - }, - { - "text": "- The film, *Le Roi Danse* (2000; translated: *The King Dances*), directed by Gérard Corbiau, reveals Louis through the eyes of Jean-Baptiste Lully, his court musician.\n- Julian Sands portrayed Louis in Roland Jaffe's *Vatel* (2000).\n- Alan Rickman directed, co-wrote, and stars as Louis XIV in the film, *A Little Chaos*, which centres on construction in the gardens of Versaille, at the time immediately before and after the death of Queen Maria Theresa.\n- The 2016 film *The Death of Louis XIV*, directed by Albert Serra, is set during the last two weeks of Louis XIV's life before dying of gangrene, with the monarch played by Jean-Pierre Léaud.\n\n#### **Television**\n\n- Louis XIV is portrayed by Thierry Perkins-Lyautey in the British television film *Charles II: The Power and the Passion.*\n- The 15-year-old Louis XIV, as played by the Irish actor Robert Sheehan, is a major character of the short-lived historical fantasy series *Young Blades* from January to June 2005.\n- George Blagden portrays Louis XIV in the Canal+ series *Versailles* which aired for three seasons from 2015.\n\n#### 
**Musicals**\n\n- Emmanuel Moire portrayed Louis XIV in the 2005-07 Kamel Ouali musical Le Roi Soleil.\n# **Health and death**\n\nLouis XIV (seated) with his son *le Grand Dauphin* (to the left), his grandson Louis, Duke of Burgundy (to the right), his great-grandson Louis Duke of Anjou, and Madame de Ventadour, Anjou's governess, who commissioned this painting; busts of Henry IV and Louis XIII are in the background.\n\n*The Death of Louis XIV at the Palace of Versailles*, Thomas Jones Barker, 1835-1840\n\nDespite the image of a healthy and virile king that Louis sought to project, evidence exists to suggest that his health was not very good. He had many ailments: for example, symptoms of diabetes, as confirmed in reports of suppurating periostitis in 1678, dental abscesses in 1696, along with recurring boils, fainting spells, gout, dizziness, hot flushes, and headaches.\n\nFrom 1647 to 1711, the three chief physicians to the king (Antoine Vallot, Antoine d'Aquin, and Guy-Crescent Fagon) recorded all of his health problems in the *Journal de Santé du Roi* (*Journal of the King's Health*), a daily report of his health. On 18 November 1686, Louis underwent a painful operation for an anal fistula that was performed by the surgeon Charles Felix de Tassy, who prepared a specially shaped curved scalpel for the occasion. The wound took more than two months to heal.[124]\n\nLouis died of gangrene at Versailles on 1 September 1715, four days before his 77th birthday, after 72 years on the throne. Enduring much pain in his last days, he finally \"yielded up his soul without any effort, like a candle going out\", while reciting the psalm *Deus, in adjutorium me festina* (*O Lord, make haste to help me*).[125] His body was laid to rest in Saint-Denis Basilica outside Paris. 
It remained there undisturbed for about 80 years until revolutionaries exhumed and destroyed all of the remains found in the Basilica.[126] In 1848, at Nuneham House, a piece of Louis's mummified heart, taken from his tomb and kept in a silver locket by Lord Harcourt, Archbishop of York, was shown to the Dean of Westminster, William Buckland, who ate a part of it.[127]\n\nCardinal Armand Gaston Maximilien de Rohan gave Last Rites (confession, viaticum, and unction) to king Louis XIV. [128]\n\n### **Succession**\n\nLouis outlived most of his immediate legitimate family. His last surviving legitimate son, Louis, Dauphin of France, died in 1711. Barely a year later, the Duke of Burgundy, the eldest of the Dauphin's three sons and then heir-apparent to Louis, followed his father. Burgundy's elder son, Louis, Duke of Brittany, joined them a few weeks later. Thus, on his\n\ndeathbed, Louis's heir-apparent was his five-year-old great-grandson, Louis, Duke of Anjou, Burgundy's younger son.\n\nLouis foresaw an underaged successor and sought to restrict the power of his nephew Philip II, Duke of Orléans, who, as his closest surviving legitimate relative in France, would probably become regent to the prospective Louis XV. Accordingly, the king created a regency council as Louis XIII had in anticipation of Louis XIV's own minority, with some power vested in his", - "page_start": 19, - "page_end": 19, - "source_file": "wikipedia5.pdf" - }, - { - "text": "**Louis XIV** (Louis-Dieudonné; 5 September 1638 – 1 September 1715), also known as **Louis the Great** (*Louis le Grand*) or the **Sun King** (*le Roi Soleil*), was King of France from 1643 until his death in 1715. His verified reign of 72 years and 110 days is the longest of any sovereign. 
[1][a] An emblematic character of the Age of Absolutism in Europe, [3] Louis XIV's legacy is widely characterized by French colonial expansion, the conclusion of Eighty Years' War involving the Habsburgs, and his architectural bequest, marked by commissioned works of art and buildings. His pageantry, opulent lifestyle and ornate cultivated image earned him enduring admiration. Louis XIV raised France to be the exemplar nation-state of the early modern period, and established a cultural prestige which lasted through the subsequent centuries, and continues today.\n\nLouis began his personal rule of France in 1661, after the death of his chief minister Cardinal Mazarin, when the King famously declared that he would take over the job himself.[4] An adherent of the divine right of kings, Louis continued his predecessors' work of creating a centralised state governed from the capital. He sought to eliminate the remnants of feudalism persisting in parts of France; by compelling many members of the nobility to reside at his lavish Palace of Versailles, he succeeded in pacifying the aristocracy, many of whom had participated in the Fronde rebellions during his minority. He thus became one of the most powerful French monarchs and consolidated a system of absolute monarchy in France that endured until the French Revolution. Louis also enforced uniformity of religion under the Catholic Church. His revocation of the Edict of Nantes abolished the rights of the Huguenot Protestant minority and subjected them to a wave of dragonnades, effectively forcing Huguenots to emigrate or convert, virtually destroying the French Protestant community.\n\nDuring Louis's long reign, France emerged as the leading European power and regularly made war. 
A conflict with Spain marked his entire childhood, while during his personal rule, Louis fought three major continental conflicts, each against powerful foreign alliances: the Franco-Dutch War, the Nine Years' War, and the War of the Spanish Succession. In addition, France contested shorter wars such as the War of Devolution and the War of the Reunions. Warfare defined Louis's foreign policy, impelled by his personal ambition for glory and power: \"a mix of commerce, revenge, and pique\".[5] His wars strained France's resources to the utmost, while in peacetime he concentrated on preparing for the next war. He taught his diplomats that their job was to create tactical and strategic advantages for the French military. [6] Upon his death in 1715, Louis XIV left his great-grandson and successor, Louis XV, a powerful but war-weary kingdom, in major debt after the War of the Spanish Succession that had raged on since 1701.\n\nSome of his other notable achievements include the construction of the Canal du Midi, the patronage of artists, and the founding of the French Academy of Sciences.\n\n# **Early years**\n\nPortrait by Hyacinthe Rigaud , 1701\n\n| | King of France (more...) |\n| --- | --- |\n| Reign | 14 May 1643 – 1 September |\n| | 1715 |\n| Coronation | 7 June 1654 |\n| | Reims Cathedral |\n| Predecessor | Louis XIII |\n| Successor | Louis XV |\n| Regent | Anne of Austria (1643–1651) |\n| Chief ministers See list | |\n| | Cardinal Mazarin |\n| | (1643–1661) |\n| | Jean-Baptiste Colbert |\n| | (1661–1683) |\n| | The Marquis of Louvois |\n| | (1683–1691) |\n| Born | 5 September 1638 |\n| | Château de Saint-Germain |\n| | en-Laye, Saint-Germain-en |\n| | Laye, France |\n| Died | 1 September 1715 (aged 76) |\n| | Palace of Versailles, |\n| | Versailles, France |\n| Burial | 9 September 1715 |\n| | Basilica of Saint-Denis |\n| Spouses | Maria Theresa of Spain |\n| | (m. 
1660; died 1683) |\n| | Françoise d'Aubigné, |\n| | Marquise de Maintenon |\n| | (private) |\n| | (m. 1683) |", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia5.pdf" - }, - { - "text": "- Félix, Joël. \"'The most difficult financial matter that has ever presented itself': paper money and the financing of warfare under Louis XIV.\" *Financial History Review* 25.1 (2018): 43–70 online (http://centaur.reading.ac.uk/72452/ 2/The%20most%20difficult%20financial%20matter%20FH.pdf) Archived (https://web.archive.org/web/2021022610 4833/http://centaur.reading.ac.uk/72452/2/The%20most%20difficult%20financial%20matter%20FH.pdf) 26 February 2021 at the Wayback Machine.\n- Goubert, Pierre (197). *Louis XIV and Twenty Million Frenchmen*. social history from Annales School. ISBN 978-0- 3947-1751-7.\n- Jones, Colin. *The Great Nation: France from Louis XIV to Napoleon (1715–1799)* (2002)\n- Klaits, Joseph. *Printed propaganda under Louis XIV: absolute monarchy and public opinion* (Princeton University Press, 2015).\n- Le Roy Ladurie, Emmanuel. *The Ancien Régime: A History of France 1610–1774* (1999), survey by leader of the Annales School ISBN 0631211969\n- Lewis, W. H. *The Splendid Century: Life in the France of Louis XIV* (1953) ISBN 0881339210\n- Mitford, Nancy (1966). *The Sun King: Louis XIV at Versailles* (2012 ed.). New York Review of Books. ISBN 978-1- 5901-7491-3.\n\nPrest, Julia, and Guy Rowlands, eds. *The Third Reign of Louis XIV, c. 1682–1715* (Taylor & Francis, 2016).\n\n- Rothkrug, Lionel. *Opposition to Louis XIV: The Political and Social Origins of French Enlightenment* (Princeton University Press, 2015).\n- Rowlands, Guy. *The Dynastic State and the Army under Louis XIV: Royal Service and Private Interest, 1661–1701* (2002)\n- Rubin, David Lee, ed. *Sun King: The Ascendancy of French Culture during the Reign of Louis XIV*. 
Washington: Folger Books and Cranbury: Associated University Presses, 1992.\n- Rule, John C., *Louis XIV and the craft of kingship* 1969.\n- Shennan, J. H. *Louis XIV* (1993)\n- Thompson, Ian. *The Sun King's Garden: Louis XIV, André Le Nôtre And the Creation of the Gardens of Versailles*. London: Bloomsbury Publishing, 2006 ISBN 1-5823-4631-3\n- Treasure, Geoffrey. *The Making of Modern Europe, 1648–1780* (3rd ed. 2003). pp. 230–296.\n- Wilkinson, Rich. *Louis XIV* (Routledge, 2007). ISBN 978-0-4153-5815-6\n- Cénat, Jean-Philippe. *Le roi stratège: Louis XIV et la direction de la guerre, 1661–1715* (Presses universitaires de Rennes, 2019).\n- Croix, Alain. \"Vingt millions de Français et Louis XIV.\" *Revue dhistoire moderne contemporaine* 2 (2020): 27–46.\n- Engerand, Fernand, editor (1899). (in French) *Inventaire des tableaux du Roy rédigé en 1709 et 1710 par Nicolas Bailly*. Paris: Ernest Leroux. Copy (http://gallica.bnf.fr/ark:/12148/bpt6k6323734m/f11.image) Archived (https://we b.archive.org/web/20160307153902/http://gallica.bnf.fr/ark:/12148/bpt6k6323734m/f11.image) 7 March 2016 at the Wayback Machine at Gallica.\n\n# **External links**\n\n- Ranum, Orest, ed. (1972). *The Century of Louis XIV* (http://www.palgrave.com/in/book/9781349004997). Archived (https://web.archive.org/web/20180207182952/https://www.palgrave.com/in/book/9781349004997) from the original on 7 February 2018. Retrieved 7 July 2017. 
{{cite book}}: |work= ignored (help)\n- Works by or about Louis XIV (https://archive.org/search.php?query=%28+%22Louis+XIV%22+OR+%22Louis+the +Great%22+OR+%22Sun+King%22+OR+%28%221638-1715%22+AND+Louis%29+%29) at the Internet Archive\n- Works by Louis XIV (https://librivox.org/author/9631) at LibriVox (public domain audiobooks)\n- Louis XIV (http://www.history.com/topics/louis-xiv) Archived (https://web.archive.org/web/20170622232619/http://w ww.history.com/topics/louis-xiv) 22 June 2017 at the Wayback Machine at *History.com*\n- Full text of marriage contract (https://web.archive.org/web/20070616071522/http://www.smae.diplomatie.gouv.fr/ch oiseul/ressource/pdf/D16590004.pdf), France National Archives transcription (in French)\n- *Le Siècle de Louis XIV* by Voltaire, 1751, hosted by French Wikisource\n\nRetrieved from \"https://en.wikipedia.org/w/index.php?title=Louis_XIV&oldid=1267574624\"", - "page_start": 33, - "page_end": 33, - "source_file": "wikipedia5.pdf" - }, - { - "text": "illegitimate son Louis-Auguste de Bourbon, Duke of Maine. [129] Orléans, however, had Louis's will annulled by the *Parlement of Paris* after his death and made himself sole regent. He stripped Maine and his brother, Louis-Alexandre, Count of Toulouse, of the rank of Prince of the Blood, which Louis had granted them, and significantly reduced Maine's power and privileges.[130]\n\n#### **Line of succession in 1715**\n\nLine of succession to the French throne upon the death of Louis XIV in 1715. 
Louis XIV's only surviving legitimate grandson, Philip V, was not included in the line of succession due to having renounced the French throne after the war of the Spanish Succession, which lasted for 13 years after the death of Charles II of Spain in 1700.[131]\n\n```\nLouis XIII (1601–1643)\n Louis XIV (1638–1715)\n Louis, Grand Dauphin (1661–1711)\n Louis, Duke of Burgundy (1682–1712)\n Louis, Duke of Brittany (1707–1712)\n (1) Louis, Duke of Anjou (1710–1774)\n Philip V of Spain (1683–1746)\n Charles, Duke of Berry (1686–1714)\nPhilippe I, Duke of Orléans (1640–1701)\n (2) Philippe II, Duke of Orléans (1674–1723)\n (3) Louis, Duke of Chartres (1703–1752)\n```\nFurther down the French line of succession in 1715 was the House of Condé, followed by the House of Conti (a cadet branch of the House of Condé). Both of these royal houses were descended in the male line from Henri II, Prince of Condé, a second cousin of French King Louis XIII (the father of Louis XIV) in the male line.\n\n## **Legacy**\n\n#### **Reputation**\n\nAccording to Philippe de Courcillon's *Journal*, Louis on his deathbed advised his heir with these words:\n\nDo not follow the bad example which I have set you; I have often undertaken war too lightly and have sustained it for vanity. Do not imitate me, but be a peaceful prince, and may you apply yourself principally to the alleviation of the burdens of your subjects.[132]\n\nSome historians point out that it was a customary demonstration of piety in those days to exaggerate one's sins. Thus they do not place much emphasis on Louis's deathbed declarations in assessing his accomplishments. Rather, they focus on military and diplomatic successes, such as how he placed a French prince on the Spanish throne. This, they contend, ended the threat of an aggressive Spain that historically interfered in domestic French politics. 
These historians also emphasise the effect of Louis's wars in expanding France's boundaries and creating more defensible frontiers that preserved France from invasion until the Revolution.[132]\n\nArguably, Louis also applied himself indirectly to \"the alleviation of the burdens of [his] subjects.\" For example, he patronised the arts, encouraged industry, fostered trade and commerce, and sponsored the founding of an overseas empire. Moreover, the significant reduction in civil wars and aristocratic rebellions during his reign are seen by these\n\nTerritorial expansion of France under Louis XIV (1643–1715) is depicted in orange.\n\nhistorians as the result of Louis's consolidation of royal authority over feudal elites. In their analysis, his early reforms centralised France and marked the birth of the modern French state. They regard the political and military victories as well as numerous cultural achievements as how Louis helped raise France to a preeminent position in Europe.[133] Europe came to admire France for its military and cultural successes, power, and sophistication. Europeans generally began to emulate French manners, values, goods, and deportment. French became the universal language of the European elite.\n\nLouis's detractors have argued that his considerable foreign, military and domestic expenditure impoverished and bankrupted France. His supporters, however, distinguish the state, which was impoverished, from France, which was not. As supporting evidence, they cite the literature of the time, such as the social commentary in Montesquieu's *Persian Letters*. 
[134]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia5.pdf" - }, - { - "text": "important both for its role in ending the war between France and Spain, because many of the claims and objectives of Louis's foreign policy for the next 50 years would be based upon this marriage, and because it was through this marriage that the Spanish throne would ultimately be delivered to the House of Bourbon.[32]\n\n# **Personal reign and reforms**\n\n### **Coming of age and early reforms**\n\nLouis XIV was declared to have reached the age of majority on the 7th of September 1651. On the death of Mazarin, in March 1661, Louis personally took the reins of government and astonished his court by declaring that he would rule without a chief minister: \"Up to this moment I have been pleased to entrust the government of my affairs to the late Cardinal. It is now time that I govern them myself. You [secretaries and ministers] will assist me with your counsels when I ask for them. I request and order you to seal no orders except by my command . . . I order you not to sign anything, not even a passport . . . without my command; to render account to me personally each day and to favor no one\".[33] Capitalizing on the widespread public yearning for peace and order after decades of foreign and civil strife, the young king consolidated central political authority at the expense of the feudal aristocracy. Praising his ability to choose and encourage men of talent, the historian Chateaubriand noted: \"it is the voice of genius of all kinds which sounds from the tomb of Louis\".[34]\n\nLouis began his personal reign with administrative and fiscal reforms. In 1661, the treasury verged on bankruptcy. To rectify the situation, Louis chose Jean-Baptiste Colbert as Controller-General of Finances in 1665. However, Louis first had to neutralize Nicolas Fouquet, the powerful Superintendent of Finances. 
Although Fouquet's financial indiscretions were not very different from Mazarin's before him or Colbert's after him, his ambition worried Louis. He lavishly entertained the king at the opulent château of Vaux-le-\n\nMonogram\n\nVicomte, flaunting a wealth which could hardly have accumulated except through embezzlement of government funds.\n\nFouquet appeared eager to succeed Mazarin and Richelieu in power, and he indiscreetly purchased and privately fortified the remote island of Belle Île. These acts sealed his doom. Fouquet was charged with embezzlement; the *Parlement* found him guilty and sentenced him to exile; and finally Louis altered the sentence to life imprisonment.\n\nFouquet's downfall gave Colbert a free hand to reduce the national debt through more efficient taxation. The principal taxes included the *aides* and *douanes* (both customs duties), the *gabelle* (salt tax), and the *taille* (land tax). The *taille* was reduced at first, and certain tax-collection contracts were auctioned instead of being sold privately to a favoured few. Financial officials were required to keep regular accounts, revising inventories and removing unauthorized exemptions: up to 1661 only 10 per cent of income from the royal domain reached the king. Reform had to overcome vested interests: the *taille* was collected by officers of the Crown who had purchased their post at a high price, and punishment of abuses necessarily lowered the value of the purchase. Nevertheless, Colbert achieved excellent results, with the deficit of 1661 turning into a surplus by 1666, with interest on the debt decreasing from 52 million to 24 million livres. The *taille* was reduced to 42 million in 1661 and 35 million in 1665, while revenue from indirect taxation\n\nMembers of the *Académie des sciences* with Louis in 1667; in the background appears the new Paris Observatory.\n\nprogressed from 26 million to 55 million. 
The revenues of the royal domain were raised from 80,000 livres in 1661 to 5.5 million in 1671. In 1661, the receipts were equivalent to 26 million British pounds, of which 10 million reached the treasury. The expenditure was around 18 million pounds, leaving a deficit of 8 million. In 1667, the net receipts had risen to 20 million pounds sterling, while expenditure had fallen to 11 million, leaving a surplus of 9 million pounds.\n\nMoney was the essential support of the reorganized and enlarged army, the panoply of Versailles, and the growing civil administration. Finance had always been the weakness of the French monarchy: tax collection was costly and inefficient; direct taxes dwindled as they passed through the hands of many intermediate officials; and indirect taxes were collected by private contractors called tax farmers who made a handsome profit. The state coffers leaked at every joint.\n\nThe main weakness arose from an old bargain between the French crown and nobility: the king might raise taxes on the nation without consent if only he exempted the nobility. Only the \"unprivileged\" classes paid direct taxes, which came to mean the peasants only, as most bourgeois finagled exemptions in one way or another. The system laid the whole burden of state expenses on the backs of the poor and powerless. After 1700, with the support of Louis's pious secret wife Madame de Maintenon, the king", - "page_start": 4, - "page_end": 4, - "source_file": "wikipedia5.pdf" - }, - { - "text": "was persuaded to change his fiscal policy. Though willing enough to tax the nobles, Louis feared the political concessions which they would demand in return. Only towards the close of his reign under the extreme exigency of war, was he able, for the first time in French history, to impose direct taxes on the aristocracy. 
This was a step toward equality before the law and toward sound public finance, though it was predictably diminished by concessions and exemptions won by the insistent efforts of nobles and bourgeois.[35]\n\nLouis and Colbert also had wide-ranging plans to grow French commerce and trade. Colbert's mercantilist administration established new industries and encouraged manufacturers and inventors, such as the Lyon silk manufacturers and the Gobelins tapestry manufactory. He invited manufacturers and artisans from all over Europe to France, such as Murano glassmakers, Swedish ironworkers, and Dutch shipbuilders. He aimed to decrease imports while increasing French exports, hence reducing the net outflow of precious metals from France.\n\nLouis instituted reforms in military administration through Michel le Tellier and his son François-Michel le Tellier, successive Marquis de Louvois. They helped to curb the\n\nindependent spirit of the nobility, imposing order on them at court and in the army. Gone were the days when generals protracted war at the frontiers while bickering over precedence and ignoring orders from the capital and the larger strategic picture, with the old military aristocracy (*noblesse d'épée*, nobility of the sword) monopolizing senior military positions and the higher ranks. Louvois modernized the army and reorganised it into a professional, disciplined, well-trained force. He was devoted to the soldiers' material well-being and morale, and even tried to direct campaigns.\n\n### **Relations with the major colonies**\n\nLouis's legal reforms were enacted in his numerous Great Ordinances. 
Prior to that, France was a patchwork of legal systems, with as many traditional legal regimes as there were provinces, and two co-existing legal systems—customary law in the north and Roman civil law in the south.[36] The *Grande Ordonnance de Procédure Civile* of 1667, the *Code Louis*, was a comprehensive legal code imposing a uniform regulation of civil procedure throughout the kingdom. Among other things, it prescribed baptismal, marriage and death records in the state's registers, not the church's, and it strictly regulated the right of the *Parlements* to remonstrate.[37] The *Code Louis* later became the basis for the Napoleonic code, which in turn inspired many modern legal codes.\n\nOne of Louis's more infamous decrees was the *Grande Ordonnance sur les Colonies* of 1685, the *Code Noir* (black code). Although it sanctioned slavery, it attempted to humanise the practice by prohibiting the separation of families. Additionally, in the colonies, only Roman Catholics could own slaves, and these had to be baptised.\n\nLouis ruled through a number of councils:\n\n- Conseil d'en haut (\"High Council\", concerning the most important matters of state)—composed of the king, the crown prince, the controller-general of finances, and the secretaries of state in charge of various departments. The members of that council were called ministers of state.\n- Conseil des dépêches (\"Council of Messages\", concerning notices and administrative reports from the provinces).\n- Conseil de Conscience (\"Council of Conscience\", concerning religious affairs and episcopal appointments).\n- Conseil royal des finances (\"Royal Council of Finances\") headed by the \"chef du conseil des finances\" (an honorary post in most cases)—this was one of the few posts in the council available to the high aristocracy. [38]\n\n# **Early wars in the Low Countries**\n\n### **Spain**\n\nThe death of Louis's maternal uncle King Philip IV of Spain in 1665 precipitated the War of Devolution. 
In 1660, Louis had married Philip IV's eldest daughter, Maria Theresa, as one of the provisions of the 1659 Treaty of the Pyrenees. [39] The marriage treaty specified that Maria Theresa was to renounce all claims to Spanish territory for herself and all her descendants.[39] Mazarin\n\nLouis and his family portrayed as Roman gods in a 1670 painting by Jean Nocret. L to R: Louis's aunt, Henriette-Marie; his brother, Philippe, duc d'Orléans; the Duke's daughter, Marie Louise d'Orléans, and wife, Henriette-Anne Stuart; the Queen-mother, Anne of Austria; three daughters of Gaston d'Orléans; Louis XIV; the Dauphin Louis; Queen Marie-Thérèse; *la Grande Mademoiselle*.", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Louis XIV was born on 5 September 1638 in the Château de Saint-Germain-en-Laye, to Louis XIII and Anne of Austria. He was named Louis Dieudonné (Louis the God-given)[7] and bore the traditional title of French heirs apparent: *Dauphin*. [8] At the time of his birth, his parents had been married for 23 years. His mother had experienced four stillbirths between 1619 and 1631. Leading contemporaries thus regarded him as a divine gift and his birth a miracle of God.[9]\n\nLouis's relationship with his mother was uncommonly affectionate for the time. Contemporaries and eyewitnesses claimed that the Queen would spend all her time with Louis.[10] Both were greatly interested in food and theatre, and it is highly likely that Louis developed these interests through his close relationship with his mother. This long-lasting and loving relationship can be evidenced by excerpts in Louis's journal entries, such as:\n\n> \"Nature was responsible for the first knots which tied me to my mother. 
But attachments formed later by shared qualities of the spirit are far more difficult to break than those formed merely by blood.\"[11]\n\nIt was his mother who gave Louis his belief in the absolute and divine power of his monarchical rule.[12]\n\nDuring his childhood, he was taken care of by the governesses Françoise de Lansac and Marie-Catherine de Senecey. In 1646, Nicolas V de Villeroy became the young king's tutor. Louis XIV became friends with Villeroy's young children, particularly François de Villeroy, and divided his time between the Palais-Royal and the nearby Hotel de Villeroy.\n\n# **Minority and the** *Fronde*\n\n#### **Issue** *more...*\n\nLouis, Grand Dauphin Marie Thérèse, Madame Royale Philippe Charles, Duke of Anjou *Illegitimate*: Marie Anne, Princess of Conti Louis, Count of Vermandois Louis Auguste, Duke of Maine Louis César, Count of Vexin Louise Françoise, Princess of Condé Louise Marie Anne, Mademoiselle de Tours Louise, Baroness of La Queue Françoise Marie, Duchess of Orléans Louis Alexandre, Count of Toulouse\n\n#### **Names**\n\nLouis-Dieudonné de France\n\n**House** Bourbon **Father** Louis XIII **Mother** Anne of Austria **Religion** Catholicism **Signature**\n\n### **Accession**\n\nSensing imminent death in the spring of 1643, King Louis XIII decided to put his affairs in order for his four-year-old son Louis XIV. Not trusting the judgement of his Spanish wife Queen Anne, who would normally have become the sole regent of France, the king decreed that a regency council would rule on his son's behalf, with Anne at its head.[13]\n\nLouis XIII died on 14 May 1643. On 18 May[14] Queen Anne had her husband's will annulled by the *Parlement de Paris*, a judicial body of nobles and high-ranking clergy, [15] and she became sole regent. 
She exiled her husband's ministers Chavigny and Bouthilier and appointed the Count of Brienne as her minister of foreign affairs.[16] Anne kept the direction of religious policy strongly in hand until her son's majority in 1661.\n\nShe appointed Cardinal Mazarin as chief minister, giving him the daily administration of policy. She continued the policies of her late husband and Cardinal Richelieu, despite their persecution of her, in order to win absolute authority in France and victory abroad for her son. Anne protected Mazarin by exiling her followers the Duke of Beaufort and Marie de Rohan, who conspired against him in 1643.[17]\n\nThe best example of Anne's loyalty to France was her treatment of one of Richelieu's men, the Chancellor Pierre Séguier. Séguier had brusquely interrogated Anne in 1637 (like a\n\nLouis XIV as a young child, unknown painter\n\n\"common criminal\", as she recalled) following the discovery that she was giving military secrets to her father in Spain, and Anne was virtually under house arrest for years. By keeping the effective Séguier in his post, Anne sacrificed her own feelings for the interests of France and her son Louis.", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Painting from 1667 depicting Louis as patron of the fine arts\n\nThe *Cour royale* and the *Cour de marbre* at Versailles\n\nfamous throughout Europe. Composers and musicians such as Jean-Baptiste Lully, Jacques Champion de Chambonnières, and François Couperin thrived. In 1661, Louis founded the Académie Royale de Danse, and in 1669, the Académie d'Opéra, important driving events in the evolution of ballet. He also attracted, supported and patronized such artists as André Charles Boulle, who revolutionised marquetry with his art of inlay, today known as \"Boulle work\". 
Always on the lookout for new talent, the king launched music competitions: in 1683, Michel-Richard de Lalande thus became deputy master of the Royal Chapel, composing his *Symphonies for the Soupers du Roy* along with 77 large scale *Grand Motets*.\n\nOver the course of four building campaigns, Louis converted a hunting lodge commissioned by Louis XIII into the spectacular Palace of Versailles. Except for the current Royal Chapel (built near the end of his reign), the palace achieved much of its current appearance after the third building campaign, which was followed by an official move of the royal court to Versailles on 6 May 1682. Versailles became a dazzling, aweinspiring setting for state affairs and the reception of foreign dignitaries. At Versailles, the king alone commanded attention.\n\nSeveral reasons have been suggested for the creation of the extravagant and stately palace, as well as the relocation of the monarchy's seat. The memoirist Saint-Simon speculated that Louis viewed Versailles as an isolated power centre where\n\ntreasonous cabals could be more readily discovered and foiled.[62] There has also been speculation that the revolt of the *Fronde* caused Louis to hate Paris, which he abandoned for a country retreat, but his sponsorship of many public works in Paris, such as the establishment of a police force and of street-lighting,[111] lend little credence to this theory. As a further example of his continued care for the capital, Louis constructed the *Hôtel des Invalides*, a military complex and home to this day for officers and soldiers rendered infirm either by injury or old age. While pharmacology was still quite rudimentary in his day, the *Invalides* pioneered new treatments and set new standards for hospice treatment. 
The conclusion of the Treaty of Aix-la-Chapelle in 1668 also induced Louis to demolish Paris's northern walls in 1670 and replace them with wide tree-lined boulevards.[112]\n\nBust of Louis XIV by Gianlorenzo Bernini\n\nLouis also renovated and improved the Louvre and other royal residences. Gian Lorenzo\n\nBernini was originally to plan additions to the Louvre; however, his plans would have meant the destruction of much of the existing structure, replacing it with an Italian summer villa in the centre of Paris. Bernini's plans were eventually shelved in favour of the elegant Louvre Colonnade designed by three Frenchmen: Louis Le Vau, Charles Le Brun, and Claude Perrault. With the relocation of the court to Versailles, the Louvre was given over to the arts and the public.[113] During his visit from Rome, Bernini also executed a renowned portrait bust of the king.\n\n# **Image and depiction**\n\nFew rulers in world history have commemorated themselves in as grand a manner as Louis.[114] He cultivated his image as the Sun King (*le Roi Soleil*), the centre of the universe \"without equal\". Louis used court ritual and the arts to validate and augment his control over France. With his support, Colbert established from the beginning of Louis's personal reign a centralised and institutionalised system for creating and perpetuating the royal image. The King was thus portrayed largely in majesty or at war, notably against Spain. This portrayal of the monarch was to be found in numerous media of artistic expression, such as painting, sculpture, theatre, dance, music, and the almanacs that diffused royal propaganda to the population at large.\n\n### **Evolution of royal portraiture**\n\nOver his lifetime, Louis commissioned numerous works of art to portray himself, among them over 300 formal portraits. The earliest portrayals of Louis already followed the pictorial conventions of the day in depicting the child king as the majestically royal incarnation of France. 
This idealisation of the monarch continued in later works, which avoided depictions of the effect of smallpox that Louis contracted in 1647. In the 1660s, Louis began to be shown as a Roman emperor, the god Apollo, or Alexander the Great, as can be seen in many works of Charles Le Brun, such as sculpture, paintings, and the decor of major monuments.", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia5.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia5.pdf", - "query": "What did Louis XIV do to avoid the Spanish War of Succession in 1698?", - "target_page": 13, - "target_passage": "In an attempt to avoid war, Louis signed the Treaty of the Hague with William III of England in 1698. This agreement divided Spain's Italian territories between Louis's son le Grand Dauphin and Archduke Charles, with the rest of the empire awarded to Joseph Ferdinand.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "illegitimate son Louis-Auguste de Bourbon, Duke of Maine. [129] Orléans, however, had Louis's will annulled by the *Parlement of Paris* after his death and made himself sole regent. He stripped Maine and his brother, Louis-Alexandre, Count of Toulouse, of the rank of Prince of the Blood, which Louis had granted them, and significantly reduced Maine's power and privileges.[130]\n\n#### **Line of succession in 1715**\n\nLine of succession to the French throne upon the death of Louis XIV in 1715. 
Louis XIV's only surviving legitimate grandson, Philip V, was not included in the line of succession due to having renounced the French throne after the war of the Spanish Succession, which lasted for 13 years after the death of Charles II of Spain in 1700.[131]\n\n```\nLouis XIII (1601–1643)\n Louis XIV (1638–1715)\n Louis, Grand Dauphin (1661–1711)\n Louis, Duke of Burgundy (1682–1712)\n Louis, Duke of Brittany (1707–1712)\n (1) Louis, Duke of Anjou (1710–1774)\n Philip V of Spain (1683–1746)\n Charles, Duke of Berry (1686–1714)\nPhilippe I, Duke of Orléans (1640–1701)\n (2) Philippe II, Duke of Orléans (1674–1723)\n (3) Louis, Duke of Chartres (1703–1752)\n```\nFurther down the French line of succession in 1715 was the House of Condé, followed by the House of Conti (a cadet branch of the House of Condé). Both of these royal houses were descended in the male line from Henri II, Prince of Condé, a second cousin of French King Louis XIII (the father of Louis XIV) in the male line.\n\n## **Legacy**\n\n#### **Reputation**\n\nAccording to Philippe de Courcillon's *Journal*, Louis on his deathbed advised his heir with these words:\n\nDo not follow the bad example which I have set you; I have often undertaken war too lightly and have sustained it for vanity. Do not imitate me, but be a peaceful prince, and may you apply yourself principally to the alleviation of the burdens of your subjects.[132]\n\nSome historians point out that it was a customary demonstration of piety in those days to exaggerate one's sins. Thus they do not place much emphasis on Louis's deathbed declarations in assessing his accomplishments. Rather, they focus on military and diplomatic successes, such as how he placed a French prince on the Spanish throne. This, they contend, ended the threat of an aggressive Spain that historically interfered in domestic French politics. 
These historians also emphasise the effect of Louis's wars in expanding France's boundaries and creating more defensible frontiers that preserved France from invasion until the Revolution.[132]\n\nArguably, Louis also applied himself indirectly to \"the alleviation of the burdens of [his] subjects.\" For example, he patronised the arts, encouraged industry, fostered trade and commerce, and sponsored the founding of an overseas empire. Moreover, the significant reduction in civil wars and aristocratic rebellions during his reign are seen by these\n\nTerritorial expansion of France under Louis XIV (1643–1715) is depicted in orange.\n\nhistorians as the result of Louis's consolidation of royal authority over feudal elites. In their analysis, his early reforms centralised France and marked the birth of the modern French state. They regard the political and military victories as well as numerous cultural achievements as how Louis helped raise France to a preeminent position in Europe.[133] Europe came to admire France for its military and cultural successes, power, and sophistication. Europeans generally began to emulate French manners, values, goods, and deportment. French became the universal language of the European elite.\n\nLouis's detractors have argued that his considerable foreign, military and domestic expenditure impoverished and bankrupted France. His supporters, however, distinguish the state, which was impoverished, from France, which was not. As supporting evidence, they cite the literature of the time, such as the social commentary in Montesquieu's *Persian Letters*. [134]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia5.pdf" - }, - { - "text": "**Louis XIV** (Louis-Dieudonné; 5 September 1638 – 1 September 1715), also known as **Louis the Great** (*Louis le Grand*) or the **Sun King** (*le Roi Soleil*), was King of France from 1643 until his death in 1715. His verified reign of 72 years and 110 days is the longest of any sovereign. 
[1][a] An emblematic character of the Age of Absolutism in Europe, [3] Louis XIV's legacy is widely characterized by French colonial expansion, the conclusion of Eighty Years' War involving the Habsburgs, and his architectural bequest, marked by commissioned works of art and buildings. His pageantry, opulent lifestyle and ornate cultivated image earned him enduring admiration. Louis XIV raised France to be the exemplar nation-state of the early modern period, and established a cultural prestige which lasted through the subsequent centuries, and continues today.\n\nLouis began his personal rule of France in 1661, after the death of his chief minister Cardinal Mazarin, when the King famously declared that he would take over the job himself.[4] An adherent of the divine right of kings, Louis continued his predecessors' work of creating a centralised state governed from the capital. He sought to eliminate the remnants of feudalism persisting in parts of France; by compelling many members of the nobility to reside at his lavish Palace of Versailles, he succeeded in pacifying the aristocracy, many of whom had participated in the Fronde rebellions during his minority. He thus became one of the most powerful French monarchs and consolidated a system of absolute monarchy in France that endured until the French Revolution. Louis also enforced uniformity of religion under the Catholic Church. His revocation of the Edict of Nantes abolished the rights of the Huguenot Protestant minority and subjected them to a wave of dragonnades, effectively forcing Huguenots to emigrate or convert, virtually destroying the French Protestant community.\n\nDuring Louis's long reign, France emerged as the leading European power and regularly made war. 
A conflict with Spain marked his entire childhood, while during his personal rule, Louis fought three major continental conflicts, each against powerful foreign alliances: the Franco-Dutch War, the Nine Years' War, and the War of the Spanish Succession. In addition, France contested shorter wars such as the War of Devolution and the War of the Reunions. Warfare defined Louis's foreign policy, impelled by his personal ambition for glory and power: \"a mix of commerce, revenge, and pique\".[5] His wars strained France's resources to the utmost, while in peacetime he concentrated on preparing for the next war. He taught his diplomats that their job was to create tactical and strategic advantages for the French military. [6] Upon his death in 1715, Louis XIV left his great-grandson and successor, Louis XV, a powerful but war-weary kingdom, in major debt after the War of the Spanish Succession that had raged on since 1701.\n\nSome of his other notable achievements include the construction of the Canal du Midi, the patronage of artists, and the founding of the French Academy of Sciences.\n\n# **Early years**\n\nPortrait by Hyacinthe Rigaud , 1701\n\n| | King of France (more...) |\n| --- | --- |\n| Reign | 14 May 1643 – 1 September |\n| | 1715 |\n| Coronation | 7 June 1654 |\n| | Reims Cathedral |\n| Predecessor | Louis XIII |\n| Successor | Louis XV |\n| Regent | Anne of Austria (1643–1651) |\n| Chief ministers See list | |\n| | Cardinal Mazarin |\n| | (1643–1661) |\n| | Jean-Baptiste Colbert |\n| | (1661–1683) |\n| | The Marquis of Louvois |\n| | (1683–1691) |\n| Born | 5 September 1638 |\n| | Château de Saint-Germain |\n| | en-Laye, Saint-Germain-en |\n| | Laye, France |\n| Died | 1 September 1715 (aged 76) |\n| | Palace of Versailles, |\n| | Versailles, France |\n| Burial | 9 September 1715 |\n| | Basilica of Saint-Denis |\n| Spouses | Maria Theresa of Spain |\n| | (m. 
1660; died 1683) |\n| | Françoise d'Aubigné, |\n| | Marquise de Maintenon |\n| | (private) |\n| | (m. 1683) |", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Louis XIV in 1670, engraved portrait by Robert Nanteuil\n\nand Lionne, however, made the renunciation conditional on the full payment of a Spanish dowry of 500,000 écus. [40] The dowry was never paid and would later play a part persuading his maternal first cousin Charles II of Spain to leave his empire to Philip, Duke of Anjou (later Philip V of Spain), the grandson of Louis XIV and Maria Theresa.\n\nThe War of Devolution did not focus on the payment of the dowry; rather, the lack of payment was what Louis XIV used as a pretext for nullifying Maria Theresa's renunciation of her claims, allowing the land to \"devolve\" to him. In Brabant (the location of the land in dispute), children of first marriages traditionally were not disadvantaged by their parents' remarriages and still inherited property. Louis's wife was Philip IV's daughter by\n\nhis first marriage, while the new king of Spain, Charles II, was his son by a subsequent marriage. Thus, Brabant allegedly \"devolved\" to Maria Theresa, justifying France to attack the Spanish Netherlands.\n\nThe future Philip V being introduced as King of Spain by his grandfather, Louis XIV\n\n#### **Relations with the Dutch**\n\nDuring the Eighty Years' War with Spain, France supported the Dutch Republic as part of a general policy of opposing Habsburg power. Johan de Witt, Dutch Grand Pensionary from 1653 to 1672, viewed this as crucial for Dutch security and a counterweight against his domestic Orangist opponents. Louis provided support in the 1665-1667 Second Anglo-Dutch War but used the opportunity to launch the War of Devolution in 1667. 
This captured Franche-Comté and much of the Spanish Netherlands; French expansion in this area was a direct threat to Dutch economic interests.[41]\n\nThe Battle of Tolhuis, Louis XIV crosses the Lower Rhine at Lobith on 12 June 1672; Rijksmuseum Amsterdam\n\nThe Dutch opened talks with Charles II of England on a common diplomatic front against France, leading to the Triple Alliance, between England, the Dutch and Sweden. The threat of an escalation and a secret treaty to divide Spanish possessions\n\nwith Emperor Leopold, the other major claimant to the throne of Spain, led Louis to relinquish many of his gains in the 1668 Treaty of Aix-la-Chapelle. [42]\n\nLouis placed little reliance on his agreement with Leopold and as it was now clear French and Dutch aims were in direct conflict, he decided to first defeat the Republic, then seize the Spanish Netherlands. This required breaking up the Triple Alliance; he paid Sweden to remain neutral and signed the 1670 Secret Treaty of Dover with Charles, an Anglo-French alliance against the Dutch Republic. In May 1672, France invaded the Republic, supported by Münster and the Electorate of Cologne. [43]\n\nLouis XIV, 1670, by Claude Lefèbvre\n\nRapid French advance led to a coup that toppled De Witt and brought William III to power. Leopold viewed French expansion into the Rhineland as an increasing threat, especially after they seized the strategic Duchy of Lorraine in 1670. The prospect of Dutch defeat led Leopold to an alliance with Brandenburg-Prussia on 23 June, followed by another with the Republic on 25th.[44] Although Brandenburg was forced out of the war by the June 1673 Treaty of Vossem, in August an anti-French alliance was formed by the Dutch, Spain, Emperor Leopold and the Duke of Lorraine. [45]\n\nThe French alliance was deeply unpopular in England, and only more so after the disappointing battles against Michiel de Ruyter's fleet. 
Charles II of England made peace with the Dutch in the February 1674 Treaty of Westminster. However, French armies held significant advantages over their opponents; an undivided command, talented generals like Turenne, Condé and Luxembourg and vastly superior logistics. Reforms introduced by Louvois, the Secretary of War, helped maintain large field armies that could be mobilised much more quickly, allowing them to mount offensives in early spring before their opponents were ready. [46]", - "page_start": 6, - "page_end": 6, - "source_file": "wikipedia5.pdf" - }, - { - "text": "In July 1695, the city of Namur, occupied for three years by the French, was besieged by an allied army led by William III. Louis XIV ordered the surprise destruction of a Flemish city to divert the attention of these troops. This led to the bombardment of Brussels, in which more than 4,000 buildings were destroyed, including the entire city centre. The strategy failed, as Namur fell three weeks later, but harmed Louis XIV's reputation: a century later, Napoleon deemed the bombardment \"as barbarous as it was useless\".[85]\n\nPeace was broached by Sweden in 1690. By 1692, both sides evidently wanted peace, and secret bilateral talks began, but to no avail.[86] Louis tried to break up the alliance against him by dealing with individual opponents but did not achieve his aim until 1696 when the Savoyards agreed to the Treaty of Turin and switched sides. Thereafter, members of the League of Augsburg rushed to the peace table, and negotiations for a general peace began in earnest, culminating in the Peace of Ryswick of 1697.[87]\n\nMarshal de Luxembourg\n\n#### **Peace of Ryswick**\n\nThe Peace of Ryswick ended the War of the League of Augsburg and disbanded the Grand Alliance. By manipulating their rivalries and suspicions, Louis divided his enemies and broke their power.\n\nThe treaty yielded many benefits for France. 
Louis secured permanent French sovereignty over all of Alsace, including Strasbourg, and established the Rhine as the Franco-German border (as it is to this day). Pondichéry and Acadia were returned to France, and Louis's *de facto* possession of Saint-Domingue was recognised as lawful. However, he returned Catalonia and most of the Reunions.\n\nFrench military superiority might have allowed him to press for more advantageous terms. Thus, his generosity to Spain with regard to Catalonia has been read as a concession to foster pro-French sentiment and may ultimately have induced King Charles II to name Louis's grandson Philip, Duke of Anjou, heir to the Spanish throne.[88] In exchange for financial compensation, France renounced its interests in the Electorate of Cologne and the Palatinate. Lorraine, which had been occupied by the French since 1670, was returned to its rightful Duke Leopold, albeit with a right of way to the French military. William and Mary were recognised as joint sovereigns of the British Isles, and Louis withdrew support for James II. The Dutch were given the right to garrison forts in the Spanish Netherlands that acted as a protective barrier against possible French aggression. Though in some respects the Treaty of Ryswick may appear a diplomatic defeat for Louis since he failed to place client rulers in control of the Palatinate or the Electorate of Cologne, he did fulfil many of the aims laid down in his 1688 ultimatum.[89] In any case, peace in 1697 was desirable to Louis, since France was exhausted from the costs of the war.\n\n## **War of the Spanish Succession**\n\n#### **Causes and build-up to the war**\n\nBy the time of the Peace of Ryswick, the Spanish succession had been a source of concern to European leaders for well over forty years. King Charles II ruled a vast empire comprising Spain, Naples, Sicily, Milan, the Spanish Netherlands, and numerous Spanish colonies. 
He produced no children, however, and consequently had no direct heirs.\n\nThe principal claimants to the throne of Spain belonged to the ruling families of France and Austria. The French claim derived from Louis XIV's mother Anne of Austria (the older sister of Philip IV of Spain) and his wife Maria Theresa (Philip IV's eldest daughter). Based on the laws of primogeniture, France had the better claim as it originated from the eldest daughters in two generations. However, their renunciation of succession rights complicated matters. In the case of Maria Theresa, nonetheless, the renunciation was considered null and void owing to Spain's breach of her marriage contract with Louis. In contrast, no renunciations tainted the claims of Emperor Leopold I's son Charles, Archduke of Austria, who was a grandson of Philip III's youngest daughter Maria Anna. The English and Dutch feared that a French or Austrian-born Spanish king would threaten the balance of power and thus preferred the Bavarian Prince Joseph Ferdinand, a grandson of Leopold I through his first wife Margaret Theresa of Spain (the younger daughter of Philip IV).\n\nIn an attempt to avoid war, Louis signed the Treaty of the Hague with William III of England in 1698. This agreement divided Spain's Italian territories between Louis's son *le Grand Dauphin* and Archduke Charles, with the rest of the empire awarded to Joseph Ferdinand. William III consented to permitting the Dauphin's new territories to become part of France when the latter", - "page_start": 12, - "page_end": 12, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Philip V of Spain\n\nsucceeded to his father's throne.[90] The signatories, however, omitted to consult the ruler of these lands, and Charles II was passionately opposed to the dismemberment of his empire. In 1699, he re-confirmed his 1693 will that named Joseph Ferdinand as his sole successor. [91]\n\nSix months later, Joseph Ferdinand died. 
Therefore, in 1700, Louis and William III concluded a fresh partitioning agreement, the Treaty of London. This allocated Spain, the Low Countries, and the Spanish colonies to the Archduke. The Dauphin would receive all of Spain's Italian territories.[92] Charles II acknowledged that his empire could only remain undivided by bequeathing it entirely to a Frenchman or an Austrian. Under pressure from his German wife, Maria Anna of Neuburg, Charles II named Archduke Charles as his sole heir.\n\n### **Acceptance of the will of Charles II and consequences**\n\nOn his deathbed in 1700, Charles II of Spain unexpectedly changed his will. The clear demonstration of French military superiority for many decades before this time, the pro-French faction at the court of Spain, and even Pope\n\nInnocent XII convinced him that France was more likely to preserve his empire intact. He thus offered the entire empire to the Dauphin's second son Philip, Duke of Anjou, provided it remained undivided. Anjou was not in the direct line of French succession, thus his accession would not cause a Franco-Spanish union.[92] If Anjou refused, the throne would be offered to his younger brother Charles, Duke of Berry. If the Duke of Berry declined it, it would go to Archduke Charles, then to the distantly related House of Savoy if Charles declined it.[93]\n\nLouis was confronted with a difficult choice. He could agree to a partition of the Spanish possessions and avoid a general war, or accept Charles II's will and alienate much of Europe. He may initially have been inclined to abide by the partition treaties, but the Dauphin's insistence persuaded him otherwise.[94] Moreover, Louis's foreign minister, Jean-Baptiste Colbert, marquis de Torcy, pointed out that war with the Emperor would almost certainly ensue whether Louis accepted the partition treaties or Charles II's will. 
Louis in 1701\n\nHe emphasised that, should it come to war, William III was unlikely to stand by France since he \"made a treaty to avoid war and did not intend to go to war to implement the treaty\".[91] Indeed, in the event of war, it might be preferable to be already in control of the disputed lands. Eventually, therefore, Louis decided to accept Charles II's will. Philip, Duke of Anjou, thus became Philip V, King of Spain.\n\nMost European rulers accepted Philip as king, some reluctantly. Depending on one's views of the war's inevitability, Louis acted reasonably or arrogantly. [95] He confirmed that Philip V retained his French rights despite his new Spanish position. Admittedly, he may only have been hypothesising a theoretical eventuality and not attempting a Franco-Spanish union. But his actions were certainly not read as disinterested. Moreover, Louis sent troops to the Spanish Netherlands to evict Dutch garrisons and secure Dutch recognition of Philip V. In 1701, Philip transferred the *asiento* (the right to supply slaves to Spanish colonies) to France, as a sign of the two nations' growing connections. As tensions mounted, Louis decided to acknowledge James Stuart, the son of James II, as King of England, Scotland and Ireland on the latter's death, infuriating William III. These actions enraged Britain and the Dutch Republic.[96] With the Holy Roman Emperor and the petty German states, they formed another Grand Alliance and declared war on France in 1702. French diplomacy secured Bavaria, Portugal, and Savoy as Franco-Spanish allies.[97]\n\n### **Commencement of fighting**\n\nEven before war was officially declared, hostilities began with Imperial aggression in Italy. 
Once finally declared, the War of the Spanish Succession lasted almost until Louis's death, at great cost to him and France.\n\nThe war began with French successes, but the talents of John Churchill, 1st Duke of Marlborough, and Eugene of Savoy checked these victories and broke the myth of French invincibility. The duo allowed the Palatinate and Austria to occupy Bavaria after their victory at the Battle of Blenheim. Maximilian II Emanuel, Elector of Bavaria, had to flee to the Spanish Netherlands. The", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia5.pdf" - }, - { - "text": "experiences during the *Fronde*, when men of high birth readily took up the rebel cause against their king, who was actually the kinsman of some. This victory over the nobility may thus have ensured the end of major civil wars in France until the French Revolution about a century later.\n\n### **France as the pivot of warfare**\n\nUnder Louis, France was the leading European power, and most wars pivoted around its aggressiveness. No European state exceeded it in population, and no one could match its wealth, central location, and very strong professional army. It had largely avoided the devastation of the Thirty Years' War. Its weaknesses included an inefficient financial system that was hard-pressed to pay for its military adventures, and the tendency of most other powers to gang up against it.\n\nDuring Louis's reign, France fought three major wars: the Franco-Dutch War, the Nine Years' War, and the War of the Spanish Succession. There were also two lesser conflicts: the War of Devolution and the War of the Reunions. [64] The wars were very expensive but defined Louis XIV's foreign policy, and his personality shaped his approach. Impelled \"by a mix of commerce, revenge, and pique\", Louis sensed that war was the ideal way to enhance his glory. In peacetime, he concentrated on preparing for the next war. 
He taught his diplomats that their job was to create tactical and strategic advantages for the French military. [6] By 1695, France retained much of its dominance but had lost control of the seas to England and Holland, and most countries, both Protestant and Catholic, were in alliance against it. Sébastien Le Prestre de Vauban, France's leading military strategist, warned Louis in 1689 that a hostile \"Alliance\" was too powerful at sea. He recommended that France fight back by licensing French merchant ships to privateer and seize enemy merchant ships while avoiding its navies:\n\nLouis XIV\n\nFrance has its declared enemies Germany and all the states that it embraces; Spain with all its dependencies in Europe, Asia, Africa and America; the Duke of Savoy [in Italy], England, Scotland, Ireland, and all their colonies in the East and West Indies; and Holland with all its possessions in the four corners of the world where it has great establishments. France has ... undeclared enemies, indirectly hostile, hostile, and envious of its greatness, Denmark, Sweden, Poland, Portugal, Venice, Genoa, and part of the Swiss Confederation, all of which states secretly aid France's enemies by the troops that they hire to them, the money they lend them and by protecting and covering their trade.[65]\n\nVauban was pessimistic about France's so-called friends and allies:\n\nFor lukewarm, useless, or impotent friends, France has the Pope, who is indifferent; the King of England [James II] expelled from his country; the Grand Duke of Tuscany; the Dukes of Mantua, Modena, and Parma [all in Italy]; and the other faction of the Swiss. Some of these are sunk in the softness that comes of years of peace, the others are cool in their affections....The English and Dutch are the main pillars of the Alliance; they support it by making war against us in concert with the other powers, and they keep it going by means of the money that they pay every year to... Allies.... 
We must therefore fall back on privateering as the method of conducting war which is most feasible, simple, cheap, and safe, and which will cost least to the state, the more so since any losses will not be felt by the King, who risks virtually nothing....It will enrich the country, train many good officers for the King, and in a short time force his enemies to sue for peace.[66]\n\n# **Edict of Fontainebleau**\n\nLouis decided to persecute Protestants and revoke the 1598 Edict of Nantes, which awarded Huguenots political and religious freedom. He saw the persistence of Protestantism as a disgraceful reminder of royal powerlessness. After all, the Edict was the pragmatic concession of his grandfather Henry IV to end the longstanding French Wars of Religion. An additional factor in Louis's thinking was the prevailing contemporary European principle to assure socio-political stability, *cuius regio, eius religio* (\"whose realm, his religion\"), the idea that the religion of the ruler should be the religion of the realm (as originally confirmed in central Europe in the Peace of Augsburg of 1555).[67]\n\nResponding to petitions, Louis initially excluded Protestants from office, constrained the meeting of synods, closed churches outside of Edict-stipulated areas, banned Protestant outdoor preachers, and prohibited domestic Protestant migration. He also disallowed Protestant-Catholic intermarriages to which third parties objected, encouraged missions to the Protestants, and", - "page_start": 9, - "page_end": 9, - "source_file": "wikipedia5.pdf" - }, - { - "text": "impact of this victory won the support of Portugal and Savoy. Later, the Battle of Ramillies delivered the Low Countries to the Allies, and the Battle of Turin forced Louis to evacuate Italy, leaving it open to Allied forces. 
Marlborough and Eugene met again at the Battle of Oudenarde, which enabled them to invade France.\n\nFrance established contact with Francis II Rákóczi and promised support if he took up the cause of Hungarian independence.\n\nDefeats, famine, and mounting debt greatly weakened France. Between 1693 and 1710, over two million people died in two famines, made worse as foraging armies seized food supplies from the villages.[98] In desperation, Louis ordered a disastrous invasion of the English island of Guernsey in the autumn of 1704 with the aim of raiding their successful harvest. By the winter of 1708–09, he was willing to accept peace at nearly any cost. He agreed that the entire Spanish empire should be surrendered to Archduke Charles, and also consented to return to the frontiers of the Peace of Westphalia, giving up all the territories he had acquired over 60 years. But he could not promise that Philip V would accept these terms, so the Allies demanded that Louis single-handedly attack his grandson to force these terms on him. If he could not achieve this within the year, the war would resume. Louis would not accept these terms.[99]\n\n### **Turning point**\n\nThe final phases of the War of the Spanish Succession demonstrated that the Allies could not maintain Archduke Charles in Spain just as surely as France could not retain the entire Spanish inheritance for Philip V. The Allies were definitively expelled from central Spain by the Franco-Spanish victories at the Battles of Villaviciosa and Brihuega in 1710. French forces elsewhere remained obdurate despite their defeats. The Allies suffered a Pyrrhic victory at the Battle of Malplaquet with 21,000 casualties, twice that of the French.[100] Eventually, France recovered its military pride with the decisive victory at Denain in 1712.\n\nFrench military successes near the end of the war took place against the background of a changed political situation in Austria. In 1705, Emperor Leopold I died. 
His elder son and successor, Joseph I, followed him in 1711. His heir was none other than Archduke Charles, who secured control of all of his brother's Austrian landholdings. If the Spanish empire then fell to him, it would have resurrected a domain as vast as Holy Roman Emperor Charles V's in the 16th century. To the maritime powers of Great Britain and the Dutch Republic, this would have been as undesirable as a Franco-Spanish union.[101]\n\n#### **Conclusion of peace**\n\nAs a result of the fresh British perspective on the European balance of power, Anglo-French talks began, culminating in the 1713 Peace of Utrecht between Louis, Philip V of Spain, Anne of Great Britain, and the Dutch Republic. In 1714, after losing Landau and Freiburg, the Holy Roman Emperor also made peace with France in the Treaties of Rastatt and Baden.\n\nIn the general settlement, Philip V retained Spain and its colonies, while Austria received the Spanish Netherlands and divided Spanish Italy with Savoy. Britain kept Gibraltar and Menorca. Louis agreed to withdraw his support for James Stuart, son of James II and pretender to the thrones of Great Britain and Ireland, and ceded Newfoundland, Rupert's Land, and Acadia in the Americas to Anne. 
Britain gained the most from the treaty, but the final terms were much more favourable to France than those being discussed in peace\n\nThe Franco-Spanish army led by the Duke of Berwick defeated decisively the Alliance forces of Portugal, England, and the Dutch Republic at the Battle of Almansa.\n\nThe Battle of Ramillies where the French fought the Dutch and British, 23 May 1706\n\nLouis XIV depicted on a Louis d'or in 1709\n\nMap of France after the death of Louis XIV", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia5.pdf" - }, - { - "text": "The French were nevertheless forced to retreat from most of the Dutch Republic, which deeply shocked Louis; he retreated to St Germain for a time, where no one, except a few intimates, was allowed to disturb him.[47] French military advantages allowed them however to hold their ground in Alsace and the Spanish Netherlands while retaking Franche-Comté. By 1678, mutual exhaustion led to the Treaty of Nijmegen, which was generally settled in France's favour and allowed Louis to intervene in the Scanian War. Despite the military defeat, his ally Sweden regained much of what it had lost under the 1679 treaties of Saint-Germain-en-Laye, Fontainebleau and Lund imposed on Denmark–Norway and Brandenburg.[48] Yet Louis's two primary goals, the destruction of the Dutch Republic and the conquest of the Spanish Netherlands, had failed.[49]\n\nLouis was at the height of his power, but at the cost of uniting his opponents; this increased as he continued his expansion. In 1679, he dismissed his foreign minister Simon Arnauld, marquis de Pomponne, because he was seen as having compromised too much with the allies. Louis maintained the strength of his army, but in his next series of territorial claims avoided using military force alone. Rather, he combined it with legal pretexts in his efforts to augment the boundaries of his kingdom. Contemporary treaties were intentionally phrased ambiguously. 
Louis established the Chambers of Reunion to determine the full extent of his rights and obligations under those treaties.\n\nCities and territories, such as Luxembourg and Casale, were prized for their strategic positions on the frontier and access to important waterways. Louis also sought Strasbourg, an important strategic crossing on the left bank of the Rhine and theretofore a Free Imperial City of the Holy Roman Empire, annexing it and other territories in 1681. Although a part of Alsace, Strasbourg was not part of Habsburg-ruled Alsace and was thus not ceded to France in the Peace of Westphalia.\n\nFollowing these annexations, Spain declared war, precipitating the War of the Reunions. However, the Spanish were rapidly defeated because the Emperor (distracted by the Great Turkish War) abandoned them, and the Dutch only supported them minimally. By the Truce of Ratisbon, in 1684, Spain was forced to acquiesce in the French occupation of most of the conquered territories, for 20 years.[50]\n\nLouis's policy of the *Réunions* may have raised France to its greatest size and power during his reign, but it alienated much of Europe. This poor public opinion was compounded by French actions off the Barbary Coast and at Genoa. First, Louis had\n\nAlgiers and Tripoli, two Barbary pirate strongholds, bombarded to obtain a favourable treaty and the liberation of Christian slaves. Next, in 1684, a punitive mission was launched against Genoa in retaliation for its support for Spain in previous wars. Although the Genoese submitted, and the Doge led an official mission of apology to Versailles, France gained a reputation for brutality and arrogance. 
European apprehension at growing French might and the realisation of the extent of the dragonnades' effect (discussed below) led many states to abandon their alliances with France.[51] Accordingly, by the late 1680s, France became increasingly isolated in Europe.\n\n### **Non-European relations and the colonies**\n\nFrench colonies multiplied in Africa, the Americas, and Asia during Louis's reign, and French explorers made important discoveries in North America. In 1673, Louis Jolliet and Jacques Marquette discovered the Mississippi River. In 1682, René-Robert Cavelier, Sieur de La Salle, followed the Mississippi to the Gulf of Mexico and claimed the vast Mississippi basin in Louis's name, calling it *Louisiane*. French trading posts were also established in India, at Chandernagore and Pondicherry, and in the Indian Ocean at Île Bourbon. Throughout these regions, Louis and Colbert embarked on an extensive program of architecture and urbanism meant to reflect the styles of Versailles and Paris and the 'gloire' of the realm.[52]\n\nMeanwhile, diplomatic relations were initiated with distant countries. In 1669, Suleiman Aga led an Ottoman embassy to revive the old Franco-Ottoman alliance. [53] Then, in 1682,\n\nafter the reception of the Moroccan embassy of Mohammed Tenim in France, Moulay Ismail, Sultan of Morocco, allowed French consular and commercial establishments in his country. [54] In 1699, Louis once again received a Moroccan ambassador, Abdallah bin Aisha, and in 1715, he received a Persian embassy led by Mohammad Reza Beg.\n\nFrom farther afield, Siam dispatched an embassy in 1684, reciprocated by the French magnificently the next year under Alexandre, Chevalier de Chaumont. This, in turn, was succeeded by another Siamese embassy under Kosa Pan, superbly received at Versailles in 1686. Louis then sent another embassy in 1687, under Simon de la Loubère, and French influence grew at the\n\nThe Persian embassy to Louis XIV sent by Soltan Hoseyn in 1715. 
*Ambassade de Perse auprès de Louis XIV*, studio of Antoine Coypel.", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia5.pdf" - }, - { - "text": "- The film, *Le Roi Danse* (2000; translated: *The King Dances*), directed by Gérard Corbiau, reveals Louis through the eyes of Jean-Baptiste Lully, his court musician.\n- Julian Sands portrayed Louis in Roland Joffé's *Vatel* (2000).\n- Alan Rickman directed, co-wrote, and stars as Louis XIV in the film, *A Little Chaos*, which centres on construction in the gardens of Versailles, at the time immediately before and after the death of Queen Maria Theresa.\n- The 2016 film *The Death of Louis XIV*, directed by Albert Serra, is set during the last two weeks of Louis XIV's life before dying of gangrene, with the monarch played by Jean-Pierre Léaud.\n\n#### **Television**\n\n- Louis XIV is portrayed by Thierry Perkins-Lyautey in the British television film *Charles II: The Power and the Passion.*\n- The 15-year-old Louis XIV, as played by the Irish actor Robert Sheehan, is a major character of the short-lived historical fantasy series *Young Blades* from January to June 2005.\n- George Blagden portrays Louis XIV in the Canal+ series *Versailles*, which aired for three seasons from 2015.\n\n#### **Musicals**\n\n- Emmanuel Moire portrayed Louis XIV in the 2005-07 Kamel Ouali musical *Le Roi Soleil*.\n# **Health and death**\n\nLouis XIV (seated) with his son *le Grand Dauphin* (to the left), his grandson Louis, Duke of Burgundy (to the right), his great-grandson Louis, Duke of Anjou, and Madame de Ventadour, Anjou's governess, who commissioned this painting; busts of Henry IV and Louis XIII are in the background.\n\n*The Death of Louis XIV at the Palace of Versailles*, Thomas Jones Barker, 1835-1840\n\nDespite the image of a healthy and virile king that Louis sought to project, evidence exists to suggest that his health was not very good. 
He had many ailments: for example, symptoms of diabetes, as confirmed in reports of suppurating periostitis in 1678, dental abscesses in 1696, along with recurring boils, fainting spells, gout, dizziness, hot flushes, and headaches.\n\nFrom 1647 to 1711, the three chief physicians to the king (Antoine Vallot, Antoine d'Aquin, and Guy-Crescent Fagon) recorded all of his health problems in the *Journal de Santé du Roi* (*Journal of the King's Health*), a daily report of his health. On 18 November 1686, Louis underwent a painful operation for an anal fistula that was performed by the surgeon Charles Felix de Tassy, who prepared a specially shaped curved scalpel for the occasion. The wound took more than two months to heal.[124]\n\nLouis died of gangrene at Versailles on 1 September 1715, four days before his 77th birthday, after 72 years on the throne. Enduring much pain in his last days, he finally \"yielded up his soul without any effort, like a candle going out\", while reciting the psalm *Deus, in adjutorium me festina* (*O Lord, make haste to help me*).[125] His body was laid to rest in Saint-Denis Basilica outside Paris. It remained there undisturbed for about 80 years until revolutionaries exhumed and destroyed all of the remains found in the Basilica.[126] In 1848, at Nuneham House, a piece of Louis's mummified heart, taken from his tomb and kept in a silver locket by Lord Harcourt, Archbishop of York, was shown to the Dean of Westminster, William Buckland, who ate a part of it.[127]\n\nCardinal Armand Gaston Maximilien de Rohan gave Last Rites (confession, viaticum, and unction) to king Louis XIV. [128]\n\n### **Succession**\n\nLouis outlived most of his immediate legitimate family. His last surviving legitimate son, Louis, Dauphin of France, died in 1711. Barely a year later, the Duke of Burgundy, the eldest of the Dauphin's three sons and then heir-apparent to Louis, followed his father. 
Burgundy's elder son, Louis, Duke of Brittany, joined them a few weeks later. Thus, on his\n\ndeathbed, Louis's heir-apparent was his five-year-old great-grandson, Louis, Duke of Anjou, Burgundy's younger son.\n\nLouis foresaw an underaged successor and sought to restrict the power of his nephew Philip II, Duke of Orléans, who, as his closest surviving legitimate relative in France, would probably become regent to the prospective Louis XV. Accordingly, the king created a regency council as Louis XIII had in anticipation of Louis XIV's own minority, with some power vested in his", - "page_start": 19, - "page_end": 19, - "source_file": "wikipedia5.pdf" - }, - { - "text": "The Nine Years' War, which lasted from 1688 to 1697, initiated a period of decline in Louis's political and diplomatic fortunes. It arose from two events in the Rhineland. First, in 1685, the Elector Palatine Charles II died. All that remained of his immediate family was Louis's sister-in-law, Elizabeth Charlotte. German law ostensibly barred her from succeeding to her brother's lands and electoral dignity, but it was unclear enough for arguments in favour of Elizabeth Charlotte to have a chance of success. Conversely, the princess was demonstrably entitled to a division of the family's personal property. Louis pressed her claims to land and chattels, hoping the latter, at least, would be given to her. [76] Then, in 1688, Maximilian Henry of Bavaria, Archbishop of Cologne, an ally of France, died. The archbishopric had traditionally been held by the Wittelsbachs of Bavaria, but the Bavarian claimant to replace Maximilian Henry, Prince Joseph Clemens of Bavaria, was at that time not more than 17 years old and not even ordained. Louis sought instead to install his own candidate, Wilhelm Egon von Fürstenberg, to ensure the key Rhenish state remained an ally. 
[77]\n\nIn light of his foreign and domestic policies during the early 1680s, which were perceived as aggressive, Louis's actions, fostered by the succession crises of the late 1680s, created concern and alarm in much of Europe. This led to the formation of the 1686 League of Augsburg by the Holy Roman Emperor, Spain, Sweden, Saxony, and Bavaria. Their stated intention was to return France to at least the borders agreed to in the Treaty of Nijmegen.[78] Emperor Leopold I's persistent refusal to convert the Truce of Ratisbon into a permanent treaty fed Louis's fears that the Emperor would turn on France and attack the Reunions after settling his affairs in the Balkans.[79]\n\nAnother event Louis found threatening was England's Glorious Revolution of 1688. Although King James II was Catholic, his two Anglican daughters, Mary and Anne, ensured the English people a Protestant succession. But when James II's son James Francis Edward Stuart was born, he took precedence in succession over his sisters. This seemed to herald an era of Catholic monarchs in England. Protestant lords called on the Dutch Prince William III of Orange, grandson of Charles I of England, to come to their aid. He sailed for England with troops despite Louis's warning that France would regard it as a provocation. Witnessing numerous desertions and defections, even among those closest to him, James II fled England. Parliament declared the throne vacant, and offered it to James's daughter Mary II and his son-in-law and nephew William. Vehemently anti-French, William (now William III of England) pushed his new kingdoms into war, thus transforming the League of Augsburg into the Grand Alliance. 
Before this happened, Louis expected William's expedition to England to absorb his energies and those of his allies, so he dispatched troops to the Rhineland after the expiry of his ultimatum to the German princes requiring confirmation of the Truce of Ratisbon and acceptance of his demands about the succession crises. This military manoeuvre was also intended to protect his eastern provinces from Imperial invasion by depriving the enemy army of sustenance, thus explaining the preemptive scorched earth policy pursued in much of southwestern Germany (the \"Devastation of the Palatinate\").[80]\n\nLouis XIV at the siege of Namur (1692)\n\nFrench armies were generally victorious throughout the war because of Imperial commitments in the Balkans, French logistical superiority, and the quality of French generals such as Condé's famous pupil, François Henri de Montmorency-Bouteville, duc de Luxembourg.[81] He triumphed at the Battles of Fleurus in 1690, Steenkerque in 1692, and Landen in 1693, although the battles proved to be of little strategic consequence,[82][83] mostly due to the nature of late 17th-century warfare.[84]\n\nAlthough an attempt to restore James II failed at the Battle of the Boyne in 1690, France accumulated a string of victories from Flanders in the north, Germany in the east, and Italy and Spain in the south, to the high seas and the colonies. Louis personally supervised the captures of Mons in 1691 and Namur in 1692. Luxembourg gave France the defensive line of the Sambre by capturing Charleroi in 1693. France also overran most of the Duchy of Savoy after the battles of Marsaglia and Staffarde in 1693. 
While naval stalemate ensued after the French victory at the Battle of Beachy Head in 1690 and the Allied victory at Barfleur-La Hougue in 1692, the Battle of Torroella in 1694 exposed Catalonia to French invasion, culminating in the capture of Barcelona.\n\nThe Dutch captured Pondichéry in 1693, but a 1697 French raid on the Spanish treasure port of Cartagena, Spain, yielded a fortune of 10,000,000 livres.", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia5.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed2.pdf", - "query": "Does nerve transection or crushing affect small afferents within the dorsal root ganglion in the same way?", - "target_page": 5, - "target_passage": "Both SNItrans (Fig. 2C) and SNIcrush (Fig. 2D) injuries resulted in a rightward shift in population distributions of the cross-sectional area of nucleated, FB-labelled DRG neurons when compared with contralateral DRG, consistent with a loss of small afferents post–nerve injury.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "# Peripheral nerve injury results in a biased loss of sensory neuron subpopulations\n\nAndrew H. Coopera , Allison M. Barryb , Paschalina Chrysostomidoua , Romane Loligniera , Jinyi Wanga , Magdalena Redondo Canalesa , Heather F. Tittertona , David L. Bennettb , Greg A. Weira,*\n\n# Abstract\n\nThere is a rich literature describing the loss of dorsal root ganglion (DRG) neurons following peripheral axotomy, but the vulnerability of discrete subpopulations has not yet been characterised. Furthermore, the extent or even presence of neuron loss following injury has recently been challenged. In this study, we have used a range of transgenic recombinase driver mouse lines to genetically label molecularly defined subpopulations of DRG neurons and track their survival following traumatic nerve injury. 
We find that spared nerve injury leads to a marked loss of cells containing DRG volume and a concomitant loss of small-diameter DRG neurons. Neuron loss occurs unequally across subpopulations and is particularly prevalent in nonpeptidergic nociceptors, marked by expression of Mrgprd. We show that this subpopulation is almost entirely lost following spared nerve injury and severely depleted (by roughly 50%) following sciatic nerve crush. Finally, we used an in vitro model of DRG neuron survival to demonstrate that nonpeptidergic nociceptor loss is likely dependent on the absence of neurotrophic support. Together, these results profile the extent to which DRG neuron subpopulations can survive axotomy, with implications for our understanding of nerve injury–induced plasticity and pain.\n\nKeywords: Sensory neuron, Neuron death, Transgenic reporter line, Neuropathic pain, Nerve injury\n\n# 1. Introduction\n\nDorsal root ganglion (DRG) neurons represent a molecularly and functionally heterogeneous population. Under normal conditions, this diversity contributes to the ability of the somatosensory nervous system to detect a myriad of sensory stimuli that result in the perceptions of touch, temperature, itch, and pain. Following nerve injury, physiological changes in DRG neurons lead to hyperexcitability,57 which is a key pathological driver of neuropathic pain.20,63 Concomitant molecular changes in discrete subpopulations also occur, and these have recently been comprehensively described in single-cell37,44 and subpopulation-specific sequencing studies.3 These studies describe a transient and generalized reduction in the expression of subpopulation-specific genes following nerve injury.3,37,44\n\nIn addition to molecular changes, there is a rich literature describing the frank loss of DRG neurons following traumatic\n\nSponsorships or competing interests that may be relevant to content are disclosed at the end of this article.\n\n*Corresponding author. 
Address: School of Psychology and Neuroscience, University of Glasgow, Glasgow G12 8QQ, United Kingdom. Tel.: 144 (0) 141 330 7023. E-mail address: gregory.weir@glasgow.ac.uk (G.A. Weir).\n\nSupplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.painjournalonline.com).\n\nCopyright © 2024 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the International Association for the Study of Pain. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nhttp://dx.doi.org/10.1097/j.pain.0000000000003321\n\nnerve injury in experimental rodent models.24,50,53,56 Some studies have suggested that neuron loss occurs in certain patient cohorts,48,66 but this is yet to be definitively demonstrated in humans. In rodents, most studies support a preferential loss of small cells that give rise to unmyelinated fibers53 but some contrasting studies describe the preferential loss of large cells6 or loss of cells of all sizes.46 Variation is evident across studies in terms of experimental species, age, type of injury, and quantification methods.56 Shi et al.50 used stereological counting methods to identify a 54% loss of DRG neuron number 4 weeks after \"mid-thigh\" sciatic nerve transection in C57BL/6 mice. Estimates for the degree of loss following commonly used nerve injury paradigms (eg, spared nerve injury [SNI] and sciatic nerve crush) are not available and because of the neurochemical changes following injury and the loss of subpopulation marker gene expression,5,44,50 the vulnerability of molecularly defined subpopulations has not been characterized. 
Moreover, more recent studies have cast doubt on the extent or even presence of DRG neuron death following nerve injury. One study which developed a deep learning approach to assess rat DRG cellular plasticity found no loss of neurons up to 2 weeks post-SNI,49 while another observed no loss of genetically labelled damaged DRG neurons 2 months after sciatic nerve crush.44\n\nThe issue of whether neuron loss occurs, and if so, in what subpopulations, is important. It will likely have implications for our understanding of reinnervation and functional recovery in patients. Furthermore, better insight will provide critical context for those investigating the plasticity that occurs following nerve injury and may inform therapeutic targeting of sensory neuron populations.\n\nAn expanding repertoire of transgenic recombinase driver lines now makes it possible to permanently label DRG neuron subpopulations and study their fate in rodent nerve injury paradigms. The aim of this study was to use this technology to characterize\n\na School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom, b Nuffield Department of Clinical Neurosciences, University of\n\nOxford, Oxford, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 2. Spared nerve crush and transection lead to a loss of small DRG neurons. (A) Approach to restrict analysis to damaged afferents: a subcutaneous injection of the tracer FB into both hindpaws labelled tibial afferents, before unilateral SNItrans or SNIcrush surgery. (B) Representative image of FB labelling and NeuN immunostaining in the L4 DRG. The image is a projection of optical sections at 3-µm intervals through the entirety of a 30-µm-thick tissue section. Scale bar = 100 µm. 
(C and D) Quantification of the cross-sectional area of FastBlue labelled DRG neurons ipsilateral and contralateral to SNItrans (C) or SNIcrush injury (D) reveals a loss of small afferents and subsequent shift in population distribution. Kolmogorov–Smirnov tests of cumulative distributions; SNItrans: D = 0.25, P < 0.001; n = 183 or 191 neurons from 3 mice; SNIcrush: D = 0.22, P < 0.001, n = 319 or 325 neurons from 3 mice. (E) Experimental approach for whole DRG volumetric analyses after SNItrans. (F) Representative 3D rendering of TDP-43 profiles and corresponding nuclear spot profiles following Imaris-based spot detection feature. Scale bar = 100 µm. (G) Quantification of DRG nuclear spot volume ipsilateral and contralateral to SNItrans. Kolmogorov–Smirnov tests of cumulative distribution: D = 0.06, P < 0.001, n = 30,206 (contra) or 32,544 (ipsi) nuclei from 4 (contra) or 5 (ipsi) mice. (H) Total number of nuclear spots, by size, per DRG. Two-way RM ANOVA; size bin × injury interaction: F2,14 = 8.26, P = 0.004; n = 4 to 5 mice; Šídák multiple comparisons tests: **P < 0.01. ANOVA, analysis of variance; DRG, dorsal root ganglion; FB, FastBlue; RM, repeated measures.\n\n# 3.3. Spared nerve injury induces a loss of Trpm8+ and calcitonin gene-related peptide+ but not myelinated dorsal root ganglion neurons\n\nLoss restricted to nonpeptidergic nociceptors would not fully account for the degree of total neuron loss that we observed. Therefore, we studied a range of other subpopulations, both small and large in diameter, for their vulnerability to injury-induced loss. To investigate potential loss of Trpm8+ (cold-sensitive), calcitonin gene-related peptide+ (CGRP) (peptidergic), and myelinated subpopulations of DRG neurons following nerve injury, we applied our FB-labelling approach in Trpm8FlpO;RC::FLTG (FlpO-dependent tdTom expression), CalcaCreERT2;Ai32 (Cre-dependent ChR2-YFP expression) and Thy1-CFP mice, respectively (Figs. 4A–D). 
Trpm8-tdTom was expressed", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed2.pdf" - }, - { - "text": "injury (Fig. S6A–C, http://links.lww.com/PAIN/C84), indicating that any loss of neurons within specific neuronal subpopulations was not biased towards soma size. Collectively, these data show that unrepaired axonal damage to peripheral sensory neurons induces a partial loss of Trpm81 and CGRP1 subpopulations, but no major loss of myelinated afferents.\n\nBased on our findings of preferential loss of nonpeptidergic nociceptors, we re-analyzed a previous population-specific transcriptomic dataset of mouse DRG neurons following nerve injury for potential upregulation of cell death pathways (Fig. S7, http://links.lww.com/PAIN/C84).3 We found that early after injury (3 days post-SNItrans), nonpeptidergic (MrgDCreERT2-expressing) neurons showed enhanced enrichment of GO terms associated with apoptosis, in contrast to a broad population of nociceptors (labelled with Scn10aCreERT2), peptidergic nociceptors (Calca- CreERT2), C-LTMRs (ThCreERT2), and Ab-RA (rapidly adapting) and Ad-LTMRs (Ad/Ab-LTMR, Ntrk2CreERT2;AdvillinFlpO), in which there was less or no enrichment of cell death pathways. By 4 weeks, only C-LTMR and Ad/Ab-LTMR subtypes show any overrepresentation of cell death pathways (in the populations studied). Both injury-specific and apoptotic signatures in nonpeptidergic neurons were no longer significantly enriched, consistent with a loss of axotomized nonpeptidergic afferents by this late timepoint postinjury. These data suggest that apoptotic pathways are upregulated acutely after injury in a celltype-specific manner.\n\n# 3.4. 
Mrgprd dorsal root ganglion neurons are sensitive to loss in vitro\n\nEarlier studies postulated that a lack of neurotrophic support underlies neuronal loss, which is supported by the observation that exogenous GDNF treatment at the time of injury, or shortly after, rescues the loss of IB4-binding central terminals posttransection.5 We sought to use the DRG neurons from MrgDCreERT2;Ai32 mice to test this postulate and establish an in vitro platform capable of probing the molecular basis of loss, with axonal transection during isolation providing a correlate for in vivo nerve injury (Figs. 5A–E). Twenty-four hours after plating, YFP was expressed by 16.3 ± 1.3% of DRG neurons, which was reduced to 11.8 ± 1.7% after 28 days of culture in the presence of exogenous GFs, NGF and GDNF (Fig. 5F). However, in the absence of GFs, YFP+ neurons only accounted for 1.7 ± 0.6% of neurons after 28 days, accompanied by an apparent reduction in the overall number of neurons within the culture, despite all conditions being seeded at the same initial density (Figs. 5C and F). YFP+ cell loss was partially rescued by the presence of GDNF, but not NGF alone, in the culture media (Figs. 5D–F). These results contrasted with experiments using neurons derived from CalcaCreERT2;Ai32 mice, in which we observed no change in the proportion of neurons that were Calca-YFP+ after 28 days in culture, regardless of exogenous GF addition (Figs. 5G–L). Collectively, these data support the use of DRG cultures to probe the mechanisms underlying selective loss of sensory neurons following nerve injury and suggest a role for trophic support, particularly by GDNF signaling, in preventing the loss of nonpeptidergic nociceptors.\n\n# 4. Discussion\n\nWe present data herein to support the hypothesis that traumatic nerve injury in rodents leads to a profound loss of small-diameter DRG neurons. 
Taking advantage of newly developed transgenic recombinase driver lines, we have shown that loss is biased across molecularly defined subpopulations. Nonpeptidergic nociceptive neurons are particularly susceptible to loss, with almost all Mrgprd+ axotomized afferents lost following an unrepaired transection injury (SNItrans) and roughly half lost following a model which contrastingly allows for nerve regeneration (SNIcrush). Finally, we have observed that the vulnerability of Mrgprd+ neurons extends to the in vitro setting and provide data to support the hypothesis that loss is driven by a lack of neurotrophic support following injury.\n\n# 4.1. Neuronal loss\n\nThe question of whether DRG neurons die following traumatic injury has been addressed by several groups over the last few decades. Despite contrasting findings on the extent, timing, and form that loss takes, most studies have observed frank loss of DRG neurons.6,38,46,53 However, more recent studies using recombinase driver lines and novel machine-learning approaches have cast doubt on this consensus.44,49 Our data strongly support the loss hypothesis and suggest that approximately 60% of axotomized afferents die within 2 weeks of SNI. The discrepancy between our findings and other recent studies may be partly explained by the sampling method used to estimate neuronal numbers. For example, Schulte et al.49 developed a novel machine-learning approach and found no reduction in neuron density across serial sections of rat DRG following SNI, and they inferred from this that frank loss did not occur. Our results are congruous, in that we also observed no reduction in neuron density. However, we found a substantial loss in the total neuron-containing volume of injured DRG, which underlies our contrasting conclusion of frank loss. 
Of note, morphological volumetric analysis and MRI have also previously demonstrated volume loss in both rodent and human DRG following nerve injury.35,65,66 These findings occur despite a major increase of nonneuronal cells in the injured DRG30 and support the notion that the total DRG neuron number is decreased.\n\n#### 4.2. Selectivity of neuron loss\n\nWhile definitively characterizing loss of molecularly defined subpopulations was challenging before the advent of recombinase driver lines, a consensus emerged that small-diameter neurons are more vulnerable to nerve injury–induced loss.50,53 Our data support this consensus and extend it to reveal that while there is a generalized partial loss of C-fiber populations including CGRP- and Trpm8-expressing neurons, Mrgprd-expressing neurons are particularly sensitive to loss. This selective vulnerability has been hinted at previously by the stark reduction in the number of DRG neurons and their central terminals that bind IB4 and express canonical markers such as the P2X3 receptor following nerve injury.5,8,29,36 Type 1a glomeruli are also reduced in lamina II, suggesting a structural loss of central terminals and not simply a loss of IB4-binding.2 However, it was not clear whether these data represented phenotypic changes in nonpeptidergic nociceptors or frank loss of neurons. We describe neuron loss that is delayed (occurring >7 days postinjury) with respect to histochemical and structural changes (occurring 1-5 days postinjury2,29), suggesting that these changes precede and are not in themselves indicative of neuron loss.\n\nThe vulnerability of Mrgprd-expressing neurons is congruous with recent subpopulation bulk RNA-seq data, which found that", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 1. SNItrans induces death of small primary afferent neurons, accompanied by a reduction in volume, not cell density, of the dorsal root ganglion. 
(A) Approach to differentially label intact afferents with tdTomato and damaged afferents with GFP after peripheral nerve injury using the AvilFlpO;Atf3CreERT2;RC::FLTG mouse line and schematic of experimental timeline. (B) Representative image of GFP, tdTomato, and NeuN expression in an L4 DRG, 2 weeks after SNItrans. Scale bars = 100 µm. (C and D) Stereological quantification of the total number of DRG neurons (C) or number of axotomized and intact neurons (D) in the L4 DRG 1, 2, 4, and 8 weeks after SNItrans or contralateral (contra) to injury. (C) One-way ANOVA with Tukey posttests; F4,10 = 37.98, P < 0.001. (D) Two-way RM ANOVA; Timepoint × Color interaction F4,10 = 39.04, P < 0.001, n = 3 mice; Tukey posttests (between injured groups): †P < 0.05 vs contra, ‡P < 0.05 vs 1-week. (E) Volume of DRG containing cells (ie, excluding white matter tracts) following SNItrans. One-way ANOVA with Tukey posttests; F4,10 = 21.25, P < 0.001, n = 3. (F) Neuronal density within the DRG following SNItrans. One-way ANOVA; F4,10 = 2.77, P = 0.09, n = 3. (G) Population distribution of uninjured and injured afferents by cross-sectional area, 1 and 8 weeks post-SNItrans. Kolmogorov–Smirnov tests of cumulative distributions; Uninjured: D = 0.08, P = 0.18; Injured: D = 0.32, P < 0.001; n = 310 to 427 neurons from 3 mice. *P < 0.05, **P < 0.01, ***P < 0.001 vs contra. ANOVA, analysis of variance; DRG, dorsal root ganglion; GFP, green fluorescent protein.\n\nprotein) neurons 28 days after sham surgery or SNItrans (Figs. 3A and B). SNItrans, but not sham, resulted in a significant decrease (54.0 ± 6.6%) in the total number of MrgD-YFP+ neurons in L4 DRG (Fig. 3C).\n\nYellow fluorescent protein expression in MrgDChR2-YFP mice is driven by the endogenous Mrgprd promoter, which has been reported to be upregulated or downregulated following axonal damage.44,58 Such changes in promoter activity could affect the proportion of nonpeptidergic nociceptors identified by YFP expression. 
Therefore, to verify these findings, we used MrgDCreERT2;Ai32 mice and tamoxifen administration before injury, to permanently label Mrgprd-expressing afferents with ChR2-YFP (Figs. 3D–F). We then tested whether the proportion of cutaneous tibial afferents that were YFP+ was altered following nerve injury. Following hindpaw FB injection, ~15% of contralateral, FB-labelled DRG neurons expressed YFP. This was reduced to 6.0 ± 1.2% 28 days after SNIcrush injury and to only 1.7 ± 0.9% 28 days after SNItrans (Fig. 3G). Uptake by uninjured YFP+ neurons was equivalent 7 and 35 days after FB injection, demonstrating that this reduction was not because 7 days were insufficient for YFP+ neurons to fully uptake FB (Fig. S3C, http://links.lww.com/PAIN/C84). No significant difference in the percentage of FB-labelled YFP+ DRG neurons between ipsilateral and contralateral DRG was observed at 7 days following SNItrans (Figs. S4A and B, http://links.lww.com/PAIN/C84), demonstrating that loss occurred after this timepoint. Analysis of the cross-sectional soma area of FB-labelled, YFP+ neurons in uninjured DRG revealed an area of 361 ± 138 µm2 (mean ± SD) (Fig. S4C, http://links.lww.com/PAIN/C84), which is a distribution profile matching those neurons presumed lost. 
Collectively, these data show that peripheral nerve injury results in a substantial loss of nonpeptidergic, Mrgprd-expressing neurons, with SNItrans (ie, an unrepaired axonal transection) resulting in an almost complete loss of this population.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed2.pdf" - }, - { - "text": "SNI-related gene expression signatures were less evident in Mrgprd-expressing and C-LTMR neurons at later timepoints, compared with other populations in injured DRG.3 This could be explained by a loss of axotomized neurons of these classes and therefore sampling of only uninjured neurons at this timepoint.24,43,64 In terms of the transcriptional response to injury, nonpeptidergic nociceptors show enrichment of individual proapoptotic factors early after injury,23,68 and we extend these results in this study, by describing a subpopulation-specific enrichment of GO terms associated with apoptosis that is evident as early as 3 days after injury. Such data and single-cell transcriptomic profiling of all DRG neurons following injury37,44 may offer the opportunity to elucidate the cell death pathways engaged and upstream effectors that enrich this process to nonpeptidergic nociceptive neurons.\n\n#### 4.3. Implications for pain pathogenesis\n\nNeuronal loss has been proposed as a key contributor to poor functional recovery following nerve injury,54 and biased survival of different afferent types might be expected to contribute to modality-specific sensory deficits. Beyond loss of function, does DRG neuron loss contribute to chronic pain, in either an adaptive or maladaptive manner? 
Intrathecal delivery of GDNF is neuroprotective and reverses the reduction in the number of IB4-binding DRG neurons and central terminals seen following transection.5 Treatment is concurrently analgesic and abrogates pain-related behaviors.7,60 However, the pleiotropic nature of GDNF makes it impossible to directly attribute the analgesic effects to the reversal of neuron loss. Indeed, it is possible that GDNF exerts its effect by actions on intact nonpeptidergic nociceptive afferents,52 activation of which is known to drive aversive behaviors in the neuropathic state.62 These data leave the contribution of nonpeptidergic nociceptor loss to behavior in the GDNF treatment paradigm ambiguous. Other pharmacological approaches have been found effective at reversing a neuronal loss in rodent models, but the impact on pain behavior was not studied.21,22\n\nRodents develop marked mechanical and thermal hypersensitivity rapidly following nerve injury and before timepoints at which neuron loss is observed.10 This lack of a temporal correlation may suggest a limited contribution to evoked hypersensitivities. The temporal profile of ongoing tonic pain (eg, pain aversiveness as measured by condition place preference assays26) is less defined and so is its correlation to the timing of neuron loss.\n\nThere are many anatomical sites within the somatosensory nervous system where differential loss of sensory neuron populations could impact neurobiology. 
For example, loss of cutaneous afferents may afford more opportunity for plasticity in reinnervation patterns, such as collateral sprouting of uninjured or surviving afferents, and the types of nerve endings made by different molecular subpopulations.17,27 It also seems likely that the death of many neurons within a DRG could contribute to the expansion and activation of immune cell types, which are known to play a major role in neuropathic pain.30,69 Finally, under normal conditions, peripheral sensory input is integrated into the dorsal horn of the spinal cord by complex interneuron circuitry. Many spinal circuits are engaged by convergent input from different afferent types.9,41,70 Therefore, selective loss of input from discrete afferent types could undoubtedly impact the normal processing of remaining afferent signals.34 Experimentally abrogating neuronal loss may be a fruitful approach to assess the contribution to nervous system plasticity (adaptive or maladaptive) following injury. In this regard, our in vitro readout would be a useful experimental platform to help delineate the precise cell death pathways and signaling cascades engaged (which could then be experimentally manipulated). Such studies should consider that plasticity may evolve over time. The loss of IB41 central terminals is transient following crush and has even been observed to reverse at longer timepoints following SNItrans. 36 These observations, in conjunction with ours of loss of neurons, raise the intriguing question of the source of such central reinnervation.\n\n#### 4.4. Study limitations\n\nOur efforts focused on traumatic nerve injury paradigms owing to previous contrasting results using these robust and reproducible experimental models. We did not extend our studies to systemic neuropathy models, such as chemotherapy or diabetic neuropathy. 
A recent postmortem analysis reported a neuronal loss in the DRG from patients with painful diabetic peripheral neuropathy.19 Transcriptional responses vary substantially across different nerve insults,44 so it would be of interest to test whether neuronal loss and the subpopulation vulnerability reported in this study are common features across different types of insults.\n\nUsing multiple approaches, we assess the na¨ıve mouse L4 DRG to contain approximately 8000 neurons, consistent with a previous estimate,67 and observed a frank loss of smalldiameter neurons following injury. However, the extent of loss observed using our semiautomated approach was less than that observed using manual techniques.67 Two major limitations in this study may explain this discrepancy: First, owing to technical issues, the cleared DRG dataset is unpaired ipsilateral–contralateral which adds larger variability. Second, the analysis method is prone to undercounting deep nuclei. The signal-to-noise is better for superficial nuclei and smaller tissue volumes. Given the reduction in DRG volume after SNItrans, nuclei in larger contralateral DRG may be undercounted.\n\nWhile we made efforts to profile the loss of several molecularly discrete sensory neuron populations, we acknowledge that not all subtypes were profiled. Furthermore, recent single-cell RNA sequencing has given us a more granular appreciation of the heterogeneity of sensory neurons.42 Future studies could leverage our experimental approach and new transgenic lines to characterize the loss of neurons in more detail. Such experiments may be pertinent before embarking on molecular or functional profiling of populations post–nerve injury.\n\n#### 4.5. Conclusions\n\nIn sum, we have provided data from multiple complementary experimental approaches to support the hypothesis that DRG neurons are lost following nerve injury in mice. 
We describe a substantial loss, which is biased towards specific subpopulations and particularly present in small-diameter nonpeptidergic nociceptive neurons.\n\n# Conflict of interest statement\n\nD.L.B. has acted as a consultant in the last 2 years for AditumBio, Biogen, Biointervene, Combigene, LatigoBio, GSK, Ionis, Lexicon therapeutics, Neuvati, Olipass, Orion, Replay, SC Health Managers, Theranexus, Third Rock Ventures, and Vida Ventures on behalf of Oxford University Innovation. D.L.B. has received research funding from Lilly and Astra Zeneca, and G.A.W. has received research funding from Ono Pharmaceutical. D.L.B. has received", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 4. Spared nerve injury induces a loss of Trpm8+ and CGRP+ but not myelinated DRG neurons. (A) Schematic of experimental approach. (B–D) FastBlue labelling and Trpm8-tdTom (B), Calca-YFP (C), or Thy1-CFP expression (D) 28 days after SNItrans in the L4 DRG, contralateral (top) or ipsilateral (bottom) to injury. Images are projections of optical sections at 3-µm intervals through the entirety of 30-µm-thick tissue sections. Scale bars = 100 µm. (E–G) Quantification of the proportion of FB-labelled neurons also expressing Trpm8-tdTom (E), Calca-YFP (F), or Thy1-CFP (G) in L4 DRG contralateral or ipsilateral to SNItrans. Paired t tests; Trpm8-tdTom: t2 = 5.31, P = 0.034, n = 3 mice; Calca-YFP: t3 = 4.12, P = 0.026, n = 4 mice; Thy1-CFP: t3 = 4.42, P = 0.022, n = 4 mice. *P < 0.05. CFP, cyan fluorescent protein; CGRP, calcitonin gene-related peptide; DRG, dorsal root ganglion; FB, FastBlue.\n\nby a population of small-diameter, putative cold-sensitive neurons (Fig. 4B), accounting for 8.3 ± 0.27% of FB-labelled neurons in contralateral DRG. This decreased to 4.2 ± 0.96% ipsilateral to SNItrans injury (Fig. 4E), indicating a partial loss of Trpm8+ afferents. 
When examining peptidergic afferents, we found that 48.1 ± 2.42% of FB-labelled neurons in contralateral DRG were Calca-YFP+, compared with 34.3 ± 2.54% 4 weeks after SNItrans injury (Figs. 4C and F), consistent with a partial loss of CGRP+ afferents. We used a Thy1-CFP line that demonstrates consistent expression postinjury61 and labels a sample of medium/large diameter myelinated afferents. CFP was largely restricted to NF200+ neurons, labelling 56% of this population. Expression was present in a heterogeneous population of nociceptive (TrkA+) and nonnociceptive (TrkA-) myelinated neurons (Fig. S5, http://links.lww.com/PAIN/C84). Contralateral to injury, 15.6 ± 1.8% of FB-labelled neurons expressed Thy1-CFP (Figs. 4D and G). In contrast to unmyelinated subpopulations, this proportion was higher in ipsilateral DRG following SNItrans (23.3 ± 3.2%), consistent with no (or minimal) loss of Thy1-CFP-expressing afferents, accompanied by a loss of Thy1-CFP-negative neurons. We did not observe significant alterations in the population distributions of the cross-sectional area of surviving, damaged Trpm8-tdTom+, Calca-YFP+, or Thy1-CFP+ DRG neurons when compared with DRG contralateral to", "page_start": 8, "page_end": 8, "source_file": "pubmed2.pdf" }, { "text": "neuron loss after nerve injury and to test the hypothesis that loss is not equally distributed across molecular populations.\n\n# 2. Methods\n\n#### 2.1. Animals\n\nMice were housed in groups in humidity- and temperature-controlled rooms with free access to food and water, on a 12-hour light–dark cycle, and with environmental enrichment. Animal procedures were performed under a UK Home Office Project Licence and in accordance with the UK Home Office (Scientific Procedures) Act (1986). All studies were approved by the Ethical Review Process Applications Panel of the University of Glasgow or Oxford and conform to the ARRIVE guidelines. 
Experiments were performed on adult male and female mice aged 7 to 16 weeks at the start of the experiments. All experimental cohorts contained a mix of male and female mice, apart from the cohort of MrgprdCreERT2;Ai32 mice that underwent SNIcrush surgery, which was exclusively female. Details of transgenic lines are provided in Table 1. Tamoxifen was administered by i.p. injection of 20 mg/mL tamoxifen (Sigma-Aldrich) dissolved in wheat germ oil (doses described in Table 1). There were 2 instances where animals were excluded from data analysis: One Thy1-CFP (cyan fluorescent protein) animal died of unknown causes not related to the procedure and before the experimental endpoint, and one MrgDCreERT2;Ai32 exhibited no fluorophore expression and was therefore deemed to have been incorrectly genotyped. Group sizes were based on the extent of neuronal loss 28 d following sciatic nerve transection identified by Shi et al.50 Given α = 0.05, power = 0.8, and an effect size of 4.81, power analysis projects that a group size of 3 mice would be needed.\n\n#### 2.2. Spared nerve transection and crush surgeries\n\nSpared nerve injury (transection of the common peroneal and tibial branches of the sciatic nerve; SNItrans) and common peroneal and tibial crush injury (SNIcrush), in which nerve axons were severed but the epineurium remained intact, were performed as previously described.12 Anesthesia was induced with 3% to 5% isoflurane and then maintained at 1.5% to 2% as required. Analgesia, consisting of carprofen (10 mg/kg) and buprenorphine (0.05 mg/kg) (Glasgow) or carprofen (5 mg/kg) and local bupivacaine (2 mg/kg) (Oxford) was provided perioperatively. The left hindpaw was secured with tape in hip abduction, and the operative field (lateral surface of the thigh) was shaved. Ophthalmic ointment was applied to the eyes, and the shaved area was swabbed with chlorhexidine solution. A longitudinal incision was made in the skin at the lateral mid-thigh. 
Using blunt dissection, an opening was made through the biceps femoris, exposing the sciatic nerve and the 3 peripheral branches (sural, tibial, and common peroneal nerves). For SNItrans, the common peroneal and tibial nerves were ligated using a 6-0 Vicryl suture (Ethicon, Raritan, NJ), and a 1- to 2-mm piece distal to the suture was removed using spring scissors. For SNIcrush, the exposed tibial and common peroneal nerves were clamped using a pair of fine hemostats (Fine Science Tools, Heidelberg, Germany) closed to their second clip, leaving the nerve branches intact but translucent. The muscle was closed with one 6-0 Vicryl suture (Ethicon), and the skin incision was closed with one 10 mm wound clip (Alzet, Cupertino, CA). Animals were monitored daily for self-mutilation, and no animals required sacrifice due to tissue damage.\n\n#### Table 1\n\n#### Transgenic lines used in the study.\n\n| Used name | Full name | Putative population | Ref | Source | Tamoxifen regime |\n| --- | --- | --- | --- | --- | --- |\n| Atf3CreERT2 | Atf3tm1.1(cre/ERT2)Msra | Axotomised afferents | 13 | Gift: Dr Franziska Denk | 50 mg/kg on days 0, 3, and 7 after surgery |\n| AvilFlpO | Aviltm1(flpo)Ddg | Sensory neurons | 1 | Gift: Prof David Ginty | N.A. |\n| MrgDCreERT2 | Mrgprdtm1.1(cre/ERT2)Wql | Major class of nonpeptidergic | 39 | The Jackson Laboratory (RRID: | General: 1x 50 mg/kg in adulthood, (.1 week |\n| | | neurons | | IMSR_JAX:031286) | before experiment) |\n| | | | | | 3D volumetric analysis: 5x i.p. (0.5 mg/animal/ |\n| | | | | | day), beginning between P10 and P17 |\n| MrgDChR2- | Mrgprdtm4.1(COP4)Mjz | Major class of nonpeptidergic | 59 | Mutant Mouse Resource & Research | N.A. |\n| YFP | | neurons | | Centers (RRID:MMRRC_036112-UNC) | |\n| CalcaCreERT2 | Calcatm1.1(cre/ERT2)Ptch | Peptidergic neurons | 51 | Gift: Prof Pao-Tien Chuang | 1x 75 mg/kg in adulthood (.1 week before |\n| | | | | | experiment) |\n| Trpm8FlpO | | Cold afferents | 4 | Gift: Dr Mark Hoon | N.A. 
|\n| Thy1-CFP | B6.Cg-Tg(Thy1-CFP) | Sample of myelinated afferents | 16 | The Jackson Laboratory (RRID: | N.A. |\n| | 23Jrs/J | | | IMSR_JAX:003710) | |\n| ThCreERT2 | Thtm1.1(cre/ERT2)Ddg/J | C low threshold | 1 | Gift: Prof David Ginty; The Jackson | 1x 50 mg/kg in adulthood (>2 weeks before |\n| | | mechanoreceptors | | Laboratory (RRID:IMSR_JAX:025614) | experiment) |\n| RC::FLTG | B6.Cg- Gt(ROSA) | Flp-mediated tdTomato; | 40 | The Jackson Laboratory (RRID: | N.A. |\n| | tm1.3(CAG-tdTomato,- 26Sor | Cre+Flp-mediated GFP | | IMSR_JAX:026932) | |\n| | EGFP)Pjen /J | expression | | | |\n| Ai14 | B6.Cg- Gt(ROSA) | Cre-mediated tdTomato | 33 | The Jackson Laboratory (RRID: | N.A. |\n| | tm14(CAG-tdTomato)Hze 26Sor / | expression | | IMSR_JAX:007914) | |\n| J | | | | | |\n| Ai32 | B6.Cg- Gt(ROSA) | Cre-mediated ChR2-eYFP | 32 | The Jackson Laboratory (RRID: | N.A. |\n| | tm32(CAG 26Sor | expression | | IMSR_JAX:024109) | |\n| | COP4*H134R/EYFP)Hze | | | | |\n\nCFP, cyan fluorescent protein; GFP, Green fluorescent protein; YFP, yellow fluorescent protein.", "page_start": 1, "page_end": 1, "source_file": "pubmed2.pdf" }, { "text": "Figure 3. Spared nerve crush or transection results in death of nonpeptidergic neurons. (A) Schematic of experimental approach for (B and C). (B) MrgDChR2-YFP L4 DRGs 4 weeks after SNI, contralateral or ipsilateral to injury. Images are projections of optical sections at 3-μm intervals through the entirety of 30-μm-thick tissue sections. Scale bars = 100 μm. (C) Quantification of total number of MrgD-YFP+ cells per L4 DRG 4 weeks after SNI revealed a significant loss in ipsilateral DRG. Two-way RM ANOVA with Šídák multiple comparisons tests; Side × Treatment interaction: F1,5 = 9.23, P = 0.029; n = 3 mice. (D) The experimental approach used to generate data presented in (E–G). (E and F) MrgD-YFP expression and FB labelling in the L4 DRG, 14 days after SNI or crush surgery or contralateral to injury. 
White boxes represent regions enlarged in (F). Scale bars = 100 μm (E) or 20 μm (F). (G) The proportion of FB-labelled DRG neurons decreased after spared nerve crush injury, and co-labelling is almost completely absent after SNI. Two-way RM ANOVA with Šídák multiple comparisons tests; side × injury interaction: F1,4 = 7.80, P = 0.049; n = 3 mice. Posttests: *P < 0.05, **P < 0.01. ANOVA, analysis of variance; DRG, dorsal root ganglion; SNI, spared nerve injury; FB, FastBlue; RM, repeated measures.", "page_start": 7, "page_end": 7, "source_file": "pubmed2.pdf" }, { "text": "(2) Sweepback will reduce the magnitude of change in the aerodynamic force coefficients due to compressibility. Any change in drag, lift, or moment coefficients will be reduced by the use of sweepback. Various sweep angles applied to wings of moderate aspect ratio will produce these approximate effects in transonic flight.\n\n| | |\n| --- | --- |\n| 0° | 0 |\n| 15° | 5 |\n| 30° | 15 |\n| 45° | 35 |\n| 60° | 60 |\n\nThese advantages of drag reduction and preservation of the transonic maximum lift coefficient are illustrated in figure 3.14.\n\nThus, the use of sweepback on a transonic aircraft will reduce and delay the drag rise and preserve the maneuverability of the aircraft in transonic flight. It should be noted that a small amount of sweepback produces very little benefit. If sweepback is to be used at all, at least 30° to 35° must be used to produce any significant benefit. Also note from figure 3.14 that the amount of sweepback required to delay drag rise in supersonic flight is very large, e.g., more than 60° necessary at M=2.0. By comparison of the drag curves at high Mach numbers it will be appreciated that extremely high (and possibly impractical) sweepback is necessary to delay drag rise and that the lowest drag is obtained with zero sweepback. 
Therefore, the planform of a wing designed to operate continuously at high Mach numbers will tend to be very thin, low aspect ratio, and unswept. An immediate conclusion is that sweepback is a device of greatest application in the regime of transonic flight.\n\nA few of the less significant advantages of sweepback are as follows:\n\n(1) The wing lift curve slope is reduced for a given aspect ratio. This is illustrated by the lift curve comparison of figure 3.15 for the straight and swept wing. Any reduction of lift curve slope implies the wing is less sensitive to changes in angle of attack. This is a beneficial effect only when the effect of gusts and turbulence is considered. Since the swept wing has the lower lift curve slope it will be less sensitive to gusts and experience less \"bump\" due to gust for a given aspect ratio and wing loading. This is a consideration particular to the aircraft whose structural design shows a predominating effect of the gust load spectrum, e.g., transport, cargo, and patrol types.\n\n(2) \"Divergence\" of a surface is an aeroelastic problem which can occur at high dynamic pressures. Combined bending and twisting deflections interact with aerodynamic forces to produce sudden failure of the surface at high speeds. Sweep forward will aggravate this situation by \"leading\" the wing into the windstream and tends to lower the divergence speed. On the other hand, sweepback tends to stabilize the surface by \"trailing\" and tends to raise the divergence speed. By this tendency, sweepback may be beneficial in preventing divergence within the anticipated speed range.\n\n(3) Sweepback contributes slightly to the static directional- or weathercock-stability of an aircraft. This effect may be appreciated by inspection of figure 3.13 which shows the swept wing in a yaw or sideslip. The wing into the wind has less sweep and a slight increase in drag; the wing away from the wind has more sweep and less drag. 
The net effect of these force changes is to produce a yawing moment tending to return the nose into the relative wind. This directional stability contribution is usually small and of importance in tailless aircraft only.", "page_start": 246, "page_end": 246, "source_file": "00-80T-80.pdf" }, { "text": "**Fig. 3 | Subcortical GMV changed throughout gestation. a**, Multivariate regression analyses revealed largely negative relationships between gestation week and subcortical GMV regions over pregnancy, including bilateral thalamus, caudate, hippocampus, ventral diencephalon (encompassing hypothalamus, substantia nigra, mammillary body and red nucleus) and left caudate. Lateral ventricles displayed the only positive relationships with gestation week (also depicted in Fig. 1d). The whole-brain subcortical GMV estimates shown here were derived via FreeSurfer and 'aseg' subcortical segmentation. FDR-corrected at *q* < 0.05. Inset, right ventral diencephalon displayed the strongest negative association with gestation (left; baseline—36 weeks, 19 scans) and did not return to baseline postpartum (right; gestation and postpartum, 26 scans). **b**, The participant's hippocampus and surrounding cortex were segmented into seven bilateral subregions. Quadratic (CA1, CA2/CA3) and linear regression analyses (PHC) revealed subfields were negatively associated with gestation week (baseline—36 weeks, 18 scans) and did not return to baseline postpartum (gestation and postpartum, 25 scans). Shaded regions in scatterplots represent a 95% confidence interval. Each boxplot represents IQR for each stage, with a horizontal line representing the median value. The whiskers indicate variability outside (±1.5) of this range. Outside values are >1.5× and <3× IQR beyond either end of the box. FDR-corrected at *q* < 0.05. For **a** and **b**, nonsignificant regions were set to zero for interpretability. See Supplementary Fig. 6 for complete labeling of regions in both segmentations. 
Brain visualizations created with R package ggseg48*.* DC, diencephalon.\n\noutstanding questions. This study and corresponding open-access dataset offer neuroscientists a detailed map of the human brain across gestation, a resource for which a wide range of previously unattainable neurobiological questions can now be explored.\n\nOur findings from this precision imaging study show that pregnancy is characterized by reductions in GMV, cortical thinning and enhanced white matter microstructural integrity that unfold week by week. These changes were also tied to the significant rise in steroid hormone concentrations over pregnancy. Some of these changes persist at 2 years postpartum (for example, global reductions in GMV and CT), while others, including markers of white matter integrity, appear to be transient. Ventricular expansion and contraction parallel these cortical changes. These widespread patterns, and the notable increase in CSF volume across gestation, could reflect increased water retention and subsequent compression of cortical tissue. However, the persistence of these changes at 2 years postpartum and regional variation in GMV, CT and QA, hint at cellular underpinnings, such as alterations in glia or neuron number, synaptic density and myelination (for review on the latter, see ref. 4). Future studies of the relationship between fluid dynamics and volumetric changes will help clarify the factors that drive global neural changes during pregnancy; such insights will have broad implications for maternal health (for example, neurological effects tied to pre-eclampsia or edema).\n\nCritically, dynamic neural changes occurred within the pregnancy window itself, a nuance not captured by studies limited to comparisons between prepregnancy and postpregnancy. 
For example, we observed large increases in white matter microstructural integrity (QA) throughout the first and second trimesters of pregnancy, but these measures fully returned to baseline values by the first postpartum scan. This pattern may explain why previous studies report no pregnancy-related differences in white matter tractography14. Other measures, such as GMV and CT, decreased throughout gestation and displayed only a modest rebound postpartum. These nonlinear patterns suggest that only quantifying prepregnancy and postpartum brain structure may", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "legal5_eubiodiversity_cc4.pdf", - "query": "What are the EU's key nature conservation commitments for 2030?", - "target_page": 6, - "target_passage": "1. Legally protect a minimum of 30% of the EU’s land area and 30% of the EU’s sea area and integrate ecological corridors, as part of a true Trans-European Nature Network. 2. Strictly protect at least a third of the EU’s protected areas, including all remaining EU primary and old-growth forests. 3. Effectively manage all protected areas, defining clear conservation objectives and measures, and monitoring them appropriately.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "States and the European Environment Agency, will put forward in 2020 criteria and guidance for identifying and designating additional areas, including a definition of strict protection, as well as for appropriate management planning. In doing so, it will indicate how other effective area-based conservation measures and greening of cities could contribute to the targets.\n\nThe targets relate to the EU as a whole and could be broken down according to the EU bio-geographical regions and sea basins or at a more local level. 
**Every Member State will have to do its fair share of the effort** based on objective ecological criteria, recognising that each country has a different quantity and quality of biodiversity. Particular focus will be placed on protecting and restoring the tropical and sub-tropical marine and terrestrial ecosystems in the EU's outermost regions given their exceptionally high biodiversity value.\n\nIn addition, in order to have a truly coherent and resilient Trans-European Nature Network, it will be important to set up **ecological corridors** to prevent genetic isolation, allow for species migration, and maintain and enhance healthy ecosystems. In this context, investments in green and blue infrastructure27 and cooperation across borders among Member States should be promoted and supported, including through the European Territorial Cooperation.\n\nThe Commission will aim to agree the criteria and guidance for additional designations with Member States by the end of 2021. Member States will then have until the end of 2023 to demonstrate significant progress in legally designating new protected areas and integrating ecological corridors. On this basis, the Commission will assess by 2024 whether the EU is on track to meet its 2030 targets or whether stronger actions, including EU legislation, are needed.\n\nFinally, the **Overseas Countries and Territories** also host important biodiversity hotspots, not governed by EU environmental rules. The Commission encourages relevant Member States to consider promoting equal or equivalent rules in these countries and territories.\n\n#### **Nature protection: key commitments by 2030**\n\n- 1. Legally protect a minimum of 30% of the EU's land area and 30% of the EU's sea area and integrate ecological corridors, as part of a true Trans-European Nature Network.\n- 2. Strictly protect at least a third of the EU's protected areas, including all remaining EU primary and old-growth forests.\n- 3. 
Effectively manage all protected areas, defining clear conservation objectives and measures, and monitoring them appropriately.\n\n<sup>27</sup> Guidance on a strategic framework for further supporting the deployment of EU-level green and blue infrastructure (SWD(2019) 193).", - "page_start": 5, - "page_end": 5, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "encouraging cooperation in **education for environmental sustainability** in 2021. This will provide guidance for schools and teachers on how to cooperate and exchange experiences across Member States on biodiversity teaching. The Commission will also provide support materials and facilitate the exchange of good practices in EU networks of teacher-training programmes.\n\n### **4. THE EUROPEAN UNION FOR AN AMBITIOUS GLOBAL BIODIVERSITY AGENDA**\n\nBiodiversity is a priority of the EU's external action and an integral part of efforts to meet the United Nations Sustainable Development Goals. It will be mainstreamed throughout bilateral and multilateral engagements, through the EU's 'Green Deal diplomacy', and forthcoming green alliances76. The Commission will work closely with the European Parliament and Member States to ensure a high level of EU ambition and mobilise all efforts for the good of the world's biodiversity.\n\n# **4.1. Raising the level of ambition and commitment worldwide**\n\nProtecting biodiversity is a global challenge and the next decade will be decisive. Global efforts under the United Nations Convention on Biological Diversity have largely been insufficient. 
Nature cannot afford any half measures or lack of ambition.\n\nIn this spirit, the EU is ready to lead all efforts – working with like-minded partners in **a high-ambition coalition on biodiversity** – to agree an ambitious new global framework for post-2020 at the upcoming 15th Conference of the Parties to the Convention on Biological Diversity.\n\nWith this strategy, the Commission proposes ambitious commitments for the EU to bring to the table. The EU should also support governments and stakeholders across the globe to significantly step up their ambition and their action.\n\nThe Commission proposes that the EU ensures that the post-2020 global framework includes, at a minimum, the elements outlined below:\n\n- Overarching global goals for biodiversity for 2050, in line with the United Nations 2030 Agenda for Sustainable Development and the vision of 'living in harmony with nature'. The ambition should be that, **by 2050, all of the world's ecosystems are restored, resilient, and adequately protected.** The world should commit to the net-gain principle to give nature back more than it takes. The world should commit to no human-induced extinction of species, at minimum where avoidable.\n- Ambitious **global 2030 targets in line with EU commitments** in this strategy. These should clearly address the drivers of biodiversity loss and be specific, measurable, actionable, relevant and time-bound.\n- A much **stronger implementation, monitoring and review** process. Parties should revise their National Biodiversity Strategies and Action Plans by the end of 2021, or as a minimum, submit national commitments for the most important targets. There should be a **regular review cycle** to look at progress towards the\n\n<sup>76</sup> Green alliances focus on cooperation with African and other partners to implement the European Green Deal.", - "page_start": 19, - "page_end": 19, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "policies. 
In addition, by integrating policy coherence for sustainable development in all its policies, the EU will reduce the pressure on biodiversity worldwide. In all of its international cooperation, the EU should promote sustainable agricultural and fisheries practices and actions to protect and restore the world's forests. Particular attention will also be paid to sustainable water resource management, the restoration of degraded land, and the protection and restoration of biodiverse areas with high ecosystem services and climate mitigation potential. A better protection of natural ecosystems, coupled with efforts to reduce wildlife trade and consumption, will also help prevent and build up resilience to possible future diseases and pandemics. The EU will enhance its support to global efforts to apply the **One Health approach**83, which recognises the intrinsic connection between human health, animal health and healthy resilient nature.\n\nThe EU will step up support to partner countries across the world to achieve the new global targets, fight environmental crime, and tackle the drivers of biodiversity loss. In Africa, the EU will launch the **NaturAfrica** initiative to protect wildlife and key ecosystems while offering opportunities in green sectors for local populations. Similar projects will be developed in other regions. The EU will also support the Western Balkans and EU Neighbourhood countries in their efforts to protect biodiversity.\n\nIn all of its work, the EU will strengthen the links between **biodiversity protection and human rights**, gender, health, education, conflict sensitivity, the rights-based approach, land tenure and the role of indigenous peoples and local communities.\n\nAs part of its global efforts, the EU will promote biodiversity coalitions with partners and civil society around the world. 
For example, in March 2020, the Commission launched the **Global Biodiversity Coalition** of national parks, aquariums, botanic gardens, zoos, natural history and science museums to help raise awareness around the world on the need to protect and nurture biodiversity. The Commission will consider launching or joining other High Ambition Coalitions to help develop the post-2020 framework.\n\n### **5. CONCLUSION**\n\nProtecting and restoring biodiversity is the only way to preserve the quality and continuity of human life on Earth. The commitments proposed in this strategy pave the way for ambitious and necessary changes – changes that will ensure the wellbeing and economic prosperity of present and future generations in a healthy environment. The implementation of these commitments will take into account the diversity of challenges across sectors, regions and Member States, recognise the need to ensure social justice, fairness and inclusiveness in line with the European Pillar of Social Rights, and will require a sense of responsibility and strong joint efforts from the EU, its Member States, stakeholders and citizens.\n\nThe Commission invites the European Parliament and the Council to endorse this strategy ahead of the 15th Conference of the Parties to the Convention on Biological Diversity. To ensure full political ownership of this strategy, the Commission will suggest a standing progress point at the Council and at the European Parliament. It will review the strategy by 2024 to assess progress and whether further action is needed to meet its objectives.\n\n<sup>83</sup> https://www.who.int/features/qa/one-health/en/", "page_start": 22, "page_end": 22, "source_file": "legal5_eubiodiversity_cc4.pdf" }, { "text": "build on the headline ambition to ensure that by 2050 **all of the world's ecosystems are restored, resilient, and adequately protected.** The world should commit to the net-gain principle to give nature back more than it takes. 
As part of this, the world should commit to no human-induced extinction of species, at minimum where avoidable.\n\nThis strategy sets out how Europe can help make this happen. As a milestone, it aims to ensure that **Europe's biodiversity will be on the path to recovery by 2030** for the benefit of people, the planet, the climate and our economy, in line with the 2030 Agenda for Sustainable Development and with the objectives of the Paris Agreement on Climate Change. It addresses the five main drivers of biodiversity loss, sets out an enhanced governance framework to fill remaining gaps, ensures the full implementation of EU legislation, and pulls together all existing efforts. This strategy is enterprising and incentivising in spirit and action. It reflects the fact that **protecting and restoring nature will need more than regulation alone**. It will require action by citizens, businesses, social partners and the research and knowledge community, as well as strong partnerships between local, regional, national and European level. This strategy is in line with the ambitions and commitment set out in President von der Leyen's Political Guidelines and in the European Green Deal.\n\nAdopted in the heart of the COVID-19 pandemic, this strategy will also be a central element of the EU's recovery plan. It will be crucial to prevent and build resilience to future zoonosis outbreaks and to provide immediate business and investment opportunities for restoring the EU's economy.\n\nAll new initiatives and proposals will be underpinned by the Commission's better regulation tools. Based on public consultations and on the identification of the environmental, social and economic impacts, impact assessments will contribute to ensuring that all initiatives achieve their objectives in the most effective and least burdensome way and live up to a green oath to \"do no harm\".\n\n### **2. 
PROTECTING AND RESTORING NATURE IN THE EUROPEAN UNION**\n\nThe EU has legal frameworks, strategies and action plans to protect nature and restore habitats and species. But protection has been incomplete, restoration has been small-scale, and the implementation and enforcement of legislation has been insufficient17.\n\nTo put biodiversity on the path to recovery by 2030, we need to step up the protection and restoration of nature. This should be done by improving and **widening our network of protected areas** and by developing an ambitious **EU Nature Restoration Plan**.\n\n#### **2.1. A coherent network of protected areas**\n\nBiodiversity fares better in protected areas. However, the current network of legally protected areas, including those under strict protection, is not sufficiently large to safeguard biodiversity. Evidence shows that the targets defined under the Convention on Biological Diversity are insufficient to adequately protect and restore nature18. Global\n\n<sup>17</sup> Mid-term review of the EU Biodiversity Strategy to 2020 (COM(2015) 478 and SWD(2015) 187); Fitness Check of the EU Nature Legislation (Birds and Habitats Directives) (SWD(2016) 472); Fitness Check of the EU Water Legislation (SWD(2019) 439).\n\n<sup>18</sup> The global Aichi biodiversity targets are that protected areas should cover 17% on land and 10% at sea, while scientific studies' figures range from 30% to 70%. See e.g. IPBES 2019.", "page_start": 3, "page_end": 3, "source_file": "legal5_eubiodiversity_cc4.pdf" }, { "text": "- 9. There is a 50% reduction in the number of Red List species threatened by invasive alien species.\n- 10. The losses of nutrients from fertilisers are reduced by 50%, resulting in the reduction of the use of fertilisers by at least 20%.\n- 11. Cities with at least 20,000 inhabitants have an ambitious Urban Greening Plan.\n- 12. No chemical pesticides are used in sensitive areas such as EU urban green areas.\n- 13. 
The negative impacts on sensitive species and habitats, including on the seabed through fishing and extraction activities, are substantially reduced to achieve good environmental status.\n- 14. The by-catch of species is eliminated or reduced to a level that allows species recovery and conservation.\n\n### **3. ENABLING TRANSFORMATIVE CHANGE**\n\n### **3.1. A new governance framework**\n\nIn the EU, there is currently no comprehensive governance framework to steer the implementation of biodiversity commitments agreed at national, European or international level. To address the gap, the Commission will put in place **a new European biodiversity governance framework**. This will help map obligations and commitments and set out a roadmap to guide their implementation.\n\nAs part of this new framework, the Commission will put in place a monitoring and review mechanism. This will include a **clear set of agreed indicators** and will enable regular progress assessment and set out corrective action if necessary. This mechanism will feed the Environmental Implementation Review and contribute to the European Semester.\n\nThe new governance framework will ensure co-responsibility and co-ownership by all relevant actors in meeting the EU's biodiversity commitments. It will support administrative capacity building, transparency, stakeholder dialogue, and participatory governance at different levels.\n\nThe Commission will assess the progress and suitability of this approach in 2023, and consider whether a legally binding approach to governance is needed.\n\n## **3.2. Stepping up implementation and enforcement of EU environmental legislation**\n\nAll environmental legislation relies on proper implementation and enforcement. Over the last 30 years, the EU has put in place a solid legislative framework to protect and restore its natural capital. However, recent evaluations show that although legislation is fit for purpose, implementation on the ground is lagging behind60. 
This is having dramatic consequences on biodiversity and comes with a substantial economic cost61 . **The full implementation and enforcement of EU environmental legislation is therefore at the heart of this strategy**, for which political support and financial and human resources will need to be prioritised.\n\n<sup>60</sup> See 2015 State of Nature in the EU report (COM (2015)219).\n\n<sup>61</sup> The costs of non-implementation are estimated at EUR 50 billion per year.", - "page_start": 15, - "page_end": 15, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "### **2.2. An EU Nature Restoration Plan: restoring ecosystems across land and sea**\n\nProtecting the nature we have will not be enough to bring nature back into our lives. To reverse biodiversity loss, the world needs to be more ambitious on nature restoration. With a **new EU Nature Restoration Plan**, Europe will lead the way.\n\nThe plan will help improve the health of existing and new protected areas, and bring diverse and resilient nature back to all landscapes and ecosystems. This means reducing pressures on habitats and species, and ensuring all use of ecosystems is sustainable. It also means supporting the recovery of nature, limiting soil sealing and urban sprawl, and tackling pollution and invasive alien species. The plan will create jobs, reconcile economic activities with nature growth and help ensure the long-term productivity and value of our natural capital.\n\n#### *2.2.1. Strengthening the EU legal framework for nature restoration*\n\nNature restoration is already partially required from the Member States in existing EU legislation28 . However, **significant implementation and regulatory gaps hinder progress**. For instance, there is no requirement for Member States to have biodiversity restoration plans. There are not always clear or binding targets and timelines and no definition or criteria on restoration or on the sustainable use of ecosystems. 
There is also no requirement to comprehensively map, monitor or assess ecosystem services, health or restoration efforts. These issues are exacerbated by the gaps in implementation that prevent the existing legislation from achieving its objectives29 . Stronger implementation support and enforcement is required. To ensure that nature restoration across land and sea picks up, increases the EU's resilience, and contributes to climate change mitigation and adaptation as a key nature-based solution, this strategy puts forward two strands of actions:\n\n- Firstly, and subject to an impact assessment, the Commission will put forward a proposal for legally binding **EU nature restoration targets** in 2021 to restore degraded ecosystems, in particular those with the most potential to capture and store carbon and to prevent and reduce the impact of natural disasters. This will identify the conditions in which the targets must be met, as well as the most effective measures to reach them. The impact assessment will also look at the possibility of an EU-wide methodology to map, assess and achieve good condition of ecosystems so they can deliver benefits such as climate regulation, water regulation, soil health, pollination and disaster prevention and protection.\n- In that context, the Commission will request and support Member States to raise the level of implementation of existing legislation within clear deadlines. It will in particular request Member States to ensure **no deterioration in conservation trends and status** of all protected habitats and species by 203030. 
In addition, Member States will have to ensure that at least 30% of species and habitats not\n\n<sup>28</sup> Notably the EU Birds Directive (2009/147/EC), Habitats Directive (92/43/EEC), Water Framework Directive (2000/60/EC), Floods Directive (2007/60/EC) and Marine Strategy Framework Directive (2008/56/EC).\n\n<sup>29</sup> See Fitness Check of the EU Nature Legislation (SWD(2016) 472) and Fitness Check of the EU Water Legislation (SWD(2019) 439). See also below, Section 3.2.\n\n<sup>30</sup> Habitats and species listed under the Birds and Habitats Directives.", - "page_start": 6, - "page_end": 6, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "#### *3.3.2. Investments, pricing and taxation*\n\nTackling biodiversity loss and restoring ecosystems will require significant public and private investments at national and European level. This will mean making the most of all relevant EU programmes and financing instruments. The Commission will strengthen its **biodiversity proofing framework**69 , *inter alia* by using in an appropriate way the criteria established under the EU taxonomy, to ensure that EU funding supports biodiversity-friendly investments.\n\nTo meet the needs of this strategy, including investment priorities for Natura 2000 and green infrastructure, **at least €20 billion a year70 should be unlocked for spending on nature**. This will require mobilising private and public funding at national and EU level71, including through a range of different programmes in the next long-term EU budget. Moreover, as nature restoration will make a major contribution to climate objectives, a significant proportion of the 25% of the EU budget dedicated to climate action will be invested on biodiversity and nature-based solutions.\n\nUnder Invest EU, a dedicated natural-capital and circular-economy initiative will be established to mobilise at least €10 billion over the next 10 years, based on public/private blended finance. 
Nature and biodiversity is also a priority for the European Green Deal Investment Plan. To help unlock the investment needed, the EU must provide long-term certainty for investors and help embed sustainability in the financial system. The EU **sustainable finance taxonomy** will help guide investment towards a green recovery and the deployment of nature-based solutions. In 2021, the Commission will adopt a delegated act under the Taxonomy Regulation72 to establish a common classification of economic activities that substantially contribute to protecting and restoring biodiversity and ecosystems. This will be further supported by a **Renewed Sustainable Finance Strategy** later this year which will help ensure that the financial system contributes to mitigating existing and future risks to biodiversity and better reflect how biodiversity loss affects companies' profitability and long-term prospects73 .\n\nThe Commission will further promote tax systems and pricing that reflect environmental costs, including biodiversity loss. This should encourage changes in national fiscal systems to shift the tax burden from labour to pollution, under-priced resources, and other environmental externalities. The '**user pays' and 'polluter pays' principles** have to be applied to prevent and correct environmental degradation.\n\nPublic authorities' purchasing power represents 14% of EU GDP and can serve as a powerful driver of demand for the products and services of companies that invest in or contribute to nature-based solutions. To tap into this potential, when proposing further\n\n<sup>69</sup> See Common framework and guidance documents for biodiversity proofing of the EU budget.\n\n<sup>70</sup> The cost estimate is based on the 2018 Impact Assessment of the LIFE Regulation (SWD(2018) 292), a Study on the costs of implementing the Target 2 of the EU Biodiversity Strategy to 2020 and data submitted by 16 Member States under Article 8(1) of the Habitats Directive. 
The Commission will update the estimate, notably based on Member States' Prioritised Action Frameworks under the Habitats Directive.\n\n<sup>71</sup> Including the Common Agricultural Policy, Cohesion Policy funds, Horizon Europe, the European Maritime and Fisheries Fund, LIFE and external action funds.\n\n<sup>72</sup> See EU taxonomy for sustainable activities.\n\n<sup>73</sup> World Wildlife Fund (2019), The Nature of Risk – A Framework for Understanding Nature-Related Risk to Business.", - "page_start": 17, - "page_end": 17, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "principle79 and taking into account the call of the European Parliament80 . In parallel, the EU will continue to fund research on the impact of deep-sea mining activities and on environmentally-friendly technologies. The EU should also advocate for more transparency in international bodies such as the International Seabed Authority.\n\n#### *4.2.2. Trade policy*\n\n**Trade policy will actively support and be part of the ecological transition**. In this spirit, the Commission will ensure full implementation and enforcement of the biodiversity provisions in all trade agreements, including through the EU Chief Trade Enforcement Officer. The Commission will better assess the impact of trade agreements on biodiversity, with follow-up action to strengthen the biodiversity provisions of existing and new agreements if relevant. The Commission will also present in 2021 a legislative proposal and other measures to avoid or minimise the placing of products associated with deforestation or forest degradation on the EU market81, and to promote forest-friendly imports and value chains. The Commission will take a number of steps to **crack down on illegal wildlife trade**. This trade contributes to the depletion or extinction of entire species, is the world's fourth most lucrative black market and is thought to be one of the causes behind the emergence of zoonotic diseases. 
It is a human, economic and environmental duty to dismantle it.\n\nWith this in mind, the Commission will revise the EU Action Plan against Wildlife Trafficking in 2021 and propose a further **tightening of the rules on EU ivory trade** later this year. It will explore a possible revision of the Environmental Crime Directive, including by looking at expanding its scope and introducing specific provisions for types and levels of criminal sanctions. It will consider strengthening the coordinating and investigative capacities of the European Anti-Fraud Office (OLAF) to work with Member States and non-EU countries to prevent illicit trade and the entry of illicit products into the Single Market.\n\nThe Commission will continue to engage with partner countries to ensure a smooth and fair transition, mobilising in particular Aid for Trade to ensure that partners reap the benefits of biodiversity-friendly trade.\n\n### *4.2.3. International cooperation, neighbourhood policy and resource mobilisation*\n\nDelivering an ambitious post-2020 global biodiversity framework will require greater cooperation with partners, increased support and financing and phasing out of subsidies harmful to biodiversity. In the last decade, the EU and its Member States collectively upheld their commitment to **double financial flows to developing countries for biodiversity**82. The EU is ready to continue working with its partners and further increase its support post-2020. 
This will be part of its work on biodiversity conservation, restoration, sustainable use and mainstreaming in all development and partnership\n\n<sup>79</sup> Under Article 191.2 TFEU, the Union policy on the environment shall aim at a high level of protection and shall be based on the precautionary principle.\n\n<sup>80</sup> European Parliament Resolution on international ocean governance (2017/2055(INI)).\n\n<sup>81</sup> In line with the Commission Communication on Stepping up EU Action to Protect and Restore the World's Forests (COM(2019) 352).\n\n<sup>82</sup> Including international financing where biodiversity is the principal objective and where it is a significant secondary objective, in line with CBD COP11 Decision XI/4 and EU and Member States financial reports submitted to the Convention on Biological Diversity in 2015 and 2018.", - "page_start": 21, - "page_end": 21, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "climate change, the effects of erosion and losses of soil organic carbon are becoming increasingly apparent. Desertification is also a growing threat in the EU35 .\n\nIt is therefore essential to step up efforts to **protect soil fertility, reduce soil erosion and increase soil organic matter**. This should be done by adopting sustainable soil management practices, including as part of the CAP. Significant progress is also needed on identifying contaminated soil sites, restoring degraded soils, defining the conditions for their good ecological status, introducing restoration objectives, and improving the monitoring of soil quality.\n\nTo address these issues in a comprehensive way and help to fulfil EU and international commitments on land-degradation neutrality, the Commission will update the **EU Soil Thematic Strategy**36 in 2021. The **Zero Pollution Action Plan for Air, Water and Soil** that the Commission will adopt in 2021 will also look at these issues. 
Soil sealing and rehabilitation of contaminated brownfields will be addressed in the upcoming Strategy for a Sustainable Built Environment. A **mission in the area of soil health and food** under Horizon Europe37 will aim to develop solutions for restoring soil health and functions.\n\n#### *2.2.4. Increasing the quantity of forests and improving their health and resilience*\n\nForests are hugely important for biodiversity, climate and water regulation, the provision of food, medicines and materials, carbon sequestration and storage, soil stabilisation and the purification of air and water. They are also a natural home for recreation and learning about nature. Foresters have a key role to play in ensuring sustainable forest management and in restoring and sustaining biodiversity in forests.\n\nIn addition to strictly protecting all remaining EU primary and old-growth forests, **the EU must increase the quantity, quality and resilience of its forests**, notably against fires, droughts, pests, diseases and other threats likely to increase with climate change. To retain their function for both biodiversity and climate, all forests need to be preserved in good health. More resilient forests can support a more resilient economy. They also play an important role in providing materials, products and services, which are key for the circular bio-economy.\n\nTo make this happen, the Commission will propose a dedicated **EU Forest Strategy** in 2021 in line with our wider biodiversity and climate neutrality ambitions. It will include a roadmap for **planting at least 3 billion additional trees in the EU by 2030**, in full respect of ecological principles. This will create substantial job opportunities linked to the collecting and cultivating of seeds, planting seedlings, and ensuring their development. Tree planting is particularly beneficial in cities, while in rural areas it can work well with agroforestry, landscape features and increased carbon sequestration. 
At the same time, the Commission will continue to work with Member States to ensure that the EU is sufficiently equipped to prevent and respond to major forest fires, which can inflict significant damages on forest biodiversity.\n\n<sup>35</sup> European Court of Auditors (2018), Combating desertification in the EU: a growing threat in need of more action, Special Report n°33/2018.\n\n<sup>36</sup> Thematic Strategy for Soil Protection (COM(2006) 231).\n\n<sup>37</sup> Horizon Europe mission area on soil health and food.", - "page_start": 9, - "page_end": 9, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "Afforestation, reforestation and tree planting to support biodiversity and ecosystem restoration will be promoted through the CAP Strategic Plans, and the Cohesion Policy funds. The new **European Urban Greening Platform**38 will also facilitate urban tree planting, including under the LIFE programme.\n\nThe share of forest areas covered by management plans should cover all managed public forests and an increased number of private forests, and biodiversity-friendly practices such as closer-to-nature-forestry should continue and be further developed. To support this, the Commission will develop guidelines on biodiversity-friendly afforestation and reforestation and closer-to-nature-forestry practices. This will be done in parallel with the new EU Forest Strategy.\n\nTo gain a better picture of the health of European forests, the Commission will work with other data providers to further develop the **Forest Information System for Europe**. This will help produce up-to-date assessments of the condition of European forests and link all EU forest-data web-platforms. This will also be presented as part of the EU Forest Strategy.\n\n### *2.2.5. Win-win solutions for energy generation*\n\nDecarbonising the energy system is critical for climate neutrality, as well as for the EU's recovery from the COVID-19 crisis and long-term prosperity. 
More sustainably sourced renewable energy will be essential to fight climate change and biodiversity loss. The EU will prioritise solutions such as ocean energy, offshore wind, which also allows for fish stock regeneration, solar-panel farms that provide biodiversity-friendly soil cover, and sustainable bioenergy.\n\nTo mitigate climate and environmental risks created by the increasing use of certain sources for bioenergy, the revised Renewable Energy Directive39 includes strengthened sustainability criteria. It also promotes the shift to advanced biofuels based on residues and non-reusable and non-recyclable waste. This approach should continue for all forms of bioenergy. The use of whole trees and food and feed crops for energy production – whether produced in the EU or imported – should be minimised.\n\nTo better understand and monitor the potential climate and biodiversity risks, the Commission is assessing the **EU and global biomass supply and demand** and related sustainability40. As part of its increased ambition to protect and restore forest ecosystems, the Commission will publish the results of this work on the use of forest biomass for energy production by the end of 2020. 
This will inform the Commission's policymaking, including the review and revision, where necessary, of the level of ambition of the Renewable Energy Directive, the Emissions Trading Scheme, and the Regulation on land use, land use change and forestry (LULUCF) set for 2021.\n\nIn line with the Renewable Energy Directive, the Commission will also develop operational guidance in 2021 on the **new sustainability criteria on forest biomass for** \n\n<sup>38</sup> See Section 2.2.8.\n\n<sup>39</sup> Directive (EU) 2018/2001 on the promotion of the use of energy from renewable sources.\n\n<sup>40</sup> JRC Biomass Assessment Study.", - "page_start": 10, - "page_end": 10, - "source_file": "legal5_eubiodiversity_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "legal5_eubiodiversity_cc4.pdf", - "query": "Was there a biodiversity governance framework in place in the EU before the European Commission's proposal?", - "target_page": 16, - "target_passage": "In the EU, there is currently no comprehensive governance framework to steer the implementation of biodiversity commitments agreed at national, European or international level. To address the gap, the Commission will put in place a new European biodiversity governance framework. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "- 9. There is a 50% reduction in the number of Red List species threatened by invasive alien species.\n- 10. The losses of nutrients from fertilisers are reduced by 50%, resulting in the reduction of the use of fertilisers by at least 20%.\n- 11. Cities with at least 20,000 inhabitants have an ambitious Urban Greening Plan.\n- 12. No chemical pesticides are used in sensitive areas such as EU urban green areas.\n- 13. The negative impacts on sensitive species and habitats, including on the seabed through fishing and extraction activities, are substantially reduced to achieve good environmental status.\n- 14. 
The by-catch of species is eliminated or reduced to a level that allows species recovery and conservation.\n\n### **3. ENABLING TRANSFORMATIVE CHANGE**\n\n### **3.1. A new governance framework**\n\nIn the EU, there is currently no comprehensive governance framework to steer the implementation of biodiversity commitments agreed at national, European or international level. To address the gap, the Commission will put in place **a new European biodiversity governance framework**. This will help map obligations and commitments and set out a roadmap to guide their implementation.\n\nAs part of this new framework, the Commission will put in place a monitoring and review mechanism. This will include a **clear set of agreed indicators** and will enable regular progress assessment and set out corrective action if necessary. This mechanism will feed the Environmental Implementation Review and contribute to the European Semester.\n\nThe new governance framework will ensure co-responsibility and co-ownership by all relevant actors in meeting the EU's biodiversity commitments. It will support administrative capacity building, transparency, stakeholder dialogue, and participatory governance at different levels.\n\nThe Commission will assess the progress and suitability of this approach in 2023, and consider whether a legally binding approach to governance is needed.\n\n## **3.2. Stepping up implementation and enforcement of EU environmental legislation**\n\nAll environmental legislation relies on proper implementation and enforcement. Over the last 30 years, the EU has put in place a solid legislative framework to protect and restore its natural capital. However, recent evaluations show that although legislation is fit for purpose, implementation on the ground is lagging behind60. This is having dramatic consequences on biodiversity and comes with a substantial economic cost61 . 
**The full implementation and enforcement of EU environmental legislation is therefore at the heart of this strategy**, for which political support and financial and human resources will need to be prioritised.\n\n<sup>60</sup> See 2015 State of Nature in the EU report (COM (2015)219).\n\n<sup>61</sup> The costs of non-implementation are estimated at EUR 50 billion per year.", - "page_start": 15, - "page_end": 15, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "States and the European Environment Agency, will put forward in 2020 criteria and guidance for identifying and designating additional areas, including a definition of strict protection, as well as for appropriate management planning. In doing so, it will indicate how other effective area-based conservation measures and greening of cities could contribute to the targets.\n\nThe targets relate to the EU as a whole and could be broken down according to the EU bio-geographical regions and sea basins or at a more local level. **Every Member State will have to do its fair share of the effort** based on objective ecological criteria, recognising that each country has a different quantity and quality of biodiversity. Particular focus will be placed on protecting and restoring the tropical and sub-tropical marine and terrestrial ecosystems in the EU's outermost regions given their exceptionally high biodiversity value.\n\nIn addition, in order to have a truly coherent and resilient Trans-European Nature Network, it will be important to set up **ecological corridors** to prevent genetic isolation, allow for species migration, and maintain and enhance healthy ecosystems. In this context, investments in green and blue infrastructure27 and cooperation across borders among Member States should be promoted and supported, including through the European Territorial Cooperation.\n\nThe Commission will aim to agree the criteria and guidance for additional designations with Member States by the end of 2021. 
Member States will then have until the end of 2023 to demonstrate significant progress in legally designating new protected areas and integrating ecological corridors. On this basis, the Commission will assess by 2024 whether the EU is on track to meet its 2030 targets or whether stronger actions, including EU legislation, are needed.\n\nFinally, the **Overseas Countries and Territories** also host important biodiversity hotspots, not governed by EU environmental rules. The Commission encourages relevant Member States to consider promoting equal or equivalent rules in these countries and territories.\n\n#### **Nature protection: key commitments by 2030**\n\n- 1. Legally protect a minimum of 30% of the EU's land area and 30% of the EU's sea area and integrate ecological corridors, as part of a true Trans-European Nature Network.\n- 2. Strictly protect at least a third of the EU's protected areas, including all remaining EU primary and old-growth forests.\n- 3. Effectively manage all protected areas, defining clear conservation objectives and measures, and monitoring them appropriately.\n\n<sup>27</sup> Guidance on a strategic framework for further supporting the deployment of EU-level green and blue infrastructure (SWD(2019) 193).", - "page_start": 5, - "page_end": 5, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "principle79 and taking into account the call of the European Parliament80 . In parallel, the EU will continue to fund research on the impact of deep-sea mining activities and on environmentally-friendly technologies. The EU should also advocate for more transparency in international bodies such as the International Seabed Authority.\n\n#### *4.2.2. Trade policy*\n\n**Trade policy will actively support and be part of the ecological transition**. In this spirit, the Commission will ensure full implementation and enforcement of the biodiversity provisions in all trade agreements, including through the EU Chief Trade Enforcement Officer. 
The Commission will better assess the impact of trade agreements on biodiversity, with follow-up action to strengthen the biodiversity provisions of existing and new agreements if relevant. The Commission will also present in 2021 a legislative proposal and other measures to avoid or minimise the placing of products associated with deforestation or forest degradation on the EU market81, and to promote forest-friendly imports and value chains. The Commission will take a number of steps to **crack down on illegal wildlife trade**. This trade contributes to the depletion or extinction of entire species, is the world's fourth most lucrative black market and is thought to be one of the causes behind the emergence of zoonotic diseases. It is a human, economic and environmental duty to dismantle it.\n\nWith this in mind, the Commission will revise the EU Action Plan against Wildlife Trafficking in 2021 and propose a further **tightening of the rules on EU ivory trade** later this year. It will explore a possible revision of the Environmental Crime Directive, including by looking at expanding its scope and introducing specific provisions for types and levels of criminal sanctions. It will consider strengthening the coordinating and investigative capacities of the European Anti-Fraud Office (OLAF) to work with Member States and non-EU countries to prevent illicit trade and the entry of illicit products into the Single Market.\n\nThe Commission will continue to engage with partner countries to ensure a smooth and fair transition, mobilising in particular Aid for Trade to ensure that partners reap the benefits of biodiversity-friendly trade.\n\n### *4.2.3. International cooperation, neighbourhood policy and resource mobilisation*\n\nDelivering an ambitious post-2020 global biodiversity framework will require greater cooperation with partners, increased support and financing and phasing out of subsidies harmful to biodiversity. 
In the last decade, the EU and its Member States collectively upheld their commitment to **double financial flows to developing countries for biodiversity**82. The EU is ready to continue working with its partners and further increase its support post-2020. This will be part of its work on biodiversity conservation, restoration, sustainable use and mainstreaming in all development and partnership\n\n<sup>79</sup> Under Article 191.2 TFEU, the Union policy on the environment shall aim at a high level of protection and shall be based on the precautionary principle.\n\n<sup>80</sup> European Parliament Resolution on international ocean governance (2017/2055(INI)).\n\n<sup>81</sup> In line with the Commission Communication on Stepping up EU Action to Protect and Restore the World's Forests (COM(2019) 352).\n\n<sup>82</sup> Including international financing where biodiversity is the principal objective and where it is a significant secondary objective, in line with CBD COP11 Decision XI/4 and EU and Member States financial reports submitted to the Convention on Biological Diversity in 2015 and 2018.", - "page_start": 21, - "page_end": 21, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "encouraging cooperation in **education for environmental sustainability** in 2021. This will provide guidance for schools and teachers on how to cooperate and exchange experiences across Member States on biodiversity teaching. The Commission will also provide support materials and facilitate the exchange of good practices in EU networks of teacher-training programmes.\n\n### **4. THE EUROPEAN UNION FOR AN AMBITIOUS GLOBAL BIODIVERSITY AGENDA**\n\nBiodiversity is a priority of the EU's external action and an integral part of efforts to meet the United Nations Sustainable Development Goals. It will be mainstreamed throughout bilateral and multilateral engagements, through the EU's 'Green Deal diplomacy', and forthcoming green alliances76. 
The Commission will work closely with the European Parliament and Member States to ensure a high level of EU ambition and mobilise all efforts for the good of the world's biodiversity.\n\n# **4.1. Raising the level of ambition and commitment worldwide**\n\nProtecting biodiversity is a global challenge and the next decade will be decisive. Global efforts under the United Nations Convention on Biological Diversity have largely been insufficient. Nature cannot afford any half measures or lack of ambition.\n\nIn this spirit, the EU is ready to lead all efforts – working with like-minded partners in **a high-ambition coalition on biodiversity** – to agree an ambitious new global framework for post-2020 at the upcoming 15th Conference of the Parties to the Convention on Biological Diversity.\n\nWith this strategy, the Commission proposes ambitious commitments for the EU to bring to the table. The EU should also support governments and stakeholders across the globe to significantly step up their ambition and their action.\n\nThe Commission proposes that the EU ensures that the post-2020 global framework includes, at a minimum, the elements outlined below:\n\n- Overarching global goals for biodiversity for 2050, in line with the United Nations 2030 Agenda for Sustainable Development and the vision of 'living in harmony with nature'. The ambition should be that, **by 2050, all of the world's ecosystems are restored, resilient, and adequately protected.** The world should commit to the net-gain principle to give nature back more than it takes. The world should commit to no human-induced extinction of species, at minimum where avoidable.\n- Ambitious **global 2030 targets in line with EU commitments** in this strategy. These should clearly address the drivers of biodiversity loss and be specific, measurable, actionable, relevant and time-bound.\n- A much **stronger implementation, monitoring and review** process. 
Parties should revise their National Biodiversity Strategies and Action Plans by the end of 2021, or as a minimum, submit national commitments for the most important targets. There should be a **regular review cycle** to look at progress towards the\n\n<sup>76</sup> Green alliances focus on cooperation with African and other partners to implement the European Green Deal.", - "page_start": 19, - "page_end": 19, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "EUROPEAN COMMISSION\n\n> Brussels, 20.5.2020 COM(2020) 380 final\n\n# **COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL, THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE AND THE COMMITTEE OF THE REGIONS**\n\n**EU Biodiversity Strategy for 2030** \n\n **Bringing nature back into our lives**", - "page_start": 0, - "page_end": 0, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "policies. In addition, by integrating policy coherence for sustainable development in all its policies, the EU will reduce the pressure on biodiversity worldwide. In all of its international cooperation, the EU should promote sustainable agricultural and fisheries practices and actions to protect and restore the world's forests. Particular attention will also be paid to sustainable water resource management, the restoration of degraded land, and the protection and restoration of biodiverse areas with high ecosystem services and climate mitigation potential. A better protection of natural ecosystems, coupled with efforts to reduce wildlife trade and consumption, will also help prevent and build up resilience to possible future diseases and pandemics. 
The EU will enhance its support to global efforts to apply the **One Health approach**83, which recognises the intrinsic connection between human health, animal health and healthy resilient nature.\n\nThe EU will step up support to partner countries across the world to achieve the new global targets, fight environmental crime, and tackle the drivers of biodiversity loss. In Africa, the EU will launch the **NaturAfrica** initiative to protect wildlife and key ecosystems while offering opportunities in green sectors for local populations. Similar projects will be developed in other regions. The EU will also support the Western Balkans and EU Neighbourhood countries in their efforts to protect biodiversity.\n\nIn all of its work, the EU will strengthen the links between **biodiversity protection and human rights**, gender, health, education, conflict sensitivity, the rights-based approach, land tenure and the role of indigenous peoples and local communities.\n\nAs part of its global efforts, the EU will promote biodiversity coalitions with partners and civil society around the world. For example, in March 2020, the Commission launched the **Global Biodiversity Coalition** of national parks, aquariums, botanic gardens, zoos, natural history and sciencemuseums to help raise awareness around the world on the need to protect and nurture biodiversity. The Commission will consider launching or joining other High Ambition Coalitions to help develop the post-2020 framework.\n\n### **5. CONCLUSION**\n\nProtecting and restoring biodiversity is the only way to preserve the quality and continuity of human life on Earth. The commitments proposed in this strategy pave the way for ambitious and necessary changes – changes that will ensure the wellbeing and economic prosperity of present and future generations in a healthy environment. 
The implementation of these commitments will take into account the diversity of challenges across sectors, regions and Member States, recognise the need to ensure social justice, fairness and inclusiveness in line with the European Pillar of Social Rights, and will require a sense of responsibility and strong joint efforts from the EU, its Member States, stakeholders and citizens.\n\nThe Commission invites the European Parliament and the Council to endorse this strategy ahead of the 15th Conference of the Parties to the Convention on Biological Diversity. To ensure full political ownership of this strategy, the Commission will suggest a standing progress point at the Council and at the European Parliament. It will review the strategy by 2024 to assess progress and whether further action is needed to meet its objectives.\n\n<sup>83</sup> https://www.who.int/features/qa/one-health/en/", - "page_start": 22, - "page_end": 22, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "#### *3.3.2. Investments, pricing and taxation*\n\nTackling biodiversity loss and restoring ecosystems will require significant public and private investments at national and European level. This will mean making the most of all relevant EU programmes and financing instruments. The Commission will strengthen its **biodiversity proofing framework**69 , *inter alia* by using in an appropriate way the criteria established under the EU taxonomy, to ensure that EU funding supports biodiversity-friendly investments.\n\nTo meet the needs of this strategy, including investment priorities for Natura 2000 and green infrastructure, **at least €20 billion a year70 should be unlocked for spending on nature**. This will require mobilising private and public funding at national and EU level71, including through a range of different programmes in the next long-term EU budget. 
Moreover, as nature restoration will make a major contribution to climate objectives, a significant proportion of the 25% of the EU budget dedicated to climate action will be invested on biodiversity and nature-based solutions.\n\nUnder Invest EU, a dedicated natural-capital and circular-economy initiative will be established to mobilise at least €10 billion over the next 10 years, based on public/private blended finance. Nature and biodiversity is also a priority for the European Green Deal Investment Plan. To help unlock the investment needed, the EU must provide long-term certainty for investors and help embed sustainability in the financial system. The EU **sustainable finance taxonomy** will help guide investment towards a green recovery and the deployment of nature-based solutions. In 2021, the Commission will adopt a delegated act under the Taxonomy Regulation72 to establish a common classification of economic activities that substantially contribute to protecting and restoring biodiversity and ecosystems. This will be further supported by a **Renewed Sustainable Finance Strategy** later this year which will help ensure that the financial system contributes to mitigating existing and future risks to biodiversity and better reflect how biodiversity loss affects companies' profitability and long-term prospects73 .\n\nThe Commission will further promote tax systems and pricing that reflect environmental costs, including biodiversity loss. This should encourage changes in national fiscal systems to shift the tax burden from labour to pollution, under-priced resources, and other environmental externalities. The '**user pays' and 'polluter pays' principles** have to be applied to prevent and correct environmental degradation.\n\nPublic authorities' purchasing power represents 14% of EU GDP and can serve as a powerful driver of demand for the products and services of companies that invest in or contribute to nature-based solutions. 
To tap into this potential, when proposing further\n\n<sup>69</sup> See Common framework and guidance documents for biodiversity proofing of the EU budget.\n\n<sup>70</sup> The cost estimate is based on the 2018 Impact Assessment of the LIFE Regulation (SWD(2018) 292), a Study on the costs of implementing the Target 2 of the EU Biodiversity Strategy to 2020 and data submitted by 16 Member States under Article 8(1) of the Habitats Directive. The Commission will update the estimate, notably based on Member States' Prioritised Action Frameworks under the Habitats Directive.\n\n<sup>71</sup> Including the Common Agricultural Policy, Cohesion Policy funds, Horizon Europe, the European Maritime and Fisheries Fund, LIFE and external action funds.\n\n<sup>72</sup> See EU taxonomy for sustainable activities.\n\n<sup>73</sup> World Wildlife Fund (2019), The Nature of Risk – A Framework for Understanding Nature-Related Risk to Business.", - "page_start": 17, - "page_end": 17, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "build on the headline ambition to ensure that by 2050 **all of the world's ecosystems are restored, resilient, and adequately protected.** The world should commit to the net-gain principle to give nature back more than it takes. As part of this, the world should commit to no human-induced extinction of species, at minimum where avoidable.\n\nThis strategy sets out how Europe can help make this happen. As a milestone, it aims to ensure that **Europe's biodiversity will be on the path to recovery by 2030** for the benefit of people, the planet, the climate and our economy, in line with the 2030 Agenda for Sustainable Development and with the objectives of the Paris Agreement on Climate Change. It addresses the five main drivers of biodiversity loss, sets out an enhanced governance framework to fill remaining gaps, ensures the full implementation of EU legislation, and pulls together all existing efforts. 
This strategy is enterprising and incentivising in spirit and action. It reflects the fact that **protecting and restoring nature will need more than regulation alone**. It will require action by citizens, businesses, social partners and the research and knowledge community, as well as strong partnerships between local, regional, national and European level. This strategy is in line with the ambitions and commitment set out in President von der Leyen's Political Guidelines and in the European Green Deal.\n\nAdopted in the heart of the COVID-19 pandemic, this strategy will also be a central element of the EU's recovery plan. It will be crucial to prevent and build resilience to future zoonosis outbreaks and to provide immediate business and investment opportunities for restoring the EU's economy.\n\nAll new initiatives and proposals will be underpinned by the Commission's better regulation tools. Based on public consultations and on the identification of the environmental, social and economic impacts, impact assessments will contribute to ensuring that all initiatives achieve their objectives in the most effective and least burdensome way and live up to a green oath to \"do no harm\".\n\n### **2. PROTECTING AND RESTORING NATURE IN THE EUROPEAN UNION**\n\nThe EU has legal frameworks, strategies and action plans to protect nature and restore habitats and species. But protection has been incomplete, restoration has been smallscale, and the implementation and enforcement of legislation has been insufficient17 .\n\nTo put biodiversity on the path to recovery by 2030, we need to step up the protection and restoration of nature. This should be done by improving and **widening our network of protected areas** and by developing an ambitious **EU Nature Restoration Plan**.\n\n#### **2.1. A coherent network of protected areas**\n\nBiodiversity fares better in protected areas. 
However, the current network of legally protected areas, including those under strict protection, is not sufficiently large to safeguard biodiversity. Evidence shows that the targets defined under the Convention on Biological Diversity are insufficient to adequately protect and restore nature18. Global\n\n<sup>17</sup> Mid-term review of the EU Biodiversity Strategy to 2020 (COM(2015) 478 and SWD(2015) 187); Fitness Check of the EU Nature Legislation (Birds and Habitats Directives) (SWD(2016) 472); Fitness Check of the EU Water Legislation (SWD(2019) 439).\n\n<sup>18</sup> The global Aichi biodiversity targets are that protected areas should cover 17% on land and 10% at sea, while scientific studies' figures range from 30% to 70%. See e.g. IPBES 2019.", - "page_start": 3, - "page_end": 3, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "targets, with the ability to ratchet up action if needed. These reviews should be based on an independent, science-based gap-analysis and foresight process, with common headline indicators for all Parties.\n\n- **An enabling framework** to bring the ambition to life, across areas such as finance, capacity, research, innovation and technology.\n- **Fair and equitable sharing of the benefits** from the use of genetic resources linked to biodiversity.\n- **A principle of equality**. This includes respect for the rights and the full and effective participation of indigenous peoples and local communities. There should be an inclusive approach with participation of all stakeholders, including women, youth, civil society, local authorities, the private sector, academia and scientific institutions.\n\n### **4.2. Using external action to promote the EU's ambition**\n\n### *4.2.1. 
International Ocean Governance*\n\nIn line with the International Ocean Governance agenda77, the EU will support the conclusion of an ambitious legally binding agreement on **marine biological diversity of areas beyond national jurisdiction** (BBNJ) by the end of 2020. It must set clear global procedures for identifying, designating and effectively managing ecologically representative marine protected areas in the high seas. It should be ratified and implemented as quickly as possible.\n\nThe EU should also use all of its diplomatic leverage and outreach capacities to help broker agreement on the designation of three vast **Marine Protected Areas in the Southern Ocean**78, two of which were co-proposed by the EU in East Antarctica and in the Weddell Sea. If agreed, this would constitute one of the biggest acts of nature protection in history.\n\nWork will continue with partner countries and regional organisations to put in place measures to protect and sustainably use sensitive maritime ecosystems and species, including in areas beyond national jurisdiction, with a focus on marine biodiversity hotspots. 
The EU should continue supporting Small Island Developing States and other relevant partner countries to participate in meetings of regional and global organisations and bodies, and to implement relevant international commitments and regulations.\n\nThe EU will apply **zero tolerance towards illegal, unreported and unregulated fishing** and will combat overfishing, including through WTO negotiations on a **global agreement to ban harmful fisheries subsidies**.\n\nIn international negotiations, the EU should advocate that marine minerals in the international seabed area cannot be exploited before the **effects of deep-sea mining** on the marine environment, biodiversity and human activities have been sufficiently researched, the risks are understood and the technologies and operational practices are able to demonstrate no serious harm to the environment, in line with the precautionary\n\n<sup>77</sup> International ocean governance agenda: an agenda for the future (JOIN(2016) 49).\n\n<sup>78</sup> In the framework of the Commission for the Conservation of Antarctic Marine Living Resources.", - "page_start": 20, - "page_end": 20, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "As regards the Birds and Habitats Directives, enforcement will focus on **completing the Natura 2000 network**, the effective management of all sites, species-protection provisions, and species and habitats that show declining trends. 
The Commission will also ensure that environment-related legislation with an impact on biodiversity62 is better implemented, enforced and – where necessary – reviewed and revised.\n\nThe Commission will strive to **improve compliance assurance**, working closely with Member States and European networks of environmental agencies, inspectors, auditors, police, prosecutors and judges.\n\nIn addition, the Commission will support civil society's role as a compliance watchdog and will engage with Member States to improve access to justice in national courts in environmental matters for individuals and NGOs. It will also broaden standing for NGOs by proposing **a revision of the Aarhus Regulation63** .\n\n### **3.3. Building on an integrated and whole-of-society approach**\n\n#### *3.3.1. Business for biodiversity*\n\nIn the partnership spirit of this strategy, all parts of the economy and society will have to play their role. Industry and business have an impact on nature, but they also produce the important innovations, partnerships and expertise that can help address biodiversity loss.\n\nTo ensure environmental and social interests are fully embedded into business strategies, the Commission will put forward a new initiative in 2021 on **sustainable corporate governance**. This initiative, which may take the form of a legislative proposal, will address human rights and environmental duty of care and due diligence across economic value chains in a proportionate way according to different sizes of entreprises64. This will help ensure that shareholder and stakeholder interests are fully aligned with the objectives set out in this strategy. 
In addition, in 2020, the Commission launched a review of the reporting obligations of businesses under the **Non-Financial Reporting Directive**65, with a view to improving the quality and scope of non-financial disclosures, including on environmental aspects such as biodiversity.\n\nThrough its existing platforms66, the Commission will help to build a **European Business for Biodiversity** movement, taking inspiration from recent initiatives67 and making this movement an integral part of the European Climate Pact. Particular attention will be paid to measures to incentivise and eliminate barriers for the take-up of naturebased solutions, as these can lead to significant business and employment opportunities in various sectors68 and are the key to innovation for economic or societal needs that rely on nature.\n\n<sup>62</sup> Such as the Directives on Environmental Impact Assessment (2014/52/EU), on Strategic Environmental Assessment (2001/42/EC), on Environmental Liability (2004/35/CE) and on Environmental Crime (2008/99/EC).\n\n<sup>63</sup> https://ec.europa.eu/environment/aarhus/\n\n<sup>64</sup> Study on due diligence requirements through the supply chain – Final Report.\n\n<sup>65</sup> Directive 2014/95/EU amending Directive 2013/34/EU as regards disclosure of non-financial and diversity information by certain large undertakings.\n\n<sup>66</sup> Such as the EU Business @ Biodiversity Platform (B@B).\n\n<sup>67</sup> See for example Business for Nature or One Planet Business for Biodiversity.\n\n<sup>68</sup> BenDor et al. 
(2015), Estimating the Size and Impact of the Ecological Restoration Economy.", - "page_start": 16, - "page_end": 16, - "source_file": "legal5_eubiodiversity_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "legal5_eubiodiversity_cc4.pdf", - "query": "What is the EU's tolerance for unauthorised fishing?", - "target_page": 21, - "target_passage": "The EU will apply zero tolerance towards illegal, unreported and unregulated fishing", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "**energy**41. It will also review in 2021 the data on biofuels with high indirect land-use change risk and establish a trajectory for their gradual phase out by 2030.\n\nThe overall objective is to ensure that EU regulatory framework on bioenergy is in line with the increased ambition set out in the European Green Deal.\n\n#### *2.2.6. Restoring the good environmental status of marine ecosystems*\n\n**Restored and properly protected marine ecosystems** bring substantial health, social and economic benefits to coastal communities and the EU as a whole. The need for stronger action is all the more acute as marine and coastal ecosystem biodiversity loss is severely exacerbated by global warming42 .\n\nAchieving good environmental status of marine ecosystems, including through strictly protected areas, must involve the restoration of carbon-rich ecosystems as well as important fish spawning and nursery areas. Some of today's sea uses endanger food security, fishers' livelihoods, and the fishery and seafood sectors. **Marine resources must be harvested sustainably and there must be zero-tolerance for illegal practices**. 
In this regard, the full implementation of the EU's Common Fisheries Policy, the Marine Strategy Framework Directive and the Birds and Habitats Directives is essential.\n\nThe application of an ecosystem-based management approach under EU legislation43 will reduce the adverse impacts of fishing, extraction and other human activities, especially on sensitive species and seabed habitats. To support this, **national maritime spatial plans**, which Member States have to deliver in 2021, should aim at covering all maritime sectors and activities, as well as area-based conservation-management measures.44 The Commission will also propose a **new action plan to conserve fisheries resources and protect marine ecosystems** by 2021. Where necessary, measures will be introduced to limit the use of fishing gear most harmful to biodiversity, including on the seabed. It will also look at how to reconcile the use of bottom-contacting fishing gear with biodiversity goals, given it is now the most damaging activity to the seabed. This must be done in a fair and just way for all. The European Maritime and Fisheries Fund should also support the transition to more selective and less damaging fishing techniques.\n\nHealthy fish stocks are key to the long-term prosperity of fishermen and the health of our oceans and biodiversity. This makes it all the more important to maintain or reduce fishing mortality at or under **Maximum Sustainable Yield levels**. This will help achieve a healthy population age and size distribution for fish stocks.\n\nThe **by-catch of species threatened with extinction** must also be eliminated or reduced to a level that allows full recovery. This should also be the case for those in bad conservation status or not in good environmental status. 
Furthermore, the by-catch of other species45 must be eliminated or, where this is not possible, minimised so as not to\n\n<sup>41</sup> Article 29 of the EU Renewable Energy Directive 2018/2001.\n\n<sup>42</sup> See for example Intergovernmental Panel on Climate Change (2019), Special Report on the Ocean and the Cryosphere in a Changing Climate.\n\n<sup>43</sup> The Common Fisheries Policy, the Marine Strategy Framework Directive (2008/56/EC) and the Maritime Spatial Planning Directive (2014/89/EU).\n\n<sup>44</sup> The Commission will report on the implementation of the Maritime Spatial Planning Directive by March 2022 at the latest, including the application of ecosystem-based management.\n\n<sup>45</sup> Protected by international and EU law.", - "page_start": 11, - "page_end": 11, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "principle79 and taking into account the call of the European Parliament80 . In parallel, the EU will continue to fund research on the impact of deep-sea mining activities and on environmentally-friendly technologies. The EU should also advocate for more transparency in international bodies such as the International Seabed Authority.\n\n#### *4.2.2. Trade policy*\n\n**Trade policy will actively support and be part of the ecological transition**. In this spirit, the Commission will ensure full implementation and enforcement of the biodiversity provisions in all trade agreements, including through the EU Chief Trade Enforcement Officer. The Commission will better assess the impact of trade agreements on biodiversity, with follow-up action to strengthen the biodiversity provisions of existing and new agreements if relevant. The Commission will also present in 2021 a legislative proposal and other measures to avoid or minimise the placing of products associated with deforestation or forest degradation on the EU market81, and to promote forest-friendly imports and value chains. 
The Commission will take a number of steps to **crack down on illegal wildlife trade**. This trade contributes to the depletion or extinction of entire species, is the world's fourth most lucrative black market and is thought to be one of the causes behind the emergence of zoonotic diseases. It is a human, economic and environmental duty to dismantle it.\n\nWith this in mind, the Commission will revise the EU Action Plan against Wildlife Trafficking in 2021 and propose a further **tightening of the rules on EU ivory trade** later this year. It will explore a possible revision of the Environmental Crime Directive, including by looking at expanding its scope and introducing specific provisions for types and levels of criminal sanctions. It will consider strengthening the coordinating and investigative capacities of the European Anti-Fraud Office (OLAF) to work with Member States and non-EU countries to prevent illicit trade and the entry of illicit products into the Single Market.\n\nThe Commission will continue to engage with partner countries to ensure a smooth and fair transition, mobilising in particular Aid for Trade to ensure that partners reap the benefits of biodiversity-friendly trade.\n\n### *4.2.3. International cooperation, neighbourhood policy and resource mobilisation*\n\nDelivering an ambitious post-2020 global biodiversity framework will require greater cooperation with partners, increased support and financing and phasing out of subsidies harmful to biodiversity. In the last decade, the EU and its Member States collectively upheld their commitment to **double financial flows to developing countries for biodiversity**82. The EU is ready to continue working with its partners and further increase its support post-2020. 
This will be part of its work on biodiversity conservation, restoration, sustainable use and mainstreaming in all development and partnership\n\n<sup>79</sup> Under Article 191.2 TFEU, the Union policy on the environment shall aim at a high level of protection and shall be based on the precautionary principle.\n\n<sup>80</sup> European Parliament Resolution on international ocean governance (2017/2055(INI)).\n\n<sup>81</sup> In line with the Commission Communication on Stepping up EU Action to Protect and Restore the World's Forests (COM(2019) 352).\n\n<sup>82</sup> Including international financing where biodiversity is the principal objective and where it is a significant secondary objective, in line with CBD COP11 Decision XI/4 and EU and Member States financial reports submitted to the Convention on Biological Diversity in 2015 and 2018.", - "page_start": 21, - "page_end": 21, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "States and the European Environment Agency, will put forward in 2020 criteria and guidance for identifying and designating additional areas, including a definition of strict protection, as well as for appropriate management planning. In doing so, it will indicate how other effective area-based conservation measures and greening of cities could contribute to the targets.\n\nThe targets relate to the EU as a whole and could be broken down according to the EU bio-geographical regions and sea basins or at a more local level. **Every Member State will have to do its fair share of the effort** based on objective ecological criteria, recognising that each country has a different quantity and quality of biodiversity. 
Particular focus will be placed on protecting and restoring the tropical and sub-tropical marine and terrestrial ecosystems in the EU's outermost regions given their exceptionally high biodiversity value.\n\nIn addition, in order to have a truly coherent and resilient Trans-European Nature Network, it will be important to set up **ecological corridors** to prevent genetic isolation, allow for species migration, and maintain and enhance healthy ecosystems. In this context, investments in green and blue infrastructure27 and cooperation across borders among Member States should be promoted and supported, including through the European Territorial Cooperation.\n\nThe Commission will aim to agree the criteria and guidance for additional designations with Member States by the end of 2021. Member States will then have until the end of 2023 to demonstrate significant progress in legally designating new protected areas and integrating ecological corridors. On this basis, the Commission will assess by 2024 whether the EU is on track to meet its 2030 targets or whether stronger actions, including EU legislation, are needed.\n\nFinally, the **Overseas Countries and Territories** also host important biodiversity hotspots, not governed by EU environmental rules. The Commission encourages relevant Member States to consider promoting equal or equivalent rules in these countries and territories.\n\n#### **Nature protection: key commitments by 2030**\n\n- 1. Legally protect a minimum of 30% of the EU's land area and 30% of the EU's sea area and integrate ecological corridors, as part of a true Trans-European Nature Network.\n- 2. Strictly protect at least a third of the EU's protected areas, including all remaining EU primary and old-growth forests.\n- 3. 
Effectively manage all protected areas, defining clear conservation objectives and measures, and monitoring them appropriately.\n\n<sup>27</sup> Guidance on a strategic framework for further supporting the deployment of EU-level green and blue infrastructure (SWD(2019) 193).", - "page_start": 5, - "page_end": 5, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "targets, with the ability to ratchet up action if needed. These reviews should be based on an independent, science-based gap-analysis and foresight process, with common headline indicators for all Parties.\n\n- **An enabling framework** to bring the ambition to life, across areas such as finance, capacity, research, innovation and technology.\n- **Fair and equitable sharing of the benefits** from the use of genetic resources linked to biodiversity.\n- **A principle of equality**. This includes respect for the rights and the full and effective participation of indigenous peoples and local communities. There should be an inclusive approach with participation of all stakeholders, including women, youth, civil society, local authorities, the private sector, academia and scientific institutions.\n\n### **4.2. Using external action to promote the EU's ambition**\n\n### *4.2.1. International Ocean Governance*\n\nIn line with the International Ocean Governance agenda77, the EU will support the conclusion of an ambitious legally binding agreement on **marine biological diversity of areas beyond national jurisdiction** (BBNJ) by the end of 2020. It must set clear global procedures for identifying, designating and effectively managing ecologically representative marine protected areas in the high seas. 
It should be ratified and implemented as quickly as possible.\n\nThe EU should also use all of its diplomatic leverage and outreach capacities to help broker agreement on the designation of three vast **Marine Protected Areas in the Southern Ocean**78, two of which were co-proposed by the EU in East Antarctica and in the Weddell Sea. If agreed, this would constitute one of the biggest acts of nature protection in history.\n\nWork will continue with partner countries and regional organisations to put in place measures to protect and sustainably use sensitive maritime ecosystems and species, including in areas beyond national jurisdiction, with a focus on marine biodiversity hotspots. The EU should continue supporting Small Island Developing States and other relevant partner countries to participate in meetings of regional and global organisations and bodies, and to implement relevant international commitments and regulations.\n\nThe EU will apply **zero tolerance towards illegal, unreported and unregulated fishing** and will combat overfishing, including through WTO negotiations on a **global agreement to ban harmful fisheries subsidies**.\n\nIn international negotiations, the EU should advocate that marine minerals in the international seabed area cannot be exploited before the **effects of deep-sea mining** on the marine environment, biodiversity and human activities have been sufficiently researched, the risks are understood and the technologies and operational practices are able to demonstrate no serious harm to the environment, in line with the precautionary\n\n<sup>77</sup> International ocean governance agenda: an agenda for the future (JOIN(2016) 49).\n\n<sup>78</sup> In the framework of the Commission for the Conservation of Antarctic Marine Living Resources.", - "page_start": 20, - "page_end": 20, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "### **GETTING IN TOUCH WITH THE EU**\n\n#### **In person**\n\nAll over the European Union 
there are hundreds of Europe Direct centres. You can find the address of the centre nearest you online (european-union.europa.eu/contact-eu/meet-us_en).\n\nOn the phone or in writing\n\nEurope Direct is a service that answers your questions about the European Union. You can contact this service:\n\n- by freephone: 00 800 6 7 8 9 10 11 (certain operators may charge for these calls),\n- at the following standard number: +32 22999696,\n- via the following form: european-union.europa.eu/contact-eu/write-us_en.\n\n### **FINDING INFORMATION ABOUT THE EU**\n\n#### **Online**\n\nInformation about the European Union in all the official languages of the EU is available on the Europa website (european-union.europa.eu).\n\n#### **EU publications**\n\nYou can view or order EU publications at op.europa.eu/en/publications. Multiple copies of free publications can be obtained by contacting Europe Direct or your local documentation centre (european-union.europa.eu/contact-eu/meet-us_en).\n\n#### **EU law and related documents**\n\nFor access to legal information from the EU, including all EU law since 1951 in all the official language versions, go to EUR-Lex (eur-lex.europa.eu).\n\n#### **EU open data**\n\nThe portal data.europa.eu provides access to open datasets from the EU institutions, bodies and agencies. These can be downloaded and reused for free, for both commercial and non-commercial purposes. 
The portal also provides access to a wealth of datasets from European countries.", - "page_start": 162, - "page_end": 162, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "tackling undeclared work' provides fact sheets of the type and quantity of undeclared work in all EU Member States;464 Eurofound published several reports on platform work,465 and the FRA had a series of publications and fact sheets on severe cases of exploitation, particularly of migrant workforces.466 Also, the creation of the European Labour Authority (ELA)467 is partly a consequence of the **often irregular working conditions of mobile, posted, contracted or seasonal workers** who leave their country to work in the EU or in another European country. ELA particularly aims to mitigate such critical issues related to labour mobility and social security coordination between countries.\n\n**In this report**, the quantitative data and the interpretation of the developments will cover — in an ideal case — **the period 2005 to 2020**. In 2004, a major extension of the EU took place, from 15 to 25 Member States. If it is not possible to cover the whole period, the analysis is limited to the maximum possible period. If comparability is high, for a very few selected data a further look back to the 1990s was taken.\n\nMoreover, there can be **major comparability difficulties** caused by the change of methodological approaches, geographical coverage and other context factors during the last 10 to 30 years. Major challenges for comparative assessments of EU-wide harmonised data collections from different years were:\n\n- The EU went through **several enlargement processes**, expanded from EU-12 to EU-15 in 1994, expanded from EU-15 to EU-25 in 2004, to EU27 in 2007 and to EU28 in 2013, and from 2020 on — due to the departure of the United Kingdom — the EU consists of 27 Member States. 
In statistical publications the identifier EU27_2020 is often used to distinguish this period from the EU27 phase between 2008 and 2012, before Croatia joined and the EU27 became EU28.\n- **Methodologies of data collection changed**, questions in surveys were abandoned or changed, and sample sizes or structures changed, for example, the given period in survey questions changed. One example is from the EWCS: the time categories for health-related absence from work changed from 'between 10 and 20 days' to absence of 'more than 15 days'.\n- Important **structural decisions were taken in the sector of economic statistics**, like the change of the statistical composition and the coding of economic sectors, NACE Code 1, Revision 1 (NACE 1.1) was applied until 2007, and from 2008 NACE Code 2 is applied.\n- The survey providers use(d) for **occupation and educational attainment different categories** and aggregations levels, for example, ESEG, ISCED or ISCO.\n- Some important categories and definitions are **not fully harmonised** in statistics, for example, the definition of 'manual worker' or of 'migration status'.468\n\n### **7.3 Qualitative data and research**\n\n**Quantitative data gain importance by a comprehensive description of the reasons behind these data** and their development, **by interpretation and analysis**. 
Such analytical explanations are elaborated by (roughly categorised): the providers of the quantitative data themselves, in addition by scientists at universities and governmental institutions, by European, national or regional governmental organisations, by business federations and trade unions, by professional associations and by international organisations.\n\nThis analytical work covers a large variety of topics like detailed studies and reports on **risks, exposures and outcomes**, on the development and application of **effective technical and organisational preventive measures**, on preventive **OSH systems and infrastructures**, for example, evaluations and assessments of the level of implementation of OSH directives, and finally on the **societal, economic and legal frame and context** of OSH.\n\nThere is **no strict separation between the following four types for research categories**. For example, the EU-OSHA study 'Analysis of the determinants of workplace occupational safety and health practice in a selection of EU Member States'469 includes an analysis of the systems and infrastructures as well as of the framework and context influence. To fully cover understanding and support of OSH prevention in workplaces, all these types of research are needed.", - "page_start": 133, - "page_end": 133, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "develop an Integrated Nutrient Management Action Plan in 2022. The Farm to Fork strategy will address the reduction in the use and risk of pesticides and support wider implementation of Integrated Pest Management54. As part of this, **the environmental risk assessment of pesticides will be strengthened**. 
The pressure from plastics is notably addressed through the implementation of the European Strategy for Plastics55 and the new Circular Economy Action Plan56 .\n\nThe Commission will develop a **set of indicators for the progressive reduction of pollution**, and will establish baselines to help monitor progress. Pressures from marine litter and underwater noise are being addressed under the Marine Strategy Framework Directive.\n\n### *2.2.10. Addressing invasive alien species*\n\nInvasive alien species can significantly undermine efforts to protect and restore nature. Besides inflicting major damage to nature and the economy, many invasive alien species also facilitate the outbreak and spread of infectious diseases, posing a threat to humans and wildlife57. The rate of release of invasive alien species has increased in recent years. Of the 1,872 species now considered threatened in Europe, 354 are under threat from invasive alien species. Without effective control measures, the rate of invasion and the risks it brings to our nature and health will continue to rise.\n\nThe implementation of the **EU Invasive Alien Species Regulation**58 and other relevant legislation and international agreements must also be stepped up**.** This should aim to minimise, and where possible eliminate, the introduction and establishment of alien species in the EU environment. The aim will be to manage established invasive alien species and **decrease the number of Red List species they threaten by 50%**59 .\n\n### **EU Nature Restoration Plan: key commitments by 2030**\n\n- 1. Legally binding EU nature restoration targets to be proposed in 2021, subject to an impact assessment. By 2030, significant areas of degraded and carbon-rich ecosystems are restored; habitats and species show no deterioration in conservation trends and status; and at least 30% reach favourable conservation status or at least show a positive trend.\n- 2. The decline in pollinators is reversed.\n- 3. 
The risk and use of chemical pesticides is reduced by 50% and the use of more hazardous pesticides is reduced by 50%.\n- 4. At least 10% of agricultural area is under high-diversity landscape features.\n- 5. At least 25% of agricultural land is under organic farming management, and the uptake of agro-ecological practices is significantly increased.\n- 6. Three billion new trees are planted in the EU, in full respect of ecological principles.\n- 7. Significant progress has been made in the remediation of contaminated soil sites.\n- 8. At least 25,000 km of free-flowing rivers are restored.\n\n<sup>54</sup> Sustainable Use of Pesticides Directive (2009/128/EC).\n\n<sup>55</sup> European Strategy for Plastics in a Circular Economy (COM(2018) 28).\n\n<sup>56</sup> A new Circular Economy Action Plan for a cleaner and more competitive Europe (COM(2020) 98).\n\n<sup>57</sup> See for example: Hulme P. (2014). Invasive species challenge the global response to emerging diseases, *Trends in parasitology (2014) Vol. 30, Issue 6*; Duscher et al. (2017).\n\n<sup>58</sup> Regulation (EU) 1143/2014 on invasive alien species.\n\n<sup>59</sup> Red List of the International Union for the Conservation of Nature (IUCN).", - "page_start": 14, - "page_end": 14, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "- 9. There is a 50% reduction in the number of Red List species threatened by invasive alien species.\n- 10. The losses of nutrients from fertilisers are reduced by 50%, resulting in the reduction ofthe use of fertilisers by at least 20%.\n- 11. Cities with at least 20,000 inhabitants have an ambitious Urban Greening Plan.\n- 12. No chemical pesticides are used in sensitive areas such as EU urban green areas.\n- 13. The negative impacts on sensitive species and habitats, including on the seabed through fishing and extraction activities, are substantially reduced to achieve good environmental status.\n- 14. 
The by-catch of species is eliminated or reduced to a level that allows species recovery and conservation.\n\n### **3. ENABLING TRANSFORMATIVE CHANGE**\n\n### **3.1. A new governance framework**\n\nIn the EU, there is currently no comprehensive governance framework to steer the implementation of biodiversity commitments agreed at national, European or international level. To address the gap, the Commission will put in place **a new European biodiversity governance framework**. This will help map obligations and commitments and set out a roadmap to guide their implementation.\n\nAs part of this new framework, the Commission will put in place a monitoring and review mechanism. This will include a **clear set of agreed indicators** and will enable regular progress assessment and set out corrective action if necessary. This mechanism will feed the Environmental Implementation Review and contribute to the European Semester.\n\nThe new governance framework will ensure co-responsibility and co-ownership by all relevant actors in meeting the EU's biodiversity commitments. It will support administrative capacity building, transparency, stakeholder dialogue, and participatory governance at different levels.\n\nThe Commission will assess the progress and suitability of this approach in 2023, and consider whether a legally binding approach to governance is needed.\n\n## **3.2. Stepping up implementation and enforcement of EU environmental legislation**\n\nAll environmental legislation relies on proper implementation and enforcement. Over the last 30 years, the EU has put in place a solid legislative framework to protect and restore its natural capital. However, recent evaluations show that although legislation is fit for purpose, implementation on the ground is lagging behind60. This is having dramatic consequences on biodiversity and comes with a substantial economic cost61 . 
**The full implementation and enforcement of EU environmental legislation is therefore at the heart of this strategy**, for which political support and financial and human resources will need to be prioritised.\n\n<sup>60</sup> See 2015 State of Nature in the EU report (COM (2015)219).\n\n<sup>61</sup> The costs of non-implementation are estimated at EUR 50 billion per year.", - "page_start": 15, - "page_end": 15, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "#### **Table 32: Non-EU Migrants – over-represented in certain sectors and occupations in 2019**\n\n| | Share of the overall | Share of the overall |\n| --- | --- | --- |\n| Sector | employment of non-EU | employment of EU |\n| | citizens % | citizens % |\n| Accommodation and food service activities | 11.4% | 3.8% |\n| Administrative support activities | 7.1% | 3.7% |\n| Domestic work | 6.5% | 0.7% |\n| Construction | 8.6% | 6.4% |\n| Sector Occupational group | Share of the overall employment of non-EU | Share of the overall employment of EU |\n| | citizens % | citizens % |\n| Cleaners and helpers | 11.9% | 3.1% |\n| Personal service workers | 9.0% | 4.2% |\n| Personal care workers | 5.1% | 2.9% |\n| Building workers | 5.8% | 3.6% |\n| Labourers in mining, construction, manufacturing and transport | 5.6% | 2.4% |\n| Food preparation assistants | 2.7% | 0.5% |\n| Agriculture and fishery labourers | 2.6% | 0.6% |\n\nThe **highest share of intra-EU and extra-EU workers per occupation** is among cleaners and helpers (37% in total, intra-EU 11%, extra-EU 25%), labourers in mining and construction (24% in total, intra-EU 7%, extra-EU 17%), stationary plant and machine operators (20% in total, intra-EU 6%, extra-EU 14%), and personal care workers (19% in total, intra-EU 5%, extra-EU 14%).311\n\nThe **occupations with a high share of migrant workforce are those with higher physical risks and lower expectations to do this job until 60 years old**. 
The common characteristic of these occupations is the well-known 3-D assignment: dirty, dangerous and demanding.312\n\nBeside the occupation-related risks, specific health and safety issues might result from a lower level of language dominance; communication and instruction have to cope with different capacities to speak and understand. In a more diverse workforce other factors might differ, like awareness and traditions regarding aspects such as the importance of hierarchy, ways to communicate, perception of behaviour as aggression, harassment and discrimination. In general, a greater variety of the workforce poses wider challenges for prevention.\n\n**Posting of workers** has similar implications for the organisation of OSH in enterprises. 313 Posting means that companies provide services in other EU Member States without having to establish themselves in the other countries. They send out employees to carry out the tasks required. The latest official data from 2020 estimated 2.3 million posted workers in the EU.314", - "page_start": 112, - "page_end": 112, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "# **Foreword by Nicolas Schmit, European Commissioner for Jobs and Social Rights**\n\nOccupational safety and health (OSH) has been at the heart of the European project since the very beginning. OSH concerns all European citizens whether they work in a factory, in an office, sell goods in a shop or take care of patients in a hospital. Health and safety at work is an essential part of any organisation's operations.\n\nThis is why EU policy and legislation on OSH, based on both scientific and technical evidence and data, is a vital policy area for EU society and all of its citizens.\n\n\"Occupational safety and health in Europe – state and trends 2023\" is a very important contribution from the European Agency for Safety and Health at Work (EU-OSHA). 
The Agency's analysis is also particularly timely, as the EU takes stock of progress made under the 2021-2027 EU Strategic framework on health and safety at work.\n\nThis publication originates from a European Commission initiative, supported by its tripartite Advisory Committee on Health and Safety at Work, to create a comprehensive EU OSH Information System. Work in this area started in 2015 and the project was later transferred to EU-OSHA which together with the Commission put the information system online under the title of \"EU OSH Barometer\".\n\nThis particularly useful tool, notably for the stakeholders in this policy area, provides, on a permanent basis, graphical information for significant OSH indicators at EU and national level, drawing on statistics, surveys and public data. This first analytical report combines the quantitative data of the EU OSH information system with explanatory and analytical descriptions of trends that reach back between 10 and 25 years. The intention is to repeat this exercise on a regular basis, so that it can provide knowledge and insights for safer and healthier work, in an ever-changing world of work, to wider audiences.\n\nChanges at the workplace, caused by the COVID crisis, the green, digital and demographic transitions, as well as by scientific and technological progress, led the Commission to adopt, in June 2021, a new 2021-2027 EU Strategic framework on health and safety at work.\n\nThe Framework is part of the Commission's commitment to building a strong social European Union that protects. This is the foundation of all the initiatives that we are proposing. Every action we take in social policy comes under the umbrella of the European Pillar of Social Rights Action Plan that we presented in March 2021. The protection of workers' health and safety, enshrined in the EU Treaties and in the Charter of Fundamental Rights, is one of the key elements of an EU economy that works for people. 
In particular, the right to a healthy and safe workplace is reflected in principle 10 of the European Pillar of Social Rights, and is fundamental for reaching the United Nations' sustainable development goals. Our determined action to improve occupational safety and health and to consolidate a culture of prevention represents a substantial contribution to the objectives of the abovementioned Pillar.\n\nThe work of EU-OSHA is essential in this respect and this publication is a good example of the strong commitment shown by EU governments – and also employer and trade union organisations - to continuously improve OSH in Europe.\n\nNicolas Schmit", - "page_start": 7, - "page_end": 7, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_SMFG_2011.pdf", - "query": "What are the missions of the Sumitomo Mitsui Financial Group?", - "target_page": 7, - "target_passage": "• To provide optimum added value to our customers and together with them achieve growth • To create sustainable shareholder value through business growth• To create sustainable shareholder value through business growth • To provide a challenging and professionally rewarding work environment for our dedicated employees• To provide a challenging and professionally rewarding work environment for our dedicated employee", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Sumitomo Mitsui Financial Group CSR Report **Digest version**", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "# **Social Contribution Activities**\n\n**SMFG as a corporate citizen: Working to create a prosperous society for all**\n\nGarbage was analyzed in the Kugenuma Beach cleanup event, in which SMFG and its Group companies participated\n\n# **SMFG and its Group companies participate in neighborhood cleanup programs**\n\nIn fiscal 2010, 150 volunteers from the In 
fiscal 2010, 150 volunteers from the SMFG Group participated in beach cleanup activities in Kanagawa and Hyogo prefectures on \"SMFG Clean-up Day.\" This initiative is not simply a matter of picking up garbage. It also involves inspections and analysis of garbage to identify pointers for providing solutions for environmental issues in the future.\n\nIn addition to beach cleanup activities in Chiba and Hyogo prefectures by SMBC Friend Securities, Group companies of Cedyna, Sumitomo Mitsui Finance & Leasing, the Japan Research Institute and SMBC Nikko Securities carry out ongoing cleanup and other activities in the areas around their offices and branches.\n\nThe Minato Bank and Kansai Urban Banking Corporation also engage in cleanup activities around Suma Beach and Lake Biwa, to protect the regional environment.\n\n# **Supporting education in developing countries, together with our customers and employees**\n\nCardholders and employees of Sumitomo Mitsui Card joined a literary social contribution initiative by participating in the Books To The People 2010 project operated by BOOKOFF CORP. This project aims to provide environments in which children can read books in purpose-built facilities, through donations to Room to Read, a non-governmental organization that supports education in developing countries. These NGO donations are pegged to total numbers of used books and other items purchased by cardholders. Through the Sumitomo Mitsui Card-operated online shopping mall POINT UP Mall, cardholders are encouraged to buy used books through BOOKOFF, and employees collect and donate used books from their homes and companies.\n\nCollection box for used books and other items\n\nBuilding libraries in developing countries through the NGO Room to Read\n\ninstalled in an employee canteen Supporting education in developing countries\n\n# **Donations through \"The World Bank Green Fund\"**\n\nSMBC and SMBC Nikko Securities donate a portion of the profits from marketing of the SMBC Nikko World Bank Bond Fund ( \"The World Bank Green Fund\" ) to the Japanese Red Cross Society and the Japan Committee for UNICEF.\n\nThis investment trust is the world's first fund developed in cooperation with the World Bank that invests in World Bank green bonds, according to research by Nikko Asset Management Co., Ltd. Funds from the World Bank green bonds support only World Bank-funded projects in developing countries to mitigate global warming.\n\n*Research by Nikko Asset Management Co., Ltd.\n\nDonating to the Japanese Red Cross\n\n# **SMBC Nikko Securities' \"Green Week\"**\n\nIn the fall of 2010, SMBC Nikko Securities established its \"Green Week\" for strengthening environmental protection and social contribution activities, with the aim of promoting communication within regional society and among participating employees and their families, while deepening understanding of environmental protection through participation in social contribution activities. Between November 13 and December 5, 2010, environmental protection programs were rolled out by cross-organizational \"Green Committees\" in four locations in Japan, with the participation of 280 employees and their families. In addition, regional contribution activities were carried out by\n\nRegional contribution activities at the branch level\n\nCollection of PET bottle caps Donating to Japan Committee for UNICEF for international contribution purposes\n\nbranches at their own initiative. A wide variety of social contribution activities, such as the collection of used stamps and PET bottle caps, were carried out for global causes. SMBC Nikko Securities will continue activities that contribute to society and prioritize communication between employees.\n\nEmployees and their families pitch in to clean up the bed of the Ara River in Tokyo\n\n| Environmental protection activities |\n| --- |\n| Forestry management volunteering experience in Osaka |\n| (Izumi no Mori) |\n| 117 participants |\n| Volunteers at the Shonan Erosion Control Forest project |\n| 62 participants |\n| Helping clean up Senju Shinbashi bridge that spans Ara River |\n| 64 participants |\n| Helping clean up Nishi Araibashi bridge that spans Ara River |\n| 37 participants |\n| Social contribution collection activities |\n| Support for overseas causes through used-stamp collection |\n| 11.4 kg of stamps were collected |\n| Presentation of stationery to children in developing countries |\n| 788 ballpoint pens and pencils |\n| Vaccine donation from the collection of PET bottle caps |\n| 168.9 kg (enough to vaccinate 84.45 people against polio) |\n| Activities organized by branches |\n| Sendai Branch |\n| Accepting middle school students |\n| for workplace experience programs |\n| Matsudo Branch |\n| Accepting middle school students |\n| for workplace experience programs |\n| Shizuoka Branch |\n\nAbekawa River driftwood-clearing festival",
      "page_start": 13,
      "page_end": 13,
      "source_file": "NYSE_SMFG_2011.pdf"
    },
    {
      "text": "## **Corporate Outline (as of September 30, 2011)**\n\n| Company Name | : | Sumitomo Mitsui Financial Group, Inc. 
|\n| --- | --- | --- |\n| Business Description | : | Management of banking subsidiaries (under the stipulations of Japan's Banking Act) and of |\n| | | non-bank subsidiaries, as well as the performance of ancillary functions |\n| Established | : | December 2, 2002 |\n| Head Office | : | 1-2, Marunouchi 1-chome, Chiyoda-ku, Tokyo, Japan |\n| Chairman of the Board | : | Masayuki Oku |\n| President | : | Koichi Miyata (Concurrent Director at Sumitomo Mitsui Banking Corporation) |\n| Capital | : | ¥2,337.8 billion |\n| Stock Exchange Listings | : | Tokyo Stock Exchange (First Section) |\n| | | Osaka Securities Exchange (First Section) |\n| | | Nagoya Stock Exchange (First Section) |\n| | | Note: American Depositary Receipts (ADRs) are listed on the New York Stock Exchange. |\n\n## **Structure of Sumitomo Mitsui Financial Group (as of September 30, 2011)**\n\n# **Our CSR reporting**\n\nAt Sumitomo Mitsui Financial Group, three kinds of CSR reports are compiled.\n\n| CSR report 2011 (digest version) | CSR disclosure through |\n| --- | --- |\n| Covers CSR baselines and CSR activities at SMFG and its Group companies, | specific examples |\n| centered on specific examples | |\n| CSR report 2011 | Comprehensive |\n| (digest version with examples of activities and | |\n| statistical performance, online PDF file) | disclosure of |\n| Covers environment-related statistical data and gives more detailed | CSR activities |\n| information on CSR activities | |\n| CSR report (online version, Japanese only) | Enriched |\n| www.smfg.co.jp/responsibility | CSR disclosure |\n| This is the official version of our CSR report. Covers the full spectrum of | |\n| CSR activities at SMFG | |\n\n# **Editorial Policy**\n\nThis report has been created in an effort to convey to our stakeholders the variety of our initiatives and the roles the SMFG Group is fulfilling as we work to create a sustainable society. We have aimed to present the information clearly, so that readers may understand our attitude that the fulfillment of CSR is the essence of business itself, and our initiatives act upon this. Our CSR Report 2011 (digest version), launched last fiscal year, is intended to present more concise reports of the Group's CSR activities, with a focus on specific activities of interest. To complement this, we have also posted online our CSR Report 2011 (digest version, with examples of activities and statistical performance), with more detailed information on CSR activities and statistical data omitted in the CSR Report 2011 (digest version). We disclose the full range of our CSR activities as a Group on our website in the official-use version of our CSR Report (in Japanese only). It is recommended that you read it in combination with the above two digest versions in order to understand our CSR and other activities in greater detail.\n\nFrom the current fiscal year, we are including third-party opinions in the website version.\n\n# **Scope of this Report**\n\n- Sumitomo Mitsui Financial Group, Inc.\n- Sumitomo Mitsui Banking Corporation\n- SMFG Card & Credit, Inc.\n- Sumitomo Mitsui Card Company, Limited\n- Cedyna Financial Corporation\n- Sumitomo Mitsui Finance and Leasing Co., Ltd.\n- The Japan Research Institute, Limited\n- SMBC Friend Securities Co., Ltd.\n- SMBC Nikko Securities Inc.\n- THE MINATO BANK, LTD.\n- Kansai Urban Banking Corporation\n- Other Group companies\n\nThroughout this report, **\"Sumitomo Mitsui Financial Group\"** or **\"SMFG\"** refers to the holding company alone. 
**\"The SMFG Group\"** refers to the holding company and its primary domestic and international subsidiaries and affiliates. Company name abbreviations and other special terminology\n\n## **Reference guidelines**\n\nGlobal Reporting Initiative (GRI) Sustainability Reporting Guidelines 2006 (G3) * Global Reporting Initiative (GRI): Established as an international standard for sustainability reporting, compilers set up an international organization (GRI) in 1997 to encourage its adoption worldwide.\n\n# **About this Report**\n\n- Period Covered : April 1, 2010 to March 31, 2011 ( \"Fiscal 2010\" ) Note: Certain items in this report refer to activities taking place after April 2011.\nPublication Date of Japanese Document : December 2011\n\n- Contact :\n\t- 1-2 Marunouchi 1-chome, Chiyoda-ku, Tokyo 100-0005 TEL: +81-3-3282-8111\n\nGroup CSR Department, Sumitomo Mitsui Financial Group, Inc.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "### EXECUTIVES\n\nFrom left: Mitsuhiko Yamashita, Tadao Takahashi, Toshiyuki Shiga, Carlos Ghosn, Itaru Koeda, Hiroto Saikawa, Carlos Tavares\n\n#### **BOARD OF DIRECTORS AND AUDITORS**\n\n#### **Representative Board Members**\n\nCarlos Ghosn President and Co-Chairman\n\nItaru Koeda Co-Chairman\n\nToshiyuki Shiga Co-Chairman\n\n#### **Board Members**\n\n- Tadao Takahashi Hiroto Saikawa Mitsuhiko Yamashita Carlos Tavares Shemaya Lévy Patrick Pélata\n- **Auditors** Hisayoshi Kojima Shinji Ichishima Keishi Imamura Haruo Murakami\n\n#### **EXECUTIVE COMMITTEE MEMBERS**\n\n- Carlos Ghosn Toshiyuki Shiga Itaru Koeda Tadao Takahashi Hiroto Saikawa Mitsuhiko Yamashita Carlos Tavares Alain-Pierre Raynaud\n(As of June 21, 2005)", - "page_start": 6, - "page_end": 6, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "# Commitment from the Top\n\n**A Conversation with Tadao Ando, Takeshi Kunibe and Koichi Miyata** \n\n# **What can we do now to spur the reconstruction and revitalization of 
Japan, and help resolve global issues?**\n\n#### *Uplifting the nation's spirits*\n\nJapan is now facing a wide variety of problems, ranging from the reconstruction of the Tohoku region (the northeastern region of Japan) after the March 11 earthquake and tsunami (\"the Great East Japan Earthquake\") to a shrinking and aging population, with falling birth rates and increasing numbers of the aged.\n\nWe must now find ways for people to coexist in harmony with nature, based on a global perspective.\n\nSumitomo Mitsui Financial Group (SMFG) invited the world-famous architect Tadao Ando to join in a conversation on the issues facing society and the ways in which SMFG and its Group companies can bring their expertise to bear as a financial services group.\n\n# Tadao Ando\n\nArchitect. Professor Emeritus at the University of Tokyo, Representative and Vice-chairman of the Great East Japan Earthquake Reconstruction Design Council. 
Awarded the Order of Cultural Merit in 2010.\n\n**Our measures to support reconstruction after the disastrous earthquake and tsunami**\n\n— SMFG has the following priorities in its corporate social responsibility program: Reconstruction after the earthquake and tsunami, environmental measures, addressing the shrinking and aging population, and global challenges. —\n\n**Kunibe**: Japan is facing a difficult period with limited prospects for economic growth due to a shrinking, aging population and a mature economy. Against this backdrop, the country was hit by the unprecedented catastrophe of the Great East Japan Earthquake. We must face up to the new challenges arising from this disaster.\n\nI believe the time has come for us to reconsider what we can do in our capacity as a financial institution to address a variety of issues, including the four priorities. Today I hope we can discuss not only the road to reconstruction after the disaster, but also 
ways to uplift the nation's spirits.\n\n**Ando**: Japan has achieved two miracles - the Meiji Restoration of 1868, and the economic recovery following the end of World War II in 1945. Both events are also regarded globally as being miraculous.\n\nIn 1945, foreign diplomats and businessmen visiting Japan were fully confident that the country would recover as they surveyed the ruins and the scorched earth around them, because, in the words of one of them, \"People really work hard and help each other, and children take heed of what their parents say and study hard. And because there is a sparkle in their eyes.\"\n\nThereafter, the Japanese worked furiously until the country became an economic juggernaut. However, in the early 1970s, people became complacent about their affluence, and stopped working hard and making efforts. Children assumed that if they went to a top-class university they would walk into a top-class company and have nothing to worry about thereafter. So they started going 
to cram schools even before kindergarten. I give lectures on the theme \"students born in and after 1980 are hopeless cases\" (laughs). That was because of the prevailing attitude at the time that Japan's national development would go on for ever and the economy would remain stable. As a result, parents spoilt their children, and we saw more children who could not do anything. Many such children are in their 30s now.\n\nAnd in this situation, the asset bubble burst [in the early 1990s], and the collapse of Lehman [hit world markets] in 2008, and now we have the earthquake and tsunami disaster. It seems that everything that happens these days merely makes us more anxious. I think everyone needs to hit the 'reset' button in some sense. If we don't, more difficulties lie ahead.\n\n**Miyata**: Indeed, prior to 1970, living standards or wage levels were very low, but I think it was a very happy time. 
People believed that if they really worked hard, their daily lives would improve and their\n\n# Takeshi Kunibe\n\nPresident and CEO Sumitomo Mitsui Banking Corporation\n\ncompanies would do better and the whole country would benefit. Returning to Mr. Ando's words, and his comments about clinging to the status quo, more people now think, \"Oh, well, my life is fairly comfortable and that's enough for me.\" This sense of stagnation, or resignation, that people feel in their lives has spread throughout Japan. But when the disaster struck, people again came together and worked together in the recovery effort. I thought, \"Not everything that happened has been bad.\" But I fear the consequences if we don't galvanize, coordinate and maximize 
efforts more effectively.\n\n**Kunibe**: As for SMBC, I wondered if employees at all the branches and other offices in the affected areas would be able to get to work and carry out their duties at such a difficult time for their own families; or if they would be able to open their offices for business on weekends and other holidays. Despite the lack of water and gas, they really gave their all to provide banking services. It was really uplifting to see such dedication and sense of responsibility as an employee of a financial institution entrusted with essential social infrastructure. I talk about \"the strength of our front-line staff,\" but I was able to fully appreciate just how extraordinarily strong SMFG and SMBC are thanks to this display of front-line commitment.\n\nMoving forward on the reconstruction of the Tohoku region, I believe we can also contribute to the rebuilding of infrastructure through project finance and other fundamental businesses of financial institutions in which we excel. 
We are now actively engaged in promoting business in the Tohoku region, including business matching with parties outside the region. In addition, we have a range of support activities in partnership with the Miyagi prefectural government and The 77 Bank, Ltd., which is based in Miyagi.\n\n**Miyata**: In the same way, other SMFG Group companies have been sending out volunteers, and providing donations not only as a company, but also through individual employees. SMBC was at the heart of all these activities, and this was a good opportunity for us to appreciate anew how our business contributes to the public good.\n\n# Koichi Miyata\n\nPresident Sumitomo Mitsui Financial Group, Inc.\n\nThe SMFG Group has 62,000 employees, \"stepping up to the plate and working hard to give something back to society.\" I think it is important to develop ways of making this a shared aspiration of all the employees of 
the Group.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "# **Priority Issues for Us**\n\nAs one of Japan's leading financial services groups, the SMFG Group is taking the lead in aggressively addressing the four priority issues we have identified as significantly impacting the nation.\n\n**Measures for Japan's regeneration**\n\n# **Reconstruction after the earthquake and tsunami**\n\nMitsui Charity Hospital at its establishment\n\nBesshi copper mine in the Meiji era And today\n\nThe March 11 earthquake and tsunami (The Great East Japan Earthquake) undermined power generation capacity and severed manufacturing supply chains across the nation. This was in addition to the severe damage sustained by agriculture and fisheries in the Northeast.\n\nThe disaster also threw into relief many social issues facing the nation. 
By leveraging our role as a leading financial services group, we are committing our full range of resources to dealing with the enormous task of regional reconstruction after the earthquake, in partnership with stakeholders including enterprises, local governments and non-profit organizations.\n\n#### **Further measures needed**\n\n- Wide-ranging financial support for the reconstruction of infrastructure\n- Ongoing disaster recovery activities by employee volunteers\n- Comprehensive support for industrial recovery in partnership with local governments and financial institutions in the disaster-affected areas\n\n**Environmental measures Creating systems for sustainability Global challenges**\n\nThe SMFG Group has positioned environmental businesses as an area where it can most effectively leverage its role as a leading financial services group. This is a priority field for the future. Measures are being stepped up on a range of fronts — not only involving a low-carbon society, but also dealing with issues such as water supply, soil contamination, energy and biodiversity. 
We aim to contribute to sustainable development by supporting the worldwide adoption of Japan's much-admired technological breakthroughs, with a particular focus on the Asian region.\n\n#### **Further measures needed**\n\n- Give further support for businesses involved in greenhouse gas reduction, water supply, new energy and resource initiatives\n- Do more to safeguard biodiversity, in our capacity as a financial institution\n- Share our information assets and know-how globally in the environmental business\n\nprograms to solve the problem of pollution around the Besshi copper mine, while the Mitsui Group set up the Mitsui Memorial Hospital to give the poorest in society access to basic medical care. 
Based on this corporate social responsibility DNA embedded in the business philosophies of both the Sumitomo and Mitsui groups over the 400 years of their existence, we will continue to play our part in solving problems facing the international community through our financial service operations.\n\nIn the past, the Sumitomo Group undertook large-scale afforestation\n\n# **Shrinking and aging population Ensuring peace of mind for the future**\n\nCurrently, the proportion of people aged 65 or over in Japan has reached 23.4%*. SMFG will help create frameworks enabling the elderly to enjoy a vibrant lifestyle with peace of mind, through support for life-cycle planning and other measures. The SMFG Group aims to create systems and a corporate culture that foster a sound balance between work and care needs, given that many group employees will later need to nurse ailing relatives. 
*Estimates by the Statistics Bureau, Ministry of Internal Affairs and Communications (October 1, 2011)\n\n#### **Further measures needed**\n\n- Support businesses involved in health, medical and nursing care\n- Expand range of financial products and services for the elderly (planning for asset management for old age)\n- Foster a better work-life balance\n\n# **Symbiosis and diversity**\n\nIn anticipation of further global expansion, the SMFG Group is aggressively internationalizing its operations both in Japan and overseas. Initiatives include aggressive development of advisory services for infrastructure upgrades in emerging economies, a cross-departmental endeavor, as well as contributions to the international community and the environmental business, chiefly through branches and representative offices overseas.\n\nWe will continue to discuss and review various approaches to issues facing the international 
community so as to build up trust internationally as a global player.\n\n#### **Further measures needed**\n\n- Share expertise in corporate social responsibility with the international community\n- Improve financial services in preparation for the globalization of operations in Japan (multilingual support)\n- Promote diversity", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "# **Environmental Activities**\n\n**International initiatives in Asian countries and others**\n\n# **Taking a leading role in environmental businesses in Asia**\n\nThe SMFG Group supports environmental businesses in the rapidly growing markets of Southeast Asia from various perspectives. For example in Malaysia, SMBC signed an operational alliance on environmental businesses with the Federation of Malaysian Manufacturers in April 2010, and in October that year acted as main sponsor for Malaysia's first large-scale international environmental exhibition, International Greentech & Eco products Exhibition & Conference Malaysia 2010 (IGEM). At this event, a keynote speech was given by Chairman Teisuke Kitayama, and SMBC and Sumitomo Mitsui 
Finance & Leasing opened booths. The exhibition, visited on successive days by Malaysia's King, prime minister, some of the regional Kings of Malaysia, and cabinet ministers, raised awareness of environmental businesses in the nation. At the same time, in April 2011, the bank's Malaysia unit Sumitomo Mitsui Banking Corporation Malaysia Berhad began operations. This unit is broadening support measures to contribute to the development of environmental businesses in Malaysia. Meanwhile, in August 2010, the Japan Research Institute, SMBC and a number of other companies publicly recruited by Japan's New Energy and Industrial Technology Development Organization (NEDO) were jointly commissioned to carry out basic research into Malaysia's Green Township concept, a national town-planning project 
backed by NEDO.\n\nLooking ahead, SMBC plans to jointly compile an action plan with the Malaysian government and related enterprises for establishment of \"green townships\" based on the cities Putrajaya and Cyberjaya Prime Minister Najib Razak is promoting. It also plans to propose specific projects in the concept.\n\n# **Promoting energy-saving and low-emission industries in China**\n\nIn China, which emits more carbon dioxide than any other country, finding ways of promoting new energy-saving measures and restructuring industry have become pressing issues.\n\nThe Japan Research Institute has built up a successful track record in the course of its advisory activities in China, in joint research into local-level microgrid construction at the Tianjin Eco-City, and in policy-making relating to renewable energy management systems and other areas. 
In partnership with the Guangdong Provincial Department of Science and Technology, the Japan Research Institute also advises government departments on system establishment for new energy-saving businesses. Guangdong is China's richest province by gross provincial product, and here both needs and potential in the field of energy-saving are very great. The Japan Research Institute also supports industrial restructuring and low-carbon projects in the province through model projects.\n\n**Support for adoption of electric vehicles and car-sharing**\n\nIn the battle against global warming, both public and private sectors are facing mounting pressure to curb carbon dioxide pollution from transportation, one of the major sources of emissions. Against this backdrop, the Japan Research Institute is supporting environmental businesses that map out pathways and develop projects, tailored to the needs of particular localities, to bring about a low-carbon society. 
Experimental projects are currently underway in Kanagawa Prefecture, Saitama Prefecture, Kyoto and Sapporo. These initiatives are aimed at hastening the adoption of electric vehicles and car-sharing to cut carbon dioxide emissions. The Institute is working in cooperation with government bodies, car-rental, commercial vehicle-leasing and parking-facility management companies, railways, communications providers and other entities.\n\nElectric vehicles not only emit no carbon dioxide, but offer a comfortable drive as well\n\nIGEM2010 greeted many visitors", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "# Today, Tomorrow and Beyond\n\n**President Sumitomo Mitsui Financial Group, Inc.**\n\n**Koichi Miyata**\n\nFirst, I would like to extend our deepest sympathies and heartfelt condolences to all those who have suffered and to the families and friends of those who tragically lost their lives in the devastating earthquake and tsunami that struck northeastern Japan on March 11, 2011. We pray for the early recovery of the affected people and areas. 
SMFG is dedicated to seamlessly responding to clients' needs by leveraging our group-wide capabilities, offering optimal products and services, and ensuring that every employee and the overall group are capable of responding to the challenges of globalization. I believe that through these measures, we will contribute to the growth and development of our clients and society, and ourselves grow in partnership with them. Through our basic policy of becoming \"a globally competitive financial services group with the highest trust of our clients, society and other stakeholders\" by maximizing our core strengths of \"Spirit of Innovation,\" \"Speed\" and \"Solution & Execution,\" we will continue to stay ahead of the times, no matter how challenging, and actively adapt to changes in our business environment.\n\n## **INDEX**\n\n| Foreword | 1 |\n| --- | --- |\n| Commitment from the Top A Conversation with Tadao Ando, | 3 |\n| Takeshi Kunibe and Koichi Miyata | |\n| What can we do now to spur the reconstruction and revitalization of Japan, | |\n| and help resolve global issues? 
| |\n| Measures to Support Reconstruction | |\n| after the March 11 | |\n| Earthquake and Tsunami | 8 |\n| Priority Issues for Us | 9 |\n| Our Mission and CSR at SMFG | 11 |\n| 〈Specific Examples of CSR Activities〉 | |\n| Together with Our Customers | 13 |\n| Together with Our Shareholders | |\n| and Markets | 17 |\n| Together with Our Employees | 19 |\n| Environmental Activities | 21 |\n| Social Contribution Activities | 25 |\n| Corporate Outline/Editorial Policy | 29 |", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "### CORPORATE OFFICERS\n\n#### Carlos Ghosn\n\nChief Executive Officer North American Operations (MC-NA & MC-US) Global Communications, CSR and IR Global Internal Audit\n\n#### Toshiyuki Shiga\n\nChief Operating Officer Japan Operations (MC-J) GOM Operations (MC-GOM) China Operations Global Marketing and Sales Global Aftersales and Conversion Business Corporate Quality Assurance and Customer Service Human Resources Treasury\n\n#### Itaru Koeda\n\nExecutive Vice President Administration for Affiliated Companies (MC-AFL) External and Government Affairs Intellectual Asset Management Industry Machinery Marine\n\n#### Tadao Takahashi\n\nExecutive Vice President Manufacturing SCM (Supply Chain Management) Global IS\n\n#### Hiroto Saikawa\n\nExecutive Vice President European Operations (MC-E) Purchasing\n\n#### Mitsuhiko Yamashita\n\nExecutive Vice President Research, Technology and Engineering Development Cost Engineering\n\n#### Carlos Tavares\n\nExecutive Vice President Design Corporate Planning Product Planning Market Intelligence LCV Business\n\n#### Takeshi Isayama\n\nVice Chairman External and Government Affairs Dept. Intellectual Asset Management Office\n\n#### Eiji Imai\n\nSenior Vice President Corporate Quality Assurance and Customer Service Div.\n\n#### Bernard Rey\n\nSenior Vice President CEO/COO Office Global Motorsports Alliance Coordination Office Security Office Legal Dept. 
Organization Development and Process Re-engineering Secretariat\n\nShiro Nakamura Senior Vice President Design\n\n#### Kazuhiko Toida\n\nSenior Vice President Japan Marketing & Sales MC-Dealer Dealer Network Div. Fleet Business Div.\n\n#### Hidetoshi Imazu\n\nSenior Vice President Cost Reduction Promotion Office Manufacturing and Industrial Engineering Div. Oppama Plant Tochigi Plant Kyushu Plant Yokohama Plant Iwaki Plant Overseas Parts Logistics Control Dept.\n\n#### Alain-Pierre Raynaud\n\nSenior Vice President Global Controller\n\n#### Sadao Sekiyama\n\nSenior Vice President Vehicle Production Engineering Div.\n\n#### Kimiyasu Nakamura\n\nSenior Vice President Vehicle Design Engineering Div. No.3 Vehicle Performance Development Dept. Body Engineering Dept. Interior and Exterior Trim Engineering Dept.\n\n#### Steven Wilhite\n\nSenior Vice President Global Sales Management Dept. Marketing and Sales Brand Management Office Global Marketing Dept. Global Infiniti Support Dept.\n\n#### Junichi Endo\n\nSenior Vice President Global Aftersales, Div. Aftersales Div. (Japan) GOM Aftersales Div. Conversion Business\n\n#### Hitoshi Kawaguchi\n\nSenior Vice President Human Resources Dept. Diversity Development Office\n\n#### Minoru Shinohara\n\nSenior Vice President Integrated System Planning Office Environmental and Safety Engineering Dept. Technology Planning Dept. Materials Engineering Dept. Advanced Vehicle Engineering Div. Electronics Engineering Div.\n\n#### Yo Usuba\n\nSenior Vice President Powertrain Engineering Div.", "page_start": 110, "page_end": 110, "source_file": "OTC_NSANY_2004.pdf" }, { "text": "# **Social Contribution Activities**\n\n# **Helping build prosperity in Asia and the world**\n\nThe SMFG Group is engaged in a range of activities that contribute to development at both the regional and international level. 
In addition to overseas units' independent initiatives, which are geared to host country issues and characteristics, the Group supports projects that have contributed to achievement of the United Nations' global Millennium Development Goals, such as poverty eradication, health improvement and status improvement for education and women in developing countries. Our support takes the form of donations to non-profit and non-governmental organizations, through the employee volunteer fund. (The map shows areas where fund money is used, marked with a ★ symbol). Please see our website for more details.\n\n### **International cooperation begins at home**\n\n#### **Employees put school meals on the table through their purchases in staff canteens**\n\nSMBC and Sumitomo Mitsui Finance and Leasing have a program that provides donations to the non-profit organization TABLE FOR TWO International to\n\nfund school meals in developing countries, for every low-calorie meal ordered for lunch. SMBC Friend Securities has also installed vending machines selling healthy drinks, donating part of their sales to TABLE FOR TWO International.\n\n#### **Donation boxes for foreign currency coins**\n\nSMBC places donation boxes for foreign currency coins at the entrances of all manned branches and offices in Japan, and sorts such collected coins by currency for delivery to UNICEF.\n\n#### **The SMBC Foundation for International Cooperation**\n\nThe SMBC Foundation for International Cooperation strives to assist in developing the human resources necessary to achieve sustainable growth in developing economies as well as to promote international exchange activities. The foundation has provided financial support for students from Asian countries each year, enabling them to attend universities in Japan. The foundation also offers subsidies to research institutes and researchers undertaking projects related to developing countries.\n\n#### **1 South Korea**\n\n**Support for a South Korean students' Japanese-language theater competition**\n\nAs a way of increasing understanding of Japanese culture, SMBC's Seoul Branch donates funds to make possible the holding of a competition\n\ninvolving theatrical performances in the Japanese language by South Korean students of Japanese.\n\nPerforming a Japanese-language drama\n\n### **Scholarships at major universities**\n\nSumitomo Mitsui Banking Corporation (China) Limited established a scholarship program for students of Zhejiang University, Shanghai International Studies University, Sun Yat-sen University, and other universities.\n\n#### Scholarship students at Sun Yat-sen University\n\n# **3 Hong Kong**\n\n**2**\n\n**China**\n\n#### **Supporting performances by young Asian musicians**\n\nSMBC Hong Kong Branch makes donations to the Asian Youth Orchestra (AYO), comprising young Asian musicians selected through auditioning who perform all over Asia.\n\nPhotographs supplied by AYO\n\n#### **Providing work 4 Vietnam**\n\n**experience to students** SMBC's Hanoi Branch provided international school students with vocational experiences.\n\n#### **5 Thailand**\n\n#### **Supporting farming villages in the northeast**\n\nSMBC's Bangkok Branch assisted farmers by donating underground water storage tanks and assisting with vegetable planting and harvesting.\n\nBank employees helped plant vegetables as volunteers\n\n### **Donating furniture to welfare facilities 6 Malaysia**\n\nSMBC's Labuan Branch in Malaysia, following its relocation, donated desks, chairs and cabinets to occupational training centers for the disabled.\n\n# **Europe**\n\n**7**\n\n#### **Donations to charity groups**\n\nEmployees of Sumitomo Mitsui Banking Corporation Europe (SMBCE) conducted volunteer activities in their time off. SMBCE contributes to charitable organizations through an in-house fund and also uses a matching gifts program under\n\nwhich it donates a certain amount for every donation made by its employees.\n\nEmployee volunteers who participated in landscape improvement projects\n\n## **8 Europe**\n\n### **Donation for a Japanese-language speech contest**\n\nThe European office of the Japan Research Institute (JRI) made a donation in support of a Japanese-language speech contest.\n\n## **UNICEF support initiatives**\n\nThrough the Climate & Children Supporters project, the bank has supported UNICEF projects in Mozambique benefitting children and improving\n\nfor further details (in Japanese): www.smbc.co.jp/ccs/\n\n#### **SMBC GLOBAL FOUNDATION 10 The United States**\n\nBased in the United States, SMBC Global Foundation has provided scholarships to more than 5,000 university students in Asian countries since its establishment in 1994. In the United States, it supports educational trips to Japan organized by a high school located in Harlem, New York City, and volunteer employees of SMBC and JRI to participate in school beautification programs. The foundation also provides matching gifts for SMBC employees.\n\nHigh school students from New York who visited Japan on a study trip\n\nScholarship award ceremony for university students in Vietnam", "page_start": 14, "page_end": 14, "source_file": "NYSE_SMFG_2011.pdf" } ] }, { "references": { "source_file": "NYSE_SMFG_2011.pdf", "query": "Did Katsutoshi Konuma participate in the August 2011 expert roundtable on the role of the Sumitomo Mitsui Financial Group's new Food and Agricultural Assessment Loan? 
", "target_page": 8, "target_passage": "Key comments of participants Together with Our Customers Katsutoshi Konuma, Section Manager, Social & Environmental Management, Asahi Breweries Ltd", "chunk_present": { "presence": true, "index": 5 } }, "top_chunk": [ { "text": "Sumitomo Mitsui Financial Group CSR Report **Digest version**", "page_start": 0, "page_end": 0, "source_file": "NYSE_SMFG_2011.pdf" }, { "text": "### EXECUTIVES\n\nFrom left: Mitsuhiko Yamashita, Tadao Takahashi, Toshiyuki Shiga, Carlos Ghosn, Itaru Koeda, Hiroto Saikawa, Carlos Tavares\n\n#### **BOARD OF DIRECTORS AND AUDITORS**\n\n#### **Representative Board Members**\n\nCarlos Ghosn President and Co-Chairman\n\nItaru Koeda Co-Chairman\n\nToshiyuki Shiga Co-Chairman\n\n#### **Board Members**\n\n- Tadao Takahashi Hiroto Saikawa Mitsuhiko Yamashita Carlos Tavares Shemaya Lévy Patrick Pélata\n- **Auditors** Hisayoshi Kojima Shinji Ichishima Keishi Imamura Haruo Murakami\n\n#### **EXECUTIVE COMMITTEE MEMBERS**\n\n- Carlos Ghosn Toshiyuki Shiga Itaru Koeda Tadao Takahashi Hiroto Saikawa Mitsuhiko Yamashita Carlos Tavares Alain-Pierre Raynaud\n(As of June 21, 2005)", "page_start": 6, "page_end": 6, "source_file": "OTC_NSANY_2004.pdf" }, { "text": "# Commitment from the Top\n\n**A Conversation with Tadao Ando, Takeshi Kunibe and Koichi Miyata** \n\n# **What can we do now to spur the reconstruction and revitalization of Japan, and help resolve global issues?**\n\n#### *Uplifting the nation's spirits*\n\nJapan is now facing a wide variety of problems, ranging from the reconstruction of the Tohoku region (the northeastern region of Japan) after the March 11 earthquake and 
tsunami (\"the Great East Japan Earthquake\") to a shrinking and aging population, with falling birth rates and increasing numbers of the aged.\n\nWe must now find ways for people to coexist in harmony with nature, based on a global perspective.\n\nSumitomo Mitsui Financial Group (SMFG) invited the world-famous architect Tadao Ando to join in a conversation on the issues facing society and the ways in which SMFG and its Group companies can bring their expertise to bear as a financial services group.\n\n# Tadao Ando\n\nArchitect. Professor Emeritus at the University of Tokyo, Representative and Vice-chairman of the Great East Japan Earthquake Reconstruction Design Council. Awarded the Order of Cultural Merit in 2010.\n\n**Our measures to support reconstruction after the disastrous earthquake and tsunami Uplifting the nation's spirits**\n\n̶ SMFG has the following priorities in its corporate social responsibility program: Reconstruction after the earthquake and tsunami, environmental measures, addressing the shrinking and aging population, and global challenges. —\n\n**Kunibe**: Japan is facing a difficult period with limited prospects for economic growth due to a shrinking, aging population and a mature economy. Against this backdrop, the country was hit by the unprecedented catastrophe of the Great East Japan Earthquake. We must face up to the new challenges arising from this disaster.\n\nI believe the time has come for us to reconsider what we can do in our capacity as a financial institution to address a variety of issues, including the four priorities. Today I hope we can discuss not only the road to reconstruction after the disaster, but also ways to uplift the nation's spirits.\n\n**Ando**: Japan has achieved two miracles - the Meiji Restoration of 1868, and the economic recovery following the end of World War II in 1945. Both events are also regarded globally as being miraculous.\n\nIn 1945, foreign diplomats and businessmen visiting Japan were fully confident that the country would recover as they surveyed the ruins and the scorched earth around them, because, in the words of one of them, \"People really work hard and help each other, and children take heed of what their parents say and study hard. And because there is a sparkle in their eyes.\"\n\nThereafter, the Japanese worked furiously until the country became an economic juggernaut. However, in the early 1970s, people became complacent about their affluence, and stopped working hard and making efforts. Children assumed that if they went to a top-class university they would walk into a top-class company and have nothing to worry about thereafter. So they started going to cram schools even before kindergarten. I give lectures on the theme \"students born in and after 1980 are hopeless cases\" (laughs). That was because of the prevailing attitude at the time that Japan's national development would go on for ever and the economy would remain stable. As a result, parents spoilt their children, and we saw more children who could not do anything. Many such children are in their 30s now.\n\nAnd in this situation, the asset bubble burst [in the early 1990s], and the collapse of Lehman [hit world markets] in 2008, and now we have the earthquake and tsunami disaster. It seems that everything that happens these days merely makes us more anxious. I think everyone needs to hit the 'reset' button in some sense. If we don't, more difficulties lie ahead.\n\n**Miyata**: Indeed, prior to 1970, living standards or wage levels were very low, but I think it was a very happy time. People believed that if they really worked hard, their daily lives would improve and their\n\n# Takeshi Kunibe\n\nPresident and CEO Sumitomo Mitsui Banking Corporation\n\ncompanies would do better and the whole country would benefit. Returning to Mr. Ando's words, and his comments about clinging to the status quo, more people now think, \"Oh, well, my life is fairly comfortable and that's enough for me.\" This sense of stagnation, or resignation, that people feel in their lives has spread throughout Japan. But when the disaster struck, people again came together and worked together in the recovery effort. I thought, \"Not everything that happened has been bad.\" But I fear the consequences if we don't galvanize, coordinate and maximize efforts more effectively.\n\n**Kunibe**: As for SMBC, I wondered if employees at all the branches and other offices in the affected areas would be able to get to work and carry out their duties at such a difficult time for their own families; or if they would be able to open their offices for business on weekends and other holidays. Despite the lack of water and gas, they really gave their all to provide banking services. It was really uplifting to see such dedication and sense of responsibility as an employee of a financial institution entrusted with essential social infrastructure. I talk about \"the strength of our front-line staff,\" but I was able to fully appreciate just how extraordinarily strong SMFG and SMBC are thanks to this display of front-line commitment.\n\nMoving forward on the reconstruction of the Tohoku region, I believe we can also contribute to the rebuilding of infrastructure through project finance and other fundamental businesses of financial institutions in which we excel. We are now actively engaged in promoting business in the Tohoku region, including business matching with parties outside the region. In addition, we have a range of support activities in partnership with the Miyagi prefectural government and The 77 Bank, Ltd., which is based in Miyagi.\n\n**Miyata**: In the same way, other SMFG Group companies have been sending out volunteers, and providing donations not only as a company, but also through individual employees. SMBC was at the heart of all these activities, and this was a good opportunity for us to appreciate anew how our business contributes to the public good.\n\n# Koichi Miyata\n\nPresident Sumitomo Mitsui Financial Group, Inc.\n\nThe SMFG Group has 62,000 employees, \"stepping up to the plate and working hard to give something back to society.\" I think it is important to develop ways of making this a shared aspiration of all the employees of the Group.", "page_start": 2, "page_end": 2, "source_file": "NYSE_SMFG_2011.pdf" }, { "text": "# **Priority Issues for Us** As one of Japan's leading financial services groups,\n\nthe SMFG Group is taking the lead in aggressively addressing the four priority issues we have identified as significantly impacting the nation. 
\n\n**Measures for Japan's regeneration**\n\n# **Reconstruction after the earthquake and tsunami**\n\nMitsui Charity Hospital at its establishment\n\nBesshi copper mine in the Meiji era And today\n\nThe March 11 earthquake and tsunami (The Great East Japan Earthquake) undermined power generation capacity and severed manufacturing supply chains across the nation. This was in addition to the severe damage sustained by agriculture and fisheries in the Northeast.\n\nThe disaster also threw into relief many social issues facing the nation. By leveraging our role as a leading financial services group, we are committing our full range of resources to dealing with the enormous task of regional reconstruction after the earthquake, in partnership with stakeholders including enterprises, local governments and non-profit organizations.\n\n#### **Further measures needed**\n\n- Wide-ranging financial support for the reconstruction of infrastructure\n- Ongoing disaster recovery activities by employee volunteers\n- Comprehensive support for industrial recovery in partnership with local governments and financial institutions in the disaster-affected areas\n\n**Environmental measures Creating systems for sustainability Global challenges**\n\nThe SMFG Group has positioned environmental businesses as an area where it can most effectively leverage its role as a leading financial services group. This is a priority field for the future. Measures are being stepped up on a range of fronts — not only involving a low-carbon society, but also dealing with issues such as water supply, soil contamination, energy and biodiversity. We aim to contribute to sustainable development by supporting the worldwide adoption of Japan's much-admired technological breakthroughs, with a particular focus on the Asian region.\n\n#### **Further measures needed**\n\n- Give further support for businesses involved in greenhouse gas reduction, water supply, new energy and resource initiatives\n- Do more to safeguard biodiversity, in our capacity as a financial institution\n- Share our information assets and know-how globally in the environmental business\n\nIn the past, the Sumitomo Group undertook large-scale afforestation programs to solve the problem of pollution around the Besshi copper mine, while the Mitsui Group set up the Mitsui Memorial Hospital to give the poorest in society access to basic medical care. Based on this corporate social responsibility DNA embedded in the business philosophies of both the Sumitomo and Mitsui groups over the 400 years of their existence, we will continue to play our part in solving problems facing the international community through our financial service operations.\n\n# **Shrinking and aging population Ensuring peace of mind for the future**\n\nCurrently, the proportion of people aged 65 or over in Japan has reached 23.4%*. SMFG will help create frameworks enabling the elderly to enjoy a vibrant lifestyle with peace of mind, through support for life-cycle planning and other measures. The SMFG Group aims to create systems and a corporate culture that foster a sound balance between work and care needs, given that many group employees will later need to nurse ailing relatives. *Estimates by the Statistics Bureau, Ministry of Internal Affairs and Communications (October 1, 2011)\n\n#### **Further measures needed**\n\n- Support businesses involved in health, medical and nursing care\n- Expand range of financial products and services for the elderly (planning for asset management for old age)\n- Foster a better work-life balance\n\n# **Symbiosis and diversity**\n\nIn anticipation of further global expansion, the SMFG Group is aggressively internationalizing its operations both in Japan and overseas. Initiatives include aggressive development of advisory services for infrastructure upgrades in emerging economies, a cross-departmental endeavor, as well as contributions to the international community and the environmental business, chiefly through branches and representative offices overseas.\n\nWe will continue to discuss and review various approaches to issues facing the international community so as to build up trust internationally as a global player.\n\n#### **Further measures needed**\n\n- Share expertise in corporate social responsibility with the international community\n- Improve financial services in preparation for the globalization of operations in Japan (multilingual support)\n- Promote diversity", "page_start": 5, "page_end": 5, "source_file": "NYSE_SMFG_2011.pdf" }, { "text": "# **Social Contribution Activities**\n\n**SMFG as a corporate citizen: Working to create a prosperous society for all**\n\nGarbage was analyzed in the Kugenuma Beach cleanup event, in which SMFG and its Group companies participated\n\n# **SMFG and its Group companies participate in neighborhood cleanup programs**\n\nIn fiscal 2010, 150 volunteers from the SMFG Group participated in beach cleanup activities in Kanagawa and Hyogo 
prefectures on \"SMFG Clean-up Day.\" This initiative is not simply a matter of picking up garbage. It also involves inspections and analysis of garbage to identify pointers for providing solutions for environmental issues in the future.\n\nIn addition to beach cleanup activities in Chiba and Hyogo prefectures by SMBC Friend Securities, Group companies of Cedyna, Sumitomo Mitsui Finance & Leasing, the Japan Research Institute and SMBC Nikko Securities carry out ongoing cleanup and other activities in the areas around their offices and branches.\n\nThe Minato Bank and Kansai Urban Banking Corporation also engage in cleanup activities around Suma Beach and Lake Biwa, to protect the regional environment.\n\n# **Supporting education in developing countries, together with our customers and employees**\n\nCardholders and employees of Sumitomo Mitsui Card joined a literary social contribution initiative by participating in the Books To The People 2010 project operated by BOOKOFF CORP.
This project aims to provide environments in which children can read books in purpose-built facilities, through donations to Room to Read, a non-governmental organization that supports education in developing countries. These NGO donations are pegged to total numbers of used books and other items purchased by cardholders. Through the Sumitomo Mitsui Card-operated online shopping mall POINT UP Mall, cardholders are encouraged to buy used books through BOOKOFF, and employees collect and donate used books from their homes and companies.\n\nCollection box for used books and other items installed in an employee canteen\n\nBuilding libraries in developing countries through the NGO Room to Read\n\nSupporting education in developing countries\n\n# **Donations through \"The World Bank Green Fund\"**\n\nSMBC and SMBC Nikko Securities donate a portion of the profits from marketing of the SMBC Nikko World Bank Bond Fund (\"The World Bank Green Fund\") to the Japanese Red Cross Society and the Japan Committee for UNICEF.
\n\nThis investment trust is the world's first fund developed in cooperation with the World Bank that invests in World Bank green bonds, according to research by Nikko Asset Management Co., Ltd. Funds from the World Bank green bonds support only World Bank-funded projects in developing countries to mitigate global warming.\n\n*Research by Nikko Asset Management Co., Ltd.\n\nDonating to the Japanese Red Cross\n\n# **SMBC Nikko Securities' \"Green Week\"**\n\nIn the fall of 2010, SMBC Nikko Securities established its \"Green Week\" for strengthening environmental protection and social contribution activities, with the aim of promoting communication within regional society and among participating employees and their families, while deepening understanding of environmental protection through participation in social contribution activities. Between November 13 and December 5, 2010, environmental protection programs were rolled out by cross-organizational \"Green Committees\" in four locations in Japan, with the participation of 280 employees and their families.
In addition, regional contribution activities were carried out by branches at their own initiative. A wide variety of social contribution activities, such as the collection of used stamps and PET bottle caps, were carried out for global causes. SMBC Nikko Securities will continue activities that contribute to society and prioritize communication between employees.\n\nRegional contribution activities at the branch level\n\nCollection of PET bottle caps: Donating to Japan Committee for UNICEF for international contribution purposes\n\nEmployees and their families pitch in to clean up the bed of the Ara River in Tokyo\n\n| Environmental protection activities | |\n| --- | --- |\n| Forestry management volunteering experience in Osaka (Izumi no Mori) | 117 participants |\n| Volunteers at the Shonan Erosion Control Forest project | 62 participants |\n| Helping clean up Senju Shinbashi bridge that spans Ara River | 64 participants |\n| Helping clean up Nishi Araibashi bridge that spans Ara River | 37 participants |\n| Social contribution collection activities | |\n| Support for overseas causes through used-stamp collection | 11.4 kg of stamps were collected |\n| Presentation of stationery to children in developing countries | 788 ballpoint pens and pencils |\n| Vaccine donation from the collection of PET bottle caps | 168.9 kg (enough to vaccinate 84.45 people against polio) |\n| Activities organized by branches | |\n| Sendai Branch | Accepting middle school students for workplace experience programs |\n| Matsudo Branch | Accepting middle school students for workplace experience programs |\n| Shizuoka Branch | Abekawa River driftwood-clearing festival |
", "page_start": 13, "page_end": 13, "source_file": "NYSE_SMFG_2011.pdf" }, { "text": "## Specific Examples of CSR Activities\n\n# **Together with Our Customers**\n\n**We work as a team to improve customer satisfaction and product quality, and, while supporting the customer, contribute to the sustainable development of society as a whole.**\n\n# **The financial sector's role in improving the nation's diet and in strengthening the agricultural and fisheries sectors**\n\nFor many years, food supply networks in Japan were premised on mass production and mass consumption, enabling the country to meet soaring food demand at a time of rapid growth in the population and economy. But in recent years, consumers have come to place more priority on factors other than volume and price, such as food safety and healthiness, and the cultural aspects of diet. As discussion continues on the need for farmers to increase production scale and move into processing and marketing, major changes are underway in the agriculture and fisheries sector in Japan.\n\nAgainst this backdrop, SMBC has developed a new financial product for this sector.
The SMBC Food and Agricultural Assessment Loan comes with conditions, depending on the results of an evaluation of food-producers' progress in areas such as food safety and environment-friendliness, healthiness and nutritional value, and efficiency of distribution. The Japan Research Institute researches measures in the areas of food and farming being taken by the loan applicant, and drafts a simple \"diagnosis\" stating whether there is room for future improvement. Ernst & Young ShinNihon LLC provides expert opinions on ongoing improvement of this system.\n\nBy backing customer companies' own initiatives in the areas of food and agriculture in this way, SMBC will be supporting measures to improve the diet of the Japanese and strengthen the agriculture and fisheries sector.\n\n#### **For further details, please see our website.**\n\nA roundtable session with experts held in August 2011 considered the role of the new SMBC Food and Agricultural Assessment Loan in improving the food supply chain that links food and fishery producers with food processors and consumers.
Opinions were also exchanged on what other future role the bank might assume in this regard, given the current situation and issues facing the food industry and agriculture in Japan.\n\n**Roundtable session: SMBC Food and Agricultural Assessment Loan**\n\n#### **Key comments of participants**\n\n\"We want to deliver value by creating demand and quality combined with safety, peace of mind and trust.\" Katsutoshi Konuma, Section Manager, Social & Environmental Management, Asahi Breweries Ltd.\n\n\"As consumer tastes go through a time of great change, I think it is important to prioritize ingredients and the attitude of customers toward eating.\" Yasuhiro Nakashima, Associate Professor, Graduate School of Agricultural and Life Sciences, The University of Tokyo\n\n\"Eating should be something that generates emotion. New potential exists in the world of cuisine.\" Daisuke Yamamoto, Vice Senior Consultant, Research Department, The Japan Research Institute, Limited\n\n\"An important concept is multilateral dialogue as the number of parties involved in food production increases throughout the supply chain.\" Yoichiro Fukayama, Planning Dept., Deputy Head (with powers of representation) of the Corporate Banking Unit & Middle Market Banking Unit, SMBC\n\nModerated by Kenji Sawami, Partner, Ernst & Young ShinNihon LLC\n\n# **Making banking a more pleasant experience for all customers**\n\nWith the old-age dependency ratio soaring, the SMFG Group aims to provide friendly, easy-to-use banking services for all its customers.\n\nSome Group companies are likewise making their facilities barrier-free at bank branches with large numbers of customers, to tailor services to the needs of all customers.
\n\nFor example at the Minato Bank, we have equipped all ATMs at all our branches and cashpoints with voice-guidance handsets for the visually impaired.\n\nIn addition, we have set up priority seating in the lobby of each of our branches for customers who are very old or who have mobility problems. We are also steadily introducing queue-number displays using Color Universal Design (CUD) principles, which are easier to read for customers with eyesight concerns.\n\nHandheld hearing support device (The Minato Bank)\n\nA further measure is installation of handheld hearing support devices at all branches (except housing loan promotion offices), to allay the concerns of hearing-impaired customers who find it difficult to converse and follow spoken instructions. By using the devices as communication tools, bank employees can respect customer privacy and do not have to talk loudly. Further measures include posting of \"green ear\" logos at branches to reassure customers that the bank has facilities for conversing in writing.
All branches are being equipped with white boards and special message tablets for dialogue with customers who have concerns about their hearing and who dislike written conversations.\n\n# **Peace of mind at the bank counter**\n\nThe Minato Bank has created a position titled \"Service Care Manager\" at each of its branches, filled by at least one branch managerial staffer, as part of measures to make branch visits more pleasant for customers, following earlier nuts-and-bolts improvements.\n\nService Care Managers are dedicated to improving support and services for the customer at each branch. Their training includes simulations of the problems faced by persons with disabilities, awareness raising and support methods for the elderly and persons with disabilities.
\n\n### **New queue-number display system installed at bank counters**\n\nColors and special designs are used to make queue-number displays more visible to all customers (The Minato Bank)\n\nTelephone handset-type ATM (The Minato Bank)\n\n# **Preparing our businesses for a higher old-age dependency ratio**\n\nIn addition to removing mobility barriers at branches, the bank plans to aggressively support installation of facilities needed to cope with the rapidly rising old-age dependency ratio. As a first step, SMBC has established clear guidelines for supporting the construction of rental housing for the elderly, expected to be a future growth area.\n\nWhile continuing to tailor business activities to the needs of the community at large and ensuring a friendly banking environment for our customers, the SMFG Group also plans to support the creation of frameworks that enable the elderly to live active lives with peace of mind.
", "page_start": 7, "page_end": 7, "source_file": "NYSE_SMFG_2011.pdf" }, { "text": "# **Environmental Activities**\n\n**International initiatives in Asian countries and others**\n\n# **Taking a leading role in environmental businesses in Asia**\n\nThe SMFG Group supports environmental businesses in the rapidly growing markets of Southeast Asia from various perspectives. For example in Malaysia, SMBC signed an operational alliance on environmental businesses with the Federation of Malaysian Manufacturers in April 2010, and in October that year acted as main sponsor for Malaysia's first large-scale international environmental exhibition, International Greentech & Eco products Exhibition & Conference Malaysia 2010 (IGEM). At this event, a keynote speech was given by Chairman Teisuke Kitayama, and SMBC and Sumitomo Mitsui Finance & Leasing opened booths. The exhibition, visited on successive days by Malaysia's King, prime minister, some of the regional Kings of Malaysia, and cabinet ministers, raised awareness of environmental businesses in the nation.
At the same time, in April 2011, the bank's Malaysia unit Sumitomo Mitsui Banking Corporation Malaysia Berhad began operations. This unit is broadening support measures to contribute to the development of environmental businesses in Malaysia.\n\nMeanwhile, in August 2010, the Japan Research Institute, SMBC and a number of other companies publicly recruited by Japan's New Energy and Industrial Technology Development Organization (NEDO) were jointly commissioned to carry out basic research into Malaysia's Green Township concept, a national town-planning project backed by NEDO.\n\nLooking ahead, SMBC plans to jointly compile an action plan with the Malaysian government and related enterprises for establishment of \"green townships\" based on the cities Putrajaya and Cyberjaya that Prime Minister Najib Razak is promoting. It also plans to propose specific projects in the concept.
\n\n# **Promoting energy-saving and low-emission industries in China**\n\nIn China, which emits more carbon dioxide than any other country, finding ways of promoting new energy-saving measures and restructuring industry have become pressing issues.\n\nThe Japan Research Institute has built up a successful track record in the course of its advisory activities in China, in joint research into local-level microgrid construction at the Tianjin Eco-City, and in policy-making relating to renewable energy management systems and other areas. In partnership with the Guangdong Provincial Department of Science and Technology, the Japan Research Institute also advises government departments on system establishment for new energy-saving businesses. Guangdong is China's richest province by gross provincial product, and here both needs and potential in the field of energy-saving are very great. The Japan Research Institute also supports industrial restructuring and low-carbon projects in the province through model projects.
\n\n**Support for adoption of electric vehicles and car-sharing**\n\nIn the battle against global warming, both public and private sectors are facing mounting pressure to curb carbon dioxide pollution from transportation, one of the major sources of emissions. Against this backdrop, the Japan Research Institute is supporting environmental businesses that map out pathways and develop projects, tailored to the needs of particular localities, to bring about a low-carbon society. Experimental projects are currently underway in Kanagawa Prefecture, Saitama Prefecture, Kyoto and Sapporo. These initiatives are aimed at hastening the adoption of electric vehicles and car-sharing to cut carbon dioxide emissions. The Institute is working in cooperation with government bodies, car-rental, commercial vehicle-leasing and parking-facility management companies, railways, communications providers and other entities.
\n\nElectric vehicles not only emit no carbon dioxide, but offer a comfortable drive as well\n\nIGEM2010 greeted many visitors", "page_start": 12, "page_end": 12, "source_file": "NYSE_SMFG_2011.pdf" }, { "text": "## **Corporate Outline (as of September 30, 2011)**\n\n| Company Name | : | Sumitomo Mitsui Financial Group, Inc. |\n| --- | --- | --- |\n| Business Description | : | Management of banking subsidiaries (under the stipulations of Japan's Banking Act) and of non-bank subsidiaries, as well as the performance of ancillary functions |\n| Established | : | December 2, 2002 |\n| Head Office | : | 1-2, Marunouchi 1-chome, Chiyoda-ku, Tokyo, Japan |\n| Chairman of the Board | : | Masayuki Oku |\n| President | : | Koichi Miyata (Concurrent Director at Sumitomo Mitsui Banking Corporation) |\n| Capital | : | ¥2,337.8 billion |\n| Stock Exchange Listings | : | Tokyo Stock Exchange (First Section) |\n| | | Osaka Securities Exchange (First Section) |\n| | | Nagoya Stock Exchange (First Section) |\n| | | Note: American Depositary Receipts (ADRs) are listed on the New York Stock Exchange.
|\n\n## **Structure of Sumitomo Mitsui Financial Group (as of September 30, 2011)**\n\n# **Our CSR reporting**\n\nAt Sumitomo Mitsui Financial Group, three kinds of CSR reports are compiled.\n\n| CSR report 2011 (digest version) | CSR disclosure through specific examples |\n| --- | --- |\n| Covers CSR baselines and CSR activities at SMFG and its Group companies, centered on specific examples | |\n| CSR report 2011 (digest version with examples of activities and statistical performance, online PDF file) | Comprehensive disclosure of CSR activities |\n| Covers environment-related statistical data and gives more detailed information on CSR activities | |\n| CSR report (online version, Japanese only) www.smfg.co.jp/responsibility | Enriched CSR disclosure |\n| This is the official version of our CSR report. Covers the full spectrum of CSR activities at SMFG | |\n\n# **Editorial Policy**\n\nThis report has been created in an effort to convey to our stakeholders the variety of our initiatives and the roles the SMFG Group is fulfilling as we work to create a sustainable society. We have aimed to present the information clearly, so that readers may understand our attitude that the fulfillment of CSR is the essence of business itself, and our initiatives act upon this. Our CSR Report 2011 (digest version), launched last fiscal year, is intended to present more concise reports of the Group's CSR activities, with a focus on specific activities of interest.
To complement this, we have also posted online our CSR Report 2011 (digest version, with examples of activities and statistical performance), with more detailed information on CSR activities and statistical data omitted in the CSR Report 2011 (digest version). We disclose the full range of our CSR activities as a Group on our website in the official-use version of our CSR Report (in Japanese only). It is recommended that you read it in combination with the above two digest versions in order to understand our CSR and other activities in greater detail.\n\nFrom the current fiscal year, we are including third-party opinions in the website version.\n\n# **Scope of this Report**\n\n- Sumitomo Mitsui Financial Group, Inc.\n- Sumitomo Mitsui Banking Corporation\n- SMFG Card & Credit, Inc.\n- Sumitomo Mitsui Card Company, Limited\n- Cedyna Financial Corporation\n- Sumitomo Mitsui Finance and Leasing Co., Ltd.\n- The Japan Research Institute, Limited\n- SMBC Friend Securities Co., Ltd.\n- SMBC Nikko Securities Inc.\n- THE MINATO BANK, LTD.\n- Kansai Urban Banking Corporation\n- Other Group companies\n\nThroughout this report, **\"Sumitomo Mitsui Financial Group\"** or **\"SMFG\"** refers to the holding company alone. **\"The SMFG Group\"** refers to the holding company and its primary domestic and international subsidiaries and affiliates. 
Company name abbreviations and other special terminology\n\n## **Reference guidelines**\n\nGlobal Reporting Initiative (GRI) Sustainability Reporting Guidelines 2006 (G3) * Global Reporting Initiative (GRI): Established as an international standard for sustainability reporting, compilers set up an international organization (GRI) in 1997 to encourage its adoption worldwide.\n\n# **About this Report**\n\n- Period Covered : April 1, 2010 to March 31, 2011 ( \"Fiscal 2010\" ) Note: Certain items in this report refer to activities taking place after April 2011.\nPublication Date of Japanese Document : December 2011\n\n- Contact :\n\t- 1-2 Marunouchi 1-chome, Chiyoda-ku, Tokyo 100-0005 TEL: +81-3-3282-8111\n\nGroup CSR Department, Sumitomo Mitsui Financial Group, Inc.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "# Today, Tomorrow and Beyond\n\n**President Sumitomo Mitsui Financial Group, Inc.**\n\n**Koichi Miyata**\n\nFirst, I would like to extend our deepest sympathies and heartfelt condolences to all those who have suffered and to the families and friends of those who tragically lost their lives in the devastating earthquake and tsunami that struck northeastern Japan on March 11, 2011. We pray for the early recovery of the affected people and areas.
SMFG is dedicated to seamlessly responding to clients' needs by leveraging our group-wide capabilities, offering optimal products and services, and ensuring that every employee and the overall group are capable of responding to the challenges of globalization. I believe that through these measures, we will contribute to the growth and development of our clients and society, and ourselves grow in partnership with them. Through our basic policy of becoming \"a globally competitive financial services group with the highest trust of our clients, society and other stakeholders\" by maximizing our core strengths of \"Spirit of Innovation,\" \"Speed\" and \"Solution & Execution,\" we will continue to stay ahead of the times, no matter how challenging, and actively adapt to changes in our business environment.\n\n## **INDEX**\n\n| Foreword | 1 |\n| --- | --- |\n| Commitment from the Top A Conversation with Tadao Ando, | 3 |\n| Takeshi Kunibe and Koichi Miyata | |\n| What can we do now to spur the reconstruction and revitalization of Japan, | |\n| and help resolve global issues?
| |\n| Measures to Support Reconstruction | |\n| after the March 11 | |\n| Earthquake and Tsunami | 8 |\n| Priority Issues for Us | 9 |\n| Our Mission and CSR at SMFG | 11 |\n| 〈Specific Examples of CSR Activities〉 | |\n| Together with Our Customers | 13 |\n| Together with Our Shareholders | |\n| and Markets | 17 |\n| Together with Our Employees | 19 |\n| Environmental Activities | 21 |\n| Social Contribution Activities | 25 |\n| Corporate Outline/Editorial Policy | 29 |", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "### INFORMATION ON SUBSIDIARIES AND AFFILIATES\n\n| Consolidated subsidiaries | | | | As of Mar. 31, 2005 |\n| --- | --- | --- | --- | --- |\n| Company | Location | Principal business | Capital (millions) | Nissan share*(%) |\n| Japan | | | | |\n| Nissan Shatai Co., Ltd. | Hiratsuka-shi, Kanagawa | Manufacture and sales of automobiles and parts | ¥7,904 | 43.80 |\n| Aichi Machine Industry Co., Ltd. | Nagoya, Aichi | Manufacture and sales of automotive parts | ¥8,518 | 41.70 |\n| JATCO Ltd. | Fuji, Shizuoka | Manufacture and sales of automotive parts | ¥29,935 | 81.76 |\n| Nissan Kohki Co., Ltd. | Samukawa, Kanagawa | Manufacture and sales of automotive parts | ¥2,020 | 97.73 |\n| Calsonic Kansei Corporation | Tokyo | Manufacture and sales of automotive parts | ¥40,606 | 41.87 |\n| Nissan Motor Car Carrier Co., Ltd. | Tokyo | International automobile transport | ¥640 | 60.00 |\n| Nissan Trading Co., Ltd. | Yokohama, Kanagawa | Import and export of automobiles, parts, etc. | ¥320 | 100.00 |\n| Nissan Financial Services Co., Ltd. | Chiba, Chiba | Automobile financing and leasing | ¥16,387 | 100.00 |\n| Autech Japan, Inc. | Chigasaki, Kanagawa | Development, manufacture and sales of limited-edition automobiles | ¥480 | 100.00 |\n| Nissan Real Estate Development | Tokyo | Real estate sales, purchase and leasing | ¥1,000 | 70.50 |\n| Corporation | | | | |\n| Nissan Finance Co., Ltd. 
| Tokyo | Finance and accounting support | ¥2,491 | 100.00 |\n| Aichi Nissan Motor Co., Ltd. | Nagoya, Aichi | Sales of automobiles and parts | ¥100 | 100.00 |\n| Tokyo Nissan Motor Sales Co., Ltd. | Tokyo | Sales of automobiles and parts | ¥100 | 100.00 |\n| Nissan Prince Tokyo Motor Sales | Tokyo | Sales of automobiles and parts | ¥100 | 100.00 |\n| Co., Ltd. | | | | |\n| Nissan Chuo Parts Sales Co., Ltd. | Yokohama, Kanagawa | Sales of automobile repair parts | ¥545 | 80.61 |\n| US | | | | |\n| Nissan North America, Inc. | Gardena, California | Management of North American subsidiaries, manufacture and sales of automobiles and parts | $1,791 | 100.00 |\n| Nissan Motor Acceptance Corporation | Torrance California | Finance of wholesale and retail automobile sales in US | $499 | 100.00 |\n| Nissan Motor Corporation | Honolulu, Hawaii | Sales of automobiles and parts | $6 | 100.00 |\n| in Hawaii, Ltd. | | | | |\n| Nissan Capital of America, Inc. | Torrance, California | Financing for group companies | $1 | 100.00 |\n| Nissan Technical Center | Farmington Hills | Research and development, testing | $16 | 100.00 |\n| North America, Inc. | Michigan | | | |\n| Nissan Motor Insurance Corporation | Honolulu, Hawaii | Casualty insurance | $10 | 100.00 |\n| Nissan Forklift Co., North America | Marengo, Illinois | Manufacture and sales of forklifts and parts | $34 | 100.00 |\n| Canada | | | | |\n| Nissan Canada, Inc. | Mississauga, Ontario | Sales of automobiles and parts | CAN$68 | 100.00 |\n| Mexico | | | | |\n| Nissan Mexicana, S.A. de C.V. | Mexico D.F. 
| Manufacture and sales of automobiles and parts | P17,056 | 100.00 |", - "page_start": 107, - "page_end": 107, - "source_file": "OTC_NSANY_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "news2.pdf", - "query": "What is the trend of flood risk in Canada in 2024?", - "target_page": 1, - "target_passage": "(NC) Communities in Canada are facing increased flood risks, with 1.5 million homes highly exposed", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nHome - Safety Community Affairs Finance - Insurance Editor's Picks\n\n## FRANÇAIS\n\nTrois façons dont des collectivités au Canada réduisent leurs risques d'inondation\n\n# **Three ways Canadian communities are reducing flood risks**\n\n(NC) Communities in Canada are facing increased flood risks, with 1.5 million homes highly exposed. There are large-scale programs available across the country providing flood protection measures for communities at risk, such as Intact's Municipal Climate Resiliency Grants. This program is helping build the resilience of communities and homes through a variety of preventative actions.\n\nWetlands can reduce flood risk by absorbing large quantities of water, but they are not typically found in cities. In Vancouver, B.C., Environmental Youth Alliance and Strathcona Community Gardens created a wetland on downtown's east side, an area historically prone to flooding. Made up of natural elements like ponds and marshes, the wetland reduces the community's flood risk by catching and absorbing rainfall and runoff from surrounding surfaces.\n\nKnowing the risks is the first step to protecting homes and communities. In New Brunswick, the City of Fredericton launched a Neighbourhood Flood Risk Tool to provide easy access to online flood prevention guidance. Residents can input their addresses to see if they are at risk and learn tips to reduce the risk of flooding around their properties. 
The portal launched in the summer of 2023 and was viewed 27,000 times in its first year.\n\nRebate programs are a powerful motivation for homeowners to make upgrades that might otherwise be put off. In PEI, the City of Charlottetown offered rebates covering 75 per cent of eligible material and labour costs, up to a maximum of $1,000. More than 90 properties completed upgrades, including installing sump pumps, backup batteries, backwater valves, and water monitors and alarms, to better prepare them for extreme weather events.\n\nCommunities can learn more about the grant program and how to apply at intactfc.com/mcrg.\n\nwww.newscanada.com Word Count: 281\n\n#### M e d i a Att a c h m e n ts −\n\nHave your say! Complete our 2025 Media Survey\n\nRetrain your way to a new job\n\nThe top AI-powered tech trends in 2025", - "page_start": 0, - "page_end": 0, - "source_file": "news2.pdf" - }, - { - "text": "April 2024", - "page_start": 0, - "page_end": 0, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "- 200. \"Big tech and the pursuit of AI dominance\" (https://www.economist.com/business/2023/03/2 6/big-tech-and-the-pursuit-of-ai-dominance). *The Economist*. 26 March 2023. Archived (http s://web.archive.org/web/20231229021351/https://www.economist.com/business/2023/03/26/ big-tech-and-the-pursuit-of-ai-dominance) from the original on 29 December 2023.\n- 201. Fung, Brian (19 December 2023). \"Where the battle to dominate AI may be won\" (https://ww w.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html). *CNN Business*. Archived (https://web.archive.org/web/20240113053332/https://www.cnn.com/2023/12/19/tech/cloudcompetition-and-ai/index.html) from the original on 13 January 2024.\n- 202. Metz, Cade (5 July 2023). \"In the Age of A.I., Tech's Little Guys Need Big Friends\" (https://w ww.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html). *The New York Times*. 
Archived (https://web.archive.org/web/20240708214644/https://www.nytim es.com/2023/07/05/business/artificial-intelligence-power-data-centers.html) from the original on 8 July 2024. Retrieved 5 October 2024.\n- 203. \"Electricity 2024 Analysis\" (https://www.iea.org/reports/electricity-2024). *IEA*. 24 January 2024. Retrieved 13 July 2024.\n- 204. Calvert, Brian (28 March 2024). \"AI already uses as much energy as a small country. It's only the beginning\" (https://www.vox.com/climate/2024/3/28/24111721/ai-uses-a-lot-of-ener gy-experts-expect-it-to-double-in-just-a-few-years). *Vox*. New York, New York. Archived (http s://web.archive.org/web/20240703080555/https://www.vox.com/climate/2024/3/28/2411172 1/ai-uses-a-lot-of-energy-experts-expect-it-to-double-in-just-a-few-years) from the original on 3 July 2024. Retrieved 5 October 2024.\n- 205. Halper, Evan; O'Donovan, Caroline (21 June 2024). \"AI is exhausting the power grid. Tech firms are seeking a miracle solution\" (https://www.washingtonpost.com/business/2024/06/2 1/artificial-intelligence-nuclear-fusion-climate/?utm_campaign=wp_post_most&utm_medium =email&utm_source=newsletter&wpisrc=nl_most&carta-url=https%3A%2F%2Fs2.washingto npost.com%2Fcar-ln-tr%2F3e0d678%2F6675a2d2c2c05472dd9ec0f4%2F596c09009bbc0f 20865036e7%2F12%2F52%2F6675a2d2c2c05472dd9ec0f4). *Washington Post*.\n- 206. Davenport, Carly. \"AI Data Centers and the Coming YS Power Demand Surge\" (https://web. archive.org/web/20240726080428/https://www.goldmansachs.com/intelligence/pages/gs-res earch/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf) (PDF). *Goldman Sachs*. Archived from the original (https://www.goldmansachs.com/intellige nce/pages/gs-research/generational-growth-ai-data-centers-and-the-coming-us-power-surg e/report.pdf) (PDF) on 26 July 2024. Retrieved 5 October 2024.\n- 207. Ryan, Carol (12 April 2024). 
\"Energy-Guzzling AI Is Also the Future of Energy Savings\" (http s://www.wsj.com/business/energy-oil/ai-data-centers-energy-savings-d602296e). *Wall Street Journal*. Dow Jones.\n- 208. Hiller, Jennifer (1 July 2024). \"Tech Industry Wants to Lock Up Nuclear Power for AI\" (https:// www.wsj.com/business/energy-oil/tech-industry-wants-to-lock-up-nuclear-power-for-ai-6cb7 5316?mod=djem10point). *Wall Street Journal*. Dow Jones. Archived (https://web.archive.or g/web/20241005165650/https://www.wsj.com/business/energy-oil/tech-industry-wants-to-loc k-up-nuclear-power-for-ai-6cb75316?mod=djem10point) from the original on 5 October 2024. Retrieved 5 October 2024.\n- 209. Kendall, Tyler (28 September 2024). \"Nvidia's Huang Says Nuclear Power an Option to Feed Data Centers\" (https://www.bloomberg.com/news/articles/2024-09-27/nvidia-s-huang-s ays-nuclear-power-an-option-to-feed-data-centers). *Bloomberg*.\n- 210. Halper, Evan (20 September 2024). \"Microsoft deal would reopen Three Mile Island nuclear plant to power AI\" (https://www.washingtonpost.com/business/2024/09/20/microsoft-three-mi le-island-nuclear-constellation). 
*Washington Post*.", - "page_start": 41, - "page_end": 41, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Consolidated Financial Statements June 30, 2024 and 2023\n\n(With Independent Auditors' Report Thereon)", - "page_start": 0, - "page_end": 0, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "**Figure 1.** Hunger and Climate Vulnerability Index for 1981–2010 climate (ensemble mean across the bias-corrected HadGEM3 ensemble).\n\n**Table 2.** Proxies for flood and drought events used in the HCVI.\n\n| extreme weather event description of proxy |\n| --- |\n| average length of flood events number of days in which the cumulative daily rainfall excess is positive, |\n| compared with the 95th percentile in the 1981–2010 average |\n| |\n| average length of drought events number of days in which the cumulative daily rainfall deficit is positive, |\n| compared with the 20th percentile in the 1981–2010 average |\n| |\n\nUN Food and Agriculture Organization, UN Development Programme and UN Population Fund [22]. The exposure component comprised proxies for the average length of flood and drought events calculated with daily precipitation data [23] (table 2). These proxies were chosen above other possible metrics as they were required to replace self-reported instances of flood and drought events used in the original HCVI, which correlate with undernutrition data at the country-level [23]. The proxies were therefore masked to only include data where a significant proportion of people live and grow crops before aggregating to country level and combining to comprise a measure of exposure [23]; nevertheless, it is recognized that precipitation data alone may not always be adequate for representing flood and drought events, so the current method is regarded as preliminary.\n\nThe impacts of projected climate change, therefore, act through changes in these quantities. 
In the current version of the HCVI, climate-change impacts on other quantities such as crop yield are not considered. Socio-economic factors affecting sensitivity and adaptive capacity are fixed at present-day conditions.\n\nThe ensemble-mean baseline HCVI calculated with the high-resolution bias-corrected HadGEM3 ensemble is shown in figure 1. The spatial pattern is compatible with HCVI values calculated using reanalysis data at the CMIP5 grid-scale resolution [23]; the most vulnerable regions are sub-Saharan Africa and South Asia. This higher-resolution climate data enables inclusion of additional countries which were not resolved in the lower-resolution CMIP5 data.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed11.pdf" - }, - { - "text": "## **4. Results**\n\nThe Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020.\n\nChart 2 presents Prison population projections from November 2014 to December 2020.\n\n#### **Chart 2: Projected monthly prison population (all scenarios)**\n\nIllustrative Scenario 1 estimates that the prison population will rise to 87,100 by the end of June 2015 and then fall to 81,400 by the end of June 2020.\n\nIllustrative Scenario 2 estimates that the prison population will rise to 88,900 by the end of June 2015 and to 98,900 by the end of June 2020.\n\nThe projected trends reflect the cumulative impacts of the various sentencing, legislative and procedural assumptions that are used to generate the projections. The seasonal pattern reflects the dip in the prison population which is always seen around the Christmas period.\n\nIn the Central Scenario, the prison population is expected to rise to 90,200 by June 2020. The projected population increase is largely due to the recent trends in case mix where we have seen more serious cases come before the courts. 
This results in offenders receiving longer custodial sentence lengths, which in turn places an upward pressure on the prison population. The growth in this scenario is largely driven by the rise in the determinate population which is projected to grow to 60,200 by June 2020. This is partially due to the", - "page_start": 12, - "page_end": 12, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "- 265. Cellan-Jones (2014).\n- 266. Russell & Norvig 2021, p. 1001.\n- 267. Bostrom (2014).\n- 268. Russell (2019).\n- 269. Bostrom (2014); Müller & Bostrom (2014); Bostrom (2015).\n- 270. Harari (2023).\n- 271. Müller & Bostrom (2014).\n- 272. Leaders' concerns about the existential risks of AI around 2015: Rawlinson (2015), Holley (2015), Gibbs (2014), Sainato (2015)\n- 273. \" \"Godfather of artificial intelligence\" talks impact and potential of new AI\" (https://www.cbsne ws.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai). *CBS News*. 25 March 2023. Archived (https://web.archive.org/web/20230328225221/https://www. cbsnews.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai) from the original on 28 March 2023. Retrieved 28 March 2023.\n- 274. Pittis, Don (4 May 2023). \"Canadian artificial intelligence leader Geoffrey Hinton piles on fears of computer takeover\" (https://www.cbc.ca/news/business/ai-doom-column-don-pittis-1.6829302). *CBC*. Archived (https://web.archive.org/web/20240707032135/https://www.cbc. ca/news/business/ai-doom-column-don-pittis-1.6829302) from the original on 7 July 2024. Retrieved 5 October 2024.\n- 275. \" '50–50 chance' that AI outsmarts humanity, Geoffrey Hinton says\" (https://www.bnnbloomb erg.ca/50-50-chance-that-ai-outsmarts-humanity-geoffrey-hinton-says-1.2085394). *Bloomberg BNN*. 14 June 2024. Retrieved 6 July 2024.\n- 276. Valance (2023).\n- 277. Taylor, Josh (7 May 2023). 
\"Rise of artificial intelligence is inevitable but should not be feared, 'father of AI' says\" (https://www.theguardian.com/technology/2023/may/07/rise-of-arti ficial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says). *The Guardian*. Archived (https://web.archive.org/web/20231023061228/https://www.theguardian.com/techn ology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-fatherof-ai-says) from the original on 23 October 2023. Retrieved 26 May 2023.\n- 278. Colton, Emma (7 May 2023). \" 'Father of AI' says tech fears misplaced: 'You cannot stop it' \" (https://www.foxnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-can not-stop). *Fox News*. Archived (https://web.archive.org/web/20230526162642/https://www.fo xnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-cannot-stop) from the original on 26 May 2023. Retrieved 26 May 2023.\n- 279. Jones, Hessie (23 May 2023). \"Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life's Work Won't Lead To Dystopia\" (https://www.forbes.com/sites/hessiejones/20 23/05/23/juergen-schmidhuber-renowned-father-of-modern-ai-says-his-lifes-work-wont-leadto-dystopia). *Forbes*. Archived (https://web.archive.org/web/20230526163102/https://www.fo rbes.com/sites/hessiejones/2023/05/23/juergen-schmidhuber-renowned-father-of-modern-ai -says-his-lifes-work-wont-lead-to-dystopia/) from the original on 26 May 2023. Retrieved 26 May 2023.\n- 280. McMorrow, Ryan (19 December 2023). \"Andrew Ng: 'Do we think the world is better off with more or less intelligence?' \" (https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f93 52be3). *Financial Times*. Archived (https://web.archive.org/web/20240125014121/https://ww w.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3) from the original on 25 January 2024. Retrieved 30 December 2023.\n- 281. Levy, Steven (22 December 2023). 
\"How Not to Be Stupid About AI, With Yann LeCun\" (http s://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview). *Wired*. Archived (h ttps://web.archive.org/web/20231228152443/https://www.wired.com/story/artificial-intelligenc e-meta-yann-lecun-interview/) from the original on 28 December 2023. Retrieved 30 December 2023.", - "page_start": 44, - "page_end": 44, - "source_file": "wikipedia3.pdf" - }, - { - "text": "# **5. Previous Projections**\n\nAt the end of September 2014 the published prison population was within 1.8 % of the 2013 Scenario 2 (central) projection, and within 3.4 % of the 2013 Scenario 1 projection and 0.2 % of the 2013 Scenario 3 projection. This does not indicate which scenario the actual prison population will track going forward.\n\nDifferences between the 2013 projections and the actual population could be explained by changes, different to those projected, in overall demand, offence mix, age and gender of defendants, court routes, custody rates or sentence lengths.\n\nChart 3 plots the 2014 Central Scenario projection against the three 2013 prison population projections. The 2014-2020 Central Scenario projection is above all three scenarios from last year. The higher level of the new projections can be attributed to a more serious case mix coming into the courts with a resulting increase in average custodial sentence lengths. The projection for June 2019 in the Central Scenario this year is 10.2 % above the equivalent scenario (Scenario 2) last year.\n\n**Chart 3: Comparing 2013 and 2014 projections (November 2014 – December 2020)**", - "page_start": 14, - "page_end": 14, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "#### Table of Contents\n\n#### PART II. OTHER INFORMATION ITEM 1. 
LEGAL PROCEEDINGS\n\nFor a description of our material pending legal proceedings, please see Note 10, Commitments and Contingencies, to the consolidated financial statements included elsewhere in this Quarterly Report on Form 10-Q.\n\n# ITEM 1A. RISK FACTORS\n\nOur operations and financial results are subject to various risks and uncertainties, including the factors discussed in Part I, Item 1A, Risk Factors in our Annual Report on Form 10-K for the year ended December 31, 2023, which could adversely affect our business, financial conditions and future results.\n\n# ITEM 2. UNREGISTERED SALES OF EQUITY SECURITIES AND USE OF PROCEEDS\n\nIn connection with the offering of 2.00% Convertible Senior Notes due 2024 in May 2019, we sold warrants to each of Société Générale, Wells Fargo Bank, National Association, Credit Suisse Capital LLC (later assigned to UBS AG, London Branch) and Goldman, Sachs & Co. LLC (together, the \"2019 Warrantholders\"). Between August 19, 2024 and September 30, 2024, we issued an aggregate of 8,506,223 shares of our common stock to the 2019 Warrantholders pursuant to their exercise of such warrants, which were net of the applicable exercise prices. Such shares were issued pursuant to an exemption from registration provided by Rule 3(a)(9) of the Securities Act of 1933.\n\n# ITEM 3. DEFAULTS UPON SENIOR SECURITIES\n\nNone.\n\n# ITEM 4. MINE SAFETY DISCLOSURES\n\nNot applicable.\n\n# ITEM 5. 
OTHER INFORMATION\n\nNone of the Company's directors or officers adopted, modified or terminated a Rule 10b5-1 trading arrangement or a non-Rule 10b5-1 trading arrangement during the Company's fiscal quarter ended September 30, 2024, as such terms are defined under Item 408(a) of Regulation S-K, except as follows:\n\nOn July 25, 2024, Robyn Denholm, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 674,345 shares of our common stock (all resulting from stock options expiring in June 2025), subject to certain conditions. The arrangement's expiration date is June 18, 2025.\n\nOn July 31, 2024, Kimbal Musk, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 152,088 shares of our common stock, subject to certain conditions. The arrangement's expiration date is May 30, 2025.\n\nOn August 12, 2024, Kathleen Wilson-Thompson, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 300,000 shares of our common stock, subject to certain conditions. The arrangement's expiration date is February 28, 2025.\n\n36", - "page_start": 46, - "page_end": 46, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "# What can users expect from UKCP18?\n\nThere are three components to UKCP18: observations of historic climate, marine projections and projections over land. These components are described below and summarised in Table 1. UKCP18 will provide each of these components at a higher spatial and temporal resolution than UKCP09 and with more information on different types of uncertainty.\n\n# **OBSERVATIONS**\n\n### **Annual report: State of the UK Climate. Downloadable data.**\n\nThe \"State of the UK Climate\" report for 2017 will be included as part of the UKCP18 package, bringing the observed data right up to date. 
This annual update8 covers trends, the multidecade climate record and significant weather events such as the early July 2015 hot spell and the exceptionally mild and wet December of the same year.\n\nQuality controlled UK observational datasets from the Met Office observing network, provided at spatial resolutions to match the land projections and for pre-defined administrative regions and river basins, will be available under an Open Government Licence9. For variables such as temperature and precipitation these data sets will span the late 19th Century to the present day and will be provided for daily, monthly, seasonal, annual and long term averages.\n\n# **MARINE PROJECTIONS**\n\n#### **Sea level rise. Storm surge. Past event case studies.**\n\nSea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion. Outputs will include an estimate of the year-to-year changes in sea level rise and a \"plausible but highly unlikely\" scenario known as H++. A new feature of UKCP18 will be assessing the credibility of making sea level rise projections to 2300. The projections will use the latest information from the CMIP5 models and application of the methods used in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report10.\n\nThe UKCP09 storm surge projections will be updated to provide new estimates of the change in high water levels over the 21st Century. These estimates will be based on a combination of projected mean sea level change and projections of change in the extremes due to changes in atmospheric storminess. These \"storminess\" projections will use the same surge model used in operational weather forecasting, using the wind and pressure from the CMIP5 ensemble to drive the surge. 
New understanding of the modification of large-scale sea level change signals as they pass from the open ocean onto the shelf sea around the UK will be incorporated into the UKCP18 marine projections. UKCP18 will also include storm surge historical case studies derived from applying plausible future sea level change to historical extreme events.\n\n8 The latest update can be found at **http://www.metoffice.gov.uk/climate/uk/about/state-of-climate**\n\n- 9 **http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/**\n10 **https://www.ipcc.ch/report/ar5/**", - "page_start": 1, - "page_end": 1, - "source_file": "legal1_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "news2.pdf", - "query": "How flooding was prevented in Vancouver? ", - "target_page": 1, - "target_passage": "In Vancouver, B.C., Environmental Youth Alliance and Strathcona Community Gardens created a wetland on downtown’s east side, an area historically prone to flooding. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nHome - Safety Community Affairs Finance - Insurance Editor's Picks\n\n## FRANÇAIS\n\nTrois façons dont des collectivités au Canada réduisent leurs risques d'inondation\n\n# **Three ways Canadian communities are reducing flood risks**\n\n(NC) Communities in Canada are facing increased flood risks, with 1.5 million homes highly exposed. There are large-scale programs available across the country providing flood protection measures for communities at risk, such as Intact's Municipal Climate Resiliency Grants. This program is helping build the resilience of communities and homes through a variety of preventative actions.\n\nWetlands can reduce flood risk by absorbing large quantities of water, but they are not typically found in cities. 
In Vancouver, B.C., Environmental Youth Alliance and Strathcona Community Gardens created a wetland on downtown's east side, an area historically prone to flooding. Made up of natural elements like ponds and marshes, the wetland reduces the community's flood risk by catching and absorbing rainfall and runoff from surrounding surfaces.\n\nKnowing the risks is the first step to protecting homes and communities. In New Brunswick, the City of Fredericton launched a Neighbourhood Flood Risk Tool to provide easy access to online flood prevention guidance. Residents can input their addresses to see if they are at risk and learn tips to reduce the risk of flooding around their properties. The portal launched in the summer of 2023 and was viewed 27,000 times in its first year.\n\nRebate programs are a powerful motivation for homeowners to make upgrades that might otherwise be put off. In PEI, the City of Charlottetown offered rebates covering 75 per cent of eligible material and labour costs, up to a maximum of $1,000. More than 90 properties completed upgrades, including installing sump pumps, backup batteries, backwater valves, and water monitors and alarms, to better prepare them for extreme weather events.\n\nCommunities can learn more about the grant program and how to apply at intactfc.com/mcrg.\n\nwww.newscanada.com Word Count: 281\n\n#### M e d i a Att a c h m e n ts −\n\nHave your say! 
Complete our 2025 Media Survey\n\nRetrain your way to a new job\n\nThe top AI-powered tech trends in 2025", - "page_start": 0, - "page_end": 0, - "source_file": "news2.pdf" - }, - { - "text": "**Figure 1.** Hunger and Climate Vulnerability Index for 1981–2010 climate (ensemble mean across the bias-corrected HadGEM3 ensemble).\n\n**Table 2.** Proxies for flood and drought events used in the HCVI.\n\n| extreme weather event description of proxy |\n| --- |\n| average length of flood events number of days in which the cumulative daily rainfall excess is positive, |\n| compared with the 95th percentile in the 1981–2010 average |\n| |\n| average length of drought events number of days in which the cumulative daily rainfall deficit is positive, |\n| compared with the 20th percentile in the 1981–2010 average |\n| |\n\nUN Food and Agriculture Organization, UN Development Programme and UN Population Fund [22]. The exposure component comprised proxies for the average length of flood and drought events calculated with daily precipitation data [23] (table 2). These proxies were chosen above other possible metrics as they were required to replace self-reported instances of flood and drought events used in the original HCVI, which correlate with undernutrition data at the country-level [23]. The proxies were therefore masked to only include data where a significant proportion of people live and grow crops before aggregating to country level and combining to comprise a measure of exposure [23]; nevertheless, it is recognized that precipitation data alone may not always be adequate for representing flood and drought events, so the current method is regarded as preliminary.\n\nThe impacts of projected climate change, therefore, act through changes in these quantities. In the current version of the HCVI, climate-change impacts on other quantities such as crop yield are not considered. 
Socio-economic factors affecting sensitivity and adaptive capacity are fixed at present-day conditions.\n\nThe ensemble-mean baseline HCVI calculated with the high-resolution bias-corrected HadGEM3 ensemble is shown in figure 1. The spatial pattern is compatible with HCVI values calculated using reanalysis data at the CMIP5 grid-scale resolution [23]; the most vulnerable regions are sub-Saharan Africa and South Asia. This higher-resolution climate data enables inclusion of additional countries which were not resolved in the lower-resolution CMIP5 data.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed11.pdf" - }, - { - "text": "threaten their conservation status. To support this, data collection on by-catch for all sensitive species needs to be stepped up.\n\nIn addition, **fisheries-management measures** must be established in all marine protected areas according to clearly defined conservation objectives and on the basis of the best available scientific advice.\n\n#### *2.2.7. Restoring freshwater ecosystems*\n\nThe EU's legal framework on water is ambitious but implementation is lagging behind and enforcement must be stepped up46. Greater efforts are needed to **restore freshwater ecosystems and the natural functions of rivers** in order to achieve the objectives of the Water Framework Directive. This can be done by removing or adjusting barriers that prevent the passage of migrating fish and improving the flow of water and sediments. To help make this a reality, **at least 25,000 km of rivers will be restored into free-flowing rivers by 2030**47 through the removal of primarily obsolete barriers and the restoration of floodplains and wetlands. Technical guidance and support to the Member States to identify sites and help mobilise funding will be provided by the Commission in 2021, in consultation with all relevant authorities48 . 
Member State authorities should review water abstraction and impoundment permits to implement ecological flows in order to achieve good status or potential of all surface waters and good status of all groundwater by 2027 at the latest, as required by the Water Framework Directive49 . To that effect, the Commission will provide technical support to Member States on their measures by 2023.\n\nOverall, large-scale river and floodplain restoration investments50 can provide a major economic boost for the restoration sector and for local socioeconomic activities such as tourism and recreation. At the same time, these investments can improve water regulation, flood protection, nursery habitats for fish, and the removal of nutrient pollution.\n\n#### *2.2.8. Greening urban and peri-urban areas*\n\n**Green urban spaces**, from parks and gardens to green roofs and urban farms, provide a wide range of benefits for people. They also provide opportunities for businesses and a refuge for nature. They reduce air, water and noise pollution, provide protection from flooding, droughts and heat waves, and maintain a connection between humans and nature51 .\n\nThe recent lockdowns due to the COVID-19 pandemic have shown us the **value of green urban spaces for our physical and mental wellbeing**. 
While protection of some urban\n\n<sup>46</sup> Fitness Check of the EU Water Legislation (SWD(2019) 439); Evaluation of the Urban Waste Water Treatment Directive (SWD(2019) 700).\n\n<sup>47</sup> The target of 25,000 km is based on the Commission's assessment of what is achievable in the EU by 2030.\n\n<sup>48</sup> The guidelines will take a wide range of issues into account, including hydropower generation, flood management, water supply, agriculture and navigability.\n\n<sup>49</sup> These measures should be planned in the 3rd River Basin Management Plans to be adopted by Member States in 2021, under the Water Framework Directive.\n\n<sup>50</sup> Fitness Check of the EU Water Legislation (SWD(2019) 439).\n\n<sup>51</sup> EnRoute project.", - "page_start": 12, - "page_end": 12, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "**21.**—(1) Workers engaged in essential or emergency works—\n\n- (a) related to water supplies and sewerage services; and\n- (b) carried out by, for, or on behalf of a water undertaker, sewerage undertaker, water supply licensee, sewerage licensee or local authority,\n\nwhere they have travelled to the United Kingdom in the course of their work.\n\n(2) For the purposes of sub-paragraph (1)—\n\n- (a) \"essential or emergency works\" includes—\n\t- (i) inspections, maintenance, repairs, and asset replacement activities,\n\t- (ii) monitoring, sampling and analysis of water supplies under the Private Water Supplies (England) Regulations 2016(**a**), the Water Supply (Water Quality) Regulations 2016(**b**), the Private Water Supplies (Wales) Regulations 2017(**c**), or the Water Supply (Water Quality) Regulations 2018(**d**);\n- (b) \"sewerage licensee\" means the holder of a sewerage licence under section 17BA of the Water Industry Act 1991(**e**);\n- (c) \"sewerage services\" has the meaning given in section 219(1) of the Water Industry Act 1991(**f**);\n- (d) \"water supply licensee\" has the meaning given in sections 
17A(7) and 219(1) of the Water Industry Act 1991(**g**).\n\n**22.**—(1) Workers engaged in essential or emergency works relating to flood and coastal erosion risk management on behalf of—\n\n- (a) the Environment Agency; or\n- (b) a lead local flood authority in England.\n- (2) For the purposes of sub-paragraph (1)—\n\t- (a) \"flood\" and \"coastal erosion\" have the meanings given in section 1 of the Flood and Water Management Act 2010(**h**);\n\t- (b) \"lead local flood authority\" has the meaning given in section 6(7) of that Act;\n\t- (c) \"risk management\" has the meaning given in section 3 of that Act(**i**).\n- **23.**—(1) Workers engaged in essential or emergency works—\n\t- (a) related to—\n\t\t- (i) a generating station,\n\t\t- (ii) an electricity interconnector,\n\t\t- (iii) a district heat network as defined in regulation 2 of the Heat Network (Metering and Billing) Regulations 2014(**j**),\n\t\t- (iv) communal heating as defined in regulation 2 of the Heat Network (Metering and Billing) Regulations 2014,\n\t\t- (v) automated ballast cleaning and track re-laying systems on a network, or\n\t\t- (vi) the commissioning, maintenance and repair of industrial machinery for use on a network; or\n\n<sup>(<</sup>b>a) S.I. 2016/618; relevant amending instruments are S.I. 2017/506, 2018/707 and 2019/558.\n\n<sup>(<</sup>b>b) S.I. 2016/614; relevant amending instruments are S.I. 2017/506, 2018/706 and 378, 2019/526 and 558.\n\n<sup>(<</sup>b>c) S.I. 2017/1041 (W. 270), as amended by S.I. 2018/647 (W. 121), S.I. 2019/460 (W. 110) and S.I. 2019/463 (W. 111).\n\n<sup>(<</sup>b>d) S.I. 2018/647 (W. 121), as amended by S.I. 2019/463 (W. 111).\n\n<sup>(<</sup>b>e) 1991 c. 56. Section 17BA(6) was inserted by section 4(1) of the Water Act 2014 (c. 21). 
The reference to \"sewerage licensee\" was inserted in section 219(1) by paragraph 120(2)(f) of Schedule 7 to the Water Act 2014.\n\n<sup>(<</sup>b>f) The definition of \"sewerage services\" was amended by paragraph 120 of Schedule 7 to the Water Act 2014.\n\n<sup>(<</sup>b>g) Section 17A was inserted by section 1 of the Water Act 2014.\n\n<sup>(<</sup>b>h) 2010 c. 29.\n\n<sup>(<</sup>b>i) And see section 2 of the Flood and Water Management Act 2010 for the meaning of \"risk\".\n\n<sup>(<</sup>b>j) S.I. 2014/3120. There are no relevant amending instruments.", - "page_start": 39, - "page_end": 39, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "In the present study, processing errors in the input data for one ensemble member, the HadGEM2-ES-driven member, caused the results to be invalid. Results for this member for the HCVI are, therefore, not presented here.\n\n### (d) Freshwater resources: run-off\n\nImpacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem–hydrology–surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way, typically applied at global scales. Variants of JULES form the land surface scheme of Met Office Hadley Centre Earth System Models [26,27] and have been used to assess impacts of climate change on global terrestrial ecosystems and hydrology [28–30] within such models. JULES can also be used outside of the Earth System Model (ESM), driven by meteorological outputs of other ESMs to assess impacts of a wider range of climate projections [6,8]. Here we use a new, higher-resolution configuration of JULES on a global grid of 0.5° resolution [31].\n\nIt has been noted that hydrological impacts models driven by climate-change projections from climate models tend to give more severe drying than simulated in the climate models themselves [32–34]. 
This is largely attributed to the inclusion of plant stomatal closure in response to elevated CO2 in the climate model land surface schemes, which generally reduces evapotranspiration relative to climate projections without this process and hence further increases run-off/streamflow or ameliorates decreases [34]. This process is often omitted from standard hydrological models. Plant physiological responses to CO2 are included in the JULES model, so our projections of changes in run-off here do account for this process.\n\nWe used each HadGEM3 simulation to drive JULES to simulate changes in run-off due to the effects of climate change and CO2 rise on precipitation, evaporation and transpiration. We analysed 30 year periods centred around the year of crossing GWLs of 1.5°C and 2°C relative to pre-industrial. We examined changes in both mean flows and low flows (defined as the flows for the lowest 10% of time).\n\n## (e) Correcting biases in climate model output and implications for defining levels of global warming\n\nThe ClimPACT extreme weather indices, HCVI and JULES run-off simulations were all performed using outputs from the higher-resolution HadGEM3 projections described in §2a. However, there were some differences in how these data were applied, with different approaches to the treatment of systematic biases in the climate model output. For the ClimPACT analysis, it was considered important to assess changes in the raw climate model output, because this directly represents the behaviour of the model itself. The main focus was on the changes relative to the presentday baseline climate, defined as 1981–2010, with absolute values in either the baseline or the GWLs of 1.5°C and 2°C being only of secondary interest. For the HCVI and JULES run-off analyses, however, it was considered important to correct for systematic biases in the climate model output, because these can lead to unrealistic representations of the key quantities in the present-day simulation [35]. 
A bias-correction methodology was, therefore, applied for these two parts of the analysis, whereby the model output was adjusted to make it consistent with an observed climatology [36]. We used a multi-segment statistical bias-correction methodology for precipitation [37], and a modification of this for other variables [37].\n\nThis difference in approach led to inconsistencies in the definitions of the dates of GWLs in the two parts of the study. In the extremes analysis using raw model output, the dates of passing GWLs were defined on the basis of the global mean temperatures in the driving CMIP5 models relative to those models' simulations of global mean temperature in 1870–1899 (table 3). However, in the HCVI and JULES analyses which used bias-corrected data, it was considered more appropriate for the GWLs to be defined using the warming in the observational dataset", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed11.pdf" - }, - { - "text": "# CHAPTER 10:\n\n### LANGUAGE SKILLS AT WORK HOW TO WRITE A COVER LETTER\n\nIf you've ever applied for a job, you'll know that writing the cover letter is the most difficult part of almost any job application. Your cover letter creates the first impression, and often determines whether an employer will even look at your CV.\n\nYou need to use this opportunity to introduce yourself and your skills, and to set yourself apart from all the other candidates. 
You can also use this opportunity to explain any gaps in your CV, and to motivate why you are the right person for the job.\n\n### tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips", - "page_start": 44, - "page_end": 44, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "Louis XIV in 1685, the year he revoked the Edict of Nantes\n\nrewarded converts to Catholicism.[68] This discrimination did not encounter much Protestant resistance, and a steady conversion of Protestants occurred, especially among the noble elites.\n\nIn 1681, Louis dramatically increased his persecution of Protestants. The principle of *cuius regio, eius religio* generally also meant that subjects who refused to convert could emigrate, but Louis banned emigration and effectively insisted that all Protestants must be converted. Secondly, following the proposal of René de Marillac and the Marquis of Louvois, he began quartering dragoons in Protestant homes. Although this was within his legal rights, the *dragonnades* inflicted severe financial strain on Protestants and atrocious abuse. Between 300,000 and 400,000 Huguenots converted, as this entailed financial rewards and exemption from the *dragonnades*. [69]\n\nOn 15 October 1685, Louis issued the Edict of Fontainebleau, which cited the redundancy of privileges for Protestants given their scarcity after the extensive conversions. The Edict of Fontainebleau revoked the Edict of Nantes and repealed all the privileges that arose therefrom.[4] By his edict, Louis no longer tolerated the existence of Protestant groups, pastors, or churches in France.\n\nNo further churches were to be constructed, and those already existing were to be demolished. Pastors could choose either exile or secular life. 
Those Protestants who had resisted conversion were now to be baptised forcibly into the established church.[70]\n\nProtestant peasants rebelled against the officially sanctioned *dragonnades* (conversions enforced by dragoons, labeled \"missionaries in boots\") that followed the Edict of Fontainebleau.\n\nHistorians have debated Louis's reasons for issuing the Edict of Fontainebleau. He may have been seeking to placate Pope Innocent XI, with whom relations were tense and whose aid was necessary to determine the outcome of a succession crisis in the Electorate of Cologne. He may also have acted to upstage Emperor Leopold I and regain international prestige after the latter defeated the Turks without Louis's help. Otherwise, he may simply\n\nhave desired to end the remaining divisions in French society dating to the Wars of Religion by fulfilling his coronation oath to eradicate heresy. [71][72]\n\nMany historians have condemned the Edict of Fontainebleau as gravely harmful to France.[73] In support, they cite the emigration of about 200,000 highly skilled Huguenots (roughly one quarter of the Protestant population, or 1% of the French population) who defied royal decrees and fled France for various Protestant states, weakening the French economy and enriching that of Protestant states. On the other hand, some historians view this as an exaggeration. They argue that most of France's preeminent Protestant businessmen and industrialists converted to Catholicism and remained.[74]\n\nWhat is certain is that the reaction to the Edict was mixed. Even while French Catholic leaders exulted, Pope Innocent XI still argued with Louis over Gallicanism and criticized the use of violence. Protestants across Europe were horrified at the treatment of their co-religionists, but most Catholics in France applauded the move. 
Nonetheless, it is indisputable that Louis's public image in most of Europe, especially in Protestant regions, was dealt a severe blow.\n\nIn the end, however, despite renewed tensions with the Camisards of south-central France at the end of his reign, Louis may have helped ensure that his successor would experience fewer instances of the religion-based disturbances that had plagued his forebears. French society would sufficiently change by the time of his descendant, Louis XVI, to welcome tolerance in the form of the 1787 Edict of Versailles, also known as the Edict of Tolerance. This restored to non-Catholics their civil rights and the freedom to worship openly. [75] With the advent of the French Revolution in 1789, Protestants were granted equal rights with their Roman Catholic counterparts.\n\n## **Nine Years' War**\n\n#### **Causes and conduct of the war**", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia5.pdf" - }, - { - "text": "## **Climate**\n\nLyon has a humid subtropical climate (Köppen: *Cfa*), bordering an oceanic climate (*Köppen*: *Cfb*, Trewartha: *Do*).[38] The mean temperature in Lyon in the coldest month is 4.1 °C (39.4 °F) in January and in the warmest month in July is 22.6 °C (72.7 °F). Precipitation is adequate year-round, at an average of 820 mm (32.3 in), the winter months are the driest. The highest recorded temperature was 40.5 °C (104.9 °F) on 13 August 2003 while the lowest recorded temperature was −24.6 °C (−12.3 °F) on 22 December 1938.[39]\n\nIce on the Saône, 2012", - "page_start": 4, - "page_end": 4, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Figure 3.14. General Effects of Sweepbock", - "page_start": 244, - "page_end": 244, - "source_file": "00-80T-80.pdf" - }, - { - "text": "breakwater will be an over capping type, which interrupts the waves progress, but does not totally protect from wave penetration. 
These events are manageable and estimated as a once in 50 years possibility.\n\nThe breakwater core will be used as a construction causeway allowing land based equipment to perform the work. The greater part of the breakwater work involves winning the material as opposed to actual construction.\n\n#### **E. CYCLONE MOORINGS.**\n\nThe extent of the cyclone problem in Australia's north and north west was emphasised when Cyclone Tracey struck Darwin in 1974. The most powerful cyclone to cross the Australian coast was Cyclone Vance in 1999, which passed near Dampier, destroying large parts of the towns of Onslow and Exmouth further to the south.\n\nThe problem is acute, particularly in the area between Exmouth and Port Hedland, which suffers cyclones of an intensity and frequency as high as anywhere in the world. The Mermaid Base is typically on cyclone alert three times per season. The season is November to April.\n\nTo date there have been three options available to vessel owners when a cyclone approaches:.\n\n- Run to sea\n- Take refuge with crew onboard, on a mooring in the most sheltered location available such as the Dampier Archipelago or the Monte Bello Islands.\n- Construct a cyclone shelter.\n\nThere are serious personal safety and environmental considerations related to Options 1 and 2 and it is obvious that best practice universally adopted by large responsible Companies can be satisfied in this way.\n\nOnly Woodside at Dampier and BHP at Port Hedand have taken the step of building shelters which provides protection to 12 of the region's 60 vessels and this at very considerable cost.\n\nMermaid has undertaken significant engineering work on the placing of vessels on partially sheltered spread moorings, allowing the vessels to be secured near to shore and the crews demobilized to take care of their families and attend to household cyclone preparation.\n\nMermaid is taking a leadership role with a technical solution which will lead to wider adoption as vessel 
owners and the insurance industry fully value the arrangements. Mermaid will provide 1 2", - "page_start": 15, - "page_end": 15, - "source_file": "ASX_MRM_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "news2.pdf", - "query": "How can citizens in Fredericton easily access flood risk data?", - "target_page": 1, - "target_passage": "New Brunswick, the City of Fredericton launched a Neighbourhood Flood Risk Tool to provide easy access to online flood prevention guidance.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nHome - Safety Community Affairs Finance - Insurance Editor's Picks\n\n## FRANÇAIS\n\nTrois façons dont des collectivités au Canada réduisent leurs risques d'inondation\n\n# **Three ways Canadian communities are reducing flood risks**\n\n(NC) Communities in Canada are facing increased flood risks, with 1.5 million homes highly exposed. There are large-scale programs available across the country providing flood protection measures for communities at risk, such as Intact's Municipal Climate Resiliency Grants. This program is helping build the resilience of communities and homes through a variety of preventative actions.\n\nWetlands can reduce flood risk by absorbing large quantities of water, but they are not typically found in cities. In Vancouver, B.C., Environmental Youth Alliance and Strathcona Community Gardens created a wetland on downtown's east side, an area historically prone to flooding. Made up of natural elements like ponds and marshes, the wetland reduces the community's flood risk by catching and absorbing rainfall and runoff from surrounding surfaces.\n\nKnowing the risks is the first step to protecting homes and communities. In New Brunswick, the City of Fredericton launched a Neighbourhood Flood Risk Tool to provide easy access to online flood prevention guidance. 
Residents can input their addresses to see if they are at risk and learn tips to reduce the risk of flooding around their properties. The portal launched in the summer of 2023 and was viewed 27,000 times in its first year.\n\nRebate programs are a powerful motivation for homeowners to make upgrades that might otherwise be put off. In PEI, the City of Charlottetown offered rebates covering 75 per cent of eligible material and labour costs, up to a maximum of $1,000. More than 90 properties completed upgrades, including installing sump pumps, backup batteries, backwater valves, and water monitors and alarms, to better prepare them for extreme weather events.\n\nCommunities can learn more about the grant program and how to apply at intactfc.com/mcrg.\n\nwww.newscanada.com Word Count: 281\n\n#### M e d i a Att a c h m e n ts −\n\nHave your say! Complete our 2025 Media Survey\n\nRetrain your way to a new job\n\nThe top AI-powered tech trends in 2025", - "page_start": 0, - "page_end": 0, - "source_file": "news2.pdf" - }, - { - "text": "**Figure 1.** Hunger and Climate Vulnerability Index for 1981–2010 climate (ensemble mean across the bias-corrected HadGEM3 ensemble).\n\n**Table 2.** Proxies for flood and drought events used in the HCVI.\n\n| extreme weather event description of proxy |\n| --- |\n| average length of flood events number of days in which the cumulative daily rainfall excess is positive, |\n| compared with the 95th percentile in the 1981–2010 average |\n| |\n| average length of drought events number of days in which the cumulative daily rainfall deficit is positive, |\n| compared with the 20th percentile in the 1981–2010 average |\n| |\n\nUN Food and Agriculture Organization, UN Development Programme and UN Population Fund [22]. The exposure component comprised proxies for the average length of flood and drought events calculated with daily precipitation data [23] (table 2). 
These proxies were chosen above other possible metrics as they were required to replace self-reported instances of flood and drought events used in the original HCVI, which correlate with undernutrition data at the country-level [23]. The proxies were therefore masked to only include data where a significant proportion of people live and grow crops before aggregating to country level and combining to comprise a measure of exposure [23]; nevertheless, it is recognized that precipitation data alone may not always be adequate for representing flood and drought events, so the current method is regarded as preliminary.\n\nThe impacts of projected climate change, therefore, act through changes in these quantities. In the current version of the HCVI, climate-change impacts on other quantities such as crop yield are not considered. Socio-economic factors affecting sensitivity and adaptive capacity are fixed at present-day conditions.\n\nThe ensemble-mean baseline HCVI calculated with the high-resolution bias-corrected HadGEM3 ensemble is shown in figure 1. The spatial pattern is compatible with HCVI values calculated using reanalysis data at the CMIP5 grid-scale resolution [23]; the most vulnerable regions are sub-Saharan Africa and South Asia. This higher-resolution climate data enables inclusion of additional countries which were not resolved in the lower-resolution CMIP5 data.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed11.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## **Investment in the Urban Centres of New Brunswick and PEI**\n\n26% of Killam's apartment NOI is currently generated in New Brunswick, split principally between the province's three major urban centres, Fredericton, Moncton and Saint John. Fredericton and Moncton both experienced high population growth over the last number of years, posting 9.3% and 8.7% growth, respectively, between the 2006 and 2011 Census periods. 
Fredericton is the provincial capital and home to the province's largest university. Moncton is the largest city and a transportation and distribution hub for Atlantic Canada. Population growth in Moncton in recent years has been driven by urbanization from French communities in Northern New Brunswick. The Saint John market, representing 5.6% of Killam's apartment NOI, is focused on industry and energy. After strong energy investments in the city in the mid‑2000s, the city has seen a reduction in economic projects over the last three years. Home to Irving Oil's refinery operations, the proposed Energy East Pipeline project to bring oil from Western Canada to refineries in Quebec and New Brunswick, has potential for strong economic growth for the city and the province.\n\nKillam also has a 19% market share in Charlottetown, the capital and economic center of Prince Edward Island.\n\n#### **Expanding Ownership in Ontario**\n\nKillam's apartment portfolio includes 1,359 apartment units in Ontario, up from 225 units three years ago, and includes properties in Ottawa, Toronto, London and Cambridge. In addition to apartments, 42% of Killam's MHC sites are located in Ontario. Killam is focused on increasing its geographic diversification by acquiring more properties in Ontario.\n\n## **A Diversified Portfolio of Apartment Properties**\n\nKillam's apartment portfolio includes a variety of property types, including high‑rise (24% of units), mid‑rise with elevators (33%) , walk‑ups (41%) and a small number of townhouses (2%). The portfolio includes rents ranging from affordable to high‑end Class A properties. The average rent for Killam's apartment units at the end of 2013 was $915.\n\nThe average age of Killam's apartment portfolio is 28 years. With a focus on both developing and acquiring newer properties, 23% of Killam's apartments are considered new (built after 2001), on a unit count basis. 
Compared to the national average of 7%, as per CMHC's 2010 Housing Observer, Killam's portfolio is considerably newer and should result in lower capital and maintenance costs for the foreseeable future. 43% of Killam's NOI is generated from apartment units that are considered new, with 20% of the Company's NOI generated from units built in the last five years.\n\n### **MHCs Compliment Killam's Apartment Portfolio**\n\nWith MHCs, Killam owns the land and infrastructure supporting each community and leases the sites to the tenants, who own their own homes and pay Killam a monthly rent. In addition to site rent, the tenant may have a mortgage payment to a financial institution for their home. The average site rent in Killam's MHC portfolio was $222 per month, which offers value and affordability to tenants. The homeowner is responsible for property taxes based on the assessed value of their home and Killam is responsible for the property tax on the land.\n\nMHCs require less recurring capital investment and deliver a more predictable and stable cash flow than apartments. MHC home owners are responsible for the repair, maintenance and operating costs of their homes, which removes significant variable costs that are typically borne by Killam for apartments. The operating profit margin in Killam's MHC business averaged 62.4% over the last two years, compared to 58.9% for apartments.\n\nThe most significant costs to operate MHCs are water, land property tax and general repairs and maintenance to the water and sewer infrastructure. Killam's experience with MHCs has shown that the largest variable expenses are costs related to the water and sewer infrastructure. The majority of other costs have little variability. Killam's MHCs enjoy a stable tenant base, with consistently strong occupancy of approximately 98%. 
Should a tenant choose to leave a community, they sell their home, with the home typically remaining on the site and rent collection continuing uninterrupted from the new homeowner, who Killam approves as part of the sale process.", - "page_start": 32, - "page_end": 32, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "digital, attacks to privacy and to civil rights in general can and are coming by so many other sides that those from (properly done) Open Data are a really tiny percentage of the total.\n\nThis is a consequence of the fact that data about us end up online from the most different sources (including ourselves and our acquaintances), and that often it would be very hard to discover, never mind *prove*, that they've been used against our interest. There have been concerns, for example, that insurance companies may charge higher fees for life insurance to those among their customers who... put online a family tree from which it shows that they come from families with an average life expectancy lower than usual.\n\nAssuming such concerns were real, would it always be possible to spot and prove such abuses of data, that weren't even published by any Public Administration? Of course, publishing online complete, official Census data of several generations, in a way that would make such automatic analysis possible would be a totally different matter.\n\nGetting rid of all the unjustified concerns about privacy is very simple, at least in theory. All is needed to dismiss for good the idea that Open Data is a generalized attack to privacy is to always remember and explain that:\n\n- 1. Most Open Data have nothing personal to begin with (examples: digital maps, budgets, air pollution measurements....)\n- 2. The majority of data that are directly related to individuals (e.g. 
things like names and address of people with specific diseases, or who were victims of some crime) have no reason to be published, **nor there is any actual demand for them by Open Data advocates**\n- 3. Exceptions that limit privacy for specific cases and categories of people (e.g. candidates to public offices, Government and Parliament members etc...) already exist in many countries\n- 4. Very often, in practice, Open Data struggles only happen about *when and how* to make available in the most effective way for society information that was *already* recognized as public. *What* to declare public, hence open, is indeed a serious issue (more on this in the next paragraph) but is a separate one.\n\n### **3.8. Need to better define what is Public Data**\n\nTogether with citizens education, there is a huge challenge that Governments and the Open Data movement will have to face (hopefully together) in 2011 and beyond. This challenge is to update and expand the definition of Public Data and to have it accepted by lawmakers and public administrators.", - "page_start": 22, - "page_end": 22, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "decisions. Ideally, this training should be provided at a local level with local programs, in a way that makes it possible to use it on local issues, for the reasons and in the ways discussed in the next paragraph. For example, visualization techniques like those used by ABC News to show the effects of the March 2011 Japan Earthquake, in which all the user has to do to compare scenes from before and after the earthquake is to move a slider, should be routinely used to explain proposals about urban planning, zoning and related topics.\n\n### **4.6. 
Focus on local, specific issues to raise interest for Open Data**\n\nConsidering the continuous evidence and concerns about scarce interest and preparation of citizens to use Open Data in their political, economic and professional decisions, one of the final recommendations of the Open Data, Open Society report confirms its importance and needs to be repeated: it is very effective, if not simply necessary if the goal is to generate a critical mass of citizens that demand and use Open Data in the shortest possible time, to practice all the recommendations of this report *at the local level*,\n\nMost people encounter their local governments much more often then their national ones. When working within a single city or region it is much easier to inform citizens, raise their interest and involve them, because they would be searching *local* solutions to improve *local* services and/or save *local* money. There may also be much more opportunities to do so, especially in this period of financial crisis that will see substantial decreases both in credit by financial institutions and in subsidies from central governments. Concreteness and, as they say in marketing, \"customer focus\" must be the keys for local activists and public employees working on local Open Data:\n\n- work on specific issues and with precise objectives\n- focus on immediate usefulness\n- work on demand, on the *services* that people want. Required services define what data must be open, not the contrary\n\nThis is the most effective, if not the only strategy, to solve one of the biggest debates in open data: *\"how do we get people to use the data that we publish?\"*. The right question, instead, is \"what data do people want?\". 
Even if citizens don't realize yet that what they actually want is more Open Data, or that what they need can be done more quickly and cheaply by releasing some information in that way.\n\nA great example of what all this means is the Great British Public Toilet Map: a public participation", - "page_start": 30, - "page_end": 30, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "based PSI analysis and presentation, not just to crime mapping:\n\nIn general, a map is just a map, not reality. It doesn't always and necessarily provide scientific evidence. Crime maps, for example, are NOT safety maps, as most citizens would, more or less consciously, like them to be: a tool that tells them where to buy a house their according to the level of criminality in the district.\n\nWhen used in that way, crime maps can give unprepared users two false impressions: the first, obvious one, is that certain areas are only criminal spaces, exclusively inhabited by criminals. The other is to encourage a purely egoistic vision of the city, where the need for safety becomes paranoia and intolerance and all that matters is to be inside some gated community. This doesn't lower crime levels at all: the only result is to increase urban segregation.\n\nTo make things worse, crime data not analyzed and explained properly don't just contribute to strengthen egoistic attitudes and lock the urban areas that are actually the most plagued by crime into their current difficult state indefinitely. Sometimes, they may even perpetuate beliefs that are, at least in part, simply false. Of course, when those beliefs not grounded in facts already existed, open crime data can help, by finding and proving the gaps between perception of criminality and reality. Belleri, for example, notes that residents of Milan consider the outskirts of their city more dangerous than downtown Milan, while Londoners think the opposite about London... 
but in both cities the truth emerging from data is exactly the opposite (at least for certain categories of crime) of what their residents believe.\n\n#### **3.6.3. Unequal access**\n\nEven ignoring crime mapping, in some worst case scenarios, data openness may be not only hindered by social divisions, but also create or enhance them. If citizens can't find and recognize real, relevant *meaning* and practical value in data, as well as way to use them to make change happen, there won't be any widespread, long lasting benefit from openness. How can we guarantee, instead, that such meaning and value will be evident and usable? What are the ingredients for success here?\n\nEnhancing access to PSI it's harder than it may seem because it isn't just a matter of physical infrastructure. It is necessary that those who access Open Data are in a position to actually understand them and use them in their own interest.\n\nThis is far from granted also because, sometimes, the citizens who would benefit the most from certain data are just those, already poor, marginalized and/or without the right education, who have the least chances to actually discover and be able to use them. This is what G. Friedman was", - "page_start": 18, - "page_end": 18, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "speaking about when, in September 2010, he wrote about the great divide caused by Open Health Data:\n\n> [in the USA] *\"statistically speaking, chronic disease is associated with being older, African American, less educated, and living in a lower-income household. By contrast, Internet use is statistically associated with being younger, white, collegeeducated, and living in a higher-income household. 
Thus, it is not surprising that the chronically ill report lower rates of Internet access.*\n\nStarting from this, and commenting a study of the performances, with respect to coronary artery bypass grafting, of several medical centers, Frydman expressed his concern that:\n\n> *the empowered will have access to [this data] and will act upon it, while many of the people suffering from chronic diseases (the same population that would benefit most from access to this information) won't. Over time it is therefore probable that the current centers of excellence will treat an ever growing number of empowered while the centers that currently experience high mortality rates will get worse and worse result, simply because they will treat an ever growing number of digital outliers who haven't the possibility to obtain health data and apply filters.*\n\nSince one of the topics of this project is the *economic* value of Open Data, it is necessary to add a somewhat obvious observation to Frydman's concerns (regardless of their probability). Even if it is difficult now to make accurate estimates, such negative developments would surely impact also the costs of health services and insurances, not to mention healthcare-related jobs, both in the communities hosting centers of excellence and in those with the worst ones.\n\n#### **3.6.4. 
Lack of education to data**\n\nBoris Müller, professor for interface and interaction design at the University of Applied Sciences in Potsda, said in an April 2011 interview: *\"I think that really a citizen needs to know how visualizations work in order to really evaluate the quality of the data and the quality of the evaluation.\"* As data visualization and analysis becomes more popular easier to use (even as a tool for manipulating the public opinion), it's important for the public to:\n\n- understand that, before becoming digital, information was coded, stored and used in many ways, through social norms and human interactions more complex than computer ones (cfr the digitization of India land ownership records), therefore making exact, one-to-one equivalence between analog and digital procedures hard or impossible in many cases\n- think critically about where data comes from\n- *remember* to always follow the *development* of data-based stories, or accusation.", - "page_start": 19, - "page_end": 19, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "with a project called \"Tales of Things\" to allow people to leave messages for each other (or just for the world) at the bus stops. Scanning the QR code now allows people to see not just the bus timetable, but also the notes other travelers have left on that stop, including *\"what's nearby, who's waiting for whom, what number can you call for a good time. It's a cross between bus stop Facebook and digital graffiti\"*, that happened thanks to the openness of the original bus stop data.\n\nThe Social Life of Data Project will study instead how particular datasets have been used, who used them, how those people are connected and what conversations happen around Open Data.\n\n### **3.3. Legal issues remain crucial**\n\nProper licensing of Public data is essential. The more Open Data activities continue, the clearer this rule becomes. What distinguishes Open Data from \"mere\" transparency is reuse. 
Paraphrasing Eaves, until a government get the licensing issue right, Open Data cannot bring all the possible benefits in that country. If there are no guarantees that public data can be used without restriction, very little happens in practice, and when it happens it may be something against the public interest.\n\nCanadian Company Public Engines Inc, that is paid by local police departments to collect, process and analyze official crime data, also publishes online, with a proprietary license, anonymized summaries of those data. When in 2010 another company, Report See Inc, scraped those data from their website to reuse them, Public Engines sued.\n\nReporting this, D. Eaves rightly points out that *both* companies are right: one is trying to protect its investment, the other is simply trying to reuse what IS public data, by getting it from the ONLY place where it's available. This is what happens when public officials leave the ownership of *public* data to the third parties hired to collect them. Please note that, in practice, it makes very little difference whether those third parties are private, for-profit corporations or even other Public Administrations. Unless, of course, there are national laws already in place that define in advance what is the license of all present and future Public Data, *no matter how they were generated and by whom*, those data can be lost in any moment for society. In all other cases, the legal status of data will be either officially closed and locked, or uncertain enough to prevent most or all reuses. 
In February 2011, the news came that, even if they weren't the original copyright holders, Public Engines had been able to put together enough legal claims to convince Report See to give up.\n\nDisputes like this should not happen and would not happen if all contracts regarding collection and management of PSI clearly specified that all the resulting data either go directly into the public domain (after being anonymized if necessary, of course) or remain exclusive property of the", - "page_start": 12, - "page_end": 12, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "What is, exactly, Public Data? A definition that is accepted almost implicitly is *\"data that is of public interest, that belongs to the whole community, data that every citizen is surely entitled to know and use\"*. This definition is so generic that accepting it together with the assumption that all such data should be open as preached by the Open Data movement (online, as soon as possible, in machine readable format with an open license etc...) doesn't create any particular problem or conflict.\n\nReal problems however start as it has happened all too often so far, whenever we assume more or less consciously that \"Public Data\" in the sense defined above and data directly produced by Governments and Public Administrations, that is what's normally called PSI (Public Sector Information) are the same thing.\n\nThere is no doubt that Governments and Public Administrations produce huge quantities of Public Data. But this is an age of privatization of many public services, from transportation to healthcare, energy and water management. This is an age in which many activities with potentially very serious impacts on whole communities, like processing of hazardous substances or toxic waste, happen *outside* Public Administrations. 
The paradox is that, as Sasaki put it, this increased privatization is happening in the very same period in which *\" we are observing a worldwide diffusion of access to information laws that empower citizens to hold government agencies accountable.\"*\n\nIn such a context, \"Public Data\"is critical just because it is a much bigger set of data than what constitutes traditional, official PSI. \"Public Data\" includes all that information *plus* the much bigger amount of data describing and measuring all the activities of private companies, from bus timetables to packaged food ingredients, aqueducts performances and composition of fumes released in the atmosphere, that have a *direct impact* on the health and rights of all citizens of the communities affected by the activities of those companies.\n\nAre such data \"Public\" today, in the sense defined at the beginning of this paragraph, that is something every citizen has the right to know without intermediaries or delegates, or not? Should they be public? If yes, shouldn't law mandate that all such data be Open (that is, published online as soon as possible, in machine readable format with an open license etc...) just like, for example, the budget of some Ministry? Answering these questions may be one of the biggest challenges for the Open Data community, and for society as a whole, in the next years.\n\nHere are, in order to facilitate reflection on this issue, a few recent, real world examples of \"Public Data\" that are *not* PSI, and of the impacts of their lack of openness.", - "page_start": 23, - "page_end": 23, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "benefit when local businesses make more money) are aware of this opportunity?\n\n# **4. 
Conclusion: seven Open Data strategy and best practices suggestions**\n\nStarting from the trends and conclusion described in the previous chapter, this section lists, in the most synthetic way possible, some strategic actions and best practices for 2011, that we consider important in making Open Data succeed and bring the greatest possible benefits to all citizens and businesses.\n\n### **4.1. Properly define and explain both Open Data and Public Data**\n\nJust because Open Data is becoming more popular (and, we may say, more and more necessary every year), it is essential to intensify efforts to explain, both to the general public and to public administrators, that\n\n- 1. **Privacy issues are almost always a non-issue.** Quoting from What \"open data\" means and what it doesn't): *Privacy and/or security concerns with putting all the government's data out there are a separate issue that shouldn't be confused with Open Data. Whether data should be made publicly available is where privacy concerns come into play. Once it has been determined that government data should be made public, then it should be done openly.*\n- 2. Defining as Public and consequently opening them in the right way, *much more data* than those born and stored *inside* Public Administration is an urgent task that is in the best interest of all citizens and businesses\n\n### **4.2. Keep political issues separated by economics ones**\n\nOpen Data can reduce the costs of Public Administrations and generate (or at least protect, as in the case of deals from local merchants) local jobs in all sectors of the economy, not just high-tech ones. There seems to be enough evidence for these two assertions to go for more Open Data *even if* they had no effect at all on participation to politics. 
This should always be kept in mind, also because some data that can directly stimulate business are not the same that would be useful for transparency.", - "page_start": 26, - "page_end": 26, - "source_file": "Open_Data_Report.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed2.pdf", - "query": "In these mice, which lumbar levels were the dorsal root ganglion removed from?", - "target_page": 3, - "target_passage": "L3 to L5 DRGs were removed and postfixed for another 2 hours", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "cell death and apoptosis with more than 10 genes were examined. Filtered count data of expressed and nondifferentially expressed genes were used as a background.\n\n#### 2.8. Dorsal root ganglion culture\n\nDorsal root ganglia were dissected from MrgDCreERT2;Ai32 and CalcaCreERT2;Ai32 mice .1 week after dosing with tamoxifen and enzymatically digested at 37˚˚C for 80 minutes in dispase type II (4.7 mg/mL) plus collagenase type II (4 mg/mL) (Worthington Biochemical), as described previously.63 Mechanically dissociated cells were plated onto laminin/poly-D-lysine (R&D Systems, Minneapolis, MN) treated coverslips in complete Neurobasal Plus medium (Neurobasal Plus media supplemented with 2% (vol/vol) B27 Plus, 1% N2, 1% Glutamax, and 1% antibiotic–antimycotic [ThermoFisher Scientific, Waltham, MA]). Mouse nerve growth factor (GF) (50 ng/mL; nerve growth factor (NGF), PeproTech, Cranbury, NJ) and 10 ng/mL glial-derived neurotrophic factor (GDNF, PeproTech) were added to the media under some conditions. Cytosine b-D-arabinofuranoside (4 mM) was added to the media for 24 hours the day after plating to reduce the proliferation of nonneuronal cells. Media was refreshed 3 times per week thereafter. Cultures were fixed for 10 minutes at room temperature with 4% paraformaldehyde and subsequently processed by immunocytochemistry (described earlier).\n\n#### 2.9. 
Statistical analysis\n\nData are expressed as mean 6 SEM unless otherwise specified, and P values of less than 0.05 were considered significant. Power calculations were performed using G*Power 3.1.9.7.15 A quantitative Venn diagram was created using BioVenn.25 All other statistical analyses were performed in Prism 10 (GraphPad Software, Inc, Boston, MA) or R using paired t tests or 1- or 2-way RM ANOVAs (repeated measures analysis of variance), where appropriate. Normality was assessed by the Shapiro–Wilk test. If the main analysis of variance effect was significant, Sˇ ´ıd ´ak or Tukey multiple comparisons tests were performed. To compare population distributions of soma cross-sectional area or volume, Kolmogorov–Smirnov tests were performed.\n\n#### 3. Results\n\n# 3.1. Peripheral nerve injury induces a loss of small neurons from the dorsal root ganglion\n\nTo assess the gross loss of neurons from DRG following nerve injury, we generated the AvilFlpO;Atf3CreERT2;RC::FLTG mouse line in which na¨ıve and axotomized sensory neurons were differentially labelled. In this mouse line, all neurons express tdTomato (Flp-dependent) in the na¨ıve state and switch to expressing green fluorescent protein (GFP) upon axonal damage and concurrent tamoxifen treatment (Flp- and Cre-dependent) (Figs. 1A and B). Following pilot experiments to optimize tamoxifen dosing regimen, this approach was both highly efficient and specific (with the caveat that it was necessary to wait for several days after nerve injury for Cre-induced GFP expression): 14 days after SNItrans surgery, GFP was expressed by 99.1 6 0.6% of Atf3-expressing ipsilateral L4 DRG neurons, while we observed GFP in only 4.6 6 0.7% of contralateral DRG neurons (Figs. S2A–D, http://links.lww.com/PAIN/C84). We then used a stereological approach to quantify the total number of neurons in L4 DRG ipsilateral to injury 1, 2, 4, and 8 weeks after SNItrans, as well as contralateral to injury. 
One week after SNItrans, we observed 7809 6 153 neurons per DRG; this was not significantly different to the number of neurons in the contralateral DRG (7917 6 349), whereas cell number approximately halved by 8 weeks postinjury to 3963 6 410 neurons per DRG (Fig. 1C). Separating analysis into intact vs axotomized afferents revealed that only axotomized afferents were lost, with no difference observed in numbers of intact afferents (Fig. 1D). Between 1 and 8 weeks after injury, we observed a 61.0 6 7.0% decrease in the number of GFP1 neurons. This loss of injured afferents resulted in a loss of neuron-containing (ie, excluding white matter regions) DRG volume (Fig. 1E), but not neuron density (Fig. 1F). Cell loss predominantly occurred between 1 and 2 weeks postinjury and stabilized after this timepoint. Population distributions of the cross-sectional area of nucleated, tdTomato-expressing cell profiles were not significantly different at 1 vs 8 weeks post-SNItrans, in contrast to GFP-expressing/injured afferents, in which a loss of a population of small afferents at 8 weeks postinjury was observed (Fig. 1G).\n\nSNItrans resulted in a mixed population of axotomized and intact afferents within the L4 DRG. Therefore, we developed an approach to restrict our analysis to axotomized afferents, without relying on transgenic labelling, and used this as a complementary approach to confirm our findings. We injected the neuronal tracer FB into the glabrous, tibial innervation territory of both hindpaws 1 week before common peroneal and tibial transection (SNItrans) or crush (SNIcrush) surgeries (Figs. 2A and B). FastBlue-uptake was complete across neurons of all sizes by 1 week (Fig. S3, http://links.lww.com/PAIN/ C84), so this approach allowed us to profile a sample of the axotomized afferents. Both SNItrans (Fig. 2C) and SNIcrush (Fig. 
2D) injuries resulted in a rightward shift in population distributions of the cross-sectional area of nucleated, FB-labelled DRG neurons when compared with contralateral DRG, consistent with a loss of small afferents post–nerve injury.\n\nAs a third complementary approach, we applied semiautomated volumetric analyses of nuclei size following tissue clearing. In this study, whole DRGs were cleared 4 weeks after SNItrans for nuclei counting in \"complete\" tissue (Figs. 2E–H). Nuclei were labelled by TDP-43, in line with the study by West et al.,67 and were quantified using Imaris software (Fig. 2F, Video 1). We observed a slight but significant rightward shift in nuclear spot volume population distribution 4 weeks after SNItrans (Fig. 2G). In addition, there was a significant reduction in the number of small but not medium or large nuclear spots, in support of a loss of small-diameter neuron populations (Fig. 2H).\n\nTogether, our data derived from several different experimental approaches show that a population of small-diameter afferents are lost following peripheral nerve injury.\n\n# 3.2. Spared nerve crush or transection results in death of Mrgprd-expressing neurons\n\nTo date, determining cell loss among specific populations of afferent neurons has proved challenging due to the downregulation of subpopulation-specific marker genes following axonal transection.37,44 To overcome this issue, we took advantage of transgenic strategies to label populations in a manner that persisted after injury. Owing to the bias for the loss of small neurons and the known loss of IB4-binding central terminals postinjury,36 we initially focused on nonpeptidergic nociceptive neurons. 
We used MrgDChR2-YFP mice to identify neurons belonging to the largest of the 3 classes of nonpeptidergic nociceptors, NP1.55,59 To determine whether these neurons are lost following nerve injury, we used a stereological method to quantify L4 DRG MrgD-YFP1 (yellow fluorescent", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed2.pdf" - }, - { - "text": "neuron loss after nerve injury and to test the hypothesis that loss is not equally distributed across molecular populations.\n\n# 2. Methods\n\n#### 2.1. Animals\n\nMice were housed in groups in humidity- and temperature-controlled rooms with free access to food and water, on a 12-hour light–dark cycle, and with environmental enrichment. Animal procedures were performed under a UK Home Office Project Licence and in accordance with the UK Home Office (Scientific Procedures) Act (1986). All studies were approved by the Ethical Review Process Applications Panel ofthe University of Glasgow or Oxford and conform to the ARRIVE guidelines. Experiments were performed on adult male and female mice aged 7to 16 weeks atthe start ofthe experiments. All experimental cohorts contained a mix of male and female mice, apart from the cohort of MrgprdCreERT2;Ai32 mice that underwent SNIcrush surgery, which was exclusively female. Details of transgenic lines are provided in Table 1. Tamoxifen was administered by i.p. injection of 20 mg/mL tamoxifen (Sigma-Aldrich) dissolved in wheat germ oil (doses described in Table 1). There were 2 instances where animals were excluded from data analysis: One (cyan fluorescent protein) Thy1-CFP died of unknown causes not related to the procedure and before the experimental endpoint, and one MrgDCreERT2;Ai32 exhibited no fluorophore expression and was therefore deemed to have been incorrectly genotyped. 
Group sizes were based on the extent of neuronal loss 28d following sciatic nerve transection identified by Shi et al.50 Given a 5 0.05, power 5 0.8, and an effect size of 4.81, power analysis projects that a group size of 3 mice would be needed.\n\n#### 2.2. Spared nerve transection and crush surgeries\n\nSpared nerve injury (transection of the common peroneal and tibial branches of the sciatic nerve; SNItrans) and common peroneal and tibial crush injury (SNIcrush), in which nerve axons were severed but the epineurium remained intact, were performed as previously described.12 Anesthesia was induced with 3% to 5% isoflurane and then maintained at 1.5% to 2% as required. Analgesia, consisting of carprofen (10 mg/kg) and buprenorphine (0.05 mg/kg) (Glasgow) or carprofen (5 mg/kg) and local bupivacaine (2 mg/kg) (Oxford) was provided perioperatively. The left hindpaw was secured with tape in hip abduction, and the operative field (lateral surface of the thigh) was shaved. Ophthalmic ointment was applied to the eyes, and the shaved area was swabbed with chlorhexidine solution. A longitudinal incision was made in the skin at the lateral mid-thigh. Using blunt dissection, an opening was made through the biceps femoris, exposing the sciatic nerve and the 3 peripheral branches (sural, tibial, and common peroneal nerves). For SNItrans, the common peroneal and tibial nerves were ligated using a 6-0 Vicryl suture (Ethicon, Raritan, NJ), and a 1- to 2-mm piece distal to the suture was removed using spring scissors. For SNIcrush, the exposed tibial and common peroneal nerves were clamped using a pair of fine hemostats (Fine Science Tools, Heidelberg, Germany) closed to their second clip, leaving the nerve branches intact but translucent. The muscle was closed with one 6-0 Vicryl suture (Ethicon), and the skin incision was closed with one 10 mm wound clip (Alzet, Cupertino, CA). 
Animals were monitored daily for self-mutilation, and no animals required sacrifice due to tissue damage.\n\n#### Table 1\n\n#### Transgenic lines used in the study.\n\n| Used name | Full name | Putative population | Ref | Source | Tamoxifen regime |\n| --- | --- | --- | --- | --- | --- |\n| Atf3CreERT2 | Atf3tm1.1(cre/ERT2)Msra | Axotomised afferents | 13 | Gift: Dr Franziska Denk | 50 mg/kg on days 0, 3, and 7 after surgery |\n| AvilFlpO | Aviltm1(flpo)Ddg | Sensory neurons | 1 | Gift: Prof David Ginty | N.A. |\n| MrgDCreERT2 | Mrgprdtm1.1(cre/ERT2)Wql | Major class of nonpeptidergic | 39 | The Jackson Laboratory (RRID: | General: 1x 50 mg/kg in adulthood, (.1 week |\n| | | neurons | | IMSR_JAX:031286) | before experiment) |\n| | | | | | 3D volumetric analysis: 5x i.p. (0.5 mg/animal/ |\n| | | | | | day), beginning between P10 and P17 |\n| MrgDChR2- | Mrgprdtm4.1(COP4)Mjz | Major class of nonpeptidergic | 59 | Mutant Mouse Resource & Research | N.A. |\n| YFP | | neurons | | Centers (RRID:MMRRC_036112-UNC) | |\n| CalcaCreERT2 | Calcatm1.1(cre/ERT2)Ptch | Peptidergic neurons | 51 | Gift: Prof Pao-Tien Chuang | 1x 75 mg/kg in adulthood (.1 week before |\n| | | | | | experiment) |\n| Trpm8FlpO | | Cold afferents | 4 | Gift: Dr Mark Hoon | N.A. |\n| Thy1-CFP | B6.Cg-Tg(Thy1-CFP) | Sample of myelinated afferents | 16 | The Jackson Laboratory (RRID: | N.A. |\n| | 23Jrs/J | | | IMSR_JAX:003710) | |\n| ThCreERT2 | Thtm1.1(cre/ERT2)Ddg/J | C low threshold | 1 | Gift: Prof David Ginty; The Jackson | 1x 50 mg/kg in adulthood (.2 weeks before |\n| | | mechanoreceptors | | Laboratory (RRID:IMSR_JAX:025614) | experiment) |\n| RC::FLTG | B6.Cg- Gt(ROSA) | Flp-mediated tdTomato; | 40 | The Jackson Laboratory (RRID: | N.A. |\n| | tm1.3(CAG-tdTomato,- 26Sor | Cre1Flp-mediated GFP | | IMSR_JAX:026932) | |\n| | EGFP)Pjen /J | expression | | | |\n| Ai14 | B6.Cg- Gt(ROSA) | Cre-mediated tdTomato | 33 | The Jackson Laboratory (RRID: | N.A. 
|\n| | tm14(CAG-tdTomato)Hze 26Sor / | expression | | IMSR_JAX:007914) | |\n| J | | | | | |\n| Ai32 | B6.Cg- Gt(ROSA) | Cre-mediated ChR2-eYFP | 32 | The Jackson Laboratory (RRID: | N.A. |\n| | tm32(CAG 26Sor | expression | | IMSR_JAX:024109) | |\n| | COP4*H134R/EYFP)Hze | | | | |\n\nCFP, cyan fluorescent protein; GFP, Green fluorescent protein; YFP, yellow fluorescent protein.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed2.pdf" - }, - { - "text": "# Peripheral nerve injury results in a biased loss of sensory neuron subpopulations\n\nAndrew H. Coopera , Allison M. Barryb , Paschalina Chrysostomidoua , Romane Loligniera , Jinyi Wanga , Magdalena Redondo Canalesa , Heather F. Tittertona , David L. Bennettb , Greg A. Weira,*\n\n# Abstract\n\nThere is a rich literature describing the loss of dorsal root ganglion (DRG) neurons following peripheral axotomy, but the vulnerability of discrete subpopulations has not yet been characterised. Furthermore, the extent or even presence of neuron loss following injury has recently been challenged. In this study, we have used a range of transgenic recombinase driver mouse lines to genetically label molecularly defined subpopulations of DRG neurons and track their survival following traumatic nerve injury. We find that spared nerve injury leads to a marked loss of cells containing DRG volume and a concomitant loss of small-diameter DRG neurons. Neuron loss occurs unequally across subpopulations and is particularly prevalent in nonpeptidergic nociceptors, marked by expression of Mrgprd. We show that this subpopulation is almost entirely lost following spared nerve injury and severely depleted (by roughly 50%) following sciatic nerve crush. Finally, we used an in vitro model of DRG neuron survival to demonstrate that nonpeptidergic nociceptor loss is likely dependent on the absence of neurotrophic support. 
Together, these results profile the extent to which DRG neuron subpopulations can survive axotomy, with implications for our understanding of nerve injury–induced plasticity and pain.\n\nKeywords: Sensory neuron, Neuron death, Transgenic reporter line, Neuropathic pain, Nerve injury\n\n# 1. Introduction\n\nDorsal root ganglion (DRG) neurons represent a molecularly and functionally heterogeneous population. Under normal conditions, this diversity contributes to the ability of the somatosensory nervous system to detect a myriad of sensory stimuli that result in the perceptions of touch, temperature, itch, and pain. Following nerve injury, physiological changes in DRG neurons lead to hyperexcitability,57 which is a key pathological driver of neuropathic pain.20,63 Concomitant molecular changes in discrete subpopulations also occur, and these have recently been comprehensively described in single-cell37,44 and subpopulation-specific sequencing studies.3 These studies describe a transient and generalized reduction in the expression of subpopulation-specific genes following nerve injury.3,37,44\n\nIn addition to molecular changes, there is a rich literature describing the frank loss of DRG neurons following traumatic\n\nSponsorships or competing interests that may be relevant to content are disclosed at the end of this article.\n\n*Corresponding author. Address: School of Psychology and Neuroscience, University of Glasgow, Glasgow G12 8QQ, United Kingdom. Tel.: 144 (0) 141 330 7023. E-mail address: gregory.weir@glasgow.ac.uk (G.A. Weir).\n\nSupplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.painjournalonline.com).\n\nCopyright © 2024 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the International Association for the Study of Pain. 
This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nhttp://dx.doi.org/10.1097/j.pain.0000000000003321\n\nnerve injury in experimental rodent models.24,50,53,56 Some studies have suggested that neuron loss occurs in certain patient cohorts,48,66 but this is yet to be definitively demonstrated in humans. In rodents, most studies support a preferential loss of small cells that give rise to unmyelinated fibers53 but some contrasting studies describe the preferential loss of large cells6 or loss of cells of all sizes.46 Variation is evident across studies in terms of experimental species, age, type of injury, and quantification methods.56 Shi et al.50 used stereological counting methods to identify a 54% loss of DRG neuron number 4 weeks after \"mid-thigh\" sciatic nerve transection in C57BL/6 mice. Estimates for the degree of loss following commonly used nerve injury paradigms (eg, spared nerve injury [SNI] and sciatic nerve crush) are not available and because of the neurochemical changes following injury and the loss of subpopulation marker gene expression,5,44,50 the vulnerability of molecularly defined subpopulations has not been characterized. Moreover, more recent studies have cast doubt on the extent or even presence of DRG neuron death following nerve injury. One study which developed a deep learning approach to assess rat DRG cellular plasticity found no loss of neurons up to 2 weeks post-SNI,49 while another observed no loss of genetically labelled damaged DRG neurons 2 months after sciatic nerve crush.44\n\nThe issue of whether neuron loss occurs, and if so, in what subpopulations, is important. It will likely have implications for our understanding of reinnervation and functional recovery in patients. 
Furthermore, better insight will provide critical context for those investigating the plasticity that occurs following nerve injury and may inform therapeutic targeting of sensory neuron populations.\n\nAn expanding repertoire of transgenic recombinase driver lines now makes it possible to permanently label DRG neuron subpopulations and study their fate in rodent nerve injury paradigms. The aim of this study was to use this technology to characterize\n\na School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom, b Nuffield Department of Clinical Neurosciences, University of\n\nOxford, Oxford, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 2. Spared nerve crush and transection lead to a loss of small DRG neurons. (A) Approach to restrict analysis to damaged afferents: a subcutaneous injection of the tracer FB into both hindpaws labelled tibial afferents, before unilateral SNItrans or SNIcrush surgery. (B) Representative image of FB labelling and NeuN immunostaining in the L4 DRG. The image is a projection of optical sections at 3-mm intervals through the entirety of a 30-mm-thick tissue section. Scale bar 5 100 mm. (C and D) Quantification of the cross-sectional area of FastBlue labelled DRG neurons ipsilateral and contralateral to SNItrans (C) or SNIcrush injury (D) reveals a loss of small afferents and subsequent shift in population distribution. Kolmogorov–Smirnov tests of cumulative distributions; SNItrans: D 5 0.25, P , 0.001; n 5 183 or 191 neurons from 3 mice; SNIcrush: D 5 0.22, P , 0.001, n 5 319 or 325 neurons from 3 mice. (E) Experimental approach for whole DRG volumetric analyses after SNItrans. (F) Representative 3D rendering of TDP-43 profiles and corresponding nuclear spot profiles following Imaris-based spot detection feature. Scale bar 5 100 mm. (G) Quantification of DRG nuclear spot volume ipsilateral and contralateral to SNItrans. 
Kolmogorov–Smirnov tests of cumulative distribution: D 5 0.06, P , 0.001, n 5 30,206 (contra) or 32,544 (ipsi) nuclei from 4 (contra) or 5 (ipsi) mice. (H) Total number of nuclear spots, by size, per DRG. Two-way RM ANOVA; size bin 3 injury interaction: F2,145 8.26, P 5 0.004; n 5 4 to 5 mice; Sˇ ´ıd ´ak multiple comparisons tests: **P , 0.01. ANOVA, analysis of variance; DRG, dorsal root ganglion; FB, FastBlue; RM, repeated measures.\n\n# 3.3. Spared nerve injury induces a loss of Trpm81 and calcitonin gene-related peptide1 but not myelinated dorsal root ganglion neurons\n\nLoss restricted to nonpeptidergic nociceptors would not fully account for the degree of total neuron loss that we observed. Therefore, we studied a range of other subpopulations, both small and large in diameter, for their vulnerability to injuryinduced loss. To investigate potential loss of Trpm81 (coldsensitive), calcitonin gene-related peptide1 (CGRP) (peptidergic), and myelinated subpopulations of DRG neurons following nerve injury, we applied our FB-labelling approach in Trpm8FlpO; RC::FLTG (FlpO-dependent tdTom expression), CalcaCreERT2; Ai32 (Cre-dependent ChR2-YFP expression) and Thy1-CFP mice, respectively (Figs. 4A–D). Trpm8-tdTom was expressed", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed2.pdf" - }, - { - "text": "injury (Fig. S6A–C, http://links.lww.com/PAIN/C84), indicating that any loss of neurons within specific neuronal subpopulations was not biased towards soma size. Collectively, these data show that unrepaired axonal damage to peripheral sensory neurons induces a partial loss of Trpm81 and CGRP1 subpopulations, but no major loss of myelinated afferents.\n\nBased on our findings of preferential loss of nonpeptidergic nociceptors, we re-analyzed a previous population-specific transcriptomic dataset of mouse DRG neurons following nerve injury for potential upregulation of cell death pathways (Fig. 
S7, http://links.lww.com/PAIN/C84).3 We found that early after injury (3 days post-SNItrans), nonpeptidergic (MrgDCreERT2-expressing) neurons showed enhanced enrichment of GO terms associated with apoptosis, in contrast to a broad population of nociceptors (labelled with Scn10aCreERT2), peptidergic nociceptors (Calca- CreERT2), C-LTMRs (ThCreERT2), and Ab-RA (rapidly adapting) and Ad-LTMRs (Ad/Ab-LTMR, Ntrk2CreERT2;AdvillinFlpO), in which there was less or no enrichment of cell death pathways. By 4 weeks, only C-LTMR and Ad/Ab-LTMR subtypes show any overrepresentation of cell death pathways (in the populations studied). Both injury-specific and apoptotic signatures in nonpeptidergic neurons were no longer significantly enriched, consistent with a loss of axotomized nonpeptidergic afferents by this late timepoint postinjury. These data suggest that apoptotic pathways are upregulated acutely after injury in a celltype-specific manner.\n\n# 3.4. Mrgprd dorsal root ganglion neurons are sensitive to loss in vitro\n\nEarlier studies postulated that a lack of neurotrophic support underlies neuronal loss, which is supported by the observation that exogenous GDNF treatment at the time of injury, or shortly after, rescues the loss of IB4-binding central terminals posttransection.5 We sought to use the DRG neurons from MrgDCreERT2;Ai32 mice to test this postulate and establish an in vitro platform capable of probing the molecular basis of loss, with axonal transection during isolation providing a correlate for in vivo nerve injury (Figs. 5A–E). Twenty-four hours after plating, YFP was expressed by 16.3 6 1.3% of DRG neurons, which was reduced to 11.8 6 1.7% after 28 days of culture in the presence of exogenous GFs, NGF and GDNF (Fig. 5F). 
However, in the absence of GFs, YFP1 neurons only accounted for 1.7 6 0.6% of neurons after 28 days, accompanied by an apparent reduction in the overall number of neurons within the culture, despite all conditions being seeded at the same initial density (Figs. 5C and F). YFP1 cell loss was partially rescued by the presence of GDNF, but not NGF alone, in the culture media (Figs. 5D–F). These results contrasted with experiments using neurons derived from CalcaCreERT2;Ai32 mice, in which we observed no change in the proportion of neurons that were Calca-YFP1 after 28 days in culture, regardless of exogenous GF addition (Figs. 5G–L). Collectively, these data support the use of DRG cultures to probe the mechanisms underlying selective loss of sensory neurons following nerve injury and suggest a role for trophic support, particularly by GDNF signaling, in preventing the loss of nonpeptidergic nociceptors.\n\n# 4. Discussion\n\nWe present data herein to support the hypothesis that traumatic nerve injury in rodents leads to a profound loss of small-diameter DRG neurons. Taking advantage of newly developed transgenic recombinase driver lines, we have shown that loss is biased across molecularly defined subpopulations. Nonpeptidergic nociceptive neurons are particularly susceptible to loss, with almost all Mrgprd1 axotomized afferents lost following an unrepaired transection injury (SNItrans) and roughly half lost following a model which contrastingly allows for nerve regenerations (SNIcrush). Finally, we have observed that the vulnerability of Mrgprd1 neurons extends to the in vitro setting and provide data to support the hypothesis that loss is driven by a lack of neurotrophic support following injury.\n\n# 4.1. Neuronal loss\n\nThe question of whether DRG neurons die following traumatic injury has been addressed by several groups over the last few decades. 
Despite contrasting findings on the extent, timing, and form that loss takes, most studies have observed frank loss of DRG neurons.6,38,46,53 However, more recent studies using recombinase driver lines and novel machine-learning approaches have cast doubt on this consensus.44,49 Our data strongly support the loss hypothesis and suggest that approximately 60% of axotomized afferents die within 2 weeks of SNI. The discrepancy between our findings and other recent studies may be partly explained by the sampling method used to estimate neuronal numbers. For example, Schulte et al.49 developed a novel machine-learning approach and found no reduction in neuron density across serial sections of rat DRG following SNI, and they inferred from this that frank loss did not occur. Our results are congruous, in that we also observed no reduction in neuron density. However, we found a substantial loss in the total neuron-containing volume of injured DRG, which underlies our contrasting conclusion of frank loss. Of note, morphological volumetric analysis and MRI have also previously demonstrated volume loss in both rodent and human DRG following nerve injury.35,65,66 These findings occur despite a major increase of nonneuronal cells in the injured DRG30 and support the notion that the total DRG neuron number is decreased.\n\n#### 4.2. Selectivity of neuron loss\n\nWhile definitively characterizing loss of molecularly defined subpopulations was challenging before the advent of recombinase driver lines, a consensus emerged that small-diameter neurons are more vulnerable to nerve injury–induced loss.50,53 Our data support this consensus and extend it to reveal that while there is a generalized partial loss of C-fiber populations including CGRP- and Trpm8-expressing neurons, Mrgprd-expressing neurons are particularly sensitive to loss. 
This selective vulnerability has been hinted at previously by the stark reduction in the number of DRG neurons and their central terminals that bind IB4 and express canonical markers such as the P2X3 receptor following nerve injury.5,8,29,36 Type 1a glomeruli are also reduced in lamina II, suggesting a structural loss of central terminals and not simply a loss of IB4-binding.2 However, it was not clear whether these data represented phenotypic changes in nonpeptidergic nociceptors or frank loss of neurons. We describe neuron loss that is delayed (occurring .7 days postinjury) with respect to histochemical and structural changes (occurring 1- 5 days postinjury2,29), suggesting that these changes precede and are not in themselves indicative of neuron loss.\n\nThe vulnerability of Mrgprd-expressing neurons is congruous with recent subpopulation bulk RNA-seq data, which found that", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 1. SNItrans induces death of small primary afferent neurons, accompanied by a reduction in volume, not cell density, of the dorsal root ganglion. (A) Approach to differentially labelled intact afferents with tdTomato and damaged afferents with GFP after peripheral nerve injury using the AvilFlpO;Atf3CreERT2;RC:: FLTG mouse line and schematic of experimental timeline. (B) Representative image of GFP, tdTomato, and NeuN expression in an L4 DRG, 2 weeks after SNItrans. Scale bars 5 100 mm. (C and D) Stereological quantification of the total number of DRG neurons (C) or number of axotomized and intact neurons (D) in the L4 DRG 1, 2, 4, and 8 weeks after SNItrans or contralateral (contra) to injury. (C) One-way ANOVA with Tukey posttests; F4,10 5 37.98, P , 0.001. (D) Two-way RM ANOVA; Timepoint 3 Color interaction F4,10 5 39.04, P , 0.001, n 5 3 mice; Tukey posttests (between injured groups): †P , 0.05 vs contra, ‡P , 0.05 vs 1-week. 
(E) Volume of DRG-containing cells (ie, excluding white matter tracts) following SNItrans. One-way ANOVA with Tukey posttests; F4,10 5 21.25, P , 0.001, n 5 3. (F) Neuronal density within the DRG following SNItrans. One-way ANOVA; F4,10 5 2.77, P 5 0.09, n 5 3. (G) Population distribution of uninjured and injured afferents by cross-sectional area, 1 and 8 weeks post-SNItrans. Kolmogorov–Smirnov tests of cumulative distributions; Uninjured: D 5 0.08, P 5 0.18; Injured: D 5 0.32, P , 0.001; n 5 310 to 427 neurons from 3 mice. *P , 0.05, **P , 0.01, ***P , 0.001 vs contra. ANOVA, analysis of variance; DRG, dorsal root ganglion; GFP, green fluorescent protein.\n\nprotein) neurons 28 days after sham surgery or SNItrans (Figs. 3A and B). SNItrans, but not sham, resulted in a significant decrease (54.0 6 6.6%) in the total number of MrgD-YFP1 neurons in L4 DRG (Fig. 3C).\n\nYellow fluorescent protein expression in MrgDChR2-YFP mice is driven by the endogenous Mrgprd promotor, which has been reported to be upregulated or downregulated following axonal damage.44,58 Such changes in promoter activity could affect the proportion of nonpeptidergic nociceptors identified by YFP expression. Therefore, to verify these findings, we used MrgDCreERT2;Ai32 mice and tamoxifen administration before injury, to permanently label Mrgprd-expressing afferents with ChR2-YFP (Figs. 3D–F). We then tested whether the proportion of cutaneous tibial afferents that were YFP1 was altered following nerve injury. Following hindpaw FB injection, ;15% of contralateral, FB-labelled DRG neurons expressed YFP. This was reduced to 6.0 6 1.2% 28 days after SNIcrush injury and to only 1.7 6 0.9% 28 days after SNItrans (Fig. 3G). Uptake by uninjured YFP1 neurons was equivalent 7 and 35 days after FB injection, demonstrating that this reduction was not because 7 days were insufficient for YFP1 neurons to fully uptake FB (Fig. S3C, http:// links.lww.com/PAIN/C84). 
No significant difference in the percentage of FB-labelled YFP1 DRG neurons between ipsilateral and contralateral DRG was observed at 7 days following SNItrans (Figs. S4A and B, http://links.lww.com/PAIN/C84), demonstrating that loss occurred after this timepoint. Analysis of the crosssectional soma area of FB-labelled, YFP1 neurons in uninjured DRG revealed an area of 361 6 138 mm2 (mean 6 SD) (Fig. S4C, http://links.lww.com/PAIN/C84), which is a distribution profile matching those neurons presumed lost. Collectively, these data show that peripheral nerve injury results in a substantial loss of nonpeptidergic, Mrgprd-expressing neurons, with SNItrans (ie, an unrepaired axonal transection) resulting in an almost complete loss of this population.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed2.pdf" - }, - { - "text": "**Fig. 3 | Subcortical GMV changed throughout gestation. a**, Multivariate regression analyses revealed largely negative relationships between gestation week and subcortical GMV regions over pregnancy, including bilateral thalamus, caudate, hippocampus, ventral diencephalon (encompassing hypothalamus, substantia nigra, mammillary body and red nucleus) and left caudate. Lateral ventricles displayed the only positive relationships with gestation week (also depicted in Fig. 1d). The whole-brain subcortical GMV estimates shown here were derived via FreeSurfer and 'aseg' subcortical segmentation. FDRcorrected at *q* < 0.05. Inset, right ventral diencephalon displayed the strongest negative association with gestation (left; baseline—36 weeks, 19 scans) and did not return to baseline postpartum (right; gestation and postpartum, 26 scans). **b**, The participant's hippocampus and surrounding cortex were segmented\n\ninto seven bilateral subregions. 
Quadratic (CA1, CA2/CA3) and linear regression analyses (PHC) revealed subfields were negatively associated with gestation week (baseline—36 weeks, 18 scans) and did not return to baseline postpartum (gestation and postpartum, 25 scans). Shaded regions in scatterplots represent a 95% confidence interval. Each boxplot represents IQR for each stage, with a horizontal line representing the median value. The whiskers indicate variability outside (±1.5) of this range. Outside values are >1.5× and <3× IQR beyond either end of the box. FDR-corrected at *q* < 0.05. For **a** and **b**, nonsignificant regions were set to zero for interpretability. See Supplementary Fig. 6 for complete labeling of regions in both segmentations. Brain visualizations created with R package ggseg48*.* DC, diencephalon.\n\noutstanding questions. This study and corresponding open-access dataset offer neuroscientists a detailed map of the human brain across gestation, a resource for which a wide range of previously unattainable neurobiological questions can now be explored.\n\nOur findings from this precision imaging study show that pregnancy is characterized by reductions in GMV, cortical thinning and enhanced white matter microstructural integrity that unfold week by week. These changes were also tied to the significant rise in steroid hormone concentrations over pregnancy. Some of these changes persist at 2 years postpartum (for example, global reductions in GMV and CT), while others, including markers of white matter integrity, appear to be transient. Ventricular expansion and contraction parallel these cortical changes. These widespread patterns, and the notable increase in CSF volume across gestation, could reflect increased water retention and subsequent compression of cortical tissue. 
However, the persistence of these changes at 2 years postpartum and regional variation in GMV, CT and QA, hint at cellular underpinnings, such as alterations in glia or neuron number, synaptic density and myelination (for review on the latter, see ref. 4). Future studies of the relationship between fluid dynamics and volumetric changes will help clarify the factors that drive global neural changes during pregnancy; such insights will have broad implications for maternal health (for example, neurological effects tied to pre-eclampsia or edema).\n\nCritically, dynamic neural changes occurred within the pregnancy window itself, a nuance not captured by studies limited to comparisons between prepregnancy and postpregnancy. For example, we observed large increases in white matter microstructural integrity (QA) throughout the first and second trimesters of pregnancy, but these measures fully returned to baseline values by the first postpartum scan. This pattern may explain why previous studies report no pregnancy-related differences in white matter tractography14. Other measures, such as GMV and CT, decreased throughout gestation and displayed only a modest rebound postpartum. These nonlinear patterns suggest that only quantifying prepregnancy and postpartum brain structure may", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed4.pdf" - }, - { - "text": "# 2.3. FastBlue tracer injections\n\nTable 2\n\nMice were briefly anesthetized during the procedure, induced with 3% to 5% isoflurane, and then maintained at 1.5% to 2% as required. Hindlimbs were taped with the plantar surface of the paw facing up, and a custom, 26G removable needle with a 30˚ bevel, attached to a 25-mL Hamilton syringe, was inserted between the 2 distal-most footpads, towards the medial aspect of the hindpaw. The needle was then rotated 90˚, so the bevel faced medially. 
Furthermore, 4-mL FastBlue (FB; 2% in sterile phosphate-buffered saline (PBS); CAS# 73819-41-7; Polysciences, Inc, Warrington, PA) per paw was then slowly injected, and the needle was left in place for 10 seconds, before rotating and carefully retracting to avoid backflow of FB along the needle track. This prevented the FB bolus from contacting the sural innervation territory of the lateral hindpaw, restricting it largely to the tibial innervation territory of the glabrous hindpaw skin.\n\n# 2.4. Immunohistochemistry and image acquisition\n\nMice were anesthetized with an overdose of pentobarbital (20 mg) and transcardially perfused with a fixative containing 4% formaldehyde. L3 to L5 DRGs were removed and postfixed for another 2 hours, cryoprotected in 30% sucrose overnight, and then embedded in optimal cutting temperature media (OCT; Tissue Tek, Alphen aan den Rijn, the Netherlands). Dorsal root ganglia were sectioned on a Leica CM1950 cryostat at 30 mm, with every section collected serially on 5 Superfrost Plus slides (VWR, Lutterworth, United Kingdom) and each slide containing 1 in every 5 sections (4-7 sections per slide). One slide per DRG was selected at random and was washed with PBS, before being incubated with appropriate primary antibodies (Table 2) diluted in 5% normal donkey serum and 0.3% Triton X-100 in PBS for 3 days at 4˚C. After PBS washes, slides were incubated with appropriate secondary antibodies (Table 2) in the same PBS/ (normal donkey serum) NDS/Triton-X100 solution as for primaries, overnight at room temperature. Slides were washed and coverslipped with VectaShield Vibrance Hardset mounting media (Vector Labs, Newark, CA), with 4',6-diamidino-2-phenylindole included in mounting media where FB-labelled cells were not being examined. Sections were imaged using a Zeiss LSM900 Airyscan confocal microscope equipped with 405-, 488-, 561-,\n\n| Primary and secondary antibodies used in the study. 
| | | |\n| --- | --- | --- | --- |\n| Antibody | Source | Identifiers | Working dilution |\n| Anti-GFP (Chicken polyclonal) | Abcam, plc, Cambridge, United Kingdom | Cat#: ab13970 | 1:1000 |\n| | | RRID: AB_300798 | |\n| Anti-NeuN (Guinea pig polyclonal) | Synaptic Systems, G ¨ottingen, Germany | Cat#: 266004 | 1:500 |\n| | | RRID: AB_2619988 | |\n| Anti-mCherry (Rat monoclonal) | Invitrogen, Waltham, MA; Thermo Fisher Scientific, | Cat#: M11217 | 1:500 |\n| United Kingdom | | RRID: AB_2536611 | |\n| Anti-Atf3 (Rabbit polyclonal) | Novus Biologicals, Minneapolis, MN | Cat#: NBP1-85816 | 1:500 |\n| | | RRID: AB_11014863 | |\n| Anti-NF200 (Rabbit polyclonal) | Sigma-Aldrich, Saint Louis, MO | Cat#: N4142 | 1:1000 |\n| | | RRID: AB_477272 | |\n| Anti-TrkA (Goat polyclonal) | R&D Systems, Minneapolis, MN | Cat#: AF1056 | 1:500 |\n| | | RRID: AB_2283049 | |\n| Anti-TDP43 (Rabbit polyclonal) | Abcam, plc, Cambridge, United Kingdom | Cat#: ab133547 | 1:100 |\n| | | RRID: AB_2920621 | |\n| Anti-RFP (Mouse monoclonal) | Thermo Fisher Scientific, United Kingdom | Cat#: MA5-15257 | 1:200 |\n| | | RRID: AB_10999796 | |\n| Anti-RFP (Chicken polyclonal) | Sigma-Aldrich, United Kingdom | Cat#: AB3528 | 1:200 |\n| | | RRID: AB_11212735 | |\n| Alexa Fluor 488 Donkey Anti-Chicken IgY | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 703-545-155 | 1:500 |\n| (Donkey polyclonal) | | RRID: AB_2340375 | |\n| Alexa Fluor 647 Donkey Anti-Guinea pig IgG | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 706-605-148 | 1:250 |\n| (Donkey polyclonal) | | RRID: AB_2340476 | |\n| Rhodamine Red-X Donkey Anti-Rat IgG (Donkey | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 712-295-153 | 1:100 |\n| polyclonal) | | RRID: AB_2340676 | |\n| Alexa Fluor 647 Donkey Anti-Rabbit IgG (Donkey | Jackson ImmunoResearch, Ely, United Kingdom | Cat#: 711-605-152 | 1:250 |\n| polyclonal) | | RRID: AB_2492288 | |\n| Rhodamine Red-X Donkey Anti-Rabbit IgG | Jackson ImmunoResearch, Ely, United Kingdom | 
Cat#: 711-295-152 RRID: AB_2340613 | 1:100 |\n| (Donkey polyclonal) | | | |\n| Alexa Fluor 546 Goat Anti-Chicken IgG (Goat | Thermo Fisher Scientific, United Kingdom | Cat#: A11040 | 1:400 |\n| polyclonal) | | RRID: AB_2534097 | |\n| Alexa Fluor 488 Goat Anti-Rabbit IgG (Goat | Thermo Fisher Scientific, United Kingdom | Cat#: A11008 | 1:400 |\n| polyclonal) | | RRID: AB_143165 | |\n| Alexa Fluor 546 Donkey Anti-Mouse IgG (Donkey | Thermo Fisher Scientific, United Kingdom | Cat#: A10036 | 1:400 |\n| polyclonal) | | RRID: AB_2534012 | |\n\nGFP, green fluorescent protein; RFP, red fluorescent protein", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 3. Spared nerve crush or transection results in death of nonpeptidergic neurons. (A) Schematic of experimental approach for (B and C). (B) MrgDChR2-YFP L4 DRGs 4 weeks after SNI, contralateral or ipsilateral to injury. Images are projections of optical sections at 3-mm intervals through the entirety of 30-mm-thick tissue sections. Scale bars 5 100 mm. (C) Quantification of total number of MrgD-YFP1 cells per L4 DRG 4 weeks after SNI revealed a significant loss in ipsilateral DRG. Two-way RM ANOVA with Sˇ ´ıd ´ak multiple comparisons tests; Side x Treatment interaction: F1,5 5 9.23, P 5 0.029; n 5 3 mice. (D) The experimental approach used to generate data presented in (E–G). (E and F) MrgD-YFP expression and FB labelling in the L4 DRG, 14 days after SNI or crush surgery or contralateral to injury. White boxes represent regions enlarged in (F). Scale bars 5 100 mm (E) or 20 mm (F). (G) The proportion of FB-labelled DRG neurons decreased after spared nerve crush injury, and co-labelling is almost completely absent after SNI. Two-way RM ANOVA with Sˇ ´ıd ´ak multiple comparisons tests; side 3 injury interaction: F1,4 5 7.80, P 5 0.049; n 5 3 mice. Posttests: *P , 0.05, **P , 0.01. 
ANOVA, analysis of variance; DRG, dorsal root ganglion; SNI, spared nerve injury; FB, FastBlue; RM, repeated measures.", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed2.pdf" - }, - { - "text": "- [30] Liang Z, Hore Z, Harley P, Uchenna Stanley F, Michrowska A, Dahiya M, La Russa F, Jager SE, Villa-Hernandez S, Denk F. A transcriptional toolbox for exploring peripheral neuroimmune interactions. PAIN 2020; 161:2089–106.\n- [31] Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol 2014;15:550.\n- [32] Madisen L, Mao T, Koch H, Zhuo J, Berenyi A, Fujisawa S, Hsu YWA, Garcia AJ, Gu X, Zanella S, Kidney J, Gu H, Mao Y, Hooks BM, Boyden ES, Buzs ´aki G, Ramirez JM, Jones AR, Svoboda K, Han X, Turner EE, Zeng H. A toolbox of Cre-dependent optogenetic transgenic mice for light-induced activation and silencing. Nat Neurosci 2012;15:793–802.\n- [33] Madisen L, Zwingman TA, Sunkin SM, Oh SW, Zariwala HA, Gu H, Ng LL, Palmiter RD, Hawrylycz MJ, Jones AR, Lein ES, Zeng H. A robust and high-throughput Cre reporting and characterization system for the whole mouse brain. Nat Neurosci 2010;13:133–40.\n- [34] McCoy ES, Taylor-Blake B, Street SE, Pribisko AL, Zheng J, Zylka MJ. Peptidergic CGRPa primary sensory neurons encode heat and itch and tonically suppress sensitivity to cold. Neuron 2013;78:138–51.\n- [35] McKay Hart A, Brannstrom T, Wiberg M, Terenghi G. Primary sensory neurons and satellite cells after peripheral axotomy in the adult rat: timecourse of cell death and elimination. Exp Brain Res 2002;142:308–18.\n- [36] Molander C, Wang H, Rivero-Meli ´an C, Grant G. Early decline and late restoration of spinal cord binding and transganglionic transport of isolectin B4 from Griffonia simplicifolia I after peripheral nerve transection or crush. Restor Neurol Neurosci 1996;10:123–33.\n- [37] Nguyen MQ, Le Pichon CE, Ryba N. 
Stereotyped transcriptomic transformation of somatosensory neurons in response to injury. Elife 2019;8:e49679.\n- [38] Oliveira ALR. Apoptosis of sensory neurons and satellite cells after sciatic nerve transection in C57BL/6J mice. Braz J Med Biol Res 2001;34: 375–80.\n- [39] Olson W, Abdus-Saboor I, Cui L, Burdge J, Raabe T, Ma M, Luo W. Sparse genetic tracing reveals regionally specific functional organization of mammalian nociceptors. Elife 2017;6:e29507.\n- [40] Plummer NW, Evsyukova IY, Robertson SD, de Marchena J, Tucker CJ, Jensen P. Expanding the power of recombinase-based labeling to uncover cellular diversity. Development 2015;142:4385–93.\n- [41] Prescott SA, Ratt ´e S. Pain processing by spinal microcircuits: afferent combinatorics. Curr Opin Neurobiol 2012;22:631–9.\n- [42] Qi L, Iskols M, Shi D, Reddy P, Walker C, Lezgiyeva K, Voisin T, Pawlak M, Kuchroo VK, Chiu I, Ginty DD, Sharma N. A DRG genetic toolkit reveals molecular, morphological, and functional diversity of somatosensory neuron subtypes. bioRxiv 2023.2023.04.22.537932.\n- [43] Reid AJ, Mantovani C, Shawcross SG, Terenghi G, Wiberg M. Phenotype of distinct primary sensory afferent subpopulations and caspase-3 expression following axotomy. Histochem Cell Biol 2011;136:71–8.\n- [44] Renthal W, Tochitsky I, Yang L, Cheng YC, Li E, Kawaguchi R, Geschwind DH, Woolf CJ. Transcriptional reprogramming of distinct peripheral sensory neuron subtypes after axonal injury. Neuron 2020; 108:128–44.e9.\n- [45] Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B, Tinevez J-Y, White DJ, Hartenstein V, Eliceiri K, Tomancak P, Cardona A. Fiji: an open-source platform for biological-image analysis. Nat Methods 2012;9:676–82.\n- [46] Schmalbruch H. Loss of sensory neurons after sciatic nerve section in the rat. Anat Rec 1987;219:323–9.\n- [47] Schmitz C, Hof PR. Design-based stereology in neuroscience. 
Neuroscience 2005;130:813–31.\n- [48] Schulte A, Degenbeck J, Aue A, Schindeh ¨utte M, Schlott F, Schneider M, Monoranu CM, Bohnert M, Pham M, Antoniadis G, Blum R, Rittner HL. Human dorsal root ganglia after plexus injury: either preservation or loss of the multicellular unit. bioRxiv 2023.02.06.526934.\n- [49] Schulte A, Lohner H, Degenbeck J, Segebarth D, Rittner HL, Blum R, Aue A. Unbiased analysis of the dorsal root ganglion after peripheral nerve injury: no neuronal loss, no gliosis, but satellite glial cell plasticity. PAIN 2023;164:728–40.\n- [50] Shi TJS, Tandrup T, Bergman E, Xu ZQD, Ulfhake B, H ¨okfelt T. Effect of peripheral nerve injury on dorsal root ganglion neurons in the C57 BL/6J\n\nmouse: marked changes both in cell numbers and neuropeptide expression. Neuroscience 2001;105:249–63.\n\n- [51] Song H, Yao E, Lin C, Gacayan R, Chen MH, Chuang PT. Functional characterization of pulmonary neuroendocrine cells in lung development, injury, and tumorigenesis. Proc Natl Acad Sci 2012;109:17531–6.\n- [52] Takasu K, Sakai A, Hanawa H, Shimada T, Suzuki H. Overexpression of GDNF in the uninjured DRG exerts analgesic effects on neuropathic pain following segmental spinal nerve ligation in mice. J Pain 2011;12: 1130–1139.\n- [53] Tandrup T, Woolf CJ, Coggeshall RE. Delayed loss of small dorsal root ganglion cells after transection of the rat sciatic nerve. J Comp Neurol 2000;422:172–80.\n- [54] Terenghi G, Hart A, Wiberg M. The nerve injury and the dying neurons: diagnosis and prevention. J Hand Surg Eur Vol 2011;36:730–4.\n- [55] Usoskin D, Furlan A, Islam S, Abdo H, Lonnerberg P, Lou D, Hjerling-Leffler J, Haeggstrom J, Kharchenko O, Kharchenko PV, Linnarsson S, Ernfors P. Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing. Nat Neurosci 2015;18:145–53.\n- [56] Vestergaard S, Tandrup T, Jakobsen J. Effect of permanent axotomy on number and volume of dorsal root ganglion cell bodies. 
J Comp Neurol 1997;388:307–12.\n- [57] Wall PD, Gutnick M. Properties of afferent nerve impulses originating from a neuroma. Nature 1974;248:740–43.\n- [58] Wang C, Gu L, Ruan Y, Geng X, Xu M, Yang N, Yu L, Jiang Y, Zhu C, Yang Y, Zhou Y, Guan X, Luo W, Liu Q, Dong X, Yu G, Lan L, Tang Z. Facilitation of MrgprD by TRP-A1 promotes neuropathic pain. FASEB J 2019;33: 1360–73.\n- [59] Wang H, Zylka MJ. Mrgprd-expressing polymodal nociceptive neurons innervate most known classes of substantia gelatinosa neurons. J Neurosci 2009;29:13202–9.\n- [60] Wang R, Guo W, Ossipov MH, Vanderah TW, Porreca F, Lai J. Glial cell line-derived neurotrophic factor normalizes neurochemical changes in injured dorsal root ganglion neurons and prevents the expression of experimental neuropathic pain. Neuroscience 2003; 121:815–24.\n- [61] Wang X, Archibald ML, Stevens K, Baldridge WH, Chauhan BC. Cyan fluorescent protein (CFP) expressing cells in the retina of Thy1-CFP transgenic mice before and after optic nerve injury. Neurosci Lett 2010; 468:110–4.\n- [62] Warwick C, Cassidy C, Hachisuka J, Wright MC, Baumbauer KM, Adelman PC, Lee KH, Smith KM, Sheahan TD, Ross SE, Koerber HR. MrgprdCre lineage neurons mediate optogenetic allodynia through an emergent polysynaptic circuit. PAIN 2021;162:2120–31.\n- [63] Weir GA, Middleton SJ, Clark AJ, Daniel T, Khovanov N, McMahon SB, Bennett DL. Using an engineered glutamate-gated chloride channel to silence sensory neurons and treat neuropathic pain at the source. Brain 2017;140:2570–85.\n- [64] Welin D, Novikova LN, Wiberg M, Kellerth JO, Novikov LN. Survival and regeneration of cutaneous and muscular afferent neurons after peripheral nerve injury in adult rats. Exp Brain Res 2008;186:315–23.\n- [65] West CA, Davies KA, Hart AM, Wiberg M, Williams SR, Terenghi G. Volumetric magnetic resonance imaging of dorsal root ganglia for the objective quantitative assessment of neuron death after peripheral nerve injury. 
Exp Neurol 2007;203:22–33.\n- [66] West CA, Ljungberg C, Wiberg M, Hart A. Sensory neuron death after upper limb nerve injury and protective effect of repair: clinical evaluation using volumetric magnetic resonance imaging of dorsal root ganglia. Neurosurgery 2013;73:632–40.\n- [67] West SJ, Bonboire D, Bennett DL. StereoMate: 3D stereological automated analysis of biological structures. bioRxiv 2020:648337.\n- [68] Wiberg R, Novikova LN, Kingham PJ. Evaluation of apoptotic pathways in dorsal root ganglion neurons following peripheral nerve injury. Neuroreport 2018;29:779–85.\n- [69] Yu X, Liu H, Hamel KA, Morvan MG, Yu S, Leff J, Guan Z, Braz JM, Basbaum AI. Dorsal root ganglion macrophages contribute to both the initiation and persistence of neuropathic pain. Nat Commun 2020;11:264.\n- [70] Zheng J, Lu Y, Perl ER. Inhibitory neurones of the spinal substantia gelatinosa mediate interaction of signals from primary afferents. J Physiol 2010;588:2065–75.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed2.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed2.pdf", - "query": "Did the researcher responsible for quantifying the cells in the dorsal root ganglion know which group each mouse belonged to?", - "target_page": 4, - "target_passage": "During all image quantification, the experimenter was blind to the experimental groups.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "cell death and apoptosis with more than 10 genes were examined. Filtered count data of expressed and nondifferentially expressed genes were used as a background.\n\n#### 2.8. 
Dorsal root ganglion culture\n\nDorsal root ganglia were dissected from MrgDCreERT2;Ai32 and CalcaCreERT2;Ai32 mice >1 week after dosing with tamoxifen and enzymatically digested at 37°C for 80 minutes in dispase type II (4.7 mg/mL) plus collagenase type II (4 mg/mL) (Worthington Biochemical), as described previously.63 Mechanically dissociated cells were plated onto laminin/poly-D-lysine (R&D Systems, Minneapolis, MN) treated coverslips in complete Neurobasal Plus medium (Neurobasal Plus media supplemented with 2% (vol/vol) B27 Plus, 1% N2, 1% Glutamax, and 1% antibiotic–antimycotic [ThermoFisher Scientific, Waltham, MA]). Mouse nerve growth factor (NGF, 50 ng/mL; PeproTech, Cranbury, NJ) and 10 ng/mL glial-derived neurotrophic factor (GDNF, PeproTech) were added to the media under some conditions. Cytosine β-D-arabinofuranoside (4 mM) was added to the media for 24 hours the day after plating to reduce the proliferation of nonneuronal cells. Media was refreshed 3 times per week thereafter. Cultures were fixed for 10 minutes at room temperature with 4% paraformaldehyde and subsequently processed by immunocytochemistry (described earlier).\n\n#### 2.9. Statistical analysis\n\nData are expressed as mean ± SEM unless otherwise specified, and P values of less than 0.05 were considered significant. Power calculations were performed using G*Power 3.1.9.7.15 A quantitative Venn diagram was created using BioVenn.25 All other statistical analyses were performed in Prism 10 (GraphPad Software, Inc, Boston, MA) or R using paired t tests or 1- or 2-way RM ANOVAs (repeated measures analysis of variance), where appropriate. Normality was assessed by the Shapiro–Wilk test. If the main analysis of variance effect was significant, Šídák or Tukey multiple comparisons tests were performed. To compare population distributions of soma cross-sectional area or volume, Kolmogorov–Smirnov tests were performed.\n\n#### 3. Results\n\n# 3.1. 
Peripheral nerve injury induces a loss of small neurons from the dorsal root ganglion\n\nTo assess the gross loss of neurons from DRG following nerve injury, we generated the AvilFlpO;Atf3CreERT2;RC::FLTG mouse line in which naïve and axotomized sensory neurons were differentially labelled. In this mouse line, all neurons express tdTomato (Flp-dependent) in the naïve state and switch to expressing green fluorescent protein (GFP) upon axonal damage and concurrent tamoxifen treatment (Flp- and Cre-dependent) (Figs. 1A and B). Following pilot experiments to optimize tamoxifen dosing regimen, this approach was both highly efficient and specific (with the caveat that it was necessary to wait for several days after nerve injury for Cre-induced GFP expression): 14 days after SNItrans surgery, GFP was expressed by 99.1 ± 0.6% of Atf3-expressing ipsilateral L4 DRG neurons, while we observed GFP in only 4.6 ± 0.7% of contralateral DRG neurons (Figs. S2A–D, http://links.lww.com/PAIN/C84). We then used a stereological approach to quantify the total number of neurons in L4 DRG ipsilateral to injury 1, 2, 4, and 8 weeks after SNItrans, as well as contralateral to injury. One week after SNItrans, we observed 7809 ± 153 neurons per DRG; this was not significantly different to the number of neurons in the contralateral DRG (7917 ± 349), whereas cell number approximately halved by 8 weeks postinjury to 3963 ± 410 neurons per DRG (Fig. 1C). Separating analysis into intact vs axotomized afferents revealed that only axotomized afferents were lost, with no difference observed in numbers of intact afferents (Fig. 1D). Between 1 and 8 weeks after injury, we observed a 61.0 ± 7.0% decrease in the number of GFP+ neurons. This loss of injured afferents resulted in a loss of neuron-containing (ie, excluding white matter regions) DRG volume (Fig. 1E), but not neuron density (Fig. 1F). Cell loss predominantly occurred between 1 and 2 weeks postinjury and stabilized after this timepoint. 
Population distributions of the cross-sectional area of nucleated, tdTomato-expressing cell profiles were not significantly different at 1 vs 8 weeks post-SNItrans, in contrast to GFP-expressing/injured afferents, in which a loss of a population of small afferents at 8 weeks postinjury was observed (Fig. 1G).\n\nSNItrans resulted in a mixed population of axotomized and intact afferents within the L4 DRG. Therefore, we developed an approach to restrict our analysis to axotomized afferents, without relying on transgenic labelling, and used this as a complementary approach to confirm our findings. We injected the neuronal tracer FB into the glabrous, tibial innervation territory of both hindpaws 1 week before common peroneal and tibial transection (SNItrans) or crush (SNIcrush) surgeries (Figs. 2A and B). FastBlue-uptake was complete across neurons of all sizes by 1 week (Fig. S3, http://links.lww.com/PAIN/ C84), so this approach allowed us to profile a sample of the axotomized afferents. Both SNItrans (Fig. 2C) and SNIcrush (Fig. 2D) injuries resulted in a rightward shift in population distributions of the cross-sectional area of nucleated, FB-labelled DRG neurons when compared with contralateral DRG, consistent with a loss of small afferents post–nerve injury.\n\nAs a third complementary approach, we applied semiautomated volumetric analyses of nuclei size following tissue clearing. In this study, whole DRGs were cleared 4 weeks after SNItrans for nuclei counting in \"complete\" tissue (Figs. 2E–H). Nuclei were labelled by TDP-43, in line with the study by West et al.,67 and were quantified using Imaris software (Fig. 2F, Video 1). We observed a slight but significant rightward shift in nuclear spot volume population distribution 4 weeks after SNItrans (Fig. 2G). In addition, there was a significant reduction in the number of small but not medium or large nuclear spots, in support of a loss of small-diameter neuron populations (Fig. 
2H).\n\nTogether, our data derived from several different experimental approaches show that a population of small-diameter afferents are lost following peripheral nerve injury.\n\n# 3.2. Spared nerve crush or transection results in death of Mrgprd-expressing neurons\n\nTo date, determining cell loss among specific populations of afferent neurons has proved challenging due to the downregulation of subpopulation-specific marker genes following axonal transection.37,44 To overcome this issue, we took advantage of transgenic strategies to label populations in a manner that persisted after injury. Owing to the bias for the loss of small neurons and the known loss of IB4-binding central terminals postinjury,36 we initially focused on nonpeptidergic nociceptive neurons. We used MrgDChR2-YFP mice to identify neurons belonging to the largest of the 3 classes of nonpeptidergic nociceptors, NP1.55,59 To determine whether these neurons are lost following nerve injury, we used a stereological method to quantify L4 DRG MrgD-YFP1 (yellow fluorescent", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed2.pdf" - }, - { - "text": "# Peripheral nerve injury results in a biased loss of sensory neuron subpopulations\n\nAndrew H. Coopera , Allison M. Barryb , Paschalina Chrysostomidoua , Romane Loligniera , Jinyi Wanga , Magdalena Redondo Canalesa , Heather F. Tittertona , David L. Bennettb , Greg A. Weira,*\n\n# Abstract\n\nThere is a rich literature describing the loss of dorsal root ganglion (DRG) neurons following peripheral axotomy, but the vulnerability of discrete subpopulations has not yet been characterised. Furthermore, the extent or even presence of neuron loss following injury has recently been challenged. In this study, we have used a range of transgenic recombinase driver mouse lines to genetically label molecularly defined subpopulations of DRG neurons and track their survival following traumatic nerve injury. 
We find that spared nerve injury leads to a marked loss of cells containing DRG volume and a concomitant loss of small-diameter DRG neurons. Neuron loss occurs unequally across subpopulations and is particularly prevalent in nonpeptidergic nociceptors, marked by expression of Mrgprd. We show that this subpopulation is almost entirely lost following spared nerve injury and severely depleted (by roughly 50%) following sciatic nerve crush. Finally, we used an in vitro model of DRG neuron survival to demonstrate that nonpeptidergic nociceptor loss is likely dependent on the absence of neurotrophic support. Together, these results profile the extent to which DRG neuron subpopulations can survive axotomy, with implications for our understanding of nerve injury–induced plasticity and pain.\n\nKeywords: Sensory neuron, Neuron death, Transgenic reporter line, Neuropathic pain, Nerve injury\n\n# 1. Introduction\n\nDorsal root ganglion (DRG) neurons represent a molecularly and functionally heterogeneous population. Under normal conditions, this diversity contributes to the ability of the somatosensory nervous system to detect a myriad of sensory stimuli that result in the perceptions of touch, temperature, itch, and pain. Following nerve injury, physiological changes in DRG neurons lead to hyperexcitability,57 which is a key pathological driver of neuropathic pain.20,63 Concomitant molecular changes in discrete subpopulations also occur, and these have recently been comprehensively described in single-cell37,44 and subpopulation-specific sequencing studies.3 These studies describe a transient and generalized reduction in the expression of subpopulation-specific genes following nerve injury.3,37,44\n\nIn addition to molecular changes, there is a rich literature describing the frank loss of DRG neurons following traumatic\n\nSponsorships or competing interests that may be relevant to content are disclosed at the end of this article.\n\n*Corresponding author. 
Address: School of Psychology and Neuroscience, University of Glasgow, Glasgow G12 8QQ, United Kingdom. Tel.: +44 (0) 141 330 7023. E-mail address: gregory.weir@glasgow.ac.uk (G.A. Weir).\n\nSupplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.painjournalonline.com).\n\nCopyright © 2024 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the International Association for the Study of Pain. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nhttp://dx.doi.org/10.1097/j.pain.0000000000003321\n\nnerve injury in experimental rodent models.24,50,53,56 Some studies have suggested that neuron loss occurs in certain patient cohorts,48,66 but this is yet to be definitively demonstrated in humans. In rodents, most studies support a preferential loss of small cells that give rise to unmyelinated fibers53 but some contrasting studies describe the preferential loss of large cells6 or loss of cells of all sizes.46 Variation is evident across studies in terms of experimental species, age, type of injury, and quantification methods.56 Shi et al.50 used stereological counting methods to identify a 54% loss of DRG neuron number 4 weeks after \"mid-thigh\" sciatic nerve transection in C57BL/6 mice. Estimates for the degree of loss following commonly used nerve injury paradigms (eg, spared nerve injury [SNI] and sciatic nerve crush) are not available and because of the neurochemical changes following injury and the loss of subpopulation marker gene expression,5,44,50 the vulnerability of molecularly defined subpopulations has not been characterized. 
Moreover, more recent studies have cast doubt on the extent or even presence of DRG neuron death following nerve injury. One study which developed a deep learning approach to assess rat DRG cellular plasticity found no loss of neurons up to 2 weeks post-SNI,49 while another observed no loss of genetically labelled damaged DRG neurons 2 months after sciatic nerve crush.44\n\nThe issue of whether neuron loss occurs, and if so, in what subpopulations, is important. It will likely have implications for our understanding of reinnervation and functional recovery in patients. Furthermore, better insight will provide critical context for those investigating the plasticity that occurs following nerve injury and may inform therapeutic targeting of sensory neuron populations.\n\nAn expanding repertoire of transgenic recombinase driver lines now makes it possible to permanently label DRG neuron subpopulations and study their fate in rodent nerve injury paradigms. The aim of this study was to use this technology to characterize\n\na School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom, b Nuffield Department of Clinical Neurosciences, University of\n\nOxford, Oxford, United Kingdom", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 2. Spared nerve crush and transection lead to a loss of small DRG neurons. (A) Approach to restrict analysis to damaged afferents: a subcutaneous injection of the tracer FB into both hindpaws labelled tibial afferents, before unilateral SNItrans or SNIcrush surgery. (B) Representative image of FB labelling and NeuN immunostaining in the L4 DRG. The image is a projection of optical sections at 3-mm intervals through the entirety of a 30-mm-thick tissue section. Scale bar 5 100 mm. 
(C and D) Quantification of the cross-sectional area of FastBlue labelled DRG neurons ipsilateral and contralateral to SNItrans (C) or SNIcrush injury (D) reveals a loss of small afferents and subsequent shift in population distribution. Kolmogorov–Smirnov tests of cumulative distributions; SNItrans: D = 0.25, P < 0.001; n = 183 or 191 neurons from 3 mice; SNIcrush: D = 0.22, P < 0.001, n = 319 or 325 neurons from 3 mice. (E) Experimental approach for whole DRG volumetric analyses after SNItrans. (F) Representative 3D rendering of TDP-43 profiles and corresponding nuclear spot profiles following Imaris-based spot detection feature. Scale bar = 100 µm. (G) Quantification of DRG nuclear spot volume ipsilateral and contralateral to SNItrans. Kolmogorov–Smirnov tests of cumulative distribution: D = 0.06, P < 0.001, n = 30,206 (contra) or 32,544 (ipsi) nuclei from 4 (contra) or 5 (ipsi) mice. (H) Total number of nuclear spots, by size, per DRG. Two-way RM ANOVA; size bin × injury interaction: F2,14 = 8.26, P = 0.004; n = 4 to 5 mice; Šídák multiple comparisons tests: **P < 0.01. ANOVA, analysis of variance; DRG, dorsal root ganglion; FB, FastBlue; RM, repeated measures.\n\n# 3.3. Spared nerve injury induces a loss of Trpm8+ and calcitonin gene-related peptide+ but not myelinated dorsal root ganglion neurons\n\nLoss restricted to nonpeptidergic nociceptors would not fully account for the degree of total neuron loss that we observed. Therefore, we studied a range of other subpopulations, both small and large in diameter, for their vulnerability to injury-induced loss. To investigate potential loss of Trpm8+ (cold-sensitive), calcitonin gene-related peptide+ (CGRP) (peptidergic), and myelinated subpopulations of DRG neurons following nerve injury, we applied our FB-labelling approach in Trpm8FlpO;RC::FLTG (FlpO-dependent tdTom expression), CalcaCreERT2;Ai32 (Cre-dependent ChR2-YFP expression) and Thy1-CFP mice, respectively (Figs. 4A–D). 
Trpm8-tdTom was expressed", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed2.pdf" - }, - { - "text": "injury (Fig. S6A–C, http://links.lww.com/PAIN/C84), indicating that any loss of neurons within specific neuronal subpopulations was not biased towards soma size. Collectively, these data show that unrepaired axonal damage to peripheral sensory neurons induces a partial loss of Trpm81 and CGRP1 subpopulations, but no major loss of myelinated afferents.\n\nBased on our findings of preferential loss of nonpeptidergic nociceptors, we re-analyzed a previous population-specific transcriptomic dataset of mouse DRG neurons following nerve injury for potential upregulation of cell death pathways (Fig. S7, http://links.lww.com/PAIN/C84).3 We found that early after injury (3 days post-SNItrans), nonpeptidergic (MrgDCreERT2-expressing) neurons showed enhanced enrichment of GO terms associated with apoptosis, in contrast to a broad population of nociceptors (labelled with Scn10aCreERT2), peptidergic nociceptors (Calca- CreERT2), C-LTMRs (ThCreERT2), and Ab-RA (rapidly adapting) and Ad-LTMRs (Ad/Ab-LTMR, Ntrk2CreERT2;AdvillinFlpO), in which there was less or no enrichment of cell death pathways. By 4 weeks, only C-LTMR and Ad/Ab-LTMR subtypes show any overrepresentation of cell death pathways (in the populations studied). Both injury-specific and apoptotic signatures in nonpeptidergic neurons were no longer significantly enriched, consistent with a loss of axotomized nonpeptidergic afferents by this late timepoint postinjury. These data suggest that apoptotic pathways are upregulated acutely after injury in a celltype-specific manner.\n\n# 3.4. 
Mrgprd dorsal root ganglion neurons are sensitive to loss in vitro\n\nEarlier studies postulated that a lack of neurotrophic support underlies neuronal loss, which is supported by the observation that exogenous GDNF treatment at the time of injury, or shortly after, rescues the loss of IB4-binding central terminals posttransection.5 We sought to use the DRG neurons from MrgDCreERT2;Ai32 mice to test this postulate and establish an in vitro platform capable of probing the molecular basis of loss, with axonal transection during isolation providing a correlate for in vivo nerve injury (Figs. 5A–E). Twenty-four hours after plating, YFP was expressed by 16.3 ± 1.3% of DRG neurons, which was reduced to 11.8 ± 1.7% after 28 days of culture in the presence of exogenous GFs, NGF and GDNF (Fig. 5F). However, in the absence of GFs, YFP+ neurons only accounted for 1.7 ± 0.6% of neurons after 28 days, accompanied by an apparent reduction in the overall number of neurons within the culture, despite all conditions being seeded at the same initial density (Figs. 5C and F). YFP+ cell loss was partially rescued by the presence of GDNF, but not NGF alone, in the culture media (Figs. 5D–F). These results contrasted with experiments using neurons derived from CalcaCreERT2;Ai32 mice, in which we observed no change in the proportion of neurons that were Calca-YFP+ after 28 days in culture, regardless of exogenous GF addition (Figs. 5G–L). Collectively, these data support the use of DRG cultures to probe the mechanisms underlying selective loss of sensory neurons following nerve injury and suggest a role for trophic support, particularly by GDNF signaling, in preventing the loss of nonpeptidergic nociceptors.\n\n# 4. Discussion\n\nWe present data herein to support the hypothesis that traumatic nerve injury in rodents leads to a profound loss of small-diameter DRG neurons. 
Taking advantage of newly developed transgenic recombinase driver lines, we have shown that loss is biased across molecularly defined subpopulations. Nonpeptidergic nociceptive neurons are particularly susceptible to loss, with almost all Mrgprd+ axotomized afferents lost following an unrepaired transection injury (SNItrans) and roughly half lost following a model which contrastingly allows for nerve regeneration (SNIcrush). Finally, we have observed that the vulnerability of Mrgprd+ neurons extends to the in vitro setting and provide data to support the hypothesis that loss is driven by a lack of neurotrophic support following injury.\n\n# 4.1. Neuronal loss\n\nThe question of whether DRG neurons die following traumatic injury has been addressed by several groups over the last few decades. Despite contrasting findings on the extent, timing, and form that loss takes, most studies have observed frank loss of DRG neurons.6,38,46,53 However, more recent studies using recombinase driver lines and novel machine-learning approaches have cast doubt on this consensus.44,49 Our data strongly support the loss hypothesis and suggest that approximately 60% of axotomized afferents die within 2 weeks of SNI. The discrepancy between our findings and other recent studies may be partly explained by the sampling method used to estimate neuronal numbers. For example, Schulte et al.49 developed a novel machine-learning approach and found no reduction in neuron density across serial sections of rat DRG following SNI, and they inferred from this that frank loss did not occur. Our results are congruous, in that we also observed no reduction in neuron density. However, we found a substantial loss in the total neuron-containing volume of injured DRG, which underlies our contrasting conclusion of frank loss. 
Of note, morphological volumetric analysis and MRI have also previously demonstrated volume loss in both rodent and human DRG following nerve injury.35,65,66 These findings occur despite a major increase of nonneuronal cells in the injured DRG30 and support the notion that the total DRG neuron number is decreased.\n\n#### 4.2. Selectivity of neuron loss\n\nWhile definitively characterizing loss of molecularly defined subpopulations was challenging before the advent of recombinase driver lines, a consensus emerged that small-diameter neurons are more vulnerable to nerve injury–induced loss.50,53 Our data support this consensus and extend it to reveal that while there is a generalized partial loss of C-fiber populations including CGRP- and Trpm8-expressing neurons, Mrgprd-expressing neurons are particularly sensitive to loss. This selective vulnerability has been hinted at previously by the stark reduction in the number of DRG neurons and their central terminals that bind IB4 and express canonical markers such as the P2X3 receptor following nerve injury.5,8,29,36 Type 1a glomeruli are also reduced in lamina II, suggesting a structural loss of central terminals and not simply a loss of IB4-binding.2 However, it was not clear whether these data represented phenotypic changes in nonpeptidergic nociceptors or frank loss of neurons. We describe neuron loss that is delayed (occurring .7 days postinjury) with respect to histochemical and structural changes (occurring 1- 5 days postinjury2,29), suggesting that these changes precede and are not in themselves indicative of neuron loss.\n\nThe vulnerability of Mrgprd-expressing neurons is congruous with recent subpopulation bulk RNA-seq data, which found that", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 1. SNItrans induces death of small primary afferent neurons, accompanied by a reduction in volume, not cell density, of the dorsal root ganglion. 
(A) Approach to differentially labelled intact afferents with tdTomato and damaged afferents with GFP after peripheral nerve injury using the AvilFlpO;Atf3CreERT2;RC::FLTG mouse line and schematic of experimental timeline. (B) Representative image of GFP, tdTomato, and NeuN expression in an L4 DRG, 2 weeks after SNItrans. Scale bars = 100 µm. (C and D) Stereological quantification of the total number of DRG neurons (C) or number of axotomized and intact neurons (D) in the L4 DRG 1, 2, 4, and 8 weeks after SNItrans or contralateral (contra) to injury. (C) One-way ANOVA with Tukey posttests; F4,10 = 37.98, P < 0.001. (D) Two-way RM ANOVA; Timepoint × Color interaction F4,10 = 39.04, P < 0.001, n = 3 mice; Tukey posttests (between injured groups): †P < 0.05 vs contra, ‡P < 0.05 vs 1-week. (E) Volume of DRG-containing cells (ie, excluding white matter tracts) following SNItrans. One-way ANOVA with Tukey posttests; F4,10 = 21.25, P < 0.001, n = 3. (F) Neuronal density within the DRG following SNItrans. One-way ANOVA; F4,10 = 2.77, P = 0.09, n = 3. (G) Population distribution of uninjured and injured afferents by cross-sectional area, 1 and 8 weeks post-SNItrans. Kolmogorov–Smirnov tests of cumulative distributions; Uninjured: D = 0.08, P = 0.18; Injured: D = 0.32, P < 0.001; n = 310 to 427 neurons from 3 mice. *P < 0.05, **P < 0.01, ***P < 0.001 vs contra. ANOVA, analysis of variance; DRG, dorsal root ganglion; GFP, green fluorescent protein.\n\nprotein) neurons 28 days after sham surgery or SNItrans (Figs. 3A and B). SNItrans, but not sham, resulted in a significant decrease (54.0 ± 6.6%) in the total number of MrgD-YFP+ neurons in L4 DRG (Fig. 3C).\n\nYellow fluorescent protein expression in MrgDChR2-YFP mice is driven by the endogenous Mrgprd promotor, which has been reported to be upregulated or downregulated following axonal damage.44,58 Such changes in promoter activity could affect the proportion of nonpeptidergic nociceptors identified by YFP expression. 
Therefore, to verify these findings, we used MrgDCreERT2;Ai32 mice and tamoxifen administration before injury, to permanently label Mrgprd-expressing afferents with ChR2-YFP (Figs. 3D–F). We then tested whether the proportion of cutaneous tibial afferents that were YFP1 was altered following nerve injury. Following hindpaw FB injection, ;15% of contralateral, FB-labelled DRG neurons expressed YFP. This was reduced to 6.0 6 1.2% 28 days after SNIcrush injury and to only 1.7 6 0.9% 28 days after SNItrans (Fig. 3G). Uptake by uninjured YFP1 neurons was equivalent 7 and 35 days after FB injection, demonstrating that this reduction was not because 7 days were insufficient for YFP1 neurons to fully uptake FB (Fig. S3C, http:// links.lww.com/PAIN/C84). No significant difference in the percentage of FB-labelled YFP1 DRG neurons between ipsilateral and contralateral DRG was observed at 7 days following SNItrans (Figs. S4A and B, http://links.lww.com/PAIN/C84), demonstrating that loss occurred after this timepoint. Analysis of the crosssectional soma area of FB-labelled, YFP1 neurons in uninjured DRG revealed an area of 361 6 138 mm2 (mean 6 SD) (Fig. S4C, http://links.lww.com/PAIN/C84), which is a distribution profile matching those neurons presumed lost. Collectively, these data show that peripheral nerve injury results in a substantial loss of nonpeptidergic, Mrgprd-expressing neurons, with SNItrans (ie, an unrepaired axonal transection) resulting in an almost complete loss of this population.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed2.pdf" - }, - { - "text": "neuron loss after nerve injury and to test the hypothesis that loss is not equally distributed across molecular populations.\n\n# 2. Methods\n\n#### 2.1. Animals\n\nMice were housed in groups in humidity- and temperature-controlled rooms with free access to food and water, on a 12-hour light–dark cycle, and with environmental enrichment. 
Animal procedures were performed under a UK Home Office Project Licence and in accordance with the UK Home Office (Scientific Procedures) Act (1986). All studies were approved by the Ethical Review Process Applications Panel of the University of Glasgow or Oxford and conform to the ARRIVE guidelines. Experiments were performed on adult male and female mice aged 7 to 16 weeks at the start of the experiments. All experimental cohorts contained a mix of male and female mice, apart from the cohort of MrgprdCreERT2;Ai32 mice that underwent SNIcrush surgery, which was exclusively female. Details of transgenic lines are provided in Table 1. Tamoxifen was administered by i.p. injection of 20 mg/mL tamoxifen (Sigma-Aldrich) dissolved in wheat germ oil (doses described in Table 1). There were 2 instances where animals were excluded from data analysis: One (cyan fluorescent protein) Thy1-CFP died of unknown causes not related to the procedure and before the experimental endpoint, and one MrgDCreERT2;Ai32 exhibited no fluorophore expression and was therefore deemed to have been incorrectly genotyped. Group sizes were based on the extent of neuronal loss 28 d following sciatic nerve transection identified by Shi et al.50 Given α = 0.05, power = 0.8, and an effect size of 4.81, power analysis projects that a group size of 3 mice would be needed.\n\n#### 2.2. Spared nerve transection and crush surgeries\n\nSpared nerve injury (transection of the common peroneal and tibial branches of the sciatic nerve; SNItrans) and common peroneal and tibial crush injury (SNIcrush), in which nerve axons were severed but the epineurium remained intact, were performed as previously described.12 Anesthesia was induced with 3% to 5% isoflurane and then maintained at 1.5% to 2% as required. Analgesia, consisting of carprofen (10 mg/kg) and buprenorphine (0.05 mg/kg) (Glasgow) or carprofen (5 mg/kg) and local bupivacaine (2 mg/kg) (Oxford) was provided perioperatively. 
The left hindpaw was secured with tape in hip abduction, and the operative field (lateral surface of the thigh) was shaved. Ophthalmic ointment was applied to the eyes, and the shaved area was swabbed with chlorhexidine solution. A longitudinal incision was made in the skin at the lateral mid-thigh. Using blunt dissection, an opening was made through the biceps femoris, exposing the sciatic nerve and the 3 peripheral branches (sural, tibial, and common peroneal nerves). For SNItrans, the common peroneal and tibial nerves were ligated using a 6-0 Vicryl suture (Ethicon, Raritan, NJ), and a 1- to 2-mm piece distal to the suture was removed using spring scissors. For SNIcrush, the exposed tibial and common peroneal nerves were clamped using a pair of fine hemostats (Fine Science Tools, Heidelberg, Germany) closed to their second clip, leaving the nerve branches intact but translucent. The muscle was closed with one 6-0 Vicryl suture (Ethicon), and the skin incision was closed with one 10 mm wound clip (Alzet, Cupertino, CA). Animals were monitored daily for self-mutilation, and no animals required sacrifice due to tissue damage.\n\n#### Table 1\n\n#### Transgenic lines used in the study.\n\n| Used name | Full name | Putative population | Ref | Source | Tamoxifen regime |\n| --- | --- | --- | --- | --- | --- |\n| Atf3CreERT2 | Atf3tm1.1(cre/ERT2)Msra | Axotomised afferents | 13 | Gift: Dr Franziska Denk | 50 mg/kg on days 0, 3, and 7 after surgery |\n| AvilFlpO | Aviltm1(flpo)Ddg | Sensory neurons | 1 | Gift: Prof David Ginty | N.A. |\n| MrgDCreERT2 | Mrgprdtm1.1(cre/ERT2)Wql | Major class of nonpeptidergic | 39 | The Jackson Laboratory (RRID: | General: 1x 50 mg/kg in adulthood, (.1 week |\n| | | neurons | | IMSR_JAX:031286) | before experiment) |\n| | | | | | 3D volumetric analysis: 5x i.p. 
(0.5 mg/animal/ |\n| | | | | | day), beginning between P10 and P17 |\n| MrgDChR2- | Mrgprdtm4.1(COP4)Mjz | Major class of nonpeptidergic | 59 | Mutant Mouse Resource & Research | N.A. |\n| YFP | | neurons | | Centers (RRID:MMRRC_036112-UNC) | |\n| CalcaCreERT2 | Calcatm1.1(cre/ERT2)Ptch | Peptidergic neurons | 51 | Gift: Prof Pao-Tien Chuang | 1x 75 mg/kg in adulthood (.1 week before |\n| | | | | | experiment) |\n| Trpm8FlpO | | Cold afferents | 4 | Gift: Dr Mark Hoon | N.A. |\n| Thy1-CFP | B6.Cg-Tg(Thy1-CFP) | Sample of myelinated afferents | 16 | The Jackson Laboratory (RRID: | N.A. |\n| | 23Jrs/J | | | IMSR_JAX:003710) | |\n| ThCreERT2 | Thtm1.1(cre/ERT2)Ddg/J | C low threshold | 1 | Gift: Prof David Ginty; The Jackson | 1x 50 mg/kg in adulthood (.2 weeks before |\n| | | mechanoreceptors | | Laboratory (RRID:IMSR_JAX:025614) | experiment) |\n| RC::FLTG | B6.Cg- Gt(ROSA) | Flp-mediated tdTomato; | 40 | The Jackson Laboratory (RRID: | N.A. |\n| | tm1.3(CAG-tdTomato,- 26Sor | Cre1Flp-mediated GFP | | IMSR_JAX:026932) | |\n| | EGFP)Pjen /J | expression | | | |\n| Ai14 | B6.Cg- Gt(ROSA) | Cre-mediated tdTomato | 33 | The Jackson Laboratory (RRID: | N.A. |\n| | tm14(CAG-tdTomato)Hze 26Sor / | expression | | IMSR_JAX:007914) | |\n| J | | | | | |\n| Ai32 | B6.Cg- Gt(ROSA) | Cre-mediated ChR2-eYFP | 32 | The Jackson Laboratory (RRID: | N.A. 
|\n| | tm32(CAG 26Sor | expression | | IMSR_JAX:024109) | |\n| | COP4*H134R/EYFP)Hze | | | | |\n\nCFP, cyan fluorescent protein; GFP, Green fluorescent protein; YFP, yellow fluorescent protein.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed2.pdf" - }, - { - "text": "SNI-related gene expression signatures were less evident in Mrgprd-expressing and C-LTMR neurons at later timepoints, compared with other populations in injured DRG.3 This could be explained by a loss of axotomized neurons of these classes and therefore sampling of only uninjured neurons at this timepoint.24,43,64 In terms of the transcriptional response to injury, nonpeptidergic nociceptors show enrichment of individual proapoptotic factors early after injury,23,68 and we extend these results in this study, by describing a subpopulation-specific enrichment of GO terms associated with apoptosis that is evident as early as 3 days after injury. Such data and single-cell transcriptomic profiling of all DRG neurons following injury37,44 may offer the opportunity to elucidate the cell death pathways engaged and upstream effectors that enrich this process to nonpeptidergic nociceptive neurons.\n\n#### 4.3. Implications for pain pathogenesis\n\nNeuronal loss has been proposed as a key contributor to poor functional recovery following nerve injury,54 and biased survival of different afferent types might be expected to contribute to modality-specific sensory deficits. Beyond loss of function, does DRG neuron loss contribute to chronic pain, in either an adaptive or maladaptive manner? Intrathecal delivery of GDNF is neuroprotective and reverses the reduction in the number of IB4-binding DRG neurons and central terminals seen following transection.5 Treatment is concurrently analgesic and abrogates pain-related behaviors.7,60 However, the pleiotropic nature of GDNF makes it impossible to directly attribute the analgesic effects to the reversal of neuron loss. 
Indeed, it is possible that GDNF exerts its effect by actions on intact nonpeptidergic nociceptive afferents,52 activation of which is known to drive aversive behaviors in the neuropathic state.62 These data leave the contribution of nonpeptidergic nociceptor loss to behavior in the GDNF treatment paradigm ambiguous. Other pharmacological approaches have been found effective at reversing a neuronal loss in rodent models, but the impact on pain behavior was not studied.21,22\n\nRodents develop marked mechanical and thermal hypersensitivity rapidly following nerve injury and before timepoints at which neuron loss is observed.10 This lack of a temporal correlation may suggest a limited contribution to evoked hypersensitivities. The temporal profile of ongoing tonic pain (eg, pain aversiveness as measured by conditioned place preference assays26) is less defined and so is its correlation to the timing of neuron loss.\n\nThere are many anatomical sites within the somatosensory nervous system where differential loss of sensory neuron populations could impact neurobiology. For example, loss of cutaneous afferents may afford more opportunity for plasticity in reinnervation patterns, such as collateral sprouting of uninjured or surviving afferents, and the types of nerve endings made by different molecular subpopulations.17,27 It also seems likely that the death of many neurons within a DRG could contribute to the expansion and activation of immune cell types, which are known to play a major role in neuropathic pain.30,69 Finally, under normal conditions, peripheral sensory input is integrated into the dorsal horn of the spinal cord by complex interneuron circuitry. 
Many spinal circuits are engaged by convergent input from different afferent types.9,41,70 Therefore, selective loss of input from discrete afferent types could undoubtedly impact the normal processing of remaining afferent signals.34 Experimentally abrogating neuronal loss may be a fruitful approach to assess the contribution to nervous system plasticity (adaptive or maladaptive) following injury. In this regard, our in vitro readout would be a useful experimental platform to help delineate the precise cell death pathways and signaling cascades engaged (which could then be experimentally manipulated). Such studies should consider that plasticity may evolve over time. The loss of IB4+ central terminals is transient following crush and has even been observed to reverse at longer timepoints following SNItrans.36 These observations, in conjunction with ours of loss of neurons, raise the intriguing question of the source of such central reinnervation.\n\n#### 4.4. Study limitations\n\nOur efforts focused on traumatic nerve injury paradigms owing to previous contrasting results using these robust and reproducible experimental models. We did not extend our studies to systemic neuropathy models, such as chemotherapy or diabetic neuropathy. A recent postmortem analysis reported a neuronal loss in the DRG from patients with painful diabetic peripheral neuropathy.19 Transcriptional responses vary substantially across different nerve insults,44 so it would be of interest to test whether neuronal loss and the subpopulation vulnerability reported in this study are common features across different types of insults.\n\nUsing multiple approaches, we assess the naïve mouse L4 DRG to contain approximately 8000 neurons, consistent with a previous estimate,67 and observed a frank loss of small-diameter neurons following injury. 
However, the extent of loss observed using our semiautomated approach was less than that observed using manual techniques.67 Two major limitations in this study may explain this discrepancy: First, owing to technical issues, the cleared DRG dataset is unpaired ipsilateral–contralateral which adds larger variability. Second, the analysis method is prone to undercounting deep nuclei. The signal-to-noise is better for superficial nuclei and smaller tissue volumes. Given the reduction in DRG volume after SNItrans, nuclei in larger contralateral DRG may be undercounted.\n\nWhile we made efforts to profile the loss of several molecularly discrete sensory neuron populations, we acknowledge that not all subtypes were profiled. Furthermore, recent single-cell RNA sequencing has given us a more granular appreciation of the heterogeneity of sensory neurons.42 Future studies could leverage our experimental approach and new transgenic lines to characterize the loss of neurons in more detail. Such experiments may be pertinent before embarking on molecular or functional profiling of populations post–nerve injury.\n\n#### 4.5. Conclusions\n\nIn sum, we have provided data from multiple complementary experimental approaches to support the hypothesis that DRG neurons are lost following nerve injury in mice. We describe a substantial loss, which is biased towards specific subpopulations and particularly present in small-diameter nonpeptidergic nociceptive neurons.\n\n# Conflict of interest statement\n\nD.L.B. has acted as a consultant in the last 2 years for AditumBio, Biogen, Biointervene, Combigene, LatigoBio, GSK, Ionis, Lexicon therapeutics, Neuvati, Olipass, Orion, Replay, SC Health Managers, Theranexus, Third Rock Ventures, and Vida Ventures on behalf of Oxford University Innovation. D.L.B. has received research funding from Lilly and Astra Zeneca, and G.A.W. has received research funding from Ono Pharmaceutical. D.L.B. 
has received", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 4. Spared nerve injury induces a loss of Trpm8+ and CGRP+ but not myelinated DRG neurons. (A) Schematic of experimental approach. (B–D) FastBlue labelling and Trpm8-tdTom (B), Calca-YFP (C), or Thy1-CFP expression (D) 28 days after SNItrans in the L4 DRG, contralateral (top) or ipsilateral (bottom) to injury. Images are projections of optical sections at 3-μm intervals through the entirety of 30-μm-thick tissue sections. Scale bars = 100 μm. (E–G) Quantification of the proportion of FB-labelled neurons also expressing Trpm8-tdTom (E), Calca-YFP (F), or Thy1-CFP (G) in L4 DRG contralateral or ipsilateral to SNItrans. Paired t tests; Trpm8-tdTom: t2 = 5.31, P = 0.034, n = 3 mice; Calca-YFP: t3 = 4.12, P = 0.026, n = 4 mice; Thy1-CFP: t3 = 4.42, P = 0.022, n = 4 mice. *P < 0.05. CFP, cyan fluorescent protein; CGRP, calcitonin gene-related peptide; DRG, dorsal root ganglion; FB, FastBlue.\n\nby a population of small-diameter, putative cold-sensitive neurons (Fig. 4B), accounting for 8.3 ± 0.27% of FB-labelled neurons in contralateral DRG. This decreased to 4.2 ± 0.96% ipsilateral to SNItrans injury (Fig. 4E), indicating a partial loss of Trpm8+ afferents. When examining peptidergic afferents, we found that 48.1 ± 2.42% of FB-labelled neurons in contralateral DRG were Calca-YFP+, compared with 34.3 ± 2.54% 4 weeks after SNItrans injury (Figs. 4C and F), consistent with a partial loss of CGRP+ afferents. We used a Thy1-CFP line that demonstrates consistent expression postinjury61 and labels a sample of medium/large diameter myelinated afferents. CFP was largely restricted to NF200+ neurons, labelling 56% of this population. Expression was present in a heterogenous population of nociceptive (TrkA+) and nonnociceptive (TrkA-) myelinated neurons (Fig. S5, http://links.lww.com/PAIN/C84). 
Contralateral to injury, 15.6 ± 1.8% of FB-labelled neurons expressed Thy1-CFP (Figs. 4D and G). In contrast to unmyelinated subpopulations, this proportion was higher in ipsilateral DRG following SNItrans (23.3 ± 3.2%), consistent with no (or minimal) loss of Thy1-CFP-expressing afferents, accompanied by a loss of Thy1-CFP-negative neurons. We did not observe significant alterations in the population distributions of the cross-sectional area of surviving, damaged Trpm8-tdTom+, Calca-YFP+, or Thy1-CFP+ DRG neurons when compared with DRG contralateral to", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 3. Spared nerve crush or transection results in death of nonpeptidergic neurons. (A) Schematic of experimental approach for (B and C). (B) MrgDChR2-YFP L4 DRGs 4 weeks after SNI, contralateral or ipsilateral to injury. Images are projections of optical sections at 3-μm intervals through the entirety of 30-μm-thick tissue sections. Scale bars = 100 μm. (C) Quantification of total number of MrgD-YFP+ cells per L4 DRG 4 weeks after SNI revealed a significant loss in ipsilateral DRG. Two-way RM ANOVA with Šídák multiple comparisons tests; Side × Treatment interaction: F1,5 = 9.23, P = 0.029; n = 3 mice. (D) The experimental approach used to generate data presented in (E–G). (E and F) MrgD-YFP expression and FB labelling in the L4 DRG, 14 days after SNI or crush surgery or contralateral to injury. White boxes represent regions enlarged in (F). Scale bars = 100 μm (E) or 20 μm (F). (G) The proportion of FB-labelled DRG neurons decreased after spared nerve crush injury, and co-labelling is almost completely absent after SNI. Two-way RM ANOVA with Šídák multiple comparisons tests; side × injury interaction: F1,4 = 7.80, P = 0.049; n = 3 mice. Posttests: *P < 0.05, **P < 0.01. 
ANOVA, analysis of variance; DRG, dorsal root ganglion; SNI, spared nerve injury; FB, FastBlue; RM, repeated measures.", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed2.pdf" - }, - { - "text": "- [30] Liang Z, Hore Z, Harley P, Uchenna Stanley F, Michrowska A, Dahiya M, La Russa F, Jager SE, Villa-Hernandez S, Denk F. A transcriptional toolbox for exploring peripheral neuroimmune interactions. PAIN 2020; 161:2089–106.\n- [31] Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol 2014;15:550.\n- [32] Madisen L, Mao T, Koch H, Zhuo J, Berenyi A, Fujisawa S, Hsu YWA, Garcia AJ, Gu X, Zanella S, Kidney J, Gu H, Mao Y, Hooks BM, Boyden ES, Buzs ´aki G, Ramirez JM, Jones AR, Svoboda K, Han X, Turner EE, Zeng H. A toolbox of Cre-dependent optogenetic transgenic mice for light-induced activation and silencing. Nat Neurosci 2012;15:793–802.\n- [33] Madisen L, Zwingman TA, Sunkin SM, Oh SW, Zariwala HA, Gu H, Ng LL, Palmiter RD, Hawrylycz MJ, Jones AR, Lein ES, Zeng H. A robust and high-throughput Cre reporting and characterization system for the whole mouse brain. Nat Neurosci 2010;13:133–40.\n- [34] McCoy ES, Taylor-Blake B, Street SE, Pribisko AL, Zheng J, Zylka MJ. Peptidergic CGRPa primary sensory neurons encode heat and itch and tonically suppress sensitivity to cold. Neuron 2013;78:138–51.\n- [35] McKay Hart A, Brannstrom T, Wiberg M, Terenghi G. Primary sensory neurons and satellite cells after peripheral axotomy in the adult rat: timecourse of cell death and elimination. Exp Brain Res 2002;142:308–18.\n- [36] Molander C, Wang H, Rivero-Meli ´an C, Grant G. Early decline and late restoration of spinal cord binding and transganglionic transport of isolectin B4 from Griffonia simplicifolia I after peripheral nerve transection or crush. Restor Neurol Neurosci 1996;10:123–33.\n- [37] Nguyen MQ, Le Pichon CE, Ryba N. 
Stereotyped transcriptomic transformation of somatosensory neurons in response to injury. Elife 2019;8:e49679.\n- [38] Oliveira ALR. Apoptosis of sensory neurons and satellite cells after sciatic nerve transection in C57BL/6J mice. Braz J Med Biol Res 2001;34: 375–80.\n- [39] Olson W, Abdus-Saboor I, Cui L, Burdge J, Raabe T, Ma M, Luo W. Sparse genetic tracing reveals regionally specific functional organization of mammalian nociceptors. Elife 2017;6:e29507.\n- [40] Plummer NW, Evsyukova IY, Robertson SD, de Marchena J, Tucker CJ, Jensen P. Expanding the power of recombinase-based labeling to uncover cellular diversity. Development 2015;142:4385–93.\n- [41] Prescott SA, Ratt ´e S. Pain processing by spinal microcircuits: afferent combinatorics. Curr Opin Neurobiol 2012;22:631–9.\n- [42] Qi L, Iskols M, Shi D, Reddy P, Walker C, Lezgiyeva K, Voisin T, Pawlak M, Kuchroo VK, Chiu I, Ginty DD, Sharma N. A DRG genetic toolkit reveals molecular, morphological, and functional diversity of somatosensory neuron subtypes. bioRxiv 2023.2023.04.22.537932.\n- [43] Reid AJ, Mantovani C, Shawcross SG, Terenghi G, Wiberg M. Phenotype of distinct primary sensory afferent subpopulations and caspase-3 expression following axotomy. Histochem Cell Biol 2011;136:71–8.\n- [44] Renthal W, Tochitsky I, Yang L, Cheng YC, Li E, Kawaguchi R, Geschwind DH, Woolf CJ. Transcriptional reprogramming of distinct peripheral sensory neuron subtypes after axonal injury. Neuron 2020; 108:128–44.e9.\n- [45] Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B, Tinevez J-Y, White DJ, Hartenstein V, Eliceiri K, Tomancak P, Cardona A. Fiji: an open-source platform for biological-image analysis. Nat Methods 2012;9:676–82.\n- [46] Schmalbruch H. Loss of sensory neurons after sciatic nerve section in the rat. Anat Rec 1987;219:323–9.\n- [47] Schmitz C, Hof PR. Design-based stereology in neuroscience. 
Neuroscience 2005;130:813–31.\n- [48] Schulte A, Degenbeck J, Aue A, Schindeh ¨utte M, Schlott F, Schneider M, Monoranu CM, Bohnert M, Pham M, Antoniadis G, Blum R, Rittner HL. Human dorsal root ganglia after plexus injury: either preservation or loss of the multicellular unit. bioRxiv 2023.02.06.526934.\n- [49] Schulte A, Lohner H, Degenbeck J, Segebarth D, Rittner HL, Blum R, Aue A. Unbiased analysis of the dorsal root ganglion after peripheral nerve injury: no neuronal loss, no gliosis, but satellite glial cell plasticity. PAIN 2023;164:728–40.\n- [50] Shi TJS, Tandrup T, Bergman E, Xu ZQD, Ulfhake B, H ¨okfelt T. Effect of peripheral nerve injury on dorsal root ganglion neurons in the C57 BL/6J\n\nmouse: marked changes both in cell numbers and neuropeptide expression. Neuroscience 2001;105:249–63.\n\n- [51] Song H, Yao E, Lin C, Gacayan R, Chen MH, Chuang PT. Functional characterization of pulmonary neuroendocrine cells in lung development, injury, and tumorigenesis. Proc Natl Acad Sci 2012;109:17531–6.\n- [52] Takasu K, Sakai A, Hanawa H, Shimada T, Suzuki H. Overexpression of GDNF in the uninjured DRG exerts analgesic effects on neuropathic pain following segmental spinal nerve ligation in mice. J Pain 2011;12: 1130–1139.\n- [53] Tandrup T, Woolf CJ, Coggeshall RE. Delayed loss of small dorsal root ganglion cells after transection of the rat sciatic nerve. J Comp Neurol 2000;422:172–80.\n- [54] Terenghi G, Hart A, Wiberg M. The nerve injury and the dying neurons: diagnosis and prevention. J Hand Surg Eur Vol 2011;36:730–4.\n- [55] Usoskin D, Furlan A, Islam S, Abdo H, Lonnerberg P, Lou D, Hjerling-Leffler J, Haeggstrom J, Kharchenko O, Kharchenko PV, Linnarsson S, Ernfors P. Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing. Nat Neurosci 2015;18:145–53.\n- [56] Vestergaard S, Tandrup T, Jakobsen J. Effect of permanent axotomy on number and volume of dorsal root ganglion cell bodies. 
J Comp Neurol 1997;388:307–12.\n- [57] Wall PD, Gutnick M. Properties of afferent nerve impulses originating from a neuroma. Nature 1974;248:740–43.\n- [58] Wang C, Gu L, Ruan Y, Geng X, Xu M, Yang N, Yu L, Jiang Y, Zhu C, Yang Y, Zhou Y, Guan X, Luo W, Liu Q, Dong X, Yu G, Lan L, Tang Z. Facilitation of MrgprD by TRP-A1 promotes neuropathic pain. FASEB J 2019;33: 1360–73.\n- [59] Wang H, Zylka MJ. Mrgprd-expressing polymodal nociceptive neurons innervate most known classes of substantia gelatinosa neurons. J Neurosci 2009;29:13202–9.\n- [60] Wang R, Guo W, Ossipov MH, Vanderah TW, Porreca F, Lai J. Glial cell line-derived neurotrophic factor normalizes neurochemical changes in injured dorsal root ganglion neurons and prevents the expression of experimental neuropathic pain. Neuroscience 2003; 121:815–24.\n- [61] Wang X, Archibald ML, Stevens K, Baldridge WH, Chauhan BC. Cyan fluorescent protein (CFP) expressing cells in the retina of Thy1-CFP transgenic mice before and after optic nerve injury. Neurosci Lett 2010; 468:110–4.\n- [62] Warwick C, Cassidy C, Hachisuka J, Wright MC, Baumbauer KM, Adelman PC, Lee KH, Smith KM, Sheahan TD, Ross SE, Koerber HR. MrgprdCre lineage neurons mediate optogenetic allodynia through an emergent polysynaptic circuit. PAIN 2021;162:2120–31.\n- [63] Weir GA, Middleton SJ, Clark AJ, Daniel T, Khovanov N, McMahon SB, Bennett DL. Using an engineered glutamate-gated chloride channel to silence sensory neurons and treat neuropathic pain at the source. Brain 2017;140:2570–85.\n- [64] Welin D, Novikova LN, Wiberg M, Kellerth JO, Novikov LN. Survival and regeneration of cutaneous and muscular afferent neurons after peripheral nerve injury in adult rats. Exp Brain Res 2008;186:315–23.\n- [65] West CA, Davies KA, Hart AM, Wiberg M, Williams SR, Terenghi G. Volumetric magnetic resonance imaging of dorsal root ganglia for the objective quantitative assessment of neuron death after peripheral nerve injury. 
Exp Neurol 2007;203:22–33.\n- [66] West CA, Ljungberg C, Wiberg M, Hart A. Sensory neuron death after upper limb nerve injury and protective effect of repair: clinical evaluation using volumetric magnetic resonance imaging of dorsal root ganglia. Neurosurgery 2013;73:632–40.\n- [67] West SJ, Bonboire D, Bennett DL. StereoMate: 3D stereological automated analysis of biological structures. bioRxiv 2020:648337.\n- [68] Wiberg R, Novikova LN, Kingham PJ. Evaluation of apoptotic pathways in dorsal root ganglion neurons following peripheral nerve injury. Neuroreport 2018;29:779–85.\n- [69] Yu X, Liu H, Hamel KA, Morvan MG, Yu S, Leff J, Guan Z, Braz JM, Basbaum AI. Dorsal root ganglion macrophages contribute to both the initiation and persistence of neuropathic pain. Nat Commun 2020;11:264.\n- [70] Zheng J, Lu Y, Perl ER. Inhibitory neurones of the spinal substantia gelatinosa mediate interaction of signals from primary afferents. J Physiol 2010;588:2065–75.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed2.pdf" - } - ] - }, - { - "references": { - "source_file": "basic-english-language-skills.PDF", - "query": "Does the Oxbridge Academy have a guide on how to apply to college?", - "target_page": 21, - "target_passage": "To make the college registration process easier for you, we’ve compiled a comprehensive guide on how to register at Oxbridge Academy (www.oxbridgeacademy.co.za/enrol-now/).", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "# CHAPTER 5:\n\n## TIPS FOR FILLING IN YOUR COLLEGE REGISTRATION FORM\n\nApplying for college (www.oxbridgeacademy.co.za/enrol-now/) can be a daunting experience. 
Not only do you need to choose a course, but you also need to make sure that you:\n\n- meet the entry requirements\n- meet the deadlines\n- fill in the forms correctly\n- send the forms to the right address\n- include all the necessary attachments\n\nTo make the college registration process easier for you, we've compiled a comprehensive guide on how to register at Oxbridge Academy (www.oxbridgeacademy.co.za/enrol-now/). The guide also includes general tips that will be relevant to the application and registration processes at other colleges.\n\n#### **There are 4 steps you need to follow when you want to register as a student at Oxbridge Academy:**\n\n- **1.** Select Your Course\n- **2.** Fill in Your Student Details\n- **3.** Select Your Delivery Option\n- **4.** Pay Your Registration Fee and Send in Your Form", - "page_start": 20, - "page_end": 20, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# Did you enjoy reading this book?\n\nJoin our online social community and share your opinion:\n\nwww.facebook.com/oxbridgeacademysa twitter.com/oxbridgeEdu www.linkedin.com/company/oxbridge-academy\n\nOxbridge Academy is an established distance learning college offering skills courses, national qualifications, and internationally recognised courses to students in South Africa and abroad.\n\nWith our head office in Stellenbosch in the Western Cape, we cater to our students' needs by recruiting industry-expert tutors to provide academic assistance via telephone and e-mail, as well as by designing our study material in such a way that it is clear, simple, and easy for our students to understand.\n\nWith us, studying from home is easy, affordable, and convenient.\n\n### CONTACT NUMBERS:\n\nTel: 021 1100 200 Tel:+2721 883 2454 (international) Fax: 086 111 2121 Fax: +2721 883 2378 (international)\n\nWhatsapp: 0605671585 Email: info@oxbridgeacademy.co.za\n\nPostal Address: PO Box 12723, Die Boord, Stellenbosch, 7613\n\nWe are registered with the 
Department of Higher Education and Training as a Private College in terms of Section 31(6)(a) of the Continuing Education and Training Act, 2006 (Act No. 16 of 2006). Registration No. 2009/FE07/070.", - "page_start": 58, - "page_end": 58, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### STEP 1 – SELECT YOUR COURSE\n\n| Oxbridge Academy Short Course: Marketing Management |\n| --- |\n| ADV101 |\n\nBefore you start filling in the registration form, you need to choose your course. Once you've identified the course that you would like to study, remember to check that you meet the entry requirements.\n\nYou can find the course name and course code for your chosen course on the relevant detailed course information page on our website. Have a look at the example in the screenshot below (the course name and course code are circled in red):\n\n| 021 110 0200 |\n| --- |\n| HOME ABOUT US COURSES s excellence in education |\n| Oxbridge Academy Short Course: Marketing Management |\n| Home / Oxbridge Academy snore |\n| This short course is designed to introduce you to the field of marketing management. It will equip you with the knowledge and skills you need to define the marketing concept, apply marketing decision-making, and explain marketing opportunities. |\n| Course code: |\n| ADV101 |\n| Accreditation status: |\n| This is an Oxbridge Academy Skills Course. |\n\nPlease make sure to check the accreditation status of your chosen course. Some of our courses are non-credit bearing skills development courses, which are neither accredited by external bodies nor registered on the NQF. 
Please go to our website: *oxbridgeacademy.co.za* for more information about our skills development courses.", - "page_start": 21, - "page_end": 21, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## A Summary of the Registration Process at Oxbridge Academy\n\n#### SEND YOUR REGISTRATION FORM\n\nSend your registration form to the registrations office at Oxbridge Academy via one of the following channels:\n\nFax: 086 262 5550 Post: PO Box 12723, Die Boord, 7613 E-mail: registrar@oxbridgeacademy.co.za\n\n#### FILL IN THE REGISTRATION FORM\n\n**2**\n\nThe registration form follows an easy-to-complete four step layout.\n\n#### IF YOU ARE REGISTERING FOR an ICB, or NATED COURSE\n\nmake sure to indicate your preferred exam centre.\n\n**3**\n\nAs soon as your details have been captured on our system you will receive confirmation of your registration via e-mail or SMS\n\n#### ATTACH THE FOLLOWING DOCUMENTS **6**\n\n- 1. Copy of your ID\n- 2. Proof of highest grade passed\n- 3. Proof of other qualifications\n- 4. Proof of payment\n\n**5**\n\n#### IF YOU ARE UNDER 18, OR IF YOU ARE UNEMPLOYED\n\nmake sure that your parent/guardian/guarantor signs the form.\n\n**4**\n\nPAY YOUR REGISTRATION FEE", - "page_start": 26, - "page_end": 26, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### IN THIS E-BOOK, WE'LL BE HELPING YOU TO:\n\n- Develop your basic English language skills.\n- Improve your English grammar.\n\nApply your language and communication skills in a business contexT. (www.oxbridgeacademy.co.za/find-a- course/business-administrationcourses/)\n\n> *\"Grammar is a litmus test. If job hopefuls can't distinguish between 'to' and too', their applications go into the bin\"*\n\nKyle Wiens, CEO of iFixit\n\n*\"Grammar often seems to be a low priority in education. 
Are school undervaluing grammar, given that employers may rule out applications with sloppy writing?\"*\n\nThe New York Times", - "page_start": 5, - "page_end": 5, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# CHAPTER 7:\n\n## HOW TO ASK FOR HELP FROM YOUR TUTOR\n\nAs a student, you are going to experience times when you need help with your studies. You might be unsure about an assignment question, you might be confused by a particular concept, or you might be stressed about the upcoming exams.\n\nAnd if you are studying via distance learning (www.oxbridgeacademy.co. za/distance-learning/), where you don't have any face-to-face interaction with lecturers, you will need to rely on your tutors for the necessary academic support.", - "page_start": 32, - "page_end": 32, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### HERE ARE 10 TIPS FOR HOW YOU CAN ACHIEVE HIGHER MARKS FOR YOUR WRITTEN ASSIGNMENTS:\n\n#### 1. Read (and follow) the instructions carefully.\n\nIf you are an Oxbridge Academy student, the general assignment guidelines will be provided in your \"Success\" Study Guide. Specific instructions will also be included at the beginning of each of your assignments.\n\n#### 2. Read the questions carefully.\n\nMake sure you understand what is being asked of you, so that you focus on answering the right questions, instead of providing irrelevant information.\n\n#### 3. Remember that presentation is important.\n\nNeatness, spelling, and the structure of your assignment will all count toward the mark that you receive for your assignment.\n\n#### 4. Use your course material and other external sources to find answers to the assignment questions.\n\nBut make sure to use your own words – don't just copy. You need to show the person marking your assignment that you have developed a sound understanding of the subject.\n\n#### 5. 
When you use external resources, remember to reference them properly, and to include them in a bibliography.\n\nIf you don't, you may be guilty of plagiarism (www.oxforddictionaries. com/definition/english/plagiarism), which is a serious offence.\n\n6. Always hand in your own work, and make sure that you use your own words when you formulate your answers.\n\n#### 7. When it comes to essay questions:\n\n- Plan/outline your answer before doing the final draft.\n- Remember that essays have titles, introductions, bodies, and conclusions.\n- Use headings and paragraphs to structure your answer.", - "page_start": 37, - "page_end": 37, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# Quick Start Guide\n\nNew to Word? Use this guide to learn the basics.", - "page_start": 0, - "page_end": 0, - "source_file": "Word QS.pdf" - }, - { - "text": "# CHAPTER 8:\n\n### TIPS FOR COMPLETING YOUR WRITTEN ASSIGNMENTS\n\nDepending on which course you study, you will either be assessed by means of written assignments, or through a combination of written assignments and exams. Assignments not only help to deepen your understanding of the work, but they often also count toward your final mark.\n\nIt is therefore important that you put effort into your assignments, and that you complete them to the best of your ability.\n\nWe realise that, like many other students, you might be unsure of how to go about completing your assignments, or that you might be afraid of failure.\n\nIf you are an Oxbridge Academy student, we'd like you to know that we are here to help you every step of the way, and that we will give you the opportunity to resubmit your assignments if you don't achieve a pass mark the first time around.", - "page_start": 36, - "page_end": 36, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### ASSIGNMENT\n\n- 1. Identify the verb in the following sentence:\nThe grey elephant drinks water from the largest lake in Africa.\n\n- 2. 
Identify the collective noun in the following sentence:\nThe board of directors voted in favour of the decision.\n\n- 3. Correct the punctuation in the following sentence:\nAnthea will you please buy bread milk and eggs when you go to the shop.\n\n- 4. Choose the correct word:\nCharles was accepted/excepted into the engineering studies course at Oxbridge Academy.\n\n- 5. Choose the correct word:\nIts/It's time to go home now.\n\n- 6. Choose the correct word:\nThey were late for work, because there/their train was delayed.\n\n7. Choose the correct word:\n\nYou're/Your going to write your exam next week.", - "page_start": 54, - "page_end": 54, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "basic-english-language-skills.PDF", - "query": "I have trouble writing effective summaries in English, do you have any tips?", - "target_page": 29, - "target_passage": "To make a good summary, you need to: • Keep it brief. • Make sure to use main headings and keywords. • Focus on the main ideas. • Classify and organise the information in a logical manner. • Use your own words where possible. • Include examples. • Remember that your summaries are there to help you", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### 9. Use correct grammar and spelling.\n\nThis will contribute to the clarity of your answers, and will prevent the person marking your paper from having to guess what you mean.\n\n#### 10. For longer questions and essay-style questions: plan your answers before you start writing.\n\nThis will help you to formulate logical arguments, as well as to structure your answers clearly. In essay questions, you will get marks for using the correct format, which includes making sure that you have an introduction, sub-headings and paragraphs, and a conclusion.\n\n#### 11. 
Where relevant, give examples.\n\nThis will help to demonstrate that you understand the topic.\n\n#### 12. If you are writing an open-book exam, keep in mind that you won't have enough time to look up all the answers.\n\nMake sure that you know your work, and that you know where to look for key information. These types of exams are more focused on testing your understanding than on testing your knowledge, which means that you need to have a thorough grasp of the work.\n\n#### 13. If you have to answer multiple-choice questions, make sure that you read the questions very carefully.\n\nTry to think of the correct answer before you read through the options, as you are less likely to become confused. When in doubt, go with your first instinct. If there is more than one correct answer, go with the answer that appears to be most correct.\n\n#### 14. If you start running out of time towards the end of the exam, write short notes as answers to each of the remaining questions, instead of trying to answer each question perfectly.\n\nThis way, you should still earn some marks for writing down the most important points.\n\n#### 15. If you have time left at the end of the exam, go back and read through your answers to make sure that you are happy with them.\n\n### tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips", - "page_start": 43, - "page_end": 43, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# CHAPTER 10:\n\n### LANGUAGE SKILLS AT WORK HOW TO WRITE A COVER LETTER\n\nIf you've ever applied for a job, you'll know that writing the cover letter is the most difficult part of almost any job application. Your cover letter creates the first impression, and often determines whether an employer will even look at your CV.\n\nYou need to use this opportunity to introduce yourself and your skills, and to set yourself apart from all the other candidates. 
You can also use this opportunity to explain any gaps in your CV, and to motivate why you are the right person for the job.\n\n### tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips tips", - "page_start": 44, - "page_end": 44, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# TABLE OF CONTENTS:\n\n- 1. General Language Tips to Get You Started\n- 2. Parts of Speech\n- 3. Punctuation\n- 4. Commonly Confused Words and Phrases\n- 5. Tips for Filling in Your College Registration Form\n- 6. Learn How to Summarise Your Study Material\n- 7. How to Ask for Help from Your Tutor\n- 8. Tips for Completing Your Written Assignments\n- 9. Tips for Answering Exam Questions\n- 10. Language Skills at Work How to Write a Cover Letter\n- 11. Language Skills at Work How to Write a Resignation Letter\n- 12. Language Skills at Work Sending E-mails to Your Colleagues", - "page_start": 2, - "page_end": 2, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### SUMMARIES\n\n#### General Tips for Making Summaries\n\n- Underline or highlight key points as you work through your study material, and make notes.\n- When you come across a word or concept you don't understand, look it up in a dictionary, or do some research on the concept, and add your own definition to your summary.", - "page_start": 31, - "page_end": 31, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### To start off with, here are a few tips for improving your general language and communication skills:\n\n- 1. Read as much as possible. Reading improves your vocabulary, and helps you to become familiar with sentence structure, word order, and the correct use of punctuation.\n- 2. Invest in a good dictionary. When you are unsure of the meaning of a word, or when you come across an unfamiliar word, make sure to look it up in your dictionary.\n- 3. Keep a journal. 
This will give you an opportunity to practice your writing skills on a regular basis.", - "page_start": 6, - "page_end": 6, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "19. You cannot use a dictionary when summarising your study material.\n\n20. Plagiarism is not a serious offence.\n\n21. When writing an exam, you should always answer the questions in numerical order.\n\n22. E-mail etiquette is important in the workplace.\n\n23. Mind maps help you to understand the relationships between concepts.\n\n24. When you answer an essay question, you should try to include as much information as possible.\n\nDo the following:\n\n25. Create a mind map to summarise Chapter 7 (How to Ask for Help from Your Tutor). (5)\n\n26. List 3 things you need to do if you want to earn good marks for your written assignments. (3)\n\n27. List 5 important things to keep in mind when writing a cover letter. (5)\n\n28. List 5 of the things that you should include in a resignation letter. (5)\n\n29. List 3 methods you can use to summarise your study material. (3)\n\n30. Give 2 examples of how good language skills can benefit your career. (2)\n\n31. Complete the following sentence:\n\nSummarising your study material gives you the opportunity to", - "page_start": 57, - "page_end": 57, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# CHAPTER 1:\n\n### GENERAL LANGUAGE TIPS TO GET YOU STARTED\n\nThis chapter focuses on the importance of language skills in the workplace, and covers basic tips for how you can improve your command of the English language.\n\n*\"The English language is nobody's special property. It is the property of the imagination. It is the property of the language itself\"*\n\n*Derek Walcott*", - "page_start": 3, - "page_end": 3, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "### HERE ARE 10 TIPS FOR HOW YOU CAN ACHIEVE HIGHER MARKS FOR YOUR WRITTEN ASSIGNMENTS:\n\n#### 1. 
Read (and follow) the instructions carefully.\n\nIf you are an Oxbridge Academy student, the general assignment guidelines will be provided in your \"Success\" Study Guide. Specific instructions will also be included at the beginning of each of your assignments.\n\n#### 2. Read the questions carefully.\n\nMake sure you understand what is being asked of you, so that you focus on answering the right questions, instead of providing irrelevant information.\n\n#### 3. Remember that presentation is important.\n\nNeatness, spelling, and the structure of your assignment will all count toward the mark that you receive for your assignment.\n\n#### 4. Use your course material and other external sources to find answers to the assignment questions.\n\nBut make sure to use your own words – don't just copy. You need to show the person marking your assignment that you have developed a sound understanding of the subject.\n\n#### 5. When you use external resources, remember to reference them properly, and to include them in a bibliography.\n\nIf you don't, you may be guilty of plagiarism (www.oxforddictionaries. com/definition/english/plagiarism), which is a serious offence.\n\n6. Always hand in your own work, and make sure that you use your own words when you formulate your answers.\n\n#### 7. When it comes to essay questions:\n\n- Plan/outline your answer before doing the final draft.\n- Remember that essays have titles, introductions, bodies, and conclusions.\n- Use headings and paragraphs to structure your answer.", - "page_start": 37, - "page_end": 37, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "# CHAPTER 9:\n\n### TIPS FOR ANSWERING EXAM QUESTIONS\n\n*You're sitting at a table in a room full of students, hunched over your exam paper, with your pen in hand. Your brain feels fried, and your hand is starting to cramp. 
You look at the clock, and you realise that you have only ten minutes left to answer Question 5b – which counts for 50 marks.*\n\nExams can be a stressful experience. To help reduce the stress and anxiety surrounding exams, and to help you achieve the best possible marks, we've compiled a list of exam-writing tips for you.\n\n# IMPROVE YOUR MARKS!", - "page_start": 41, - "page_end": 41, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "- Each paragraph should contain one main thought or idea, and there should be a logical link between each paragraph and the next.\n- Make sure that you focus on answering the question only include relevant information, and remember to present logical arguments in support of your answer.\n\n8. Proofread your assignment before handing it in. Tip: read your answers out loud to make sure that they sound logical.\n\n#### 9. Always keep a copy or electronic backup of your assignment.\n\nThis way, you won't have to start over if your computer crashes, or redo the whole assignment if the original goes missing.\n\n#### 10. When you get your assignment back from your tutor:\n\nRead through the feedback, and learn from your mistakes. 
This will help you to prepare for your exams (if you have to write them), as well as to help you achieve better marks in future assignments.\n\n### TYPES OF QUESTIONS THAT YOU WILL FREQUENTLY COME ACROSS IN ASSIGNMENTS\n\nIn your assignments, you will often be asked to write short paragraphs or longer essays in which you have to \"explain\" a particular concept, \"identify\" certain features, or \"prove\" a certain point.\n\nIt's sometimes difficult to figure out exactly what these questions mean -- which is why we are providing you with the following explanations:", - "page_start": 38, - "page_end": 38, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf", - "query": "Is exposure to risk factors that may affect mental wellbeing at work comparable across European countries?", - "target_page": 25, - "target_passage": "The country data vary significantly. Sweden, Greece and Luxembourg report over two-thirds such exposures, and Germany, Lithuania and Czechia one-third or less.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "In 2007, 2013 and 2020, **Eurostat** asked employed persons in its ad hoc surveys to the Labour Force Survey (LFS) whether they had **'… exposure to risk factors that can adversely affect mental wellbeing'**.10 In 2007 and 2013, the questions covered four items (time pressure and overload of work, violence or threat of violence, harassment and bullying, other factors). In the 2020 survey,11 'Mental well-being' was operationalised by an additional four response options, resulting in a total of eight options:12\n\n- *1. Severe time pressure or overload of work;*\n- *2. Violence or threat of violence;*\n- *3. Harassment or bullying;*\n- *4. Poor communication or cooperation within the organisation;*\n- *5. Having to deal with difficult customers, patients, pupils etc.;*\n- *6. 
Job insecurity;*\n- *7. Lack of autonomy, or lack of influence over the work pace or work processes; and*\n- *8. Another significant risk factor for mental well-being.*\n\nForty-five per cent of the employed persons reported being exposed to risk factors that can adversely affect mental wellbeing. The country data vary significantly. Sweden, Greece and Luxembourg report over two-thirds such exposures, and Germany, Lithuania and Czechia one-third or less.13", - "page_start": 24, - "page_end": 24, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Some of these groups are **directly addressed by European and national legislation**, for example, workers with disabilities, young workers or pregnant women. For other groups of workers, for example, for women or migrant workers, the legislative protection is formulated as a general 'equal treatment' prescription, like to provide preventive measures for all groups in an enterprise (Framework Directive, Article 15 'Risk groups'), or to provide solutions that fit to the individual (Framework Directive, Art. 6.2.d.). There are some prescriptions that refer to specific preventive activities, for example, to provide written instructions in different languages for safe work with chemicals.\n\n### **3.6 Conclusions**\n\nThe exposure **to psychosocial risks** is increasing, with mental health prevalence still emerging. Major work-related exposures have grown in the past 15 to 25 years that is, time pressure, difficult clients, longer working hours and poor communication. There is also some evidence that countries with overaverage employment in sectors like health and care or other human and client-oriented services (education, social work, tourism, entertainment) suffer from longer working hours and more mental burden. The northern countries are at the top of the countries with highest mental burden. 
The southern countries have a high share of specific psychosocial risks related to work in tourism and entertainment, characterised by atypical working times and issues with difficult clients.\n\n#### EU-OSHA found in its ESENER 2014 data analysis:112\n\n*'Concerning the sectors, national context appears to be related to differences in psychosocial risk management in all types of organisations, although in some sectors this relationship is weak. In the agriculture, forestry and fishing sector and the sectors of mining, construction, electricity, trade, transport, and accommodation and food, the low level of psychosocial risk management is observed also in a favourable national context. An explanation for this finding might relate to the large proportion of small organisations in these sectors, which, as concluded earlier, have poorer psychosocial risk management independently of the national context.'*\n\nThere is a stable **block of 'conventional' physical health risks** — ergonomics and risk from the work environment — and ergonomic risks that did not significantly change since 1990. It varies between 15% for exposure to smoke, fumes and dusts to over 60% for repetitive hand/arm movements. **Ergonomic risks** develop in two directions: 1) traditional risks stagnate in total, that is, lifting and moving heavy loads, painful or tiring positions, and shifts between sectors (from industry to transport, health and care); 2) risks of inactivity and highly repetitive hand/arm movements increase. Beside sectoral and occupational differences, it can be noted that in general higher percentages of exposed employed persons (workers and self-employed) are working in eastern and southern Member States.\n\nSince 2006 the average **working time** per week went down by 15 minutes for employees, and a slight reduction of most atypical — or unsocial — working times can be observed. Work intensification has emerged until 2005 but seems to stagnate since then. 
There are strong indications but no quantitative evidence on the extent to which working long hours, work at atypical times and probably also work with higher risks were **transferred to workers in non-standard types of employment**.\n\n**Non-standard forms of employment** are — according to EU-OSHA — characterised by a nonpermanent employment contract and the work not being performed at the premises of the employer. Most studies that dealt with the **connection between the employment forms and health outcomes** and in particular safety and health aspects found significant correlations. **New forms of employment** have a wider spectrum of contract types — e.g. voucher, platform — and of places of work — for many types of work practically everywhere.\n\n**Non-standard locations of work** — mobile work, homes as workplaces, domestic and care work have as common characteristics special conditions concerning implementation of OSH standards and legislation, be it for technical or legal reasons. Quantitative evidence on working conditions in these types of work is less available than for stationary workplaces; moreover, the OSH responsibility can be blurred. **Mobile ICT work** is a field of new contractual arrangements that besides other aspects in", - "page_start": 58, - "page_end": 58, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "### **3.1 Psychosocial risks at work**\n\nDuring the last 30 years, the scientific, political and practical discussions on **psychosocial risks** and preventive measures against psychosocial risks have gained strong importance. 
After a period of doubts and resistance, today they are regarded as risks of the same severity as the classical physical safety and health risks.4 (Chapter 1 covers the psychosocial risk aspect; for the prevalence of mental diseases and the burden of mental diseases see Chapter 2.2.5)\n\nLooking at the steady increase of certain psychosocial risk indicators at workplace level, either the **risks have increased** and/or the **number of people working in occupations** with higher psychosocial risks has increased.6,7 This is valid, for example, for the indicator time pressure, for example, in delivery services, transport, and often also clerical work; the workforce has grown in sectors where emotional demands from dealing with difficult clients, customers, pupils or patients are common; there are also more workers employed (or self-employed) in interactional occupations, for example, in call centres, or in occupations with a high level of emotional tensions, for example, education, health and care.\n\n#### **Figure 2: Risk factors that can adversely affect mental wellbeing – EWCS8 and ESENER9**\n\nA major difference between the ESENER and the EWCS survey is the respondent. In ESENER those persons who are most familiar with OSH or responsible for OSH in an enterprise were asked whether a certain risk factor exists in the enterprise; in the EWCS survey workers themselves were asked whether they are exposed to a risk factor.", - "page_start": 23, - "page_end": 23, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "#### **ILO 'List of Occupational Diseases Recommendation'**\n\n*2.4. Mental and behavioural disorders* \n\n- *2.4.1. Post-traumatic stress disorder*\n- *2.4.2. 
Other mental or behavioural disorders not mentioned in the preceding item where a direct link is established scientifically, or determined by methods appropriate to national conditions and practice, between the exposure to risk factors arising from work activities and the mental and behavioural disorder(s) contracted by the worker*\n\nAnd there are also **emerging and new risks** where health data will **not be available until a certain number of workers are exposed for quite a while**. Some prominent examples are nanotechnologies, the significant increase of new chemically based technologies, vision impairment due to long hours of work under artificial light at the same distance with small digital equipment,183 more exposure to 'global' biological agents due to more interactional tasks, and travel and transport between countries and continents. On that note, the Covid-19 pandemic could also be used as an example. In 2022, the Commission proposed an update of the Recommendation on the ESOD to recognise Covid-19 as an occupational disease for workers particularly concerned: health and social care, home help or where there is a proven risk of infection (during a pandemic) in other sectors184.\n\nIt adds to these difficulties that workers are often not only exposed to one disease causing exposure but to **several exposures** at the same time (exposure is understood here in a broad sense: ranging from long working hours over postures and movements to harassment and violence and to noise and chemical and biological substances, etc.). **In theory, a single risk** — if below the threshold limit values and in line with legislation and standards — **will not cause harm — given that it is the only exposure**. The impact of this single exposure is not strong enough to generate a disease on the level of severity of a recognised occupational disease. 
A **combination of several risks** might add several exposures, worsen the impact and cause serious harm.\n\nQuite well studied is the increased prevalence of musculoskeletal diseases, if not only ergonomic risks but also high psychosocial risks are prevalent at the workplace.185 Research has also found unexpected connections like the synergistic effect of noise and certain chemicals on hearing impairments. Such outcomes of multi-risk profiles are often particularly difficult to identify and understand. Obviously, most sectors and occupations involve workplaces with **multi-risk profiles**. Some prominent major risks in certain sectors or occupations are:\n\n- agriculture = accidents, chemical and biological agents, UV exposure;\n- delivery services = traffic accidents, ergonomics, time pressure, exhaust fumes;\n- decentralised renewable energy construction and maintenance = falls from height, electricity;\n- waste and recycling = biological and chemical agents, cuts and accidents;\n- mobile work = ergonomics, work without time and space limits;\n- care at home = emotional, ergonomic, difficult clients, unsafe household situations, infection risks;\n- healthcare = emotional, ergonomics, biological;\n- personal and household services = emotional, ergonomic, unsafe household situations, e.g. 
unsafe electrical equipment, exposure to unknown chemicals;\n- long-haul sea, train, road or air transport = atypical working times, shift work, monotony, long phases of physical inactivity;\n- car repair = ergonomics, dust and fumes, chemicals;\n- construction = falls from height, accidents with machinery or vehicles, slips, trips and falls, ergonomics, noise, chemicals, dust, UV exposure, etc.", - "page_start": 75, - "page_end": 75, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "# **References and notes**\n\n1 OSH Barometer data visualisation tool: https://visualisation.osha.europa.eu/osh-barometer\n\n2 Methodological remark: Many workers in the service sectors have similar physically demanding work like workers in manufacturing, construction and agriculture. The statistical assignment of enterprises of a certain type to the service sectors and the sectors industry/construction/agriculture is a too rough approach to describe and analyse working conditions, particularly if more detailed data on working conditions are available. 
For that reason, when talking about health outcomes, in this report often more informative categories are used, for example, managerial jobs (LFS, Eurostat terminology), or high-, medium- and low-skilled clerical work (EWCS), or high-skilled manual and low-skilled manual work (Eurostat), independent on the sector where this work is performed.\n\n3 EU-OSHA – European Agency for Safety and Health at Work: Third European Survey of Enterprises on New and Emerging Risks (ESENER 3), ESENER Data visualisation, section 'Comparisons 2014-2019'; for 'Prolonged sitting' value from 'Data visualisation 2019' not from 'Comparisons'.\n\n4 Some of the very first OSH regulations on psychosocial risks at workplaces were issued by Denmark in the early 1980s, dealing with monotony at work, stress, risk of violence at work and risks of working alone.\n\n5 Psychosocial risks are regarded as reason, and mental health/disease as consequence or outcome of these risks.\n\n6 OSHWiki, 2022: Psychosocial issues – the changing world of work; OSHWiki, 2022: Psychosocial risks and workers health\n\n7 EU-OSHA, 2007: Expert forecast on emerging psychosocial risks related to occupational safety and health\n\n8 Eurofound, 2017: Sixth European Working Conditions Survey – Overview report (2017 Update) (p. 48). 
Raw data for 2015: Eurofound: European Working Conditions Survey - Data Visualisation; Data for 2005: Eurofound: Fourth European Working Conditions Survey\n\n9 EU-OSHA: ESENER Data visualisation, Comparisons 2014-2019.\n\n10 Due to the change of possible response items, the data for the three surveys cannot be compared; the number of mental risk factors increased from three in 2007 and 2013 to eight in 2020.\n\n11 Eurostat, 2021: EU labour force survey 2020 module on accidents at work and other work-related health problems : assessment report : 2021 edition\n\n*12 Eurostat: Persons reporting exposure to risk factors that can adversely affect mental well-being by sex, age and factor, data here and explanatory metadata here*\n\n13 It has to be noted that in 2007 and 2013 the interviews were done face-to-face. In 2020 the interviews were conducted either face-to-face or by phone, depending on the public health measures in each country. The responses were influenced by work under conditions of the pandemic.\n\n14 Eurostat: Persons reporting exposure to risk factors that can adversely affect mental well-being by sex, age and educational attainment level\n\n*15 Rigó et al., 2021: Work stress on rise? Comparative analysis of trends in work stressors using the European working conditions survey*\n\n16 WHO/ILO, 2021: WHO/ILO joint estimates of the work-related burden of disease and injury, 2000–2016: Global monitoring report (p. 35ff).\n\n*17 Eurostat provide data for the periods before and after the NACE revision in 2008. Data for 2019: Average number of usual weekly hours of work in main job, by sex, professional status, full-time/part-time and economic activity (from 2008 onwards, NACE Rev. 2), here Filter: Full-time, 15-64 years, all NACE sectors. Data for 2006: Average number of usual weekly hours of work in main job, by sex, professional status, full-time/part-time and economic activity (1998-2008, NACE Rev. 
1.1), here*\n\n18 Eurostat, 2018: How many hours do Europeans work per week? Average number of usual weekly hours of work in main job, by sex, professional status, full-time/part-time and economic activity (from 2008 onwards, NACE Rev. 2) - hours[lfsa_ewhun2], here\n\n19 Mean duration of commuting time one-way between work and home by sex and age (source: Eurofound), Here\n\n20 Eurostat definition: The atypical work distinguishes between \"evening or night work\", \"Saturday or Sunday working\", and \"shift work\". Data for 2020 are available but indicate a strong reduction of atypical working times, the reason is probably that sectors with a high rate of atypical working times like tourism, transport, entertainment, hotels and restaurants could not work as in previous years, and also production lines in industry, often shift work, were stopped.", - "page_start": 140, - "page_end": 140, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "# **3 Status of working conditions**\n\nThis chapter on health and safety-related working conditions provides an overview on status and development of working conditions; it is mainly based on the indicators that were **selected for the data visualisation in the OSH Barometer**. This is a quite limited selection of major data; in surveys and statistics many more indicators on working conditions are provided, particularly at national level.\n\nPractically all working conditions influence **mental health**, that is, they involve **psychosocial risks**, and all also involve **'physical risks'**, including safety aspects of these risks. Mental health risks are illustrated in the OSH Barometer by datasets on time pressure, poor communication, dealing with difficult clients, discrimination and harassment, and similar. 
**Physical risks** include datasets on accidents at work, exposures to chemical and biological substances, exposure to noise, vibrations, high or low temperatures, and working tasks with ergonomic risks, like carrying, lifting heavy loads or work in tiring or painful positions; and also permanent physical inactivity, mainly sitting or long standing.2\n\nThe figure below shows the percentage of enterprises reporting OSH risks 'present in the establishment', compared between 2014 and 2019 (ESENER) and covering mental and physical risks.3\n\n#### **Figure 1: Risk factors present (% of establishments) – ESENER 2014 and 2019**\n\nNote: Prolonged sitting was a new item in the 2019 survey.\n\nBetween 2014 and 2019, some risk factors increased, like 'Repetitive hand and arm movements', 'Lifting or moving people of heavy loads', and 'Having to deal with difficult customer, patient and pupils; many others showed no changes, like 'Risk of accidents with machines or hand tools', 'Chemical or biological substances', and 'Loud noise', or minor decreases like 'Risk of accidents with vehicles'.", - "page_start": 22, - "page_end": 22, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "# **List of figures**\n\n| Figure 1: Risk factors present (% of establishments) – ESENER 2014 and 2019 23 |\n| --- |\n| Figure 2: Risk factors that can adversely affect mental wellbeing – EWCS and ESENER 24 |\n| Figure 3: 'Exposure to risk factors adversely affecting mental wellbeing' – LFS Ad hoc survey 2020 . 
26 |\n| Figure 4: Psychosocial risk factors – Differences between skill groups (Job strain) 27 |\n| Figure 5: Psychosocial risk factors – Differences between skill groups (Psychological demand) 28 |\n| Figure 6: Psychosocial risk factors – Differences between skill groups (Decision authority) 28 |\n| Figure 7: Psychosocial risk factors – Differences between skill groups (Skill discretion) 29 |\n| Figure 8: Hours worked per week of full-time employment, EU27 – Eurostat 31 |\n| Figure 9: Average working time and work during unsocial hours – Eurostat LFS 32 |\n| Figure 10: Development of work intensity indicators between 1991 and 2015 – Eurofound 33 |\n| Figure 11: Establishment size and 'Pressure due to time constraints' – ESENER 2014 and 2019 34 |\n| Figure 12: Establishment size and 'Long or irregular working hours' – ESENER 2014 and 2019 34 |\n| Figure 13: 'Pressure due to time constraints', Yes responses – ESENER 2019 35 |\n| Figure 14: Employed persons and percentage of working time under pressure – Eurostat LFS Ad hoc |\n| 2019 35 |\n| Figure 15: Percentage of employed persons with working time under pressure (per country, sum of |\n| responses 'Always' and 'Often') – LFS Ad hoc 2019 36 |\n| Figure 16: Exposure to physical risks – ESENER, EWCS and LFS 39 |\n| Figure 17: Physical health risks compared (%) – EWCS 2015 42 |\n| Figure 18: Employment types in EU27, development 2005 to 2022 – Eurostat 47 |\n| Figure 19: Employed persons by main place of work – Eurostat 51 |\n| Figure 20: Employees working mostly from home (in % of employed persons) – Eurostat 52 |\n| Figure 21: Development of the total number of non-fatal accidents at work and incidence rates (accidents |\n| per 100,000 workers), 1998 and 2019 – Eurostat 65 |\n| Figure 22: Share of people reporting any accident and accidents resulting in time off work by country, |\n| 2020 70 |\n| Figure 23: Comparison of the average incidence rate of fatal accidents in two periods: 2010-2014 and |\n| 2015-2020 71 
|\n| Figure 24: Main causes of mortality 2019, EU27 79 |\n| Figure 25: Work-related deaths – estimates by WHO/ILO and ICOH for EU27 83 |\n| Figure 26: Work-related DALYs – estimates by WHO/ILO and ICOH for the EU27 84 |\n| Figure 27: Prevalence of musculoskeletal diseases – EWCS 2015 88 |\n| Figure 28: Satisfaction with working conditions in the main paid job – EWCS 2015 89 |\n| Figure 29: Flash Eurobarometer 2014 – Satisfaction with health and safety at work 90 |\n| Figure 30: 'Health at risk', sectoral responses for EU and three countries – EWCS 2015 91 |\n| Figure 31: 'Health at risk', responses in groups of EU Member States – EWCS 92 |\n| Figure 32: Age classes and work-related health problems in 2007, 2013, 2020 – LFS ad hoc module93 |\n| Figure 33: People reporting a work-related health problem and People reporting a work-related health |\n| problem causing daily limitations 2020 – LFS Ad hoc module 2020 94 |\n| European Agency for Safety and Health at Work – EU-OSHA 5 |", - "page_start": 4, - "page_end": 4, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "If **a risk assessment is conducted just for compliance purposes**, and not used appropriately for the successful management of OSH and reduction of accidents and occupational diseases, the risk assessment may lose its dynamic nature, and findings may be neither implemented nor communicated appropriately to employees.\n\nThe **types of risks included in risk assessments** are related to the risk profiles of different sectors, for example, it is likely that risk assessments in heavy industries and manual occupations focus more on safety risks. 
However, while sectoral risk profiles will naturally bias the identification of risks, smaller establishments seem to have **less of a focus on MSDs or psychosocial risk factors**, which would suggest that they are less well recognised or understood, in particular for MSEs.415 Establishments also report that psychosocial risk factors are more difficult to manage than other OSH risks, while as business size grows, so does the proportion of respondents who perceive psychosocial risks as more difficult to manage than other OSH risks.416\n\nESENER 2019 shows that a **reluctance to talk openly** about these issues seems to be the main difficulty for addressing psychosocial risks (60% of establishments in the EU27). This, as with all the other difficulties considered (lack of awareness among staff/management and lack of expertise or specialist support), is reported in all enterprise sizes but more frequently as establishment size grows.\n\nSpecifically, among those establishments that report having to deal with difficult customers, patients or pupils, 51% of those employing 20 or more workers report having a procedure in place to deal with possible cases of threats, abuse or assaults by clients, patients or other external persons. 
This share rises to 74% among establishments in human health and social work activities.\n\nThe development of concrete outputs such as measures to better manage risks that can result in **musculoskeletal diseases** has actually seen a decline between 2014 and 2019, as follows:\n\n- 85% to 77% on the measure of 'provision of equipment to help with the lifting or moving of loads or other physical heavy work';417\n- 73% to 67% concerning 'provision of ergonomic equipment'; and\n- 66% to 60% regarding 'encouraging regular breaks for people in uncomfortable or static postures including prolonged sitting'.418", - "page_start": 127, - "page_end": 127, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "highest quintile, a difference of 21% (EU27, 2019).196 The **European Quality of Life Survey** (EQLS) finds that 13% of the lowest quartile report bad health (EU28, 2016), compared to only 4% of the respondents of the highest income quartile.197\n\nThe **relation between socioeconomic status — measured by income — and working conditions** is often not further analysed, at least not on an aggregated statistical level. Due to complex methodological difficulties and strong national variations of the health systems, there are until now **no EU-wide morbidity statistics available, based on administrative data**.198 A 'Morbidity Task Force' at EU level worked between 2005 and 2011 on the development of such statistics.199 Country-specific data — without a harmonised approach between countries — are provided in EU and OECD publication series.200\n\nThe system of **European Core Health Indicators (ECHI)** provides an overview on prevalence of major diseases.201 Main morbidities covered until now are asthma, chronic obstructive pulmonary diseases (COPD), communicable diseases, depression, dementia, diabetes, diseases caused by drugs, HIV/AIDS, and physical or sensory functional limitations. 
However, in ECHI there is no option to relate these diseases to sectors or occupations.\n\nThe impact of work — as one essential element of the socioeconomic status — on health was the subject of numerous academic studies, often performed as specific case studies. The authors of an overview study on 'Cross-country inequality in the EU' summarise (more references in the original text):\n\n*'Occupational grade and labour market status are among the factors most often studied in relation to health and mortality. Occupational grade has been found to be associated with self-rated health, mental and physical health, such as the presence of long-standing illness and a number of diseases. Lower occupation might affect health through poor working conditions, such as the higher exposure to occupational hazards and toxic compounds, health-damaging behaviours and psychosocial stress. Work-based stress combined with a lack of autonomy over one's work are believed to be the psychosocial factors that can cause physiological changes, such as increased risk of cardiovascular diseases and reduced immune system response. 
It has been shown that the gaps in mortality between different occupational grades persist in old age and tend to widen with age.*202\n\nEurostat provides in the LFS **2020 Ad hoc module** on 'Accidents at work and other work-related health problems' a rough overview on such relations, with some specification, for example, for sectors, attainment levels, professional status, size of enterprise or occupation.203 The differences between four aggregated occupational groups and work-related health problems is shown in the next table.\n\n| Work-related health problems | 2020 |\n| --- | --- |\n| Managers, professionals, technicians and associate professionals | 9.40% |\n| Clerical support workers, service and sales workers report | 9.40% |\n| Skilled agricultural, forestry and fishery workers, craft and related trades workers | 13.40% |\n| Plant and machine operators and assemblers, elementary occupations | 11.80% |\n| Total | 10.30% |\n\n**Table 23: People reporting work-related health problems by group of occupations (ISCO) – LFS Ad hoc 2020204**\n\n9.4% of the group of 'Managers, professionals, technicians and associate professionals' and also 9.4% of the group of 'Clerical support workers, service and sales workers' report work-related health problems, 2.4% to 4% lower than the two groups with predominantly manual occupations.\n\nBased on a systematic review of literature on the topic of health factors, a consortium of World Bank and Harvard School of Public Health developed for the WHO in the early 1990s a new approach, the **Global Burden of Disease (BoD)**.205 This approach is meanwhile used by researchers and health institutes across the globe.206", - "page_start": 79, - "page_end": 79, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "224 Pega et al., 2022: Global, regional and national burden of disease attributable to 19 selected occupational risk factors for 183 countries, 2000–2016: A systematic analysis from 
the WHO/ILO Joint Estimates of the Work-related Burden of Disease and Injury, here\n\n225 Kauppinen et al., 1998: Occupational exposure to carcinogens in the European Union in 1990-1993: international information system on occupational exposure to carcinogens, here CAREX Canada\n\nFevotte et al., 2011: Matgéné: A Program to Develop Job-Exposure Matrices in the General Population in France\n\nMannetje et al., 2011: Developing a general population job-exposure matrix in the absence of sufficient exposure monitoring data\n\n226 YLDs = years lived with disability, together with YLLs = years of life lost, it composes the DALY (DALY = YLL + YLD).\n\n227 GBD 2019 Mental Disorders Collaborators, 2022: Global, regional, and national burden of 12 mental disorders in 204 countries and territories, 1990–2019: a systematic analysis from the Global Burden of Disease Study 2019, here\n\n228 WHO: Mental disorders, Key facts and\n\nIHME: Global Health Data Exchange (GHDx), here\n\n229 OECD, 2015: Sick on the Job?: Myths and Realities about Mental Health and Work\n\n230 OECD/European Union, 2018: Health at a Glance: Europe 2018: State of Health in the EU Cycle\n\n231 Andlin-Sobocki et al., 2005: Cost of disorders of the brain in Europe\n\n232 Niedhammer et al., 2021: Update of the fractions of cardiovascular diseases and mental disorders attributable to psychosocial work factors in Europe, here\n\n233 Norder et al., 2017: Beyond return to work from sickness absence due to mental disorders: 5-year longitudinal study of employment status among production workers, here\n\n234 Leka & Jain, 2017: EU Compass for Action on Mental Health and Well-Being - Mental Health in the Workplace in Europe\n\n235 Musculoskeletal disorders refer to backache and/or muscular pains in shoulders, neck, upper limbs and/or lower limbs (hips, legs, knees, feet, etc.). 
In the medical classification it is the ICD-10 group of diseases: Diseases of the musculoskeletal system and connective tissue.\n\n236 EU-OSHA, 2019: Work-related musculoskeletal disorders: prevalence, costs and demographics in the EU\n\n237 Graveling, 2018: Ergonomics and Musculoskeletal Disorders (MSDs) in the Workplace. A Forensic and Epidemiological Analysis\n\n238 Da Costa & Viera, 2010: Risk factors for work-related musculoskeletal disorders: a systematic review of recent longitudinal studies, here\n\n239 EU-OSHA, 2020: Work-related musculoskeletal disorders: why are they still so prevalent? Evidence from a literature review (p. 15).\n\n240 EU-OSHA, 2019: Summary - Work-related musculoskeletal disorders: prevalence, costs and demographics in the EU (p. 8).\n\n241 EU-OSHA, 2019: Work-related musculoskeletal disorders: prevalence, costs and demographics in the EU\n\n242 Ibid., p. 174ff.\n\n243 Eurofound, 2007: Fourth European Working Conditions Survey (2005) (p. 77).\n\n244 United Nations Economic Commission for Europe (UNECE), 2015: Handbook on measuring quality of employment: A statistical framework, here\n\n245 Quinlan & Bohle, 2013: Re-invigorating industrial relations as a field of study: Changes at work, substantive working conditions and the case of OHS, here (p. 8).\n\n246 The percentages of responses to this question in the European Working Conditions Survey (EWCS, 2015) are displayed. Each bar shows the percentages of the four possible responses for each EU Member State, the average for the EU Member States, and the responses for Switzerland and Norway. Responses are displayed for the question below: How satisfied are you with working conditions in your main paid job? Answer options were: Not at all satisfied; Not very satisfied; Satisfied; Very satisfied. See here\n\n247 Flash Eurobarometer 398, 2014, p 2, https://www.cesi.org/wp-content/uploads/2014/04/fl_398_sum_en.pdf . 
The displayed Flash Eurobarometer data refer to the 'working population', with two subgroups A (employees and manual workers), and B (self-employed). In the Flash Eurobarometer sample these two groups are separated from three further groups forming the 'Not working' population These groups are: subgroups: students, retired, looking for a job.\n\n248 Ibid., p. 58.\n\n249 Eurofound, 2007: Fourth European Working Conditions Survey (2005) (pp. 77-81).", - "page_start": 149, - "page_end": 149, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf", - "query": "Has the average working week for employees working full-time decreased since 2006?", - "target_page": 31, - "target_passage": ". The statistical data (Eurostat) show a slight decrease of the average weekly working time for full-time employees (15-64 years) from 40.2 to 39.9 hours between 2006 and 2019.", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "**Figure 7: Psychosocial risk factors – Differences between skill groups (Skill discretion)**\n\nFor 'Decision authority' and 'Skill discretion', the authors found a stable situation since 1995, even a small rise of skill discretion for manual workers after 2010. Regarding 'Psychological demands' and 'Job strain', the major increase for all groups took place between 1995 and 2005. This growth decelerated after 2005, this observation is also valid for other working conditions, like work intensity.\n\n### *3.1.1 Working time in hours and at atypical times*\n\n**Too many hours of working time and/or working hours at atypical or unsocial times** can put **the mental** and **the physical health** of humans at risk. 
It is also regarded as a major **contributing factor to work accidents**, due to fatigue or exhaustion.16\n\nThe main indicator to describe working time is the **number of the weekly average working hours** of full-time employees. However, regarding its impact on health and safety, **other aspects of working time are of the same relevance**:\n\n- How long is the average working day?\n- At which times and days is this work done (typical, atypical times)?\n- How often do long working hours take place?\n- Is the work split between two jobs?\n- How flexible are start and end?\n- How intense is the work during this time (breaks, deadlines)?\n- Which groups of workers have standard working times and which do not (e.g. depending on the sector or the type of contract, e.g. sub-contracted workers or self-employed)?\n\nThere is a **slight trend towards fewer working hours** for full-time **employees** (not 'Employed persons') in the EU27; between 2006 and 2019 the average weekly working time dropped from 40.2 to 39.9 hours, a decrease of approximately 15 minutes.17\n\nRegarding the weekly hours, there are **no striking differences** between the EU27 Member States. 
In 2019, Cyprus, Austria and Malta with a high share of workers in the sector of tourism (accommodation) had the highest number of working hours per week (above 41 hours), and Denmark, the Netherlands and Italy the lowest number (39 or fewer) (full-time, employees, 15-64 years, all NACE codes).18", - "page_start": 28, - "page_end": 28, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Two country examples might illustrate these developments (all data for 2019): Slovakia, a country with a high share of process-based industries, reports that 15.0% of its workforce is working at night and 29% in shifts; for the EU27 this rate is 5.2% respectively and 18.3%.25 Regarding work on Sundays three other countries are at the top of the EU27, the Netherlands, Ireland and Spain; they report between 18% and 21% (EU27 average = 13.5%); all three countries have an above-average share of sectors like transport, tourism and agriculture.26\n\nFor all these types of work it should be take into account that other groups of **workers under nonstandard types of employment contracts** (self-employed, agency workers, students, pensioners, undeclared workers) might have taken over work at these atypical working times.\n\nConcluding, it can be stated that there is a **slight trend towards a reduction of weekly working hours for regularly employed** workers, including a stable commuting time. Working hours at atypical times show a mixed picture. Looking at most types of employees, **atypical working time decreased, except work on Sundays**. For self-employed with employees, the working time at atypical hours is in general at a higher level. The number of employees in night work is decreasing. More employees in service and client-related occupations at night or in shifts but also here the atypical times are slightly decreasing.\n\nProbably these changes **mirror the structural economic changes**, that is, the shift of workforce between sectors. 
Night work was common in many industries as part of a three 8-hours shifts, not only in industries with permanent production processes (steel, chemicals, etc.).27 Moreover night work is and was common in essential services like health, transport, technical infrastructure and security. The", - "page_start": 31, - "page_end": 31, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "21 Eurostat: Ad hoc module 2019 on work organisation and working time arrangements. Employment at an atypical working time (time period start with 2011), here and here\n\n22 Eurostat Data for 2019: Average number of usual weekly hours of work in main job, by sex, professional status, full-time/part-time and economic activity (from 2008 onwards, NACE Rev. 2). here Filter: Employees, Full-time, All NACE, EU27 2019 Q4.\n\nEurostat Data for 2006: Average number of usual weekly hours of work in main job, by sex, professional status, full-time/part-time and economic activity (1998-2008, NACE Rev. 
1.1), here Filter: Employees, Full-time, All NACE, EU27 2019 Q4.\n\n23 Eurostat definition of atypical work: The atypical work distinguishes between \"evening or night work\", \"Saturday or Sunday working\", and \"shift work\".\n\n24 All data were retrieved from tables in: Labour market > Employment and unemployment (Labour force survey) M > LFS series - detailed annual survey results M > Population in employment working during unsocial hours - LFS series\n\n25 Eurostat: Employed persons working at nights as a percentage of the total employment, by sex, age and professional status (%)\n\n26 Eurostat: Employed persons working on Sundays as a percentage of the total employment, by sex, age and professional status (%)\n\n27 Fiz Perez et al., 2019: Shift and night work management in European companies\n\n28 OSHWiki, 2022: Psychosocial issues – the changing world of work\n\n29 Eurofound, 2003: Time and work: Work intensity\n\nEurofound, 2009: Working conditions in the European Union: Working time and work intensity\n\n30 Eurofound, 2017: Sixth European Working Conditions Survey – Overview report (2017 Update) (p. 48).\n\n31 ESENER addresses the person in an enterprise responsible for or closest to the topic of OSH; the EWCS is a worker survey. In addition, the response options were different from the EWCS. Two options in ESENER, 'Yes' or 'No', compared to three options in the EWCS: '(Almost) all of the time', 'Between ¼ and ¾ of the time', '(Almost) never'.\n\n32 EU-OSHA: Third European Survey of Enterprises on New and Emerging Risks (ESENER 3), ESENER Data visualisation, section 'Comparisons 2014-2019', section 'Psychosocial risk factors present in the establishment', 'Pressure due to time constraints'.\n\n33 Ibid., Section 'Psychosocial risk factors present in the establishment', 'Long or irregular working hours'. 
34 Ibid., Section 'Psychosocial risk factors present in the establishment', The exact question was: 'Please tell me for each of the following risks whether or not it is present in the establishment?' 'Pressure due to time constraints'. Response option: Time pressure.\n\n35 Ibid., Section 'Psychosocial risk factors present in the establishment', The exact question was: 'Please tell me for each of the following risks whether or not it is present in the establishment?' 'Pressure due to time constraints'. Response option: Time pressure.\n\n36 EU-OSHA: Third European Survey of Enterprises on New and Emerging Risks (ESENER 3), ESENER Data visualisation, section 'Comparisons 2014-2019', section 'Psychosocial risk factors present in the establishment', The exact question was: 'Please tell me for each of the following risks whether or not it is present in the establishment?' 'Pressure due to time constraints'. Response option: Time pressure.\n\n37 Eurostat, 2019: Persons in employment by frequency of working under time pressure, educational attainment level and professional status, 20-64 years, percentages calculated from numerical data\n\n38 Kelliher & Anderson, 2010: Doing more with less? Flexible working practices and the intensification of work\n\n39 Piasna, 2018: Scheduled to work hard: The relationship between non-standard working hours and work intensity among European workers (2005–2015)\n\n40 See also the overview in: EU-OSHA, OSHWiki, Guyot, S: Psychosocial issues – the changing world of work, here\n\n41 Newer literature: James & Walters, 2022: Work and Health: 50 Years of regulatory failure.\n\n42 Davis & Kim, 2015: Financialization of the Economy\n\n43 Ethics & Compliance Initiative, 2020: Global Business Ethics Survey Report. 
Pressure in the Workplace: Possible Risk Factors and Those at Risk\n\n44 Johnstone et al., 2005: Statutory Occupational Health and Safety Workplace Arrangements for the Modern Labour Market\n\n45 Lorenz & Valeyre, 2005: Organisational Innovation, Human Resource Management and Labour Market Structure: A comparison of the EU-15\n\n46 Directive 2003/88/EC of 4 November 2003 concerning certain aspects of the organisation of working time", - "page_start": 141, - "page_end": 141, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "methodology, the OSH practitioners who were asked in ESENER seem to have a different view on time pressure than the workers themselves who are respondents in the LFS.\n\n**Figure 15: Percentage of employed persons with working time under pressure (per country, sum of responses 'Always' and 'Often') – LFS Ad hoc 2019** \n\nOne hypothesis to explain the increased time pressure is to draw a direct **connection between short weekly working time and more intense work**; or in other words, a short weekly working time leads to more **intensification of work or more long hours or atypical working times** ('trading flexibility for effort').38\n\nThe analysis of EU survey data shows **a mixed picture**: Firstly, ESENER data corroborate this hypothesis, the three countries with highest percentage of work under time constraints — that is, Finland, Sweden and Denmark — all have working hours under the EU average. Secondly, LFS data show a different picture; a country like Greece has the longest working hours and also reports the highest time pressure, the same 'combination' — but less extreme — applies to Austria, Cyprus and Malta. 
Trends of low or less than average working time and no time constraints are reported for Lithuania, and medium working time and low time constraints for Italy and Ireland.\n\nAn analysis of EWCS data concluded39 that in general intensity increases with long working hours, in enterprises with 1-19 the work intensity index (on a scale between 0 and 12) is 4.4, in larger enterprises with above 40 employees it is 6.3. This is in line with ESENER data that corroborate the importance of the **size of the enterprise** for time pressure and long working hours.\n\nLiterature — from very diverse disciplines — on work intensification points to **reasons for intensification on developments as:40**\n\n- Economic developments, particularly the dominance of neoliberalist policies and enhanced competition between workers, companies and states; reduction of state influence and privatisation.41\n- Pressure due to substantial organisational changes, for example, introduction of short-term economic objectives in enterprise policies, 42 expansion into new markets or new countries, acquiring other enterprises or merging, being acquired, restructuring of management or of basic staff working conditions (contracts, working time, flexibility).43\n- Decrease of trade union influence or worker participation regarding labour relations.\n- Liberalisation of labour legislation, creation of 'new forms of work' and new contract types, beyond the permanent full-time employment.44\n- New forms of management, application of management concepts like just-in-time production or lean management, higher flexibility of production and higher customer orientation, 45", - "page_start": 35, - "page_end": 35, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "In several occupations, **classical safety risks often add to the above-mentioned exposures**, that is, slips, trips and falls, risks related to moving parts of machinery, moving vehicles, exposure to hot, 
cold, or hazardous materials, loud noise, chemical or biological substances, and in general physically exhaustive work.\n\nA certain **ergonomic risk** of many administrative and supervisory jobs is **physical inactivity** (61%), in practice meaning sitting most of the working time in front of digital equipment, sitting to make phone calls or sitting in meetings. Not only administrative tasks but also many occupations in transport and industry require prolonged sitting (transport, cashiers, parts assembly, etc.).\n\nIn the 10-year period before 2005, EU-wide surveys found a significant increase in work intensity. Major differences in work intensity and working time patterns can be seen between occupations, forms of work, sectors and enterprise size, for example. The length of the daily or weekly working time and its allocation with the 24 hours of a day or at night are important factors for health and wellbeing. The Eurostat data show a slight decrease **in the average weekly working time for full-time employees** (15-64 years) from 40.2 to 39.9 hours between 2006 and 2019.\n\nEurostat reports for all types of **'employment at atypical working time'** a minor decrease between 2011 and 2019, from 38.8% to 37.2% (EU27 average), for all employed workforce and all types of such atypical time. The data also document slight increases or decreases of the different types of work during atypical times > on Saturdays the percentage decreased from 28% to 25%, working in the evenings decreased from 19% to 15%, working on Sundays remained stable at around 13.5%, work at night fell from 7% to 5%, and shift work increased slightly from 17% to 18%. 
Some **groups of self-employed** show a higher rate of atypical working times: for **high-managerial self-employed**, this rate is 43.2% and for **low-managerial self-employed** 64.5%.\n\n**Significant differences also exist between eastern/southern and central/northern/western European countries.** More physical and ergonomic risks (except inactivity) are reported from eastern and southern EU Member States but more emotional demands (e.g. difficult clients, poor communication and long working hours) in northern and central European countries. One of the major reasons might be the reallocation of industrial production to eastern countries after the EU extension to 24 and later to 27 Member States.\n\n#### **Conditions of employment and workforce development**\n\nDuring the past decades and at faster pace after 1990, a **greater variety of non-standard contractual relations** has emerged. Typical characteristics of non-standard work are part-time work, temporary (or fixed-term) work, seasonal work, casual work, home-based work, telework, self-employment or family work. Currently, high public awareness is directed to those types of non-standard work that are connected either to **new forms of contracts** (voucher, platform, zero-hours, portfolio, etc.) or increasing **types of work not bound to the premises of the employer** (mobile, at home, at client's place), mostly made possible by the increased use of modern information and communication technologies (ICT). These forms of work often have as a — additional — major characteristic a **less clear employer– worker relationship**.\n\nHowever, in 2019 the conventional employment contract still accounted for around 86% of the workforce (EU27), 9% are 'own-account' workers, that is, self-employed without employees. The remaining 4% were self-employed with employees (employers) and less than 1% were contributing family workers. 
Of all employed workers, 17.2% worked part-time and 13.3% had temporary contracts.\n\nNon-standard types of work that are characterised by the circumstance that **the work is not taking place at the premises of the employer** are mobile and home-based work, domestic work, care work and long-term domestic care work, and online platform work. In 2019, approximately 77% worked at the employer's premises, 5% at home, 9% at the clients' places and 8% at non-fixed workplaces. With the onset of the COVID-19 pandemic in 2020, the share of work at home more than doubled; in the EU27 it increased from 5.4% in 2019 to 13.4% in 2021.\n\nCompared to work at the premises of the employer, such non-standard workplaces often miss basic OSH facilities (Minimum requirements at workplaces directive), availability and suitability of help tools (Work equipment directive and Personal protective equipment directive), or provision of adequate digital and mobile tools (Display screen equipment directive).", - "page_start": 10, - "page_end": 10, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "The **commuting time** between home and workplace is quite stable; in 2005 at EU27 level, it stood at 42.4 minutes, and in 2015 Eurostat reports 40.2 minutes (time for both ways, to the workplace and back).19\n\n**Work at atypical working times** is in general regarded as a working condition with negative health impact, called **work extensity**. The two major indicators of atypical working times are work at **'atypical working times'** and **'long working hours'**.\n\nEurostat reports for **'Employment at atypical working time'**20 a minor decrease between 2011 and 2019, from 38.8% to 37.2% (EU27), for all employed workforce and all types of such atypical time.21 Some **groups of self-employed** show a higher rate of atypical working times but also for most of the categories of self-employed the rates decreased during the period 2011 to 2019. 
**High managerial selfemployed** had a slight increase from 42.1% to 43.2% in this period. For the **low managerial selfemployed** Eurostat finds a decrease from 69.2% to 64.5%. The figures for **small entrepreneurs** dropped slightly from 56.6% to 54.1%, the same applies for employed persons in **personal care work** with a minor change (50.6% to 49.8%). **Agricultural self-employed** had the highest level of such working times; they showed a decrease from 68.4% to 63.4%.\n\nThe length of the daily or weekly working time, its allocation over the 24 hours of a day or at night are important factors for health and wellbeing. The statistical data (Eurostat) show a slight decrease **of the average weekly working time for full-time employees** (15-64 years) from 40.2 to 39.9 hours between 2006 and 2019.22 The data also document slight increases and decreases of work at atypical times (response option for frequency: 'usual').23 In 2006 and 2019, the following percentages of all employed persons worked at atypical times: on **Saturdays** the percentage decreased from 28% to 25%, **working on Sundays** remained stable at around 13.5%, **working in the evenings** decreased from 19% to 15%, **work at night** fell from 7% to 5% and **shift work** increased slightly from 17% to 18%.24", - "page_start": 30, - "page_end": 30, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "number of workers in industry decreased, but the number of workers in the above-mentioned service sectors increased.\n\n### *3.1.2 Work intensity*\n\nThere are numerous references showing that during the period **between 1990 and 2005 work intensity has considerably increased**.28\n\nFor example, Eurofound has analysed the responses to the two EWCS questions on high speed at work and tight deadlines. The EWCS found a significant increase of work intensity between 1991 and 2005. 
In 1991, **'Working at a very high speed'** was for the majority of respondents not an issue. Fifty-two per cent of the workers responded to this statement 'Never' or 'Almost never'; in 1991, 24% worked at high speed and responded 'Around ¾ of the time', 'Almost all of the time' and 'All of the time'; until 2005 this response rate went up by 11% to 35%.\n\n**Working to tight deadlines** was not an issue for 34% in 1990, and in 2005 only for 19%, a reduction of 15%. The percentage of the sum of responses 'Around ¾ of the time', 'Almost all of the time' or 'All of the time' to this question on tight deadlines increased between 1991 and 2005 from 29% to 37%. Regarding these two indicators, **work intensity has evidently increased** between 1991 and 2005.29\n\n#### **Figure 10: Development of work intensity indicators between 1991 and 2015 – Eurofound**\n\nAfter that first period between 1991 and 2005, **this development seems to stagnate between 2005 and 2015**.30 The responses 'Almost all of the time' or 'All of the time' vary only slightly, between 33% and 37% depending on year and question ('Working at high speed' or 'Working to tight deadlines').\n\nDifferences can be seen regarding sector, company size and occupation. **Regarding work intensity**, ESENER enterprise data on time pressure for the EU27 indicate a slight increase of 2.3% between 2014 and 2019 from 43% to 45%.31 Interestingly, according to ESENER, time pressure drastically **increases with the size of the enterprise**. In enterprises with 5 - 9 employees, 39% report time pressure, and in enterprises with above 250 employees 69%. 
32 The same applies for long working hours, where enterprises with 5 - 9 employees report 19% 'long working hours', and in enterprises with above 250 employees this percentage increases to about 39% (EU27, 2019).33", - "page_start": 32, - "page_end": 32, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "**Figure 18: Employment types in EU27, development 2005 to 202265 – Eurostat**\n\nThe minor deviation of the sum of the different types of employment to the 100% 'Employed persons' is due to 'No response' answers. The data of part-time employees and of employees with a temporary contract are for the full year 2019, not for Q4.\n\nThe group 'employees' is characterised by **two major contractual distinctions** that are important for OSH: 1) **full- or part-time** work, and 2) the **time limit of the contract** (indefinite or temporary). Moreover, in many Member States there are major differences between employment contracts of private employers in comparison to public employers.\n\n#### **Definitions Eurostat66**\n\n**Employers = self-employed with employee:** employing one or more employees: persons who work in their own business, professional practice or farm for the purpose of earning a profit and who employ at least one other person.\n\n**Self-employed:** not employing any employees (self-employed without employees): persons who work in their business, professional practices or farm for the purpose of earning a profit and who employ no other persons.\n\n**Employees:** persons who work for a public or private employer and who receive compensation in the form of wages, salaries, fees, gratuities, payment by result or in kind. 
Contributing family workers: persons who help another member of the family to run a farm or business, provided they are not classed as employees.", - "page_start": 46, - "page_end": 46, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "use of modern and powerful ICT technologies, or a combination of those two forms. Depending on the work tasks, ICT infrastructures enable complete independence and separation from the premises of the employer. In addition, they open opportunities for new forms of contracts and can go along with blurred OSH responsibilities.\n\nThese forms of work often have as a major quality (feature) a **less clear employer–employee relation**. The main structural element of the EU OSH legislation is the dual role of employers and employees in OSH. The **employer has the overall responsibility** for OSH (Framework Directive Article 5: *'… duty to ensure the safety and health of workers in every aspect related to the work …'*), and the **worker the obligation to contribute** (Framework Directive Article 13 … *'to take care as far as possible of his own safety and health and that of other persons affected by his acts or omissions …'*). Where OSH legislation has to be applied in less clear employer–employee relations, for example, in the case of self-employed, the relevance and impact of 'dyadic' OSH regulations seem to fade.\n\nNot only the EU is struggling with this development, but Australia also introduced the legal identity of a **PCBU**: 'Significantly, the primary duty of care will shift from the \"employer\" to the broader **\"person conducting a business or undertaking\"** (PCBU) and duties previously owed to \"employees\" will now apply to all workers.'64\n\nDuring the past decades, and especially after 1990, a much **greater variety of such contractual relations** has emerged. 
However, in 2019 the conventional employment contract (part- or full-time) still accounts for around 86% of the workforce (EU27), they are employees. Seventeen per cent of these employed persons have a part-time contract, 13% of the employees have a temporary contract, or both combined. Nine per cent are self-employed without employees. The remaining 4% are self-employed with employees (employers) and 1% are contributing family workers. The number of self-employed in agriculture halved between 2005 and 2019, which is the biggest factor in the reduction of contributing family workers and the stagnation of the number of self-employed.", - "page_start": 45, - "page_end": 45, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "*8.7 Take immediate and effective measures to eradicate forced labour, end modern slavery and human trafficking and secure the prohibition and elimination of the worst forms of child labour, including recruitment and use of child soldiers, and by 2025 end child labour in all its forms*\n\n*8.8 Protect labour rights and promote safe and secure working environments for all workers, including migrant workers, in particular women migrants, and those in precarious employment*\n\nThe **WHO** is following a global approach towards **occupational health**. 
They summarised their base of evidence on global working conditions in some **key facts**:332\n\n- *In many countries more than half of workers are employed in the informal sector with no social protection for seeking health care and lack of regulatory enforcement of occupational health and safety standards.*\n- *Occupational health services to advise employers on improving working conditions and monitoring the health of workers cover mostly big companies in the formal sector and more than 85% of workers in small enterprises, informal sector, agriculture and migrants worldwide do not have any occupational health coverage.*\n- *Work-related health problems result in an economic loss of 4–6% of GDP for most countries. The basic health services to prevent occupational and work-related diseases cost on average between US$ 18 and US$ 60 (purchasing power parity) per worker.*\n- *About 70% of workers do not have any insurance to compensate them in case of occupational diseases and injuries.*\n- *Research has demonstrated that workplace health initiatives can help reduce sick leave absenteeism by 27% and health-care costs for companies by 26%.*\n\nBased on this evidence, the WHO Global Assembly agreed on a 'Worker health global plan of action' in 2007333 (updated 2013) that included targets like better prevention at workplaces, that is, Objective 2: *to protect and promote health at the workplace.* The WHO has worked together with the ILO to estimate the burden of diseases from work and published the 'WHO/ILO joint estimates of the work-related burden of disease and injury'.\n\nWhen looking at the work of global institutions during the past two to three decades — and for the ILO also much further back — many important **agreements, conventions, government actions and global business** programmes have been negotiated, agreed and issued. The objectives and necessary measures at a global level have been made much more concrete by these efforts. 
OSH and working conditions are on the agenda of these organisations, and general and concrete targets and indicators have been set. The **task is the implementation of these principles and programmes** in every region and country of the world in a way that it reaches all workplaces.\n\n**OSH Barometer – OSH Infrastructure – International organisations and international programmes** https://visualisation.osha.europa.eu/osh-barometer/osh-infrastructure/international-organisations https://visualisation.osha.europa.eu/osh-barometer/osh-infrastructure/international-programmes\n\n**ESENER – Data visualisation** https://visualisation.osha.europa.eu/esener/en/survey/datavisualisation/2019", - "page_start": 116, - "page_end": 116, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf", - "query": "What is the definition of a work accident according to the International Labour Organisation?", - "target_page": 38, - "target_passage": "ILO Definition of accident: ‘An occupational accident is an unexpected and unplanned occurrence, including acts of violence, arising out of or in connection with work, which results in one or more workers incurring a personal injury, disease or death.’", - "chunk_present": { - "presence": true, - "index": 8 - } - }, - "top_chunk": [ - { - "text": "*8.7 Take immediate and effective measures to eradicate forced labour, end modern slavery and human trafficking and secure the prohibition and elimination of the worst forms of child labour, including recruitment and use of child soldiers, and by 2025 end child labour in all its forms*\n\n*8.8 Protect labour rights and promote safe and secure working environments for all workers, including migrant workers, in particular women migrants, and those in precarious employment*\n\nThe **WHO** is following a global approach towards **occupational health**. 
They summarised their base of evidence on global working conditions in some **key facts**:332\n\n- *In many countries more than half of workers are employed in the informal sector with no social protection for seeking health care and lack of regulatory enforcement of occupational health and safety standards.*\n- *Occupational health services to advise employers on improving working conditions and monitoring the health of workers cover mostly big companies in the formal sector and more than 85% of workers in small enterprises, informal sector, agriculture and migrants worldwide do not have any occupational health coverage.*\n- *Work-related health problems result in an economic loss of 4–6% of GDP for most countries. The basic health services to prevent occupational and work-related diseases cost on average between US$ 18 and US$ 60 (purchasing power parity) per worker.*\n- *About 70% of workers do not have any insurance to compensate them in case of occupational diseases and injuries.*\n- *Research has demonstrated that workplace health initiatives can help reduce sick leave absenteeism by 27% and health-care costs for companies by 26%.*\n\nBased on this evidence, the WHO Global Assembly agreed on a 'Worker health global plan of action' in 2007333 (updated 2013) that included targets like better prevention at workplaces, that is, Objective 2: *to protect and promote health at the workplace.* The WHO has worked together with the ILO to estimate the burden of diseases from work and published the 'WHO/ILO joint estimates of the work-related burden of disease and injury'.\n\nWhen looking at the work of global institutions during the past two to three decades — and for the ILO also much further back — many important **agreements, conventions, government actions and global business** programmes have been negotiated, agreed and issued. The objectives and necessary measures at a global level have been made much more concrete by these efforts. 
OSH and working conditions are on the agenda of these organisations, and general and concrete targets and indicators have been set. The **task is the implementation of these principles and programmes** in every region and country of the world in a way that it reaches all workplaces.\n\n**OSH Barometer – OSH Infrastructure – International organisations and international programmes** https://visualisation.osha.europa.eu/osh-barometer/osh-infrastructure/international-organisations https://visualisation.osha.europa.eu/osh-barometer/osh-infrastructure/international-programmes\n\n**ESENER – Data visualisation** https://visualisation.osha.europa.eu/esener/en/survey/datavisualisation/2019", - "page_start": 116, - "page_end": 116, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "**ICOH** stated in its Centennial Declaration:\n\n*'The globalization process has not succeeded in equalising the conditions of work but in fact the opposite has occurred; the gaps are increasing. Poverty, inequality and under-development are closely associated with the poor safety, health and social conditions of work, as they are also linked with illiteracy, lack of education, poor access to health services and low or non-existent social protection.*323\n\nInternational organisations like the ILO, WHO and UN have also taken up **the task to promote OSH worldwide**. The ILO has established a system of conventions; their implementation is monitored in the signature states.324 The ILO has issued and decided on nine 'Fundamental conventions' that have been signed by 92% of the ILO member states.325 These fundamental conventions are:\n\n- 1. Freedom of Association and Protection of the Right to Organise Convention, 1948 (No. 87);\n- 2. Right to Organise and Collective Bargaining Convention, 1949 (No. 98);\n- 3. Forced Labour Convention, 1930 (No. 29) (and its 2014 Protocol);\n- 4. Abolition of Forced Labour Convention, 1957 (No. 105);\n- 5. 
Minimum Age Convention, 1973 (No. 138);\n- 6. Worst Forms of Child Labour Convention, 1999 (No. 182);\n- 7. Equal Remuneration Convention, 1951 (No. 100);\n- 8. Discrimination (Employment and Occupation) Convention, 1958 (No. 111); and\n\n9. (since 2022) Two conventions on Occupational Safety and Health, that is, C-155 Occupational Safety and Health Convention, 326 and C-187 Promotional Framework for OSH Convention. 327\n\nThe ILO also promotes the **'Decent work' approach** to improve working conditions, covering aspects like fair income, social protection for families, better prospects for personal development and social integration, and equal opportunities and treatment. In the frame of this approach, the ILO has developed flagship programmes like *'Safety and Health for all' 328* and the **'Global Action for Prevention on Occupational Safety and Health' (OSH-GAP)**, a programme to support and promote OSH globally.329 Its priorities are:\n\n- *legal, regulatory and adjudicative frameworks that address and integrate OSH, including core OSH laws and technical regulations;*\n- *enforcement and compliance with OSH in workplaces, including public, private and nongovernmental systems that operate independently or in concert;*\n- *employer and worker competencies that are necessary to achieve and sustain OSH at global, national and enterprise levels;*\n- *social dialogue that supports OSH;*\n- *public and private financial resources for investment in OSH;*\n- *occupational health services including public and private health services;*\n- *employment injury insurance programmes that support prevention of OSH fatalities, injuries and illnesses;*\n- *OSH professionals, institutions and networks;*\n- *OSH indicators and implementation of effective methodologies for OSH data collection; and*\n- *demand for the safety and health of workers and workplaces.*\n\nThe **International Social Security Association** (ISSA) developed the **Vision Zero initiative**.330 ISSA promotes 
together with enterprises and many global OSH organisations this concept, aiming at the complete elimination of work accidents and occupational diseases.\n\nThe **UN** has developed a set of targets and indicators, **the Social Development Goals** (SDG).331 Target 8 is dedicated to *'Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all'.* Sub targets are:\n\n> *8.5 By 2030, achieve full and productive employment and decent work for all women and men, including for young people and persons with disabilities, and equal pay for work of equal value*", - "page_start": 115, - "page_end": 115, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "segmentation of enterprises into profit centres, quality management obligations, externalisation/subcontracting of service areas like cleaning, canteen, security and so on.\n\n- Increased communication and interdependency, time coordination and synchronisation requirements between units, enterprises and in supply chains.\n- Less direct supervision and more objective and results-based management.\n- Last but not least the massive introduction of ICT and other work-intensifying technologies.\n\n**The main reasons for stagnation after 2005 might be** that many of the above-mentioned concepts or policies were developed or had their peak during the 1980s, 1990s or the first decade of the 21st century. Some of them lost their dynamic (e.g. privatisation), or have become a kind of standard (management by objectives), or were widely implemented in the first decade of the 21st century (ICT facilities at most workplaces); also, some negative impacts on working time were mitigated by state interventions (i.e. 
the EU Working time directive46) or labour agreements.47\n\nOf particular interest for OSH probably is that the changes in labour legislation, the production in international supply chains and technological improvements were sufficiently developed to shift quite a relevant part of work to other types of contracts, that is, to **subcontractors, self-employed or temporary agent workers** and other forms of non-standard work contracts. Reasons were economic savings but also better management of **intense work periods, peak times and risky work**.\n\nThese developments are probably the main reason that work intensity **stayed at a similar level for the employed workers** with a standard contract while the working conditions of other types of work degraded. EU-OSHA has **taken this conclusion** already in 2002 in its report48 on 'New Forms of Contractual Relationships and the Implications for Occupational Safety and Health':\n\n*'1. the transfer of risks in the (practical) conditions of work to non-permanent employees and to subcontractors;* \n\n*2. segmentation in the workforce based on differences in contractual conditions of employment (working hours, job insecurity, and qualifications).* \n\n*In the first scenario, risks directly related to working conditions (bad ambient and ergonomic conditions)*", - "page_start": 36, - "page_end": 36, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "workers and those with care duties at home. 
Digitalisation also offers opportunities for more effective OSH training, advanced workplace risk assessment, communication and OSH inspections.\n\n**Digital technologies can worsen the OSH situation at workplaces.** Depending on how technologies are designed and implemented, on the organisational context and on the employment status, digitalisation may result in workers being more exposed to OSH risks such as ergonomic and psychosocial risks, with an increase in work-related stress, increasing performance pressure and work complexity, facilitating irregular working hours, reducing social interaction and support at work, blurred boundaries between work and private life, and new forms of dislocated work with unclear employment status. Technical concerns relate to aspects like safe interaction of workers with robots and semiautonomous machines and vehicles. The extensive use of data has the potential to harm privacy interests. **Digitalisation can create abrupt (disruptive) and emerging changes at workplaces** and with that very different challenges for OSH.275 Eurofound summarised the opportunities and risks of **ICTbased mobile work** in a table format.276\n\n| Opportunities | Risks |\n| --- | --- |\n| Potential transformation of work organisation | |\n| Contribution to inclusive labour markets | Potential exclusion of certain groups from the labour market |\n| Addressing (regional) labour shortages | (for example, low-skilled workers, older people, place-bound |\n| Job creation and retention | occupations) |\n| Flexibility and autonomy | Advanced monitoring and control |\n| Increased work intensity and stress | |\n| Improved work-life balance | 'Limitless work' |\n| Potential expected 24/7 availability | |\n| Long working hours, limited rest time | |\n| Blurring spheres of work and private life | |\n| Productivity, costs, results-based remuneration | |\n| Improved communication and collaboration | Information overload |\n| Conflicts due to a lack of coordination | 
|\n| Skills development (technical applications) | Social and professional isolation |\n| High demands for self-management and self-organisation | |\n| Outsourcing of employer responsibilities (equipment, health | |\n| and safety data protection) | |\n\n#### **Table 27: Opportunities and risks of ICT-based mobile work – Eurofound**\n\nEU-OSHA observes particular risks for safety and health in:277\n\n- low standards of OSH (particularly ergonomic) in mobile and home-based work,\n- safety of robots, cobots and autonomous vehicles,\n- platform work with low OSH standards,\n- enhanced and detailed surveillance,\n- permanent availability, and\n- physical inactivity, permanent sitting and focusing on digital equipment.\n\nEU-OSHA included in its ESENER 2019 survey several questions regarding **digitalisation and OSH** in enterprises. There is a great diversity when it comes to the types of digital technologies reported by the establishments. PCs at fixed workplaces (86% of surveyed establishments in the EU27) and laptops, tablets, smartphones or other mobile devices (77%) are frequently reported across all activity sectors and business size classes. Only 6% of surveyed establishments in the EU27 reported using none of the digital technologies.278", - "page_start": 104, - "page_end": 104, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "# **1 Executive summary**\n\n#### **How can the 'state of OSH' in the EU be assessed?**\n\nThis report describes the **state of OSH in the EU**, and accordingly the **trends and the developments**, that is, the changes in state over time. 
The report refers to different periods in time, mostly to the situation between 2005 — after the substantive enlargement of the EU in 2004 — and 2019; if the use of earlier or more recent start or endpoints was reasonable and data were available, a different time frame was applied.\n\n**Two criteria were crucial for the selection of these indicators: availability of reliable data and the relevance of the indicators.** An ideal and complete set of indicators would cover even more indicators than presented in this report, but major limits were set by the availability of reliable data.\n\nThe main data sources **comprise a large variety of quantitative datasets**, for example, Eurostat statistics and EU-wide surveys (e.g. EU-OSHA's European Survey of Enterprises on New and Emerging Risks (ESENER), Eurofound's European Working Conditions Survey (EWCS), Eurostat's Labour Force Survey (LFS) and its ad hoc modules, and the Flash Eurobarometer, detailed background reports on risks, groups of workers, OSH systems and infrastructures (e.g. by EU-OSHA, Eurofound, the Fundamental Rights Agency, etc.), and evaluations and assessments of the level of implementation of OSH directives (e.g. by the Directorate-General for Employment, Social Affairs and Inclusion (DG EMPL) or the Senior Labour Inspectors Committee (SLIC) surveys facilitated by the National Labour Inspectorates). Regarding the description of developments beyond the EU, data were taken from the International Labour Organisation (ILO), the World Health Organisation (WHO), the International Social Security Association (ISSA), the United Nations (UN), the Organisation for Economic Co-operation and Development (OECD), the International Commission on Occupational Health (ICOH) and the International Association of Labour Inspection (IALI).\n\nPlease note that Eurostat employment data and ICOH data were retrieved in 2023. 
Current figures might slightly deviate due to updates and corrections.\n\n#### **Working conditions – Risk factors at work**\n\n**Shifts in work tasks and workforce between sectors, technological progress and the development of higher skill levels** have led to less work in manual occupations and more work in administrative (clerical, professional, managerial, etc.) occupations as well as in client-oriented and communicative occupations.\n\nConsequently, these developments caused a **shift of risks to psychosocial and emotional challenges**. This can be documented by the growing percentage of workers who report difficult clients (60%), long or irregular working hours (22%), and poor communication in the organisation (18%) (all data from ESENER 2019 or EWCS 2015) The OSH risks for these occupations — gradually but also significantly — shifted from safety risks to health risks. The psychosocial risks for mental health and the emotional challenges increased; they clearly correlate with more work in emotionally demanding and/or client-oriented sectors, be it in tourism, entertainment or education, public transport, social work, or health and care.\n\nThe trend towards more psychosocial and emotional challenges at work **does not mean that 'classical' exposures** or **ergonomically burdensome work has disappeared**. There is a large number of workers in all sectors — between 40% and 75% in ESENER and the EWCS — who report **ergonomic risks**. 
These are, for example, repetitive hand and arm movements in industry and service occupations, where a particularly high percentage is reported by low-skilled manual workers; moving heavy loads in craft occupations, or patients in health and care occupations, where a particularly high percentage is reported by high-skilled manual workers; and tiring and painful positions, where again the highest level is reported by high-skilled manual workers.\n\nStill a quite constant share of workers reports **exposure to physical risks like noise, vibrations, high or low temperatures and to chemical and biological agents**; depending on occupation and sector, between 15% and 30% of workers are exposed to such risks (EWCS). No or very minor decreases in these risks can be seen during the past 15 years.", - "page_start": 9, - "page_end": 9, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "methodology, the OSH practitioners who were asked in ESENER seem to have a different view on time pressure than the workers themselves who are respondents in the LFS.\n\n**Figure 15: Percentage of employed persons with working time under pressure (per country, sum of responses 'Always' and 'Often') – LFS Ad hoc 2019** \n\nOne hypothesis to explain the increased time pressure is to draw a direct **connection between short weekly working time and more intense work**; or in other words, a short weekly working time leads to more **intensification of work or more long hours or atypical working times** ('trading flexibility for effort').38\n\nThe analysis of EU survey data shows **a mixed picture**: Firstly, ESENER data corroborate this hypothesis, the three countries with highest percentage of work under time constraints — that is, Finland, Sweden and Denmark — all have working hours under the EU average. 
Secondly, LFS data show a different picture; a country like Greece has the longest working hours and also reports the highest time pressure, the same 'combination' — but less extreme — applies to Austria, Cyprus and Malta. Trends of low or less than average working time and no time constraints are reported for Lithuania, and medium working time and low time constraints for Italy and Ireland.\n\nAn analysis of EWCS data concluded39 that in general intensity increases with long working hours, in enterprises with 1-19 the work intensity index (on a scale between 0 and 12) is 4.4, in larger enterprises with above 40 employees it is 6.3. This is in line with ESENER data that corroborate the importance of the **size of the enterprise** for time pressure and long working hours.\n\nLiterature — from very diverse disciplines — on work intensification points to **reasons for intensification on developments as:40**\n\n- Economic developments, particularly the dominance of neoliberalist policies and enhanced competition between workers, companies and states; reduction of state influence and privatisation.41\n- Pressure due to substantial organisational changes, for example, introduction of short-term economic objectives in enterprise policies, 42 expansion into new markets or new countries, acquiring other enterprises or merging, being acquired, restructuring of management or of basic staff working conditions (contracts, working time, flexibility).43\n- Decrease of trade union influence or worker participation regarding labour relations.\n- Liberalisation of labour legislation, creation of 'new forms of work' and new contract types, beyond the permanent full-time employment.44\n- New forms of management, application of management concepts like just-in-time production or lean management, higher flexibility of production and higher customer orientation, 45", - "page_start": 35, - "page_end": 35, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - 
{ - "text": "horticulture and forestry, Luxembourg: Publications Office of the European Union, 2012, ISBN 978-92-79-22673- 1, doi:10.2767/53801\n\n391 European Commission Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs Industrial Transformation and Advanced Value Chains, Advanced Engineering and Manufacturing Systems*: Guide to application of the Machinery Directive 2006/42/EC,* Edition 2.2 – October 2019, p174 (Machinery directive annex 1, 1.1.6, Official Journal of the European Union, L157/37\n\n392 EU-OSHA: COVID-19: Back to the workplace - Adapting workplaces and protecting workers, 2020, https://osha.europa.eu/en/publications/covid-19-back-workplace-adapting-workplaces-and-protecting-workers 393 European Commission: Guidelines of the Commission on seasonal workers in the EU Factsheet on practical\n\nexamples and best practices, 2020 394 European Agency for Safety and Health at Work: E-guide to managing stress and psychosocial risks; https://osha.europa.eu/en/tools-and-resources/e-guides/e-guide-managing-stress-and-psychosocial-risks European Agency for Safety and Health at Work, 2020: Healthy workers, thriving companies - a practical guide to wellbeing at work, 2018. https://osha.europa.eu/en/publications/healthy-workers-thriving-companies-practicalguide-wellbeing-work\n\nISO 2021: ISO 45003 Occupational health and safety management - Psychological health and safety at work - Guidelines for managing psychosocial risks. https://www.iso.org/standard/64283.html\n\nEU Commission, 2018: Promoting mental health in the workplace. Guidance to implementing a comprehensive approach. https://ec.europa.eu/social/main.jsp?catId=738&langId=en&pubId=8098&furtherPubs=yes\n\nILO – International Labour Organization, Stress Prevention at Work Checkpoints. Practical improvements for stress prevention in the workplace, 2012. 
https://www.ilo.org/global/publications/books/WCMS_168053/lang- en/index.htm\n\nILO – International Labour Organization, 2021: Violence and harassment in the world of work: A guide on Convention No. 190 and Recommendation No. 206, 2021. https://www.ilo.org/global/topics/violenceharassment/resources/WCMS_814507/lang--en/index.htm\n\nWHO - World Health Organization, PRIMA-EF, 2008: Guidance on the European framework for psychosocial risk management : a resource for employer and worker representatives, https://apps.who.int/iris/handle/10665/43966 Senior Labour Inspectors Committee (SLIC), 2018: Labour inspectors' guide for assessing the quality of risk assessments and risk management measures with regard to prevention of psychosocial risks, here 395 Two EU-OSHA databases present several hundred guidance documents on Dangerous Substances https://osha.europa.eu/en/themes/dangerous-substances/practical-tools-dangerous-substances and Musculoskeletal Disorders (MSD): https://osha.europa.eu/en/themes/musculoskeletal-disorders/practical-toolsmusculoskeletal-disorders\n\n396 Examples of such tools and database are: EU-OSHA's Online Interactive Risk Assessment tool (OIRA) with more than 250 tools and more than 180,000 risk assessments https://osha.europa.eu/en/tools-and-resources/oira 397 See the series of EU-OSHA reports on 'Safety and health in micro and small enterprises in the EU: https://osha.europa.eu/en/themes/safety-and-health-micro-and-small-enterprises\n\nDescriptions of the good examples are available at: https://osha.europa.eu/en/tools-andpublications/publications/safety-andhealth-micro-and-small-enterprises-eu-policy-practice/view\n\n398 European Commission / Senior Labour inspectors committee: Guidance for National Labour Inspectors on addressing risks from worker exposure to respirable crystalline silica (RCS) on construction sites, October 2016\n\n399 Two of many examples: SME United, (Employers Federation) 
https://www.smeunited.eu/policies/policies/employment/health-safety , EFBWW European Federation of Building and Woodworkers (Trade Union), https://efbww.eu/activities/occupational-health-and-safety\n\n400 OSHWiki: Section 'OSH System at national level', descriptions of the social dialogue in each EU Member State https://oshwiki.eu/wiki/Category:OSH_systems_at_national_level\n\n401 DG Employment: Website on 'Social Dialogue', https://ec.europa.eu/social/main.jsp?catId=329&langId=en 402 Eurofound 'Database of wages, working time and collective disputes', see:\n\nhttps://www.eurofound.europa.eu/data/database-of-wages-working-time-and-collective-disputes 403 E.g.: Prevent (Sweden), DGUV (Germany) AUVA (Austria), see for all EU Member Stress the OSHWiki article on 'OSH-systems at national level' https://oshwiki.eu/wiki/Category:OSH_systems_at_national_level\n\n404 European Agency for Safety and Health at Work, 2021: Improving compliance with occupational safety and health regulations: an overarching review - Executive summary, https://osha.europa.eu/en/publications/summaryimproving-compliance-occupational-safety-and-health-regulations-overarching\n\nEuropean Agency for Safety and Health at Work, 2021: Improving compliance with occupational safety and health regulations: an overarching review, Literature review; Chapter 3: Societal norms, social reporting, corporate social responsibility and support for securing compliance, https://osha.europa.eu/en/publications/literature-reviewimproving-compliance-occupational-safety-and-health-regulations-0\n\nPodgorski, D., 2015: Measuring operational performance of OSH management systems – A demonstration of AHP-based selection of leading key performance indicators, in Safety Science, Vol. 
73, March 2015, p146-166, https://doi.org/10.1016/j.ssci.2014.11.018", - "page_start": 155, - "page_end": 155, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "A1 forms issued for postings of workers by EU27 countries, 2011-2020')\n\nEurostat: EU citizens living in another Member State - statistical overview, here\n\nEuropean Commission, 2019: Towards Fair Labour Mobility: Revision of EU Posting of Workers Rules, 2019\n\n309 The statistics distinguish between many different categories of migrants, for example, inside EU, from non-EUcountries, first generation, second generation, seasonal temporary, permanent status, etc. More information here\n\n310 Ibid.\n\n311 Fasani & Mazza, 2020: Immigrant Key Workers: Their Contribution to Europe's COVID-19 Response (p. 8). 312 Sometimes 'difficult' or 'demeaning' instead of 'demanding'. Taken from the Japanese: kitanai, kiken, kitsui\n\n313 Danaj et al., 2020: Labour Mobility and OSH Vulnerability of Posted Workers: The Cases of Austria and the Slovak Republic\n\n314 European Commission, 2022: Annual report on intra-EU labour mobility 2021 (p. 
108, table 'Numbers of PD A1 forms issued for postings of workers by EU27 countries, 2011-2020').\n\n315 European Parliament, 2017: Posted workers: better protection and fair conditions for all\n\n316 It is hardly foreseeable how far the experience of interrupted supply chains during the COVID-19 pandemic will contribute to a de-globalisation and reduction of international supply chain dependency.\n\n317 Such methodologies exist for the environmental field, well-known is the 'ecological footprint'.\n\n318 Eurofound and the ILO have jointly produced a pilot report on worldwide working conditions to achieve a better evidence base for actions and policies, see: Eurofound & ILO, 2019: Working conditions in a global perspective\n\n319 See: https://www.globalreporting.org/ or UN-PRI (UN Principles of responsible investment)\n\nhttps://www.unpri.org/\n\n320 United Nations, Global Compact, here\n\n321 European Commission: Corporate sustainability due diligence\n\n322 Regulation (EU) 2017/821 of the European Parliament and of the Council of 17 May 2017 laying down supply chain due diligence obligations for Union importers of tin, tantalum and tungsten, their ores, and gold originating from conflict-affected and high-risk areas, here\n\n323 Centennial Declaration of the International Commission on Occupational Health, ICOH\n\n324 ILO: Monitoring Compliance with International Labour Standards The key role of the ILO Committee of Experts on the Application of Conventions and Recommendations, here\n\n325 ILO: Conventions and Recommendations\n\n326 ILO : Convention C-155\n\n327 ILO : Convention C-187\n\n328 ILO: Safety and health at work\n\n329 ILO: Health and Safety at the Workplace\n\n330 International Social Security Association (ISSA): Vison Zero Overview, Section Companies, here\n\n331 United Nations, Social Development Goals (SDGs), Goal 8, here and here\n\n**332** WHO: Protecting workers' health, Key facts\n\n333 WHO, 2013: WHO Global Plan of Action on Workers' Health 
(2008-2017): baseline for implementation: global country survey 2008/2009: executive summary and survey findings, here\n\n334 United Nations, SDGs, Goal 8, here and here\n\n335 ILO Constitution\n\n336 ILO: Conventions and Recommendations\n\n337 Treaty Establishing the European Coal and Steel Community and Annexes I-III, PARIS, 18 APRIL 1951, Article 3e\n\n(DRAFT ENGLISH TEXT), here\n\n338 Consolidated Version of the Treaty on the Functioning of the European Union Official Journal of the European Union, C 326/47, 6.10.2012, Article 151 and Article 153, here\n\n339 The European Parliament, the Council and the Commission: The European Pillar of Social Rights in 20 principles, here\n\n340 EU-OSHA, 2021: Directive 89/391/EEC – OSH \"Framework Directive\" of 12 June 1989 on the introduction of measures to encourage improvements in the safety and health of workers at work - \"Framework Directive\", here 341 Ibid., Framework Directive – Section 2 Employers' obligations. 342 Ibid., Framework Directive – Section 3 Workers' obligations.", - "page_start": 152, - "page_end": 152, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "*are shifted towards non-permanent workers and subcontractors, who have less protection and/or knowledge to cope with these risks. This scenario is not easy to verify in quantitative data, although it is frequently stated in case study research.'*\n\nAlso, Eurofound draws such conclusions on the **impact of subcontracting on working conditions**: *'First, employees in subcontracting perceive higher health and safety risks, notably through more workrelated accidents and increased time pressure. Second, there are a number of psychological risk factors, such as perceived economic insecurity and worries about losing one's job, that are more likely among subcontracting workers.'*49\n\nThere is even an evident **relation between such forms of employment and higher rates of work accidents**. 
In a first systematic review the authors conclude:50\n\n*'This review supports an association between some of the dimensions of precarious employment and occupational injuries; most notably for multiple jobholders and employees of temp agencies or subcontractors at the same worksite. However, results for temporary employment are inconclusive.'*\n\n#### **OSH Barometer – Mental risks:**\n\nhttps://visualisation.osha.europa.eu/osh-barometer/working-conditions-preventions/workingconditions\n\n#### **ESENER – Data visualisation:**\n\nhttps://visualisation.osha.europa.eu/esener/en/survey/datavisualisation/2019\n\n### **3.2 Physical health risks at work**\n\nRisks at work that can result in physical harm can be divided into **safety** and **health risks**.\n\nThe main result of insufficient safety is a work accident. A **work accident** has as immediate consequences either a personal injury, a disease, or death of one or more workers. Eurostat distinguishes between non-fatal and fatal work accidents, and for the majority of sectors it provides also the duration of the absence due to the accident — an indicator for the severity of the injury. Non-fatal accidents at work can cause medium- or long-term health consequences, and in the worst case a permanent disability.\n\nILO Definition of accident: 'An occupational accident is an unexpected and unplanned occurrence, including acts of violence, arising out of or in connection with work, which results in one or more workers incurring a personal injury, disease or death.'51\n\n**Physical health risks** can be caused by a **variety of circumstances and exposures** or by **inadequate ergonomics**. Natural **circumstances** at work can pose such health risks, that is, temperature, storms and floods, unsafe terrain, biological agents and so on; or the risks are due to manmade circumstances, that is, work in buildings, on roofs and towers, on traffic routes, under artificial ventilation. 
**Exposure** is a general term to describe the interaction between environment / emissions / contaminants and the human organism. In a workplace context, 'exposure' mainly covers emissions from machinery or from tools and materials, for example, noise, vibration, dust, electromagnetic fields and chemical substances.\n\nRisks from **inadequate ergonomics** harm in particular the musculoskeletal system. Ergonomic risks of manual work are typically caused by repetitive hand and arm movements, tiring positions, for example, permanent kneeling or overhead work, lifting and moving of heavy loads, or of patients and so on. A certain ergonomic risk is **physical inactivity**, in practice sitting most of the working time. Not only administrative tasks but also many occupations in service or industry require permanent sitting, for example, drivers, cashiers, part assembly operators and so on (often called 'sedentary occupations').\n\nIn general, the EU-wide surveys (self-reported working conditions or health problems) show a high prevalence of ergonomic risks. Between 40% and 65% of the respondents in ESENER and the EWCS report **classical ergonomic risks**. A quite constant share of workers reports **physical exposures** like noise, vibrations, high or low temperatures and exposure to chemical and biological agents; depending", - "page_start": 37, - "page_end": 37, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "367 ETUC, 2021: Huge fall in labour inspections raises Covid risk\n\nETUC observed in the same period a fall of 0.5 million inspections. Quote: *'New ETUC research reveals that safety inspections have been cut by a fifth since 2010, falling from 2.2 million annual visits to 1.7 million.'* 368 Eurostat: Annual enterprise statistics by size class for special aggregates of activities (NACE Rev. 
2), here 21.2 million businesses have between 0 and 9 employees.\n\n369 EPSU, 2012: A mapping report on Labour Inspection Services in 15 European countries (p. 15ff). 370 ETUC, 2021: Huge fall in labour inspections raises Covid risk\n\n371 European Agency for Safety and Health at Work, ESENER 2019, Question: Whether establishments have been visited by inspectorates in the last three years (% establishments by country ESENER 2019 and 2014, here 372 SLIC, 2018: Labour inspectors' guide for assessing the quality of risk assessments and risk management measures with regard to prevention of psychosocial risks, No-Binding Publication for EU Labour Inspectors, here 373 EU-OSHA: E-guide to managing stress and psychosocial risks\n\nEU-OSHA, 2018: \"Healthy workers, thriving companies - a practical guide to wellbeing at work\"\n\nISO, 2021: ISO 45003 Occupational health and safety management - Psychological health and safety at work - Guidelines for managing psychosocial risks, here\n\n374 European Commission, 2018: Promoting mental health in the workplace. Guidance to implementing a comprehensive approach\n\n375 ILO, 2012: Stress Prevention at Work Checkpoints. Practical improvements for stress prevention in the workplace\n\nILO, 2021: Violence and harassment in the world of work: A guide on Convention No. 190 and Recommendation No. 
206\n\n376 WHO, 2008: PRIMA-EF : guidance on the European framework for psychosocial risk management : a resource for employer and worker representatives\n\n377 OSH Barometer, section Steering of OSH, National strategies, Activities, here\n\n378 Arbetsmiljöverket, 2015: Organisatorisk och social arbetsmiljö (AFS 2015:4), föreskrifter (Organisational and social work environment, Ordinance).\n\n379 The European Commission (2002) defined stress as the pattern of emotional, cognitive, behavioural and physiological reactions to adverse and noxious aspects of work content, work organisation and work environment.\n\nEuropean Commission, Directorate-General for Employment, Social Affairs and Inclusion, 2000: Guidance on work-related stress – Spice of life or kiss of death?\n\n380 See the OSHWiki article on Psychosocial issues\n\n381 OSHWiki provides in its articles on 'OSH System at national level' for the EU Member States a chapter on the 'National strategy', here\n\n382 COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL, THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE AND THE COMMITTEE OF THE REGIONS: EU strategic framework on health and safety at work 2021-2027: Occupational safety and health in a changing world of work, {SWD(2021) 148 final} - {SWD(2021) 149 final, Brussels, 28.6.2021, here **383** OSHWiki: EU OSH Strategic framework\n\n384 EU-OSHA, 2019: National Strategies in the field of Occupational Safety and Health in the EU 385 The OSHWiki articles 'OSH System at national level'\n\n(https://oshwiki.eu/wiki/Category:OSH_systems_at_national_level ) contain a chapter on the 'National Strategy', mostly updated to the newest strategy. 
in the OSH Barometer see the Section National Strategies\n\n386 European Agency for Safety and Health at Work, 2019: National Strategies in the field of Occupational Safety and Health in the EU, p 9, https://osha.europa.eu/en/file/108414/download?token=2yF1UnxW 387 The OSH Barometer contains a special section dedicated to National OSH Strategies:\n\nhttps://visualisation.osha.europa.eu/osh-barometer/osh-steering/national-strategies\n\n388 One of the many schemes for the characterisation of national OSH-policies and practical implementation was developed in the Nordic Council of Ministers publication: Suikkanen, A., & Kunnari, M. 2008: Principles and concepts in Nordic occupational safety and health policies: dimensions of strategic thinking and approaches. Nordic Council of Ministers.\n\nIt distinguishes between eight categories of state actions: punitive / supervising / regulative / legislative / incentivising / consultative / informative / networking / collaborating / awareness raising / knowledge enhancing (from training to research)\n\n389 European Agency for Safety and Health at Work, 2021: Improving compliance with occupational safety and health regulations: an overarching review- Report Executive summary, European Risk Observatory Report, p43 390 European Commission, DG EMPL: A non–binding guide to best practice with a view to improving the application of related directives on protecting health and safety of workers in agriculture, livestock farming,", - "page_start": 154, - "page_end": 154, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "infographic5.pdf", - "query": "Was knowledge domain agnosticism a goal in the development of OLAF?", - "target_page": 1, - "target_passage": "Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations represented largely depend on one or more business use cases. 
As we designed our framework with industry application in mind, we need to consider it within its real-world usage context.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.[a]\n\n## **Reasoning and problem-solving**\n\nEarly researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. [13] By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics. [14]\n\nMany of these algorithms are insufficient for solving large reasoning problems because they experience a \"combinatorial explosion\": They become exponentially slower as the problems grow. [15] Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[16] Accurate and efficient reasoning is an unsolved problem.\n\n### **Knowledge representation**\n\nKnowledge representation and knowledge engineering[17] allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval,[18] scene interpretation,[19] clinical decision support,[20] knowledge discovery (mining \"interesting\" and actionable inferences from large databases),[21] and other areas.[22]\n\nA knowledge base is a body of knowledge represented in a form that can be used by a program. 
An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge.[23] Knowledge bases need to represent things such as objects, properties, categories, and relations between objects;[24] situations, events, states, and time;[25] causes and effects;[26] knowledge about knowledge (what we know about what other people\n\nAn ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.\n\nknow);[27] default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing);[28] and many other aspects and domains of knowledge.\n\nAmong the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous);[29] and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as \"facts\" or \"statements\" that they could express verbally).[16] There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications.[c]\n\n## **Planning and decision-making**", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia3.pdf" - }, - { - "text": "of being conscious is merely an error in perception, held by brains which evolved to hold erroneous and incomplete models of their own internal workings, just as they hold erroneous and incomplete models of their own bodies and of the external world.[77][78]\n\n#### **Criticisms**\n\nThe main criticisms of eliminative materialism and illusionism hinge on the counterintuitive nature of the view. Arguments of this form are called *Moorean Arguments*. 
A Moorean argument seeks to undermine the conclusion of an argument by asserting that the negation of that conclusion is more certain than the premises of the argument.[79]\n\nThe roots of the Moorean Argument against illusionism extend back to Augustine of Hippo who stated that he could not be deceived regarding his own existence, since the very act of being deceived secures the existence of a being there to be the recipient of that deception.[note 1][80]\n\nIn the Early-Modern era, these arguments were repopularized by René Descartes, who coined the now famous phrase *\"Je pense, donc je suis\"* (\"I think, therefore I am\").[81] Descartes argued that even if he was maximally deceived (because, for example, an evil demon was manipulating all his senses) he would still know with certainty that his mind exists, because the state of being deceived requires a mind as a prerequisite.[82]\n\nThis same general argumentative structure is still in use today. For example, in 2002 David Chalmers published an explicitly Moorean argument against illusionism. The argument goes like this: The reality of consciousness is more certain than any theoretical commitments (to, for example, physicalism) that may be motivating the illusionist to deny the existence of consciousness. The reason for this is because we have direct \"acquaintance\" with consciousness, but we do not have direct acquaintance with anything else (including anything that could inform our beliefs in consciousness being an illusion). In other words: consciousness can be known directly, so the reality of consciousness is more certain than any philosophical or scientific theory that says otherwise.[83] Chalmers concludes that \"there is little doubt that something like the Moorean argument is the reason that most people reject illusionism and many find it crazy.\"[84]\n\nEliminative materialism and illusionism have been the subject of criticism within the popular press. 
One highly cited example comes from the philosopher Galen Strawson who wrote an article in the New York Review of Books titled \"The Consciousness Deniers\". In it, Strawson describes illusionism as the \"silliest claim ever made\", next to which \"every known religious belief is only a little less sensible than the belief that the grass is green.\"[85] Another notable example comes from Christof Koch (a neuroscientist and one of the leading proponents of Integrated Information Theory) in his popular science book *The Feeling of Life Itself*. In the early pages of the book, Koch describes eliminativism as the \"metaphysical counterpart to Cotard's syndrome, a psychiatric condition in which patients deny being alive.\"[86] Koch takes the prevalence of eliminativism as evidence that \"much of twentieth-century analytic philosophy has gone to the dogs\".[87]\n\n### **Type-B Materialism**\n\nType-B Materialism, also known as *Weak Reductionism* or *A Posteriori Physicalism*, is the view that the hard problem stems from human psychology, and is therefore not indicative of a genuine ontological gap between consciousness and the physical world.[43] Like Type-A Materialists, Type-B Materialists are", - "page_start": 9, - "page_end": 9, - "source_file": "wikipedia2.pdf" - }, - { - "text": "impossible within the bounds of nature but possible within the bounds of logic.[47] This would imply that facts about experience are not logically entailed by the \"physical\" facts. Therefore, consciousness is irreducible. 
In Chalmers' words, \"after God (hypothetically) created the world, he had more work to do.\"[48] Daniel Dennett, a philosopher of mind, criticised the field's use of \"the zombie hunch\" which he deems an \"embarrassment\"[49] that ought to \"be dropped like a hot potato\".[29]\n\n#### **Knowledge argument**\n\nThe knowledge argument, also known as *Mary's Room*, is another common thought experiment: A hypothetical neuroscientist named Mary has lived her whole life in a black-and-white room and has never seen colour before. She also happens to know everything there is to know about the brain and colour perception.[50] Chalmers believes[48] that when Mary sees the colour red for the first time, she gains new knowledge — the knowledge of \"what red looks like\" — which is distinct from, and irreducible to, her prior physical knowledge of the brain or visual system. A stronger form of the knowledge argument[50] claims not merely that Mary would lack subjective *knowledge* of \"what red looks like,\" but that she would lack knowledge of an objective *fact* about the world: namely, \"what red looks like,\" a non-physical fact that can be learned only through direct experience (qualia). Others, such as Thomas Nagel, take a \"physicalist\" position, disagree with the argument in its stronger and/or weaker forms.[50] For example, Nagel put forward a \"speculative proposal\" of devising a language that could \"explain to a person blind from birth what it is like to see.\"[31] The knowledge argument implies that such a language could not exist.\n\n# **Philosophical responses**\n\nDavid Chalmers' formulation of the hard problem of consciousness provoked considerable debate within philosophy of mind, as well as scientific research.[43]\n\nThe hard problem is considered a problem primarily for physicalist views of the mind (the view that the mind is a physical object or process), since physical explanations tend to be functional, or structural. 
Because of this, some physicalists have responded to the hard problem by seeking to show that it dissolves upon analysis. Other researchers accept the problem as real and seek to develop a theory of consciousness' place in the world that can solve it, by either modifying physicalism or abandoning it in favour of an alternative ontology (such as panpsychism or dualism). A third response has been to accept the hard problem as real but deny human cognitive faculties can solve it.\n\nA diagram showing the relationship between various views concerning the relationship between consciousness and the physical world\n\nPhilPapers is an organization that archives academic philosophy\n\npapers and periodically surveys professional philosophers about their views. It can be used to gauge professional attitudes towards the hard problem. As of the 2020 survey results, it seems that the majority of philosophers (62.42%) agree that the hard problem is real, with a substantial minority that disagrees (29.76%).[25]", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Figure 4.15 Domain and Range inferred by the reasoner\n\nIt is possible to specify more than one class as the domain or range of a property. One of the most common mistakes of new users is to do this and expect that the resulting domain/range is the union of the two classes. However, note that next to the Domain and Range in the Description view it says (intersection). This is because the semantics of having 2 or more classes as the domain or range is the *intersection* of those classes *not* the union. E.g., if one defined the domain for a property to be Pizza and then added another domain IceCream that would mean that for something to be in the domain of that property it would have to be an instance of *both* Pizza *and* IceCream not (as people often expect) the *union* of those two sets which would be *either* the class Pizza *or* the class IceCream. 
Also, note that the domain and range are for inferencing, they are not data integrity constraints. This distinction will be explained in more detail below in the section on SHACL.", - "page_start": 28, - "page_end": 28, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "meta-problem will solve or dissolve the hard problem. A weaker line holds that it will not remove the hard problem, but it will constrain the form of a solution.\n\nIn other words, the 'strong line' holds that the solution to the meta-problem would provide an explanation of our beliefs about consciousness that is independent of consciousness. That would debunk our beliefs about consciousness, in the same way that explaining beliefs about god in evolutionary terms may provide arguments against theism itself.[144]\n\n# **In popular culture**\n\nTom Stoppard's play *The Hard Problem*, first produced in 2015, is named after the hard problem of consciousness, which Stoppard defines as having \"subjective First Person experiences\".[145]\n\n# **See also**\n\n*Philosophy portal*\n\n- Animal consciousness\n- Artificial consciousness\n- Binding problem\n- Blindsight\n- Chinese room\n- *Cogito, ergo sum*\n- Cryonics\n- Free will\n- Ideasthesia\n- Introspection\n- Knowledge by acquaintance\n- List of unsolved problems in biology\n- Mind–body problem\n- Phenomenalism\n- Philosophy of self\n- Primary–secondary quality distinction\n- Problem of mental causation\n- Problem of other minds\n- Vertiginous question\n- Von Neumann–Wigner interpretation\n\n# **Notes**\n\n- 1. \"But, without any delusive representations of images or phantasms, I am most certain that I am, and that I know and delight in this. In respect to these truths I am not at all afraid of the arguments of the Academians, who say, What if you are deceived? For if I am deceived, I am. For he who is not, cannot be deceived...\"\n- 2. There has been debate over how best to characterize James' position. 
The *Stanford Encyclopedia of Philosophy* states: \"James's commitment to panpsychism remains somewhat controversial, since he also advanced a cogent set of objections against a version of the view, which he labelled the 'mind dust' theory, in chapter six of The Principles of Psychology ([1890] 1981). These objections are the inspiration for the so-called 'combination problem', around which much of the twenty first century literature on panpsychism focuses.\"\n\n# **References**", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Attitudes towards physicalism also differ among professionals. In the 2009 PhilPapers survey, 56.5% of philosophers surveyed subscribed to physicalism and 27.1% of philosophers surveyed rejected physicalism. 16.4% fell into the \"other\" category. [51] In the 2020 PhilPapers survey, 51.93% of philosophers surveyed indicated that they \"accept or lean towards\" physicalism and 32.08% indicated that they reject physicalism. 6.23% were \"agnostic\" or \"undecided\".[25]\n\nDifferent solutions have been proposed to the hard problem of consciousness. The sections below taxonomizes the various responses to the hard problem. The shape of this taxonomy was first introduced by Chalmers in a 2003 literature review on the topic.[52] The labelling convention of this taxonomy has been incorporated into the technical vocabulary of analytic philosophy, being used by philosophers such as Adrian Boutel,[53] Raamy Majeed,[54] Janet Levin,[55] Pete Mandik & Josh Weisberg,[56] Roberto Pereira,[57] and Helen Yetter-Chappell.[58]\n\n### **Type-A Materialism**\n\nType-A materialism (also known as *reductive materialism* or *a priori physicalism*) is a view characterized by a commitment to physicalism and a full rejection of the hard problem. 
By this view, the hard problem either does not exist or is just another easy problem, because every fact about the mind is a fact about the performance of various functions or behaviours. So, once all the relevant functions and behaviours have been accounted for, there will not be any facts left over in need of explanation.[52] Thinkers who subscribe to type-A materialism include Paul and Patricia Churchland, Daniel Dennett, Keith Frankish, and Thomas Metzinger.\n\nSome type-A materialists believe in the reality of phenomenal consciousness but believe it is nothing extra in addition to certain functions or behaviours. This view is sometimes referred to as *strong reductionism*. [43][52] Other type-A materialists may reject the existence of phenomenal consciousness entirely. This view is referred to as eliminative materialism or illusionism. [59][60][61]\n\n#### **Strong reductionism**\n\nMany philosophers have disputed that there is a hard problem of consciousness distinct from what Chalmers calls the easy problems of consciousness. Some among them, who are sometimes termed *strong reductionists*, hold that phenomenal consciousness (i.e., conscious experience) does exist but that it can be fully understood as reducible to the brain.[43]\n\nBroadly, strong reductionists accept that conscious experience is real but argue it can be fully understood in functional terms as an emergent property of the material brain.[43] In contrast to weak reductionists (see above), strong reductionists reject ideas used to support the existence of a hard problem (that the same functional organization could exist without consciousness, or that a blind person who understood vision through a textbook would not know everything about sight) as simply mistaken intuitions.[43][52]\n\nA notable family of strong reductionist accounts are the higher-order theories of consciousness. 
[62][43] In 2005, the philosopher Peter Carruthers wrote about \"recognitional concepts of experience\", that is, \"a capacity to recognize [a] type of experience when it occurs in one's own mental life,\" and suggested that such a capacity could explain phenomenal consciousness without positing qualia.[63] On the higher-order view, since consciousness is a representation, and representation is fully functionally analyzable, there is no hard problem of consciousness.[43]", - "page_start": 6, - "page_end": 6, - "source_file": "wikipedia2.pdf" - }, - { - "text": "In Chinese philosophy, the School of Names and Mohism were particularly influential. The School of Names focused on the use of language and on paradoxes. For example, Gongsun Long proposed the white horse paradox, which defends the thesis that a white horse is not a horse. The school of Mohism also acknowledged the importance of language for logic and tried to relate the ideas in these fields to the realm of ethics.[197]\n\nIn India, the study of logic was primarily pursued by the schools of Nyaya, Buddhism, and Jainism. It was not treated as a separate academic discipline and discussions of its topics usually happened in the context of epistemology and theories of dialogue or argumentation.[198] In Nyaya, inference is understood as a source of knowledge (pramāṇa). It follows the perception of an object and tries to arrive at conclusions, for example, about the cause of this object.[199] A similar emphasis on the relation to epistemology is also found in Buddhist and Jainist schools of logic, where inference is used to expand the knowledge gained through other sources.[200] Some of the later theories of Nyaya, belonging to the Navya-Nyāya school, resemble modern forms of logic, such as Gottlob Frege's distinction between sense and reference and his definition of number. 
[201]\n\nThe syllogistic logic developed by Aristotle predominated in the West until the mid-19th century, when interest in the foundations of mathematics stimulated the development of modern symbolic logic.[202] Many see Gottlob Frege's *Begriffsschrift* as the birthplace of modern logic. Gottfried Wilhelm Leibniz's idea of a universal formal language is often considered a forerunner. Other pioneers were George Boole, who invented Boolean algebra as a mathematical system of logic, and Charles Peirce, who developed the logic of relatives. Alfred North Whitehead and Bertrand Russell, in turn, condensed many of these insights in their work *Principia Mathematica*. Modern logic introduced novel concepts, such as functions, quantifiers, and relational predicates. A hallmark of modern symbolic logic is its use of formal language to precisely codify its insights. In this regard, it departs from earlier logicians, who relied mainly on natural language.[203] Of particular influence was the development of first-order logic, which is usually treated as the standard system of modern logic.[204] Its analytical generality allowed the formalization of mathematics and drove the investigation of set theory. It also made Alfred Tarski's approach to model theory possible and provided the foundation of modern mathematical logic.[205]\n\n# **See also**\n\n*Philosophy portal*", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia1.pdf" - }, - { - "text": "report that an intermediate fine-tuning step with supervised parsing does not make much difference for downstream task performance. models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as \"fillin-the-blank\" cloze statements. Language\n\nAbstract\n\nRecent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. 
Whilst learning linguistic knowledge, these\n\nFabio Petroni1 Tim Rocktaschel ¨\n\n#### 3.2 Semantic knowledge models have many advantages over structured knowledge bases: they require no schema en-\n\narXiv:1909.01066v2 [cs.CL] 4 Sep 2019\n\nTo date, more studies have been devoted to BERT's knowledge of syntactic rather than semantic phenomena. However, we do have evidence from an MLM probing study that BERT has some knowledge of semantic roles (Ettinger, 2019). BERT even displays some preference for the incorrect fillers for semantic roles that are semantically related to the correct ones, as opposed to those that are unrelated (e.g. \"to tip a chef\" is better than \"to tip a robin\", but worse than \"to tip a waiter\"). gineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-theart pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answer-\n\nTenney et al. (2019b) showed that BERT encodes information about entity types, relations, semantic roles, and proto-roles, since this information can be detected with probing classifiers. ing against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to re-\n\nBERT struggles with representations of numbers. Addition and number decoding tasks showed that BERT does not form good representations for floating point numbers and fails to generalize away from the training data (Wallace et al., 2019b). 
A part of the problem is BERT's wordpiece tokenization, since numbers of similar values can be divided up into substantially different word chunks. call factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https: //github.com/facebookresearch/LAMA. 1 Introduction Recently, pretrained high-capacity language models such as ELMo (Peters et al., 2018a) and BERT\n\nOut-of-the-box BERT is surprisingly brittle to named entity replacements: e.g. replacing names in the coreference task changes 85% of predictions (Balasubramanian et al., 2020). This suggests that the model does not actually form a generic idea of named entities, although its F1 scores on NER probing tasks are high (Tenney et al., 2019a). Broscheit (2019) find that fine-tuning BERT on Wikipedia entity linking \"teaches\" it additional entity knowledge, which would suggest that it did not absorb all the relevant entity information during pre-training on Wikipedia. (Devlin et al., 2018a) have become increasingly important in NLP. They are optimised to either predict the next word in a sequence or some masked word anywhere in a given sequence (*e.g.* \"Dante was born in [Mask] in the year 1265.\"). The parameters of these models appear to store\n\n#### 3.3 World knowledge\n\nThe bulk of evidence about commonsense knowledge captured in BERT comes from practitioners using it to extract such knowledge. One direct probing study of BERT reports that BERT struggles with pragmatic inference and role-based event knowledge (Ettinger, 2019). 
BERT also struggles with abstract attributes of objects, as well as visual and perceptual properties that are likely to be assumed rather than mentioned (Da and Kasai, 2019).\n\nThe MLM component of BERT is easy to adapt for knowledge induction by filling in the\n\nMemory Query Answer\n\nSymbolic Memory Access\n\nFlorence\n\n(Dante, born-in, X)\n\n1,2 Patrick Lewis1,2 Anton Bakhtin1\n\nDante\n\nFlorence born-in\n\nLanguage Models as Knowledge Bases?\n\nYuxiang Wu1,2 Alexander H. Miller1 Sebastian Riedel1,2 1Facebook AI Research 2University College London {fabiopetroni, rockt, plewis, yolo, yuxiangwu, ahm, sriedel}@fb.com\n\nKG\n\nFigure 1: Querying knowledge bases (KB) and language models (LM) for factual knowledge. Figure 2: BERT world knowledge (Petroni et al., 2019)\n\nvast amounts of linguistic knowledge (Peters et al., 2018b; Goldberg, 2019; Tenney et al., 2019) useful for downstream tasks. This knowledge is usually accessed either by conditioning on latent context representations produced by the original model or by using the original model weights to initialize a task-specific model which is then further fine-tuned. This type of knowledge transfer is crucial for current state-of-the-art results on a wide range of tasks. blanks (e.g. \"Cats like to chase [___]\"). Petroni et al. (2019) showed that, for some relation types, vanilla BERT is competitive with methods relying on knowledge bases (Figure 2), and Roberts et al. (2020) show the same for open-domain QA using T5 model (Raffel et al., 2019). Davison et al. (2019) suggest that it generalizes better to unseen data. In order to retrieve BERT's knowledge, we need good template sentences, and there is work on their automatic extraction and augmentation (Bouraoui et al., 2019; Jiang et al., 2019b).\n\nIn contrast, knowledge bases are effective solutions for accessing annotated gold-standard relational data by enabling queries such as (Dante, born-in, X). 
However, in practice we often need to *extract* relational data from text or other modalities to populate these knowledge bases. This requires complex NLP pipelines involving entity extraction, coreference resolution, entity linking and relation extraction (Surdeanu and Ji, 2014) components that often need supervised data and fixed schemas. Moreover, errors can easily propagate and accumulate throughout the pipeline. Instead, we could attempt to query neural language models for relational data by asking them to fill in masked tokens in sequences like \"Dante was born However, BERT cannot reason based on its world knowledge. Forbes et al. (2019) show that BERT can \"guess\" the affordances and properties of many objects, but can not reason about the relationship between properties and affordances. For example, it \"knows\" that people can walk into houses, and that houses are big, but it cannot infer that houses are bigger than people. Zhou et al. (2020) and Richardson and Sabharwal (2019) also show that the performance drops with the number of necessary inference steps. Some of BERT's world knowledge success comes from learning stereotypical associations (Poerner et al., 2019), e.g., a person with an Italian-sounding name is predicted to be Italian, even when it is incorrect.\n\n#### 3.4 Limitations\n\nMultiple probing studies in section 3 and section 4 report that BERT possesses a surprising amount of syntactic, semantic, and world knowledge. However, Tenney et al. (2019a) remarks, \"the fact that a linguistic pattern is not observed by our probing classifier does not guarantee that it is not there, and the observation of a pattern does not tell us how it is used.\" There is also the issue of how complex a probe should be allowed to be (Liu et al., 2019a). 
If a more complex probe recovers more information, to what extent are we still relying on the original model?\n\nFurthermore, different probing methods may lead to complementary or even contradictory conclusions, which makes a single test (as in most stud-", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Paraconsistent logics are logical systems that can deal with contradictions. They are formulated to avoid the principle of explosion: for them, it is not the case that anything follows from a contradiction.[139] They are often motivated by dialetheism, the view that contradictions are real or that reality itself is contradictory. Graham Priest is an influential contemporary proponent of this position and similar views have been ascribed to Georg Wilhelm Friedrich Hegel. [140]\n\n# **Informal**\n\nInformal logic is usually carried out in a less systematic way. It often focuses on more specific issues, like investigating a particular type of fallacy or studying a certain aspect of argumentation. Nonetheless, some frameworks of informal logic have also been presented that try to provide a systematic characterization of the correctness of arguments.[141]\n\nThe *pragmatic* or *dialogical approach* to informal logic sees arguments as speech acts and not merely as a set of premises together with a conclusion.[142] As speech acts, they occur in a certain context, like a dialogue, which affects the standards of right and wrong arguments.[143] A prominent version by Douglas N. Walton understands a dialogue as a game between two players. The initial position of each player is characterized by the propositions to which they are committed and the conclusion they intend to prove. 
Dialogues are games of persuasion: each player has the goal of convincing the opponent of their own conclusion.[144] This is achieved by making arguments: arguments are the moves of the game.[145] They affect to which propositions the players are committed. A winning move is a successful argument that takes the opponent's commitments as premises and shows how one's own conclusion follows from them. This is usually not possible straight away. For this reason, it is normally necessary to formulate a sequence of arguments as intermediary steps, each of which brings the opponent a little closer to one's intended conclusion. Besides these positive arguments leading one closer to victory, there are also negative arguments preventing the opponent's victory by denying their conclusion.[144] Whether an argument is correct depends on whether it promotes the progress of the dialogue. Fallacies, on the other hand, are violations of the standards of proper argumentative rules.[146] These standards also depend on the type of dialogue. For example, the standards governing the scientific discourse differ from the standards in business negotiations.[147]\n\nThe *epistemic approach* to informal logic, on the other hand, focuses on the epistemic role of arguments.[148] It is based on the idea that arguments aim to increase our knowledge. They achieve this by linking justified beliefs to beliefs that are not yet justified.[149] Correct arguments succeed at expanding knowledge while fallacies are epistemic failures: they do not justify the belief in their conclusion.[150] For example, the fallacy of begging the question is a *fallacy* because it fails to provide independent justification for its conclusion, even though it is deductively valid.[151] In this sense, logical normativity consists in epistemic success or rationality. 
[149] The Bayesian approach is one example of an epistemic approach.[152] Central to Bayesianism is not just whether the agent believes something but the degree to which they believe it, the so-called *credence*. Degrees of belief are seen as subjective probabilities in the believed proposition, i.e. how certain the agent is that the proposition is true.[153] On this view, reasoning can be interpreted as a process of changing one's credences, often in reaction to new", - "page_start": 12, - "page_end": 12, - "source_file": "wikipedia1.pdf" - }, - { - "text": "Finding a provably correct or optimal solution is intractable for many important problems.[15] Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks.\n\n#### **Narrow vs. general AI**\n\nAI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.[378][379] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.\n\n#### **Machine consciousness, sentience, and mind**\n\nThe philosophy of mind does not know whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. 
Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that \"[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on.\"[380] However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.\n\n#### **Consciousness**\n\nDavid Chalmers identified two problems in understanding the mind, which he named the \"hard\" and \"easy\" problems of consciousness.[381] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this *feels* or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a colorblind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to *know what red looks like*. [382]\n\n#### **Computationalism and functionalism**\n\nComputationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam. 
[383]\n\nPhilosopher John Searle characterized this position as \"strong AI\": \"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.\"[ac] Searle challenges this claim with his Chinese room argument, which attempts to", - "page_start": 25, - "page_end": 25, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "infographic5.pdf", - "query": "Is OLAF a specific strategy for ontological learning or is it a toolbox of different strategies?", - "target_page": 1, - "target_passage": "Our vision is to implement a toolbox of methods we can gather to build pipelines. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## **OLAF : Ontology Learning Applied Framework**\n\nMarion SCHAEFFER (marion.schaeffer@insa-rouen.fr) - Matthias SESBOUE (matthias.sesboue@insa-rouen.fr) Jean-Philippe KOTOWICZ - Nicolas DELESTRE - Cecilia ZANNI-MERK\n\nSince the beginning of the century, research on ontology learning has gained popularity. Automatically **extracting and structuring knowledge** relevant to a domain of interest from unstructured data is a major scientific challenge. We propose a new approach with a **modular ontology learning framework** considering tasks from data pre-processing to axiom extraction. Whereas previous contributions considered ontology learning systems as tools to help the domain expert, we developed the proposed framework with **full automation** in mind. An implementation as an **opensource and collaborative python library** is available at https://gitlab.insa-rouen.fr/msesboue/ontology-learning.\n\n## **STATE OF THE ART**\n\n| System | Overview | Pros and cons |\n| --- | --- | --- |\n| | It is the reference in the field as it defines a | Ontologies can be exported in |\n| Text2Onto, | representation-agnostic structure with modular | various formats. 
GATE system |\n| 2005, [1] | steps and takes into account uncertainty. The | adds great visualisations. But it is |\n| | system is implemented as a GATE module. | not maintained since 2011. |\n| | It focuses on multiword terms to construct a | It considers only multiword |\n| | \"lexicalised ontology\" by adapting an agglomerative | terms and relies on WordNet |\n| OntoGain, | clustering and an FCA method. It implements 4 | and POS tags. It does not |\n| 2010, [2] | steps: text preprocessing, concept extraction (C/NC | distinguish between terms and |\n| | value), taxonomy construction, and non-taxonomic | concepts and implements |\n| | relation acquisition (rule-based and probabilistic). | different adaptable approaches. |\n| | It focuses on \"lexicalised ontologies\" and uses seed | It relies on WordNet and POS |\n| OntoLearn | knowledge. It implements 5 steps: terminology | tags and does not distinguish |\n| (Reloaded), | extraction, hypernym graph construction, domain | between terms and concepts. |\n| 2013, [3] | filtering of hypernyms, hypernym graph pruning and | It implements different |\n| | edge recovery. | adaptable approaches. |\n\n## **OLAF IN A PRACTICAL CONTEXT**\n\n## **ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE**\n\nOur framework provides several algorithms for the different stages of the pipeline. The algorithms are taken from external libraries or directly implemented in the framework. The goal is to have as many methods as possible to cover the maximum needs.\n\nMost ontology learning systems do not consider the targeted ontologybased system. Though an ideal ontology should model a domain in an application-independent manner, in practice, **concepts and relations represented largely depend on one or more business use cases**. 
As we designed our framework with industry application in mind, we need to consider it within its **real-world usage context**.\n\n> We choose **Python** as it eases access to the vast python community and its library ecosystem, particularly **NLP tools** and numerous **Machine Learning (ML) libraries**.\n\nWe designed the proposed framework focusing on **automation** with very little, if any, human involvement in mind. Unlike most existing approaches, particular attention is brought to the **learned ontology final production use case**. We implement the framework as an open-source and openaccess python library. We aim to **gather feedback and grow a community** to develop and test multiple algorithms. Various satellite tools could be developed to enhance the framework implementation. However, we should focus on developing **axiom extraction** and **automatic ontology evaluation**. One exciting research area might be the adaptation of the software industry's \"DevOps\" concepts to knowledge management. The latter field is known as \"SemOps\".\n\nCimiano P, Völker J. Text2Onto. Natural Language Processing and Information Systems. Berlin, Heidelberg: Springer Berlin Heidelberg; 2005.p. 227-238. ISBN: 978-3-540-32110-1 1.\n\nDrymonas E, Zervanou K, Petrakis EGM. Unsupervised Ontology Acquisition from Plain Texts: The OntoGain System. Natural Language Processing and Information Systems. Berlin, Heidelberg: Springer Berlin Heidelberg; 2010. p. 277-87. ISBN: 978-3-642-13881-2 2.\n\nPaola Velardi, Stefano Faralli, Roberto Navigli; OntoLearn Reloaded: A Graph-Based Algorithm for Taxonomy Induction. Computational Linguistics 2013; 39 (3): 665–707. 
DOI: 10.1162/COLI_a_00146 3.\n\nMuhammad Nabeel Asim, Muhammad Wasim, Muhammad Usman Ghani Khan, Waqar Mahmood, Hafiza Mahnoor Abbasi, A survey of ontology learning techniques and applications, Database, Volume 2018, 2018, bay101, DOI: 10.1093/database/bay101 4.", - "page_start": 0, - "page_end": 0, - "source_file": "infographic5.pdf" - }, - { - "text": "next section. Which option you choose for your ontology will depend on the specific requirements you have as well as the standards established by your organization or organizations that you work with.\n\nFinally, another name related concept you should be aware of is the concept of a namespace. If you have worked with most modern programming languages such as Python or Java, you are already familiar with the concept of a namespace. The concept is identical in OWL. A namespace is used to avoid naming conflicts between different ontologies. For example, you may have a class called Network in an ontology about telecommunications. You might also have a class called Network in an ontology about graph theory. The two concepts are related but are different. Just as with programming languages you use namespace prefixes to determine what specific namespace a name refers to. E.g., in this example you might have the prefix tc for the Telecom ontology and gt for the Graph Theory ontology. Thus, when you referred to the Network class for the Telecom ontology you would use tc:Network and gt:Network for the graph theory class.\n\nNote that you already have some experience with other namespaces. The OWL namespace prefix is owl and is used to refer to classes such as owl:Thing and owl:Nothing. The Resource Description Framework Schema (RDFS) is a model that OWL is built on top of and thus some properties that ontologies use such as rdfs:label leverage this namespace.\n\nIn the bottom view of the Active ontology tab there is a tab called Ontology Prefixes. This tab shows all the current namespace mappings in your ontology. 
There are certain concepts from OWL, RDF, RDFS, XML and XSD that are required for every ontology, so those namespaces are by default mapped in every new Protégé ontology. There is also a mapping to the empty string for whatever the namespace is for your ontology. This allows you to display and refer to entities in your ontology without entering a namespace prefix. If you look at that tab now you should see a row where the first column is blank, and the second column has the base IRI for your ontology. It should be the same IRI as the Ontology IRI at the top of the Active ontology tab, except it also has a # sign at the end. E.g., the Pizza tutorial developed for this tutorial has an IRI of: http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial and the row that has a blank first column in Ontology Prefixes has the IRI: http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial#.", - "page_start": 61, - "page_end": 61, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "To understand what is going on you first need to understand that each SPARQL query consists of two parts. The first part at the beginning consists of several namespace prefixes. These statements consist of the prefix used for a particular namespace as well as the IRI associated with this namespace. Recall that these concepts were described in chapter 7. You may be wondering where all these prefixes came from since you didn't add them to your ontology. The answer is that every OWL ontology comes with a set of namespaces and prefixes that are required to define the ontology.\n\nAlso, to understand SPARQL you need to \"peak under the hood\" of OWL. So far, we have been discussing concepts in purely logical and set theoretic terms, i.e., at the semantic level. However, like any language or database there is a lower level that describes how the concepts are mapped to actual data. 
In a relational database the fundamental construct to represent data is a table. In OWL the fundamental construct is a triple. OWL is actually built on top of RDFS which is a language built on top of RDF. RDF (Resource Description Framework) is a language to describe graphs (in the mathematical sense of the term). I.e., to describe nodes and links.\n\nThe foundation for RDF graphs are triples consisting of a subject, predicate, and object. This results in what is called an undirected or network graph because objects can be subjects and vice versa. Whenever you define a property in OWL you are defining a predicate. An individual can be a subject or an object (or both). E.g., in our ontology Customer1 purchasedPizza AmericanaHotPizza1. In this example Customer1 is the subject, purchasedPizza is the predicate and AmericanaHotPizza1 is the object.\n\nHowever, classes and properties themselves are also represented as triples. So for example, when you create the class Pizza what Protégé does for you is to add the triple: Pizza rdf:type owl:Class to the ontology. I.e., the Pizza entity is of type (is an instance of) owl:Class. Similarly when you add NamedPizza as a subclass of Pizza, Protégé adds the triple: NamedPizza rdfs:**s**ubClassOf Pizza.\n\nHopefully, now you can make some sense of this initial query. The query is looking for all the entities that are the subjects of triples where the predicate is rdfs:**s**ubClassOf and the object is any other entity. The *?* before a name indicates that the name is a wildcard that can match anything that fits with the rest of the pattern. This is part of the power of SPARQL, one can match a Subject, an Object, a Predicate or even all three. Making all 3 parts of the pattern wildcards would return every triple in the graph (in this case our entire Pizza ontology) being searched. You may notice that in some cases the object is simply the name of a class while in others it is a class expression with an orange circle in front of it. 
This is because when defining classes using DL axioms Protégé creates anonymous classes that correspond to various DL axioms.\n\nThe SELECT part of a SPARQL query determines what data to display. The WHERE part of a query determines what to match in the query. If you want to display everything matched in the WHERE clause you can just use a * for the SELECT clause. The initial default query in this tab is set up with no knowledge of the specific ontology. I.e., it will return all the classes that are subclasses of other classes regardless of the ontology. To get information about Pizzas the first thing we need to do is to add another prefix to the beginning of the query. In our case the Pizza ontology has been set up with a mapping to the prefix pizza (you can see this in the ontology prefixes tab in the Active ontology tab discussed in chapter 7). So, add the following to the SPARQL query after the last PREFIX statement:\n\n#### PREFIX pizza: <http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial#>\n\nWe are almost ready to query the actual ontology. For our first query let's find all the Pizzas purchased by a Customer. The SPARQL code for this is:", - "page_start": 68, - "page_end": 68, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "example, in Sweden. 378 Meanwhile, the spectrum of guidance developed regarding work-related psychosocial risks is very wide; it covers aspects such as job satisfaction (overall level of wellbeing), engagement, performance and work-related stress,379 and also discrimination, harassment, aggression and violence.380\n\n### **6.2 EU and national OSH strategies**\n\nThe EU and many Member States **applied and apply strategic approaches**, based on EU or national evidence of the state of OSH. 
OSH strategies are a steering instrument to focus the activities of all actors on major recognised deficits of OSH infrastructures or processes.381\n\nThe newest **EU Strategic Framework on Health and Safety at Work 2021-2027** puts the focus on change, with the title *'Occupational safety and health in a changing world of work'*.382 Consequently, the strategic framework focuses on three key objectives for these years:\n\n- • *anticipating and managing change in the new world of work brought about by the green, digital and demographic transitions;*\n- •*improving prevention of workplace accidents and illnesses;*\n- •*increasing preparedness for any potential future health crises.*\n\nThe proposed focus areas and actions are related to these three objectives. Under the first key objective there are actions like 'Modernising and simplifying EU OSH rules in the context of the green and digital transitions'; a special focus is on psychosocial and ergonomic risks. The second objective promotes a vision zero approach to work-related deaths, particularly referring to hazardous substances and cardiovascular diseases, the promotion of health at work and inclusive workplaces for all.383\n\nThe third objective responds to the impact of the pandemic situation in 2020 and 2021. It includes the development of emergency procedures for future similar situations ('Health crisis'). The Strategic Framework repeats and corroborates the value of research and data-based evidence by stating: *'Research and data collection, both at EU and national level, are a pre-condition for the prevention of work-related diseases and accidents. Scientific advice and the latest technological developments feed into OSH legislation and policy.'*\n\nAlso, many Member States have agreed on provision of better data as an objective in their national strategies.384 The EU strategy often gives orientation for the development of national OSH strategies. 
Under the last strategy period, 24 of the 27 Member States had applied a strategy. Many national OSH strategies contained similar targets. EU-OSHA published an overview report on national strategies, and the OSH Barometer contains as one indicator a harmonised overview on the aspects of national strategies.385\n\nOSH strategies are regarded as an important and innovative policy area, a chance for better collaboration, and also a very relevant joint national OSH activity. Those strategies help in priority setting and focused action on weaknesses. Strategies were often agreed in social dialogue processes, and many strategy actors also developed new and better monitoring instruments and indicators.386 Labour inspections play an important or essential role in most of these strategies.387\n\n#### **OSH Barometer – Steering of OSH, National strategies:**\n\nhttps://visualisation.osha.europa.eu/osh-barometer/osh-steering/national-strategies\n\n**OSHWiki: Section 'OSH System at national level', descriptions of the OSH Systems of the EU Member States:** https://oshwiki.eu/wiki/Category:OSH_systems_at_national_level", - "page_start": 123, - "page_end": 123, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## Chapter 13 Conclusion: Some Personal Thoughts and Opinions\n\nThis tutorial is just the entry point to a technology that is entering the *Slope of Enlightenment* in the Gartner technology hype cycle [Gartner Hype Cycle]. Tim Berners-Lee published his paper on the Semantic Web [Berners-Lee 2001] way back in 2001. At least in my experience for most large US corporations the excitement around Machine Learning seemed for a while to eclipse serious interest in OWL, SPARQL, and other Semantic Web technologies in the United States. 
Then influential technology companies such as Google [Singhal 2012], Facebook [Olanof 2013], and Amazon [Neptune 2017] started to embrace the technology using the term Knowledge Graphs [Noy 2019] and the corporate world is finally realizing that machine learning and knowledge graphs are complimentary not competitive technologies.\n\nThe term knowledge graph itself can be used in different ways. The best definition I've heard is that an ontology provides the vocabulary (i.e., essentially the T-Box) and a knowledge graph is an ontology combined with data (A-Box). Although in the corporate world I often hear people simply talk about knowledge graphs without much interest in the distinction between the vocabulary and the data.\n\nThere are a number of vendors emerging who are using the technology in very productive ways and are providing the foundation for federated knowledge graphs that can scale to hundreds of millions of triples or more and provide a framework for all corporate data. I've listed several in the bibliography but those are only the ones I've had some experience with. I'm sure there are many others. One of the products I've had the best experience with is the AllegroGraph triplestore and the Gruff visualization tool from Franz Inc. Although Allegro is a commercial tool, the free version supports most of the core capabilities of the commercial version. I've found the Allegro triplestore easy to use on a Windows PC with the Docker tool to emulate a Linux server.\n\nI first started working with classification-based languages when I worked at the Information Sciences Institute (ISI) and used the Loom language [Macgregor 91] to develop B2B systems for the US Department of Defense and their contractors. Since then, I've followed the progress of the technology, especially the DARPA knowledge sharing initiative [Neches 91] and always thought there was great promise in the technology. When I first discovered Protégé it was a great experience. 
It is one of the best supported and most usable free tools I've ever seen, and it always surprised me that there weren't more corporate users leveraging it in major ways. I think we are finally starting to see this happen and I hope this tutorial helps in a small way to accelerate the adoption of this powerful and robust tool.", - "page_start": 88, - "page_end": 88, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "### SO WHAT EXACTLY IS A SUMMARY?\n\nA summary is more than just a condensed or shortened version of your work. A summary requires you to analyse your study material, to identify the key concepts, and to explain it in your own words.\n\n#### To make a good summary, you need to:\n\n- Keep it brief.\n- Make sure to use main headings and keywords.\n- Focus on the main ideas.\n- Classify and organise the information in a logical manner.\n- Use your own words where possible.\n- Include examples.\n- Remember that your summaries are there to help you.\n\n### YOU CAN MAKE YOUR SUMMARIES IN DIFFERENT FOR-MATS. HERE ARE SOME EXAMPLES:\n\n#### Mind Maps (Spider Diagrams)\n\nA mind map is a visual expression of thoughts, ideas and concepts. It usually takes the form of a diagram, with the main concept in the centre, and the related concepts branching out from there. Here is an example:", - "page_start": 28, - "page_end": 28, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## Chapter 4 Building an OWL Ontology\n\nThis chapter describes how to create an ontology of Pizzas. We use Pizzas because it is something almost everyone is familiar with.\n\n_____________________________________________________________________________________\n\n#### **Exercise 1: Create a new OWL Ontology**\n\n1. Start Protégé. When Protégé opens for the first time each day it puts up a screen of all the available plugins. You can also bring this up at any time by using File>Check for plugins. 
You won't need any plugins at this point of the tutorial so just click the Not now button.\n\n2. The Protégé user-interface consists of several tabs such as Active ontology, Entities, etc. When you start Protégé you should be in the Active Ontology tab. This is for overview information about the entire ontology. Protégé always opens with a new untitled ontology you can start with. Your ontology should have an IRI something like: http://www.semanticweb.org/yourname/ontologies/2020/4/untitled-ontology-27 Edit the name of the ontology (the part after the last \"/\" in this case untitled-ontology-27) and change it to something like PizzaTutorial. Note: the Pizza ontology IRIs shown below (e.g., figure 4.3) show the IRI after I edited the default that Protégé generated for me. Your IRI will look different and will be based on your name or the name of your organization.\n\n3. Now you want to save your new ontology. Select File>Save. This should bring up a window that says: Choose a format to use when saving the 'PizzaTutorial' ontology. There is a drop down menu of formats to use. The default RDF/XML Syntax should be selected by clicking the OK button. This should bring up the standard dialog your operating system uses for saving files. Navigate to the folder you want to use and then type in the file name, something like Pizza Tutorial and select Save.\n\n____________________________________________________________________________________\n\nAs with any file you work on it is a good idea to save your work at regular intervals so that if something goes wrong you don't lose your work. At certain points in the tutorial where saving is especially important the tutorial will prompt you to do so but it is a good idea to save your work often, not just when prompted.\n\nThe next step is to set some preferences related to the names of new entities. 
Remember than in Protégé any class, individual, object property, data property, annotation property, or rule is referred to as an entity. The term name in OWL can actually refer to two different concepts. It can be the last part of the IRI3 or it can refer to the annotation property (usually rdfs:label) used to provide a more user friendly name for the entity. We will discuss this in more detail below in chapter 7. For now, we just want to set the parameters correctly so that future parts of the tutorial (especially the section on SPARQL queries) will work appropriately.\n\n<sup>3</sup> An IRI is similar to a URL. This will be discussed in detail below in chapter 7.", - "page_start": 10, - "page_end": 10, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "relations, transitive relations, and many more. An understanding of the basic concepts of set theory will help the user get the most out of OWL but is not required. One of the benefits of Protégé is that it presents an intuitive GUI that enables domain experts to define models without a background in set theory. However, developers are encouraged to refresh their knowledge on logic and set theory. A good source is the first 3 chapters in Elements of the Theory of Computation by Lewis and Papadamitrious. Another good source is the PDF document *Overview of Set Theory* available at: https://www.michaeldebellis.com/post/owl-theoretical-basics\n\n### 3.1.1 Individuals\n\nIndividuals represent objects in the domain of interest. An important difference between OWL and most programming and knowledge representation languages is that OWL does not use the Unique Name Assumption (UNA). This means that two different names could actually refer to the same individual. For example, \"Queen Elizabeth\", \"The Queen\" and \"Elizabeth Windsor\" might all refer to the same individual. In OWL, it must be explicitly stated that individuals are the same as each other, or different from each other. 
Figure 3.1 shows a representation of some individuals in a domain of people, nations, and relations — in this tutorial we represent individuals as diamonds.\n\nFigure 3.2: Representation of Properties\n\nIndividuals are also known as *instances*. Individuals can be referred to as *instances of classes*.", - "page_start": 7, - "page_end": 7, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "We focus on the fundamentals of growth.\n\nProfitability. Productivity. Strategic management.", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "new formal systems have been proposed. There are disagreements about what makes a formal system a logic.[22] For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense.[23]\n\nFormal logic needs to translate natural language arguments into a formal language, like first-order logic, to assess whether they are valid. In this example, the letter \"c\" represents Carmen while the letters \"M\" and \"T\" stand for \"Mexican\" and \"teacher\". The symbol \"∧\" has the meaning of \"and\".\n\n# **Informal logic**\n\nWhen understood in a wide sense, logic\n\nencompasses both formal and informal logic.[24] Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. 
Its main focus is on everyday discourse.[25] Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments.[26] In this regard, it considers problems that formal logic on its own is unable to address.[27] Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies.[28]\n\nMany characterizations of informal logic have been suggested but there is no general agreement on its precise definition.[29] The most literal approach sees the terms \"formal\" and \"informal\" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language.[30] Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form.[31] On this view, the argument \"Birds fly. Tweety is a bird. Therefore, Tweety flies.\" belongs to natural language and is examined by informal logic. But the formal translation \"(1) ; (2) ; (3) \" is studied by formal logic.[32] The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent.[33] Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation.[34]\n\nAnother characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic.[35] Non-deductive arguments make their conclusion probable but do not ensure that it is true. 
An example is the inductive argument from the empirical observation that \"all ravens I have seen so far are black\" to the conclusion \"all ravens are black\".[36]\n\nA further approach is to define informal logic as the study of informal fallacies. [37] Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument.[38] A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy \"you are either with us or against us; you are not with us; therefore, you are against us\".[39] Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia1.pdf" - } - ] - }, - { - "references": { - "source_file": "infographic5.pdf", - "query": "Is Text2Onto still updated nowadays?", - "target_page": 1, - "target_passage": "But it is not maintained since 2011.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- 3. Confirm your selection and click **Remove** as shown in Figure 11-138.\n\n| Remove Relationship From Consistency Group |\n| --- |\n| A Removing relationship(s) from consistency group cannot be undone. Are you sure you want to continue? |\n| You selected 1 relationship to remove from consistency group ITSO-RB01-001. ALN01_rel |\n| Cancel Remove |\n\n*Figure 11-138 Confirm the removal of relationships from a Consistency Group*\n\n# **11.9.8 Starting remote copy relationships**\n\nWhen a remote copy relationship is created, the remote copy process can be started. Only relationships that are not members of a Consistency Group, or the only relationship in a Consistency Group, can be started. 
In any other case, consider starting the Consistency Group instead.\n\nTo start one or multiple relationships, complete the following steps:\n\n- 1. Open the **Copy Services** → → **Remote Copy** panel.\n- 2. Right-click the relationships to be started and select **Start**, as shown in Figure 11-139.\n\n| + Create Consistency Group | 三 Actions ▼ | | | Default | > Contains V | Filter | 网 |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| Name | State | Master Volume ← | Auxiliary Volume | | | | IIi |\n| V Not in a Group | | | | | | | |\n| RSRR01_rel | Conclotent Stanner | ITSO-RR100 | ITSO-SS200 | | | | |\n| V 日 -> ITSO-RS-TST01 | Rename ... 00 | Master System: ITSO-5V1 -> Auxiliary System: ITSO-SV1 | | | | | |\n| rcre10 | Add to Consistency Group | ITSO-SRC-CG1 | ITSO-TGT-CG1 | | | | |\n| ▽ 目一->目 ITSO-RB01-001 | Change Volumes | Master System: ITSO-SV1 -> Auxiliary System: ITSO-SV1. | | | | | |\n| ALN01_rel | Start | ITSO-NGro001 | ITSO-NGro002 | | | | |\n| SJC-LA01_rel | Stop | TODES-OSSII | ITSO-LA001 | | | | |\n| | Switch | | | | | | |\n| | Delete | | | | | | |\n| | Edit Relationship | | | | | | |\n\n*Figure 11-139 Starting remote copy relationships*", - "page_start": 611, - "page_end": 611, - "source_file": "sg247938.pdf" - }, - { - "text": "- 2. Right-click the relationships that you want to delete and select **Delete**, as shown in Figure 11-150.\n\n| Create Consistency Group | = Actions ▼ | | | | Default | > | Contains V | Fifter | 12 |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Name | State | | 个 Master Volume | Auxillary Volume | | | | | lill |\n| V Not in a Group | | | | | | | | | |\n| RSRR01_rel | | Consistent Synchronized B | ITSO-RR100 | ITSO-SS200 | | | | | |\n| √ 目一-> ITSO-RS-TSTO1 | Rename ... 
| nehronized | Master System: ITSO-SV1 -> Auxiliary System: ITSO-SV1 | | | | | | |\n| rcrelO | Add to Consistency Group | ronized | ITSO-SRC-CG1 | ITSO-TGT-CG1 | | | | | |\n| ▽ 目一→目 ITSO-RB01-001 | Change Volumes | opped | Master System: ITSO-SV1 -> Auxiliary System: ITSO-SV1. | | | | | | |\n| ALN01_rel | Start | ed | ITSO-NGro001 | ITSO-NGro002 | | | | | |\n| SJC-LA01_rel | Stop | ed | FO2CS-OSLI | ITSO-LA001 | | | | | |\n| | Switch | | | | | | | | |\n| | Delete | | | | | | | | |\n| | Edit Relationship | | | | | | | | |\n\n*Figure 11-150 Deleting Remote Copy Relationships*\n\n- 3. A confirmation message is displayed, requesting the user to enter the number of relationships to be deleted, as shown in Figure 11-151.\n\n| × Delete Relationship |\n| --- |\n| Deleting relationship(s) cannot be undone. Are you |\n| sure you want to continue? |\n| You selected 1 relationship to delete. Verify the relationship to delete: |\n| RSRR01_rel (ITS0-RR100 -> ITS0-55200) |\n| Verify the number of relationships that you are deleting: |\n| 8 |\n| Delete the relationship even when the data on the target volume is inconsistent, or if the target volume has other |\n| dependencies. |\n| Cancel Delete |\n\n*Figure 11-151 Confirmation of relationships deletion*", - "page_start": 617, - "page_end": 617, - "source_file": "sg247938.pdf" - }, - { - "text": "- 6. Select the type of update you want to perform, as shown in Figure 13-15. Select **Automatic update** unless IBM Support suggests **Service Assistant Manual update**. The manual update might be preferable in cases where misbehaving host multipathing is known to cause loss of access. Click **Finish** to begin the update package upload process.\n\n| Update System | × |\n| --- | --- |\n| The system is ready to install update package: IBM2076_INSTALL_8.2.1.0 | |\n| Select the type of update to complete: | |\n| Automatic update | |\n| Choose this option for a faster update. 
| |\n| Installation can take approximately 20 minutes for each node and 30 minutes for the system. | |\n| Service Assistant Manual update | |\n| Attention: Automatic update is the preferred method for updating software. | |\n| Manual updates, if completed incorrectly, can result in data loss. | |\n| Need Help | Cancel Next |\n\n*Figure 13-15 The update type selection*\n\nWhen updating from a V8.1 or later level, another window is displayed at this point in which you can choose a fully automated update, one that pauses when half the nodes complete the update, or one that pauses after each node update, as shown in Figure 13-16. The pause option requires that you click **Resume** to continue the update after each pause. Click **Finish**.\n\n*Figure 13-16 New V8.1 update pause options*\n\n- 7. After the update packages upload, the update test utility looks for any known issues that might affect a concurrent update of your system. Click **Read more** (see Figure 13-17 on page 692).", - "page_start": 712, - "page_end": 712, - "source_file": "sg247938.pdf" - }, - { - "text": "# **11.9.9 Starting a remote copy Consistency Group**\n\nWhen a remote copy consistency group is created, the remote copy process can be started, for all the relationships that are part of the consistency groups.\n\nTo start a consistency group, open the **Copy Services** → **Remote Copy** panel, right-click the consistency group to be started, and select **Start**, as shown in Figure 11-140.\n\n| Create Consistency Group | = Actions ▼ | | | Default | > | Contains V | Filter | ps |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Name | State | Master Volume ← | Auxiliary Volume | | | | | lli |\n| V Not in a Group | | | | | | | | |\n| RSRR01_rel | Consistent Stopped | ITSO-RR100 | ITSO-SS200 | | | | | |\n| √ 目 -> ITSO-RS-TST01 | State: Inconsistent Stopped | Master System: ITSO-SV1. > Auxiliary System: ITSO-SV1. 
| | | | | | |\n| rcre10 | Inconsistent Stopped | ITSO-SRC-CG1 | ITSO-TGT-CG1 | | | | | |\n| ∨ 目一→目 ITSO-RB01-001 | State\" Consistent Stonned | Master System: ITSO-SV1 -> Auxiliary System: ITSO-SV1 | | | | | | |\n| ALNO1_rel | Create Relationship .. | ITSO-NGro001 | ITSO-NGro002 | | | | | |\n| SJC-LA01_rel | Rename | ITSO-SJC01 | ITSO-LA0D1 | | | | | |\n| | Start | | | | | | | |\n| | Stop | | | | | | | |\n| | Switch | | | | | | | |\n| | Edit Consistency Group | | | | | | | |\n| | Delete | | | | | | | |\n\n*Figure 11-140 Starting a remote copy Consistency Group*\n\n# **11.9.10 Switching a relationship copy direction**\n\nWhen a remote copy relationship is in the Consistent synchronized state, the copy direction for the relationship can be changed. Only relationships that are not a member of a Consistency Group, or the only relationship in a Consistency Group, can be switched. In any other case, consider switching the Consistency Group instead.\n\n**Important:** When the copy direction is switched, it is crucial that no outstanding I/O exists to the volume that changes from primary to secondary because all of the I/O is inhibited to that volume when it becomes the secondary. Therefore, careful planning is required before you switch the copy direction for a relationship.\n\nTo switch the direction of a remote copy relationship, complete the following steps:\n\n- 1. Open the **Copy Services** → → **Remote Copy** panel.\n- 2. Right-click the relationship to be switched and select **Switch**, as shown in Figure 11-141.\n\n| + Create Consistency Group | = Actions ▼ | | | | Default | > | Contains V | Filter | D |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Name | | State | Master Volume ← | Auxiliary Volume | | | | | = |\n| V Not in a Group | | | | | | | | | |\n| RSRR01_rel | | Rename ... | ITSO-RR100 | ITSO-SS200 | | | | | |\n| √ 日 -> ITSO-RS-TSTOT | | Add to Consistency Group | Master System: ITSO-SV1 -> Auxiliary System: ITSO-SV1. 
| | | | | | |\n| rcre10 | | Change Volumes | ITSO-SRC-CG1 | ITSO-TGT-CG1 | | | | | |\n| ▽ 目一>目 ITSO-RB01-001 | | Start | Master System: ITSO-SV1 -> Auxiliary System: ITSO-SV1. | | | | | | |\n| ALN01_rel | | Stop | ITSO-NGro001 | ITSO-NGro002 | | | | | |\n| SJC-LA01_rel | | | TTSO-SJC01 | ITSO-LA001 | | | | | |\n| | | Switch | | | | | | | |\n| | | Delete | | | | | | | |\n| | | Edit Relationship | | | | | | | |\n\n*Figure 11-141 Switching remote copy relationship direction*", - "page_start": 612, - "page_end": 612, - "source_file": "sg247938.pdf" - }, - { - "text": "| Type of content | Tags |\n| --- | --- |\n| WordArt without Alt Text or Decorative | |\n| | <Sect> |\n| | text content |\n| Picture with Attribution | |\n| | <Figure> |\n| | Alt=alt tex <Sect> |\n| | text content |\n| Group without Alt Text | |\n| | tags for child objects |\n| Hyperlink on Text | |\n| | <Link> |\n| | Link - OBJR |\n| | text content |\n| Hyperlink on Object | |\n| | tag for object |\n| | <Link> |\n| | Link - OBJR |\n| | alt text |\n| | Note: the <Link> is a sibling of the tag for |\n| | the object. |\n\n### **Artifacts**\n\nThe following types of content are marked as <Artifact> in the PDF Content Tree and have no PDF/UA tags:\n\n- Header and footer\n- Decorative graphical objects\n- Gray space on the right side of the page for comments\n- Pictures in picture bullets\n- Underlines\n- Borders around text and quote paragraphs\n- Lines above footnotes/endnotes\n- Text in SmartArt objects", - "page_start": 59, - "page_end": 59, - "source_file": "office-pdf.pdf" - }, - { - "text": "- 2. Right-click the relationships to be stopped and select **Stop**, as shown in Figure 11-146.\n\n| Create Consistency Group | = Actions ▼ | | | | Default | V | Contains V | Filter | నే |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Name | State | | Master Volume 个 | Auxiliary Volume | | | | | IIi |\n| V Not in a Group | | | | | | | | | |\n| RSRR01_rel | Rename ... 
| Canalata Synchronized B | ITSO-RR100 | ITSO-SS200 | | | | | |\n| V 日 -> ITSO-RS-TST01 | | istent Stopped | Master System: ITSO-SV1 -> Auxiliary System: ITSO-SV1. | | | | | | |\n| rcre10 | Add to Consistency Group | ht Stopped | ITSO-SRC-CG1 | ITSO-TGT-CG1 | | | | | |\n| √ 目一>目 ITSO-RB02-001 | Change Volumes | tent Stopped | Master System: ITSO-SV1 -> Auxiliary System: ITSO-5V1. | | | | | | |\n| ALN01_rel | Start | Stopped | ITSO-NGroD01 | ITSO-NGro002 | | | | | |\n| SJC-LA01_rel | Stop | Stopped | TODES-OSII | ITSO-LA001 | | | | | |\n| | Switch | | | | | | | | |\n| | Delete | | | | | | | | |\n| | Edit Relationship | | | | | | | | |\n\n*Figure 11-146 Stopping a Remote Copy relationship*\n\n- 3. When a remote copy relationship is stopped, access to the auxiliary volume can be changed so it can be read and written by a host. A confirmation message is displayed, as shown in Figure 11-147.\n\n| Stop Remote-Copy Relationship |\n| --- |\n| Do you want to allow read/write access to the secondary volume when stopping remote-copy |\n| relationship RSRR01_rel? 
|\n| Allow secondary read/write access |\n| Cancel Stop Relationship |\n\n*Figure 11-147 Grant access in read and write to the auxiliary volume*", - "page_start": 615, - "page_end": 615, - "source_file": "sg247938.pdf" - }, - { - "text": "| Type of content | Tags |\n| --- | --- |\n| Table | |\n| | <Table> |\n| | <THead> |\n| | <TR> |\n| | <TH> |\n| | text content |\n| | <TH> |\n| | text content |\n| | <TBody> |\n| | <TR> |\n| | <TH> |\n| | text content |\n| | <TD> |\n| | text content |\n| Table Header Cell | |\n| | <TH> |\n| Decorative Graphical Object | |\n| | no tags |\n| Graphical Object with Alt Text | |\n| | <Figure> |\n| | Alt=alt text (object type) |\n| Graphical Object other than Shape without Alt Text | |\n| | <Figure> |\n| | Alt=blank |\n| Shape without Alt Text with text | |\n| | <Sect> |\n| | text content |\n| Shape with Alt Text with text | |\n| | <Figure> |\n| | Alt=alt text + text (shape |\n| | type) |", - "page_start": 58, - "page_end": 58, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Type of content | Tags |\n| --- | --- |\n| Shape without Alt Text with whitespace text or no text | |\n| | <Figure> |\n| | Alt=blank |\n| Shape without Alt Text with Equation | |\n| | <Formula> |\n| | Alt=equation spelled out |\n| | in words |\n| Shape without Alt Text with non-whitespace text | |\n| without Equation | <Sect> |\n| | text content |\n| Shape with Alt Text with non-whitespace text without | |\n| Equation | <Figure> |\n| | Alt=alt text + text (shape |\n| | type) |\n| WordArt without Alt Text or Decorative | |\n| | <Sect> |\n| | text content |\n| Group without Alt Text | |\n| | tags for child objects |\n| Group with Alt Text | |\n| | <Figure> |\n| | Alt=alt text (object type) |\n| | tags for child objects |\n| Decorative Picture in Cell | |\n| | <TD> |\n| | no tags |\n| Picture without Alt Text in Cell | |\n| | <TD> |\n| | no tags |", - "page_start": 46, - "page_end": 46, - "source_file": "office-pdf.pdf" - }, - { - "text": "- 3. 
Select the Consistency Group for this remote copy relationship by using the menu, as shown in Figure 11-136. Click **Add to Consistency Group** to confirm your changes.\n\n| × Add Relationship to Consistency Group |\n| --- |\n| Select the consistency group to move the relationship ALN01_ rel |\n| Consistency Group ITSO-RB01-001 |\n| Cancel Add to Consistency Group |\n\n*Figure 11-136 Selecting the Consistency Group to add the relationships to*\n\n# **11.9.7 Removing remote copy relationships from Consistency Group**\n\nTo remove one or multiple relationships from a remote copy consistency group, complete the following steps:\n\n- 1. Open the **Copy Services** → → **Remote Copy** panel.\n- 2. Right-click the relationships to be removed and select **Remove from Consistency Group**, as shown in Figure 11-137.\n\n| ( Create Consistency Group | = Actions ▼ | | | Default | V | Contains V | Filter | 区 |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Name | State | Master Volume ← | Auxiliary Volume | | | | | Ili |\n| Not in a Group | | | | | | | | |\n| 日 -- > ITSO-RS-TST01 | State: Inconsistent Stopped | Master System: ITSO-SV1 -> Auxiliary System: ITSO-SV1. | | | | | | |\n| ▽ 目一→目 ITSO-RB01-001 | State: Consistent Stopped | Master System: ITSO-5V1 > Auxiliary System: ITSO-SV1. | | | | | | |\n| ALN01_rel | Canadaat Staanad | ITSO-NGro001 | ITSO-NGro002 | | | | | |\n| SJC-LA01_rel | Rename ... 
| TODCS-OSLI | ITSO-LA0D1 | | | | | |\n| | Add to Consistency Group | | | | | | | |\n| | Change Volumes | | | | | | | |\n| | Remove from Consistency Group | | | | | | | |\n| | Delete | | | | | | | |\n\n*Figure 11-137 Removing relationships from a Consistency Group*", - "page_start": 610, - "page_end": 610, - "source_file": "sg247938.pdf" - }, - { - "text": "| Type of content | Tags |\n| --- | --- |\n| Table without Alt Text | |\n| | <Table> |\n| | <THead> |\n| | <TR> |\n| | <TH> |\n| | text content |\n| | <TH> |\n| | text content |\n| | <TBody> |\n| | <TR> |\n| | <TH> |\n| | text content |\n| | <TD> |\n| | text content |\n| | <TFoot> |\n| | <TR> |\n| | <TH> |\n| | text content |\n| | <TD> |\n| | text content |\n| Table Header Cell | |\n| | <TH> |\n| | Scope=Row, Column, |\n| | or Both |\n| Table Merged Cell | |\n| | <TH> or <TD> |\n| | Row span=r |\n| | Column span=c |\n| Group without Alt Text | |\n| | tags for child objects |\n| Summary Zoom, Section Zoom, and Slide Zoom | |\n| | <TOC> |\n| | Alt=alt text |\n| | <TOCI> |\n| | <Link> |\n| | Link - OBJR |\n| | <Span> |", - "page_start": 51, - "page_end": 51, - "source_file": "office-pdf.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RCI_2013.pdf", - "query": "What was the proportion of revenue generated by wireless telecommunications operations in 2009?", - "target_page": 91, - "target_passage": "6,685", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Network revenue was higher this year compared to last year. 
This was the net effect of:\n\n- higher data revenue related to an increase in subscriber levels and higher usage of wireless data services\n- partially offset by our introduction of new lower priced US and international roaming plans and rates which offer consumers more value, and\n- the continued adoption of customer friendly simplified plans, which often bundle in certain features like voicemail, caller ID and long distance that we have charged for separately in the past.\n\nExcluding the decline in US and international roaming revenue this year, network revenue would have increased 1%.\n\nData revenue was 17% higher this year mainly because of the continued penetration and growing use of smartphones, tablet devices and wireless laptops, which increased the use of e-mail, wireless, Internet access, text messaging and other wireless data services. Data revenue represented approximately 47% of total network revenue this year, compared to approximately 41% last year.\n\nPostpaid churn was 1.24% this year, compared to 1.29% in 2012. The lower churn rate is partly attributable to the new simplified plans and the roaming plans we introduced.\n\nGross postpaid subscriber additions were 1.4 million this year, or 3% lower than last year, which reduced net postpaid subscriber additions to 228,000, despite a lower postpaid churn. We believe the industry transition from three year to two year plans resulting from the recent adoption of the Canadian Radio-television and Telecommunications Commission (CRTC) Wireless Code may have slowed our overall wireless subscriber growth from the second half of the year. See \"Regulation in Our Industry\" for more information on the Wireless Code.\n\nWe activated and upgraded approximately 2.7 million smartphones this year, compared to approximately 2.9 million in 2012. Approximately 34% of these were for new subscribers. 
The decrease was mainly because there was a 10% reduction in hardware upgrades by existing subscribers during the year, which we also believe is at least partly due to the move from three to two year contracts and the associated pricing changes.\n\nThe percentage of subscribers with smartphones increased to 75% of our overall postpaid subscriber base, compared to 69% at the end of 2012. Smartphone subscribers typically generate significantly higher ARPU and are less likely to churn.\n\nThe decrease in prepaid subscriber net additions was mainly because of increasing competition at the lower end of the wireless market where prepaid products are mainly sold.\n\nBlended ARPU was down slightly this year compared to last year because the voice component declined at a faster rate than the data component increased.\n\n#### (%) **DATA REVENUE PERCENT OF BLENDED ARPU**\n\n#### *Lower Equipment Sales*\n\nEquipment sales (net of subsidies) include revenue from sales to:\n\n- independent dealers, agents and retailers\n- directly to subscribers through fulfillment by Wireless' customer service groups, websites, telesales and corporate stores.\n\nRevenue from equipment sales was lower this year, mainly because fewer existing subscribers upgraded their devices and there were fewer gross activations.\n\n#### **Lower Operating Expenses**\n\nWe assess operating expenses in two categories:\n\n- the cost of wireless handsets and equipment\n- all other expenses involved in day-to-day operations, to service existing subscriber relationships and attract new subscribers.\n\nThe cost of equipment was $50 million lower than last year, or 3%, mainly because fewer existing subscribers upgraded hardware and fewer new customers were added during the year as discussed above. We activated and upgraded fewer devices compared to 2012.\n\nTotal customer retention spending (including subsidies on handset upgrades) was $939 million, 0.3% lower than last year. 
The reduction was mainly because fewer existing subscribers upgraded their hardware as discussed above, which we partially attribute to the recent shift to two year contracts.\n\nOther operating expenses (excluding retention spending), were down slightly from 2012, due to a continued focus on cost productivity initiatives we are implementing across various functions.\n\n#### **Higher Adjusted Operating Profit**\n\nAdjusted operating profit was 3% higher this year compared to last year because of continued growth of wireless data, our improvements in cost management and efficiency and lower volumes of hardware sales and upgrades. Adjusted operating profit margin as a percentage of network revenue increased this year to 46.8% from 45.6% in 2012.", - "page_start": 43, - "page_end": 43, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "#### INDUSTRY TRENDS\n\n#### The telecommunications industry in Canada, and our business segments, is affected by several overarching trends.\n\n#### CHANGING TECHNOLOGIES AND CONSUMER DEMANDS\n\nConsumer demand for mobile devices, digital media and on-demand content across platforms is pushing providers to build networks that can provide more data faster, cheaper and more easily. Increased adoption of smartphones and double digit growth in our data revenue continued this year, reflecting expanded use of applications, mobile video, messaging and other wireless data.\n\n#### COMPETITION\n\nCompetition in wireless from national and regional operators as well as smaller new entrants changes how we compete for wireless services. This puts downward pressure on pricing affecting profit margins and impacts customer churn.\n\nTraditional wireline telephone and television services are now offered over the Internet, opening the door to more non-traditional competitors, and changing how traditional providers compete. 
This is changing the mix of packages and pricing that service providers offer, affecting profit margins and customer churn.\n\n#### **WIRELESS TRENDS**\n\nMore sophisticated wireless networks, devices and applications are making it easier and faster to receive data, driving growth in wireless data services.\n\nWireless providers are investing in the next generation of broadband wireless data networks, such as LTE, to support the growing data demand.\n\nWireless market penetration in Canada is approximately 80% of the population, and is expected to grow at an estimated 2% annually.\n\nThe new CRTC code of conduct has limited wireless term contracts to two years from three years. Although the code of conduct has only been in place for a month, we believe this is currently reducing churn and slowing growth in the wireless marketplace.\n\n#### **CABLE TRENDS**\n\nYounger generations are increasingly using the Internet and social media as a substitute for traditional wireline telephone services, and televised content is increasingly available online, both on wireline and on wireless devices.\n\nWe face new competition from companies like Skype and Vonage, who market Voice over Internet Protocol (VoIP) telephony services, and Netflix and Apple TV, who provide televised content over the Internet.\n\nNorth American cable companies are improving their cable networks and expanding their service offerings to include Internet, digital cable and VoIP telephony services, while competition from telco IPTV deployments and non-facilities based service providers continues to cause pricing pressures which negatively impacts revenue growth.\n\nIn the media industry, there continues to be a shift towards on-line media consumption by consumers which in turn drives advertisers to spend more on-line versus traditional media. 
In addition, there are more media competitors as additional on-line media companies enter the market, including large global companies.\n\n#### REGULATION\n\nMost areas of our business are highly regulated, which affects who we compete with, the programming we can offer, where and how we use our networks, how we build our businesses and the spectrum we purchase. The telecommunications industry is being affected by more regulation and more reviews of the current regulations.\n\n#### ECONOMIC CONDITIONS\n\nOur businesses are affected by general economic conditions and consumer confidence and spending, especially in our Media segment, where advertising revenue is directly affected by the economy.\n\n#### **BUSINESS SOLUTIONS TRENDS**\n\nCompanies are using fibre-based access and cloud computing to capture and share information in more volume and detail. This, combined with the rise of multimedia and Internet-based applications, is driving exponential growth in data demand.\n\nLarge enterprises and all levels of government are dramatically transforming data centre infrastructure and moving toward virtual data storage and hosting. This is driving demand for more advanced network functionality, robust, scalable services and supportive dynamic network infrastructure.\n\nIn response, carriers are dismantling legacy networks and investing in next generation platforms that converge voice, data and video solutions onto a single distribution and access platform.\n\n#### **MEDIA TRENDS**\n\nConsumer demand for digital media, mobile devices and ondemand content is pushing advertisers to shift some of their spending to digital platforms.\n\nTraditional media assets in Canada have become increasingly controlled by a small number of competitors with significant scale and financial resources, while technology has allowed new entrants and even individuals to become media players in their own right. 
Across both traditional and emerging platforms, many players have become more vertically integrated, as both providers and purchasers of content.\n\nAccess to premium content has become even more important for acquiring audiences that attract advertisers and subscribers. Ownership of content or longterm agreements with content owners, have also become increasingly important to Media companies.", - "page_start": 34, - "page_end": 34, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "#### COMPETITION\n\nWe compete on quality of service, scope of services, network coverage, sophistication of wireless technology, breadth of distribution, selection of devices, branding and positioning, and price.\n\n- Wireless technology: we were the first carrier in Canada to launch an LTE network catering to customers seeking the increased capacity and speed it provides. We compete with Bell, Telus MTS and Eastlink, all of whom operate LTE networks and we expect competition to grow over time as LTE becomes the prevailing technology in Canada. We also compete with these providers and other regional providers such as Wind Mobile, on HSPA and GSM networks and with providers that use alternative wireless technologies, like Wi-Fi \"hotspots\".\n- Product, branding and pricing: we compete nationally with Bell and Telus. We also complete with newer entrants, various regional players and resellers.\n- Distribution: we compete with other service providers for both dealers and prime locations for our own stores as well as third party retail distribution shelf space outlets.\n- Wireless networks and handset devices: the parity of wireless devices across networks has dramatically transformed the competitive landscape, and we expect this to continue and even intensify. Consolidation among new entrants or with incumbent carriers could alter the competitive landscape for Wireless regionally or nationally.\n- Spectrum: we are currently participating in an auction for 700 MHz spectrum. 
Industry Canada has also announced an auction for additional 2500 MHz spectrum in 2015 in which we may be restricted from participating in the geographic areas where we already hold more than 40 MHz of 2500 MHz spectrum. The outcomes of both of these auctions may increase competition.\n\n#### WIRELESS FINANCIAL RESULTS\n\n| | | Years ended December 31 | |\n| --- | --- | --- | --- |\n| (In millions of dollars, except percentages) | 2013 | 2012 | % Chg |\n| Operating revenue | | | |\n| Network revenue | $ 6,748 | $ 6,719 | – |\n| Equipment sales | 522 | 561 | (7) |\n| Operating revenue – Wireless | 7,270 | 7,280 | – |\n| Operating expenses | | | |\n| Cost of equipment 1 | (1,535) | (1,585) | (3) |\n| Other operating expenses | (2,578) | (2,632) | (2) |\n| | (4,113) | (4,217) | (2) |\n| Adjusted operating profit – Wireless | $ 3,157 | $ 3,063 | 3 |\n| Adjusted operating profit margin as | | | |\n| % of network revenue | 46.8% | 45.6% | |\n| Additions to property, plant and equipment | $ 865 | $ 1,123 | (23) |\n| Data revenue included in network revenue | $ 3,175 | $ 2,722 | 17 |\n| Data revenue as % of network revenue | 47% | 41% | |\n\n1 Includes the cost of equipment sales and direct channel subsidies.\n\n#### WIRELESS SUBSCRIBER RESULTS 1, 2\n\n| (Subscriber statistics in thousands, | | | Years ended December 31 | |\n| --- | --- | --- | --- | --- |\n| except ARPU and churn) | 2013 | 2012 | | Chg |\n| Postpaid | | | | |\n| Gross additions | 1,409 | 1,457 | | (48) |\n| Net additions | 228 | 268 | | (40) |\n| Total postpaid subscribers | 8,074 | 7,846 | | 228 |\n| Monthly churn | 1.24% | 1.29% | | (0.05)pts |\n| Monthly average revenue per user | | | | |\n| (ARPU) | $ 67.76 | $ 69.30 | $ | (1.54) |\n| Prepaid | | | | |\n| Gross additions | 525 | 627 | | (102) |\n| Net losses | (162) | (170) | | 8 |\n| Total prepaid subscribers | 1,429 | 1,591 | | (162) |\n| Monthly churn | 3.85% | 3.98% | | (0.13)pts |\n| ARPU | $ 15.64 | $ 15.84 | $ | (0.20) |\n| Blended ARPU | $ 
59.58 | $ 59.79 | $ | (0.21) |\n\n1 Does not include subscribers from our wireless home phone product.\n\n**WIRELESS POSTPAID AND PREPAID SUBSCRIBERS**\n\n2 ARPU, subscriber counts and subscriber churn are key performance indicators. See \"Key Performance Indicators*\"*.\n\n#### **Operating Revenue**\n\nOur operating revenue depends on the size of our subscriber base, the average revenue per user and revenue from equipment sales.\n\n#### *Higher Network Revenue*\n\nNetwork revenue includes revenue derived from voice and data services from postpaid monthly fees, airtime, data usage, long distance charges, optional service charges, inbound and outbound roaming charges and certain fees, as well as prepaid usage for airtime, data and other ancillary charges such as long distance.", - "page_start": 42, - "page_end": 42, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Business Solutions generates revenue from services and equipment sales.\n\nNext generation revenue is generated by the provision of high-speed, high-reliability data and voice communications, provided on Rogers advanced IP and Ethernet and Cloud platforms and mainly over the extensive Rogers fibre, cable and wireless networks. Next generation revenue also includes Data Centre services revenue from the 2013 dates of business acquisitions.\n\nLegacy revenue is generated mainly by long distance, switched voice services and lower speed data communications, provided over TDM and end of life data platforms with client access primarily delivered through the use of third-party networks and tariffed ILEC services.\n\nBusiness Solutions continues to focus mainly on next generation IPbased services, and on leveraging higher margin on-net and near-net service revenue opportunities, using existing network facilities to expand offerings to the medium and large sized enterprise, public sector and carrier markets. 
Next generation services now represent 59% of total service revenue.\n\nRevenue from the lower margin off-net legacy business generally includes local and long-distance voice services and legacy data services which often use facilities that are leased rather than owned.\n\nFollowing our recent data centre business acquisitions, Business Solutions is now also focused on data centre colocation, hosting, cloud and disaster recovery services.\n\n#### **Higher Operating Revenue**\n\nOperating revenue was 7% higher this year compared to last year, the net result of:\n\n- higher revenue from next generation services, which grew by 31%, reflecting the impact of our acquisitions of Blackiron and Pivot Data Centres\n- continued execution of our plan to grow higher margin on-net and next generation IP-based services revenue\n- partially offset by ongoing decline in the legacy voice and data business, a trend management expects to continue as customers move to faster and more reliable IP services.\n\n#### **Higher Operating Expenses**\n\nWe assess Business Solutions operating expenses in two categories:\n\n- the cost of operating and maintaining telecom and data networking equipment\n- all other expenses involved in day-to-day operations, to service existing subscriber relationships and attract new subscribers.\n\nOperating expenses were higher this year, the net result of:\n\n- higher expenses related to our data centre acquisitions\n- partially offset by expected lower legacy service-related costs related to lower volumes and customer levels and ongoing initiatives to improve costs and productivity.\n\n#### **Higher Adjusted Operating Profit**\n\nAdjusted operating profit was 19% higher this year because of the contribution of new data centres, the ongoing growth in the higher margin on-net next generation business and cost efficiencies.\n\nExcluding the impact of the Blackiron and Pivot Data Centres acquisitions:\n\n- operating revenue would have been 3% lower this year 
compared to last year, instead of 7% higher as reported\n- adjusted operating profit would have been 11% higher this year compared to last year, instead of 19% higher as reported\n\nWe continue to work on data centre business integration and the optimization of Business Solutions' overall cost structures.", - "page_start": 49, - "page_end": 49, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Executive Summary\n\n#### ABOUT ROGERS COMMUNICATIONS INC.\n\n#### Rogers Communications is one of Canada's leading diversified communications and media companies.\n\n(%)\n\nWe provide a broad range of services: wireless and wired voice and data communications, cable television, high-speed Internet, cable telephony, wired telecom and data networking services to consumers and businesses. We also compete in television and radio broadcasting, multi-platform shopping, sports media and entertainment, digital media and consumer, trade and professional publications.\n\nAlmost all of our operations and sales are in Canada. We have a highly skilled and diversified workforce of approximately 28,000 employees. 
Our head-office is in Toronto, Ontario and we have numerous offices across Canada.\n\n#### FOUR BUSINESS SEGMENTS\n\nWe report our results of operations in four segments.\n\n| Wireless | Wireless telecommunications operations |\n| --- | --- |\n| | for consumers and businesses |\n| Cable | Cable telecommunications operations, |\n| | including cable television, Internet and |\n| | cable telephony for |\n| | Canadian consumers and businesses |\n| Business Solutions | Network connectivity through our fibre |\n| | network assets to support a range of |\n| | voice, data, networking, data centre and |\n| | cloud-based services for medium and |\n| | large Canadian businesses, governments, |\n| | and other telecommunications providers |\n| Media | A diversified portfolio of media |\n| | properties, including television and radio |\n| | broadcasting, digital media, multi |\n| | platform shopping, publishing and sports |\n| | media and entertainment |", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "marketing Sprint PCS. If financial difficulties are experienced by Sprint or any Affiliate, it could have an adverse impact on the Company's results. The Company's PCS network is part of Sprint's nationwide wireless network. The network is owned and operated by Sprint and its Affiliates. The financial viability of Sprint and its Affiliates is critical to the success of operating and\n\nThe current competitive nature of the wireless industry may prompt major wireless providers to strive for financial improvements through industry consolidation. Such consolidation could include Sprint. 
It is not clear to what extent consolidation may occur or which companies will be involved, but certain consolidation transactions may have an adverse impact on the operating results and valuation of the Company's wireless operations.\n\nThe Company's access revenue may be adversely impacted by legislative or regulatory actions that decrease access rates or exempt certain traffic from paying access to the Company's regulated telephone network. The Federal Communications Commission is currently reviewing the issue of Voice Over Internet Protocol (VOIP) as it relates to access charges. An unfavorable finding may have an adverse effect on the Company's telephone operations.\n\nThere has been a trend for incumbent local exchange carriers to see a decrease in access lines due to the effect of wireless and wireline competition, a slow down in the economy, and the elimination of a second line dedicated to dial up Internet as customers migrate to broadband connections. Although the Company has not seen a material reduction in its number of access lines to date, it experienced line decreases in each of the last two quarters. There is a significant risk that this trend could have a material adverse effect on the Company's telephone operations in the future.\n\nOn May 24, 2004, Local Number Portability (LNP) will be required in the Company's local wireline service area. The Company's customers will be able to retain their existing wireline phone number and use it to obtain service from a competing wireline or wireless provider in the service area. At this time, the Company cannot estimate the potential impact on its telephone operations. If a significant number of customers disconnect the Company's service, it will have an adverse impact on the Company's telephone operating results.\n\nThe Company's revenue from fiber leases may be adversely impacted by further erosion in demand or in price competition for these facilities. 
There is also the potential for additional bankruptcies of the Company's customers. The Company monitors each of its fiber lease customers closely to minimize the risk related to this business.\n\nThe Company operates the cable television system in Shenandoah County, Virginia. The Company has seen increased competition from satellite providers that are larger and have cost advantages over the Company in the procurement of programming. The continued success of the satellite television providers may have an adverse impact on the Company's cable television results.\n\nThe Company currently has a 12-month, $1.2 million contract with the Virginia Department of Transportation (VDOT) to provide 511 Travel services in the I-81 corridor of Virginia. This contract expires in February 2005. VDOT has recently requested a proposal for a three-year contract with two two-year extensions to extend 511 services to the entire state. Although the Company plans to submit a proposal for the new VDOT contract, there is no certainty that the Company will be selected to provide these services after the end of its current contract.\n\nThe Company may not be able to utilize all of its net operating loss carry forwards for taxes in certain states before they expire, resulting in the Company writing off some of its deferred tax assets and impacting its cash position.\n\n#### **Market Risk**\n\nThe Company's market risks relate primarily to changes in interest rates on instruments held for other than trading purposes. Our interest rate risk involves three components, although only one is of any significance at this time. The first component is outstanding debt with variable rates. As of December 31, 2003, the Company's variable rate debt balance was zero. The Company has a variable rate line of credit totaling $0.5 million with SunTrust Banks. The Company's remaining debt has fixed rates through its maturity. 
A 10.0% decline in interest rates would increase the fair value of the fixed rate debt by approximately $1.1 million, while the estimated current fair value of the fixed rate debt is approximately $42.6 million.\n\nThe second component of interest rate risk is temporary excess cash, primarily invested in overnight repurchase agreements and short-term certificates of deposit and money market funds. The Company currently has approximately $27.9 million of cash equivalents in money market funds, which are earning rates of approximately 1% per year. The cash is currently in short-term investment vehicles that have limited interest rate risk. Management continues to evaluate the most beneficial use of these funds.", - "page_start": 56, - "page_end": 56, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "#### RISKS AND UNCERTAINTIES AFFECTING OUR BUSINESS\n\nThis section describes the principal risks and uncertainties that could have a material adverse effect on our business and financial results.\n\n#### GENERAL RISKS\n\n#### **Economic Conditions**\n\nOur businesses are affected by general economic conditions and consumer confidence and spending. Recessions, declines in economic activity and economic uncertainty can erode consumer and business confidence and reduce discretionary spending. Any of these factors can negatively affect us through reduced advertising, lower demand for our products and services, decreased revenue and profitability, higher churn and bad debt expense. A significant portion of our broadcasting, publishing and digital revenues come from the sale of advertising.\n\nPoor economic conditions can also have an impact on our pension plans because there is no assurance that the plans will be able to earn the assumed rate of return. 
Capital market volatility may result in changes in the discount rates and other variables, requiring us to make contributions in the future that differ significantly from current contributions and assumptions being used in the actuarial valuation process.\n\n#### **Substantial Competition**\n\nThere is no assurance that our current or future competitors will not provide services that are superior to ours or at lower prices, adapt more quickly to evolving industry trends or changing market requirements, enter markets we operate in, or introduce competing services. Any of these factors could reduce our business market share or revenues, or increase churn.\n\nWe expect to have ongoing re-pricing of products and services with our existing subscribers as we extend lower wireless pricing offers to attract and retain customers. As such, wireless penetration of the population deepens, new wireless customers may generate lower average monthly revenue and this could slow revenue growth.\n\nWireless could face increased competition due to recent changes to foreign ownership and control of wireless licences.\n\n- Foreign telecommunication companies could enter the Canadian market by acquiring wireless licences or a holder of wireless licences. If companies with significantly greater capital resources enter the Canadian market, it could reduce our wireless market share. See \"Foreign ownership and control\" in \"Regulation in Our Industry\" for details.\n- Industry Canada's new policy regarding the transfer of spectrum licenses, combined with 2012 legislation that allows foreign ownership of wireless providers with less than 10% market share, could make it harder for incumbent wireless carriers to acquire additional spectrum, including the completion of our previously announced arrangements with Shaw and Videotron, while making it less expensive for foreign wireless carriers to enter the Canadian wireless market. 
This could increase the intensity of competition in the Canadian wireless sector.\n\nIn addition, the CRTC *Broadcasting Distribution Regulations* do not allow cable operators to obtain exclusive contracts in buildings where it is technically feasible to install two or more systems.\n\n#### TECHNOLOGY RISKS\n\n#### **Competing Technologies**\n\nSeveral technologies may affect the way our services are delivered, including:\n\n- broadband\n- IP-based voice, data and video delivery services\n- increased use of optical fibre technologies to businesses and, or residences\n- broadband wireless access and wireless services using a radio frequency spectrum that we may have limited access to.\n\nThese technologies may also lead to significantly different cost structures for users and therefore affect the long-term viability of some of our current technologies. Some of the new technologies may allow competitors to enter our markets with similar products or services at lower costs, and they may be larger and have greater access to financial resources than we have.\n\nImprovements in the quality of streaming video over the Internet, coupled with the increasing availability of television shows and movies online are anticipated to increase competition for Canadian cable television systems. If changes in technology are made to any alternative Canadian multi-channel broadcasting distribution system, our cable services may face increased competition. In addition, wireless Internet is, in some instances, replacing traditional wireline Internet as the technology for wireless Internet continues to develop.\n\nThe growing use of PVRs could affect our ability to generate television advertising revenues because viewers can skip advertising aired on the television networks. The emergence of subscriber-based satellite and digital radio products could change radio audience listening habits and have a negative effect on the results of our radio stations. 
Certain audiences are also migrating to the Internet as more video and audio content becomes available.\n\n#### **Dependence on Information Technology Systems**\n\nOur businesses depend on information technology systems for day-today operations. If we are unable to operate our systems or make enhancements to accommodate customer growth and new products and services or our systems go down, it could have an adverse effect on our ability to acquire new subscribers, service customers, manage subscriber churn, produce accurate and timely subscriber invoices, generate revenue growth and manage operating expenses. This could have an adverse impact on our results and financial position.\n\nMost of our employees and critical elements of our network infrastructure and information technology systems are concentrated in various physical facilities. If we cannot access one or more of these facilities because of a natural or manmade disaster or otherwise, our operations may be significantly affected to the extent that it may be difficult for us to recover without a significant interruption in service or negative impact to our revenue or customer base.\n\n#### **Information Security Risk**\n\nSecurity is essential to maintaining efficient, reliable business processes and to enabling sustained business growth. Technology advancements and the people using these technologies introduce new information security risks. Cyber threats are maturing with time and their sophistication and effectiveness are increasing. A security breach could result in loss of revenue, reputation, and resources, or handing", - "page_start": 77, - "page_end": 77, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## OUR **BUSINESS**\n\n**Rogers Communications Inc.** is a diversified Canadian telecommunications and media company. 
Rogers Wireless is Canada's largest wireless voice and data telecommunications services provider and the country's only national carrier operating on the combined world standard GSM/HSPA+/LTE technology platforms. Rogers Cable is a leading Canadian cable services provider, offering high-speed Internet access, cable television, and telephony products, and together with Rogers Business Solutions, provides business telecom, networking, hosting, managed services and IP solutions to small, medium and large enterprise, government and carrier customers. Rogers Media is Canada's premier group of category-leading broadcast, specialty, print and online media assets, with businesses in radio and television broadcasting, televised shopping, sports entertainment, magazine and trade journal publishing and digitalmedia. We are publicly traded on both the TSX and NYSE stock exchanges and are included in the S&P/TSX 60 Index of the largest publicly traded companies in Canada.\n\n## **DELIVERING ON OUR COMMITMENTS** IN 2013\n\n#### **FREE CASH FLOW GENERATION**\n\n**WHAT WE SAID:** Deliver another year of significant consolidated pre-tax free cash flow.\n\n**WHAT WE DID:** Generated $2.0 billion of pre-tax free cash flow in 2013, supporting the significant investments and cash we returned to shareholders during the year.\n\n#### **DIVIDEND GROWTH**\n\n**HIGHER VALUE WIRELESS SUBSCRIBERS WHAT WE SAID:** Continue the growth in our smartphone subscriber base to drive wireless data revenue and ARPU. **WHAT WE DID:** Activated nearly 2.7 million smartphones, helping bring smartphone penetration to 75% of postpaid subscriber\n\nbase.\n\n**WHAT WE SAID:** Increase cash returns to shareholders consistently over time.\n\n**WHAT WE DID:** Increased the annualized dividend per share 10% from $1.58 to $1.74 in 2013. 
Further increased the dividend by 5% to $1.83 in February 2014.\n\n#### **OPERATING EFFICIENCIES**\n\n**WHAT WE SAID:** Implement productivity improvement initiatives to capture sustainable operating efficiencies.\n\n**WHAT WE DID:** Reduced operating expenses for the combined Wireless and Cable segments, excluding the cost of wireless equipment sales, by approximately 1% from 2012 levels.\n\n#### **EVOLVE AND ENHANCE TELEVISION PLATFORM**\n\n**WHAT WE SAID:** Invest in the evolution of our current TV platform and extend our video offerings to new platforms.\n\n**WHAT WE DID:** Launched NextBox 3.0 delivering a superior TV experience and leveraged the success of Rogers AnyPlace TV, our Internet and mobile on-demand TV service.\n\n#### **FAST AND RELIABLE NETWORKS**\n\n**AT A GLANCE**\n\n**WHAT WE SAID:** Maintain Rogers leadership in network technology and innovation.\n\n**WHAT WE DID:** Rogers was named both the fastest wireless network and the fastest broadband ISP in Canada by PCMag.com.\n\n#### **ENHANCE AND STRENGTHEN THE CORE BUSINESS**\n\n**WHAT WE SAID:** We will make strategic investments to expand and strengthen the core business.\n\n**WHAT WE DID:** Executed strategic acquisitions including Mountain Cable, data centre and hosting assets, theScore and valuable, high profile sports content.\n\n#### **DATA REVENUE GROWTH**\n\n**WHAT WE SAID:** Generate double-digit wireless and broadband data growth consistent with our data usage monetization strategy.\n\n**WHAT WE DID:** Grew wireless and broadband data revenues by 17% and 16%, respectively over 2012 levels.\n\n## CONTENTS\n\n- **2** Letters to Shareholders\n- **4** Strategic Objectives and Value Drivers\n- **5** Why Invest i n Rogers\n- **6** Connect Like Never Before\n- **16** Corporate Social Responsibility\n- **18** Corporate Governance\n- **20** Directors and Senior Executive Officers\n- **24** Management's Discussion and Analysis\n- **88** Management's Responsibility for Financial Reporting\n- **88** 
Independent Auditors' Report of Registered Public Accounting Firm\n- **89** Consolidated Statements of Income\n- **90** Consolidated Statements of Comprehensive Income\n- **91** Consolidated Statements of Financial Position\n- **92** Consolidated Statements of Changes in Shareholders' Equity\n- **93** Consolidated Statements of Cash Flows\n- **94** NotestoConsolidated Financial Statements\n- **126** Corporate and Shareholder Information", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "#### **Key Achievements**\n\n#### **Higher Operating Revenue and Adjusted Operating Profit**\n\n- Consolidated operating revenue was 2% higher this year compared to 2012, led by an increase in data revenue at Wireless, higher Internet revenue at Cable, higher Next Generation revenue at Business Solutions and higher subscriber revenue at Media. Revenue grew by 3% in Cable, 7% in Business Solutions and 5% in Media, while revenue at Wireless remained unchanged as the increase in data revenue was offset by the decrease in voice revenue.\n- Consolidated adjusted operating profit rose 3% this year to $4,993 million, with consolidated adjusted operating profit margins of 39.3%, resulting from higher revenue, the realization of cost efficiencies and shifts in the mix of revenue from products and services sold.\n- Postpaid Wireless subscriber growth continued with net additions of 228,000 and lower churn of 1.24%.\n- Cable high-speed Internet subscribers grew by 97,000 and cable telephony lines grew by 79,000, while television households decreased by 87,000 compared to 2012.\n\n#### **Strong Cash Flow**\n\n- Pre-tax free cash flow, defined as adjusted operating profit less spending on property, plant and equipment, and interest on longterm debt (net of capitalized interest), increased by 1% compared to 2012 to $2,044 million due to a 3% increase in adjusted operating profit offset by higher spending on property, plant and equipment. 
After-tax cash flow decreased by 6% from 2012 levels to $1,548 due to a 31% increase in cash taxes.\n#### **Strong Balance Sheet and Liquidity Position**\n\n- Issued and fully hedged US$2.5 billion of ten and thirty year senior notes at some of the lowest coupon rates ever achieved for Rogers corporate debt, in two separate offerings comprising:\n\t- US$500 million of 3.00% senior notes due 2023 and US$500 million of 4.50% senior notes due 2043\n\t- US$850 million of 4.10% senior notes due 2023 and US$650 million of 5.45% senior notes due 2043\n- Our overall weighted average cost of debt was 5.50% at December 31, 2013 compared to 6.10% at December 31, 2012 and the weighted average term to maturity on our debt was 11.3 years, compared to 9.2 years at December 31, 2012.\n\n- Ended the year with $4.5 billion of available liquidity, comprised of $2.3 billion cash on hand, $2 billion available under our bank credit facility and $0.2 billion available under our $0.9 billion accounts receivable securitization program.\n- In May 2013, each of Fitch Ratings and Standard and Poor's Ratings Services upgraded RCI's senior unsecured debt to BBB+ (from BBB) with a stable outlook, while Moody's Investors Service's comparable rating is Baa1 with a stable outlook remained unchanged from last year.\n\n#### **Growing Dividends**\n\n- We increased our annualized dividend rate in February 2013 by 10% to $1.74 per Class A Voting and Class B Non-Voting share and paid a quarterly dividend of $0.435 per share during 2013. We further increased our annualized dividend on February 12, 2014, by 5% to $1.83.\n#### **New CEO**\n\n- Guy Laurence joined Rogers in December 2013, as our new President and Chief Executive Officer, succeeding Nadir Mohamed who retired from Rogers. Mr. 
Laurence brings 30 years of global experience in the telecommunications and media industries.\n#### **Significant Developments**\n\n- Exclusive 12-year licensing agreement to broadcast national NHL games, beginning with the 2014-2015 season was signed. The agreement grants Rogers the exclusive distribution rights of all national regular season and playoff games within Canada, in multiple languages, across all platforms. At the same time, we executed separate agreements to sublicence certain of these broadcasting rights to TVA Sports and CBC.\n- Strategic acquisitions of Score Media Inc. (theScore), Mountain Cablevision Ltd. (Mountain Cable), Blackiron Data ULC (Blackiron) and Pivot Data Centres were completed.\n- Rogers First Rewards, a new loyalty program allowing customers to earn points on their eligible purchases and redeem them online for a wide selection of Rogers products and services, was launched in the Greater Toronto Area, Ottawa, Kingston, Sudbury and other cities throughout Ontario. We also received regulatory approval to launch a Rogers credit card which augments this loyalty program and will accelerate the rate at which customers earn points.\n\n**ADJUSTED OPERATING PROFIT BY SEGMENT**\n\n#### (IN MILLIONS OF DOLLARS) **CONSOLIDATED TOTAL ASSETS**\n\n(IN MILLIONS OF DOLLARS)", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "#### WIRELESS\n\n## ROGERS IS CANADA'S LARGEST WIRELESS\n\nCOMMUNICATIONS SERVICE PROVIDER\n\nAs at December 31, 2013, we had:\n\n- approximately 9.5 million subscribers\n- approximately 34% share of the Canadian wireless market.\n\n#### PRODUCTS AND SERVICES\n\nRogers is a Canadian leader in innovative new wireless network technologies and services. 
We provide wireless voice and advanced high-speed data communication services to subscribers across Canada under the Rogers, Fido and Chatr brands, and provide our customers with the best and latest wireless devices and applications including:\n\n- mobile high speed Internet access\n- wireless voice and enhanced voice features\n- wireless home phone\n- device protection\n- text messaging\n- e-mail\n- global voice and data roaming\n- machine-to-machine solutions\n- advanced business solutions\n- Suretap mobile wallet\n- Rogers AnyPlace TV\n- Rogers One Number\n- Rogers First Rewards Loyalty Program.\n\n#### NATIONAL DISTRIBUTION\n\nWe distribute our wireless products using various channels including:\n\n- independent dealer networks\n- company-owned Rogers, Fido and Chatr retail stores\n- customer self-serve rogers.com, fido.ca, chatrwireless.com, ecommerce sites\n- Rogers call centres and outbound telemarketing\n- major retail chains and convenience stores.\n\n#### EXTENSIVE WIRELESS NETWORK\n\nRogers has one of the most extensive and advanced wireless networks in Canada:\n\n- supports wireless services on smartphone, tablets, computers and a broad variety of M2M, mobile commerce, retail point of sale and other specialized devices\n- the first LTE high-speed network in Canada, which reached more than 73% of the Canadian population at December 31, 2013\n- voice and data roaming agreements with international carriers in more than 200 countries\n- network sharing arrangements with several regional wireless operators in Canada.\n\nWe are continuously enhancing our IP service infrastructure for all of our wireless services. Advances in technology have transformed how our customers interact and how they use the variety of tools that are available to them in their personal and professional lives. 
Technology has also changed the way businesses operate.\n\nNew technologies allow us to offer new services, such as Rogers One Number, which makes enhanced wireless services available to subscribers on their computer, tablet, or smartphone and can be used as an alternative to fixed line telephony. Users enjoy the same services and features across the coverage area, thanks to the seamless integrated nature of the Rogers network and those of our roaming and network sharing partners.", - "page_start": 40, - "page_end": 40, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RCI_2013.pdf", - "query": "What has Rogers Communications done to improve its television platform?", - "target_page": 2, - "target_passage": "Launched NextBox 3.0 delivering a superior TV experience and leveraged the success of Rogers AnyPlace TV, our Internet and mobile on-demand TV service.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## **ROGERS COMMUNICATIONS INC.** AT A GLANCE\n\n#### **ROGERS COMMUNICATIONS**\n\n**Rogers Communications (TSX: RCI; NYSE: RCI) is a diversified Canadian telecommunications and media company. As discussed in the following pages, Rogers Communications is engaged in the telecom and media businesses through its primary operating segments Rogers Wireless, Rogers Cable, Rogers Business Solutions and Rogers Media.** \n\n#### WIRELESS SEGMENT\n\nRogers Wireless provides wireless voice and data communications services across Canada to approximately 9.5 million customers under the Rogers Wireless, Fido and chatr brands. Rogers Wireless is Canada's largest wireless provider and the only national carrier operating on the combined global standard GSM/HSPA+/LTE technology platforms. Rogers Wireless is Canada's leader in innovative wireless services, and provides customers with the best and latest wireless devices and applications and the fastest network speeds. 
Rogers Wireless also provides seamless wireless roaming across the U.S. and more than 200 other countries, and is the Canadian leader in the deployment of mobile commerce and machineto-machine communications.\n\n#### CABLE AND BUSINESS SOLUTIONS SEGMENTS\n\nRogers Cable is a leading Canadian cable services provider, whose service territory covers approximately 4.0 million homes in Ontario, New Brunswick and Newfoundland representing approximately 30% of the Canadian cable market. Our advanced digital hybrid fibre-coax network provides market leading highspeed broadband Internet access speeds, the most innovative selection of digital television and online viewing and telephony services to millions of residential and small business customers. Together with Rogers Business Solutions, it also provides scalable carrier-grade business telecom, networking, hosting and managed data services, and IP connectivity and solutions to medium and large enterprise, government and carrier customers.\n\n#### MEDIA SEGMENT\n\nRogers Media is Canada's premier destination for category-leading television and radio broadcasting, sports entertainment, publishing, and digital media properties. Television assets include national City network which reaches more than 80% of Canadians, five OMNI Television multilingual channels, seven regional and national Sportsnet channels, as well as specialty channels FX Canada, OLN, The Biography Channel and G4. Rogers Media also owns The Shopping Channel, Canada's only nationally televised and online shopping service. It operates more than 50 Canadian radio stations, publishes 50+ well known consumer and business magazines, and owns a suite of digital media properties. Media owns the Toronto Blue Jays Baseball Club and Rogers Centre, Canada's largest sports and entertainment facility. 
Rogers also holds a 37.5% investment in Maple Leaf Sports & Entertainment, owner of NHL Toronto Maple Leafs, NBA Toronto Raptors and MLS Toronto FC.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Our new wireless Share Everything plans were Canada's first to let individuals, families and small businesses share wireless data and unlimited nationwide talk and text, with up to 10 wireless devices. Rogers recently further enhanced its exciting One Number service by introducing smartphone apps which enable customers to use mobile data or Wi-Fi to talk, text and video chat using their existing Rogers wireless number from any device.\n\nWe also keep customers informed and entertained with Rogers nextgeneration NextBox 3.0 TV experience which allows customers to view and record up to eight HD programs simultaneously, store hundreds of hours of content and enjoy whole-home PVR capability. And with Rogers Anyplace TV, it's also a wireless experience where viewers can navigate their cable guide, use a virtual remote, set PVR recordings and stream live or on-demand content from a tablet, smartphone, laptop or gaming console.\n\nRogers continues to be Canada's innovation leader in rapidly growing areas such as wireless machine-to-machine communications, remote home monitoring and automation, mobile payments, in-car infotainment and telematics, and digital media. As well, Rogers has deployed a suite of unique local digital services that create virtual marketplaces for bringing consumers and businesses together and provide location-based targeted offers.\n\nThese are just a few examples of the ways Rogers continues to innovate and lead the way, introducing wireless, broadband and digital technologies and services that fundamentally change the way customers stay connected, informed and entertained anywhere they are. 
Canadians know there's one thing to be certain of – if they're with Rogers, they'll never miss a thing.", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "With Canada's first and fastest LTE wireless network – the global gold standard in wireless network technology – Rogers makes \"placeshifting\" a reality so customers can connect to their communications, information and entertainment from almost anywhere, easily and seamlessly. With Rogers, watching TV on the train, conducting a virtual white-boarding session from the beach, disarming a home monitoring system from a smartphone, or answering a home phone from 5,000 kilometers away are becoming everyday activities. Rogers customers no longer have to pick up the phone to check their voicemail; they don't need to be in town to catch their local news; and they don't have to be at their PCs to access their e-mail. And with Rogers, businesses no longer need to work in traditional offices because we help them to quickly set up virtual workspaces, with complete access to customers, colleagues, files and corporate applications, so they are as productive on the road as they are in the office.\n\nAnd now, small businesses as well as households can enjoy the flexibility and value of Rogers new Wireless Home and Small Business Phone products as well.\n\nCustomers know that Rogers makes it easy and seamless to connect with the same personalized information, communications and entertainment experiences no matter where they are – at work, at school, at home or away, including when travelling to more than 200 countries around the world. 
And they know that only Rogers is there first with innovative new services, such as mobile TV, remote home monitoring, and Rogers One Number, which allows them to switch calls between their wireless device, computer, and home phone without interruption; manage e-mails, text messages and voicemail; hold live video chats; and combine and sync contacts from across multiple devices – no matter where they are.", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "WIRELESS CABLE MEDIA **ROGERS.COM**", - "page_start": 131, - "page_end": 131, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## **LEADING** CONTENT\n\n#### ROGERS IS COMMITTED TO DELIVERING WORLD-CLASS CONTENT AND EXPERIENCES TO CONSUMERS AND ADVERTISING SOLUTIONS TO BUSINESSES. THE COMPANY HAS A STRONG LEGACY OF BUILDING POWERFUL MEDIA BRANDS WITH COMPELLING CONTENT THAT RESONATES WITH AUDIENCES ACROSS MULTIPLE PLATFORMS ON ANY DEVICE.\n\nToday, businesses across Canada connect with customers through Rogers category-leading television and radio assets, sports entertainment, televised and online shopping, publishing, and digital media properties as the one-stop solution for all their local and national advertising needs.\n\nRogers Media is Canada's premier combination of diversified broadcast, specialty, sports, print and online media assets which together touch nearly 90% of Canadians every week. This includes over 50 popular AM and FM radio stations across Canada. In television, it includes the seven station City network which broadcasts intensely local, urban-oriented", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "ROGERS COMMUNICATIONS INC. 
**2013 ANNUAL REPORT**", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "knows how businesses work, we also offer a choice of specifically designed plans and options that allow users to share buckets of voice and data, connect directly with team members, establish wireless backup for point-of-sale and other systems, and roam frequently with cost certainty.\n\nFor hundreds of thousands of smaller businesses located in and around Rogers cable footprint, Rogers offers a compelling set of wired telephony and Internet solutions that provide enterprise-grade dependability and value. With voice, data, hosting and online security solutions built specifically for business, Rogers provides a single reliable source for innovative, dependable communications solutions that are backed up by around-the-clock live agent support.\n\nLarger enterprises also increasingly rely on Rogers to deliver corporatecritical voice, Internet, networking and managed data centre solutions across its fibre-optic network that connects thousands of commercial and municipal buildings. These next generation on-net services for enterprise customers are backed by dedicated, around-the-clock support and connectivity to Rogers high-speed national fibre-optic backbone that provides redundancy as well as seamless connectivity into the United States and Europe.\n\nRogers also provides the most extensive set of advanced wireless machine-to-machine connectivity solutions which help businesses to increase productivity, reduce costs and optimize operations. 
As well, Rogers remains at the forefront of mobile commerce and electronic payments solutions in the Canadian market.\n\nBusinesses across Canada also connect with customers through Rogers leading media brands as the one-stop solution for all their local and national radio, television, online and print advertising needs.", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## **BUSINESS** SOLUTIONS\n\nIN TODAY'S FAST-PACED DIGITAL WORLD OF BUSINESS, THE ABILITY TO COMMUNICATE AND ACCESS INFORMATION ANYTIME, ANYPLACE IS A COMPETITIVE ADVANTAGE THAT BUSINESS PROFESSIONALS LOOK TO ROGERS TO PROVIDE. ROGERS ENSURES THE INFORMATION THAT DRIVES COMMERCE FORWARD IS ALWAYS ON HAND AND HELPS BUSINESSES DEFINE HOW TO WIN IN THE DIGITAL WORLD.\n\nRogers provides a single reliable source for advanced business-focused voice, Internet and data networking solutions designed specifically for the most demanding of wireless and wired commercial requirements.\n\nBusinesses across Canada rely on Rogers for its national wireless network, world-leading LTE technology, seamless global connectivity, and the broadest array of wireless applications and devices, because they know that their mobility and remote connectivity needs are always covered with the most advanced solutions available. Because Rogers", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "#### WIRELESS\n\n## ROGERS IS CANADA'S LARGEST WIRELESS\n\nCOMMUNICATIONS SERVICE PROVIDER\n\nAs at December 31, 2013, we had:\n\n- approximately 9.5 million subscribers\n- approximately 34% share of the Canadian wireless market.\n\n#### PRODUCTS AND SERVICES\n\nRogers is a Canadian leader in innovative new wireless network technologies and services. 
We provide wireless voice and advanced high-speed data communication services to subscribers across Canada under the Rogers, Fido and Chatr brands, and provide our customers with the best and latest wireless devices and applications including:\n\n- mobile high speed Internet access\n- wireless voice and enhanced voice features\n- wireless home phone\n- device protection\n- text messaging\n- e-mail\n- global voice and data roaming\n- machine-to-machine solutions\n- advanced business solutions\n- Suretap mobile wallet\n- Rogers AnyPlace TV\n- Rogers One Number\n- Rogers First Rewards Loyalty Program.\n\n#### NATIONAL DISTRIBUTION\n\nWe distribute our wireless products using various channels including:\n\n- independent dealer networks\n- company-owned Rogers, Fido and Chatr retail stores\n- customer self-serve rogers.com, fido.ca, chatrwireless.com, ecommerce sites\n- Rogers call centres and outbound telemarketing\n- major retail chains and convenience stores.\n\n#### EXTENSIVE WIRELESS NETWORK\n\nRogers has one of the most extensive and advanced wireless networks in Canada:\n\n- supports wireless services on smartphone, tablets, computers and a broad variety of M2M, mobile commerce, retail point of sale and other specialized devices\n- the first LTE high-speed network in Canada, which reached more than 73% of the Canadian population at December 31, 2013\n- voice and data roaming agreements with international carriers in more than 200 countries\n- network sharing arrangements with several regional wireless operators in Canada.\n\nWe are continuously enhancing our IP service infrastructure for all of our wireless services. Advances in technology have transformed how our customers interact and how they use the variety of tools that are available to them in their personal and professional lives. 
Technology has also changed the way businesses operate.\n\nNew technologies allow us to offer new services, such as Rogers One Number, which makes enhanced wireless services available to subscribers on their computer, tablet, or smartphone and can be used as an alternative to fixed line telephony. Users enjoy the same services and features across the coverage area, thanks to the seamless integrated nature of the Rogers network and those of our roaming and network sharing partners.", - "page_start": 40, - "page_end": 40, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## **CONNECTED** HOME\n\nROGERS CONTINUES TO DEFINE HOW FAMILIES COME TOGETHER AND CONNECT WITH THEIR WORLD. MILLIONS OF CANADIANS DEPEND ON ROGERS TO KEEP THEM INFORMED, CONNECTED AND ENTERTAINED WITH A COMBINATION OF THE FASTEST INTERNET SPEEDS AND THE MOST INNOVATIVE TELEVISION, TELEPHONY AND HOME MONITORING SOLUTIONS AVAILABLE.\n\nThe core of Rogers connected home strategy is to provide customers with the fastest broadband connections, together with the ability to seamlessly shift – to shift time, to shift screens and to shift places so they access what they want, when they want, on the screen of their choice.\n\nRogers offers the best in on-demand, sports, movies, specialty, episodic and multicultural programming. 
Customers can schedule, pause, rewind", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RCI_2013.pdf", - "query": "Until what NHL season will the Vancouver's ice hockey team be a Rogers Communications partner?", - "target_page": 39, - "target_passage": "Sportsnet announced a 10-year partnership extension with the Vancouver Canucks through the 2022-2023 NHL seasons", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "#### ACQUISITIONS\n\n- Closed our agreement to acquire Metro 14 Montreal for $10 million on February 4, 2013, and relaunched the station as City Montreal, expanding the City broadcast TV network into the largest market in Quebec and increasing the City television network reach to over 80% of Canadian households.\n- Finalized our purchase of theScore, Canada's third largest specialty sports channel, for $167 million. We later rebranded theScore as Sportsnet 360.\n\n#### NHL\n\n- Advanced our strategy of delivering highly sought-after sports content anywhere, anytime, on any platform and strengthening the value of our sports brand by entering into an exclusive 12-year licensing agreement with the NHL which begins with the 2014-2015 season and grants Rogers the following:\n\t- national rights across television broadcasts, wireless and mobile tablets and Internet streaming\n\t- national rights to all regular season games, all playoff games and the Stanley Cup Final, and all special events and nongame events (e.g. 
NHL All-Star Game, NHL Draft) – in multiple languages\n\t- out-of-market rights for all regional games\n\t- ownership of all linear and digital highlights, including condensed games and video archives\n\t- NHL broadcast assets: Rogers to operate NHL Centre Ice and NHL Game Centre Live\n\t- sponsorship rights to the NHL Shield logo as an official partner of the NHL\n\t- Canadian representation of ad sales for NHL.com\n\t- ownership of all commercial inventories for the television broadcasts\n\t- rights to sublicense broadcasting rights to TVA and CBC\n\t- rights to use the Hockey Night In Canada brand through the CBC sublicense agreement.\n\nThrough this agreement, Rogers plans to provide Canadians with a unique viewing experience that will feature expanded pre- and postgame coverage of regular season and playoff games and other enhanced NHL content. We expect this agreement to drive Sportsnet subscriber growth and to provide highly sought after content in multiple languages across all of Rogers' platforms.\n\n#### MEDIA FINANCIAL RESULTS\n\n| | | Years ended December 31 | |\n| --- | --- | --- | --- |\n| (In millions of dollars, except percentages) | 2013 1 | 2012 | % Chg |\n| Operating revenue – Media | $ 1,704 | $ 1,620 | 5 |\n| Operating expenses | (1,543) | (1,430) | 8 |\n| Adjusted operating profit – Media | $ 161 | $ 190 | (15) |\n| Adjusted operating profit margin | 9.4% | 11.7% | |\n| Additions to property, plant and equipment | $ 79 | $ 55 | 44 |\n\n1 Results of operations include theScore's operating results as of April 30, 2013 (the date of acquisition).\n\n#### **MEDIA REVENUE**\n\n(IN MILLIONS OF DOLLARS)\n\n#### **Higher Operating Revenue**\n\nMedia generates revenue in five areas:\n\n- advertising sales across its television, radio, publishing and digital media properties\n- circulation\n- subscriptions\n- retail product sales\n- ticket sales, receipts of MLB revenue sharing and concession sales associated with Rogers Sports 
Entertainment.\n\nOperating revenue was 5% higher this year, mainly because of:\n\n- higher subscription and advertising revenue generated by the Sportsnet properties, including the acquisition of theScore, and overall growth in distribution of our other specialty channels\n- higher advertising revenue of $21 million resulting from timing of NHL hockey games. Advertising revenue last year was lower than normal due to the NHL player lockout which resulted in no NHL games being aired, and higher than normal this year due to the compressed 2012-2013 season which started in January 2013 and the compressed 2013-2014 NHL schedule in advance of the upcoming winter Olympics\n- higher attendance and merchandise sales at Blue Jays games\n- higher sales at The Shopping Channel.\n\nThe increases in revenue were partially offset by continuing volatility in advertising spending across most industry sectors, driven by a continued slow economy.", - "page_start": 51, - "page_end": 51, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "programming across the country's largest markets, as well as five OMNI Television stations which deliver multilingual news, information and entertainment to Canada's multiple language communities.\n\nThe Sportsnet specialty network provides sports programming across Canada through its four regional television channels and its nationallydistributed Sportsnet ONE, Sportsnet World, and Sportsnet 360 stations. Rogers also owns other Canadian specialty television channels, including FX Canada, OLN, The Biography Channel and G4.\n\nThe Shopping Channel – Canada's only nationally televised and Internet shopping service – is a leading interactive multi-channel retailer, offering a vast assortment of exclusive products and top brand names. 
As one of Canada's most innovative and diversified retailers, it provides customers with exceptional selections in health/beauty, jewelry, home/lifestyle, fashion/accessories, and electronics.\n\nRogers also publishes many well-known consumer magazines, such as Maclean's, Chatelaine, FLARE, L'actualité, and Canadian Business, and is the leading publisher of a number of industry, medical and financial publications. Rogers also controls a suite of fast-growing digital media assets, including 90+ owned and 300+ premium partnership online sites, as well as the recently launched Next Issue Canada digital magazine platform which provides 100+ of North America's most celebrated titles on an unlimited anytime, anywhere basis.\n\nIn sports entertainment, Rogers owns the Toronto Blue Jays baseball team and Rogers Centre stadium, Canada's largest sports and entertainment facility and home field of the Blue Jays. Rogers also holds a 37.5% investment in Maple Leaf Sports & Entertainment which owns the NHL Maple Leafs, NBA Raptors, MLS Toronto FC and a number of other sports related assets.", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## **ROGERS COMMUNICATIONS INC.** AT A GLANCE\n\n#### **ROGERS COMMUNICATIONS**\n\n**Rogers Communications (TSX: RCI; NYSE: RCI) is a diversified Canadian telecommunications and media company. As discussed in the following pages, Rogers Communications is engaged in the telecom and media businesses through its primary operating segments Rogers Wireless, Rogers Cable, Rogers Business Solutions and Rogers Media.** \n\n#### WIRELESS SEGMENT\n\nRogers Wireless provides wireless voice and data communications services across Canada to approximately 9.5 million customers under the Rogers Wireless, Fido and chatr brands. Rogers Wireless is Canada's largest wireless provider and the only national carrier operating on the combined global standard GSM/HSPA+/LTE technology platforms. 
Rogers Wireless is Canada's leader in innovative wireless services, and provides customers with the best and latest wireless devices and applications and the fastest network speeds. Rogers Wireless also provides seamless wireless roaming across the U.S. and more than 200 other countries, and is the Canadian leader in the deployment of mobile commerce and machineto-machine communications.\n\n#### CABLE AND BUSINESS SOLUTIONS SEGMENTS\n\nRogers Cable is a leading Canadian cable services provider, whose service territory covers approximately 4.0 million homes in Ontario, New Brunswick and Newfoundland representing approximately 30% of the Canadian cable market. Our advanced digital hybrid fibre-coax network provides market leading highspeed broadband Internet access speeds, the most innovative selection of digital television and online viewing and telephony services to millions of residential and small business customers. Together with Rogers Business Solutions, it also provides scalable carrier-grade business telecom, networking, hosting and managed data services, and IP connectivity and solutions to medium and large enterprise, government and carrier customers.\n\n#### MEDIA SEGMENT\n\nRogers Media is Canada's premier destination for category-leading television and radio broadcasting, sports entertainment, publishing, and digital media properties. Television assets include national City network which reaches more than 80% of Canadians, five OMNI Television multilingual channels, seven regional and national Sportsnet channels, as well as specialty channels FX Canada, OLN, The Biography Channel and G4. Rogers Media also owns The Shopping Channel, Canada's only nationally televised and online shopping service. It operates more than 50 Canadian radio stations, publishes 50+ well known consumer and business magazines, and owns a suite of digital media properties. 
Media owns the Toronto Blue Jays Baseball Club and Rogers Centre, Canada's largest sports and entertainment facility. Rogers also holds a 37.5% investment in Maple Leaf Sports & Entertainment, owner of NHL Toronto Maple Leafs, NBA Toronto Raptors and MLS Toronto FC.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "#### MEDIA\n\n#### DIVERSIFIED CANADIAN MEDIA COMPANY\n\nWe have a broad portfolio of media properties, which most significantly includes:\n\n- category-leading television and radio broadcasting properties\n- multi-platform shopping\n- publishing including Next Issue Canada\n- digital media\n- sports media and entertainment\n- exclusive 12-year licensing agreement with the NHL to broadcast all national live hockey games within Canada in multiple languages on all platforms beginning with the 2014- 2015 season.\n\n#### A NETWORK OF MEDIA ASSETS THAT REACHES CANADIANS COAST-TO-COAST\n\n| Radio | We operate more than 50 AM and FM radio stations in markets across Canada, including popular radio brands such as 98.1 CHFI, |\n| --- | --- |\n| | 680 News, Sportsnet 590, The FAN, KISS 92.5, JACK FM and SONiC. |\n| Television | We operate several conventional and specialty television networks: |\n| | • City network, which together with affiliated stations, has distribution to over 80% of Canadian households |\n| | • OMNI multicultural television stations |\n| | • Specialty channels that include Outdoor Life Network, The Biography Channel (Canada), G4 Canada and FX (Canada) |\n| | • Sportsnet's four regional networks and Sportsnet One, Sportsnet World and Sportsnet 360 |\n| | • The Shopping Channel, Canada's only national televised shopping channel which generates a significant and growing portion of its |\n| | revenues from online sales. |\n| Publishing | • We publish many well-known consumer magazines such as Maclean's, Chatelaine, Flare, Hello! 
Canada and Canadian Business |\n| | • We are a leading publisher of marketing, medical, financial and trade publications |\n| | • We also have a broad digital presence with a number of online publications, and are extending content across new platforms |\n| | • We deliver exclusive and unlimited access to a catalogue of more than 100 premium Canadian and US magazine titles through Next |\n| | Issue Canada digital magazine service offering. |\n| Digital Media | Our online and mobile digital media platforms include digital advertising across websites and mobile platforms, digital content |\n| | subscriptions, and commerce solutions. |\n| Sports Entertainment | We own the Toronto Blue Jays, Canada's only Major League Baseball team, and the Rogers Centre event venue, which hosts the |\n| | Toronto Blue Jays' home games and other professional league games, concerts, trade shows and special events. |\n\n#### COMPETITION\n\nOur radio stations compete mainly with individual stations in local markets, but they also compete:\n\n- nationally with other large radio operators, including satellite radio operator Sirius/XM, the CBC, Bell Media and Corus Entertainment\n- with other media, including newspapers, magazines, television and outdoor advertising\n- with new technologies such as online web information services, music downloading, portable media players and online music streaming services.\n\nThe Shopping Channel competes with:\n\n- retail stores, catalogue, Internet and direct mail retailers\n- infomercials that sell products on television\n- other television channels, for channel placement, viewer attention and loyalty.\n\nOur magazines and other publications compete for readership and advertisers with:\n\n- other Canadian magazines\n- foreign, mostly US, titles that sell in significant quantities in Canada\n- online information and entertainment websites.\n\nTelevision and specialty services compete for viewers and advertisers with:\n\n- other Canadian television 
stations that broadcast in their local markets, including those owned and operated by the CBC, Bell Media and Shaw Media, some of which have greater national coverage\n- other specialty channels\n- other distant Canadian signals and US border stations given the timeshifting capacity available to digital subscribers\n- other media, including newspapers, magazines, radio and outdoor advertising\n- content available on the Internet.\n\nCompetition in Sports Entertainment includes:\n\n- other Toronto professional teams, for attendance at Blue Jays games\n- other Major League Baseball teams, for Blue Jays players and fans\n- other local sporting and special event venues.", - "page_start": 50, - "page_end": 50, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "#### **Higher Operating Expenses**\n\nWe assess Media operating expenses in four areas:\n\n- the cost of broadcast content (including sports programming)\n- the cost of retail products sold by The Shopping Channel and Sports Entertainment\n- Blue Jays player payroll\n- all other expenses involved in day-to-day operations.\n\nOperating expenses were 8% higher than 2012, mainly because of higher programming costs at Sportsnet, higher Toronto Blue Jays player salaries, higher merchandise spending at The Shopping Channel and costs associated with our launch of Next Issue Canada.\n\nThe higher programming costs this year are a combination of lower costs in 2012 because of the NHL player lockout, and higher costs this year because more hockey games than normal were aired because of the compressed NHL hockey schedule due in part to upcoming winter Olympics. Approximately $62 million of Media's year over year increase in operating expense this year resulted from the 2012 NHL lockout and the timing of games aired in 2013. 
Player salaries at the Toronto Blue Jays were $34 million higher this year.\n\n#### **Lower Adjusted Operating Profit**\n\nAdjusted operating profit was down compared to last year mainly because of revenue and expenses changes described above.\n\nExcluding the impact of the 2012 NHL lockout and the compressed NHL schedule:\n\n- operating revenue would have been 4% higher this year compared to last year, instead of 5% higher as reported\n- adjusted operating profit would have been 7% higher this year compared to last year, instead of 15% lower as reported.\n\nExcluding the acquisition of theScore:\n\n- operating revenue would have been 4% higher this year compared to last year, instead of 5% higher as reported\n- adjusted operating profit would have been 19% lower this year compared to last year, instead of 15% lower as reported.", - "page_start": 52, - "page_end": 52, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## **CONNECTED** HOME\n\nROGERS CONTINUES TO DEFINE HOW FAMILIES COME TOGETHER AND CONNECT WITH THEIR WORLD. MILLIONS OF CANADIANS DEPEND ON ROGERS TO KEEP THEM INFORMED, CONNECTED AND ENTERTAINED WITH A COMBINATION OF THE FASTEST INTERNET SPEEDS AND THE MOST INNOVATIVE TELEVISION, TELEPHONY AND HOME MONITORING SOLUTIONS AVAILABLE.\n\nThe core of Rogers connected home strategy is to provide customers with the fastest broadband connections, together with the ability to seamlessly shift – to shift time, to shift screens and to shift places so they access what they want, when they want, on the screen of their choice.\n\nRogers offers the best in on-demand, sports, movies, specialty, episodic and multicultural programming. Customers can schedule, pause, rewind", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## **LEADING** CONTENT\n\n#### ROGERS IS COMMITTED TO DELIVERING WORLD-CLASS CONTENT AND EXPERIENCES TO CONSUMERS AND ADVERTISING SOLUTIONS TO BUSINESSES. 
THE COMPANY HAS A STRONG LEGACY OF BUILDING POWERFUL MEDIA BRANDS WITH COMPELLING CONTENT THAT RESONATES WITH AUDIENCES ACROSS MULTIPLE PLATFORMS ON ANY DEVICE.\n\nToday, businesses across Canada connect with customers through Rogers category-leading television and radio assets, sports entertainment, televised and online shopping, publishing, and digital media properties as the one-stop solution for all their local and national advertising needs.\n\nRogers Media is Canada's premier combination of diversified broadcast, specialty, sports, print and online media assets which together touch nearly 90% of Canadians every week. This includes over 50 popular AM and FM radio stations across Canada. In television, it includes the seven station City network which broadcasts intensely local, urban-oriented", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "\"ROGERS MADE CLEAR PROGRESS ON A NUMBER OF STRATEGIC FRONTS, WHILE CONTINUING TO DELIVER STRONG RETURNS TO SHAREHOLDERS AND BUILDING UPON THE COMPANY'S DEEP-ROOTED FOUNDATIONS FOR THE FUTURE BENEFIT OF ALL OUR STAKEHOLDERS.\"\n\n**ALAN HORN, CPA, CA**\n\n## A MESSAGE FROM THE **CHAIRMAN**\n\n**2013 was another solid year in which Rogers made clear progress on a number of strategic fronts, while continuing to deliver strong returns to shareholders and building upon the company's deep-rooted foundations for the future benefit of all our stakeholders. Our management team delivered on their financial guidance targets in what continue to be highly competitive and regulatorily intense markets.**\n\nRogers continued to deliver on the evolution and expansion of its core services. 
It quickly expanded the reach of Canada's first and fastest LTE wireless network to 73% of the Canadian population, introduced significant enhancements to its broadband data speeds and cable TV platform, and further added to its leading sports content and digital media assets.\n\nThe company executed several strategic transactions that support Rogers core growth strategies, including in the areas of wireless spectrum and network sharing, cable footprint expansion, and significantly expanding its data centre, colocation and managed services capabilities for businesses. In addition, it struck a landmark 12 year agreement with the NHL for the exclusive national hockey broadcast rights across Canada.\n\nRogers also continued to deliver on its innovation agenda, being first to market with a series of new services in 2013, including in the quickly growing areas of mobile payments, machine-to-machine communications, home monitoring, local digital services, and a new and unique customer loyalty program.\n\nWe continued to return increasing amounts of cash to shareholders. In 2013, the company's significant cash generation allowed the Board to increase the dividend\n\nby 10% and return approximately $900 million to our shareholders in the form of dividends and share buybacks. And we further increased the dividend by 5% in February 2014, continuing a multi-year trend of dividend growth. As you read on in this report, you will find many more examples and much detail of the company's operational and financial accomplishments over the past year.\n\nI would like to take the opportunity to thank our recently retired President and Chief Executive Officer Nadir Mohamed for his leadership and substantial contributions at Rogers over the past 13 years. 
Succeeding a founder with professional management is always a delicate and important transition in the life cycle of a company, and Nadir provided important continuity and solid leadership as CEO over the course of the past five years for which the Board and management team are thankful.\n\nFollowing an extensive international search process, in September, 2013 the Board announced that Guy Laurence would become President and Chief Executive Officer of Rogers effective in December 2013. Guy brings 30 years of global experience in telecom, pay television and media, and is a proven, hands-on executive who has consistently delivered strong financial and operating results in highly complex and\n\ncompetitive markets. Guy is an excellent fit for this role on many levels and the entire Board look forward to his leadership for many years to come.\n\nI would encourage you to review the discussions around our corporate governance, community investments and sustainability initiatives later in this annual report. First class corporate governance practices have always been a strong tenet at Rogers, and as an entrepreneur founded and family controlled company, our Board takes pride in what is a proactive and disciplined approach to ensuring that our governance practices continue to justify the confidence of the public capital markets. 
Giving back to the communities we serve is also an important part of our culture at Rogers and the Board is very proud of the significant initiatives and investments which the company undertook over the past year on the corporate social responsibility front.\n\nI would like to thank Rogers' 28,000 employees for their ongoing dedication to our customers and striving to make Rogers better every day, my fellow Board members for their counsel and drive towards delivering continued value to our shareholders, and you our shareholders for your continued investment in this great company.\n\n**ALAN HORN, CPA, CA CHAIRMAN OF THE BOARD** ROGERS COMMUNICATIONS INC. **ALAN HORN**", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "positive health, safety and wellness culture. In 2013, Rogers was recognized as one of Canada's Top 100 Employers and one of the Best Diversity Employers.\n\nIn support of positive change in communities across Canada, Rogers provided cash and in-kind donations to support various organizations and causes, including youth education through our flagship program Rogers Youth Fund. This fund supports after-school homework clubs, academic tutoring and alternative schooling that help youth excel. As a natural extension of our business, we also contribute significant funding to encourage the development of innovative and creative Canadian content for film, television and wireless mobile devices.\n\nEnvironmental stewardship is a key pillar of our CSR strategy. With a focus on continually improving our environmental performance, we measure our carbon footprint each year and undertake initiatives to reduce our greenhouse gas emissions, paper consumption and waste. And, as a service provider to millions of customers each month, we've been an early and strong proponent of paperless electronic billing. 
In 2013, Rogers was also named one of Canada's Greenest Employers, an award recognizing companies that lead the nation in incorporating environmental values into their corporate culture.\n\nAcross our supply chain, we are committed to ethical procurement and have a strong framework in place to achieve this. Rogers continually works with our partners through our agreements, relationships and Supplier Code of Conduct to ensure that we collectively adhere to sound sourcing, production and environmental standards.\n\n*For a complete description of Rogers CSR priorities and performance, go to rogers.com/csr*", - "page_start": 20, - "page_end": 20, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "WIRELESS CABLE MEDIA **ROGERS.COM**", - "page_start": 131, - "page_end": 131, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EMMS_2004.pdf", - "query": "I am a shareholder of Emmis Communication, but I will be available from the 20th of June to the 4th of July, will the Annual Meeting take place during this period?", - "target_page": 6, - "target_passage": "The Annual Meeting of shareholders will be held at 10 a.m. Central Time on Wednesday, June 30, 2004, at Emmis’ Corporate office.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "## Outperform\n\nEmmis Communications 2004 Annual Report", - "page_start": 0, - "page_end": 0, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## about emmis\n\nEmmis Communications (NASDAQ: EMMS) owns 23 FM and 4 AM domestic radio stations serving the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. 
In addition, Emmis owns 16 television stations, award-winning regional and specialty magazines, a radio network, international radio interests, and ancillary businesses in broadcast sales and publishing.\n\nEmmis was founded in 1980, and the company launched its first radio station, WENS-FM, in July 1981. As Emmis (the Hebrew word for \"truth\") acquired more radio stations across the nation, it established a reputation for sound operations and emerged as a radio industry leader and innovator. Emmis was the first broadcast company to own toprated radio stations in both L.A. and New York, and it pioneered such concepts as the all-sports format.\n\nThe company launched its magazine division in 1988 with the purchase of *Indianapolis Monthly*, and moved into the world of international radio in 1997, when it was awarded a license to operate a national radio network in Hungary. In 1998, Emmis expanded into television by buying six television stations in markets throughout the United States. In the last six years, the company has added properties in each of its divisions.\n\nWith its emphasis on solid operations, integrity, community involvement and fun, the company's culture has been repeatedly lauded by both its employees and its peers. Trade publications have regularly cited the company's leaders as being among the best in the business.\n\nEmmis became a public company in 1994. It maintains its worldwide headquarters in Indianapolis, where the company was founded.\n\n*This annual report contains certain non-GAAP measures. 
For a presentation of the directly comparable GAAP measure and a reconciliation of the non-GAAP measures to the GAAP measures, see the attachment to the back of our Form 10-K in this Annual Report.*", - "page_start": 1, - "page_end": 1, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "### ®\n\nemmis communications one emmis plaza 40 monument circle indianapolis, indiana 46204", - "page_start": 7, - "page_end": 7, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "#### Corporate Office\n\nOne Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204, 317.266.0100.\n\n#### Business\n\nEmmis Communications (NASDAQ: EMMS) is a diversified media firm with awardwinning radio broadcasting, television broadcasting and magazine publishing operations. Emmis' 23 FM and 4 AM domestic radio stations serve the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. The company's 16 television stations are located in Albuquerque, N.M.; Fort Myers, Fla.; Green Bay, Wis.; Honolulu; Huntington, W.Va.; Mobile, Ala./Pensacola, Fla.; New Orleans; Omaha, Neb.; Orlando, Fla.; Portland, Ore.; Terre Haute, Ind.; Topeka, Kan.; Tucson, Ariz.; and Wichita, Kan. Emmis also publishes *Indianapolis Monthly, Texas Monthly, Cincinnati, Atlanta, Los Angeles* and Country Sampler Group magazines; has a 59.5% interest in Sláger Rádió, a national radio network in Hungary; operates nine FM radio stations serving more than 50 percent of the population in the Flanders region of Belgium; and has ancillary businesses in broadcast sales, publishing and interactive products.\n\n#### Transfer Agent Register\n\nWachovia Bank N.A., Shareholder Services Group, 1525 West W.T. Harris Blvd., 3c3, Charlotte, North Carolina 28288-1153.\n\n#### Annual Meeting\n\nThe Annual Meeting of shareholders will be held at 10 a.m. 
Central Time on Wednesday, June 30, 2004, at Emmis' Corporate office.\n\n#### Form 10-K\n\nA copy of the Annual Report on Form 10-K for the fiscal year ended February 29, 2004, which was filed with the Securities and Exchange Commission, will be sent to shareholders without charge upon written request to Kate Healey, Emmis Communications Corporation, One Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204, or ir@emmis.com.\n\n#### Market and Dividend Information\n\nThe Company's Class A Common Stock is traded in the over-the-counter market and is quoted on the National Association of Securities Dealers Automated Quotation (NASDAQ) National Market System under the symbol EMMS.\n\nThe following table sets forth the high and low bid prices of the Class A Common Stock for the periods indicated. No dividends were paid during any such periods.\n\n| Quarter Ended | High | Low |\n| --- | --- | --- |\n| May 2002 | 31.85 | 26.15 |\n| August 2002 | 30.15 | 11.65 |\n| November 2002 | 24.05 | 14.25 |\n| February 2003 | 24.86 | 17.82 |\n| May 2003 | 21.24 | 14.84 |\n| August 2003 | 23.87 | 18.68 |\n| November 2003 | 24.06 | 18.00 |\n| February 2004 | 28.65 | 22.74 |\n\nOn April 23, 2004, there were approximately 4,841 record holders of the Class A Common Stock and one record holder of the Class B Common Stock.\n\nEmmis intends to retain future earnings for use in its business and does not anticipate paying any dividends on shares of its common stock in the foreseeable future.\n\n#### Executive Officers\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nWalter Z. Berger Executive Vice President, Chief Financial Officer and Treasurer\n\nRandall Bongarten Television Division President\n\nRichard F. Cummings Radio Division President\n\nGary L. Kaseff Executive Vice President, General Counsel\n\nPaul W. 
Fiddick International Division President\n\nMichael Levitan Senior Vice President, Human Resources\n\nGary Thoe Publishing Division President\n\n#### Board of Directors\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nSusan B. Bayh Former Commissioner of the International Joint Commission of the United States and Canada\n\nWalter Z. Berger Executive Vice President, Chief Financial Officer and Treasurer\n\nGary L. Kaseff Executive Vice President, General Counsel\n\nRichard A. Leventhal President and Majority Owner, LMCS, LLC\n\nPeter A. Lund Media consultant and former President of CBS Inc.\n\nGreg A. Nathanson Media consultant and former President of Fox Television Stations and Emmis Television\n\nFrank V. Sica Senior Advisor Soros Fund Management LLC\n\nLawrence B. Sorrel Managing Partner and Co-CEO Tailwind Capital Partners", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## you can count on emmis to continue to do\n\n#### Dear Shareholders,\n\nOn our year-end conference call, I said that last year was the best in Emmis Communications' history. And while that might have sounded like the usual Wall Street hyperbole – like any other CEO bragging about his company's performance – the difference is, I believed it. And I still do.\n\nBut I've been in this business long enough to know two things for sure: What I believe is not as important as what I can prove, and what we did last year is only meaningful if it reflects on how we will do in the coming year. The good news is, Emmis does have the results to back up my high praise, and what we did to perform last year does directly relate to how we'll perform in the year ahead.\n\n#### **The best year**\n\nThe bottom line is this: Emmis Communications turned in a remarkable performance last year. 
Again and again, and by a number of measures, we outperformed our peers, our markets and our own solid track record.\n\nAnd we did this in a year that was challenging in just about every way. The economy was unstable, public companies came under continuing scrutiny, indecency issues hounded broadcasters, competition for tight ad dollars increased and technology continued to reshape the media world.\n\nBut our people refused to be slowed by those challenges. Instead, they worked through them. They innovated, hustled and focused. And they produced.\n\nOur radio division's revenue growth led our markets and the industry – in our fiscal year, our group was up 4.5 percent while our markets were up 2.7 percent and the industry only 1 percent. Based on this kind of performance, we have consistently ranked among the nation's leaders in per-station revenue, and we continue to produce top-rated programming in markets across the nation.\n\nOur TV performance was even more impressive. The Emmis television group's revenues were up 0.5 percent in calendar 2003, a year when our markets saw a 2.3 percent decrease in revenues, and the industry experienced a 4.7 percent revenue decline. This industry-leading result made us one of the few groups in the nation to post positive growth. In addition, we gained revenue share at 11 of our 13 measured stations and held the line on expenses, giving us a 1.2 percent increase in fiscal-year cash flow.\n\nOur publishing and international divisions also posted strong results. In a tough publishing market, our magazines boosted their division's revenues by 4.6 percent over last year and increased cash flow by 3.3 percent. Our international division turned in a revenue increase of 27 percent and a cash flow increase of 31 percent.\n\nIn addition to boosting performance in our divisions, we honed our corporate operations by continuing to build one of the most adept and hardest-working corporate groups in American media. 
With this team in place, we've brought our leverage and cost of capital down to more manageable levels, found ways to combat the continually increasing costs of health insurance and, in a truly top-notch effort, smoothly integrated our new Austin radio properties – in just under a year as a part of Emmis, the Austin properties are enjoying significant ratings and revenue increases.\n\nOf course, for you, the real bottom line on our performance is its impact on your investment. I'm proud to say that we saw a 27 percent increase in our share price over the course of the last fiscal year – we ended fiscal '03 at 19.79, and closed the book on fiscal '04 at 25.17.\n\n#### **How we did it**\n\nOperationally, we were on top of our game last year. However, as I said, I know that the past year's performance really only matters if it reflects on what we'll do in the coming year. The good news is, it does. We performed at these high levels not by doing something unusual, but by operating the way Emmis has always operated, and the way we always will.\n\nFirst of all, we focus on assembling and maintaining the best teams in our markets. We have traditionally had the top salespeople, creative and technical professionals, news staffs, managers and support staff in every city where we operate. Their peers turn to them for industry leadership, honor them with awards and copy them at every opportunity. We invest in these people, giving them industry-leading benefits packages, great opportunities and the tools they need to succeed. This has always been a hallmark of Emmis, and it won't change.", - "page_start": 3, - "page_end": 3, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## what it has always done: outperform.\n\nIn addition, we commit ourselves to creating the best content in our markets. 
Our magazines routinely dominate their industry awards ceremonies – last year, *Texas Monthly* won a coveted National Magazine Award, and Emmis publications claimed more than half of the awards at the City and Regional Magazine competition. Our radio stations feature some of the industry's most popular personalities – in 2003, Emmis people and stations were awarded three Marconi Radio Awards. And our television operations are regularly honored by journalism organizations for their news gathering and community service. In short, we provide our markets with reliable, high-quality content – content that helps us assemble the audiences our advertisers want to reach.\n\nWe then generate revenue by overallocating to sales. We give our teams well-developed strategies, clearly defined brands and solid products. We build bigger, better sales forces and put a greater emphasis on local dollars than our competitors. We hire aggressive managers, set ambitious goals and then watch our people work harder and smarter than anyone else.\n\nWe also seize the right opportunities and make the most of them. As the cost of buying radio properties has gone through the roof, we have been careful about buying. However, when we had a chance to acquire the LBJ stations in Austin, we knew it was the right fit: good stations, a tremendous heritage and a great culture, all with an opportunity for growth. And we've already built on that group's track record – since we bought them, we've reformatted one station and quickly sent it to No. 1 in the market, and we've pushed revenues up 9 percent for the entire group.\n\nFinally, we innovate. Why has Emmis, traditionally a radio company, become the company to emulate in TV? Because we approached TV in a way it's never been approached before. Why do we operate leading hip-hop stations in markets across the nation? Because we pioneered the concept. Why have we created a new \"Music with Class\" format in St. Louis' Red 104.1? 
Because we believe we see a new opportunity. We know that successful companies don't follow the pack. They lead it, and that's what we'll always do.\n\n#### **The year ahead**\n\nThat last point – innovation – is an important one, especially for the future of Emmis, because we are planning something that could change the face of American TV and once again demonstrate that Emmis is a company that leads the way.\n\nForty years ago, Americans began taking down their TV antennas and severing broadcasters' direct link to television audiences. Since then, the cable companies—the middlemen who replaced us—have created more than $300 billion of value for themselves. However, changes in technology have given broadcasters the ability to provide the American public with the most popular TV channels, without the middlemen and at a more reasonable price.\n\nWe are developing an innovative model that will leverage that technology to get broadcast companies back into the game. I believe it has the potential to revolutionize the television industry. I also believe it will add substantial value to your investment.\n\nWe unveiled this concept at the National Association of Broadcasters meeting in April. I am proud to say that 11 other television companies joined us at that meeting to express their support for what we're calling the Broadcasters' Initiative, and more are signing on each week. Once again, Emmis has leveraged innovation to take a leading role in our industries.\n\nWe'll continue to use innovation to push us forward. Meanwhile, we'll also build and maintain the best teams, produce the best media content, outhustle and outsell our competitors, seize the best opportunities and operate this company better than any other.\n\nIn other words, you can count on Emmis to continue to do what it has always done: Outperform.\n\nThank you for your belief and investment in Emmis.\n\nJeffrey H. 
Smulyan chairman & ceo emmis communications", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "April 2024", - "page_start": 0, - "page_end": 0, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "# **Item 9B. Other Information.**\n\nNone.\n\n# **PART III**\n\n# **Item 10. Directors, Executive Officers and Corporate Governance.**\n\nThe information required under this item is included in the following sections of our Proxy Statement for our 2015 Annual Meeting of Shareholders, the sections of which are incorporated by reference herein and will be filed within 120 days after the end of our fiscal year:\n\nExecutive Officers Director Elections Board Committees and Charters Director Nominating Process Website Access to Corporate Governance Documents Section 16(a) Beneficial Ownership Reporting Compliance Corporate Governance\n\nThe certifications of our President and Chief Financial Officer required pursuant to Sections 302 and 906 of the Sarbanes-Oxley Act of 2002 are included as exhibits to this Annual Report on Form 10-K and were included as exhibits to each of our quarterly reports on Form 10-Q. Our President certified to the New York Stock Exchange (\"NYSE\") on May 15, 2014 pursuant to Section 303A.12(a) of the NYSE's listing standards, that he was not aware of any violation by the Company of the NYSE's corporate governance listing standards as of that date.\n\n# **Item 11. Executive Compensation.**\n\nThe information required under this item is included in the following sections of our Proxy Statement for our 2015 Annual Meeting of Shareholders, the sections of which are incorporated by reference herein and will be filed within 120 days after the end of our fiscal year:\n\nCompensation of Executive Officers Compensation Discussion and Analysis Director Compensation Compensation Committee Interlocks and Insider Participation\n\n# **Item 12. 
Security Ownership of Certain Beneficial Owners and Management and Related Shareholder Matters.**\n\nThe information required under this item is included in the following sections of our Proxy Statement for our 2015 Annual Meeting of Shareholders, the sections of which are incorporated by reference herein and will be filed within 120 days after the end of our fiscal year:\n\nSecurity Ownership of Certain Beneficial Owners and Management Equity Compensation Plans\n\n# **Item 13. Certain Relationships and Related Transactions, and Director Independence.**\n\nThe information required under this item is included in the following sections of our Proxy Statement for our 2015 Annual Meeting of Shareholders, the sections of which are incorporated by reference herein and will be filed within 120 days after the end of our fiscal year:\n\nElection of Directors Certain Relationships and Related Transactions\n\n# **Item 14. Principal Accounting Fees and Services.**\n\nThe information required under this item is included in the following section of our Proxy Statement for our 2015 Annual Meeting of Shareholders, the section of which is incorporated by reference herein and will be filed within 120 days after the end of our fiscal year:\n\nRatification of the Appointment of Independent Registered Public Accounting Firm", - "page_start": 79, - "page_end": 79, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "#### Table of Contents\n\n#### Legal Proceedings\n\n#### Litigation Relating to 2018 CEO Performance Award\n\nOn June 4, 2018, a purported Tesla stockholder filed a putative class and derivative action in the Delaware Court of Chancery against Elon Musk and the members of Tesla's board of directors as then constituted, alleging corporate waste, unjust enrichment and that such board members breached their fiduciary duties by approving the stock-based compensation plan awarded to Elon Musk in 2018 (the \"2018 CEO Performance Award\"). Trial was held November 14-18, 2022. 
On January 30, 2024, the Court issued an opinion finding that the 2018 CEO Performance Award should be rescinded. Plaintiff's counsel filed a brief seeking a fee award of 29,402,900 Tesla shares, plus expenses of $1,120,115.50. Tesla opposed the fee request on June 7, 2024, and a hearing was held on July 8, 2024. At Tesla's 2024 Annual Meeting of Stockholders, 72% of the disinterested voting shares of Tesla, excluding shares owned by Mr. Musk and Kimbal Musk, voted to ratify the 2018 CEO Performance Award. On June 28, 2024, because Tesla's disinterested stockholders voted to ratify the 2018 CEO Performance Award, Mr. Musk and the other director defendants, joined by Tesla, filed a brief seeking to revise the Court's January 30, 2024 opinion, and a hearing was held on August 2, 2024.\n\n#### Litigation Related to Directors' Compensation\n\nOn June 17, 2020, a purported Tesla stockholder filed a derivative action in the Delaware Court of Chancery, purportedly on behalf of Tesla, against certain of Tesla's current and former directors regarding compensation awards granted to Tesla's directors, other than Elon Musk, between 2017 and 2020. The suit asserts claims for breach of fiduciary duty and unjust enrichment and seeks declaratory and injunctive relief, unspecified damages and other relief. Defendants filed their answer on September 17, 2020.\n\nOn July 14, 2023, the parties filed a Stipulation and Agreement of Compromise and Settlement, which does not involve an admission of any wrongdoing by any party. If the settlement is approved by the Court, this action will be fully settled and dismissed with prejudice. Pursuant to the terms of the agreement, Tesla provided notice of the proposed settlement to stockholders of record as of July 14, 2023. The Court held a hearing regarding the settlement on October 13, 2023, after which it took the settlement and plaintiff counsels' fee request under advisement. 
On August 14, 2024, the parties submitted a joint letter requesting that the Court approve and enter final judgment with respect to the settlement, and decide the fee request at a later date. The settlement is not expected to have an adverse impact on our results of operations, cash flows or financial position.\n\n#### Litigation Relating to Potential Going Private Transaction\n\nBetween August 10, 2018 and September 6, 2018, nine purported stockholder class actions were filed against Tesla and Elon Musk in connection with Mr. Musk's August 7, 2018 Twitter post that he was considering taking Tesla private. On January 16, 2019, Plaintiffs filed their consolidated complaint in the United States District Court for the Northern District of California and added as defendants the members of Tesla's board of directors. The consolidated complaint asserts claims for violations of the federal securities laws and seeks unspecified damages and other relief. The parties stipulated to certification of a class of stockholders, which the court granted on November 25, 2020. Trial started on January 17, 2023, and on February 3, 2023, a jury rendered a verdict in favor of the defendants on all counts. After trial, plaintiffs filed a motion for judgment as a matter of law and a motion for new trial, which the Court denied and judgement was entered in favor of defendants on July 11, 2023. On July 14, 2023, plaintiffs filed a notice of appeal. The appeal, which is pending in the United States Court of Appeals for the Ninth Circuit, has been fully briefed by the parties, and is scheduled for oral argument on October 25, 2024.\n\nBetween October 17, 2018 and March 8, 2021, seven derivative lawsuits were filed in the Delaware Court of Chancery, purportedly on behalf of Tesla, against Mr. 
Musk and the members of Tesla's board of directors, as constituted at relevant times, in relation to statements made and actions connected to a potential going private transaction, with certain of the lawsuits challenging additional Twitter posts by Mr. Musk, among other things. Several of those actions were consolidated, and all have been stayed. In addition to these cases, two derivative lawsuits were filed on October 25, 2018 and February 11, 2019 in the U.S. District Court for the District of Delaware, purportedly on behalf of Tesla, against Mr. Musk and the members of the Tesla board of directors as then constituted. Those cases have also been consolidated and stayed pending resolution of the appeal in the above-referenced consolidated purported stockholder class action.", - "page_start": 26, - "page_end": 26, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## **18. Contributed Equity (continued)**\n\n#### **(d) Santos Executive Share Option Plan**\n\nSAN165 WWW Fins 30/3/05 11:55 AM Page 65\n\nThe Santos Executive Share Option Plan was approved by shareholders at the Annual General Meeting on 15 May 1997 and its continuation, with amendment, approved at the Annual General Meeting on 5 May 2000.\n\nThe Plan provides for the grant of options to subscribe for or purchase ordinary shares in the capital of the Company to eligible executives selected by the Board. Participation will be limited to those executives who, in the opinion of the Board, are able to significantly influence the generation of shareholder wealth. Directors envisage the Plan applying to up to 50 executives.\n\nEach option is a right to acquire one share, subject to adjustment in accordance with the Rules of the Plan. The options entitle the holder to participate in any bonus issue conducted by the Company, upon exercise of the options. The exercise price of each option will be adjusted in the event of a rights issue.\n\nThere are no voting or dividend rights attached to the options. 
There are no voting rights attached to the unissued ordinary shares. Voting rights will be attached to the unissued ordinary shares when the options have been exercised.\n\nThe exercise price of the options and other conditions, including any performance hurdles, will be determined by the Board. No consideration is provided by Executives for the options. The Plan provides for options with a life of up to ten years.\n\nThe ability to exercise the options is generally conditional on the Company achieving a prescribed performance hurdle or exercise condition. To reach the performance hurdle, the Company's Total Shareholder Return (broadly, growth in share price plus dividends reinvested) (\"TSR Growth\") over a minimum three-year period must equal or exceed 10% per annum calculated on a compound basis. If Total Shareholder Return does not reach the performance hurdle at the end of those respective periods, the options may nevertheless be exercisable if the hurdle is subsequently reached within the remaining life of the options. In assessing the performance against the hurdle, the Board may apply on a consistent basis an averaging method over a period of three months to allow for short-term volatility.\n\nThe fair value of shares issued as a result of exercising the options during the reporting period at their issue date is the market price of shares of the Company on the Australian Stock Exchange as at close of trading.\n\nDuring the financial year, the Company granted 330,148 options over unissued shares as set out below. 
The ability to exercise 200,000 of these options is generally conditional on the Company achieving the performance hurdle described above and the balance are subject to the forfeiture provision described in the Senior Executive Long Term Incentive section of the Santos Executive Share Purchase Plan described above.\n\nThe amounts recognised in the financial statements of the Santos Group and the Company in relation to executive share options exercised during the financial year were:\n\n| | Consolidated | | Santos Ltd | |\n| --- | --- | --- | --- | --- |\n| | 2004 | 2003 | 2004 | 2003 |\n| | $million | $million | $million | $million |\n| Issued ordinary share capital | 4.1 | 5.7 | 4.1 | 5.7 |", - "page_start": 66, - "page_end": 66, - "source_file": "ASX_STO_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EMMS_2004.pdf", - "query": "Who is the President of the TV Department of Emmis Communications?", - "target_page": 6, - "target_passage": "Randall Bongarten Television Division President", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "## Outperform\n\nEmmis Communications 2004 Annual Report", - "page_start": 0, - "page_end": 0, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "### ®\n\nemmis communications one emmis plaza 40 monument circle indianapolis, indiana 46204", - "page_start": 7, - "page_end": 7, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## about emmis\n\nEmmis Communications (NASDAQ: EMMS) owns 23 FM and 4 AM domestic radio stations serving the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. 
In addition, Emmis owns 16 television stations, award-winning regional and specialty magazines, a radio network, international radio interests, and ancillary businesses in broadcast sales and publishing.\n\nEmmis was founded in 1980, and the company launched its first radio station, WENS-FM, in July 1981. As Emmis (the Hebrew word for \"truth\") acquired more radio stations across the nation, it established a reputation for sound operations and emerged as a radio industry leader and innovator. Emmis was the first broadcast company to own toprated radio stations in both L.A. and New York, and it pioneered such concepts as the all-sports format.\n\nThe company launched its magazine division in 1988 with the purchase of *Indianapolis Monthly*, and moved into the world of international radio in 1997, when it was awarded a license to operate a national radio network in Hungary. In 1998, Emmis expanded into television by buying six television stations in markets throughout the United States. In the last six years, the company has added properties in each of its divisions.\n\nWith its emphasis on solid operations, integrity, community involvement and fun, the company's culture has been repeatedly lauded by both its employees and its peers. Trade publications have regularly cited the company's leaders as being among the best in the business.\n\nEmmis became a public company in 1994. It maintains its worldwide headquarters in Indianapolis, where the company was founded.\n\n*This annual report contains certain non-GAAP measures. 
For a presentation of the directly comparable GAAP measure and a reconciliation of the non-GAAP measures to the GAAP measures, see the attachment to the back of our Form 10-K in this Annual Report.*", - "page_start": 1, - "page_end": 1, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "#### Corporate Office\n\nOne Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204, 317.266.0100.\n\n#### Business\n\nEmmis Communications (NASDAQ: EMMS) is a diversified media firm with awardwinning radio broadcasting, television broadcasting and magazine publishing operations. Emmis' 23 FM and 4 AM domestic radio stations serve the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. The company's 16 television stations are located in Albuquerque, N.M.; Fort Myers, Fla.; Green Bay, Wis.; Honolulu; Huntington, W.Va.; Mobile, Ala./Pensacola, Fla.; New Orleans; Omaha, Neb.; Orlando, Fla.; Portland, Ore.; Terre Haute, Ind.; Topeka, Kan.; Tucson, Ariz.; and Wichita, Kan. Emmis also publishes *Indianapolis Monthly, Texas Monthly, Cincinnati, Atlanta, Los Angeles* and Country Sampler Group magazines; has a 59.5% interest in Sláger Rádió, a national radio network in Hungary; operates nine FM radio stations serving more than 50 percent of the population in the Flanders region of Belgium; and has ancillary businesses in broadcast sales, publishing and interactive products.\n\n#### Transfer Agent Register\n\nWachovia Bank N.A., Shareholder Services Group, 1525 West W.T. Harris Blvd., 3c3, Charlotte, North Carolina 28288-1153.\n\n#### Annual Meeting\n\nThe Annual Meeting of shareholders will be held at 10 a.m. 
Central Time on Wednesday, June 30, 2004, at Emmis' Corporate office.\n\n#### Form 10-K\n\nA copy of the Annual Report on Form 10-K for the fiscal year ended February 29, 2004, which was filed with the Securities and Exchange Commission, will be sent to shareholders without charge upon written request to Kate Healey, Emmis Communications Corporation, One Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204, or ir@emmis.com.\n\n#### Market and Dividend Information\n\nThe Company's Class A Common Stock is traded in the over-the-counter market and is quoted on the National Association of Securities Dealers Automated Quotation (NASDAQ) National Market System under the symbol EMMS.\n\nThe following table sets forth the high and low bid prices of the Class A Common Stock for the periods indicated. No dividends were paid during any such periods.\n\n| Quarter Ended | High | Low |\n| --- | --- | --- |\n| May 2002 | 31.85 | 26.15 |\n| August 2002 | 30.15 | 11.65 |\n| November 2002 | 24.05 | 14.25 |\n| February 2003 | 24.86 | 17.82 |\n| May 2003 | 21.24 | 14.84 |\n| August 2003 | 23.87 | 18.68 |\n| November 2003 | 24.06 | 18.00 |\n| February 2004 | 28.65 | 22.74 |\n\nOn April 23, 2004, there were approximately 4,841 record holders of the Class A Common Stock and one record holder of the Class B Common Stock.\n\nEmmis intends to retain future earnings for use in its business and does not anticipate paying any dividends on shares of its common stock in the foreseeable future.\n\n#### Executive Officers\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nWalter Z. Berger Executive Vice President, Chief Financial Officer and Treasurer\n\nRandall Bongarten Television Division President\n\nRichard F. Cummings Radio Division President\n\nGary L. Kaseff Executive Vice President, General Counsel\n\nPaul W. 
Fiddick International Division President\n\nMichael Levitan Senior Vice President, Human Resources\n\nGary Thoe Publishing Division President\n\n#### Board of Directors\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nSusan B. Bayh Former Commissioner of the International Joint Commission of the United States and Canada\n\nWalter Z. Berger Executive Vice President, Chief Financial Officer and Treasurer\n\nGary L. Kaseff Executive Vice President, General Counsel\n\nRichard A. Leventhal President and Majority Owner, LMCS, LLC\n\nPeter A. Lund Media consultant and former President of CBS Inc.\n\nGreg A. Nathanson Media consultant and former President of Fox Television Stations and Emmis Television\n\nFrank V. Sica Senior Advisor Soros Fund Management LLC\n\nLawrence B. Sorrel Managing Partner and Co-CEO Tailwind Capital Partners", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## Executive Summary\n\n#### ABOUT ROGERS COMMUNICATIONS INC.\n\n#### Rogers Communications is one of Canada's leading diversified communications and media companies.\n\n(%)\n\nWe provide a broad range of services: wireless and wired voice and data communications, cable television, high-speed Internet, cable telephony, wired telecom and data networking services to consumers and businesses. We also compete in television and radio broadcasting, multi-platform shopping, sports media and entertainment, digital media and consumer, trade and professional publications.\n\nAlmost all of our operations and sales are in Canada. We have a highly skilled and diversified workforce of approximately 28,000 employees. 
Our head-office is in Toronto, Ontario and we have numerous offices across Canada.\n\n#### FOUR BUSINESS SEGMENTS\n\nWe report our results of operations in four segments.\n\n| Wireless | Wireless telecommunications operations |\n| --- | --- |\n| | for consumers and businesses |\n| Cable | Cable telecommunications operations, |\n| | including cable television, Internet and |\n| | cable telephony for |\n| | Canadian consumers and businesses |\n| Business Solutions | Network connectivity through our fibre |\n| | network assets to support a range of |\n| | voice, data, networking, data centre and |\n| | cloud-based services for medium and |\n| | large Canadian businesses, governments, |\n| | and other telecommunications providers |\n| Media | A diversified portfolio of media |\n| | properties, including television and radio |\n| | broadcasting, digital media, multi |\n| | platform shopping, publishing and sports |\n| | media and entertainment |", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## you can count on emmis to continue to do\n\n#### Dear Shareholders,\n\nOn our year-end conference call, I said that last year was the best in Emmis Communications' history. And while that might have sounded like the usual Wall Street hyperbole – like any other CEO bragging about his company's performance – the difference is, I believed it. And I still do.\n\nBut I've been in this business long enough to know two things for sure: What I believe is not as important as what I can prove, and what we did last year is only meaningful if it reflects on how we will do in the coming year. The good news is, Emmis does have the results to back up my high praise, and what we did to perform last year does directly relate to how we'll perform in the year ahead.\n\n#### **The best year**\n\nThe bottom line is this: Emmis Communications turned in a remarkable performance last year. 
Again and again, and by a number of measures, we outperformed our peers, our markets and our own solid track record.\n\nAnd we did this in a year that was challenging in just about every way. The economy was unstable, public companies came under continuing scrutiny, indecency issues hounded broadcasters, competition for tight ad dollars increased and technology continued to reshape the media world.\n\nBut our people refused to be slowed by those challenges. Instead, they worked through them. They innovated, hustled and focused. And they produced.\n\nOur radio division's revenue growth led our markets and the industry – in our fiscal year, our group was up 4.5 percent while our markets were up 2.7 percent and the industry only 1 percent. Based on this kind of performance, we have consistently ranked among the nation's leaders in per-station revenue, and we continue to produce top-rated programming in markets across the nation.\n\nOur TV performance was even more impressive. The Emmis television group's revenues were up 0.5 percent in calendar 2003, a year when our markets saw a 2.3 percent decrease in revenues, and the industry experienced a 4.7 percent revenue decline. This industry-leading result made us one of the few groups in the nation to post positive growth. In addition, we gained revenue share at 11 of our 13 measured stations and held the line on expenses, giving us a 1.2 percent increase in fiscal-year cash flow.\n\nOur publishing and international divisions also posted strong results. In a tough publishing market, our magazines boosted their division's revenues by 4.6 percent over last year and increased cash flow by 3.3 percent. Our international division turned in a revenue increase of 27 percent and a cash flow increase of 31 percent.\n\nIn addition to boosting performance in our divisions, we honed our corporate operations by continuing to build one of the most adept and hardest-working corporate groups in American media. 
With this team in place, we've brought our leverage and cost of capital down to more manageable levels, found ways to combat the continually increasing costs of health insurance and, in a truly top-notch effort, smoothly integrated our new Austin radio properties – in just under a year as a part of Emmis, the Austin properties are enjoying significant ratings and revenue increases.\n\nOf course, for you, the real bottom line on our performance is its impact on your investment. I'm proud to say that we saw a 27 percent increase in our share price over the course of the last fiscal year – we ended fiscal '03 at 19.79, and closed the book on fiscal '04 at 25.17.\n\n#### **How we did it**\n\nOperationally, we were on top of our game last year. However, as I said, I know that the past year's performance really only matters if it reflects on what we'll do in the coming year. The good news is, it does. We performed at these high levels not by doing something unusual, but by operating the way Emmis has always operated, and the way we always will.\n\nFirst of all, we focus on assembling and maintaining the best teams in our markets. We have traditionally had the top salespeople, creative and technical professionals, news staffs, managers and support staff in every city where we operate. Their peers turn to them for industry leadership, honor them with awards and copy them at every opportunity. We invest in these people, giving them industry-leading benefits packages, great opportunities and the tools they need to succeed. This has always been a hallmark of Emmis, and it won't change.", - "page_start": 3, - "page_end": 3, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## what it has always done: outperform.\n\nIn addition, we commit ourselves to creating the best content in our markets. 
Our magazines routinely dominate their industry awards ceremonies – last year, *Texas Monthly* won a coveted National Magazine Award, and Emmis publications claimed more than half of the awards at the City and Regional Magazine competition. Our radio stations feature some of the industry's most popular personalities – in 2003, Emmis people and stations were awarded three Marconi Radio Awards. And our television operations are regularly honored by journalism organizations for their news gathering and community service. In short, we provide our markets with reliable, high-quality content – content that helps us assemble the audiences our advertisers want to reach.\n\nWe then generate revenue by overallocating to sales. We give our teams well-developed strategies, clearly defined brands and solid products. We build bigger, better sales forces and put a greater emphasis on local dollars than our competitors. We hire aggressive managers, set ambitious goals and then watch our people work harder and smarter than anyone else.\n\nWe also seize the right opportunities and make the most of them. As the cost of buying radio properties has gone through the roof, we have been careful about buying. However, when we had a chance to acquire the LBJ stations in Austin, we knew it was the right fit: good stations, a tremendous heritage and a great culture, all with an opportunity for growth. And we've already built on that group's track record – since we bought them, we've reformatted one station and quickly sent it to No. 1 in the market, and we've pushed revenues up 9 percent for the entire group.\n\nFinally, we innovate. Why has Emmis, traditionally a radio company, become the company to emulate in TV? Because we approached TV in a way it's never been approached before. Why do we operate leading hip-hop stations in markets across the nation? Because we pioneered the concept. Why have we created a new \"Music with Class\" format in St. Louis' Red 104.1? 
Because we believe we see a new opportunity. We know that successful companies don't follow the pack. They lead it, and that's what we'll always do.\n\n#### **The year ahead**\n\nThat last point – innovation – is an important one, especially for the future of Emmis, because we are planning something that could change the face of American TV and once again demonstrate that Emmis is a company that leads the way.\n\nForty years ago, Americans began taking down their TV antennas and severing broadcasters' direct link to television audiences. Since then, the cable companies—the middlemen who replaced us—have created more than $300 billion of value for themselves. However, changes in technology have given broadcasters the ability to provide the American public with the most popular TV channels, without the middlemen and at a more reasonable price.\n\nWe are developing an innovative model that will leverage that technology to get broadcast companies back into the game. I believe it has the potential to revolutionize the television industry. I also believe it will add substantial value to your investment.\n\nWe unveiled this concept at the National Association of Broadcasters meeting in April. I am proud to say that 11 other television companies joined us at that meeting to express their support for what we're calling the Broadcasters' Initiative, and more are signing on each week. Once again, Emmis has leveraged innovation to take a leading role in our industries.\n\nWe'll continue to use innovation to push us forward. Meanwhile, we'll also build and maintain the best teams, produce the best media content, outhustle and outsell our competitors, seize the best opportunities and operate this company better than any other.\n\nIn other words, you can count on Emmis to continue to do what it has always done: Outperform.\n\nThank you for your belief and investment in Emmis.\n\nJeffrey H. 
Smulyan chairman & ceo emmis communications", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "#### **Letter to Shareholders**\n\n# *Dear Fellow Shareholders:*\n\nI am pleased to report that 2004 was a very good year for Republic Services, Inc. Our team met and exceeded the important financial and management goals we told you about here a year ago, and we plan to work just as hard and accomplish just as much in the coming year.\n\nRepublic is strengthening its competitive position among the leading waste services providers every day. As always, we are doing so by offering our customers cost-effective and safe waste collection, reliable recycling, and environmentally protective disposal options.\n\nI am proud of our team and what they accomplished. The results tell you just how well they did.\n\nare exceeded.\n\n**The Year Ahead**\n\nimpressive results in 2005.\n\n2005 and beyond.\n\n**James E. O'Connor**\n\nMarch 31, 2005\n\n*Chairman and Chief Executive Officer*\n\nSincerely,\n\nmade to our people and service communities.\n\nOur decentralized structure is an advantage. It gives us flexibility and speed in reacting to local conditions. Our division leaders are well-positioned to respond immediately to the needs, changes and developments among their customers. We in the corporate office set the goals, establish the discipline, provide financial resources, management and operational support, but it is in our local divisions where customer relationships are established and the work is done. Our community-based focus forges strong local relationships and ensures that, at the customer level, the highest expectations\n\n**Board of Directors**\n\nJames E. O'Connor 1 *Chairman & Chief Executive Officer*\n\nJames E. O'Connor\n\n**Officers**\n\nDavid A. Barclay\n\nTod C. Holmes\n\nLee V. Twyford\n\nBrian A. Bales\n\nTim M. Benter\n\nJerry S. Clark\n\nPaul J. Connealy *Vice President, Tax* Matthew E. Davies\n\nArthur J. 
Dudzinski\n\nKenneth M. Baylor\n\n*Vice President & Controller*\n\nMichael J. Cordesman\n\nW. Lee Nutter 2, 3, 4 *Chairman, Compensation Committee Chairman, President & Chief Executive Officer Rayonier, Inc. (a forest products company)*\n\n*Chairman & Chief Executive Officer*\n\n*President & Chief Operating Officer* \n\n*Senior Vice President & General Counsel*\n\n*Vice President, Corporate Development*\n\n*Vice President, Employee & Labor Relations*\n\n*Vice President & Associate General Counsel*\n\n*Regional Vice President - Western Region*\n\n*Vice President, Environmental Engineering & Compliance*\n\n*Senior Vice President & Chief Financial Officer*\n\n*Senior Vice President & Chief Information Officer*\n\nWilliam C. Flower\n\nAllan C. Sorensen 2, 3, 4 *Presiding Director President & Chief Executive Officer Interim Health Care, Inc. (a provider of temporary labor to the healthcare industry)*\n\nHarris W. Hudson 1 *Vice Chairman of the Board*\n\n1 *Member, Executive Committee* • 2 *Member, Audit Committee* • 3 *Member, Compensation Committee* • 4 *Member, Nominating and Corporate Governance Committee*\n\nRamon A. Rodriguez 2, 3, 4 *Chairman, Audit Committee President & Chief Executive Officer Madsen, Sapp, Mena, Rodriguez & Co. (a public accounting firm)*\n\nMatthew D. Katz\n\nRonald R. Krall\n\nEdward A. Lang III\n\nThomas E. Miller\n\nCraig J. Nichols\n\nCharles F. Serianni\n\nRobert N. Shepard\n\nKevin C. Walbridge\n\nGerard W. Wickett\n\nGary L. Sova\n\n*Vice President, Communications*\n\n*Vice President & Associate General Counsel*\n\nMichael W. Wickham 2, 3, 4 *Retired Chairman, President & Chief Executive Officer, Roadway Corporation*\n\nJohn W. 
Croghan 2, 3, 4 *Chairman, Nominating and Corporate Governance Committee Chairman, Rail-Splitter Capital Management, LLC (an investment management firm)*\n\n*Regional Vice President - Southwest Region*\n\n*Vice President & Chief Accounting Officer*\n\n*Regional Vice President - Southern Region*\n\n*Regional Vice President - Central Region*\n\n*Vice President, Purchasing & Maintenance*\n\n*Regional Vice President - Eastern Region*\n\n*Vice President, Finance & Treasurer*\n\n*Vice President, Human Resources*\n\n*Vice President, Marketing & Sales*\n\nUltimately, all the things we do as a Company are aimed at increasing value for our shareholders. We know the importance of strong and predictable cash flow in meeting our shareholders' expectations. Over time, our cash flow has proven to be a strong indicator of the quality of our earnings. Last year's record free cash flow enabled us to reinvest in our business, acquire new companies, repurchase $266 million of our common stock and double the quarterly dividend to $0.12 per share. The plan this year is similar. We will continue to use our strong free cash flow to grow and strengthen the Company by building our customer base through internal growth and strategic acquisitions. Additionally, we plan to repurchase Republic stock worth up to $275 million and pay a regular quarterly cash dividend to\n\nWe are focused on improving our service and strengthening relationships with our customers. Exceptional service allows us to build loyalty and create lasting bonds with those we serve. We will continue to train and develop our people, too, so they may grow as we grow as a Company. And we will continue to focus on improving the safety of our operations, an important commitment we have\n\nThe last year was indeed an outstanding one for Republic. Our goal is to continue to deliver\n\nI am both privileged and grateful to have the opportunity to lead a team of such exceptional people. 
Everyday, I grow more impressed with the experience, knowledge, loyalty and hard work they\n\nOn behalf of all of us at Republic, I want to thank our shareholders for the trust they have placed in us. We are a Company that cares about you, and we pledge to continue working hard to serve you in\n\ncontribute. Republic truly has one of the best management and operations teams in America.\n\nour shareholders. We believe these steps will increase shareholder value.\n\nRevenue in 2004 grew 7.6 percent to $2.7 billion, a record. The increases came largely from new municipal contracts and improved pricing. At the same time, we benefited from our presence in highgrowth markets, especially those in the rapidly expanding Sunbelt states.\n\nWe met last year's guidance. Net income per diluted share rose 15 percent to $1.53. Our revenue enhancement and cost reduction efforts produced results. We generated a record level of free cash flow - $388 million to be exact. Republic continues to generate strong and predictable levels of cash flow. As in the past year, we will concentrate on free cash flow and use it for acquisitions, reinvestment, repurchases of our stock and regular quarterly cash dividends.\n\nAs I thought about these achievements, I realized they result from the environment that we work to create for both our customers and our people. We care about our customers and the communities we serve. About our people. About the environment. And, of course, we care about you -- our shareholders. Every year we adopt a theme that captures our Company and our values. Our theme for 2005 is \"Republic Services…A Company that cares\".\n\nOur 13,400 dedicated people worked hard last year to create real value. We improved the way we deliver our services, increasing our efficiency in routing our collection trucks. We improved the way we construct disposal cells at numerous landfills, lowering costs. We worked with our vendors to control prices. 
And, we communicated to our customers the value of the services we offer. This year will be no different. We will continue to concentrate on these fundamentals.\n\nRepublic's future is bright. We are mindful of our mission. We know our business exists to ease the burden of managing society's waste. It's not a glamorous business, but it is an essential one, and we take this responsibility very seriously.\n\nAt the end of the year, Republic had 140 collection companies, 58 landfills, 96 transfer stations and 35 recycling facilities in 22 states. These resources give us many opportunities to listen to our customers, anticipate their needs and quickly respond to them. Each customer faces challenges unique to his or her business and community. Our goal is to remain flexible and to tailor our services to each customer.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## **SENIOR EXECUTIVE OFFICERS** OF ROGERS COMMUNICATIONS INC.\n\nAS OF FEBRUARY 11, 2014\n\n#### SENIOR **EXECUTIVE OFFICERS**\n\n- 14 **Guy Laurence** President and Chief Executive Officer\n- 15 **Robert F. Berner** Executive Vice President, Network and Chief Technology Officer\n- 16 **Robert W. Bruce** President, Communications Division\n- 17 **Linda P. Jojo** Executive Vice President, Information Technology and Chief Information Officer\n- 18 **Philip B. Lind, CM** Executive Vice President, Regulatory and Vice Chairman\n- 19 **David P. Miller** Senior Vice President, Legal and General Counsel\n- 20 **Keith W. Pelley** President, Rogers Media\n- 21 **Jim M. Reid** Senior Vice President, Human Resources and Chief Human Resources Officer\n- 22 **Edward S. Rogers** Deputy Chairman and Executive Vice President, Emerging Business, Corporate Development\n- 23 **Melinda M. Rogers** Senior Vice President, Strategy and Development\n- 24 **Anthony Staffieri, FCPA, FCA** Executive Vice President and Chief Financial Officer\n- 25 **Terrie L. 
Tweddle** Vice President, Corporate Communications\n\n#### For detailed biographical information of Rogers Executive Officers, go to **rogers.com/investors**", - "page_start": 24, - "page_end": 24, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Our decentralized structure is an advantage. It gives us flexibility and speed in reacting to local conditions. Our division leaders are well-positioned to respond immediately to the needs, changes and developments among their customers. We in the corporate office set the goals, establish the discipline, provide financial resources, management and operational support, but it is in our local divisions where customer relationships are established and the work is done. Our community-based focus forges strong local relationships and ensures that, at the customer level, the highest expectations are exceeded.\n\n**Board of Directors**\n\nJames E. O'Connor 1 *Chairman & Chief Executive Officer*\n\nJames E. O'Connor\n\n**Officers**\n\nDavid A. Barclay\n\nTod C. Holmes\n\nLee V. Twyford\n\nBrian A. Bales\n\nTim M. Benter\n\nJerry S. Clark\n\nPaul J. Connealy *Vice President, Tax* Matthew E. Davies\n\nArthur J. Dudzinski\n\nKenneth M. Baylor\n\n*Vice President & Controller*\n\nMichael J. Cordesman\n\nW. Lee Nutter 2, 3, 4 *Chairman, Compensation Committee Chairman, President & Chief Executive Officer Rayonier, Inc. (a forest products company)*\n\n*Chairman & Chief Executive Officer*\n\n*President & Chief Operating Officer* \n\n*Senior Vice President & General Counsel*\n\n*Vice President, Corporate Development*\n\n*Vice President, Employee & Labor Relations*\n\n*Vice President & Associate General Counsel*\n\n*Regional Vice President - Western Region*\n\n*Vice President, Environmental Engineering & Compliance*\n\n*Senior Vice President & Chief Financial Officer*\n\n*Senior Vice President & Chief Information Officer*\n\nWilliam C. Flower\n\nAllan C. 
Sorensen 2, 3, 4 *Presiding Director President & Chief Executive Officer Interim Health Care, Inc. (a provider of temporary labor to the healthcare industry)*\n\nHarris W. Hudson 1 *Vice Chairman of the Board*\n\n1 *Member, Executive Committee* • 2 *Member, Audit Committee* • 3 *Member, Compensation Committee* • 4 *Member, Nominating and Corporate Governance Committee*\n\nRamon A. Rodriguez 2, 3, 4 *Chairman, Audit Committee President & Chief Executive Officer Madsen, Sapp, Mena, Rodriguez & Co. (a public accounting firm)*\n\nMatthew D. Katz\n\nRonald R. Krall\n\nEdward A. Lang III\n\nThomas E. Miller\n\nCraig J. Nichols\n\nCharles F. Serianni\n\nRobert N. Shepard\n\nKevin C. Walbridge\n\nGerard W. Wickett\n\nGary L. Sova\n\n*Vice President, Communications*\n\n*Vice President & Associate General Counsel*\n\nMichael W. Wickham 2, 3, 4 *Retired Chairman, President & Chief Executive Officer, Roadway Corporation*\n\nJohn W. Croghan 2, 3, 4 *Chairman, Nominating and Corporate Governance Committee Chairman, Rail-Splitter Capital Management, LLC (an investment management firm)*\n\n*Regional Vice President - Southwest Region*\n\n*Vice President & Chief Accounting Officer*\n\n*Regional Vice President - Southern Region*\n\n*Regional Vice President - Central Region*\n\n*Vice President, Purchasing & Maintenance*\n\n*Regional Vice President - Eastern Region*\n\n*Vice President, Finance & Treasurer*\n\n*Vice President, Human Resources*\n\n*Vice President, Marketing & Sales*\n\nUltimately, all the things we do as a Company are aimed at increasing value for our shareholders. We know the importance of strong and predictable cash flow in meeting our shareholders' expectations. Over time, our cash flow has proven to be a strong indicator of the quality of our earnings. Last year's record free cash flow enabled us to reinvest in our business, acquire new companies, repurchase $266 million of our common stock and double the quarterly dividend to $0.12 per share. 
The plan this year is similar. We will continue to use our strong free cash flow to grow and strengthen the Company by building our customer base through internal growth and strategic acquisitions. Additionally, we plan to repurchase Republic stock worth up to $275 million and pay a regular quarterly cash dividend to our shareholders. We believe these steps will increase shareholder value.\n\n#### **The Year Ahead**\n\n*Dear Fellow Shareholders:*\n\nI am pleased to report that 2004 was a very good year for Republic Services, Inc. Our team met and exceeded the important financial and management goals we told you about here a year ago, and we plan to work just as hard and\n\nRepublic is strengthening its competitive position among the leading waste services providers every day. As always, we are doing so by offering our customers cost-effective and safe waste collection, reliable recycling, and\n\nI am proud of our team and what they accomplished. The\n\nfor 2005 is \"Republic Services…A Company that cares\".\n\ngrowth markets, especially those in the rapidly expanding Sunbelt states.\n\nreinvestment, repurchases of our stock and regular quarterly cash dividends.\n\nwill be no different. We will continue to concentrate on these fundamentals.\n\nRevenue in 2004 grew 7.6 percent to $2.7 billion, a record. The increases came largely from new municipal contracts and improved pricing. At the same time, we benefited from our presence in high-\n\nWe met last year's guidance. Net income per diluted share rose 15 percent to $1.53. Our revenue enhancement and cost reduction efforts produced results. We generated a record level of free cash flow - $388 million to be exact. Republic continues to generate strong and predictable levels of cash flow. As in the past year, we will concentrate on free cash flow and use it for acquisitions,\n\nAs I thought about these achievements, I realized they result from the environment that we work to create for both our customers and our people. 
We care about our customers and the communities we serve. About our people. About the environment. And, of course, we care about you -- our shareholders. Every year we adopt a theme that captures our Company and our values. Our theme\n\nOur 13,400 dedicated people worked hard last year to create real value. We improved the way we deliver our services, increasing our efficiency in routing our collection trucks. We improved the way we construct disposal cells at numerous landfills, lowering costs. We worked with our vendors to control prices. And, we communicated to our customers the value of the services we offer. This year\n\nRepublic's future is bright. We are mindful of our mission. We know our business exists to ease the burden of managing society's waste. It's not a glamorous business, but it is an essential one, and we\n\nAt the end of the year, Republic had 140 collection companies, 58 landfills, 96 transfer stations and 35 recycling facilities in 22 states. These resources give us many opportunities to listen to our customers, anticipate their needs and quickly respond to them. Each customer faces challenges unique to his or her business and community. Our goal is to remain flexible and to tailor our services to each\n\naccomplish just as much in the coming year.\n\n**Letter to Shareholders**\n\nenvironmentally protective disposal options.\n\nresults tell you just how well they did.\n\ntake this responsibility very seriously.\n\ncustomer.\n\nWe are focused on improving our service and strengthening relationships with our customers. Exceptional service allows us to build loyalty and create lasting bonds with those we serve. We will continue to train and develop our people, too, so they may grow as we grow as a Company. And we will continue to focus on improving the safety of our operations, an important commitment we have made to our people and service communities.\n\nThe last year was indeed an outstanding one for Republic. 
Our goal is to continue to deliver impressive results in 2005.\n\nI am both privileged and grateful to have the opportunity to lead a team of such exceptional people. Everyday, I grow more impressed with the experience, knowledge, loyalty and hard work they contribute. Republic truly has one of the best management and operations teams in America.\n\nOn behalf of all of us at Republic, I want to thank our shareholders for the trust they have placed in us. We are a Company that cares about you, and we pledge to continue working hard to serve you in 2005 and beyond.\n\nSincerely,\n\n**James E. O'Connor** *Chairman and Chief Executive Officer* March 31, 2005", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_RSG_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EMMS_2004.pdf", - "query": "Does the radio station 93.7 in Austin belong to Emmis Communication?", - "target_page": 7, - "target_passage": "KLBJ-FM (93.7), Album Oriented Rock", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "## about emmis\n\nEmmis Communications (NASDAQ: EMMS) owns 23 FM and 4 AM domestic radio stations serving the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. In addition, Emmis owns 16 television stations, award-winning regional and specialty magazines, a radio network, international radio interests, and ancillary businesses in broadcast sales and publishing.\n\nEmmis was founded in 1980, and the company launched its first radio station, WENS-FM, in July 1981. As Emmis (the Hebrew word for \"truth\") acquired more radio stations across the nation, it established a reputation for sound operations and emerged as a radio industry leader and innovator. Emmis was the first broadcast company to own toprated radio stations in both L.A. 
and New York, and it pioneered such concepts as the all-sports format.\n\nThe company launched its magazine division in 1988 with the purchase of *Indianapolis Monthly*, and moved into the world of international radio in 1997, when it was awarded a license to operate a national radio network in Hungary. In 1998, Emmis expanded into television by buying six television stations in markets throughout the United States. In the last six years, the company has added properties in each of its divisions.\n\nWith its emphasis on solid operations, integrity, community involvement and fun, the company's culture has been repeatedly lauded by both its employees and its peers. Trade publications have regularly cited the company's leaders as being among the best in the business.\n\nEmmis became a public company in 1994. It maintains its worldwide headquarters in Indianapolis, where the company was founded.\n\n*This annual report contains certain non-GAAP measures. For a presentation of the directly comparable GAAP measure and a reconciliation of the non-GAAP measures to the GAAP measures, see the attachment to the back of our Form 10-K in this Annual Report.*", - "page_start": 1, - "page_end": 1, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "#### Corporate Office\n\nOne Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204, 317.266.0100.\n\n#### Business\n\nEmmis Communications (NASDAQ: EMMS) is a diversified media firm with awardwinning radio broadcasting, television broadcasting and magazine publishing operations. Emmis' 23 FM and 4 AM domestic radio stations serve the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. 
The company's 16 television stations are located in Albuquerque, N.M.; Fort Myers, Fla.; Green Bay, Wis.; Honolulu; Huntington, W.Va.; Mobile, Ala./Pensacola, Fla.; New Orleans; Omaha, Neb.; Orlando, Fla.; Portland, Ore.; Terre Haute, Ind.; Topeka, Kan.; Tucson, Ariz.; and Wichita, Kan. Emmis also publishes *Indianapolis Monthly, Texas Monthly, Cincinnati, Atlanta, Los Angeles* and Country Sampler Group magazines; has a 59.5% interest in Sláger Rádió, a national radio network in Hungary; operates nine FM radio stations serving more than 50 percent of the population in the Flanders region of Belgium; and has ancillary businesses in broadcast sales, publishing and interactive products.\n\n#### Transfer Agent Register\n\nWachovia Bank N.A., Shareholder Services Group, 1525 West W.T. Harris Blvd., 3c3, Charlotte, North Carolina 28288-1153.\n\n#### Annual Meeting\n\nThe Annual Meeting of shareholders will be held at 10 a.m. Central Time on Wednesday, June 30, 2004, at Emmis' Corporate office.\n\n#### Form 10-K\n\nA copy of the Annual Report on Form 10-K for the fiscal year ended February 29, 2004, which was filed with the Securities and Exchange Commission, will be sent to shareholders without charge upon written request to Kate Healey, Emmis Communications Corporation, One Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204, or ir@emmis.com.\n\n#### Market and Dividend Information\n\nThe Company's Class A Common Stock is traded in the over-the-counter market and is quoted on the National Association of Securities Dealers Automated Quotation (NASDAQ) National Market System under the symbol EMMS.\n\nThe following table sets forth the high and low bid prices of the Class A Common Stock for the periods indicated. 
No dividends were paid during any such periods.\n\n| Quarter Ended | High | Low |\n| --- | --- | --- |\n| May 2002 | 31.85 | 26.15 |\n| August 2002 | 30.15 | 11.65 |\n| November 2002 | 24.05 | 14.25 |\n| February 2003 | 24.86 | 17.82 |\n| May 2003 | 21.24 | 14.84 |\n| August 2003 | 23.87 | 18.68 |\n| November 2003 | 24.06 | 18.00 |\n| February 2004 | 28.65 | 22.74 |\n\nOn April 23, 2004, there were approximately 4,841 record holders of the Class A Common Stock and one record holder of the Class B Common Stock.\n\nEmmis intends to retain future earnings for use in its business and does not anticipate paying any dividends on shares of its common stock in the foreseeable future.\n\n#### Executive Officers\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nWalter Z. Berger Executive Vice President, Chief Financial Officer and Treasurer\n\nRandall Bongarten Television Division President\n\nRichard F. Cummings Radio Division President\n\nGary L. Kaseff Executive Vice President, General Counsel\n\nPaul W. Fiddick International Division President\n\nMichael Levitan Senior Vice President, Human Resources\n\nGary Thoe Publishing Division President\n\n#### Board of Directors\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nSusan B. Bayh Former Commissioner of the International Joint Commission of the United States and Canada\n\nWalter Z. Berger Executive Vice President, Chief Financial Officer and Treasurer\n\nGary L. Kaseff Executive Vice President, General Counsel\n\nRichard A. Leventhal President and Majority Owner, LMCS, LLC\n\nPeter A. Lund Media consultant and former President of CBS Inc.\n\nGreg A. Nathanson Media consultant and former President of Fox Television Stations and Emmis Television\n\nFrank V. Sica Senior Advisor Soros Fund Management LLC\n\nLawrence B. 
Sorrel Managing Partner and Co-CEO Tailwind Capital Partners", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "### ®\n\nemmis communications one emmis plaza 40 monument circle indianapolis, indiana 46204", - "page_start": 7, - "page_end": 7, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## Outperform\n\nEmmis Communications 2004 Annual Report", - "page_start": 0, - "page_end": 0, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## what it has always done: outperform.\n\nIn addition, we commit ourselves to creating the best content in our markets. Our magazines routinely dominate their industry awards ceremonies – last year, *Texas Monthly* won a coveted National Magazine Award, and Emmis publications claimed more than half of the awards at the City and Regional Magazine competition. Our radio stations feature some of the industry's most popular personalities – in 2003, Emmis people and stations were awarded three Marconi Radio Awards. And our television operations are regularly honored by journalism organizations for their news gathering and community service. In short, we provide our markets with reliable, high-quality content – content that helps us assemble the audiences our advertisers want to reach.\n\nWe then generate revenue by overallocating to sales. We give our teams well-developed strategies, clearly defined brands and solid products. We build bigger, better sales forces and put a greater emphasis on local dollars than our competitors. We hire aggressive managers, set ambitious goals and then watch our people work harder and smarter than anyone else.\n\nWe also seize the right opportunities and make the most of them. As the cost of buying radio properties has gone through the roof, we have been careful about buying. 
However, when we had a chance to acquire the LBJ stations in Austin, we knew it was the right fit: good stations, a tremendous heritage and a great culture, all with an opportunity for growth. And we've already built on that group's track record – since we bought them, we've reformatted one station and quickly sent it to No. 1 in the market, and we've pushed revenues up 9 percent for the entire group.\n\nFinally, we innovate. Why has Emmis, traditionally a radio company, become the company to emulate in TV? Because we approached TV in a way it's never been approached before. Why do we operate leading hip-hop stations in markets across the nation? Because we pioneered the concept. Why have we created a new \"Music with Class\" format in St. Louis' Red 104.1? Because we believe we see a new opportunity. We know that successful companies don't follow the pack. They lead it, and that's what we'll always do.\n\n#### **The year ahead**\n\nThat last point – innovation – is an important one, especially for the future of Emmis, because we are planning something that could change the face of American TV and once again demonstrate that Emmis is a company that leads the way.\n\nForty years ago, Americans began taking down their TV antennas and severing broadcasters' direct link to television audiences. Since then, the cable companies—the middlemen who replaced us—have created more than $300 billion of value for themselves. However, changes in technology have given broadcasters the ability to provide the American public with the most popular TV channels, without the middlemen and at a more reasonable price.\n\nWe are developing an innovative model that will leverage that technology to get broadcast companies back into the game. I believe it has the potential to revolutionize the television industry. I also believe it will add substantial value to your investment.\n\nWe unveiled this concept at the National Association of Broadcasters meeting in April. 
I am proud to say that 11 other television companies joined us at that meeting to express their support for what we're calling the Broadcasters' Initiative, and more are signing on each week. Once again, Emmis has leveraged innovation to take a leading role in our industries.\n\nWe'll continue to use innovation to push us forward. Meanwhile, we'll also build and maintain the best teams, produce the best media content, outhustle and outsell our competitors, seize the best opportunities and operate this company better than any other.\n\nIn other words, you can count on Emmis to continue to do what it has always done: Outperform.\n\nThank you for your belief and investment in Emmis.\n\nJeffrey H. Smulyan chairman & ceo emmis communications", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "#### emmis entities\n\n#### **RADIO**\n\n**Austin** KDHT-FM (93.3), Rhythmic CHR KEYI-FM (103.5), Oldies KGSR-FM (107.1), Adult Alternative KLBJ-AM (590), News/Talk KLBJ-FM (93.7), Album Oriented Rock KROX-FM (101.5), Alternative Rock **Chicago** WKQX-FM (101.1), Alternative Rock **Indianapolis** WENS-FM (97.1), Adult Contemporary WIBC-AM (1070), News/Talk/Sports WNOU-FM (93.1), CHR WYXB-FM (105.7), Soft Adult Contemporary Network Indiana, Statewide news network **Los Angeles** KPWR-FM (105.9), Hip-Hop/R&B KZLA-FM (93.9), Country **New York** WQCD-FM (101.9), Smooth Jazz WQHT-FM (97.7), Hip-Hop WRKS-FM(98.7), Classic Soul/Today's R&B **Phoenix** KKFR-FM(92.3), Rhythmic CHR KKLT-FM (98.7), Adult Contemporary KMVP-AM (860), Sports\n\n#### **St. 
Louis**\n\nKFTK-FM (97.1), Talk KIHT-FM (96.3), Classic Hits KPNT-FM (105.7), Alternative Rock KSHE-FM (94.7), Album Oriented Rock WRDA-FM (104.1), New Standards **Terre Haute** WTHI-FM (99.9), Country WWVR-FM (105.5), Classic Rock\n\n#### **TELEVISION**\n\n- Albuquerque, N.M., KRQE-TV (Channel 13), CBS programming/local news Fort Myers, Fla., WFTX-TV (Channel 4), Fox programming/local news Green Bay, Wis., WLUK-TV (Channel 11), Fox programming/local news Honolulu, KHON-TV (Channel 2), Fox programming/local news Honolulu, KGMB-TV (Channel 9), CBS programming/local news Huntington/Charleston, W.Va., WSAZ-TV (Channel 3), NBC programming/local news Mobile, Ala./Pensacola, Fla., WALA-TV (Channel 10), Fox programming/local news Mobile, Ala./Pensacola, Fla., WBPG-TV (Channel 55), WB programming New Orleans, WVUE-TV (Channel 8), Fox programming/local news Omaha, Neb., KMTV-TV (Channel 3), CBS programming/local news\n#### Orlando, Fla., WKCF-TV (Channel 18), WB programming Portland, Ore., KOIN-TV (Channel 6), CBS programming/local news Terre Haute, Ind., WTHI-TV (Channel 10), CBS programming/local news Topeka, Kan., KSNT-TV (Channel 27), NBC programming/local news Tucson, Ariz., KGUN-TV (Channel 9), ABC programming/local news Wichita, Kan., KSNW-TV (Channel 3), NBC programming/local news\n\n#### **PUBLISHING**\n\n- *Atlanta Country Sampler Cincinnati Indianapolis Monthly Los Angeles Texas Monthly*\n#### **INTERNATIONAL** Hungary, Sláger Rádió, Classic Rock/local programming Belgium, nine stations serving the Flanders region\n\n**RELATED BUSINESSES** Emmis Books Emmis Interactive RDS\n\nKTAR-AM (620), News/Talk/Sports", - "page_start": 6, - "page_end": 6, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## you can count on emmis to continue to do\n\n#### Dear Shareholders,\n\nOn our year-end conference call, I said that last year was the best in Emmis Communications' history. 
And while that might have sounded like the usual Wall Street hyperbole – like any other CEO bragging about his company's performance – the difference is, I believed it. And I still do.\n\nBut I've been in this business long enough to know two things for sure: What I believe is not as important as what I can prove, and what we did last year is only meaningful if it reflects on how we will do in the coming year. The good news is, Emmis does have the results to back up my high praise, and what we did to perform last year does directly relate to how we'll perform in the year ahead.\n\n#### **The best year**\n\nThe bottom line is this: Emmis Communications turned in a remarkable performance last year. Again and again, and by a number of measures, we outperformed our peers, our markets and our own solid track record.\n\nAnd we did this in a year that was challenging in just about every way. The economy was unstable, public companies came under continuing scrutiny, indecency issues hounded broadcasters, competition for tight ad dollars increased and technology continued to reshape the media world.\n\nBut our people refused to be slowed by those challenges. Instead, they worked through them. They innovated, hustled and focused. And they produced.\n\nOur radio division's revenue growth led our markets and the industry – in our fiscal year, our group was up 4.5 percent while our markets were up 2.7 percent and the industry only 1 percent. Based on this kind of performance, we have consistently ranked among the nation's leaders in per-station revenue, and we continue to produce top-rated programming in markets across the nation.\n\nOur TV performance was even more impressive. The Emmis television group's revenues were up 0.5 percent in calendar 2003, a year when our markets saw a 2.3 percent decrease in revenues, and the industry experienced a 4.7 percent revenue decline. This industry-leading result made us one of the few groups in the nation to post positive growth. 
In addition, we gained revenue share at 11 of our 13 measured stations and held the line on expenses, giving us a 1.2 percent increase in fiscal-year cash flow.\n\nOur publishing and international divisions also posted strong results. In a tough publishing market, our magazines boosted their division's revenues by 4.6 percent over last year and increased cash flow by 3.3 percent. Our international division turned in a revenue increase of 27 percent and a cash flow increase of 31 percent.\n\nIn addition to boosting performance in our divisions, we honed our corporate operations by continuing to build one of the most adept and hardest-working corporate groups in American media. With this team in place, we've brought our leverage and cost of capital down to more manageable levels, found ways to combat the continually increasing costs of health insurance and, in a truly top-notch effort, smoothly integrated our new Austin radio properties – in just under a year as a part of Emmis, the Austin properties are enjoying significant ratings and revenue increases.\n\nOf course, for you, the real bottom line on our performance is its impact on your investment. I'm proud to say that we saw a 27 percent increase in our share price over the course of the last fiscal year – we ended fiscal '03 at 19.79, and closed the book on fiscal '04 at 25.17.\n\n#### **How we did it**\n\nOperationally, we were on top of our game last year. However, as I said, I know that the past year's performance really only matters if it reflects on what we'll do in the coming year. The good news is, it does. We performed at these high levels not by doing something unusual, but by operating the way Emmis has always operated, and the way we always will.\n\nFirst of all, we focus on assembling and maintaining the best teams in our markets. We have traditionally had the top salespeople, creative and technical professionals, news staffs, managers and support staff in every city where we operate. 
Their peers turn to them for industry leadership, honor them with awards and copy them at every opportunity. We invest in these people, giving them industry-leading benefits packages, great opportunities and the tools they need to succeed. This has always been a hallmark of Emmis, and it won't change.", - "page_start": 3, - "page_end": 3, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "# Outperform.\n\n### emmis communications 2004 abbreviated financial highlights *in thousands except where noted*\n\n| year ended Feb. 28 (29) | '00 | '01 | '02 | '03 | '04 |\n| --- | --- | --- | --- | --- | --- |\n| net revenues | 325,265 | 473,345 | 539,822 | 562,363 | 591,868 |\n| station operating income* | 125,477 | 174,213 | 185,665 | 213,112 | 220,445 |\n| station op income margin | 38.6% | 36.8% | 34.4% | 37.9% | 37.2% |\n| leverage | 2.5x | 6.8x | 9.3x | 6.5x | 6.7x |\n| | | | | | *excluding noncash compensation |\n\nradio tv publishing $600,000 $500,000 $400,000 $300,000 $200,000 $100,000 $0 00 01 02 03 04 **325,265 473,345 539,822 562,363 591,868**\n\n5 4 3 2 1 0 1% 2.7% 4.5% **INDUSTRY MARKETS EMMIS** radio division revenue growth fiscal 2004\n\nnet revenue station operating income, excluding noncash compensation\n\ntv division revenue growth calendar 2003\n\n-", - "page_start": 2, - "page_end": 2, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## Executive Summary\n\n#### ABOUT ROGERS COMMUNICATIONS INC.\n\n#### Rogers Communications is one of Canada's leading diversified communications and media companies.\n\n(%)\n\nWe provide a broad range of services: wireless and wired voice and data communications, cable television, high-speed Internet, cable telephony, wired telecom and data networking services to consumers and businesses. 
We also compete in television and radio broadcasting, multi-platform shopping, sports media and entertainment, digital media and consumer, trade and professional publications.\n\nAlmost all of our operations and sales are in Canada. We have a highly skilled and diversified workforce of approximately 28,000 employees. Our head-office is in Toronto, Ontario and we have numerous offices across Canada.\n\n#### FOUR BUSINESS SEGMENTS\n\nWe report our results of operations in four segments.\n\n| Wireless | Wireless telecommunications operations |\n| --- | --- |\n| | for consumers and businesses |\n| Cable | Cable telecommunications operations, |\n| | including cable television, Internet and |\n| | cable telephony for |\n| | Canadian consumers and businesses |\n| Business Solutions | Network connectivity through our fibre |\n| | network assets to support a range of |\n| | voice, data, networking, data centre and |\n| | cloud-based services for medium and |\n| | large Canadian businesses, governments, |\n| | and other telecommunications providers |\n| Media | A diversified portfolio of media |\n| | properties, including television and radio |\n| | broadcasting, digital media, multi |\n| | platform shopping, publishing and sports |\n| | media and entertainment |", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "#### Abstract (continued)\n\nand safety via a novel evaluation framework. This study suggests the importance of a physician-inloop implementation design for this model and demonstrates an effective strategy to measure preimplementation patient safety of LLM models.\n\nJAMA Network Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723\n\n## **Introduction**\n\nHandoffs, where patient information is exchanged between health professionals during a transfer of clinical responsibility, have been identified as a critical source of medical errors.1,2 The Joint Commission, the Accreditation Council for Graduate Medical Education, and the Association of American Medical Colleges have all recommended the development of high-quality and standardized handoff processes to address the substantial patient risk of this ubiquitous event.3,4 Implementing handoff tools has previously demonstrated significant reductions in medical errors.5,6 High-quality handoffs from emergency medicine (EM) to inpatient (IP) services (EM-to-IP) are challenged by medical complexity, diagnostic uncertainty, rapidly evolving care plans, and time constraints.7-10 The EM-to-IP handoff structure is not well standardized, frequently communicated verbally, and poorly adhered to in emergency departments (EDs), including in medical centers with formalized handoff systems.11-14 Prior research has demonstrated that suboptimal EM-to-IP handoff is associated with adverse events, EM leaders and front-line clinicians themselves view the EM-to-IP handoff as high risk, and an electronic health record (EHR)-based technology is commonly mentioned as the most desired assistive tool in improving ED transitions of care.15-18 Limited work to date has demonstrated EM electronic handoff tools as feasible, efficient, and effective.19-21 In April 2023, EM and internal medicine leadership of the study site collaboratively developed and launched a mandatory, EHR-based handoff workflow via a standardized EM-to-IP handoff note template, designed for realtime completion by the EM care team at time of admission. 
At 3 and 6 months postlaunch, informal evaluation of new EM-to-IP handoff notes through random medical record review and unstructured clinician feedback sessions revealed variable completeness, quality, and subsequent usefulness of the handoff notes.\n\nIn recent years there has been an accelerated interest in using LLMs to automate clinical tasks in an effort to unburden physicians and reduce burnout.22 Computer-generated text within clinical notes using natural language processing (NLP) have been overall shown to improve note completion rates, physician satisfaction, and patient outcomes.23 Since 2018, NLP has made rapid advancements in health care with the discovery of the transformer model architecture, the building block of large language models (LLMs). LLMs can automate workflows such as discharge summaries,24 radiology reports,25 patient messaging,26 after-visit summaries,27 and ambient dictation28 with various levels of perceived quality in each workflow.29 LLMs are particularly effective at summarizing large unstructured clinical datasets, such as ED patient medical records.30 A common concern of LLMs is their ability to hallucinate data, or LLMs generating output text that is not factually consistent with the original source content.31 Much work has been done in health care to reduce hallucinations through building larger-parameter models trained on trillions of datasets, and then instruction finetuning the LLM on smaller, well-curated datasets.32,33 LLMs can also be designed with explainability by citing inferred content back to the reference source notes.34 For short-context length notes, using few-shot prompt engineering approaches with large language models like GPT-4 can produce summaries that outperform standard physician documentation in completeness and error frequency.35 However, factual inconsistencies in the summaries produced by LLMs increase as the context length increases,36 and for medium- to long-context tasks, fine-tuning an open-source 
model has been shown to perform better than a prompt-learning approach.37 In prior work, members of this study team demonstrated 62% of LLM-generated hospital course summaries met standard-of-care for a formal inpatient discharge summary.24 However, recently published clinical\n\nJAMA Network Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723 (Reprinted) December 3, 2024 2/12", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed1.pdf", - "query": "What are the two components considered in the expected free energy?", - "target_page": 4, - "target_passage": "The former (utilitarian) objective is to realize one’s preferences, such as being satiated or safe, by minimizing the discrepancy between preferred sensa- tions (encoded as “priors over observations” in active inference) and current sensations in different modalities (e.g. interoceptive or exteroceptive). The latter (epistemic) objective is to reduce uncertainty about one’s estimated state", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "FIG. 1: Effective McMillan-Mayer short-range pair potentials extracted from explicit solvent simulations using the HNC closure. (a) Cation anion, (b) cation cation, (c) anion anion, (d) cation anion RDF obtained from explicit solvent MD and implicit solvent MC simulations.\n\npute all ion thermodynamic properties through implicit solvent MC simulations.\n\nThe second stage of our coarse-graining procedure consists in applying LPT, in order to deduce the best analytical model of electrolyte solutions which reproduces this molecular description. The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the difference between them treated as a perturbation in the reference potential. 
Assuming pairwise additive potentials, Vij = V (0) ij + ∆Vij , a first-order truncated expression for the free energy density of the system βfv is obtained,\n\n$$\\beta f_{v}\\lesssim\\beta f_{v}^{(0)}+\\frac{1}{2}\\beta\\sum_{i,j}\\rho_{i}\\rho_{j}\\int\\mathrm{d}\\mathbf{r}\\,g_{i j}^{(0)}(r)\\Delta V_{i j}(r)\\qquad(1)$$\n\nwhich depends only on the free-energy density f (0) v and RDF g (0) of the reference fluid, with β = (kBT ) −1 and ρi the concentration of species i. The Gibbs-Bogoliubov inequality [15] ensures that the right-hand side of Eq. (1) is actually a strict upper bound. Once a reference system has been chosen, the expression on the right-hand side of Eq. (1) must be minimized with respect to the parameters defining the reference. This procedure yields the best first-order approximation to the free energy of the system under consideration.\n\nFor a system of charged particles in solution, the natural reference is the PM, defined in terms of the charge and diameter (σi) of each species. In this case, the perturbing potentials are just the short-range effective potentials computed above (∆Vij = V SR ij ). We use the MSA [3] solution to the PM, since it provides analytical expressions for both the free energy and the RDF. The perturbation term is evaluated using an exponential approximation to the RDF obtained within the MSA, g(r) = exp [gMSA(r) − 1], which removes any unphysical negative regions and improves the comparison with HNC calculations.\n\nFIG. 2: (Color online) (a) Osmotic coefficient Φ in the McMillan-Mayer frame of reference. (diamond) MC simulations, (dot dashed) MSA2, (dot) Debye H¨uckel Limiting law (DHLL), (cross) experiments (Ref. [18] with the McMillan-Mayer to Lewis Randall conversion). (b) Minimization diameters. (dot dashed) MSA2 and (diamond) MSA-fit.\n\nWe first used LPT for a two-component system (Na+ and Cl− free ions) within the MSA (model MSA2), for concentrations ranging from 0.1 to 2.0 mol l−1 . 
The minimization leads to almost constant diameters on the whole range of concentration: σ1 = 3.67 ˚A and σ2 = 4.78 ˚A. As shown in Fig. 2, these parameters yield osmotic coefficients close to MC calculations only at very low concentration, i.e., c ≤ 0.1 mol l−1 (experimental values are given for indicative purposes only, since a perfect model will exactly match the MC results). For molar solutions, the LPT results differ considerably from MC calculations. This discrepancy can easily be understood by comparing the diameters found within the MSA2 calculation with the effective potentials given in Fig. 1. The anion/cation contact distance obtained within the MSA2 calculation is 4.2 ˚A, which is in the region of the second minimum of the effective potential and corresponds to the situation where there is a single layer of water molecules between the ions. The first minimum of the potential, which corresponds to the contact ion pair (CIP) is thus completely ignored by the MSA2 calculation. If the MSA diameters are directly fitted to reproduce the MC osmotic pressure, much smaller values are obtained. These MSA-fit hydrated diameters, which are compared to the MSA2 diameters in the bottom part of Fig. 2, are averages of the CIP and the solvent-separated ion pair.\n\nTo overcome this difficulty, we have explicitly introduced the CIP in our model (species 3). Straightforward calculations, based on a characteristic-function formalism, allow us to define an equivalent model in which the free ions and the CIP are explicitly taken into account [19, 20]. We apply this formalism by defining a pair as an anion and a cation at a distance less than 4 ˚A, which corresponds to the position of the effective potential maximum. 
The interaction between free, like charges in this new system remains unchanged, and the cation-anion interactions are easily approximated by ex-", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2648.pdf" - }, - { - "text": "quantities as its target: the variational free energy (*VFE*) in the case of perception and the expected free energy (*EFE*) in the case of action. The *VFE* is the free energy associated with a given sensory observation and is resolved perceptually by updating beliefs about the environment. The *EFE* is the free energy that is expected in the future, contingent on a given policy or course of action. Choosing action policies associated with a low *EFE* lead to reducing uncertainty about the environment, as well as making preferred observations more likely.\n\n#### *2.1. POMDPs in Active Inference*\n\nIn AIF, the POMDP is one of the most common families of generative models used to make inferences about the environment. It is a Markovian discrete state-space model, where employing it means representing the environment and observations as inhabiting one among a set of possible (possibly multidimensional) states, and that the changes in these states can only depend on the system's previous state and the agent's actions. Environmental states are not directly observable, so they have to be inferred based on incoming sensory observations. In AIF for POMDPs and other generative models in general, both perception and action are cast as Bayesian inferences (see Sections 2.2 and 2.3), as well as the learning of parameters of the generative model (see Section 2.4). 
Crucially, an agent's generative model does not a priori have to be isomorphic to the true environment (i.e., the data-generating process), although this will generally lead to a successful inference, and that the generative model will therefore often come to resemble the environment through learning.\n\nA discrete state-space POMDP in AIF is conventionally defined by five main sets of parameters: **A**, **B**, **C**, **D** and **E** [1,33], see Figure 1. Together, these parametrise the agent's prior beliefs about the prior probability of different states in the environment, how states of the environment change and how they generate observations. Typically, they will be vectors, matrices or tensors; however, henceforth we denote them by their corresponding letter in bold. These make up the components needed for the agent to perform AIF.\n\n**A**, also called the *observation model*, represents the state-to-observation likelihood model. This describes how observations depend on or are generated by states of the environment. It is structured as a matrix with a column for each possible environmental state *s*, and a row for each possible observation *o*. Each column is then a categorical probability distribution over the observations that will occur given the environmental state (meaning that each column must contain non-negative values that sum to 1). If the observations are multidimensional (i.e., multiple observations are made at each time point), there is a matrix for each observation modality. If two or more states determine the observation, the likelihood model then becomes a tensor. If **A** is imprecise (i.e., the probabilities are highly entropic and evenly distributed), observations are taken to carry less information about the environment, in many cases leading to more uncertain inferences, and vice versa.\n\n**B**, also called the *transition model*, describes the state-to-state transition probabilities of environmental states *s*. 
**B** encodes the agent's assumptions about how the environment changes over time, depending on its actions. It has a column and a row for each environmental state *s*, where each column is a categorical probability distribution over the states the environment will take on the next time step, given the state it is currently in. If the environment is modelled as multidimensional, there will be a matrix for each environmental state factor. Additionally, there is a separate matrix for each possible action (making each factor in **B** a tensor). This means that for every factor in the model, there may be one or more actions that pick out the appropriate slice of the tensor. Action therefore allows the agent to predict that the environment (and the corresponding observations) will change differently depending on the actions that it chooses. If **B** is imprecise (i.e., highly entropic),", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "| Core Concepts | |\n| --- | --- |\n| AIF | Active inference is a formal framework for modelling behaviour and cog |\n| | nition. Perception and action are cast as minimising free energy—the VFE |\n| | and EFE, respectively—given a generative model of the environment. |\n| VFE | The variational free energy F quantifies how well a generative model |\n| | explains incoming sensory observations. It can be rewritten as the negative |\n| | log model evidence (called surprise) upper-bounded by the divergence |\n| | from the optimal posterior p(s o). Perception as inference is accomplished |\n| | by selecting the approximate posterior q(s) with the lowest associated |\n| | VFE. |\n| | F[q(s), o] ≜ DKL[q(s)∥p(o,s)] = DKL[q(s)∥p(s o)] − ln p(o) |\n| | {z } {z } Divergence Surprise |\n| EFE | The expected free energy G quantifies the expected future free energy |\n| | under an action policy π. 
It consists of an information gain term and a |\n| | pragmatic value term that provide a natural balance between exploratory |\n| | and goal-seeking behaviour. Action as inference is accomplished by select |\n| | ing the action policy with the lowest associated EFE. |\n| | = − Eq(o˜,s˜ π) [ln q(s˜ o˜, π) − ln q(s˜ π)] − Eq(o˜ π) [ln p(o˜ C)] Gπ |\n| | {z } {z } Information gain Pragmatic value |\n| Generative | The generative model is an agent's formal assumptions about the structure |\n| model | and dynamics of its environment, based on which perceptual and active |\n| | inferences are carried out. Many types of generative models exist that are |\n| | suitable for different environments and tasks. |\n| POMDP | The Partially Observable Markov Decision Process is a type of flexible |\n| | generative model that is widely used in the AIF literature. In discrete time |\n| | and usually a discrete state space, this model type is parametrised to fit a |\n| | given task by a set matrices containing probability distributions. |\n\n## **2. Active Inference with POMDPs**\n\nIn this section, we briefly describe the core concepts of AIF and POMDPs. This should familiarise the reader with the vernacular used in the later sections regarding the functionalities of the package. While various extensions, such as structure learning, which enables an agent to learn the structure or shape of its environment through model comparison [44–47], or hierarchical and temporally deep POMDPs [48,49], are relevant for future work, describing these in detail is beyond the scope of this foundational paper.\n\nAt the core of AIF lies the minimisation of a variational free energy upper bound on surprise for perception, as well as action. This is motivated by the free energy principle [4–8], which states that self-organising systems can be described as minimising the variational free energy of their sensory states. 
The minimisation of free energy generally takes two", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## *Apartment Property Expenses*\n\nSame store apartment property expenses increased 5.5% for the year ended December 31, 2013, due primarily to increased utility and fuel expenses as a result of high natural gas prices in Atlantic Canada, and higher electricity costs.\n\n## **Utility and Fuel Expense ‑ Same Store**\n\nFor the years ended December 31,\n\n| | 2013 | 2012 | % Change |\n| --- | --- | --- | --- |\n| Natural gas | $4,565 | $2,729 | 67.3% |\n| Oil | 1,523 | 2,095 | (27.3)% |\n| Electricity | 5,197 | 4,671 | 11.3% |\n| Water | 3,582 | 3,474 | 3.1% |\n| Other | 30 | 33 | (9.1)% |\n| Total utility and fuel expenses | $14,897 | $13,002 | 14.6% |\n\nKillam's apartment properties are heated with a combination of natural gas (55%), electricity (36%), oil (8%) and other sources (1%).\n\nElectricity costs at the unit level are usually paid directly by tenants, reducing Killam's exposure to the majority of the 4,500 units heated with electricity. Fuel costs associated with natural gas or oil fired heating plants are paid by Killam. As such, the Company is exposed to fluctuations in natural gas and oil costs, which represent 40.9% of total same store utility and fuel costs in 2013. Killam invests in green initiatives at its properties to maximize efficiencies, including converting many of its Halifax properties to natural gas from oil over the last three years as natural gas infrastructure has been expanded in the city. The decision to convert was supported by the substantial price difference between the cost of natural gas and oil in recent years.\n\nAs noted in the table above, Killam's utility and fuel expenses increased 14.6% in 2013 compared to 2012. 
The increase was primarily attributable to higher natural gas, electricity costs and water costs.\n\nKillam's natural gas expenses increased by 67.3% in 2013 due to higher gas prices in Atlantic Canada and an increase in properties burning natural gas following conversions of certain Halifax heating plants from oil to gas in 2012 and 2013. The reduction in oil expense in the quarter and year‑to‑date reflects this reduction in oil exposure.\n\nAs the following chart highlights, the per gigajoule (Gj) commodity cost for natural gas in New Brunswick and Nova Scotia was much higher than NYMEX in 2013 and less correlated to NYMEX than in previous years. (NYMEX is the New York Mercantile Exchange, a commodity futures exchange. Henry Hub, a gas distribution hub in Louisiana is the pricing point for natural gas futures contracts traded on NYMEX). The cost of natural gas in Atlantic Canada and New England experienced a spike from December 2012 until late spring 2013 and a second spike in December 2013, compared to other areas of Canada. Those spikes were both due to increased demand from utilities in Northeast New England and a shortage of gas pipeline capacity in Northeastern New England and Atlantic Canada. A temporary decline in gas supply off the coast of Nova Scotia further contributed to the high pricing in the first part of the year.\n\n## **Historic Natural Gas Pricing ($ per Gj) Henry Hub Vs. 
Heritage Gas**", - "page_start": 37, - "page_end": 37, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Models of electrolyte solutions from molecular descriptions: The example of NaCl solutions\n\nJohn Jairo Molina1,2,3 , ∗ Jean-Fran¸cois Dufrˆeche1,2,3 , † Mathieu\n\nSalanne1,2 , Olivier Bernard1,2 , Marie Jardat1,2 , and Pierre Turq1,2\n\n1 UPMC-Universit´e Paris 06, UMR 7195, PECSA, F-75005 Paris, France\n\nUMR 5257 CEA–CNRS–Universit´e Montpellier 2, Site de Marcoule,\n\nBˆatiment 426, BP 17171, 30207 Bagnols-sur-C`eze Cedex, France\n\nWe present a method to derive implicit solvent models of electrolyte solutions from all-atom descriptions; providing analytical expressions of the thermodynamic and structural properties of the ions consistent with the underlying explicit solvent representation. Effective potentials between ions in solution are calculated to perform perturbation theory calculations, in order to derive the best possible description in terms of charged hard spheres. Applying this method to NaCl solutions yields excellent agreement with the all-atom model, provided ion association is taken into account.\n\nSince the pioneering works of Debye, H¨uckel, and Onsager, electrolyte solutions have been commonly described by continuous solvent models, for which the McMillan-Mayer theory [1] provides a rigorous statistical-mechanical foundation. Within that level of description, simple phenomenological models such as the primitive model (PM), for which the ions are assimilated to charged hard spheres [2], can lead to explicit formulas for the thermodynamic and structural properties (e.g., with the help of the mean spherical approximation (MSA) [3] or the binding MSA (BIMSA) [4]). These models are the most practical to use [5], since they allow for a direct link between the experimental measurements and the microscopic parameters of the system. Nevertheless, they ignore the molecular structure of the solvent. 
Consequently, they cannot properly account for the complex specific effects of the ions, which appear in numerous biological, chemical, and physical interfacial phenomena [6, 7], without further developments.\n\nAn alternative procedure consists in carrying out molecular simulations, where both the solvent and solute are treated explicitly. After a rigorous averaging over the solvent configurations, a coarse-grained description of the ions, which still includes the effect of the solvent structure, can be obtained [8–11]. However, this set of methods is purely numeric; they do not provide any analytical expression for thermodynamic quantities. They are therefore restricted to simple geometries [12, 13] (bulk solutions or planar interfaces). The description of complex systems, such as porous or electrochemical materials, is still based on continuous solvent models [14].\n\nIn this letter we present a method aimed at bridging the gap between analytical and numerical approaches. It is based on the application of liquid perturbation theory (LPT) [15] to effective ion-ion potentials extracted from molecular dynamics (MD) results. Different approximations of the PM are employed for the case of NaCl electrolyte solutions: a two component model (MSA2), that only takes free ions into account, and two different three component models (MSA3 and BIMSA3), which include a third species (the contact ion pair). As we proceed to show, LPT allows us to select the best simple model which accurately accounts for the thermodynamics and the physical-chemistry of the system.\n\nThe first stage consists in calculating the McMillan-Mayer effective ion-ion interaction potentials V eff ij (r), by inverting the radial distribution functions (RDF) gij (r) obtained by MD. The simulations were carried out on a box of 2000 water molecules and 48 NaCl pairs using the same interaction potentials as in reference [16]. This setup corresponds to a concentration of 0.64 mol l−1 . 
NPT ensemble sampling at standard pressure and temperature was enforced, with a time step of 1 fs and a pressure bath coupling constant of 1 ps. An equilibration run of 0.25 ns was followed by a production run of 0.6 ns for five different initial configurations. The averages of the resulting RDF were then used for the potential inversion via the HNC closure [15]. These effective potentials are assumed to be concentration independent and will be used for simulations at all concentrations.\n\nSubtracting the long-range Coulombic potential V LR ij (r) (which depends on the dielectric constant of the solvent) from V eff ij (r), we obtain the short-range contribution V SR ij (r) to the effective potentials. These are given in Fig. 1 (species 1 and 2 refer to Na+ and Cl− free ions, respectively). All the short-range potentials exhibit oscillations corresponding to the solvent layering between the ions, but this effect is particularly important for the cation-anion interaction: a considerable potential barrier (& 2kBT ) separates the first two attractive wells. To serve as a reference, Monte Carlo (MC) simulations were performed with these effective potentials; a comparison between MD and MC RDF is also provided in Fig. 1. 
The excellent agreement between both sets of RDF validates the HNC inversion procedure [17], and allows us to com-\n\n<sup>2</sup> CNRS, UMR 7195, PECSA, F-75005 Paris, France 3\n\nInstitut de Chimie S´eparative de Marcoule (ICSM),\n\n<sup>∗</sup>Electronic address: john.molina@etu.upmc.fr\n\n<sup>†</sup>Electronic address: jean-francois.dufreche@upmc.fr", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2648.pdf" - }, - { - "text": "Franzen) sued AI companies for using their work to train generative AI.[195][196] Another discussed approach is to envision a separate *sui generis* system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[197]\n\n#### **Dominance by tech giants**\n\nThe commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. [198][199][200] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.[201][202]\n\n#### **Power needs and environmental impacts**\n\nIn January 2024, the International Energy Agency (IEA) released *Electricity 2024, Analysis and Forecast to 2026*, forecasting electric power use.[203] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation.[204]\n\nProdigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. 
Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. AI makes the power grid more efficient and \"intelligent\", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms.[205]\n\nA 2024 Goldman Sachs Research Paper, *AI Data Centers and the Coming US Power Demand Surge*, found \"US power demand (is) likely to experience growth not seen in a generation....\" and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[206] Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[207]\n\nIn 2024, the *Wall Street Journal* reported that big AI companies have begun negotiations with the US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 Million (US).[208] Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers.[209]\n\nIn September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. 
Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this will be the first ever US re-commissioning of a nuclear plant), over 835 megawatts of power – enough for 800,000 homes – of", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia3.pdf" - }, - { - "text": "in a given band is compensated by an appropriate change of the spectral weight in other bands such that the total spectral weight, integrated over all bands, is conserved, as in Eq. (1). Still, non-conservation of the spectral weight within a given band is an interesting phenomenon as the degree of non-conservation is an indicator of relevant energy scales in the problem. Indeed, when relevant energy scales are much smaller than the Fermi energy, i.e., changes in the conductivity are confined to a near vicinity of a Fermi surface (FS), one can expand εk near kF as εk = vF (k − kF ) + (k − kF ) 2/(2mB) + O(k − kF ) 3 and obtain ∇2 k~x ε~k ≈ 1/mB [this approximation is equivalent to approximating the density of states (DOS) by a constant]. Then WK becomes πne2/(2mB) which does not depend on temperature. The scale of the temperature dependence of WK is then an indicator how far in energy the changes in conductivity extend when, e.g., a system evolves from a normal metal to a superconductor. Because relevant energy scales increase with the interaction strength, the temperature dependence of WK is also an indirect indicator of whether a system is in a weak, intermediate, or strong coupling regime.\n\nIn a conventional BCS superconductor the only relevant scales are the superconducting gap ∆ and the impurity scattering rate Γ. 
Both are generally much smaller than the Fermi energy, so the optical integral should be almost T -independent, i.e., the spectral weight lost in a superconducting state at low frequencies because of gap opening is completely recovered by the zero-frequency δfunction. In a clean limit, the weight which goes into a δ−function is recovered within frequencies up to 4∆. This is the essence of FGT sum rule 2,3. In a dirty limit, this scale is larger, O(Γ), but still WK is T -independent and there was no \"violation of sum rule\".\n\nThe issue of sum rule attracted substantial interest in the studies of high Tc cuprates5–18,21–26 in which pairing is without doubts a strong coupling phenomenon. From a theoretical perspective, the interest in this issue was originally triggered by a similarity between WK and the kinetic energy K = 2P ε~k n~k . 18–20 For a model with a simple tight binding cosine dispersion εk ∝ (cos kx + cos ky), d 2 ε~k d k2 x ∼ −ε~k and WK = −K. For a more complex dispersion there is no exact relation between WK and K, but several groups argued 17,27,28 that WK can still be regarded as a good monitor for the changes in the kinetic energy. Now, in a BCS superconductor, kinetic energy increases below Tc because nk extends to higher frequencies (see Fig.2). At strong coupling, K not necessary increases because of opposite trend associated with the fermionic self-energy: fermions are more mobile in the SCS due to less space for scattering at low energies than they are in the NS. Model calculations show that above some coupling strength, the kinetic energy decreases below Tc 29. While, as we said, there is no one-to-one correspondence between K and WK, it is still likely that, when K decreases, WK increases.\n\nA good amount of experimental effort has been put into\n\naddressing the issue of the optical sum rule in the c−axis7 and in-plane conductivities 8–16 in overdoped, optimally doped, and underdoped cuprates. 
The experimental results demonstrated, above all, outstanding achievements of experimental abilities as these groups managed to detect the value of the optical integral with the accuracy of a fraction of a percent. The analysis of the change of the optical integral between normal and SCS is even more complex because one has to (i) extend NS data to T < Tc and (ii) measure superfluid density with the same accuracy as the optical integral itself.\n\nThe analysis of the optical integral showed that in overdoped cuprates it definitely decreases below Tc, in consistency with the expectations at weak coupling11. For underdoped cuprates, all experimental groups agree that a relative change of the optical integral below Tc gets much smaller. There is no agreement yet about the sign of the change of the optical integral : Molegraaf et al.8 and Santander-Syro et al.9 argued that the optical integral increases below Tc, while Boris et al.10 argued that it decreases.\n\nTheoretical analysis of these results21,22,25,28,30 added one more degree of complexity to the issue. It is tempting to analyze the temperature dependence of WK and relate it to the observed behavior of the optical integral, and some earlier works25,28,30 followed this route. In the experiments, however, optical conductivity is integrated only up to a certain frequency ωc, and the quantity which is actually measured is\n\n$$W(\\omega_{c})=\\int_{0}^{\\omega_{c}}\\,Re\\,\\sigma(\\Omega)\\,d\\Omega=W_{K}+f(\\omega_{c})$$\n \n$$f(\\omega_{c})=-\\int_{\\omega_{c}}^{\\prime\\,\\infty^{\\prime}}\\,Re\\,\\sigma(\\Omega)\\,d\\Omega\\tag{4}$$\n\nThe Kubo formula, Eq. (3) is obtained assuming that the second part is negligible. This is not guaranteed, however, as typical ωc ∼ 1 − 2eV are comparable to the bandwidth.\n\nThe differential sum rule ∆W is also a sum of two terms\n\n$$\\Delta W(\\omega_{c})=\\Delta W_{K}+\\Delta f(\\omega_{c})\\tag{5}$$\n\nwhere ∆WK is the variation of the r.h.s. of Eq. 
3, and ∆f(ωc) is the variation of the cutoff term. Because conductivity changes with T at all frequencies, ∆f(ωc) also varies with temperature. It then becomes the issue whether the experimentally observed ∆W(ωc) is predominantly due to \"intrinsic\" ∆WK, or to ∆f(ωc). [A third possibility is non-applicability of the Kubo formula because of the close proximity of other bands, but we will not dwell on this.]\n\nFor the NS, previous works21,22 on particular models for the cuprates indicated that the origin of the temperature dependence of W(ωc) is likely the T dependence of the cutoff term f(ωc). Specifically, Norman et. al.22 approximated a fermionic DOS by a constant (in which", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0764.pdf" - }, - { - "text": "minimisation [9]. Choosing actions that minimise the expected free energy (*EFE*) of their consequences provides a natural balance between exploratory and exploitative behaviour; generalises descriptive approaches to behavioural modelling, like reinforcement learning and expected utility maximisation; and provides a singular approach to adaptive behaviour that can be used across different environments. AIF was argued to be applicable to any selforganising system that actively maintains a stable boundary that defines its integrity [10], a broad category that includes cells and plants [11], as well as humans [2] and even collectives [12]. Owing to its generality, AIF has seen a rise in popularity across multiple fields. It is used for theoretical simulations of the mechanisms underlying various types of behaviour [2], computational phenotyping in computational psychiatry [13,14], and agentbased simulations of population dynamics [15], as well as in engineering and robotics [16]. In AIF, perception and concurrent action are based on performing a variational Bayesian inversion of a generative model of the environment (i.e., a model of how the environment changes and brings about sensory observations). 
This belief updating includes inferring (hidden) states of the environment, learning parameters of the generative model and learning the structure of the generative model. Since the requisite inference schemes come pre-specified, the main task in AIF modelling becomes specifying an appropriate generative model. This includes specifying priors over environmental states, as well as what might be called *prior preferences*, *preference priors* or *goal priors*: immutable prior expectations that make up an agents' preferences by furnishing a set of predictions over future states or observations; in fulfilling these predictions, free energy is minimised. The space of possible generative models is vast, and they often have to be handcrafted for a given environment. However, there are some families of generative models that can be considered \"universal\" in the sense that they can be used for most environments. Currently, the most popular of these is the discrete state-space Partially Observable Markov Decision Process (POMDP) based generative models. Since they are ubiquitous in the literature, we focus here on making these types of generative models available to researchers. There are, however, other types of universal generative models, like generalised filtering models [17] or Hierarchical Gaussian Filtering-based models [18,19], that will be implemented in the future.\n\nTools for simulating POMDP-AIF models were originally developed as part of the DEM [20] library for MATLAB [21] (part of the larger SPM library [22]). Since then, a modal and flexible software package pymdp [23] was created for Python [24], as well as a performance-oriented package cpp-AIF [25] for C++ [26] that can be used across platforms. Finally, the factor graph library RxInfer [27] for Julia [28] has also been used to implement some AIF models on an efficient factor graph back-end [29–31]. 
The important tools that these packages provide make AIF available for researchers to perform simulation studies and for use in engineering contexts. They do not, however, usually allow for fitting models to empirically observed data, which is a fundamental method used in cognitive modelling [32], often in the context of computational psychiatry [13], to infer the mechanisms underlying variations in behaviour or to investigate the differences between (for example, clinical) populations. Smith and colleagues [33] provided a guide for manually doing variational Bayesian parameter estimation based on empirical data, but only in MATLAB and restricted to a particular class of variational parameter estimation methods (variational Laplace), instead of the sampling-based methods that currently predominate in the field of cognitive modelling [34,35].\n\nIn this paper, we introduce ActiveInference.jl, a new software library for Julia [28] that aims to provide easy-to-use tools for model fitting with AIF models and to introduce AIF to the growing community of researchers using Julia for computational psychiatry and cognitive modelling. Julia is a free and open-source high-level programming language that retains an easy user interface reminiscent of that in MATLAB and Python. Simultaneously,", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "FIG. 2: Distribution functions in four cases (a) BCSI model, where one can see that for ε > 0, SC>NS implying KE increases in the SCS. (b) The original MFLI model of Ref. 30, where for ε > 0, SC<NS, implying KE decreases in the SCS. (c) Our version of MFLI model (see text) and (d) the CB model. In both cases, SC>NS, implying KE increases in the SCS. Observe that in the impurity-free CB model there is no jump in n(ǫ) indicating lack of fermionic coherence. This is consistent with ARPES39\n\n#### A. 
The BCS case\n\nIn BCS theory the quantity Z(ω) is given by\n\n$$Z_{B C S I}(\\omega)=1+\\frac{\\Gamma}{\\sqrt{\\Delta^{2}-(\\omega+i\\delta)^{2}}}\\qquad(11)$$\n\nand\n\n$$\\Sigma_{B C S I}(\\omega)=\\omega\\left(Z(\\omega)-1\\right)=i\\Gamma\\frac{\\omega}{\\sqrt{(\\omega+i\\delta)^{2}-\\Delta^{2}}}\\ \\ \\ (12)$$\n\nThis is consistent with having in the NS, Σ = iΓ in accordance with Eq 6. In the SCS, Σ(ω) is purely imaginary for ω > ∆ and purely real for ω < ∆. The self-energy has a square-root singularity at ω = ∆.\n\nIt is worth noting that Eq.12 is derived from the integration over infinite band. If one uses Eq.6 for finite band, Eq.12 acquires an additional frequency dependence at large frequencies of the order of bandwidth (the low frequency structure still remains the same as in Eq.12). In principle, in a fully self-consistent analysis, one should indeed evaluate the self-energy using a finite bandwidth. In practice, however, the self-energy at frequencies of order bandwidth is generally much smaller than ω and contribute very little to optical conductivity which predominantly comes from frequencies where the self-energy is comparable or even larger than ω. Keeping this in mind, below we will continue with the form of self-energy derived form infinite band. We use the same argument for all four models for the self-energy.\n\nFor completeness, we first present some well known results about the conductivity and optical integral for a constant DOS and then extend the discussion to the case where the same calculations are done in the presence of a particular lattice dispersion.\n\nFIG. 3: The BCSI case with a dispersion linearized around the Fermi surface. Evolution of the difference of optical integrals in the SCS and the NS with the upper cut-off ωc Observe that the zero crossing point increases with impurity scattering rate Γ and also the 'dip' spreads out with increasing Γ. 
∆ = 30 meV\n\nFor a constant DOS, ∆W(ωc) = WSC (ωc) − WNS(ωc) is zero at ωc = ∞ and Kubo sum rule reduces to FGT sum rule. In Fig. 3 we plot for this case ∆W(ωc) as a function of the cutoff ωc for different Γ′ s. The plot shows the two well known features: zero-crossing point is below 2∆ in the clean limit Γ << ∆ and is roughly 2Γ in the dirty limit21,40 The magnitude of the 'dip' decreases quite rapidly with increasing Γ. Still, there is always a point of zero crossing and ∆W(ωc) at large ωc approaches zero from below.\n\nWe now perform the same calculations in the presence of lattice dispersion. The results are summarized in Figs 4,5, and 6.\n\nFig 4 shows conductivities σ(ω) in the NS and the SCS and Kubo sums WK plotted against impurity scattering Γ. We see that the optical integral in the NS is always greater than in the SCS. The negative sign of ∆WK is simply the consequence of the fact that nk is larger in the NS for ǫk < 0 and smaller for ǫk < 0, and ∇2 ε~k closely follows −ε~k for our choice of dispersion38), Hence nk is larger in the NS for ∇2 ε~k > 0 and smaller for ∇2 ε~k < 0 and the Kubo sum rule, which is the integral of the product of nk and ∇2 ε~k (Eq. 3), is larger in the normal state.\n\nWe also see from Fig. 4 that ∆WK decreases with Γ reflecting the fact that with too much impurity scattering there is little difference in nk between NS and SCS.\n\nFig 5 shows the optical sum in NS and SCS in clean and dirty limits (the parameters are stated in the figure). This plot shows that the Kubo sums are almost completely recovered by integrating up to the bandwidth of 1eV : the recovery is 95% in the clean limit and ∼ 90% in the dirty limit. In Fig 6 we plot ∆W(ωc) as a function of ωc in clean and dirty limits. ∆W(∞) is now non-zero, in agreement with Fig. 
4 and we also see that there is", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0764.pdf" - }, - { - "text": "```\n✞ ☎\n # Set center ( column 1) and cue ( column 4) to give no reward observation ( row 1)\n # Set reward and loss probabilities to 0.5 for the arm locations\n # For reward condition right\n A[2][:,:,1] = [ 1 .0 0 .0 0 .0 1 .0\n 0 .0 0 .5 0 .5 0 .0\n 0 .0 0 .5 0 .5 0 .0 ]\n # For reward condition left\n A[2][:,:,2] = [ 1 .0 0 .0 0 .0 1 .0\n 0 .0 0 .5 0 .5 0 .0\n 0 .0 0 .5 0 .5 0 .0 ]\n✝ ✆\n```\n**Figure 3.** Reward probabilities for the four locations. The centre (column 1) and cue (column 4) locations always resulted in the \"no reward\" observation (row 1). The two arms (columns 3 and 4) resulted in either rewards (row 2) or losses (row 3), with some probability. Left: the agent's agnostic starting beliefs about reward probabilities. Right: the true reward probabilities for the reward condition left arm, which the agent needed to learn over time. The amount of saturation of the green color represent the likelihood of a specific observation in a give state.\n\nThe third and last modality was the cue modality, which mapped the cue observations onto the location and reward condition states. This resulted in an **A** that was two cue observations by four location states by two reward conditions, i.e., a 2 × 4 × 2 tensor. The observations in this modality were correctly assumed by the agent to truthfully reveal the current reward condition—i.e., whether the right or left arm was better—when standing at the cue location. 
We implemented this by giving each cue observation equal probabilities at all locations except the cue location, where there was a perfect correspondence between the reward condition and the observation:\n\n```\n✞ ☎\n # Set cue observation probabilities to be equal at all\n # locations except the cue location ( column 4).\n # Let cue locations correspond to reward conditions .\n # For reward condition right\n A[3][:,:,1] = [ 0 .5 0 .5 0 .5 1 .0\n 0 .5 0 .5 0 .5 0 .0 ]\n # For reward condition left\n A[3][:,:,2] = [ 0 .5 0 .5 0 .5 0 .0\n 0 .5 0 .5 0 .5 1 .0 ]\n✝ ✆\n```\nHaving created all three modalities of **A**, we could continue to **B**, or the transition model. Each of the two state factors of **B**—the location factor and reward condition factor—needed to be defined separately. We started with the location factor, which contained the transition to and from four possible location states under four different actions: a 4 × 4 × 4 tensor. The agent could control these states perfectly with its four movement actions, independently", - "page_start": 19, - "page_end": 19, - "source_file": "pubmed7_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed1.pdf", - "query": "How could the heart rate be estimated by means of an active inference paradigm?", - "target_page": 6, - "target_passage": "The second panel of Fig. 2 shows the Shannon surprise of an inference model that estimates the current heart rate using the two standard components of a generative model. The for- mer component is the prior, which encodes the person’s a priori probabilistic belief (i.e. probability distribution) about her “nor- mal” heart rate range; here, the prior is a Gaussian centered on 67 and has a precision of 0.11. 
The latter component is the likeli- hood, which encodes the probabilistic mapping between sensory (heartbeat) observations and the hidden state (heart rate); here, the likelihood is a Gaussian centered on the current heart rate with an additional bias of 15 pulses, and the panel shows the results for 10 values for precision obtained by subdividing the range [0.1,10] into equal intervals.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "Figure 2. A simplifed example of (Bayesian) inference of one's heart rate. First panel: simulated time series of heartbeat observations. Second panel: Shannon surprise of a generative model composed of a fxed prior about heart rate (a Gaussian with a mean of 67 and a precision of 0.11) and a likelihood (a Gaussian centered on the current heart rate with an additional bias of 15 pulses, with various precisions that vary between 0.47 and 10, see the legend). Third panel: Bayesian surprise, which measures the discrepancy between posterior and prior probabilities over time. Bottom panels: the two series of panels are organized in two (left and right) columns, which show the frst fve time steps of inference for the two cases with high precision (of 10) and low precision (of 0.1) of the likelihood, respectively. See the main text for an explanation and online article for colored version of this fgure.\n\nthe current model generate signifcant surprise, and sometimes, the surprise can remain relatively high for long periods before the model adapts (or the world changes), especially with some parameterizations of the generative model. 
This is particularly relevant in this context since active inference agents strive to minimize their surprise (and the long-term average of surprise, entropy, which is a measure of uncertainty) by changing their model, or changing the world, or both.\n\nSecond, these examples illustrate the importance of precision control and the appropriate setting of precision parameters in guiding inference. Remarkably, the inference can be more or less accurate or fast using the same data, depending on the precision parameters. Note that in Fig. 2, we manipulated only the precision of the likelihood. However, it would also be possible to manipulate the precision of the prior, together or in alternative to the precision of the likelihood. Generally speaking, when the precision of the", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed1.pdf" - }, - { - "text": "| Core Concepts | |\n| --- | --- |\n| AIF | Active inference is a formal framework for modelling behaviour and cog |\n| | nition. Perception and action are cast as minimising free energy—the VFE |\n| | and EFE, respectively—given a generative model of the environment. |\n| VFE | The variational free energy F quantifies how well a generative model |\n| | explains incoming sensory observations. It can be rewritten as the negative |\n| | log model evidence (called surprise) upper-bounded by the divergence |\n| | from the optimal posterior p(s o). Perception as inference is accomplished |\n| | by selecting the approximate posterior q(s) with the lowest associated |\n| | VFE. |\n| | F[q(s), o] ≜ DKL[q(s)∥p(o,s)] = DKL[q(s)∥p(s o)] − ln p(o) |\n| | {z } {z } Divergence Surprise |\n| EFE | The expected free energy G quantifies the expected future free energy |\n| | under an action policy π. It consists of an information gain term and a |\n| | pragmatic value term that provide a natural balance between exploratory |\n| | and goal-seeking behaviour. 
Action as inference is accomplished by select |\n| | ing the action policy with the lowest associated EFE. |\n| | = − Eq(o˜,s˜ π) [ln q(s˜ o˜, π) − ln q(s˜ π)] − Eq(o˜ π) [ln p(o˜ C)] Gπ |\n| | {z } {z } Information gain Pragmatic value |\n| Generative | The generative model is an agent's formal assumptions about the structure |\n| model | and dynamics of its environment, based on which perceptual and active |\n| | inferences are carried out. Many types of generative models exist that are |\n| | suitable for different environments and tasks. |\n| POMDP | The Partially Observable Markov Decision Process is a type of flexible |\n| | generative model that is widely used in the AIF literature. In discrete time |\n| | and usually a discrete state space, this model type is parametrised to fit a |\n| | given task by a set matrices containing probability distributions. |\n\n## **2. Active Inference with POMDPs**\n\nIn this section, we briefly describe the core concepts of AIF and POMDPs. This should familiarise the reader with the vernacular used in the later sections regarding the functionalities of the package. While various extensions, such as structure learning, which enables an agent to learn the structure or shape of its environment through model comparison [44–47], or hierarchical and temporally deep POMDPs [48,49], are relevant for future work, describing these in detail is beyond the scope of this foundational paper.\n\nAt the core of AIF lies the minimisation of a variational free energy upper bound on surprise for perception, as well as action. This is motivated by the free energy principle [4–8], which states that self-organising systems can be described as minimising the variational free energy of their sensory states. The minimisation of free energy generally takes two", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "**Figure 5. A** learning for the actual reward condition (reward condition left). 
The agent correctly learned the probability of receiving rewards in the rewarding arm. It did not learn the probabilities of the non-rewarding arm since it did not explore that option. The color grading signifies the likelihood of an observation being generated by a specific state. The more saturated the color, the higher the likelihood.\n\n## *4.3. Fitting the Model to the Data*\n\nSimulations are useful for a variety of purposes, like exploring the consequences of different priors and parameters and establishing the face validity of hypothetical mechanisms underlying behavioural phenomena. However, we often want to use models to make inferences about specific observed phenomena, like the differences in behaviour between various populations, as in computational psychiatry [14]. One standard method here is model fitting, where we estimate the parameter values (e.g., prior beliefs) of an AIF model that are the most likely given some observed behaviour of a participant. This is often performed with approximate Bayesian methods. In the cognitive and behavioural sciences, the predominant method is Markov Chain Monte Carlo (MCMC) methods [34], which are slower but in the limit can estimate parameter posteriors without making assumptions about their functional form. An alternative, which is more often used in other fields and also available in ActiveInference is variational methods, which are faster but require making assumptions about the functional form of the posterior. In general, MCMC methods are favourable when making parameter inferences (i.e., comparing parameters of the same model fitted to different data, like two groups of subjects). 
When performing a Bayesian model comparison (i.e., comparing different models fitted to the same data), the different approaches rely on different approximations of the model evidence, with the variational", - "page_start": 23, - "page_end": 23, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "participants processed faces expressing fear (but not neutral faces or faces expressing other emotions) when their heart rate was high—hence congruent with the fearful expression (Pezzulo et al. 2018, Yu et al. 2021). The generative model shown in Fig. 1 could support this kind of inference by using interoceptive information from the heart (i.e. high heart rate) as evidence that \"there might be something fearful out there\" (Pezzulo 2013). Another more complex example regards emotional awareness and self-awareness—which signifcantly engage the brain regions involved in interoception and the representation of physiological processes (Garfnkel et al. 2013). The generative model shown in Fig. 1 might support processes of emotional awareness in a way that is neither purely bottom-up (i.e. as if interoceptive signals cause emotional awareness) nor top-down (i.e. as if emotional awareness causes interoceptive signals), but rather through a circular causality between central predictions about bodily state that engage autonomic refexes—and interoceptive streams—that update the predictions (Seth and Friston 2016). In this perspective, any representation that induces interoceptive predictions could be associated with emotional or affective content; crucially, this is also the case with some aspects of self-awareness (e.g. recognizing one's own face) that require integrating interoceptive streams with concurrent exteroceptive (e.g. visual) and proprioceptive cues. These examples illustrate that the generative model of Fig. 
1 natively implements both the multisensory integration required to unite (for example) interoceptive and exteroceptive streams and the active aspects that are supposed to support emotional and self-processing—and the construction of an \"embodied self\" (i.e. the circular causality between engaging autonomic refexes and capturing the ensuing interoceptive signals).\n\nIn general, the accuracy of the inference of hidden bodily states, the \"embodied self,\" or other aspects of the model depends on the signal-to-noise ratio of the sensations and on the quality of the model. For example, it is diffcult to self-localize in a city if it is dark (low signal-to-noise ratio) or if one does not know the city well (poor model). The inference of hidden bodily and emotional states might function in an analogous manner. If the quality of the afferent interoceptive (e.g. cardiac) signals is low, or if one has a poor model of how one's body functions, then it would estimate one's bodily states such as fatigue incorrectly (which in turn would also impair its adaptive regulation of the same bodily states). Interoceptive signals could be \"too noisy\" for various reasons, which might be related to physiology, infammation, or stress. The body model can be poor in various ways, too. For example, it could poorly characterize the statistical relations between interoceptive sensations and hidden bodily states (e.g. systematically mischaracterize high heart rate as caused by hunger but not fatigue or joy).\n\nFinally, there is a third essential element that determines the accuracy of the inference: precision control. In predictive coding, the infuence of prediction errors on inference is weighted by their precision, i.e. inverse variance (pink triangles in Fig. 1). This weighting would ensure that very reliable sensations have more impact on inference than unreliable sensations. However, precision (like all other variables) needs to be estimated, but this might be incorrect. 
An incorrect setting of precisions has been associated with various psychopathological conditions, such as psychosis (Adams et al. 2013), eating disorders (Barca and Pezzulo 2020), panic disorders (Maisto et al. 2021), symptom perception (Pezzulo et al. 2019), depression (Barrett et al. 2016), and many others (Khalsa et al. 2018, Paulus et al. 2019). Intuitively, assigning excessively high weight to noisy sensations yields an incorrect inference that tracks the noise rather than the correct state of the estimated variable system (i.e. overftting), whereas assigning excessively low weight to sensations (or excessively high weight to prior knowledge) makes the system poorly responsive to incoming observations that might signal a change in the state of the system—and both are examples of aberrant inference (Friston et al. 2014).\n\nFigure 2 provides a formal illustration of the above by plotting some examples of Bayesian inference using generative models under various levels of precision of the model components. For simplicity, we focus on a simplifed example of inference of an interoceptive variable: one's heart rate. Heart rate is a \"hidden variable\" in Bayesian parlance since it is not directly observable but needs to be inferred through two sources of information: prior knowledge about the most likely heart rate and sensory (heartbeat) observations. The top panel of Fig. 2 shows a series of (noisy) heartbeat observations. In the beginning, they are in the normal range for an adult (time steps 1–10), then they increase signifcantly, simulating tachycardia (time steps 11–20), then they go back to the normal range (time steps 21–30), then they decrease signifcantly, simulating bradycardia (time steps 31–40), and fnally, they go back to the normal range (time steps 41–50).\n\nThe second panel of Fig. 2 shows the Shannon surprise of an inference model that estimates the current heart rate using the two standard components of a generative model. 
The former component is the prior, which encodes the person's a priori probabilistic belief (i.e. probability distribution) about her \"normal\" heart rate range; here, the prior is a Gaussian centered on 67 and has a precision of 0.11. The latter component is the likelihood, which encodes the probabilistic mapping between sensory (heartbeat) observations and the hidden state (heart rate); here, the likelihood is a Gaussian centered on the current heart rate with an additional bias of 15 pulses, and the panel shows the results for 10 values for precision obtained by subdividing the range [0.1,10] into equal intervals. The results shown in the second panel of Fig. 2 show that Shannon surprise increases dramatically during episodes of tachycardia and bradycardia, which are far from the normal range. The pattern of results is the same across all levels of likelihood precision. However, the inference with a very high precision (a precision of 10) tracks more closely the noise sensory signals and can therefore lead to more extreme results.\n\nThe third panel shows the Bayesian surprise (or the Kullback-Leibler divergence between posterior and prior probability distributions) over time. This is a measure of how much dissimilar the posterior and the prior are, and it always decreases as a result of inference, but note that it decreases much more rapidly when the precision of the likelihood is 10, which is another indication that the posterior is \"overftting,\" meaning that the inference result is excessively biased by the likelihood distribution.\n\nFinally, the two bottom series of panels are organized in two (left and right) columns, which show the frst fve time steps of inference for the two cases with high precision (of 10) and low precision (of 0.1) of the likelihood, respectively. In these plots, the prior distributions are in blue, the posterior distributions are in green, and the likelihoods are in red. 
It is possible to note that in the left (high precision) panels, the posterior inference closely follows the likelihood (it \"overfits\") after five time steps and the inferred heart rate is slightly biased (i.e. it is 79). Differently, in the right (low precision) panels, the inference converges much more slowly to a high precision posterior, but without overfitting.\n\nThese simple examples of Bayesian inference illustrate two things. First, sensory observations that are unpredictable given", "page_start": 5, "page_end": 5, "source_file": "pubmed1.pdf" }, { "text": "quantities as its target: the variational free energy (*VFE*) in the case of perception and the expected free energy (*EFE*) in the case of action. The *VFE* is the free energy associated with a given sensory observation and is resolved perceptually by updating beliefs about the environment. The *EFE* is the free energy that is expected in the future, contingent on a given policy or course of action. Choosing action policies associated with a low *EFE* leads to reducing uncertainty about the environment, as well as making preferred observations more likely.\n\n#### *2.1. POMDPs in Active Inference*\n\nIn AIF, the POMDP is one of the most common families of generative models used to make inferences about the environment. It is a Markovian discrete state-space model, where employing it means representing the environment and observations as inhabiting one among a set of possible (possibly multidimensional) states, and that the changes in these states can only depend on the system's previous state and the agent's actions. Environmental states are not directly observable, so they have to be inferred based on incoming sensory observations. In AIF for POMDPs and other generative models in general, both perception and action are cast as Bayesian inferences (see Sections 2.2 and 2.3), as well as the learning of parameters of the generative model (see Section 2.4). 
Crucially, an agent's generative model does not a priori have to be isomorphic to the true environment (i.e., the data-generating process), although this will generally lead to a successful inference, and the generative model will therefore often come to resemble the environment through learning.\n\nA discrete state-space POMDP in AIF is conventionally defined by five main sets of parameters: **A**, **B**, **C**, **D** and **E** [1,33], see Figure 1. Together, these parametrise the agent's prior beliefs about the prior probability of different states in the environment, how states of the environment change and how they generate observations. Typically, they will be vectors, matrices or tensors; however, henceforth we denote them by their corresponding letter in bold. These make up the components needed for the agent to perform AIF.\n\n**A**, also called the *observation model*, represents the state-to-observation likelihood model. This describes how observations depend on or are generated by states of the environment. It is structured as a matrix with a column for each possible environmental state *s*, and a row for each possible observation *o*. Each column is then a categorical probability distribution over the observations that will occur given the environmental state (meaning that each column must contain non-negative values that sum to 1). If the observations are multidimensional (i.e., multiple observations are made at each time point), there is a matrix for each observation modality. If two or more states determine the observation, the likelihood model then becomes a tensor. If **A** is imprecise (i.e., the probabilities are highly entropic and evenly distributed), observations are taken to carry less information about the environment, in many cases leading to more uncertain inferences, and vice versa.\n\n**B**, also called the *transition model*, describes the state-to-state transition probabilities of environmental states *s*. 
**B** encodes the agent's assumptions about how the environment changes over time, depending on its actions. It has a column and a row for each environmental state *s*, where each column is a categorical probability distribution over the states the environment will take on the next time step, given the state it is currently in. If the environment is modelled as multidimensional, there will be a matrix for each environmental state factor. Additionally, there is a separate matrix for each possible action (making each factor in **B** a tensor). This means that for every factor in the model, there may be one or more actions that pick out the appropriate slice of the tensor. Action therefore allows the agent to predict that the environment (and the corresponding observations) will change differently depending on the actions that it chooses. If **B** is imprecise (i.e., highly entropic),", "page_start": 4, "page_end": 4, "source_file": "pubmed7_cc4.pdf" }, { "text": "prior is very high, the posterior will closely reflect the prior, rendering the inference rigid and incapable of adapting to changing environmental conditions—which might be especially problematic in periods of significant changes, such as adolescence or more simply when one changes city, working environment, and friends. Furthermore, as shown in Fig. 1, hierarchical predictive coding architectures have precision values associated with every hierarchical level (whereas, for simplicity, the inference shown in Fig. 2 is not hierarchical). 
The correct balance of precision parameters within and across layers is crucial for accurate inference, as it ensures that the correct levels of confidence are assigned to data and prior information.\n\nFinally, and importantly, aberrant precision control (as well as various combinations of other factors discussed earlier, such as noisy bodily sensations and poor bodily model) can render inference not just incorrect but also highly ambiguous, leaving a person in a permanent condition of uncertainty about whether one is fatigued (when considering the bodily state), happy, or sad (when considering the emotional state), what kind of person one is or what are one's desires (when considering self-models), etc. Importantly, this condition of uncertainty is not limited to perceptual inference but has a cascade effect on decision-making and action selection. Indeed, an uncertain estimate of one's state automatically implies that one has low confidence in the effects of one's plans; for example, it renders more difficult the prediction of whether a run would be too fatiguing or a party too stressful. It is exactly this kind of uncertainty (about the present and the future, the body state or the outcomes of social interactions, etc.) that active inference agents strive to avoid.\n\n#### **Avoiding excessive uncertainty in maladaptive ways**\n\nOur previous discussion clarified that active inference agents have sophisticated (hierarchically deep, temporally extended) models of themselves that permit making inferences at multiple levels about hidden bodily states (which comprise both the classical \"body schema\" and other states that are relevant for allostasis, such as hunger, thirst, and fatigue) and other states related to the emotional and embodied self. These models are essential for ensuring effective regulation and control at multiple levels, from simple reflexes to sophisticated goal-directed behaviors (Tschantz et al. 2022). 
However, in some cases, the aforementioned inferential process might not work properly (e.g. if the sensory channels are too noisy or are assigned excessively high or low precision). As a consequence, a person could experience an excessive or irreducible uncertainty about her bodily and emotional states or about the self, which in turn translates into a loss of confidence about which future courses of action could produce desired outcomes. Crucially, active inference agents follow the imperative to avoid such an uncertainty about the present or the future. Normally, uncertainty minimization strategies are adaptive (e.g. seeking advice if one is uncertain about the direction of the preferred restaurant). However, in some conditions, such as when a person experiences excessive and irreducible uncertainty and when the uncertainty is particularly distressing or related to fundamental life concerns, she might potentially seek \"maladaptive\" ways to reduce it—or methods that reduce uncertainty at the cost of hindering fundamental imperatives of well-being and survival (see also Linson et al. 2020).\n\nIn this perspective, apparently paradoxical actions, such as food restriction and self-injurious behaviors, might be pursued because they could contribute to reducing the (otherwise unmanageable) uncertainty about bodily and emotional states or the self. In other words, in some conditions, the self-injuring pain could be more than compensated by the information gain—and the possibility to generate precise sensations about one's bodily state. By harming the body, we turn it into a very precise source of sensations that relieves us from excessive uncertainty about the present state and the future course of action. Our (simple) example, therefore, illustrates a possible way paradoxical actions could be pursued by active inference agents who endeavor to minimize their uncertainty. 
While self-injuries and other similar behaviors are maladaptive in the sense of reducing the fitness of an organism, they can still emerge as a result of a correct inference that tries to minimize the uncertainty of one's model of the body and the self. This case could particularly fit when some of the (precision) parameters of one's model of the body and the self are not appropriately tuned (Fig. 2), producing excessive levels of uncertainty.\n\nHaving said this, the idea that NSSI behaviors could reflect the imperative to minimize uncertainty is not at odds but complementary to the idea that these behaviors might also be motivated by reward achievement (remember that in active inference, both uncertainty minimization and utility maximization can be in play simultaneously). While NSSI behaviors are associated with a variety of adverse outcomes, such as negative emotions and distress (Klonsky et al. 2003), they can also have paradoxically positive effects by providing a way to relieve or distract from other sources of emotional distress and negative affect (Nock and Prinstein 2004, Chapman et al. 2006, Bresin and Gordon 2013, Selby et al. 2019). The hedonic effect of NSSI behaviors might be further magnified by poor models of one's body and the self, as suggested by evidence that children who engage in NSSI show aberrant responsiveness to rewards (Tsypes et al. 2018). Finally, NSSI behaviors have habitual components, which might contribute to their selection, over and above consideration of utility maximization or uncertainty reduction (Magerl et al. 2012). This body of evidence suggests that if uncertainty minimization is a driver of NSSI behaviors, as suggested here, it could work in concert with other drivers (reward achievement and habit), in ways that are still poorly understood.\n\nFocusing on uncertainty minimization as a possible factor contributing to NSSI behaviors might also help understand the prevalence of NSSI during adolescence. 
As discussed earlier, people in adolescence experience significant changes at many levels from bodily states such as body size to interoceptive and hormonal processes to affective states and the self. As illustrated in Fig. 2, rapid changes (as in the cases of simulated tachycardia and bradycardia) determine high levels of surprise and uncertainty that, in some cases, remain elevated, either because some of the precision parameters that afford model updates are set incorrectly or simply because readapting internal models of the body and the self takes time. In periods of rapid changes, such as adolescence or after very surprising events, there might be a (temporary) misalignment between the predictions of the (outdated) internal model and the incoming sensations. For example, during adolescence, one might use an outdated model that predicts the usual affective states during a party and fail to contextualize novel sensations (e.g. unexpected feelings or interoceptive signals when meeting somebody), hence experiencing high levels of uncertainty. Thus, failing to reduce this uncertainty and achieve a coherent model of oneself could be particularly distressing.", "page_start": 7, "page_end": 7, "source_file": "pubmed1.pdf" }, { "text": "## Discussion\n\nCurrent theories of predictive processing and active inference assume that, to steer adaptive perception and action, the brain forms internal generative models of the environment and of the body within it. Various studies reveal that the brain has rich models of the body; for example, it integrates somatosensory and proprioceptive information into a coherent representation of things like body size and limb position—i.e. a \"body schema.\" More recently, this model-based perspective has been extended to interoception—and the rich sensations we constantly receive from the internal body. 
Theories of interoceptive processing propose that the brain continuously estimates key bodily and homeostatic variables, such as thirst or fatigue levels, perhaps forming something like an \"interoceptive schema.\"\n\nA key reason for forming bodily or interoceptive models is that they permit us to exert accurate control over the variety of signals (e.g. somatosensory and interoceptive) that the body produces. Forming an accurate body schema is prominent for motor control, whereas modeling interoceptive variables (e.g. thirst) is key to keeping them under control by engaging autonomic reflexes (e.g. vasodilation) and allostatic or goal-directed actions (e.g. drinking) when they have incorrect values. The generative modeling perspective can also be extended hierarchically to consider richer models of multimodal experiences and \"embodied self\" that persists in time and anchors our experiences, permitting us to select adaptive courses of action to achieve our favorite goals.\n\nWhile it seems obvious that controlling bodily variables and achieving goals are crucial for survival, this perspective poses a fundamental challenge. In control theory and active inference, \"controlling\" the body ensures that the body generates the preferred outcomes with high (hedonic or pragmatic) value, e.g. safe levels for thirst and fatigue. This idea applies naturally to many of our activities that pursue some form of biologically adaptive function or well-being, such as ensuring that we keep our bodies healthy and consume good food (Sterling and Eyer 1988, Sterling 2012). However, it fails to explain why we engage in some activities that are apparently maladaptive and contradict our primary biological imperative to ensure body health. Perhaps the most puzzling examples are pathological behaviors (e.g. non-suicidal self-harm or starvation), which are common across psychopathological conditions. 
In these cases, the control exerted over the body and its sensations might serve the purpose of generating outcomes with high (hedonic or pragmatic) values that nevertheless run against our homeostatic and survival imperatives (e.g. pain and excessive levels of hunger).\n\nIn this article, we started with formal accounts of brain processing based on active inference to discuss the mechanisms and functional purpose of the (apparently) maladaptive ways to \"control the body\" that arise in these and other psychopathological behaviors. We first discussed how we build models of the world, of our bodily and interoceptive processes, of our emotions, and of the embodied self, which provides a sense of understanding of reality and affords adaptive control at many levels, from the allostatic regulation of our physiological states to the achievement of our individual and social goals. Then, we discussed under which conditions we can become highly uncertain about our current state and the future course of action. These conditions include both contextual factors (e.g. periods of noteworthy changes or stress) and factors related to the person's internal models (e.g. poor models in which precision parameters are incorrectly set). We next turned to active inference and discussed how reducing uncertainty (not just maximizing utility) is a key imperative in this framework. This implies that an active inference agent can sometimes privilege uncertainty minimization over utility maximization. In extreme conditions, such as when interoceptive uncertainty is excessive or difficult to reduce, a person could develop maladaptive strategies to deal with it, such as acting on the body to produce interoceptive sensations of pain or starvation that reduce interoceptive uncertainty.\n\nThe centrality of physiological processes and bodily information for the sense of self has been widely discussed by interoceptive research (Seth et al. 2012, Quigley et al. 2021). 
Here, in continuity with previous works (Barca and Pezzulo 2020), we suggest that (i) some pathological behaviors—that \"act on the body\" in maladaptive ways—might be considered as strategies for modifying internal models and the sense of self when it is deficient, through bodily sensations and (ii) the sense of self can be deficient when bodily information is uncertain, and this can happen not only in clinical conditions but also during pivotal periods of developmental transition, e.g. in adolescence.\n\nThe theoretical perspective offered here leaves several important questions unaddressed. First, even if uncertainty reduction might be a central drive in self-injury behaviors, it is unclear what kinds of uncertainty (if any) specifically trigger the paradoxical behaviors. It may be only the uncertainty at deep hierarchical levels (e.g. at the level of self-models) that promotes paradoxical behaviors. Alternatively, it could be possible that it is not so much the kind of uncertainty that matters but rather its associated distress, which in turn could be amplified by conditions like the intolerance of uncertainty. While these and alternative hypotheses remain to be tested in future research, they might in the future lead to novel tailored interventions. Current reviews of NSSI interventions (see, e.g. Turner et al. 2014, Witt et al. 2021) outline the various treatments currently available (e.g. psychological and psychosocial interventions, pharmacological treatments, and a combination of both), but underline the need for further data on their effectiveness. The use of formal models of brain function to characterize the mechanisms of psychopathology (Friston et al. 2014, Stephan and Mathys 2014) might help conceptualize dysfunctional behaviors in operationalizable terms. 
In this vein, one might delineate interventions aimed at reducing the uncertainty of self-models by starting from the bodily self and the definition of self-other boundaries (if these turn out to be the critical aspects for the patient). In this endeavor, techniques such as virtual reality and robotics might help elucidate which levels of the multisensory integration process of the bodily self might be compromised (Dieguez and Lopez 2017, Tsakiris 2017, Serino et al. 2018). Virtual reality along with role-playing sessions and the use of avatars are increasingly considered effective tools for the training of clinicians who deal with individuals engaging in NSSI (Taliaferro et al. 2023). It remains to be tested whether the use of virtual reality or similar interventions—and the definition of contexts and tasks aimed at reducing the uncertainty of the bodily self—might also be viable for individuals engaging in NSSI.\n\nSecond, in this paper, we have mainly focused on uncertainty reduction, but as we reviewed earlier, there are other alternative (or complementary) perspectives on the genesis of NSSI that consider elements such as affective regulation. In addition to the studies discussed earlier, other insights into the pathological mechanisms that might underlie NSSI come from the analysis of clinical populations. For example, dysregulations of the", "page_start": 8, "page_end": 8, "source_file": "pubmed1.pdf" }, { "text": "sentence would be true or false. One of its central methodological assumptions is the principle of compositionality. It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. For example, the meaning of the verb phrase \"walk and sing\" depends on the meanings of the individual expressions \"walk\" and \"sing\". Many theories in formal semantics rely on model theory. 
This means that they employ set theory to construct a model and then interpret the meanings of expressions in relation to the elements in this model. For example, the term \"walk\" may be interpreted as the set of all individuals in the model that share the property of walking. Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language.[173]\n\n#### **Epistemology of logic**\n\nThe epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true.[174] This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false.[175] The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori. [176] In this regard, it is often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths.[177] A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary. [178]\n\nConjunction (AND) is one of the basic operations of Boolean logic. It can be electronically implemented in several ways, for example, by using two transistors.\n\nSome theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. According to this view, they may be explored by studying general patterns of the fundamental sciences. 
For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula (*p* ∧ (*q* ∨ *r*)) is equivalent to ((*p* ∧ *q*) ∨ (*p* ∧ *r*)). This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic.[179]\n\n# **History**\n\nLogic was developed independently in several cultures during antiquity. One major early contributor was Aristotle, who developed *term logic* in his *Organon* and *Prior Analytics*. [183] He was responsible for the introduction of the hypothetical syllogism[184] and temporal modal logic.[185] Further innovations include inductive logic[186] as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. [187] It has now been superseded by later work, though many of its key insights are still present in modern systems of logic.[188]", "page_start": 15, "page_end": 15, "source_file": "wikipedia1.pdf" }, { "text": "ing the temporal dynamics of belief changes in experimental participants. Dynamic belief trajectories can then be related to other (for example, physiological) measures, as is usual in model-based neuroscience [65]. This method can also, in principle, be used for fitting models to other types of experimentally observable systems, like animals, organoids [66], and simulated or emergent systems [67]. 
The package can also be used for agent-based modelling in general, for repeating earlier analyses with sampling based model-fitting and for comparing POMDP-based AIF models directly to other types of models.\n\nSince they implement full approximate Bayesian inferences, AIF models are computationally more demanding than many approaches traditionally used in cognitive and agent-based modelling, in particular when the dimensionality of the generative model is large. This means that models with highly multidimensional or complex behaviour and large numbers of agents can be computationally infeasible to implement, especially given the additional computational demands introduced by fitting these models to empirical data. Avenues for addressing this implicit scaling problem were proposed in the context of machine learning applications [68,69], and with the use of simplifying assumptions—the use of which are ubiquitous in computational modelling—AIF has been used to model multi-agent phenomena, such as opinion dynamics [15,70], coordinated foraging [71] and fish school movements [12]. It remains to be explored how AIF models can be applied to highly complex natural phenomena, such as a concrete election, which underscores the need for efficient but flexible and accessible software tools in the field.\n\nThere are many ways in which ActiveInference can be improved. It would be useful to extend the set of dynamic belief states to include prediction errors since they are often used for model-based neuroscience. This would entail departing from discrete state-space (i.e., POMDP) models to consider continuous state-space models apt for Bayesian filtering or predictive coding (see below). An alternative would be to generate prediction errors from belief updating under discrete models, where prediction errors can be read as the (KL) divergence between posterior and prior beliefs (i.e., complexity or information gain). 
A simple interface could be added for creating custom parametrisations of the requisite parameters that could be parametrised with Boltzmann or Gibbs distributions, as opposed to Dirichlet distributions. Parameter learning could be extended to all generative model parameters, as well as in parametrised forms (e.g., so that it is the Boltzmann parameter or temperature of the parameters that is learned); similarly for the precision over expected free energies *γ*. Preference priors should also be implementable for environmental states, in addition to observations, and **A** can be made action dependent.\n\nA library of pre-made canonical POMDP models could be created so that users can easily implement them directly. Alternatives to the fixed-point iteration method for updating posteriors over environmental states could be included, like the marginal message passing algorithm. There are various ways in which the package can be made more computationally efficient, and it could be compared with other software implementations. There are plenty of utility and plotting functions that could be added to the package to make it easier to use and to facilitate integration with the model-fitting packages it relies on; for example, to allow for combining the models with linear regressions to compare parameter values of different populations in a single model. More complex types of POMDP models can also be added, like hierarchical and temporally deep POMDPs. Model structure learning could be considered, where different model structures are compared and chosen between by evaluating their free energies. Sophisticated inference, where predictions are also made about changes in one's own beliefs—depending on expected action-dependent observations in the future—could also be implemented [58]. 
Finally, the package could be extended to other types of generative models than POMDPs, including other universal models, like generalised filtering [17] and Hierarchical Gaussian Filter models [41], as well as custom", - "page_start": 28, - "page_end": 28, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.\n\n# **Definition**\n\nThe word \"logic\" originates from the Greek word *logos*, which has a variety of translations, such as reason, discourse, or language. [4] Logic is traditionally defined as the study of the laws of thought or correct reasoning, [5] and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences.[6] An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion.[7] These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments.[8] Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic.[9]\n\n# **Formal logic**\n\nFormal logic is also known as symbolic logic and is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. 
In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content.[10]\n\nFormal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false.[11] For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. [12] For example, modus ponens is a rule of inference according to which all arguments of the form \"(1) *p*, (2) if *p* then *q*, (3) therefore *q*\" are valid, independent of what the terms *p* and *q* stand for. [13] In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. [14] A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim \"either it is raining, or it is not\".[15] These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from *p* to *q* is deductively valid then the claim \"if *p* then *q*\" is a logical truth.[16]\n\nFormal logic uses formal languages to express and analyze arguments.[17] They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. [18] This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid.[19] Because of the reliance on formal language, natural language arguments cannot be studied directly. 
Instead, they need to be translated into formal language before their validity can be assessed.[20]\n\nThe term \"logic\" can also be used in a slightly different sense as a countable noun. In this sense, *a logic* is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them.[21] Starting in the late 19th century, many", "page_start": 1, "page_end": 1, "source_file": "wikipedia1.pdf" } ] }, { "references": { "source_file": "pubmed1.pdf", "query": "At what stage of childhood does the construction of narrative identity take place?", "target_page": 3, "target_passage": "Among the challenges that adolescents have to face are the structuring of a “narrative identity” or self-story, featuring the development of a sense of personal identity that integrates past experiences with current, and future goals and meanings in a coherent whole over time ", "chunk_present": { "presence": true, "index": 0 } }, "top_chunk": [ { "text": "reciprocity with caregivers and peers. Thus, in parallel to the negotiation of identity with caregivers (through a relative detachment from them, a renegotiation of intimacy, and the questioning of their confirmatory authority), the modifications of friendship structures—from childhood to adolescence—lay the ground for the progressive recognition of social contexts and peer relationships as the elite territories for the modulation and exploration of personal identity. The redefinition that the adolescent has to face in these territories of exploration (of the self as an individual separated from the other and of the self with the other) might pass through a phase of reduced coherence in the narration of the self and hence an increased level of uncertainty. 
Coherence in the self's narrative is considered a measure of well-being and has been associated with psychopathology in adulthood (Klimstra and Denissen 2017) and adolescence (Lind et al. 2020, Shiner et al. 2021). For example, narrative incoherence has been found to be associated with personality disorders in adolescents (Lind et al. 2019), where \"identity diffusion\" (e.g. feelings of emptiness and being fragmented and lack of a sense of continuity over time) might be considered an expression of high levels of uncertainty of the self.\n\nEmotion-wise, a developmental trend toward an increased specifcity of emotion-related maps of bodily sensations (Barca et al. 2023)—a proxy of interoceptive representations of emotions—has been reported from children aged 6 years to adulthood (Hietanen et al. 2016). Pubertal changes encompass dramatic bodily and neuroendocrine system changes, comprising—but not reduced to—changes in the reproductive, adrenal, and growth axes (Cameron 2004). Thus, adolescents might face at least four sources of uncertainty: (i) the uncertainty due to physiological alterations related to bodily changes and to modifcation in hormonal levels leading to sexual maturity; (ii) the uncertainty in selfidentity (i.e. the structure of self-awareness) and personal identity (i.e, the narrative diachronic self) (Drummond 2021), which might be coupled with changes in body image and the development of gender identity; (iii) the uncertainty in affect regulation, with the emergence of new forms of affectivity as feelings of love and sexual attraction toward a partner; and (iv) uncertainty in the social context, with respect to their social status and role expectations in the adult society. Such high levels of uncertainty might lead to a poorly defned sense of self, with unclear boundaries and a sense of emptiness. 
In this context, pain becomes a possible way to recover a bodily sense of self, and self-injurious behavior might be instantiated as an attempt to reduce the rise in the levels of uncertainty in these (and potentially other) domains, toward the transition to adulthood (see Miller et al. 2020 for a closely related approach on addiction).\n\n## Active inference, interoceptive processing, and uncertainty reduction\n\nActive inference is based on the idea that in order to engage in adaptive allostatic regulation and goal-directed behavior, living organisms continuously strive to minimize the surprise of their sensations or, more formally, an upper bound to surprise: variational free energy (Parr et al. 2022). Notably, the (expected) free energy minimization processes that drive active inference jointly consider two complementary objectives. The former (utilitarian) objective is to realize one's preferences, such as being satiated or safe, by minimizing the discrepancy between preferred sensations (encoded as \"priors over observations\" in active inference) and current sensations in different modalities (e.g. interoceptive or exteroceptive). The latter (epistemic) objective is to reduce uncertainty about one's estimated state. This means that active inference agents tend to avoid ambiguous states, encompassing the avoidance of ambiguous places where self-localization is challenging, ambiguous social situations where safety is uncertain, and ambiguous bodily states, such as unsure feelings of fatigue. However, one apparent exception to this aversion to ambiguity arises when exploring novel states implies the opportunity to learn new things and enhance one's model; see Friston et al. (2017) for a discussion. Furthermore, and importantly, active inference agents will actively operate in the environment to reduce their ambiguity; for example, by actively seeking informative sensations that disambiguate in which location they are (e.g. 
by looking for traffc signs), whether their social context is safe or unsafe (e.g. by trying to understand other's intentions from their facial expressions and actions), or whether they are currently fatigued (e.g. by putting attention to one's heart), happy, or sad.\n\nThe last examples—disambiguating one's fatigue and emotional states—may seem strange if one assumes that we do have direct access to the body- and allostasis-related states (e.g. states of satiation, thirst, and fatigue) and to our emotions (e.g. we automatically know whether we are happy or sad). However, one assumption of active inference is that one's bodily and emotional states are not necessarily observable but, instead, \"hidden states\" that need to be inferred on the basis of sensations (especially, but not exclusively, of interoceptive sensations from the inside of the body) and of an implicit, unconscious model of how the body functions (Barrett and Simmons 2015, Pezzulo et al. 2015, Seth and Friston 2016). In other words, the same inferential process that allows active inference agents to estimate the hidden state of the external environment (e.g. the presence or absence of an object in the environment) is also used to estimate other hidden states, such as fatigue, happiness, or sadness. This implies that one can also be wrong, or be fooled, about these states; for example, we could experience the \"interoceptive illusion\" of feeling more fatigued than our physiological parameters would afford (Iodice et al. 2019).\n\nExtending this idea even further, one can assume that certain emotional states, as well as self-awareness and the (embodied) sense of self—and the feeling of continually being the same person—could be constructed similarly: it would be the result of an inferential process that integrates bodily sensations and other experiences over time (Gu et al. 2013, Seth 2013, Stephan et al. 2016, Barrett 2017). 
Figure 1 illustrates graphically this perspective by showing a (schematic) hierarchical generative model that links (exteroceptive, interoceptive, and proprioceptive) sensations at lower levels with multimodal models of hidden bodily states, such as fatigue and hunger at intermediate layers, and, fnally, with temporally extended, integrative models of the emotional and embodied self at the higher hierarchical level. The hierarchical generative model recapitulates a simple predictive coding architecture, which includes various putative brain areas or networks (gray ovals) arranged hierarchically. In the schematic, networks for unimodal (exteroceptive, proprioceptive, and interoceptive) processing are situated at the lowest hierarchical level, multimodal networks are at an intermediate level, and networks for processing a persistent model of the self are at the highest level. Note that this simple schematic is not supposed to recapitulate brain anatomy but to illustrate the basic principles of hierarchical generative models and predictive coding; (for a discussion of the mapping between predictive coding networks and brain anatomy, see Parr et al. 2022). Each network includes cells encoding predictions (black nodes) and prediction errors (red nodes). These units", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed1.pdf" - }, - { - "text": "# *3. Why Books are Important to Training AI*\n\nDespite the proliferation of online content and some speculating that books would simply die out with the advent of the Internet,9 books remain a critical vehicle for disseminating knowledge. The more scientists study how books can impact people, the less surprising this is. Our brains have been shown to interact with longform books in meaningful ways: we develop bigger vocabularies when we read books; we develop more empathy when we read literary fiction; and connectivity between different regions of our brain increases when we read. 
10\n\nIn that light, it might be unsurprising that books are important for training AI models. A broadly accessible books dataset could be useful not only for building LLMs, but also for many other types of AI research and development.\n\n## *Performance and Quality*\n\nThe performance and versatility of an AI model can significantly depend on whether the training corpus includes books or not. Books are uniquely valuable for AI training due to several characteristics.\n\n- **Length:** Books tend to represent longer-form content, and fiction books, in particular, represent long-form narrative. An AI trained on this longer-form, narrative type of content is able to make connections over a longer context, so instead of putting words together to form a single sentence, the AI becomes more able to string concepts together into a coherent whole; even after a book is divided into many \"chunks\" before the process of tokenization, that will still provide long stretches of text that are longer than the average web page. While Web documents, for instance, tend to be longer than a single sentence, they are not typically hundreds of pages long like a book.\n- **Quality:** The qualities of the training data impact the outputs a tool can produce. Consider an LLM trained on gibberish; it can learn the patterns of that gibberish and, in turn, produce related gibberish, but will not be very useful for writing an argument or a story, for instance. In contrast, training an LLM on books with well-constructed arguments or crafted stories could serve those purposes. While \"well-constructed\" and \"crafted\" are necessarily subjective, the traditional role of editors and the publishing process can provide a useful indicator for the quality of writing inside of books. 
What's more, metadata for books — information such as the title, author and year of publication — is often more comprehensive than metadata for information\n\n<sup>&</sup>quot;the novel, too, as we know it, has come to its end\" — \"The End of Books.\" *Archive.nytimes.com*, 21 June 9 1992, archive.nytimes.com/www.nytimes.com/books/98/09/27/specials/coover-end.html. Accessed 27 Aug. 2021.\n\nStanborough, Rebecca Joy. \"Benefits of Reading Books: For Your Physical and Mental Health.\" 10 *Healthline*, 15 Oct. 2019, www.healthline.com/health/benefits-of-reading-books#prevents-cognitivedecline.", - "page_start": 5, - "page_end": 5, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "and rephrased and asked follow-up questions to clarify and confirm the correct understanding of participants' answers.\n\nAs similar themes arose repeatedly and no new themes emerged in the final interviews, data saturation was achieved (23).\n\n#### 2.7 Analysis\n\nThe transcribed material was analyzed using systematic text condensation (STC) (30) and was organized utilizing NVivo (version 1.7.1). STC is a method for cross-case analysis inspired by phenomenology. It involves four-steps: (1) identification overall themes from the empirical material, (2) extraction of meaning units from the text which were then coded into groups, (3) condensation of all meaning units within the subgroups into an artificial quotation, that summarize and represents participants' voices, (4) recontextualization of the material into categories, presented as analytical texts. The process is iterative, resulting in continuous movement between the transcripts and within different steps of the analysis. An example of the STC process is illustrated in Figure 1.\n\nThe first author (SSHD) transcribed the interviews and read all material several times, while BN and ECA read most of the interviews before preliminary themes were agreed on. 
SSHD identified meaning units adhering to these themes and coded them into groups. Condensates of the subgroups were written by SSHD and discussed by all researchers. SSHD then recontextualized the material by forming categories described as analytical texts supplemented by quotes, a process that was discussed and revised several times by all authors. All authors contributed to writing the manuscript. Enactive theory was used to interpret the results, aiming at extracting new knowledge beyond what the informants had provided (28).\n\n## 3 Results\n\nParticipants were interviewed one-on-one by the first author (SSHD) in November and December 2021 (mean = 14 days postoutdoor group). The time and place of the interviews were agreed upon according to participants' preferences (undisturbed office (n = 14), participant's home (n = 1)). None dropped out. The interviews lasted between 40 and 70 min (mean = 54, total = 822) and were audio-recorded.\n\nThe results are presented as four categories summarized in Figure 2 and described below as analytic texts and illustrative quotes referenced with the participant ID and EDSS score.\n\nFIGURE 1\n\nExample of the analysis process (excerpts).", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed13.pdf" - }, - { - "text": "- (ii) to access critical public services, including—\n\t- (aa) social services,\n\t- (bb) services provided to victims (such as victims of crime),\n- (iii) to move to a different place for self-isolation where it becomes impracticable to remain at the address at which they are self-isolating;\n- (j) for the purposes of, or connected with, undertaking a test in accordance with Schedule 8 or Schedule 10;\n- (k) if self-isolating in a goods vehicle by virtue of paragraph (3)(d)—\n\t- (i) for sanitary reasons,\n\t- (ii) to take exercise outside,\n\t- (iii) where required or permitted by that paragraph, to move to a different place for selfisolation,\n\t- (iv) to inspect the vehicle or its load or to carry 
out any other task required for the safe and continued operation of the vehicle, including refuelling, and\n\t- (v) for any other reason or purpose specified in this paragraph.\n\n(12) For the purposes of this regulation, the place referred to in paragraph (3) includes the premises where P is self-isolating together with any garden, yard, passage, stair, garage, outhouse, or other appurtenance of such premises.\n\n(13) If P is a child, any person who has custody or charge of P during P's period of self-isolation must ensure, so far as reasonably practicable, that P self-isolates in accordance with this regulation.\n\n(14) If P has arrived from Wales or Scotland and is in England, temporarily, for a reason which would constitute an exception under paragraph (11), P is not required to comply with this regulation.\n\n(15) If P is a person described—\n\n- (a) in paragraph 1(1) of Schedule 4—\n\t- (i) where P is a person described in paragraph 1(1)(a) to (k) of, and meets the conditions set out in paragraph 1(3) of, that Schedule, P is not required to comply with this regulation,\n\t- (ii) in any other case, paragraph (3)(b) and (c) does not apply to P;\n- (b) in paragraph 1(2) of Schedule 4 (essential work for foreign country etc), P is not required to comply with this regulation;\n- (c) in paragraph 33 of Schedule 4 (healthcare), paragraph (2) does not require P to remain in isolation in the circumstances set out in paragraph 33 of that Schedule;\n- (d) in paragraph 43 of Schedule 4 (horticultural work)—\n\t- (i) paragraph (2) does not require P to remain in isolation from any other person who is living or working on the specified farm,\n\t- (ii) paragraph (3)(a)(i) applies with the modification that the address specified by P as the address at which they intend to self-isolate must be the specified farm, where \"specified farm\" has the meaning given in paragraph 43 of Schedule 4;\n- (e) either—\n\t- (i) in paragraph 44 of Schedule 4 (elite sports),\n\t- (ii) in 
sub-paragraphs (1)(h) to (l) of paragraph 2 of Schedule 11 (exemptions from additional measures applicable to arrivals from category 3 countries and territories),\n\nP satisfies the requirements of paragraph (2) if P complies with the relevant conditions specified in paragraph 44(4) of Schedule 4;", - "page_start": 15, - "page_end": 15, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "# **5. Previous Projections**\n\nAt the end of September 2014 the published prison population was within 1.8 % of the 2013 Scenario 2 (central) projection, and within 3.4 % of the 2013 Scenario 1 projection and 0.2 % of the 2013 Scenario 3 projection. This does not indicate which scenario the actual prison population will track going forward.\n\nDifferences between the 2013 projections and the actual population could be explained by changes, different to those projected, in overall demand, offence mix, age and gender of defendants, court routes, custody rates or sentence lengths.\n\nChart 3 plots the 2014 Central Scenario projection against the three 2013 prison population projections. The 2014-2020 Central Scenario projection is above all three scenarios from last year. The higher level of the new projections can be attributed to a more serious case mix coming into the courts with a resulting increase in average custodial sentence lengths. 
The projection for June 2019 in the Central Scenario this year is 10.2 % above the equivalent scenario (Scenario 2) last year.\n\n**Chart 3: Comparing 2013 and 2014 projections (November 2014 – December 2020)**", - "page_start": 14, - "page_end": 14, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "- **33.**—(1) Any of the following—\n\t- (a) a person (\"P\") who—\n\t\t- (i) before travelling to the United Kingdom has made arrangements with a provider in the United Kingdom to receive healthcare (or, where P is a child, on whose behalf such arrangements have been made),\n\t\t- (ii) is in possession of written confirmation of the arrangements from the provider,\n\t\t- (iii) has travelled to the United Kingdom to receive that healthcare, and\n\t\t- (iv) is attending a place to receive that healthcare or is travelling directly between that place and the place where they are self-isolating;\n\t- (b) a person who—\n\t\t- (i) is accompanying P for the purpose of providing necessary care or support to P in the circumstances referred to in sub-paragraph (1)(a)(iv), or\n\t\t- (ii) is travelling, for the purpose of so accompanying P, directly between the place where they are self-isolating and either of the places referred to in sub-paragraph (1)(a)(iv),\n\nwhere that person has travelled to the United Kingdom for that purpose and is in possession of the confirmation referred to in sub-paragraph (1)(a)(ii) or a copy of it;\n\n- (c) an accompanying child who is accompanying P or, where P is a child, is accompanying a person referred to in sub-paragraph (1)(b);\n- (d) a live donor who is attending a place for the purpose referred to in the definition of \"live donor\" or is travelling directly between that place and the place where they are selfisolating.\n- (2) For the purposes of this paragraph—\n\t- (a) \"accompanying child\", in relation to P, means a child who has arrived in England with P and for whom P has responsibility, or where P is a child, a 
child who has arrived in England with the person referred to in sub-paragraph (1)(b) and for whom that person has responsibility;\n\t- (b) \"healthcare\" means all forms of healthcare provided for individuals, whether relating to mental or physical health, including healthcare in connection with giving birth;\n\t- (c) \"live donor\" means a person who—\n\t\t- (i) has travelled to the United Kingdom for the purpose of donation of material which consists of or includes their human cells pursuant to arrangements made with a provider in the United Kingdom before travelling to the United Kingdom, and which are to be used by the provider for the purpose of providing healthcare, and\n\t\t- (ii) is in possession of written confirmation of the arrangements from the provider;\n\t- (d) \"provider\" means a provider of healthcare;\n\t- (e) references to a place where a person is self-isolating are to a place where they are required to self-isolate, or permitted to be at, by virtue of regulation 9.\n\n**34.**—(1) A person who has travelled to the United Kingdom for the purpose of transporting material which consists of, or includes, human cells or blood and which is to be used for the provision of healthcare by a provider.\n\n(2) For the purposes of sub-paragraph (1)—\n\n- (a) \"blood\" includes blood components;\n- (b) \"healthcare\" and \"provider\" have the meanings given in paragraph 33(2).\n\n**35.** A person who is an \"inspector\" within the meaning given in regulation 8(1) of the Human Medicines Regulations 2012(**a**), or who has been appointed as an inspector under regulation 33 of\n\n<sup>(<</sup>b>a) S.I. 
2012/1916.", - "page_start": 43, - "page_end": 43, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (iv) in the goods vehicle or a hotel, hostel or bed and breakfast accommodation while not undertaking the work described in that paragraph if P is travelling with another person in a goods vehicle with a sleeper cab.\n(4) The address specified by P in the Passenger Locator Form pursuant to paragraph 2(a) of Schedule 6 must be—\n\n- (a) their home;\n- (b) the home of a friend or family member;\n- (c) a hotel, hostel, bed and breakfast accommodation, holiday apartment or home, campsite, caravan park or boarding house, canal boat or any other vessel;\n- (d) a military site or establishment;\n- (e) accommodation facilitated by the Secretary of State for the purposes of P's self-isolation;\n- (f) where P is an asylum seeker, accommodation provided or arranged under section 4, 95 or 98 of the Immigration and Asylum Act 1999; or\n- (g) where P is a person described in paragraph 9(1) of Schedule 10 to the Immigration Act 2016 (powers of Secretary of State to enable person to meet bail conditions), accommodation provided or arranged under that paragraph.\n\n(5) More than one address may be specified as the place at which P intends to self-isolate in the Passenger Locator Form where—\n\n- (a) a legal obligation requires P to change addresses; or\n- (b) it is necessary for P to stay overnight at an address on their arrival in England before travelling directly to another address at which they will be self-isolating.\n\n(6) In paragraph (3)(a)(ii) \"a place at which they intend to self-isolate while in England\" means—\n\n- (a) where the person has completed a Passenger Locator Form, at an intended place of selfisolation specified in that form;\n- (b) where the person has completed a form equivalent to a Passenger Locator Form pursuant to an enactment in Scotland, Wales or Northern Ireland, at an intended place of selfisolation specified in that form;\n- (c) in any 
other case at a place described in paragraph (4)(a) to (c).\n\n(7) P must, on their arrival in England, travel directly to the place at which they are to selfisolate, and must then self-isolate until whichever is the earlier of—\n\n- (a) the end of the 10th day after the day on which they arrived in England or, if later, the end of any period that applies by virtue of paragraph 2 or 3 of Schedule 8;\n- (b) their departure from England; or\n\n- (c) the beginning of P's period of self-isolation, where P or R, where P is a child, is notified under regulation 2A or 2B of the Self-Isolation Regulations(**a**).\n(8) In paragraph (7)(c), \"period of self-isolation\" and \"R\" have the meanings given for the purposes of Part 1 of the Self-Isolation Regulations (see regulations 3 and 5 of those Regulations).\n\n(9) Paragraph (2) does not require P to remain in isolation—\n\n- (a) from any person with whom they were travelling when they arrived in England and who is also self-isolating in the place where P is self-isolating;\n- (b) where P is self-isolating in their home, from any member of their household;\n- (c) where P is self-isolating in the home of a friend or family member, from any member of the household of that friend or family member;\n\n<sup>(<</sup>b>a) A person notified, or a child in respect of whom a notification is given, under regulation 2A or 2B will be required to selfisolate in accordance with those Regulations from the moment the notification is given. Regulations 2A and 2B were inserted by S.I. 
2021/364.", - "page_start": 13, - "page_end": 13, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (viii) paragraph 25 (chemical weapons inspectors),\n\t- (ix) paragraph 26 (space workers),\n\t- (x) paragraph 28 (oil workers),\n\t- (xi) paragraph 29 (offshore oil and gas workers) unless paragraph (4) applies to the person,\n- (xii) paragraph 31 (specialist technical workers),\n- (xiii) paragraph 32 (specialist waste management workers),\n- (xiv) paragraph 35 (medicines inspectors),\n- (xv) paragraph 36 (clinical trial conductors),\n- (xvi) paragraph 37 (clinical investigators),\n- (xvii) paragraph 38 (medical and veterinary specialists),\n- (xviii) paragraph 39 (infrastructure workers), or\n- (xix) paragraph 40 (communications operation workers).\n\n(2) In paragraph (1)(b), the reference to persons required to self-isolate under regulation 9 does not include anyone who may temporarily cease to self-isolate by virtue of regulation 9(15)(f)(ii), (15)(g)(ii), or (15)(i) (and accordingly regulation 6 does not apply to such persons).\n\n(3) Regulation 7 (requirement to undertake workforce tests) applies to a person who is not required to self-isolate under regulation 9 by virtue of any sub-paragraph of regulation 9(15) and the following paragraphs of Schedule 4, or who may temporarily cease to self-isolate or whose obligation to self-isolate under that regulation is otherwise modified by virtue of those provisions—\n\n- (a) paragraph 2 (UK officials with border security duties);\n- (b) paragraph 3 (officials involved in essential defence activities);\n- (c) paragraph 6 (seamen and masters) other than seamen and masters of fishing vessels within the meaning of the Merchant Shipping Act 1995(**a**);\n- (d) paragraph 7 (pilots);\n- (e) paragraph 8 (inspectors and surveyors of ships);\n- (f) paragraph 9 (aircraft crew and pilots);\n- (g) paragraph 10 (international rail crew, passenger and freight operators);\n- (h) paragraph 13 (road haulage workers);\n- (i) 
paragraph 15 (Channel Tunnel system workers);\n- (j) paragraph 18 (repatriated prisoners);\n- (k) paragraph 19 (international prison escorts);\n- (l) paragraph 27 (aerospace engineers and aerospace workers);\n- (m) paragraph 34 (persons transporting human blood etc.); or\n- (n) paragraph 43 (seasonal agricultural workers).\n\n(4) Regulation 7 also applies to a category 1 arrival who would have been a person to whom paragraph (3) applied if that person had arrived from a category 2 country or territory.\n\n(5) Regulation 8 (test requirements: offshore installation workers) applies to a worker who falls within the description in paragraph 29(1)(a) of Schedule 4 who arrives in England and is required to undertake or commence activities on an offshore installation, including critical safety work on an offshore installation.\n\n<sup>(<</sup>b>a) 1995 c. 21.", - "page_start": 7, - "page_end": 7, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "#### **Create a user with administrative access**\n\nAfter you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.\n\n#### **Secure your AWS account root user**\n\n- 1. Sign in to the AWS Management Console as the account owner by choosing **Root user** and entering your AWS account email address. On the next page, enter your password.\nFor help signing in by using root user, see Signing in as the root user in the *AWS Sign-In User Guide*.\n\n- 2. Turn on multi-factor authentication (MFA) for your root user.\nFor instructions, see Enable a virtual MFA device for your AWS account root user (console) in the *IAM User Guide*.\n\n#### **Create a user with administrative access**\n\n- 1. Enable IAM Identity Center.\nFor instructions, see Enabling AWS IAM Identity Center in the *AWS IAM Identity Center User Guide*.\n\n- 2. 
In IAM Identity Center, grant administrative access to a user.\nFor a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the *AWS IAM Identity Center User Guide*.\n\n#### **Sign in as the user with administrative access**\n\n- To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.\nFor help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the *AWS Sign-In User Guide*.", - "page_start": 14, - "page_end": 14, - "source_file": "serverless-core.pdf" - }, - { - "text": "**Figure 11: Number of recent (within two years) OCU initiates presenting to treatment in 2005 and 2013, by age of individual at first presentation.**\n\nThe mode age of initiation has shifted from around 18 to around 25 and there is an older age profile throughout. Rises in average age of initiation have also been reported recently in cohorts of Australian injecting drug users (Horyniak et al., 2015). There appear to be two possible explanations.\n\n- There is a genuine shift towards new initiates being older, and for them to present to treatment much faster than in previous years.\n- There is a consistent, but small number of individuals who mis-report their age of onset when attending treatment i.e. who report that they have only been using opiates/crack for a short period when in fact they have been using for a far longer period, and that this is starting to really bias the numbers for recent cohorts because attendees from the original epidemic are becoming smaller.\n\nIt is possible then that the flattening we observe in the incidence trend is due to a small in-flux of older initiates, although mis-reporting may also explain that phenomenon. 
Either way though, as this analysis has made clear throughout, absolute numbers of new OCUs appear to be small – probably fewer than 10,000 per annum and the numbers of those involved with crime will be smaller still. In addition, despite a flattening in the probable trend in new users, there is currently no sign that it is likely to tip upwards. If anything, the data suggest the downward trend is set to resume, though clearly it remains important to monitor the situation.", - "page_start": 28, - "page_end": 28, - "source_file": "legal2_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "OTC_NSANY_2004.pdf", - "query": "What was the indicator related to increasing Nissan's research and development activities in terms of publication of scientific articles in 2004?", - "target_page": 46, - "target_passage": "And the number of research papers we present at societies such as The Japan Society of Mechanical Engineers rose dramatically in fiscal 2004. ", - "chunk_present": { - "presence": true, - "index": 6 - } - }, - "top_chunk": [ - { - "text": "DESPITE NISSAN'S RECORD OPERATING RESULT IN FISCAL 2004, ITS STOCK PERFORMANCE RETURN WAS NEGATIVE AND LOWER THAN THE TOPIX INDEX. THE INVESTOR RELATIONS TEAM WAS STRENGTHENED AT THE START OF FISCAL 2005 TO BETTER ADDRESS THE NEEDS OF INVESTORS AND ENHANCE THEIR UNDERSTANDING OF NISSAN'S PERFORMANCE. INVESTORS WILL NOW BE ABLE TO GAIN A MORE IN-DEPTH VIEW OF THE COMPANY'S OPERATIONS AND PERFORMANCE INDICATORS.\n\n#### **Share Performance in Fiscal 2004**\n\nNissan's share price began at ¥1,143 at the beginning of fiscal 2004 and ended the fiscal year at ¥1,099, generating a negative return of 3.85 percent. Total shareholder return (TSR) was -1.67 percent, while the dividend yield came to 2.18 percent (¥24 per share dividend, divided by the ¥1,099 closing price). 
Adverse movements in foreign exchange rates and commodity price hikes adversely affected Nissan's profitability, which was reflected in the share price. In addition, specific events relating directly to the company also had a negative impact. Later in this report, corporate officers will explain what actions Nissan has undertaken to ensure better performance.\n\n#### **Payout Policy**\n\nNissan announced its NISSAN Value-Up three-year dividend policy, covering the period from fiscal 2005 to fiscal 2007, at the annual general meeting of shareholders on June 23, 2004. Nissan proposes a long-term dividend policy to provide more visibility and improve transparency into the ways in which Nissan rewards its shareholders. Nissan believes that a long-term dividend policy reduces uncertainty for investors who already own or are considering acquiring Nissan stock.\n\n#### **Fiscal Year 2004 Share Performance** (Index: April 1, 2004=100)\n\n80 Apr. **2004 2005** \n\n#### **IR Activities**\n\nUnder NISSAN Value-Up, the IR team's performance will be evaluated based on the price-earnings ratio (PER) and volatility relative to our major competitors. PER is used to measure how successfully the IR team manages market expectations about Nissan in order to maintain the Nissan share price close to an intrinsic value. The other measure, volatility, is used to measure the risk investors perceive when considering Nissan stock. If Nissan can successfully reduce volatility, the minimum return required by investors should decline. The IR team believes that a strengthening of disclosure activities is required to improve both measures. The team plans to disclose not only financial results but also more forward-looking information about Nissan fundamentals such as technology and product. Such forward-looking information helps investors to forecast future performance more precisely and reduces uncertainty about the future. 
As a consequence, Nissan will increase the number of investor conferences, events, and teleconferences during fiscal 2005.\n\n#### **Five-Year Share Performance**", - "page_start": 16, - "page_end": 16, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Due to changes in government regulations, information on risks involved in business operations has been disclosed in the Yukashoken-Houkokusho for the year ended March 31,2005 as follows:\n\n#### Economic Factors\n\nThe demand for products manufactured by Nissan is affected by the economic conditions in each country or market in which they are offered for sale. Nissan conducts its operations all over the world and, in particular, in the major markets of North America, Europe, and Asia, to say nothing of Japan. While Nissan strives to develop a comprehensive and integrated projection of the global economic outlook, any greater-than-anticipated downturn in one of these markets may have a significant effect on Nissan financial position and results of operations.\n\n#### International Activities and Overseas Expansion\n\nNissan's manufacturing and marketing activities outside Japan are conducted in the United States, in Europe, and in the developing and emerging markets of Asia. Nissan forecasts and evaluates a wide variety of risks inherent in doing business in such overseas markets including the following factors, each of which entails a greater-than-anticipated level of risk:\n\n- Unfavorable political or economic factors\n- Legal or regulatory changes\n- Potentially adverse tax consequences\n- Labor disputes including strikes\n- Difficulties in recruiting and retaining personnel\n- Social, political or economic turmoil due to terrorism, war, or other destabilizing factors.\n\n#### Research and Development\n\nNissan's technology must be \"real world\"—useful, pragmatic and easy to use. Nissan anticipates the nature and scope of the market demand, and then prioritizes and invests in new technologies. 
Nonetheless, any sudden and greater-than-anticipated changes in its business environment or in customer preferences may impact negatively on customer satisfaction with these new technologies.\n\n#### Product Defects\n\nNissan places a high priority on safety and does its best to enhance safety from the standpoint of research and development, manufacturing and sales. Although Nissan takes out insurance policies to cover product liability, this does not necessarily mean that all potential defects and the related liabilities are fully covered. If Nissan were to implement strict product recalls for its customers, Nissan would incur significant additional expenses which could adversely affect its financial position and results of operations.\n\n#### Fluctuation in Foreign Currency Exchange Rates\n\nNissan's Japanese operations export vehicles to various countries around the world. In general, the appreciation of the yen against other currencies adversely affects Nissan's financial results of operations and, on the contrary, the depreciation of the yen against other currencies favorably affects Nissan's financial results of operations. Any sharp appreciation of the currencies of those countries against the yen could lead to increases in both procurement and production costs which would adversely affect Nissan's competitiveness.\n\n#### Derivatives\n\nNissan utilizes derivatives transactions for the purpose of hedging its exposure to fluctuation in foreign exchange rates, interest rates and commodity prices. While Nissan can hedge against these risks by using derivatives transactions, Nissan, by so doing, may miss the potential gains which could result from seizing the market opportunities to profit from such fluctuation in exchange rates and interest rates.\n\nIn addition, Nissan manages its exposure to credit risk by limiting its counterparties to financial institutions with high credit ratings. 
However, a default by any one of these counterparties could have an adverse effect on Nissan's financial position and operating results.\n\n#### Lawsuits and Claims\n\nWith respect to various lawsuits and claims which Nissan encounters, the possibility exists that the position defended by Nissan will not be accepted and that the outcome may be significantly different from that anticipated. As a result, any such verdict or settlement could adversely affect Nissan's financial position and operating results.\n\n#### Government Regulations\n\nThe automobile industry worldwide is influenced by a broad spectrum of regulations governing the emission levels of exhaust fumes, fuel economy guidelines, noise level limitations and safety standards, and Nissan expects these regulations to become increasingly stringent. In order to ensure compliance, it may be necessary for Nissan to make significant ongoing investments in these areas which would have an impact on its financial position and results of operations.\n\n#### Intellectual Property Rights\n\nNissan owns a wide variety of proprietary technologies and has the expertise to differentiate Nissan's products making them unique from those of its competitors. These assets have proven their value in the growth of Nissan's business and will, no doubt, continue to be of value in the future. Nissan strives to protect its intellectual property assets; however, in certain markets, Nissan may encounter difficulty in fully protecting the proprietary rights to its own technologies. Cases may arise where Nissan finds itself unable to prohibit others from infringing on its intellectual property rights.\n\nThe Company has established Intellectual Property Rights Management Department for the purpose of protecting intellectual property rights in specific areas, strengthening activities to protect Nissan's intellectual property rights, and abstracting new intellectual property rights. 
And the department has been performing various activities to protect and create Nissan Brand.\n\n#### Natural Disasters\n\nNissan's corporate headquarters and many of its manufacturing facilities are located in Japan, where the statistically proven probability of earthquakes is higher than in many other countries. Nissan has developed risk management guidelines relating to earthquake damage and the CEO has organized a global task force to direct disaster prevention and recovery activities. In addition, the Gruop has begun to strengthen its manufacturing facilities with anti-seismic reinforcement. However, if a severe earthquake were to hit one of Nissan's key facilities causing a halt in production, this would adversely affect Nissan's financial position and results of operations.\n\n#### Sales Financing Business Risk\n\nSales financing is an integral part of Nissan's core business, providing strong support to its automotive sales, while maintaining high profitability and a sound and stable financial condition through strict risk management policies. However, the sales financing companies have a high exposure to interest-rate risk, residual value risk, and credit risk, any one of which may adversely affect Nissan's financial position and results of operations.\n\n#### Counterparty Credit Risk\n\nNissan does business with a variety of counterparties and manages its counterparty credit risk by conducting a comprehensive annual assessment of its customers' financial condition based on their financial information. Nonetheless, any significant default by a counterparty would adversely affect Nissan's financial position and results of operations.\n\n#### Employee Retirement Benefit Expenses and Obligations\n\nThe amount of retirement Nissan's benefit obligation and related expenses are calculated using various actuarial assumptions including the discount rate applied, the projected rate of return on plan assets, and so forth. 
If Nissan's actual results differ from those assumptions or if the assumptions are changed, the resulting effects will be accumulated and recognized systematically over future periods. The cumulative effect could adversely impact the recognition of expenses and liabilities recorded in future periods.\n\n#### Purchase of raw materials and parts\n\nNissan purchases raw materials and parts from many suppliers. Market conditions that Nissan can't control and whether or not the suppliers can procure raw materials and parts continuously may adversely affect Nissan's financial position and results of operations.", - "page_start": 72, - "page_end": 72, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## OUR WORLD\n\nNISSAN HAS A GLOBAL PRESENCE. BORN IN JAPAN, WE ARE PERFECTLY AT HOME IN THE U.S., THE UK, SPAIN, THAILAND, CHINA, EGYPT, BRAZIL AND WELL OVER 150 OTHER NATIONS WHERE NISSAN CARS AND THEIR COMPONENT PARTS ARE PRODUCED, SOLD AND DRIVEN. WITH NISSAN, DRIVING PLEASURE IS A SENSATION THAT KNOWS NO BORDERS. THIS IS THE NISSAN SHIFT_", - "page_start": 59, - "page_end": 59, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "NISSAN IS ABOUT MEETING UNMET NEEDS, CRAFTING SINGULAR PRODUCTS AND TRANSFORMING BRAND STRENGTH AND INNOVATION INTO NEW BUSINESS OPPORTUNITIES. WE ARE NISSAN. WE ARE INFINITI. WE ARE NISSAN LIGHT COMMERCIAL VEHICLES, EXPANDING OUR RANGE. WE ARE NISSAN INDUSTRIAL MACHINERY, LEVERAGING OUR EXPERTISE TO BUILD FORKLIFTS AND MARINE PRODUCTS. AND WE ARE NISSAN FINANCIAL SERVICES, PROVIDING OUR CUSTOMERS WITH A COMPREHENSIVE LINEUP OF OFFERINGS. 
THIS IS THE NISSAN SHIFT_", - "page_start": 17, - "page_end": 17, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## **More value, Higher quality, Win-win partnerships** HIROTO SAIKAWA\n\nPURCHASING\n\nExecutive Vice President\n\n\"The evolution that took place in Nissan's purchasing activities during the Nissan Revival Plan, or NRP, and continued through NISSAN 180, will stretch even further during NISSAN Value-Up. Why evolution and not revolution? Because the shift in purchasing that started six years ago was not a single action, it was a mindset change that continues to drive all our activities.\n\nPurchasing represents the single largest area of cost for Nissan. Through the NISSAN Value-Up business plan, we are determined to drive greater value from our purchasing activities and maintain the momentum built over the last six years.\n\nDuring the Nissan Revival Plan years, our focus was on catching up with the rest of the industry. NISSAN 180 was focused on reaching the benchmarks set during NRP and now as we enter the NISSAN Value-Up period, that focus evolves towards being the global cost leader.\n\nOne of the key breakthrough strategies of NISSAN Value-Up is the focus on new and emerging markets. On the sales side, markets like China, India, Russia and ASEAN represent significant opportunities for Nissan. On the purchasing side, we look at the cost competitiveness of these new markets and how we can increasingly use them to enhance our global competitiveness.\n\nOur strategy for what we call 'Leading Competitive Countries', or LCCs, is to focus on those markets that we see as trend leaders in both cost, quality and supply stability. We will focus first on China and then on ASEAN nations. This will bring cost advantages for our major regions, such as Japan, North America and Western Europe, making us more competitive. 
We're also investigating sourcing from Eastern Europe, the Mercosur trading zone, and India.\n\nOur Alliance with Renault has also provided substantial purchasing benefits and opportunities. Formed in 2001, the Renault Nissan Purchasing Organization, or RNPO, now accounts for over 70 percent of all purchasing for Nissan and Renault. Nissan will further benefit from RNPO through the utilization of Renault supply bases in certain LCCs.\n\nAlthough the turnaround in the Nissan business has been profound, we also recognize that our supplier partners have played a significant role. Going forward, we intend to reinforce those relationships, building value on both sides. For example, we are reinvigorating our innovative 3-3-3 engineering program.\n\nWe are also deploying a purchasing process that gets suppliers involved earlier and further upstream in the product development process, the concept of 'project partners'. This is a program that identifies key technologies and innovations that require substantial investments from both sides. Suppliers will be selected as project partners for a specific area and will work closer with us to develop lower cost and higher quality solutions. This win-win approach has already started with interior systems and chassis development projects.\n\nLast year, we faced several challenges with raw materials. Those risks—both price and supply related—are a factor that we have to recognize and address in the coming years. Last year, the pressure was concentrated on the supply side, going forward we see an increasingly challenging cost environment. Working closely with our key raw material suppliers as well as parts suppliers and accelerating our cost reduction countermeasures will be key during NISSAN Value-Up.\n\nOur purchasing philosophy at Nissan is focused on value, quality and relationships. 
We want our purchasing process to be transparent and proactive, and create more value for our suppliers and for the company.\"", - "page_start": 49, - "page_end": 49, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## **Making Profit as a Smaller Player**\n\nEUROPE\n\n\"Europe is one of the most fragmented automotive market in the world and a highly competitive one besides. Despite our relatively small size, however, we have begun to demonstrate that it is possible to make money in Europe. In fact, although Nissan does not yet deliver the levels of profitability here\n\nDOMINIQUE THORMANN Senior Vice President Nissan Europe\n\nthat the U.S. or other markets generate, we surpassed our NISSAN 180 business targets in fiscal 2004. Our profitability is now on par with the best European manufacturers. Nissan has a foundation for increasing profitability further in the coming years in Europe.\n\nNissan is already an established name around the region, and the brand is strongly associated with 4x4 technology, off-road vehicles and pickup trucks. However, there is also a solid heritage built around the Micra, a model designed for urban driving. Both the first and second generations of this car were very successful, and the third generation is performing well. To leverage our 4x4 heritage and SUV strength into the passenger car segment, Nissan is developing a series of crossover vehicles that blend car-like performance with 4x4 versatility. The Qashqai concept vehicle introduced at the 2004 Geneva Motor Show is the first of these—smaller, more affordable, and better adapted to European roads. The Qashqai will go into production in our plant in Sunderland in the UK in early 2007. The Murano, launched this year, is a precursor to the Qashqai in the larger executive segment. Europeans have already taken to the Murano, driving sales far past our initial forecasts in all markets. 
This car is helping make Nissan a brand that people aspire to own.\n\nNissan is still a small player in the region, selling 550,000 cars across a very large and diverse territory that stretches from the Atlantic Ocean to Russia, and from Finland to Israel. In the past we covered the area through multiple distribution channels, which we are currently in the process of simplifying. A few aspects of the European market have made profitability more difficult to achieve. For example, automakers must provide models with much diversity: diesel and gasoline powertrains; manual and automatic transmissions. The cars must also be engineered to suit the high driving speeds typical in the region and ensure superior handling, which results in higher costs.\n\nAs in many other mature markets, an incentive war is raging in Europe. Nissan's position here, as elsewhere, is to use incentives selectively and to always protect profitability. Providing products which customers recognize and appreciate for their style and attributes rather than being the best deal is the foundation of Nissan's profitable growth. We now have a wide range of products, five of which were newly launched in 2005, including the Pathfinder and the Navara pickup. We will release the Micra C+C at the Frankfurt Motor Show in September, giving customers the option of a unique standard glass roof in a fully retracting hard convertible top.\n\nNissan's manufacturing still defines the leading edge in Europe. According to *The Harbour Report*, our plant in Sunderland is the most productive plant in Europe. Sunderland will start production on a new B-segment car based on the Tone concept car in early 2006, followed by the Qashqai crossover vehicle in early 2007. Our Barcelona plant, which manufactures SUVs, 4x4s and light commercial vehicles, will reach full capacity in mid-2005. 
Finally, our truck plant in Avila, Spain, which specializes in light-duty trucks, will start producing a replacement for the popular Cabstar in late 2006. This efficient production base is a critical part of our profitable growth scenario.\n\nNISSAN Value-Up has given us a plan for building both profit and volume. We will not, however, sacrifice profit to gain volume. How far we can go depends on how fast we deliver results. I believe that we have much more room to grow, and to demonstrate that in even a crowded European market a smaller player can produce significant returns.\"", - "page_start": 62, - "page_end": 62, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "# **Pursuing Value Through Technological Excellence** MITSUHIKO YAMASHITA\n\nExecutive Vice President\n\n\"I have two prime objectives. The first is to realize our corporate vision, 'Enriching people's lives,' from an engineering standpoint. The second is to create a future vision for people working in R&D. Research and development is all about providing practical value to the customer via technological excellence, which in turn creates value for our shareholders. Nissan has made a major commitment to technological excellence so that we can accomplish these objectives.\n\n#### **Research and Development**\n\nTECHNOLOGY\n\nNissan's investment in R&D has been rising. In fiscal 2004 we devoted approximately ¥400 billion to it, equivalent to 4.6 percent of our turnover. We estimate that our financial commitment to R&D will continue to range between 4.5 and 5 percent. R&D investments take a lot of time to pay off, of course, so it's difficult to evaluate our evolution over the short term. Given our expanded output, however, I believe that we are headed in the right direction.\n\nFor example, the number of patents we have generated is growing quickly, exceeding 4,000 in fiscal 2003—more than twice the fiscal 1999 figure. 
And the number of research papers we present at societies such as The Japan Society of Mechanical Engineers rose dramatically in fiscal 2004. These are direct results of our commitment to research. We are also generating more new technologies related to safety and the environment, such as the Around View Monitor and the lane-keeping system.\n\nWe have succeeded in shortening our production pipeline, too, using a new vehicle development process called V3P that our engineers devised over the past three years. V3P, which stands for Value-up innovation of Product, Process, and Program, has helped us cut our development time almost in half, from 20 months to just 10.5 months. I believe this makes Nissan the world benchmark in development. That improvement is having a major effect on the flexibility and execution of R&D at Nissan, and will ultimately boost the company's profitability.\n\nThe number of new products we have brought to market over the past three years is equally significant more than thirty new vehicles. That's an impressive engineering achievement, and the reason you are seeing so many new Nissan models on the road.\n\nOur R&D infrastructure, however, is still in need of expansion. We've therefore begun building new facilities at the Nissan Technical Center, NTC, and at the Nissan Advanced Technical Center, NATC, both of which are in Japan. These additions represent a major investment, and show Nissan's dedication to maintaining and enhancing its technological skills.\n\nOur technology base is in Japan, where we have some ten thousand people involved in R&D, but we also have two major centers in North America and Europe, and smaller operations in Taiwan, China, Thailand, South Africa and Brazil. 
In the past, these entities were mostly standalone operations, but today there are many more joint projects\n\nRear active steering Intelligent cruise control Shock-absorbing body, to reduce pedestrian injuries", - "page_start": 45, - "page_end": 45, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## **NISSAN Value-Up: Sustaining Performance**\n\nNissan's position today is much different than it was six years ago or even three years ago. In 1999, we were in crisis, and the Nissan Revival Plan was needed to revive our company and build a future. In April 2002, when NISSAN 180 began, we wanted to complete the revival process, with an emphasis on profitable growth.\n\nNISSAN Value-Up is about sustaining performance. About taking all the gains we have made in connecting with our customers, in growing volumes, in creating value, in earning profits, in improving management— and then building upon these gains.\n\nWith NISSAN Value-Up, you will not see a radical break from NISSAN 180. This plan is evolutionary, not revolutionary. We will take the core elements that got us to this point—namely, more revenue, less cost, more quality and speed, and maximized Alliance benefit with Renault and build upon them.\n\nNISSAN Value-Up has three critical commitments:\n\n- Profit: Nissan will maintain the top level of operating profit margin among global automakers for each of the three years of the plan.\nVolume:Nissan will achieve global sales of 4.2 million units measured in fiscal 2008.\n\n- ROIC: Nissan will achieve a 20 percent ROIC on average over the course of the plan, based on the new formula that excludes cash on hand from the denominator.\nNISSAN Value-Up will oversee 28 new models, resulting in the start of production of 70 models worldwide, over two dozen more than the 44 production starts during NISSAN 180. Of the 28 new models, 18 will be replacements for existing models and 10 will be completely new \"conquest\" models. 
We will enter more new segments, and we will introduce six models that will delight customers by being completely innovative in their concept and benefits.\n\nWe will pursue four major breakthroughs while implementing NISSAN Value-Up:\n\n- Our Infiniti luxury brand will extend its reach into new markets such as China and Russia and continue to establish its credibility as a Tier-1 luxury player.\n- We will develop our Light Commercial Vehicle (LCV) business into a fully competitive global operation through new market and product entries. By 2007, we plan to increase our LCV volume by 40 percent from fiscal 2004 to 434,000 units. During this period, operating margin is targeted to double from 4 percent to 8 percent.\n- We will take a more efficient global sourcing approach to maximize our opportunities and minimize our overall costs as we grow. Our engineering, production and purchasing functions will continue their acceleration toward being fully integrated global operations.\n- We will continue to invest in new and emerging markets, including China, India and Russia.", - "page_start": 11, - "page_end": 11, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "#### LETTER FROM THE COO\n\nMuch has been written about the Nissan revival. While innovative product, an improved cost base, greater manufacturing efficiencies and a better-defined brand have all been factors, the strongest element in our revival has been our people. And, what we learned during the crisis in the 90s and through the Nissan Revival Plan and Nissan 180 plan, now guides how we will manage the company in the future. We call it the Nissan Management Way. It is both a philosophy and set of disciplines that guide us at all levels of the organization and will help Nissan build on the momentum of the past six years.\n\nAlthough our president and CEO Carlos Ghosn has now taken on the same responsibilities at Renault, our basic management style will not change. 
As in the past, the Executive Committee, chaired by Carlos Ghosn, is still the highest decision making authority for strategy and management policy.\n\nThe COO position I now hold was created to provide an \"operating officer\" in the truest sense of the title. As COO my role is to assist the CEO by executing the business plan, monitoring the Company's performance and supervising dayto-day operations. The decisions I make are always based on the Nissan Management Way and support the commitments of the NISSAN Value-Up business plan.\n\nWhat distinguishes the Nissan Management Way is that we are both profit-driven and customer-focused, and that we share our strategy globally and execute in a cross-functional way. These cross-functional activities are particularly important to our success; along with cross-functional thinking, they have helped create an organization of singular structure, focus and culture. In this organization, employees representing each of Nissan's three axis—regional businesses such as Japan and U.S., functions such as engineering and manufacturing, and products—are actively encouraged to work together to maximize profits and to avoid a 'silo' mentality that is only focused on their immediate operational group.\n\nFiscal 2005 is a year of immense challenges and uncertainties, but we have still pushed ahead with an ambitious business plan for this period. As COO, my priority is to keep a close watch on Nissan's performance to ensure that we deliver our commitments. These include achieving the final Nissan 180 commitment of one million additional vehicles by the end of September 2005 and hitting our financial targets for fiscal 2005. There is no doubt that we have the strong leadership and management teams capable of sustaining the high level of performance required to reach these goals.\n\nNissan is now a learning organization. 
We have fully integrated the changes that began during the Nissan Revival Plan and continue to shape our business in the future. Our employees continually seek to build a better Nissan and fortify the brand, and are not afraid to speak out on issues and openly discuss challenges that face the business. Within the Nissan Management Way, we call that \"healthy conflict\"— and it strongly related to our belief in transparency and accountability. This is the essence of the evolution that continues to empower our company.\n\nOur alliance with Renault also continues to be a source of immense strength. We expect to further reinforce the Alliance and to develop new synergies now that Carlos Ghosn is the CEO of both companies.\n\nWhile we have the kinds of advantages I have mentioned, we also have risks. One of those risks is complacency. During the last six years, we have made significant achievements and consistently met tough commitments, but countless challenges remain. Our industry is immensely competitive, our customers more demanding than ever and we have no time to rest and congratulate ourselves. We need to create a culture where employees are always motivated to challenge themselves and the company and to create value for all our stakeholders.\n\nPeople around the world know that Nissan is a profitable and customer-driven company. As COO, one of my key roles under NISSAN Value-Up is to promote this customer-driven culture throughout the entire value chain, from initial product planning to after-sales service. 
I truly believe that by enhancing our focus on profit and pursuing a customer-driven approach, we can provide more value to all our stakeholders: employees, communities, suppliers, partners, and, of course, our shareholders.\n\nToshiyuki Shiga Chief Operating Officer", - "page_start": 5, - "page_end": 5, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "NFS and Nissan Motor Acceptance Corporation join with our counterparts from Renault and RCI Banque once a year for the Global Finance Synergy Meeting. We exchange ideas and best practices at this session, which has proved beneficial for both companies. The concept of offering fleet services, for example, originated with RCI Banque, which has been doing it in Europe.\n\nOur performance is measured not only by volume, but also by return on assets. We will continue to increase revenues, reduce costs through process integration, and enhance the functions of our centralized call center and IT activities. We aim to diversify our sources of income through other business activities, such as insurance and maintenance, while improving the customer experience. We want to be the best sales finance company in Japan.\"\n\n# NORTH AMERICA North America\n\nSTEVEN R. LAMBERT President and CEO Nissan Motor Acceptance Corporation\n\n\"At Nissan Motor Acceptance Corporation, our mission is to maximize the value of Nissan by providing competitive financial products and exceptional customer service. We are continually striving to support our customers by being an integral component of the Nissan North America sales and marketing plan, being the first choice of dealership financing, and by being the preferred lender to Nissan and Infiniti retail and lease customers. Since we mainly contribute to the Nissan global profit objective when a car is sold, we work closely with Nissan North America to support this sales process. 
Our overall market penetration—one of our key performance indicators, or KPI—was strong in fiscal 2004 at 49.7 percent for retail and lease combined. That means nearly half of all retail Nissan and Infiniti vehicles sold in the U.S. are financed through Infiniti Financial Services or NMAC.\n\nPerformance during NISSAN 180 was very strong as well, with penetration and profit levels higher than our budget objectives for all three years. This was partly due to the higher volume, but also as a result of our tight controls we kept on loss ratios, which we accomplished through good buying practices and closely managing our portfolio. In fact, roughly 75 percent of our portfolio is categorized as Tier 1 and Tier 2, based on the FICO or Fair Isaac & Company score. As a result, in fiscal 2004 our retail loss ratio was 1.1 percent, and our lease loss ratio was 0.4 percent. Both ratios have improved since the previous year. We also grew our dealer inventory-financing portfolio. At the beginning of\n\n2003, we had 359 dealerships in our inventory floor plan count. By the end of fiscal 2004, that had increased to 595. It's a profitable business, and one that sets the stage for a strong overall relationship with the dealer.\n\nOn the cost side of our business, we have effectively managed our operating expenses, which represent another KPI. From the beginning of fiscal 2003 to the end of fiscal 2004 we improved our operating efficiency metric by over 20 percent, and continue to be among the industry leaders in cost structure.\n\nRegarding our funding strategy, approximately fifty percent of funding comes from asset-backed securitization, making that our largest funding source. However, that proportion has been declining because we began using a variety of other funding sources, including commercial paper and bonds, after our ratings improvement. 
As a result, our dependence on Nissan North America for funding via inter-company loans will be reduced in the future.\n\nUnder NISSAN Value-Up, we will work closely with Nissan Motor Co., Ltd. and Nissan North America to provide additional sales-financing capabilities in new global markets, which can be a key to increasing sales volume. To achieve the same kind of success we have achieved in our new Mexican sales-financing efforts under the NISSAN 180 plan, we will support the global Infiniti expansion and other geographic growth, including developing financial products for the light commercial vehicle market.\"", - "page_start": 30, - "page_end": 30, - "source_file": "OTC_NSANY_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "OTC_NSANY_2004.pdf", - "query": "What was Nissan's vehicle production in Mexico in 2003?", - "target_page": 72, - "target_passage": "308,322", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## OUR WORLD\n\nNISSAN HAS A GLOBAL PRESENCE. BORN IN JAPAN, WE ARE PERFECTLY AT HOME IN THE U.S., THE UK, SPAIN, THAILAND, CHINA, EGYPT, BRAZIL AND WELL OVER 150 OTHER NATIONS WHERE NISSAN CARS AND THEIR COMPONENT PARTS ARE PRODUCED, SOLD AND DRIVEN. WITH NISSAN, DRIVING PLEASURE IS A SENSATION THAT KNOWS NO BORDERS. THIS IS THE NISSAN SHIFT_", - "page_start": 59, - "page_end": 59, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "NISSAN IS ABOUT MEETING UNMET NEEDS, CRAFTING SINGULAR PRODUCTS AND TRANSFORMING BRAND STRENGTH AND INNOVATION INTO NEW BUSINESS OPPORTUNITIES. WE ARE NISSAN. WE ARE INFINITI. WE ARE NISSAN LIGHT COMMERCIAL VEHICLES, EXPANDING OUR RANGE. WE ARE NISSAN INDUSTRIAL MACHINERY, LEVERAGING OUR EXPERTISE TO BUILD FORKLIFTS AND MARINE PRODUCTS. AND WE ARE NISSAN FINANCIAL SERVICES, PROVIDING OUR CUSTOMERS WITH A COMPREHENSIVE LINEUP OF OFFERINGS. 
THIS IS THE NISSAN SHIFT_", - "page_start": 17, - "page_end": 17, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## **Making Profit as a Smaller Player**\n\nEUROPE\n\n\"Europe is one of the most fragmented automotive market in the world and a highly competitive one besides. Despite our relatively small size, however, we have begun to demonstrate that it is possible to make money in Europe. In fact, although Nissan does not yet deliver the levels of profitability here\n\nDOMINIQUE THORMANN Senior Vice President Nissan Europe\n\nthat the U.S. or other markets generate, we surpassed our NISSAN 180 business targets in fiscal 2004. Our profitability is now on par with the best European manufacturers. Nissan has a foundation for increasing profitability further in the coming years in Europe.\n\nNissan is already an established name around the region, and the brand is strongly associated with 4x4 technology, off-road vehicles and pickup trucks. However, there is also a solid heritage built around the Micra, a model designed for urban driving. Both the first and second generations of this car were very successful, and the third generation is performing well. To leverage our 4x4 heritage and SUV strength into the passenger car segment, Nissan is developing a series of crossover vehicles that blend car-like performance with 4x4 versatility. The Qashqai concept vehicle introduced at the 2004 Geneva Motor Show is the first of these—smaller, more affordable, and better adapted to European roads. The Qashqai will go into production in our plant in Sunderland in the UK in early 2007. The Murano, launched this year, is a precursor to the Qashqai in the larger executive segment. Europeans have already taken to the Murano, driving sales far past our initial forecasts in all markets. 
This car is helping make Nissan a brand that people aspire to own.\n\nNissan is still a small player in the region, selling 550,000 cars across a very large and diverse territory that stretches from the Atlantic Ocean to Russia, and from Finland to Israel. In the past we covered the area through multiple distribution channels, which we are currently in the process of simplifying. A few aspects of the European market have made profitability more difficult to achieve. For example, automakers must provide models with much diversity: diesel and gasoline powertrains; manual and automatic transmissions. The cars must also be engineered to suit the high driving speeds typical in the region and ensure superior handling, which results in higher costs.\n\nAs in many other mature markets, an incentive war is raging in Europe. Nissan's position here, as elsewhere, is to use incentives selectively and to always protect profitability. Providing products which customers recognize and appreciate for their style and attributes rather than being the best deal is the foundation of Nissan's profitable growth. We now have a wide range of products, five of which were newly launched in 2005, including the Pathfinder and the Navara pickup. We will release the Micra C+C at the Frankfurt Motor Show in September, giving customers the option of a unique standard glass roof in a fully retracting hard convertible top.\n\nNissan's manufacturing still defines the leading edge in Europe. According to *The Harbour Report*, our plant in Sunderland is the most productive plant in Europe. Sunderland will start production on a new B-segment car based on the Tone concept car in early 2006, followed by the Qashqai crossover vehicle in early 2007. Our Barcelona plant, which manufactures SUVs, 4x4s and light commercial vehicles, will reach full capacity in mid-2005. 
Finally, our truck plant in Avila, Spain, which specializes in light-duty trucks, will start producing a replacement for the popular Cabstar in late 2006. This efficient production base is a critical part of our profitable growth scenario.\n\nNISSAN Value-Up has given us a plan for building both profit and volume. We will not, however, sacrifice profit to gain volume. How far we can go depends on how fast we deliver results. I believe that we have much more room to grow, and to demonstrate that in even a crowded European market a smaller player can produce significant returns.\"", - "page_start": 62, - "page_end": 62, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "#### Europe\n\n| Nissan Europe S.A.S. | Trappes, France | Management of European manufacturing and sales | €1,626 | 100.00 |\n| --- | --- | --- | --- | --- |\n| Nissan International Finance | Amsterdam, | Financing for group companies | €13 | 100.00 |\n| (Netherlands) B.V. | The Netherlands | | | |\n| Nissan France S.A. | Trappes, France | Sales of automobiles and parts | €4 | 94.77 |\n| Nissan Motor (GB) Ltd. | Rickmansworth, UK | Sales of automobiles and parts | £136 | 100.00 |\n| Nissan Holding (UK) Ltd. | Sunderland, UK | Holding company for English subsidiaries | €870 | 100.00 |\n| Nissan Italia S.p.A. | Rome, Italy | Sales of automobiles and parts | €5 | 100.00 |\n| Nissan Motor Manufacturing | Sunderland, UK | Manufacture and sales of automobiles and parts | £250 | 100.00 |\n| (UK) Ltd. | | | | |\n| Nissan Technical Center | Granfield, UK | Research and development, testing | £15 | 100.00 |\n| Europe Ltd. | | | | |\n| Nissan Forklift Europe B.V. | Amsterdam, | Sales of forklifts and parts | €6 | 100.00 |\n| | The Netherlands | | | |\n| Nissan Motor Iberica, S.A. | Barcelona, Spain | Manufacture and sales of automobiles and parts | €725 | 99.76 |\n| Nissan Motor Espana, S.A. | Barcelona, Spain | Sales of automobiles and parts | €12 | 100.00 |\n| Nissan Forklift Espana, S.A. 
| Noain, Spain | Manufacture and sales of forklifts and parts | €9 | 100.00 |\n| Australia | | | | |\n| Nissan Motor Co. (Australia) Pty. Ltd. | Dandenong, Victoria | Sales of automobiles and parts | A$290 | 100.00 |\n| New Zealand | | | | |\n| Nissan New Zealand Ltd. | Auckland | Managing New Zealand subsidiaries; | NZ$51 | 100.00 |\n| | | automobile sales | | |\n| South Africa | | | | |\n| Nissan Motor Company | Rosslyn | Managing South African subsidiaries; | R39 | 100.00 |\n| South Africa (Pty) Ltd. | | automobile manufacturing and sales | | |\n| Middle East | | | | |\n| Nissan Middle East F.Z.E. | Dubai, UAE | Automobile sales | Dh2 | 100.00 |\n| China | | | | |\n| Nissan Motor (China) Ltd. | Hong Kong | Automobile sales | HK$16 | 100.00 |\n| Dongfeng Motor Co., Ltd. | Hubei | Manufacture and sales of automobiles and parts | RMB16,700 | 50.00 |\n| Taiwan | | | | |\n| Yulon Nissan Motor Co., Ltd. | Miao Li Hsien | Manufacture and sales of automobiles and parts | NT$3,000 | 40.00 |\n| Thailand | | | | |\n| Siam Nissan Automobile Co., Ltd. | Samuthprakarn | Manufacture and sales of automobiles and parts | THB1,931 | 75.00 |\n| Other consolidated subsidiaries | 156 companies | | | |\n| Total consolidated subsidiaries | 200 companies | | | |", - "page_start": 108, - "page_end": 108, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "### OUR WORK\n\nNISSAN IS A WORLD-CLASS AUTOMOBILE MANUFACTURER. TO ENVISION, PLAN, BUILD AND DISTRIBUTE MILLIONS OF AUTOMOBILES TO THE WORLD REQUIRES A CLEAR DEFINITION OF ROLES AND PROCESSES. AT NISSAN, OUR BUSINESS DIVISIONS COMMUNICATE IDEAS ACROSS COUNTRIES, CULTURES AND FUNCTIONS TO DEVISE THE TRANSPARENT, EFFICIENT SOLUTIONS THAT CREATE SUCCESS. 
THIS IS THE NISSAN SHIFT_", - "page_start": 33, - "page_end": 33, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "# **Building on World-Class Productivity and Efficiency** TADAO TAKAHASHI\n\nMANUFACTURING\n\nExecutive Vice President\n\n\"By following the Nissan Production Way and the principle of *doukiseisan*—meaning synchronization with the customer—manufacturing at Nissan remains flexible and integrated, and keeps lead times short. The Nissan Production Way incorporates integration at the supplier, global and logistic levels. That is why we remain the most productive manufacturer in the world.\n\nWe've also become much more efficient, as our utilization rates show. In Japan, we were operating at 54 percent of capacity in 1999. In fiscal 2004 that figure increased to 86 percent, which is just about the maximum possible. During NISSAN Value-Up, we will increase our global utilization rate from approximately 74 percent to over 80 percent. We will not achieve that target by closing facilities, either. In fact, we've opened new plants in the U.S. and China, and increased capacity at our other facilities.\n\nManufacturing achieved a series of milestones during NISSAN 180. One of the biggest was opening the Canton plant in the U.S., which got up to speed quickly, launching five new vehicles in a period of just eight months. We built two plants in China, and restarted operations in Egypt. We dramatically expanded the Decherd, Tennessee engine plant in the U.S., and all engines for North America are now built at Decherd or at our plant in Mexico.\n\nWe also commenced cross-production with Renault: Nissan began building Renault's Platina in Mexico and its Traffic in Spain, while Renault began building our Pickup and Xterra at its factory in Brazil. We also started production of common engines with Renault, with our subsidiary Aichi Kikai and the Yokohama plant producing the four-cylinder engines used in our new Tiida, Note and Lafesta models. 
In Japan, we launched six new models in just six months—the Murano, Fuga, Lafesta, Tiida, Tiida Latio and Note. We also launched three vehicles—the Tiida, Teana and Tiida Latio—in China.\n\nWhile we were successful in Japan and China, we did have quality issues at the Canton facility. This was\n\nunfortunate, since it affected our ratings in the J. D. Power and Associates Initial Quality Study. We've since taken effective measures to resolve these problems. More importantly, we learned from them. We created new systems and new approaches to quality, which we then applied in Japan and to the new factories in China. Incidentally, the factories in China opened with no significant quality issues. This highlights one of our 'neverending' quests at Nissan, which is to identify problems and rapidly get solutions for them in place.\n\nWe do not rely solely on external quality evaluations. In cooperation with Renault, we created AVES, the Alliance Vehicle Evaluation System. AVES is a sophisticated process involving two people taking four to five hours to evaluate a vehicle. Because it is time-intensive, we also devised a short version of AVES that only takes an hour and can be done at the factory.\n\nThe second major area of focus is logistics, which is becoming more complicated. We send engine parts to the U.S., and soon we will be shipping more parts from leading competitive countries, or LCCs. During 2004, we encountered cargo-handling problems on the U.S. West Coast, which highlighted the need for a more sophisticated tracking system. If we had had such a system in place, we could have anticipated those problems and made the necessary adjustments.\n\nWhile Nissan's productivity leads the world, we have not stopped working to improve the process. One system we have implemented is the Design Standard Time Ratio, which allows us to calculate the ideal standard time for every operation. 
By applying this globally, we have brought all our branches around the world to nearly the same level. This in turn illustrated that we can produce vehicles more cheaply and with good productivity in the LCCs. Another opportunity discovered for the LCCs was in low-cost jig and die making. As a result, we have doubled the capacity of our die-making plant in Thailand and are looking into doing the same in China.", - "page_start": 51, - "page_end": 51, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "#### AUTOMOBILES\n\n## **Nissan**\n\n## **Exceeding expectations —the Nissan automobile**\n\nAt the center of everything we do stands the Nissan automobile. Our vehicles are the most tangible expression of our brand and the values of our company. We make cars that both inspire passion and exceed the expectations of our customers. Through bold and thoughtful designs, innovative technologies, and a richer and more rewarding driving experience, we are defining our unique place in the auto industry.\n\nOur product development philosophy differs from that which many of our competitors follow. Rather than focus on what the competition is providing, we concentrate on what they do not. We listen to drivers to discover their unmet needs and desires, and follow the most promising threads of emerging trends. Our designs are bold, geared to electrify and inspire. We see little point in building vehicles that please everyone but excite no one.\n\nThe appeal of a Nissan goes much deeper than the fine lines of its body and the gleam of its paint. We make some of the world's most advanced high-performance engines and transmissions. From our renowned VQ engine series to the latest in high technology, continuously variable transmissions (CVT), we blend driving pleasure with safety, fuel efficiency, and real-world environmental solutions.\n\nNissan has a long history of leadership and innovation in the automotive industry. 
We began our quest to create the best cars in the world in 1933, when the company was founded in Yokohama. The first Datsun passenger car rolled off the assembly line two years later. In the years since, we have fashioned a reputation for bold and innovative products. We were the first company to design, manufacture and export a small pickup truck from Japan to the United States, and to build and export a sports sedan, the Datsun 510. And we were the first to produce a true sports car that was also affordable, the Z. Today, we build equally exceptional vehicles in factories throughout the world that consistently rank in the top tier for efficiency, productivity and quality.\n\nIn the future, we will take the Nissan brand into new segments and markets. We will accelerate the pace of automotive evolution. And our products will continue to define our brand with clarity and consistency that brings lasting value to all our stakeholders.", - "page_start": 23, - "page_end": 23, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## **NISSAN Value-Up: Sustaining Performance**\n\nNissan's position today is much different than it was six years ago or even three years ago. In 1999, we were in crisis, and the Nissan Revival Plan was needed to revive our company and build a future. In April 2002, when NISSAN 180 began, we wanted to complete the revival process, with an emphasis on profitable growth.\n\nNISSAN Value-Up is about sustaining performance. About taking all the gains we have made in connecting with our customers, in growing volumes, in creating value, in earning profits, in improving management— and then building upon these gains.\n\nWith NISSAN Value-Up, you will not see a radical break from NISSAN 180. This plan is evolutionary, not revolutionary. 
We will take the core elements that got us to this point—namely, more revenue, less cost, more quality and speed, and maximized Alliance benefit with Renault and build upon them.\n\nNISSAN Value-Up has three critical commitments:\n\n- Profit: Nissan will maintain the top level of operating profit margin among global automakers for each of the three years of the plan.\nVolume:Nissan will achieve global sales of 4.2 million units measured in fiscal 2008.\n\n- ROIC: Nissan will achieve a 20 percent ROIC on average over the course of the plan, based on the new formula that excludes cash on hand from the denominator.\nNISSAN Value-Up will oversee 28 new models, resulting in the start of production of 70 models worldwide, over two dozen more than the 44 production starts during NISSAN 180. Of the 28 new models, 18 will be replacements for existing models and 10 will be completely new \"conquest\" models. We will enter more new segments, and we will introduce six models that will delight customers by being completely innovative in their concept and benefits.\n\nWe will pursue four major breakthroughs while implementing NISSAN Value-Up:\n\n- Our Infiniti luxury brand will extend its reach into new markets such as China and Russia and continue to establish its credibility as a Tier-1 luxury player.\n- We will develop our Light Commercial Vehicle (LCV) business into a fully competitive global operation through new market and product entries. By 2007, we plan to increase our LCV volume by 40 percent from fiscal 2004 to 434,000 units. During this period, operating margin is targeted to double from 4 percent to 8 percent.\n- We will take a more efficient global sourcing approach to maximize our opportunities and minimize our overall costs as we grow. 
Our engineering, production and purchasing functions will continue their acceleration toward being fully integrated global operations.\n- We will continue to invest in new and emerging markets, including China, India and Russia.", - "page_start": 11, - "page_end": 11, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "#### **Aftersales**\n\nJUNICHI ENDO Senior Vice President\n\n\"Aftersales was established in 2002 because Nissan wanted to expand the scope of what was once the Parts Division. Our primary objective is to extend the value chain. We are trying to engage new-car owners for a longer time by offering an extensive range of attractive aftersales products. These products include parts, service contracts, conversion—both accessories and customization—and new service methods such as quick inspection and quick body repair. Global Aftersales covers the downstream business in cooperation with other marketing and sales divisions.\n\nThis has become an increasingly global function as we deploy and monitor various programs throughout the world. For example, Project SX, the new Nissan service standard, should drastically improve dealer service operations. This program educates dealers on how to be more customeroriented by providing insights into productivity, marketing\n\nand management. To increase service productivity and efficiency, we send former factory foremen and engineers to various service workshops to analyze service staff performance. This will help cut repair times and improve customer satisfaction. The Nissan Sales and Service Way is also a tool used to increase the quality of service provided by all dealers. Its successful implementation has enhanced customer satisfaction worldwide.\n\nThe conversion business in Japan looks very promising. We discovered that 50 percent of car owners want to customize their vehicles, and 28 percent already had. 
Such a high penetration rate illustrates how much people want a car that's different from everyone else's. The Rider series customized versions of Nissan cars developed by our wholly owned subsidiary Autech—are very popular, especially among younger Japanese. The series exemplifies the major potential of the conversion business.\n\nGlobal Aftersales is a young division, but we've performed well from the start, meeting our global commitments every year during NISSAN 180 and contributing to the Company's growth. We have expanded nearly 20 percent year-on-year between 2001 and 2004, and intend to continue this momentum during NISSAN Value-Up. We will optimize our cost structure by sourcing parts from the leading competitive countries. We are striving to develop an even tighter relationship with our customers and to provide them with new services throughout the ownership cycle. I believe this broader range of aftersales services will provide sustainable growth in Nissan's revenues and profit.\"\n\n# Motorsports MOTORSPORTS\n\nMotorsports is a dynamic form of marketing that offers a natural forum for presenting the Nissan brand. On the track, Nissan's technologies are pushed to the limit—and sometimes beyond under grueling conditions.\n\nNissan participates in a wide range of motorsports, including the Super GT Series. This is the most popular racing series in Japan, and is increasingly broadcast around the world. 
Motorsports will remain an important marketing outlet that enhances both Nissan's brand presence and our engineering capabilities.", - "page_start": 43, - "page_end": 43, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "#### Contents\n\n| Financial Highlights | 1 |\n| --- | --- |\n| Letter from the President and CEO | 2 |\n| Letter from the COO | 4 |\n| Executives | 5 |\n| Performance | 6 |\n| Who We Are | 16 |\n| Our Way | 18 |\n| Automobiles | 22 |\n| Sales Finance | 28 |\n| Industrial Machinery | |\n| and Marine Business | 30 |\n| Renault-Nissan Alliance | 31 |\n| Our Work | 32 |\n| Planning | 34 |\n| Brand | 37 |\n| Design | 38 |\n| Marketing | 40 |\n| Communications | 43 |\n| Technology | 44 |\n| Purchasing | 48 |\n| Quality | 49 |\n| Manufacturing | 50 |\n| Control | 53 |\n| Finance | 54 |\n| Human resource | 56 |\n| Our World | 58 |\n| Japan | 60 |\n| Europe | 61 |\n| North America | 62 |\n| China | 64 |\n| General Overseas Markets | 66 |\n| Financial Section | 68 |\n| Corporate Data | 106 |\n| Subsidiaries and Affiliates | 106 |\n| Corporate Officers | 109 |\n\nThis Annual Report contains forward-looking statements on Nissan's future plans and targets, and related operating investment, product planning and production targets. Please note that there can be no assurance that these targets and plans will actually be achieved. 
Achieving them will depend on many factors, including not only Nissan's activities and development, but on the dynamics of the automobile industry worldwide and the global economy.\n\n## **Vision**\n\n**Nissan: Enriching people's lives**\n\n## **Mission**\n\n**Nissan provides unique and innovative automotive products and services that deliver superior measurable values to all stakeholders* in alliance with Renault.**\n\n*Our stakeholders include customers, shareholders, employees, dealers, suppliers, as well as the communities where we work and operate.\n\nThis Annual Report presents financial results for the fiscal period ending March 31, 2005. The report also provides shareholders with insight to Nissan's management team. Through one-onone interviews, various members of executive management, including Carlos Ghosn, President and Chief Executive Officer, discuss the philosophy and direction of Nissan.\n\n#### Our Websites\n\nhttp://www.nissan-global.com/EN/COMPANY/ Corporate Information\n\nIR Information\n\nhttp://www.nissan-global.com/EN/IR/ Environment, Design, Safety and Technology Information\n\nhttp://www.nissan-global.com/EN/PLAN/\n\nhttp://www.nissan-global.com/EN/GLOBAL/ Product Information (by Country)\n\nProduct Information (Japan)\n\nhttp://www.nissan.co.jp/\n\nhttp://www.nissan-global.com/EN/COMPANY/CITIZENSHIP/ Corporate Citizenship Information\n\nOUR WORK\n\nFINANCIAL SECTION\n\nCORPORATE DATA", - "page_start": 1, - "page_end": 1, - "source_file": "OTC_NSANY_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_SEA_2014.pdf", - "query": "Why did Sundance Energy's oil sales improve in 2014?", - "target_page": 18, - "target_passage": "The increase in oil revenues was the result of increased oil production volumes ($81.3 million) offset by a decrease in product pricing ($15.7 million). 
", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "for a new energy future with greater natural gas usage and increased domestic oil production as two of its primary attributes, it is encouraging to see our political leadership finally grasp that natural gas stands alone as the only affordable, scalable and immediately available alternative to foreign oil and that U.S. oil production can be increased significantly in the years ahead.\n\nThe events of the past few months have unmistakably driven home the fact that it is insanity to rely on the Middle East to provide our economy's lifeline of oil. This should be especially obvious when one realizes that during the next 10 years, America will likely export at least another $4 trillion in national wealth to oil exporters around the world. Clearly, our country must demand from its leaders a new and more sustainable energy future.\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security. I remain fully confident that the marketplace understands this and that over time the U.S. will more fully embrace and utilize clean, affordable, abundant American natural gas and increased domestic oil production as the best alternatives to burning environmentally challenged coal and expensive and dangerous foreign oil.\n\nThere is now a clear road ahead toward a more sustainable, affordable, dynamic and independent future if America embraces the remarkable gift of energy abundance that Chesapeake has helped discover in the U.S. 
You have my commitment, and the commitment of more than\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security.\n\n*Advancing technology for cleaner operations: solar panels at a West Texas well power telemetry systems that provide pumpers with real-time information on oil and water tank levels to alarm them when levels near capacity, preventing tank spills.*\n\n> The good news, however, is that America can now secure a new energy future thanks to Chesapeake and a handful of other leading U.S. E&P companies that have reinvented the process of finding natural gas and oil during the past five years. In doing so, we have discovered twice the resources of natural gas in the U.S. that Saudi Arabia possesses in oil. Furthermore, these same few companies that led the unconventional natural gas revolution have in just the past two years also reinvented the way in which we can find large new oil resources onshore in the U.S. In fact, I believe the U.S. can possibly increase its production of oil from the current 5.8 million barrels per day by 30–50% during the next 5–10 years, thereby potentially reaching the President's 2025 goal of reducing foreign oil imports by 33%, 5–10 years earlier than hoped.\n\n10,000 other Chesapeake employees, that every day we are working hard to create shareholder value and a better future for our communities, our states and our country through the continued discovery and development of unconventional natural gas and liquids.\n\nBest regards,\n\nAubrey K. McClendon Chairman and Chief Executive Officer April 15, 2011", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "# NOTES TO THE FINANCIAL STATEMENTS\n\nfor the year ended 31 December 2004\n\nSAN165 WWW Fins 30/3/05 11:55 AM Page 72\n\n#### **23. 
Interests in Joint Ventures**\n\n(a) Santos Ltd and its controlled entities have combined interests in unincorporated joint ventures in the following major areas:\n\n| Joint venture/area | Principal activities | Average interest |\n| --- | --- | --- |\n| | | % |\n| Amadeus Basin | | |\n| Mereenie | Oil and gas production | 65 |\n| Mereenie Pipeline | Oil transportation | 65 |\n| Palm Valley | Gas production | 48 |\n| Browse Basin | Oil and gas exploration | 74 |\n| Carnarvon Basin | Oil and gas exploration and production | 32 |\n| Cooper Basin Downstream | Liquid hydrocarbon transportation and processing | 65 |\n| Cooper Basin Unit | | |\n| South Australia | Oil and gas production | 65 |\n| Queensland | Oil and gas production | 60 |\n| Cooper/Eromanga Basins | | |\n| South Australia | Oil and gas exploration and production | 65 |\n| Queensland, ATP 259P | Oil and gas exploration and production | 60 |\n| Other Eromanga | Oil and gas exploration and production | 74 |\n| Jackson Moonie Pipeline | Oil transportation | 83 |\n| Eastern Queensland | | |\n| Bowen Basin | Gas exploration and production | 50 |\n| Surat Basin | Oil and gas exploration and production | 48 |\n| Egypt | | |\n| Gulf of Suez | Oil and gas exploration | 50 |\n| Gippsland Basin | Oil and gas exploration and production | 35 |\n| Indonesia | | |\n| East Java Basin | Oil and gas exploration and production | 42 |\n| Kutei Basin | Oil and gas exploration | 35 |\n| West Natuna Basin | Oil and gas exploration and production | 6 |\n| West Papua | Oil and gas exploration | 20 |\n| Offshore Northern Australia | | |\n| Bonaparte Basin | Oil and gas exploration | 95 |\n| Houtman Basin | Oil and gas exploration | 42 |\n| Timor Gap | Oil and gas exploration and production | 17 |\n| Timor Sea | Oil and gas exploration and production | 22 |\n| Otway Basin | Oil and gas exploration and production | 36 |\n| Papua New Guinea | | |\n| PDL1 (Part Hides Field) | Oil and gas exploration | 31 |\n| Other interests | Oil and 
gas exploration and production | 31 |\n| Sorell Basin | Oil and gas exploration | 58 |\n| USA | | |\n| Gulf Coast | Oil and gas exploration and production | 39 |\n| Rocky Mountains | Oil and gas exploration and production | 50 |\n\n(b) The sales revenue received from the Santos Group's share of petroleum products produced by the joint ventures is $1,493.5 million\n\n(2003: $1,451.2 million) and the contribution of joint venture business undertakings to profit from ordinary activities before interest and tax of the Santos Group is $581.3 million (2003: $496.7 million).", - "page_start": 73, - "page_end": 73, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## *Apartment Property Expenses*\n\nSame store apartment property expenses increased 5.5% for the year ended December 31, 2013, due primarily to increased utility and fuel expenses as a result of high natural gas prices in Atlantic Canada, and higher electricity costs.\n\n## **Utility and Fuel Expense ‑ Same Store**\n\nFor the years ended December 31,\n\n| | 2013 | 2012 | % Change |\n| --- | --- | --- | --- |\n| Natural gas | $4,565 | $2,729 | 67.3% |\n| Oil | 1,523 | 2,095 | (27.3)% |\n| Electricity | 5,197 | 4,671 | 11.3% |\n| Water | 3,582 | 3,474 | 3.1% |\n| Other | 30 | 33 | (9.1)% |\n| Total utility and fuel expenses | $14,897 | $13,002 | 14.6% |\n\nKillam's apartment properties are heated with a combination of natural gas (55%), electricity (36%), oil (8%) and other sources (1%).\n\nElectricity costs at the unit level are usually paid directly by tenants, reducing Killam's exposure to the majority of the 4,500 units heated with electricity. Fuel costs associated with natural gas or oil fired heating plants are paid by Killam. As such, the Company is exposed to fluctuations in natural gas and oil costs, which represent 40.9% of total same store utility and fuel costs in 2013. 
Killam invests in green initiatives at its properties to maximize efficiencies, including converting many of its Halifax properties to natural gas from oil over the last three years as natural gas infrastructure has been expanded in the city. The decision to convert was supported by the substantial price difference between the cost of natural gas and oil in recent years.\n\nAs noted in the table above, Killam's utility and fuel expenses increased 14.6% in 2013 compared to 2012. The increase was primarily attributable to higher natural gas, electricity costs and water costs.\n\nKillam's natural gas expenses increased by 67.3% in 2013 due to higher gas prices in Atlantic Canada and an increase in properties burning natural gas following conversions of certain Halifax heating plants from oil to gas in 2012 and 2013. The reduction in oil expense in the quarter and year‑to‑date reflects this reduction in oil exposure.\n\nAs the following chart highlights, the per gigajoule (Gj) commodity cost for natural gas in New Brunswick and Nova Scotia was much higher than NYMEX in 2013 and less correlated to NYMEX than in previous years. (NYMEX is the New York Mercantile Exchange, a commodity futures exchange. Henry Hub, a gas distribution hub in Louisiana is the pricing point for natural gas futures contracts traded on NYMEX). The cost of natural gas in Atlantic Canada and New England experienced a spike from December 2012 until late spring 2013 and a second spike in December 2013, compared to other areas of Canada. Those spikes were both due to increased demand from utilities in Northeast New England and a shortage of gas pipeline capacity in Northeastern New England and Atlantic Canada. A temporary decline in gas supply off the coast of Nova Scotia further contributed to the high pricing in the first part of the year.\n\n## **Historic Natural Gas Pricing ($ per Gj) Henry Hub Vs. 
Heritage Gas**", - "page_start": 37, - "page_end": 37, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "**CEO'S REPORT**\n\n# *Dear Fellow Shareholders,*\n\n*2014 Review—2014 was a year of stark economic contrasts in our industry. During the first half as in the past several years, historically volatile West Texas Intermediate oil prices seemed range bound between $80 and $110 with geopolitical events driving prices towards the ceiling and demand risks pushing prices towards the floor of the range.*\n\nIn the US, E&P companies were spending record amounts of capital, fueled by cheap and plentiful debt, on horizontal drilling and completions to drive production growth while making material strategic acquisitions in order to increase their long-term exposure to oil prices.\n\nThe easy credit environment caused asset prices to increase significantly to the point where, in our view, risk adjusted returns on new acquisitions were threatening cyclical lows. In line with our strategy, Sundance had monetized several mature assets realizing\n\n| | Sundance's Performance versus the ASX 200 | | |\n| --- | --- | --- | --- |\n| | | ANNUAL PERCENTAGE CHANGE | |\n| | IN 2P PV10 | | |\n| | (NET ASSET VALUE) | IN SUNDANCE | |\n| YEAR | PER DEBT ADJUSTED SHARE | PRICE PER SHARE | IN ASX200 |\n| 2014 | 21.6% | -48.0% | 1.1% |\n| 2013 | 63.3% | 29.9% | 15.1% |\n| 2012 | -15.6% | 87.8% | 14.6% |\n| 2011 | 59.7% | -44.6% | -14.5% |\n\n~$50 million in current period gains while freeing up ~$165 million in invested capital.\n\nWe primarily reinvested this capital in production growth and cash flow with only about $75 million reinvested in acquiring oil and gas leases and producing properties. This resulted in our production increasing from 5,028 BOEPD to 9,434 BOEPD by December 2014 and full year EBITDAX increasing $73.8 million to $126.4 million in 2014. 
Had prices stayed steady, we likely would have generated earnings before income taxes of over $85 million and a return on capital in excess of 20%.\n\nOur second capital priority for the year was to conclude the appraisal of the Woodford formation in our Logan County, Oklahoma assets. We viewed this relatively modest, but higher risk, investment as having a 25% chance of success with a 15x upside. Unfortunately, we met with mixed success in our appraisal activities proving that in today's onshore US oil and gas industry that the best absolute returns are generated by drilling in proved regions. There are plenty of solid opportunities to efficiently grow the business without exposure to undue geologic risk.\n\nLike many prior bubbles driven by new technologies, the second half of the year saw the pricing environment come crashing down around us. The market became fundamentally unbalanced, driving prices down almost 50% and rendering material portions of global oil and gas development uneconomic.\n\nOur peers went from talking about their growth prospects to fretting about cash costs and liquidity, a stark contrast from the go-go growth times which existed in the first half of the year. 
This shift in industry strategy has now come in line with our general business philosophy—in the resource space, low-cost, low debt businesses will survive and thrive across cycles; and, relative to our US onshore peer group, Sundance boasts a top 15% cost structure and balance sheet.\n\nOur position as a cost and balance sheet leader is underpinned by two key philosophies: 1) investment in a leading technical team that is encouraged to take reasonable risks to improve recoveries and/or reduce costs, and 2) a ruthless focus on portfolio returns as demonstrated by our consistent track record of divesting assets that don't fit our strategic objectives or promise lower forward return profiles.\n\nOur high quality Eagle Ford acreage produces strong recoveries at reasonable costs and thus generates good returns, even in a low price environment. Because of these characteristics, the majority of our forward capital is expected to be invested generating strong growth and shareholder returns in the Eagle Ford.\n\nWith mixed appraisal results in the Woodford, Sundance's Mississippian/Woodford position generally requires higher prices to meet our hurdle rates. Because of the mixed Woodford results, higher overall unit costs, and depressed pricing at year end, we recognized an impairment charge of ~$60 million on these assets at year 2014. 
Had prices maintained their strength, we likely would have been in a position to recover our investment from these assets.", - "page_start": 5, - "page_end": 5, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "**CHAIRMAN'S LETTER**\n\n*Despite the reduction in crude oil and liquids prices towards the end of the year and continuing into 2015, the opertional performance and focused, value-adding transactions during the past year have positioned the Company very favourably for future growth in net asset value and shareholder returns.*\n\n# *Dear Fellow Shareholders,*\n\n*I am pleased to present Sundance Energy Australia Limited's Annual Report for the 12 months ended 31 December 2014. It has been another year of significant progress for Sundance across our portfolio of liquids rich oil and gas assets in the US.*\n\nThe Company's strategic focus on growing production, cash flows and reserves from large, repeatable resource plays in North America continues to deliver positive results with growth in production, cash flows, and reserves.\n\nDuring late 2013 and 2014, we completed the divestment of our interest in the Williston Basin in North Dakota for $51 million which realised an internal rate of return of 45 percent; and also opportunistically divested our interest in the Denver-Julesburg Basin in Colorado for $114 million which realised an internal rate of return of 104 percent. 
These divestitures of smaller, less scalable positions enabled us to focus on developing and growing our assets in the Eagle Ford in Texas and our Mississippian/Woodford assets in Oklahoma.\n\nDespite the reduction in crude oil and liquids prices towards the end of the year and continuing into 2015, the operational performance and focused, value-adding transactions during the past year have positioned the Company very favourably for future growth in net asset value and shareholder returns.\n\n### **A year of growing production, cash flow and reserves**\n\nIn line with our strategy we continued to increase the level of company operated assets, and successfully maintained a very strong focus on optimising our operations and reducing costs. This resulted in an impressive improvement in well performance combined with a top tier cost structure.\n\nThrough our operated development program, we ended 2014 with record production of 9,434 barrels of oil equivalent per day (BOEPD) compared with an exit rate of 5,028 BOEPD in December 2013 and an average annual production of 6,635 BOEPD compared to 3,015 BOEPD in 2013. During 2014 we drilled and completed 42.7 net wells, primarily in the Eagle Ford, bringing our total well count to 81.3 by 31 December 2014. High value oil comprised approximately 69 percent of our total 2014 annual production and production from Sundance-operated projects accounted for 89 percent of total production for the year.\n\nCorresponding with the growth in annual production, the Company's full year revenues increased to $159.8 million and Adjusted EBITDAX increased to $126.4 million.\n\nThe Company's development program also generated significant growth in Constant Case reserves during the year. More details are contained elsewhere in this Annual Report, but in summary our 1P Reserves at the end of 2014 were 26.0 MBOE, 2P Reserves 54.1 MBOE, and 3P Reserves 147.7 MBOE. 
This compares with Reserves of 20.7 MBOE, 34.6 MBOE, and 92.8 MBOE, respectively, at the end of 2013.\n\nIn the current price environment, we have elected to scale back our drilling program to mainly concentrate on limited drilling obligations to hold Eagle Ford acreage. This will enable us to maintain our low leverage profile, which was approximately 1.03x debt to Adjusted EBITDAX at year end, and focus on growing our drilling inventory in an environment with less competition for leases and small acquisitions. Liquidity was $84 million at year end, with a borrowing base redetermination in 2015 expected to materially increase debt availability if the use of such funds is justified in line with our strategy.\n\n### **The Eagle Ford – driving value and production growth**\n\nSundance has grown its Eagle Ford acreage position from ~7,200 acres upon entering the basin to approximately 26,160 net mineral acres in the Eagle Ford at the end of 2014 which includes the acquisition of approximately 18,000 net acreage in 2014. By the end of the first quarter 2015 this had grown to 38,701 net mineral acres. Our growing presence in this prolific oil and gas region has been driving significant value for the Company and our shareholders, and continues to form our priority focus for development and acreage growth in the coming years.", - "page_start": 3, - "page_end": 3, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "At year end, we had 197 gross 3P Reserves drilling locations across our Eagle Ford acreage where we continue to pursue operational and drilling efficiencies, opportunities to further improve well economics by improving recoveries and reducing costs. 
In 2014 this included a switch to pad drilling with zipper fracs and new completion techniques that have provided significant upside in production.\n\nDespite our current scaling back of drilling activity, we have set 2015 production guidance at 7,850 – 8,500 BOEPD, an increase from the previous year of some 13 – 17 percent, but a target that we believe is achievable while maintaining acceptable levels of liquidity given our demonstrated abilities and growing footprint in the Eagle Ford.\n\n### **Safety and Environment**\n\nSundance has a strong culture throughout the organisation of ensuring that high standards of safety are maintained and that our operations are conducted in an environmentally responsible way. During 2014 our comprehensive safety program was enhanced and further improvements will be a strong focus throughout 2015.\n\n#### **A strong financial position**\n\nSundance is well placed for future growth in the Eagle Ford. The Company has a strong balance sheet to withstand the current low oil price environment, and our sound financial management strategy has seen the Company well supported by both new and existing investors in Australia and internationally.\n\nWe expect that Sundance will grow organically and also through further leasing or bolt-on acquisitions in our core Eagle Ford focus area within our current, conservative balance sheet parameters.\n\n### **Positive outlook for 2015**\n\nDespite the current oil pricing scenario, Sundance's medium-to-long term growth trajectory looks very positive.\n\nWe can demonstrate this through:\n\n- A track record of capital efficient growth\n- A track record of value creation\n- Being a low cost/high margin operator\n- Having top tier Eagle Ford assets with an extensive drilling inventory\n- Having a clean balance sheet\n\nAs a mid-tier oil and gas producer and explorer in the S&P/ASX All Australian 200 index, and with the increasing interest and support from institutional and retail investors. 
I believe that Sundance will deliver significant long-term value from our assets for our shareholders.\n\n#### **Thank you for your support**\n\nWe have had a busy year at Sundance and I would like to recognise the efforts and valued contribution of the Board of Directors, management team and all staff and contractors of the Company in helping us achieve our strategic goals. I am confident that we have the right team and excellent assets in place to execute our clear and focused strategy that we expect to deliver significant value for our shareholders.\n\nOn behalf of the Board and Company, I would like to thank our shareholders for your strong support of the Company throughout the year. We are committed to delivering long-term value for our shareholders and I look forward to reporting over the rest of the coming year on the continued value creation and growth of Sundance.\n\nYours sincerely,\n\n**MIKE HANNELL** *Chairman*\n\n*The Company has a strong balance sheet to withstand the current low oil price environment, and our sound financial management strategy has seen the Company well supported by both new and existing investors in Australia and internationally.*", - "page_start": 4, - "page_end": 4, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "Jeff Fisher Senior Vice President – Production\n\n# **What advantages does CHK's unique vertical integration strategy provide?**\n\nChesapeake has built a large inventory of low-risk natural gas and liquids-rich plays that we plan to develop aggressively over the next two decades. As a result, we know that our company will consistently utilize a tremendous (and growing) amount of oilfield services for this resource development. 
This high level of planned drilling activity will create value for the provider of oilfield services, and Chesapeake's strategy is to capture a portion of this value for our shareholders rather than transfer it to third-party vendors whose interests and investments are not always aligned with ours. To date, Chesapeake has invested in drilling rigs, rental tools, water management equipment, trucking, compression equipment, midstream services, and most recently pressure pumping and fracture stimulation equipment. Chesapeake's activities require a high level of planning and project coordination that is best accomplished through vertical integration and ownership of the oilfield services we utilize. This approach creates a multitude of cost savings, an alignment of interests, operational synergies, greater capacity of equipment, increased safety and better coordinated logistics. In addition, Chesapeake's control of a large portion of the oilfield service equipment it utilizes provides a unique advantage to control the timing of leasehold development. Simply put, faster development of resources maximizes the present value of leasehold. This has been a key advantage for\n\nChesapeake over the past three years as the company has monetized leasehold investments at premium values through our joint ventures.\n\n# **Will U.S. natural gas prices reconnect with world natural gas prices?**\n\nNatural gas is a premium product and a cleaner-burning fuel than coal or oil-related products, including gasoline, diesel and heating oil. Despite this fact, over the past two years natural gas has received a low price in the U.S. market relative to coal and oil-related products, primarily as a result of a temporary surplus of production. This surplus has been principally caused by high levels of drilling activity as producers focused on holding by production (HBP) leasehold in new highly productive, low cost natural gas shale plays. In essence, producers reinvented U.S. 
supply ahead of reinventing of U.S. demand. We believe HBP-incentivized drilling on natural gas plays will largely come to an end in 2012, and U.S. demand will soon also be reinvented to allow U.S. natural gas prices to reconnect to price parity with world natural gas prices that have risen to more than double U.S. natural gas prices.\n\nThis surge in world natural gas prices has been in response to $100+ oil prices and surging global liquefied natural gas (LNG) demand. In our view, the arbitrage in value between competing fuels is simply too wide. Capital and ideas will flow toward projects that make the most of this price disparity. Chesapeake and other companies are working to create the ability to export natural gas from the U.S. Gulf Coast and other regions in the form of LNG to premium Pacific Rim, European and South American markets, perhaps as soon as 2015. This initiative will also be aided by the widening of the Panama Canal to accommodate large LNG vessels. Furthermore, we believe that the\n\nJeff Mobley Senior Vice President – Investor Relations and Research\n\ncurrent price disparity between natural gas and oil will increasingly lead to greater use of natural gas in the U.S. transportation system. Whether it be compressed natural gas (CNG) for medium and light-duty vehicles, LNG for heavy-duty vehicles or the commercialization of gas-to-liquids (GTL) natural gas refineries that supplement the U.S. liquid fuel supply stream, we believe that the marketplace will increasingly utilize and embrace natural gas. Chesapeake is working with industry, public policymakers and potential partners on each of these demand reinvention opportunities. Natural gas is clean, affordable, abundant and American. 
Why *shouldn't* it trade at a BTU premium in the years ahead?\n\nNick Dell'Osso Executive Vice President and Chief Financial Officer\n\n# **Why is an investment grade rating on its debt securities important to CHK?**\n\nWe believe that Chesapeake will benefit in multiple ways from an investment grade rating on our debt securities, which we hope to achieve in 2012 or 2013. First, a higher rating would obviously lower the company's borrowing costs over time. In addition, other less easily quantifiable benefits will also accrue to Chesapeake. Higher debt ratings would result in lower costs on long-term firm transportation contracts that we enter into in order to market our natural gas and oil production as well as facilitate our ability to enter into long-term contracts to sell our natural gas production to international buyers in the form of LNG. An improved rating will also enhance Chesapeake's ability to further attract world-class energy companies to participate in our joint venture projects, which profitably monetize a portion of our leasehold investments and also accelerate the development of our resource base. Finally, and perhaps most importantly, we believe that reduced financial leverage and an investment grade rating will lead to a higher stock price and provide further interest from worldwide equity investors.", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "# AMERICA'S PREMIER ENERGY RESOURCE BASE »\n\nChesapeake is the second-largest producer of U.S. natural gas and a Top 15 producer of U.S. oil and natural gas liquids. The company has built a large resource base of high-quality U.S. assets in the Barnett, Haynesville, Bossier, Marcellus and Pearsall natural gas shale plays and in the Granite Wash, Cleveland, Tonkawa, Mississippian, Bone Spring, Avalon, Wolfcamp, Wolfberry, Eagle Ford, Niobrara and Utica unconventional liquids plays. 
In 2010 Chesapeake increased its focus on applying the geoscientific and horizontal drilling expertise gained from developing unconventional natural gas shale plays to unconventional liquids-rich plays. Our goal is to reach a balanced mix of natural gas and liquids revenue as quickly as possible through organic drilling. We invested approximately $4.7 billion in 2010, net of divestitures, primarily in liquids-rich acreage to provide the foundation for this shift toward more profitable plays.\n\nWe own interests in approximately 46,000 producing natural gas and oil wells, and in 2010 we produced approximately 1.035 trillion cubic feet of natural gas equivalent (tcfe) for an average of 2.8 billion cubic feet of natural gas equivalent (bcfe) per day. At year-end 2010, our proved reserves were 17.1 trillion cubic feet of natural gas equivalent, of which 90% were natural gas and all were onshore in the U.S. We have also captured an inventory of up to 115,000 unrisked net future drilling opportunities — almost 50 years worth of drilling opportunities — on approximately 13.2 million net leasehold acres in the U.S. The following highlights Chesapeake's ownership position in our key operating areas.", - "page_start": 17, - "page_end": 17, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "### *Financial Position*\n\nIn May 2014, the borrowing capacity under our credit facilities increased from an aggregate of $63 million to $135 million. The increase in the borrowing capacity was driven by the significant uplift of the Company's proved oil and gas reserves as at 31 December 2013. In conjunction with the increase in the Company's borrowing capacity, the Company expanded the syndicate of banks under the Senior Credit Facility. 
Bank of America Merrill Lynch and the Bank of Nova Scotia have now joined the bank group which is led by Wells Fargo.\n\nIn July 2014, the borrowing capacity increased an additional net $10 million, to $145 million, after taking into consideration the removal of proved oil and gas reserves associated with the DJ and Williston Basin dispositions and the development of proved oil and gas reserves in the Eagle Ford Formation.\n\nAt 31 December 2014, the Company had $130 million outstanding under our credit facilities and $15 million available under our borrowing capacity. Ending cash at 31 December 2014 was $69.2 million.\n\n### *Cashflow*\n\nCash provided by operating activities for the year ended 31 December 2014 increased 104.5% to $128.1 million compared to the prior year. This increase was primarily due to receipts from sales increasing $85.7 million, or 101.2%, to $170.4 million, while keeping payments to suppliers and employees relatively stable with an increase of $8.2 million, or 37.7%, to $30.0 million. See Review of Operations for more information.\n\nCash used in investing activities for the year ended 31 December 2014 increased $158.9 million, or 96.7%, to $323.2 million. This increase is due to successful implementation of the Company's strategy to develop and grow the reserves from our high working interest, repeatable resource plays, primarily in the Eagle Ford. Due to funding available to the Company through asset sales, capital raises and credit facilities, the Company was able to accelerate its 2015 drilling program into 2014. However, due to the reduction in crude oil prices in the fourth quarter of 2014 and continuing into early 2015, the Company will scale back its drilling program to concentrate on limited drilling obligations to hold Eagle Ford acreage during the 2015 year.\n\nCash provided by financing activities for the year ended 31 December 2014 increased $123.1 million, or 277.0%, to $167.6 million. 
This increase is a result of the increased availability and draws under the Company's credit facilities and proceeds received in a private placement of shares. In February 2014, the Company completed a private placement in which we sold 84.2 million ordinary shares at A$0.95 per share, resulting in net proceeds of approximately $68.4 million. The first tranche of 63.7 million shares was issued in March 2014 and the second tranche of 20.5 million shares was issued in April 2014.\n\n#### **Matters Subsequent to the End of the Financial Year**\n\nSubsequent to 31 December 2014, an additional $13.9 million was drawn-down the credit facilities, bringing total outstanding debt to $143.9 million, with undrawn funds of $1.1 million.\n\nIn January 2015, the company acquired three leases totalling approximately 14,180 net acres in the Eagle Ford for approximately $13.4 million.\n\n### **Future Developments, Prospects and Business Strategies**\n\nThe Group's business strategies and prospects for growth in future financial years are presently concentrated on growing the value of the Group's current resource plays through direct leasing from mineral owners, small acquisitions of producing properties, drilling inventory within the Group's current balance sheet capabilities, and development of the Group's current acreage. Further information on likely development in the operations of the Group and expected results of operations has not been included because the Directors believe it would result in unreasonable prejudice to the Group.", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "in cash and drilling carries. This was CNOOC's second investment with Chesapeake and its second investment in the U.S. onshore E&P industry. We are currently drilling with five rigs in this play and expect to accelerate our drilling to 15 rigs by year-end 2013. 
We believe our leasehold position could support the drilling of up to 7,600 additional net wells.\n\nCleveland, Tonkawa and Mississippian Plays — These three liquids-rich plays of the Anadarko Basin should become significant contributors to our growth in the years ahead. The Cleveland and Tonkawa plays are tight sandstones located in western Oklahoma and the eastern Texas Panhandle, and they provide returns that are some of the very best in\n\n# **Fracking Operations Transparency**\n\nNatural gas and oil operations continue to grow and expand across the country as vast new resources are unlocked through the process of hydraulic fracturing, or \"fracking,\" a proven technology that has been used safely and successfully in the completion of more than 1 million U.S. wells since 1949.\n\nDuring the fracking process, a mixture of approximately 99% water and sand, combined with a small amount of chemical additives, is pumped at high pressure into a targeted formation to create small fissures or fractures in the surrounding rock or shale. These fractures are kept propped open by the sand to allow the natural gas or oil to freely flow into a wellbore.\n\nIn our continuing efforts to educate the public and alleviate common misconceptions about hydraulic fracturing, Chesapeake became one of the first energy companies to disclose the additives used in the process. We are actively participating in a national, publicly accessible web-based registry developed by the Ground Water Protection Council and the Interstate Oil and Gas Compact Commission, with support of the U.S. Department of Energy. The registry allows for fracking additives to be reported on a well-by-well basis and offers public access to that material on its website. 
Chesapeake began loading well completion data onto the registry on February 15, 2011, for wells where completion reports have been filed with the appropriate state agencies.\n\nTo view the listings and learn more about the fracking process, the additives used and measures taken to protect fresh ground water aquifers, visit www.fracfocus.org.\n\nthe company. We have acquired approximately 600,000 net leasehold acres prospective for these plays and have drilled 75 net wells to date. We are currently using eight rigs and believe our leasehold could support the drilling of up to an additional 3,700 net wells.\n\nThe Mississippian fractured carbonate is primarily an oil play and is located on the Anadarko Basin shelf of northern Oklahoma and southern Kansas. We have acquired approximately 900,000 net leasehold acres prospective for this play and have drilled 40 net wells to date. We are currently using four rigs and believe our leasehold could support the drilling of up to an additional 6,000 net wells. This is an area where we anticipate bringing in a joint venture partner later in 2011 or in early 2012.\n\nBone Spring, Avalon, Wolfcamp and Wolfberry Plays — These four liquids-rich plays of the Permian Basin should also become significant contributors to our growth in the years ahead. To date, we have acquired approximately 560,000 net leasehold acres that we believe are prospective for these plays and have drilled 155 net wells. We are currently using eight rigs and believe our leasehold could support the drilling of up to an additional 4,400 net wells.\n\nUtica Shale — Chesapeake has high hopes for this emerging shale play in eastern Ohio, especially because it would become the fourth large unconventional play (along with the Haynesville and Bossier shales and the Mississippian carbonate) that Chesapeake has discovered. 
In addition, we believe the play will have three distinct components (oil,\n\n*A prime example of Best Management Practices for fracture stimulation, this well in Bradford County, Pennsylvania, is now producing natural gas from the Marcellus Shale. A closely regulated completion technique, fracking is necessary to allow natural gas or oil to freely flow into the wellbore.*", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_CHK_2010.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_SEA_2014.pdf", - "query": "I heard that Sundance Energy has acquired land in South Texas in July 2014, where is it?", - "target_page": 21, - "target_passage": "In July 2014, the Company completed the acquisition of approximately 5,700 net Eagle Ford acres in Dimmit County, South Texas", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "# **NOTE 2 – BUSINESS COMBINATIONS**\n\n### **Acquisitions in 2014**\n\nThere were no business acquisitions for the year ended 31 December 2014.\n\n# **Acquisition in 2013**\n\nOn 8 March 2013, the Company acquired 100% of the outstanding shares of Texon Petroleum Ltd (\"Texon\", whose name was changed to Armadillo Petroleum Ltd), an Australian corporation with oil and gas assets in the Eagle Ford formation in the United States. The Company acquired Texon to gain access to its existing production and drilling inventory in the Eagle Ford formation. As consideration for substantially all of the net assets of Texon, the Company issued 122.7 million ordinary shares (approximately 30.6% of the total outstanding shares immediately subsequent to the acquisition), which had a fair value of $132.1 million on the acquisition date and net cash consideration of $26.3 million for a total purchase price of $158.4 million. 
The net cash consideration includes a $141.0 million premerger purchase by the Company of certain Texon oil and gas properties, offset by $114.7 million of cash acquired at the time of the merger. The current income tax liability, included in accrued expenses, and deferred tax liability of $33.4 million and $16.9 million, respectively, are comprised of tax liabilities assumed as at the acquisition date and an increase in the tax liability related to the incremental acquisition date fair value of the acquired development and production and exploration and evaluation assets as compared to Texon's historical basis.\n\nThe following table reflects the final adjusted assets acquired and the liabilities assumed at their fair value or otherwise where specified by AASB 3/IFRS 3 – *Business Combinations* (in thousands):\n\n| Fair value of assets acquired: | |\n| --- | --- |\n| Trade and other receivables | $ 5,604 |\n| Other current assets | 456 |\n| Development and production assets | 53,937 |\n| Exploration and evaluation assets | 150,474 |\n| Prepaid drilling and completion costs | 3,027 |\n| Amount attributable to assets acquired | 213,498 |\n| Fair value of liabilities assumed: | |\n| Trade and other payables | 119 |\n| Accrued expenses | 37,816 |\n| Restoration provision | 277 |\n| Deferred tax liabilities | 16,884 |\n| Amount attributable to liabilities assumed | 55,096 |\n| Net assets acquired | $ 158,402 |\n| Purchase price: | |\n| Cash and cash equivalents, net of cash acquired | $ 26,310 |\n| Issued capital | 132,092 |\n| Total consideration paid | $ 158,402 |\n\nThe net assets recognized in the 31 December 2013 financial statements were based on a provisional assessment of their fair value.", - "page_start": 74, - "page_end": 74, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "Dollar and share amounts in millions except per share, per option and per unit amounts\n\nRent expense for 2014, 2013 and 2012 was as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 
|\n| --- | --- | --- | --- |\n| Minimum rent: | | | |\n| Store locations | $170 | $145 | $124 |\n| Offices, warehouses and equipment | 36 | 35 | 32 |\n| Percentage rent | 14 | 14 | 14 |\n| Property incentives | (83) | (69) | (65) |\n| Total rent expense | $137 | $125 | $105 |\n\nThe rent expense above does not include common area charges, real estate taxes and other executory costs, which were $88 in 2014, $81 in 2013 and $74 in 2012.\n\n#### **NOTE 11: COMMITMENTS AND CONTINGENT LIABILITIES**\n\nOur estimated total purchase obligations, capital expenditure contractual commitments and inventory purchase orders were $2,092 as of January 31, 2015. In connection with the purchase of foreign merchandise, we have outstanding trade letters of credit totaling $1 as of January 31, 2015.\n\nPlans for our Manhattan full-line store, which we currently expect to open in late 2018 to 2019, ultimately include owning a condominium interest in a mixed-use tower and leasing certain nearby properties. As of January 31, 2015, we had approximately $125 of fee interest in land, which is expected to convert to the condominium interest once the store is constructed. We have committed to make future installment payments based on the developer meeting pre-established construction and development milestones. Our fee interest in the land is currently and will continue to be subject to lien by project development lenders until project completion or fulfillment of our existing installment payment commitment. In the unlikely event that this project is not completed, the opening may be delayed and we may potentially be subject to future losses or capital commitments in order to complete construction or to monetize our previous investments in the land.\n\n#### **NOTE 12: SHAREHOLDERS' EQUITY**\n\nIn February 2013, our Board of Directors authorized a program to repurchase up to $800 of our outstanding common stock, through March 1, 2015. 
In September 2014, our Board of Directors authorized a new program to repurchase up to $1,000 of our outstanding common stock through March 1, 2016, in addition to the remaining amount available for repurchase under the previously authorized program. The following is a summary of the activity related to our share repurchase programs in 2012, 2013 and 2014:\n\n| | | Average price | |\n| --- | --- | --- | --- |\n| | Shares | per share | Amount |\n| Capacity at January 28, 2012 | | | $310 |\n| February 2012 authorization (ended February 1, 2014) | | | 800 |\n| Shares repurchased | 14.0 | $51 | (717) |\n| Capacity at February 2, 2013 | | | 393 |\n| February 2013 authorization (ends March 1, 2015) | | | 800 |\n| Shares repurchased | 9.1 | $57 | (523) |\n| Capacity at February 1, 2014 | | | 670 |\n| September 2014 authorization (ends March 1, 2016) | | | 1,000 |\n| Shares repurchased | 8.9 | $66 | (595) |\n| Capacity at January 31, 2015 | | | $1,075 |\n\nThe actual number and timing of future share repurchases, if any, will be subject to market and economic conditions and applicable SEC rules.\n\nWe paid dividends of $1.32 per share in 2014, $1.20 per share in 2013 and $1.08 per share in 2012. In February 2015, we declared a quarterly dividend of $0.37 per share, increased from a quarterly dividend of $0.33 per share in 2014.", - "page_start": 66, - "page_end": 66, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "# ACHIEVING INNOVATIVE COMMERCIALISATION\n\nOn top of exploration and new ventures growth opportunities, Santos has a large inventory of gas fields that are yet to be committed to gas contracts. 
These fields, known as contingent resources, represent significant opportunities for Santos.\n\nEach year Santos works towards commercialising these fields by finding new gas contracts or extending existing contracts so that they can be booked as Proven (1P) or Proven plus Probable (2P) reserves.\n\nSantos' contingent gas resources are largely located offshore southern Australia and Western Australia, in the Bonaparte Basin offshore northern Australia and onshore Papua New Guinea.\n\nSantos continued to deliver on gas commercialisation during 2004, commercialising 27 million boe during the year. Santos also achieved positive contract price reviews for gas sales that were well above the indexed levels.\n\n## **UNIQUE ENERGY HUBS DELIVER GAS SWAPS**\n\nSome of the most important gas commercialisation achievements for the year were the innovative gas swaps agreements that were only possible because of Santos' unique spread of assets across key Australian gas hubs.\n\nSantos and the other South West Queensland Gas Producers announced a coal seam methane gas swap in May to allow each party to supply the other party's contractual obligations in different states via the Moomba gas hub in central Australia. 
This arrangement for 200 PJ meant that Origin could avoid building a pipeline and that Santos could capture a share of the saving.\n\nGas swapping will commence in 2005 and could continue until the end of 2011.\n\nA second gas swap, from eastern Queensland to Gippsland, moved gas through three states and five joint ventures, expanding market horizons for partners and providing backup options to customers.\n\n## **EXPANDED CASINO CONTRACT ENHANCES VALUE**\n\nThe commercialisation of the Casino gas field in the Otway Basin, offshore southern Australia, continued during 2004 with an increase in the quantity of gas being sold under the initial term sheet signed in September 2003 with TXU for 293 PJ.\n\nWhen the project was sanctioned in October 2004, the joint venture announced an extension to the original Gas Sales Agreement to supply up to 420 PJ of gas, and possibly another 105 PJ, over 12 years for the Victorian or South Australian markets.\n\nThe Casino contracts are unique in that the reserves have been contracted prior to the field being fully appraised to confirm the quantity of gas available. This has allowed the joint venture to undertake appraisal drilling and near field exploration programs with the knowledge that all of the gas likely to be discovered will be taken, thereby significantly reducing the risk. This shortens the time from discovery to production and delivers profits to Santos and its shareholders sooner.\n\n## **WA CONTRACTS FAST-TRACK JOHN BROOKES**\n\nSantos and its co-venturer Apache won two significant gas contracts in Western Australia which resulted in the fast tracking and sanctioning of the John Brookes gas field in the Carnarvon Basin.\n\n**ENERGY HUB STRATEGY**\n\nThe successful appraisal of the field in late 2003 and early 2004 significantly increased the available gas reserves. 
The decision to bring the field into production by mid-2005 enabled active marketing of gas above that already allocated to support the declining East Spar field.\n\nIn a separate move, designed to enhance future commercialisation opportunities, the joint venture equity interests in the East Spar and the John Brookes fields were aligned through an acquisition program which created an important production hub at Varanus Island.\n\nJohn Brookes has an expected field life of more than 15 years which could be further extended by a development of the Reindeer field in later years.\n\nIn the first contract, the joint venture agreed to supply Newcrest Mining with 120 PJ of gas over 15 years at a maximum rate of 25 TJ per day. Newcrest will use the gas for power generation at the Telfer gold mine in the Pilbara region of Western Australia.\n\nThe second John Brookes contract is to supply 58 PJ of gas over 20 years to EDL to supply four gas-fired power stations under construction as part of its West Kimberly Power project in Western Australia.\n\nThe gas will be converted to LNG at a new facility to be built at Karratha. The LNG will then be transported by road tankers to fuel the gas-fired power stations in Broome, Derby, Fitzroy Crossing and Halls Creek. 
The contract will commence in the first half of 2006.", - "page_start": 21, - "page_end": 21, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "# ENHANCING THE PORTFOLIO\n\nIn 2004, Santos continued its normal business of actively managing its portfolio through the divestment of non-core assets and the acquisition of assets that fit well with existing Santos assets or can add to the ability of the Company to meet its strategic goals.\n\nAs a result of this activity, Santos realised an after-tax profit of $47.4 million on oil and gas asset sales and will continue to high-grade its portfolio on an ongoing basis.\n\nSantos entered into an agreement with PT Medco during the first half of 2004 to acquire some of Novus Petroleum's Indonesian and Cooper Basin assets conditional on the success of PT Medco's takeover offer for Novus, which was ultimately successful.\n\nSpecifically, Santos announced in September 2004 that it had executed formal agreements to acquire an additional 4.75% of the South Australian Cooper Basin, 18% of the Brantas PSC and 9% of the Kakap PSC from Medco for US$110 million. On 31 December 2004, Santos paid Medco US$98 million for the majority of the assets, with payment for the remaining 2.75% of Kakap PSC expected to be made in the first quarter of 2005.\n\nThis acquisition was an important piece in the strategic puzzle to tie up access to follow-up potential from the successful exploration at Jeruk and to provide a production base for the newly established Indonesian core area.\n\nAlso during the first half of 2004, Santos divested its remaining 18.4% shareholding in Magellan Petroleum Australia Ltd, raising approximately $10.6 million.\n\nEarly in the second half of 2004, Santos concluded the sale of its non-core onshore Otway Basin interests to Origin Energy for $25.75 million. 
This sale resulted in an after-tax profit of $18 million that was booked in 2004.\n\nIn addition, an exploration joint venture was formed with ConocoPhillips in the NT/P61 block offshore Darwin, Northern Territory, to drill the Caldita well and provide Santos with access rights to a potential expansion of the Wickham Point LNG facility. This deal further enhances Santos' infrastructure strategy to leverage its position within vital infrastructure to improve shareholder value while reducing the risk profile of the wildcat exploration program.\n\nDuring the third quarter, Santos expanded its offshore Victorian gas interests to 50% in both the Patricia-Baleen and the Sole gas fields through the acquisition from Trinity Gas Resources of an additional 30% interest in the Patricia-Baleen gas field and associated processing facilities in eastern Victoria and an additional 15% interest in the Sole gas field.\n\nSantos earned its 30% additional equity in the Patricia-Baleen gas field by meeting Trinity's remaining share of drilling costs on the Baleen 4 well which was drilled successfully as a sidetrack well of Baleen 3. Santos will earn its 15% additional equity in the Sole gas field by meeting certain development costs on behalf of Trinity, if and when the Sole joint venture partners proceed to develop this gas resource.\n\nThe acquisition of these Victorian gas interests strengthens Santos' domestic gas and infrastructure strategy that was further enhanced by the OMV purchase announced early in 2005. 
Importantly, Santos is now the operator of the strategic Orbost gas processing facility.\n\nLate in the year, Santos sold its 18.02% share in the Carpentaria Gas Pipeline between Ballera and Mount Isa in Queensland to Australian Pipeline Trust for $59 million, resulting in a $21 million after-tax profit that was booked in the 2004 financial year.", - "page_start": 24, - "page_end": 24, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n#### *Property Acquisitions*\n\nWhen investment properties are acquired, Management considers whether the acquisition represents the acquisition of an asset or a business. The Company accounts for an acquisition as a business combination where an integrated set of activities is acquired in addition to the property. More specifically, consideration is made of the extent to which significant processes are acquired and, in particular, the extent of ancillary services provided by the subsidiary (e.g., maintenance, cleaning, security, bookkeeping, leasing operations, etc.).\n\nManagement believes that the majority of the Company's acquisitions will be classified as asset acquisitions. During the acquisition of most properties, Killam buys the asset itself and any short‑term leases that are in place. Generally, Killam does not purchase any business systems or processes with a property. Management would consider an acquisition to be a business combination if all the following criteria were met:\n\n- The acquisition includes a property portfolio (multiple buildings),\n- A significant staff complement is included, including a maintenance team, leasing representatives and property management personnel, and\n- Systems are acquired and continue to be incorporated into operations.\n\n#### *Investment Properties*\n\nThe Company's accounting policies relating to investment properties are described in Note 2(F). 
In applying this policy, judgment is applied in determining whether certain costs are additions to the carrying amount of the property and, for properties under construction, identifying the point at which substantial completion of the property occurs and identifying the directly attributable borrowing costs to be included in the carrying value of the development property. Judgment is also applied in determining the extent and frequency of independent appraisals.\n\n#### *Leases*\n\nThe Company has entered into residential property leases on its investment property portfolio. The Company has determined, based on an evaluation of the terms and conditions of the arrangements, that it has not transferred all the significant risks and rewards of ownership of these properties and accounts for the contracts with tenants as operating leases.\n\n#### *Financial Instruments*\n\nThe Company's accounting policies relating to financial instruments are described in Note 2(K). The critical judgments inherent in these policies relate to applying the criteria set out in IAS 39 to designate financial instruments as fair value through profit and loss \"FVTPL\", and determining whether the Company has significant influence over investees with which it has contractual relationships in addition to the financial instrument it holds.\n\n#### *Taxes*\n\nThe Company is subject to income and capital gains taxes in numerous jurisdictions. Significant judgment is required to determine the total provision for current and deferred taxes. There are many transactions and calculations for which the ultimate tax determination and timing of payment is uncertain. The Company recognizes liabilities for current taxes based on estimates of whether additional taxes will be due. Where the final tax outcome of these matters is different from the amounts that were initially recorded, such differences will impact the income and deferred tax provisions in the period in which the determination is made. 
Deferred tax assets and liabilities are recognized on a net basis to the extent they are relating to the same fiscal entity and fall due in approximately the same period.\n\n#### *Consolidation and joint arrangements*\n\nThe Company has determined that it controls and consolidates the subsidiaries where it owns a majority of the shares. The Company is part owner of one property in which it has a 47% interest. The Company has determined that it does control this property as it operates and manages the property, governs the financial and operating policies, and has the power to cast the majority of the votes at meetings of the board of directors given the widely held distribution of the remaining ownership percentage. This property is accounted for on a consolidated basis.\n\nThe Company is part owner of an investment in which it has a 25% ownership interest. The Company has determined that it does not have control as it holds less than a 50% ownership interest. This investment is a joint arrangement which is separately incorporated. It is deemed that the joint arrangement is separate from the Company, having no direct interest in the assets and obligation of the joint arrangement. The Company has (after considering the structure and form of the arrangement, the terms agreed by the parties in the contractual arrangement and the Company's rights and obligations arising from the arrangement) classified its interest as a joint venture under IFRS 11. As a consequence it accounts for its investment in the joint venture using the equity method.\n\n#### **Estimates**\n\n#### *Valuation of Investment Properties*\n\nThe fair value of investment properties is partially determined by independent real estate valuation experts (the \"External Valuator\") using recognized valuation techniques and partially by Management. The External Valuator uses the capitalization of net income method to determine the fair market values. 
In some cases, the fair values are corroborated by recent real estate transactions with similar characteristics and location to those of the Company's assets. Management's internal valuation model is also based on a capitalization of NOI by property, using property specific quarterly cap‑rates, provided by an independent qualified valuation professional.", - "page_start": 60, - "page_end": 60, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "horizontal wells drilled just to the Bossier may not always hold Haynesville rights. Therefore, Chesapeake and other producers have been drilling aggressively to hold all rights through the Haynesville before the initial three-year term of a typical lease expires. As a result, there has not been much drilling to the Bossier to date. However, once our leases are held by production (HBP) by Haynesville drilling (we expect to be largely complete with HBP drilling by year-end 2011 and completely finished by year-end 2012), we will begin developing the Bossier Shale more aggressively in 2013. In the Bossier play, we own 205,000 net leasehold acres and estimate we could drill up to 2,600 net wells in the years ahead.\n\nlargest and most respected European energy companies. In this transaction, we sold Statoil 32.5% of our Marcellus assets for $3.375 billion in cash and drilling carries. Today, having sold 32.5% of our original 1.8 million net leasehold acres, we have returned to owning 1.7 million net leasehold acres in the play and are the industry's leading leasehold owner, largest producer and most active developer. 
We are producing from more than 100 net wells in the Marcellus on our 1.7 million net acres, are currently drilling with 32 rigs and estimate we could drill up to 21,000 additional net wells in the years ahead.\n\n> Colony and Texas Panhandle Granite Wash — These liquids-rich plays generate the company's highest returns (routinely more than 100%) and provided the inspiration\n\n*Generating the highest returns in the company, plays like the Oklahoma Colony Granite Wash inspire Chesapeake to find other liquids-rich opportunities.*\n\nMarcellus Shale — We first became aware of the Marcellus in 2005 when we were negotiating our $2.2 billion acquisition of Appalachia's second-largest natural gas producer, Columbia Natural Resources, LLC. In 2007 we aggressively accelerated our Marcellus leasehold acquisition efforts and began to prepare for our first drilling activities. By early 2008, we had determined the Marcellus could be prospective over an area of approximately 15 million net acres (approximately five times larger than the prospective Haynesville core area and 10 times larger than the Barnett core area).\n\nAfter acquiring 1.8 million net leasehold acres, we entered into a joint venture agreement in late 2008 with Oslo-based Statoil, one of the The very significant upward trajectory of value creation that Chesapeake is on today is primarily driven by the quality of our assets, which feature dominant positions in 16 of the 20 most important major unconventional natural gas and liquids plays in the U.S.\n\nfor the company to find other liquids-rich plays in 2010. 
The Granite Wash, and other plays with liquids-rich gas production streams, provide the strongest economics in the industry today because they possess the best of both worlds: high-volume natural gas production along with\n\nsignificant volumes of highly valued liquids that dramatically increase investment returns.\n\nWe are producing from approximately 150 net Granite Wash wells, are currently drilling with 16 rigs and estimate we could drill up to 1,700 additional net wells on our 215,000 net leasehold acres in the years ahead. Based on current NYMEX futures prices for natural gas and oil, each Granite Wash well should generate approximately $11.5 million of present value (or up to an undiscounted total of $19.5 billion for all 1,700 wells), making it obvious why finding, leasing and developing more unconventional liquids-rich plays was Chesapeake's number one priority for 2010. We were very successful", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "**CEO'S REPORT**\n\n# *Dear Fellow Shareholders,*\n\n*2014 Review—2014 was a year of stark economic contrasts in our industry. During the first half as in the past several years, historically volatile West Texas Intermediate oil prices seemed range bound between $80 and $110 with geopolitical events driving prices towards the ceiling and demand risks pushing prices towards the floor of the range.*\n\nIn the US, E&P companies were spending record amounts of capital, fueled by cheap and plentiful debt, on horizontal drilling and completions to drive production growth while making material strategic acquisitions in order to increase their long-term exposure to oil prices.\n\nThe easy credit environment caused asset prices to increase significantly to the point where, in our view, risk adjusted returns on new acquisitions were threatening cyclical lows. 
In line with our strategy, Sundance had monetized several mature assets realizing\n\n| | Sundance's Performance versus the ASX 200 | | |\n| --- | --- | --- | --- |\n| | | ANNUAL PERCENTAGE CHANGE | |\n| | IN 2P PV10 | | |\n| | (NET ASSET VALUE) | IN SUNDANCE | |\n| YEAR | PER DEBT ADJUSTED SHARE | PRICE PER SHARE | IN ASX200 |\n| 2014 | 21.6% | -48.0% | 1.1% |\n| 2013 | 63.3% | 29.9% | 15.1% |\n| 2012 | -15.6% | 87.8% | 14.6% |\n| 2011 | 59.7% | -44.6% | -14.5% |\n\n~$50 million in current period gains while freeing up ~$165 million in invested capital.\n\nWe primarily reinvested this capital in production growth and cash flow with only about $75 million reinvested in acquiring oil and gas leases and producing properties. This resulted in our production increasing from 5,028 BOEPD to 9,434 BOEPD by December 2014 and full year EBITDAX increasing $73.8 million to $126.4 million in 2014. Had prices stayed steady, we likely would have generated earnings before income taxes of over $85 million and a return on capital in excess of 20%.\n\nOur second capital priority for the year was to conclude the appraisal of the Woodford formation in our Logan County, Oklahoma assets. We viewed this relatively modest, but higher risk, investment as having a 25% chance of success with a 15x upside. Unfortunately, we met with mixed success in our appraisal activities proving that in today's onshore US oil and gas industry that the best absolute returns are generated by drilling in proved regions. There are plenty of solid opportunities to efficiently grow the business without exposure to undue geologic risk.\n\nLike many prior bubbles driven by new technologies, the second half of the year saw the pricing environment come crashing down around us. 
The market became fundamentally unbalanced, driving prices down almost 50% and rendering material portions of global oil and gas development uneconomic.\n\nOur peers went from talking about their growth prospects to fretting about cash costs and liquidity, a stark contrast from the go-go growth times which existed in the first half of the year. This shift in industry strategy has now come in line with our general business philosophy—in the resource space, low-cost, low debt businesses will survive and thrive across cycles; and, relative to our US onshore peer group, Sundance boasts a top 15% cost structure and balance sheet.\n\nOur position as a cost and balance sheet leader is underpinned by two key philosophies: 1) investment in a leading technical team that is encouraged to take reasonable risks to improve recoveries and/or reduce costs, and 2) a ruthless focus on portfolio returns as demonstrated by our consistent track record of divesting assets that don't fit our strategic objectives or promise lower forward return profiles.\n\nOur high quality Eagle Ford acreage produces strong recoveries at reasonable costs and thus generates good returns, even in a low price environment. Because of these characteristics, the majority of our forward capital is expected to be invested generating strong growth and shareholder returns in the Eagle Ford.\n\nWith mixed appraisal results in the Woodford, Sundance's Mississippian/Woodford position generally requires higher prices to meet our hurdle rates. Because of the mixed Woodford results, higher overall unit costs, and depressed pricing at year end, we recognized an impairment charge of ~$60 million on these assets at year 2014. 
Had prices maintained their strength, we likely would have been in a position to recover our investment from these assets.", - "page_start": 5, - "page_end": 5, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "### **NOTES TO CONSOLIDATED FINANCIAL STATEMENTS (All tables in millions, except per share data) - (Continued)**\n\n### **Capitalized Landfill Costs**\n\nCapitalized landfill costs include expenditures for land, permitting costs, cell construction costs and environmental structures. Capitalized permitting and cell construction costs are limited to direct costs relating to these activities, including legal, engineering and construction costs associated with excavation, natural and synthetic liners, construction of leachate collection systems, installation of methane gas collection and monitoring systems, installation of groundwater monitoring wells, and other costs associated with the development of the site. Interest is capitalized on landfill construction projects while the assets are undergoing activities to ready them for their intended use. Capitalized landfill costs also include final capping, closure and post-closure assets accrued in accordance with SFAS 143 as discussed below.\n\nCosts related to acquiring land, excluding the estimated residual value of unpermitted, non-buffer land, and costs related to permitting and cell construction are depleted as airspace is consumed using the units-of-consumption method.\n\nCapitalized landfill costs may also include an allocation of purchase price paid for landfills. For landfills purchased as part of a group of several assets, the purchase price assigned to the landfill is determined based upon the discounted expected future cash flows of the landfill relative to the other assets within the acquired group. 
If the landfill meets the Company's expansion criteria, the purchase price is further allocated between permitted airspace and expansion airspace based upon the ratio of permitted versus probable expansion airspace to total available airspace. Landfill purchase price is amortized using the units-of-consumption method over the total available airspace including probable expansion airspace where appropriate.\n\n### **Final Capping, Closure and Post-Closure Costs**\n\nOn January 1, 2003, the Company changed the methodology it used to record final capping, closure and post-closure expense in accordance with SFAS 143. SFAS 143 does not change the basic landfill accounting policies followed by the Company and others in the waste industry. Through December 31, 2002, the industry has generally amortized capitalized costs and accrued future final capping, closure and post-closure obligations using the units-of-consumption method as cubic yards of available airspace are consumed over the life of the related landfill. This practice is referred to as life cycle accounting and will continue to be followed except as modified by SFAS 143 as discussed below.\n\nThe table below reflects significant changes between the Company's historical methodology and the methodology the Company currently uses to account for final capping, closure and post-closure activities and for methane gas collection systems:\n\n| Description | Historical Practice | Current Practice (Effective January 1, 2003) |\n| --- | --- | --- |\n| DEFINITIONS: | | |\n| Final Capping | Costs related to installation of the | No change. |\n| | components that comprise the | |\n| | permanent final cover over areas of a | |\n| | landfill where airspace capacity has | |\n| | been consumed. 
| |\n| Closure | Includes routine maintenance costs | No change, except that it includes |\n| | incurred after a site ceases to accept | the final portion of the methane gas |\n| | waste, but prior to being certified | collection system to be constructed. |\n| | closed. | |", - "page_start": 75, - "page_end": 75, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "*Transfer and Disposal Services.* We own or operate 96 transfer stations. We deposit waste at these stations, as do other private haulers and municipal haulers, for compaction and transfer to trailers for transport to disposal sites or recycling facilities. As of December 31, 2004, we owned or operated 58 landfills, which had approximately 8,904 permitted acres and total available permitted and probable expansion disposal capacity of approximately 1.7 billion in-place cubic yards. The in-place capacity of our landfills is subject to change based on engineering factors, requirements of regulatory authorities and the ability to expand our sites successfully. Some of our landfills accept non-hazardous special waste, including utility ash, asbestos and contaminated soils. See \"- Properties.\"\n\nMost of our existing landfill sites have the potential for expanded disposal capacity beyond the currently permitted acreage. We monitor the availability of permitted disposal capacity at each of our landfills and evaluate whether to pursue expansion at a given landfill based on estimated future waste volumes and prices, market needs, remaining capacity and likelihood of obtaining an expansion. 
To satisfy future disposal demand, we are currently seeking to expand permitted capacity at certain of our landfills, although no assurances can be made that all future expansions will be permitted as designed.\n\n*Other Services.* We have 35 materials recovery facilities and other recycling operations, which are generally required to fulfill our obligations under long-term municipal contracts for residential collection services. These facilities sort recyclable paper, aluminum, glass and other materials. Most of these recyclable materials are internally collected by our residential collection operations. In some areas, we receive commercial and industrial solid waste that is sorted at our facilities into recyclable materials and nonrecyclable waste. The recyclable materials are salvaged, repackaged and sold to third parties and the nonrecyclable waste is disposed of at landfills or incinerators. Wherever possible, our strategy is to reduce our exposure to fluctuations in recyclable commodity prices by utilizing third party recycling facilities, thereby minimizing our recycling investment.\n\nWe provide remediation and other heavy construction services primarily through our subsidiary located in Missouri.\n\nWe also have a Texas-based compost, mulch and soil business at which yard, mill and other waste is processed, packaged and sold as various products.\n\n### **Sales and Marketing**\n\nWe seek to provide quality services that will enable our company to maintain high levels of customer satisfaction. We derive our business from a broad customer base which we believe will enable our company to experience stable growth. We focus our marketing efforts on continuing and expanding business with existing customers, as well as attracting new customers.\n\nWe employ approximately 500 sales and marketing employees. 
Our sales and marketing strategy is to provide high-quality, comprehensive solid waste collection, recycling, transfer and disposal services to our customers at competitive prices. We target potential customers of all sizes, from small quantity generators to large \"Fortune 500\" companies and municipalities.\n\nMost of our marketing activity is local in nature. However, in 2000 we initiated a national accounts program in response to our customers' needs.\n\nWe generally do not change the tradenames of the local businesses we acquire, and therefore we do not operate nationally under any one mark or tradename. Rather, we rely on the goodwill associated with the acquired companies' local tradenames as used in each geographic market in which we operate.\n\n### **Customers**\n\nWe provide services to commercial, industrial, municipal and residential customers. No one customer has individually accounted for more than 10% of our consolidated revenue or of our reportable segment revenue in any of the last three years.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_RSG_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_SEA_2014.pdf", - "query": "I am the CFO of Sundance Energy, will my base increase in 2015 as it did in 2014?", - "target_page": 31, - "target_passage": "No increases to Managing Director's or KMP's base salary", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "At year end, we had 197 gross 3P Reserves drilling locations across our Eagle Ford acreage where we continue to pursue operational and drilling efficiencies, opportunities to further improve well economics by improving recoveries and reducing costs. 
In 2014 this included a switch to pad drilling with zipper fracs and new completion techniques that have provided significant upside in production.\n\nDespite our current scaling back of drilling activity, we have set 2015 production guidance at 7,850 – 8,500 BOEPD, an increase from the previous year of some 13 – 17 percent, but a target that we believe is achievable while maintaining acceptable levels of liquidity given our demonstrated abilities and growing footprint in the Eagle Ford.\n\n### **Safety and Environment**\n\nSundance has a strong culture throughout the organisation of ensuring that high standards of safety are maintained and that our operations are conducted in an environmentally responsible way. During 2014 our comprehensive safety program was enhanced and further improvements will be a strong focus throughout 2015.\n\n#### **A strong financial position**\n\nSundance is well placed for future growth in the Eagle Ford. The Company has a strong balance sheet to withstand the current low oil price environment, and our sound financial management strategy has seen the Company well supported by both new and existing investors in Australia and internationally.\n\nWe expect that Sundance will grow organically and also through further leasing or bolt-on acquisitions in our core Eagle Ford focus area within our current, conservative balance sheet parameters.\n\n### **Positive outlook for 2015**\n\nDespite the current oil pricing scenario, Sundance's medium-to-long term growth trajectory looks very positive.\n\nWe can demonstrate this through:\n\n- A track record of capital efficient growth\n- A track record of value creation\n- Being a low cost/high margin operator\n- Having top tier Eagle Ford assets with an extensive drilling inventory\n- Having a clean balance sheet\n\nAs a mid-tier oil and gas producer and explorer in the S&P/ASX All Australian 200 index, and with the increasing interest and support from institutional and retail investors. 
I believe that Sundance will deliver significant long-term value from our assets for our shareholders.\n\n#### **Thank you for your support**\n\nWe have had a busy year at Sundance and I would like to recognise the efforts and valued contribution of the Board of Directors, management team and all staff and contractors of the Company in helping us achieve our strategic goals. I am confident that we have the right team and excellent assets in place to execute our clear and focused strategy that we expect to deliver significant value for our shareholders.\n\nOn behalf of the Board and Company, I would like to thank our shareholders for your strong support of the Company throughout the year. We are committed to delivering long-term value for our shareholders and I look forward to reporting over the rest of the coming year on the continued value creation and growth of Sundance.\n\nYours sincerely,\n\n**MIKE HANNELL** *Chairman*\n\n*The Company has a strong balance sheet to withstand the current low oil price environment, and our sound financial management strategy has seen the Company well supported by both new and existing investors in Australia and internationally.*", - "page_start": 4, - "page_end": 4, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "**CEO'S REPORT**\n\n# *Dear Fellow Shareholders,*\n\n*2014 Review—2014 was a year of stark economic contrasts in our industry. 
During the first half as in the past several years, historically volatile West Texas Intermediate oil prices seemed range bound between $80 and $110 with geopolitical events driving prices towards the ceiling and demand risks pushing prices towards the floor of the range.*\n\nIn the US, E&P companies were spending record amounts of capital, fueled by cheap and plentiful debt, on horizontal drilling and completions to drive production growth while making material strategic acquisitions in order to increase their long-term exposure to oil prices.\n\nThe easy credit environment caused asset prices to increase significantly to the point where, in our view, risk adjusted returns on new acquisitions were threatening cyclical lows. In line with our strategy, Sundance had monetized several mature assets realizing\n\n| | Sundance's Performance versus the ASX 200 | | |\n| --- | --- | --- | --- |\n| | | ANNUAL PERCENTAGE CHANGE | |\n| | IN 2P PV10 | | |\n| | (NET ASSET VALUE) | IN SUNDANCE | |\n| YEAR | PER DEBT ADJUSTED SHARE | PRICE PER SHARE | IN ASX200 |\n| 2014 | 21.6% | -48.0% | 1.1% |\n| 2013 | 63.3% | 29.9% | 15.1% |\n| 2012 | -15.6% | 87.8% | 14.6% |\n| 2011 | 59.7% | -44.6% | -14.5% |\n\n~$50 million in current period gains while freeing up ~$165 million in invested capital.\n\nWe primarily reinvested this capital in production growth and cash flow with only about $75 million reinvested in acquiring oil and gas leases and producing properties. This resulted in our production increasing from 5,028 BOEPD to 9,434 BOEPD by December 2014 and full year EBITDAX increasing $73.8 million to $126.4 million in 2014. Had prices stayed steady, we likely would have generated earnings before income taxes of over $85 million and a return on capital in excess of 20%.\n\nOur second capital priority for the year was to conclude the appraisal of the Woodford formation in our Logan County, Oklahoma assets. 
We viewed this relatively modest, but higher risk, investment as having a 25% chance of success with a 15x upside. Unfortunately, we met with mixed success in our appraisal activities proving that in today's onshore US oil and gas industry that the best absolute returns are generated by drilling in proved regions. There are plenty of solid opportunities to efficiently grow the business without exposure to undue geologic risk.\n\nLike many prior bubbles driven by new technologies, the second half of the year saw the pricing environment come crashing down around us. The market became fundamentally unbalanced, driving prices down almost 50% and rendering material portions of global oil and gas development uneconomic.\n\nOur peers went from talking about their growth prospects to fretting about cash costs and liquidity, a stark contrast from the go-go growth times which existed in the first half of the year. This shift in industry strategy has now come in line with our general business philosophy—in the resource space, low-cost, low debt businesses will survive and thrive across cycles; and, relative to our US onshore peer group, Sundance boasts a top 15% cost structure and balance sheet.\n\nOur position as a cost and balance sheet leader is underpinned by two key philosophies: 1) investment in a leading technical team that is encouraged to take reasonable risks to improve recoveries and/or reduce costs, and 2) a ruthless focus on portfolio returns as demonstrated by our consistent track record of divesting assets that don't fit our strategic objectives or promise lower forward return profiles.\n\nOur high quality Eagle Ford acreage produces strong recoveries at reasonable costs and thus generates good returns, even in a low price environment. 
Because of these characteristics, the majority of our forward capital is expected to be invested generating strong growth and shareholder returns in the Eagle Ford.\n\nWith mixed appraisal results in the Woodford, Sundance's Mississippian/Woodford position generally requires higher prices to meet our hurdle rates. Because of the mixed Woodford results, higher overall unit costs, and depressed pricing at year end, we recognized an impairment charge of ~$60 million on these assets at year 2014. Had prices maintained their strength, we likely would have been in a position to recover our investment from these assets.",
                    "page_start": 5,
                    "page_end": 5,
                    "source_file": "ASX_SEA_2014.pdf"
                },
                {
                    "text": "**CHAIRMAN'S LETTER**\n\n*Despite the reduction in crude oil and liquids prices towards the end of the year and continuing into 2015, the operational performance and focused, value-adding transactions during the past year have positioned the Company very favourably for future growth in net asset value and shareholder returns.*\n\n# *Dear Fellow Shareholders,*\n\n*I am pleased to present Sundance Energy Australia Limited's Annual Report for the 12 months ended 31 December 2014. It has been another year of significant progress for Sundance across our portfolio of liquids rich oil and gas assets in the US.*\n\nThe Company's strategic focus on growing production, cash flows and reserves from large, repeatable resource plays in North America continues to deliver positive results with growth in production, cash flows, and reserves.\n\nDuring late 2013 and 2014, we completed the divestment of our interest in the Williston Basin in North Dakota for $51 million which realised an internal rate of return of 45 percent; and also opportunistically divested our interest in the Denver-Julesburg Basin in Colorado for $114 million which realised an internal rate of return of 104 percent. 
These divestitures of smaller, less scalable positions enabled us to focus on developing and growing our assets in the Eagle Ford in Texas and our Mississippian/Woodford assets in Oklahoma.\n\nDespite the reduction in crude oil and liquids prices towards the end of the year and continuing into 2015, the operational performance and focused, value-adding transactions during the past year have positioned the Company very favourably for future growth in net asset value and shareholder returns.\n\n### **A year of growing production, cash flow and reserves**\n\nIn line with our strategy we continued to increase the level of company operated assets, and successfully maintained a very strong focus on optimising our operations and reducing costs. This resulted in an impressive improvement in well performance combined with a top tier cost structure.\n\nThrough our operated development program, we ended 2014 with record production of 9,434 barrels of oil equivalent per day (BOEPD) compared with an exit rate of 5,028 BOEPD in December 2013 and an average annual production of 6,635 BOEPD compared to 3,015 BOEPD in 2013. During 2014 we drilled and completed 42.7 net wells, primarily in the Eagle Ford, bringing our total well count to 81.3 by 31 December 2014. High value oil comprised approximately 69 percent of our total 2014 annual production and production from Sundance-operated projects accounted for 89 percent of total production for the year.\n\nCorresponding with the growth in annual production, the Company's full year revenues increased to $159.8 million and Adjusted EBITDAX increased to $126.4 million.\n\nThe Company's development program also generated significant growth in Constant Case reserves during the year. More details are contained elsewhere in this Annual Report, but in summary our 1P Reserves at the end of 2014 were 26.0 MBOE, 2P Reserves 54.1 MBOE, and 3P Reserves 147.7 MBOE. 
This compares with Reserves of 20.7 MBOE, 34.6 MBOE, and 92.8 MBOE, respectively, at the end of 2013.\n\nIn the current price environment, we have elected to scale back our drilling program to mainly concentrate on limited drilling obligations to hold Eagle Ford acreage. This will enable us to maintain our low leverage profile, which was approximately 1.03x debt to Adjusted EBITDAX at year end, and focus on growing our drilling inventory in an environment with less competition for leases and small acquisitions. Liquidity was $84 million at year end, with a borrowing base redetermination in 2015 expected to materially increase debt availability if the use of such funds is justified in line with our strategy.\n\n### **The Eagle Ford – driving value and production growth**\n\nSundance has grown its Eagle Ford acreage position from ~7,200 acres upon entering the basin to approximately 26,160 net mineral acres in the Eagle Ford at the end of 2014 which includes the acquisition of approximately 18,000 net acreage in 2014. By the end of the first quarter 2015 this had grown to 38,701 net mineral acres. Our growing presence in this prolific oil and gas region has been driving significant value for the Company and our shareholders, and continues to form our priority focus for development and acreage growth in the coming years.", - "page_start": 3, - "page_end": 3, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "# Our Goals for 2014\n\nComplete a minimum of $75 million in acquisitions.\n\nAcquire over 50% of 2014 acquisitions outside Atlantic Canada, with a focus in Ontario.\n\nGrow same store NOI by up to 2%.\n\nContinue to invest in development with two projects underway, managing projects on schedule and on budget.\n\ndevelopment program to a maximum of 5% of our balance sheet per year. 
We have three other developments projects in various planning stages, but don't expect to begin construction on any additional new projects until late 2014 or into 2015.\n\n## **Geographic Diversification is a Priority**\n\nGeographic diversification is a priority for Killam. Our asset base in Atlantic Canada is the foundation of the Company; however, with Atlantic Canada representing only 5% of the Canadian rental market, our growth opportunities increase significantly by expanding our target markets outside of this region. With its strong operating platform, Killam can support a larger and more geographically diverse portfolio. We are actively growing a portfolio of apartments in Ontario in three target markets: Ottawa, the Greater Toronto Area, and Southwestern Ontario. An increased investment outside Atlantic Canada will increase not only Killam's growth potential, it will also expand the Company's diversification and exposure to higher growth markets.\n\nAcquisitions in Ontario represented 45% of acquisitions in 2013. In addition to 1,359 apartment units in the province, we also have 2,144 manufactured home community sites, representing 29% of the MHC NOI last year. Based on our current portfolio, 15% of Killam's 2014 NOI will be generated in Ontario, compared to our longer-term goal of generating 50% of NOI outside Atlantic Canada. We expect to reach this goal by focusing acquisition activity in Ontario, with the majority of future investment anticipated in the province over the next few years. We will look for additional development opportunities in Ontario and we are exploring opportunities in Western Canada, attracted by the strong population growth trends in Alberta's urban markets. I would like to thank all Killam employees for their contributions and\n\ncommitment over the last year and our board of directors for their governance. Also, I would like to thank you, our shareholders, for your continued investment in Killam. 
I invite you to attend the Company's annual meeting on May 7, 2014 at 2:00 pm Atlantic Time at the Halifax Marriott Harbourfront Hotel, either in person or via webcast.\n\nYours truly,\n\nPhilip Fraser", - "page_start": 10, - "page_end": 10, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "Dollar and share amounts in millions except per share, per option and per unit amounts\n\nRent expense for 2014, 2013 and 2012 was as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n| --- | --- | --- | --- |\n| Minimum rent: | | | |\n| Store locations | $170 | $145 | $124 |\n| Offices, warehouses and equipment | 36 | 35 | 32 |\n| Percentage rent | 14 | 14 | 14 |\n| Property incentives | (83) | (69) | (65) |\n| Total rent expense | $137 | $125 | $105 |\n\nThe rent expense above does not include common area charges, real estate taxes and other executory costs, which were $88 in 2014, $81 in 2013 and $74 in 2012.\n\n#### **NOTE 11: COMMITMENTS AND CONTINGENT LIABILITIES**\n\nOur estimated total purchase obligations, capital expenditure contractual commitments and inventory purchase orders were $2,092 as of January 31, 2015. In connection with the purchase of foreign merchandise, we have outstanding trade letters of credit totaling $1 as of January 31, 2015.\n\nPlans for our Manhattan full-line store, which we currently expect to open in late 2018 to 2019, ultimately include owning a condominium interest in a mixed-use tower and leasing certain nearby properties. As of January 31, 2015, we had approximately $125 of fee interest in land, which is expected to convert to the condominium interest once the store is constructed. We have committed to make future installment payments based on the developer meeting pre-established construction and development milestones. Our fee interest in the land is currently and will continue to be subject to lien by project development lenders until project completion or fulfillment of our existing installment payment commitment. 
In the unlikely event that this project is not completed, the opening may be delayed and we may potentially be subject to future losses or capital commitments in order to complete construction or to monetize our previous investments in the land.\n\n#### **NOTE 12: SHAREHOLDERS' EQUITY**\n\nIn February 2013, our Board of Directors authorized a program to repurchase up to $800 of our outstanding common stock, through March 1, 2015. In September 2014, our Board of Directors authorized a new program to repurchase up to $1,000 of our outstanding common stock through March 1, 2016, in addition to the remaining amount available for repurchase under the previously authorized program. The following is a summary of the activity related to our share repurchase programs in 2012, 2013 and 2014:\n\n| | | Average price | |\n| --- | --- | --- | --- |\n| | Shares | per share | Amount |\n| Capacity at January 28, 2012 | | | $310 |\n| February 2012 authorization (ended February 1, 2014) | | | 800 |\n| Shares repurchased | 14.0 | $51 | (717) |\n| Capacity at February 2, 2013 | | | 393 |\n| February 2013 authorization (ends March 1, 2015) | | | 800 |\n| Shares repurchased | 9.1 | $57 | (523) |\n| Capacity at February 1, 2014 | | | 670 |\n| September 2014 authorization (ends March 1, 2016) | | | 1,000 |\n| Shares repurchased | 8.9 | $66 | (595) |\n| Capacity at January 31, 2015 | | | $1,075 |\n\nThe actual number and timing of future share repurchases, if any, will be subject to market and economic conditions and applicable SEC rules.\n\nWe paid dividends of $1.32 per share in 2014, $1.20 per share in 2013 and $1.08 per share in 2012. 
In February 2015, we declared a quarterly dividend of $0.37 per share, increased from a quarterly dividend of $0.33 per share in 2014.", - "page_start": 66, - "page_end": 66, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "WHILE IT IS EARLY DAYS, I BELIEVE WE CAN EVOLVE THE BUSINESS IN A WAY THAT WILL BE EVEN MORE REWARDING FOR OUR CUSTOMERS, OUR SHAREHOLDERS AND EMPLOYEES.\" \"\n\n**GUY LAURENCE**\n\n## A MESSAGE FROM THE **PRESIDENT & CEO**\n\n**As I write these words after recently joining the company, I can say with genuine enthusiasm that it's great to be here at Rogers. I took this post because Rogers is a remarkable company with a rich history and an unrivalled mix of wireless, cable and media assets. It is a good match with my background and my experience.**\n\nDuring the recruiting and onboarding process, I spent considerable time with the Rogers family, the Board of Directors and the leadership team. I am struck by their energy, passion and drive to win, which I think we can harness to do even greater things. I also value the support and longerterm focus of the founding Rogers family who own significant equity in the company.\n\nSince joining, I have criss-crossed Canada meeting my team, external stakeholders and customers. I have also conducted numerous business reviews, overseen the 700 MHz spectrum auction and reviewed the regulatory agenda. All this with the view to developing a detailed set of priorities and plans for the company going forward. After I complete this review in the Spring I will outline a detailed strategy and business plan working with my management team.\n\nRogers has many strengths and I intend to capitalize on them. This is a financially strong company with a solid balance sheet and investment grade credit ratings. We have highly advanced cable and wireless networks and a robust portfolio of media assets. 
We also have a strong pipeline of new products and services to offer to our customers and some of the most passionate, committed employees I have ever worked with.\n\nWhile it is early days, I believe we can evolve the business in a way that will be even more rewarding for our customers, our shareholders and employees. Our goal is clear – winning on a consistent basis. And while our industry faces the challenge of moderating growth and regulatory uncertainty, few industries are more dynamic and better at leveraging new technologies.\n\nTo win, we must put our customers' needs front and centre in everything we do. This means delivering a better and more consistent customer experience. It means strengthening our value proposition to make sure our customers can answer the question \"why Rogers?\" As a company, we need to bring our collection of assets together in a way that strengthens and differentiates Rogers with our customers and our shareholders. We also need to align and focus our investments in key areas to accelerate our growth. Internally we need to execute with operational excellence. And we need to focus on clarifying accountabilities and strengthening our teams at all levels of the company.\n\nAs CEO, I will work to re-establish our leadership position and accelerate our growth. This will take time. It is a longterm effort that will require a clear strategy, rigorous prioritization and disciplined execution. 
It will not be easy, but it is the job I have signed up for, and it is a challenge I intend to meet head-on.\n\nI look forward to continuing Ted's legacy, and to leading Rogers through the next phase of growth and to serving you, our shareholders.\n\nThank you for your continued business, investment and support.\n\n**GUY LAURENCE PRESIDENT AND CHIEF EXECUTIVE OFFICER** ROGERS COMMUNICATIONS INC.", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "### *Financial Position*\n\nIn May 2014, the borrowing capacity under our credit facilities increased from an aggregate of $63 million to $135 million. The increase in the borrowing capacity was driven by the significant uplift of the Company's proved oil and gas reserves as at 31 December 2013. In conjunction with the increase in the Company's borrowing capacity, the Company expanded the syndicate of banks under the Senior Credit Facility. Bank of America Merrill Lynch and the Bank of Nova Scotia have now joined the bank group which is led by Wells Fargo.\n\nIn July 2014, the borrowing capacity increased an additional net $10 million, to $145 million, after taking into consideration the removal of proved oil and gas reserves associated with the DJ and Williston Basin dispositions and the development of proved oil and gas reserves in the Eagle Ford Formation.\n\nAt 31 December 2014, the Company had $130 million outstanding under our credit facilities and $15 million available under our borrowing capacity. Ending cash at 31 December 2014 was $69.2 million.\n\n### *Cashflow*\n\nCash provided by operating activities for the year ended 31 December 2014 increased 104.5% to $128.1 million compared to the prior year. This increase was primarily due to receipts from sales increasing $85.7 million, or 101.2%, to $170.4 million, while keeping payments to suppliers and employees relatively stable with an increase of $8.2 million, or 37.7%, to $30.0 million. 
See Review of Operations for more information.\n\nCash used in investing activities for the year ended 31 December 2014 increased $158.9 million, or 96.7%, to $323.2 million. This increase is due to successful implementation of the Company's strategy to develop and grow the reserves from our high working interest, repeatable resource plays, primarily in the Eagle Ford. Due to funding available to the Company through asset sales, capital raises and credit facilities, the Company was able to accelerate its 2015 drilling program into 2014. However, due to the reduction in crude oil prices in the fourth quarter of 2014 and continuing into early 2015, the Company will scale back its drilling program to concentrate on limited drilling obligations to hold Eagle Ford acreage during the 2015 year.\n\nCash provided by financing activities for the year ended 31 December 2014 increased $123.1 million, or 277.0%, to $167.6 million. This increase is a result of the increased availability and draws under the Company's credit facilities and proceeds received in a private placement of shares. In February 2014, the Company completed a private placement in which we sold 84.2 million ordinary shares at A$0.95 per share, resulting in net proceeds of approximately $68.4 million. 
The first tranche of 63.7 million shares was issued in March 2014 and the second tranche of 20.5 million shares was issued in April 2014.\n\n#### **Matters Subsequent to the End of the Financial Year**\n\nSubsequent to 31 December 2014, an additional $13.9 million was drawn-down the credit facilities, bringing total outstanding debt to $143.9 million, with undrawn funds of $1.1 million.\n\nIn January 2015, the company acquired three leases totalling approximately 14,180 net acres in the Eagle Ford for approximately $13.4 million.\n\n### **Future Developments, Prospects and Business Strategies**\n\nThe Group's business strategies and prospects for growth in future financial years are presently concentrated on growing the value of the Group's current resource plays through direct leasing from mineral owners, small acquisitions of producing properties, drilling inventory within the Group's current balance sheet capabilities, and development of the Group's current acreage. Further information on likely development in the operations of the Group and expected results of operations has not been included because the Directors believe it would result in unreasonable prejudice to the Group.", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "| | Year ended 31 December | |\n| --- | --- | --- |\n| (In US$'000s) | 2014 | 2013 |\n| IFRS Profit Loss Reconciliation to Adjusted EBITDAX: | | |\n| Profit attributable to owners of Sundance | 15,321 | 15,942 |\n| Income tax (benefit)/expense | (841) | 5,567 |\n| Finance costs, net of amounts capitalised and interest received | 494 | (232) |\n| (Gain) Loss on derivative financial instruments | (10,792) | 554 |\n| Settlement of derivative financial instruments | 1,150 | 282 |\n| Depreciation and amortisation expense | 85,584 | 36,225 |\n| Impairment of non-current assets | 71,212 | - |\n| Exploration expense | 10,934 | - |\n| Stock compensation, value of services | 1,915 | 1,590 |\n| Gain on 
sale of non-current assets | (48,604) | (7,335) |\n| Adjusted EBITDAX | 126,373 | 52,594 |\n| EBITDAX Margin | 79% | 62% |\n\nThe following table presents a reconciliation of the profit (loss) attributable to owners of Sundance to Adjusted EBITDAX:\n\n#### *Exploration and Development*\n\nFor the month of December 2014, the Company achieved record production of 9,434 Boe/d, which included 869 Boe/d of flared gas from wells waiting to hook-up to pipelines. The December 2014 exit rate increased 88% over prior year's exit rate of 5,028 Boe/d. During the year ended 31 December 2014, the Company produced 2.4 MMBoe, which included 0.2 MMBoe of flared gas. This result was more than double the production in prior year, primarily as a result of increased drilling activity and production in the Eagle Ford Basin.\n\nThe Company's exploration and development activities are focused in the Eagle Ford and the Mississippian/Woodford Formations. Costs incurred for development and production expenditures for the Eagle Ford and Mississippian/Woodford Formations during the year ended 31 December 2014 totalled $324.0 million, which included $295.9 million of drilling and development expenditure related to our 2014 plan, $3.8 million on infrastructure, and $24.3 million of drilling and development expenditure related to our 2015 plan. This investment resulted in the addition of 75 gross (42.7 net) wells into production, including 50 gross (39.5 net) Sundance-operated horizontal wells. 
An additional 24 gross (13.7 net) wells were drilling, being prepared for fracture stimulation or testing as at 31 December 2014, an increase of 7 gross (3.0 net) compared to the beginning of the year.\n\n#### *Acquisitions*\n\nIn April 2014, the Company acquired approximately 4,800 net acres in the Eagle Ford for an initial purchase price of approximately $10.5 million and two separate earn out payments due upon commencement of drilling in each of three blocks of acreage (total for all three blocks of $7.7 million) and payout of the first two wells drilled on each block of the acreage ($7.7 million). The term of the agreement is two years and provides a one year extension for $500 per acre extended. This acquired acreage is adjacent to our existing acreage in McMullen County, Texas.\n\nIn July 2014, the Company completed the acquisition of approximately 5,700 net Eagle Ford acres in Dimmit County, South Texas, for approximately $36 million and a commitment to drill four Eagle Ford wells. The Company also has the option, at its sole discretion, to acquire the Seller's remaining working interest for an additional $45 million for the earlier of one year from closing the acquisition or six months from first production of hydrocarbons.", - "page_start": 20, - "page_end": 20, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\nKillam spent $1,482 per unit for the year ended December 31, 2013, compared to $1,683 per unit for the year ended December 31, 2012. Approximately 40% of the capital spend during the year was invested in suite renovations. 
The increase year‑over‑year was a result of unit upgrades to improve quality and increase occupancy, increase yields on properties identified for repositioning, and support the Company's commitment to increasing unit quality to maximize rental increases.\n\nAs an example, in 2013 the Company has been actively working to reposition Brentwood Apartments, a 45‑year old, 240‑unit, property located in Halifax, that was acquired in 2012. The Company identified that significant value could be created at this property by improving the quality of the units and generating increased NOI through higher rents. Unit upgrades have averaged $15,000 per unit and have consisted of new appliances, flooring and kitchen and bathroom upgrades. The Company has achieved a corresponding lift in rents of approximately 15% on the 53 units it has completed to date. Based on a 5‑year project timeline, with 20% of the units renovated each year, the Company expects to see the return on the total investment improve 145 bps from 6.25% to 7.70%.\n\nKillam has also invested in suite renovations to reposition an Ottawa portfolio acquired in 2012. Kitchen, bathroom, flooring and appliance upgrades have improved the quality of the Ottawa units, leading to a 1,100 bps increase in occupancy in the past 12 months. Excluding the repositioning of the Brentwood and the Ottawa portfolio in 2013, suite renovation costs would have been $6.0 million, or a 21% increase from 2012.\n\nThe Company has also identified additional properties in the Atlantic region as well as Ontario for repositioning and will continue to invest in upgrades where these higher yields can be achieved. One such property identified for 2014 is Shaunslieve, the 154‑unit property adjacent to S2 in Halifax. Killam expects to recover the renovation costs through increased rental rates. 
Capital spend on appliances increased in 2013 as well, which was directly correlated to the increased suite renovation work.\n\nBoiler and heating equipment costs have decreased significantly in 2013, as the Company converted twenty properties to natural gas in 2012, compared to one in 2013.\n\nThe majority of the remaining capital expenditures during 2013 related to exterior building repairs, including roofing and balcony upgrades, brick replacement and exterior facade upgrades. The timing of capital spending is influenced by tenant turnover, market conditions, and individual property requirements, causing variability. In addition, the length of time that Killam has owned a property and the age of the property also influences the capital requirements.\n\n## **Average Capital Spend Per Unit by Building Age**\n\nAs the above chart highlights, the capital spend per unit is less for newer properties, averaging $364 per unit in 2013, compared to $2,248 per unit for buildings over 40 years old. This analysis excludes capital spending on development and energy projects. Killam's continual focus on developing and acquiring new properties aids in maintaining lower capital requirements on a per unit basis. 20% of Killam's apartments as of December 31, 2013, have been built in the past ten years.\n\nKillam expects to invest approximately $22 million to $24 million during 2014 on apartment portfolio capital investments.", - "page_start": 50, - "page_end": 50, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "# **DIRECTORS' REPORT**\n\nYour Directors present this report on the Company and its consolidated entities (\"Group,\" the \"Company\" or \"Consolidated Group\") for the financial year ended 31 December 2014.\n\n# **Directors**\n\nThe names of Directors in office at any time during or since the end of the year are:\n\n- Michael D Hannell\n- Damien A Hannes\n- Neville W Martin\n- Eric P McCrady\n- H. 
Weldon Holcombe\n\nThese Directors have been in office since the start of the financial period to the date of this report.\n\n### **Company Secretary**\n\nAt the end of the financial period, *Mr Damien Connor* held the position of Company Secretary and has served as Company Secretary since August 2013. Mr. Connor has been a member of the Institute of Chartered Accountants of Australia since 2002 and is a member of the Governance Institute of Australia and a graduate of the Australian Institute of Company Directors. He is also Chief Financial Officer and Company Secretary of ASX-listed UraniumSA Limited and Archer Exploration Limited.\n\n### **Principal Activities**\n\nThe principal activities of the Group during the financial year were:\n\n- *•* the exploration for and development and productionof oil and natural gas in the United States of America; and,\n- *•* the continuedexpansion of its portfolio of oil and gas leasesin the United States of America.\n\nNo significant changes in the nature of the activities of the Group occurred during the year.\n\n### **Highlights and Significant Changes in State of Affairs**\n\nFollowing is a summary of highlights and significant changes in the state of affairs of the Group during the year ended 31 December 2014:\n\n- Acquired approximately 4,800 net acres in the Eagle Ford for an initial purchase price of approximately $10.5 million.\n- Completed the acquisition of approximately 5,700 net Eagle Ford acres for approximately $36 million.\n- Divested the Company's remaining Denver-Julesburg and Bakken assets for approximately $108.8 million and $14.0 million in net proceeds, respectively.\n- Completed a successful capital raise of approximately A$80 million during the year with proceeds being used primarily to accelerate pace of the Company's drilling program in the Eagle Ford.\n- Achieved record production in December 2014 of 9,434 Boe/d, which included 869 Boe/d of flared gas from wells waiting to hook-up to pipelines. 
The December 2014 exit rate increased 88% over prior year's exit rate of 5,028 Boe/d.\n- Net revenue increased to $159.8 million, or 87% over the prior year.\n- EBITDAX increased to $126.4 million, or 140% over the prior year, and EBITDAX margin increased to 79%, a 17 percentage point increase over the prior year.\n- Due to our successful drilling program, brought 75 gross (42.7 net) wells into production and saw a significant increase across 1P, 2P and 3P Reserves from prior year bringing total Constant Case 3P Reserves to 147,723 MBoe and PV10 of 3P Reserves to $1.5 billion.\n- Ended the year with $69.2 million of cash, total debt outstanding of $130 million and $15 million of unused borrowing capacity under the Company's credit facilities.\n\nThere were no other material changes in the state of affairs of the Company.", - "page_start": 16, - "page_end": 16, - "source_file": "ASX_SEA_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "sg247938.pdf", - "query": "What are the physical requirements for installing the Storwize V7000?", - "target_page": 70, - "target_passage": "You must consider several key factors when you are planning the physical site of a Storwize V7000 installation. The physical site must have the following characteristics: \u0002 Meets power, cooling, and location requirements of the Storwize V7000 nodes. \u0002 Has two separate power sources. \u0002 Sufficient rack space exists for the installation of controller and disk expansion enclosures. \u0002 Has sufficient maximum power rating of the rack. Plan your rack placement carefully to not exceed maximum power rating of the rack. For more information about the power and environmental requirements, see this website", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "# **3.1 General planning rules**\n\n**Important:** At the time of this writing, the statements that are provided in this book are accurate but can change. 
Always verify any statements that are made in this book with the IBM Storwize V7000 supported hardware list, device driver, firmware, and recommended software levels information that are available at the following websites:\n\n- -Support Information for Storwize V7000\n- -IBM System Storage Interoperation Center (SSIC)\n\nTo maximize the benefit that is realized from the Storwize V7000, pre-installation planning must include several important steps. These steps ensure that the Storwize V7000 provides the best possible performance, reliability, and ease of management for your application needs. The correct configuration also helps minimize downtime by avoiding changes to the Storwize V7000 and the storage area network (SAN) environment to meet future growth needs.\n\nThis book is *not* intended to provide in-depth information about the described topics. For an enhanced analysis of advanced topics, see *IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines*, SG24-7521.\n\n# **3.1.1 Basic planning flow**\n\nThe general rule of planning is to define your goals, and then, plan a solution that can be shown to meet these goals. Always remember to verify that each element of your configuration is supported.\n\nConsider the following points when planning for the Storwize V7000:\n\n- - Collect and document the number of hosts (application servers) to attach to the Storwize V7000. Identify the traffic profile activity (read or write, sequential, or random), and the performance requirements (bandwidth and input/output [I/O] operations per second [IOPS]) for each host.\n- - Decide whether you are going to use Storwize V7000 to virtualize external storage. If you do, collect and document the following items:\n\t- Information on the back-end storage that exists in the environment and is intended to be virtualized by the Storwize V7000.\n\t- Whether you must configure image mode volumes. 
If you want to use image mode volumes, decide whether and how you plan to migrate them into managed mode volumes.\n\t- Information about the planned new back-end storage to be virtualized by the Storwize V7000.\n\t- The required virtual storage capacity for fully provisioned and space-efficient (SE) volumes.\n\t- The required storage capacity for:\n\t\t- Local mirror copy (volume mirroring)\n\t\t- Point-in-time copy (IBM FlashCopy)\n\t\t- Remote copy (Metro Mirror and Global Mirror)\n\t\t- Compressed volumes\n\t\t- Encrypted volumes", - "page_start": 65, - "page_end": 65, - "source_file": "sg247938.pdf" - }, - { - "text": "- 2076-92F\n- 2076-A9F\n\nThe control enclosures can connect to 8 Gbps and 16 Gbps SAN switches. With an optional 10 Gbps Ethernet card to 10 Gbps Ethernet switch (for iSCSI traffic) and FCoE switches.\n\nModel 2076-624 with 25 Gb Ethernet card can be used for RDMA (iSER). For more information, see 3.7.4, \"iSCSI Extensions for RDMA (iSER)\" on page 62.\n\n# **3.19.2 Back-end storage subsystems**\n\nWhen connecting a back-end storage subsystem to IBM Storwize V7000, follow these guidelines:\n\n- - Connect all storage ports to the switch up to a maximum of 16, and zone them to all of the Storwize V7000 ports.\n- - Zone all ports on the disk back-end storage to all ports on the Storwize V7000 nodes in a clustered system.\n- - Ensure that you configure the storage subsystem LUN-masking settings to map all LUNs that are used by the Storwize V7000 to all the Storwize V7000 WWPNs in the clustered system.\n\nThe Storwize V7000 is designed to handle many paths to the back-end storage.\n\nIn most cases, the Storwize V7000 can improve performance, especially of mid-sized to low-end disk subsystems, older disk subsystems with slow controllers, or uncached disk systems, for the following reasons:\n\n- - The Storwize V7000 can stripe across disk arrays, and it can stripe across the entire set of configured physical disk resources.\n- - The Storwize V7000 
control enclosure 2076-524 has 32 GB of cache and 2076-624 has 32 GB of cache (upgradeable to 64 GB).\n- - The Storwize V7000 can provide automated performance optimization of hot spots by using flash drives and Easy Tier.\n\nThe Storwize V7000 large cache and advanced cache management algorithms also allow it to improve the performance of many types of underlying disk technologies. The Storwize V7000 capability to asynchronously manage destaging operations that are incurred by writes while maintaining full data integrity can be important in achieving good database performance.\n\nBecause hits to the cache can occur in the upper (Storwize V7000) and the lower (back-end storage disk controller) level of the overall system, the system as a whole can use the larger amount of cache wherever it is located. Therefore, Storwize V7000 cache also provides more performance benefits for back-end storage systems with extensive cache banks.\n\nAlso, regardless of their relative capacities, both levels of cache tend to play an important role in enabling sequentially organized data to flow smoothly through the system.\n\nHowever, Storwize V7000 cannot increase the throughput potential of the underlying disks in all cases. Performance benefits depend on the underlying storage technology and the workload characteristics, including the degree to which the workload exhibits hotspots or sensitivity to cache size or cache algorithms.", - "page_start": 102, - "page_end": 102, - "source_file": "sg247938.pdf" - }, - { - "text": "A thin-provisioned volume feature that is called *zero detect* provides clients with the ability to reclaim unused allocated disk space (zeros) when they are converting a fully allocated volume to a thin-provisioned volume by using volume mirroring.\n\n# **3.12 Host attachment planning**\n\nThe typical FC host attachment to the Storwize V7000 is done through SAN fabric. 
However, the system allows direct attachment connectivity between its 8 Gb or 16 Gb Fibre Channel ports and host ports. No special configuration is required for host systems that are using this configuration. However, the maximum number of directly attached hosts is severely limited by the number of FC ports on Storwize V7000's nodes.\n\nThe Storwize V7000 imposes no particular limit on the distance between the Storwize V7000 nodes and host servers. However, for host attachment, the Storwize V7000 supports up to three ISL hops in the fabric. This capacity means that the server to the Storwize V7000 can be separated by up to five FC links, four of which can be 10 km long (6.2 miles) if long wave Small Form-factor Pluggables (SFPs) are used.\n\nFigure 3-9 shows an example of a supported configuration with Storwize V7000 nodes using shortwave SFPs.\n\n*Figure 3-9 Example of host connectivity*\n\nIn Figure 3-9, the optical distance between Storwize V7000 Node 1 and Host 2 is slightly over 40 km (24.85 miles).\n\nTo avoid latencies that lead to degraded performance, avoid ISL hops whenever possible. In an optimal setup, the servers connect to the same SAN switch as the Storwize V7000 nodes.\n\n**Note:** Before attaching host systems to Storwize V7000, review the Configuration Limits and Restrictions for the IBM System Storage Storwize V7000 at this IBM Support web page.", - "page_start": 91, - "page_end": 91, - "source_file": "sg247938.pdf" - }, - { - "text": "# **3.18 Storwize V7000 configuration backup procedure**\n\nSave the configuration before and after any change to the clustered system, such as adding nodes and back-end storage. Saving the configuration is a crucial part of Storwize V7000 management, and various methods can be applied to back up your Storwize V7000 configuration. The preferred practice is to implement an automatic configuration backup using the configuration backup command. 
Make sure that you save the configuration to storage that is not dependent on the SAN Virtualization Controller.\n\nFor more information, see Chapter 13, \"RAS, monitoring, and troubleshooting\" on page 673.\n\n# **3.19 Performance considerations**\n\nStorage virtualization with the Storwize V7000 improves flexibility and simplifies management of storage infrastructure, and can provide a substantial performance advantage. The Storwize V7000 caching capability and its ability to stripe volumes across multiple disk arrays are the reasons why usually significant performance improvements are observed when Storwize V7000 is used to virtualize midrange back-end storage subsystems.\n\n**Tip:** Technically, almost all storage controllers provide both striping (in the form of RAID 5, RAID 6, or RAID 10) and a form of caching. The real benefit of Storwize V7000 is the degree to which you can stripe the data across disks in a storage pool, even if they are installed in different back-end storage systems. This technique maximizes the number of active disks available to service I/O requests. The Storwize V7000 provides more caching, but its impact is secondary for sustained workloads.\n\nTo ensure the performance that you want and verify the capacity of your storage infrastructure, analyze performance and capacity to reveal the business requirements of your storage environment. Use the analysis results and the guidelines in this chapter to design a solution that meets the business requirements of your organization.\n\nWhen considering performance for a system, always identify the bottleneck and, therefore, the limiting factor of a specific system. This is a multidimensional analysis that needs to be performed for each of your workload patterns. 
There can be different bottleneck components for different workloads.\n\nWhen you are designing a storage infrastructure with the Storwize V7000 or implementing a Storwize V7000 in an existing storage infrastructure, you must ensure that the performance and capacity of the SAN, back-end disk subsystems, and Storwize V7000 meets requirements for the set of known or expected workloads.\n\n# **3.19.1 SAN**\n\nThe following Storwize V7000 models are supported for V8.2.1:\n\n- - Control enclosures:\n\t- 2076-524\n\t- 2076-624\n- - Expansion enclosures:\n\t- 2076-12F\n\t- 2076-24F", - "page_start": 101, - "page_end": 101, - "source_file": "sg247938.pdf" - }, - { - "text": "#### Figure 3-2 shows the Storwize V7000 zoning classes.\n\n*Figure 3-2 Storwize V7000 zoning classes*\n\nThe fundamental rules of Storwize V7000 zoning are described next. However, also review the latest zoning guidelines and requirements when designing zoning for the planned solution by searching for \"SAN configuration and zoning rules summary\" at IBM Knowledge Center.\n\n**Note:** Configurations that use Metro Mirror, Global Mirror, N_Port ID Virtualization, or long-distance links have extra zoning requirements. Do not follow only the general zoning rules if you plan to use any of these.\n\nThe FCoE fabric uses the same set of zoning rules as the Fibre Channel fabric.\n\n# **3.6.3 Storwize V7000 cluster system zone**\n\nThe Storwize V7000 cluster system zone is required only if you deploy solution with more than one control enclosure. The purpose of cluster system zone is to enable traffic between all Storwize V7000 nodes within the clustered system. This traffic consists of heartbeats, cache synchronization, and other data that nodes must exchange to maintain a healthy cluster state.\n\nEach Storwize V7000 port must be zoned so that it can be used for internode communications. 
A system node cannot have more than 16 paths to another node in the same system.\n\nMixed port speeds are not possible for intracluster communication. All node ports within a clustered system must be running at the same speed.\n\nStorwize V7000 supports the use of mixed fabrics for communication between nodes. The 10 GbE FCoE ports of one Storwize V7000 can be zoned to the FC ports of another node that is part of the same clustered system.", - "page_start": 73, - "page_end": 73, - "source_file": "sg247938.pdf" - }, - { - "text": "An IBM Storwize V7000 solution provides a choice of up to 760 disk drives per system or 1024 disk drives per clustered system (by using dense drawers). The solution uses SAS cables and connectors to attach to the optional expansion enclosures.\n\nThe IBM Storwize V7000 system supports a range of external disk systems similar to what IBM SAN Volume Controller supports today. A view of an IBM Storwize V7000 control enclosure is shown in Figure 2-4.\n\n*Figure 2-4 Top-front view of a Storwize V7000 control enclosure*\n\nThe IBM Storwize V7000 solution consists of 1 - 4 control enclosures and optionally, up to 36 expansion enclosures. It supports the intermixing of the different expansion enclosures. Each enclosure contains two canisters. Control enclosures contain two node canisters, and expansion enclosures contain two expansion canisters.\n\n# **2.3.1 IBM Storwize V7000 models**\n\nThe IBM Storwize V7000 consists of enclosures and drives. An enclosure contains two canisters that are seen as part of the enclosure, although they can be replaced independently.\n\n**More information:** For the most up-to-date information about features, benefits, and specifications of IBM Storwize V7000 models, see this web page.\n\nThe information in this IBM Redbooks publication is valid at the time of this writing and covers IBM Spectrum Virtualize V8.2. 
As IBM Storwize V7000 matures, expect to see new features and enhanced specifications.\n\nThe IBM Storwize V7000 models are listed in Table 2-1.\n\n| Model | Cache | Fibre Channel (FC) / iSCSI / SAS ports | Drive slots | Power supply |\n| --- | --- | --- | --- | --- |\n| 2076-AF1 (with | 64, 128, or 256 | 16 x 16 Gb / | 24 x 2.5-inch | Integrated dual |\n| two node | gigabytes (GB) | 6 x 1 Gb + 8x 10 Gb / | (All Flash) | power supplies |\n| canisters Gen2+) | | 4 x 12 Gb | | with battery |\n\n*Table 2-1 IBM Storwize V7000 models*", - "page_start": 34, - "page_end": 34, - "source_file": "sg247938.pdf" - }, - { - "text": "**2**\n\n# **Chapter 2. System overview**\n\nThis chapter provides an overview of IBM Spectrum Virtualize software and IBM Storwize V7000 architecture and components. The chapter shows the software elements that build the IBM Storwize V7000 platform and provides an overview of the useful management and support tools that helps to maintain and operate the IBM Storwize V7000.\n\nThis chapter includes the following topics:\n\n- -2.1, \"IBM Spectrum Virtualize\" on page 10\n- -2.2, \"Storage virtualization\" on page 10\n- -2.3, \"IBM Storwize V7000 overview\" on page 12\n- -2.4, \"IBM Storwize V7000 hardware\" on page 19\n- -2.5, \"IBM Storwize V7000 components\" on page 19\n- -2.6, \"Business continuity\" on page 39\n- -2.7, \"Management and support tools\" on page 40\n- -2.8, \"Useful IBM Storwize V7000 websites\" on page 42", - "page_start": 30, - "page_end": 30, - "source_file": "sg247938.pdf" - }, - { - "text": "When you plan deployment of Storwize V7000, identify networking technologies that you will use.\n\n**Note:** With Spectrum Virtualize V8.1.1.1 and later, RDMA (iSER) is supported by 25 Gb Ethernet iSCSI adapter cards with V7000 Gen2+ only. 
For more information, see 3.7.4, \"iSCSI Extensions for RDMA (iSER)\" on page 62.\n\n# **3.4 Physical planning**\n\nYou must consider several key factors when you are planning the physical site of a Storwize V7000 installation. The physical site must have the following characteristics:\n\n- -Meets power, cooling, and location requirements of the Storwize V7000 nodes.\n- -Has two separate power sources.\n- -Sufficient rack space exists for the installation of controller and disk expansion enclosures.\n- - Has sufficient maximum power rating of the rack. Plan your rack placement carefully to not exceed maximum power rating of the rack. For more information about the power and environmental requirements, see this website.\n\nYour Storwize V7000 2076-524 and Storwize V7000 2076-624 order includes a printed copy of the IBM Storwize V7000 Gen2 and Gen2+ Quick Installation Guide, which also provides information about environmental and power requirements.\n\n# **3.4.1 Cabling**\n\nCreate a cable connection table that follows your environment's documentation procedure to track all of the following connections that are required for the setup:\n\n- -Power\n- -Ethernet\n- -SAS\n- iSCSI or Fibre Channel over Ethernet (FCoE) connections\n- -Switch ports (FC, Ethernet, and FCoE)\n\nDistribute your disk expansion enclosures evenly between control enclosures, nodes within control enclosures, and SAS channels within nodes. For more information, search for \"SAS cabling guidelines\" at this IBM Knowledge Center page.\n\nWhen planning SAN cabling make sure that your physical topology allows you to observe zoning rules and recommendations.\n\nIf the data center provides more than one power source, make sure that you use that capacity when planning power cabling for your system.\n\n# **3.5 Planning IP connectivity**\n\nSystem management is performed through an embedded graphical user interface (GUI) that is running on the nodes. 
To access the management GUI, direct a web browser to the system management IP address.", - "page_start": 69, - "page_end": 69, - "source_file": "sg247938.pdf" - }, - { - "text": "# **2.8 Useful IBM Storwize V7000 websites**\n\nSee the following IBM Storwize V7000 web pages for more information:\n\n- - IBM Support page: https://www.ibm.com/support/home/product/5402112/IBM_Storwize_V7000_(2076)\n- - IBM Storwize V7000 Unified and IBM Storwize V7000 Systems: https://www.ibm.com/support/home/product/5421300/IBM_Storwize_V7000_Unified\n- - IBM Storwize V7000 page support http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003741\n- - Direct attachment of IBM Storwize V7000 https://www-01.ibm.com/support/docview.wss?uid=ssg1S1005776\n- -IBM Knowledge Center:\n\nhttps://www.ibm.com/support/knowledgecenter/en/ST3FR7_8.2.1/com.ibm.storwize.v7 000.821.doc/v7000_ichome.html", - "page_start": 63, - "page_end": 63, - "source_file": "sg247938.pdf" - }, - { - "text": "# **A**\n\n# **Appendix A. Performance data and statistics gathering**\n\nThis appendix provides a brief overview of the performance analysis capabilities of the IBM Storwize V7000 and IBM Spectrum Virtualize V8.2. It also describes a method that you can use to collect and process IBM Spectrum Virtualize performance statistics.\n\nIt is beyond the intended scope of this book to provide an in-depth understanding of performance statistics or to explain how to interpret them. 
For more information about the performance of the Storwize V7000, see *IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines*, SG24-7521.\n\nThis appendix includes the following topics:\n\n- -\"Storwize V7000 performance overview\" on page 740\n- -\"Performance monitoring\" on page 742", - "page_start": 760, - "page_end": 760, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "sg247938.pdf", - "query": "Is '1oijizer--10108453535318919918883384---jhjjzhiuhzrh--14584joiz///KK ' valid for a pool?", - "target_page": 218, - "target_passage": "Naming rules: When you choose a name for a pool, the following rules apply: \u0002 Names must begin with a letter. \u0002 The first character cannot be numeric. \u0002 The name can be a maximum of 63 characters. \u0002 Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9), underscore (_), period (.), hyphen (-), and space. \u0002 Names must not begin or end with a space. \u0002 Object names must be unique within the object type. For example, you can have a volume that is named ABC and a storage pool that is calledvolumes that are calledvolumes called ABC. \u0002 The default object name is valid (object prefix with an integer). 
\u0002 Objects can be renamed to their current names", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "user_public_key = \"ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA09+YMqJ8VHX3HC7qy6HSxs3JjTGKbEgK+CExpf811uxsq+uJYbfXEKH19/NCf/U vpkozJBDDXDIxJ4uqOEBWDG4mUuu5U9a4lXgb6qaPYyXwVTygL/IcB0poSGEQQaJzhB05g71uZrya++sG1xHUjSQAQz hDuKrs4Bc3gcN4184UR+BX1pVgCls3NRn9hLrfLWS37M/kn+b/n6VMYYVpHsZ2XVydAn2nwuzktaEuWYaY/1cNd4xuu yVu08GQOon6t5KQ1EZBheADdSsyamulLqW9z4j6Y1wwDe4GPDc5zIW++ASDAZB0eEfbKGDLVdpFsI5YV8nLV1r/T0Y/ FiFZqQ== Bogdan Savu;IBMROO45771;IBMROZZ014E826;J;\" dns1 = \"192.168.11.210\" # DNS server 1 dns_domain = \"domain.example.com\" # DNS Domain Name #Network configuration #-------------------------------- net1_name = \"net_ocp_cluster1\" # Network Name net1_vlan_id = \"1\" # VLAN ID net1_subnet = \"192.168.11.0/21\" # Network/Mask net1_gateway = \"192.168.11.1\" # Gateway net1_start = \"192.168.11.223\" # First IP from Pool net1_end = \"192.168.11.223\" # Last IP from Pool #VM1 configuration (OCP - Master Nodes) #-------------------------------- vm1_number = \"1\" # Number of VMs vm1_memory = \"32\" # Memory GB vm1_cpu = \"8\" # Virtual CPU vm1_vcpu_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores) vm1_name = \"bsocp\" # Hostname prefix vm1_first_ip = \"192.168.11.223\" # Fist IP from a consecutive pool of IPs vm1_image_name = \"xiv_p9_image_rhel76\" # The image name vm1_remote_restart = \"true\" # Enable Auto Remote Restart vm1_storage_name = \"xiv_StoragePool\" # Storage Template vm1_dockerdisk1 = \"0\" # Docker disk size in GB for ephemeral storage #VM2 configuration (OCP - Infra Nodes) #-------------------------------- vm2_number = \"0\" # Number of VMs vm2_memory = \"16\" # Memory GB vm2_cpu = \"4\" # Virtual CPU vm2_vcpu_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores) vm2_name = \"infnode\" # Hostname prefix vm2_first_ip = \"192.168.11.205\" # Fist IP from a consecutive pool of IPs vm2_image_name = 
\"xiv_p9_image_rhel76\" # The image name vm2_remote_restart = \"true\" # Enable Auto Remote Restart vm2_storage_name = \"xiv_StoragePool\" # Storage Template vm2_dockerdisk1 = \"68\" # Docker disk size in GB for ephemeral storage #VM3 configuration (OCP - Workers(App) Nodes) #-------------------------------- vm3_number = \"0\" # Number of VMs vm3_memory = \"32\" # Memory GB vm3_cpu = \"4\" # Virtual CPU vm3_vcpu_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores) vm3_name = \"appnode\" # Hostname prefix vm3_first_ip = \"192.168.11.208\" # Fist IP from a consecutive pool of IPs vm3_image_name = \"xiv_p9_image_rhel76\" # The image name vm3_remote_restart = \"false\" # Disable Auto Remote Restart vm3_storage_name = \"xiv_StoragePool\" # Storage Template vm3_dockerdisk1 = \"34\" # Docker disk size in GB for ephemeral storage #VM4 configuration (OCP - Load Balancer Node) #-------------------------------- vm4_number = \"0\" # Number of VMs", - "page_start": 130, - "page_end": 130, - "source_file": "sg248459.pdf" - }, - { - "text": "- 21. Beneciuk JM, Lentz TA, He Y, Wu SS, George SZ. Prediction of persistent musculoskeletal pain at 12 months: a secondary analysis of the Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study. Phys Ther. 2018;98:290–301.\n- 22. Freburger JK, Holmes GM, Agans RP, Jackman AM, Darter JD, Wallace AS, et al. The rising prevalence of chronic low back pain. Arch Intern Med. 2009; 169:251–8.\n- 23. Carey TS, Freburger JK, Holmes GM, Jackman A, Knauer S, Wallace A, et al. Race, care seeking, and utilization for chronic back and neck pain: population perspectives. J Pain Off J Am Pain Soc. 2010;11:343–50.\n- 24. Jensen MP, Turner JA, Romano JM, Fisher LD. Comparative reliability and validity of chronic pain intensity measures. Pain. 1999;83:157–62.\n- 25. Bolton JE. Accuracy of recall of usual pain intensity in back pain patients. Pain. 1999;83:533–9.\n- 26. Childs JD, Piva SR, Fritz JM. 
Responsiveness of the numeric pain rating scale in patients with low back pain. Spine. 2005;30:1331–4.\n- 27. Vernon H. The neck disability index: state-of-the-art, 1991-2008. J Manip Physiol Ther. 2008;31:491–502.\n- 28. Vernon H, Mior S. The neck disability index: a study of reliability and validity. J Manip Physiol Ther. 1991;14:409–15.\n- 29. Hudson-Cook N, Tomes-Nicholson K, Breen A. A revised Oswestry disability questionnaire. In: Roland M, Jenner J, editors. Back pain: new approaches to rehabilitation and education. New York: Manchester University Press; 1989. p. 187–204.\n- 30. Fritz JM, Irrgang JJ. A comparison of a modified Oswestry low back pain disability questionnaire and the Quebec back pain disability scale. Phys Ther. 2001;81:776–88.\n- 31. Beaton DE, Wright JG, Katz JN, Upper Extremity Collaborative Group. Development of the QuickDASH: comparison of three item-reduction approaches. J Bone Joint Surg Am. 2005;87:1038–46.\n- 32. Irrgang JJ, Anderson AF, Boland AL, Harner CD, Kurosaka M, Neyret P, et al. Development and validation of the international knee documentation committee subjective knee form. Am J Sports Med. 2001;29:600–13.\n- 33. Butera KA, Lentz TA, Beneciuk JM, George SZ. Preliminary evaluation of a modified STarT back screening tool across different musculoskeletal pain conditions. Phys Ther. 2016;96:1251–61.\n- 34. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373–83.\n- 35. Katz JN, Chang LC, Sangha O, Fossel AH, Bates DW. Can comorbidity be measured by questionnaire rather than medical record review? Med Care. 1996;34:73–84.\n- 36. George SZ, Beneciuk JM, Bialosky JE, Lentz TA, Zeppieri G, Pei Q, et al. Development of a review-of-systems screening tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 
2015;45: 512–26.\n- 37. Lentz TA, Beneciuk JM, Bialosky JE, Zeppieri G, Dai Y, Wu SS, et al. Development of a yellow flag assessment tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2016;46:327–43.\n- 38. Beneciuk JM, Fritz JM, George SZ. The STarT back screening tool for prediction of 6-month clinical outcomes: relevance of change patterns in outpatient physical therapy settings. J Orthop Sports Phys Ther. 2014;44: 656–64.\n- 39. Myers RH. Classical and modern regression with applications. 2nd ed. Pacific Grove: Duxbury Press; 2000.\n- 40. Weuve J, Tchetgen Tchetgen EJ, Glymour MM, Beck TL, Aggarwal NT, Wilson RS, et al. Accounting for bias due to selective attrition: the example of smoking and cognitive decline. Epidemiol Camb Mass. 2012;23:119–28.\n- 41. Hernán MA, Hernández-Díaz S, Robins JM. A structural approach to selection bias. Epidemiol Camb Mass. 2004;15:615–25.\n- 42. Kent P, Keating JL, Leboeuf-Yde C. Research methods for subgrouping low back pain. BMC Med Res Methodol. 2010;10:62.\n- 43. Peduzzi P, Concato J, Kemper E, Holford TR, Feinstein AR. A simulation study of the number of events per variable in logistic regression analysis. J Clin Epidemiol. 1996;49:1373–9.\n- 44. Tabachnick BG, Fidell LS. Using multivariate statistics. 5th ed. Boston: Pearson; 2006.\n- 45. Green SB. How many subjects does it take to do a regression analysis. Multivar Behav Res. 1991;26:499–510.\n- 46. Harris RJ. A primer of multivariate statistics. 3rd ed. Mahwah: Psychology Press; 2001.\n- 47. Piette JD, Kerr EA. The impact of comorbid chronic conditions on diabetes care. Diabetes Care. 2006;29:725–31.\n- 48. Rice ASC, Smith BH, Blyth FM. Pain and the global burden of disease. Pain. 2016;157:791–6.\n- 49. Fritz JM, Cleland JA, Speckman M, Brennan GP, Hunter SJ. Physical therapy for acute low back pain: associations with subsequent healthcare costs. Spine. 
2008;33:1800–5.\n- 50. Lentz TA, Harman JS, Marlow NM, George SZ. Application of a value model for the prevention and management of chronic musculoskeletal pain by physical therapists. Phys Ther. 2017;97:354–64.\n- 51. Sterne JAC, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ. 2009;338:b2393.\n- 52. Bishop MD, Mintken PE, Bialosky JE, Cleland JA. Patient expectations of benefit from interventions for neck pain and resulting influence on outcomes. J Orthop Sports Phys Ther. 2013;43:457–65.\n- 53. Bialosky JE, Bishop MD, Cleland JA. Individual expectation: an overlooked, but pertinent, factor in the treatment of individuals experiencing musculoskeletal pain. Phys Ther. 2010;90:1345–55.\n- 54. Hanney WJ, Masaracchio M, Liu X, Kolber MJ. The influence of physical therapy guideline adherence on healthcare utilization and costs among patients with low back pain: a systematic review of the literature. PLoS One. 2016;11:e0156799.\n- 55. Childs JD, Fritz JM, Wu SS, Flynn TW, Wainner RS, Robertson EK, et al. Implications of early and guideline adherent physical therapy for low back pain on utilization and costs. BMC Health Serv Res. 2015;15 https://doi.org/ 10.1186/s12913-015-0830-3.\n- 56. Yu S-T, Chang H-Y, Lin M-C, Lin Y-H. Agreement between self-reported and health insurance claims on utilization of health care: a population study. J Clin Epidemiol. 2009;62:1316–22.\n- 57. Petrou S, Murray L, Cooper P, Davidson LL. The accuracy of self-reported healthcare resource utilization in health economic studies. Int J Technol Assess Health Care. 2002;18:705–10.\n- 58. Short ME, Goetzel RZ, Pei X, Tabrizi MJ, Ozminkowski RJ, Gibson TB, et al. How accurate are self-reports? Analysis of self-reported health care utilization and absence when compared with administrative data. J Occup Environ Med. 
2009;51:786–96.\n\n- \n- \n- \n- \n- \n-", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed5.pdf" - }, - { - "text": "#### ICAO STANDARD ATMOSPHERE\n\n| ALTITUDE | DENSITY RATIO | | PRESSURE RATIO | TEMPER- ATURE | TEMPER- ATURE | SPEED OF SOUND | KINEMATIC VISCOSITY |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| FT. | | ل | | | RATIO | | V |\n| | ச | | ટે | o k | ਰ | 0 KNOTS | FT 2/ SEC |\n| O | 1.0000 | 1.0000 | 1.0000 | 59.00 | 1.0000 | 661.7 | 000158 |\n| 1000 | 0.9711 | 0.9854 | 0.9644 | 55.43 | 0.9931 | 659.5 | .000161 |\n| 2000 | 0.9428 | 0.9710 | 0.9298 | 51.87 | O aBES | 657.2 | 000165 |\n| 3000 | ૦. કારા | 0.9566 | 0.8962 | 48.30 | 0.9794 | 654 a | 000169 |\n| 4000 | 0.8881 | 0.9424 | 0.8637 | 44.74 | 0.9725 | 652.6 | .000174 |\n| 5000 | 0.8617 | 0.9283 | 0.8320 | 41.17 | 0.9656 | 650.3 | .000178 |\n| 6000 | 0.8359 | 0.9143 | 0.8014 | 37.60 | 0.9587 | 647.9 | 000182 |\n| 7000 | 0.8106 | 0.9004 | 0.7716 | 34.04 | o asia | 645.6 | .000187 |\n| 8000 | 0.7860 | 0.8866 | 0.7428 | 30.47 | 0.9450 | 643.3 | .000192 |\n| 9000 | 0.7620 | 0.8729 | 0.7148 | 26.90 | 0.938 I | 640.9 | .000197 |\n| 10000 | 0.7385 | 0.8593 | 0.6877 | 23.34 | 0.9312 | 638.6 | 000202 |\n| 15000 | 0.6292 | 0.7932 | 0.5643 | 5.51 | O. 
8969 | 626.7 | .000229 |\n| 20000 | 0.5328 | 0.7299 | 0.4595 | -12.32 | 0.8625 | 614.6 | .000262 |\n| 25000 | 0.4481 | 0.6694 | 0.3711 | - 30.15 | 0.8281 | 602.2 | .000302 |\n| 30000 | 0.3741 | 0.6117 | 0.2970 | -47.98 | 0.7937 | 589.5 | .000349 |\n| 35000 | 0.3099 | 0.5567 | 0.2353 | -65.82 | 0.7594 | 576.6 | .000405 |\n| * 36089 | 0.2971 | 0.5450 | 0.2234 | -69.70 | 0.7519 | 573.8 | .000419 |\n| 40000 | 0.2462 | 0.4962 | 0.1851 | -69.70 | 0.7519 | 573.8 | .000506 |\n| 45000 | 0.1936 | 0.4400 | 0.1455 | -69.70 | 0.7519 | 573.8 | .000643 |\n| 50000 | 0.1522 | 0.3902 | 0.1145 | -69.70 | 0.7519 | 573.8 | .000818 |\n| 55000 | 0.1197 | 0.3460 | 0.0900 | -69.70 | 0.7519 | 573.8 | .001040 |\n| 60000 | ' 0.0941 | 0.3068 | 0.0708 | -69.70 | 0.7519 | 573.8 | .001323 |\n| 65000 | 0.0740 | 0.2721 | 0.0557 | -69.70 | 0.7519 | 573.8 | .001682 |\n| 70000 | 0.0582 | 0.2413 | 0.0438 | - 69.70 | 0.7519 | 573.8 | .002139 |\n| 75000 | 0.0458 | 0.2140 | 0.0344 | - 69.70 | 0.7519 | 573.8 | .002721 |\n| 80000 | 0.0360 | 0.1897 | 0.0271 | - 69.70 | 0.7519 | 573.8 | .003460 |\n| 85000 | 0.0280 | 0.1673 | 0.0213 | -64.80 | 0.7613 | 577.4 | 004499 |\n| 90000 | 0.0217 | 0.1472 | 0.0168 | -56.57 | 0.7772 | 583.4 | .00591 |\n| 95000 | 0.0169 | 0.1299 | 0.0134 | - 48.34 | 0.7931 | 589.3 | .00772 |\n| 100000 | 0.0132 | 0.1149 | 0.0107 | - 40.11 | 0.8089 | રતે રહ્યું હતું હ | .01004 |\n\n*GEOPOTENTIAL OF THE TROPOPAUSE\n\nFigure 1.7. 
Standard Altitude Table", - "page_start": 22, - "page_end": 22, - "source_file": "00-80T-80.pdf" - }, - { - "text": "H = X j (Jcluster/2)(Sj1 + Sj2 + Sj3 + Sj4) 2 − X z−links <jk> Jz (16/9)[Sj2 · (Sj3 × Sj4)][Sk2 · (Sk3 × Sk4)] − X x−links <jk> Jx (2Sj1 · Sj2 + 1/2)(2Sk1 · Sk2 + 1/2) − X y−links <jk> Jy (4/3)[Sj1 · (Sj3 − Sj4)][Sk1 · (Sk3 − Sk4)] (8)\n\nWhile by the represenation (4) and (5), the Hamilto- nian becomes\n\nH = X j (Jcluster/2)(Sj1 + Sj2 + Sj3 + Sj4) 2 − X x−links <jk> Jx (2Sj1 · Sj2 + 1/2)(2Sk1 · Sk2 + 1/2) − X y−links <jk> Jy (4/3)[Sj1 · (Sj3 − Sj4)][Sk1 · (Sk3 − Sk4)] − X z−links <jk> Jz (−4/3)(2Sj3 · Sj4 + 1/2)[Sj1 · (Sj3 − Sj4)](2Sk3 · Sk4 + 1/2)[Sk1 · (Sk3 − Sk4)] (9)\n\nThis model, in terms of physical spins S, has full spin rotation symmetry and time-reversal symmetry. A pseudo-magnetic field term P j ~h · ~τj term can also be included under this mapping, however the resulting Kitaev model with magnetic field is not exactly solvable. It is quite curious that such a formidably looking Hamiltonian (8), with biquadratic and six-spin(or eight-spin) terms, has an exactly solvable low energy sector.\n\nP We emphasize that because the first intra-cluster term cluster Hcluster commutes with the latter Kitaev terms independent of the representation used, the Kitaev model is realized as the exact low energy Hamiltonian of this model without truncation errors of perturbation theories, namely no (|Jx,y,z|/Jcluster) 2 or higher order terms will be generated under the projection to low energy cluster singlet space. This is unlike, for example, the t/U expansion of the half-filled Hubbard model22,23, where at lowest t 2/U order the effective Hamiltonian is the Heisenberg model, but higher order terms (t 4/U3 etc.) should in principle still be included in the low energy effective Hamiltonian for any finite t/U. 
Similar comparison can be made to the perturbative expansion studies of the Kitaev-type models by Vidal et al.9 , where the low energy effective Hamiltonians were obtained in certian anisotropic (strong bond/triangle) limits. Although the spirit of this work, namely projection to low energy sector, is the same as all previous perturbative approaches to effective Hamiltonians.\n\nNote that the original Kitaev model (1) has threefold rotation symmetry around a honeycomb lattice site, combined with a three-fold rotation in pseudo-spin space (cyclic permutation of τ x , τ y , τ z ). This is not apparent in our model (8) in terms of physical spins, under the current representation of τ x,y,z. We can remedy this by using a different set of pseudo-spin Pauli matrices τ ′x,y,z in (7),\n\n$$\\begin{array}{l}{{\\tau^{\\prime x}=\\sqrt{1/3}\\tau^{z}+\\sqrt{2/3}\\tau^{x},}}\\\\ {{\\tau^{\\prime y}=\\sqrt{1/3}\\tau^{z}-\\sqrt{1/6}\\tau^{x}+\\sqrt{1/2}\\tau^{y},}}\\\\ {{\\tau^{\\prime z}=\\sqrt{1/3}\\tau^{z}-\\sqrt{1/6}\\tau^{x}-\\sqrt{1/2}\\tau^{y}}}\\end{array}$$\n\nWith proper representation choice, they have a symmetric form in terms of physical spins,\n\n$$\\tau^{\\prime x}=-(4/3){\\bf S}_{2}\\cdot({\\bf S}_{3}\\times{\\bf S}_{4})+\\sqrt{2/3}(2{\\bf S}_{1}\\cdot{\\bf S}_{2}+1/2)$$\n \n$$\\tau^{\\prime y}=-(4/3){\\bf S}_{3}\\cdot({\\bf S}_{4}\\times{\\bf S}_{2})+\\sqrt{2/3}(2{\\bf S}_{1}\\cdot{\\bf S}_{3}+1/2)$$\n \n$$\\tau^{\\prime z}=-(4/3){\\bf S}_{4}\\cdot({\\bf S}_{2}\\times{\\bf S}_{3})+\\sqrt{2/3}(2{\\bf S}_{1}\\cdot{\\bf S}_{4}+1/2)\\tag{10}$$\n\nSo the symmetry mentioned above can be realized by a three-fold rotation of the honeycomb lattice, with a cyclic permutation of S2, S3 and S4 in each cluster. This is in fact the three-fold rotation symmetry of the physical spin lattice illustrated in FIG. 2. 
However this more symmetric representation will not be used in later part of this paper.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0266.pdf" - }, - { - "text": "modes of neighboring tetrahedra. And these coupling constants λx,y,z need to be tuned to produce Jx,y,z of the Kitaev model. This is still not easy to implement in solid state systems. At lowest non-trivial order of perturbative expansion, we do get our model (9). Higher order terms in expansion destroy the exact solvability, but may be controlled by the small parameters λx,y,z/k.\n\n# B. Generate the High Order Terms by Magnetic Interactions between Clusters.\n\nIn this Subsection we consider more conventional perturbations, magnetic interactions between the clusters, e.g. the Heisenberg coupling Sj · Sk with j and k belong to different tetrahedra. This has the advantage over the previous phonon approach for not introducing additional degrees of freedom. But it also has a significant disadvantage: the perturbation does not commute with the cluster Heisenberg Hamiltonian (2), so the cluster singlet subspace will be mixed with other total spin states. In this Subsection we will use the spin-chirality representation (6) for τ z .\n\nAgain consider two clusters j and k. For simplicity of notations define a projection operator Pjk = PjPk, where Pj,k is projection into the singlet subspace of cluster j and k, respectively, Pj,k = P s=±1 |τ z j,k = sihτ z j,k = s|. 
For a given perturbation λ Hperturbation with small parameter λ (in factor λ/Jcluster is the expansion parameter), lowest two orders of the perturbation series are\n\n$$\\lambda\\,{\\cal P}_{jk}H_{\\rm perturbation}{\\cal P}_{jk}+\\lambda^{2}\\,{\\cal P}_{jk}H_{\\rm perturbation}(1-{\\cal P}_{jk})$$\n \n$$\\times[0-H_{\\rm cluster}\\ j-H_{\\rm cluster}\\ k]^{-1}(1-{\\cal P}_{jk})H_{\\rm perturbation}{\\cal P}_{jk}\\tag{15}$$\n\nWith proper choice of λ and Hperturbation we can generate\n\nthe desired Jx,y,z terms in (8) from the first and second order of perturbations.\n\nThe calculation can be dramatically simplified by the following fact that any physical spin-1/2 operator S x,y,z ℓ converts the cluster spin singlet states |τ z = ±1i into spin-1 states of the cluster. This can be checked by explicit calculations and will not be proved here. For all the perturbations to be considered later, the above mentioned fact can be exploited to replace the factor [0 − Hcluster j − Hcluster k] −1 in the second order perturbation to a c-number (−2Jcluster) −1 .\n\nThe detailed calculations are given in Appendix B. We will only list the results here.\n\nThe perturbation on x-links is given by\n\n$$\\begin{array}{c}{{\\lambda_{x}\\,H_{\\mathrm{perturbation,~}x}=\\lambda_{x}[\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{k1}+\\mathrm{sgn}(J_{x})\\cdot(\\mathbf{S}_{j2}\\cdot\\mathbf{S}_{k2})]}}\\\\ {{-\\,J_{x}(\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{j2}+\\mathbf{S}_{k1}\\cdot\\mathbf{S}_{k2}).}}\\end{array}$$\n\nwhere λx = p 12|Jx| · Jcluster, sgn(Jx) = ±1 is the sign of Jx.\n\nThe perturbation on y-links is\n\n$$\\begin{array}{r}{\\lambda_{y}\\,H_{\\mathrm{perturbation,}\\,y}}\\\\ {=\\lambda_{y}[\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{k1}+\\operatorname{sgn}(J_{y})\\cdot(\\mathbf{S}_{j3}-\\mathbf{S}_{j4})\\cdot(\\mathbf{S}_{k3}-\\mathbf{S}_{k4})]}\\\\ {\\quad-|J_{y}|(\\mathbf{S}_{j3}\\cdot\\mathbf{S}_{j4}+\\mathbf{S}_{k3}\\cdot\\mathbf{S}_{k4})}\\end{array}$$\n\nwith λy = p 4|Jy| · Jcluster. 
The perturbation on z-links is\n\n$\\lambda_{z}\\,H_{\\rm perturbation}$, $z$ \n \n$=\\lambda_{z}[{\\bf S}_{j2}\\cdot({\\bf S}_{k3}\\times{\\bf S}_{k4})+{\\rm sgn}(J_{z})\\cdot{\\bf S}_{k2}\\cdot({\\bf S}_{j3}\\times{\\bf S}_{j4})]$ \n \n$-|J_{z}|({\\bf S}_{j3}\\cdot{\\bf S}_{j4}+{\\bf S}_{k3}\\cdot{\\bf S}_{k4})$. \n \n\nwith λz = 4p |Jz| · Jcluster. The entire Hamiltonian Hmagnetic reads explicitly as,\n\nHmagnetic = X cluster j (Jcluster/2)(Sj1 + Sj2 + Sj3 + Sj4) 2 + X x−links <jk> p 12|Jx| · Jcluster- Sj1 · Sk1 + sgn(Jx) · (Sj2 · Sk2) − Jx(Sj1 · Sj2 + Sk1 · Sk2) + X y−links <jk> q 4|Jy| · Jcluster- Sj1 · (Sk3 − Sk4) + sgn(Jy)Sk1 · (Sj3 − Sj4) − |Jy|(Sj3 · Sj4 + Sk3 · Sk4) + X z−links <jk> 4 p |Jz| · Jcluster- Sj2 · (Sk3 × Sk4) + sgn(Jz)Sk2 · (Sj3 × Sj4) − |Jz|(Sj3 · Sj4 + Sk3 · Sk4) . (16)\n\nIn (16), we have been able to reduce the four spin interactions in (8) to inter-cluster Heisenberg interactions, and the six-spin interactions in (8) to inter-cluster spinchirality interactions. The inter-cluster Heisenberg couplings in Hperturbation x,y may be easier to arrange. The inter-cluster spin-chirality coupling in Hperturbation z explicitly breaks time reversal symmetry and is probably harder to implement in solid state systems. 
However spin-chirality order may have important consequences in frustrated magnets36,37, and a realization of spin", - "page_start": 6, - "page_end": 6, - "source_file": "1001.0266.pdf" - }, - { - "text": "#### Monthly tables of overall projected prison population\n\n#### **Table A14: Monthly values of the overall projected prison population (end of month figures)**\n\n| | | Sentencing Scenarios | | | | Sentencing Scenarios | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| | Scenario 1 | Central | Scenario 2 | | Scenario 1 | Central | Scenario 2 |\n| Nov-14 | 85,800 | 86,100 | 86,100 | Dec-17 | 82,900 | 87,800 | 92,900 |\n| Dec-14 | 84,300 | 84,600 | 84,800 | Jan-18 | 84,200 | 89,200 | 94,400 |\n| Jan-15 | 85,900 | 86,200 | 86,700 | Feb-18 | 84,200 | 89,500 | 94,800 |\n| Feb-15 | 86,400 | 86,800 | 87,400 | Mar-18 | 84,100 | 89,600 | 95,100 |\n| Mar-15 | 86,700 | 87,200 | 87,900 | Apr-18 | 84,100 | 89,600 | 95,500 |\n| Apr-15 | 86,700 | 87,400 | 88,300 | May-18 | 84,000 | 89,700 | 95,700 |\n| May-15 | 86,900 | 87,500 | 88,600 | Jun-18 | 83,900 | 89,700 | 95,800 |\n| Jun-15 | 87,100 | 87,700 | 88,900 | Jul-18 | 83,700 | 89,800 | 96,000 |\n| Jul-15 | 87,100 | 88,000 | 89,100 | Aug-18 | 83,700 | 90,100 | 96,400 |\n| Aug-15 | 87,300 | 88,400 | 89,600 | Sep-18 | 83,800 | 90,300 | 96,800 |\n| Sep-15 | 87,400 | 88,700 | 90,100 | Oct-18 | 83,400 | 90,100 | 96,700 |\n| Oct-15 | 87,300 | 88,600 | 90,000 | Nov-18 | 83,400 | 90,100 | 96,800 |\n| Nov-15 | 87,200 | 88,600 | 90,200 | Dec-18 | 81,600 | 88,300 | 95,100 |\n| Dec-15 | 85,500 | 87,000 | 88,900 | Jan-19 | 82,900 | 89,700 | 96,500 |\n| Jan-16 | 86,900 | 88,500 | 90,500 | Feb-19 | 83,000 | 90,000 | 97,200 |\n| Feb-16 | 87,100 | 88,900 | 91,100 | Mar-19 | 83,000 | 90,100 | 97,400 |\n| Mar-16 | 87,100 | 89,000 | 91,400 | Apr-19 | 83,000 | 90,100 | 97,300 |\n| Apr-16 | 87,000 | 89,000 | 91,600 | May-19 | 82,800 | 90,100 | 97,500 |\n| May-16 | 86,900 | 89,100 | 91,800 | Jun-19 | 82,600 | 90,100 | 97,600 |\n| Jun-16 | 
86,800 | 89,100 | 92,000 | Jul-19 | 82,600 | 90,200 | 97,600 |\n| Jul-16 | 86,500 | 89,200 | 92,100 | Aug-19 | 82,800 | 90,500 | 98,000 |\n| Aug-16 | 86,700 | 89,400 | 92,400 | Sep-19 | 82,800 | 90,700 | 98,100 |\n| Sep-16 | 86,800 | 89,600 | 92,600 | Oct-19 | 82,400 | 90,500 | 98,100 |\n| Oct-16 | 86,500 | 89,400 | 92,600 | Nov-19 | 82,200 | 90,400 | 98,300 |\n| Nov-16 | 86,300 | 89,400 | 92,800 | Dec-19 | 80,300 | 88,600 | 96,700 |\n| Dec-16 | 84,400 | 87,600 | 91,300 | Jan-20 | 81,500 | 89,900 | 98,200 |\n| Jan-17 | 85,600 | 88,900 | 92,800 | Feb-20 | 81,700 | 90,200 | 98,500 |\n| Feb-17 | 85,600 | 89,200 | 93,200 | Mar-20 | 81,800 | 90,300 | 98,700 |\n| Mar-17 | 85,600 | 89,200 | 93,300 | Apr-20 | 81,700 | 90,300 | 98,800 |\n| Apr-17 | 85,400 | 89,300 | 93,300 | May-20 | 81,500 | 90,300 | 98,800 |\n| May-17 | 85,300 | 89,300 | 93,500 | Jun-20 | 81,400 | 90,200 | 98,900 |\n| Jun-17 | 85,200 | 89,300 | 93,600 | Jul-20 | 81,400 | 90,300 | 98,900 |\n| Jul-17 | 85,000 | 89,300 | 93,900 | Aug-20 | 81,600 | 90,600 | 99,300 |\n| Aug-17 | 85,200 | 89,600 | 94,200 | Sep-20 | 81,800 | 90,700 | 99,500 |\n| Sep-17 | 85,200 | 89,800 | 94,500 | Oct-20 | 81,300 | 90,500 | 99,500 |\n| Oct-17 | 84,900 | 89,600 | 94,500 | Nov-20 | 81,100 | 90,400 | 99,600 |\n| Nov-17 | 84,700 | 89,500 | 94,600 | Dec-20 | 79,200 | 88,700 | 97,900 |\n\n| | Sentencing Scenarios | | | | Sentencing Scenarios | |\n| --- | --- | --- | --- | --- | --- | --- |\n| Scenario 1 | Central | Scenario 2 | | Scenario 1 | Central | Scenario 2 |\n| Nov-14 85,800 | 86,100 | 86,100 | Dec-17 | 82,900 | 87,800 | 92,900 |\n| Dec-14 84,300 | 84,600 | 84,800 | Jan-18 | 84,200 | 89,200 | 94,400 |\n| Jan-15 85,900 | 86,200 | 86,700 | Feb-18 | 84,200 | 89,500 | 94,800 |\n| Feb-15 86,400 | 86,800 | 87,400 | Mar-18 | 84,100 | 89,600 | 95,100 |\n| Mar-15 86,700 | 87,200 | 87,900 | Apr-18 | 84,100 | 89,600 | 95,500 |\n| Apr-15 86,700 | 87,400 | 88,300 | May-18 | 84,000 | 89,700 | 95,700 |\n| May-15 86,900 | 87,500 | 88,600 | 
Jun-18 | 83,900 | 89,700 | 95,800 |\n| Jun-15 87,100 | 87,700 | 88,900 | Jul-18 | 83,700 | 89,800 | 96,000 |\n| Jul-15 87,100 | 88,000 | 89,100 | Aug-18 | 83,700 | 90,100 | 96,400 |\n| Aug-15 87,300 | 88,400 | 89,600 | Sep-18 | 83,800 | 90,300 | 96,800 |\n| Sep-15 87,400 | 88,700 | 90,100 | Oct-18 | 83,400 | 90,100 | 96,700 |\n| Oct-15 87,300 | 88,600 | 90,000 | Nov-18 | 83,400 | 90,100 | 96,800 |\n| Nov-15 87,200 | 88,600 | 90,200 | Dec-18 | 81,600 | 88,300 | 95,100 |\n| Dec-15 85,500 | 87,000 | 88,900 | Jan-19 | 82,900 | 89,700 | 96,500 |\n| Jan-16 86,900 | 88,500 | 90,500 | Feb-19 | 83,000 | 90,000 | 97,200 |\n| Feb-16 87,100 | 88,900 | 91,100 | Mar-19 | 83,000 | 90,100 | 97,400 |\n| Mar-16 87,100 | 89,000 | 91,400 | Apr-19 | 83,000 | 90,100 | 97,300 |\n| Apr-16 87,000 | 89,000 | 91,600 | May-19 | 82,800 | 90,100 | 97,500 |\n| May-16 86,900 | 89,100 | 91,800 | Jun-19 | 82,600 | 90,100 | 97,600 |\n| Jun-16 86,800 | 89,100 | 92,000 | Jul-19 | 82,600 | 90,200 | 97,600 |\n| Jul-16 86,500 | 89,200 | 92,100 | Aug-19 | 82,800 | 90,500 | 98,000 |\n| Aug-16 86,700 | 89,400 | 92,400 | Sep-19 | 82,800 | 90,700 | 98,100 |\n| Sep-16 86,800 | 89,600 | 92,600 | Oct-19 | 82,400 | 90,500 | 98,100 |\n| Oct-16 86,500 | 89,400 | 92,600 | Nov-19 | 82,200 | 90,400 | 98,300 |\n| Nov-16 86,300 | 89,400 | 92,800 | Dec-19 | 80,300 | 88,600 | 96,700 |\n| Dec-16 84,400 | 87,600 | 91,300 | Jan-20 | 81,500 | 89,900 | 98,200 |\n| Jan-17 85,600 | 88,900 | 92,800 | Feb-20 | 81,700 | 90,200 | 98,500 |\n| Feb-17 85,600 | 89,200 | 93,200 | Mar-20 | 81,800 | 90,300 | 98,700 |\n| Mar-17 85,600 | 89,200 | 93,300 | Apr-20 | 81,700 | 90,300 | 98,800 |\n| Apr-17 85,400 | 89,300 | 93,300 | May-20 | 81,500 | 90,300 | 98,800 |\n| May-17 85,300 | 89,300 | 93,500 | Jun-20 | 81,400 | 90,200 | 98,900 |\n| Jun-17 85,200 | 89,300 | 93,600 | Jul-20 | 81,400 | 90,300 | 98,900 |\n| Jul-17 85,000 | 89,300 | 93,900 | Aug-20 | 81,600 | 90,600 | 99,300 |\n| Aug-17 85,200 | 89,600 | 94,200 | Sep-20 | 81,800 | 90,700 | 
99,500 |\n| Sep-17 85,200 | 89,800 | 94,500 | Oct-20 | 81,300 | 90,500 | 99,500 |\n| Oct-17 84,900 | 89,600 | 94,500 | Nov-20 | 81,100 | 90,400 | 99,600 |\n| Nov-17 84,700 | 89,500 | 94,600 | Dec-20 | 79,200 | 88,700 | 97,900 |", - "page_start": 21, - "page_end": 21, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "# Appendix B: Derivation of the Terms Generated by Second Order Perturbation of Inter-cluster Magnetic Interactions\n\nIn this Appendix we derive the second order perturbations of inter-cluster Heisenberg and spin-chirality interactions. The results can then be used to construct (16).\n\nFirst consider the perturbation λ Hperturbation = λ[Sj1 · Sk1 + r(Sj2 · Sk2)], where r is a real number to be tuned later. Due to the fact mentioned in Subsection IV B, the action of Hperturbation on any cluster singlet state will produce a state with total spin-1 for both cluster j and k. Thus the first order perturbation in (15) vanishes. And the second order perturbation term can be greatly simplified: operator (1 − Pjk)[0 − Hcluster j − Hcluster k] −1 (1 − Pjk) can be replaced by a c-number (−2Jcluster) −1 . Therefore the perturbation up to second order is\n\n$$-\\frac{\\lambda^{2}}{2J_{\\mathrm{cluster}}}\\,{\\mathcal{P}}_{j k}(H_{\\mathrm{perturbation}})^{2}{\\mathcal{P}}_{j k}$$\n\nThis is true for other perturbations considered later in this Appendix. 
The cluster j and cluster k parts can be separated, this term then becomes (a, b = x, y, z),\n\n$$\\begin{array}{c}{{-\\,\\frac{\\lambda^{2}}{2J_{\\mathrm{cluster}}}\\sum_{a,b}\\left[\\mathcal{P}_{j}S_{j1}^{a}S_{j1}^{b}\\mathcal{P}_{j}\\cdot\\mathcal{P}_{k}S_{k1}^{a}S_{k1}^{b}\\mathcal{P}_{k}\\right]}}\\\\ {{\\quad+2r\\,\\mathcal{P}_{j}S_{j1}^{a}S_{j2}^{b}\\mathcal{P}_{j}\\cdot\\mathcal{P}_{k}S_{k1}^{a}S_{k2}^{b}\\mathcal{P}_{k}}}\\\\ {{\\quad+r^{2}\\,\\mathcal{P}_{j}S_{j2}^{a}S_{j2}^{b}\\mathcal{P}_{j}\\cdot\\mathcal{P}_{k}S_{k2}^{a}S_{k2}^{b}\\mathcal{P}_{k}\\right]}}\\end{array}$$\n\nThen use the fact that PjS a jℓS b jmPj = δab(1/3)Pj(Sjℓ · Sjm)Pj by spin rotation symmetry, the perturbation becomes\n\n$$-\\frac{\\lambda^{2}}{6J_{\\rm cluster}}\\Big{[}\\frac{9+9r^{2}}{16}+2r\\,{\\cal P}_{jk}({\\bf S}_{j1}\\cdot{\\bf S}_{j2})({\\bf S}_{k1}\\cdot{\\bf S}_{k2}){\\cal P}_{jk}\\Big{]}$$\n \n$$=-\\frac{\\lambda^{2}}{6J_{\\rm cluster}}\\Big{[}\\frac{9+9r^{2}}{16}+(r/2)\\tau_{j}^{x}\\tau_{k}^{x}-r/2$$\n \n$$-r\\,{\\cal P}_{jk}({\\bf S}_{j1}\\cdot{\\bf S}_{j2}+{\\bf S}_{k1}\\cdot{\\bf S}_{k2}){\\cal P}_{jk}\\Big{]}.$$\n\nSo we can choose −(r λ2 )/(12Jcluster) = −Jx, and include the last intra-cluster Sj1 ·Sj2 + Sk1 ·Sk2 term in the first order perturbation.\n\nThe perturbation on x-links is then (not unique),\n\n$\\lambda_{x}\\,H_{\\rm perturbation}$, $x=\\lambda_{x}[{\\bf S}_{j1}\\cdot{\\bf S}_{k1}+{\\rm sgn}(J_{x})\\cdot({\\bf S}_{j2}\\cdot{\\bf S}_{k2})]$ \n \n$-J_{x}({\\bf S}_{j1}\\cdot{\\bf S}_{j2}+{\\bf S}_{k1}\\cdot{\\bf S}_{k2})$\n\nwith λx = p 12|Jx| · Jcluster, and r = sgn(Jx) is the sign of Jx. The non-trivial terms produced by up to second order perturbation will be the τ x j τ x k term. Note that the last term in the above equation commutes with cluster Hamiltonians so it does not produce second or higher order perturbations.\n\nSimilarly considering the following perturbation on ylinks, λ Hperturbation = λ[Sj1 ·(Sk3 − Sk4) + r Sk1 ·(Sj3 − Sj4)]. 
Following similar procedures we get the second order perturbation from this term\n\n− λ 2 6Jcluster h 9 + 9r 2 8 + 2r Pjk[Sj1 · (Sj3 − Sj4)][Sk1 · (Sk3 − Sk4)]Pjk − (3/2)Pjk(Sk3 · Sk4 + r 2 Sj3 · Sj4)Pjki = − λ 2 6Jcluster h 9 + 9r 2 8 + 2r (3/4)τ y j τ y k − (3/2)Pjk(Sk3 · Sk4 + r 2 Sj3 · Sj4)Pjki\n\nSo we can choose −(r λ2 )/(4Jcluster) = −Jy, and include the last intra-cluster Sk3 · Sk4 + r 2 Sj3 · Sj4 term in the first order perturbation.\n\nTherefore we can choose the following perturbation on y-links (not unique),\n\n$$\\begin{array}{r}{\\lambda_{y}\\,H_{\\mathrm{perturbation,}\\,y}}\\\\ {=\\lambda_{y}[\\mathbf{S}_{j1}\\cdot\\mathbf{S}_{k1}+\\operatorname{sgn}(J_{y})\\cdot(\\mathbf{S}_{j3}-\\mathbf{S}_{j4})\\cdot(\\mathbf{S}_{k3}-\\mathbf{S}_{k4})]}\\\\ {\\quad-|J_{y}|(\\mathbf{S}_{j3}\\cdot\\mathbf{S}_{j4}+\\mathbf{S}_{k3}\\cdot\\mathbf{S}_{k4})}\\end{array}$$\n\nwith λy = p 4|Jy| · Jcluster, r = sgn(Jy) is the sign of Jy. The τ z τ z term is again more difficult to get. We use\n\nj k the representation of τ z by spin-chirality (6). And consider the following perturbation\n\n$$H_{\\mathrm{perturbation}}={\\bf S}_{j2}\\cdot({\\bf S}_{j3}\\times{\\bf S}_{j4})+r\\,{\\bf S}_{k2}\\cdot({\\bf S}_{j3}\\times{\\bf S}_{j4})$$\n\nThe first order term in (15) vanishes due to the same reason as before. There are four terms in the second order perturbation. The first one is\n\n$$\\begin{array}{l}{{\\lambda^{2}\\,{\\mathcal{P}}_{j k}{\\mathbf{S}}_{j2}\\cdot({\\mathbf{S}}_{k3}\\times{\\mathbf{S}}_{k4})(1-{\\mathcal{P}}_{j k})}}\\\\ {{\\ \\times\\left[0-H_{\\mathrm{cluster}\\ j}-H_{\\mathrm{cluster}\\ k}\\right]^{-1}}}\\\\ {{\\ \\times\\left(1-{\\mathcal{P}}_{j k}\\right){\\mathbf{S}}_{j2}\\cdot({\\mathbf{S}}_{k3}\\times{\\mathbf{S}}_{k4}){\\mathcal{P}}_{j k}}}\\end{array}$$\n\nFor the cluster j part we can use the same arguments as before, the Hcluster j can be replaced by a c-number Jcluster. 
For the cluster k part, consider the fact that Sk3 × Sk4 equals to the commutator −i[Sk4, Sk3 · Sk4], the action of Sk3 ×Sk4 on physical singlet states of k will also only produce spin-1 state. So we can replace the Hcluster k in the denominator by a c-number Jcluster as well. Use spin rotation symmetry to separate the j and k parts, this term simplifies to\n\n− λ 2 6Jcluster PjSj2 · Sj2Pj · Pk(Sk3 × Sk4) · (Sk3 × Sk4)Pk. Use (S) 2 = 3/4 and (Sk3 × Sk4) · (Sk3 × Sk4) = X a,b (S a k3S b k4S a k3S b k4 − S a k3S b k4S b k3S a k4 ) = (Sk3 · Sk3)(Sk4 · Sk4) − X a,b S a k3S b k3 [δab/2 − S a k4S b k4 ] = 9/16 + (Sk3 · Sk4)(Sk3 · Sk4) − (3/8)", - "page_start": 8, - "page_end": 8, - "source_file": "1001.0266.pdf" - }, - { - "text": "# **Annex 4: Default values**\n\n### **1. Fraction of carbon stored for reference approach**\n\nBitumen – 1 Coal oils and tars (from coking coal – 0.75 Ethane – 0.8 Gas/Diesel oil – 0.5 LPG – 0.8 Lubricants – 0.5 Naphtha – 0.8 Natural gas – 0.33\n\n### **2. Conversion factors**\n\n- a. CH4 volume CH4 Gg = 0.67\n- b. 
*Conversion factors for energy*\n\n| From | To | Multiply by |\n| --- | --- | --- |\n| J | TJ | 10-12 |\n| KJ | TJ | 10-9 |\n| MJ | TJ | 10-6 |\n| GJ | TJ | 10-3 |\n| TJ | TJ | 1 |\n| cal | TJ | 4.1868 x 10-12 |\n| kcal | TJ | 4.1868 x 10-9 |\n| Mcal | TJ | 4.1868 x 10-6 |\n| Gcal | TJ | 4.1868 x 10-3 |\n| Tcal | TJ | 4.1868 |\n| kWh | TJ | 3.6 x 10-6 |\n| MWh | TJ | 3.6 x 10-3 |\n| GWh | TJ | 3.6 |\n| Btu | TJ | 1.0551 x 10-9 |\n| kBtu | TJ | 1.0551 x 10-6 |\n| MBtu | TJ | 1.0551 x 10-3 |\n| GBtu | TJ | 1.0551 |\n| toe | TJ | 41.868 x 10-3 |\n| ktoe | TJ | 41.868 |\n| Mtoe | TJ | 4.1868 x 104 |\n| TJ | J | 1012 |\n| TJ | KJ | 109 |\n| TJ | MJ | 106 |\n| TJ | GJ | 103 |\n| TJ | cal | 238.8 x 109 |\n| TJ | kcal | 238.8 x 106 |\n| TJ | Mcal | 238.8 x 103 |\n| TJ | Gcal | 238.8 |\n| TJ | Tcal | 238.8 x 10-3 |\n| TJ | kWh | 277.8 x 103 |\n| TJ | MWh | 277.8 |\n| TJ | GWh | 277.8 x 10-3 |\n| TJ | Btu | 947.8 x 106 |\n| TJ | kBtu | 947.8 x 103 |\n| TJ | MBtu | 947.8 |\n| TJ | GBtu | 947.8 x 10-3 |\n| TJ | toe | 23.88 |\n| TJ | ktoe | 23.88 x x 10-3 |\n| TJ | Mtoe | 23.88 x 10-6 |", - "page_start": 48, - "page_end": 48, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "# **CONSENT OF INDEPENDENT REGISTERED PUBLIC ACCOUNTING FIRM**\n\nWe consent to the incorporation by reference in Registration Statement Nos. 333-166961, 333-161803, 333-63403, 333-40064, 333-40066, 333-79791, 333-101110, 333-118756, 333-146049, 333-174336, 333-173020, 333-189301 and 333-198413 on Form S-8 and 333-198408 on Form S-3 of our reports dated March 16, 2015, relating to the financial statements of Nordstrom, Inc. and subsidiaries, and the effectiveness of Nordstrom, Inc. and subsidiaries' internal control over financial reporting, appearing in the Annual Report on Form 10-K of Nordstrom, Inc. 
for the year ended January 31, 2015.\n\n/s/ Deloitte & Touche LLP Seattle, Washington March 16, 2015", - "page_start": 82, - "page_end": 82, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "```\nvm4_memory = \"8\" # Memory GB\nvm4_cpu = \"2\" # Virtual CPU\nvm4_vcpu_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores)\nvm4_name = \"lbsnode\" # Hostname prefix\nvm4_first_ip = \"192.168.11.212\" # Fist IP from a consecutive pool of IPs\nvm4_image_name = \"xiv_p9_image_rhel76\" # The image name\nvm4_remote_restart = \"true\" # Enable Auto Remote Restart\n```\n*Example 6-5 terrafomr.tfvars for seven nodes deployment*\n\n```\ncat terraform.tfvars\n#PowerVC (OpenStack)\n#---------------------------------\npowervc_user = \"ocpadmin\" # PowerVC user\npowervc_password = \"<password>\" # PowerVC password\npowervc_server = \"192.168.11.31\" # PowerVC IP or hostname\npowervc_project = \"ocp-project\" # PowerVC project(tenant) name\n#General configuration:\n#---------------------------------\nssh_user = \"root\" # Image username\nssh_user_password = \"<password>\" # Image password\nuser_public_key = \"ssh-rsa \nAAAAB3NzaC1yc2EAAAABIwAAAQEA09+YMqJ8VHX3HC7qy6HSxs3JjTGKbEgK+CExpf811uxsq+uJYbfXEKH19/NCf/U\nvpkozJBDDXDIxJ4uqOEBWDG4mUuu5U9a4lXgb6qaPYyXwVTygL/IcB0poSGEQQaJzhB05g71uZrya++sG1xHUjSQAQz\nhDuKrs4Bc3gcN4184UR+BX1pVgCls3NRn9hLrfLWS37M/kn+b/n6VMYYVpHsZ2XVydAn2nwuzktaEuWYaY/1cNd4xuu\nyVu08GQOon6t5KQ1EZBheADdSsyamulLqW9z4j6Y1wwDe4GPDc5zIW++ASDAZB0eEfbKGDLVdpFsI5YV8nLV1r/T0Y/\nFiFZqQ== Bogdan Savu;IBMROO45771;IBMROZZ014E826;J;\"\ndns1 = \"192.168.11.210\" # DNS server 1\ndns_domain = \"domain.example.com\" # DNS Domain Name\n#Network configuration\n#---------------------------------\nnet1_name = \"net_ocp_cluster2\" # Network Name\nnet1_vlan_id = \"1\" # VLAN ID\nnet1_subnet = \"192.168.11.0/21\" # Network/Mask \nnet1_gateway = \"192.168.11.1\" # Gateway\nnet1_start = \"192.168.11.202\" # First IP from Pool\nnet1_end = \"192.168.11.212\" # Last IP from Pool\n#VM1 
configuration (OCP - Master Nodes)\n#---------------------------------\nvm1_number = \"3\" # Number of VMs\nvm1_memory = \"64\" # Memory GB\nvm1_cpu = \"8\" # Virtual CPU\nvm1_vcpu_ratio = \"2\" # vCPU RATIO 1:2 1 vCPU = 0.5 eCPU (cores)\nvm1_name = \"mstnode\" # Hostname prefix\nvm1_first_ip = \"192.168.11.202\" # Fist IP from a consecutive pool of IPs\nvm1_image_name = \"xiv_p9_image_rhel76\" # The image name\nvm1_remote_restart = \"true\" # Enable Auto Remote Restart\nvm1_storage_name = \"xiv_StoragePool\" # Storage Template\nvm1_dockerdisk1 = \"512\" # Docker disk size in GB for ephemeral storage\n#VM2 configuration (OCP - Infra Nodes)\n#---------------------------------\nvm2_number = \"0\" # Number of VMs\nvm2_memory = \"16\" # Memory GB\nvm2_cpu = \"2\" # Virtual CPU\nvm2_vcpu_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores)\nvm2_name = \"infnode\" # Hostname prefix\n```", - "page_start": 131, - "page_end": 131, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "news4.pdf", - "query": "I want to start a company that automates kitchen tasks, does that sound like a good idea for 2025?", - "target_page": 1, - "target_passage": "Smart home automation Smart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "#### ISSUE\n\nDecember 2024\n\n#### CATEGORIES\n\nTechnology & Cybersecurity Editor's Picks Finance - Personal Home - Interior\n\n# **The top AI-powered tech trends in 2025**\n\n(NC) As we look ahead to 2025, artificial intelligence (AI) continues to revolutionize our lives. 
From enhancing our daily routines to transforming entire industries, AI's impact is undeniable.\n\nThese five innovations are set to shape our future, offering unprecedented convenience, efficiency and personalization.\n\n### AI-powered computing\n\nAI-powered computing, such as Intel-powered laptops – or AI PC – is at the forefront of technological advancement. But what, exactly, is an AI PC? They're computers that have AI built into their processors – also known as the brain of the computer – which optimizes performance, enhances security and provides a more personalized experience as they learn from your usage patterns. For consumers, this means faster, smarter and more secure computing tailored to your individual needs.\n\n### Smart home automation\n\nSmart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.\n\n## Health and wellness\n\nThe health-care industry is seeing significant transformation. AI-driven health and wellness applications can monitor vital signs, predict potential health issues, and even provide personalized fitness and\n\nnutrition plans. Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being.\n\n# Financial services\n\nAI is also making waves in the financial sector, offering smarter and more secure ways to manage money. 
From AI-driven investment platforms that provide personalized financial advice to fraud detection systems that protect against cyber threats, AI can analyze vast amounts of data to identify trends and make more informed financial decisions.\n\n# Enhanced education\n\nIn education, enhanced learning tools provide personalized learning experiences that adapt to each student's strengths and weaknesses. This technology can offer real-time feedback, helping students improve their skills more effectively. Additionally, AI can assist educators by automating administrative tasks and providing insights into student performance, allowing for more focused and effective teaching.\n\nLearn more at intel.com/aipc.\n\nwww.newscanada.com Word Count: 346\n\n#### M ed i a A tt a ch m e n ts −\n\n#### View", - "page_start": 0, - "page_end": 0, - "source_file": "news4.pdf" - }, - { - "text": "The Process and Resource Management division is one of Nissan's greatest assets. They are sometimes perceived as too rigid, and it is true that the division has established quite a number of rules. However, I easily imagine what can happen to a company without rules. The point, really, is to keep the structure and provide some freedom when needed. The core creative divisions can add great value to a process, such as when they interact with the advanced engineering team. When the creative people are happy with what they have developed, however, someone has to support the complex process of creating added value. That responsibility belongs to the Process and Resource Management division. Otherwise, a nicely crafted process may never be implemented. But at Nissan, employees in the Process and Resource Management division serve as the guardians of the timelines and support the implementation of processes. If a process is not working as we planned, they get the project back on track in a smooth and efficient manner. 
If a process is no longer relevant, they quickly organize a taskforce to update it.\n\nSo Corporate Planning provides the direction, Design and Product Planning create products with value, and Market Intelligence and Process and Resource Management support the creative teams. Someone has to drive the implementation, and that role belongs to our six program directors in Program Management. The program directors are involved from the beginning. They are businesspeople, the CEOs of their own platform businesses. Each has a different part of the vehicle lineup, but the substance of their mutual targets and commitments is simple: profit. Program directors make it happen. They ensure that everybody in the Company keeps each project consistently profitable through\n\n#### **Nre Global Product Launches 28 All-New Models**\n\nall phases: planning, development and launch, right through to the end of the lifecycle. Our program directors are persuasive people with strong characters, special skills and attributes, and they are not afraid to challenge the system. Their diversity contributes tremendously to Nissan's success. The cumulative work of all these divisions results in a very consistent organization with an upstream process that creates value.\n\nLooking at Nissan's global output over the last six years, it is clear that some terrific products have been created, and the value of the Company as a whole is greater. There are many scorecards that reflect this, and our stakeholders certainly know Nissan's success first-hand. At the same time, we must prepare for the future. We need to reinforce the strength of our program management groups and establish more precise, accurate groups to standardize and improve processes for the future. Ironically, our achievements have created uncertainty for the future. Success creates risk, and the more we highlight our successes, the more we raise the anxiety level of investors. 
How can our new products be as good as those already released? How can we keep it all going?\n\nOne way to sustain our strong pace is to take greater advantage of the Alliance. The value is there, in areas such as purchasing, development, benchmarking, sales networks, market knowledge and even financial strategy. Yet we must maintain both a balance and a clear separation between the brand identities of Renault and Nissan. Neither company wants to make the same cars, or have the same corporate culture, or have its brand mistaken for the other. We will continue to derive benefits from this strategic partnership while remaining Nissan.", - "page_start": 36, - "page_end": 36, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Home / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n#### ARTS AND ENTERTAINMENT\n\n# New Artificial Intelligence Summit Series Begins With Energy\n\n### 07/31/2024\n\n (AI) continues to transform the United States and the world. To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. 
The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent \"Action Plan for U.S. Leadership in Next-Generation Energy,\" raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. The stakes are high; if the U.S. falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\n#### Article Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n#### RELATED ARTICLES\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage Mar 06, 2024\n\n| CATEGORIES |\n| --- |\n| FASHION |\n| BUSINESS |\n| INFOGRAPHIC |\n| ENVIRONMENT |\n| HEALTH |\n| MONEY |\n| FOOD |\n| TRAVEL |\n| BRIDAL |\n| RECREATION |\n| TECHNOLOGY |\n| HOME |\n| EDUCATION |\n| ARTS & ENTERTAINMENT |\n| AUTO |\n| CHILDREN |\n| FITNESS |\n| HOLIDAY |\n| INSURANCE |\n| LAWN & GARDEN |\n| LISTICLE |\n| NUTRITION |\n| PARENTING |\n| PETS |\n| SEASONAL |\n\nMar 06, 2024\n\nCelebrate St. 
Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\n#### Mar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\n#### Mar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nSPANISH\n\nSENIORS\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK_REVIEW\n\nRECIPE\n\nAFRICAN_AMERICANS\n\nHOW_TO\n\nBYLINED_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\n## RECENT POSTS\n\n| 01 | School Choice Combines Nature And |\n| --- | --- |\n| | Nuture for Success |\n| 02 | Think Outside the (Gift) Box, Contribute to a 529 Plan |\n| 03 | Black Friday Bonanza—Don't Miss These Hot Gifts |\n| | Self-Publishing Helps Parents Share New |\n| 04 | Books with Kids |\n| 05 | Five Tips to Safely Manage Medications |\n| 06 | Self-care on Your Schedule with Mental |\n| | Wellness App |\n\n#### MOST POPULAR\n\nEntrepreneur Inspires Youth with Community Projects 08 Jul 21\n\nWho Celebrates National School Choice Week? 
22 Jan 18\n\nNo Arms, No Legs, No Worries 13 Dec 18\n\nScent-imental: Holiday Smells Evoke Happy Memories 30 Oct 18\n\nTechnology Breakthroughs Drive Clean Energy Success 01 Oct 18\n\nSafety App Empowers Students, Offers Peace of Mind\n\n| TAGS | |\n| --- | --- |\n| Fashion | Business Infographic |\n| Environment | Health Money |\n| Food Travel | Bridal Recreation |\n| Technology | Home Education |\n| Arts & Entertainment | Auto Children |\n| Fitness | Holiday Insurance |\n| Lawn & Garden | Listicle Nutrition |\n| Parenting | Pets Seasonal Seniors |\n| Spanish | Tips and How To |\n| Entertainment | Career Community |\n| Family Tips | Internet |\n| Human_Interest | Beauty Arts |\n| RealEstate | Safety Medicine |\n| Book_Review | Recipe |\n| African_Americans | How_To |\n| Bylined_Column | Charity Sports |\n| Home_Improvement | Tech Wellness |\n| Arts and Entertainment | Food & Drink |\n| Real_Estate | Veterans Outdoors |\n| Real Estate | Human Interest |\n| Money & Finance | Fashion & Beauty |\n| Money and Finance | |\n| Books & Entertainment | Books |\n| Arts & Entertainment | |\n\nContact Us Work From Home Privacy Policy Terms of Use", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "**Digitalisation** and its impact on economy and work is a major topic in political and scientific discussions. 
Obviously the term 'Digitalisation'271 covers such a broad array of technologies and developments that statements on their impact on society, economy and work can rarely be simple and straightforward.272 Digitalisation includes technical issues like 5G coverage, widespread connectivity, IoT and big data, wearables, semiconductor capacities, edge and cloud computing, AI, data handling issues, for example, of medical records, mobile devices and online platforms, and it triggers economic and societal changes, for example, of business models, skills development, education and digital government.\n\nDigital transformation is globally supported by governments using financial, political and legal measures. The European Commission launched in February 2020 the **European Digital Strategy 2020-2025**. This strategy aims to promote a new generation of digital technologies.\n\nConcerning the **overall impact of digitalisation on work**, most researchers state a decrease of certain types of work and growth of others. Cedefop describes this as 'the great divide' and writes:\n\n*'Cedefop's European skills and jobs (ESJ) survey reveals that more than 7 in 10 adult employees in the EU need at least some fundamental ICT level to be able to perform their jobs. Yet, about one in three of those employees are at risk of digital skill gaps. At the same time, almost half of all employees in lowskilled occupations do not require ICT skills to do their work. Cedefop … notes that 'the digital divide is alive and well. 
A strikingly high share of the EU adult workforce is still employed in a semi-analogue world, at the same time that others are faced with technological obsolescence.'*273\n\nA statement of two researchers from the Massachusetts Institute of Technology shortly summarises this:\n\n*'Technologies such as payroll-processing and inventory-control software, factory automation, computercontrolled machining centers, and scheduling tools have replaced workers on the shop floor and in clerical tasks and rote information processing. By contrast, big data, analytics, and high-speed communications have enhanced the output of people with engineering, creative, and design skills and made them more valuable. The net effect has been to decrease the demand for low-skilled information workers while increasing the demand for highly skilled ones.'*274\n\n**Digital technologies can enhance prevention at workplaces.** They can help to separate workers from hazardous working situations, facilitate better and innovative ways of monitoring exposure, and might improve the quality of work by relieving workers from repetitive or routine tasks. Digital technologies may also create higher levels of autonomy and flexibility or facilitate the access of a more diverse workforce to the labour market, in particular vulnerable groups such as disabled people, ageing", - "page_start": 103, - "page_end": 103, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. [248] Even when used in conventional warfare, it is unlikely that they will be unable to reliably choose targets and could potentially kill an innocent person. 
[248] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed.[249] By 2015, over fifty countries were reported to be researching battlefield robots.[250]\n\nAI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. [251] All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.[252][253]\n\nThere many other ways that AI is expected to help bad actors, some of which can not be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[254]\n\n#### **Technological unemployment**\n\nEconomists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[255]\n\nIn the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that \"we're in uncharted territory\" with AI.[256] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in longterm unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. 
[257] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at \"high risk\" of potential automation, while an OECD report classified only 9% of U.S. jobs as \"high risk\".[p][259] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[255] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[260][261]\n\nUnlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; *The Economist* stated in 2015 that \"the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution\" is \"worth taking seriously\".[262] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. [263]\n\nFrom the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Franzen) sued AI companies for using their work to train generative AI.[195][196] Another discussed approach is to envision a separate *sui generis* system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[197]\n\n#### **Dominance by tech giants**\n\nThe commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. 
[198][199][200] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.[201][202]\n\n#### **Power needs and environmental impacts**\n\nIn January 2024, the International Energy Agency (IEA) released *Electricity 2024, Analysis and Forecast to 2026*, forecasting electric power use.[203] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation.[204]\n\nProdigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. 
AI makes the power grid more efficient and \"intelligent\", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms.[205]\n\nA 2024 Goldman Sachs Research Paper, *AI Data Centers and the Coming US Power Demand Surge*, found \"US power demand (is) likely to experience growth not seen in a generation....\" and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[206] Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[207]\n\nIn 2024, the *Wall Street Journal* reported that big AI companies have begun negotiations with the US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 Million (US).[208] Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers.[209]\n\nIn September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. 
If approved (this will be the first ever US re-commissioning of a nuclear plant), over 835 megawatts of power – enough for 800,000 homes – of", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia3.pdf" - }, - { - "text": "## **Building on Strengths and Being Innovative** CARLOS TAVARES\n\nPLANNING\n\nExecutive Vice President\n\n\"The Planning Group covers a great deal of corporate territory and handles a number of key responsibilities within Nissan. Our Corporate Planning division, for example, oversees strategy, setting the Company's long-term course under the Executive Committee's direction. The two creative divisions, Design and Product Planning, create value for the customer. Together, those three divisions form the core of our group, surrounded by several other key divisions. Market Intelligence supports Design and Product Planning in customer understanding. The people in Process and Resource Management provide the practical direction and restraint a company of our size must have when deploying its resources. And Program Management drives the implementation process, turning the work of all the other divisions into reality.\n\nThe role of Corporate Planning is to look to the future and devise ways to take advantage of the business opportunities we identify. In the past, the division relied primarily on three-year plans such the Nissan Revival Plan and NISSAN 180. That strategy served the interests of Nissan stakeholders well. The Company is now sound, and the power and constancy of vision Corporate Planning provides will determine how well Nissan maintains its strength. However, in addition to the mid-term plan, we have now entered a phase that requires us to extend that vision and implement a longer-term plan. Corporate Planning is working closely with the Executive Committee on this matter.\n\nDesign and Product Planning are central to the creation of Nissan's strength. 
Both focus on satisfying the consumer's unmet needs, and create value in the process. Our product planning DNA is to identify and target our customers, and do it better than our competitors. Rather than simply throwing a product into the market and waiting for a response, we first seek a deep understanding of the expected response. Only then can we create a product consistent with that understanding.\n\nOne key for both creative divisions is to focus on \"customer clusters.\" We refuse to spend our money to develop products that should please everyone. In fact, we may invest in a certain innovation because we understand that a particular subset of customers will appreciate the performance it provides. Our process is very focused, and may even target a smaller customer cluster that no one else is addressing. The marketing process for these two divisions is deep and accurate. This creates value through differentiation.\n\nThe NISSAN Value-Up plan is about focusing on strong products that reinforce our brand, pursuing new concepts and innovation, and expanding geographically in a stronger and faster way. During the Nissan Revival Plan and NISSAN 180, we introduced some influential and innovative models the Murano, the Z, the FX and the X-TRAIL, to name a few. It would be a mistake not to capitalize on those successes and reinforce the brand. At the same time, we cannot rely solely on our current concepts. Launching a new product naturally requires significant expenditures, because awareness and understanding must be created for the new product. We must differentiate to succeed, devise new products and concepts, and venture into areas that others have not. During the NISSAN Value-Up period, we will offer products that build on past successes—without being conservative—as well as products that are new and innovative. 
Our brand pyramid shows us the way to be both 'bold and thoughtful.'\n\nOur Market Intelligence division, which supports the Design and Product Planning teams as well as other divisions throughout the Company, is relatively new. The division's experts not only supply research data, they also help shape surveys to answer precise questions and identify the traps that are often hidden within surveys. One challenge for the Market Intelligence people is to clearly communicate their conclusions to the Company's decisionmakers. If only their peers are able to understand the data they produce, their efforts and the data itself serve no purpose. In addition, this division is challenged to standardize and extend best practices globally, while maintaining a regional focus when appropriate.", - "page_start": 35, - "page_end": 35, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "### A L L S T E E L : A N A N T I D O T E T O T H E O R D I N A R Y\n\n#### A C A S E S T U D Y I N Q U A L I T Y\n\nGreat, high-quality design creates better work environments and happier end-users. Whether we're building lateral files (the first product for which we became known) or designing awardwinning seating, like our #19® chair, the Allsteel core message remains constant: the highest quality in functionality, durability, and service.\n\nToday's Allsteel is about a broad array of workplace furniture solutions: new, exciting panel and desking systems, storage, seating, and tables that offer a unique counterpoint to the sea of sameness provided by most office furniture. Working closely with architects and designers, we target the contract market, providing project-driven and design-oriented office solutions. Our rapid modeling and prototyping allows for equally rapid product development, a reflection of our agile, lean culture. 
As innovative as many of our products are, design innovation — for us — is simply what happens along the way to solving customer problems.\n\nSome of our products, like the #19® chair, are iconographically associated with the Allsteel name, and are quite influential in our brand building efforts. Our two newest enterprises are Terrace® 2.6 — a fast-growing systems line providing enormous flexibility and durability — and Get SetTM — an incredibly versatile line of multi-purpose room tables, chairs, and communication products. All of our products respond completely to the needs of end-users because that's where the design process starts.\n\nIn all that we do, our main focus is to identify end-user problems and solve them better than anyone else. The majority of our customers are large corporations with multiple locations worldwide. According to the senior vice president responsible for the global design, construction, and project management of an internationally renowned financial services company, \"Allsteel offers extremely attractive, cost-effective furniture solutions. Your manufacturing and service are best in class you turn everything around with impressive swiftness. There's really not much in the market to beat you.\"\n\nWell-designed, forward-thinking, and glad to be of service. 
Allsteel is proud to uphold our long heritage of quality.", - "page_start": 19, - "page_end": 19, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind.[387]\n\n#### **AI welfare and rights**\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part to society on their own.[393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. 
They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[390][389]\n\n## **Future**\n\n### **Superintelligence and the singularity**\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\".[395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[396]\n\n### **Transhumanism**\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. [397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - }, - { - "text": "WHILE IT IS EARLY DAYS, I BELIEVE WE CAN EVOLVE THE BUSINESS IN A WAY THAT WILL BE EVEN MORE REWARDING FOR OUR CUSTOMERS, OUR SHAREHOLDERS AND EMPLOYEES.\" \"\n\n**GUY LAURENCE**\n\n## A MESSAGE FROM THE **PRESIDENT & CEO**\n\n**As I write these words after recently joining the company, I can say with genuine enthusiasm that it's great to be here at Rogers. I took this post because Rogers is a remarkable company with a rich history and an unrivalled mix of wireless, cable and media assets. 
It is a good match with my background and my experience.**\n\nDuring the recruiting and onboarding process, I spent considerable time with the Rogers family, the Board of Directors and the leadership team. I am struck by their energy, passion and drive to win, which I think we can harness to do even greater things. I also value the support and longerterm focus of the founding Rogers family who own significant equity in the company.\n\nSince joining, I have criss-crossed Canada meeting my team, external stakeholders and customers. I have also conducted numerous business reviews, overseen the 700 MHz spectrum auction and reviewed the regulatory agenda. All this with the view to developing a detailed set of priorities and plans for the company going forward. After I complete this review in the Spring I will outline a detailed strategy and business plan working with my management team.\n\nRogers has many strengths and I intend to capitalize on them. This is a financially strong company with a solid balance sheet and investment grade credit ratings. We have highly advanced cable and wireless networks and a robust portfolio of media assets. We also have a strong pipeline of new products and services to offer to our customers and some of the most passionate, committed employees I have ever worked with.\n\nWhile it is early days, I believe we can evolve the business in a way that will be even more rewarding for our customers, our shareholders and employees. Our goal is clear – winning on a consistent basis. And while our industry faces the challenge of moderating growth and regulatory uncertainty, few industries are more dynamic and better at leveraging new technologies.\n\nTo win, we must put our customers' needs front and centre in everything we do. This means delivering a better and more consistent customer experience. 
It means strengthening our value proposition to make sure our customers can answer the question \"why Rogers?\" As a company, we need to bring our collection of assets together in a way that strengthens and differentiates Rogers with our customers and our shareholders. We also need to align and focus our investments in key areas to accelerate our growth. Internally we need to execute with operational excellence. And we need to focus on clarifying accountabilities and strengthening our teams at all levels of the company.\n\nAs CEO, I will work to re-establish our leadership position and accelerate our growth. This will take time. It is a longterm effort that will require a clear strategy, rigorous prioritization and disciplined execution. It will not be easy, but it is the job I have signed up for, and it is a challenge I intend to meet head-on.\n\nI look forward to continuing Ted's legacy, and to leading Rogers through the next phase of growth and to serving you, our shareholders.\n\nThank you for your continued business, investment and support.\n\n**GUY LAURENCE PRESIDENT AND CHIEF EXECUTIVE OFFICER** ROGERS COMMUNICATIONS INC.", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "news4.pdf", - "query": "I want to help my parents who are in residential care, are there any trendy AI-related devices I could help them with? ", - "target_page": 1, - "target_passage": "Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "#### ISSUE\n\nDecember 2024\n\n#### CATEGORIES\n\nTechnology & Cybersecurity Editor's Picks Finance - Personal Home - Interior\n\n# **The top AI-powered tech trends in 2025**\n\n(NC) As we look ahead to 2025, artificial intelligence (AI) continues to revolutionize our lives. 
From enhancing our daily routines to transforming entire industries, AI's impact is undeniable.\n\nThese five innovations are set to shape our future, offering unprecedented convenience, efficiency and personalization.\n\n### AI-powered computing\n\nAI-powered computing, such as Intel-powered laptops – or AI PC – is at the forefront of technological advancement. But what, exactly, is an AI PC? They're computers that have AI built into their processors – also known as the brain of the computer – which optimizes performance, enhances security and provides a more personalized experience as they learn from your usage patterns. For consumers, this means faster, smarter and more secure computing tailored to your individual needs.\n\n### Smart home automation\n\nSmart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.\n\n## Health and wellness\n\nThe health-care industry is seeing significant transformation. AI-driven health and wellness applications can monitor vital signs, predict potential health issues, and even provide personalized fitness and\n\nnutrition plans. Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being.\n\n# Financial services\n\nAI is also making waves in the financial sector, offering smarter and more secure ways to manage money. 
From AI-driven investment platforms that provide personalized financial advice to fraud detection systems that protect against cyber threats, AI can analyze vast amounts of data to identify trends and make more informed financial decisions.\n\n# Enhanced education\n\nIn education, enhanced learning tools provide personalized learning experiences that adapt to each student's strengths and weaknesses. This technology can offer real-time feedback, helping students improve their skills more effectively. Additionally, AI can assist educators by automating administrative tasks and providing insights into student performance, allowing for more focused and effective teaching.\n\nLearn more at intel.com/aipc.\n\nwww.newscanada.com Word Count: 346\n\n#### M ed i a A tt a ch m e n ts −\n\n#### View", - "page_start": 0, - "page_end": 0, - "source_file": "news4.pdf" - }, - { - "text": "# **Artificial intelligence**\n\n**Artificial intelligence** (**AI**), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.[1] Such machines may be called AIs.\n\nHigh-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). 
However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\"[2][3]\n\nVarious subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. [a] General intelligence—the ability to complete any task performed by a human on an at least equal level—is among the field's long-term goals.[4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. [b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields.[5]\n\nArtificial intelligence was founded as an academic discipline in 1956,[6] and the field went through multiple cycles of optimism throughout its history, [7][8] followed by periods of disappointment and loss of funding, known as AI winters. [9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques.[11] This growth accelerated further after 2017 with the transformer architecture, [12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. 
The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.\n\n## **Goals**", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia3.pdf" - }, - { - "text": "#### **Existential risk**\n\nIt has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, \"spell the end of the human race\".[265] This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like \"self-awareness\" (or \"sentience\" or \"consciousness\") and becomes a malevolent character. [q] These sci-fi scenarios are misleading in several ways.\n\nFirst, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives *almost any* goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager).[267] Stuart Russell gives the example of household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that \"you can't fetch the coffee if you're dead.\"[268] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is \"fundamentally on our side\".[269]\n\nSecond, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. 
Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.[270]\n\nThe opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI.[271] Personalities such as Stephen Hawking, Bill Gates, and Elon Musk, [272] as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI.\n\nIn May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to \"freely speak out about the risks of AI\" without \"considering how this impacts Google.\"[273] He notably mentioned risks of an AI takeover, [274] and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.[275]\n\nIn 2023, many leading AI experts endorsed the joint statement that \"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war\".[276]\n\nSome other researchers were more optimistic. 
AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making \"human lives longer and healthier and easier.\"[277] While the tools that are now being used to improve lives can also be used by bad actors, \"they can also be used against the bad actors.\"[278][279] Andrew Ng also argued that \"it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests.\"[280] Yann LeCun \"scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction.\"[281] In the early 2010s, experts argued that the risks are too distant in", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia3.pdf" - }, - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind.[387]\n\n#### **AI welfare and rights**\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. 
They also noted that robots lacked the autonomy to take part to society on their own.[393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[390][389]\n\n## **Future**\n\n### **Superintelligence and the singularity**\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\".[395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[396]\n\n### **Transhumanism**\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. [397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - }, - { - "text": "A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. 
[248] Even when used in conventional warfare, it is unlikely that they will be unable to reliably choose targets and could potentially kill an innocent person. [248] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed.[249] By 2015, over fifty countries were reported to be researching battlefield robots.[250]\n\nAI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. [251] All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.[252][253]\n\nThere many other ways that AI is expected to help bad actors, some of which can not be foreseen. 
For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[254]\n\n#### **Technological unemployment**\n\nEconomists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[255]\n\nIn the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that \"we're in uncharted territory\" with AI.[256] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in longterm unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. [257] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at \"high risk\" of potential automation, while an OECD report classified only 9% of U.S. jobs as \"high risk\".[p][259] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[255] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[260][261]\n\nUnlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; *The Economist* stated in 2015 that \"the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution\" is \"worth taking seriously\".[262] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. 
[263]\n\nFrom the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI,[367] with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did \"not actually use AI in a material way\".[368]\n\n### **Evaluating approaches to AI**\n\nNo established unifying theory or paradigm has guided AI research for most of its history. [aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term \"artificial intelligence\" to mean \"machine learning with neural networks\"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.\n\n#### **Symbolic AI and its limits**\n\nSymbolic AI (or \"GOFAI\")[370] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at \"intelligent\" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: \"A physical symbol system has the necessary and sufficient means of general intelligent action.\"[371]\n\nHowever, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. 
Moravec's paradox is the discovery that high-level \"intelligent\" tasks were easy for AI, but low level \"instinctive\" tasks were extremely difficult.[372] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a \"feel\" for the situation, rather than explicit symbolic knowledge.[373] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.[ab][16]\n\nThe issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence,[375][376] in part because subsymbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.\n\n#### **Neat vs. scruffy**\n\n\"Neats\" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). \"Scruffies\" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s,[377] but eventually was seen as irrelevant. Modern AI has elements of both.\n\n#### **Soft vs. 
hard computing**", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Home / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n#### ARTS AND ENTERTAINMENT\n\n# New Artificial Intelligence Summit Series Begins With Energy\n\n### 07/31/2024\n\n (AI) continues to transform the United States and the world. To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent \"Action Plan for U.S. Leadership in Next-Generation Energy,\" raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. 
The stakes are high; if the U.S. falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\n#### Article Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n#### RELATED ARTICLES\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage Mar 06, 2024\n\n| CATEGORIES |\n| --- |\n| FASHION |\n| BUSINESS |\n| INFOGRAPHIC |\n| ENVIRONMENT |\n| HEALTH |\n| MONEY |\n| FOOD |\n| TRAVEL |\n| BRIDAL |\n| RECREATION |\n| TECHNOLOGY |\n| HOME |\n| EDUCATION |\n| ARTS & ENTERTAINMENT |\n| AUTO |\n| CHILDREN |\n| FITNESS |\n| HOLIDAY |\n| INSURANCE |\n| LAWN & GARDEN |\n| LISTICLE |\n| NUTRITION |\n| PARENTING |\n| PETS |\n| SEASONAL |\n\nMar 06, 2024\n\nCelebrate St. Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\n#### Mar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\n#### Mar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nSPANISH\n\nSENIORS\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK_REVIEW\n\nRECIPE\n\nAFRICAN_AMERICANS\n\nHOW_TO\n\nBYLINED_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\n## RECENT POSTS\n\n| 01 | School Choice Combines Nature And |\n| --- | --- |\n| 
| Nuture for Success |\n| 02 | Think Outside the (Gift) Box, Contribute to a 529 Plan |\n| 03 | Black Friday Bonanza—Don't Miss These Hot Gifts |\n| | Self-Publishing Helps Parents Share New |\n| 04 | Books with Kids |\n| 05 | Five Tips to Safely Manage Medications |\n| 06 | Self-care on Your Schedule with Mental |\n| | Wellness App |\n\n#### MOST POPULAR\n\nEntrepreneur Inspires Youth with Community Projects 08 Jul 21\n\nWho Celebrates National School Choice Week? 22 Jan 18\n\nNo Arms, No Legs, No Worries 13 Dec 18\n\nScent-imental: Holiday Smells Evoke Happy Memories 30 Oct 18\n\nTechnology Breakthroughs Drive Clean Energy Success 01 Oct 18\n\nSafety App Empowers Students, Offers Peace of Mind\n\n| TAGS | |\n| --- | --- |\n| Fashion | Business Infographic |\n| Environment | Health Money |\n| Food Travel | Bridal Recreation |\n| Technology | Home Education |\n| Arts & Entertainment | Auto Children |\n| Fitness | Holiday Insurance |\n| Lawn & Garden | Listicle Nutrition |\n| Parenting | Pets Seasonal Seniors |\n| Spanish | Tips and How To |\n| Entertainment | Career Community |\n| Family Tips | Internet |\n| Human_Interest | Beauty Arts |\n| RealEstate | Safety Medicine |\n| Book_Review | Recipe |\n| African_Americans | How_To |\n| Bylined_Column | Charity Sports |\n| Home_Improvement | Tech Wellness |\n| Arts and Entertainment | Food & Drink |\n| Real_Estate | Veterans Outdoors |\n| Real Estate | Human Interest |\n| Money & Finance | Fashion & Beauty |\n| Money and Finance | |\n| Books & Entertainment | Books |\n| Arts & Entertainment | |\n\nContact Us Work From Home Privacy Policy Terms of Use", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product 
managers, data engineers, domain experts, and delivery managers.[300]\n\nThe UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.[301]\n\n#### **Regulation**\n\nThe regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[302] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. [303] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[304][305] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[306] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and\n\nThe first global AI Safety Summit was held in 2023 with a declaration calling for international cooperation.\n\nVietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[306] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. 
[306] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[307] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[308] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, governments officials and academics.[309] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the \"Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law\". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[310]\n\nIn a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that \"products and services using AI have more benefits than drawbacks\".[304] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. [311] In a 2023 Fox News poll, 35% of Americans thought it \"very important\", and an additional 41% thought it \"somewhat important\", for the federal government to regulate AI, versus 13% responding \"not very important\" and 8% responding \"not at all important\".[312][313]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Artificial intelligent (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. 
AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.[175][176][177]\n\nVincent van Gogh in watercolour created by generative AI software\n\n#### **Other industry-specific tasks**\n\nThere are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated \"AI\" in some offerings or processes.[178] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.\n\nAI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large scale and small scale evacuations using historical data from GPS, videos or social media. Further, AI can provide real time information on the real time evacuation conditions.[179][180][181]\n\nIn agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. 
AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.\n\nArtificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for \"classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights.\" For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia3.pdf" - }, - { - "text": "models are prone to generating falsehoods called \"hallucinations\", although this can be reduced with RLHF and quality data. They are used in chatbots, which allow people to ask a question or request a task in simple text.[122][123]\n\nCurrent models and services include Gemini (formerly Bard), ChatGPT, Grok, Claude, Copilot, and LLaMA. 
[124] Multimodal GPT models can process different types of data (modalities) such as images, videos, sound, and text.[125]\n\n### **Hardware and software**\n\nIn the late 2010s, graphics processing units (GPUs) that were increasingly designed with AI-specific enhancements and used with specialized TensorFlow software had replaced previously used central processing unit (CPUs) as the dominant means for large-scale (commercial and academic) machine learning models' training.[126] Specialized programming languages such as Prolog were used in early AI research,[127] but general-purpose programming languages like Python have become predominant.[128]\n\nThe transistor density in integrated circuits has been observed to roughly double every 18 months—a trend known as Moore's law, named after the Intel co-founder Gordon Moore, who first identified it. Improvements in GPUs have been even faster. [129]\n\n## **Applications**\n\nAI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's iPhoto and TikTok). 
The deployment of AI may be overseen by a Chief automation officer (CAO).\n\n### **Health and medicine**\n\nThe application of AI in medicine and medical research has the potential to increase patient care and quality of life.[130] Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients.[131][132]\n\nFor medical research, AI is an important tool for processing and integrating big data. This is particularly important for organoid and tissue engineering development which use microscopy imaging as a key technique in fabrication.[133] It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research.[133] New AI tools can deepen the understanding of biomedically relevant pathways. For example, AlphaFold 2 (2021) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein. [134] In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria.[135] In 2024, researchers used machine learning to accelerate the search for Parkinson's disease", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "news4.pdf", - "query": "Is the topic of finance trending among AI topics for 2015 in Canada?", - "target_page": 1, - "target_passage": "Financial services", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[300]\n\nThe UK AI Safety Institute 
released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.[301]\n\n#### **Regulation**\n\nThe regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[302] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. [303] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[304][305] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[306] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and\n\nThe first global AI Safety Summit was held in 2023 with a declaration calling for international cooperation.\n\nVietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[306] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. 
[306] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[307] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[308] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, governments officials and academics.[309] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the \"Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law\". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[310]\n\nIn a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that \"products and services using AI have more benefits than drawbacks\".[304] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. [311] In a 2023 Fox News poll, 35% of Americans thought it \"very important\", and an additional 41% thought it \"somewhat important\", for the federal government to regulate AI, versus 13% responding \"not very important\" and 8% responding \"not at all important\".[312][313]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Home / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n#### ARTS AND ENTERTAINMENT\n\n# New Artificial Intelligence Summit Series Begins With Energy\n\n### 07/31/2024\n\n (AI) continues to transform the United States and the world. 
To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent \"Action Plan for U.S. Leadership in Next-Generation Energy,\" raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. The stakes are high; if the U.S. 
falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\n#### Article Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n#### RELATED ARTICLES\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage Mar 06, 2024\n\n| CATEGORIES |\n| --- |\n| FASHION |\n| BUSINESS |\n| INFOGRAPHIC |\n| ENVIRONMENT |\n| HEALTH |\n| MONEY |\n| FOOD |\n| TRAVEL |\n| BRIDAL |\n| RECREATION |\n| TECHNOLOGY |\n| HOME |\n| EDUCATION |\n| ARTS & ENTERTAINMENT |\n| AUTO |\n| CHILDREN |\n| FITNESS |\n| HOLIDAY |\n| INSURANCE |\n| LAWN & GARDEN |\n| LISTICLE |\n| NUTRITION |\n| PARENTING |\n| PETS |\n| SEASONAL |\n\nMar 06, 2024\n\nCelebrate St. Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\n#### Mar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\n#### Mar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nSPANISH\n\nSENIORS\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK_REVIEW\n\nRECIPE\n\nAFRICAN_AMERICANS\n\nHOW_TO\n\nBYLINED_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\n## RECENT POSTS\n\n| 01 | School Choice Combines Nature And |\n| --- | --- |\n| | Nuture for Success |\n| 02 | 
Think Outside the (Gift) Box, Contribute to a 529 Plan |\n| 03 | Black Friday Bonanza—Don't Miss These Hot Gifts |\n| | Self-Publishing Helps Parents Share New |\n| 04 | Books with Kids |\n| 05 | Five Tips to Safely Manage Medications |\n| 06 | Self-care on Your Schedule with Mental |\n| | Wellness App |\n\n#### MOST POPULAR\n\nEntrepreneur Inspires Youth with Community Projects 08 Jul 21\n\nWho Celebrates National School Choice Week? 22 Jan 18\n\nNo Arms, No Legs, No Worries 13 Dec 18\n\nScent-imental: Holiday Smells Evoke Happy Memories 30 Oct 18\n\nTechnology Breakthroughs Drive Clean Energy Success 01 Oct 18\n\nSafety App Empowers Students, Offers Peace of Mind\n\n| TAGS | |\n| --- | --- |\n| Fashion | Business Infographic |\n| Environment | Health Money |\n| Food Travel | Bridal Recreation |\n| Technology | Home Education |\n| Arts & Entertainment | Auto Children |\n| Fitness | Holiday Insurance |\n| Lawn & Garden | Listicle Nutrition |\n| Parenting | Pets Seasonal Seniors |\n| Spanish | Tips and How To |\n| Entertainment | Career Community |\n| Family Tips | Internet |\n| Human_Interest | Beauty Arts |\n| RealEstate | Safety Medicine |\n| Book_Review | Recipe |\n| African_Americans | How_To |\n| Bylined_Column | Charity Sports |\n| Home_Improvement | Tech Wellness |\n| Arts and Entertainment | Food & Drink |\n| Real_Estate | Veterans Outdoors |\n| Real Estate | Human Interest |\n| Money & Finance | Fashion & Beauty |\n| Money and Finance | |\n| Books & Entertainment | Books |\n| Arts & Entertainment | |\n\nContact Us Work From Home Privacy Policy Terms of Use", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "#### ISSUE\n\nDecember 2024\n\n#### CATEGORIES\n\nTechnology & Cybersecurity Editor's Picks Finance - Personal Home - Interior\n\n# **The top AI-powered tech trends in 2025**\n\n(NC) As we look ahead to 2025, artificial intelligence (AI) continues to revolutionize our lives. 
From enhancing our daily routines to transforming entire industries, AI's impact is undeniable.\n\nThese five innovations are set to shape our future, offering unprecedented convenience, efficiency and personalization.\n\n### AI-powered computing\n\nAI-powered computing, such as Intel-powered laptops – or AI PC – is at the forefront of technological advancement. But what, exactly, is an AI PC? They're computers that have AI built into their processors – also known as the brain of the computer – which optimizes performance, enhances security and provides a more personalized experience as they learn from your usage patterns. For consumers, this means faster, smarter and more secure computing tailored to your individual needs.\n\n### Smart home automation\n\nSmart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.\n\n## Health and wellness\n\nThe health-care industry is seeing significant transformation. AI-driven health and wellness applications can monitor vital signs, predict potential health issues, and even provide personalized fitness and\n\nnutrition plans. Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being.\n\n# Financial services\n\nAI is also making waves in the financial sector, offering smarter and more secure ways to manage money. 
From AI-driven investment platforms that provide personalized financial advice to fraud detection systems that protect against cyber threats, AI can analyze vast amounts of data to identify trends and make more informed financial decisions.\n\n# Enhanced education\n\nIn education, enhanced learning tools provide personalized learning experiences that adapt to each student's strengths and weaknesses. This technology can offer real-time feedback, helping students improve their skills more effectively. Additionally, AI can assist educators by automating administrative tasks and providing insights into student performance, allowing for more focused and effective teaching.\n\nLearn more at intel.com/aipc.\n\nwww.newscanada.com Word Count: 346\n\n#### M ed i a A tt a ch m e n ts −\n\n#### View", - "page_start": 0, - "page_end": 0, - "source_file": "news4.pdf" - }, - { - "text": "# **Artificial intelligence**\n\n**Artificial intelligence** (**AI**), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.[1] Such machines may be called AIs.\n\nHigh-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). 
However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\"[2][3]\n\nVarious subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. [a] General intelligence—the ability to complete any task performed by a human on an at least equal level—is among the field's long-term goals.[4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. [b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields.[5]\n\nArtificial intelligence was founded as an academic discipline in 1956,[6] and the field went through multiple cycles of optimism throughout its history, [7][8] followed by periods of disappointment and loss of funding, known as AI winters. [9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques.[11] This growth accelerated further after 2017 with the transformer architecture, [12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. 
The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.\n\n## **Goals**", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 265. Cellan-Jones (2014).\n- 266. Russell & Norvig 2021, p. 1001.\n- 267. Bostrom (2014).\n- 268. Russell (2019).\n- 269. Bostrom (2014); Müller & Bostrom (2014); Bostrom (2015).\n- 270. Harari (2023).\n- 271. Müller & Bostrom (2014).\n- 272. Leaders' concerns about the existential risks of AI around 2015: Rawlinson (2015), Holley (2015), Gibbs (2014), Sainato (2015)\n- 273. \" \"Godfather of artificial intelligence\" talks impact and potential of new AI\" (https://www.cbsne ws.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai). *CBS News*. 25 March 2023. Archived (https://web.archive.org/web/20230328225221/https://www. cbsnews.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai) from the original on 28 March 2023. Retrieved 28 March 2023.\n- 274. Pittis, Don (4 May 2023). \"Canadian artificial intelligence leader Geoffrey Hinton piles on fears of computer takeover\" (https://www.cbc.ca/news/business/ai-doom-column-don-pittis-1.6829302). *CBC*. Archived (https://web.archive.org/web/20240707032135/https://www.cbc. ca/news/business/ai-doom-column-don-pittis-1.6829302) from the original on 7 July 2024. Retrieved 5 October 2024.\n- 275. \" '50–50 chance' that AI outsmarts humanity, Geoffrey Hinton says\" (https://www.bnnbloomb erg.ca/50-50-chance-that-ai-outsmarts-humanity-geoffrey-hinton-says-1.2085394). *Bloomberg BNN*. 14 June 2024. Retrieved 6 July 2024.\n- 276. Valance (2023).\n- 277. Taylor, Josh (7 May 2023). 
\"Rise of artificial intelligence is inevitable but should not be feared, 'father of AI' says\" (https://www.theguardian.com/technology/2023/may/07/rise-of-arti ficial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says). *The Guardian*. Archived (https://web.archive.org/web/20231023061228/https://www.theguardian.com/techn ology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-fatherof-ai-says) from the original on 23 October 2023. Retrieved 26 May 2023.\n- 278. Colton, Emma (7 May 2023). \" 'Father of AI' says tech fears misplaced: 'You cannot stop it' \" (https://www.foxnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-can not-stop). *Fox News*. Archived (https://web.archive.org/web/20230526162642/https://www.fo xnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-cannot-stop) from the original on 26 May 2023. Retrieved 26 May 2023.\n- 279. Jones, Hessie (23 May 2023). \"Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life's Work Won't Lead To Dystopia\" (https://www.forbes.com/sites/hessiejones/20 23/05/23/juergen-schmidhuber-renowned-father-of-modern-ai-says-his-lifes-work-wont-leadto-dystopia). *Forbes*. Archived (https://web.archive.org/web/20230526163102/https://www.fo rbes.com/sites/hessiejones/2023/05/23/juergen-schmidhuber-renowned-father-of-modern-ai -says-his-lifes-work-wont-lead-to-dystopia/) from the original on 26 May 2023. Retrieved 26 May 2023.\n- 280. McMorrow, Ryan (19 December 2023). \"Andrew Ng: 'Do we think the world is better off with more or less intelligence?' \" (https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f93 52be3). *Financial Times*. Archived (https://web.archive.org/web/20240125014121/https://ww w.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3) from the original on 25 January 2024. Retrieved 30 December 2023.\n- 281. Levy, Steven (22 December 2023). 
\"How Not to Be Stupid About AI, With Yann LeCun\" (http s://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview). *Wired*. Archived (h ttps://web.archive.org/web/20231228152443/https://www.wired.com/story/artificial-intelligenc e-meta-yann-lecun-interview/) from the original on 28 December 2023. Retrieved 30 December 2023.", - "page_start": 44, - "page_end": 44, - "source_file": "wikipedia3.pdf" - }, - { - "text": "# Our Goals for 2014\n\nComplete a minimum of $75 million in acquisitions.\n\nAcquire over 50% of 2014 acquisitions outside Atlantic Canada, with a focus in Ontario.\n\nGrow same store NOI by up to 2%.\n\nContinue to invest in development with two projects underway, managing projects on schedule and on budget.\n\ndevelopment program to a maximum of 5% of our balance sheet per year. We have three other developments projects in various planning stages, but don't expect to begin construction on any additional new projects until late 2014 or into 2015.\n\n## **Geographic Diversification is a Priority**\n\nGeographic diversification is a priority for Killam. Our asset base in Atlantic Canada is the foundation of the Company; however, with Atlantic Canada representing only 5% of the Canadian rental market, our growth opportunities increase significantly by expanding our target markets outside of this region. With its strong operating platform, Killam can support a larger and more geographically diverse portfolio. We are actively growing a portfolio of apartments in Ontario in three target markets: Ottawa, the Greater Toronto Area, and Southwestern Ontario. An increased investment outside Atlantic Canada will increase not only Killam's growth potential, it will also expand the Company's diversification and exposure to higher growth markets.\n\nAcquisitions in Ontario represented 45% of acquisitions in 2013. 
In addition to 1,359 apartment units in the province, we also have 2,144 manufactured home community sites, representing 29% of the MHC NOI last year. Based on our current portfolio, 15% of Killam's 2014 NOI will be generated in Ontario, compared to our longer-term goal of generating 50% of NOI outside Atlantic Canada. We expect to reach this goal by focusing acquisition activity in Ontario, with the majority of future investment anticipated in the province over the next few years. We will look for additional development opportunities in Ontario and we are exploring opportunities in Western Canada, attracted by the strong population growth trends in Alberta's urban markets. I would like to thank all Killam employees for their contributions and\n\ncommitment over the last year and our board of directors for their governance. Also, I would like to thank you, our shareholders, for your continued investment in Killam. I invite you to attend the Company's annual meeting on May 7, 2014 at 2:00 pm Atlantic Time at the Halifax Marriott Harbourfront Hotel, either in person or via webcast.\n\nYours truly,\n\nPhilip Fraser", - "page_start": 10, - "page_end": 10, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "- Gleick, James, \"The Fate of Free Will\" (review of Kevin J. Mitchell, *Free Agents: How Evolution Gave Us Free Will*, Princeton University Press, 2023, 333 pp.), *The New York Review of Books*, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. \"Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that.\" (p. 
30.)\n- Halpern, Sue, \"The Coming Tech Autocracy\" (review of Verity Harding, *AI Needs You: How We Can Change AI's Future and Save Our Own*, Princeton University Press, 274 pp.; Gary Marcus, *Taming Silicon Valley: How We Can Ensure That AI Works for Us*, MIT Press, 235 pp.; Daniela Rus and Gregory Mone, *The Mind's Mirror: Risk and Reward in the Age of AI*, Norton, 280 pp.; Madhumita Murgia, *Code Dependent: Living in the Shadow of AI*, Henry Holt, 311 pp.), *The New York Review of Books*, vol. LXXI, no. 17 (7 November 2024), pp. 44–46. \"'We can't realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,' ... writes [Gary Marcus]. 'We can't count on governments driven by campaign finance contributions [from tech companies] to push back.'... Marcus details the demands that citizens should make of their governments and the tech companies. They include transparency on how AI systems work; compensation for individuals if their data [are] used to train LLMs (large language model)s and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating Section 230, imposing cash penalties, and passing stricter product liability laws... Marcus also suggests... that a new, AI-specific federal agency, akin to the FDA, the FCC, or the FTC, might provide the most robust oversight.... [T]he Fordham law professor Chinmayi Sharma... suggests... establish[ing] a professional licensing regime for engineers that would function in a similar way to medical licenses, malpractice suits, and the Hippocratic oath in medicine. 'What if, like doctors,' she asks..., 'AI engineers also vowed to do no harm?'\" (p. 46.)\n- Henderson, Mark (24 April 2007). \"Human rights for robots? We're getting carried away\" (http:// www.thetimes.co.uk/tto/technology/article1966391.ece). *The Times Online*. London. 
Archived (https://web.archive.org/web/20140531104850/http://www.thetimes.co.uk/tto/techn ology/article1966391.ece) from the original on 31 May 2014. Retrieved 31 May 2014.\n- Hughes-Castleberry, Kenna, \"A Murder Mystery Puzzle: The literary puzzle *Cain's Jawbone*, which has stumped humans for decades, reveals the limitations of natural-languageprocessing algorithms\", *Scientific American*, vol. 329, no. 4 (November 2023), pp. 81–82. \"This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose.\" (p. 82.)\n- Immerwahr, Daniel, \"Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?\", *The New Yorker*, 20 November 2023, pp. 54–59. \"If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones.\" (p. 59.)\n- Johnston, John (2008) *The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI*, MIT Press.", - "page_start": 67, - "page_end": 67, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 282. Arguments that AI is not an imminent risk: Brooks (2014), Geist (2015), Madrigal (2015), Lee (2014)\n- 283. Christian (2020), pp. 67, 73.\n- 284. Yudkowsky (2008).\n- 285. Anderson & Anderson (2011).\n- 286. AAAI (2014).\n- 287. Wallach (2010).\n- 288. Russell (2019), p. 173.\n- 289. 
Stewart, Ashley; Melton, Monica. \"Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup\" (https://www.businessinsider. com/hugging-face-open-source-ai-approach-2023-12). *Business Insider*. Archived (https://w eb.archive.org/web/20240925013220/https://www.businessinsider.com/hugging-face-open-s ource-ai-approach-2023-12) from the original on 25 September 2024. Retrieved 14 April 2024.\n- 290. Wiggers, Kyle (9 April 2024). \"Google open sources tools to support AI model development\" (https://techcrunch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-develop ment). *TechCrunch*. Archived (https://web.archive.org/web/20240910112401/https://techcrun ch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-development/) from the original on 10 September 2024. Retrieved 14 April 2024.\n- 291. Heaven, Will Douglas (12 May 2023). \"The open-source AI boom is built on Big Tech's handouts. How long will it last?\" (https://www.technologyreview.com/2023/05/12/1072950/op en-source-ai-google-openai-eleuther-meta). *MIT Technology Review*. Retrieved 14 April 2024.\n- 292. Brodsky, Sascha (19 December 2023). \"Mistral AI's New Language Model Aims for Open Source Supremacy\" (https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-o pen-source-supremacy). *AI Business*. Archived (https://web.archive.org/web/202409052126 07/https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-open-source-supre macy) from the original on 5 September 2024. Retrieved 5 October 2024.\n- 293. Edwards, Benj (22 February 2024). \"Stability announces Stable Diffusion 3, a next-gen AI image generator\" (https://arstechnica.com/information-technology/2024/02/stability-announc es-stable-diffusion-3-a-next-gen-ai-image-generator). *Ars Technica*. 
Archived (https://web.ar chive.org/web/20241005170201/https://arstechnica.com/information-technology/2024/02/sta bility-announces-stable-diffusion-3-a-next-gen-ai-image-generator/) from the original on 5 October 2024. Retrieved 14 April 2024.\n- 294. Marshall, Matt (29 January 2024). \"How enterprises are using open source LLMs: 16 examples\" (https://venturebeat.com/ai/how-enterprises-are-using-open-source-llms-16-exa mples). *VentureBeat*. Archived (https://web.archive.org/web/20240926171131/https://ventur ebeat.com/ai/how-enterprises-are-using-open-source-llms-16-examples/) from the original on 26 September 2024. Retrieved 5 October 2024.\n- 295. Piper, Kelsey (2 February 2024). \"Should we make our most powerful AI models open source to all?\" (https://www.vox.com/future-perfect/2024/2/2/24058484/open-source-artificial -intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake). *Vox*. Archived (https://web.archi ve.org/web/20241005170204/https://www.vox.com/future-perfect/2024/2/2/24058484/open-s ource-artificial-intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake) from the original on 5 October 2024. Retrieved 14 April 2024.\n- 296. Alan Turing Institute (2019). \"Understanding artificial intelligence ethics and safety\" (https:// www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and _safety.pdf) (PDF). Archived (https://web.archive.org/web/20240911131935/https://www.turi ng.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety. pdf) (PDF) from the original on 11 September 2024. Retrieved 5 October 2024.", - "page_start": 45, - "page_end": 45, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 160. Alex McFarland: *7 Best AI for Math Tools.* (https://www.unite.ai/best-ai-for-math-tools/) Archived (https://web.archive.org/web/20240911125615/https://www.unite.ai/best-ai-for-mat h-tools/) 11 September 2024 at the Wayback Machine unite.ai. Retrieved 2024-08-07\n- 161. 
Matthew Finio & Amanda Downie: IBM Think 2024 Primer, \"What is Artificial Intelligence (AI) in Finance?\" 8 Dec. 2023\n- 162. M. Nicolas, J. Firzli: Pensions Age/European Pensions magazine, \"Artificial Intelligence: Ask the Industry\" May June 2024 https://videovoice.org/ai-in-finance-innovationentrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligence-act-wont-work-asintended/ Archived (https://web.archive.org/web/20240911125502/https://videovoice.org/ai-i n-finance-innovation-entrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligenceact-wont-work-as-intended/) 11 September 2024 at the Wayback Machine.\n- 163. Congressional Research Service (2019). *Artificial Intelligence and National Security* (https://f as.org/sgp/crs/natsec/R45178.pdf) (PDF). Washington, DC: Congressional Research Service.PD-notice\n- 164. Slyusar, Vadym (2019). Artificial intelligence as the basis of future control networks (Preprint). doi:10.13140/RG.2.2.30247.50087 (https://doi.org/10.13140%2FRG.2.2.30247.5 0087).\n- 165. Iraqi, Amjad (3 April 2024). \" 'Lavender': The AI machine directing Israel's bombing spree in Gaza\" (https://www.972mag.com/lavender-ai-israeli-army-gaza/). *+972 Magazine*. Retrieved 6 April 2024.\n- 166. Davies, Harry; McKernan, Bethan; Sabbagh, Dan (1 December 2023). \" 'The Gospel': how Israel uses AI to select bombing targets in Gaza\" (https://www.theguardian.com/world/2023/ dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets). *The Guardian*. Retrieved 4 December 2023.\n- 167. Marti, J Werner (10 August 2024). \"Drohnen haben den Krieg in der Ukraine revolutioniert, doch sie sind empfindlich auf Störsender – deshalb sollen sie jetzt autonom operieren\" (http s://www.nzz.ch/international/die-ukraine-setzt-auf-drohnen-die-autonom-navigieren-und-toet en-koennen-ld.1838731). *Neue Zürcher Zeitung* (in German). Retrieved 10 August 2024.\n- 168. Newsom, Gavin; Weber, Shirley N. (6 September 2023). 
\"Executive Order N-12-23\" (https:// www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pdf) (PDF). Executive Department, State of California. Archived (https://web.archive.org/web/202402212 22035/https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pd f) (PDF) from the original on 21 February 2024. Retrieved 7 September 2023.\n- 169. Pinaya, Walter H. L.; Graham, Mark S.; Kerfoot, Eric; Tudosiu, Petru-Daniel; Dafflon, Jessica; Fernandez, Virginia; Sanchez, Pedro; Wolleb, Julia; da Costa, Pedro F.; Patel, Ashay (2023). \"Generative AI for Medical Imaging: extending the MONAI Framework\". arXiv:2307.15208 (https://arxiv.org/abs/2307.15208) [eess.IV (https://arxiv.org/archive/eess.I V)].\n- 170. Griffith, Erin; Metz, Cade (27 January 2023). \"Anthropic Said to Be Closing In on $300 Million in New A.I. Funding\" (https://www.nytimes.com/2023/01/27/technology/anthropic-ai-fu nding.html). *The New York Times*. Archived (https://web.archive.org/web/20231209074235/h ttps://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html) from the original on 9 December 2023. Retrieved 14 March 2023.\n- 171. Lanxon, Nate; Bass, Dina; Davalos, Jackie (10 March 2023). \"A Cheat Sheet to AI Buzzwords and Their Meanings\" (https://news.bloomberglaw.com/tech-and-telecom-law/a-c heat-sheet-to-ai-buzzwords-and-their-meanings-quicktake). *Bloomberg News*. Archived (http s://web.archive.org/web/20231117140835/https://news.bloomberglaw.com/tech-and-telecom -law/a-cheat-sheet-to-ai-buzzwords-and-their-meanings-quicktake) from the original on 17 November 2023. 
Retrieved 14 March 2023.",
"page_start": 38,
"page_end": 38,
"source_file": "wikipedia3.pdf"
},
{
"text": "A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. [248] Even when used in conventional warfare, it is unlikely that they will be unable to reliably choose targets and could potentially kill an innocent person. [248] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed.[249] By 2015, over fifty countries were reported to be researching battlefield robots.[250]\n\nAI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. [251] All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.[252][253]\n\nThere are many other ways that AI is expected to help bad actors, some of which cannot be foreseen.
For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[254]\n\n#### **Technological unemployment**\n\nEconomists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[255]\n\nIn the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that \"we're in uncharted territory\" with AI.[256] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in longterm unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. [257] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at \"high risk\" of potential automation, while an OECD report classified only 9% of U.S. jobs as \"high risk\".[p][259] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[255] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[260][261]\n\nUnlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; *The Economist* stated in 2015 that \"the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution\" is \"worth taking seriously\".[262] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. 
[263]\n\nFrom the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_CHK_2010.pdf", - "query": "Is there any chance that my cousin has been granted financial aid from Chesapeak Energy? He's studying at a college in Oklahoma.", - "target_page": 26, - "target_passage": "hat’s why we gave $1.0 million to establish the Chesapeake Energy dormitory for students at the Oklahoma School for Science and Mathematics (OSSM", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "to selected students pursuing careers in finance, economics, accounting, marketing, business administration, computer science and information technology. In addition, scholars will take part in a Chesapeake Presidential Leadership Course facilitated by faculty members in coordination with designated Chesapeake leadership coaches, including a Chesapeake senior vice president and OCU alumni.\n\nIn 2007 Chesapeake launched a scholarship program in Texas with an initial $1.25 million contribution, challenging the cities of Fort Worth and Dallas to match its gift within a year. The cities responded and matched the gift, so Chesapeake in 2008 added another $1.25 million to the fund, bringing the total to $3.75 million. The Chesapeake Scholarship Fund currently funds the cost of higher education for 48 minority students. The fund provides each student $20,000 a year for up to four years at the school of their choice. 
To date more than $1.0 million has been distributed to deserving local students.\n\nTo help ensure the training of qualified geologists, engineers, landmen and energy lawyers in the next generation, we award scholarships to students pursuing energy-related degrees. We also help mentor them through Chesapeake's Peak Program. Junior- and senior-level scholarship recipients are paired with Chesapeake employee mentors who help develop students' knowledge and provide career advice. There are currently 25 mentors and 40 scholarship recipients participating in the Peak Program.\n\nOur recruiting team also initiated a strategic military recruitment effort during the past two years to hire former military personnel to work in a variety of leadership and crew positions. This effort earned Chesapeake an honor from G.I. JOBS magazine when we were named a 2011 Top 100 Military-Friendly Employer. Chesapeake currently employs 37 men and women who formerly served as junior military officers and more than 100 former servicemen and servicewomen who joined the company through a program called Troops 2 Roughnecks.\n\nIn addition to our specific scholarship programs, one-time educational donations and recruitment efforts, in 2010 we gave more than $1.8 million to fund higher education for nearly 400 other students in 12 states through our Chesapeake Scholars program. Chesapeake's scholarships help recruit the best and brightest students and provide educational opportunities in communities where we operate. In Oklahoma City, more than 400 employees volunteer for up to an hour a week on company time at four local public schools. Chesapeake's program has grown to become the largest corporate mentoring program in Oklahoma.\n\n# **Community Impact**\n\nChesapeake employees have been enriching their hometowns as volunteers for many years. We formalized those efforts in 2009 by establishing an official employee volunteer program, the H.E.L.P. 
(Helping Energize Local Progress) Initiative, wherein employees are invited to volunteer each month for a variety of organizations from food pantries to animal shelters. Through that program, employees donated more than 26,000 hours to their communities in 2009.\n\nIn the summer of 2010, Chesapeake took the H.E.L.P. Initiative to a higher level through the launch of Operation Blue. From Memorial Day through Labor Day, each employee was given four hours of company time to complete the volunteer project of their choice. Our employees eagerly accepted the challenge, and in three months more than 4,900 employees donated 30,900 hours of service to 519 organizations in more than 96 communities across the country. Operation Blue is now an annual volunteer program in which employees roll up their sleeves in the communities they call home.\n\nChesapeake's contributions take many forms: financial and equipment donations, volunteerism and scholarships. Last year, we made numerous in-kind donations of laptops, reconditioned Chesapeake fleet vehicles and subsidized office space. These contributions provide essential operating tools as nonprofit organizations across the nation attempt to serve more people — often with lower budgets — in tough economic times.\n\nFor example, in Louisiana we donated 12 vehicles in 2010, including one to the Panola College Oil and Natural Gas Technology Program, which teaches students about the natural gas industry and provides them with hands-on technical training. Across many of the company's operating areas, we've donated computers to deserving students, schools and organizations through Chesapeake's Discovering Tomorrow's Leaders program. In 2010 the company equipped 14 students with laptops and donated 70 computers to schools or supporting nonprofit organizations.\n\nChesapeake partners with other companies and organizations to meet basic, practical needs in hundreds of communities. 
An example is our\n\n*Putting food on the table — Employees volunteer at the Regional Food Bank of Oklahoma as part of Operation Blue.*\n\nsponsorship of the annual Day of Caring at the Ganus Center of Harding University in White County, Arkansas. During the event, approximately 1,200 uninsured or underinsured residents received a day of free medical, dental and eye screenings.\n\nTo help cultivate an appreciation for the great outdoors, in 2010 Chesapeake provided $25,000 to REAL School Gardens, a Fort Worthbased organization that establishes gardens at approximately 70 lower income elementary schools in North Texas. At I.M. Terrell Elementary School, students, parents, teachers and volunteers from Chesapeake and other groups worked together to prepare vegetable gardens and flower beds. In addition to teamwork skills and gardening, students learned about nutrition and took home food from the garden's bounty.\n\nWe supported servicemen and servicewomen by partnering with the Shreveport Chapter of Operation Support Our Troops, Inc. Our contribution helped offset the postage to send more than 100 care packages to troops overseas. 
The shipment was the largest in the organization's history and included Christmas cards, games and nonperishable food items.\n\nBy investing in the communities where we operate and the people whose lives we touch, we ensure a stronger today and a more hopeful tomorrow.", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "*Dollar amounts are in thousands of Canadian dollars (except as noted)*\n\n## *Apartment Property Expenses*\n\nSame store apartment property expenses increased 5.5% for the year ended December 31, 2013, due primarily to increased utility and fuel expenses as a result of high natural gas prices in Atlantic Canada, and higher electricity costs.\n\n## **Utility and Fuel Expense ‑ Same Store**\n\nFor the years ended December 31,\n\n| | 2013 | 2012 | % Change |\n| --- | --- | --- | --- |\n| Natural gas | $4,565 | $2,729 | 67.3% |\n| Oil | 1,523 | 2,095 | (27.3)% |\n| Electricity | 5,197 | 4,671 | 11.3% |\n| Water | 3,582 | 3,474 | 3.1% |\n| Other | 30 | 33 | (9.1)% |\n| Total utility and fuel expenses | $14,897 | $13,002 | 14.6% |\n\nKillam's apartment properties are heated with a combination of natural gas (55%), electricity (36%), oil (8%) and other sources (1%).\n\nElectricity costs at the unit level are usually paid directly by tenants, reducing Killam's exposure to the majority of the 4,500 units heated with electricity. Fuel costs associated with natural gas or oil fired heating plants are paid by Killam. As such, the Company is exposed to fluctuations in natural gas and oil costs, which represent 40.9% of total same store utility and fuel costs in 2013. Killam invests in green initiatives at its properties to maximize efficiencies, including converting many of its Halifax properties to natural gas from oil over the last three years as natural gas infrastructure has been expanded in the city. 
The decision to convert was supported by the substantial price difference between the cost of natural gas and oil in recent years.\n\nAs noted in the table above, Killam's utility and fuel expenses increased 14.6% in 2013 compared to 2012. The increase was primarily attributable to higher natural gas, electricity costs and water costs.\n\nKillam's natural gas expenses increased by 67.3% in 2013 due to higher gas prices in Atlantic Canada and an increase in properties burning natural gas following conversions of certain Halifax heating plants from oil to gas in 2012 and 2013. The reduction in oil expense in the quarter and year‑to‑date reflects this reduction in oil exposure.\n\nAs the following chart highlights, the per gigajoule (Gj) commodity cost for natural gas in New Brunswick and Nova Scotia was much higher than NYMEX in 2013 and less correlated to NYMEX than in previous years. (NYMEX is the New York Mercantile Exchange, a commodity futures exchange. Henry Hub, a gas distribution hub in Louisiana is the pricing point for natural gas futures contracts traded on NYMEX). The cost of natural gas in Atlantic Canada and New England experienced a spike from December 2012 until late spring 2013 and a second spike in December 2013, compared to other areas of Canada. Those spikes were both due to increased demand from utilities in Northeast New England and a shortage of gas pipeline capacity in Northeastern New England and Atlantic Canada. A temporary decline in gas supply off the coast of Nova Scotia further contributed to the high pricing in the first part of the year.\n\n## **Historic Natural Gas Pricing ($ per Gj) Henry Hub Vs. 
Heritage Gas**", - "page_start": 37, - "page_end": 37, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "**6100 NORTH WESTERN AVENUE OKLAHOMA CITY, OK 73118 WWW.CHK.COM**", - "page_start": 47, - "page_end": 47, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "# Doing the Right Thing\n\nAt Killam we are investing in our communities, as well as our real estate. We believe that giving back to the community is an important part of being a responsible corporate citizen.\n\n## **Supporting Killam Families with Scholarship Program**\n\nKillam's Scholarship Program awards three $3,000 scholarships to children or grandchildren of Killam employees on an annual basis. After a competitive application process in 2013, Bradley Price, Hayley Gillis and Georgia Telman were selected for demonstrating an outstanding combination of academic excellence and community involvement.\n\n## **Home Away from Home**\n\nOn an annual basis, Killam donates six fully furnished apartments to hospitals in Halifax, Saint John, Moncton, Fredericton and Charlottetown. These units are used by families of patients who need to travel away from home for health care.\n\n## **Red Cross**\n\nKillam has partnered with the Red Cross in many of its core markets. The Red Cross is on hand to help when emergencies and disasters impact communities. Over the last six years, Killam has provided the Red Cross with financial assistance to fund their operations. In return, the Red Cross has provided emergency training to Killam staff, helping us react effectively to emergencies when required.\n\n## **Supporting Higher Education in Atlantic Canada**\n\nOn an annual basis, Killam's board of directors join together to support a common charity or organization. During 2013 the board members together donated $100,000 to establish an endowment at Mount Allison University in Sackville, New Brunswick, providing an annual entrance scholarship to the university. 
Previous $100,000 board donations supported the Boys and Girls Clubs of Prince Edward Island, the YMCA of Greater Halifax/Dartmouth and Saint Mary's University in Halifax.\n\n## **Caring for Kids**\n\nDuring 2013 Killam organized the Caring for Kids Lottery, a fundraiser in support of the IWK Health Centre in Halifax. The IWK Health Centre provides quality medical care to women, children, youth and families in the Maritime provinces. Killam tenants supported the cause through the purchase of lottery tickets for the chance to win free rent for a year. All funds raised went directly to the IWK Foundation.",
"page_start": 19,
"page_end": 19,
"source_file": "TSX_KMP_2013.pdf"
},
{
"text": "## Specific Examples of CSR Activities\n\n# **Together with Our Customers**\n\n**We work as a team to improve customer satisfaction and product quality, and, while supporting the customer, contribute to the sustainable development of society as a whole.**\n\n# **The financial sector's role in improving the nation's diet and in strengthening the agricultural and fisheries sectors**\n\nFor many years, food supply networks in Japan were premised on mass production and mass consumption, enabling the country to meet soaring food demand at a time of rapid growth in the population and economy. But in recent years, consumers have come to place more priority on factors other than volume and price, such as food safety and healthiness, and the cultural aspects of diet. As discussion continues on the need for farmers to increase production scale and move into processing and marketing, major changes are underway in the agriculture and fisheries sector in Japan.\n\nAgainst this backdrop, SMBC has developed a new financial product for this sector. The SMBC Food and Agricultural Assessment Loan comes with conditions, depending on the results of an evaluation of food-producers' progress in areas such as food safety and environment-friendliness, healthiness and nutritional value, and efficiency of distribution. The Japan Research Institute researches measures in the areas of food and farming being taken by the loan applicant, and drafts a simple \"diagnosis\" stating whether there is room for future improvement. Ernst & Young ShinNihon LLC provides expert opinions on ongoing improvement of this system.\n\nBy backing customer companies' own initiatives in the areas of food and agriculture in this way, SMBC will be supporting measures to improve the diet of the Japanese and strengthen the agriculture and fisheries sector.\n\n#### **For further details, please see our website.**\n\nA roundtable session with experts held in August 2011 considered the role of the new SMBC Food and Agricultural Assessment Loan in improving the food supply chain that links food and fishery producers with food processors and consumers. Opinions were also exchanged on what other future role the bank might assume in this regard, given the current situation and issues facing the food industry and agriculture in Japan.\n\n**Roundtable session: SMBC Food and Agricultural Assessment Loan**\n\n#### **Key comments of participants**\n\n\"We want to deliver value by creating demand and quality combined with safety, peace of mind and trust.\" Katsutoshi Konuma, Section Manager, Social & Environmental Management, Asahi Breweries Ltd.\n\nYasuhiro Nakashima Associate Professor Graduate School of Agricultural and Life Sciences, The University of Tokyo\n\n\"Eating should be something that generates emotion. New potential exists in the world of cuisine.\" Daisuke Yamamoto, Vice Senior Consultant, Research Department, The Japan Research Institute, Limited\n\n\"As consumer tastes go through a time of great change, I think it is important to prioritize ingredients and the attitude of customers toward eating.\"\n\n\"An important concept is multilateral dialogue as the number of parties involved in food production increases throughout the supply chain.\" Yoichiro Fukayama, Planning Dept., Deputy Head (with powers of representation) of the Corporate Banking Unit & Middle Market Banking Unit, SMBC\n\nModerated by Kenji Sawami, Partner, Ernst & Young ShinNihon LLC\n\n# **Making banking a more pleasant experience for all customers**\n\nWith the old-age dependency ratio soaring, the SMFG Group aims to provide friendly, easy-to-use banking services for all its customers.\n\nSome Group companies are likewise making their facilities barrier-free at bank branches with large numbers of customers, to tailor services to the needs of all customers.\n\nFor example at the Minato Bank, we have equipped all ATMs at all our branches and cashpoints with voice-guidance handsets for the visually impaired.\n\nIn addition, we have set up priority seating in the lobby of each of our branches for customers who are very old or who have mobility problems. We are also steadily introducing queue-number displays using Color Universal Design (CUD) principles, which are easier to read for customers with eyesight concerns.\n\nHandheld hearing support device (The Minato Bank)\n\nA further measure is installation of handheld hearing support devices at all branches (except housing loan promotion offices), to allay the concerns of hearing-impaired customers who find it difficult to converse and follow spoken instructions. By using the devices as communication tools, bank employees can respect customer privacy and do not have to talk loudly. Further measures include posting of \"green ear\" logos at branches to reassure customers that the bank has facilities for conversing in writing. All branches are being equipped with white boards and special message tablets for dialogue with customers who have concerns about their hearing and who dislike written conversations.\n\n# **Peace of mind at the bank counter**\n\nThe Minato Bank has created a position titled \"Service Care Manager\" at each of its branches, filled by at least one branch managerial staffer, as part of measures to make branch visits more pleasant for customers, following earlier nuts-and-bolts improvements.\n\nService Care Managers are dedicated to improving support and services for the customer at each branch. Their training includes simulations of the problems faced by persons with disabilities, awareness raising and support methods for the elderly and persons with disabilities.\n\n### **New queue-number display system installed at bank counters**\n\nColors and special designs are used to make queue-number displays more visible to all customers (The Minato Bank)\n\nTelephone handset-type ATM (The Minato Bank)\n\n# **Preparing our businesses for a higher old-age dependency ratio**\n\nIn addition to removing mobility barriers at branches, the bank plans to aggressively support installation of facilities needed to cope with the rapidly rising old-age dependency ratio. As a first step, SMBC has established clear guidelines for supporting the construction of rental housing for the elderly, expected to be a future growth area.\n\nWhile continuing to tailor business activities to the needs of the community at large and ensuring a friendly banking environment for our customers, the SMFG Group also plans to support the creation of frameworks that enable the elderly to live active lives with peace of mind.",
"page_start": 7,
"page_end": 7,
"source_file": "NYSE_SMFG_2011.pdf"
},
{
"text": "# **NOTE 20 – OTHER NON-CURRENT ASSETS**\n\n| | 2014 | 2013 |\n| --- | --- | --- |\n| Year ended 31 December | US$'000 | US$'000 |\n| Escrow accounts | 998 | 2,000 |\n| Other | - | 19 |\n| Total other non-current assets | 998 | 2,019 |\n\n### **NOTE 21 – TRADE AND OTHER PAYABLES AND ACCRUED EXPENSES**\n\n| | 2014 | 2013 |\n| --- | --- | --- |\n| Year ended 31 December | US$'000 | US$'000 |\n| Oil and natural gas property and operating related | 117,117 | 123,938 |\n| Administrative expenses, including salaries and wages | 2,077 | 5,146 |\n| Total trade, other payables and accrued expenses | 119,194 | 129,084 |\n\nAt 31 December 2013, the Group had payable balances of $16.7 million which was outside normal payment terms, offset by a receivable balance of $11.7 million to the same creditor company (see Note 12 for additional information).
The Company's remaining Bakken assets were sold to this company in July 2014, for approximately $14.0 million, including the settlement of the net liability.\n\n## **NOTE 22 – CREDIT FACILITIES**\n\n| | 2014 | 2013 |\n| --- | --- | --- |\n| Year ended 31 December | US$000 | US$000 |\n| Senior Credit Facility | 95,000 | 15,000 |\n| Junior Credit Facility | 35,000 | 15,000 |\n| Total credit facilities | 130,000 | 30,000 |\n| Deferred financing fees | (1,195) | (859) |\n| Total credit facilities, net of deferred financing fees | 128,805 | 29,141 |\n\n### **Junior Credit Facility**\n\nIn August 2013, Sundance Energy, Inc. (\"Sundance Energy\"), a wholly owned subsidiary of the Company, entered into a second lien credit agreement with Wells Fargo Energy Capital, Inc., as the administrative agent (the \"Junior Credit Facility\"), which provides for term loans to be made in a series of draws up to $100 million. The Junior Credit Facility matures in June 2018 and is secured by a second priority lien on substantially all of the Company's assets. Upon entering into the Junior Credit Facility, the Company immediately borrowed $15 million pursuant to the terms of the Junior Credit Facility and paid down the outstanding principal of the Senior Credit Facility. In May 2014, the Company's borrowing capacity increased to $35 million. As at 31 December 2014, the borrowing capacity under the Junior Credit Facility remains at $35 million.", - "page_start": 87, - "page_end": 87, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "Home / Money / 3 Great Resources to Kick-Start Your Financial Planning Career\n\n#### MONEY\n\n### 3 Great Resources to Kick-Start Your Financial Planning Career\n\n11/23/2022\n\n(NewsUSA) - Finding a rewarding career that offers growth potential, work-life balance and the satisfaction of helping others is a key priority for many job seekers. 
With those goals in mind, a career in financial planning should be a top contender, whether you are just starting out or looking to make a career change. But once you have decided that financial planning is the field for you, how do you get started? Here are three resources that can help you launch a successful financial planning career.\n\n1. Guide to Careers in Financial Planning. Based on interviews with leading financial services firms, this guide introduces you to the wide range of career opportunities in the financial planning profession. It identifies typical entry points and career tracks, explores the types of companies that hire financial planners and provides information on how to find financial planning career opportunities. It also includes resources such as a list of recommended questions to ask in a job interview.\n\n2. Scholarship Programs. Dozens of scholarship programs are available to support you on your professional journey. Some are offered directly through colleges and universities that have financial planning degree and certificate programs. Others are available through nonprofits and organizations like the CFP Board Center for Financial Planning, which administers 16 scholarship programs that help pay for the education and exam requirements to become a CERTIFIED FINANCIAL PLANNERTM professional. Financial services firms may offer scholarships or tuition reimbursements to employees to cover the costs of obtaining professional designations and credentials such as CFP® certification -- some of which may be required to advance within the company.\n\n3. Career Fairs. In-person and virtual career fairs provide valuable opportunities to connect with prospective employers. CFP Board's spring and fall career fairs are some of the most popular hiring events in the profession, with dozens of firms participating in these online exhibitions. 
Job seekers can visit employers' virtual exhibit booths and view open jobs and internships, apply for open positions and interact with employers through one-on-one video meetings and messaging. You can also visit the CFP Board Career Center to browse current job and internship opportunities in financial planning, as well as a collection of articles providing career guidance.\n\nOther top resources include career offices at your college or university, financial services companies' career websites and professional organizations that may have a local chapter near you.\n\nMaking the most of these resources will not only help you find a financial planning job, but also support your growth and development as a future financial planning professional. To learn more about CFP® certification, visit the CFP Board website.\n\nArticle Link\n\nhttps://about.newsusa.com/3-great-resources-to-kick-start-your-financial-planni…\n\n### RELATED ARTICLES", - "page_start": 0, - "page_end": 0, - "source_file": "news3.pdf" - }, - { - "text": "### *Financial Position*\n\nIn May 2014, the borrowing capacity under our credit facilities increased from an aggregate of $63 million to $135 million. The increase in the borrowing capacity was driven by the significant uplift of the Company's proved oil and gas reserves as at 31 December 2013. In conjunction with the increase in the Company's borrowing capacity, the Company expanded the syndicate of banks under the Senior Credit Facility. 
Bank of America Merrill Lynch and the Bank of Nova Scotia have now joined the bank group which is led by Wells Fargo.\n\nIn July 2014, the borrowing capacity increased an additional net $10 million, to $145 million, after taking into consideration the removal of proved oil and gas reserves associated with the DJ and Williston Basin dispositions and the development of proved oil and gas reserves in the Eagle Ford Formation.\n\nAt 31 December 2014, the Company had $130 million outstanding under our credit facilities and $15 million available under our borrowing capacity. Ending cash at 31 December 2014 was $69.2 million.\n\n### *Cashflow*\n\nCash provided by operating activities for the year ended 31 December 2014 increased 104.5% to $128.1 million compared to the prior year. This increase was primarily due to receipts from sales increasing $85.7 million, or 101.2%, to $170.4 million, while keeping payments to suppliers and employees relatively stable with an increase of $8.2 million, or 37.7%, to $30.0 million. See Review of Operations for more information.\n\nCash used in investing activities for the year ended 31 December 2014 increased $158.9 million, or 96.7%, to $323.2 million. This increase is due to successful implementation of the Company's strategy to develop and grow the reserves from our high working interest, repeatable resource plays, primarily in the Eagle Ford. Due to funding available to the Company through asset sales, capital raises and credit facilities, the Company was able to accelerate its 2015 drilling program into 2014. However, due to the reduction in crude oil prices in the fourth quarter of 2014 and continuing into early 2015, the Company will scale back its drilling program to concentrate on limited drilling obligations to hold Eagle Ford acreage during the 2015 year.\n\nCash provided by financing activities for the year ended 31 December 2014 increased $123.1 million, or 277.0%, to $167.6 million. 
This increase is a result of the increased availability and draws under the Company's credit facilities and proceeds received in a private placement of shares. In February 2014, the Company completed a private placement in which we sold 84.2 million ordinary shares at A$0.95 per share, resulting in net proceeds of approximately $68.4 million. The first tranche of 63.7 million shares was issued in March 2014 and the second tranche of 20.5 million shares was issued in April 2014.\n\n#### **Matters Subsequent to the End of the Financial Year**\n\nSubsequent to 31 December 2014, an additional $13.9 million was drawn-down the credit facilities, bringing total outstanding debt to $143.9 million, with undrawn funds of $1.1 million.\n\nIn January 2015, the company acquired three leases totalling approximately 14,180 net acres in the Eagle Ford for approximately $13.4 million.\n\n### **Future Developments, Prospects and Business Strategies**\n\nThe Group's business strategies and prospects for growth in future financial years are presently concentrated on growing the value of the Group's current resource plays through direct leasing from mineral owners, small acquisitions of producing properties, drilling inventory within the Group's current balance sheet capabilities, and development of the Group's current acreage. Further information on likely development in the operations of the Group and expected results of operations has not been included because the Directors believe it would result in unreasonable prejudice to the Group.", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "### **First Financial Bankshares customers and shareholders also know a thing or two about Value and Values – and we learn from them every day. We're proud to share in their success. Here are just a few of their stories.**\n\n**George Marti believes in doing things. 
Good things.** \n\nBorn to humble roots on his parents' farm in 1920, Marti has accomplished much, including founding three radio stations (and investing in 10 more) and developing a remote pickup device that became standard equipment in 80 percent of all radio stations worldwide. He still has part ownership of KCLE in Cleburne, Texas (the town where he was once mayor for 12 years).\n\nMarti's dedication to his hometown is part of the reason why he bought Cleburne State Bank in 1992. His business skills (and success in the broadcasting industry) gave him the resources to turn the bank into yet another winning venture. Five years later, he sold it to First Financial, which merged it with their existing First Financial Bank, Cleburne.\n\nThe proceeds from the sale helped Marti complete the funding for his proudest achievement: the Marti Foundation, which he created in the 1970s to help send students from Johnson County to college. \"We help over 100 students a year … most are the first from their family ever to attend college,\" says Marti. \"I know what education did for me, so it's a great thing to help these young people.\" Marti says that when he dies, the Foundation will live on, $20 million strong.\n\nMarti still serves on the board of First Financial Bank, Cleburne. \"First Financial's merger of the banks was positive for the community. They have a good customer base. They are friendly, helpful and creative. They are growing, and the branches in Alvarado and Burleson are both doing well. Those are all good things.\"\n\n\"They are friendly, helpful and creative. Those are all good things.\"\n\nGeorge Marti Founder Marti Enterprises Cleburne, Texas 6", - "page_start": 7, - "page_end": 7, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "# CHAPTER 7:\n\n## HOW TO ASK FOR HELP FROM YOUR TUTOR\n\nAs a student, you are going to experience times when you need help with your studies. 
You might be unsure about an assignment question, you might be confused by a particular concept, or you might be stressed about the upcoming exams.\n\nAnd if you are studying via distance learning (www.oxbridgeacademy.co. za/distance-learning/), where you don't have any face-to-face interaction with lecturers, you will need to rely on your tutors for the necessary academic support.", - "page_start": 32, - "page_end": 32, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "NYSE_SMFG_2011.pdf", - "query": "Has the Sumitomo Mitsui Financial Group offered help to the elderly?", - "target_page": 6, - "target_passage": "Currently, the proportion of people aged 65 or over in Japan has reached 23.4%*. SMFG will help create frameworks enabling the elderly to enjoy a vibrant lifestyle with peace of mind, through support for life-cycleframeworks enabling the elderly to enjoy a vibrant lifestyle with peace of mind, through support for life-cycle planning and other measures. The SMFG Group aims to create systems and a corporate culture that foster a soundplanning and other measures. 
The SMFG Group aims to create systems and a corporate culture that foster a sound balance between work and care needs, given that many group employees will later need to nurse ailing relatives.balance between work and care needs, given that many group employees will later need to nurse ailing relatives", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "Sumitomo Mitsui Financial Group CSR Report **Digest version**", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "# Commitment from the Top\n\n**A Conversation with Tadao Ando, Takeshi Kunibe and Koichi Miyata** \n\n# **What can we do now to spur the reconstruction and revitalization of Japan, and help resolve global issues?**\n\n#### *Uplifting the nation's spirits Uplifting the nation's spirits*\n\nJapan is now facing a wide variety of problems, ranging from the reconstruction of the Tohoku region (the northeastern region o Japan is now facing a wide variety of problems, ranging from the reconstruction of the Tohoku region (the northeastern region of Japan) Japan) after the March 11 earthquake and tsunami (\"the Great East Japan Earthquake\") to a shrinking and aging population, with falling after the March 11 earthquake and tsunami (\"the Great East Japan Earthquake\") to a shrinking and aging population, with falling birth rates birth rates and increasing numbers of the aged. and increasing numbers of the aged.\n\nWe must now find ways for people to coexist in harmony with nature, based on a global perspective. 
We must now find ways for people to coexist in harmony with nature, based on a global perspective.\n\nSumitomo Mitsui Financial Group (SMFG) invited the world-famous architect Tadao Ando to join in a conversation on the issues fa Sumitomo Mitsui Financial Group (SMFG) invited the world-famous architect Tadao Ando to join in a conversation on the issues facing society ing society and the ways in which SMFG and its Group companies can bring their expertise to bear as a financial services group. and the ways in which SMFG and its Group companies can bring their expertise to bear as a financial services group.\n\n# Tadao Ando\n\nArchitect. Professor Emeritus at the University of Tokyo, Representative and Vice-chairman of the Great East Japan Earthquake Reconstruction Design Council. Awarded the Order of Cultural Merit in 2010.\n\n**Our measures to support reconstruction after the disastrous earthquake and tsunami Uplifting the nation's spirits**\n\n̶ SMFG has the following priorities in its SMFG has the following priorities in its corporate social responsibility program: corporate social responsibility program: Reconstruction after the earthquake Reconstruction after the earthquake and tsunami, environmental measures, and tsunami, environmental measures, addressing the shrinking and aging addressing the shrink ing a nd aging population, and global challenges. — population, and global challenges. —\n\n**Kunibe**: Japan is facing a difficult period Japan is facing a difficult period with limited prospects for economic growth with limited prospects for economic growth due to a shrinking, aging population and due to a shrinking, aging population and a mature economy. Against this backdrop, a mature economy. Against this backdrop, the country was hit by the unprecedented the country was hit by the unprecedented catastrophe of the Great East Japan catastrophe of the Great East Japa n Earthquake. We must face up to the new Earthquake. 
We must face up to the new challenges arising from this disaster. challenges arising from this disaster.\n\nI believe the time has come for us to I believe the time has come for us to reconsider what we can do in our capacity reconsider what we can do in our capacity as a financial institution to address a variety as a financial institution to address a variety of issues, including the four priorities. of issues, including the four priorities. Today I hope we can discuss not only the road Today I hope we can discuss not only the road to reconstruction after the disaster, but also to reconstruction after the disaster, but also ways to uplift the nation's spirits. ways to uplift the nation's spirits.\n\n**Ando**: Japan has achieved two miracles - the : Japan has achieved two miracles - the Meiji Restoration of 1868, and the economic Meiji Restoration of 1868, and the economic recovery following the end of World War II in recovery following the end of World War II in 1945. Both events are also regarded globally 1945. Both events are also regarded globally as being miraculous. as being miraculous.\n\nIn 1945, foreign diplomats and businessmen In 1945, foreign diplomats and businessmen visiting Japan were fully confident that the visiting Japan were fully confident that the country would recover as they surveyed the country would recover as they surveyed the ruins and the scorched earth around them, ruins and the scorched earth around them, because, in the words of one of them, \"People because, in the words of one of them, \"People really work hard and help each other, and really work hard and help each other, and children take heed of what their parents say children take heed of what their parents say and study hard. And because there is a and study hard. 
And because there is a sparkle in their eyes.\" sparkle in their eyes.\"\n\nThereafter, the Japanese worked furiously Thereafter, the Japanese worked furiously\n\nuntil the country became an economic until the country became an economic juggernaut. However, in the early 1970s, juggernaut. However, in the early 1970s, people became complacent about their people became complacent about their affluence, and stopped working hard and affluence, and stopped working hard and making efforts. Children assumed that if they making efforts. Children assumed that if they went to a top-class university they would walk went to a top-class university they would walk into a top-class company and have nothing to into a top-class company and have nothing to worry about thereafter. So they started going worry about thereafter. So they started going to cram schools even before kindergarten. to cram schools even before kindergarten. I give lectures on the theme \"students born in I give lectures on the theme \"students born in and after 1980 are hopeless cases\" (laughs). and after 1980 are hopeless cases\" (laughs). That was because of the prevailing attitude at That was because of the prevailing attitude at the time that Japan the time that Japan's national development s national development would go on for ever and the economy would would go on for ever and the economy would remain stable. As a result, parents spoilt their remain stable. As a result, parents spoilt their children, and we saw more children who children, and we saw more children who could not do anything. Many such children could not do anything. Many such children are in their 30s now. 
are in their 30s now.\n\nAnd in this situation, the asset bubble burst And in this situation, the asset bubble burst [in the early 1990s], and the collapse of [in the early 1990s], and the collapse of Lehman [hit world markets] in 2008, and Lehman [hit world markets] in 2008, and now we have the earthquake and tsunami now we have the earthquake and tsunami disaster. It seems that everything that disaster. It seems that everything that happens these days merely makes us more happens these days merely makes us more anxious. I think everyone needs to hit the anxious. I think everyone needs to hit the 'reset' button in some sense. If we don 'reset' button in some sense. If we don't, more difficulties lie ahead. more difficulties lie ahead.\n\n**Miyata**: Indeed, prior to 1970, living : Indeed, prior to 1970, living standards or wage levels were very low, standards or wage levels were very low, but I think it was a very happy time. People but I think it was a very happy time. People believed that if they really worked hard, believed that if they really worked hard, their daily lives would improve and their their daily lives would improve and their\n\n# Takeshi Kunibe\n\nPresident and CEO Sumitomo Mitsui Banking Corporation\n\ncompanies would do better and companies would do better and the whole country would benefit. the whole country would benefit. Returning to Mr. Ando Returning to Mr. Ando's words, s words, and his comments about a nd h is c omme n ts a b ou t clinging to the status quo, more clinging to the status quo, more people now think, \"Oh, well, my people now think, \"Oh, well, my life is fairly comfortable and life is fairly comfortable and that's enough for me.\" This sense that's enough for me.\" This sense of stagnation, or resignation, of stagnation, or resignation,\n\nthat people feel in their lives has spread that people feel in their lives has spread throughout Japan. But when the disaster throughout Japan. 
But when the disaster struck, people again came together and struck, people again came together and worked together in the recovery effort. I worked together in the recovery effort. I thought, \"Not everything that happened has thought, \"Not everything that happened has been bad.\" But I fear the consequences if we been bad.\" But I fear the consequences if we don't galvanize, coordinate and maximize t galvanize, coordinate and maximize efforts more effectively. efforts more effectively.\n\n**Kunibe**: As for SMBC, I wondered if : As for SMBC, I wondered if employees at all the branches and other employees at all the branches and other offices in the affected areas would be able to offices in the affected areas would be able to get to work and carry out their duties at such get to work and carry out their duties at such a difficult time for their own families; or if a difficult time for their own families; or if they would be able to open their offices for they would be able to open their offices for business on weekends and other holidays. business on weekends and other holidays. Despite the lack of water and gas, they really Despite the lack of water and gas, they really gave their all to provide banking services. gave their all to provide banking services. It was really uplifting to see such dedication It was really uplifting to see such dedication and sense of responsibility as an employee of and sense of responsibility as an employee of a financial institution entrusted with essential a financial institution entrusted with essential social infrastructure. I talk about \"the strength social infrastructure. I talk about \"the strength of our front-line staff,\" but I was able to fully of our front-line staff,\" but I was able to fully appreciate just how extraordinarily strong appreciate just how extraordinarily strong SMFG and SMBC are thanks to SMFG and SMBC are thanks to this display display of front-line commitment. 
of front-line commitment.\n\nMoving forward on the reconstruction of Moving forward on the reconstruction of the Tohoku region, I believe we can also the Tohoku region, I believe we can also contribute to the rebuilding of infrastructure contribute to the rebuilding of infrastructure through project finance and other t h roug h project f i n a nce a nd ot her fundamental businesses of financial f undamental businesses of financial institutions in which we excel. institutions in which we excel. We are now actively engaged in promoting We are now actively engaged in promoting business in the Tohoku region, including business in the Tohoku region, including business matching with parties outside business matching with parties outside the region. In addition, we have a range of the region. In addition, we have a range of support activities in partnership with the Miyagi support activities in partnership with the Miyagi prefectural government and The 77 Bank, prefectural government and The 77 Bank, Ltd., which is based in Miyagi. Ltd., which is based in Miyagi.\n\n**Miyata**: In the same way, other SMFG In the same way, other SMFG Group companies have been sending out Group companies have been sending out volunteers, and providing donations not only volunteers, and providing donations not only as a company, but also through individual as a company, but also through individual employees. SMBC was at the heart of all these employees. SMBC was at the heart of all these activities, and this was a good opportunity activities, and this was a good opportunity for us to appreciate anew how our business for us to appreciate anew how our business contributes to the public good. 
contributes to the public good.\n\n# Koichi Miyata\n\nPresident Sumitomo Mitsui Financial Group, Inc.\n\nThe SMFG Group has 62,000 employees, The SMFG Group has 62,000 employees, \"stepping up to the plate and working hard \"stepping up to the plate and working hard to give something back to society.\" I think it to give something back to society.\" I think it is important to develop ways of making this is important to develop ways of making this a shared aspiration of all the employees of a shared aspiration of all the employees of the Group. the Group.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "# **Social Contribution Activities**\n\n**SMFG as a corporate citizen: Working to create a prosperous society for all**\n\nGarbage was analyzed in the Kugenuma Beach cleanup event, in which SMFG and its Group companies participated\n\n# **SMFG and its Group companies participate in neighborhood cleanup programs**\n\nIn fiscal 2010, 150 volunteers from the In fiscal 2010, 150 volunteers from the SMFG Group participated in beach cleanup SMFG Group participated in beach cleanup activities in Kanagawa and Hyogo prefectures activities in Kanagawa and Hyogo prefectures on \"SMFG Clean-up Day.\" This initiative is on \"SMFG Clean-up Day.\" This initiative is not simply a matter of picking up garbage. It not simply a matter of picking up garbage. It also involves inspections and analysis of also involves inspections and analysis of garbage to identify pointers for providing garbage to identify pointers for providing solutions for environmental issues in the solutions for environmental issues in the future. 
future.\n\nIn addition to beach cleanup activities in In addition to beach cleanup activities in Chiba and Hyogo prefectures by SMBC Chiba and Hyogo prefectures by SMBC Friend Securities, Group companies of Friend Securities, Group companies of Cedyna, Sumitomo Mitsui Finance & Leasing, Cedyna, Sumitomo Mitsui Finance & Leasing, the Japan Research Institute and SMBC the Japan Research Institute and SMBC Nikko Securities carry out ongoing cleanup Nikko Securities carry out ongoing cleanup and other activities in the areas around their and other activities in the areas around their offices and branches. offices and branches.\n\nThe Minato Bank and Kansai Urban Banking The Minato Bank and Kansai Urban Banking Corporation also engage in cleanup activities Corporation also engage in cleanup activities around Suma Beach and Lake Biwa, to around Suma Beach and Lake Biwa, to protect the regional environment. protect the regional environment.\n\n# **Supporting education in developing countries, together with our customers and employees**\n\nCardholders and employees of Sumitomo Cardholders and employees of Sumitomo Mitsui Card joined a literary social contribution Mitsui Card joined a literary social contribution initiative by participating in the Books To initiative by participating in the Books To The People 2010 project operated by BOOKOFF The People 2010 project operated by BOOKOFF CORP. This project aims to provide CORP. This project aims to provide environ environments in which children can read books in ments in which children can read books in purpose-built facilities, through donations to purpose-built facilities, through donations to Room to Read, a non-governmental organi Room to Read, a non-governmental organization that supports education in developing zation that supports education in developing countries. These NGO donations are pegged countries. 
These NGO donations are pegged to total numbers of used books and other to total numbers of used books and other items purchased by cardholders. Through items purchased by cardholders. Through the Sumitomo Mitsui Card-operated online the Sumitomo Mitsui Card-operated online shopping mall POINT UP Mall, cardholders shopping mall POINT UP Mall, cardholders are encouraged to buy used books through are encouraged to buy used books through BOOKOFF, and employees collect and donate BOOKOFF, and employees collect and donate used books from their homes and companies. used books from their homes and companies.\n\nCollection box for used books and other items\n\nBuilding libraries in developing countries through the NGO Room to Read\n\ninstalled in an employee canteen Supporting education in developing countries\n\n# **Donations through \"The World Bank Green Fund\"**\n\nSMBC and SMBC Nikko Securities donate a SMBC and SMBC Nikko Securities donate a portion of the profits from marketing of the portion of the profits from marketing of the SMBC Nikko World Bank Bond Fund SMBC Nikko World Bank Bond Fund ( \"The World Bank Green Fund World Bank Green Fund\" ) to the Japanese ) to the Japanese Red Cross Society and the Japan Committee Red Cross Society and the Japan Committee for UNICEF. for UNICEF.\n\nThis investment trust is the world This investment trust is the world's first s first fund developed in cooperation with the fund developed in cooperation with the World Bank that invests in World Bank green World Bank that invests in World Bank green bonds, according to research by Nikko bonds, according to research by Nikko Asset Management Co., Ltd. Funds from Asset Management Co., Ltd. Funds from the World Bank green bonds support only the World Bank green bonds support only World Bank-funded projects in developing World Bank-funded projects in developing countries to mitigate global warming. 
countries to mitigate global warming.\n\n*Research by Nikko Asset Management Co., Ltd.\n\nDonating to the Japanese Red Cross\n\n# **SMBC Nikko Securities' \"Green Week\"**\n\nIn the fall of 2010, SMBC Nikko Securities In the fall of 2010, SMBC Nikko Securities established its \"Green Week\" for strength established its \"Green Week\" for strengthening environmental protection and social ening environmental protection and social contribution activities, with the aim of contribution activities, with the aim of promoting communication within regional promoting communication within regional society and among participating employees society and among participating employees and their families, while deepening under and their families, while deepening understanding of environmental protection through standing of environmental protection through participation in social contribution activities. participation in social contribution activities. Between November 13 and December 5, Between November 13 and December 5, 2010, environmental protection programs 2010, environmental protection programs were rolled out by cross-organizational were rolled out by cross-organizational \"Green Committees\" in four locations in \"Green Committees\" in four locations in Japan, with the participation of 280 employ Japan, with the participation of 280 employees and their families. In addition, regional ees and their families. In addition, regional contribution activities were carried out by contribution activities were carried out by\n\nRegional contribution activities at the branch level\n\nCollection of PET bottle caps Donating to Japan Committee for UNICEF for international contribution purposes\n\nbranches at their own initiative. A wide variety branches at their own initiative. 
A wide variety of social contribution activities, such as the of social contribution activities, such as the collection of used stamps and PET bottle collection of used stamps and PET bottle caps, were carried out for global causes. caps, were carried out for global causes. SMBC Nikko Securities will continue activi SMBC Nikko Securities will continue activities that contribute to society and prioritize ties that contribute to society and prioritize communication between employees. communication between employees.\n\nEmployees and their families pitch in to clean up the bed of the Ara River in Tokyo\n\n| Environmental protection activities |\n| --- |\n| Forestry management volunteering experience in Osaka |\n| (Izumi no Mori) |\n| 117 participants |\n| Volunteers at the Shonan Erosion Control Forest project |\n| 62 participants |\n| Helping clean up Senju Shinbashi bridge that spans Ara River |\n| 64 participants |\n| Helping clean up Nishi Araibashi bridge that spans Ara River |\n| 37 participants |\n| Social contribution collection activities |\n| Support for overseas causes through used-stamp collection |\n| 11.4 kg of stamps were collected |\n| Presentation of stationery to children in developing countries |\n| 788 ballpoint pens and pencils |\n| Vaccine donation from the collection of PET bottle caps |\n| 168.9 kg (enough to vaccinate 84.45 people against polio) |\n| Activities organized by branches |\n| Sendai Branch |\n| Accepting middle school students |\n| for workplace experience programs |\n| Matsudo Branch |\n| Accepting middle school students |\n| for workplace experience programs |\n| Shizuoka Branch |\n\nAbekawa River driftwood-clearing festival", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "### EXECUTIVES\n\nFrom left: Mitsuhiko Yamashita, Tadao Takahashi, Toshiyuki Shiga, Carlos Ghosn, Itaru Koeda, Hiroto Saikawa, Carlos Tavares\n\n#### **BOARD OF DIRECTORS AND AUDITORS**\n\n#### **Representative 
Board Members**\n\nCarlos Ghosn President and Co-Chairman\n\nItaru Koeda Co-Chairman\n\nToshiyuki Shiga Co-Chairman\n\n#### **Board Members**\n\n- Tadao Takahashi Hiroto Saikawa Mitsuhiko Yamashita Carlos Tavares Shemaya Lévy Patrick Pélata\n- **Auditors** Hisayoshi Kojima Shinji Ichishima Keishi Imamura Haruo Murakami\n\n#### **EXECUTIVE COMMITTEE MEMBERS**\n\n- Carlos Ghosn Toshiyuki Shiga Itaru Koeda Tadao Takahashi Hiroto Saikawa Mitsuhiko Yamashita Carlos Tavares Alain-Pierre Raynaud\n(As of June 21, 2005)", - "page_start": 6, - "page_end": 6, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "# **Priority Issues for Us** As one of Japa As one of Japan's leading financial services groups, s leading financial services groups,\n\nthe SMFG Group is taking the lead in aggressively addressing the four priority issues the SMFG Group is taking the lead in aggressively addressing the four priority issues we have identified as significantly impacting the nation. we have identified as significantly impacting the nation.\n\n**Measures for Japan's regeneration**\n\n# **Reconstruction after the earthquake and tsunami**\n\nMitsui Charity Hospital at its establishment Mitsui Charity Hospital at its establishment\n\nBesshi copper mine in the Meiji era Besshi copper mine in the Meiji era And today And today\n\nThe March 11 earthquake and tsunami (The Gr The March 11 earthquake and tsunami (The Great East Japan Earthquake) undermined power eat East Japan Earthquake) undermined power generation capacity and severed manufacturing supply chains across the nation. This was in addition generation capacity and severed manufacturing supply chains across the nation. This was in addition to the severe damage sustained by agriculture and fisheries in the Northeast. to the severe damage sustained by agriculture and fisheries in the Northeast.\n\nThe disaster also threw into relief many social issues facing the nation. 
By leveraging our role as a leading financial services group, we are committing our full range of resources to dealing with the enormous task of regional reconstruction after the earthquake, in partnership with stakeholders including enterprises, local governments and non-profit organizations.\n\n#### **Further measures needed**\n\n- Wide-ranging financial support for the reconstruction of infrastructure\n- Ongoing disaster recovery activities by employee volunteers\n- Comprehensive support for industrial recovery in partnership with local governments and financial institutions in the disaster-affected areas\n\n**Environmental measures Creating systems for sustainability Global challenges**\n\nThe SMFG Group has positioned environmental businesses as an area where it can most effectively leverage its role as a leading financial services group. This is a priority field for the future. 
Measures are being stepped up on a range of fronts — not only involving a low-carbon society, but also dealing with issues such as water supply, soil contamination, energy and biodiversity. We aim to contribute to sustainable development by supporting the worldwide adoption of Japan's much-admired technological breakthroughs, with a particular focus on the Asian region.\n\n#### **Further measures needed**\n\n- Give further support for businesses involved in greenhouse gas reduction, water supply, new energy and resource initiatives\n- Do more to safeguard biodiversity, in our capacity as a financial institution\n- Share our information assets and know-how globally in the environmental business\n\nprograms to solve the problem of pollution around the Besshi copper mine, while the Mitsui Group set up the Mitsui Memorial Hospital to give the poorest in society access to basic medical care. Based on this 
corporate social responsibility DNA embedded in the business philosophies of both the Sumitomo and Mitsui groups over the 400 years of their existence, we will continue to play our part in solving problems facing the international community through our financial service operations.\n\nIn the past, the Sumitomo Group undertook large-scale afforestation\n\n# **Shrinking and aging population Ensuring peace of mind for the future**\n\nCurrently, the proportion of people aged 65 or over in Japan has reached 23.4%*. SMFG will help create frameworks enabling the elderly to enjoy a vibrant lifestyle with peace of mind, through support for life-cycle planning and other measures. The SMFG Group aims to create systems and a corporate culture that foster a sound balance between work and care needs, given that many group employees will later need to nurse ailing relatives. 
*Estimates by the Statistics Bureau, Ministry of Internal Affairs and Communications (October 1, 2011)\n\n#### **Further measures needed**\n\n- Support businesses involved in health, medical and nursing care\n- Expand range of financial products and services for the elderly (planning for asset management for old age)\n- Foster a better work-life balance\n\n# **Symbiosis and diversity**\n\nIn anticipation of further global expansion, the SMFG Group is aggressively internationalizing its operations both in Japan and overseas. Initiatives include aggressive development of advisory services for infrastructure upgrades in emerging economies, a cross-departmental endeavor, as well as contributions to the international community and the environmental business, chiefly through branches and representative offices overseas.\n\nWe will continue to discuss and review various approaches to issues facing the international 
community so as to build up trust internationally as a global player.\n\n#### **Further measures needed**\n\n- Share expertise in corporate social responsibility with the international community\n- Improve financial services in preparation for the globalization of operations in Japan (multilingual support)\n- Promote diversity", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## **Corporate Outline (as of September 30, 2011)**\n\n| Company Name | : | Sumitomo Mitsui Financial Group, Inc. |\n| --- | --- | --- |\n| Business Description | : | Management of banking subsidiaries (under the stipulations of Japan's Banking Act) and of non-bank subsidiaries, as well as the performance of ancillary functions |\n| Established | : | December 2, 2002 |\n| Head Office | : | 1-2, Marunouchi 1-chome, Chiyoda-ku, Tokyo, Japan |\n| Chairman of the Board | : | Masayuki Oku |\n| President | : | Koichi Miyata (Concurrent Director at Sumitomo Mitsui Banking Corporation) |\n| Capital | : | ¥2,337.8 billion |\n| Stock Exchange Listings | : | Tokyo Stock Exchange (First Section), Osaka Securities Exchange (First Section), Nagoya Stock Exchange (First Section) |\n| | | Note: American Depositary Receipts (ADRs) are listed on the New York Stock Exchange. 
|\n\n## **Structure of Sumitomo Mitsui Financial Group (as of September 30, 2011)**\n\n# **Our CSR reporting**\n\nAt Sumitomo Mitsui Financial Group, three kinds of CSR reports are compiled.\n\n| CSR report 2011 (digest version) | CSR disclosure through specific examples |\n| --- | --- |\n| Covers CSR baselines and CSR activities at SMFG and its Group companies, centered on specific examples | |\n| CSR report 2011 (digest version with examples of activities and statistical performance, online PDF file) | Comprehensive disclosure of CSR activities |\n| Covers environment-related statistical data and gives more detailed information on CSR activities | |\n| CSR report (online version, Japanese only) www.smfg.co.jp/responsibility | Enriched CSR disclosure |\n| This is the official version of our CSR report. Covers the full spectrum of CSR activities at SMFG | |\n\n# **Editorial Policy**\n\nThis report has been created in an effort to convey to our stakeholders the variety of our initiatives and the roles the SMFG Group is fulfilling as we work to create a sustainable society. We have aimed to present the information clearly, so that readers may understand our attitude that the fulfillment of CSR is the essence of business itself, and our initiatives act upon this. Our CSR Report 2011 (digest version), launched last fiscal year, is intended to present more concise reports of the Group's CSR activities, with a focus on specific activities of interest. 
To complement this, we have also posted online our CSR Report 2011 (digest version, with examples of activities and statistical performance), with more detailed information on CSR activities and statistical data omitted in the CSR Report 2011 (digest version). We disclose the full range of our CSR activities as a Group on our website in the official-use version of our CSR Report (in Japanese only). It is recommended that you read it in combination with the above two digest versions in order to understand our CSR and other activities in greater detail.\n\nFrom the current fiscal year, we are including third-party opinions in the website version.\n\n# **Scope of this Report**\n\n- Sumitomo Mitsui Financial Group, Inc.\n- Sumitomo Mitsui Banking Corporation\n- SMFG Card & Credit, Inc.\n- Sumitomo Mitsui Card Company, Limited\n- Cedyna Financial Corporation\n- Sumitomo Mitsui Finance and Leasing Co., Ltd.\n- The Japan Research Institute, Limited\n- SMBC Friend Securities Co., Ltd.\n- SMBC Nikko Securities Inc.\n- THE MINATO BANK, LTD.\n- Kansai Urban Banking Corporation\n- Other Group companies\n\nThroughout this report, **\"Sumitomo Mitsui Financial Group\"** or **\"SMFG\"** refers to the holding company alone. **\"The SMFG Group\"** refers to the holding company and its primary domestic and international subsidiaries and affiliates. 
Company name abbreviations and other special terminology\n\n## **Reference guidelines**\n\nGlobal Reporting Initiative (GRI) Sustainability Reporting Guidelines 2006 (G3) * Global Reporting Initiative (GRI): Established as an international standard for sustainability reporting, compilers set up an international organization (GRI) in 1997 to encourage its adoption worldwide.\n\n# **About this Report**\n\n- Period Covered : April 1, 2010 to March 31, 2011 ( \"Fiscal 2010\" ) Note: Certain items in this report refer to activities taking place after April 2011.\n- Publication Date of Japanese Document : December 2011\n\n- Contact :\n\t- 1-2 Marunouchi 1-chome, Chiyoda-ku, Tokyo 100-0005 TEL: +81-3-3282-8111\n\nGroup CSR Department, Sumitomo Mitsui Financial Group, Inc.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "# **Environmental Activities**\n\n**International initiatives in Asian countries and others**\n\n# **Taking a leading role in environmental businesses in Asia**\n\nThe SMFG Group supports environmental businesses in the rapidly growing markets of Southeast Asia from various perspectives. For example in Malaysia, SMBC signed an operational alliance on environmental businesses with the Federation of Malaysian Manufacturers in April 2010, and in October that year acted as main sponsor for Malaysia's first large-scale international environmental exhibition, International Greentech & Eco products Exhibition & Conference Malaysia 2010 (IGEM). 
At this event, a keynote speech was given by Chairman Teisuke Kitayama, and SMBC and Sumitomo Mitsui Finance & Leasing opened booths. The exhibition, visited on successive days by Malaysia's King, prime minister, some of the regional Kings of Malaysia, and cabinet ministers, raised awareness of environmental businesses in the nation. At the same time, in April 2011, the bank's Malaysia unit Sumitomo Mitsui Banking Corporation Malaysia Berhad began operations. This unit is broadening support measures to contribute to the development of environmental businesses in Malaysia. Meanwhile, in August 2010, the Japan Research Institute, SMBC and a number of other companies publicly recruited by Japan's New Energy and Industrial Technology Development Organization (NEDO) were jointly commissioned to carry out basic research into Malaysia's Green Township concept, a national town-planning project backed by NEDO. 
\n\nLooking ahead, SMBC plans to jointly compile an action plan with the Malaysian government and related enterprises for establishment of \"green townships\" based on the cities Putrajaya and Cyberjaya Prime Minister Najib Razak is promoting. It also plans to propose specific projects in the concept.\n\n# **Promoting energy-saving and low-emission industries in China**\n\nIn China, which emits more carbon dioxide than any other country, finding ways of promoting new energy-saving measures and restructuring industry have become pressing issues.\n\nThe Japan Research Institute has built up a successful track record in the course of its advisory activities in China, in joint research into local-level microgrid construction at the Tianjin Eco-City, and in policy-making relating to renewable energy management systems and other areas. 
In partnership with the Guangdong Provincial Department of Science and Technology, the Japan Research Institute also advises government departments on system establishment for new energy-saving businesses. Guangdong is China's richest province by gross provincial product, and here both needs and potential in the field of energy-saving are very great. The Japan Research Institute also supports industrial restructuring and low-carbon projects in the province through model projects.\n\n**Support for adoption of electric vehicles and car-sharing**\n\nIn the battle against global warming, both public and private sectors are facing mounting pressure to curb carbon dioxide pollution from transportation, one of the major sources of emissions. Against this backdrop, the Japan Research Institute is supporting environmental businesses that map out pathways and develop projects, tailored to the needs of particular localities, to bring about a low-carbon society. 
Experimental projects are currently underway in Kanagawa Prefecture, Saitama Prefecture, Kyoto and Sapporo. These initiatives are aimed at hastening the adoption of electric vehicles and car-sharing to cut carbon dioxide emissions. The Institute is working in cooperation with government bodies, car-rental, commercial vehicle-leasing and parking-facility management companies, railways, communications providers and other entities.\n\nElectric vehicles not only emit no carbon dioxide, but offer a comfortable drive as well\n\nIGEM2010 greeted many visitors", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "# Today, Tomorrow and Beyond\n\n**President Sumitomo Mitsui Financial Group, Inc.**\n\n**Koichi Miyata**\n\nFirst, I would like to extend our deepest sympathies and heartfelt condolences to all those who have suffered and to the families and friends of those who tragically lost their lives in the devastating earthquake and tsunami that struck northeastern Japan on March 11, 2011. We pray for the early recovery of the affected people and areas. 
SMFG is dedicated to seamlessly responding to clients' needs by leveraging our group-wide capabilities, offering optimal products and services, and ensuring that every employee and the overall group are capable of responding to the challenges of globalization. I believe that through these measures, we will contribute to the growth and development of our clients and society, and ourselves grow in partnership with them. Through our basic policy of becoming \"a globally competitive financial services group with the highest trust of our clients, society and other stakeholders\" by maximizing our core strengths of \"Spirit of Innovation,\" \"Speed\" and \"Solution & Execution,\" we will continue to stay ahead of the times, no matter how challenging, and actively adapt to changes in our business environment.\n\n## **INDEX**\n\n| Foreword | 1 |\n| --- | --- |\n| Commitment from the Top: A Conversation with Tadao Ando, Takeshi Kunibe and Koichi Miyata | 3 |\n| What can we do now to spur the reconstruction and revitalization of Japan, and help resolve global issues? 
| |\n| Measures to Support Reconstruction after the March 11 Earthquake and Tsunami | 8 |\n| Priority Issues for Us | 9 |\n| Our Mission and CSR at SMFG | 11 |\n| 〈Specific Examples of CSR Activities〉 | |\n| Together with Our Customers | 13 |\n| Together with Our Shareholders and Markets | 17 |\n| Together with Our Employees | 19 |\n| Environmental Activities | 21 |\n| Social Contribution Activities | 25 |\n| Corporate Outline/Editorial Policy | 29 |", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "# **Social Contribution Activities**\n\n# **Helping build prosperity in Asia and the world**\n\nThe SMFG Group is engaged in a range of activities that contribute to development at both the regional and international level. In addition to overseas units' independent initiatives, which are geared to host country issues and characteristics, the Group supports projects that have contributed to achievement of the United Nations' global Millennium Development Goals, such as poverty eradication, health improvement and status improvement for education and women in developing countries. Our support takes the form of donations to non-profit and non-governmental organizations, through the employee volunteer fund. 
(The map shows areas where fund money is used, marked with a ★ symbol). Please see our website for more details.\n\n### **International cooperation begins at home**\n\n#### **Employees put school meals on the table through their purchases in staff canteens**\n\nSMBC and Sumitomo Mitsui Finance and Leasing have a program that provides donations to the non-profit organization TABLE FOR TWO International to fund school meals in developing countries, for every low-calorie meal ordered for lunch. SMBC Friend Securities has also installed vending machines selling healthy drinks, donating part of their sales to TABLE FOR TWO International.\n\n#### **Donation boxes for foreign currency coins**\n\nSMBC places donation boxes for foreign currency coins at the entrances of all manned branches and offices in Japan, and sorts such collected coins by currency for delivery to UNICEF. 
\n\n#### **The SMBC Foundation for International Cooperation**\n\nThe SMBC Foundation for International Cooperation strives to assist in developing the human resources necessary to achieve sustainable growth in developing economies as well as to promote international exchange activities. The foundation has provided financial support for students from Asian countries each year, enabling them to attend universities in Japan. The foundation also offers subsidies to research institutes and researchers undertaking projects related to developing countries.\n\n#### **1 South Korea**\n\n**Support for a South Korean students' Japanese-language theater competition**\n\nAs a way of increasing understanding of Japanese culture, SMBC's Seoul Branch donates funds to make possible the holding of a competition involving theatrical performances in the Japanese language by South Korean students of Japanese. 
\n\nPerforming a Japanese-language drama\n\n# **2 China**\n\n### **Scholarships at major universities**\n\nSumitomo Mitsui Banking Corporation (China) Limited established a scholarship program for students of Zhejiang University, Shanghai International Studies University, Sun Yat-sen University, and other universities.\n\nScholarship students at Sun Yat-sen University\n\n# **3 Hong Kong**\n\n#### **Supporting performances by young Asian musicians**\n\nSMBC Hong Kong Branch makes donations to the Asian Youth Orchestra (AYO), comprising young Asian musicians selected through auditioning who perform all over Asia.\n\nPhotographs supplied by AYO\n\n# **4 Vietnam**\n\n**Providing work experience to students**\n\nSMBC's Hanoi Branch provided international school students with vocational experiences.\n\n#### **5 Thailand**\n\n#### **Supporting farming villages in the northeast**\n\nSMBC's Bangkok Branch assisted farmers by donating underground water storage tanks and assisting with vegetable planting and harvesting. 
\n\nBank employees helped plant vegetables as volunteers\n\n# **6 Malaysia**\n\n### **Donating furniture to welfare facilities**\n\nSMBC's Labuan Branch in Malaysia, following its relocation, donated desks, chairs and cabinets to occupational training centers for the disabled.\n\n# **7 Europe**\n\n#### **Donations to charity groups**\n\nEmployees of Sumitomo Mitsui Banking Corporation Europe (SMBCE) conducted volunteer activities in their time off. SMBCE contributes to charitable organizations through an in-house fund and also uses a matching gifts program under which it donates a certain amount for every donation made by its employees.\n\nEmployee volunteers who participated in landscape improvement projects\n\n## **8 Europe**\n\n### **Donation for a Japanese-language speech contest**\n\nThe European office of the Japan Research Institute (JRI) made a donation in support of a Japanese-language speech contest. 
\n\n## **UNICEF support initiatives**\n\nThrough the Climate & Children Supporters project, the bank has supported UNICEF projects in Mozambique benefitting children and improving\n\nfor further details (in Japanese): www.smbc.co.jp/ccs/\n\n#### **10 The United States**\n\n**SMBC GLOBAL FOUNDATION**\n\nBased in the United States, SMBC Global Foundation has provided scholarships to more than 5,000 university students in Asian countries since its establishment in 1994. In the United States, it supports educational trips to Japan organized by a high school located in Harlem, New York City, and volunteer employees of SMBC and JRI participate in school beautification programs. The foundation also provides matching gifts for SMBC employees.\n\nHigh school students from New York who visited Japan on a study trip\n\nScholarship award ceremony for university students in Vietnam", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "### INFORMATION ON SUBSIDIARIES AND AFFILIATES\n\n| Consolidated subsidiaries | | | | As of Mar. 31, 2005 |\n| --- | --- | --- | --- | --- |\n| Company | Location | Principal business | Capital (millions) | Nissan share*(%) |\n| Japan | | | | |\n| Nissan Shatai Co., Ltd. | Hiratsuka-shi, Kanagawa | Manufacture and sales of automobiles and parts | ¥7,904 | 43.80 |\n| Aichi Machine Industry Co., Ltd. 
| Nagoya, Aichi | Manufacture and sales of automotive parts | ¥8,518 | 41.70 |\n| JATCO Ltd. | Fuji, Shizuoka | Manufacture and sales of automotive parts | ¥29,935 | 81.76 |\n| Nissan Kohki Co., Ltd. | Samukawa, Kanagawa | Manufacture and sales of automotive parts | ¥2,020 | 97.73 |\n| Calsonic Kansei Corporation | Tokyo | Manufacture and sales of automotive parts | ¥40,606 | 41.87 |\n| Nissan Motor Car Carrier Co., Ltd. | Tokyo | International automobile transport | ¥640 | 60.00 |\n| Nissan Trading Co., Ltd. | Yokohama, Kanagawa | Import and export of automobiles, parts, etc. | ¥320 | 100.00 |\n| Nissan Financial Services Co., Ltd. | Chiba, Chiba | Automobile financing and leasing | ¥16,387 | 100.00 |\n| Autech Japan, Inc. | Chigasaki, Kanagawa | Development, manufacture and sales of limited-edition automobiles | ¥480 | 100.00 |\n| Nissan Real Estate Development Corporation | Tokyo | Real estate sales, purchase and leasing | ¥1,000 | 70.50 |\n| Nissan Finance Co., Ltd. | Tokyo | Finance and accounting support | ¥2,491 | 100.00 |\n| Aichi Nissan Motor Co., Ltd. | Nagoya, Aichi | Sales of automobiles and parts | ¥100 | 100.00 |\n| Tokyo Nissan Motor Sales Co., Ltd. | Tokyo | Sales of automobiles and parts | ¥100 | 100.00 |\n| Nissan Prince Tokyo Motor Sales Co., Ltd. | Tokyo | Sales of automobiles and parts | ¥100 | 100.00 |\n| Nissan Chuo Parts Sales Co., Ltd. | Yokohama, Kanagawa | Sales of automobile repair parts | ¥545 | 80.61 |\n| US | | | | |\n| Nissan North America, Inc. | Gardena, California | Management of North American subsidiaries, manufacture and sales of automobiles and parts | $1,791 | 100.00 |\n| Nissan Motor Acceptance Corporation | Torrance, California | Finance of wholesale and retail automobile sales in US | $499 | 100.00 |\n| Nissan Motor Corporation in Hawaii, Ltd. | Honolulu, Hawaii | Sales of automobiles and parts | $6 | 100.00 |\n| Nissan Capital of America, Inc. 
| Torrance, California | Financing for group companies | $1 | 100.00 |\n| Nissan Technical Center North America, Inc. | Farmington Hills, Michigan | Research and development, testing | $16 | 100.00 |\n| Nissan Motor Insurance Corporation | Honolulu, Hawaii | Casualty insurance | $10 | 100.00 |\n| Nissan Forklift Co., North America | Marengo, Illinois | Manufacture and sales of forklifts and parts | $34 | 100.00 |\n| Canada | | | | |\n| Nissan Canada, Inc. | Mississauga, Ontario | Sales of automobiles and parts | CAN$68 | 100.00 |\n| Mexico | | | | |\n| Nissan Mexicana, S.A. de C.V. | Mexico D.F. | Manufacture and sales of automobiles and parts | P17,056 | 100.00 |", "page_start": 107, "page_end": 107, "source_file": "OTC_NSANY_2004.pdf" } ] }, { "references": { "source_file": "NYSE_CHK_2010.pdf", "query": "Does Chesapeake Energy have a project to reduce excessive water use?", "target_page": 28, "target_passage": "Created to meet the challenge of reducing our water usage, Chesapeake’s Aqua Renew® program uses state-of-the-art technology to recycle pro- duced water.", "chunk_present": { "presence": true, "index": 0 } }, "top_chunk": [ { "text": "# INVESTING IN OUR WORLD AND OUR PEOPLE »\n\nAs we explore for and produce clean, affordable, abundant, American natural gas, we provide an important solution to our nation's energy challenges and its quest for energy independence. With at least a 200 year supply of natural gas located right here in the U.S., this versatile fuel can be used to not only heat homes, create electricity and meet America's transportation needs, but also to fuel the country's future by creating jobs and stimulating local and national economies through investment and taxes.\n\n# **Environmentally Friendly Operations**\n\nAt Chesapeake, we realize that the way a great product is produced is as important as the product itself. 
For example, we have helped pioneer the use of multiwell padsites to drill up to 16 wells from a single location, greatly reducing our land and road use and overall environmental footprint. We use the latest horizontal and directional drilling technology to place wells at a safe distance from homes, schools and businesses. In addition, we build and maintain access roads and work to eliminate soil erosion near our sites, as well as restore local vegetation.\n\nWe implement advanced, modern protective measures known as Best Management Practices (BMPs) to help ensure energy development is conducted in an environmentally responsible manner. Procedures are implemented throughout our operations to protect freshwater aquifers and reduce environmental impacts. BMPs protect wildlife, air quality, water and landscapes as we work to develop vitally needed domestic energy sources.\n\nImplemented throughout the entire life cycle of a well, BMPs can be as simple as strategically placing a berm, or land barrier, on locations to control surface water runoff. Others involve cutting-edge operational technologies such as utilizing the most advanced techniques offered in drilling fluids, well casing and cement design. Regardless of complexity, all BMPs are based on the idea that the environmental footprint of energy development should be as small and temporary as possible. These practices are continually evolving and further improving as Chesapeake and the industry develop new innovative techniques and approaches to business.\n\nIn addition to our BMPs, Chesapeake has also initiated several innovative internal programs focused on water recycling and greener hydraulic fracturing processes.\n\n# *Aqua Renew*®\n\nCreated to meet the challenge of reducing our water usage, Chesapeake's *Aqua Renew*® program uses state-of-the-art technology to recycle produced water. 
Since the company's preliminary reclamation project in 2006, our focus on water reuse and conservation has become a companywide endeavor, stretching from the Barnett Shale of North Texas to the Marcellus Shale of northern Pennsylvania.\n\nThe *Aqua Renew* program has yet to find a limit to how much recycled water could be used without compromising well production. In fact, our Marcellus Shale operations are treating and recycling virtually 100% of produced water (more than 10 million gallons per month) for reuse in our hydraulic fracturing operations. Properly conducted modern fracking is a highly engineered, controlled, sophisticated and safe procedure.\n\nWith such large volumes of recycled water, the company is seeing more than just environmental advantages. We estimate that this\n\n*Green operations — Chesapeake's Best Management Practices ensure our operations are as environmentally friendly as possible, while protecting our employees, neighbors and the areas where we operate.*", "page_start": 27, "page_end": 27, "source_file": "NYSE_CHK_2010.pdf" }, { "text": "wet natural gas and dry natural gas), similar to the components of the Eagle Ford Shale. We have made a large commitment to this play and have acquired approximately 1.2 million net leasehold acres and expect to increase this total to as much as 1.5 million net leasehold acres in the coming months. We are currently using three rigs to evaluate the play and believe our leasehold could support the drilling of up to 12,000 net wells. This is an area where we anticipate bringing in a joint venture partner late in 2011 or early in 2012.\n\n# **Our People**\n\nGreat assets cannot exist without great people, so we take great pride in hiring, training, motivating, rewarding and retaining what we regard as the best employees in the industry. 
From our beginning 22 years ago with 10 employees in Oklahoma City to employing more than 10,000 people across 15 states today, Chesapeake has always focused on building first-class human resources within a distinctive corporate culture. Talk to Chesapeake employees and you will note genuine pride and great enthusiasm about the company and the critical role that we play in delivering increasing quantities of clean and affordable American natural gas and valuable and reliable liquids to energy consumers across the country.\n\nChesapeake employees are distinctive in other ways as well. They are much younger than the industry average, with half of our almost 4,000 Oklahoma City-based headquarters employees 33 years old or younger. Their enthusiasm and willingness to learn create an atmosphere of vitality and energy at Chesapeake, important ingredients of our distinctive culture. These attributes, along with a vibrant and attractive corporate headquarters campus, low levels of bureaucracy, great assets and a well-executed corporate strategy combine to create our culture of success and innovation.\n\nThis has generated extremely positive external feedback as Chesapeake was recently recognized for the fourth consecutive year as one of the FORTUNE 100 Best Companies to Work For®(3) in the U.S. In fact, we moved up to #32 overall and #1 in our industry — we are very proud of having created and sustained what is now considered the best place to work in all of the U.S. 
energy production industry.\n\nIn addition, we were honored in December 2010 at the 12th Annual Platts Global Energy Awards as finalists for CEO of the Year, Community Development Program of the Year, Deal of the Year, Energy Producer of the Year and the Industry Leadership Award. Chesapeake was one of only two companies selected as a finalist in five or more categories. The company was also honored in 2010 with a Certificate of Recognition for our military reserve recruiting efforts, named a 2010 Best Diversity Company by Engineering & Information Technology Magazine and recognized for Best Investor Relations in Energy Sector and Best Investor Relations Website at the 2010 IR Magazine U.S. Awards.\n\n*<< A Chesapeake rig drills in the Marcellus Shale, where the company is the leading leasehold owner, largest producer and most active driller.*\n\n# **Recent Events and a Better Way Forward**\n\nYou may be aware that I have been outspoken in attempting to persuade our country's political leadership to recognize that the discovery of vast resources of unconventional natural gas and oil in the U.S. is a complete game changer for our country from an economic, national security and environmental perspective. After two years of my best efforts and the efforts of many others in the industry, most notably T. Boone Pickens,
In addition, scholars will take part in a Chesapeake Presidential Leadership Course facilitated by faculty members in coordination with designated Chesapeake leadership coaches, including a Chesapeake senior vice president and OCU alumni.\n\nIn 2007 Chesapeake launched a scholarship program in Texas with an initial $1.25 million contribution, challenging the cities of Fort Worth and Dallas to match its gift within a year. The cities responded and matched the gift, so Chesapeake in 2008 added another $1.25 million to the fund, bringing the total to $3.75 million. The Chesapeake Scholarship Fund currently funds the cost of higher education for 48 minority students. The fund provides each student $20,000 a year for up to four years at the school of their choice. To date more than $1.0 million has been distributed to deserving local students.\n\nTo help ensure the training of qualified geologists, engineers, landmen and energy lawyers in the next generation, we award scholarships to students pursuing energy-related degrees. We also help mentor them through Chesapeake's Peak Program. Junior- and senior-level scholarship recipients are paired with Chesapeake employee mentors who help develop students' knowledge and provide career advice. There are currently 25 mentors and 40 scholarship recipients participating in the Peak Program.\n\nOur recruiting team also initiated a strategic military recruitment effort during the past two years to hire former military personnel to work in a variety of leadership and crew positions. This effort earned Chesapeake an honor from G.I. JOBS magazine when we were named a 2011 Top 100 Military-Friendly Employer. 
Chesapeake currently employs 37 men and women who formerly served as junior military officers and more than 100 former servicemen and servicewomen who joined the company through a program called Troops 2 Roughnecks.\n\nIn addition to our specific scholarship programs, one-time educational donations and recruitment efforts, in 2010 we gave more than $1.8 million to fund higher education for nearly 400 other students in 12 states through our Chesapeake Scholars program. Chesapeake's scholarships help recruit the best and brightest students and provide educational opportunities in communities where we operate. In Oklahoma City, more than 400 employees volunteer for up to an hour a week on company time at four local public schools. Chesapeake's program has grown to become the largest corporate mentoring program in Oklahoma.\n\n# **Community Impact**\n\nChesapeake employees have been enriching their hometowns as volunteers for many years. We formalized those efforts in 2009 by establishing an official employee volunteer program, the H.E.L.P. (Helping Energize Local Progress) Initiative, wherein employees are invited to volunteer each month for a variety of organizations from food pantries to animal shelters. Through that program, employees donated more than 26,000 hours to their communities in 2009.\n\nIn the summer of 2010, Chesapeake took the H.E.L.P. Initiative to a higher level through the launch of Operation Blue. From Memorial Day through Labor Day, each employee was given four hours of company time to complete the volunteer project of their choice. Our employees eagerly accepted the challenge, and in three months more than 4,900 employees donated 30,900 hours of service to 519 organizations in more than 96 communities across the country. 
Operation Blue is now an annual volunteer program in which employees roll up their sleeves in the communities they call home.\n\nChesapeake's contributions take many forms: financial and equipment donations, volunteerism and scholarships. Last year, we made numerous in-kind donations of laptops, reconditioned Chesapeake fleet vehicles and subsidized office space. These contributions provide essential operating tools as nonprofit organizations across the nation attempt to serve more people — often with lower budgets — in tough economic times.\n\nFor example, in Louisiana we donated 12 vehicles in 2010, including one to the Panola College Oil and Natural Gas Technology Program, which teaches students about the natural gas industry and provides them with hands-on technical training. Across many of the company's operating areas, we've donated computers to deserving students, schools and organizations through Chesapeake's Discovering Tomorrow's Leaders program. In 2010 the company equipped 14 students with laptops and donated 70 computers to schools or supporting nonprofit organizations.\n\nChesapeake partners with other companies and organizations to meet basic, practical needs in hundreds of communities. An example is our\n\n*Putting food on the table — Employees volunteer at the Regional Food Bank of Oklahoma as part of Operation Blue.*\n\nsponsorship of the annual Day of Caring at the Ganus Center of Harding University in White County, Arkansas. During the event, approximately 1,200 uninsured or underinsured residents received a day of free medical, dental and eye screenings.\n\nTo help cultivate an appreciation for the great outdoors, in 2010 Chesapeake provided $25,000 to REAL School Gardens, a Fort Worthbased organization that establishes gardens at approximately 70 lower income elementary schools in North Texas. At I.M. 
Terrell Elementary School, students, parents, teachers and volunteers from Chesapeake and other groups worked together to prepare vegetable gardens and flower beds. In addition to teamwork skills and gardening, students learned about nutrition and took home food from the garden's bounty.\n\nWe supported servicemen and servicewomen by partnering with the Shreveport Chapter of Operation Support Our Troops, Inc. Our contribution helped offset the postage to send more than 100 care packages to troops overseas. The shipment was the largest in the organization's history and included Christmas cards, games and nonperishable food items.\n\nBy investing in the communities where we operate and the people whose lives we touch, we ensure a stronger today and a more hopeful tomorrow.", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "Jeff Fisher Senior Vice President – Production\n\n# **What advantages does CHK's unique vertical integration strategy provide?**\n\nChesapeake has built a large inventory of low-risk natural gas and liquids-rich plays that we plan to develop aggressively over the next two decades. As a result, we know that our company will consistently utilize a tremendous (and growing) amount of oilfield services for this resource development. This high level of planned drilling activity will create value for the provider of oilfield services, and Chesapeake's strategy is to capture a portion of this value for our shareholders rather than transfer it to third-party vendors whose interests and investments are not always aligned with ours. To date, Chesapeake has invested in drilling rigs, rental tools, water management equipment, trucking, compression equipment, midstream services, and most recently pressure pumping and fracture stimulation equipment. 
Chesapeake's activities require a high level of planning and project coordination that is best accomplished through vertical integration and ownership of the oilfield services we utilize. This approach creates a multitude of cost savings, an alignment of interests, operational synergies, greater capacity of equipment, increased safety and better coordinated logistics. In addition, Chesapeake's control of a large portion of the oilfield service equipment it utilizes provides a unique advantage to control the timing of leasehold development. Simply put, faster development of resources maximizes the present value of leasehold. This has been a key advantage for\n\nChesapeake over the past three years as the company has monetized leasehold investments at premium values through our joint ventures.\n\n# **Will U.S. natural gas prices reconnect with world natural gas prices?**\n\nNatural gas is a premium product and a cleaner-burning fuel than coal or oil-related products, including gasoline, diesel and heating oil. Despite this fact, over the past two years natural gas has received a low price in the U.S. market relative to coal and oil-related products, primarily as a result of a temporary surplus of production. This surplus has been principally caused by high levels of drilling activity as producers focused on holding by production (HBP) leasehold in new highly productive, low cost natural gas shale plays. In essence, producers reinvented U.S. supply ahead of reinventing of U.S. demand. We believe HBP-incentivized drilling on natural gas plays will largely come to an end in 2012, and U.S. demand will soon also be reinvented to allow U.S. natural gas prices to reconnect to price parity with world natural gas prices that have risen to more than double U.S. natural gas prices.\n\nThis surge in world natural gas prices has been in response to $100+ oil prices and surging global liquefied natural gas (LNG) demand. 
In our view, the arbitrage in value between competing fuels is simply too wide. Capital and ideas will flow toward projects that make the most of this price disparity. Chesapeake and other companies are working to create the ability to export natural gas from the U.S. Gulf Coast and other regions in the form of LNG to premium Pacific Rim, European and South American markets, perhaps as soon as 2015. This initiative will also be aided by the widening of the Panama Canal to accommodate large LNG vessels. Furthermore, we believe that the\n\nJeff Mobley Senior Vice President – Investor Relations and Research\n\ncurrent price disparity between natural gas and oil will increasingly lead to greater use of natural gas in the U.S. transportation system. Whether it be compressed natural gas (CNG) for medium and light-duty vehicles, LNG for heavy-duty vehicles or the commercialization of gas-to-liquids (GTL) natural gas refineries that supplement the U.S. liquid fuel supply stream, we believe that the marketplace will increasingly utilize and embrace natural gas. Chesapeake is working with industry, public policymakers and potential partners on each of these demand reinvention opportunities. Natural gas is clean, affordable, abundant and American. Why *shouldn't* it trade at a BTU premium in the years ahead?\n\nNick Dell'Osso Executive Vice President and Chief Financial Officer\n\n# **Why is an investment grade rating on its debt securities important to CHK?**\n\nWe believe that Chesapeake will benefit in multiple ways from an investment grade rating on our debt securities, which we hope to achieve in 2012 or 2013. First, a higher rating would obviously lower the company's borrowing costs over time. In addition, other less easily quantifiable benefits will also accrue to Chesapeake. 
Higher debt ratings would result in lower costs on long-term firm transportation contracts that we enter into in order to market our natural gas and oil production as well as facilitate our ability to enter into long-term contracts to sell our natural gas production to international buyers in the form of LNG. An improved rating will also enhance Chesapeake's ability to further attract world-class energy companies to participate in our joint venture projects, which profitably monetize a portion of our leasehold investments and also accelerate the development of our resource base. Finally, and perhaps most importantly, we believe that reduced financial leverage and an investment grade rating will lead to a higher stock price and provide further interest from worldwide equity investors.", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "# INVESTING IN OUR COMMUNITIES »\n\nChesapeake's sense of civic commitment provides a bountiful harvest of benefits to cities large and small. We partner with groups and organizations across all of our operating areas to improve the communities our employees, contractors, vendors, land and mineral owners call home. We believe the success of our business depends on the strength, goodwill and vitality of those communities. Most importantly, we believe it is the responsibility of every successful business to share success with its neighbors.\n\nIn 2010 we gave more than $25 million to charitable organizations and projects across our operating areas, primarily focusing on community development, education, health and medical and social services.\n\n# **Economic Impact**\n\nWhile much of the U.S. is still struggling to recover from the economic recession, the positive impact of natural gas and oil operations has provided a valuable economic recovery stimulus for states that are home to exploration and development activities. 
As the nation's second-largest producer of natural gas, a Top 15 producer of liquids and most active driller of new wells, Chesapeake's arrival in a new play stimulates economic activity, augments personal income through jobs and royalty payments, generates substantial tax revenue and sustains communities throughout its operating areas.\n\nIn addition to the general economic impact of our activities on local economies, the company's tax contributions are substantial. In 2010 Chesapeake paid approximately $675 million in taxes, including ad valorem, severance, sales, employer, and corporate income and franchise taxes. These taxes pay for ongoing government services and also build and maintain schools, recreational facilities, and parks and roads — at a time when state and local governments are still feeling the pinch of recession. We are proud to support America's economy with our growth while also helping to protect the environment through the greater use of clean-burning natural gas and reducing the country's dependence on expensive foreign oil.\n\nChesapeake also makes contributions that help improve lives and economies in cities where we operate: $25 million in 2010 alone. For example, this past year we donated $200,000 to establish the Chesapeake Environmental and Recycling Center at Goodwill Industries of Central Oklahoma. The center will provide an additional 80 jobs to disabled Oklahomans, as well as help Goodwill recycle 10 million pounds a year, which\n\n### **Chesapeake's $25 million of charitable giving in 2010**\n\n- Community Development\n- Education\n- Health and Medical\n- Social Services\n\nequates to one-third of the goods that otherwise would have been destined for Oklahoma City-area landfills. 
In West Virginia, we helped fund construction of the Morgantown Market Place, a permanent site for the city's farmers' market, creating more business opportunities for local farmers.\n\n*Equipping the next generation — West Virginia students hold their new laptops from Chesapeake as part of the company's Discovering Tomorrow's Leaders program.*\n\nChesapeake also supports local chambers of commerce and city councils in all of its operating areas. In the Haynesville Shale last year, we awarded grants to the Shelby County, Sabine Parish and Coushatta-Red River chambers of commerce to help fund tourism, business communications and chamber events. In Texas, we assisted more than 250 civic, professional and community service organizations throughout Johnson, Tarrant and western Dallas counties, and sponsored memberships in 35 local Texas chambers of commerce. By helping local chambers and businesses grow and thrive, we are creating stronger economies.\n\nWe also hire locally whenever possible to help stimulate the local economy, and we provide training when the local work force isn't yet qualified for the jobs we have open. For example, when Chesapeake began operating in the Marcellus Shale of West Virginia and Pennsylvania, finding experienced rig workers was a challenge. To meet that need, Chesapeake's wholly owned subsidiary, Nomac Drilling, built the 40,000-square-foot Eastern Training Center and Housing Facility in Bradford County, near Sayre, Pennsylvania. The campus opened in 2010 and serves as a housing facility and training ground for 266 workers at a time. Nomac and Chesapeake host regular job fairs in the region and the lines of interested candidates often extend out the door.\n\n# **Educational Impact**\n\nWe are also proud to help prepare tomorrow's leaders today. 
In 2010 Chesapeake supported universities, schools, academic chairs, scholarships and other educational programs with contributions totaling $5.4 million.\n\nInvesting in programs that promote technology and innovation is a key to our country's success. That's why we gave $1.0 million to establish the Chesapeake Energy dormitory for students at the Oklahoma School for Science and Mathematics (OSSM), a public, tuition-free, residential high school located in Oklahoma City for juniors and seniors with exceptional abilities. The extremely competitive school is helping train the next generation of scientists and mathematicians.\n\nWe also established the Chesapeake Energy Presidential Scholars Program at the Oklahoma City University Meinders School of Business, making a $5.0 million commitment to be distributed over the next five years. The Chesapeake Scholars Program will provide up to $25,000 per year in tuition", - "page_start": 25, - "page_end": 25, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "# DEAR FELLOW SHAREHOLDERS »\n\n2010 was a very important year of transition and achievement for Chesapeake, a year in which we initiated three very important strategic shifts: from asset gathering to asset harvesting, from focusing exclusively on natural gas to a balanced focus on natural gas and liquids and from having a leveraged balance sheet to one worthy of an investment grade rating.\n\n*Home to three distinct forms of hydrocarbons: dry natural gas, natural gas liquids and oil, the Eagle Ford Shale in South Texas epitomizes Chesapeake's shift to a balanced focus on natural gas and liquids.*\n\n2010 also marked a truly transformative year for our industry. We and a handful of our peers enhanced our capabilities to find and produce significant new resources of oil and natural gas liquids (collectively, \"liquids\") in unconventional formations. 
Chesapeake and these other companies combined creativity, innovation and technology to reinvent the way that our industry explores for and produces natural gas and liquids.\n\nFurthermore, 2010 was the year when global energy companies more fully recognized the importance of these developments and the tremendous opportunities that have emerged in the U.S. Through a wide variety of transactions, including several led by Chesapeake, the global energy industry made it clear that the assets owned by Chesapeake and some of its peers are the most attractive in the world. This realization has already increased the value of highquality unconventional assets in the U.S. and, in time, should lead to higher\n\nstock prices for the leading U.S. onshore E&P companies, especially Chesapeake. Simply put, the global energy industry is beating a path to our door, and we are welcoming it with open arms.\n\nBefore we move ahead, I want to emphasize that even though 2010 was a year of transition and achievement, our stock price was essentially unchanged. Nevertheless, it was still a very strong year for the company operationally and financially. 
Here are the year's highlights for your review:\n\n- >> Average daily natural gas and oil production increased 14% from 2.5 billion cubic feet of natural gas equivalent (bcfe) in 2009 to 2.8 bcfe in 2010;\n- >> Proved natural gas and oil reserves increased 20% in 2010, from 14.3 trillion cubic feet of natural gas equivalent (tcfe) to 17.1 tcfe;\n- >> Reserve replacement for 2010 reached 375% at a drilling, completion and net acquisition cost of only $0.76 per thousand cubic feet of natural gas equivalent (mcfe)(1);\n- >> Realized hedging gains were $2.1 billion;\n- >> Revenues increased 22% to $9.4 billion;\n- >> Adjusted ebitda(2) increased 15% to $5.1 billion;\n- >> Operating cash flow(2) increased 5% to $4.5 billion; and\n- >> Adjusted earnings per fully diluted share(2) increased 16% to $2.95.", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "### **LIQUIDS-RICH AREAS**\n\n2010 Total Production: 0 bcfe, NM, NM\n\n205,000, +14%, 2%\n\n12/31/10 Proved Reserves: 10 bcfe, NM, NM\n\n12/31/10 Net Leasehold Acres:***\n\nAnadarko Basin The Anadarko Basin is home to four of Chesapeake's liquids-rich plays, which we anticipate will become significant contributors to our growth in the years ahead. Chesapeake was one of the first to utilize modern horizontal drilling methods and has assembled an unrivaled leasehold position in numerous horizontal liquids-rich plays in the basin. Chesapeake will continue drilling with a focus on the Granite Wash, where rates of return are the highest in our company, and with an increasing focus on the Cleveland, Tonkawa and Mississippian liquids-rich unconventional plays. We estimate we could drill up to 11,400 net wells on our Anadarko Basin acreage in the future and plan to utilize an average of 31 operated rigs in 2011 to further develop our current 1.7 million net leasehold acres. 
**5** 2010 Total Production:\n\n145 bcfe, +4%, 14%\n\n12/31/10 Proved Reserves: 2,440 bcfe, +21%, 14%\n\n12/31/10 Net Leasehold Acres: 1,420,000, +15%, 11%\n\nEagle Ford Shale As part of a growing emphasis on increasing oil and natural gas liquids production, Chesapeake has built the industry's second-largest leasehold position in the Eagle Ford Shale play in South Texas. In 2010 Chesapeake increased its leasehold from 80,000 net acres at the beginning of the year to more than 600,000 net acres. In November 2010, Chesapeake completed a $2.2 billion Eagle Ford Shale joint venture agreement with Beijing-based CNOOC Limited (NYSE:CEO), whereby CNOOC acquired a 33.3% interest in 600,000 net leasehold acres in the Eagle Ford Shale. CNOOC paid Chesapeake approximately $1.12 billion in cash at closing and will pay 75% of Chesapeake's share of drilling and completion expenditures until the $1.08 billion carry obligation has been funded, which Chesapeake expects to occur by year-end 2012. Our focus has been in the wet gas and oil prone portions of the play. We estimate we could drill up to 5,500 net wells on our Eagle Ford acreage and plan to utilize an average of 23 operated rigs in 2011 to further develop our leasehold position in the Eagle Ford Shale. In addition, we believe that the Pearsall Shale should be prospective for natural gas underneath approximately 75% of our Eagle Ford leasehold. **6**\n\n2010 Total Production: 2 bcfe, NM, NM 12/31/10 Proved Reserves:\n\n110 bcfe, NM, 1%\n\n12/31/10 Net Leasehold Acres: 470,000, +488%, 4%\n\nPermian Basin Chesapeake has built a strong position of approximately 1.2 million net leasehold acres in the Permian Basin including 560,000 net leasehold acres in the Bone Spring, Avalon, Wolfcamp and Wolfberry unconventional liquids plays. This area has the potential to deliver significant upside as we move toward increasing our oil production substantially in the years ahead. 
We have developed multiple new horizontal oil projects in this area, where we plan to utilize an average of approximately eight operated rigs in 2011 to further develop our leasehold in the Permian and Delaware basins and estimate we could drill up to 4,400 net wells. **7**\n\n2010 Total Production: 60 bcfe, -20%, 6%\n\n12/31/10 Proved Reserves: 770 bcfe, +4%, 5%\n\n12/31/10 Net Leasehold Acres: 1,200,000, -44%, 9%\n\nRockies Chesapeake is the second-largest leasehold owner in the Niobrara Shale, Frontier and Codell plays in the Powder River and Denver Julesburg (DJ) basins of Wyoming and Colorado. In February 2011, Chesapeake completed a $1.3 billion joint venture agreement with CNOOC, whereby CNOOC acquired a 33.3% interest in Chesapeake's approximately 800,000 net leasehold acres in the Powder River and DJ basins. CNOOC paid Chesapeake approximately $570 million in cash at closing and will pay an additional $697 million in carries by funding 66.7% of Chesapeake's **8**\n\nNote: Figures do not add to company totals.\n\n- * Compared to last year\n- ** % of company total\n- *** Bossier Shale acreage overlaps with Haynesville Shale acreage NM Not meaningful\n\nshare of drilling and completion expenditures, which Chesapeake expects to occur by year-end 2014. 
We plan to utilize an average of approximately 11 rigs in 2011 to develop our current 535,000 net leasehold acres with our partner and estimate that we could drill up to 7,600 net wells.\n\n2010 Total Production: 0 bcfe, NM, NM\n\n12/31/10 Proved Reserves: 10 bcfe, NM, NM\n\n12/31/10 Net Leasehold Acres: 800,000, +135%, 6%", - "page_start": 20, - "page_end": 20, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "for a new energy future with greater natural gas usage and increased domestic oil production as two of its primary attributes, it is encouraging to see our political leadership finally grasp that natural gas stands alone as the only affordable, scalable and immediately available alternative to foreign oil and that U.S. oil production can be increased significantly in the years ahead.\n\nThe events of the past few months have unmistakably driven home the fact that it is insanity to rely on the Middle East to provide our economy's lifeline of oil. This should be especially obvious when one realizes that during the next 10 years, America will likely export at least another $4 trillion in national wealth to oil exporters around the world. Clearly, our country must demand from its leaders a new and more sustainable energy future.\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security. I remain fully confident that the marketplace understands this and that over time the U.S. 
will more fully embrace and utilize clean, affordable, abundant American natural gas and increased domestic oil production as the best alternatives to burning environmentally challenged coal and expensive and dangerous foreign oil.\n\nThere is now a clear road ahead toward a more sustainable, affordable, dynamic and independent future if America embraces the remarkable gift of energy abundance that Chesapeake has helped discover in the U.S. You have my commitment, and the commitment of more than\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security.\n\n*Advancing technology for cleaner operations: solar panels at a West Texas well power telemetry systems that provide pumpers with real-time information on oil and water tank levels to alarm them when levels near capacity, preventing tank spills.*\n\n> The good news, however, is that America can now secure a new energy future thanks to Chesapeake and a handful of other leading U.S. E&P companies that have reinvented the process of finding natural gas and oil during the past five years. In doing so, we have discovered twice the resources of natural gas in the U.S. that Saudi Arabia possesses in oil. Furthermore, these same few companies that led the unconventional natural gas revolution have in just the past two years also reinvented the way in which we can find large new oil resources onshore in the U.S. In fact, I believe the U.S. 
can possibly increase its production of oil from the current 5.8 million barrels per day by 30–50% during the next 5–10 years, thereby potentially reaching the President's 2025 goal of reducing foreign oil imports by 33%, 5–10 years earlier than hoped.\n\n10,000 other Chesapeake employees, that every day we are working hard to create shareholder value and a better future for our communities, our states and our country through the continued discovery and development of unconventional natural gas and liquids.\n\nBest regards,\n\nAubrey K. McClendon Chairman and Chief Executive Officer April 15, 2011", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "# CHESAPEAKE MANAGEMENT PERSPECTIVES »\n\nSteve Dixon Executive Vice President – Operations and Geosciences and Chief Operating Officer\n\n# **What innovations and advancements have led to CHK's ability to produce liquids from shales and other tight reservoirs?**\n\nDuring the past five years, Chesapeake and a few other leaders in the independent E&P industry have developed expertise in exploiting shales and other tight reservoir formations targeting natural gas through the combination of horizontal drilling and advanced fracture stimulation techniques. This has allowed the commercialization of plays that were previously uneconomic, most notably in shale formations. Part of our success in producing liquids from tight reservoirs has come from the company's ability to extend the technological advances gained in the development of tight natural gas formations to new formations known to contain substantial liquids. This led to our first liquids-rich play discovery in the Colony Granite Wash in 2007. As we have increased our focus on liquids-rich plays, we have benefited from a growing understanding and mapping of petrophysical properties in unconventional formations as well as an enhanced understanding of the geochemical nature of liquids-rich reservoirs. 
This has allowed Chesapeake to better identify formations most likely to generate liquids-rich production, including more than a dozen new plays for the company. We have subsequently improved the success of our liquids-rich plays through the use of optimal wellbore lateral lengths, better placement of well laterals though advanced wellbore steering techniques and customized fracture stimulation designs for liquids-rich plays that allow the company to achieve a greater stimulated rock volume in low permeability reservoirs. Finally, the advancements Chesapeake has made in developing liquids-rich plays have\n\nbeen made possible through the use of our proprietary Reservoir Technology Center that has become the industry's most advanced shale core laboratory.\n\n# **It is often said that the energy industry has an aging work force that is fast approaching retirement age. How is Chesapeake addressing this?**\n\nIt is no secret that there is a shortage of experienced professionals in the natural gas and oil industry. The industry downturn of the 1980s and 1990s discouraged many from pursuing energy careers. In the following decades, strong competition from other industries lured away many of the best and brightest science and technology graduates, and today many experienced professionals who stayed in the industry through the downturn are approaching retirement age. As a result, one of our industry's greatest challenges over the past 10 years has been to develop a new generation of natural gas and oil professionals who have the knowledge and experience required to meet the nation's growing energy needs.\n\nIn 2000 Chesapeake was one of the first companies to recognize this trend and to understand how recruiting and training a new generation of energy professionals would impact the company's future success and its ability to compete in the industry. 
At that time, Chesapeake formulated a business strategy to address future staffing needs and decided to create a world-class college recruiting and intern program to recruit the most promising industry talent. Today, Chesapeake hosts more than 150 interns every summer in its internship program, many of whom go on to become full-time Chesapeake employees upon graduation. In addition, we have 350 students who receive\n\nMartha Burger Senior Vice President – Human and Corporate Resources\n\nscholarships through Chesapeake programs, and our staff of college recruiters has developed strong relationships with professors, department heads and career counselors at the more than 31 universities where we actively recruit.\n\nAs a result of these efforts, young professionals in a wide range of disciplines, from scientists and engineers to land management and legal specialists, are being groomed to take over the reins as they learn the business through mentoring, extensive training, development opportunities and challenging work assignments. They are generously rewarded with excellent compensation and benefits, as well as an industry-leading working environment that encourages camaraderie and teamwork. The success of Chesapeake's strategy is apparent: the average age of the company's geoscience, land and engineering departments has dropped from 49 in 2000 to 36 today. In addition, the average age of the company's 4,000 Oklahoma City headquarters employees is 33. 
Even as some of Chesapeake's employees retire, the company is well equipped with a seasoned work force that is prepared to support and lead the way in Chesapeake's continued growth.", - "page_start": 21, - "page_end": 21, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "# CHESAPEAKE'S COMMITMENT TO BEING A GOOD NEIGHBOR »\n\nThrough volunteer programs and responsible operations, we strive to be the best neighbor possible in every one of our operating areas by investing in our communities.", - "page_start": 23, - "page_end": 23, - "source_file": "NYSE_CHK_2010.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_CHK_2010.pdf", - "query": "Has the CEO of Chesapeake Energy met with the US President about America's energy production?", - "target_page": 16, - "target_passage": "I am pleased to report that we have apparently finally convinced President Barack Obama and Congressional leadership to recognize that the energy path America is on today is completely unsustainable.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "wet natural gas and dry natural gas), similar to the components of the Eagle Ford Shale. We have made a large commitment to this play and have acquired approximately 1.2 million net leasehold acres and expect to increase this total to as much as 1.5 million net leasehold acres in the coming months. We are currently using three rigs to evaluate the play and believe our leasehold could support the drilling of up to 12,000 net wells. This is an area where we anticipate bringing in a joint venture partner late in 2011 or early in 2012.\n\n# **Our People**\n\nGreat assets cannot exist without great people, so we take great pride in hiring, training, motivating, rewarding and retaining what we regard\n\nas the best employees in the industry. 
From our beginning 22 years ago with 10 employees in Oklahoma City to employing more than 10,000 people across 15 states today, Chesapeake has always focused on building first-class human resources within a distinctive corporate culture. Talk to Chesapeake employees and you will note genuine pride and great enthusiasm about the company and the critical role that we play in delivering increasing quantities of clean and affordable American natural gas and valuable and reliable liquids to energy consumers across the country.\n\nChesapeake employees are distinctive in other ways as well. They are much younger than the industry average, with half of our almost 4,000 Oklahoma City-based headquarters employees 33 years old or younger. Their enthusiasm and willingness to learn create an atmosphere of vitality and energy at Chesapeake, important ingredients of our distinctive culture. These attributes, along with a vibrant and attractive corporate headquarters campus, low levels of bureaucracy, great assets and a well-executed corporate strategy combine to create our culture of success and innovation.\n\nThis has generated extremely positive external feedback as Chesapeake was recently recognized for the fourth consecutive year as one of the FORTUNE 100 Best Companies to Work For®(3) in the U.S. In fact, we moved up to #32 overall and #1 in our industry — we are very proud of having created and sustained what is now considered the best place to work in all of the U.S. 
energy production industry.\n\nIn addition, we were honored in December 2010 at the 12th Annual Platts Global Energy Awards as finalists for CEO of the Year, Community\n\nFrom our beginning 22 years ago with 10 employees in Oklahoma City to employing more than 10,000 people across 15 states today, Chesapeake has always focused on building first-class human resources within a distinctive corporate culture.\n\n*<< A Chesapeake rig drills in the Marcellus Shale, where the company is the leading leasehold owner, largest producer and most active driller.*\n\nDevelopment Program of the Year, Deal of the Year, Energy Producer of the Year and the Industry Leadership Award. Chesapeake was one of only two companies selected as a finalist in five or more categories. The company was also honored in 2010 with a Certificate of Recognition for our military reserve recruiting efforts, named a 2010 Best Diversity Company by Engineering & Information Technology Magazine and recognized for Best Investor Relations in Energy Sector and Best Investor Relations Website at the 2010 IR Magazine U.S. Awards.\n\n# **Recent Events and a Better Way Forward**\n\nYou may be aware that I have been outspoken in attempting to persuade our country's political leadership to recognize that the discovery of vast resources of unconventional natural gas and oil in the U.S. is a complete game changer for our country from an economic, national security and environmental perspective. After two years of my best efforts and the efforts of many others in the industry, most notably T. Boone Pickens,", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "for a new energy future with greater natural gas usage and increased domestic oil production as two of its primary attributes, it is encouraging to see our political leadership finally grasp that natural gas stands alone as the only affordable, scalable and immediately available alternative to foreign oil and that U.S. 
oil production can be increased significantly in the years ahead.\n\nThe events of the past few months have unmistakably driven home the fact that it is insanity to rely on the Middle East to provide our economy's lifeline of oil. This should be especially obvious when one realizes that during the next 10 years, America will likely export at least another $4 trillion in national wealth to oil exporters around the world. Clearly, our country must demand from its leaders a new and more sustainable energy future.\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security. I remain fully confident that the marketplace understands this and that over time the U.S. will more fully embrace and utilize clean, affordable, abundant American natural gas and increased domestic oil production as the best alternatives to burning environmentally challenged coal and expensive and dangerous foreign oil.\n\nThere is now a clear road ahead toward a more sustainable, affordable, dynamic and independent future if America embraces the remarkable gift of energy abundance that Chesapeake has helped discover in the U.S. 
You have my commitment, and the commitment of more than\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security.\n\n*Advancing technology for cleaner operations: solar panels at a West Texas well power telemetry systems that provide pumpers with real-time information on oil and water tank levels to alarm them when levels near capacity, preventing tank spills.*\n\n> The good news, however, is that America can now secure a new energy future thanks to Chesapeake and a handful of other leading U.S. E&P companies that have reinvented the process of finding natural gas and oil during the past five years. In doing so, we have discovered twice the resources of natural gas in the U.S. that Saudi Arabia possesses in oil. Furthermore, these same few companies that led the unconventional natural gas revolution have in just the past two years also reinvented the way in which we can find large new oil resources onshore in the U.S. In fact, I believe the U.S. can possibly increase its production of oil from the current 5.8 million barrels per day by 30–50% during the next 5–10 years, thereby potentially reaching the President's 2025 goal of reducing foreign oil imports by 33%, 5–10 years earlier than hoped.\n\n10,000 other Chesapeake employees, that every day we are working hard to create shareholder value and a better future for our communities, our states and our country through the continued discovery and development of unconventional natural gas and liquids.\n\nBest regards,\n\nAubrey K. 
McClendon Chairman and Chief Executive Officer April 15, 2011", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "Home / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n#### ARTS AND ENTERTAINMENT\n\n# New Artificial Intelligence Summit Series Begins With Energy\n\n### 07/31/2024\n\n (AI) continues to transform the United States and the world. To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent \"Action Plan for U.S. Leadership in Next-Generation Energy,\" raises many issues related to AI and energy, including recommendations for the government to bring America forward. 
The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. The stakes are high; if the U.S. falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\n#### Article Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n#### RELATED ARTICLES\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage Mar 06, 2024\n\n| CATEGORIES |\n| --- |\n| FASHION |\n| BUSINESS |\n| INFOGRAPHIC |\n| ENVIRONMENT |\n| HEALTH |\n| MONEY |\n| FOOD |\n| TRAVEL |\n| BRIDAL |\n| RECREATION |\n| TECHNOLOGY |\n| HOME |\n| EDUCATION |\n| ARTS & ENTERTAINMENT |\n| AUTO |\n| CHILDREN |\n| FITNESS |\n| HOLIDAY |\n| INSURANCE |\n| LAWN & GARDEN |\n| LISTICLE |\n| NUTRITION |\n| PARENTING |\n| PETS |\n| SEASONAL |\n\nMar 06, 2024\n\nCelebrate St. Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\n#### Mar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\n#### Mar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nSPANISH\n\nSENIORS\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK_REVIEW\n\nRECIPE\n\nAFRICAN_AMERICANS\n\nHOW_TO\n\nBYLINED_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & 
ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\n## RECENT POSTS\n\n| 01 | School Choice Combines Nature And |\n| --- | --- |\n| | Nuture for Success |\n| 02 | Think Outside the (Gift) Box, Contribute to a 529 Plan |\n| 03 | Black Friday Bonanza—Don't Miss These Hot Gifts |\n| | Self-Publishing Helps Parents Share New |\n| 04 | Books with Kids |\n| 05 | Five Tips to Safely Manage Medications |\n| 06 | Self-care on Your Schedule with Mental |\n| | Wellness App |\n\n#### MOST POPULAR\n\nEntrepreneur Inspires Youth with Community Projects 08 Jul 21\n\nWho Celebrates National School Choice Week? 22 Jan 18\n\nNo Arms, No Legs, No Worries 13 Dec 18\n\nScent-imental: Holiday Smells Evoke Happy Memories 30 Oct 18\n\nTechnology Breakthroughs Drive Clean Energy Success 01 Oct 18\n\nSafety App Empowers Students, Offers Peace of Mind\n\n| TAGS | |\n| --- | --- |\n| Fashion | Business Infographic |\n| Environment | Health Money |\n| Food Travel | Bridal Recreation |\n| Technology | Home Education |\n| Arts & Entertainment | Auto Children |\n| Fitness | Holiday Insurance |\n| Lawn & Garden | Listicle Nutrition |\n| Parenting | Pets Seasonal Seniors |\n| Spanish | Tips and How To |\n| Entertainment | Career Community |\n| Family Tips | Internet |\n| Human_Interest | Beauty Arts |\n| RealEstate | Safety Medicine |\n| Book_Review | Recipe |\n| African_Americans | How_To |\n| Bylined_Column | Charity Sports |\n| Home_Improvement | Tech Wellness |\n| Arts and Entertainment | Food & Drink |\n| Real_Estate | Veterans Outdoors |\n| Real Estate | Human Interest |\n| Money & Finance | Fashion & Beauty |\n| Money and Finance | |\n| Books & Entertainment | Books |\n| Arts & Entertainment | |\n\nContact Us Work From Home Privacy Policy Terms of Use", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "# DEAR FELLOW SHAREHOLDERS »\n\n2010 was a very important year of transition and achievement for Chesapeake, a year in which we initiated three very important 
strategic shifts: from asset gathering to asset harvesting, from focusing exclusively on natural gas to a balanced focus on natural gas and liquids and from having a leveraged balance sheet to one worthy of an investment grade rating.\n\n*Home to three distinct forms of hydrocarbons: dry natural gas, natural gas liquids and oil, the Eagle Ford Shale in South Texas epitomizes Chesapeake's shift to a balanced focus on natural gas and liquids.*\n\n2010 also marked a truly transformative year for our industry. We and a handful of our peers enhanced our capabilities to find and produce significant new resources of oil and natural gas liquids (collectively, \"liquids\") in unconventional formations. Chesapeake and these other companies combined creativity, innovation and technology to reinvent the way that our industry explores for and produces natural gas and liquids.\n\nFurthermore, 2010 was the year when global energy companies more fully recognized the importance of these developments and the tremendous opportunities that have emerged in the U.S. Through a wide variety of transactions, including several led by Chesapeake, the global energy industry made it clear that the assets owned by Chesapeake and some of its peers are the most attractive in the world. This realization has already increased the value of highquality unconventional assets in the U.S. and, in time, should lead to higher\n\nstock prices for the leading U.S. onshore E&P companies, especially Chesapeake. Simply put, the global energy industry is beating a path to our door, and we are welcoming it with open arms.\n\nBefore we move ahead, I want to emphasize that even though 2010 was a year of transition and achievement, our stock price was essentially unchanged. Nevertheless, it was still a very strong year for the company operationally and financially. 
Here are the year's highlights for your review:\n\n- >> Average daily natural gas and oil production increased 14% from 2.5 billion cubic feet of natural gas equivalent (bcfe) in 2009 to 2.8 bcfe in 2010;\n- >> Proved natural gas and oil reserves increased 20% in 2010, from 14.3 trillion cubic feet of natural gas equivalent (tcfe) to 17.1 tcfe;\n- >> Reserve replacement for 2010 reached 375% at a drilling, completion and net acquisition cost of only $0.76 per thousand cubic feet of natural gas equivalent (mcfe)(1);\n- >> Realized hedging gains were $2.1 billion;\n- >> Revenues increased 22% to $9.4 billion;\n- >> Adjusted ebitda(2) increased 15% to $5.1 billion;\n- >> Operating cash flow(2) increased 5% to $4.5 billion; and\n- >> Adjusted earnings per fully diluted share(2) increased 16% to $2.95.", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "Chesapeake Energy Corporation is the second-largest producer of natural gas, a Top 15 producer of oil and natural gas liquids and the most active driller of new wells in the U.S.\n\nHeadquartered in Oklahoma City, the company's operations are focused on discovering and developing unconventional natural gas and oil fields onshore in the U.S. Chesapeake owns leading positions in the Barnett, Haynesville, Bossier, Marcellus and Pearsall natural gas shale plays and in the Granite Wash, Cleveland, Tonkawa, Mississippian, Bone Spring, Avalon, Wolfcamp, Wolfberry, Eagle Ford,\n\nNiobrara and Utica unconventional liquids-rich plays. The company has also vertically integrated its operations and owns substantial midstream, compression, drilling and oilfield service assets. Chesapeake's stock is listed on the New York Stock Exchange under the symbol CHK. 
Further information is available at **www.chk.com** where Chesapeake routinely posts announcements, updates, events, investor information, presentations and press releases.\n\n# **CONTENTS**\n\n- 1 Financial Review\n- 4 Letter to Shareholders\n- 16 Operating Areas\n- 20 Investor Q&A\n- 22 Social Responsibility\n\t- 24 Community Relations\n\t- 26 Environmental, Health & Safety\n- 28 Board of Directors\n- 28 Governance\n- 29 Officers\n- 30 Employees\n- 45 Form 10-K\n- Inside Back Cover\n\t- Corporate Information\n\n### **ON THE COVER**\n\n*Moving west, a Chesapeake rig drills toward the Niobrara Shale in the Powder River Basin of southeastern Wyoming, one of several new liquids-rich plays that are enabling the company to increase its profitability and return on capital.*", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "to selected students pursuing careers in finance, economics, accounting, marketing, business administration, computer science and information technology. In addition, scholars will take part in a Chesapeake Presidential Leadership Course facilitated by faculty members in coordination with designated Chesapeake leadership coaches, including a Chesapeake senior vice president and OCU alumni.\n\nIn 2007 Chesapeake launched a scholarship program in Texas with an initial $1.25 million contribution, challenging the cities of Fort Worth and Dallas to match its gift within a year. The cities responded and matched the gift, so Chesapeake in 2008 added another $1.25 million to the fund, bringing the total to $3.75 million. The Chesapeake Scholarship Fund currently funds the cost of higher education for 48 minority students. The fund provides each student $20,000 a year for up to four years at the school of their choice. 
To date more than $1.0 million has been distributed to deserving local students.\n\nTo help ensure the training of qualified geologists, engineers, landmen and energy lawyers in the next generation, we award scholarships to students pursuing energy-related degrees. We also help mentor them through Chesapeake's Peak Program. Junior- and senior-level scholarship recipients are paired with Chesapeake employee mentors who help develop students' knowledge and provide career advice. There are currently 25 mentors and 40 scholarship recipients participating in the Peak Program.\n\nOur recruiting team also initiated a strategic military recruitment effort during the past two years to hire former military personnel to work in a variety of leadership and crew positions. This effort earned Chesapeake an honor from G.I. JOBS magazine when we were named a 2011 Top 100 Military-Friendly Employer. Chesapeake currently employs 37 men and women who formerly served as junior military officers and more than 100 former servicemen and servicewomen who joined the company through a program called Troops 2 Roughnecks.\n\nIn addition to our specific scholarship programs, one-time educational donations and recruitment efforts, in 2010 we gave more than $1.8 million to fund higher education for nearly 400 other students in 12 states through our Chesapeake Scholars program. Chesapeake's scholarships help recruit the best and brightest students and provide educational opportunities in communities where we operate. In Oklahoma City, more than 400 employees volunteer for up to an hour a week on company time at four local public schools. Chesapeake's program has grown to become the largest corporate mentoring program in Oklahoma.\n\n# **Community Impact**\n\nChesapeake employees have been enriching their hometowns as volunteers for many years. We formalized those efforts in 2009 by establishing an official employee volunteer program, the H.E.L.P. 
(Helping Energize Local Progress) Initiative, wherein employees are invited to volunteer each month for a variety of organizations from food pantries to animal shelters. Through that program, employees donated more than 26,000 hours to their communities in 2009.\n\nIn the summer of 2010, Chesapeake took the H.E.L.P. Initiative to a higher level through the launch of Operation Blue. From Memorial Day through Labor Day, each employee was given four hours of company time to complete the volunteer project of their choice. Our employees eagerly accepted the challenge, and in three months more than 4,900 employees donated 30,900 hours of service to 519 organizations in more than 96 communities across the country. Operation Blue is now an annual volunteer program in which employees roll up their sleeves in the communities they call home.\n\nChesapeake's contributions take many forms: financial and equipment donations, volunteerism and scholarships. Last year, we made numerous in-kind donations of laptops, reconditioned Chesapeake fleet vehicles and subsidized office space. These contributions provide essential operating tools as nonprofit organizations across the nation attempt to serve more people — often with lower budgets — in tough economic times.\n\nFor example, in Louisiana we donated 12 vehicles in 2010, including one to the Panola College Oil and Natural Gas Technology Program, which teaches students about the natural gas industry and provides them with hands-on technical training. Across many of the company's operating areas, we've donated computers to deserving students, schools and organizations through Chesapeake's Discovering Tomorrow's Leaders program. In 2010 the company equipped 14 students with laptops and donated 70 computers to schools or supporting nonprofit organizations.\n\nChesapeake partners with other companies and organizations to meet basic, practical needs in hundreds of communities. 
An example is our\n\n*Putting food on the table — Employees volunteer at the Regional Food Bank of Oklahoma as part of Operation Blue.*\n\nsponsorship of the annual Day of Caring at the Ganus Center of Harding University in White County, Arkansas. During the event, approximately 1,200 uninsured or underinsured residents received a day of free medical, dental and eye screenings.\n\nTo help cultivate an appreciation for the great outdoors, in 2010 Chesapeake provided $25,000 to REAL School Gardens, a Fort Worthbased organization that establishes gardens at approximately 70 lower income elementary schools in North Texas. At I.M. Terrell Elementary School, students, parents, teachers and volunteers from Chesapeake and other groups worked together to prepare vegetable gardens and flower beds. In addition to teamwork skills and gardening, students learned about nutrition and took home food from the garden's bounty.\n\nWe supported servicemen and servicewomen by partnering with the Shreveport Chapter of Operation Support Our Troops, Inc. Our contribution helped offset the postage to send more than 100 care packages to troops overseas. The shipment was the largest in the organization's history and included Christmas cards, games and nonperishable food items.\n\nBy investing in the communities where we operate and the people whose lives we touch, we ensure a stronger today and a more hopeful tomorrow.", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "# **Strong Partners**\n\nOver the past few years, in addition to gathering the industry's best assets, Chesapeake has also built the industry's finest collection of global energy partners and energy stock investors. We have now entered into transactions with PXP, BP, Statoil, Total, CNOOC and BHP Billiton. 
Collectively, we have sold these companies certain assets for total consideration of $20.5 billion in the form of cash and drilling and completion carries for which our net cost was only $6.1 billion resulting in overall value creation of $14.4 billion. While these transactions have been very\n\nrewarding to our buyers, they have been truly outstanding for Chesapeake, providing us an attractive source of capital, a reduction of risk, a quick recovery of our leasehold investment in new plays and a much greater ability to capture a large resource base with decades of highly profitable drilling opportunities.\n\nIn addition, we are the only U.S. E&P company that has attracted to its stock ownership roster some of the world's leading governmentsponsored investors: Temasek Holdings (Singapore), China Investment Corporation, Korea Investment Corporation and Abu Dhabi Investment Authority. Along with our largest shareholder, Memphis, Tennesseebased Southeastern Asset Management (12%), these shareholders are some of the world's largest and most astute investors, and who also happen to manage some of the world's largest pools of capital and have a very long-term investment horizon. Their support is an important validation of our strategy.\n\n# **Short-Term Pain for Long-Term Gain**\n\nDespite this all-star lineup of global partners and investors, some other investors have not yet fully recognized the benefits of our industry leadership in acquiring unconventional natural gas and liquids assets. 
Whether it was our leveraged balance sheet during recent tough recessionary times, our heavy focus on natural gas during a time of persistent market pessimism about natural gas prices or our large capital investments in undeveloped liquids-rich leasehold undertaken to enable Chesapeake to remain an industry leader in the years ahead, it is clear\n\nThrough a wide variety of transactions, including several led by Chesapeake, the global energy industry made it clear that the assets owned by Chesapeake and some of its peers are the most attractive in the world.\n\n### *<< Aubrey K. McClendon, Co-Founder, Chairman and Chief Executive Officer*\n\nthat we were less popular in the stock market in 2010 than we were in 2009, when our stock price increased by 60%.\n\nWe anticipated that some market unpopularity in 2010 would likely be the price we would pay as we positioned Chesapeake to be the leader not only in unconventional U.S. natural gas, but also in unconventional U.S. liquids. However, now that we have largely completed the investments needed to accomplish this transition to a portfolio balanced with liquids, the rebound in our stock price could be sharp as investors begin to focus more clearly on Chesapeake's three-way transition from an asset gatherer to an asset harvester, from less natural gas exposure to more liquids exposure and from a leveraged balance sheet to one worthy of an investment grade rating.\n\nAccordingly, in early January 2011, we announced our \"25/25 Plan,\" a two-year plan designed to reduce our long-term debt by 25% while still growing the company's production by 25%. We designed this plan to articulate very clearly the benefits of becoming an asset harvester", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "# BOARD OF DIRECTORS »\n\n#### STANDING (LEFT TO RIGHT)\n\n### Merrill A. \"Pete\" Miller, Jr. (1,2)\n\nChairman, President and CEO National Oilwell Varco, Inc. 
Houston, Texas\n\n### SEATED (LEFT TO RIGHT)\n\n### Don Nickles(4)\n\nFormer U.S. Senator, Oklahoma Founder and President The Nickles Group, LLC Washington, D.C.\n\n# V. Burns Hargis(1)\n\nPresident Oklahoma State University Stillwater, Oklahoma\n\n> Charles T. Maxwell (3,4) Senior Energy Analyst Weeden & Co. Greenwich, Connecticut\n\nAubrey K. McClendon Chairman of the Board and Chief Executive Officer Chesapeake Energy Corporation Oklahoma City, Oklahoma\n\nFrederick B. Whittemore(3,4) Advisory Director Morgan Stanley New York, New York *Retiring from the Board in June 2011*\n\n# Richard K Davidson(1)\n\nRetired Chairman and CEO Union Pacific Corporation Bonita Springs, Florida\n\nFrank Keating(3) Former Governor, Oklahoma President and CEO American Bankers Association Washington, D.C.\n\n- (1) Audit Committee\nFounder and CEO Next Decade The Woodlands, Texas\n\n- (2) Lead Independent Director\nKathleen M. Eisbrenner (3,4)\n\n- (3) Compensation Committee\n- (4) Nominating and Corporate Governance Committee\n\nLouis A. Simpson Chairman SQ Advisors, LLC Naples, Florida *Nominated for* \n\n*election in June 2011*\n\n# Governance\n\nOur Board of Directors is responsible to our shareholders for the oversight of the company and for the implementation and operation of an effective and sound corporate governance environment. We believe that effective corporate governance contributes to long-term corporate performance. An effective governance structure should reinforce a culture of corporate integrity, foster the company's pursuit of long-term strategic goals of growth and profit and ensure quality and continuity of corporate leadership. 
Our directors will continue to be diligent in their efforts to preserve the public trust while fostering the long-term success of the company.", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "Jeff Fisher Senior Vice President – Production\n\n# **What advantages does CHK's unique vertical integration strategy provide?**\n\nChesapeake has built a large inventory of low-risk natural gas and liquids-rich plays that we plan to develop aggressively over the next two decades. As a result, we know that our company will consistently utilize a tremendous (and growing) amount of oilfield services for this resource development. This high level of planned drilling activity will create value for the provider of oilfield services, and Chesapeake's strategy is to capture a portion of this value for our shareholders rather than transfer it to third-party vendors whose interests and investments are not always aligned with ours. To date, Chesapeake has invested in drilling rigs, rental tools, water management equipment, trucking, compression equipment, midstream services, and most recently pressure pumping and fracture stimulation equipment. Chesapeake's activities require a high level of planning and project coordination that is best accomplished through vertical integration and ownership of the oilfield services we utilize. This approach creates a multitude of cost savings, an alignment of interests, operational synergies, greater capacity of equipment, increased safety and better coordinated logistics. In addition, Chesapeake's control of a large portion of the oilfield service equipment it utilizes provides a unique advantage to control the timing of leasehold development. Simply put, faster development of resources maximizes the present value of leasehold. 
This has been a key advantage for\n\nChesapeake over the past three years as the company has monetized leasehold investments at premium values through our joint ventures.\n\n# **Will U.S. natural gas prices reconnect with world natural gas prices?**\n\nNatural gas is a premium product and a cleaner-burning fuel than coal or oil-related products, including gasoline, diesel and heating oil. Despite this fact, over the past two years natural gas has received a low price in the U.S. market relative to coal and oil-related products, primarily as a result of a temporary surplus of production. This surplus has been principally caused by high levels of drilling activity as producers focused on holding by production (HBP) leasehold in new highly productive, low cost natural gas shale plays. In essence, producers reinvented U.S. supply ahead of reinventing of U.S. demand. We believe HBP-incentivized drilling on natural gas plays will largely come to an end in 2012, and U.S. demand will soon also be reinvented to allow U.S. natural gas prices to reconnect to price parity with world natural gas prices that have risen to more than double U.S. natural gas prices.\n\nThis surge in world natural gas prices has been in response to $100+ oil prices and surging global liquefied natural gas (LNG) demand. In our view, the arbitrage in value between competing fuels is simply too wide. Capital and ideas will flow toward projects that make the most of this price disparity. Chesapeake and other companies are working to create the ability to export natural gas from the U.S. Gulf Coast and other regions in the form of LNG to premium Pacific Rim, European and South American markets, perhaps as soon as 2015. This initiative will also be aided by the widening of the Panama Canal to accommodate large LNG vessels. 
Furthermore, we believe that the\n\nJeff Mobley Senior Vice President – Investor Relations and Research\n\ncurrent price disparity between natural gas and oil will increasingly lead to greater use of natural gas in the U.S. transportation system. Whether it be compressed natural gas (CNG) for medium and light-duty vehicles, LNG for heavy-duty vehicles or the commercialization of gas-to-liquids (GTL) natural gas refineries that supplement the U.S. liquid fuel supply stream, we believe that the marketplace will increasingly utilize and embrace natural gas. Chesapeake is working with industry, public policymakers and potential partners on each of these demand reinvention opportunities. Natural gas is clean, affordable, abundant and American. Why *shouldn't* it trade at a BTU premium in the years ahead?\n\nNick Dell'Osso Executive Vice President and Chief Financial Officer\n\n# **Why is an investment grade rating on its debt securities important to CHK?**\n\nWe believe that Chesapeake will benefit in multiple ways from an investment grade rating on our debt securities, which we hope to achieve in 2012 or 2013. First, a higher rating would obviously lower the company's borrowing costs over time. In addition, other less easily quantifiable benefits will also accrue to Chesapeake. Higher debt ratings would result in lower costs on long-term firm transportation contracts that we enter into in order to market our natural gas and oil production as well as facilitate our ability to enter into long-term contracts to sell our natural gas production to international buyers in the form of LNG. An improved rating will also enhance Chesapeake's ability to further attract world-class energy companies to participate in our joint venture projects, which profitably monetize a portion of our leasehold investments and also accelerate the development of our resource base. 
Finally, and perhaps most importantly, we believe that reduced financial leverage and an investment grade rating will lead to a higher stock price and provide further interest from worldwide equity investors.", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "# INVESTING IN OUR COMMUNITIES »\n\nChesapeake's sense of civic commitment provides a bountiful harvest of benefits to cities large and small. We partner with groups and organizations across all of our operating areas to improve the communities our employees, contractors, vendors, land and mineral owners call home. We believe the success of our business depends on the strength, goodwill and vitality of those communities. Most importantly, we believe it is the responsibility of every successful business to share success with its neighbors.\n\nIn 2010 we gave more than $25 million to charitable organizations and projects across our operating areas, primarily focusing on community development, education, health and medical and social services.\n\n# **Economic Impact**\n\nWhile much of the U.S. is still struggling to recover from the economic recession, the positive impact of natural gas and oil operations has provided a valuable economic recovery stimulus for states that are home to exploration and development activities. As the nation's second-largest producer of natural gas, a Top 15 producer of liquids and most active driller of new wells, Chesapeake's arrival in a new play stimulates economic activity, augments personal income through jobs and royalty payments, generates substantial tax revenue and sustains communities throughout its operating areas.\n\nIn addition to the general economic impact of our activities on local economies, the company's tax contributions are substantial. In 2010 Chesapeake paid approximately $675 million in taxes, including ad valorem, severance, sales, employer, and corporate income and franchise taxes. 
These taxes pay for ongoing government services and also build and maintain schools, recreational facilities, and parks and roads — at a time when state and local governments are still feeling the pinch of recession. We are proud to support America's economy with our growth while also helping to protect the environment through the greater use of clean-burning natural gas and reducing the country's dependence on expensive foreign oil.\n\nChesapeake also makes contributions that help improve lives and economies in cities where we operate: $25 million in 2010 alone. For example, this past year we donated $200,000 to establish the Chesapeake Environmental and Recycling Center at Goodwill Industries of Central Oklahoma. The center will provide an additional 80 jobs to disabled Oklahomans, as well as help Goodwill recycle 10 million pounds a year, which\n\n### **Chesapeake's $25 million of charitable giving in 2010**\n\n- Community Development\n- Education\n- Health and Medical\n- Social Services\n\nequates to one-third of the goods that otherwise would have been destined for Oklahoma City-area landfills. In West Virginia, we helped fund construction of the Morgantown Market\n\n*Equipping the next generation — West Virginia students hold their new laptops from Chesapeake as part of the company's Discovering Tomorrow's Leaders program.* \n\nPlace, a permanent site for the city's farmers' market, creating more business opportunities for local farmers.\n\nChesapeake also supports local chambers of commerce and city councils in all of its operating areas. In the Haynesville Shale last year, we awarded grants to the Shelby County, Sabine Parish and Coushatta-Red River chambers of commerce to help fund tourism, business communications and chamber events. In Texas, we assisted more than 250 civic, professional and community service organizations throughout Johnson, Tarrant and western Dallas counties, and sponsored memberships in 35 local Texas chambers of commerce. 
By helping local chambers and businesses grow and thrive, we are creating stronger economies.\n\nWe also hire locally whenever possible to help stimulate the local economy, and we provide training when the local work force isn't yet qualified for the jobs we have open. For example, when Chesapeake began operating in the Marcellus Shale of West Virginia and Pennsylvania, finding experienced rig workers was a challenge. To meet that need, Chesapeake's wholly owned subsidiary, Nomac Drilling, built the 40,000-square-foot Eastern Training Center and Housing Facility in Bradford County, near Sayre, Pennsylvania. The campus opened in 2010 and serves as a housing facility and training ground for 266 workers at a time. Nomac and Chesapeake host regular job fairs in the region and the lines of interested candidates often extend out the door.\n\n# **Educational Impact**\n\nWe are also proud to help prepare tomorrow's leaders today. In 2010 Chesapeake supported universities, schools, academic chairs, scholarships and other educational programs with contributions totaling $5.4 million.\n\nInvesting in programs that promote technology and innovation is a key to our country's success. That's why we gave $1.0 million to establish the Chesapeake Energy dormitory for students at the Oklahoma School for Science and Mathematics (OSSM), a public, tuition-free, residential high school located in Oklahoma City for juniors and seniors with exceptional abilities. The extremely competitive school is helping train the next generation of scientists and mathematicians.\n\nWe also established the Chesapeake Energy Presidential Scholars Program at the Oklahoma City University Meinders School of Business, making a $5.0 million commitment to be distributed over the next five years. The Chesapeake Scholars Program will provide up to $25,000 per year in tuition", - "page_start": 25, - "page_end": 25, - "source_file": "NYSE_CHK_2010.pdf" - } - ] - } - ] -] \ No newline at end of file